NOTES ON STATISTICS, PROBABILITY and MATHEMATICS


Choosing the Right Statistical Test with R Examples:


What follows is extracted and collated from the UCLA IDRE statistics pages (the source of the hsb2 data used throughout).

# Load the hsb2 example data and convert the coded categorical
# variables to factors
hsb2 <- within(read.csv("https://stats.idre.ucla.edu/stat/data/hsb2.csv"), {
    race <- as.factor(race)
    schtyp <- as.factor(schtyp)
    prog <- as.factor(prog)
})

# attach so that variables (write, read, female, ...) can be used directly
attach(hsb2)

names(hsb2)

1. ONE DEPENDENT VARIABLE:


1.1. ZERO INDEPENDENT VARIABLES (one population):


1.1.1. INTERVAL and NORMAL DEPENDENT VARIABLE: one-sample t test


A one sample t-test allows us to test whether a sample mean (of a normally distributed interval variable) significantly differs from a hypothesized value.

summary(write)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   31.00   45.75   54.00   52.77   60.00   67.00
t.test(write, mu = 50)
## 
##  One Sample t-test
## 
## data:  write
## t = 4.1403, df = 199, p-value = 5.121e-05
## alternative hypothesis: true mean is not equal to 50
## 95 percent confidence interval:
##  51.45332 54.09668
## sample estimates:
## mean of x 
##    52.775

The mean of the variable write for this particular sample of students is 52.775, which is statistically significantly different from the test value of 50. We would conclude that this group of students has a significantly higher mean on the writing test than 50.
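
The one-sample t-test assumes that write is approximately normally distributed. A quick way to check that assumption (a sketch, not part of the original notes):

# visual and formal checks of normality for write
qqnorm(write); qqline(write)   # points should lie close to the line
shapiro.test(write)            # Shapiro-Wilk test of normality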


1.1.2. ORDINAL or INTERVAL: one-sample median test


A one sample median test allows us to test whether a sample median differs significantly from a hypothesized value. We will use the same variable, write, as we did in the one sample t-test example above, but we do not need to assume that it is interval and normally distributed (we only need to assume that write is an ordinal variable and that its distribution is symmetric). We will test whether the median writing score (write) differs significantly from 50.

wilcox.test(write, mu = 50)
## 
##  Wilcoxon signed rank test with continuity correction
## 
## data:  write
## V = 13177, p-value = 3.702e-05
## alternative hypothesis: true location is not equal to 50

The results indicate that the median of the variable write for this group is statistically significantly different from 50.


1.1.3. CATEGORICAL (2 categories): Binomial test


A one sample binomial test allows us to test whether the proportion of successes on a two-level categorical dependent variable significantly differs from a hypothesized value. For example, using the hsb2 data file, say we wish to test whether the proportion of females (female) differs significantly from 50%, i.e., from .5.

table(female)
## female
##   0   1 
##  91 109
prop.test(sum(female), length(female), p = 0.5)
## 
##  1-sample proportions test with continuity correction
## 
## data:  sum(female) out of length(female), null probability 0.5
## X-squared = 1.445, df = 1, p-value = 0.2293
## alternative hypothesis: true p is not equal to 0.5
## 95 percent confidence interval:
##  0.4733037 0.6149394
## sample estimates:
##     p 
## 0.545

The results indicate that there is no statistically significant difference (p = .2293). In other words, the proportion of females does not significantly differ from the hypothesized value of 50%.
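
Note that prop.test uses a chi-square approximation. An exact binomial test, matching the section title more literally, is also available; a minimal sketch (output not shown):

binom.test(sum(female), length(female), p = 0.5)   # exact test of H0: p = 0.5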


1.1.4. CATEGORICAL: Chi-square goodness-of-fit


A chi-square goodness of fit test allows us to test whether the observed proportions for a categorical variable differ from hypothesized proportions. For example, let's suppose that we believe that the general population consists of 10% Hispanic, 10% Asian, 10% African American and 70% White. We want to test whether the observed proportions from our sample differ significantly from these hypothesized proportions.

prop.table(table(race))
## race
##     1     2     3     4 
## 0.120 0.055 0.100 0.725
chisq.test(table(race), p = c(10, 10, 10, 70)/100)
## 
##  Chi-squared test for given probabilities
## 
## data:  table(race)
## X-squared = 5.0286, df = 3, p-value = 0.1697

These results show that racial composition in our sample does not differ significantly from the hypothesized values that we supplied (chi-square with three degrees of freedom = 5.03, p = .1697).


1.2. ONE INDEPENDENT VARIABLE with TWO LEVELS (independent groups):


1.2.1. INTERVAL and NORMAL: 2 independent sample t-test


An independent samples t-test is used when you want to compare the means of a normally distributed interval dependent variable for two independent groups. For example, using the hsb2 data file, say we wish to test whether the mean for write is the same for males and females.

t.test(write ~ female)
## 
##  Welch Two Sample t-test
## 
## data:  write by female
## t = -3.6564, df = 169.71, p-value = 0.0003409
## alternative hypothesis: true difference in means between group 0 and group 1 is not equal to 0
## 95 percent confidence interval:
##  -7.499159 -2.240734
## sample estimates:
## mean in group 0 mean in group 1 
##        50.12088        54.99083

The results indicate that there is a statistically significant difference between the mean writing score for males and females (t = -3.6564, p = .0003). In other words, females have a statistically significantly higher mean score on writing (54.99) than males (50.12).


1.2.2. ORDINAL or INTERVAL: Wilcoxon-Mann Whitney test


The Wilcoxon-Mann-Whitney test is a non-parametric analog to the independent samples t-test and can be used when you do not assume that the dependent variable is a normally distributed interval variable (you only assume that the variable is at least ordinal). We will use the same data file (the hsb2 data file) and the same variables in this example as we did in the independent t-test example above and will not assume that write, our dependent variable, is normally distributed.

wilcox.test(write ~ female)
## 
##  Wilcoxon rank sum test with continuity correction
## 
## data:  write by female
## W = 3606, p-value = 0.0008749
## alternative hypothesis: true location shift is not equal to 0

The results suggest that there is a statistically significant difference between the underlying distributions of the write scores of males and the write scores of females. You can determine which group has the higher rank by looking at how the actual rank sums compare to the expected rank sums under the null hypothesis. The sum of the female ranks was higher while the sum of the male ranks was lower, so the female group had the higher rank.
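
The rank sums mentioned above are easy to compute directly; a sketch (not part of the original notes). Under the null hypothesis each group's expected rank sum is its group size times (N + 1)/2:

r <- rank(write)
tapply(r, female, sum)                    # observed rank sum per group
table(female) * (length(write) + 1) / 2   # expected rank sums under H0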


1.2.3. CATEGORICAL: Chi-square test and Fisher’s exact test


A chi-square test is used when you want to see if there is a relationship between two categorical variables. Using the hsb2 data file, let’s see if there is a relationship between the type of school attended (schtyp) and students’ gender (female). Remember that the chi-square test assumes the expected value of each cell is five or higher. This assumption is easily met in the examples below. However, if this assumption is not met in your data, please see the section on Fisher’s exact test below.

summary(schtyp)
##   1   2 
## 168  32
chisq.test(table(female, schtyp))
## 
##  Pearson's Chi-squared test with Yates' continuity correction
## 
## data:  table(female, schtyp)
## X-squared = 0.00054009, df = 1, p-value = 0.9815

These results indicate that there is no statistically significant relationship between the type of school attended and gender (chi-square with one degree of freedom = 0.00054, p = 0.9815).

Let’s look at another example, this time looking at the relationship between gender (female) and socio-economic status (ses). The point of this example is that one (or both) variables may have more than two levels, and that the variables do not have to have the same number of levels. In this example, female has two levels (male and female) and ses has three levels (low, medium and high).

chisq.test(table(female, ses))
## 
##  Pearson's Chi-squared test
## 
## data:  table(female, ses)
## X-squared = 4.5765, df = 2, p-value = 0.1014

Again we find that there is no statistically significant relationship between the variables (chi-square with two degrees of freedom = 4.5765, p = 0.101).


Fisher’s exact test is used when you want to conduct a chi-square test but one or more of your cells has an expected frequency of five or less. Remember that the chi-square test assumes that each cell has an expected frequency of five or more; Fisher’s exact test has no such assumption and can be used regardless of how small the expected frequencies are. In the example below, some cells have observed frequencies of two and one, suggesting that the corresponding expected frequencies may fall below five, so we will use Fisher’s exact test.
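
Whether expected frequencies really fall below five can be checked directly from the chisq.test object; a quick sketch (output not shown):

chisq.test(table(race, schtyp))$expected   # expected cell counts under independence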

fisher.test(table(race, schtyp))
## 
##  Fisher's Exact Test for Count Data
## 
## data:  table(race, schtyp)
## p-value = 0.5975
## alternative hypothesis: two.sided

These results suggest that there is not a statistically significant relationship between race and type of school (p = 0.597). Note that the Fisher’s exact test does not have a “test statistic”, but computes the p-value directly.


1.3. ONE INDEPENDENT VARIABLE with TWO or more LEVELS (independent groups):


1.3.1. INTERVAL & NORMAL: one-way ANOVA:


A one-way analysis of variance (ANOVA) is used when you have a categorical independent variable (with two or more categories) and a normally distributed interval dependent variable and you wish to test for differences in the means of the dependent variable broken down by the levels of the independent variable. For example, using the hsb2 data file, say we wish to test whether the mean of write differs between the three program types (prog). The command for this test would be:

summary(prog)
##   1   2   3 
##  45 105  50
aov(write ~ prog)
## Call:
##    aov(formula = write ~ prog)
## 
## Terms:
##                      prog Residuals
## Sum of Squares   3175.698 14703.177
## Deg. of Freedom         2       197
## 
## Residual standard error: 8.639179
## Estimated effects may be unbalanced
summary(aov(write ~ prog))
##              Df Sum Sq Mean Sq F value   Pr(>F)    
## prog          2   3176  1587.8   21.27 4.31e-09 ***
## Residuals   197  14703    74.6                     
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The mean of the dependent variable differs significantly among the levels of program type. However, we do not know if the difference is between only two of the levels or all three of the levels. (The F test for the Model is the same as the F test for prog because prog was the only variable entered into the model. If other variables had also been entered, the F test for the Model would have been different from prog.)
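
To locate which pairs of program-type means differ, a standard follow-up (not in the original notes) is Tukey's Honest Significant Difference on the fitted aov object:

TukeyHSD(aov(write ~ prog))   # all pairwise mean differences with adjusted p-values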


1.3.2. ORDINAL or INTERVAL: Kruskal Wallis


The Kruskal-Wallis test is used when you have one independent variable with two or more levels and an ordinal dependent variable. In other words, it is the non-parametric version of ANOVA and a generalization of the Mann-Whitney test, since it permits two or more groups. We will use the same data file as in the one-way ANOVA example above (the hsb2 data file) and the same variables, but we will not assume that write is a normally distributed interval variable.

kruskal.test(write, prog)
## 
##  Kruskal-Wallis rank sum test
## 
## data:  write and prog
## Kruskal-Wallis chi-squared = 34.045, df = 2, p-value = 4.047e-08

If some of the scores receive tied ranks, then a correction factor is used, yielding a slightly different value of chi-squared. With or without ties, the results indicate that there is a statistically significant difference among the three types of programs.
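
A common follow-up (a sketch, not in the original notes) is to run pairwise Wilcoxon rank-sum tests with a multiplicity correction to see which program types differ:

pairwise.wilcox.test(write, prog, p.adjust.method = "holm")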


1.3.3. CATEGORICAL: Chi square test
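

The chi-square test of independence from section 1.2.3 applies unchanged here, since neither variable is restricted to two levels. A minimal sketch on the hsb2 variables (output not shown):

chisq.test(table(ses, prog))   # both ses and prog have three levels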



1.4. ONE INDEPENDENT VARIABLE with TWO levels (dependent/matched groups):


1.4.1. INTERVAL & NORMAL: paired t-test:


A paired (samples) t-test is used when you have two related observations (i.e. two observations per subject) and you want to see if the means on these two normally distributed interval variables differ from one another. For example, using the hsb2 data file we will test whether the mean of read is equal to the mean of write.

t.test(write, read, paired = TRUE)
## 
##  Paired t-test
## 
## data:  write and read
## t = 0.86731, df = 199, p-value = 0.3868
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  -0.6941424  1.7841424
## sample estimates:
## mean of the differences 
##                   0.545

These results indicate that the mean of read is not statistically significantly different from the mean of write (t = 0.8673, p = 0.3868).


1.4.2. ORDINAL or INTERVAL: Wilcoxon signed rank test


The Mann–Whitney U test (also called the Mann–Whitney–Wilcoxon (MWW), Wilcoxon rank-sum test, or Wilcoxon–Mann–Whitney test) is a nonparametric test of the null hypothesis that a randomly selected value from one sample is equally likely to be less than or greater than a randomly selected value from a second sample.

Under the null hypothesis H0, the probability of an observation from the population X exceeding an observation from the second population Y equals the probability of an observation from Y exceeding an observation from X.

You should use the rank-sum test when the data are not paired. You’ll find many definitions of pairing, but at heart the criterion is that paired values are at least somewhat positively dependent, while unpaired values are not. Often the dependence arises because the two values are observations on the same unit (repeated measures), but pairs don’t have to come from the same unit: measurements of the same kind that tend to be associated in some way can also be treated as ‘paired’.

You use the Wilcoxon signed rank sum test when you do not wish to assume that the difference between the two variables is interval and normally distributed (but you do assume the difference is ordinal).

a <- c(110, 115, 128, 142, 123, 129, 130, 128, 134, 133, 128, 147, 137, 112, 138, 128, 132, 139, 133, 135, 133, 125, 134, 139, 138, 142, 152, 140, 144, 147, 153, 141)
b <- c(122, 118, 120, 131, 124, 118, 120, 140, 124, 120, 134, 127, 127, 134, 133, 137, 137, 135, 129, 138, 143, 128, 121, 129, 133, 138, 142, 131, 135, 132, 146, 135)

wilcox.test(a,b)
## 
##  Wilcoxon rank sum test with continuity correction
## 
## data:  a and b
## W = 632.5, p-value = 0.1067
## alternative hypothesis: true location shift is not equal to 0

You should use the signed rank test when the data are paired:

after  <- c(125, 115, 130, 140, 140, 115, 140, 125, 140, 135)
before <- c(110, 122, 125, 120, 140, 124, 123, 137, 135, 145)
sgn    <- sign(after - before)    # sign of each paired difference
absdif <- abs(after - before)     # magnitude of each paired difference
d <- data.frame(after, before, sgn, absdif)
wilcox.test(d$before, d$after, paired = TRUE, alternative = "two.sided", correct = FALSE)
## 
##  Wilcoxon signed rank test
## 
## data:  d$before and d$after
## V = 18, p-value = 0.5936
## alternative hypothesis: true location shift is not equal to 0

Or we can use the same example as above, but without assuming that the difference between read and write is interval and normally distributed.

wilcox.test(write, read, paired = TRUE)
## 
##  Wilcoxon signed rank test with continuity correction
## 
## data:  write and read
## V = 9261, p-value = 0.3666
## alternative hypothesis: true location shift is not equal to 0

The results suggest that there is not a statistically significant difference between read and write.

If you believe the differences between read and write are not ordinal but can merely be classified as positive and negative, then you may want to consider a sign test in lieu of the signed rank test.
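
A sign test can be carried out as an exact binomial test on the signs of the nonzero differences; a minimal sketch (output not shown):

dif <- write - read
binom.test(sum(dif > 0), sum(dif != 0))   # H0: positive and negative differences equally likely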


1.4.3. CATEGORICAL: McNemar test:


You would perform McNemar’s test if you were interested in the marginal frequencies of two binary outcomes. These binary outcomes may be the same outcome variable on matched pairs (like a case-control study) or two outcome variables from a single group. For example, let us consider two questions, Q1 and Q2, from a test taken by 200 students. Suppose 172 students answered both questions correctly, 15 students answered both questions incorrectly, 7 answered Q1 correctly and Q2 incorrectly, and 6 answered Q2 correctly and Q1 incorrectly. These counts can be considered in a two-way contingency table. The null hypothesis is that the two questions are answered correctly or incorrectly at the same rate (or that the contingency table is symmetric).

(X <- matrix(c(172, 7, 6, 15), 2, 2))
##      [,1] [,2]
## [1,]  172    6
## [2,]    7   15
mcnemar.test(X)
## 
##  McNemar's Chi-squared test with continuity correction
## 
## data:  X
## McNemar's chi-squared = 0, df = 1, p-value = 1

McNemar’s chi-square statistic suggests that there is not a statistically significant difference in the proportions of correct/incorrect answers to these two questions.
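
With so few discordant pairs (7 and 6), an exact alternative (a sketch, not in the original notes) is a binomial test conditioned on the discordant pairs only:

binom.test(7, 7 + 6)   # exact McNemar: H0 is that discordant pairs split 50/50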


1.5. ONE INDEPENDENT VARIABLE with TWO or more levels (dependent/matched groups)


1.5.1. INTERVAL & NORMAL: one-way repeated measures ANOVA:


You would perform a one-way repeated measures analysis of variance if you had one categorical independent variable and a normally distributed interval dependent variable that was repeated at least twice for each subject. This is the equivalent of the paired samples t-test, but allows for two or more levels of the categorical variable. This tests whether the mean of the dependent variable differs by the categorical variable. We have an example data set called rb4, which is used in Kirk’s book Experimental Design. In this data set, y is the dependent variable, a is the repeated measure and s is the variable that indicates the subject number.

require(car)
require(foreign)
kirk <- within(read.dta("https://stats.idre.ucla.edu/stat/stata/examples/kirk/rb4.dta"), 
    {
        s <- as.factor(s)
        a <- as.factor(a)
    })

model <- lm(y ~ a + s, data = kirk)
analysis <- Anova(model, idata = kirk, idesign = ~s)
print(analysis)
## Anova Table (Type II tests)
## 
## Response: y
##           Sum Sq Df F value    Pr(>F)    
## a           49.0  3 11.6271 0.0001056 ***
## s           31.5  7  3.2034 0.0180188 *  
## Residuals   29.5 21                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Some packages report up to four different p-values for the within-subjects effect. The “regular” one (0.0001 here) is the p-value you would get if you assumed compound symmetry in the variance-covariance matrix. Because that assumption is often not valid, three corrected p-values are commonly offered as well (the Huynh-Feldt, H-F; Greenhouse-Geisser, G-G; and Box’s conservative correction). No matter which p-value is used, our results indicate that we have a statistically significant effect of a at the .05 level.


1.5.2. ORDINAL or INTERVAL: Friedman test:


You perform a Friedman test when you have one within-subjects independent variable with two or more levels and a dependent variable that is not interval and normally distributed (but at least ordinal). We will use this test to determine if there is a difference in the reading, writing and math scores. The null hypothesis in this test is that the distribution of the ranks of each type of score (i.e., reading, writing and math) are the same.

friedman.test(cbind(read, write, math))
## 
##  Friedman rank sum test
## 
## data:  cbind(read, write, math)
## Friedman chi-squared = 0.64491, df = 2, p-value = 0.7244

Friedman’s chi-square has a value of 0.64 and a p-value of 0.7244 and is not statistically significant. Hence, there is no evidence that the distributions of the three types of scores are different.


1.5.3. CATEGORICAL (2 categories): repeated measures logistic regression


If you have a binary outcome measured repeatedly for each subject and you wish to run a logistic regression that accounts for the effect of these multiple measures from each subject, you can perform a repeated measures logistic regression. The exercise data file contains 3 pulse measurements of 30 people assigned to 2 different diet regimens and 3 different exercise regimens. If we define a “high” pulse as being over 100, we can then predict the probability of a high pulse using diet regimen.

There are three measurements for each id, so we treat id as the grouping variable and fit a mixed-effects logistic regression with a random intercept per subject, using glmer from the lme4 package. Converting diet to a factor lets R create the indicator variables as needed.

require(lme4)
exercise <- within(read.dta("https://stats.idre.ucla.edu/stat/stata/whatstat/exercise.dta"), 
    {
        id <- as.factor(id)
        diet <- as.factor(diet)
    })

# the random intercept (1 | id) accounts for the repeated measurements per subject
lmm <- glmer(highpulse ~ diet + (1 | id), data = exercise, family = binomial)


library(lmerTest)
summary(lmm)
## Generalized linear mixed model fit by maximum likelihood (Laplace
##   Approximation) [glmerMod]
##  Family: binomial  ( logit )
## Formula: highpulse ~ diet + (1 | id)
##    Data: exercise
## 
##      AIC      BIC   logLik deviance df.resid 
##    105.5    113.0    -49.7     99.5       87 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -1.1218 -0.3639 -0.2650  0.5249  1.7553 
## 
## Random effects:
##  Groups Name        Variance Std.Dev.
##  id     (Intercept) 3.315    1.821   
## Number of obs: 90, groups:  id, 30
## 
## Fixed effects:
##             Estimate Std. Error z value Pr(>|z|)  
## (Intercept)  -2.0036     0.8102  -2.473   0.0134 *
## diet2         1.1446     0.9487   1.207   0.2276  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##       (Intr)
## diet2 -0.719
anova(lmm)
## Analysis of Variance Table
##      npar Sum Sq Mean Sq F value
## diet    1 1.6262  1.6262  1.6262

These results indicate that diet is not statistically significant (z = 1.21, p = 0.228).


1.6. TWO or more INDEPENDENT VARIABLES (independent groups):


1.6.1. INTERVAL & NORMAL: factorial ANOVA


A factorial ANOVA has two or more categorical independent variables (either with or without the interactions) and a single normally distributed interval dependent variable. For example, using the hsb2 data file we will look at writing scores (write) as the dependent variable and gender (female) and socio-economic status (ses) as independent variables, and we will include an interaction of female by ses.

table(hsb2$ses)
## 
##  1  2  3 
## 47 95 58
anova(lm(write ~ female * ses, data = hsb2))
## Analysis of Variance Table
## 
## Response: write
##             Df  Sum Sq Mean Sq F value    Pr(>F)    
## female       1  1176.2 1176.21 14.7212 0.0001680 ***
## ses          1  1042.3 1042.32 13.0454 0.0003862 ***
## female:ses   1     0.0    0.04  0.0005 0.9827570    
## Residuals  196 15660.3   79.90                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
summary(lm(write ~ female * ses, data = hsb2))
## 
## Call:
## lm(formula = write ~ female * ses, data = hsb2)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -22.279  -6.079   1.462   6.971  18.527 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 43.31092    3.12020  13.881   <2e-16 ***
## female       5.36679    3.94610   1.360   0.1754    
## ses          3.16176    1.38180   2.288   0.0232 *  
## female:ses   0.03884    1.79471   0.022   0.9828    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 8.939 on 196 degrees of freedom
## Multiple R-squared:  0.1241, Adjusted R-squared:  0.1107 
## F-statistic: 9.256 on 3 and 196 DF,  p-value: 9.384e-06

These results indicate that the overall model is statistically significant (F = 9.26, p < 0.0001). The variables female and ses are also statistically significant (F = 14.72, p = 0.0002 and F = 13.05, p = 0.0004, respectively). However, the interaction between female and ses is not statistically significant (F = 0.0005, p = 0.983).
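
Note that ses entered the model above as a numeric variable (hence 1 degree of freedom in the ANOVA table). To treat it as a genuinely categorical predictor with three levels, it can be converted to a factor first; a sketch (output not shown):

anova(lm(write ~ female * factor(ses), data = hsb2))   # ses as a 3-level factor (2 df)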


1.6.2. ORDINAL & INTERVAL: ordered logistic regression:


Ordered logistic regression is used when the dependent variable is ordered, but not continuous. For example, using the hsb2 data file we will create an ordered variable called write3. This variable will have the values 1, 2 and 3, indicating a low, medium or high writing score. We do not generally recommend categorizing a continuous variable in this way; we are simply creating a variable to use for this example. We will use gender (female), reading score (read) and social studies score (socst) as predictor variables in this model.

library(MASS)
hsb2$write3 <- cut(hsb2$write, c(0, 48, 57, 70),  right = TRUE, labels = c(1,2,3))
table(hsb2$write3)
## 
##  1  2  3 
## 61 61 78
m <- polr(write3 ~ female + read + socst, data = hsb2, Hess=TRUE)
summary(m)
## Call:
## polr(formula = write3 ~ female + read + socst, data = hsb2, Hess = TRUE)
## 
## Coefficients:
##          Value Std. Error t value
## female 1.28543    0.32445   3.962
## read   0.11772    0.02136   5.512
## socst  0.08019    0.01944   4.124
## 
## Intercepts:
##     Value   Std. Error t value
## 1|2  9.7037  1.1968     8.1080
## 2|3 11.8001  1.3041     9.0486
## 
## Residual Deviance: 312.5526 
## AIC: 322.5526

The results indicate that the overall model is statistically significant, as is each of the predictor variables (the t values of 3.96, 5.51 and 4.12 all correspond to p < .001). There are two cutoff points for this model because there are three levels of the outcome variable.

One of the assumptions underlying ordinal logistic (and ordinal probit) regression is that the relationship between each pair of outcome groups is the same. In other words, ordinal logistic regression assumes that the coefficients that describe the relationship between, say, the lowest versus all higher categories of the response variable are the same as those that describe the relationship between the next lowest category and all higher categories, etc. This is called the proportional odds assumption or the parallel regression assumption. Because the relationship between all pairs of groups is the same, there is only one set of coefficients (only one model). If this was not the case, we would need different models (such as a generalized ordered logit model) to describe the relationship between each pair of outcome groups.
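
One informal way to probe the proportional odds assumption (a sketch, not in the original notes) is to compare the ordinal fit against an unconstrained multinomial logit with a likelihood-ratio test:

library(nnet)
m2 <- multinom(write3 ~ female + read + socst, data = hsb2)
# polr fits 5 parameters (3 slopes + 2 cutpoints); multinom fits 8
# (4 per non-reference level), so the likelihood-ratio test has 3 df
pchisq(deviance(m) - deviance(m2), df = 3, lower.tail = FALSE)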


1.6.3. CATEGORICAL (2 categories): factorial logistic regression


A factorial logistic regression is used when you have two or more categorical independent variables but a dichotomous dependent variable. For example, using the hsb2 data file we will use female as our dependent variable, because it is the only dichotomous (0/1) variable in our data set; certainly not because it is common practice to use gender as an outcome variable. We will use type of program (prog) and school type (schtyp) as our predictor variables. Because prog is a factor with three levels, R creates the necessary dummy codes for it (prog2 and prog3 in the output below).

summary(glm(female ~ prog * schtyp, data = hsb2, family = binomial))
## 
## Call:
## glm(formula = female ~ prog * schtyp, family = binomial, data = hsb2)
## 
## Deviance Residuals: 
##    Min      1Q  Median      3Q     Max  
## -1.893  -1.249   1.064   1.107   1.199  
## 
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)
## (Intercept)   -0.05129    0.32036  -0.160    0.873
## prog2          0.32459    0.39108   0.830    0.407
## prog3          0.21835    0.43191   0.506    0.613
## schtyp2        1.66073    1.14131   1.455    0.146
## prog2:schtyp2 -1.93402    1.23271  -1.569    0.117
## prog3:schtyp2 -1.82779    1.84025  -0.993    0.321
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 275.64  on 199  degrees of freedom
## Residual deviance: 272.49  on 194  degrees of freedom
## AIC: 284.49
## 
## Number of Fisher Scoring iterations: 3

The results indicate that the overall model is not statistically significant. Furthermore, none of the coefficients are statistically significant either.


1.7. ONE INTERVAL IV:


1.7.1. INTERVAL & NORMAL: Correlation:


A correlation is useful when you want to see the linear relationship between two (or more) normally distributed interval variables. For example, using the hsb2 data file we can run a correlation between two continuous variables, read and write.

cor(read, write)
## [1] 0.5967765
cor.test(read, write)
## 
##  Pearson's product-moment correlation
## 
## data:  read and write
## t = 10.465, df = 198, p-value < 2.2e-16
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
##  0.4993831 0.6792753
## sample estimates:
##       cor 
## 0.5967765

In the second example, we will run a correlation between a dichotomous variable, female, and a continuous variable, write. Although it is assumed that the variables are interval and normally distributed, we can include dummy variables when performing correlations.

cor(female, write)
## [1] 0.2564915

In the first example above, we see that the correlation between read and write is 0.5968. By squaring the correlation and multiplying by 100, you can determine what percentage of the variability is shared: 0.5968 squared is about .356, so read shares roughly 36% of its variability with write. In the output for the second example, we can see that the correlation between write and female is 0.2565. Squaring this number yields .0658, meaning that female shares approximately 6.6% of its variability with write.


1.7.2. INTERVAL & NORMAL: Simple linear regression:


Simple linear regression allows us to look at the linear relationship between one normally distributed interval predictor and one normally distributed interval outcome variable. For example, using the hsb2 data file, say we wish to look at the relationship between writing scores (write) and reading scores (read); in other words, predicting write from read.

lm(write ~ read)
## 
## Call:
## lm(formula = write ~ read)
## 
## Coefficients:
## (Intercept)         read  
##     23.9594       0.5517
summary(lm(write ~ read))
## 
## Call:
## lm(formula = write ~ read)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -20.5447  -5.1225   0.6451   6.3259  15.4553 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 23.95944    2.80574   8.539 3.55e-15 ***
## read         0.55171    0.05272  10.465  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 7.625 on 198 degrees of freedom
## Multiple R-squared:  0.3561, Adjusted R-squared:  0.3529 
## F-statistic: 109.5 on 1 and 198 DF,  p-value: < 2.2e-16

We see that the relationship between write and read is positive (slope = .5517) and, based on the t value (10.47) and p-value (< 2.2e-16), we would conclude this relationship is statistically significant. Hence, we would say there is a statistically significant positive linear relationship between reading and writing.


1.7.3. ORDINAL or INTERVAL: non-parametric correlation:


A Spearman correlation is used when one or both of the variables are not assumed to be normally distributed and interval (but are assumed to be ordinal). The values of the variables are converted into ranks and then correlated. In our example, we will look for a relationship between read and write. We will not assume that both of these variables are normal and interval.

cor.test(write, read, method = "spearman")
## 
##  Spearman's rank correlation rho
## 
## data:  write and read
## S = 510993, p-value < 2.2e-16
## alternative hypothesis: true rho is not equal to 0
## sample estimates:
##       rho 
## 0.6167455

The results suggest that the relationship between read and write (rho = 0.6167, p < 2.2e-16) is statistically significant.


1.7.4. CATEGORICAL: simple logistic regression:


Logistic regression assumes that the outcome variable is binary (i.e., coded as 0 and 1). We have only one variable in the hsb2 data file that is coded 0 and 1, and that is female. We understand that female is a silly outcome variable (it would make more sense to use it as a predictor variable), but we can use female as the outcome variable to illustrate how the code for this command is structured and how to interpret the output. In our example, female will be the outcome variable, and read will be the predictor variable. As with OLS regression, the predictor variables must be either dichotomous or continuous; they cannot be categorical.

glm(female ~ read, family = binomial)
## 
## Call:  glm(formula = female ~ read, family = binomial)
## 
## Coefficients:
## (Intercept)         read  
##     0.72609     -0.01044  
## 
## Degrees of Freedom: 199 Total (i.e. Null);  198 Residual
## Null Deviance:       275.6 
## Residual Deviance: 275.1     AIC: 279.1
summary(glm(female ~ read, family = binomial))
## 
## Call:
## glm(formula = female ~ read, family = binomial)
## 
## Deviance Residuals: 
##    Min      1Q  Median      3Q     Max  
## -1.352  -1.243   1.045   1.094   1.206  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)
## (Intercept)  0.72609    0.74195   0.979    0.328
## read        -0.01044    0.01392  -0.750    0.453
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 275.64  on 199  degrees of freedom
## Residual deviance: 275.07  on 198  degrees of freedom
## AIC: 279.07
## 
## Number of Fisher Scoring iterations: 3

The results indicate that reading score (read) is not a statistically significant predictor of gender (i.e., being female), z = -0.75, p = 0.453. Likewise, the test of the overall model is not statistically significant, LR chi-squared 0.56, p = 0.4527.
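
The likelihood-ratio test of the overall model quoted above can be reproduced from the null and residual deviances in the output; a sketch:

glm1 <- glm(female ~ read, family = binomial)
with(glm1, pchisq(null.deviance - deviance, df.null - df.residual, lower.tail = FALSE))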


1.8. ONE or MORE INTERVAL IVs and/or ONE or MORE CATEGORICAL IVs:


1.8.1.1. INTERVAL and NORMAL: Multiple regression:


Multiple regression is very similar to simple regression, except that in multiple regression you have more than one predictor variable in the equation. For example, using the hsb2 data file we will predict writing score from gender (female), reading, math, science and social studies (socst) scores.

lm(write ~ female + read + math + science + socst)
## 
## Call:
## lm(formula = write ~ female + read + math + science + socst)
## 
## Coefficients:
## (Intercept)       female         read         math      science        socst  
##      6.1388       5.4925       0.1254       0.2381       0.2419       0.2293
summary(lm(write ~ female + read + math + science + socst))
## 
## Call:
## lm(formula = write ~ female + read + math + science + socst)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -17.040  -4.015  -0.264   3.938  14.989 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  6.13876    2.80842   2.186 0.030025 *  
## female       5.49250    0.87542   6.274 2.24e-09 ***
## read         0.12541    0.06496   1.931 0.054989 .  
## math         0.23807    0.06713   3.547 0.000489 ***
## science      0.24194    0.06070   3.986 9.51e-05 ***
## socst        0.22926    0.05284   4.339 2.30e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 6.059 on 194 degrees of freedom
## Multiple R-squared:  0.6017, Adjusted R-squared:  0.5914 
## F-statistic:  58.6 on 5 and 194 DF,  p-value: < 2.2e-16

The results indicate that the overall model is statistically significant (F = 58.60, p < 0.0001). Furthermore, all of the predictor variables are statistically significant except for read.


1.8.1.2 INTERVAL and NORMAL: Analysis of covariance ANCOVA:


Analysis of covariance is like ANOVA, except that in addition to the categorical predictors you also have continuous predictors. For example, the one-way ANOVA example used write as the dependent variable and prog as the independent variable. Let’s add read as a continuous variable to this model, as shown below.

lm(write ~ read + prog)
## 
## Call:
## lm(formula = write ~ read + prog)
## 
## Coefficients:
## (Intercept)         read        prog2        prog3  
##     27.8195       0.4726       1.8963      -2.8930
summary(lm(write ~ read + prog))
## 
## Call:
## lm(formula = write ~ read + prog)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.5558  -4.7721   0.5312   5.3848  18.4442 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 27.81953    3.03415   9.169  < 2e-16 ***
## read         0.47259    0.05676   8.327 1.41e-14 ***
## prog2        1.89626    1.37528   1.379   0.1695    
## prog3       -2.89303    1.54287  -1.875   0.0623 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 7.444 on 196 degrees of freedom
## Multiple R-squared:  0.3925, Adjusted R-squared:  0.3832 
## F-statistic: 42.21 on 3 and 196 DF,  p-value: < 2.2e-16

The results indicate that even after adjusting for reading score (read), writing scores still significantly differ by program type.
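
Note that neither prog2 nor prog3 is individually significant at the .05 level in the coefficient table; the statement above refers to the joint 2-df test for prog, which can be obtained by comparing nested models (a sketch, output not shown):

anova(lm(write ~ read), lm(write ~ read + prog))   # F test for adding prog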


1.8.2.1. CATEGORICAL: multiple logistic regression:


Multiple logistic regression is like simple logistic regression, except that there are two or more predictors. The predictors can be interval variables or dummy variables, but cannot be categorical variables. If you have categorical predictors, they should be coded into one or more dummy variables. We have only one variable in our data set that is coded 0 and 1, and that is female. We understand that female is a silly outcome variable (it would make more sense to use it as a predictor variable), but we can use female as the outcome variable to illustrate how the code for this command is structured and how to interpret the output.

glm(female ~ read + write, family = binomial)
## 
## Call:  glm(formula = female ~ read + write, family = binomial)
## 
## Coefficients:
## (Intercept)         read        write  
##    -1.70614     -0.07101      0.10637  
## 
## Degrees of Freedom: 199 Total (i.e. Null);  197 Residual
## Null Deviance:       275.6 
## Residual Deviance: 247.8     AIC: 253.8

These results show that both read and write are significant predictors of female.
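
The default print method shows only the coefficients and deviances; the z tests behind the statement above come from summary(). A sketch (output omitted):

summary(glm(female ~ read + write, family = binomial))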


1.8.2.2. CATEGORICAL: discriminant analysis:


Discriminant analysis is used when you have one or more normally distributed interval independent variables and a categorical dependent variable. It is a multivariate technique that considers the latent dimensions in the independent variables for predicting group membership in the categorical dependent variable. For example, using the hsb2 data file, say we wish to use read, write and math scores to predict the type of program a student belongs to (prog).

(fit <- lda(factor(prog) ~ read + write + math, data = hsb2))
## Call:
## lda(factor(prog) ~ read + write + math, data = hsb2)
## 
## Prior probabilities of groups:
##     1     2     3 
## 0.225 0.525 0.250 
## 
## Group means:
##       read    write     math
## 1 49.75556 51.33333 50.02222
## 2 56.16190 56.25714 56.73333
## 3 46.20000 46.76000 46.42000
## 
## Coefficients of linear discriminants:
##              LD1         LD2
## read  0.02919876  0.04385321
## write 0.03832289 -0.13698224
## math  0.07034625  0.07931008
## 
## Proportion of trace:
##    LD1    LD2 
## 0.9874 0.0126

The main point is that two canonical variables are identified by the analysis, the first of which seems to be more related to program type than the second.
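
One way to see how well the discriminant functions separate the groups (a sketch, not in the original notes) is to cross-tabulate actual against predicted program membership:

table(actual = hsb2$prog, predicted = predict(fit)$class)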


2. TWO OR MORE DEPENDENT VARIABLES:


2.1. ONE IV with TWO or MORE levels (independent groups):


2.1.1. INTERVAL and NORMAL: One-way MANOVA:


MANOVA (multivariate analysis of variance) is like ANOVA, except that there are two or more dependent variables. In a one-way MANOVA, there is one categorical independent variable and two or more dependent variables. For example, using the hsb2 data file, say we wish to examine the differences in read, write and math broken down by program type (prog).

summary(manova(cbind(read, write, math) ~ prog))
##            Df  Pillai approx F num Df den Df    Pr(>F)    
## prog        2 0.26721   10.075      6    392 2.304e-10 ***
## Residuals 197                                             
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

According to the test reported here (Pillai’s trace = 0.267, approximate F(6, 392) = 10.08, p < .001), the students in the different programs differ in their joint distribution of read, write and math.
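
The summary shown uses Pillai’s trace by default; the other common criteria can be requested explicitly, e.g. (output not shown):

summary(manova(cbind(read, write, math) ~ prog), test = "Wilks")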


2.2. TWO + INDEPENDENT VARIABLES:


2.2.1. INTERVAL and NORMAL: multivariate multiple linear regression:


Multivariate multiple regression is used when you have two or more dependent variables that are to be predicted from two or more predictor variables. In our example, we will predict write and read from female, math, science and social studies (socst) scores.

M1 <- lm(cbind(write, read) ~ female + math + science + socst, data = hsb2)
summary(Anova(M1))
## 
## Type II MANOVA Tests:
## 
## Sum of squares and products for error:
##          write     read
## write 7258.783 1091.057
## read  1091.057 8699.762
## 
## ------------------------------------------
##  
## Term: female 
## 
## Sum of squares and products for the hypothesis:
##           write       read
## write 1413.5284 -133.48461
## read  -133.4846   12.60544
## 
## Multivariate Tests: female
##                  Df test stat approx F num Df den Df     Pr(>F)    
## Pillai            1 0.1698853 19.85132      2    194 1.4335e-08 ***
## Wilks             1 0.8301147 19.85132      2    194 1.4335e-08 ***
## Hotelling-Lawley  1 0.2046528 19.85132      2    194 1.4335e-08 ***
## Roy               1 0.2046528 19.85132      2    194 1.4335e-08 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## ------------------------------------------
##  
## Term: math 
## 
## Sum of squares and products for the hypothesis:
##          write      read
## write 714.8665  856.2825
## read  856.2825 1025.6735
## 
## Multivariate Tests: math
##                  Df test stat approx F num Df den Df     Pr(>F)    
## Pillai            1 0.1599321 18.46685      2    194 4.5551e-08 ***
## Wilks             1 0.8400679 18.46685      2    194 4.5551e-08 ***
## Hotelling-Lawley  1 0.1903800 18.46685      2    194 4.5551e-08 ***
## Roy               1 0.1903800 18.46685      2    194 4.5551e-08 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## ------------------------------------------
##  
## Term: science 
## 
## Sum of squares and products for the hypothesis:
##          write     read
## write 857.8824 901.3191
## read  901.3191 946.9551
## 
## Multivariate Tests: science
##                  Df test stat approx F num Df den Df     Pr(>F)    
## Pillai            1 0.1664254 19.36631      2    194 2.1459e-08 ***
## Wilks             1 0.8335746 19.36631      2    194 2.1459e-08 ***
## Hotelling-Lawley  1 0.1996526 19.36631      2    194 2.1459e-08 ***
## Roy               1 0.1996526 19.36631      2    194 2.1459e-08 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## ------------------------------------------
##  
## Term: socst 
## 
## Sum of squares and products for the hypothesis:
##          write     read
## write 1105.653 1277.393
## read  1277.393 1475.810
## 
## Multivariate Tests: socst
##                  Df test stat approx F num Df den Df     Pr(>F)    
## Pillai            1 0.2206710 27.46604      2    194 3.1399e-11 ***
## Wilks             1 0.7793290 27.46604      2    194 3.1399e-11 ***
## Hotelling-Lawley  1 0.2831551 27.46604      2    194 3.1399e-11 ***
## Roy               1 0.2831551 27.46604      2    194 3.1399e-11 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Researchers familiar with traditional multivariate analysis will recognize the tests above: for each term the output reports Wilks’ Lambda, Pillai’s Trace, the Hotelling-Lawley Trace and Roy’s largest root. Because each term here has a single degree of freedom, all four criteria yield the same approximate F.

These results show that female has a significant relationship with the joint distribution of write and read.


2.3. ZERO INDEPENDENT VARIABLES:


2.3.1. INTERVAL and NORMAL: factor analysis:

Factor analysis is a form of exploratory multivariate analysis that is used to either reduce the number of variables in a model or to detect relationships among variables. All variables involved in the factor analysis need to be continuous and are assumed to be normally distributed. The goal of the analysis is to try to identify factors which underlie the variables. There may be fewer factors than variables, but there may not be more factors than variables. For our example, let’s suppose that we think that there are some common factors underlying the various test scores.

require(psych)
fa(r = cor(model.matrix(~read + write + math + science + socst - 1, data = hsb2)),
    nfactors = 2, rotate = "none", fm = "pa")
## Factor Analysis using method =  pa
## Call: fa(r = cor(model.matrix(~read + write + math + science + socst - 
##     1, data = hsb2)), nfactors = 2, rotate = "none", fm = "pa")
## Standardized loadings (pattern matrix) based upon correlation matrix
##          PA1   PA2   h2   u2 com
## read    0.81  0.06 0.66 0.34 1.0
## write   0.76  0.00 0.58 0.42 1.0
## math    0.80  0.17 0.67 0.33 1.1
## science 0.75  0.26 0.62 0.38 1.2
## socst   0.79 -0.48 0.85 0.15 1.6
## 
##                        PA1  PA2
## SS loadings           3.06 0.33
## Proportion Var        0.61 0.07
## Cumulative Var        0.61 0.68
## Proportion Explained  0.90 0.10
## Cumulative Proportion 0.90 1.00
## 
## Mean item complexity =  1.2
## Test of the hypothesis that 2 factors are sufficient.
## 
## The degrees of freedom for the null model are  10  and the objective function was  2.51
## The degrees of freedom for the model are 1  and the objective function was  0.01 
## 
## The root mean square of the residuals (RMSR) is  0.01 
## The df corrected root mean square of the residuals is  0.03 
## 
## Fit based upon off diagonal values = 1
## Measures of factor score adequacy             
##                                                    PA1  PA2
## Correlation of (regression) scores with factors   0.95 0.79
## Multiple R square of scores with factors          0.91 0.62
## Minimum correlation of possible factor scores     0.82 0.23

3. TWO SETS OF TWO + DEPENDENT VARIABLES


3.1. ZERO INDEPENDENT VARIABLES:


3.1.1. INTERVAL and NORMAL: canonical correlation:


Canonical correlation is a multivariate technique used to examine the relationship between two groups of variables. For each set of variables, it creates latent variables and looks at the relationships among the latent variables. It assumes that all variables in the model are interval and normally distributed.

require(CCA)
head(cc(cbind(read, write), cbind(math, science)))
## $cor
## [1] 0.7728409 0.0234784
## 
## $names
## $names$Xnames
## [1] "read"  "write"
## 
## $names$Ynames
## [1] "math"    "science"
## 
## $names$ind.names
## NULL
## 
## 
## $xcoef
##              [,1]       [,2]
## read  -0.06326131 -0.1037908
## write -0.04924918  0.1219084
## 
## $ycoef
##                [,1]       [,2]
## math    -0.06698268  0.1201425
## science -0.04824063 -0.1208860
## 
## $scores
## $scores$xscores
##               [,1]         [,2]
##   [1,] -0.26358835 -0.589561062
##   [2,] -1.30420707 -0.877901269
##   [3,]  1.49454321 -1.556539586
##   [4,] -0.24916276 -2.187572699
##   [5,]  0.36902478  0.448346869
##    ... (rows 6 to 200 omitted)
## 
## $scores$yscores
##                [,1]        [,2]
##   [1,]  1.013980334 -0.81276184
##   [2,] -0.561661891 -1.30522812
##   [3,] -0.387441410 -0.58065576
##   [4,]  0.322640484 -0.81722302
##   [5,] -0.347186284  0.38420150
##    ... (rows 6 to 200 omitted)
## 
## $scores$corr.X.xscores
##             [,1]      [,2]
## read  -0.9271970 -0.374574
## write -0.8538903  0.520453
## 
## $scores$corr.Y.xscores
##               [,1]         [,2]
## math    -0.7177974  0.008701966
## science -0.6750187 -0.011433002
## 
## $scores$corr.X.yscores
##             [,1]         [,2]
## read  -0.7165758 -0.008794398
## write -0.6599214  0.012219404
## 
## $scores$corr.Y.yscores
##               [,1]       [,2]
## math    -0.9287778  0.3706371
## science -0.8734252 -0.4869583


NOTE: These are tentative notes on different topics for personal use - expect mistakes and misunderstandings.