NOTES ON STATISTICS, PROBABILITY and MATHEMATICS


Power Calculation:


See also this post.

Motivating scenario: Meta-alpha protein (MAP) levels of 25 mg/mL are considered the upper limit of normal in the population at large (\(\mu_o\)). However, we are studying a group of 20 (\(n=20\)) subjects with a severe deficit in glycophospholipids (GLy), whose MAP concentration is 28 mg/mL with a standard deviation of 5 mg/mL. The question is: what is the power of a \(t\) test to reject the null hypothesis \(H_o\) when it is false, and conclude that people with a GLy deficit have higher levels of MAP?

These plots show the dependence of power on the true mean of the alternative hypothesis: the closer it is to the mean under \(H_o\), the less power there is to reject \(H_o\) when it is false. In the limit, when both means coincide, the power becomes identical to the \(\alpha\) risk.


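Such a curve can be recreated with a few lines of R. Here is a minimal sketch (not the original figure) for the MAP example, using power.t.test for a one-sample, one-sided test with \(n = 20\) and \(sd = 5\), sweeping the true mean under \(H_a\) from 25 to 30 mg/mL:

# Power as a function of the true mean under Ha (MAP example: mu_o = 25, sd = 5, n = 20)
mu_a_grid <- seq(25, 30, by = 0.1)
pwr <- sapply(mu_a_grid - 25, function(d)
  power.t.test(n = 20, delta = d, sd = 5,
               type = "one.sample", alternative = "one.sided")$power)
plot(mu_a_grid, pwr, type = "l",
     xlab = "True mean under Ha (mg/mL)", ylab = "Power")
abline(h = 0.05, lty = 2)  # when mu_a = mu_o the power collapses to alpha = 0.05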


The probability of a type II error is \(\beta\). In other words, it is the probability of not rejecting \(H_o\) when it is false.

Therefore, the probability of rejecting \(H_o\) when it is false will be \(1 - \beta\). And this is exactly the definition of POWER:


\(POWER \, = \, 1\,-\,\beta\)


Power is exactly the probability that the test statistic lies in the rejection region under the premise that \(H_a\) is correct.

For a single-group mean test we compare a standardized sample mean to a \(t\) quantile to test whether the group mean belongs to a different population, i.e. \(H_a: \mu_a>\mu_o\).


\(\Pr\left(\Large\frac{\bar X - \mu_o}{s/\sqrt{n}}\,> t_{1-\alpha,\,n-1}\mid \mu=\mu_a\right)\)


However, if \(n\) is large we can instead use normal (\(z\)) calculations:


\(\Pr\left(\Large\frac{\bar X - \mu_o}{s/\sqrt{n}}\,> z_{1-\alpha}\mid\mu=\mu_a\right)\small \tag 1\)


That is to say, we want to calculate the probability that our test statistic falls in the rejection region given that the alternative is true.


\[\begin{align} \Pr\left(\Large\frac{\bar X - \mu_a \, + \mu_a -\mu_o}{s/\sqrt{n}}\,> z_{1-\alpha}\mid \mu=\mu_a\right)\\[2ex] =\Pr\left(\Large\frac{\bar X - \mu_a}{s/\sqrt{n}} > \,z_{1-\alpha}\, -\frac{\mu_a -\mu_o}{s/\sqrt{n}}\mid \mu=\mu_a\right)\small \tag 2\\[2ex] =\Pr\left(Z> \, z_{1-\alpha} -\frac{\mu_a -\mu_o}{s/\sqrt{n}}\mid \mu=\mu_a\right) \end{align}\]
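
The last line can be transcribed directly into R. As a small sketch (the helper name power_z is mine, not from the notes), applied to the MAP example it gives a normal-approximation power of roughly 0.85, slightly above the exact \(t\)-based value of about 0.83 obtained further below:

# Normal-approximation power for a one-sided, one-sample test (last line above)
power_z <- function(n, mu_o, mu_a, s, alpha = 0.05) {
  pnorm(qnorm(1 - alpha) - (mu_a - mu_o) / (s / sqrt(n)), lower.tail = FALSE)
}
power_z(n = 20, mu_o = 25, mu_a = 28, s = 5)   # roughly 0.85 for the MAP example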


Here are two examples from a Coursera course:

  1. Researchers would like to conduct a study of \(100\) healthy adults to detect a four year mean brain volume loss of \(0.01 \,\text{mm}^3\). Assume that the standard deviation of four year volume loss in this population is \(0.04 \,\text{mm}^3\). What would be the power of the study for a 5% one sided test versus a null hypothesis of no volume loss?

The hypothesis is \(H_o: \mu_{\Delta}=0\) v. \(H_a: \mu_{\Delta}>0\) where \(\mu_{\Delta}\) is the volume loss. The test statistic is \(\large \frac{\bar X_{\Delta}}{sd/\sqrt{n}}=\frac{\bar X_{\Delta}}{0.04/10}=\frac{\bar X_{\Delta}}{0.004}\), and \(H_o\) is rejected if the statistic is larger than \(Z_{0.95}=1.645\).

We want to calculate:

\(\Pr\left(\frac{\bar X_\Delta}{\sigma_\Delta / 10} > 1.645 \mid \mu_\Delta = .01\right) \,\text{[eq.1 above]} = \Pr\left( \frac{\bar X_\Delta - .01}{.004} > 1.645 - \frac{.01}{.004} \mid \mu_\Delta = .01\right)\,\text{[eq.2 above]} = \Pr(Z > -.855) = .80\)



pnorm(-0.855, lower.tail = FALSE)  # Pr(Z > -0.855)
## [1] 0.8037244



Or note that \(\bar X_{\Delta}\) is \(N(0.01,\, 0.004)\) under the alternative (mean \(0.01\), standard error \(0.04/\sqrt{100}=0.004\)), and we want \(\Pr(\bar X_{\Delta} > 1.645\times 0.004)\) under \(H_a\). This can be seen by simplifying the equation: \(\Pr\left( \frac{\bar X_\Delta - .01}{.004} > 1.645 - \frac{.01}{.004} \mid \mu_\Delta = .01\right)\,=\Pr\left( \frac{\bar X_\Delta}{.004} > 1.645 \mid \mu_\Delta = .01\right)\).



pnorm(1.645 * 0.004, mean = 0.01, sd = 0.004, lower.tail = FALSE)
## [1] 0.8037244



  2. Researchers would like to conduct a study of \(n\) healthy adults to detect a four year mean brain volume loss of \(0.01\,\text{mm}^3\). Assume that the standard deviation of four year volume loss in this population is \(0.04\, \text{mm}^3\).

What would be the value of \(n\) needed for \(90\%\) power with a type I error rate of \(5\%\), for a one-sided test versus a null hypothesis of no volume loss?

The hypothesis is \(H_o: \mu_{\Delta}=0\) v. \(H_a: \mu_{\Delta}>0\) where \(\mu_{\Delta}\) is the volume loss. The test statistic is \(\frac{\bar X_{\Delta}}{0.04/\sqrt{n}}\), and \(H_o\) is rejected if the statistic is larger than \(Z_{0.95}=1.645\).

We want to calculate:

\(\Pr\left(\frac{\bar X_\Delta}{\sigma_\Delta / \sqrt{n}} > 1.645 \mid \mu_\Delta = .01\right) = \Pr\left( \frac{\bar X_\Delta - .01}{.04 / \sqrt{n}} > 1.645 - \frac{.01}{.04 / \sqrt{n}} \mid \mu_\Delta = .01\right) = \Pr(Z > 1.645 - \sqrt{n} / 4) = .90\)

So we need \(1.645 - \sqrt{n} / 4 = Z_{.10} = \text{qnorm}(.1) = -1.282\) and thus \(n = (4 \times (1.645 + 1.282))^2 = 137.0773\).

ceiling((4 * (qnorm(0.95) - qnorm(0.1)))^2) # The negative sign is because qnorm(0.1) is negative.
## [1] 138
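
As a cross-check, power.t.test can solve for \(n\) directly (leaving n unspecified). Because it uses the exact \(t\) distribution rather than the normal approximation, it should return a value of \(n\) slightly above the 137.08 computed by hand (output not shown):

power.t.test(delta = 0.01, sd = 0.04, sig.level = 0.05, power = 0.9,
             type = "one.sample", alternative = "one.sided")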

But we started off saying that we were comparing to a \(t\) quantile, so how do we calculate the power of a \(t\) test?

\[\begin{align} \Pr\,\left(\frac{\bar X - \mu_o}{s/\sqrt{n}}\,> t_{(1-\alpha, n-1)}\,\mid \mu=\mu_a\right)\\[2ex] =\Pr\,\left(\sqrt{n} (\bar X - \mu_o)\,> t_{(1-\alpha, n-1)}\, s\,\mid \mu=\mu_a\right)\\[2ex] =\Pr\,\left(\frac{\sqrt{n}\,(\bar X - \mu_o)}{\sigma}\,> t_{(1-\alpha, n-1)}\,\frac{s}{\sigma}\,\mid \mu=\mu_a\right)\\[2ex] =\Pr\,\left( \frac{\sqrt{n}\,(\bar X - \mu_a)}{\sigma} + \frac{\sqrt{n}\,(\mu_a - \mu_o)}{\sigma}\,> \frac{t_{(1-\alpha, n-1)}}{\sqrt{n-1}}\,\sqrt{\frac{(n-1)\,s^2}{\sigma^2}}\,\,\mid \mu=\mu_a\right)\\[2ex] =\Pr\,\left( \frac{\sqrt{n}\,(\bar X - \mu_a)}{\sigma} + \frac{\sqrt{n}\,(\mu_a - \mu_o)}{\sigma}\,> \frac{t_{(1-\alpha, n-1)}}{\sqrt{n-1}}\,\sqrt{\chi^2_{n-1}}\,\,\mid\mu=\mu_a\right)\\[2ex] =\Pr\,\left( Z + \frac{\sqrt{n}\,(\mu_a - \mu_o)}{\sigma}\,> \frac{t_{(1-\alpha, n-1)}}{\sqrt{n-1}}\,\sqrt{\chi^2_{n-1}}\,\,\mid\mu=\mu_a\right) \end{align}\]

Note that \(\frac{(\mu_a - \mu_o)}{\sigma}\) is the effect size. We don’t really have to know \(\mu_a\): the effect size (or standardized effect size) is all we need.

For the introduction of the \(\chi^2\) distribution, refer to:

Proof of the chi-square distribution of \((n-1)\,s^2/\sigma^2\). And remember:

\(s^2=\frac{\sum(X_i-\bar{X})^2}{n-1}\).
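
A quick simulation illustrates this fact (an ad-hoc check, not part of the original notes): for normal data, \((n-1)\,s^2/\sigma^2\) behaves like a \(\chi^2_{n-1}\) variable, so its mean should be close to \(n-1\) and its variance close to \(2(n-1)\).

set.seed(123)
n <- 20; sigma <- 5
s2 <- replicate(1e4, var(rnorm(n, mean = 28, sd = sigma)))  # 10,000 sample variances
mean((n - 1) * s2 / sigma^2)   # should be close to n - 1 = 19
var((n - 1) * s2 / sigma^2)    # should be close to 2 * (n - 1) = 38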


Since the last expression involves two random quantities (the standard normal \(Z\) and the \(\chi^2_{n-1}\)), the power calculation can be carried out via Monte Carlo simulation:

simuls <- 1e5         # Number of simulations
n <- 20               # Sample size
sigma <- 5            # Standard deviation
mu_0 <- 25            # The mean we assume under Ho
mu_a <- 28            # The mean under Ha
Z <- rnorm(simuls)    # Standard normal draws (the Z in the inequality above)
chisqr <- rchisq(simuls, df = n - 1)  # Chi-square draws with n - 1 df
t_crit <- qt(.95, n - 1)              # Critical t quantile for a one-sided 5% test
# The power is:
mean(Z + sqrt(n) * (mu_a - mu_0) / sigma > t_crit / sqrt(n - 1) * sqrt(chisqr))
## [1] 0.8278
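
Alternatively, the exact power can be obtained without simulation: under \(H_a\) the \(t\) statistic follows a noncentral \(t\) distribution with \(n-1\) degrees of freedom and noncentrality parameter \(\sqrt{n}\,(\mu_a-\mu_o)/\sigma\). Using the variables defined above, this matches the power.t.test value below:

# Exact power via the noncentral t distribution
ncp <- sqrt(n) * (mu_a - mu_0) / sigma
pt(qt(0.95, df = n - 1), df = n - 1, ncp = ncp, lower.tail = FALSE)
## [1] 0.8266395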

Compare to the built-in function (power.prop.test is the analogous routine for proportions):

power.t.test(n = 20, delta = mu_a - mu_0, sd = sigma, type = "one.sample", alt = "one.sided")
## 
##      One-sample t test power calculation 
## 
##               n = 20
##           delta = 3
##              sd = 5
##       sig.level = 0.05
##           power = 0.8266395
##     alternative = one.sided



RELATIONSHIPS:

  1. Power goes up as \(\alpha\) gets larger.

  2. The power of a one-sided test is greater than the power of the associated two-sided test at the same \(\alpha\): the two-sided test splits the rejection region into two tails of \(\alpha/2\) each, so its cutoff is more extreme.

  3. Power goes up the further \(\mu_a\) gets away from \(\mu_o\).

  4. Power goes up as \(n\) goes up. (A quick numerical check of all four relationships follows this list.)
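
These four relationships can be checked numerically with power.t.test, taking the MAP example as the baseline (power ≈ 0.83 at \(n=20\), \(\delta=3\), \(sd=5\), \(\alpha=0.05\), one-sided); running each line should move the power in the direction stated above:

# 1. Larger alpha -> larger power
power.t.test(n = 20, delta = 3, sd = 5, sig.level = 0.10,
             type = "one.sample", alternative = "one.sided")$power
# 2. A two-sided test at the same alpha -> smaller power than the one-sided baseline
power.t.test(n = 20, delta = 3, sd = 5,
             type = "one.sample", alternative = "two.sided")$power
# 3. mu_a further from mu_o (larger delta) -> larger power
power.t.test(n = 20, delta = 4, sd = 5,
             type = "one.sample", alternative = "one.sided")$power
# 4. Larger n -> larger power
power.t.test(n = 40, delta = 3, sd = 5,
             type = "one.sample", alternative = "one.sided")$power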


Home Page

NOTE: These are tentative notes on different topics for personal use - expect mistakes and misunderstandings.