Power Values For Phase I Power Analysis
(Columns, repeated in two panels: power, k, R², minimum n)
The results in this table demonstrate that a sample size of 50 is adequate to detect, with power equal to .80, an R² of .30 or larger for a model with anywhere from 8 to 10 independent variables. A sample size of 40 would be adequate to detect an R² of .40 or larger with power equal to .80 under the same conditions. If power equal to .90 is desired, a sample size of 50 is adequate to detect an R² of .40 or larger for a model with anywhere from 8 to 10 independent variables, whereas a sample size of 40 is adequate only for a model with either 8 or 9 independent variables.
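The text does not state which formula produced these table values. One standard approach to power for the overall test of R² uses the noncentral F distribution with noncentrality parameter λ = n·R²/(1 − R²). A minimal sketch under that assumption, using scipy (the function name `r2_power` is mine):

```python
from scipy.stats import f, ncf

def r2_power(n, k, r2, alpha=0.05):
    """Approximate power of the overall F-test of R2 = 0 in a
    regression with k predictors and sample size n."""
    dfn, dfd = k, n - k - 1
    f_crit = f.ppf(1 - alpha, dfn, dfd)   # critical F at level alpha
    lam = n * r2 / (1 - r2)               # noncentrality parameter
    return ncf.sf(f_crit, dfn, dfd, lam)  # P(noncentral F > f_crit)

# Conditions discussed in the text: n = 50, R2 = .30, k = 8 to 10
for k in (8, 9, 10):
    print(k, round(r2_power(50, k, 0.30), 3))
```

Under this formulation the computed power for n = 50, R² = .30, k = 8–10 exceeds .80, consistent with the table as summarized in the text.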
It should be noted that the test of R² was not the only test conducted during the regression analysis. The final model was selected based on several criteria, not just the value of R². Because these criteria are, ultimately, judgment calls based on comparing several models against several criteria, it was impossible to conduct an exact power analysis. The results presented in Table 3, however, provide a rough estimate of the adequacy of the sample size.
An approximation for the power of the one-sample t-test, presented by Lehman (1975), was used for the power analysis. This formula provided the sample size necessary to detect, with a specified level of power, a specified difference between the pre- and post-exposure y-velocity measures. In order to use this formula, several quantities had to be estimated and several assumptions made. Data from Kennedy and Stanney (1996) were used to estimate these quantities. It was assumed that power of at least .80 was desired.
First, an effect size of interest had to be specified. The effect in this case was expected to be the difference between the pre- and post-exposure y-velocity measures. Kennedy and Stanney obtained pre- and post-exposure mean measures of 12.5 and 18.4, respectively, with 10 U.S. Coast Guard pilots. Considering that pilots generally score well on tests of postural stability, their means likely represent conservative upper bounds for means of college students. Because the difference between these means was significant (p < .001), an interval around this difference was used to obtain effect sizes of interest in this research. Thus, effect sizes of 5, 7.5, and 10 were used for power computations.
Next, an estimate of the variance was needed. Note that this variance is the variance of the difference (σ²_diff). Kennedy and Stanney reported variances of 10.304 and 56.250 for the pre- and post-exposure y-velocity measures, respectively, of the pilots (σ²_pre and σ²_post). The variance of the difference is related to the individual variances through the following formula:

σ²_diff = σ²_pre + σ²_post − 2·cov

where cov = the covariance between the pre- and post-exposure measures.
The covariance, in turn, is related to the correlation (r) through the following formula:

r = cov / (σ_pre · σ_post)

so that

cov = r · σ_pre · σ_post.
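These two formulas can be checked directly with the variances reported by Kennedy and Stanney; the results match the values stated in the text up to rounding:

```python
from math import sqrt

# Pre- and post-exposure variances reported by Kennedy and Stanney (1996)
var_pre, var_post = 10.304, 56.250
r = 0.5  # conservative assumed correlation between pre and post measures

# cov = r * sd_pre * sd_post
cov = r * sqrt(var_pre) * sqrt(var_post)      # ~ 12.04

# var_diff = var_pre + var_post - 2 * cov
var_diff = var_pre + var_post - 2 * cov       # ~ 42.48
sd_diff = sqrt(var_diff)                      # ~ 6.52

print(round(cov, 3), round(var_diff, 3), round(sd_diff, 3))
```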
Assuming a conservative correlation of .5 between the pre- and post-exposure y-velocity measures, the covariance between the two measures was estimated to be 12.038. Thus, using the formula above, the variance of the difference between the pre- and post-exposure y-velocity measures was estimated to be 42.478, and the standard deviation of the difference was therefore estimated to be 6.518. For use in the power computations, standard deviation estimates of 4, 8, and 12 were employed. The results of the power analysis for the paired t-test appear in Table 4. With power at .903, it is apparent that a sample size of 50 is more than adequate to detect the minimal effect of a difference of 5 from pre- to post-exposure, even with a very conservative estimate of the standard deviation (σ_diff = 12). Under those conditions, a sample size of 40 would also suffice (power = .839).
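Lehman's (1975) approximation is not reproduced in the text, but the reported values of .903 and .839 can be recovered with a one-tailed normal approximation to the paired t-test's power at α = .05. A sketch under that assumption (the function name `paired_t_power` is mine):

```python
from math import erf, sqrt

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def paired_t_power(effect, sd_diff, n, z_alpha=1.645):
    """Normal approximation to the power of a one-tailed paired t-test
    (z_alpha = 1.645 corresponds to alpha = .05, one-tailed)."""
    ncp = (effect / sd_diff) * sqrt(n)  # approximate noncentrality
    return normal_cdf(ncp - z_alpha)

print(round(paired_t_power(5, 12, 50), 3))  # → 0.903
print(round(paired_t_power(5, 12, 40), 3))  # → 0.839
```

With effect = 5 and σ_diff = 12, this reproduces the powers of .903 (n = 50) and .839 (n = 40) cited above.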
Power Values For Phase III Power Analysis
(Columns, repeated in two panels: effect size, standard deviation estimate, n, power)