Power Analysis and Sample Size Planning in ANCOVA Designs

The analysis of covariance (ANCOVA) has notably proven to be an effective tool in a broad range of scientific applications. Despite the well-documented literature about its principal uses and statistical properties, the corresponding power analysis for the general linear hypothesis tests of treatment differences remains a less discussed issue. The frequently recommended procedure is a direct application of the ANOVA formula in combination with a reduced degrees of freedom and a correlation-adjusted variance. This article aims to explicate the conceptual problems and practical limitations of the common method. An exact approach is proposed for power and sample size calculations in ANCOVA with random assignment and multinormal covariates. Both theoretical examination and numerical simulation are presented to justify the advantages of the suggested technique over the current formula. The improved solution is illustrated with an example regarding the comparative effectiveness of interventions. In order to facilitate the application of the described power and sample size calculations, accompanying computer programs are also presented.


1 Introduction

The analysis of covariance (ANCOVA) was originally developed by Fisher (1932) to reduce error variance in experimental studies. Its essential nature and principal use were well explicated by Cochran (1957) and subsequent articles in the same issue of Biometrics. The value and use of ANCOVA have also received considerable attention in social science; for example, see Elashoff (1969), Keselman et al. (1998), and Porter and Raudenbush (1987). A comprehensive introduction and fundamental principles can be found in the excellent texts of Fleiss (2011), Huitema (2011), Keppel and Wickens (2004), Maxwell and Delaney (2004), and Rutherford (2011). It is essential to note that ANCOVA provides a useful approach for combining the advantages of two highly acclaimed procedures, analysis of variance (ANOVA) and multiple linear regression. The extensive literature shows that it is one of the major methods of statistical analysis in applied research across many scientific fields.

The importance and implications of statistical power analysis in scientific research are well demonstrated in Cohen (1988), Kraemer and Blasey (2015), Murphy et al. (2014), and Ryan (2013), among others. Accordingly, it is of great practical value to develop theoretically sound and numerically accurate power and sample size procedures for detecting treatment differences within the context of ANCOVA. There are numerous published sources that address statistical theory and applications of power analysis for ANOVA and multiple linear regression. Specifically, various algorithms and tables for power and sample size calculations in ANOVA have been presented in the classic sources of Bratcher et al. (1970), Pearson and Hartley (1951), Scheffe (1961), and Tiku (1967, 1972). The corresponding results for multiple regression and correlation, especially the distinct notion of fixed and random regression settings, were given in Gatsonis and Sampson (1989), Mendoza and Stafford (2001), Sampson (1974), and Shieh (2006, 2007). However, relatively little research has attempted to address the corresponding issues for ANCOVA.

This lack of further discussion can partly be attributed to the simple framework and conceptual modification of Cohen (1988) on the use of the ANOVA method for power evaluation in ANCOVA research. It is argued that the ANCOVA of original responses is essentially the ANOVA of the regression-adjusted or statistically controlled measurements obtained from the linear regression of unadjusted responses on the covariates that is common to all treatment groups. However, some modifications are required to account for the number of covariate variables and the strength of correlation between the response and covariate variables. Accordingly, both the error degrees of freedom and the variance component are reduced. Then, the power and sample size computations in ANCOVA proceed in exactly the same way as in analogous ANOVA designs. The methodology of Cohen (1988) has become common practice for power analysis in ANCOVA settings, as repeatedly demonstrated in Huitema (2011), Keppel and Wickens (2004), Levin (1997), Maxwell and Delaney (2004), and Yang et al. (1996).

It is well known that ANOVA adopts the fundamental assumptions of independence, normality, and constant variance. The corresponding hypothesis testing and theoretical considerations are valid only if these assumptions are satisfied. The consequences of violations of the independence assumption in ANOVA have been reported in Kenny and Judd (1986), Pavur and Nath (1984), and Scariano and Davenport (1987), among others. An essential assumption underlying ANCOVA is that the regression coefficients associating the response variable with the covariate variables are the same for each treatment group. Therefore, the regression adjustment in Cohen’s (1988, pp. 379–380) covariance framework includes the common regression coefficient estimates derived from the multiple regression between the response and covariate variables across all treatment groups. Unlike the original responses, the adjusted responses are generally correlated and thus violate the independence-of-observations assumption for ANOVA. Therefore, Cohen’s (1988) procedure is intrinsically inexact, even with the technical considerations of a deflated degrees of freedom and a correlation-adjusted variance. Consequently, this prevailing method only provides approximate power and sample size calculations in ANCOVA designs. It should be stressed that no research to date has acknowledged this crucial problem, and the result has most likely been interpreted as an exact solution.

Toward the goal of choosing the most appropriate methodology for ANCOVA studies, the present article focuses on the Wald tests for the general linear hypothesis of treatment effects. Under the two different assumptions of a priori specified covariate values and multinormally distributed covariate variables, the exact power functions of the Wald statistic are derived. The analytic derivations for a general linear hypothesis require involved operations of matrix algebra and sophisticated evaluations of matrix t variables that have not been reported elsewhere. Detailed numerical investigations were conducted to evaluate the existing formulas for power and sample size computations under a wide range of model settings, including non-normal covariate variables. According to the analytic justification and empirical assessment, the suggested approach has a decisive advantage over the conventional method. An applied example regarding the comparative effectiveness of interventions is presented to illustrate the distinct features and practical usefulness of the proposed techniques. Computer codes are also presented to implement the recommended power calculation and sample size determination in planning ANCOVA studies.

2 General Linear Hypothesis

A one-way fixed-effects ANCOVA model with multiple covariates can be expressed as

$$Y_{ij} = \mu_i + \sum_{k=1}^{P} \beta_k X_{kij} + \varepsilon_{ij},$$

where \(Y_{ij}\) is the score of the jth subject in the ith treatment group on the response variable, \(\mu_i\) is the ith group intercept, \(X_{kij}\) is the score of the jth subject in the ith treatment group on the kth covariate, \(\beta_k\) is the slope coefficient of the kth covariate, and \(\varepsilon_{ij}\) is the independent \(N(0, \sigma^2)\) error with \(i = 1,\ldots,G\ (\ge 2)\), \(j = 1,\ldots,N_i\), and \(k = 1,\ldots,P\ (\ge 1)\). The least-squares estimator for the ith intercept \(\mu_i\) is given by

$$\hat\mu_i = \bar Y_i - \sum_{k=1}^{P} \bar X_{ki}\,\hat\beta_k,$$

where \(\bar Y_i\) and \(\bar X_{ki}\) are the group means of the response and the kth covariate, and \(\hat\beta_k\) is the least-squares estimator of the common slope \(\beta_k\). Because the covariances between the regression-adjusted estimators \(\{\hat\mu_1, \ldots, \hat\mu_G\}\) are generally not zero for \(i \ne i'\), \(i\) and \(i' = 1,\ldots,G\), they should not be treated as independent variables. For notational simplicity, the prescribed properties are expressed in matrix form:

$$\hat{\boldsymbol\mu} \sim N_G(\boldsymbol\mu, \sigma^2\mathbf V),$$

where \(\hat{\boldsymbol\mu} = (\hat\mu_1,\ldots,\hat\mu_G)^{\mathrm T}\), \(\boldsymbol\mu = (\mu_1,\ldots,\mu_G)^{\mathrm T}\), and \(\mathbf V\) is the corresponding \(G \times G\) covariance-structure matrix.

The adjusted group means are the expected group responses evaluated at the grand covariate means:

$$\mu_i^* = \mu_i + \sum_{k=1}^{P} \bar X_k\,\beta_k \quad \hbox{for } i = 1,\ldots,G,$$

where \(\bar X_k = \sum_{i=1}^{G}\sum_{j=1}^{N_i} X_{kij}/N_T\), \(k = 1,\ldots,P\), and \(N_T = \sum_{i=1}^{G} N_i\). A natural and unbiased estimator of the adjusted group mean \(\mu_i^*\) is

$$\hat\mu_i^* = \hat\mu_i + \sum_{k=1}^{P} \bar X_k\,\hat\beta_k = \bar Y_i - \sum_{k=1}^{P} \hat\beta_k(\bar X_{ki} - \bar X_k).$$

Then, the least-squares estimators \(\hat\mu_i^*\) of the adjusted group means \(\mu_i^*\) are normally distributed with covariances

$$\mathrm{Cov}(\hat\mu_i^*, \hat\mu_{i'}^*) = \sigma^2(\bar{\mathbf X}_i - \bar{\mathbf X})^{\mathrm T}\,\mathbf S_{XX}^{-1}(\bar{\mathbf X}_{i'} - \bar{\mathbf X}),$$

where \(\bar{\mathbf X}_i = \sum_{j=1}^{N_i}\mathbf X_{ij}/N_i\), \(\bar{\mathbf X} = \sum_{i=1}^{G}\sum_{j=1}^{N_i}\mathbf X_{ij}/N_T = (\bar X_1, \ldots, \bar X_P)^{\mathrm T}\), and \(\mathbf S_{XX} = \sum_{i=1}^{G}\sum_{j=1}^{N_i}(\mathbf X_{ij} - \bar{\mathbf X}_i)(\mathbf X_{ij} - \bar{\mathbf X}_i)^{\mathrm T}\) for \(i \ne i'\), \(i\) and \(i' = 1,\ldots,G\). The vector of adjusted group mean estimators \(\hat{\boldsymbol\mu}^* = (\hat\mu_1^*, \ldots, \hat\mu_G^*)^{\mathrm T}\) has the distribution

$$\hat{\boldsymbol\mu}^* \sim N_G(\boldsymbol\mu^*, \sigma^2\mathbf V^*),$$

where \(\boldsymbol\mu^* = (\mu_1^*,\ldots,\mu_G^*)^{\mathrm T}\) and \(\mathbf V^*\) is the \(G \times G\) matrix with elements given above.

To test the general linear hypothesis about treatment effects or adjusted mean effects in terms of

$$\hbox{H}_0{:}\ \mathbf C\boldsymbol\mu^* = \mathbf 0_c \quad \hbox{versus} \quad \hbox{H}_1{:}\ \mathbf C\boldsymbol\mu^* \ne \mathbf 0_c,$$

where \(\mathbf C\) is a \(c \times G\) contrast matrix of full row rank and \(\mathbf 0_c\) is a \(c \times 1\) null column vector, the Wald test statistic is of the form

$$W^* = (\mathbf C\hat{\boldsymbol\mu}^*)^{\mathrm T}(\mathbf C\mathbf V^*\mathbf C^{\mathrm T})^{-1}(\mathbf C\hat{\boldsymbol\mu}^*)/\{c\,\hat\sigma^2\},$$

where \(\hat\sigma^2 = \mathrm{SSE}/\nu\), \(\mathrm{SSE} = \sum_{i=1}^{G}\sum_{j=1}^{N_i}(Y_{ij} - \bar Y_i)^2 - \mathbf S_{XY}^{\mathrm T}\mathbf S_{XX}^{-1}\mathbf S_{XY}\) with \(\mathbf S_{XY} = \sum_{i=1}^{G}\sum_{j=1}^{N_i}(\mathbf X_{ij} - \bar{\mathbf X}_i)(Y_{ij} - \bar Y_i)\), and \(\nu = N_T - G - P\). Note that the contrast matrix is confined to satisfy \(\mathbf C\mathbf 1_G = \mathbf 0_c\). Hence, the general linear hypothesis of \(\hbox{H}_0{:}\ \mathbf C\boldsymbol\mu^* = \mathbf 0_c\) versus \(\hbox{H}_1{:}\ \mathbf C\boldsymbol\mu^* \ne \mathbf 0_c\) is equivalent to

$$\hbox{H}_0{:}\ \mathbf C\boldsymbol\mu = \mathbf 0_c \quad \hbox{versus} \quad \hbox{H}_1{:}\ \mathbf C\boldsymbol\mu \ne \mathbf 0_c.$$

Also, the Wald test statistic can be rewritten as

$$W^* = (\mathbf C\hat{\boldsymbol\mu})^{\mathrm T}(\mathbf C\mathbf V\mathbf C^{\mathrm T})^{-1}(\mathbf C\hat{\boldsymbol\mu})/\{c\,\hat\sigma^2\}.$$

The Wald-type test has greater practical and pedagogical appeal than the test procedure under the full-reduced-model formulation. Because of its simplicity and generality, the associated properties are derived and presented in the subsequent illustration. Under the null hypothesis with \(\mathbf C\boldsymbol\mu = \mathbf 0_c\), the test statistic \(W^*\) has an F distribution

$$W^* \sim F(c, \nu),$$

where \(F(c, \nu)\) is an F distribution with \(c\) and \(\nu\) degrees of freedom, \(\nu = N_T - G - P\), and \(N_T = \sum_{i=1}^{G} N_i\). Hence, \(\hbox{H}_0\) is rejected at the significance level \(\alpha\) if \(W^* > F_{c,\nu,\alpha}\), where \(F_{c,\nu,\alpha}\) is the upper \((100\cdot\alpha)\)th percentile of the F distribution \(F(c, \nu)\). For fixed covariate values of \(\{\mathbf X_{ij}, j = 1,\ldots,N_i\) and \(i = 1,\ldots,G\}\), the test statistic \(W^*\) has the general distribution

$$W^* \sim F(c, \nu, \Lambda),$$

where \(F(c, \nu, \Lambda)\) is a non-central F distribution with \(c\) and \(\nu\) degrees of freedom and non-centrality parameter

$$\Lambda = (\mathbf C\boldsymbol\mu)^{\mathrm T}(\mathbf C\mathbf V\mathbf C^{\mathrm T})^{-1}(\mathbf C\boldsymbol\mu)/\sigma^2.$$

The associated power function of the general linear hypothesis is readily obtained as

$$\Psi(\Lambda) = P\{F(c, \nu, \Lambda) > F_{c,\nu,\alpha}\}.$$
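For readers who wish to verify these quantities numerically, the fixed-covariate power function above is a single non-central F tail probability. The following Python sketch (an illustrative helper built on SciPy, not part of the article's supplementary SAS/IML and R programs; the function name is hypothetical) evaluates it directly:

```python
from scipy import stats

def fixed_covariate_power(c, nu, lam, alpha=0.05):
    """Psi(Lambda) = P{ F(c, nu, Lambda) > F_{c,nu,alpha} } for fixed covariates."""
    # upper (100*alpha)th percentile of the central F(c, nu) distribution
    fcrit = stats.f.ppf(1 - alpha, c, nu)
    # tail probability of the non-central F distribution at the critical value
    return stats.ncf.sf(fcrit, c, nu, lam)
```

As expected, the power increases monotonically with the non-centrality parameter \(\Lambda\) for fixed degrees of freedom.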

3 Random Covariate Models

The prescribed statistical inferences about the general linear hypothesis are based on the conditional distribution of the covariate outcomes. As noted in Gatsonis and Sampson (1989), Mendoza and Stafford (2001), and Sampson (1974), the actual values of the covariates cannot be known in advance any more than the primary responses can. It is vital to treat the covariates as random variables and to derive the distribution of the test statistic over possible values of the covariate variables. Moreover, Elashoff (1969) and Harwell (2003) emphasized that the statistical assumptions underlying ANCOVA include the random assignment of subjects to treatments and the independence of the covariate variables from the treatment effects. In addition, the normal covariate setting is commonly employed to provide a fundamental framework for analytical derivation and theoretical discussion in ANCOVA studies, as in Elashoff (1969) and Harwell (2003). Thus, it is constructive to assume the covariates have independent and identical normal distributions

$$\mathbf X_{ij} \sim N_P(\boldsymbol\theta, \boldsymbol\Sigma),$$

where \(\boldsymbol\theta\) is a \(P \times 1\) mean vector and \(\boldsymbol\Sigma\) is a \(P \times P\) positive-definite variance–covariance matrix for \(i = 1,\ldots,G\) and \(j = 1,\ldots,N_i\).

Under the multinormal distribution of \(\{\mathbf X_{ij} \sim N_P(\boldsymbol\theta, \boldsymbol\Sigma), j = 1,\ldots,N_i\) and \(i = 1,\ldots,G\}\), it is straightforward to show (Gupta and Nagar 1999, Theorems 2.3.10 and 3.3.6) that the contrast-transformed covariate group means have a matrix normal distribution and \(\mathbf S_{XX}\) has a Wishart distribution. Accordingly, the associated quantity \(A^*\) constructed from these components has an inverted matrix variate t-distribution (Gupta and Nagar 1999, Section 4.4), which in the present setting reduces to a beta distribution:

$$A^* \sim \hbox{Beta}\{P/2, (\nu + 1)/2\}.$$

Following these results, standard matrix algebra shows that the non-centrality parameter \(\Lambda\) defined above has the alternative form

$$\Lambda = \Gamma B^*,$$

where \(B^* = (1 - A^*) \sim \hbox{Beta}\{(\nu + 1)/2, P/2\}\). In connection with the effect size measures in ANOVA, the first component \(\Gamma\) in \(\Lambda\) is rewritten as

$$\Gamma = N_T\,\gamma^2,$$

where \(\gamma^2 = \sigma_\gamma^2/\sigma^2\), \(\sigma_\gamma^2 = (\mathbf C\boldsymbol\mu)^{\mathrm T}(\mathbf C\mathbf Q\mathbf C^{\mathrm T})^{-1}(\mathbf C\boldsymbol\mu)\), \(\mathbf Q = \hbox{diag}(1/q_1, \ldots, 1/q_G)\), and \(q_i = N_i/N_T\) for \(i = 1,\ldots,G\). Consequently, the non-centrality term \(\Lambda\) has a useful formulation

$$\Lambda = N_T\,\gamma^2 B^*.$$

It should be pointed out that Gupta and Nagar (1999) only provide the generic definition and analytic properties of an inverted matrix variate t-distribution. Their results are applied and extended here to the context of ANCOVA. Accordingly, under the random covariate modeling framework, the \(W^*\) statistic has the two-stage distribution

$$W^* \mid B^* \sim F(c, \nu, \Lambda) \quad \hbox{and} \quad B^* \sim \hbox{Beta}\{(\nu + 1)/2, P/2\}.$$

The exact power function can be formulated as

$$\Psi_W(\Lambda) = E_{B^*}[P\{F(c, \nu, \Lambda) > F_{c,\nu,\alpha}\}],$$

where the expectation \(E_{B^*}\) is taken with respect to the distribution of \(B^*\).
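The expectation over \(B^*\) can be evaluated by one-dimensional numerical integration of the conditional non-central F power against the Beta\(\{(\nu+1)/2, P/2\}\) density. A minimal Python sketch, assuming SciPy and an illustrative function name (this is not the authors' supplementary code), makes the two-stage structure explicit:

```python
from scipy import stats
from scipy.integrate import quad

def exact_ancova_power(NT, G, P, gamma2, c, alpha=0.05):
    """Exact power: E_{B*}[ P{ F(c, nu, NT*gamma2*B*) > F_{c,nu,alpha} } ]."""
    nu = NT - G - P                          # error degrees of freedom
    fcrit = stats.f.ppf(1 - alpha, c, nu)    # central F critical value
    a, b = (nu + 1) / 2.0, P / 2.0           # B* ~ Beta{(nu+1)/2, P/2}
    def integrand(bstar):
        lam = NT * gamma2 * bstar            # non-centrality Lambda = N_T * gamma^2 * B*
        return stats.ncf.sf(fcrit, c, nu, lam) * stats.beta.pdf(bstar, a, b)
    power, _ = quad(integrand, 0.0, 1.0)
    return power
```

Because \(B^* \le 1\), this exact power can never exceed the one-stage value obtained by fixing \(B^* = 1\), which anticipates the overestimation of the approximate method discussed later.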

Notably, the omnibus test of the equality of treatment effects is a special case of the general linear hypothesis, obtained by specifying the contrast matrix \(\mathbf C_0\) so that the null hypothesis becomes \(\mathbf C_0\boldsymbol\mu = \mathbf 0_{(G-1)}\), where

$$\mathbf C_0 = (\mathbf 1_{(G-1)}, -\mathbf I_{(G-1)})$$

is a \((G - 1) \times G\) contrast matrix of full row rank. The component \(\gamma^2\) in the non-centrality term \(\Lambda\) then simplifies to \(\delta^2\), defined as

$$\delta^2 = \sigma_\delta^2/\sigma^2,$$

where \(\sigma_\delta^2 = \sum_{i=1}^{G} q_i(\mu_i - \tilde\mu)^2\) and \(\tilde\mu = \sum_{i=1}^{G} q_i\mu_i\). The corresponding non-centrality component \(\Lambda\) is expressed as

$$\Lambda_B = N_T\,\delta^2 B^*.$$

The power function of the omnibus F test of treatment differences is simplified as

$$\Psi_B(\Lambda_B) = E_{B^*}[P\{F(G - 1, \nu, \Lambda_B) > F_{(G-1),\nu,\alpha}\}].$$

Note that \(\sigma_\delta^2\) reduces to the form \(\sigma_\delta^2 = \sum_{i=1}^{G}(\mu_i - \bar\mu)^2/G\) with \(\bar\mu = \sum_{i=1}^{G}\mu_i/G\) when \(q_i = 1/G\) for all \(i = 1,\ldots,G\). Hence, \(\delta^2\) has the same form as the signal-to-noise ratio \(f^2\) in ANOVA (Fleishman 1980) for balanced designs. Although the prescribed application of the general linear hypothesis is discussed only from the perspective of a one-way ANCOVA design, the number of groups G may also represent the total number of combined factor levels of a multi-factor ANCOVA design. Hence, using a contrast matrix associated with a specific designated hypothesis, the same concept and process of assessing treatment effects can be readily extended to two-way and higher-order ANCOVA designs.

4 Sample Size Determination

It is essential to note that the power function \(\Psi_W\) depends on the group intercepts \(\{\mu_1,\ldots,\mu_G\}\) and variance component \(\sigma^2\) through the non-centrality \(\Lambda\) or the effect size \(\gamma^2\), but not on the covariate coefficients \(\{\beta_1,\ldots,\beta_P\}\). Also, under the prescribed stochastic assumptions for the covariate variables, the multivariate normal distribution leads to the unique conditional property on a beta distribution in the general distribution of the test statistic \(W^*\). Due to the fundamental property of the contrast matrix, the resulting distribution and power function do not depend on the mean vector \(\boldsymbol\theta\) and variance–covariance matrix \(\boldsymbol\Sigma\) of the multinormal covariate distribution. To determine sample sizes in planning research designs, the power function \(\Psi_W\) can be applied to calculate the sample sizes \(\{N_1,\ldots,N_G\}\) needed to attain the specified power \(1 - \beta\) for the chosen significance level \(\alpha\), contrast matrix \(\mathbf C\), intercept parameters \(\{\mu_1,\ldots,\mu_G\}\), variance component \(\sigma^2\), and the number of covariates P.

For an ANCOVA design with a priori designated sample size ratios \(\{r_1,\ldots,r_G\}\), where \(r_i = N_i/N_1\) for \(i = 1,\ldots,G\), the required computation is simplified to deciding the minimum sample size \(N_1\) (with \(N_i = N_1\cdot r_i\), \(i = 2,\ldots,G\)) required to achieve the selected power level with the power function \(\Psi_W\). Using the embedded functions in popular software systems, optimal sample sizes can be readily computed through an iterative process. The SAS/IML (SAS Institute 2017) and R (R Development Core Team 2017) programs employed to perform the suggested power and sample size calculations are available as supplementary material. The proposed power and sample size procedures for the general linear hypothesis tests of ANCOVA subsume the results in Shieh (2017) for a single contrast test as a special case. Notably, the derivations and manipulations of an inverted matrix variate t-distribution are more involved than those of a Hotelling's \(T^2\) distribution as demonstrated in Shieh (2017).
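The iterative search described above can be sketched in Python (the supplementary programs are in SAS/IML and R; this SciPy-based version with hypothetical function names is only a minimal illustration for a balanced design):

```python
from scipy import stats
from scipy.integrate import quad

def exact_power(NT, G, P, gamma2, c, alpha=0.05):
    # two-stage power: non-central F tail averaged over B* ~ Beta{(nu+1)/2, P/2}
    nu = NT - G - P
    fcrit = stats.f.ppf(1 - alpha, c, nu)
    val, _ = quad(lambda b: stats.ncf.sf(fcrit, c, nu, NT * gamma2 * b)
                            * stats.beta.pdf(b, (nu + 1) / 2.0, P / 2.0), 0.0, 1.0)
    return val

def balanced_group_size(G, P, gamma2, c, target=0.80, alpha=0.05):
    """Smallest per-group size whose exact power attains `target` (balanced design)."""
    n = 2
    while True:
        NT = n * G
        if NT - G - P > 0 and exact_power(NT, G, P, gamma2, c, alpha) >= target:
            return n
        n += 1
```

The same incremental search applies to unbalanced designs by stepping \(N_1\) and setting \(N_i = N_1\cdot r_i\).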

Alternatively, a simple procedure for the comparison of treatment effects has been described in Cohen (1988, pp. 379–380). Unlike the proposed two-stage distribution, it is suggested that \(W^*\) has the simplified one-stage F distribution

$$W^* \sim F(G - 1, \nu, \Lambda_C), \qquad \Psi_C(\Lambda_C) = P\{F(G - 1, \nu, \Lambda_C) > F_{(G-1),\nu,\alpha}\},$$

where \(\Lambda_C = N_T\,\delta^2\).

It is easily seen from the model assumption given above that \(\sigma_Y^2 = \hbox{Var}(Y_{ij}) = \boldsymbol\beta^{\mathrm T}\boldsymbol\Sigma\boldsymbol\beta + \sigma^2\) and \(\rho = \hbox{Corr}(Y_{ij}, \sum_{k=1}^{P}X_{kij}\beta_k) = \boldsymbol\beta^{\mathrm T}\boldsymbol\Sigma\boldsymbol\beta/\{\sigma_Y^2\cdot\boldsymbol\beta^{\mathrm T}\boldsymbol\Sigma\boldsymbol\beta\}^{1/2}\), where \(\boldsymbol\beta = (\beta_1,\ldots,\beta_P)^{\mathrm T}\). Hence, the advantage of ANCOVA over ANOVA lies in the reduction of the error variance from \(\sigma_Y^2\) to \(\sigma^2 = (1 - \rho^2)\sigma_Y^2\), that is, by a factor of \((1 - \rho^2)\). For ease of illustration, the power function of the omnibus F test of treatment differences in ANOVA is also presented here:

$$\Psi_A(\Lambda_A) = P\{F(G - 1, N_T - G, \Lambda_A) > F_{(G-1),(N_T-G),\alpha}\},$$

where \(\Lambda_A = N_T\,\delta_A^2\) and \(\delta_A^2 = \sigma_\delta^2/\sigma_Y^2\).

5 Numerical Assessments

To further demonstrate the contrasting features and practical consequences of the proposed approach and existing methods, detailed empirical appraisals are conducted to examine their performance in power and sample size calculations. For ease of comparison, the numerical illustration considered in Maxwell and Delaney (2004, pp. 441-443) for sample size planning and power analysis is utilized as the fundamental framework.

In particular, Maxwell and Delaney (2004) described an ANOVA design with \(G = 3\), group intercepts \(\{\mu_1, \mu_2, \mu_3\} = \{400, 450, 500\}\), and error variance \(\sigma_Y^2 = 10{,}000\). Then, an ANCOVA model is introduced with the inclusion of an influential covariate variable X with \(\rho = \hbox{Corr}(X, Y) = 0.5\) to partially account for the variance in the response variable Y. The corresponding unexplained error variance \(\sigma^2\) in ANCOVA is reduced to \(\sigma^2 = (1 - \rho^2)\sigma_Y^2 = 7{,}500\). To detect the treatment differences, they showed that the total sample sizes required to attain a nominal power of 0.80 are 63 and 48 for the balanced ANOVA and ANCOVA designs, respectively. Thus, the ANCOVA design has the potential to attain the same power with nearly 25% fewer subjects than an ANOVA. It should be noted that the approximate power formulas \(\Psi_C\) and \(\Psi_A\) given above were applied for the sample size calculations in Maxwell and Delaney (2004). To show a profound implication of the sample size procedures, an extensive simulation study was performed under a wide range of model configurations.
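The two totals of 63 and 48 quoted above can be reproduced with the one-stage approximate power formulas; the following Python sketch uses SciPy, with illustrative variable names (this is not the supplementary program):

```python
from scipy import stats

G, P, alpha = 3, 1, 0.05
mu = [400.0, 450.0, 500.0]
sigma2_Y = 10000.0                        # ANOVA error variance
sigma2 = (1 - 0.5 ** 2) * sigma2_Y        # ANCOVA error variance: 7,500
mbar = sum(mu) / G
sigma2_delta = sum((m - mbar) ** 2 for m in mu) / G   # balanced-design effect variance

def one_stage_power(NT, df_err, lam):
    # P{ F(G-1, df_err, lam) > F_{(G-1), df_err, alpha} }
    fcrit = stats.f.ppf(1 - alpha, G - 1, df_err)
    return stats.ncf.sf(fcrit, G - 1, df_err, lam)

def min_total_size(sigma2_err, p_covariates):
    NT = 2 * G
    while True:
        df_err = NT - G - p_covariates
        if df_err > 0 and one_stage_power(NT, df_err, NT * sigma2_delta / sigma2_err) >= 0.80:
            return NT
        NT += G                           # keep the design balanced

print(min_total_size(sigma2_Y, 0))        # ANOVA: 63
print(min_total_size(sigma2, P))          # approximate ANCOVA: 48
```

Note that these are the one-stage approximations; the exact two-stage power function of Section 3 would demand slightly larger sizes because of the averaging over \(B^*\).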


According to the power comparisons, the ANOVA method generally does not provide accurate sample size calculations for an ANCOVA design. Unsurprisingly, the only exceptions occurred when the number of covariates is small and the correlation between the covariates and the response variable is close to zero, as in Tables 1 and 4. The approximate ANCOVA method consistently gives larger power estimates than the simulated power for all cases considered here. The discrepancy noticeably increases with the number of covariates and the magnitude of effect size. The resulting errors can be as large as 0.0845, 0.1105, and 0.3049 for the scenarios of \(P = 10\) in Tables 1, 2 and 3, respectively. For the relatively smaller effect sizes in Tables 4, 5 and 6, the performance of the approximate ANCOVA formula improves, with errors of 0.0506, 0.0628, and 0.2710 for the cases of \(P = 10\). Consequently, the overestimation problem of the power function \(\Psi_C\) suggests that the computed sample sizes are generally inadequate to achieve the designated power level.

Regarding the accuracy of the proposed exact ANCOVA approach, the corresponding results in Tables 1, 2, 3, 4, 5 and 6 show that the differences between the estimated and simulated powers are fairly small. The largest absolute error is 0.0106 for the two cases of \(P = 8\) and 5 in Tables 2 and 5, respectively. All the other 58 cases in Tables 1, 2, 3, 4, 5 and 6 have an absolute error less than 0.01. These numerical results imply that the proposed exact approach outperforms the ANOVA method and the approximate ANCOVA procedure for all design configurations considered here. Therefore, the suggested power and sample size calculations can be recommended for general use.

6 An Example

A documented example of Maxwell and Delaney (2004) is presented and extended next to demonstrate the usefulness of the suggested power and sample size procedures and accompanying software programs for the omnibus test of treatment effects in ANCOVA designs.

Specifically, Maxwell and Delaney (2004, Table 9.7, p. 429) provided the data for assessing the effectiveness of different interventions for depression. There are 10 participants with random assignment in each of the three intervention groups of (1) selective serotonin reuptake inhibitor (SSRI) antidepressant medication, (2) placebo, or (3) wait list control. The measurements are the pretest and posttest Beck Depression Inventory (BDI) scores of depressive individuals. The primary interest of the ANCOVA study is in the group differences of posttest BDI measurements using the pretest BDI scores as covariates. The results show that the estimates of the adjusted group means and error variance are \(\{\hat\mu_1^*, \hat\mu_2^*, \hat\mu_3^*\} = \{7.5366, 11.9849, 13.9785\}\) and \(\hat\sigma^2 = 29.0898\), respectively. The omnibus F test statistic of treatment differences is \(W^* = 3.73\), which yields a p-value of 0.0376. Therefore, the test result suggests that the intervention effects are significantly different at \(\alpha = 0.05\). Although this is not the focus in the illustration of Maxwell and Delaney (2004), it can be computed from an ANOVA of the posttest scores that the variance estimate is \(\hat\sigma_Y^2 = 39.6185\). Hence, the sample squared correlation between the posttest and pretest BDI scores is \(\hat\rho^2 = 1 - \hat\sigma^2/\hat\sigma_Y^2 = 1 - 29.0898/39.6185 = 0.2658\). The observed value of the ANOVA F test of group differences is \(F^* = 3.03\) with a p-value of 0.0647. At the significance level 0.05, the omnibus hypothesis of no intervention group difference on the posttest BDI scores cannot be rejected. Although null hypothesis significance testing is useful in various applications, it is important to consult the recent articles of Wasserstein and Lazar (2016) and Wasserstein et al. (2019) for the recommended principles underlying the proper use and interpretation of statistical significance and p-values.

In view of the prospective nature of advance research planning, general guidelines suggest that published findings or expert opinions can offer reliable information for the vital characteristics of a future study. Accordingly, it is prudent to adopt a minimal meaningful effect size in order to enhance the generalizability of the result and the accumulation of scientific knowledge. For illustration, the prescribed summary statistics of the three-group depression intervention study are employed as the population adjusted mean effects and variance component. The suggested power procedure shows that the resulting power for the omnibus test of group differences is \(\Psi_B = 0.6145\) when the significance level \(\alpha\) equals 0.05. Because the computed power is substantially smaller than the common levels of 0.80 or 0.90, the group sample size \(N = 10\) does not provide a decent chance of detecting the potential differences between treatment groups. To determine the proper sample size, the proposed sample size computations show that balanced group sample sizes of 15 and 19 are required to attain the nominal powers of 0.80 and 0.90, respectively. The total sample sizes \(N_T = 45\) and 57 are substantially larger than the 30 of the exemplifying design. Essentially, it requires 50% and 90% increases in sample size to meet the common power levels of 0.80 and 0.90, respectively. These design configurations appear in the user specifications of the SAS/IML and R programs in the supplementary material. Researchers can easily identify these statements and then modify the input values in the computer code to incorporate their own model characteristics.
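As an arithmetic check on the reported power of 0.6145, the exact two-stage power for this example can be recomputed from the summary statistics above (a Python sketch assuming SciPy; it mirrors the power function of Section 3 and is not the article's supplementary program):

```python
from scipy import stats
from scipy.integrate import quad

G, P, alpha = 3, 1, 0.05
mu_star = [7.5366, 11.9849, 13.9785]      # adjusted group means used as population values
sigma2 = 29.0898                          # error variance estimate used as population value
NT = 30                                   # 10 participants per intervention group

mbar = sum(mu_star) / G
delta2 = sum((m - mbar) ** 2 for m in mu_star) / G / sigma2   # effect size delta^2

nu = NT - G - P
fcrit = stats.f.ppf(1 - alpha, G - 1, nu)
# average the conditional non-central F power over B* ~ Beta{(nu+1)/2, P/2}
power, _ = quad(lambda b: stats.ncf.sf(fcrit, G - 1, nu, NT * delta2 * b)
                          * stats.beta.pdf(b, (nu + 1) / 2.0, P / 2.0), 0.0, 1.0)
print(round(power, 4))                    # close to the reported value of 0.6145
```

Raising the per-group size in this computation from 10 to 15 or 19 recovers the nominal 0.80 and 0.90 power levels quoted above.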

7 Conclusions

ANCOVA provides a useful approach for combining the advantages of two widely established procedures, ANOVA and multiple linear regression. Despite the close resemblance among the three types of statistical analyses, their power computation and sample size determination are still theoretically distinct when the stochastic properties of the continuous covariates or predictors are taken into account. It is generally recognized that the use of ANCOVA may considerably reduce the number of subjects required, relative to an ANOVA design, to attain the required precision and power. For planning and evaluating randomized ANCOVA designs, an ANOVA-based sample size formula has been proposed in Cohen (1988) to accommodate the reduced error variance and degrees of freedom resulting from the use of effective and influential covariates. The procedure is very appealing from a computational standpoint and has been implemented in some statistical packages. However, no further analytical discussion and numerical evaluation are available to validate the appropriateness and implications of Cohen's (1988) method in the literature.

This article aims to address the potential limitation and approximate nature of the prevailing method and to describe an alternative and exact approach for power and sample size calculations in ANCOVA designs. It is demonstrated both theoretically and empirically that the seemingly exact technique of Cohen (1988) does not involve all of the covariate properties in ANCOVA. Exact power and sample size procedures are described for the general linear hypothesis tests of treatment effects under the assumption that the covariate variables have a joint multinormal distribution. The simulation results reveal that the proposed technique is superior to the current method under a wide range of ANCOVA designs. More importantly, additional numerical assessments show that the suggested power function and sample size procedure preserve reasonably good performance under various non-normal situations, such as exponential, gamma, Laplace, log-normal, uniform, and discrete uniform distributions. Hence, the proposed two-stage distribution and power function of the Wald statistic for the general linear hypothesis tests possess desirable robustness properties and are also applicable to other continuous covariate distributions in various ANCOVA designs. Consequently, the presented methodology expands the power assessment and sample size determination of Shieh (2017) for contrast analysis in ANCOVA. To enhance the practical value, computer algorithms are also provided to facilitate the recommended power calculations and sample size determinations. With respect to the importance and implementation of random sampling, the fundamental and standard sampling designs and estimation methods can be found in Thompson (2012). Heterogeneity of variance is one of the unique and problematic factors known to be detrimental to the statistical inferences in ANCOVA (Harwell 2003; Rheinheimer and Penfield 2011).
A potential topic for future study is to develop proper power and sample size procedures within the variance heterogeneity framework.


References

  • Bratcher, T. L., Moran, M. A., & Zimmer, W. J. (1970). Tables of sample sizes in the analysis of variance. Journal of Quality Technology, 2, 156–164. https://doi.org/10.1080/00224065.1970.11980429
  • Cochran, W. G. (1957). Analysis of covariance: Its nature and uses. Biometrics, 13, 261–281. https://doi.org/10.2307/2527916
  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
  • Elashoff, J. D. (1969). Analysis of covariance: A delicate instrument. American Educational Research Journal, 6, 381–401.
  • Fisher, R. A. (1932). Statistical methods for research workers (4th ed.). Edinburgh: Oliver and Boyd.
  • Fleishman, A. I. (1980). Confidence intervals for correlation ratios. Educational and Psychological Measurement, 40, 659–670. https://doi.org/10.1177/001316448004000309
  • Fleiss, J. L. (2011). Design and analysis of clinical experiments. New York, NY: Wiley.
  • Gatsonis, C., & Sampson, A. R. (1989). Multiple correlation: Exact power and sample size calculations. Psychological Bulletin, 106, 516–524. https://doi.org/10.1037/0033-2909.106.3.516
  • Gupta, A. K., & Nagar, D. K. (1999). Matrix variate distributions. Boca Raton, FL: CRC.
  • Harwell, M. (2003). Summarizing Monte Carlo results in methodological research: The single-factor, fixed-effects ANCOVA case. Journal of Educational and Behavioral Statistics, 28, 45–70. https://doi.org/10.3102/10769986028001045
  • Huitema, B. (2011). The analysis of covariance and alternatives: Statistical methods for experiments, quasi-experiments, and single-case studies (Vol. 608). New York, NY: Wiley.
  • Kenny, D. A., & Judd, C. M. (1986). Consequences of violating the independence assumption in analysis of variance. Psychological Bulletin, 99, 422–431. https://doi.org/10.1037/0033-2909.99.3.422
  • Keppel, G., & Wickens, T. D. (2004). Design and analysis: A researcher’s handbook (4th ed.). Upper Saddle River, NJ: Pearson.
  • Keselman, H. J., Huberty, C. J., Lix, L. M., Olejnik, S., Cribbie, R. A., Donahue, B., et al. (1998). Statistical practices of educational researchers: An analysis of their ANOVA, MANOVA, and ANCOVA analyses. Review of Educational Research, 68, 350–386. https://doi.org/10.3102/00346543068003350
  • Kraemer, H. C., & Blasey, C. (2015). How many subjects? Statistical power analysis in research (2nd ed.). Los Angeles, CA: Sage.
  • Levin, J. R. (1997). Overcoming feelings of powerlessness in “aging” researchers: A primer on statistical power in analysis of variance designs. Psychology and Aging, 12, 84–106. https://doi.org/10.1037//0882-7974.12.1.84
  • Maxwell, S. E., & Delaney, H. D. (2004). Designing experiments and analyzing data: A model comparison perspective (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
  • Mendoza, J. L., & Stafford, K. L. (2001). Confidence interval, power calculation, and sample size estimation for the squared multiple correlation coefficient under the fixed and random regression models: A computer program and useful standard tables. Educational and Psychological Measurement, 61, 650–667. https://doi.org/10.1177/00131640121971419
  • Murphy, K. R., Myors, B., & Wolach, A. (2014). Statistical power analysis: A simple and general model for traditional and modern hypothesis tests (4th ed.). New York, NY: Routledge.
  • Pavur, R., & Nath, R. (1984). Exact \(F\) tests in an ANOVA procedure for dependent observations. Multivariate Behavioral Research, 19, 408–420. https://doi.org/10.1207/s15327906mbr1904_3
  • Pearson, E. S., & Hartley, H. O. (1951). Charts of the power function for analysis of variance tests, derived from the non-central \(F\)-distribution. Biometrika, 38, 112–130. https://doi.org/10.2307/2332321
  • Porter, A. C., & Raudenbush, S. W. (1987). Analysis of covariance: Its model and use in psychological research. Journal of Counseling Psychology, 34, 383–392. https://doi.org/10.1037//0022-0167.34.4.383
  • R Development Core Team. (2017). R: A language and environment for statistical computing [Computer software and manual]. Retrieved from http://www.r-project.org
  • Rheinheimer, D. C., & Penfield, D. A. (2011). The effects of Type I error rate and power of the ANCOVA \(F\) test and selected alternatives under nonnormality and variance heterogeneity. Journal of Experimental Education, 69, 373–391. https://doi.org/10.1080/00220970109599493
  • Rutherford, A. (2011). ANOVA and ANCOVA: A GLM approach. Hoboken, NJ: Wiley.
  • Ryan, T. P. (2013). Sample size determination and power. Hoboken, NJ: Wiley.
  • Sampson, A. R. (1974). A tale of two regressions. Journal of the American Statistical Association, 69, 682–689. https://doi.org/10.2307/2286002
  • SAS Institute. (2017). SAS/IML user’s guide 14.3. Cary, NC: SAS Institute Inc.
  • Scariano, S. M., & Davenport, J. M. (1987). The effects of violations of independence assumptions in the one-way ANOVA. The American Statistician, 41, 123–129. https://doi.org/10.2307/2684223
  • Scheffé, H. (1961). The analysis of variance. New York, NY: Wiley.
  • Shieh, G. (2006). Exact interval estimation, power calculation and sample size determination in normal correlation analysis. Psychometrika, 71, 529–540. https://doi.org/10.1007/s11336-04-1221-6
  • Shieh, G. (2007). A unified approach to power calculation and sample size determination for random regression models. Psychometrika, 72, 347–360. https://doi.org/10.1007/s11336-007-9012-5
  • Shieh, G. (2017). Power and sample size calculations for contrast analysis in ANCOVA. Multivariate Behavioral Research, 52, 1–11. https://doi.org/10.1080/00273171.2016.1219841
  • Thompson, S. K. (2012). Sampling. Hoboken, NJ: Wiley. https://doi.org/10.1002/9781118162934
  • Tiku, M. L. (1967). Tables of the power of the \(F\)-test. Journal of the American Statistical Association, 62, 525–539. https://doi.org/10.2307/2283980
  • Tiku, M. L. (1972). More tables of the power of the \(F\)-test. Journal of the American Statistical Association, 67, 709–710. https://doi.org/10.2307/2284473
  • Wasserstein, R. L., & Lazar, N. A. (2016). The ASA’s statement on \(p\)-values: Context, process, and purpose. The American Statistician, 70, 129–133. https://doi.org/10.1080/00031305.2016.1154108
  • Wasserstein, R. L., Schirm, A. L., & Lazar, N. A. (2019). Moving to a world beyond “\(p < 0.05\)”. The American Statistician, 73, 1–19. https://doi.org/10.1080/00031305.2019.1583913
  • Yang, H., Sackett, P. R., & Arvey, R. D. (1996). Statistical power and cost in training evaluation: Some new considerations. Personnel Psychology, 49, 651–668. https://doi.org/10.1111/j.1744-6570.1996.tb01588.x

Acknowledgements

The author is grateful to the Associate Editor and referees for their valuable comments and suggestions which greatly improved the presentation and content of the article.

Author information

Authors and Affiliations

  1. Gwowen Shieh, Department of Management Science, National Chiao Tung University, Hsinchu, 30010, Taiwan, ROC