In a hierarchical or fixed-order regression analysis, the independent variables are entered into the regression equation in a prespecified order. Such an analysis is often performed when the main focus of interest is the extra amount of variance in a dependent variable accounted for by a specific independent variable (e.g., Cohen & Cohen, 1983). For example, in the area of reading achievement, there is a general interest in the specific abilities that predict reading development. Because these specific abilities are often correlated with more general abilities, such as verbal intelligence, the latter abilities are controlled for first (e.g., Wagner, Torgesen, & Rashotte, 1994). An additional reason for performing a hierarchical regression analysis is that, in these research applications, as well as in many others, the independent variables are often highly correlated. When correlated independent variables are entered into the regression model simultaneously, multicollinearity arises (Cohen & Cohen, 1983). Although regularly used with observed variables, hierarchical regression analysis has not been performed with latent variables. In most applications of structural equation modeling (SEM), the latent predictors have been entered into the regression model simultaneously, although in several cases a hierarchical regression analysis would have been the more appropriate approach (e.g., Guthrie et al., 1998; Normandeau & Guay, 1998; Wagner et al., 1994; Wagner et al., 1997). In this article we describe how a hierarchical regression analysis may be conducted in SEM. The main procedure proposed is to perform a Cholesky or triangular decomposition of the intercorrelations among the latent predictors (Harman, 1976; Loehlin, 1996). First the procedure is described, and then an example of a hierarchical regression analysis with latent variables is given.

Copyright © 1999, Lawrence Erlbaum Associates, Inc.
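As a rough illustration of the underlying idea with observed (not latent) variables, the sketch below simulates two correlated predictors from a Cholesky (triangular) decomposition of their correlation matrix and then enters them into the regression in a fixed order, reporting the extra variance (Delta R-squared) contributed at the second step. The variable roles (a general control variable entered first, a specific ability entered second), the correlation of .60, and the effect sizes are illustrative assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Correlation matrix of the two predictors (value .6 is an assumption).
R = np.array([[1.0, 0.6],
              [0.6, 1.0]])
L = np.linalg.cholesky(R)            # triangular decomposition: R = L @ L.T
X = rng.standard_normal((n, 2)) @ L.T  # correlated predictors

# Outcome depends on both predictors plus noise (weights are illustrative).
y = 0.4 * X[:, 0] + 0.3 * X[:, 1] + rng.standard_normal(n)

def r_squared(X, y):
    """R-squared of an OLS fit (variables are simulated with mean ~0)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Hierarchical (fixed-order) entry: the control variable first,
# then the specific predictor; Delta R^2 is its unique contribution.
r2_step1 = r_squared(X[:, :1], y)
r2_step2 = r_squared(X, y)
print(f"R^2 step 1: {r2_step1:.3f}, Delta R^2: {r2_step2 - r2_step1:.3f}")
```

Because the Cholesky factor is triangular, it orthogonalizes the predictors in their entry order, which is what makes the fixed-order variance partitioning well defined; the article's contribution is carrying this decomposition out on the intercorrelations among latent predictors within SEM rather than on observed scores as above.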