Washington State Institute for Public Policy

State early childhood education programs: Low-income

Pre-K to 12 Education
Benefit-cost methods last updated December 2023.  Literature review updated July 2019.
To be considered a pre-kindergarten program, the program must have had an age requirement of three or four at the start of the program and an education focus. This analysis includes studies, conducted after 1975, of children attending state- or district-funded pre-kindergarten programs that target low-income students. Programs examined offered ECE services on a part-time and/or full-time basis. Comparison students may have received child care provided by family or non-family members, another preschool program, or Head Start.

We exclude programs that provided childcare subsidies, focused on the provision of general childcare, focused on parent and child development, and/or provided more extensive wraparound services.
For an overview of WSIPP's Benefit-Cost Model, please see this guide. The estimates shown are present-value, life-cycle benefits and costs. All dollars are expressed in the base year chosen for this analysis (2022). The chance that the benefits exceed the costs is derived from a Monte Carlo risk analysis. The details of this analysis, as well as the economic discount rates and other relevant parameters, are described in our Technical Documentation.
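The Monte Carlo step can be pictured as repeatedly drawing total benefits and costs from assumed distributions and counting the share of draws in which benefits exceed costs. The sketch below is illustrative only: the normal distributions and the spread parameters are hypothetical assumptions, not WSIPP's actual model inputs, which sample uncertainty in many underlying parameters.

```python
import random

def chance_benefits_exceed_costs(mean_benefit, mean_cost,
                                 benefit_sd, cost_sd,
                                 n_draws=100_000, seed=1):
    """Share of Monte Carlo draws in which total benefits exceed costs.

    Normal distributions are an illustrative assumption; WSIPP's model
    propagates uncertainty through many underlying parameters.
    """
    rng = random.Random(seed)
    wins = sum(
        rng.gauss(mean_benefit, benefit_sd) > rng.gauss(mean_cost, cost_sd)
        for _ in range(n_draws)
    )
    return wins / n_draws

# Summary-table means with hypothetical spreads
# (cost_sd loosely motivated by the +/- 25% cost range below).
p = chance_benefits_exceed_costs(15_992, 3_342, benefit_sd=9_000, cost_sd=836)
```

With these assumed spreads, the simulated probability lands near the 90% figure reported in the summary table, but the agreement is coincidental to the chosen standard deviations.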
Benefit-Cost Summary Statistics Per Participant

Benefits to:
  Taxpayers                  $3,584
  Participants               $9,346
  Others                     $4,926
  Indirect                  ($1,863)
Total benefits              $15,992
Net program cost            ($3,342)
Benefits minus costs        $12,651
Benefit-to-cost ratio         $4.79
Chance the program will produce benefits greater than the costs: 90%

^WSIPP’s benefit-cost model does not monetize this outcome.

Meta-analysis is a statistical method to combine the results from separate studies on a program, policy, or topic in order to estimate its effect on an outcome. WSIPP systematically evaluates all credible evaluations we can locate on each topic. The outcomes measured are the types of program impacts that were measured in the research literature (for example, crime or educational attainment). Treatment N represents the total number of individuals or units in the treatment group across the included studies.
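As a simplified illustration of how separate study estimates are combined, the sketch below pools effect sizes with inverse-variance weights, so more precise studies count for more. This is a fixed-effect scheme with made-up numbers; the random effects model WSIPP reports additionally estimates between-study variance.

```python
def pool_effect_sizes(effect_sizes, standard_errors):
    """Fixed-effect inverse-variance pooling of study-level effect sizes.

    Studies with smaller standard errors (greater precision) receive
    proportionally larger weights.
    """
    weights = [1.0 / se ** 2 for se in standard_errors]
    pooled_es = sum(w * es for w, es in zip(weights, effect_sizes)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled_es, pooled_se

# Three hypothetical studies of the same outcome.
es, se = pool_effect_sizes([0.30, 0.25, 0.40], [0.05, 0.10, 0.08])
```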

An effect size (ES) is a standard metric that summarizes the degree to which a program or policy affects a measured outcome. If the effect size is positive, the outcome increases. If the effect size is negative, the outcome decreases. See Estimating Program Effects Using Effect Sizes for additional information.
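One common effect-size metric is the standardized mean difference (Cohen's d): the treatment-comparison gap in an outcome divided by the pooled standard deviation. A minimal sketch with toy test-score data follows; WSIPP's exact ES formulas are given in its Technical Documentation.

```python
import statistics

def cohens_d(treatment, comparison):
    """Standardized mean difference between two groups.

    Positive values mean the treatment group scored higher on the outcome.
    """
    n_t, n_c = len(treatment), len(comparison)
    pooled_var = ((n_t - 1) * statistics.variance(treatment) +
                  (n_c - 1) * statistics.variance(comparison)) / (n_t + n_c - 2)
    return (statistics.mean(treatment) - statistics.mean(comparison)) / pooled_var ** 0.5

# Toy test scores for illustration only.
d = cohens_d([102, 105, 98, 110, 101], [100, 97, 95, 104, 99])
```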

Adjusted effect sizes are used to calculate the benefits in our benefit-cost model. WSIPP may adjust effect sizes based on methodological characteristics of the study. For example, we may adjust effect sizes when a study has a weak research design or when the program developer is involved in the research. The magnitude of these adjustments varies depending on the topic area.

WSIPP may also adjust the second ES measurement. Research shows that the magnitude of some effect sizes decreases over time. For those effect sizes, we estimate outcome-based adjustments, which we apply between the first and second times the ES is estimated. We also report the unadjusted effect size to show the effect sizes before any adjustments were made. More details about these adjustments can be found in our Technical Documentation.
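For instance, one row of the meta-analysis table reports an ES of 0.293 at age 5 adjusted down to 0.091 by age 17. A simple way to picture an outcome-based fade-out adjustment is linear interpolation between the two measurement ages; this is an illustrative model, not WSIPP's exact procedure.

```python
def interpolated_es(es_first, age_first, es_second, age_second, age):
    """Effect size at a given age, linearly interpolated between the two
    measurement points (illustrative fade-out model only)."""
    if age <= age_first:
        return es_first
    if age >= age_second:
        return es_second
    frac = (age - age_first) / (age_second - age_first)
    return es_first + frac * (es_second - es_first)

# ES of 0.293 measured at age 5, adjusted to 0.091 by age 17.
es_age_11 = interpolated_es(0.293, 5, 0.091, 17, 11)  # midpoint, about 0.192
```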

Meta-Analysis of Program Effects
Adjusted effect sizes (ES) and standard errors (SE) are those used in the benefit-cost analysis; the unadjusted effect size comes from a random effects model.

Outcomes measured | Treatment age | No. of effect sizes | Treatment N | First time ES is estimated (ES / SE / Age) | Second time ES is estimated (ES / SE / Age) | Unadjusted (ES / p-value)
 | 4 | 3 | 274,591 | -0.043 / 0.004 / 8 | -0.043 / 0.004 / 8 | -0.043 / 0.001
 | 4 | 3 | 274,605 | 0.013 / 0.046 / 8 | 0.013 / 0.046 / 8 | 0.013 / 0.778
 | 4 | 1 | 1,852 | -0.032 / 0.056 / 8 | n/a | -0.032 / 0.572
 | 4 | 6 | 4,615 | 0.293 / 0.022 / 5 | 0.091 / 0.025 / 17 | 0.293 / 0.001
 | 4 | 1 | 1,852 | 0.033 / 0.078 / 8 | n/a | 0.033 / 0.672
1 In addition to the outcomes measured in the meta-analysis table, WSIPP measures benefits and costs estimated from other outcomes associated with those reported in the evaluation literature. For example, empirical research demonstrates that high school graduation leads to reduced crime. These associated measures provide a more complete picture of the detailed costs and benefits of the program.

2 “Others” includes benefits to people other than taxpayers and participants. Depending on the program, it could include reductions in crime victimization, the economic benefits from a more educated workforce, and the benefits from employer-paid health insurance.

3 “Indirect benefits” includes estimates of the net changes in the value of a statistical life and net changes in the deadweight costs of taxation.
Detailed Monetary Benefit Estimates Per Participant
Affected outcome | Resulting benefits1 | Taxpayers | Participants | Others2 | Indirect3 | Total
Test scores | Labor market earnings associated with test scores | $3,967 | $9,346 | $4,926 | $0 | $18,239
K-12 grade repetition | K-12 grade repetition | $67 | $0 | $0 | $33 | $100
K-12 special education | K-12 special education | ($450) | $0 | $0 | ($225) | ($675)
Program cost | Adjustment for deadweight cost of program | $0 | $0 | $0 | ($1,671) | ($1,671)
Totals | | $3,584 | $9,346 | $4,926 | ($1,863) | $15,992
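The column totals can be cross-checked directly: summing the rows reproduces the summary statistics above, to within a dollar of rounding in the published figures. The values below are copied from the table.

```python
# Per-participant benefits by row: (taxpayers, participants, others, indirect).
rows = {
    "Test scores":                (3_967, 9_346, 4_926, 0),
    "K-12 grade repetition":      (67, 0, 0, 33),
    "K-12 special education":     (-450, 0, 0, -225),
    "Deadweight cost of program": (0, 0, 0, -1_671),
}
col_totals = [sum(row[i] for row in rows.values()) for i in range(4)]
total_benefits = sum(col_totals)
net_benefit = total_benefits - 3_342  # net program cost from the cost table
```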
Detailed Annual Cost Estimates Per Participant
Annual cost (year dollars):
  Program costs      $9,330 (2018 dollars)
  Comparison costs   $6,384 (2018 dollars)
Summary:
  Present value of net program costs (in 2022 dollars)   ($3,342)
  Cost range (+ or -)                                    25%
The cost of participation in early childhood education programs targeting low-income children is estimated from Washington’s Early Childhood Education and Assistance Program (ECEAP) for low-income preschoolers (2019-20 ECEAP Contractor Slots, Models, Overincome Allotments, and Funding). The comparison group cost consists of an estimate for all ECEAP/Head Start-eligible children in Washington who are participating in Head Start, receiving state-funded childcare subsidies, or receiving no state-funded care. The cost of Head Start participation was provided by T. Saenz-Thompson (personal communication, Office of Head Start Region 10, October 24, 2019). The cost of receiving state-funded childcare subsidies is based on Washington’s childcare subsidy reimbursement rates as of February 2019. The comparison group cost is a weighted average of the cost of Head Start, state-subsidized childcare, and no state-funded care.
The figures shown are estimates of the costs to implement programs in Washington. The comparison group costs reflect either no treatment or treatment as usual, depending on how effect sizes were calculated in the meta-analysis. The cost range reported above reflects potential variation or uncertainty in the cost estimate; more detail can be found in our Technical Documentation.
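The weighted-average construction of the comparison-group cost can be sketched as follows. The per-child costs and population shares here are hypothetical placeholders, not the figures WSIPP used.

```python
def weighted_average_cost(costs, shares):
    """Comparison-group cost as a weighted average over care settings."""
    assert abs(sum(shares) - 1.0) < 1e-9, "shares must sum to 1"
    return sum(cost * share for cost, share in zip(costs, shares))

# Hypothetical annual costs and shares for the three settings:
# Head Start, state-subsidized child care, and no state-funded care.
comparison_cost = weighted_average_cost([9_000, 6_000, 0], [0.5, 0.3, 0.2])
```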
Benefits Minus Costs
Benefits by Perspective
Taxpayer Benefits by Source of Value
Benefits Minus Costs Over Time (Cumulative Discounted Dollars)
The graph above illustrates the estimated cumulative net benefits per participant for the first fifty years beyond the initial investment in the program. We present these cash flows in discounted dollars. If the dollars are negative (bars below the $0 line), the cumulative benefits do not outweigh the cost of the program up to that point in time. The program breaks even when the dollars reach $0. At this point, the total benefits to participants, taxpayers, and others are equal to the cost of the program. If the dollars are above $0, the benefits of the program exceed the initial investment.
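The break-even logic amounts to accumulating discounted annual benefits against the up-front cost. A minimal sketch follows; the 3.5% discount rate and the flat benefit stream are illustrative assumptions, not WSIPP's actual parameters or cash flows.

```python
def break_even_year(annual_net_benefits, program_cost, discount_rate=0.035):
    """First year in which cumulative discounted benefits cover the
    program cost, or None if they never do within the horizon."""
    cumulative = -program_cost
    for year, benefit in enumerate(annual_net_benefits, start=1):
        cumulative += benefit / (1 + discount_rate) ** year
        if cumulative >= 0:
            return year
    return None

# A hypothetical flat $500/year benefit stream against the $3,342 net cost.
year = break_even_year([500] * 50, 3_342)
```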

Citations Used in the Meta-Analysis

Andrews, R.J., Jargowsky, P.A., & Kuhne, K. (2012). The effects of Texas's targeted pre-kindergarten program on academic performance. Cambridge, MA: National Bureau of Economic Research.

Bania, N., Kay, N., Aos, S., & Pennucci, A. (2014). Outcome evaluation of Washington State’s Early Childhood Education and Assistance Program (Document No. 14-12-2201). Olympia: Washington State Institute for Public Policy.

Barnett, W.S., Frede, E.C., Mobasher, H., & Mohr, P. (1988). The efficacy of public preschool programs and the relationship of program quality to efficacy. Educational Evaluation and Policy Analysis, 10(1), 37–49.

Frede, E., Jung, K., Barnett, W. S., Lamy, C.E., & Figueras, A. (2007). The Abbott Preschool Program longitudinal effects study (APPLES): Interim report. New Brunswick, NJ: Rutgers University, National Institute for Early Education Research.

Hustedt, J.T., Barnett, W.S., Jung, K., & Thomas, J. (2007). The effects of the Arkansas Better Chance program on young children's school readiness. New Brunswick, NJ: Rutgers University, National Institute for Early Education Research.

Hustedt, J.T., Barnett, W.S., Jung, K., & Figueras-Daniel, A. (2009). Continued impacts of New Mexico pre-k on children's readiness for kindergarten: Results from the third year of implementation. New Brunswick, NJ: Rutgers University, National Institute for Early Education Research.

Lipsey, M.W., Hofer, K.G., Dong, N., Farran, D.C., & Bilbrey, C. (2013). Evaluation of the Tennessee voluntary prekindergarten program: End of pre-K results from the randomized control trial. Nashville, TN: Vanderbilt University, Peabody Research Institute.

Wong, V.C., Cook, B., & Jung, K. (2008). An effectiveness-based evaluation of five state pre-kindergarten programs. Journal of Policy Analysis and Management, 27(1), 122–154.

Xiang, Z., & Schweinhart, L.J. (2002). Effects five years later: The Michigan School Readiness Program evaluation through age 10. Ypsilanti, MI: High/Scope Educational Research Foundation.