|Benefit-Cost Summary Statistics Per Participant||
|---|---|
|Benefits to taxpayers|$3,343|
|Benefits to participants|$7,852|
|Benefits to others|$4,142|
|Indirect benefits|($57)|
|Net program cost|($114)|
|Benefits minus costs|$15,165|
|Benefit to cost ratio|$133.95|
|Chance the program will produce benefits greater than the costs|99%|
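The "chance the program will produce benefits greater than the costs" is a probabilistic statement, and one common way to obtain such a figure is Monte Carlo simulation over the uncertain inputs. The sketch below is an illustration only: the point estimates come from the table above, but the standard deviations, distributional form, and simulation design are assumptions, not the method used to produce the 99% figure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Point estimates from the summary table (per participant).
total_benefits = 3_343 + 7_852 + 4_142 - 57   # taxpayers + participants + others + indirect
net_program_cost = 114

# Assumed uncertainty, purely for illustration; the report does not publish these.
benefit_sd = 5_000
cost_sd = 11          # roughly the +/- 10% cost range shown later

draws = 100_000
sim_benefits = rng.normal(total_benefits, benefit_sd, draws)
sim_costs = rng.normal(net_program_cost, cost_sd, draws)

# Share of simulated outcomes in which benefits exceed costs.
p_positive = np.mean(sim_benefits - sim_costs > 0)
print(f"Chance benefits exceed costs: {p_positive:.0%}")
```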
|Detailed Monetary Benefit Estimates Per Participant||||||
|---|---|---|---|---|---|
|Benefits from changes to:|Benefits to taxpayers|Benefits to participants|Benefits to others|Indirect benefits|Total benefits|
|Labor market earnings associated with test scores|$3,343|$7,852|$4,142|$0|$15,336|
|Adjustment for deadweight cost of program|$0|$0|$0|($57)|($57)|
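The detailed rows reconcile with the summary statistics by simple addition. A minimal arithmetic sketch using the rounded values printed in the table; the results differ from the published $15,165 and $133.95 by a few dollars, presumably because the underlying analysis carries unrounded values.

```python
# Row values from the detailed benefit table above
# (taxpayers, participants, others, indirect).
rows = {
    "Labor market earnings associated with test scores": (3_343, 7_852, 4_142, 0),
    "Adjustment for deadweight cost of program": (0, 0, 0, -57),
}

total_benefits = sum(sum(values) for values in rows.values())
net_program_cost = 114

print(f"Total benefits:        ${total_benefits:,}")                      # about $15,279
print(f"Benefits minus costs:  ${total_benefits - net_program_cost:,}")   # about $15,165
print(f"Benefit to cost ratio: {total_benefits / net_program_cost:.2f}")  # about 134
```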
|Detailed Annual Cost Estimates Per Participant|Annual cost|Year dollars|Summary||
|---|---|---|---|---|
|Program costs|$107|2013|Present value of net program costs (in 2018 dollars)|($114)|
|Comparison costs|$0|2013|Cost range (+ or -)|10%|
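Costs are recorded in 2013 dollars, while the summary expresses their present value in 2018 dollars. A minimal sketch of that conversion, assuming an illustrative price deflator and discount rate; neither figure is taken from the report.

```python
# Convert an annual per-participant cost in 2013 dollars to a present value
# in 2018 dollars. The deflator and discount rate below are illustrative
# assumptions, not the figures used in the underlying analysis.

annual_cost_2013_dollars = 107       # program cost per participant (2013 dollars)
comparison_cost = 0                  # comparison-group cost

price_deflator_2013_to_2018 = 1.066  # assumed cumulative inflation, 2013 -> 2018
discount_rate = 0.035                # assumed real discount rate
years_before_base_year = 0           # cost incurred in the base year in this sketch

net_cost_2018_dollars = (annual_cost_2013_dollars - comparison_cost) * price_deflator_2013_to_2018
present_value = net_cost_2018_dollars / (1 + discount_rate) ** years_before_base_year

print(f"Present value of net program cost: (${present_value:,.0f})")  # roughly ($114)
```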
|Estimated Cumulative Net Benefits Over Time (Non-Discounted Dollars)|
|The graph above illustrates the estimated cumulative net benefits per participant for the first fifty years beyond the initial investment in the program. We present these cash flows in non-discounted dollars to simplify identifying the “break-even” point from a budgeting perspective. If the dollars are negative (bars below the $0 line), the cumulative benefits do not yet outweigh the cost of the program at that point in time. The program breaks even when the dollars reach $0; at that point, the total benefits to participants, taxpayers, and others equal the cost of the program. If the dollars are above $0, the benefits of the program exceed the initial investment.|
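A minimal sketch of the break-even logic described above, using the ($114) net program cost from the tables and hypothetical annual benefit amounts; the year-by-year cash flows behind the graph are not reproduced here.

```python
from itertools import accumulate

# Hypothetical non-discounted cash flows per participant: an up-front cost
# followed by annual benefits. The yearly benefit amounts are illustrative only.
cash_flows = [-114] + [300] * 50   # year 0 cost, then 50 years of assumed benefits

cumulative = list(accumulate(cash_flows))

# Break-even: first year the cumulative net benefit is at or above $0.
break_even_year = next(year for year, total in enumerate(cumulative) if total >= 0)
print(f"Break-even in year {break_even_year}; "
      f"cumulative net benefit after 50 years: ${cumulative[-1]:,}")
```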
|Meta-Analysis of Program Effects|

|Outcomes measured|Treatment age|No. of effect sizes|Treatment N|Adjusted effect sizes (ES) and standard errors (SE) used in the benefit-cost analysis (first time ES is estimated; second time ES is estimated)|Unadjusted effect size (random effects model)|
|---|---|---|---|---|---|
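The "unadjusted effect size (random effects model)" column refers to an effect size pooled across studies. The sketch below shows a standard DerSimonian-Laird random-effects pooling; the effect sizes and standard errors are placeholders, not values drawn from the studies cited below.

```python
import numpy as np

# Placeholder study-level effect sizes (ES) and standard errors (SE);
# not the estimates from the studies listed below.
es = np.array([0.10, 0.25, 0.05, 0.18])
se = np.array([0.06, 0.09, 0.04, 0.07])

# Fixed-effect (inverse-variance) weights and pooled estimate.
w = 1.0 / se**2
es_fixed = np.sum(w * es) / np.sum(w)

# DerSimonian-Laird estimate of between-study variance (tau^2).
q = np.sum(w * (es - es_fixed) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(es) - 1)) / c)

# Random-effects weights incorporate tau^2.
w_re = 1.0 / (se**2 + tau2)
es_random = np.sum(w_re * es) / np.sum(w_re)
se_random = np.sqrt(1.0 / np.sum(w_re))

print(f"Random-effects pooled ES: {es_random:.3f} (SE {se_random:.3f})")
```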
Al Otaiba, S., Connor, C.M., Folsom, J.S., Greulich, L., Meadows, J., & Li, Z. (2011). Assessment data-informed guidance to individualize kindergarten reading instruction: Findings from a cluster-randomized control field trial. The Elementary School Journal, 111(4), 535-560.
Connor, C.M., Morrison, F.J., Fishman, B.J., Schatschneider, C., & Underwood, P. (2007). The early years: Algorithm-guided individualized reading instruction. Science, 315(5811), 464-465.
Fuchs, L.S., Fuchs, D., Karns, K., Hamlett, C.L., & Katzaroff, M. (1999). Mathematics performance assessment in the classroom: Effects on teacher planning and student problem solving. American Educational Research Journal, 36(3), 609-646.
Heller, J.I., Daehler, K.R., Wong, N., Shinohara, M., & Miratrix, L.W. (2012). Differential effects of three professional development models on teacher knowledge and student achievement in elementary science. Journal of Research in Science Teaching, 49(3), 333-362.
Konstantopoulos, S., Miller, S.R., & van der Ploeg, A. (2013). The impact of Indiana's system of interim assessments on mathematics and reading achievement. Educational Evaluation and Policy Analysis, 35(4), 481-499.
Quint, J.C., Sepanik, S., & Smith, J.K. (2008). Using student data to improve teaching and learning: Findings from an evaluation of the Formative Assessments of Students Thinking in Reading (FAST-R) Program in Boston elementary schools. New York: MDRC.
Slavin, R.E., Cheung, A., Holmes, G.C., Madden, N.A., & Chamberlain, A. (2013). Effects of a data-driven district reform model on state assessment outcomes. American Educational Research Journal, 50(2), 371-396.
Tyler, J.H. (2013). If you build it will they come? Teachers' online use of student performance data. Education Finance and Policy, 8(2), 168-207.