Abstract
We estimate the effects on postsecondary education outcomes of the Kalamazoo Promise, a generous, place-based college scholarship. We identify Promise effects using two forms of difference-in-differences: (i) comparing eligible to ineligible graduates before and after the Promise’s initiation and (ii) comparing the treated district to comparison districts before and after the Promise’s initiation. According to our estimates, the Promise increases college enrollment and credential attainment. Stronger effects occur for women. The results also provide suggestive but less precise evidence that Promise effects extend to disadvantaged groups.
I. Introduction
The Kalamazoo Promise, as the first major “place-based” scholarship program in the United States, has generated broad interest.1 This program, often simply called the Promise, was announced on November 10, 2005, and offers large college tuition subsidies with few conditions to graduates of Kalamazoo Public Schools (KPS). Funded by anonymous private donors, the Promise pays up to 100 percent of tuition and fees for any public postsecondary institution in Michigan and is “first-dollar,” so aid is not reduced by other scholarships. The only conditions to qualify for the Promise are that a student be continuously enrolled in KPS since at least ninth grade, that he or she live in the school district and graduate from KPS, and that he or she be admitted to any public college in the state.
In this paper, we estimate the effects of the Kalamazoo Promise on postsecondary education outcomes, including attendance and degree completion. We estimate difference-in-differences models that compare outcomes of eligible students with those of ineligible students both before and after the announcement of the Promise.
There is great interest in understanding how college scholarships affect college completion (Oreopoulos and Salvanes 2011; Zimmerman 2014). Do scholarship programs induce additional college completion, or simply transfer funds to persons who would have completed college anyway? We also care about college scholarship programs because of concerns about their distributional effects. Scholarship programs might be progressive if they help disadvantaged groups complete college, but could have regressive effects if they disproportionately benefit advantaged groups that have greater odds of attending and finishing college. Many college scholarship programs, including the ones most studied, have merit or need requirements designed to target scholarships toward those with greater likelihood of college completion or greater economic disadvantage (or both).
Why are place-based scholarship programs different? First, because they tend to have fewer merit or need requirements, they can be much simpler in design, which may increase program effectiveness (Bettinger et al. 2012). Second, their breadth in coverage and community nature means more students tend to be eligible, which may increase program awareness and change social expectations about attending college. However, the fewer requirements in these types of scholarships could also have adverse distributional consequences from poor targeting. They may devote too many dollars to students with insufficient preparation to complete college, for example, or to inframarginal upper-income students who would have completed college anyway.
We know a fair amount about merit-based college scholarships, but much less is known about place-based scholarships.2 Merit-based scholarships have been shown to increase college enrollment in general, and, if the program limits the choice of colleges, to shift enrollment toward the targeted colleges (Dynarski 2002, 2004; Cornwell, Mustard, and Sridhar 2006; Abraham and Clark 2006; Kane 2003, 2006). But merit-based aid’s effects on college graduation are mixed. Scott-Clayton (2011) finds that the West Virginia PROMISE increased the probability of completing a bachelor’s degree within four years. In contrast, Sjoquist and Winters (2015) find that state-based merit aid programs, as a whole, have generally not been successful in increasing college attainment. Cohodes and Goodman (2014) find that the Adams scholarship, a merit-aid, tuition-waiver program for Massachusetts public colleges, resulted in a decline in college graduation rates, by inducing students to forgo higher-quality colleges.
Less research has investigated place-based scholarships with minimal academic requirements like the Kalamazoo Promise.3 Much of the existing research has focused on community or school district outcomes, such as K–12 enrollment and housing prices, where effects can be observed sooner. Bartik, Eberts, and Huang (2010) and Hershbein (2013), for example, show that the declining enrollment in KPS abruptly reversed following the introduction of the Kalamazoo Promise. Miller (2018) finds the Promise had little effect on housing prices, while LeGower and Walsh (2017), who study nearly two dozen place-based scholarships (including Kalamazoo’s), find positive effects on school district enrollment and housing prices, with generally larger effects for more-generous and less-restrictive programs.
Among the limited work examining postsecondary outcomes, Bozick, Gonzalez, and Engberg (2015) find that the Pittsburgh Promise, a place-based scholarship with merit and attendance requirements, had little effect on overall college enrollment but did increase the likelihood of attending a four-year school, while Page and Iriti (2016) find similar but more broad-based enrollment increases. Carruthers and Fox (2016) find that the Knox Achieves scholarship program for community colleges resulted in increased college enrollment, more accumulated college credits, and a slight shift from four-year colleges to community colleges. Because the scholarship cost averaged only $1,000 per student, Carruthers and Fox argue that the financial aid was not the key factor affecting student outcomes. Rather, “the simple message of ‘free community college’ ... may have fundamentally reshaped the postsecondary educational expectations of participants” (p. 99).4
Nonetheless, it is still not well understood whether these scholarships increase college completion, and if so, under what circumstances. It might seem obvious that financial aid for college will increase college completion, but policy design and local context matter. As shown by Cohodes and Goodman (2014), if the scholarship encourages enrollment in lower-quality colleges, it could reduce college completion. However, the Massachusetts context, with scholarships restricted to relatively low-quality public colleges in a region with much higher-quality private colleges, may not generalize to the rest of the country. Throughout most of the rest of the United States, compared to the Northeast, public college options are often stronger (Bowen, Chingos, and McPherson 2011). For the Kalamazoo Promise, the restriction to public Michigan colleges still includes two selective institutions, the University of Michigan–Ann Arbor and Michigan State University.5
This study helps fill the gaps in our knowledge by estimating the effects of the first prominent place-based scholarship, the Kalamazoo Promise, on college enrollment and completion, both overall and for certain groups of students. To our knowledge, it is the first study of place-based scholarship programs to examine college completion and is among relatively few studies of any scholarship program that examine this outcome.
Our identification comes from the timing of the Promise’s introduction. The unexpected announcement of the Promise in the fall of 2005 created a situation in which some KPS students found themselves eligible for subsidies covering at least 65 percent of future tuition, while others discovered they were ineligible for the scholarship. Using this natural experiment, we compare changes in postsecondary outcomes before and after the Promise’s creation between Promise-“eligible” students (or students before the Promise who would have been eligible based on when they enrolled in KPS) and Promise-ineligible students.
To corroborate our “within-KPS” identification strategy, we also perform a “between-district” analysis. Using Michigan data on cohorts of high school graduates by district, we compare college outcomes for KPS and other similar districts in Michigan before and after the announcement of the Promise. Adding credence to our analysis, we obtain similar results across the two strategies.
We find large effects of the Promise on several postsecondary outcomes. Using the within-KPS analysis, we estimate that the Promise increased the chance of students enrolling in a four-year college by 18 percent. Although this enrollment effect is economically large, it is imprecisely estimated. However, when using the between-district analysis, we find a large and statistically significant 27-percent increase in the likelihood of enrolling in a four-year college within six months of high school graduation. Further, we find that the enrollment increase is associated with a strong substitution effect in college choice, with affected students switching toward Promise-eligible schools, including a large increase in enrollment at Michigan’s flagship universities. These substitution estimates are large and precisely estimated under both strategies.
Most importantly, we find that, as of six years after high school graduation, the Promise increased postsecondary credential attainment by a statistically significant 12 percentage points, from a pre-Promise baseline of 36 percent to 48 percent; this represents a proportional increase in credential attainment of one-third. About two-thirds of this boost in credential attainment is due to a greater share of students earning a bachelor’s degree.
We examine the heterogeneity of Promise effects by gender, race, and economic disadvantage. First, we consistently find that the college completion results are strong for women, but weaker for men. These results may reflect disadvantaged men having more persistent problems (Autor et al. 2015). Second, we find evidence that the Promise effects are not driven by relatively advantaged students. The enrollment and college completion results are statistically indistinguishable and quantitatively similar regardless of whether students qualified for a lunch subsidy, although the estimates are sometimes imprecise. These results imply that the Promise had a relatively strong proportional impact on traditionally disadvantaged groups, in part because it induced them to go to a more selective college than otherwise. Third, it is not clear whether Promise impacts vary by student race. Although estimates from within-KPS comparisons show larger effects for students of color than for white students, the pattern is reversed, albeit less pronounced, for the between-district analysis.
In the next section, we describe the institutional details of the Kalamazoo Promise. We follow by outlining our data and methodology and then present our results. We also evaluate the strength of our identification assumptions through several robustness checks and examine heterogeneity of Promise effects across different student groups. We conclude by discussing implications of our results for policy.
II. Background on KPS and the Promise
Kalamazoo Public Schools (KPS) is a midsized, mostly urban school district in southwest Michigan, with just over 10,000 students on the eve of the Promise. Like many urban school districts, KPS is poorer and more ethnically diverse than nearby areas. As of the year before the Promise’s announcement, the school-aged population within the district’s boundaries had a poverty rate of 28 percent, and African Americans and Hispanics made up 47 percent and 8 percent of district enrollment, respectively. For other Kalamazoo-area districts, the poverty rate was only 8 percent, and African Americans and Hispanics were just 5 percent and 2 percent of enrollment, respectively.6
Announced in November 2005 and taking effect for the high school class of 2006, the Kalamazoo Promise is a scholarship available to all students who graduate from KPS, reside in the district, and have been continuously enrolled since the beginning of high school.7 Unlike most student aid, the Promise has neither merit requirements (high school GPA or test scores) nor financial need requirements. According to the donors who anonymously fund the scholarship, the Promise’s purpose is to improve KPS, attract people to Kalamazoo, and increase local college graduates, which should attract employers and enhance Kalamazoo’s economic development (Miller-Adams 2009). Applying for the Promise is quick and simple compared to most other student financial aid, especially the Free Application for Federal Student Aid (FAFSA). In their senior year of high school, students fill out a one-page form consisting of basic contact information and only a half-dozen questions (see Appendix Figure A.1).
Bettinger et al. (2012) find that helping students fill out complex financial aid forms— or more generally simplifying the application process—can increase aid receipt and improve college outcomes. The simplicity of the Promise has contributed to its high use rate of more than 85 percent.8 The Promise pays up to 100 percent of tuition and fees at any public community college or university in Michigan.9 The award is treated as first-dollar, meaning that it is applied before grant money from other sources.10 The Promise benefit depends on the length of continuous enrollment in the district’s schools: students who have been in KPS since kindergarten receive the 100-percent subsidy, students enrolled since first through third grade receive 95 percent, and students first enrolled afterward have subsidy rates decreased by five percentage points for each subsequent grade through ninth grade (at a 65 percent scholarship). No scholarship is available for students whose last continuous spell in KPS begins after the start of ninth grade.
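To make the sliding scale concrete, the following minimal sketch encodes the schedule just described as a function of entry grade; it is an illustration of the rules as summarized above, not code from the Promise or the authors.

```python
def promise_subsidy_share(entry_grade: int) -> float:
    """Tuition share paid by the Promise, given the grade of first continuous
    enrollment in KPS (kindergarten = 0, first grade = 1, ...)."""
    if entry_grade <= 0:        # enrolled since kindergarten
        return 1.00
    if entry_grade <= 3:        # enrolled since grades 1-3
        return 0.95
    if entry_grade <= 9:        # grades 4-9: 90, 85, 80, 75, 70, 65 percent
        return (95 - 5 * (entry_grade - 3)) / 100
    return 0.0                  # entered after the start of ninth grade

# A student first enrolled in sixth grade qualifies for an 80 percent subsidy.
print(promise_subsidy_share(6))  # 0.8
```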
Figure 1 shows the relationship between the grade first (and continuously) enrolled in KPS and the Promise’s generosity. The figure shows the large drop in expected generosity between enrolling before and after ninth grade. As discussed in our data and methods section, our identification strategy exploits this sizable change in generosity.
Figure 1. Promise tuition subsidy share, by grade of first continuous enrollment in KPS. Source: Eligibility rules from the Kalamazoo Promise.
Table 1 shows the number of KPS graduates from the district’s two mainline and one alternative high school between 2003 and 2013. Graduates are divided into two groups: (i) those who are Promise-eligible (or would have been eligible if the Promise had existed in the past) and (ii) those who are ineligible for the Promise because they entered the district too late to be Promise-eligible.
Table 1. High School Graduates in KPS and Promise (Pseudo-)Eligibility
Eligible students have ten years from high school graduation to use the scholarship. The Promise pays for up to 130 credits, just above the number typically needed for a bachelor’s degree. Students must be enrolled full-time (12 or more credit hours per semester), with the exception of Kalamazoo Valley Community College (KVCC), the local two-year college, where the required enrollment intensity is only half-time. Although there is no initial GPA eligibility requirement, enrolled students must keep a college GPA of at least 2.0 per enrollment period to maintain eligibility. Students falling below this college GPA threshold can regain eligibility by attending college for a semester without Promise support and raising their GPA above the cutoff.
Through 2014, total scholarships paid by the Promise reached $61 million, with spending reaching an approximately steady level (in real terms) of $10–11 million per year by 2011. As of the fall of 2014, approximately 1,400 KPS graduates were using the Promise, which amounts to average spending per recipient of about $4,000 per semester. The value of the scholarship varies with the specific institution a student attends, however, with per-student spending ranging from roughly $1,000 per semester at the local community college (where Promise recipients may enroll less than full time) to nearly $5,000 per semester at a university. During the six years after high school graduation, the average present value of Promise scholarship spending per Promise-eligible graduate is $17,620. For eligible students who end up getting a bachelor’s degree, the average present value of Promise scholarship spending is $33,359.11
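The present-value figures can be illustrated with a simple discounted sum. The payment stream and the 5 percent annual discount rate below are illustrative assumptions, not the paper’s actual inputs.

```python
# Minimal sketch: present value, at high school graduation, of a stream of
# per-semester Promise payments. The rate and payments are made-up inputs
# for illustration, not values used in the paper.
annual_rate = 0.05
semester_rate = (1 + annual_rate) ** 0.5 - 1

payments = [5000] * 8  # e.g., eight semesters at a university

pv = sum(p / (1 + semester_rate) ** t for t, p in enumerate(payments))
print(f"Present value at graduation: ${pv:,.0f}")
```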
III. Data and Methodology
A. Data
Our primary data come from KPS and Promise administrative records merged with National Student Clearinghouse (NSC) data on college attendance. Our data span high school graduates from the classes of 2003–2013. From KPS, we obtain information on student characteristics: sex, race/ethnicity, participation in the federal assisted lunch program at any point during high school, and high school of graduation.
Most important for our identification strategy, the KPS records provide a history of student enrollment and residency in the district, which allows us to construct a Promise eligibility indicator (see Online Appendix A). Our sample includes three pre-Promise cohorts (the graduating classes of 2003–2005) and up to eight post-Promise cohorts (the classes of 2006–2013). The enrollment data go back to 1997, which allows us to track enrollment histories for all our cohorts back to sixth grade. Hence, the data allow us to distinguish KPS graduates eligible for any tuition subsidies—that is, a subsidy of at least 65 percent—from KPS graduates who are ineligible for any subsidies. However, for the earlier cohorts, we cannot identify the exact fractional scholarship (above 65 percent) for which they would have been eligible had the Promise existed.12 We doubt that many students and families would be overly sensitive to marginal changes in the percentage of tuition subsidized relative to the very large change from 0 to 65 percent. Thus, we discretize the Promise eligibility variable to be binary: any Promise eligibility versus no Promise eligibility. Eligibility equals one if the student would have been eligible for any tuition subsidy (65 percent or more), based on continuous enrollment and residency in KPS since the ninth-grade fall count date. If the student enrolled after that date, or was not a district resident, we count that student as ineligible based on Promise rules. We refer to this as rules-based eligibility.
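As a concrete illustration, a rules-based indicator of this kind could be constructed from an enrollment-spell file along the following lines; all column names here are hypothetical stand-ins, not the actual KPS fields.

```python
import pandas as pd

# Hypothetical student-level file: one row per student, with the start date
# of the last continuous KPS enrollment spell and residency/graduation flags.
students = pd.DataFrame({
    "student_id": [1, 2, 3],
    "spell_start": pd.to_datetime(["2000-09-05", "2004-01-10", "2002-09-03"]),
    "grade9_count_date": pd.to_datetime(["2003-09-26"] * 3),
    "district_resident": [True, True, False],
    "graduated_kps": [True, True, True],
})

# Rules-based eligibility: continuously enrolled and resident since at least
# the ninth-grade fall count date, and a KPS graduate. Any qualifying student
# would receive a subsidy of at least 65 percent, so eligibility is coded 0/1.
students["elig_rules"] = (
    (students["spell_start"] <= students["grade9_count_date"])
    & students["district_resident"]
    & students["graduated_kps"]
).astype(int)

print(students[["student_id", "elig_rules"]])
```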
In the post-Promise period (2006–2013), we can also observe actual eligibility based on the administrative records directly from the Kalamazoo Promise. The observed administrative eligibility data do not perfectly match rules-based eligibility based on attendance history, largely because the Promise administratively granted exceptions to some higher-risk students.13 Because of this discrepancy, we use an instrumental variables strategy, described below, to estimate a local average treatment effect of Promise eligibility on outcomes by instrumenting observed eligibility with rules-based eligibility.
The KPS high school data are joined to NSC data using a student-level identifier.14 The NSC provides for each KPS graduate the specific colleges attended, the dates and intensity of attendance, and degrees or credentials earned. These data allow us to investigate the effects of the Promise on college attendance and completions.15
B. Methodology
The surprise announcement of the Promise created a large variation in expected college tuition costs that differed between students on the basis of prior enrollment decisions in KPS. A naive approach to estimating the Promise’s effects would be to compare outcomes of students observed as eligible after the announcement of the Promise with those observed as ineligible. In practice, this naive approach could be implemented by regressing any outcome on a dummy variable indicating observed eligibility.
There are at least two reasons why such a naive approach would likely yield a biased estimate of Promise effects. First, students eligible for the Promise—because they enrolled in KPS before the start of ninth grade—may not on average be similar to ineligible students, who necessarily enrolled after ninth grade began.16 Second, the Promise administratively granted eligibility to some at-risk students who would have been considered ineligible had the program rules been followed strictly, and this selection potentially compromises the exogenous variation based on past enrollment decisions alone. Even if we were to account for observable student differences, we would worry that estimates of the “effect” of the Promise would be confounded by unobservables.
A more credible way of estimating the effect of the Promise, meant to address the first issue described above, is to compare eligible and ineligible students, as determined by the Promise administration, but holding constant any time-invariant pre-Promise differences between students who enrolled in KPS before or after ninth grade. In practice, this involves estimating the following difference-in-differences (DD) equation:
$$y_{ist} = \delta_1 \widetilde{Elig}_{ist} + \delta_2 \left( After_t \times \widetilde{Elig}_{ist} \right) + \gamma_{st} + x_{ist}'\beta + u_{ist} \qquad (1)$$

where i denotes the individual student, s denotes the high school, and t denotes the academic year in which we observe the student. The outcome variable (for example, college graduation) is denoted by y. In the post-Promise period, $\widetilde{Elig}_{ist}$ equals one if the student is observed as eligible according to Promise administrative records and zero otherwise; in the pre-Promise period, $\widetilde{Elig}_{ist}$ equals one if the student is eligible based on historical enrollment in KPS and zero otherwise. $After_t \times \widetilde{Elig}_{ist}$ is an interaction between $\widetilde{Elig}_{ist}$ and $After_t$ (a dummy that equals one if the student graduated after the Promise was in effect—the class of 2006 and later—and zero if before). The regression also includes graduation-year-by-high-school dummies, $\gamma_{st}$, encompassing years 2003–2013 and three high schools.17 The vector $x_{ist}$ contains student-level observables (listed in Table 2), and $u_{ist}$ denotes student i’s unobservable traits, which we allow to be heteroskedastic.18
Table 2. Descriptive Statistics of Sample
The coefficient of greatest interest in Equation 1 is $\delta_2$: the regression-adjusted difference in average outcomes between Promise-eligible and ineligible students, net of pre-Promise differences between students who enrolled before or after ninth grade. We refer to this estimate as $\hat{\delta}_2^{OLS}$, the ordinary least squares (OLS) estimate.
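For concreteness, a minimal sketch of estimating Equation 1 with off-the-shelf tools appears below. The data are synthetic, and every column name (elig_obs, after, hs, cohort, and the controls) is a hypothetical stand-in for the restricted KPS–NSC file, not the authors’ actual variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the student-level file; a 7-point effect is planted
# purely for illustration.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "elig_obs": rng.integers(0, 2, n),   # observed (administrative) eligibility
    "cohort": rng.integers(2003, 2014, n),
    "hs": rng.integers(1, 4, n),         # three high schools
    "female": rng.integers(0, 2, n),
    "minority": rng.integers(0, 2, n),
    "lunch": rng.integers(0, 2, n),
})
df["after"] = (df["cohort"] >= 2006).astype(int)
df["enroll_4yr"] = (
    rng.random(n) < 0.40 + 0.07 * df["elig_obs"] * df["after"]
).astype(int)

# Equation 1: DD of the outcome on eligibility and eligibility-x-after, with
# graduation-year-by-high-school dummies and heteroskedasticity-robust (HC1)
# standard errors, as the text describes.
model = smf.ols(
    "enroll_4yr ~ elig_obs + elig_obs:after + C(hs) * C(cohort)"
    " + female + minority + lunch",
    data=df,
).fit(cov_type="HC1")
print(model.params["elig_obs:after"], model.bse["elig_obs:after"])
```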
There are two related concerns that may compromise the ability of $\hat{\delta}_2^{OLS}$ to uncover the effect of actual Promise eligibility—having the scholarship available—on outcomes. First, the eligibility indicator $\widetilde{Elig}_{ist}$ is not defined symmetrically in the pre- and post-Promise periods. This is because, in the pre-Promise period, we do not observe which ineligible students would have administratively been granted eligibility by the Promise administrators—all we observe is whether the student would be eligible according to the Promise’s “length of enrollment” rules. Second, the exceptions occurring in the post-Promise period might be driven by selection on unobservables.19
Recognizing that such selection is a threat to identification, we also estimate the effect of the Promise defining eligibility strictly according to the length-of-enrollment rule. This model is summarized by the following equation:

$$y_{ist} = \delta_1 Elig_{ist} + \delta_2 \left( After_t \times Elig_{ist} \right) + \gamma_{st} + x_{ist}'\beta + u_{ist} \qquad (2)$$

Equation 2 is the analogue of Equation 1, where we replace $\widetilde{Elig}_{ist}$ with $Elig_{ist}$. $Elig_{ist}$ equals one if the student is eligible for a Promise tuition subsidy based on the length-of-enrollment rule (regardless of whether the Promise had taken effect yet or not), and zero otherwise. All other variables are defined as in Equation 1. We refer to the estimates of $\delta_2$ obtained from Equation 2 as $\hat{\delta}_2^{RF}$, the reduced-form (RF) estimate.
The RF estimate is an intention-to-treat effect. Hence, it provides a lower bound of the effect of actual Promise eligibility—that is, having the scholarship available. To obtain an estimate of actual eligibility after the Promise, we simply rescale the RF estimate by the compliance rate. During the post-period, we find that the coefficient on the rules-based eligibility indicator in predicting administrative eligibility is about 0.7. As such, our main implied treatment-on-the-treated effect—being granted the scholarship in practice—is simply $\hat{\delta}_2^{RF}/0.7$.20 This is a simple Wald estimate, which we refer to as the instrumental variables (IV) estimate. In our main results, we report the OLS, RF, and IV estimates, with the IV estimates being the treatment estimates that are most robust to selection bias.
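In code, the Wald rescaling is a one-line computation; the reduced-form number below is made up for illustration, while 0.7 is the approximate first-stage coefficient reported in the text.

```python
# Wald/IV estimate: scale the reduced-form (intention-to-treat) effect by the
# first-stage coefficient of administrative eligibility on rules-based
# eligibility (about 0.7, per the text).
rf_estimate = 0.07   # hypothetical reduced-form DD coefficient
first_stage = 0.7    # approximate first-stage coefficient from the text
iv_estimate = rf_estimate / first_stage
print(f"Implied effect of being granted the scholarship: {iv_estimate:.3f}")
```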
Why report OLS estimates? Even if potentially biased, the OLS estimates are usually more precisely estimated than the IV estimates. For this reason, if the OLS estimates show Promise effects that are substantively and statistically significant—and the IV estimates are imprecise but not substantively or statistically different from the OLS estimates—we view this as supporting a Promise effect.
The within-KPS strategy described above and in Equations 1 and 2 has the advantage of controlling for any unobserved effects that vary over time and that affect all KPS students equally. Specifically, the validity of our difference-in-differences strategy rests on four assumptions. The first assumption is that outcomes are trending similarly for eligible and ineligible students before the Promise. In a hypothetical world without the Promise, outcomes of eligible and ineligible students would have followed a common, parallel trend, conditional on observables. The second assumption is that no other change in KPS besides the Promise affected eligible and ineligible students’ outcomes in a differential way. The third assumption is that rules-based eligibility affects outcomes only through observed eligibility and that rules-based eligibility is a relevant instrument for observed eligibility. The fourth assumption is that, had the Promise existed in the pre-Promise period, the discrepancy between observed eligibility and rules-based eligibility, conditional on available observables, would have been the same as in the post-Promise period.
A key issue for our quasi-experimental analysis is the potential for bias from a change in the relative composition of the eligible and ineligible groups. We deal with changing group composition in part by directly including controls for observables, such as sex, race and ethnicity, participation in the federal subsidized lunch program, and high school of graduation by cohort.21 Although below we address in detail the concern of less observable compositional changes, including the possible role of selective migration, we briefly examine in Table 2 compositional changes by comparing demographic changes between eligibles and ineligibles, before and after the Promise.
Overall, the district’s graduates became more diverse after the Promise, and the share of students eligible for subsidized lunch increased (largely due to the Great Recession). Comparing the two groups, we find (marginally) significant differential changes between the eligible and ineligible groups in two variables: the fractions of Asian students (of whom there are relatively few) and students enrolled in High School 1.22
Ineligible students are more likely to be students of color and participate in the assisted lunch program relative to Promise-eligible students, both before and after the Promise announcement, but these differences are not statistically different from zero. We control for these characteristics in our estimation of Equations 1 and 2.23
It is still possible that despite using instrumental variables, selection on unobservables remains, particularly later in the post-treatment period, and that this selection could confound our estimates. We address this possibility in several ways.
First, we examine year-by-year trends in effects of Promise eligibility. In addition to shedding light on the possibility of differential pre-trends across groups, this analysis indicates how quickly the Promise effects appear. If selection on unobservables is a problem, it would be expected to worsen over time, as students (and their families) have more time to change their behavior (for example, cumulative selective in-migration and out-migration would affect more students). If Promise impacts show up immediately, differential selection is likely less salient.
Second, we later restrict the sample to exclude all students who entered KPS after the Promise announcement, which eliminates bias due to selective in-migration.24 However, this restriction comes at the expense of statistical power.
Third, we experiment with restricting the eligible sample to students moving into KPS between seventh grade and ninth grade, rather than all grades before high school. This restricts sample size greatly, but it may address concerns that ineligibles, who by definition are “movers” who entered KPS after ninth grade, could somehow differ in unobservables from “stayers” who have been in KPS since kindergarten or some other early grade.
Finally, we supplement the primary within-KPS analysis with between-district comparisons of KPS and other Michigan school districts. While such between-district comparisons may also be subject to selection bias on unobservables, they would not be subject to possible biases due to changes in the small group of ineligible KPS students. Therefore, our main within-KPS analysis is supported if the between-district analysis yields similar results.
IV. Results for Postsecondary Outcomes
We consider Promise effects on enrollment and credential completion. We provide both IV and OLS estimates, as well as RF estimates for completeness. The IV estimates are more robust to possible selection bias, but the OLS estimates may be more efficient if this selection bias is small. We include Hausman tests of whether the more robust IV estimates are statistically significantly different from the potentially more efficient OLS estimates.
A. Enrollment
Table 3 presents results for enrollment outcomes. The four panels examine enrollment at either any postsecondary institution or at a four-year school, within six or 12 months of high school graduation. The table reports estimated coefficients on the interaction between Promise eligibility and graduation after the Promise, δ2, which we interpret as the effects of the Promise scholarship. Column 1 presents OLS estimates, Column 2 presents RF estimates, and Column 3 presents the IV estimates. The table also presents p-values from a Hausman test of equality of the OLS and IV estimates of δ2. As described below, using conventional thresholds we find that the OLS and IV estimates are not statistically different.
Table 3. Promise Effects on Enrollment
Table 4. Promise Effects on Enrollment by Type of School
Table 5. Promise Effects on College First Attended
The advantage of looking at short-term outcomes is that they can be measured for more cohorts. Because our postsecondary data run through the 2013–2014 school year, we observe eight post-Promise graduating classes, 2006–2013, as well as three pre-Promise graduating classes, 2003–2005.
For enrollment at any postsecondary institution within six months of high school graduation (Panel A), the estimated IV Promise effect is an imprecise but sizable net increase of 7.1 percentage points. The percentage point increase is large relative to the mean enrollment probability of 61.2 percent among eligibles in the pre-Promise period, representing an increase of 12 percent (0.071/0.612 = 0.116).25 The OLS estimate is only slightly larger—8.3 percentage points (14 percent)—but statistically significant. When the enrollment horizon is extended to 12 months (Panel B), the effects are smaller for both OLS and IV estimates. This finding suggests that the Promise may operate in part by accelerating the time to first enrollment, but the data are not precise on this point, and we do not emphasize it.
At either a six-month or 12-month horizon, OLS estimates suggest that the Promise increases four-year college enrollment by about nine percentage points, while IV estimates are slightly smaller, at six to seven percentage points, and noisier (Panels C and D). That the four-year enrollment effects are on par with the overall enrollment effects suggests two conclusions: (i) that Promise-induced enrollment is driven by the four-year sector and (ii) that the Promise may induce substitution from the two-year to the four-year sector.26 Because the base enrollment at four-year colleges is lower, the implied percentage effect from the IV estimate is slightly larger, at 18 percent (0.071/0.402 = 0.177). The corresponding OLS estimate implies a similar 23 percent increase (0.094/0.402 = 0.234). That these results differ little across horizon length suggests that the Promise’s effect on overall enrollment timing is driven by the four-year sector, which is plausible.27
For four-year college enrollment at six months, the IV and OLS estimates are quite close, but only the OLS estimates are statistically different from zero at conventional levels. Because this is suggestive but inconclusive of a Promise effect on four-year college enrollment, we briefly preview the between-district results (looking ahead to Table 7). These results show that the Promise led to a statistically significant 7.1 percentage point increase, which, given the pre-Promise baseline shown in that table, implies a 27 percent increase. These magnitudes are remarkably close to those in Table 3.
Table 7. Promise Effects on Enrollment and Completion Using Between-District Analysis
Table 8. Promise IV Effects by Group
Previous research shows that aid can affect which college a student attends. We explore this response in Table 4, which shows estimates of college attendance at Promise-eligible and Promise-ineligible schools. The first panel shows that attendance at a Promise-eligible institution—public two-year and four-year colleges in Michigan—increases by a large amount: 19 percentage points. This is an increase of 40 percent over the pre-Promise base (0.193/0.480 = 0.402), which echoes the findings by Andrews, DesJardins, and Ranchhod (2010) on ACT score-sending. We obtain similar point estimates when looking at Michigan four-year publics (Panel B), but because the pre-Promise base attendance at Michigan four-year schools is lower, the proportional effect on Michigan four-year attendance is nearly 59 percent (0.166/0.281 = 0.591).
The third panel shows that gains at Promise-eligible schools are partially due to losses at ineligible institutions. Such attendance declined by 12.2 percentage points, or 92 percent. Reassuringly, the sum of the estimates in Panels B and C accord closely with the net attendance results from Table 3. While not shown in Table 4, the drop in attendance at noneligible schools is driven by out-of-state schools, not private schools in Michigan. Of note, the IV estimates are similar to the OLS estimates and nearly identical in Panels A and B.
To show how the Promise affected enrollment at specific colleges, in Table 5 we report estimates for the probability of enrolling at specific postsecondary institutions. The overall increase in six-month enrollment is mostly driven by a 42 percent (=0.071/0.169) increase in the likelihood of attending the local public four-year, Western Michigan University, and a more than doubling of the likelihood of attending Michigan State University (=0.066/0.039), located 80 miles away. Although the estimated effect on enrolling at the University of Michigan is small, the likelihood of enrolling at the “flagships” (MSU or UM) is large, an increase of about 96 percent (=0.071/0.074). There are modest but imprecise positive impacts of attending the local Kalamazoo Valley Community College, possibly due to the offsetting effects mentioned earlier. Finally, we find no significant effect on enrolling at the local, private, and (at the time) non-Promise-eligible liberal arts school, Kalamazoo College. The patterns are similar when extending the college enrollment horizon to 12 months after graduation.28
As discussed in the introduction, evidence indicates that attending a higher-quality college increases college completion. Although Table 5 suggests that the Promise increased average college quality of KPS graduates, this hypothesis can be examined directly. We classify colleges based on 2004 Barron’s selectivity categories, which range from “most selective” to “non-selective.” We define a series of dummy dependent variables that equal one if the student enrolls in a college of a given selectivity or higher and zero otherwise; we then estimate Equation 1 using a linear probability model. The results (Appendix Table A.1) show that the Promise significantly increased enrollment in colleges in the “selective” and “very selective” categories, with no decline in enrollment in the highest two selectivity categories.29
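A sketch of this outcome construction follows; the exact set of Barron’s labels and the tiny example mapping are illustrative assumptions, not the paper’s coding.

```python
import pandas as pd

# Illustrative ordering of the 2004 Barron's selectivity categories (the paper
# names only "most selective" through "non-selective"; the intermediate labels
# here are assumptions). Higher rank = more selective.
barrons_rank = {
    "non-selective": 0, "less selective": 1, "selective": 2,
    "very selective": 3, "highly selective": 4, "most selective": 5,
}

# Hypothetical first-college categories; None = did not enroll anywhere.
df = pd.DataFrame(
    {"first_college_barrons": ["selective", "most selective", None]}
)
rank = df["first_college_barrons"].map(barrons_rank)

# One dummy per threshold: enrolled at a college of that selectivity or higher
# (missing college categories compare as False, i.e., zero).
for cat, r in barrons_rank.items():
    df[f"enroll_{cat.replace(' ', '_')}_plus"] = (rank >= r).astype(int)

print(df.filter(like="enroll_"))
```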
We have also investigated whether these initial enrollment impacts persist by examining the number of credits attempted over different horizons after high school graduation (Appendix Table A.7).30 While the OLS estimates show statistically significant increases in cumulative credits attempted through at least four years after high school graduation, the IV estimates, although still uniformly positive, are smaller and have wide confidence intervals. Since data limitations do not permit us to validate these estimates using the between-district analysis, we consider these credits results only suggestive.
B. Credential Completion
We now turn to Promise effects on degree completion (Table 6), the key outcome for both researchers and policymakers. This outcome is also one over which the financial aid literature is most divided, with some studies finding positive impacts (Scott-Clayton 2011) and others none (Cohodes and Goodman 2014; Sjoquist and Winters 2015).
Table 6. Promise Effects on Degree Attainment
Our degree attainment estimates focus on two outcomes at two time horizons. The outcomes are (i) receipt of any credential, including a certificate, an associate’s degree, or a bachelor’s degree, and (ii) receipt of a bachelor’s degree. The time horizons are four years and six years after high school graduation.
For the four-year time horizon, we have data for five post-Promise cohorts, through the class of 2010. For the six-year time horizon, we have data for only three post-Promise cohorts, through the class of 2008.
For the four-year horizon, for either any credential (Panel A) or bachelor’s degree (Panel C), the OLS estimates are close to zero and statistically insignificant. The IV estimates are slightly negative but also statistically insignificant. However, because the median duration to a bachelor’s degree is well over four years (Bound, Lovenheim, and Turner 2012; Cataldi et al. 2011), and Promise funding can be maintained by taking just 12 credits per semester (a five-year pace for a bachelor’s degree), four years may be too soon to expect an impact.31
Over a six-year horizon, both OLS and IV estimates are positive and large. The Promise IV effect on attaining any credential by six years is 12 percentage points. The 90-percent confidence interval ranges from 2 to 21 percentage points. The coefficient estimate of 12 percentage points would be judged by most researchers and policymakers to be a meaningful increase. Relative to a pre-Promise mean credential attainment of 36 percent among eligibles, the estimate represents an increase in credential attainment of 32 percent (=0.116/0.36). The corresponding OLS estimate is similar and equal to 10 percentage points.
For the attainment of a bachelor’s degree (Panel D), we again find sizable OLS and IV point estimates. The 7.9 percentage point increase translates to a 26 percent boost in the likelihood of earning a bachelor’s degree (=0.079/0.30). Although the IV point estimate is less precise (significant only at the 13 percent level), both estimates are strikingly similar in magnitude. These latter results suggest that most of the Promise effect on degree attainment comes from increasing bachelor’s attainment.
V. Sensitivity Analyses and Additional Estimates
We now examine the internal validity of our empirical approach through several tests of our identification strategy. Finding that results are robust, we explore heterogeneous impacts of the Promise on students by socioeconomic status, ethnicity, and gender.
Our identifying assumption is that the Promise-eligible and -ineligible groups do not have trends in unobservables that affect postsecondary outcomes. As discussed earlier, there are two possible threats to this identification. First, there might be differential pretrending: outcomes between the two groups were diverging even before the Promise. Second, the Promise may have induced changes in composition of the two groups—perhaps due to selective in- and out-migration—that led to more favorable outcomes for eligible versus ineligible students. We demonstrate below that the results are robust to excluding students who entered the district after the Promise. We use district-level data to show that a between-district analysis, resting on different identification, largely buttresses the findings from the within-KPS analysis.
A. Examining the Parallel Trend Assumption
To address the parallel trend assumption, we examine the differences (conditional on covariates) in postsecondary outcomes for the two groups separately for each year. This strategy allows us to look both at pre-trending before the Promise and at the timing of the Promise “effect” in subsequent years. If our identification assumption is valid, we expect an abrupt increase in the outcome among eligibles in the first year of the program, with no clear change among ineligibles. Of course, there may be trends consistent with true effects of the Promise. For example, over time, Promise-eligible students may become better prepared academically (Bartik and Lachowska 2013).
We estimate IV regression models of the following form:
$$y_{ist} = \gamma_1 Elig_{ist} + \sum_{t=2004}^{2013} \gamma_{3t} \left( Year_t \times Elig_{ist} \right) + \gamma_{st} + x_{ist}'\beta + u_{ist} \qquad (3)$$

where $Year_t$ is a vector of calendar-year dummies from 2003 to 2013. We set year 2003 as the omitted category but allow all years t to be interacted with the Elig indicator (as before, rescaled by the first-stage coefficient, which roughly equals 0.7). Equation 3 allows us to interpret each coefficient $\gamma_{3t}$ as the average difference in the outcomes of eligible students relative to the outcomes of ineligible students in year t. The other variables are defined as in Equation 1.
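A sketch of Equation 3 on synthetic data appears below; as before, the data frame and column names are hypothetical, and the reduced-form year-by-year gaps are rescaled by the 0.7 first-stage coefficient described in the text.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Rebuild the same kind of synthetic student file used in the Equation 1
# sketch (all column names hypothetical).
rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "elig_rules": rng.integers(0, 2, n),
    "cohort": rng.integers(2003, 2014, n),
    "hs": rng.integers(1, 4, n),
})
df["enroll_4yr"] = (
    rng.random(n) < 0.40 + 0.07 * df["elig_rules"] * (df["cohort"] >= 2006)
).astype(int)

# Equation 3: cohort-by-eligibility interactions with 2003 as the omitted base
# year; each coefficient is the eligible-ineligible gap in that cohort.
event = smf.ols(
    "enroll_4yr ~ elig_rules + C(cohort, Treatment(2003)):elig_rules"
    " + C(hs) * C(cohort)",
    data=df,
).fit(cov_type="HC1")

# Rescale the reduced-form year gaps by the first-stage coefficient (~0.7).
print(event.params.filter(like=":elig_rules") / 0.7)
```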
In Figure 2, we present fitted probabilities of enrollment in a four-year school within six months of high school graduation (compare it to Table 3, Panel C, Column 3).32 If these fitted probabilities were diverging by eligibility in the pre-Promise period, we might be concerned that the Table 3 results were spurious. This is not the case: the fitted probabilities evolve similarly between the classes of 2003 and 2005.
Figure 2 notes: The plotted values represent fitted probabilities of attending a four-year college within six months of high school graduation, by class year and Promise eligibility, allowing Promise effects to vary by year. The vertical black line indicates when the Promise began. The figure is based on IV estimates. See the Methodology section for details, Appendix Table A.2 for the IV point estimates, and Appendix Table A.6 for the first-stage estimates. Point-wise 95 percent confidence intervals are shown for the difference between eligibles and ineligibles.
Reassuringly, there is a sharp spike in attendance among eligibles that begins for the class of 2006 and that remains elevated over the remaining horizon, perhaps even increasing slightly over time. The time path for ineligibles is noisy, owing to smaller sample sizes, but there is little sustained increase, and the probabilities oscillate from year to year. These patterns support our identifying assumptions.
Figure 3 similarly presents results allowing for the full interaction of Promise eligibility and cohort for the fitted probability of attaining any credential within six years of high school graduation. The patterns are in accord with Figure 2: there is a jump among eligibles for the 2006 cohort, with the level remaining elevated over the rest of the cohort horizon; the levels for ineligibles fluctuate from year to year with no sustained increase.
Figure 3 notes: The plotted values represent fitted probabilities of attaining any credential within six years of high school graduation, by class year and Promise eligibility, allowing the Promise effects to vary by year. See the notes for Figure 2 and Table 6. Point-wise 95 percent confidence intervals are shown for the difference between eligibles and ineligibles.
The lack of any sustained post-Promise trend in academic effects makes it less likely that the Promise estimates are influenced by selective migration, which would be expected to lead to biases that grow over time.33 The lack of a sustained post-Promise trend also suggests that during this initial period, Promise effects have not grown dramatically due to students, parents, and the school district adjusting behavior.
B. Examining Selective In-Migration
To further address concerns about changing student composition, we check whether the estimates are robust to excluding students who enter KPS after the Promise. This strategy necessarily excludes an increasing number of students from the sample over time, particularly from the ineligible group, who are by definition later entrants.
We have re-estimated selected outcomes—enrollment, credits attempted, and credential attainment—using this smaller sample that includes only students who were enrolled in KPS before the Promise was announced. The results, along with comparisons to the earlier estimates, are shown in Appendix Table A.5. In most cases, the resulting point estimates from excluding late-entrant students are not close to being substantively or statistically different from the baseline estimates in Tables 3–6.
The exception is for enrollment at a four-year college. Excluding new entrants shrinks the point estimate, and the difference from baseline is statistically significant at the 5 percent level.
This finding has two interpretations. The first interpretation is that dropping new entrants reduces selection bias. The difficulty with this interpretation is that the completion estimates are robust to dropping late entrants, which would imply that the Promise increases postsecondary persistence without increasing four-year attendance, which is inconsistent with prior research.
A second interpretation is that dropping new entrants may cause bias, by making the post-Promise ineligible group less comparable to the pre-Promise ineligibles. Dropping new entrants has its effect largely by reducing the ineligibles in 2008 to a much smaller group, whose students are more likely to go to a four-year college.34 The exclusion of late entrants means that ineligibles from the class of 2008 contain no students who entered KPS in Grades 11 or 12, yet students who entered in these grades are still included among the pre-Promise ineligibles. If pre-Promise and post-Promise ineligibles are less comparable, then the ineligible dummy may not adequately control for fixed unobservables.
C. Between-District Analysis

Next, we estimate the effect of the Promise on enrollment and credentials, using district-level data on high school graduates provided by the Michigan Consortium for Educational Research (MCER).35 For each Michigan high school district, we observe graduation-cohort averages of college-going outcomes for the classes of 2003–2013. These data allow us to compare KPS to other districts in Michigan before and after the Promise was in effect, an identification strategy that supplements our more-detailed within-KPS analysis. We estimate the regression equation:
$$y_{st} = \delta_2 \left( After_t \times KPS_s \right) + x_{st}'\beta + \gamma_s + \gamma_t + \gamma_s \cdot t + u_{st} \qquad (4)$$

where s denotes the district and t denotes the academic year. The outcome variable of interest is y. $KPS_s$ equals one for KPS and zero for the control school districts (its main effect is absorbed by the district fixed effects), and $After_t \times KPS_s$ is an interaction between $KPS_s$ and $After_t$ (a dummy that equals one if the data are observed after 2005 and zero if before). The vector x includes time-varying district controls: the student–teacher ratio and the shares of students who are Black, Hispanic, and eligible for subsidized lunch. These controls are from the U.S. Department of Education’s Common Core of Data and are district-wide, not just for high school graduates. (When a control is missing, we set it to the cross-district mean for that year and include an indicator for missingness.) $\gamma_s$ and $\gamma_t$ denote district and year fixed effects, respectively. Because parallel trends seem doubtful for such diverse districts, we also control for district-specific time trends, $\gamma_s \cdot t$. (In Online Appendix Table C2, we show results without district time trends.) We weight observations by the size of the district’s graduating class.
The coefficient of main interest is $\delta_2$, the regression-adjusted difference in average college-going outcomes of students from KPS, compared both to other school districts and to pre-Promise outcomes. Our preferred set of control school districts comprises the members of the Michigan Middle Cities Education Association (MCEA), a consortium of 31 middle-sized Michigan urban school districts (http://www.middlecities.org) that face challenges similar to those of KPS.36 None of these school districts is adjacent to KPS, and hence they are unlikely to be affected by selective in- and out-migration due to the Promise, which was shown in Hershbein (2013) to be concentrated in neighboring districts. Online Appendix Table C1 presents graduate-weighted summary statistics for KPS, the MCEA districts, and all 511 districts in Michigan.
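A minimal sketch of Equation 4 on a synthetic district-by-year panel follows; district 0 plays the role of KPS, districts 1–31 stand in for the MCEA comparison districts, and all names and magnitudes are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic district-by-year panel standing in for the MCER data.
rng = np.random.default_rng(2)
panel = pd.DataFrame(
    [(d, t) for d in range(32) for t in range(2003, 2014)],
    columns=["district", "year"],
)
panel["kps"] = (panel["district"] == 0).astype(int)
panel["grads"] = rng.integers(100, 600, len(panel))          # class-size weights
panel["after_kps"] = panel["kps"] * (panel["year"] > 2005)   # After x KPS
panel["enroll_4yr"] = (
    0.26 + 0.07 * panel["after_kps"] + rng.normal(0, 0.03, len(panel))
)

# District-specific linear trends; one control district's trend is omitted so
# the trends are not collinear with the year fixed effects.
trends = " + ".join(
    f"I((district == {d}) * (year - 2003))" for d in range(31)
)

# Equation 4: district and year fixed effects, district-specific trends, and
# weighting by graduating-class size.
m = smf.wls(
    f"enroll_4yr ~ after_kps + C(district) + C(year) + {trends}",
    data=panel,
    weights=panel["grads"],
).fit(cov_type="HC1")
print(m.params["after_kps"], m.bse["after_kps"])
```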
Table 7 presents estimates of δ2 for our main outcomes of enrollment and credential completion. Because we have a single “treated” district (KPS), clustering the standard errors by district will not yield correct standard errors (Conley and Taber 2011). While for context we present heteroskedasticity-robust standard errors (which are indeed more conservative than when clustering by district), we employ the permutation-inference approach recommended by MacKinnon and Webb (2016) to calculate the p-value. This approach assigns a placebo treatment to every possible control, calculates the t-statistic in each case, and compares the t-statistic from the estimate on the actual treatment to the distribution of placebo t-statistics.
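Continuing with the synthetic panel from the previous sketch, the permutation procedure can be expressed compactly; this is a simplified rendering (district trends omitted for brevity) of the approach the text attributes to MacKinnon and Webb (2016).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def dd_tstat(panel: pd.DataFrame, treated: int) -> float:
    """t-statistic on After x Treated when `treated` is coded as the Promise
    district (district trends omitted to keep the sketch short)."""
    d = panel.copy()
    d["after_treat"] = ((d["district"] == treated) & (d["year"] > 2005)).astype(int)
    m = smf.wls(
        "enroll_4yr ~ after_treat + C(district) + C(year)",
        data=d, weights=d["grads"],
    ).fit(cov_type="HC1")
    return m.tvalues["after_treat"]

# Assign a placebo treatment to each control district in turn and compare the
# actual t-statistic (district 0 = KPS in the synthetic panel) to the placebo
# distribution.
t_actual = dd_tstat(panel, treated=0)
t_placebos = np.array([dd_tstat(panel, treated=d) for d in range(1, 32)])
p_value = np.mean(np.abs(t_placebos) >= abs(t_actual))
print(f"t = {t_actual:.2f}, permutation p = {p_value:.3f}")
```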
Turning to the results, Table 7 shows that the percentage changes implied by the estimates are remarkably similar to those obtained in the within-KPS analysis. Column 1 shows that the Promise increased enrollment in a four-year school within six months of graduation by about seven percentage points, which is a 27 percent increase (= 0.071/0.259). The point estimate is identical to the IV estimate from Table 3, and the relative magnitude is close to both the 23 percent increase implied by the OLS estimates in Table 3, Panel C, Column 1 and to the 18 percent increase implied by the IV estimates (Column 3, same table).
Column 2 shows that the likelihood of enrolling in a four-year Promise-eligible school increased by 61 percent (=0.110/0.181), while Column 3 shows that the likelihood of enrolling in a four-year non-Promise school decreased by 49 percent (=0.039/0.079), which are only slightly smaller than the implied relative changes from Table 4. Finally, Column 4 shows that the Promise had a large effect on increasing enrollment in Michigan flagship schools.
Turning to the completion results, the last two columns show that the probability of obtaining any credential after six years increases by about 26 percent (=0.061/0.238), and the probability of obtaining a bachelor’s degree by about 22 percent (=0.039/0.179). These two changes are only slightly smaller than those reported in Table 6, Panels B and D, Column 3 (32 percent and 26 percent, respectively). Although the completion point estimates in Table 7 are not statistically different from zero at conventional thresholds, these point estimates are nevertheless clearly economically meaningful and strikingly similar to the estimates obtained using variation within KPS. The similarity of the between-district and within-KPS analyses corroborates the causal impact of the Promise.37
VI. Heterogeneity
An important question is how Promise effects vary across demographic groups. The simplicity of the Kalamazoo Promise means that the scholarship is not targeted at those who need it most (the financially constrained) or those expected to benefit most (the academically capable). Although the credential completion estimates imply that the Promise does not simply reflect an income transfer to inframarginal students, it is possible that the gains are concentrated among more-advantaged groups, which could limit its potential to promote social mobility.
Table 8 reports selected results (using within-KPS data) for how Promise IV effects vary with family income (proxied by free or reduced-price lunch status), race, and gender.38 For conciseness, we focus on three outcomes: four-year college attendance within six months, six-year attainment of a bachelor’s degree, and any six-year credential.
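Mechanically, the subgroup analysis amounts to re-estimating the difference-in-differences model on sample splits, as in the sketch below, which reuses the synthetic data frame and hypothetical column names from the Equation 1 sketch; OLS stands in for the paper’s full IV procedure.

```python
import statsmodels.formula.api as smf

# Re-estimate the DD equation within each demographic split, reusing the
# synthetic df (with columns lunch, minority, female) built earlier. The
# paper's Table 8 reports the IV analogues of these coefficients.
for group in ["lunch", "minority", "female"]:
    for value in (0, 1):
        sub = df[df[group] == value]
        m = smf.ols(
            "enroll_4yr ~ elig_obs + elig_obs:after + C(hs) * C(cohort)",
            data=sub,
        ).fit(cov_type="HC1")
        print(f"{group}={value}: dd = {m.params['elig_obs:after']:.3f}")
```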
As shown in the first panel, both the lower-income students and their higher-income peers experience sizable gains in enrollment. For other outcomes, the point estimates are substantively important for both groups, though imprecise for lower-income students.
While point estimates are higher for students who were not eligible for free or reduced-price lunches, standard errors are sufficiently large that we cannot reject equal impacts across the two income groups. Because the baseline postsecondary outcomes are much lower for the low-income groups, the effect of the Promise in proportional terms is often larger for these students. In particular, their attendance at four-year colleges increases by (an imprecise) 36 percent, relative to 32 percent for higher-income students. Bachelor’s completion within six years rises (imprecisely) by 51 percent, compared with a rise of 38 percent for higher-income students. For completion of any credential within six years, the estimates imply (imprecise) percentage gains of 57 percent for low-income students and 39 percent for higher-income students. These results suggest that the Promise’s benefits reach broadly across the economic spectrum.
The second panel shows differential results for white students and minority students (who are overwhelmingly Black or Hispanic; see Table 2). The estimates on four-year college enrollment are similar across racial groups, at 7–10 percentage points. Because of the lower baseline for students of color, the effect is larger in proportional terms relative to white students: 27 percent versus 20 percent. For baccalaureate completion, the point estimates again are somewhat imprecise, but that for nonwhite students is more than twice as large as that for white students; the difference in proportional impacts is even larger, 60 percent for nonwhite students and just 11 percent for white students. Looking at all credential completion, the racial gap becomes even more stark: a statistically significant 19 percentage point (82 percent) gain for nonwhite students, compared to a (noisy) null effect for white students. This difference falls just short of statistical significance. The Promise thus boosts postsecondary outcomes among racial minorities (who typically are economically disadvantaged) at least as much as it does for white students.
Comparing males and females in the third panel, Promise effects are larger for women than for men. Baseline means vary relatively little across sexes, but the point estimates are consistently large and often statistically significant for women, and smaller and always statistically insignificant for men. (Because of the reduced sample sizes, however, the male-female differences are seldom statistically significant.) The weaker impacts for men are consistent with the recent findings of Bertrand and Pan (2013) and Autor et al. (2015), who offer evidence that family disadvantage has a more harmful effect on the academic outcomes of boys than girls.
To investigate whether these within-KPS results are also present in the between-district analysis, we have estimated versions of Model 4 for subgroups defined by income (again proxied by free or reduced-price lunch status), race (white vs. nonwhite), and gender. Table 9 presents results for enrollment at a four-year college within six months of high school graduation and for obtaining a bachelor’s degree within six years of graduation. As in Table 8, estimates are similar for low-income and non-low-income students, although the former group has a larger percentage increase due to a lower baseline. While the point estimates are economically large, they are noisy, coming from a sample of a few hundred observations, some of which may be based on relatively few students in an income group.
Table 9. Between-District Estimates of Promise Effects by Group
Turning to the results by race, the between-district estimates differ more from those in Table 8. Although both white and minority students experience an increase in enrollment from a similar baseline, the increase for the latter is much smaller and imprecisely estimated. Whereas the likelihood of a bachelor’s degree increases by 40 percent (=0.082/0.203) for white students, the point estimate is small, imprecise, and negative for students of color. It is not clear why these between-district results by race differ from the within-KPS ones, but it is possible that white students in KPS were on a different trajectory than their counterparts in the comparison districts vis-à-vis the analogous trajectory for students of color.
The last panel of Table 9 shows results by gender. We again observe a large increase in the likelihood of enrollment for both males and females, although the relative effect for men is larger here than in Table 8. Once again, however, women experience the larger gain in the likelihood of obtaining a bachelor’s degree, a 45 percent (=0.090/0.201) increase; for males, the point estimate is small and negative.
To summarize, the heterogeneity analyses based on both the within-KPS sample and the between-district sample indicate that Promise effects were not restricted to higher-income students but benefited low-income and non-low-income students similarly. Moreover, both sets of analyses imply that women benefited more from the Promise than men. However, our results for different racial groups are more sensitive to the choice of methodological approach.
VII. Discussion and Conclusion
In this paper, we show that the Kalamazoo Promise, a generous place-based college scholarship, has large effects on postsecondary outcomes. Our estimates show sizable percentage effects on postsecondary college choice and attainment, and they are robust to different identification strategies. The estimates accord with both theory—for example, substitution of enrollment from Promise-ineligible to Promise-eligible colleges—and prior evidence from ACT score-sending (Andrews, DesJardins, and Ranchhod 2010).
The pattern of Promise effects across students is similar to that in Angrist et al. (2015), who study the effect of full college scholarships that were randomly assigned to academically talented high school seniors in Nebraska. They find the strongest effects on enrollment and second-year persistence for disadvantaged groups, such as racial minorities.39
These results are quite different from those of Cohodes and Goodman (2014), who study the effects of the Adams scholarship, a tuition-waiver program available to students attending public colleges in Massachusetts. Cohodes and Goodman find that the scholarship increased enrollment but, by shifting college choice toward lower-quality public colleges in Massachusetts, resulted in lower college graduation rates. The contrast between our findings and theirs illustrates that the local context of college scholarships matters for postsecondary success. Given the public versus private college options available to Michigan students, the Kalamazoo Promise incentivized students to trade up, rather than down, the college quality spectrum. And the Michigan context is arguably more representative of the nation than the Massachusetts context, with the possible exception of the Northeast.
We also find that the Promise had a stronger impact on women than on men. We speculate that this finding could be explained by the results in Bertrand and Pan (2013) and Autor et al. (2015), who both document that family disadvantage has a disproportionately stronger effect on the educational outcomes of boys than of girls. It is important, however, to keep in mind that our primary strategy for identifying Promise effects focuses on cohorts who found out they would be Promise-eligible relatively late. If boys tend to mature later than girls, then these cohorts of boys might not have responded to the Promise in the same way as girls did. If this is the explanation, then our results for men might not generalize to later time periods or other settings.
The “within-KPS” identification strategy we have used in this paper provides a conservative estimate of Promise effects. By its design, it cannot capture Promise effects that are school-wide or community-wide.40 For example, the Promise has led to intensive efforts, by both KPS school officials and many in the Kalamazoo community, to encourage a more “college-going culture” among students and their parents and guardians. Billboards, mailings, class meetings, and school-wide and community-wide meetings inform parents and students about the nature and benefits of college and about the application process. Counseling, tutoring, and support services encouraging students to stay in school and succeed have been initiated. More Advanced Placement courses have been offered. These KPS and community efforts may affect the college enrollment and success of all KPS students, both Promise-eligible and -ineligible.
Despite these limitations, the Promise effects are large, and they speak to the potential of place-based scholarship programs to be a cost-effective way of increasing earnings. A back-of-the-envelope calculation drawing on our degree completion estimates (based on Bartik, Hershbein, and Lachowska 2016) shows that the present value of increased career earnings exceeds the costs of Promise tuition subsidies at all real discount rates up to 11.3 percent.41 At a real discount rate of 3 percent (5 percent), the implied Promise earnings effects have a present value that is about 4.7 (3.0) times the present value of Promise subsidy costs. Because we believe the external validity of our results to be high, at least in similar contexts, this conclusion could likely apply to other urban school districts considering setting up their own Promise-like scholarships—to the extent that they closely follow the Kalamazoo model in terms of universality and generosity.
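To make the discounting explicit (the notation is ours): letting \Delta E_t denote the Promise-induced earnings gain and C_t the tuition-subsidy cost t years after high school graduation, the break-even rate cited above is the r^* that solves

\[ \sum_{t \ge 0} \frac{\Delta E_t}{(1+r^*)^t} \;=\; \sum_{t \ge 0} \frac{C_t}{(1+r^*)^t}, \qquad r^* \approx 0.113, \]

and the ratio of the left-hand side to the right-hand side, evaluated at r = 0.03 and r = 0.05, gives the benefit-cost ratios of roughly 4.7 and 3.0 reported above.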
On the other hand, a Promise-like scholarship can solve only a portion of America’s skills challenge. The Promise increases postsecondary credential attainment at six years after high school graduation from about 36 percent to about 48 percent, and presumably some of the remaining 52 percent would also benefit from a postsecondary credential. “Free” college is insufficient by itself to ensure successful postsecondary education. However, our results indicate that a simple, universal, and generous scholarship program can significantly increase the educational attainment of American students, and that it can help low-income as well as non-low-income students, and therefore can have broad benefits.
Appendix
Table A.1. Promise Effect on Enrollment at Colleges Categorized by 2004 Barron’s Selectivity Categories
Table A.2. Promise IV Effects by Year for Selected Outcomes
Table A.3. IV Promise Effect with Clustered Standard Errors
Table A.4. IV Promise Effects without Controlling for Demographic Characteristics
Table A.5. Robustness to Excluding Late Entrants (Reduced-form Estimates)
Table A.6. Estimates from First-Stage IV Regressions
Table A.7. Promise Effects on Credits Attempted
Application Form for the Kalamazoo Promise
Footnotes
The authors thank Drew Anderson, Raj Darolia, Josh Goodman, Jeff Henig, and Judy Scott-Clayton, as well as conference participants at the AEFP, AERA, APPAM, and SEA, seminar participants at the University of Maryland, the Urban Institute, and the Washington Economics of Education working group, and three anonymous referees for valuable comments and suggestions. They are grateful to Brian Jacob and Michigan Consortium for Educational Research (MCER) staff for their help in obtaining the MCER data and Bob Jorth of the Kalamazoo Promise, Michael Rice of Kalamazoo Public Schools (KPS), and Carol Heeter of Kalamazoo Valley Community College for assistance and for providing data. They also thank the William T. Grant Foundation for its generous support and Lumina Foundation for its support of the Promise Research Consortium. Stephen Biddle, Wei-Jang Huang, and Nathan Sotherland provided valuable research assistance. All errors are those of the authors. The authors have no material interests or current affiliation with the Kalamazoo Promise or KPS; Bartik previously served on the KPS Board between 2000 and 2008. The analysis uses confidential student-level data obtained with permission from KPS, which the authors are unable to publicly share. Interested researchers who want to use these data would have to enter into a contract with KPS and explain the proposed use of the data and security measures that would be put in place. The analysis also relies on district-level data provided by the Michigan Education Data Center (MEDC), formerly known as the Michigan Consortium for Educational Research (and referenced as such in the article). These data can be obtained by following the procedure found at https://medc.miedresearch.org/.
Supplementary materials are freely available online at: http://uwpress.wisc.edu/journals/journals/jhr-supplementary.html
1. See, for example, Bartik and Lachowska (2014), Burke (2014), Caplan-Bricker (2014), CBS News (2007), Economist (2008), Fishman (2012), and NBC News (2013).
2. Our review of the overall college scholarship research literature only skims the surface. Deming and Dynarski (2010) and Page and Scott-Clayton (2016) provide excellent reviews.
3. Page and Scott-Clayton (2016) briefly review the place-based scholarships literature. Dynarski (2004) notes that some state merit scholarships have modest requirements, and this makes research on merit scholarships pertinent to this paper. Additionally, Dynarski (2003) found that when Social Security ended a student benefit entitlement in 1982, educational attainment fell for affected students. However, the population in her study faced different educational options and college costs than today.
4. Evidence also suggests that the Kalamazoo Promise has improved high school outcomes (Bartik and Lachowska 2013) and shaped postsecondary school choices (Andrews, DesJardins, and Ranchhod 2010).
5. According to U.S. News rankings, the University of Michigan is “most selective” and Michigan State is “more selective.” In contrast, the flagship public college in Massachusetts—University of Massachusetts, Amherst—ranks as “selective.” UMass-Amherst is closer in selectivity to the public four-year college in Kalamazoo, Western Michigan University.
6. Poverty rates are from the U.S. Census Bureau’s Small Area Income and Poverty Estimates, and enrollment by ethnicity is from Michigan’s Center for Educational Performance and Information.
7. More precisely, the requirement is being enrolled as of the fall count day in ninth grade.
8. This use rate is the share of eligible students who successfully submit forms, enroll at a Promise-eligible institution, and receive a Promise scholarship for at least one credit hour. Nearly all students—eligible or not—submit applications. For comparison, in 2012, the estimated Kalamazoo County completion rate for the FAFSA was only 63 percent (Kalamazoo Area College Access Network 2013).
9. Beginning with the high school class of 2015, KPS graduates can also use the Promise at 15 Michigan private colleges. For these colleges, the Promise will pay up to the tuition and fees of the University of Michigan, the most expensive public college; the private colleges themselves will pay the remaining tuition costs (Mack 2014). Our analysis period precedes this change.
10. Although students do not need to apply for other scholarships to receive the Promise, Promise-eligible students are encouraged to fill out the FAFSA, as federal aid (for example, Pell grants) can be used for college expenses such as room and board, books, and supplies that the Promise does not cover. In fact, FAFSA completion rates are higher in KPS than in other school districts in the county, despite the socioeconomic differences mentioned above, suggesting that the Promise does not crowd out federal aid.
11. These total cost figures are in 2012 dollars and use a 3 percent discount rate to calculate present values as of high school graduation. The cost calculations use data from the KPS graduating classes of 2006 and 2007.
12. Specifically, students with 80 percent or greater scholarships must be grouped together, as data do not go back far enough to precisely assign a scholarship percentage to the 2003 graduates.
13. For example, 81.3 percent of these students were eligible for subsidized lunch, higher than both the overall sample rate of 55.8 percent and the rate for students ineligible under both schemas, 66.3 percent. Online Appendix A provides further information on discrepancies between administrative and attendance-history eligibility for the post-Promise period.
14. Details of this matching procedure are found in Online Appendix A.
15. As documented by Dynarski, Hemelt, and Hyman (2015), NSC coverage is high but not exhaustive. For this study, the main issue is that the local two-year college, Kalamazoo Valley Community College (KVCC), has NSC records only since the fall of 2005. To avoid excluding earlier KVCC students, we obtained enrollment data from KVCC covering the summer of 2003 through the summer of 2005 for KPS students who graduated between 2003 and 2005, and we requested that these data be assembled via the same process used for NSC submissions. Although one might be concerned that the use of different data sources could bias our estimates, this concern is alleviated because our results show most of the Promise effects are on four-year enrollment and bachelor’s attainment, which are unaffected by the inclusion of the KVCC data.
16. As we show in Table 2, students who enrolled after ninth grade tend to be more disadvantaged. Ineligible students may differ from eligible students on unobservables as well.
17. Naturally, one interacted term is omitted because of the constant.
18. We have also allowed errors to be correlated among students from the same school-cohort, but not between students from different school-cohorts (Appendix Table A.3). Because this clustering often produces slightly smaller standard errors than the standard Huber–White standard errors (potentially because of the small number of clusters), we adopt the more conservative inference from the latter approach.
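As a concrete illustration of the two inference approaches in footnote 18, here is a minimal sketch in Python with statsmodels; the DataFrame and column names are hypothetical stand-ins, not the authors’ code:

import statsmodels.formula.api as smf

# df: hypothetical student-level DataFrame with an outcome, the After x Elig
# interaction, cohort and school identifiers, and a school-cohort cell id

# Huber-White (heteroskedasticity-robust) standard errors, the inference the paper adopts
hw = smf.ols("outcome ~ after_x_elig + C(cohort) + C(school)", data=df).fit(cov_type="HC1")

# Alternative: allow error correlation within school-cohort cells
cl = smf.ols("outcome ~ after_x_elig + C(cohort) + C(school)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school_cohort"]})

# With few clusters, the clustered standard errors can be misleadingly small,
# which is why the (typically larger) Huber-White errors are retained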
19. Online Appendix Table A1 shows the number of these exceptions by graduating cohort and compares their observable characteristics to those of students for whom administrative and rules-based eligibility agree.
20. In practice we obtain the IV estimates by replacing the interaction After × Elig in Equation 2 with the interaction After × π̂ Elig, where π̂ is the first-stage coefficient; Appendix Table A.6 reports these coefficients. We implement this rescaling because the perfect collinearity of eligibility in the pre-period renders the standard IV commands infeasible for the pooled pre- and post-Promise periods. Our approach of rescaling the interaction term might, however, lead to underestimating the IV standard errors, as they do not account for the uncertainty in estimating π. Our reported statistical precision of the IV estimates is the same as the precision of the RF estimates.
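A minimal sketch of this rescaling in Python (variable names are hypothetical; the actual first-stage specification underlies Appendix Table A.6):

import statsmodels.formula.api as smf

# First stage, estimable only on post-Promise cohorts: regress administratively
# measured eligibility on rules-based eligibility to obtain pi_hat
pi_hat = (smf.ols("elig_admin ~ elig_rules", data=df[df["after"] == 1])
          .fit().params["elig_rules"])

# Replace After x Elig with After x (pi_hat * Elig) and estimate by OLS;
# as the footnote cautions, these standard errors ignore sampling error in pi_hat
df["after_x_pi_elig"] = df["after"] * pi_hat * df["elig_rules"]
iv = smf.ols("outcome ~ after_x_pi_elig + after + elig_rules",
             data=df).fit(cov_type="HC1")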
21. Appendix Table A.4 presents IV estimates obtained from a model without the demographic controls. The results, especially for credential attainment, are similar.
22. The change in the fraction at High School 1 is due to a 2008 KPS redistricting, which affected newly enrolled students, that is, ineligible students, more than students enrolled previously. We control for this differential across high schools by including high school-by-year fixed effects.
23. The slight changes in composition among eligibles and ineligibles shown in Table 2 are consistent with earlier work. Hershbein (2013) studies changes in KPS enrollment after the Promise. Despite a temporary increase in student entrants, and an enduring decrease in exits, there were only slight changes in the academic and socioeconomic composition of entering and exiting students, and essentially no change in the stock of students as a whole, compared to the pre-Promise baseline. Similar findings are in Bartik, Eberts, and Huang (2010). Online Appendix B reproduces parts of the analysis in Hershbein (2013).
24. We cannot deal similarly with possibly selective out-migration, as we do not have college enrollment and graduation data on students who left KPS before graduation.
25. If we translate this effect into a percentage (or percentage point) increase in enrollment per thousand dollars of scholarship, as was done in previous research on merit scholarships, we find that enrollment increases by about 3.5 percent (2.2 percentage points) per thousand dollars per semester. These figures come from dividing the estimates by 3.3 ($3,300), the average Promise spending (in thousands of 2012 dollars) in the fall after high school graduation, aligning with the outcome of enrollment within six months. This spending number is lower than the $4,000 per semester as of 2014 mentioned earlier, mostly because of lower tuition in earlier years.
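Multiplying back by the average scholarship amount, these per-thousand-dollar figures imply a total effect on enrollment within six months of high school graduation of roughly

\[ 2.2 \times 3.3 \approx 7.3 \ \text{percentage points}, \qquad 3.5 \times 3.3 \approx 11.6 \ \text{percent of the baseline}. \]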
26. These conclusions are only suggestive because the four-year enrollment effects are not statistically different from the any-enrollment effects. If we examine two-year college enrollment directly, we find a small, negative point estimate not statistically different from zero, in line with the difference between any-enrollment and four-year enrollment effects. The small point estimate for the two-year sector masks offsetting effects, as some students upgrade to four-year colleges and others are induced to attend college at all.
27. In our data, 98 percent of students who enroll in a four-year college within 12 months of high school graduation do so within six months. Only 83 percent of students who enroll in a two-year college within 12 months of high school graduation do so within six months.
28. Table 5 is consistent with Andrews, DesJardins, and Ranchhod (2010), who find the Promise increased the submission of ACT scores to the flagship universities and WMU, with little change at KVCC or Kalamazoo College.
29. We also analyzed college quality effects using a Black and Smith (2006)–style quantile index. These results are similar to those using the Barron’s measures and are available from the authors.
30. Although the NSC data do not report credits, they report intensity of enrollment: full-time, half-time, and less than half-time. We convert terms to a semester equivalent and assign values of 12 credits for full-time students, six credits for half-time students, and three credits for less than half-time students. (NSC also reports whether a student withdrew before the end of term, which we code as zero credits.) These credits are summed over various time horizons since high school graduation, and students who never enroll are assigned zeros. We expect this credits-attempted variable to be positively correlated with credits earned and with degree attainment.
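The intensity-to-credits coding in footnote 30 amounts to a simple mapping and aggregation; here is a minimal sketch in Python with pandas (column names are hypothetical, not the authors’ data layout):

import pandas as pd

# Semester-equivalent credits per term, per the coding rule in footnote 30
INTENSITY_TO_CREDITS = {
    "full-time": 12,
    "half-time": 6,
    "less-than-half-time": 3,
    "withdrawn": 0,  # withdrew before the end of the term
}

def credits_attempted(terms: pd.DataFrame, student_ids: pd.Index,
                      horizon_years: float) -> pd.Series:
    """Sum semester-equivalent credits attempted within `horizon_years` of
    high school graduation; students who never enroll are assigned zero."""
    window = terms[terms["years_since_grad"] <= horizon_years].copy()
    window["credits"] = window["intensity"].map(INTENSITY_TO_CREDITS)
    totals = window.groupby("student_id")["credits"].sum()
    return totals.reindex(student_ids, fill_value=0)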
31. Four-year results are similar using the smaller six-year sample.
32. The fitted values apply the coefficient estimates from the full interaction of eligibility and class year to the adjusted mean outcome of the omitted group (ineligibles in 2003), where the adjustment holds covariates at their sample means over the whole analysis period. Appendix Table A.2 shows the estimated coefficients that are the basis for Figures 2 and 3. Online Appendix D shows the analogous OLS estimates.
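In other words (the notation is ours), the fitted value plotted for eligibility group e in class year c is

\[ \hat{y}_{e,c} \;=\; \tilde{y}_{0,2003} + \hat{\theta}_{e,c}, \]

where \tilde{y}_{0,2003} is the mean outcome of 2003 ineligibles adjusted to whole-period sample-mean covariates, the \hat{\theta}_{e,c} are the eligibility-by-class-year interaction coefficients, and \hat{\theta}_{0,2003} \equiv 0 for the omitted cell.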
33. This pattern is consistent with the Hershbein (2013) findings of minimal selective migration impacts following the Promise.
34. Dropping late entrants means that ineligibles in the class of 2007 must have entered KPS in tenth or eleventh grade and ineligibles in the class of 2008 must have entered KPS in tenth grade; ineligibles in later cohorts are excluded by construction. We lose 56 ineligible late-entrant students from the classes of 2008 or earlier, 34 of whom come from the class of 2008. The count of ineligibles in the class of 2008 falls from 54 (with four-year college enrollment of 0.185) to 20 students (with four-year college enrollment of 0.450).
35. The MCER is a cooperative endeavor among the University of Michigan, Michigan State University, and the Michigan Department of Education to produce a harmonized version of the state’s education data. We obtained aggregated data from a period earlier than generally available to researchers.
36. The MCER comparison districts include: Albion, Battle Creek, Bay City, Beecher, Benton Harbor, Dearborn, Ferndale, Flint, Garden City, Grand Rapids, Hazel Park, Highland Park, Jackson, Kalamazoo, Lansing, Monroe, Mt. Clemens, Mt. Pleasant, Muskegon, Muskegon Heights, Niles, Pontiac, Port Huron, Romulus, Saginaw, Southfield, Waterford, Wayne-Westland, Westwood, Willow Run, and Ypsilanti.
37. In Online Appendix C, we also present results using nearly all other school districts in Michigan as the control set, as well as results using the Abadie, Diamond, and Hainmueller (2010) synthetic control method.
38. Online Appendix D presents a version of this table using OLS instead of IV. These estimates are substantively similar but more precise.
39. Swanson and Ritter (2018), a study begun after this paper, finds qualitatively and quantitatively similar results on attendance and completion for students affected by the El Dorado (AR) Promise, the place-based scholarship most similar to Kalamazoo’s in design.
40. The supplementary between-district analysis can partially capture district-wide effects but relies on a somewhat different counterfactual.
41. In comparison, Zimmerman (2014) estimates that the internal rate of return of admitting academically marginal students to four-year colleges is between 6 and 14 percent. Ost, Pan, and Webber (2018) find that for academically marginal students, the internal rate of return for persisting in college is between 4 and 19 percent.
- Received April 2016.
- Accepted March 2019.
This open access article is distributed under the terms of the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0) and is freely available online at: http://jhr.uwpress.org