Journal of Human Resources
Research Article

More than Dollars for Scholars

The Impact of the Dell Scholars Program on College Access, Persistence, and Degree Attainment

Lindsay C. Page, Stacy S. Kehoe, Benjamin L. Castleman and Gumilang Aryo Sahadewo
Journal of Human Resources, July 2019, 54 (3) 683-725; DOI: https://doi.org/10.3368/jhr.54.3.0516.7935R1
Lindsay C. Page is an associate professor of research methodology at the University of Pittsburgh (lpage{at}pitt.edu). Stacy S. Kehoe is an associate program officer on the Bill & Melinda Gates Foundation U.S. Program K-12 team (stacy.kehoe{at}gatesfoundation.org). Benjamin L. Castleman is an associate professor of education and public policy at the University of Virginia (castleman{at}virginia.edu). Gumilang Aryo Sahadewo is an assistant professor, Faculty of Economics and Business, at Universitas Gadjah Mada (gasahadewo{at}ugm.ac.id).

Abstract

Socioeconomic inequalities in college completion have widened over time. A critical question is how to support low-income and first-generation students to achieve college success. We investigate one effort, the Dell Scholars Program, which provides a combination of financial support and individualized advising to selected students who attend institutions throughout the United States. Using two quasi-experimental analytic strategies, regression discontinuity and difference-in-differences with a matched comparison sample, we find consistent evidence that being selected as a Dell Scholar leads to substantially higher rates of bachelor's degree completion within six years, as well as improvements on multiple other measures of college success.

JEL Classification
  • I2

I. Introduction

Overall college enrollment rates in the United States have increased substantially over the past several decades. Yet socioeconomic gaps in college completion have widened. For example, the share of young people in the top income quartile earning a four-year college degree by age 25 increased from 36 to 54 percent between birth cohorts from the early 1960s and the late 1970s. In contrast, among those in the lowest income quartile, bachelor’s attainment increased only from 5 to 9 percent over the same period (Bailey and Dynarski 2011). Given the stark differences in these trends and the relationship between educational attainment and subsequent earnings and other life outcomes (Jepsen, Troske, and Coomes 2014; Ma, Pender, and Welch 2016; Oreopoulos 2007), there is a critical need for additional evidence on effective strategies to support low-income students to complete postsecondary degrees and credentials.

Recognizing this need, a variety of organizations—from local college access programs to the federal government—have invested substantially in programs and policies to support and improve college outcomes for economically disadvantaged youth. Such efforts can generate improvements in college access for lower-income populations (see Page and Scott-Clayton 2016 for a comprehensive review of rigorous empirical evidence on efforts to improve college access). In addition, evidence points to positive effects on college persistence and degree attainment from efforts, such as need- and merit-based financial aid (Castleman and Long 2016; Denning, Marx, and Turner 2017; Goldrick-Rab et al. 2016; Scott-Clayton 2011), college advising programs that focus on outreach and support to students before or while in college (Bettinger and Baker 2014; Carrell and Sacerdote 2017), and even lower-touch nudges that provide simplified information about and assistance with important processes like financial aid renewal (Castleman and Page 2016). Not all efforts, however, demonstrate positive long-term effects for students. For example, DesJardins and McCall (2014) find that although the Gates Millennium Scholars Program, which provides scholarship funds to high-achieving, low-income students, led to modest increases in students’ grade point averages (GPAs) through junior year of college, it had no impact on four-year bachelor’s degree attainment.1

Many prior interventions have focused on a single dimension of students’ postsecondary experience. For instance, many programs provide financial assistance but do not include academic guidance. Similarly, many intensive advising programs do not concurrently address the financial constraints students face to degree completion. However, recent evidence suggests that a more comprehensive approach to addressing barriers to degree completion can yield substantial positive impacts. The intensive structural, advising, and financial support provided through the Accelerated Study in Associates Program (ASAP) has produced dramatic improvements in associate’s degree completion at the City University of New York (Scrivener and Weiss 2013). Specific to the University of North Carolina–Chapel Hill context, the Carolina Covenant, which provides low-income students admitted to the university with a full cost-of-attendance scholarship and additional counseling and supports, led to improvements in on-time bachelor’s degree attainment on the order of eight percentage points. These strong completion effects emerged only after the program incorporated nonfinancial supports (Clotfelter, Hemelt, and Ladd 2017). Experimental evidence to date on the Buffett Scholarship, similar in structure to the Carolina Covenant but implemented at selected institutions in Nebraska, reveals sizeable impacts on early college persistence (Angrist, Hudson, and Pallais 2015). Common across these programs that provide multifaceted support is that they reside in and are operated primarily by colleges themselves. A critical question is whether it is feasible and effective to provide a comprehensive set of supports that bolsters the entire college-going pipeline, that is scalable, and that operates independent of (or external to) specific postsecondary institutions.

We shed light on this question and contribute to the growing literature on comprehensive efforts to promote college completion by investigating the Dell Scholars Program, a college success initiative sponsored and administered by the Michael and Susan Dell Foundation (MSDF).2 The program targets motivated low-income students who have overcome adversity, who seek to obtain a bachelor’s degree, and who have the potential to enroll and succeed in college. Selected students receive financial support, including a total of up to $20,000 in scholarship funds. Beyond this direct financial aid, students also receive ongoing support and assistance, as stated on the program’s website, “to address all of the emotional, lifestyle, and financial challenges that may prevent scholars from completing college.” The program recognizes and actively supports students in overcoming challenges that include dealing with stress, getting out of debt, managing child care, and other varied life circumstances as they arise. This programmatic model is motivated by a theory of action that in order to meaningfully increase the share of lower-income students who earn a college degree, it is necessary both to address the financial constraints students face and to provide ongoing support for the academic, cultural, and other challenges that students experience during their college careers.

We identify the impact of the Dell Scholars Program on college persistence and completion using two complementary analytic strategies. First, we capitalize on an arbitrary, sharp cutoff in the selection process that determines which applicants are chosen as Dell Scholars and use a regression discontinuity (RD) design to estimate the impact of program selection on college enrollment, persistence, and degree attainment using outcome data from the National Student Clearinghouse (NSC). Here, our results indicate that being named a Dell Scholar has little to no impact on initial college enrollment. Yet, the program has positive impacts on college persistence and completion. Second, to bolster our RD results and to extend our inference beyond the margin of selection, we match college-enrolled Dell Scholars and non-Scholar finalists (that is, those below the selection threshold) to observationally similar students from a nationally representative data set of first-time college students. For college persistence and completion outcomes observable for students above and below the selection threshold, we use a difference-in-differences (DID) strategy to estimate program impacts for all Dell Scholars.

We obtain consistent results across the RD and DID approaches that point to the program’s impact on postsecondary success. Both at the margin of selection and overall, scholars are 8–12 percentage points more likely to persist into their third year of college, 6–10 percentage points more likely to earn a bachelor’s degree within four years, and 9–13 percentage points more likely to earn a bachelor’s degree within six years, compared to their non-Scholar counterparts. These impacts are sizeable and represent improvements on the order of 20 to 25 percent over baseline levels of four- and six-year bachelor’s attainment overall, with larger relative impacts in the context of less selective postsecondary institutions.

We then consider how the program shapes students’ college experiences to produce these results. We are not able to explore college process outcomes with the RD or DID framework, as we cannot observe such experiential measures for students not selected into the program. Therefore, we rely on extensive data tracked on the Dell Scholars and analogous data elements collected on comparison students to whom they are matched to estimate covariate-controlled, first-difference impacts on outcomes such as postsecondary academic performance, credit attainment, and loan borrowing behavior, as well as employment while in college. We show that scholars attain college credits at a faster rate, have significantly higher GPAs, are less likely to earn a GPA below 2.0, are less likely to take on either federal or private loans, and are less likely to work a high number of hours compared to their matched counterparts. We further contextualize these results, drawing on interviews with program staff members and scholars, program administrative data, and observations of program practices. Consistent with the programmatic aims, our data sources collectively reveal the positive impact that the Dell Scholars Program has on many aspects of students’ undergraduate experiences.

In addition to our impact analyses, we conduct a back-of-the-envelope cost–benefit analysis to assess whether these substantial increases in college completion are sufficient to merit the intensive investment that the Dell Scholars Program makes in its recipients. Although our calculations hinge on several assumptions, as we outline, they nevertheless suggest that the program investment has a positive rate of return. Given that those selected as Dell Scholars are predominantly first-generation college students from low-income backgrounds, our findings have important implications for efforts to expand college success in the United States.

II. The Dell Scholars Program

The Dell Scholars Program is a college-success initiative that provides financial and nonfinancial resources to low-income students identified as having the potential to enroll and succeed in college. MSDF launched the program in 2004 with a goal to support low-income and first-generation students from college enrollment to bachelor’s degree completion. Students selected to be Dell Scholars receive a total of up to $20,000 in scholarship funds, a laptop computer, and textbook support.3 Compared to other scholarship programs, the Dell effort is unique in that it also provides ongoing outreach, close monitoring, and assistance to students who are geographically dispersed across postsecondary institutions throughout the United States.

This ongoing monitoring and support is delivered through a web-based platform developed at MSDF to manage student communication, tracking, and case management. Through this tool, the program collects data from all Dell Scholars at key check-in points, including the summer before initial enrollment, after their first semester, and after every academic year. For each check-in, students are required to enter information about their academic achievement, financial aid package, and situational information, such as work hours, living circumstances, and emotional well-being. In addition, they are required to upload supporting documentation to verify their entries.4 Based on these data, students who exhibit predetermined academic, financial, and/or situational risk indicators are flagged for potential follow-up. The program team at MSDF reviews all flagged cases and determines whether follow-up is needed. Communication is then managed by a Dell Scholars Program retention officer who has extensive experience with supporting students in navigating financial, academic, and situational challenges that arise on postsecondary pathways.5 In addition to this staff-led outreach, Dell Scholars are encouraged to reach out to the program team at any time for guidance or help. All communication flows through and is logged in the web-based portal. This data-driven program model allows a small program team of four full-time staff members to provide ongoing, proactive, and intensive social support to scholars who are at risk of attrition.

Students apply to be Dell Scholars during their high school senior year. To be eligible, students must have participated in an affiliated college-readiness program,6 earn a minimum GPA of 2.4 from an accredited high school, be low-income as indicated by Pell grant eligibility, and plan to enroll in a four-year college. In a preliminary application, applicants provide information about high school grades, test scores, college plans, home and work responsibilities, and financial status. These applications are scored according to weighted scoring criteria that include factors along three dimensions: academic achievement, disadvantage, and responsibility. Applicants are ranked according to the weighted score, and the top 900 students are selected as semifinalists. Semifinalists then submit additional application materials, including short-answer questions to provide deeper insight into students’ goals and their experiences overcoming adversity in their own lives.7 Those who complete the second stage application are referred to as finalists. Each finalist application is reviewed and scored again according to weighted scoring criteria. The top 300 students are selected as Dell Scholars. See Online Appendix 1 for additional detail regarding the applicant scoring and selection process.8,9 Since 2004, the program has selected and supported more than 3,000 scholars. During the time of this study, the program selected 300 students annually. Despite the small annual cohort size, the program is well known. Between 2009 and 2014, for example, the program selected a total of 1,806 scholars from a pool of nearly 40,000 applicants.

III. Data and Research Design

The processes by which students apply, are selected, and are tracked and supported throughout their involvement with the program contribute to a rich database on scholars’ experiences prior to and throughout their college careers. We focus all analyses on students from the applicant cohorts of 2009–2012.10 In this section, we describe our data and analytic strategies.

A. Regression Discontinuity: Data and Analysis

Dell Scholar selection procedures align perfectly with a regression discontinuity (RD) design for assessing the program’s impact. This analysis relies on two primary data sources. First, we utilize Dell Scholar applicant administrative records, which provide detailed demographic information about each applicant, including gender, race/ethnicity, state of residence, and parents’ education level and employment status. The data also include indicators of students’ academic background, including standardized test scores, high school GPA, and participation in college readiness programs. Although test scores are optional and not factored into the scholar selection process, most applicants took and reported scores for either the SAT or ACT.11 Applicants also provide detailed information about their responsibilities at home, work, and in their community. Lastly, these data provide measures of applicants’ financial circumstances, including household income and state or federal financial aid eligibility.

In Table 1, we provide counts of applicants to the 2009–2012 cohorts. The final number of selected scholars varies minimally from the target of 300 annually.12 Across these years, the program experienced growth in applications, with the 2012 applicant cohort being nearly 40 percent larger than that of 2009.13 In Table 2, we present descriptive statistics of applicants’ demographic characteristics, test score performance, and financial aid eligibility, pooled across cohorts. In the first column, we present figures for applicants overall, and in the remaining columns by applicant status (that is, nonsemifinalist, semifinalist, finalist, and scholar). Although not reported in the table, rates of missingness on student demographics are very low, ranging from 0 to 4 percent across most items. Missingness was most prevalent for SAT/ACT scores (nearly 16 percent), presumably for those applicants who either did not take a college entrance test or opted not to report scores. Of applicants, 70 percent are female, three-quarters are Black or Hispanic, and nearly 60 percent are would-be first-generation college goers. Applicants exhibit an average ACT composite score of 20.15, which corresponds to approximately the 50th percentile nationally.14 Those ultimately selected as scholars are similar in terms of gender and race/ethnicity but are even more likely to be first-generation college goers and have average ACT performance of nearly 22, corresponding to approximately the 63rd percentile. Scholars also achieved a slightly higher high school GPA, on average.15 Therefore, the scholar selection process favors those applicants who are higher performing but of lesser means. For example, while 75 percent of all applicants qualified for subsidized school meals, nearly all eventual scholars did so.

Table 1

Counts of Applicants across Cohorts with Nonmissing Applicant Scores

Table 2

Summary Statistics of All Applicants, Overall and by Applicant Status

We link applicants’ administrative records to data from the NSC, which provides semester-level information on whether and where students enrolled, as well as characteristics of students’ postsecondary institutions, such as whether they are public or private and two-year or four-year. We observe six years of college enrollment data for the 2009 and 2010 cohorts and four years of data for the 2011 and 2012 cohorts. From the NSC data, we derive a comprehensive set of outcomes related to college enrollment, persistence, and degree attainment. We define all outcome variables in Online Appendix 2.

In Table 3, we list the college-going outcomes considered in our RD analysis, the cohorts for which we examine these outcomes, and the average values of these outcomes, disaggregated by applicant status. Descriptively, we observe a consistent pattern of better outcomes among Dell Scholars compared to non-Scholar finalists and the applicant pool overall. Being selected as a scholar may drive these differences, or they may be attributable to differences in the characteristics of students ultimately selected, such as their higher prior academic achievement. We now turn to our first analytic strategy for disentangling these possibilities.

Table 3

Summary Statistics of Applicants’ Outcomes, Overall and by Applicant Status

We take advantage of the Dell Scholars Program selection processes to identify the causal impact of being selected as a scholar on college enrollment, persistence, and completion outcomes at the margin of selection. We exploit the fact that the program uses well-specified rank thresholds for the selection of scholars and use an RD design to compare the outcomes of finalists with scores just above and below their year-relevant scholar-selection threshold. The students with scores at the thresholds are comparable on many dimensions, but the finalists with scores just above the relevant thresholds were selected as Dell Scholars. Thus, we can rely on the comparison of students at the scholar-selection margins to obtain unbiased estimates of the impacts of scholar selection.

For an RD strategy to yield valid causal inference, several conditions must be met (Schochet et al. 2010). First, the assignment rule must be clear and followed with a high degree of fidelity. Second, the score utilized to determine scholar status, our forcing variable, should be an ordinal measure with sufficient density on either side of the cutoff. Third, these scores should be utilized only for identifying scholar status such that differences at the relevant margin cannot be attributable to other potential mechanisms. We provide evidence below that these first three conditions are met in the context of the Dell Scholars Program. Finally, applicants must not be able to manipulate their own value of the forcing variable. It is highly implausible that applicants would have this ability. The scoring algorithms are complex, are not publicly disclosed, and rely on multiple inputs. Further, manipulation of one’s position relative to the cutoffs would require perfect information of the selection processes as well as of the inputs associated with other applicants. Of course, rater manipulation is also a potential threat if raters are overly generous in scoring certain applications. Although we cannot fully rule out the possibility of rater manipulation, it is unlikely to have an undue influence on students’ final rank order. Each rater evaluates only a small subset of finalists, and finalist scores are a combination of rater evaluations and scores automatically attributed to measures (for example, GPA). Taken together, raters are unlikely to know the marginal score that will be in the top 300 and are unlikely to be able to finely manipulate a student’s overall score around the relevant margin of selection. Further, raters are never assigned to review applicants who reside in the same zip code, thus reducing the likelihood that a rater would have a personal connection with an applicant under her review.

In Figure 1, we illustrate the relationship between scholar selection and finalist score, by year. In each year, the relevant threshold is demarcated with a vertical dashed line and recentered at zero. The selection rules and processes are followed with a very high—almost perfect—degree of fidelity. Nevertheless, we do observe a few exceptions in which finalists whose scores are above the scholar threshold were not selected as scholars and vice versa. In Online Appendix Table A3-1, we report on the relevant selection threshold values and provide counts of the cases in which the selection rules were not strictly followed, by year. These instances of noncompliance are explained by the fact that the Dell Scholars team reserves the right to disqualify applicants manually after they have initially been selected.16 Despite these small discrepancies, we collectively have very strong evidence in support of an RD strategy for assessing programmatic impacts.

Figure 1

Relationship between Scholar Status and Finalist Score, by Year

Source: The Dell Scholars Program database.

Next, we test the validity of the RD assumptions related to the continuity of the forcing variables across the year-specific thresholds and assess any evidence of manipulation of position around these thresholds. We utilize the McCrary (2008) test to examine the continuity assumption by assessing the smoothness of the selection score densities across the relevant thresholds. In Figures 2 and 3, we present the McCrary test graphically for the finalists’ scores by year and pooled across cohorts. In Table 4, we provide summary statistics from the McCrary tests. In all but one cohort and for results pooled across cohorts, the test fails to reject the null hypothesis at the 5 percent significance level. Only for the 2011 cohort do the data reveal potential evidence of a lack of continuity in the forcing variable density. This result may be driven by missing data for 138 finalists and one scholar in the 2011 finalist data set. On the other hand, this may also be a false positive, given the multiple hypotheses being tested.17 As an additional check, we fit RD models examining potential jumps in individual student-level characteristics at the scholar selection thresholds (see Online Appendix Table A3-2 for results and Appendix Figure A3-1 for the graphical relationship between baseline covariates and the forcing variable). In no case did we observe consistent evidence of manipulation (from any potential source) around the selection cutoff. Therefore, we conclude that the Dell Scholars selection rules generate a robust quasi-random assignment of scholars local to the cohort-specific thresholds.
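The logic of the density test can be sketched with a simple histogram-based check (a simplified stand-in for the full local-linear McCrary estimator, run on simulated scores rather than the program's data):

```python
# Illustrative density-discontinuity check in the spirit of McCrary (2008).
# Simulated, smooth scores: the estimated log-density jump at the cutoff
# should be close to zero, indicating no manipulation of the forcing variable.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.normal(0, 20, 5000)            # smooth density, no bunching at 0

bw, cutoff = 2.0, 0.0
edges = np.arange(-40, 40 + bw, bw)         # histogram bins spanning the cutoff
counts, _ = np.histogram(scores, bins=edges)
mids = (edges[:-1] + edges[1:]) / 2

left = mids < cutoff
# Fit a line to bin heights on each side and compare predictions at the cutoff
fl = np.polyfit(mids[left], counts[left], 1)
fr = np.polyfit(mids[~left], counts[~left], 1)
jump = np.log(np.polyval(fr, cutoff)) - np.log(np.polyval(fl, cutoff))
print(round(jump, 3))                       # near 0 when the density is smooth
```

A meaningful positive jump would suggest excess mass just above the threshold, the signature of manipulation the paper tests for.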

Figure 2

Graphical Representation of the McCrary Density Tests at Scholar Selection Threshold, by Year

Source: The Dell Scholars Program database.

Figure 3

Graphical Representation of the McCrary Density Test for the Scholar Selection, 2009–2012

Source: The Dell Scholars Program database.

Table 4

Results of the McCrary Density Tests, by Year

Given the minimal noncompliance with the selection process, we rely on a reduced-form model specification to estimate the impact of selection as a Dell Scholar, as follows:

Y_it = α_0 + α_1 ABOVE_it + α_2 SCORE_it + β′X_it + ε_it  (1)

where Y_it represents an outcome, such as bachelor’s (BA) degree attainment, for student i in cohort t; ABOVE_it is an indicator for a finalist’s score exceeding the cohort-specific threshold; and SCORE_it is the forcing variable, the finalist score re-centered around the cohort-specific threshold. The vector X_it comprises the control covariates included in Table 2 and indicators for cohort and state of residence.18 Our models allow the slope of the forcing variable to vary by cohort.19 The coefficient α_1 indicates the causal impact of being above one’s cohort-specific threshold.

In Table 5, we report results from modeling scholar selection, pooling data across the 2009 through 2012 cohorts. The results of most interest are those associated with the first row, labeled “Above,” which estimate the difference in the probability of being selected as a scholar at the threshold. Visual inspection of Figure 1 foreshadows that coefficients associated with the assignment-rule indicator will be close to but somewhat less than one. Indeed, in the first column, we estimate that a finalist with a score just above the threshold has a 0.955 higher probability of being selected as a scholar. In the remaining columns, we present estimates associated with varying bandwidths. Across columns, the results are similar in terms of magnitude and statistical significance and are therefore not sensitive to bandwidth selection. Although representing a high degree of fidelity, these first-stage results signal modest imperfection in the assignment rule. Therefore, we also consider a two-stage instrumental variables (IV) or “fuzzy” RD approach (Jacob and Lefgren 2004; Imbens and Lemieux 2008) in which we use applicants’ scores relative to their year-relevant threshold to instrument for actual scholar selection.
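The fuzzy RD logic can be sketched as a manual two-stage least squares on simulated data (a hypothetical illustration; a real application would use an IV routine that corrects the second-stage standard errors):

```python
# Hedged sketch of a "fuzzy" RD: the threshold indicator instruments for
# actual scholar status when compliance with the assignment rule is imperfect.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
score = rng.uniform(-50, 50, n)
above = (score >= 0).astype(float)
# Roughly 95% compliance with the assignment rule, as in the first stage
scholar = np.where(rng.uniform(size=n) < 0.95, above, 1 - above)
y = 0.30 + 0.12 * scholar + 0.002 * score + rng.normal(0, 0.2, n)

def ols(X, y):
    """Least-squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

Z = np.column_stack([np.ones(n), above, score])         # instrument + trend
scholar_hat = Z @ ols(Z, scholar)                       # first stage: predict scholar status
X2 = np.column_stack([np.ones(n), scholar_hat, score])  # second stage: replace with prediction
beta = ols(X2, y)
print(beta[1])                                          # IV estimate of the scholar effect
```

The second-stage coefficient recovers the simulated effect of 0.12, up to sampling error, even though actual scholar status only imperfectly follows the threshold rule.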

Table 5

First-Stage RD Estimation: Relationship between Scholar Status and Finalist Score

B. Matching: Data and Analysis

Despite the value of RD for achieving causal attribution, its limitations are clear. Estimation of effects within narrow bandwidths around the selection margin leads to reductions in sample size and statistical power, and inference is limited to students proximal to the selection margin. To estimate impacts beyond the margin of selection and to explore impacts on a richer set of college performance measures, we introduce a second, complementary analytic strategy motivated by prior work on extending inference beyond the assignment threshold (Battistin and Rettore 2008; Mealli and Rampichini 2012; Angrist and Rokkanen 2015) and relying on additional sources of data. Specifically, we use coarsened exact matching (CEM) (Iacus, King, and Porro 2009) to match college-enrolled Dell Scholars and non-Scholar finalists (that is, students above and below the selection threshold who successfully enroll in college) to observationally similar students in the NCES Beginning Postsecondary Students 2004/2009 (BPS:04/09) survey. The BPS:04/09, our third source of data, provides rich, longitudinal information on a nationally representative sample of nearly 16,700 first-time beginning college students in the United States.20 CEM is an appealing procedure for three primary reasons. First, it does not depend on modeling assumptions. Second, the procedure is straightforward and provides transparency in the matching process and easy interpretability. Third, CEM allows for user autonomy in coarsening and balancing continuous covariates and can easily incorporate observations with missing data.

Our matching process is as follows. We first identify baseline covariates to be used in the matching procedure. We then coarsen continuous covariates and stratify Dell and BPS students into strata defined by each possible combination of categorical and coarsened-continuous covariates. Finally, each observation included in a stratum with at least one Dell and one BPS student is assigned a weight. Matched Dell students are assigned a weight of one. Within each stratum, comparison BPS students are assigned a weight equal to the total number of Dell students divided by the total number of BPS students. For example, if a stratum has 10 Dell subjects and 100 BPS subjects, each BPS subject receives a weight of 0.1.
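The stratum-weighting rule above can be sketched as follows. The data and column names are hypothetical, and the logic is a minimal version of CEM's weighting step (the coarsening of continuous covariates is assumed to have already produced the stratum labels).

```python
import pandas as pd

# Toy data: stratum A has 10 Dell and 100 BPS students; stratum B has
# only BPS students and so should be dropped from the matched sample.
df = pd.DataFrame({
    "sample": ["dell"] * 10 + ["bps"] * 100 + ["bps"] * 5,
    "stratum": ["A"] * 110 + ["B"] * 5,
})

def cem_weights(df):
    counts = df.groupby(["stratum", "sample"]).size().unstack(fill_value=0)
    # Keep only strata containing at least one Dell and one BPS student.
    matched = counts[(counts["dell"] > 0) & (counts["bps"] > 0)].index
    out = df[df["stratum"].isin(matched)].copy()
    # BPS weight = (Dell count) / (BPS count) within the stratum.
    ratio = (counts["dell"] / counts["bps"]).to_dict()
    out["weight"] = out.apply(
        lambda r: 1.0 if r["sample"] == "dell" else ratio[r["stratum"]],
        axis=1)
    return out

w = cem_weights(df)
print(w.loc[w["sample"] == "bps", "weight"].iloc[0])  # 0.1
```

The weights make the weighted BPS comparison group mirror the stratum composition of the Dell sample, which is what allows the subsequent regressions to be run on the matched data.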

On average, the Dell applicants and scholars are quite different from the full BPS sample. Therefore, we trim the BPS sample prior to matching, with cutoffs informed by the ranges observed for the Dell sample. We restrict the sample to students who first enrolled in college the fall immediately after high school, were U.S. citizens, attained a high school diploma, were intending to earn a bachelor’s degree, were between the ages of 17 and 21, and had a parental adjusted gross income below $100,000. We match on a selected set of covariates observable in both the Dell and BPS data at the student and institutional levels, given that factors at both levels can influence subsequent college outcomes. Specifically, we match on indicators of gender, race/ethnicity, household income, high school GPA, an indicator for English as a primary language, and institutional measures, including indicators for four-year institutions, public institutions, and levels of institutional selectivity, as indicated by the Barron’s 2009 selectivity index.21 We conduct the matching process separately for each Dell cohort to allow the distribution of student and institutional characteristics to differ. Within each cohort, we match separately for Dell Scholars and non-Scholar finalists so that we can estimate separate effects for these two groups of students. We present descriptive information on the matching variables for the Dell sample and their BPS counterparts in Table 6. CEM produced groups that are highly comparable on observable characteristics. The Dell subjects who did not match included 268 applicants and 151 scholars.22

Table 6

Descriptive Statistics for Dell and BPS Matched Analytic Samples

We use two analytic strategies for comparing the Dell sample to their matched counterparts. For outcomes observable for Dell applicants above and below the selection margin, we utilize a difference-in-differences (DID) approach with the following general specification:

Yi = β0 + β1(DELLi × ABOVEi) + β2DELLi + β3ABOVEi + Xi′Γ + εi (2)

where Yi represents a college-success outcome, such as BA attainment, for student i. DELLi is an indicator for student i being an applicant (whether Dell Scholar or non-Scholar) to the Dell Scholars Program. We preserve the spirit of the regression discontinuity analysis with the variable ABOVEi, a binary indicator for being selected as a Dell Scholar or being a student from the BPS data set matched to a Dell Scholar. The vector Xi again represents baseline covariates, including all those utilized in the matching procedure and presented in Table 6. The coefficient β1 on the interaction of DELLi and ABOVEi represents the causal effect of the Dell Scholars Program.
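A minimal sketch of this DID specification on simulated, noise-free data (so that OLS recovers the coefficients exactly); the covariate and the coefficient values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000

dell = rng.integers(0, 2, n).astype(float)    # Dell applicant vs. matched BPS
above = rng.integers(0, 2, n).astype(float)   # scholar (or matched-to-scholar)
x = rng.normal(size=n)                        # one baseline covariate

# Deterministic outcome: program effect beta1 = 0.10 on the Dell x Above
# interaction and beta2 = 0 for non-Scholar applicants (the falsification
# hypothesis), so OLS recovers the coefficients exactly.
beta1, beta2, beta3 = 0.10, 0.0, 0.05
y = 0.30 + beta1 * dell * above + beta2 * dell + beta3 * above + 0.02 * x

X = np.column_stack([np.ones(n), dell * above, dell, above, x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# coef[1] recovers beta1 = 0.10; coef[2] recovers beta2 = 0.00
```

The interaction coefficient isolates the program effect because the Dell main effect absorbs any level difference between Dell applicants and their matched BPS counterparts, while the Above main effect absorbs differences between scholar-matched and finalist-matched students.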

We use this model to assess the quality of the matched comparison that we draw from the BPS:04/09 data set. Specifically, we perform the DID analyses using baseline covariates as outcomes for falsification testing. The results of these analyses, which we present in Online Appendix Tables A3-3 and A3-4, reveal almost no differences and underscore the quality of the match overall. The one exception that we observe is that Dell applicants who attend more selective institutions have an adjusted gross household income that is significantly lower than that of their matched counterparts. On the one hand, occasional differences are to be expected, given the number of tests that we run. On the other, this difference could signal meaningful differences between the Dell applicants and their BPS comparison sample. In the Results section, we discuss steps that we take to assess the sensitivity of our results to this imbalance.

Despite the overall quality of the match, we recognize several potential threats to the validity of inferences based on this strategy. First, a time lag of six years exists between the BPS and Dell samples.23 This lag is far from ideal, as patterns in college outcomes could have changed with time. A second, related point is that the BPS students began college well before the economic recession. In general, however, overall progress in college success and degree attainment for low-income and first-generation college students has been frustratingly slow. Further, across this specific time span, longitudinal analyses conducted by the Pell Institute reveal that bachelor’s degree attainment rates for traditionally aged students remained flat for the bottom two income quartiles between 2007 (the on-time degree attainment year for the BPS cohort) and 2013 (Cahalan and Perna 2015). Therefore, we reason that the time lag does not present a major threat and that it would be reasonable to expect similar patterns of college success absent the program supports. In addition to this time lag, matching on observables invites the possibility that unobservable factors may drive selection into the Dell sample, as well as subsequent outcomes (Rosenbaum and Rubin 1985). We acknowledge this point and return to it below.

Additionally, given our use of the BPS as the source for the matched comparison, we limit our analysis to those in the Dell sample who successfully enrolled in college. This analytic decision is driven by practical necessity, as the BPS is limited to enrolled college students. As we show, this decision finds some justification in the fact that we observe limited impacts of the Dell Scholars opportunity on whether or where students enroll, at least at the margin of selection. Of course, the opportunity may be improving college enrollment outcomes for students further from the selection threshold. To the extent that this is the case, we argue that matching among college-enrolled students would, if anything, lead to downward bias in our estimated impacts of the program. For students below the threshold of selection, conditioning on college enrollment does not introduce bias, as the non-selected Dell applicants received no treatment other than being turned down by the program. For students above the threshold, bias would exist if some scholars enroll in college who would not have enrolled absent selection. For any such scholar who enrolls only because of the program’s support, we might predict subsequent chances of college success to be more tenuous absent that support. If so, then the opportunity to match such students to similarly college-intending students who do not enroll on time would lead to larger rather than smaller estimates of the program’s impact; matching them instead to enrolled BPS students therefore understates the program’s effect. Therefore, although we recognize the potential threats to validity, we reason that any impacts that we estimate may reasonably be considered lower bounds on the effects of the program, at least for those who enroll on time.

In addition, our DID strategy allows us to assess whether these potential limitations hinder our ability to derive causal inferences. Our analytic approach relies on the comparison of non-Scholar finalists and Dell Scholars to their respective matched BPS counterparts. We hypothesize that we will see little difference in outcomes between non-Scholars and the students to whom they are matched. Based on Equation 2 above, this is equivalent to estimates of β2 that are close to zero and not statistically significant. Such results will serve as evidence that the BPS students represent an appropriate counterfactual for the Dell applicants.24 In contrast, we hypothesize large and significant differences between scholars and their matched counterparts, which would result in nonzero estimates of coefficient β1. We will interpret such a pattern of effects as evidence of the causal effect of the Dell Scholars Program on student outcomes.

A primary purpose of our DID strategy is to serve as a gateway to investigating whether and how the Dell Scholars Program impacts students’ postsecondary experiences. For students selected as Dell Scholars, the program’s administrative database provides detailed information about their progress and experiences in college, as does the BPS for sampled students. Therefore, for Dell Scholars and their BPS comparisons, we estimate covariate-controlled first differences in selected metrics of college success using the following specification:

Yi = γ0 + γ1DELLi + Xi′Γ + εi (3)

where γ1 is the coefficient of interest, representing differences between Dell Scholars and their BPS counterparts. This analysis is necessarily limited to those outcomes that we can observe consistently across the Dell and BPS data sets. We estimate first-difference impacts on the following outcomes: credit attainment and cumulative GPA in the first four years of college, the rate of success with credits attempted, the incidence of cumulative GPA falling below 2.0 (a common threshold for satisfactory academic progress), first-year loan borrowing behavior, and whether and the extent to which students work while in college. In Online Appendix 2, we provide detailed information about the construction of these outcome measures.
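As one illustration of outcome construction, the incidence of cumulative GPA falling below 2.0 can be computed from term-level records roughly as follows. The column names and data are hypothetical; the authors' actual construction details are in their Online Appendix 2.

```python
import pandas as pd

# Hypothetical term-level records, ordered by student and term.
# grade_points = credits earned in the term times the term GPA.
terms = pd.DataFrame({
    "student": [1, 1, 1, 2, 2],
    "credits": [12, 15, 15, 12, 12],
    "grade_points": [30.0, 33.0, 45.0, 20.4, 33.6],
})

# Cumulative GPA after each term: running grade points / running credits.
g = terms.groupby("student")
terms["cum_gpa"] = g["grade_points"].cumsum() / g["credits"].cumsum()

# Flag whether the cumulative GPA ever dips below the 2.0 threshold.
ever_below_2 = terms.groupby("student")["cum_gpa"].min().lt(2.0)
```

Here student 1 stays above 2.0 throughout, while student 2's first-term cumulative GPA of 1.7 triggers the flag even though the student recovers later, mirroring an "ever below 2.0" incidence measure.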

IV. Results

We begin by examining the impacts of being selected as a Dell Scholar on immediate four-year college enrollment, persistence into the second and third years of college, and bachelor’s degree completion within four or six years.25 In Table 7, we present impacts at the margin of selection from our reduced-form RD analyses. For all outcomes of interest, we present results derived from the full sample, as well as from those within an intermediate bandwidth of ±100 points, a narrow bandwidth of ±40 points, and an “optimal” bandwidth around the threshold. In selecting an optimal bandwidth, we utilize a first-order polynomial, a uniform kernel, and the bandwidth selector of Calonico, Cattaneo, and Titiunik (2014) (CCT). Because this selector is outcome-specific, the optimal bandwidth varies modestly across outcomes. The column labeled μ presents fitted averages for students just below the threshold.26 In Table 8, we present impacts on the same set of outcomes based on our DID analysis of the Dell and BPS matched samples. We report results for all four cohorts through four-year BA attainment and for the 2009 and 2010 cohorts through six-year BA attainment to present results for consistent samples over time.
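The bandwidth-sensitivity exercise reported in Table 7 can be mimicked on simulated data by re-estimating the reduced-form jump within successively narrower windows; this is an illustrative sketch with invented numbers, not the CCT procedure itself.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000

# Simulated running variable (score relative to threshold) and an outcome
# with a true discontinuity of 0.07 at the cutoff.
score = rng.uniform(-300, 300, size=n)
above = (score >= 0).astype(float)
y = 0.35 + 0.0003 * score + 0.07 * above + rng.normal(0, 0.05, n)

def jump(bw):
    """Local linear estimate of the discontinuity within bandwidth bw."""
    m = np.abs(score) < bw
    X = np.column_stack([np.ones(m.sum()), above[m], score[m],
                         above[m] * score[m]])
    b, *_ = np.linalg.lstsq(X, y[m], rcond=None)
    return b[1]

# Narrower windows trade sample size (precision) for locality; the point
# estimates should remain stable around the true jump of 0.07.
for bw in (40, 100, 300):
    print(bw, round(jump(bw), 3))
```

Stability of the point estimate across these windows, with standard errors growing as the window narrows, is exactly the power-versus-locality tradeoff the paper discusses.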

Table 7

Regression Discontinuity Impacts of Scholar Selection on College Enrollment, Persistence, and Completion Outcomes, Reduced-Form Specification

Table 8

DID Impacts of Scholar Selection on College Persistence and Completion Outcomes

Just below the margin of selection, 85 percent of non-Scholar applicants matriculate to college on time, and qualifying as a Dell Scholar does not consistently impact timely enrollment (Table 7, Row 1). Lack of impact on enrollment is not necessarily surprising. All students who achieve finalist status are likely to be highly college-intending, and students were notified of their scholar status well after deciding where to apply and, for many, where to attend. However, it is notable that approximately 15 percent of finalists do not successfully matriculate to college the fall after high school graduation, potentially facing other barriers to timely college enrollment during the summer transition (Castleman and Page 2014a, 2014b; Page and Gehlbach 2017).

Both the RD and DID estimates indicate meaningful programmatic impacts on college persistence. Focusing on RD results derived from the narrow bandwidth, impacts on persistence into the second year range from three to seven percentage points (n.s.) at the margin of selection, aligning with the significant DID impact estimate of four percentage points. Impacts on persistence into the third year are consistent in statistical significance and magnitude across analytic approaches, ranging from 8 to 12 percentage points. We also estimate consistent effects on four- and six-year bachelor’s degree attainment. Students selected into the program are 6 to 10 percentage points more likely to earn a BA within four years and 9 to 13 percentage points more likely to do so within six years.27 We present visual representations of the RD impacts on third-year persistence, four-year BA attainment, and six-year BA attainment in Figures 4–6, respectively. As is typical in the RD context, we face a tradeoff between statistical power and estimating effects local to the selection thresholds (Ludwig and Miller 2007). In Table 7, for example, the magnitudes of the effects on four-year BA attainment are consistent across bandwidths, although statistical significance differs in some instances due to a loss of precision when restricting the sample. Nevertheless, the degree completion impacts are sizeable and represent improvements on the order of 20–25 percent over baseline levels of degree attainment.28

Figure 4

Regression Discontinuity Plot: Third-Year Persistence

Source: The Dell Scholars Program database and the National Student Clearinghouse.

Figure 5

Regression Discontinuity Plot: On-Time BA Degree Attainment

Source: The Dell Scholars Program database and the National Student Clearinghouse.

Figure 6

Regression Discontinuity Plot: BA Degree Attainment in Six Years, 2009–2010

Source: The Dell Scholars Program database and the National Student Clearinghouse.

A key question concerns the mechanisms by which the program improves students’ college outcomes. We consider five dimensions, broadly defined, on which the program might operate. First, being selected as a Dell Scholar may impact the type or “quality” of the institution in which students enroll. Upon selection, students complete intake interviews conducted prior to matriculation. During this interview, the Dell Scholars team provides feedback on college plans and, in some cases, counsels students against certain postsecondary choices, such as an out-of-state public institution, as attending an in-state public institution is likely to be a more financially viable option. In addition, the Dell Scholars Program award includes a sizeable amount of grant-based financial aid that may enable students to view a different postsecondary option as financially within reach. If students enroll in “higher quality” institutions because of the Dell support, this may translate to better college completion outcomes (Goodman, Hurwitz, and Smith 2015; Howell and Pender 2016).

To investigate this possibility, we examine a set of indicators related to institutional quality and type.29 Related to quality, we examine whether students at the margin of selection initially enroll in institutions that differ in terms of graduation rates, instructional expenditure per full-time student, and whether a student attended a school in one of the top three classifications of selectivity based on the Barron’s Profile of American Colleges. Related to college type, we specifically examine whether students enroll in an in-state, public institution, given the program guidance noted above. We present the RD results in Table 9. We do not find consistent evidence that students’ institutional choices change as a result of being selected as Dell Scholars. Certain specifications indicate that scholars attend institutions with somewhat higher instructional expenditure and that they attend more selective schools, on average, but these results, particularly with regard to institutional selectivity, lack consistency across bandwidths. Given the timing of the application and selection process, the overall lack of impact on where students enroll is not surprising. By the time selected scholars are notified, they have already made decisions about where to apply and have received college acceptances. Generally, the quality of the college in which a student enrolls is driven more by the set of schools to which she applies and less by the choice she makes among the schools to which she has been admitted (Hoxby and Avery 2013; Smith, Pender, and Howell 2013).

Table 9

RD Impacts of Scholar Selection on College Quality and College Type, Reduced-Form Specification

Second, conditional on the schools in which Dell Scholars enroll, we hypothesize that the program helps students to improve their chances of success. This may be especially so for the Dell Scholars who enroll in less selective institutions where average rates of persistence and success are lower. In Table 10, we disaggregate the DID impacts on persistence and degree attainment by institutional selectivity. We estimated impacts for students attending more and less selective institutions by fully interacting the DID model with an indicator for institutional selectivity. The top panel pertains to students enrolling in less selective institutions, and the bottom panel to those in more selective institutions.30 About half of the Dell finalists and scholars attend less selective institutions.

Table 10

DID Impacts of Scholar Selection on College Persistence and Completion Outcomes, by Institutional Selectivity

In Table 10 and subsequent DID tables that differentiate impacts by institutional selectivity, we include two sets of p-values in our results. The first set of p-values is from testing whether the Above × Dell coefficients are significantly different within more and less selective institutions. The second set is from testing whether the baseline outcome rates differ across these two contexts.

In disaggregating results by institutional selectivity, the key findings can be summarized as follows: in many cases, we observe differences in the point estimates of impacts by institutional selectivity, but these differences are not estimated with sufficient precision to be statistically distinguishable. Nevertheless, we observe statistically significant differences across almost all outcomes in the baseline comparison levels between less selective and more selective institutions. For example, BA attainment rates are far lower for the BPS students attending less selective institutions than for those attending more selective institutions. These differences in baseline rates mean that the relative impact of the program is quite different across contexts and is notably higher in less selective settings.

Examining the Table 10 results in more detail, impacts on persistence and degree attainment are consistently large in magnitude and significant for scholars attending less selective institutions (top panel). We estimate DID impacts on second- and third-year persistence of 5 and 11 percentage points, respectively, an impact on four-year BA attainment of 9 percentage points, and an improvement in six-year BA attainment of 18 percentage points. Impacts on persistence are generally smaller for students attending more selective institutions, although these are not statistically different from those estimated for students attending less selective institutions (as indicated by the p-values comparing impacts by selectivity). Still, the difference in relative impact is notable. Consider the four-year degree attainment rates across these two settings: 17 percent in less selective institutions and 40 percent in more selective institutions. Using these as base rates, the Dell impacts translate to improvements on the order of 52 percent and 14 percent in less selective and more selective institutions, respectively.

Results for BA attainment, however, reveal an interesting pattern. Dell finalists who are not selected as scholars but who attend more selective institutions are significantly less likely than their matched counterparts to attain a BA. In contrast to these degree attainment differences, we observe no differences in the earlier persistence patterns between the non-selected finalists and their matched counterparts. There are at least two potential explanations for the negative Dell coefficient in the degree attainment models. First, this negative coefficient may signal that the matched BPS counterparts do not represent a suitable comparison group for the Dell sample. As noted previously, the Dell and BPS students in our analytic sample are well matched on a host of baseline characteristics that correlate with college outcomes. The one exception is modest imbalance in adjusted gross income such that the Dell applicants in more selective schools have household incomes that are, on average, lower than those of their BPS counterparts. Based on further sensitivity analyses, however, we conclude that this imbalance is not driving the negative coefficient.31

A second possibility is that the negative coefficient is related to factors considered in the Dell selection process for which we are not able to account in our matching procedure. Specifically, the program favors students from highly disadvantaged backgrounds and assesses disadvantage not only based on easily quantifiable factors such as household income and qualification for free or subsidized school meals, but also on situational narratives in student essays and teacher recommendations. It is possible that the non-selected Dell finalists must overcome situational barriers that are more challenging than those faced by their matched counterparts. Even though such barriers do not appear to lead to differences early in students’ college experiences, they may contribute to the differences in degree attainment that we observe between the non-selected Dell finalists and their matched counterparts over time.

Finally, as we highlight in the discussion, the Dell Scholars face a higher cost of attendance after factoring in other sources of grant aid but before considering the Dell funds. If the same is true for the Dell applicants, a higher cost of attendance may drive them to be less likely to persist. Unfortunately, we are not able to observe college funding information for the Dell applicants, so we cannot confirm this point with data. Regardless, our results do suggest that the program helps selected scholars to mitigate the barriers that contribute to this difference in outcomes between the non-selected Dell finalists and their matched BPS counterparts.

Indeed, a third potential mechanism through which the program may improve college outcomes is that the financial support that Dell Scholars receive may help students avoid accumulating loan debt to which they may be averse or may otherwise alleviate financial constraints that students experience in covering costs associated with college attendance. These include academic costs, such as tuition, fees, and books, as well as nonacademic costs, such as child care. If the award improves students’ ability to finance college, semester over semester, then we may expect to see substantially higher persistence among Dell Scholars. Because we cannot examine loan borrowing for non-Scholar applicants, we shift here and in all subsequent tables to results from covariate-controlled, first-difference comparisons between Dell Scholars and their matched BPS counterparts. In Table 11, we consider loan borrowing in the first year of college. The top panel of results pertains to the sample overall, and the bottom panels present results by levels of institutional selectivity. We report on borrowing of federal loans, Parent PLUS loans,32 and private loans.33 We observe substantial differences in borrowing behavior. Overall, Dell Scholars are 27 percentage points less likely to take on federal loans, three percentage points less likely to take on a Parent PLUS loan, and five percentage points less likely to take on a private loan in their first year. These impacts are somewhat larger among students attending more selective institutions, where scholars may otherwise face a higher out-of-pocket cost of attendance and where baseline levels of borrowing are higher.

Table 11

First-Difference Impacts of Scholar Selection on First-Year Borrowing

Fourth, the financial support offered by the Dell Scholars Program may enable scholars to allocate more of their time to studying and integrating into campus life instead of working to cover expenses. In Table 12, we present results of covariate-controlled first-difference estimates of working patterns in the first year of enrollment.34 Interestingly, despite the financial component of the program support, Dell Scholars are nearly eight percentage points more likely than their matched counterparts to work at all and seven percentage points more likely to work at least ten hours a week. Yet they are less likely to work a high number of hours. These impacts are similar across levels of institutional selectivity, although working a large number of hours is more prevalent among those in less selective institutions. Prior research has shown that the number of hours a student works during the school year matters in predicting subsequent academic outcomes: students who work more than 25 hours a week are more likely to experience negative academic consequences, whereas part-time employment is linked to positive impacts on persistence and degree attainment (Leppel 2002; Scott-Clayton and Minaya 2016). When Dell Scholars report plans to work more than 20 hours a week, they are flagged and given detailed advising around course and work scheduling. We therefore attribute the impacts on working patterns both to the financial award and to the robust counseling that the program team provides to Dell Scholars, including its encouragement against taking on burdensome work hours.

Table 12

First-Difference Impacts of Scholar Selection on First-Year Working Patterns

Fifth, by gathering data on scholars’ postsecondary experiences and following up with them to provide feedback and support, the program may provide students with the guideposts, encouragement, and direction that they need to be more academically successful. As noted previously, Dell Scholars are required to submit their course transcripts to the program, and students who earn low GPAs (for example, where they may be failing to maintain satisfactory academic progress) are selected for follow-up and intervention. To explore the potential impact of the program support on academic performance, we again rely on our first-difference comparisons of the Dell Scholars and their matched counterparts to examine program impacts on credit attainment (Table 13) and GPA (Table 14), overall and by institutional selectivity. In Table 13, we examine the cumulative attainment of credits across the first four years of college. Although credit attainment was similar for Dell and comparison students in the first year of college, Dell Scholars earned a significantly greater number of credits in subsequent years. In addition, we examine whether a student earned fewer than three-quarters of attempted credits in any enrolled semester and find that scholars were significantly less likely to do so. In short, the Dell Scholars exhibit better progress towards degree attainment over time. In Table 14, we present results related to academic performance as measured by cumulative GPA across the first four years of college.35 Results again indicate significant differences between Dell Scholars and their matched counterparts. Finally, Dell Scholars are less likely to experience a cumulative GPA that drops below 2.0 during the first four years of college.
Given that all Dell Scholars are Pell eligible, our results signal that the program not only provides students with a generous scholarship that can be used flexibly during their undergraduate career, but that it may also help students to maintain more consistent access to other sources of need-based aid, such as the Pell Grant, by maintaining satisfactory academic progress (Schudde and Scott-Clayton 2016).

Table 13

First-Difference Impacts of Scholar Selection on Course Credit Attainment

Table 14

First-Difference Impacts of Scholar Selection on Postsecondary GPA

As above, we observe significant differences in baseline values for academic performance across institutional selectivity. Students in less selective institutions, on average, accumulate fewer credits, earn fewer of the credits they attempt, earn lower GPAs, and are more likely to put their federal financial aid at risk by earning less than a 2.0 GPA. The program’s impact on GPA is particularly large for students at less selective institutions and corresponds to an effect size on the order of more than one-third of a standard deviation. Thus, the ongoing support offered by the program may be especially meaningful for students enrolled in less selective institutions, where individualized support services are more scarce (Brock 2010).

V. Discussion

Many college access and persistence efforts focus on financial barriers to college success by providing students with funds to defray the cost of college. This focus on financial barriers has gained traction in recent policy discussions of debt- or tuition-free college (Chingos 2016). Other efforts boost outreach and counseling to assist students in navigating the academic and behavioral challenges that emerge in college. Evidence suggests that both types of efforts hold promise for improving the college outcomes of low-income and first-generation college-going students. Yet offering students a suite of supports across these domains may accomplish more than the sum of its parts. Evidence from the ASAP program in New York City suggests this is true in community college settings. Our examination of the Dell Scholars Program provides further supporting evidence in the context of four-year colleges and universities.

The Dell Scholars Program leads to meaningful improvements in college performance, persistence, and success. Our RD and DID analyses indicate that scholars are substantially more likely to persist into the third year of college. Further, scholars are 6 to 10 percentage points more likely to earn a bachelor’s degree on time and 9 to 13 percentage points more likely to do so within six years. Relative impacts of the program on degree attainment are larger among students who attend less selective colleges and universities, where rates of persistence and completion are otherwise significantly lower, on average. The Dell Scholars Program does not appear to shift the institutional choices that selected students make. Instead, it supports students to earn higher GPAs, to avoid academic probation, to earn college credits more successfully, and to reduce borrowing at a wide variety of postsecondary institutions across the United States.

The heterogeneity in relative effects across institutional selectivity is important, given that enrollment for low-income and first-generation college students has been concentrated predominantly in less selective institutions (Carnevale and Strohl 2013). Policy and programmatic efforts aiming to close the college completion gap need to be particularly attuned to interventions that effectively support low-income students where they tend to enroll. It is often assumed that the institutional performance of less selective colleges is weighed down by the influx of an underprepared student population. Examining increases in time-to-completion trends outside of the nation’s most selective colleges and universities, Bound, Lovenheim, and Turner (2010) find that changes in the composition of the student body account for essentially none of these increases and that the trend is more attributable to reductions in college resources, per-student expenditure on support services, and faculty quality. That is, institutions that are enrolling more low-income students are also those that have fewer resources dedicated to supporting students to degree attainment. Thus, it is possible that the impacts of the Dell Scholars Program are large in less selective institutions because the program is stepping in with support services that are no longer, or were never, offered on those campuses (Deming and Walters 2017). Furthermore, unlike other grant programs that require students to maintain a minimum GPA for aid, the Dell Scholars Program directs more program resources to students who are struggling. The program will place a scholar on probation if he does not meet satisfactory academic requirements at his institution, but probation status simply requires the student to check in with the program more often and does not signal that he is at risk of losing his scholarship.

The program’s large persistence impacts emerge in what would be students’ third year of college. Programs like Dell Scholars that provide ongoing supports across a variety of domains may be particularly important for increasing degree attainment among students who accumulate substantial credits but are at risk of withdrawing before earning their diploma because of obstacles that arise later in college. Campus-based support programs at many institutions primarily target first-year students, and with high student/counselor ratios at broad-access institutions, students may receive little in the way of proactive advising or outreach (Scott-Clayton 2015). Students may also experience challenges paying for later terms in college, especially if they fail to make satisfactory academic progress or otherwise run out of financial aid eligibility (Schudde and Scott-Clayton 2016). Students are thus in the position of finding campus resources on their own or having to make difficult decisions independently, such as which courses lead them most efficiently to a degree. Owing in part to challenges such as these, more than 40 percent of college students who fail to complete their degrees leave after their second year of college (Bowen, Chingos, and McPherson 2009). Among students at open-enrollment, four-year universities who have earned 75 percent of the credits they need to graduate, upwards of 25 percent do not graduate within six years of starting college (Mabel and Britton 2017). By virtue of providing financial and advising supports consistently throughout college, the Dell Scholars Program may help students overcome these late hurdles.

An important question that remains unanswered is whether the effects that we observe are driven by the funding support, by the nonmonetary support, or by the unique combination of the two. To inform this question, we compare the first-year financial aid packages for the Dell Scholars and their matched BPS counterparts and estimate first-differences in cost of attendance, cost of attendance subtracting non-Dell grants, cost of attendance subtracting Dell and non-Dell grants, and cost of attendance subtracting all grants and accepted federal loans (Online Appendix Table A3-8).36 We estimate differences both overall and by institutional selectivity.37 Given the six-year lag in time between the BPS and Dell samples, it is not surprising that the Dell Scholars face a total cost of attendance (that is, “sticker price”) that is about $4,000 higher, on average (Column 1). This difference is reduced somewhat after accounting for non-Dell grants (Column 2) and substantially after also accounting for the Dell award (Column 3). After accounting for all grant aid, the cost of attendance is lower for Dell Scholars attending less selective institutions and similar among those attending more selective institutions, relative to their matched counterparts. As shown above, the BPS counterparts are more likely than the Dell Scholars to take on loans. Therefore, after accounting for federal loans (including Parent PLUS loans), the Dell Scholars face a higher out-of-pocket cost of attendance but are substantially less likely to borrow. In short, in the comparison that we make, the Dell Scholars award affords students a financial advantage in terms of decreasing borrowing and mitigating the potentially negative effect of increased sticker price, but not in terms of out-of-pocket cost of attendance. In fact, this higher cost of attendance may have contributed to non-selected Dell applicants being less likely than their matched counterparts to complete college. On the basis of this analysis, we conclude that it is unlikely that the financial award is the only mechanism underlying our findings.

From the perspective of the program staff, there may not be a single mechanism but rather different mechanisms for different students, as the program flexibly and proactively responds to student needs and indicators of risk. Observations of program practices and descriptive analysis of administrative data reveal that the program is designed to be highly responsive to indicators of risk of student attrition, such as earning a low GPA during a semester or withdrawing from multiple courses. Such indicators drive program engagement with students. We illustrate this relationship in Online Appendix Figure A3-2 with a scatterplot of the probability of scholars having a relatively high number of program contact notes during the first four years of college by their cumulative GPA in their fourth year.38 Yet the program administrative records also reveal substantial and ongoing support of and contact with Dell Scholars who do not exhibit academic risk. Like other low-income, first-generation college students, Dell Scholars are susceptible to other challenges along their way to degree attainment. Over time, the program has evolved to have routines and expertise in place to help their participants grapple with hurdles beyond academics, including complex bureaucratic, financial, and life challenges that may lead to attrition. Qualitative evidence that we have gathered also reveals variation in how students experience the program. For some, the funding is the primary component on which they report relying. For others, the program serves as their only source of stable support.

Given the positive impacts on degree attainment, we explore whether the benefits associated with these increases in college completion justify the program costs. We provide a back-of-the-envelope calculation of relative costs and benefits of the program in the spirit of Deming (2009), Pallais (2015), and Hurwitz et al. (2017). Drawing on our DID analyses, we estimate that the program improves six-year BA attainment by 13 percentage points. This corresponds to 39 students in each cohort of 300. The total cost of the Dell Scholars Program is approximately $25,000 per student, or $7,500,000 per cohort.39 Therefore, the cost per student induced to earn a bachelor’s degree is approximately $192,000.
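The per-degree cost figure can be reproduced directly; the following sketch uses only the figures reported above and rounds as the text does:

```python
# Back-of-the-envelope cost per induced bachelor's degree, using the
# figures reported in the text (13 pp DID estimate, cohorts of 300,
# ~$25,000 total program cost per scholar).
cohort_size = 300
attainment_gain = 0.13        # +13 percentage points in six-year BA attainment
cost_per_student = 25_000     # approximate total program cost per scholar

induced_degrees = cohort_size * attainment_gain          # 39 students per cohort
cohort_cost = cohort_size * cost_per_student             # $7,500,000 per cohort
cost_per_induced_degree = cohort_cost / induced_degrees  # ~$192,000

print(round(induced_degrees), cohort_cost, round(cost_per_induced_degree, -3))
```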

Next, consider the benefits of this increase in educational attainment. The differential in annual earnings and tax payments between median full-time workers with a bachelor’s degree and those with only some college was $16,100 in 2011 (Baum, Ma, and Payea 2013). Although this is an observed difference, Card (1999) reports that causal estimates of the effect of education on earnings are often 20–40 percent larger. Using this observed differential and assuming, for the sake of simplicity, that it remains constant over time, the social and private monetary benefits of the Dell Scholars Program would exceed the costs after 12 years of post-college earning. Even if the earnings differential were smaller, the program still appears to have a positive rate of return, albeit over a longer time horizon. Of course, this simple calculation leaves aside many factors. For example, we might consider this estimate conservative in that we do not attempt to monetize the many other types of benefits, both public and private, that accrue because of higher education (Ma, Pender, and Welch 2016). Similarly, we do not adjust for an increase in earnings differentials over time. While we recognize the many assumptions that we have made, these calculations nevertheless suggest a positive rate of return for the MSDF investment in the Dell Scholars Program.
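The 12-year break-even horizon follows from dividing the per-degree cost by the assumed constant annual differential; a minimal sketch of that division, under the same simplifying assumptions as the text:

```python
import math

# Break-even horizon: years of post-college earnings needed for the
# observed (and assumed-constant) $16,100 annual earnings-and-tax
# differential to offset the ~$192,000 cost per induced degree.
cost_per_induced_degree = 7_500_000 / (300 * 0.13)  # ~$192,000, as in the text
annual_differential = 16_100                        # Baum, Ma, and Payea (2013)

break_even_years = math.ceil(cost_per_induced_degree / annual_differential)
print(break_even_years)  # 12, matching the text's "after 12 years"
```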

It is certainly the case that the generous and flexible financial support provides a strong incentive for the Dell Scholars to comply with program reporting requirements. To remain eligible, scholars must regularly report back to program staff on their academic progress as well as challenges that they are facing, be they related to academics, physical health, mental health, college finances, or general life management. By incorporating this reporting mechanism into their ongoing work with scholars, the program staff can track students closely and triage additional support to them when needed. It is an open question whether such a system of tracking and follow-up would be possible without the strong incentive that the funding creates.

If, instead, complementarities are realized by the combination of monetary and nonmonetary support, the Dell approach, through which a small program team efficiently tracks and provides follow-up to a large number of geographically dispersed students, may provide a model for scholarship programs (for example, place-based promise programs and state-level merit aid programs) to layer on robust student tracking and support. The web-based system that the program has developed both collects information on student progress and guides the provision of targeted support. This technology affords the lean program team the opportunity to meet students where they are and to respond flexibly to the challenges that they face. Such features may serve as a model to improve the efficacy of the substantial investments already being made to increase college success for low-income and first-generation college-goers throughout the United States.

Footnotes

  • * Supplementary materials are freely available online at: http://uwpress.wisc.edu/journals/journals/jhr-supplementary.html

  • ↵1. This lack of impact on degree attainment may relate to the very high-performing nature of the students served.

  • ↵2. For more information about the Dell Scholars Program, see http://www.dellscholars.org/ (accessed November 6, 2018).

  • ↵3. The Dell Scholars Program allows participants to use their $20,000 for education-related expenses and allows flexibility in how the funds are disbursed. Scholars who receive generous financial aid packages or additional outside support may request that their scholarship award be allocated to tuition and fees for summer courses, study abroad tuition and fees, summer internship stipends, fees for graduate school test preparation courses, and/or graduate school tuition. In instances where Dell funds would displace other institutional grant aid in a student’s financial aid package, the program recommends withholding use of the Dell dollars and using them for subsequent repayment of loans that may also be a part of the student’s financial aid package. Nevertheless, the funds are primarily used for typical college expenses, such as tuition and room and board. MSDF disburses an average of $3,240 per year toward school-year educational expenses in the first four years of college for each Dell Scholar.

  • ↵4. If students do not enter their information and supporting documentation, they compromise their access to the scholarship award. Therefore, compliance with reporting is quite high.

  • ↵5. The Dell Scholars Program maintains a team of four full-time staff members and has not experienced any staff turnover since its inception. Therefore, the knowledge and experiences of the team have been retained over time.

  • ↵6. At the time of our writing, these programs included Alliance College-Ready Public Schools, AP Strategies, Aspire, AVID, Bottom Line, Breakthrough Austin, College Forward, Cristo Rey Network, Fulfillment Fund, GEAR UP, Genesys Works, Green Dot Public Schools, IDEA Academy, KIPP Academy/KIPP Through College (KTC), Mastery Charter Schools, Noble Charter, One Goal, Philadelphia Futures, Upward Bound, Upward Bound–Math Science, YES Prep Public Schools, Uncommon Schools, and Uplift Education.

  • ↵7. Additional materials also include a high school transcript, a Student Aid Report obtained after completing the Free Application for Federal Student Aid (FAFSA), responses to short-answer questions, and a letter of recommendation.

  • ↵8. This and other appendixes are available online at http://jhr.uwpress.org/.

  • ↵9. The two-stage selection process lends itself to investigating both the impact of being selected as a semifinalist and the impact of being selected as a scholar. We explored but found no impacts at the margin of semifinalist selection. Therefore, we refrain from presenting results associated with semifinalist selection and focus attention on the impact of being selected as a Dell Scholar from among those applicants who reach the finalist stage.

  • ↵10. We did not consider cohorts prior to 2009 due to data quality issues, and we did not consider cohorts after 2012 to restrict our examination to those cohorts that we can follow through at least four years of postsecondary education.

  • ↵11. We convert SAT test scores to ACT composite scores using ACT-SAT concordance information retrieved from http://www.act.org/content/dam/act/unsecured/documents/ACT-SAT-Concordance-Tables.pdf (accessed November 26, 2018).

  • ↵12. Three observations are missing among the 2010 applicants because they lack the semifinalist algorithm score. We are also missing 139 observations from the 2011 finalists, one of whom is a scholar, due to an unknown system issue.

  • ↵13. Selected semifinalists complete the finalist application with a high rate of compliance (86 percent, on average).

  • ↵14. See http://www.act.org/content/dam/act/unsecured/documents/MultipleChoiceStemComposite.pdf (accessed November 26, 2018) for correspondence between ACT composite scores and percentiles of performance.

  • ↵15. Each applicant’s high school GPA is normalized with their high school’s GPA scale. For example, an applicant with a GPA of 3.6 from a high school with a 0–4 scale has a scaled GPA of 0.90.

  • ↵16. There are four main reasons for disqualification. First, an applicant may be disqualified for having received a serious disciplinary action in high school. The Dell Scholars Program has yet to disqualify a scholar for this reason. Second, an applicant may be disqualified if the applicant’s essay did not meet the minimum criteria or if the applicant used the same responses for all essays. Third, an applicant may be disqualified for not planning to attend a four-year college. While it is permissible for scholars to begin their postsecondary education at a community college, they must demonstrate a goal of completing a four-year degree. Lastly, an applicant may be disqualified for inflating their high school grades. Specifically, the Dell Scholars Program checks whether the self-reported grades match the official high school transcript.

  • ↵17. We assess the sensitivity of our results to this cohort and find that results are, overall, not sensitive to the inclusion or exclusion of the class of 2011 students for whom score data are complete.

  • ↵18. We impute zero values and include dummies for missingness where students are missing valid values for these covariates. As noted previously, rates of missingness were very low (0–4 percent for nearly all variables), and results are not sensitive to the inclusion of covariates; therefore, we reason that our results are not sensitive to our handling of missing data.

  • ↵19. Our use of a linear specification for Equation 1 is well supported by the data graphically. Further, when we assess the need for quadratic terms, we fail to reject the null hypothesis that the quadratic terms are zero. We do not consider higher-order polynomial specifications, given recent guidance against doing so (Gelman and Imbens 2014).

  • ↵20. The BPS:04/09 provides a near-perfect source of comparison; it captures data on academic achievement and college financing for a nationally representative sample of students. The target population for the BPS:04/09 study was first-time beginning college students during the 2003–2004 academic year. The BPS:04/09 subjects were surveyed in their first year of enrollment, and then three and six years later in follow-up surveys. A key feature of this data set is that it includes transcript data from all institutions attended by BPS subjects as well as verified federal and state financial aid information. Given that the BPS:04/09 focuses on college-enrolled students, we restrict our matching analysis to those Dell Scholars and finalists who enroll in college on time. At the margin of selection, we see no systematic impacts on initial college enrollment, which serves as at least partial justification of the decision to focus here on the subset of Dell Scholars and applicants who successfully enroll. For more information on the BPS:04/09 survey, see https://nces.ed.gov/surveys/bps/about.asp.

  • ↵21. The Barron’s Profile of American Colleges includes a categorical measure of college selectivity that is based on college admission rates and the competitiveness of each institution’s student body as measured by high school ranking, GPA, and standardized test scores (Chetty et al. 2017; Reardon, Baker, and Klasik 2012).

  • ↵22. The unmatched Dell subjects were predominantly male minority students at elite and highly selective institutions and students enrolled in less selective institutions with high school GPAs above 3.5.

  • ↵23. Students in the BPS were first-time college students in the 2003–2004 academic year, whereas the earliest cohort of Dell applicants that we consider began college in the 2009–2010 academic year.

  • ↵24. If we observe that the non-Scholar finalists perform more poorly than their BPS matched counterparts, this may be an indication of negative consequences of the recession. To the extent that these negative consequences would otherwise be the same for students above and below the threshold, the DID strategy will account for this possibility.

  • ↵25. We focus on immediate enrollment in a four-year institution since the program requires applicants to plan for enrollment in a four-year institution. We additionally examined retention within the same postsecondary institution. Because institutional retention and persistence outcomes yielded similar results, we omit the retention outcomes.

  • ↵26. For each student, we calculate fitted outcomes using the estimated coefficients. Then, we calculate the average fitted value of the outcome for students within a bandwidth of 10 below the cohort-specific thresholds.

  • ↵27. We focus on four-year college outcomes specifically because of the aims of the Dell Scholars Program. Nevertheless, in Online Appendix Table A3-5, we examine impacts on enrollment in a two-year institution as well as two-year degree attainment within four years. We find no evidence that the scholarship offer impacts enrollment in a two-year institution but some evidence that it leads to reductions in two-year degree attainment. Therefore, the program may help keep students from “dropping back” from a four-year to a two-year institution.

  • ↵28. See Online Appendix Table A3-6 for RD results using an IV specification to handle first-stage noncompliance and Tables A3-7 and A3-8 for additional sensitivity analyses of RD estimates using different optimal bandwidths for impacts estimated on the 2009–2012 cohorts and the 2009–2010 cohorts, respectively.

  • ↵29. We obtained year-specific data on institutional quality from the Integrated Postsecondary Education Data System (IPEDS) and linked these data to students by institution name. Approximately 15 percent of the Dell finalist sample has missing quality indicators because their institutions were not in the IPEDS data. Therefore, the final outcome in Table 9 pertains to an examination of whether there is a systematic difference in our ability to match students to IPEDS data across the threshold. As we show, we find no evidence that missingness is related to selection into the program.

  • ↵30. Less selective institutions are those that are categorized in the Barron’s ratings as Selective or Less Selective or are unrated. More selective institutions are those ranked as Elite, Highly Selective, or Very Selective. Analyses based on finer-grained categorizations yield results that are consistent in magnitude but less precisely estimated.

  • ↵31. To assess further, we drew another matched sample where adjusted gross income (AGI) was less coarsened. Doing so presented a tradeoff; we lost additional observations but achieved better balance on AGI. Results from this new matched comparison were qualitatively similar but with larger standard errors, given the loss of sample. In addition, our impact estimates are similar with and without controlling for baseline characteristics. In sum, we conclude that the negative Dell coefficient was likely not the result of the modest imbalance in AGI.

  • ↵32. Parent PLUS loan borrowing is of interest because it is recognized as a more burdensome form of college financing. Compared to other federal loans, Parent PLUS loans have higher interest rates, require immediate repayment, and are utilized more frequently by underrepresented students attending institutions with limited institutional aid (Rodriguez 2015).

  • ↵33. Data on private loan borrowing are available only for the first year of college enrollment for the BPS sample. We are only able to observe federal loan borrowing behavior in the first year for the 2010–2012 Dell cohorts.

  • ↵34. We are only able to observe first-year working patterns for the 2012 Dell Scholars cohort.

  • ↵35. To handle student stop out/dropout, for any year in which a student is not enrolled, his cumulative GPA from the previous year is carried forward.

  • ↵36. We adjust all dollar values to 2016 dollars.

  • ↵37. Due to data availability, we restrict this analysis to the 2010–2012 Dell cohorts and their matched counterparts.

  • ↵38. Interactions between program staff and Dell Scholars are routinely documented when a student reaches out for support, or if the student submits information during a check-in period that triggers outreach from the program. For example, if a student reports that he failed courses during his first year of enrollment, a Dell Scholars Program team member will schedule a phone call with the student to discuss what happened and devise an academic plan for the following semester. The content of this conversation, agreed-upon next steps, and any related follow-up via email, text, or phone, for example, are archived in that scholar’s file as a contact note. The average Dell Scholar has 2.26 contact notes annually, but this ranges substantially from 0 to 35 for the students in our data. Nearly 40 percent of scholars have no contact notes in a given year.

  • ↵39. The average Dell Scholar is active in the program system for 4.67 years, and the program estimates its per-student, per-year cost to be $465, for a total of just over $3,100 in costs associated with program employees who interact directly with the scholars, as well as the cost of employees who support the technology used by the program team. To estimate total cost, we add to this the cost of the laptop computer and book credits and roughly estimate that the per-student cost of the program is approximately $25,000; 80 percent of this cost goes directly to the student, and 20 percent is devoted to operations associated with interacting with the scholars.

  • Received May 2016.
  • Accepted October 2017.

References

Angrist, Joshua, Sally Hudson, and Amanda Pallais. 2015. “Evaluating Econometric Evaluations of Post-Secondary Aid.” The American Economic Review 105(5):502–07.

Angrist, Joshua, and Miikka Rokkanen. 2015. “Wanna Get Away? Regression Discontinuity Estimation of Exam School Effects Away from the Cutoff.” Journal of the American Statistical Association 110(512):1331–44.

Bailey, Martha, and Susan Dynarski. 2011. “Gains and Gaps: Changing Inequality in U.S. College Entry and Completion.” In Whither Opportunity? Rising Inequality, Schools, and Children’s Life Chances, ed. Gregory Duncan and Richard Murnane, 117–32. New York: Russell Sage.

Battistin, Erich, and Enrico Rettore. 2008. “Ineligibles and Eligible Non-Participants as a Double Comparison Group in Regression-Discontinuity Designs.” Journal of Econometrics 142(2):715–30.

Baum, Sandy, Jennifer Ma, and Kathleen Payea. 2013. “Education Pays 2013: The Benefits of Higher Education for Individuals and Society.” New York: The College Board.

Bettinger, Eric, and Rachel Baker. 2014. “The Effects of Student Coaching: An Evaluation of a Randomized Experiment in Student Advising.” Educational Evaluation and Policy Analysis 36(1):3–19.

Bound, John, Michael Lovenheim, and Sarah Turner. 2010. “Why Have College Completion Rates Declined? An Analysis of Changing Student Preparation and Collegiate Resources.” American Economic Journal: Applied Economics 2(3):129–57.

Bowen, William, Matthew Chingos, and Michael McPherson. 2009. Crossing the Finish Line: Completing College at America’s Public Universities. Princeton, NJ: Princeton University Press.

Brock, Thomas. 2010. “Young Adults and Higher Education: Barriers and Breakthroughs to Success.” The Future of Children 20(1):109–32.

Cahalan, Margaret, and Laura Perna. 2015. “Indicators of Higher Education Equity in the United States: 45 Year Trend Report.” Washington, DC: Pell Institute for the Study of Opportunity in Higher Education.

Calonico, Sebastian, Matias Cattaneo, and Rocio Titiunik. 2014. “Robust Data-Driven Inference in the Regression-Discontinuity Design.” Stata Journal 14(4):909–46.

Card, David. 1999. “The Causal Effect of Education on Earnings.” Handbook of Labor Economics 3:1801–63.

Carnevale, Anthony, and Jeff Strohl. 2013. “How Increasing College Access Is Increasing Inequality, and What to Do about It.” In Rewarding Strivers: Helping Low-Income Students Succeed in College, ed. Richard Kahlenberg, 71–190. New York: Century Foundation.

Carrell, Scott, and Bruce Sacerdote. 2017. “Why Do College Going Interventions Work?” American Economic Journal: Applied Economics 9(3):124–51.

Castleman, Benjamin, and Lindsay Page. 2014a. “A Trickle or a Torrent? Understanding the Extent of Summer ‘Melt’ among College-Intending High School Graduates.” Social Science Quarterly 95(1):202–20.

Castleman, Benjamin, and Lindsay Page. 2014b. Summer Melt: Supporting Low-Income Students through the Transition to College. Cambridge, MA: Harvard Education Press.

Castleman, Benjamin, and Lindsay Page. 2016. “Freshman Year Financial Aid Nudges: An Experiment to Increase FAFSA Renewal and College Persistence.” Journal of Human Resources 51(2):389–415.

Castleman, Benjamin, and Bridget Long. 2016. “Looking beyond Enrollment: The Causal Effect of Need-Based Grants on College Access, Persistence, and Graduation.” Journal of Labor Economics 34(4):1023–73.

Chetty, Raj, David Grusky, Nathaniel Hendren, Maximilian Hell, Robert Manduca, and Jimmy Narang. 2017. “The Fading American Dream: Trends in Absolute Mobility Since 1940.” Science 356(6336):398–406.

Chingos, Matthew. 2016. “Who Would Benefit Most from Free College?” Economics Studies at Brookings 1(15):1–4.

Clotfelter, Charles, Steven Hemelt, and Helen Ladd. 2017. “Multifaceted Aid for Low-Income Students and College Outcomes: Evidence from North Carolina.” Economic Inquiry 56(1):278–303.

Deming, David. 2009. “Early Childhood Intervention and Life-Cycle Skill Development: Evidence from Head Start.” American Economic Journal: Applied Economics 1(3):111–34.

Deming, David, and Christopher Walters. 2017. “The Impact of Price Caps and Spending Cuts on US Postsecondary Attainment.” NBER Working Paper 23736. Cambridge, MA: NBER.

Denning, Jeffrey, Benjamin Marx, and Lesley Turner. 2017. “ProPelled: The Effects of Grants on Graduation, Earnings, and Welfare.” NBER Working Paper 23860. Cambridge, MA: NBER.

DesJardins, Stephen, and Brian McCall. 2014. “The Impact of the Gates Millennium Scholars Program on College and Post-College Related Choices of High Ability, Low-Income Minority Students.” Economics of Education Review 38:124–38.

Gelman, Andrew, and Guido Imbens. 2014. “Why High-Order Polynomials Should Not Be Used in Regression Discontinuity Designs.” NBER Working Paper 20405. Cambridge, MA: NBER.

Goldrick-Rab, Sara, Robert Kelchen, Douglas N. Harris, and James Benson. 2016. “Reducing Income Inequality in Educational Attainment: Experimental Evidence on the Impact of Financial Aid on College Completion.” American Journal of Sociology 121(6):1762–817.

Goodman, Joshua, Michael Hurwitz, and Jonathan Smith. 2015. “College Access, Initial College Choice and Degree Completion.” NBER Working Paper 20996. Cambridge, MA: NBER.

Howell, Jessica, and Matea Pender. 2016. “The Costs and Benefits of Enrolling in an Academically Matched College.” Economics of Education Review 51:152–68.

Hoxby, Caroline, and Christopher Avery. 2013. “The Missing ‘One-Offs’: The Hidden Supply of High-Achieving, Low-Income Students.” Brookings Papers on Economic Activity 2013(1):1–65.

Hurwitz, Michael, Preeya Mbekeani, Margaret Nipson, and Lindsay Page. 2017. “Surprising Ripple Effects: How Changing the SAT Score Sending Policy for Low-Income Students Impacts College Access and Success.” Educational Evaluation and Policy Analysis 39(1):77–103.

Iacus, Stefano, Gary King, and Giuseppe Porro. 2009. “CEM: Software for Coarsened Exact Matching.” Journal of Statistical Software 30(9):1–27.

Imbens, Guido, and Thomas Lemieux. 2008. “Regression Discontinuity Designs: A Guide to Practice.” Journal of Econometrics 142(2):615–35.

Jacob, Brian, and Lars Lefgren. 2004. “Remedial Education and Student Achievement: A Regression-Discontinuity Analysis.” Review of Economics and Statistics 86(1):226–44.

Jepsen, Christopher, Kenneth Troske, and Paul Coomes. 2014. “The Labor-Market Returns to Community College Degrees, Diplomas, and Certificates.” Journal of Labor Economics 32(1):95–121.

Leppel, Karen. 2002. “Similarities and Differences in the College Persistence of Men and Women.” The Review of Higher Education 25(4):433–50.

Ludwig, Jens, and Douglas Miller. 2007. “Does Head Start Improve Children’s Life Chances? Evidence from a Regression Discontinuity Design.” The Quarterly Journal of Economics 122(1):159–208.

Ma, Jennifer, Matea Pender, and Meredith Welch. 2016. “Education Pays 2016: The Benefits of Higher Education for Individuals and Society.” New York: The College Board.

Mabel, Zachary, and Tolani Britton. 2017. “Leaving Late: Understanding the Extent and Predictors of College Late Departure.” Social Science Research 69:34–51.

McCrary, Justin. 2008. “Manipulation of the Running Variable in the Regression Discontinuity Design: A Density Test.” Journal of Econometrics 142(2):698–714.

Mealli, Fabrizia, and Carla Rampichini. 2012. “Evaluating the Effects of University Grants by Using Regression Discontinuity Designs.” Journal of the Royal Statistical Society: Series A (Statistics in Society) 175(3):775–98.

Oreopoulos, Philip. 2007. “Do Dropouts Drop Out Too Soon? Wealth, Health and Happiness from Compulsory Schooling.” Journal of Public Economics 91(11):2213–29.

Page, Lindsay, and Hunter Gehlbach. 2017. “How an Artificially Intelligent Virtual Assistant Helps Students Navigate the Road to College.” AERA Open 3(4).

Page, Lindsay, and Judith Scott-Clayton. 2016. “Improving College Access in the United States: Barriers and Policy Responses.” Economics of Education Review 51:4–22.

Pallais, Amanda. 2015. “Small Differences that Matter: Mistakes in Applying to College.” Journal of Labor Economics 33(2):493–520.

Reardon, Sean, Rachel Baker, and Daniel Klasik. 2012. “Race, Income, and Enrollment Patterns in Highly Selective Colleges, 1982–2004.” Palo Alto, CA: Center for Education Policy Analysis, Stanford University.

Rodriguez, Awilda. 2015. “Understanding the Parent PLUS Loan Debate in the Context of Black Families.” In Race in the Age of Obama: Part 2, ed. Donald Cunnigen and Marino Bruce, 147–70. Bingley, UK: Emerald Group Publishing.

Rosenbaum, Paul, and Donald Rubin. 1985. “Constructing a Control Group Using Multivariate Matched Sampling Methods that Incorporate the Propensity Score.” The American Statistician 39(1):33–38.

Schochet, Peter, Thomas Cook, Jonathan Deke, Guido Imbens, J.R. Lockwood, Jack Porter, and Jonathan Smith. 2010. “Standards for Regression Discontinuity Designs.” https://ies.ed.gov/ncee/wwc/Docs/ReferenceResources/wwc_rd.pdf (accessed December 4, 2015).

Schudde, Lauren, and Judith Scott-Clayton. 2016. “Pell Grants as Performance-Based Scholarships? An Examination of Satisfactory Academic Progress Requirements in the Nation’s Largest Need-Based Aid Program.” Research in Higher Education 57(8):943–67.
    OpenUrlCrossRef
  48. ↵
    1. Scott-Clayton Judith
    . 2011. “On Money and Motivation: A Quasi-Experimental Analysis of Financial Incentives for College Achievement.” Journal of Human Resources 46(3):614–46.
    OpenUrlAbstract/FREE Full Text
  49. ↵
    1. Scott-Clayton Judith
    . 2015. “The Shapeless River: Does a Lack of Structure Inhibit Students’ Progress at Community Colleges?” New York: Community College Research Center.
    1. Scott-Clayton Judith,
    2. Minaya Veronica
    . 2016. “Should Student Employment Be Subsidized? Conditional Counterfactuals and the Outcomes of Work-Study Participation.” Economics of Education Review 52:1–18.
    OpenUrlCrossRef
  50. ↵
    1. Scrivener Susan,
    2. Weiss Michael
    . 2013. “More Graduates: Two-Year Results from an Evaluation of Accelerated Study in Associate Programs (ASAP) for Developmental Education Students.” New York: MDRC Policy Brief.
  51. ↵
    1. Smith Jonathan,
    2. Pender Matea,
    3. Howell Jessica
    . 2013. “The Full Extent of Student–College Academic Undermatch.” Economics of Education Review 32:247–61.
    OpenUrlCrossRef
More than Dollars for Scholars
Lindsay C. Page, Stacy S. Kehoe, Benjamin L. Castleman, Gumilang Aryo Sahadewo
Journal of Human Resources Jul 2019, 54 (3) 683-725; DOI: 10.3368/jhr.54.3.0516.7935R1
