Abstract
We examine whether students respond to immediate financial incentives when choosing their college major. From 2006–2007 to 2010–2011, low-income students in technical or foreign language majors could receive up to $8,000 in SMART Grants. Because income eligibility was determined by a strict threshold, we estimate the causal impact of this grant on student major with a regression discontinuity design. Using administrative data from public universities in Texas, we find that income-eligible students were 3.2 percentage points more likely than their ineligible peers to major in targeted fields. We measure a larger impact of 10.2 percentage points at Brigham Young University.
I. Introduction
Choosing a college major is perhaps the most important decision students make in their college years, potentially influencing the jobs they are offered, their future earnings, and their contribution to society. Due to the perception that choice of major can have long-term impacts, both individually and collectively, policymakers have proposed several policies to influence this choice. Many policymakers and researchers have paid particular attention to science, technology, engineering, and mathematics (STEM) fields due to their high income potential and societal externalities. In this paper, we explore how students choose their major by investigating whether students respond to direct financial incentives when choosing their major. We do so by examining the National Science and Mathematics Access to Retain Talent (SMART) Grant, which offered financial awards to eligible students who majored in qualified technical fields.
Often, schooling is discussed as homogeneous when the type of training received can be quite heterogeneous. We explore how students choose among many types of human capital when making decisions about college major. Typically, economists have modeled choice among heterogeneous types of human capital (like college major) as agents weighing the costs and benefits of potential options. However, other factors may matter, such as how the major is structured, the composition of potential peers, or behavioral factors. Our study shows that small changes in the relative prices of different types of human capital can have relatively large effects on human capital acquisition.1 Our work also suggests that small financial incentives can alter the skill composition of the work force.
On an individual level, there is evidence that college major can have significant labor market impacts (Arcidiacono 2004; Arcidiacono, Hotz, and Kang 2012). For instance, in the 2009 and 2010 American Community Survey, college graduates with fine art degrees had an unemployment rate of 11.1 percent and an average salary of about $30,000; college graduates with engineering degrees had an unemployment rate of 7.5 percent and an average salary of about $55,000 (Carnevale, Cheah, and Strohl 2012). However, differences in labor market outcomes cannot solely be attributed to different returns to college majors due to selection into majors and subsequent selection into the labor force.2 It is interesting to note, however, that with or without a degree in a STEM field, acquiring technical skills (for example, taking more math courses in high school) may lead to a wage premium of as high as 20–25 percent (Joensen and Nielsen 2009). While there appear to be private benefits to majoring in STEM fields, there is also evidence of externalities, suggesting a justification for policy intervention.3
The U.S. Department of Education operated the SMART Grant program between the fall of 2006 and the summer of 2011 in an effort to direct college students into–and retain them in–certain fields. In particular, this program gave up to $8,000 to juniors and seniors who met a variety of criteria including majoring in technical fields or critical foreign languages, qualifying for Pell Grants (a federal needs-based grant program for college students), and having a GPA above 3.0. This program awarded $195 million in grants in the 2006–2007 school year (United States Department of Education 2007) and over $432 million in grants for the 2010–2011 school year (Office of Postsecondary Education 2011).
This paper investigates the effect of the SMART Grant program using student-level, administrative data from all public universities in Texas and from Brigham Young University (BYU), a large private university in Utah that received the largest amount of SMART Grants of any school in the nation in the first year of the program. By examining this program we hope to gain important insights into how students choose their major and the role that policy can play in the types of human capital acquired. Our research design takes advantage of a discontinuity in the Pell Grant eligibility criteria and uses a regression discontinuity design to uncover the causal impact of the program on various measures of student major. Our data include students who attended these schools from the year 2000–2001 to 2011–2012, which allows us to conduct a robustness test of this discontinuity in the years before the grant existed as well as for one year after the grant ended. Our results show that SMART Grants did induce students to major in STEM fields as juniors and seniors who would not have done so otherwise. We also provide suggestive evidence that this response operates more strongly through encouraging students already in SMART-eligible majors to persist in their major than through pushing students in noneligible majors to switch into an eligible field. The overall estimated effect is over twice as large at BYU as at public universities in Texas, and this effect appears to grow over time at both institutions. We explore this heterogeneity and find that the differences are consistent with salience being an important determinant of the effect of the program.
It may seem surprising that students could react to incentives that are small relative to the average wage differentials between these fields. Similarly striking results are found in Pallais (2015), where a small decrease in the price of sending ACT scores to additional schools resulted in students applying to many more, and more selective, schools. Large effects such as these may exist if students are myopic, misinformed about future earnings, or credit-constrained. Credit constraints may be particularly relevant in the case of SMART Grants because the grant was only available to low-income students. The responsiveness to relatively small amounts of financial incentives suggests that behavioral factors or market failures are likely to play a significant role in the acquisition of human capital.
Our work is part of a large literature on how students choose their college major. Previous research has identified many factors that appear to play a role in this choice, including tastes and ability (Wiswall and Zafar 2015; Stinebrickner and Stinebrickner 2011), career risk (Saks and Shore 2005), future earnings (Berger 1988; Wiswall and Zafar 2015; Beffy, Fougere, and Maurel 2012), credit constraints (Rothstein and Rouse 2011), career opportunities (Eide and Waehrer 1998), differential tuition (Stange 2015), and financial aid (Evans 2012; Sjoquist and Winters 2013).4 Our paper contributes to this literature by providing evidence that even small direct financial incentives can have large impacts on a student’s major.
Of the above papers, only two consider how direct financial incentives may motivate students to graduate in targeted fields. Stange (2015) uses university-level data to perform a difference-in-differences analysis of the rollout of differential tuition programs across the country. He finds that increasing the tuition of particular majors decreased the number of students graduating in some fields but increased it in others. He suggests the increase likely reflects his inability to decompose the effect into the response to a price change and changes in the quality or capacity of departments that expand with the additional tuition revenue. In contrast, we use individual-level data, which allows us to compare students within the same institution who qualify for direct financial incentives to those who do not.
Additionally, a working paper by Evans (2012) considers the impact of SMART Grants at Ohio public universities and finds little evidence that the SMART Grant program increased the number of students graduating in STEM fields. However, the data available to that study are limited in a number of ways. When we replicate Evans’ methodology and data restrictions in our data, we similarly find no significant impact of SMART Grants on students’ choice of major. The details of this replication can be found in the online appendix.
The rest of the paper is organized as follows. Section II gives details of the SMART Grant program. Section III describes the data used. Section IV discusses the econometric identification. Section V presents the results, and Section VI concludes.
II. The SMART Grant Program
The U.S. Federal Government operated the SMART Grant program from the fall of 2006 until the summer of 2011 with the purpose of increasing the number of students studying STEM fields and critical languages. This federal program was designed to complement the existing Pell Grant program. Students who were eligible for the grant received up to $2,000 per semester in their junior and senior years, for a maximum benefit of $8,000.5 In order to be eligible for a SMART Grant a student was required to:
be a U.S. citizen;
be Pell Grant-eligible during the award semester;
be majoring in physical, life or computer science, engineering, mathematics, technology, or critical foreign language fields—hereafter “SMART fields” or “SMART majors”;6
be a junior or senior (or fifth year student in a five-year program) as defined by credit hours;
be enrolled as a full-time student;7
have at least a 3.0 GPA on a 4.0 scale.8
To be Pell-eligible a student must submit a Free Application for Federal Student Aid (FAFSA). The FAFSA is used to compute an Expected Family Contribution (EFC) which is a score that represents how much a student’s family can afford to contribute to the student’s postsecondary education. This EFC determines what federal grant and loan programs a student is eligible for. The threshold that defined whether a student was eligible for Pell Grants increased gradually throughout the time frame of this study. In the 2006–2007 school year the EFC cutoff for Pell Grants was 4,110 and by 2010–2011 the EFC cutoff for Pell grants had risen to 5,273.
Students with an EFC below the Pell Grant threshold in a particular year received the full amount of the SMART Grant in that year, while any student above the threshold received no SMART Grant money that year.9 As a result, students local to the threshold were very similar in family income but they may have differed in their incentives to major in eligible fields by up to $4,000 per year.10 Our identification strategy will take advantage of this large discontinuity in incentives.
An additional issue that also may affect the efficacy of the SMART Grant program is how informed students were about the existence of the grant. Bettinger, Long, Oreopoulos, and Sanbonmatsu (2012) highlight how the salience and simplicity of federal grant and scholarship programs can have first-order impacts on program take-up. According to the National Post-Secondary Aid Survey, only 6.8 percent of Pell recipients in 2007–2008 knew about the SMART Grant program. Of the relatively few students who had heard of the program and were declared in SMART majors, 4.7 percent said SMART Grants had affected their choice of major. Of those who had heard of SMART Grants and who were undeclared, 19.1 percent said that the grants would “definitely” or “probably” affect their choice of major. Of the students who were declared in non-SMART majors and had heard of SMART Grants, 16.8 percent said they would “definitely” or “probably” consider switching majors.11 This survey suggests that among students who knew about them, SMART Grants had the potential of influencing choice of major, but given that so few students knew of the program’s existence by 2007–2008, the measured impact of SMART Grants may be small or undetectable in its early years, which is consistent with what is seen in the data. One reason that the program may not have been well known is that students did not have to file additional forms when applying for the SMART Grant beyond the FAFSA. Rather, the SMART Grant was automatically added to financial aid packages if the student was eligible.
III. Data
The data come from two administrative data sets. The first data set was assembled for the purposes of this study by the Texas Higher Education Coordinating Board (THECB). The Texas data contain information on every student who enrolled in Texas public universities from 2000–2001 to 2011–2012, providing a diverse set of public institutions and a large number of students enrolled in higher education. The data include the Expected Family Contribution (EFC) of every student who submitted a FAFSA and subsequently enrolled. They also include information on students’ declared major in every semester they enrolled, degrees received, parents’ education, student race, full-time/part-time status, cost of attendance, Texas residency, and gender.12 For this study we consider only students attending full-time because SMART Grants were available only to full-time students for the majority of the life of the grant. We also restrict the sample to students for whom the cost of attendance was high enough to enable the maximum Pell Grant in a given year.13
The second data set includes very similar information for Brigham Young University starting in 2001–2002. The biggest difference in the BYU data set is the additional information on classes taken by all students at BYU. The BYU data also include additional demographic variables, namely ACT/SAT score and high school rank (which we express as a percentile), but lack information about parental education. Unfortunately, the ACT/SAT score and class rank variables are not available for every student, but we only use them as covariates in our regression specification. Our results are robust to specifications that do and do not include these variables.14
Summary statistics for Texas students with an EFC within 2,000 units of the eligibility threshold are presented in Table 1; a similar table for BYU is also presented in Table 1 but with a window of 3,000. These windows roughly correspond to the largest window chosen when estimating with the respective data sets. The Texas sample from 2006–2007 to 2010–2011 is majority female and 30 percent Hispanic. Many students in Texas have parents who did not attend college. At public universities in Texas, 19.2 percent of juniors are declared in SMART-eligible majors. Less than 1 percent of these are declared in language majors, the majority being in STEM majors.
Summary Statistics
For BYU, the summary statistics reveal that the student body in this EFC window is 52 percent male and predominantly white. The fraction of students declared in SMART-eligible majors in their junior year is also higher than at schools in Texas in this period, at 27 percent, with a small fraction in language majors. We note, though, that before 2006 the fraction of SMART majors at BYU was more similar to schools in Texas at about 22 percent. The divergence between Texas and BYU in the years following the grant’s implementation is consistent with the results of our analysis, which finds much larger effects of the grant at BYU than in Texas.
BYU is unique in that it distributed more SMART Grants than any school in the nation in the first year of the program (McArdle, McCullough, and Seller 2007). In fact, 4.17 percent of students at BYU in our data received a SMART Grant in 2006–2007. By the end of the program in 2010–2011, 6.2 percent of the student body was receiving SMART Grants. The likely reason for this large number of SMART Grant recipients is that BYU has a very high fraction of students receiving Pell Grants. Over 30 percent of BYU’s student body received Pell Grants in 2001, one of the highest proportions of Pell recipients among comparable institutions in the nation (Heller 2004).
While BYU’s position as the top distributor of SMART Grants may give cause to question the external validity of estimates using BYU data, it still may provide insights into the impact of these grants in a population that was likely to be aware of the grant. During this time frame around 5 percent of all BYU students were receiving SMART Grants, which means that many students were likely to have heard about the program through informal channels. In fact, some majors at BYU publicly advertised at orientation meetings that choosing their major could result in up to an additional $8,000 in grants. Public universities in Texas, however, seem to more closely resemble national patterns for the fraction of students receiving SMART Grants. In the Texas data there were 2,808 SMART Grants awarded in the 2006–2007 school year, and 6,496 were awarded in 2010–2011.15
IV. Identification
A. Background
When a student completes the FAFSA, their EFC is computed from information about family income, assets, and the number of dependent children in the student’s family. This EFC determines eligibility for a host of federal grant and loan programs such as Pell Grants, SMART Grants, and subsidized student loans. Each year a minimum Pell Grant and an EFC threshold are set. If a student’s EFC is below the EFC threshold, then the amount of the student’s Pell Grant is a decreasing function of EFC that equals the minimum Pell Grant at the threshold and is zero for all values above the threshold.16 This means that if the student’s EFC is above the EFC threshold, no Pell Grant is received. Although the amount of a student’s Pell Grant is a function of their EFC, students receive the whole SMART Grant if their EFC is below the threshold that qualifies them for a Pell Grant of any size. Thus, this discrete cutoff in Pell eligibility serves as a discrete cutoff in SMART Grant eligibility and facilitates a fuzzy regression discontinuity design. The identification comes from the fact that students barely on one side of the Pell-eligibility cutoff are similar to students on the other side in both observable and unobservable ways, but they differ in their eligibility for SMART Grants. Estimates for the impact of the program are all local to the margin of eligibility—namely, students with families who are just barely eligible. Roughly, these are students with family incomes from $40,000 to $60,000 in 2010 dollars (Office of Postsecondary Education 2011).
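The contrast between the sliding Pell schedule and the all-or-nothing SMART award can be sketched as follows. This is a hypothetical linear version of the Pell schedule for illustration only (the actual statutory formula depends on many more inputs); the dollar figures are the 2006–2007 values cited in the text ($400 minimum Pell, EFC cutoff of 4,110, $4,000 annual SMART award).

```python
# Illustrative sketch: Pell aid decreases smoothly in EFC and hits the
# minimum grant at the threshold, while the SMART award jumps from the
# full amount to zero at the same threshold. Parameter values are the
# 2006-2007 figures cited in the text; the linear shape is an assumption.

def pell_grant(efc, max_pell=4050.0, min_pell=400.0, efc_threshold=4110.0):
    """Pell award as a decreasing function of EFC (hypothetical linear form)."""
    if efc > efc_threshold:
        return 0.0
    # Decline linearly from max_pell at EFC = 0 to min_pell at the cutoff.
    return max_pell - (max_pell - min_pell) * efc / efc_threshold

def smart_grant(efc, efc_threshold=4110.0, annual_award=4000.0):
    """SMART award is discontinuous: the full annual amount below the cutoff, else zero."""
    return annual_award if efc <= efc_threshold else 0.0

# A student just below the cutoff gets the minimum Pell plus the full SMART
# award; a student just above gets neither.
print(pell_grant(4110) + smart_grant(4110))   # just eligible
print(pell_grant(4111) + smart_grant(4111))   # just ineligible
```

The discontinuity in total aid at the threshold is therefore roughly the SMART award plus the minimum Pell Grant, which is why the design compares students immediately on either side of the cutoff.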
Because the threshold for SMART Grant eligibility is the same as the threshold for Pell Grant eligibility, using this threshold may conflate the effect of SMART Grants with the effect of Pell Grants. We address this by performing the same analysis on the Pell Grant-eligibility threshold in the years before SMART Grants were implemented and find that Pell Grant eligibility had no impact on the outcomes of interest in those years. We also perform the analysis for the one year in the data after the grant program ended and again find no effect. The likely reason for this null finding is that the Pell Grant for this marginal group was only $400 per year in 2006–2007 and grew to $976 per year in 2009–2010. This amount is small relative to SMART Grants, which paid $4,000 per year.17 Additionally, the Pell Grant offers no price incentive tied to major and would operate only through an income effect, which is less likely to affect SMART major participation. Moreover, later we will show that the largest responses measured were not in years with the largest minimum Pell Grants.
B. Estimation
The basic estimating equation that takes advantage of this discontinuity in EFC eligibility is:

$$Y_{iu} = \alpha + \gamma \mathbf{1}(\widetilde{EFC}_i \leq 0) + f(\widetilde{EFC}_i) + X_i'\beta + \eta_u + \varepsilon_{iu} \qquad (1)$$

where $Y_{iu}$ is the outcome of interest and $f(\widetilde{EFC}_i)$ is a flexible function of junior-year recentered EFC, where EFC is recentered so that $\widetilde{EFC}_i = EFC_i - MaxEFC$ and $MaxEFC$ is the maximum EFC in a given year that is allowable to qualify for Pell Grants. This centering means that $\widetilde{EFC}_i$ being zero or negative indicates a person was eligible for a Pell Grant. The variable $X_i$ is a vector of covariates including indicators for student race (African American, Hispanic, Asian, missing race, with White omitted) and indicators for parent’s highest educational attainment.18 University fixed effects, $\eta_u$, are included when using the Texas data.19
In some instances, the above equation is estimated with the eligibility indicator and the flexible function of recentered EFC interacted with indicators for student characteristics. This allows a comparison of the discontinuities for two groups of students. It also accommodates a Regression Discontinuity Difference estimator, which compares the discontinuity in the years of the program to the discontinuity in the years before the program.
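The estimating equation described above can be sketched as a local linear regression: an eligibility indicator whose coefficient captures the discontinuity, plus a recentered-EFC term whose slope is allowed to differ on each side of the cutoff, fit within a bandwidth around the threshold. The simulated data and variable names below are purely illustrative, not the authors' code or data.

```python
# Minimal sketch of the RD estimating equation: regress the outcome on an
# eligibility dummy, recentered EFC, and their interaction, restricted to
# a bandwidth h around the cutoff. Simulated data for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
efc_tilde = rng.uniform(-2.0, 2.0, n)        # recentered EFC, in $1,000s
elig = (efc_tilde <= 0).astype(float)        # Pell/SMART eligibility indicator
# Simulate a binary "SMART major" outcome with a 3-point jump at the cutoff.
y = (rng.uniform(size=n) < 0.19 + 0.03 * elig + 0.01 * efc_tilde).astype(float)

h = 1.0                                      # bandwidth (in $1,000s of EFC)
keep = np.abs(efc_tilde) <= h
X = np.column_stack([
    np.ones(keep.sum()),                     # intercept (alpha)
    elig[keep],                              # gamma: the discontinuity of interest
    efc_tilde[keep],                         # f(EFC): linear term
    elig[keep] * efc_tilde[keep],            # slope allowed to change at cutoff
])
coef, *_ = np.linalg.lstsq(X, y[keep], rcond=None)
gamma_hat = coef[1]
print(f"estimated discontinuity: {gamma_hat:.3f}")
```

In the paper's specifications, the linear `f` would be replaced with a more flexible function, covariates and university fixed effects would be added, and heteroskedasticity-robust standard errors computed; this sketch isolates only the discontinuity logic.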
This model was estimated in three different subsets of available data. First, we estimated the model in the set of students who were classified as juniors during any semester that SMART Grants were being distributed (that is, those that were juniors from the 2006–2007 to the 2010–2011 academic year). This would maximize the sample size of students who possibly could be influenced by the grant. Second, to limit the sample to those who could receive the grant and also who may have heard about SMART Grants as freshmen, we also estimate the model for those students who were juniors between the 2008–2009 and the 2010–2011 academic year. We anticipate that the effect would be larger in this subsample because students may be more aware of the program or have had longer to adjust their major. Third, we estimate the model in the set of students who graduated before the SMART Grant program started as a placebo test.
In order to be eligible for federal aid–and in many cases any financial aid–students must submit a FAFSA every year. The EFC calculated from the information on the FAFSA applies from the semester that the FAFSA is submitted until the following Fall semester.20 As a result, our data potentially contain several measures of EFC for each student. In our analysis, we use the EFC from the students’ junior year for several reasons.
First, since a student’s Pell Grant eligibility may vary from year to year, a student’s EFC as a freshman or sophomore will contain little information about their eligibility for a Pell Grant–and therefore a SMART Grant–in their junior or senior year. This is especially true for those students near the Pell Grant-eligibility threshold from whom our estimate is identified.
On the other hand, one may worry that a student learns of their junior year eligibility too late for it to have an impact on their major choice. We note, however, that students can file their FAFSA as early as January of the previous academic year. In fact, of the juniors in the 2007–2008 National Postsecondary Aid Survey cohort who eventually file a FAFSA, 67 percent have submitted it by the end of May and 83 percent by the end of July, meaning that the majority of students are aware of their SMART Grant eligibility many months before the start of their junior year. (See Figure A1 in the online appendix.) This is further exaggerated for students whose first semester as a junior occurs in Spring or Summer semesters due to the extra time to file the FAFSA. So given that junior EFC is most informative of SMART Grant eligibility, and given that this information is available to students early enough to influence their declared major, we use junior EFC as the running variable in our analyses.
C. Assumptions for Regression Discontinuity
One assumption of the regression discontinuity estimator is that students are not able to precisely manipulate their EFC to gain access to the grant. If students in SMART Grant-eligible majors precisely manipulated their EFC to be eligible for Pell Grants or were more likely to submit a FAFSA conditional on being Pell-eligible, the distribution of EFC would have a discontinuity at the eligibility threshold with additional weight to the left of the threshold (just-eligible students). Fortunately for our identification, the formula for determining EFC is complicated and opaque, using a large number of current and historical factors, making it difficult to manipulate EFC precisely. We test for manipulation and selection by analyzing the distribution of EFC around the threshold. Figure A2 displays the density of EFC reported in both the BYU and THECB data, and it does not show evidence of manipulation. Oddly, in both data sets, it appears that there are actually fewer students to the left of the threshold than to the right, which is the opposite of what would be expected if there were manipulation of EFCs or differential reporting. In formal testing for this manipulation as outlined in McCrary (2008), the discontinuity is significant in Texas when considering students who were juniors from 2006–2011, but the discontinuity drops in magnitude and is no longer statistically significant when considering juniors from 2008–2011. At BYU, the estimated discontinuity is never statistically significant and again goes in the direction of students moving out of eligibility. In other samples Turner (2013) and Evans (2012) find these same visually suggestive but statistically insignificant distributional attributes. We find this empirical feature curious but could find no explanation for it nor papers that explore this phenomenon. Given the direction of this effect, however, we find it unlikely that students are manipulating their EFC in a way that would bias our results.
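The logic of the manipulation check can be illustrated with a simplified stand-in for the McCrary (2008) local-linear density test: if there is no sorting across the cutoff, the number of students in a narrow symmetric window just left of the threshold should roughly match the number just right, which an exact binomial test can assess. The data below are simulated (no manipulation), and the window width is an arbitrary choice for illustration.

```python
# Simplified density check: compare counts of the running variable in a
# narrow symmetric window around the cutoff with an exact two-sided
# binomial test. This is a stand-in for the full McCrary local-linear
# density test, not a replication of it. Data are simulated.
import numpy as np
from math import comb

rng = np.random.default_rng(1)
efc_tilde = rng.uniform(-2.0, 2.0, 4000)   # recentered EFC, no manipulation

w = 0.2                                    # half-width of window around cutoff
n_left = int(np.sum((efc_tilde > -w) & (efc_tilde <= 0)))   # just-eligible
n_right = int(np.sum((efc_tilde > 0) & (efc_tilde <= w)))   # just-ineligible
n = n_left + n_right

# Exact two-sided binomial p-value for H0: equal density on both sides.
p = sum(comb(n, k) for k in range(n + 1)
        if abs(k - n / 2) >= abs(n_left - n / 2)) / 2 ** n
print(f"left={n_left}, right={n_right}, p={p:.3f}")
```

Manipulation into eligibility would show up here as `n_left` substantially exceeding `n_right` with a small p-value; the pattern the paper reports (slightly fewer students on the eligible side) points the opposite way.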
Another assumption is that observed and unobserved student characteristics do not vary discretely at the EFC-eligibility threshold. We test that observed student characteristics do not vary by estimating Equation 1 with student characteristics as the outcome variables; results are presented in Figure 1, which shows the estimated discontinuities for different variables in both the 2006/07–2010/11 and 2008/09–2010/11 time periods along with their 95 percent confidence intervals.21 We also test that school characteristics do not change by checking whether school characteristics, such as the fraction of SMART majors or Pell-eligible students at a university, change at the threshold. For all Texas schools there are 14 covariates considered, and in the time frames from both 2006–2011 and 2008–2011, there is never any statistically significant discontinuity in covariates. For the 11 coefficients at BYU from 2006–2011 there are no statistically significant differences at the 5 percent level. Similarly, for 2008–2011 at BYU only one coefficient is significant at the 5 percent level. Given that we are testing for discontinuities in 24 covariates in two time frames, finding only one that appears significant at the 5 percent level is what we would expect under the hypothesis that student characteristics are smooth through the threshold. Overall there is evidence that observable student characteristics do not vary discretely at the threshold for Pell/SMART eligibility, increasing our confidence in the causal estimates found below.
Covariate Checks
These figures show the estimated discontinuities at the income eligibility threshold with 95 percent confidence intervals for several predetermined covariates. Each covariate is tested in both the 2006/07–2010/11 time period as well as the 2008/09–2010/11 time period.
The primary outcomes considered are being declared in a SMART-eligible major at the beginning of a student’s junior or senior year or earning a SMART-eligible degree. Specifically, the junior major variable is a binary variable that indicates if a student is declared in a SMART-eligible major in the first semester that they are classified as a junior. This variable is only defined for students whom we observe in their junior year. The senior major variable is defined as unity if the student is declared a SMART major in the first semester of their senior year and zero if they are declared in a non-SMART major in their senior year or do not appear as seniors in the data.
The degree outcome is a binary variable that indicates whether a student receives a degree in a SMART Grant-qualified major. This variable is defined for all students who have a valid EFC measurement as a junior and equals one if the student receives a diploma in a targeted field in the time frame studied and zero if the student receives a degree in a non-SMART field or does not receive a degree. Because many students in the last years of our data will not have had sufficient time to graduate, the fraction of students graduating will be lower than it would be if we had additional years of data. At BYU we have data on coursework, so we also consider the fraction of credits earned in SMART fields in a student’s junior or senior year. Students who are not observed taking courses as seniors have the fraction of their courses in SMART fields coded as zero.
To confirm that the grant was administered in a discontinuous way, we consider actual receipt of the grant as an outcome as well. We express this as the total amount of SMART Grant dollars ever received as well as an indicator for whether a student ever receives SMART Grant money to provide evidence that there was a discontinuity in SMART Grant receipt. We perform this analysis separately for students who were declared as SMART majors as juniors, as well as for students who were not declared as SMART majors as juniors.
The optimal bandwidth, h, was chosen using the optimal bandwidth rule of thumb (Imbens and Kalyanaraman 2012) and is roughly 2.0 for the BYU data and 1.0 for the Texas data, although the actual optimum varies by outcome.22 We show later, however, that our results are not sensitive to our choice of bandwidth. Standard errors are corrected for heteroskedasticity in all specifications.
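The bandwidth-sensitivity check described above can be sketched by re-estimating the discontinuity over a grid of bandwidths and confirming the point estimate is stable. Here a fixed grid stands in for the Imbens and Kalyanaraman (2012) optimal-bandwidth rule, and the data are simulated for illustration.

```python
# Re-estimate the local linear RD jump under several bandwidths.
# A stable gamma_hat across bandwidths is the robustness pattern the
# text describes. Simulated data; the bandwidth grid is arbitrary.
import numpy as np

rng = np.random.default_rng(2)
efc_tilde = rng.uniform(-3.0, 3.0, 20000)    # recentered EFC, in $1,000s
elig = (efc_tilde <= 0).astype(float)
y = (rng.uniform(size=efc_tilde.size)
     < 0.19 + 0.03 * elig + 0.01 * efc_tilde).astype(float)

def rd_gamma(h):
    """Local linear RD estimate of the jump using bandwidth h."""
    keep = np.abs(efc_tilde) <= h
    X = np.column_stack([np.ones(keep.sum()), elig[keep],
                         efc_tilde[keep], elig[keep] * efc_tilde[keep]])
    coef, *_ = np.linalg.lstsq(X, y[keep], rcond=None)
    return coef[1]

for h in (0.5, 1.0, 2.0, 3.0):
    print(f"h={h:.1f}: gamma_hat={rd_gamma(h):.3f}")
```

Narrower bandwidths trade bias for variance: the estimate becomes noisier but relies less on the functional form of `f`, which is why reporting a range of bandwidths is informative.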
In all specifications, the parameter γ from Equation 1 is the coefficient of interest. It represents the average effect of a student becoming EFC-eligible for a SMART Grant in their junior year. That is, a student could receive the grant only if they were also eligible in other ways (for example, majoring in an appropriate field, having a high enough GPA, etc.). Because students may be eligible by EFC but not be eligible by other criteria (other than major), γ may be considered a lower bound on the impact for otherwise eligible students.
V. Results
A. Grant Receipt
As discussed above, using a single year’s EFC is not a perfect way to separate eligible and ineligible groups because students who are eligible in their junior year may no longer be eligible in their senior year. In the extreme, this could mean that students local to the eligibility threshold all may receive similar amounts of SMART Grant money on average, regardless of which side of the threshold they are on in their junior year. If this effect is so exaggerated that there is no measurable discontinuity in grant money received at the eligibility threshold, then a regression discontinuity design would not be appropriate because there is no discontinuity in treatment.
We test for a discontinuity in SMART Grant receipt with a regression discontinuity analysis of total SMART Grant awards. The total SMART Grant award variable is the sum of all of the SMART Grants received. We conduct this analysis separately for students who are declared as SMART majors in their junior year as well as for students declared in any other majors. Graphical results based on these regressions are found in Figures 2 and 3, and the estimates from these regressions are found in Table 2. Figures A3 and A4 in the online appendix present similar figures for students not declared in SMART majors as juniors.
Total SMART Grant, SMART Majors
The average total amount of the SMART Grants received is plotted against recentered junior EFC for students declared in SMART majors at the beginning of their junior year. Each dot represents the average for students in a bin of 200 EFC. EFC is recentered so that SMART eligibility occurs to the left of 0 and EFC is divided by 1,000. The size of the dot is proportional to the number of observations included in the average. The lines represent linear predictions allowed to vary on each side of the cutoff. The bandwidth used at Texas is 1.0 and the bandwidth at BYU is 2.0.
Ever Receive SMART Grant
The probability of ever receiving a SMART Grant is plotted against recentered junior EFC for students declared in SMART majors at the beginning of their junior year. Each dot represents the average for students in a bin of 200 EFC. EFC is recentered so that SMART-eligibility occurs to the left of 0 and EFC is divided by 1,000. The size of the dot is proportional to the number of observations included in the average. The lines represent linear predictions allowed to vary on each side of the cutoff. The bandwidth used at Texas is 1.0 and the bandwidth at BYU is 2.0.
SMART Grant Receipt
These regressions highlight several important considerations in our analysis. The figures show a clear and unambiguous discontinuity at the threshold for students declared in SMART majors, while for students not declared in SMART majors there is, as expected, no discontinuity in grant receipt. In Table 2, all of these discontinuities are significant at the 1 percent level for students in SMART majors and are zero for students not in SMART majors. In the SMART Grant amount regressions for the 2008–2009 to 2010–2011 sample, we estimate a discontinuity for students declared in SMART majors of about $589 for Texas students and $1,772 for BYU students. These estimates are all slightly smaller when we use the 2006–2007 to 2010–2011 samples. As expected, there is no discontinuity in SMART Grant dollars for students in non-SMART majors.
The magnitude of these discontinuities gives a sense of how “fuzzy” the discontinuity is. That is, not all students who have an EFC below the threshold and declare a SMART major qualify for SMART Grants, due to the other conditions of the grant. For instance, we see that the discontinuity is much larger at BYU than in Texas. Part of this can likely be explained by eligibility criteria like GPA. Unfortunately, we do not have the major-specific GPA in either the BYU or the Texas data, so we cannot calculate the GPA actually used to determine eligibility. However, 81 percent of BYU juniors below the EFC threshold have a cumulative GPA above 3.0, while the mean semester GPA for university students in Texas in 2008–2009 is approximately 2.7 and roughly half of student-semesters are below a 3.0. This suggests that the GPA requirement is likely to have been binding for more students in Texas than at BYU.
A second lesson from the figures is that we measure eligibility at a single point in time, while eligibility is actually determined each year. If students’ eligibility were entirely determined by their junior-year EFC, we would expect the level on the right (corresponding to ineligible students) to be zero. The positive values for ineligible juniors therefore give a sense of the fraction of students who are ineligible in their junior year but become eligible in later semesters. In Figure 3, there are nonnegligible positive values to the right of the threshold. In fact, in both data sets, the fraction of students who are eventually eligible for SMART Grants but just ineligible in their junior year is roughly half of the fraction who are barely eligible in their junior year by the EFC criterion. This is consistent with the story that Pell Grant eligibility for those near the threshold is effectively random.
B. Student Outcomes
1. Majors, diplomas, and courses
To test the impact of SMART Grants on student major, we look at a variety of outcomes. In both the Texas and BYU data, we have information on the declared majors of junior and senior students as well as on the diploma they eventually received. In the BYU data, we additionally have information on the fraction of classes taken in SMART-eligible fields. We conduct our analysis with a 2006–2007 to 2010–2011 subsample and a 2008–2009 to 2010–2011 subsample, but for these regressions we also measure the discontinuity for students who were juniors before 2006 as a robustness check. Results from these regressions are in Table 3. Graphical evidence is presented on junior major in Figure 4, on senior major in Figure 5, on degrees granted in Figure 6, and on courses taken at BYU in Figure 7.
Effects on Major
SMART Major in Junior Year
The probability of having a SMART major declared in the first semester of a student’s junior year is plotted against recentered junior EFC. Each dot represents the average for students in a bin of 200 EFC. EFC is recentered so that SMART-eligibility occurs to the left of 0 and EFC is divided by 1,000. The size of the dot is proportional to the number of observations included in the average. The lines represent linear predictions allowed to vary on each side of the cutoff. The bandwidth used at Texas is 1.0 and the bandwidth at BYU is 2.0.
SMART Major in Senior Year
The probability of having a SMART major declared in the first semester of a student’s senior year is plotted against recentered junior EFC. Each dot represents the average for students in a bin of 200 EFC. EFC is recentered so that SMART-eligibility occurs to the left of 0 and EFC is divided by 1,000. The size of the dot is proportional to the number of observations included in the average. The lines represent linear predictions allowed to vary on each side of the cutoff. The bandwidth used at Texas is 1.0 and the bandwidth at BYU is 2.0.
SMART Degrees
The probability of receiving a degree in a SMART field is plotted against recentered junior EFC. Each dot represents the average for students in a bin of 200 EFC. EFC is recentered so that SMART-eligibility occurs to the left of 0 and EFC is divided by 1,000. The size of the dot is proportional to the number of observations included in the average. The lines represent linear predictions allowed to vary on each side of the cutoff. The bandwidth used at Texas is 1.2 and the bandwidth at BYU is 2.0.
Fraction SMART Classes—BYU Only
The fraction of classes taken in SMART fields is plotted against recentered junior EFC. Each dot represents the average for students in a bin of 200 EFC. EFC is recentered so that SMART eligibility occurs to the left of 0 and EFC is divided by 1,000. The size of the dot is proportional to the number of observations included in the average. The lines represent linear predictions allowed to vary on each side of the cutoff. The bandwidth used for juniors is 2.0 and the bandwidth for seniors is 1.8.
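The binned scatter plots described in these figure notes can be constructed along the following lines. The simulated EFC values and outcome here are purely illustrative; only the binning scheme follows the notes: bins of 200 EFC units, which is a width of 0.2 after EFC is divided by 1,000, with dot sizes proportional to bin counts.

```python
import numpy as np

# Simulated recentered EFC (in $1,000s) and a hypothetical binary outcome
# with a jump at the eligibility threshold (x < 0).
rng = np.random.default_rng(1)
efc = rng.uniform(-4, 4, 10_000)
outcome = (rng.uniform(size=efc.size) < 0.2 + 0.05 * (efc < 0)).astype(float)

# Bin into 0.2-unit bins (200 EFC units), as in the figures.
bin_width = 0.2
bin_idx = np.floor(efc / bin_width).astype(int)
bins = np.unique(bin_idx)
centers = (bins + 0.5) * bin_width
means = np.array([outcome[bin_idx == b].mean() for b in bins])
counts = np.array([(bin_idx == b).sum() for b in bins])
# `centers`, `means`, and `counts` give the dot positions, heights, and
# sizes for a binscatter; linear fits are then run on the underlying
# (unbinned) data separately on each side of the cutoff.
```

Binning is for visualization only; the regression lines in the figures are fit to the microdata, not to the bin means.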
Figure 4 contains plots of the estimated regression lines superimposed over a binscatter plot for all of our specifications corresponding to the junior major outcome variable. In the Texas plots, a small but clear discontinuity can be seen at the threshold in the 2006–2011 data, and an even larger discontinuity can be seen in the 2008–2011 data. In the BYU plots, the discontinuity is much larger. Figure 5 presents parallel plots for the senior declared major outcome.
The estimates from these regressions in Table 3 tell the same story as the figures. In the Texas 2006–2011 sample, for both junior and senior major, we measure a positive but insignificant effect of about 1.5 percentage points. When we restrict the sample to students who were juniors from 2008–2009 to 2010–2011, the magnitude of the effect doubles, to 3.27 percentage points for junior major and 3.18 percentage points for senior major, and both are significant at the 5 percent level. This discontinuity indicates that roughly 3 percent of students who were income-eligible in their junior year responded to the incentives of the grant and adjusted, or persisted in, their choice of major.
This is consistent with students who were already several years into their university studies either being unaware of the program in its early years or facing switching costs too high to motivate changing into a qualified major. Including these early students attenuates our estimate to insignificant levels. The three percentage point increase is over a baseline SMART participation rate of 18 percent, a 17 percent increase over the baseline.
At BYU in the 2006–2007 to 2010–2011 sample, we measure a larger effect of almost seven percentage points for junior major, but this effect is only significant at the 10 percent level. For senior major the effect is larger, at eight percentage points, and is significantly different from zero at the 5 percent level. As in the Texas data, when we restrict the sample to 2008–2009 to 2010–2011, we measure an impact of 10 percentage points, significant at the 5 percent level. This gives further evidence of an increasing impact in the later years of the program. The increase is over a baseline of 22.4 percent, a 45 percent increase over the baseline.
As discussed above, we attribute a portion of the magnitude differences between Texas and BYU to the greater salience of the program. Because a much larger fraction of BYU students are eligible for Pell Grants, more students would have heard of the SMART Grant program through informal channels, making it more likely that the program could have an effect. Additionally, the differences in GPA eligibility between BYU and Texas likely play a role, since the incentive would not have been available to a greater fraction of students in Texas. It is also possible, however, that other characteristics of the student bodies or universities account for this heterogeneity, such as different policies for declaring majors, differential responses to the incentives across schools, or income-marginal students in Texas being less likely to be qualified along other margins such as citizenship. Anecdotally, we know that some BYU departments used the SMART Grant to recruit students into certain majors.23
In Texas, we are unable to detect an impact of SMART Grants on the number of diplomas awarded in SMART-eligible fields, as seen in Figure 6. There is no apparent discontinuity in the Texas plots, and in the BYU plots the estimated discontinuity is obscured by a lack of precision. The regression results in Table 3 confirm what we see in the figures: the impact of the grant on eligible degrees granted at Texas public universities is virtually zero, and the 6.6 percentage point effect measured at BYU is marginally statistically significant. This is likely because the data only contain degrees for students who had finished by 2012. Many students who were juniors during the life of the program had not graduated by 2012 and therefore are treated in our data as if the grant had no impact on their diploma. In a few years, when these students have graduated, it may be possible to measure the impact of SMART Grants on diplomas awarded. Because our data suggest that students at BYU responded more strongly and earlier to the program, it is unsurprising that a small impact on diplomas awarded can already be detected even with our limited data. Students in Texas, however, responded most strongly in the last year of the program. As a result, even those students who eventually graduated in a SMART field because of the grant would only be coded as having responded if they graduated no more than one year after they were first classified as juniors. This is uncommon, suggesting that a more accurate measure of this particular outcome would be possible if more years of data were available.
At BYU, we also have data on the specific courses students are taking.24 This allows us to test whether students are “gaming” the program by signing up for eligible majors to receive the SMART Grant money without taking courses in the major, never intending to complete it. We attempt to identify this by measuring the discontinuity as before, but using as the outcome variable the fraction of courses that a student takes in SMART-eligible departments. Despite a small sample size, the point estimates for both juniors’ and seniors’ course taking are positive and marginally statistically significant. These results give credence to the claim that the measured impacts on contemporaneous major reflect students adjusting their actual major in response to the program rather than gaming the system.
We also conduct a placebo test by performing the same regressions for students in the years before the SMART Grant was instituted. With one exception, each of these regression coefficients is close to zero and statistically insignificant. The exception is the effect on junior major at Texas, for which we measure a small but marginally significant impact of 1.7 percentage points. Because we only measure an impact as large as 3 percentage points in our 2008–2011 Texas regression, this placebo estimate cannot be statistically separated from the measured impact in the years the grant was operating. This may raise concerns that the effect we measure in our main specifications is not due to the SMART Grant but rather to other factors that existed before the SMART Grant program. Several things, however, suggest this placebo estimate should not be so concerning. First, the oddity disappears in the senior major placebo regression, which includes the same students measured a year closer to graduation. Second, this junior-year placebo oddity is not present in the BYU data. Additionally, in the year after the program there is no effect on student major declaration in the junior or senior year at Texas or at BYU, as can be seen in Table 4. This evidence suggests that the effect we measure is actually the impact of SMART Grants rather than Pell Grants or other programs that might vary discretely across the Pell Grant-eligibility threshold.
Yearly Discontinuities
We formally estimate the difference between the preperiod and the 2008–2009 to 2010–2011 period in Table 3 using a regression discontinuity difference estimator. In Texas a positive effect is always measured, though it is only statistically significant for senior major. At BYU the results are similar to the estimates using data only from 2008–2009 to 2010–2011, though they are slightly less precise, with only junior major being marginally statistically significant.
2. Effects by year
In addition to institutional heterogeneity, we can also examine temporal heterogeneity. If students have inertia and need time to respond to the incentive program, or if the salience of the program is growing over time, we would see an increasing impact of the program from year to year. To examine heterogeneity of the effect across time, we estimate the discontinuity separately for pairs of adjacent school years, except for the last year for which we have data. For example, one regression estimates the discontinuity for students who were juniors in the 2001–2002 and 2002–2003 school years. These estimates are plotted with their 95 percent confidence intervals in Figure 8. The regression results are found in Table 4.
Estimates by Year
The estimated discontinuity for the impact of SMART Grants on majors is plotted along with 95 percent confidence intervals. The years represent the end of a school year and the preceding two school years (for example, 2003 is the 2001–2002 and 2002–2003 school year). The exception is in 2012, which is only estimated using data from the 2011–2012 school year. A bandwidth of 1.1 is used for Texas and 2.5 is used for BYU.
Clearly, reducing the sample in each of these regressions reduces our ability to precisely measure the yearly impact. Nonetheless, several patterns emerge. First, across all of the sets of regressions, the only estimates reaching any level of significance are those corresponding to the 2009–2010 to 2010–2011 junior cohorts. These estimates meet 90 percent confidence at BYU for both junior and senior major and 95 percent confidence in Texas for junior and senior major. Their magnitudes are slightly larger than the 2008–2011 estimates reported above. Second, in every regression corresponding to years before SMART Grants were being distributed, the estimates are insignificant and effectively zero in magnitude.
The measured discontinuity drops sharply for junior and senior major when the grant expires in the 2011–2012 school year in both the BYU and the Texas data. This provides a falsification test in addition to the preperiod placebo. The estimated zero effect reinforces the idea that the measured discontinuities are related directly to the SMART Grant incentives rather than to other changes occurring at the discontinuity (for example, the Pell Grant).
We interpret these patterns as reinforcing our previous result that the impact of SMART Grants was small or absent in early years but grew over time. There are several reasons this pattern could emerge, but two seem most likely: First, students needed time to adjust their plans, so the first cohorts of students were less likely to adjust their major; and second, salience is likely to have increased throughout the life of the grant. Given that the impact of the grant falls to zero immediately in the year following the program, we find salience to be the more dominant factor relative to the inertia hypothesis. This also gives further merit to the hypothesis of increased salience at BYU due to its higher fraction of Pell-eligible students.
The difference in salience in early years between BYU and public Texas universities may also explain the heterogeneity of the impact of SMART Grants on degrees granted. Given that there was little effect on declared major in the early years of the program in Texas, we would not expect to see a large effect on diplomas awarded for this same cohort. On the other hand, we observe earlier effects on declared major at BYU and subsequently see a moderate though insignificant effect on diplomas within the timeframe of our data.
3. Specific majors
Because SMART Grants gave incentives for several classes of majors, there is also interest in decomposing the effect into the impact on each of these smaller classes. Of particular interest would be a decomposition into the impact on STEM majors and language majors. We do this by running separate regressions using a binary variable for the applicable subgroups. These results can be found in Table 5.
STEM and Language Outcomes
In Texas, we see a 3.08 percentage point increase in junior STEM majors and a 0.4 percentage point increase in junior language majors. The magnitudes are similar for senior majors. All of these estimates sit very close to the 95 percent confidence threshold. This suggests that for junior major, the impact on STEM fields accounts for 87 percent of the total impact, and for senior major, it accounts for 80 percent. The increase in language majors is notable because it is a 0.4 percentage point increase over a baseline of 0.7 percent for juniors and a 0.64 percentage point increase over a baseline of 0.9 percent for seniors. The results at BYU are too noisy to make any strong claims about the decomposition, but they again show that the bulk of the effect was in STEM fields. Ultimately, it appears that while language majors differ from STEM majors in many ways, financial incentives increased the number of declared majors in both cases.
We hoped to identify which majors these new students were drawn from by examining other classes of majors in a similar manner, but our results suffered from a lack of statistical precision, and no consistent patterns emerged.25 Although some of the regressions reached low levels of significance, none were strong enough to convincingly rule out spurious significance due to multiple testing.
C. Heterogeneity/Robustness
There is significant national interest in increasing the number of women and minorities in STEM fields. One might be interested, therefore, in whether SMART Grants had a differential impact on these groups. To test this, we run extended models that include interaction terms between the group indicator (for example, gender) and the slope and discontinuity terms. The coefficient on the interaction between the discontinuity variable and the group indicator identifies any between-group heterogeneity. Unfortunately, in each of these specifications, no significant differences could be identified. Given that our samples are only barely large enough to measure the main effect in many cases, this lack of result may simply reflect a lack of power.
In an effort to test for heterogeneity in the impact of SMART Grants by program salience, we use the Texas data and, as a proxy for how salient the program may have been at each school, the fraction of students graduating in SMART majors at a student’s institution in the year before the SMART Grant program was implemented (or a binary transformation of this variable). We then run the baseline analysis, including this proxy variable and an interaction between the proxy and the discontinuity variable, and omitting university indicators. Schools above the median fraction of SMART majors have an estimated discontinuity of 3.79 percentage points, which is statistically significant at the 10 percent level. The point estimate for schools below the median is essentially zero, at -0.008, and is not distinguishable from zero.26 Similar results are found when instead interacting with the continuous fraction of students in SMART majors in the year before the program, though the difference in discontinuities is statistically insignificant. These results are consistent with our hypothesis that salience plays a role in the differences in the program’s success across schools and over time, though we acknowledge that our proxy is imperfect; the fraction of students in SMART majors at a particular school may be correlated with a wide variety of school-level factors that also shape student response to this and other incentive programs.
As a final robustness check, we test how sensitive our results are to the choice of bandwidth. We do this by repeating our junior- and senior-declared major regressions with the Texas and BYU data, but with various bandwidths in a 500 EFC-unit neighborhood of the optimal one. We also examine bandwidths that are 1,000 or 2,000 EFC units larger than the optimal bandwidths. The coefficients from these regressions and their 95 percent confidence intervals are plotted in Figure 9 and reported in online appendix Table A2. The figure shows that our estimates are quite stable across all bandwidths tested. Generally, the wider bandwidths produce slightly smaller estimates, which we attribute to the increasing bias associated with larger bandwidths. The ideal comparison in a regression discontinuity setting is between the students just above and just below the cutoff. As data further from the discontinuity are used, the modeled relationship between EFC and major choice becomes more reliant on students who are increasingly dissimilar in family income. As a result, estimates using data closer to the cutoff are likely to be less biased but less precise. As an additional check on the functional form of f(x), we use a quadratic in recentered EFC that is allowed to differ on each side of the threshold. These results are also presented in online appendix Table A2 and are qualitatively very similar to the local linear results presented above, with the Texas estimates being slightly smaller and the BYU estimates slightly larger.
Various Bandwidths
Estimates of the impact of SMART Grants on majors are plotted for various bandwidths. The bandwidths vary by +/–0.5 around the optimal bandwidth. These estimates are for the 2008–2009 to 2010–2011 school years.
There remains the question of whether the impact of SMART Grants operates primarily through persistence in SMART fields or through switching into SMART fields from ineligible fields. We examine this by interacting the running variable and the discontinuity with an indicator for being declared in a SMART major in a student’s sophomore year. The results are presented in Table 6; we detect no significant differences in the discontinuities for students declared in SMART majors as sophomores in any of these regressions. However, the point estimates suggest that, if anything, the effects are concentrated among students who were declared in SMART majors as sophomores. Many students leave STEM majors as they advance through college, and it appears that the SMART Grant may have partially mitigated this flow out of STEM fields. Students already in SMART majors as sophomores also would have had another potential avenue for information about the grant: these students may have filed the FAFSA and learned of the grant’s existence because they were awarded it. Students who received the SMART Grant could then alter any plan to switch out of STEM, while students who did not receive the grant would face no such incentive.
Heterogeneity by Sophomore Major
VI. Conclusion
This analysis of the SMART Grant Program provides evidence that students respond strongly to direct financial incentives when choosing their major. These results also show that there can be a high level of heterogeneity in the impact over time and across institutions. They also indicate that there is a differential impact across fields of study. We are unable to precisely decompose how much of the effect measured is due to students persisting in eligible fields versus switching to eligible fields though our point estimates suggest that persistence may play a more significant role in this program than switching.
Several lessons emerge from this analysis. First, policymakers can influence the choice of major using targeted financial incentives. In our analyses, we estimate that this program increased the number of income-eligible juniors in STEM and language fields by more than three percentage points at Texas public universities and by more than ten percentage points at Brigham Young University. These results should be promising to policymakers interested in influencing the skill composition of the labor force. Similarly, they suggest that caution should be taken when implementing policies that charge differential tuition to students in degrees that are more expensive for schools to provide, since such programs may discourage students from majoring in those fields.
Second, students’ choices among heterogeneous human capital investments are affected by factors outside of long-term costs and benefits. Given the award amount and the average differential earnings between these fields, it appears that students are much more elastic to small financial incentives than one would expect with reasonable levels of discounting. This increases the potential influence of the types of policies outlined above and opens further policy-relevant questions of what drives students to respond so strongly. For instance, there are different implications for inequality if the high elasticity is primarily due to credit constraints rather than myopia. Further research is needed to better understand this phenomenon.
Lastly, salience plays a fundamental role in the success of these sorts of programs; unadvertised and unknown programs can be expensive while having little impact on outcomes of interest. As discussed, one explanation for the increasing effect over time in our data is that students may not have been aware of the program in its early years, but that as more and more students received the grant, salience increased. We find suggestive evidence that this may be the case: schools that began with a larger baseline of students in eligible majors also saw a larger impact of SMART Grants. If salience is a major determinant of heterogeneity over time and between schools, then policies should be designed to shorten this period of low salience through better advertising or other informational interventions.
In summary, the SMART Grant program is a useful tool for better understanding how students select into heterogeneous forms of human capital investment and what sort of policies may be most effective at influencing this decision. In the future, richer data sets that include more years than were available to us may allow researchers to understand longer-term outcomes, such as degrees awarded, and to explore the other factors that play a role in a student’s field of study. We also hope to use the SMART Grant in the future as an instrument to measure the impact of majoring in a STEM field on various labor outcomes such as employment, employment in a STEM field, and earnings.
Acknowledgments
The authors would like to thank the Texas Higher Education Coordinating Board and Brigham Young University for providing the data. They also would like to thank two anonymous referees, Sandra Black, Lawrence Katz, Dayanand Manoli, Amanda Pallais, Carole Turley, Robert Turley, participants in the University of Texas at Austin Labor Lunch, the Education and Transition to Adulthood Group of the Population Research Center at the University of Texas at Austin, and participants at the STATA Texas Empirical Micro Conference for helpful comments on the draft. They also would like to thank Kelli Bird for data on the timing of FAFSA filing. They claim all errors as their own. The data used in this article can be obtained by contacting the Texas Higher Education Coordinating Board and Brigham Young University. The authors are willing to provide guidance on how to acquire it.
Footnotes
↵1. The changes in incentives examined in this study are much smaller than average differences in earnings across these fields.
↵2. Hamermesh and Donald (2008) find that the earnings gap across majors decreases when controlling for hours worked and selection into the labor force.
↵3. For instance, Murphy, Shleifer, and Vishny (1991) show that the economy of countries with a higher fraction of engineering majors grows more quickly than the economy of countries with more law concentrators. The choice of major is a significant source of interest for the Federal Government of the United States. In fact, the U.S. government has claimed, “In the case of technical fields, these majors will benefit both national and individual competitiveness, increasing the nation’s economic security.” (United States Department of Education, 2006).
↵4. Many studies have found that merit-based financial aid programs have increased college enrollment (Kane 2003; Dynarski 2004; Cornwell, Lee, and Mustard 2005), decreased college dropout rates (Dynarski 2008), and raised GPAs (Scott-Clayton 2011). However, the evidence on how these programs impact course taking is mixed, with papers reporting that merit-based aid programs increase, decrease, and have no effect on course credit accumulation (Scott-Clayton 2011; Brock and Richburg-Hayes 2006; Angrist, Lang, and Oreopoulos 2009; Cornwell, Lee, and Mustard 2005). Turner (2013) illustrates that grant aid can be captured by the institution rather than fully realized by the student. Turner finds that 11 percent of Pell aid is captured by universities, though the estimate is smaller, at 4.9 percent, for public universities. We proceed with our analysis noting that some of the aid disbursed may be captured by universities but that the amount is likely to be small.
↵5. The award amount could not exceed the cost of attendance less Pell Grant receipts.
↵6. In practice, this was all foreign language majors in the later years. We use the definitions from 2011 to define which majors are SMART-eligible.
↵7. Starting in 2009, a prorated award was available to students who were enrolled in at least six credits.
↵8. Officially, this was a 3.0 GPA for course work required for the major. In practice, some school websites listed the requirement as a 3.0 cumulative GPA.
↵9. This is true provided the student was not already receiving other sources of aid that exceeded the Cost of Attendance. In practice, nearly all students received the full amount of the SMART Grant for a given semester.
↵10. EFC is computed yearly, and so an eligible junior in the fall semester would receive $4,000 more than an ineligible student.
↵11. These statistics from the NPSAS are the authors’ calculations.
↵12. Administrators at THECB feel most confident about the accuracy of the financial aid data starting in 2005. The only substantive variable we use from before that time is EFC, and it appears to follow patterns similar to the post-2005 data, so we feel confident using these data.
↵13. This restriction does not affect many students but simplifies the calculation of the cutoff for SMART Grants.
↵14. We use a mean-value imputation when high school percentile or ACT score is missing, along with a dummy variable for a missing observation; this mean-value imputation does not change the results significantly.
↵15. Some of this increase is likely due to relaxing the requirements for the grant, but some is also likely to represent real growth in SMART Grants distributed.
↵16. This function is a step function. In general, the function takes on the minimum Pell amount for a few hundred EFC units below the EFC threshold, though this varies from year to year.
↵17. During the summers of 2009, 2010, and 2011, students were eligible for a “third semester” of Pell Grants. Notably, students also were eligible for an additional semester of SMART Grants during this time.
↵18. At BYU this also includes information about ACT/SAT score as well as high school percentile and does not include parental education indicators.
↵19. As in Imbens and Lemieux (2008) and Lee and Lemieux (2010), estimating the equation using kernel regression with a rectangular kernel yields the same results as a linear regression on a local subsample that allows the slopes to vary on either side of the cutoff; as such, we estimate this equation using Ordinary Least Squares. The covariates are included only to increase precision and are not necessary for identification.
↵20. If a student experiences a life event that would change their EFC after the FAFSA has been submitted, the student may amend the FAFSA and, if they then qualify, receive federal grant money for the semester in which the amendment is submitted.
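The equivalence described in footnote 19 can be sketched in code: a rectangular kernel simply restricts the sample to observations within the bandwidth, and OLS on that subsample with an interaction term lets the slope differ on each side of the cutoff. This is an illustrative sketch, not the authors' actual code; the variable names (`efc` for the running variable centered at the cutoff, `y` for the outcome, `h` for the bandwidth) and the simulated data are our own assumptions.

```python
# Illustrative local linear RD estimator (rectangular kernel = local subsample OLS).
# Variable names and simulated data are hypothetical, not from the paper.
import numpy as np

def rd_local_linear(efc, y, h):
    """Estimate the discontinuity at the cutoff (efc = 0) via OLS on |efc| <= h,
    allowing the slope to vary on either side of the cutoff."""
    mask = np.abs(efc) <= h                  # rectangular kernel: keep obs within bandwidth
    x, out = efc[mask], y[mask]
    d = (x < 0).astype(float)                # eligibility indicator (EFC below the cutoff)
    X = np.column_stack([np.ones_like(x), d, x, d * x])  # intercept, jump, separate slopes
    beta, *_ = np.linalg.lstsq(X, out, rcond=None)
    return beta[1]                           # estimated jump at the cutoff

# Simulated example with a true discontinuity of 0.5 at the cutoff
rng = np.random.default_rng(0)
efc = rng.uniform(-2, 2, 5000)
y = 0.3 * efc + 0.5 * (efc < 0) + rng.normal(0, 0.1, 5000)
print(rd_local_linear(efc, y, h=1.0))        # close to 0.5
```

Because the rectangular-kernel estimator reduces to OLS on the trimmed sample, precision-improving covariates can be appended as extra columns of the design matrix without affecting identification.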
↵21. Regression results are available upon request.
↵22. For degrees, the bandwidth is 1.2; for total SMART Grant received, 1.6; and for ever receiving a SMART Grant, 0.9.
↵23. We reached out to all Texas public universities to examine whether similar advertising was done but received only a handful of responses. All respondents indicated that they had not done any recruiting using the SMART Grant.
↵24. The THECB only recently started collecting course-level data, so we could not conduct this analysis with their larger data set.
↵25. These results are available upon request.
↵26. The full results from these regressions are available upon request.
* Supplementary materials are freely available online at: http://uwpress.wisc.edu/journals/journals/jhr-supplementary.html
- Received April 2014.
- Accepted August 2015.