Abstract
We use a discontinuity in the test score disclosure rule for the National Secondary Education Examination in Brazil to test whether test score disclosure affects student performance in public and private schools. We find that the impact of test score disclosure on student performance differs between public and private schools. Our results suggest that this difference is driven by differences in the market incentives faced by these two types of school.
I. Introduction
The literature that studies the impact of school accountability on student performance is by now large.1 There are two different ways in which school accountability can have an impact on student performance. First, disclosure of information about how schools perform along different dimensions can lead students, parents, and schools to change their behavior in response to the information revealed. For example, a school can change its teachers or curriculum in response to an unsatisfactory performance of its students on standardized tests. Likewise, parents can choose a different school for their children if the performance of the school in which their children are enrolled is worse than the performance of nearby schools. Second, a school accountability system that ties rewards and punishments to school performance can directly affect the behavior of teachers and school managers by changing their incentives.
An important question in the literature on school accountability is whether the disclosure of information about school performance is enough to influence the behavior of teachers and school managers or one needs to provide explicit incentives to these agents by tying pay to school performance. Information disclosure by itself can have an impact on the behavior of teachers and school managers when schools are subject to (implicit) market incentives, and perceptions about school quality are affected by this disclosure. In this case, teachers and school managers can react to the disclosure of information about school performance even in the absence of an explicit link between pay and school performance.
In this paper, we take advantage of a discontinuity in the test score disclosure rule for the National Secondary Education Exam (ENEM) in Brazil and of differences in the incentives faced by public and private schools in the country to test whether the disclosure of the average ENEM scores of schools can by itself influence the behavior of teachers and school managers. The ENEM was created in 1998 to assess the proficiency of high school students in the country. In 2006 the Ministry of Education established that, beginning in that year, schools would have their average score on the previous year's ENEM publicly released if ten or more of their senior students had taken that exam. The introduction of this disclosure rule was thus unanticipated by schools at the time of the 2005 ENEM.
We use a regression discontinuity design to identify the impact of the disclosure of the average 2005 ENEM scores of schools on: (i) the average 2006 and 2007 ENEM scores of schools, (ii) the observable characteristics of the 2006 and 2007 ENEM takers, and (iii) the school observable inputs in 2006 and 2007. While we find no significant difference in the impact of test score disclosure on the average 2006 ENEM scores of schools between public and private schools—the impact of test score disclosure was not statistically significant in either type of school—we find that test score disclosure had a greater impact on the average 2007 ENEM scores of private schools than on the average 2007 ENEM scores of public schools. On average, the impact of test score disclosure on the average 2007 ENEM scores of schools was 0.10 to 0.17 standard deviations of the students' ENEM score distribution higher in private schools. Moreover, the impact of test score disclosure on the average 2007 ENEM scores of private schools was even stronger for treated private schools with average 2005 ENEM score below the median average 2005 ENEM score.2 On the other hand, we find no evidence that treated schools adjusted their observable inputs from 2005 to 2007 in a systematic way or that there were significant changes in the observable characteristics of the ENEM takers in treated schools over the same period of time. Finally, we find that the impact of test score disclosure on the average 2007 ENEM scores of private schools was higher in regions where the school market was less concentrated.
Our empirical findings suggest that test score disclosure by itself can affect the behavior of teachers and school managers as long as they are subject to market incentives. This conclusion is driven by the following pieces of evidence and institutional details.3 First, in the time period analyzed, test score disclosure did not seem to affect either the observable characteristics of the ENEM takers or the school observable inputs in treated schools, both factors that could affect student performance. This suggests that the channel through which the disclosure of the average 2005 ENEM scores of schools could have affected student performance in the 2007 ENEM was by changing the behavior of students, teachers, and school managers.4 Since students in public and private schools face similar incentives when taking the ENEM, the difference in the impact of test score disclosure on the average 2007 ENEM scores of schools between public and private schools in turn suggests that the response of teachers and school managers to test score disclosure differed in these two types of school. An important feature of the Brazilian school market is that teachers and managers in public schools are not accountable for student performance, because they have job stability and their earnings are not related to student performance—wages in Brazilian public schools are set by the government and vary exclusively with tenure and schooling level.5 Given that market incentives can, in principle, make teachers and managers in private schools accountable for student performance, it is then reasonable to conjecture that differences in the market incentives faced by public and private schools are behind our findings. The fact that the impact of test score disclosure on the average 2007 ENEM scores of private schools was greater in less concentrated school markets provides some evidence in support of our conjecture.
The rest of the paper is organized as follows. We review the related literature in the remainder of this section. Section II discusses the ENEM and the Brazilian school market and describes our data. Section III discusses our empirical strategy. Section IV presents our empirical results. Section V discusses possible mechanisms behind our empirical findings. Section VI concludes. The Appendix complements the discussion in Section V by presenting a formal model of school behavior that helps explain our empirical findings. The Online Appendix at http://jhr.uwpress.org contains other details omitted here.
Our paper belongs to the literature on the impact of information disclosure on student performance. Unlike the literature that investigates the impact of hard accountability systems on student performance, we are able to analyze the informational effect of test score disclosure on student performance in isolation—hard accountability systems not only change the information about school performance that is available to students, parents, and the schools themselves, but also directly affect the incentives faced by teachers and school managers.6
A basic premise of our analysis is that test score disclosure can influence perceptions about school quality. The literature on school accountability has found some evidence in support of this. For instance, Black (1999), Figlio and Lucas (2004), and Fack and Grenet (2010) find that test score reports affect housing prices in the United States. Hastings, van Weelden, and Weinstein (2007) and Hastings and Weinstein (2008) find that in the Charlotte-Mecklenburg school district in the United States the proportion of parents choosing higher-performing schools increased after information on school test scores became available. Koning and van der Wiel (2013) finds that in the Netherlands a drop in a publicly available ranking of school quality is correlated with student enrollment. Figlio and Kenny (2009) finds that private contributions to schools in Florida are responsive to school grades. Firpo, Possebom, and Ponczek (2014) finds that the disclosure of the average ENEM scores of schools had a significant impact on the tuition fees of private schools in Brazil.7
The papers most closely related to our work are Andrabi, Das, and Khwaja (2017) and Koning and van der Wiel (2012). Koning and van der Wiel (2012) finds that in the Netherlands average grades go up in schools that are poorly evaluated in a national newspaper ranking. Andrabi, Das, and Khwaja (2017) studies the impact of releasing report cards with student and school academic performance to parents in Pakistan and finds that the impact of test score disclosure on student performance is larger in private schools than in public schools. A difference between our paper and Andrabi, Das, and Khwaja (2017) is that in the experimental design these authors consider, the villages in which report cards are released are randomized. So, in Andrabi, Das, and Khwaja (2017), the comparison is between schools that are revealed to have a good performance and schools that are revealed to have a bad performance. In our quasi-experimental setting, the comparison is between schools that have their average ENEM scores revealed and schools that do not have their average ENEM scores revealed. Another difference is that report cards not only provide information about school quality, but also provide information about a student's own achievement. Finally, unlike these authors, we investigate the relationship between market concentration and the reaction of private schools to test score disclosure.
Our result that test score disclosure did not have an impact on student performance in the short run contrasts with Rockoff and Turner (2010), which finds that accountability measures introduced in New York City had an impact on student performance in the short run. An important difference between our setting and the setting that Rockoff and Turner analyze is that the accountability measures they consider feature both information disclosure and a hard accountability system. Our study thus suggests that a soft accountability system such as the one introduced in Brazil may fail to produce short-run impacts on student performance.
Finally, our paper is also related to the literature that studies how the disclosure of information about the quality of public schools affects incentives in the public sector. Hussain (2009) shows that disclosing school quality ratings affects the wages of school principals in England. However, unlike in Brazil, public schools in England have a governing board responsible for hiring and setting principals' wages. Firpo, Pieri, and Souza (2017) find that the reelection probability of mayors in Brazil depends on how the schools under their administration are ranked according to an index created to assess the quality of public schools.
II. Institutional Background and Data
In this section, we first discuss the ENEM and the introduction of the test score disclosure rule in 2006. Then, we briefly discuss the school market in Brazil. Finally, we describe our data.
A. The ENEM
The ENEM is a nonmandatory exam organized by the National Institute of Educational Studies and Research (INEP), which is part of the Brazilian Ministry of Education.8 Until 2008, the ENEM was a one-day exam consisting of an essay and 63 multiple-choice questions covering four different subjects—humanities, mathematics, natural sciences, and Portuguese—with scores on a 0–100 scale standardized to have a mean of 50. The ENEM takes place once a year, always in the second semester. A student is eligible to take the ENEM as a senior, that is, if he is in the last year of high school, or if he has already graduated from high school. A student can take the ENEM more than once. The number of ENEM takers has greatly increased over time. For example, from 2001 to 2008, the take-up rate increased from 31.4 to 61.8 percent in public schools and from 25.2 to 72 percent in private schools. The ENEM is used as an admission exam by many Brazilian universities. It is also used to select the beneficiaries of ProUni, a college voucher program run by the federal government. There were no centralized admission exams for Brazilian universities before the ENEM.
In February of 2006, the INEP released for the first time the average ENEM scores of schools in the previous year. Figure 1 shows a timeline of the events relevant for our analysis. Notice that the disclosure of the average 2005 ENEM scores took place roughly six months before the 2006 ENEM, leaving a relatively short amount of time for schools and students to react to test score disclosure before the 2006 ENEM. Also notice that the 2005 ENEM scores were disclosed only after the enrollment for the 2006 school year took place.
The stated purpose for releasing the average ENEM scores of schools was to help teachers and school managers identify their schools' strengths and weaknesses. Due to a concern that the average ENEM score of a school with a small number of seniors taking the ENEM would not be representative of the proficiency of the school's current students and that it would be easier to identify individual scores in such a school, the INEP adopted the rule that only schools with ten or more seniors taking the ENEM would have their average ENEM scores released. Consistent with this rule, a school's average ENEM score is computed using only the ENEM scores of its senior students. From now on, when we refer to the number of ENEM takers in a given school, we mean the number of senior students in the school who take the ENEM.
Figure 2 shows how average ENEM scores by school appear on the INEP's website.9 Although the website does not contain a ranking of schools based on their average ENEM scores, several newspapers in Brazil produce local, regional, and national rankings. Figure 3 shows a school ranking from a major newspaper in Brazil. Many private schools also advertise their average ENEM scores in the media and on their websites. Moreover, some real estate websites report the average ENEM scores of the schools in a given location.10 These facts suggest that parents, teachers, and school managers care about the average ENEM scores of schools.
Given that the ENEM is a relatively short exam, an important concern is whether the average ENEM scores of schools provide an informative signal of school quality. Firpo, Possebom, and Ponczek (2014) provide evidence that the average ENEM scores of schools are informative about school quality by showing that there is a positive correlation between the tuition fees of private schools and their average ENEM score, and that this correlation is stronger for private schools whose performance at the ENEM was harder to predict before the introduction of the test score disclosure rule in 2006.
B. The Brazilian School Market
Private secondary schools in Brazil are predominantly for-profit and constitute a small segment of the school market. For instance, in 2007, only 11 percent of the students enrolled in secondary schools were enrolled in private schools.11
There are important differences between public and private secondary schools in Brazil. Repetition rates, dropout rates, and the proportion of over-age students are substantially higher in public schools. For instance, in 2007, the dropout rate in private secondary schools was 0.6 percent, while the overall dropout rate in secondary schools was 13.2 percent. Likewise, in the same year, 5.6 percent of students in private secondary schools repeated a grade, and 8.9 percent of them were over age, while the corresponding proportions in all secondary schools were 12.7 and 42.5 percent, respectively.12
These facts reflect important differences between public and private secondary schools. As we show below, the ENEM data reveal that students in public secondary schools are from lower income families and have less educated parents than students in private secondary schools. Measures of school quality also differ between public and private secondary schools. The ENEM data reveal that private secondary schools have better observable inputs.
C. The Data
We use the 2005–2007 ENEM microdatabases and the 2005–2007 school census. The ENEM microdatabases have individual information on the ENEM scores and socioeconomic characteristics of ENEM takers, and the school census has information on school observable inputs.
Using the ENEM microdatabase, we can determine the number of ENEM takers and their scores for each school in a given year. Thus, we are able to compute average ENEM scores for schools above and below the cutoff point of ten ENEM takers. The average ENEM score of each school is computed using only the ENEM scores of the school's senior students. Crucially for our analysis, the ENEM microdatabases became available only in 2010, so students, teachers, and managers in schools with fewer than ten ENEM takers in 2005 had no access to their schools' average 2005 ENEM scores in 2006 and 2007.13
Our sample consists of all public and private secondary schools that are located in one of the metropolitan areas in Brazil that have a state capital within their boundaries. There are 21 such metropolitan areas, and all regions of the country are represented. The reason to exclude some of the state capitals from our sample—Brazil has 27 state capitals, including the federal capital—is that they did not have enough public and private schools around the cutoff point of ten ENEM takers in 2005 for us to exploit the treatment discontinuity.14
Tables 1 and 2 pool data from 2005 to 2007 and report descriptive statistics for public and private schools in this period. We divide each type of school (public or private) into three groups: schools with fewer than 10, 10–19, and more than 19 ENEM takers in 2005. Students enrolled in larger private schools have higher scores and better socioeconomic characteristics than students enrolled in smaller private schools. However, larger private schools do not present better observable inputs than smaller private schools. The pattern is the opposite for public schools. On the one hand, there is no clear association between school size and student characteristics. On the other hand, larger public schools are better equipped than smaller public schools. Finally, as expected, we find large differences between public and private schools. On average, students in private schools have better socioeconomic characteristics than students in public schools, and private schools have better observable inputs than public schools.
III. Empirical Strategy
We take advantage of the discontinuity in the exogenous disclosure rule for the average 2005 ENEM scores of schools to test whether test score disclosure led to a change in either the observable characteristics of ENEM takers or the school observable inputs in 2006 and 2007. We also test whether the impact of test score disclosure on the average 2006 and 2007 ENEM scores of schools differed between public and private schools and whether this difference was larger for schools with average 2005 ENEM score below the median average 2005 ENEM score. Finally, we test whether the impact of test score disclosure on student performance in private schools was greater in metropolitan areas with less concentrated school markets.
Our benchmark specification is
(1) Yism = β·ds + γ·(ds · privs) + ϕY(T2005s) + ψY(T2005s)·ds + ζY(T2005s)·privs + X′ismθ + (ds · Xism)′λ + ηm + εY,ism,

where T2005s is the number of senior students who took the 2005 ENEM in school s, ds equals one if T2005s is at least ten and equals zero otherwise, and privs equals one if school s is private and equals zero otherwise. The variable Yism is the outcome of interest for student i in school s in metropolitan region m. The functions ϕY, ψY, and ζY are flexible functions of the running variable. We control for a number of student and school characteristics (Xism).15 Since public and private schools have significant differences in terms of their observable characteristics, we also add an interaction term between the controls Xism and the treatment variable ds to allow for heterogeneity in the impact of test score disclosure driven by differences in the observable characteristics of schools. The variables ηm and εY,ism are, respectively, a metropolitan area fixed effect and an unobservable term that is mean-independent of ds at T2005s = 10 under usual assumptions on the continuity of ϕY, ψY, and ζY at T2005s = 10. We center all covariates around their mean value for public schools near the cutoff point of ten ENEM takers, so that the coefficient β of ds measures the impact of test score disclosure on the outcome Yism in public schools. The coefficient γ of ds · privs measures the additional impact of test score disclosure on the outcome Yism in private schools and is the coefficient of interest in our analysis.
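As a concrete illustration, the regressors in this specification (the treatment dummy, its interaction with the private-school indicator, and covariates centered at public-school means near the cutoff) can be assembled as follows. This is a minimal sketch only; the column names takers_2005, priv, male, and income_hi are hypothetical stand-ins for the actual variables in the microdata.

```python
import numpy as np
import pandas as pd

def build_design(df, cutoff=10, h=11.21):
    """Sketch of the regressor construction for the benchmark specification.

    Covariates are centered around their mean for PUBLIC schools within
    the bandwidth of the cutoff, so that the coefficient on `d` measures
    the effect in public schools and the coefficient on `d_priv` the
    additional effect in private schools. Column names are hypothetical.
    """
    df = df.copy()
    df["d"] = (df["takers_2005"] >= cutoff).astype(int)   # treatment dummy
    df["d_priv"] = df["d"] * df["priv"]                   # coefficient of interest

    # center controls at public-school means near the cutoff
    controls = ["male", "income_hi"]
    near = df["takers_2005"].sub(cutoff).abs() < h
    pub_means = df.loc[near & (df["priv"] == 0), controls].mean()
    df[controls] = df[controls] - pub_means

    # interactions between centered controls and the treatment dummy
    for c in controls:
        df[f"d_x_{c}"] = df["d"] * df[c]
    return df
```

The remaining pieces of Equation 1 (the flexible functions of the running variable and the metropolitan area fixed effects) would enter as polynomial terms in takers_2005 and dummy variables, respectively.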
We run local linear regressions with triangular kernels. For each regression, we choose the bandwidth to minimize the feasible mean squared errors criterion proposed by Calonico, Cattaneo, and Titiunik (2014a,b). Since we include interaction terms with the treatment dummy and metropolitan area fixed effects, we pool the data from public and private schools and choose the optimal bandwidth including all covariates based on Calonico et al. (2016). We run the regressions at the student level and cluster the standard errors at the school level.
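For intuition, a triangular-kernel local linear RD estimator of this kind can be sketched in a few lines. This is an illustrative simplification, not the paper's actual implementation: it omits the controls, fixed effects, private-school interaction, and clustered standard errors, and it takes the bandwidth as given (here the paper's reported 11.21) rather than choosing it by the Calonico, Cattaneo, and Titiunik criterion.

```python
import numpy as np

def local_linear_rd(y, takers, cutoff=10.0, h=11.21):
    """Sharp RD jump estimate via triangular-kernel local linear regression.

    y       : student-level outcome (e.g., standardized ENEM score)
    takers  : running variable (number of 2005 ENEM takers in the school)
    h       : bandwidth (taken as given in this sketch)
    """
    x = np.asarray(takers, dtype=float) - cutoff       # center at the cutoff
    w = np.clip(1.0 - np.abs(x) / h, 0.0, None)        # triangular kernel weights
    keep = w > 0
    d = (x >= 0).astype(float)                         # treated: ten or more takers
    # intercept, treatment jump, and separate slopes on each side of the cutoff
    X = np.column_stack([np.ones(x.size), d, x, d * x])
    sw = np.sqrt(w[keep])                              # weighted least squares
    beta, *_ = np.linalg.lstsq(X[keep] * sw[:, None],
                               np.asarray(y, dtype=float)[keep] * sw,
                               rcond=None)
    return beta[1]                                     # estimated jump at the cutoff
```

Allowing separate slopes on each side of the cutoff is what makes the regression "local linear" rather than a simple difference in means within the bandwidth.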
There are two important conditions for the validity of our regression discontinuity design: first, that there is no manipulation of the running variable and, second, that there are no preexisting systematic differences between schools above and below the treatment discontinuity. If either of these conditions is not met, then we cannot assign to test score disclosure a causal impact on the outcomes of interest, as there would be confounding factors affecting the outcome variables beyond test score disclosure.
We check the validity of our regression discontinuity design by: (i) observing whether there are jumps in the histogram of the running variable around the cutoff point of ten ENEM takers and (ii) using the average 2005 ENEM scores of schools, the observable characteristics of the ENEM takers in 2005, and the school observable inputs in 2005 as outcome variables to see if there are any significant preexisting differences between schools above and below the treatment discontinuity. A jump in the histogram of the running variable around the cutoff point of ten ENEM takers would indicate that schools were able to manipulate their treatment status. In this case, any difference between schools above and below the treatment discontinuity could be attributed to unobserved factors correlated with the decision to have test scores disclosed in 2005. Significant preexisting differences between schools above and below the treatment discontinuity would imply that test score disclosure is not the causal factor explaining the differences observed after the disclosure.
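The first of these checks can be approximated numerically. The sketch below is a deliberately crude stand-in for a formal density test (such as McCrary's): it simply compares school counts in a small window on each side of the cutoff, where the window width of three is an arbitrary choice for illustration.

```python
import numpy as np

def density_balance(takers, cutoff=10, window=3):
    """Crude bunching check around the cutoff.

    Compares the number of schools within `window` units below the cutoff
    to the number within `window` units at or above it. Under no
    manipulation of the running variable, the two counts should be of
    comparable size; a large imbalance would suggest that schools sorted
    themselves into or out of treatment.
    """
    takers = np.asarray(takers)
    below = int(np.sum((takers >= cutoff - window) & (takers < cutoff)))
    above = int(np.sum((takers >= cutoff) & (takers < cutoff + window)))
    return below, above
```

Figure 4 makes the same point graphically for the actual sample.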
Even if the conditions for the validity of our regression discontinuity design hold, this does not necessarily mean that test score disclosure is the main mechanism explaining the differences in the outcome variables. For instance, it is possible that the difference in the impact of test score disclosure on average 2007 ENEM scores between public and private schools happened by chance. To discard this possibility, we run our benchmark specification (Equation 1) using several false cutoff points as placebo tests. It could also be that schools were able to manipulate the characteristics of their ENEM takers in 2007. Since the ENEM is a voluntary exam, however, there is no reason to believe that treated and control schools could affect their students' decision to participate in the 2007 ENEM differently. Another possibility is that school mortality in the 2006–2007 period differed between treated and control private schools. We address all these issues in the next section.
Also notice that in any regression discontinuity design there is the potential for spillover effects. In our context, spillovers would occur naturally as teachers and managers in schools whose test scores were not revealed could have changed their behavior in response to test score disclosure in other schools. In particular, if for some reason spillover effects were stronger in public schools, then this could explain why test score disclosure had no impact on the average 2007 ENEM scores of public schools. We address this issue in the next section as well.
Finally, it is important to stress that one of the limitations of any regression discontinuity design is its potential lack of external validity, as it estimates the impact of a policy intervention in a subset of the population. Our study is no exception. In our benchmark regression for the impact of test score disclosure on the average 2007 ENEM scores of schools, the optimal bandwidth is 11.21 ENEM takers. The students enrolled in schools with fewer than 21 ENEM takers in 2005 constituted a small fraction of the 2005 ENEM takers. Indeed, only 19.3 percent of the 2005 ENEM takers enrolled in private schools were from private schools with fewer than 21 ENEM takers. This proportion was even smaller in public schools: 5.2 percent. Moreover, the descriptive statistics in Tables 1 and 2 show that there are important differences between schools with fewer than 20 ENEM takers in 2005 and schools with 20 or more ENEM takers in 2005, especially among private schools. Therefore, the extrapolation of our results to other populations should be considered with caution.
IV. Empirical Results
In this section, we first check the validity of our regression discontinuity design and the robustness of our empirical results. We then analyze the impact of test score disclosure on the observable characteristics of schools and ENEM takers in 2006 and 2007 and on the average 2006 and 2007 ENEM scores of schools. Following that, we analyze how the impact of test score disclosure on student performance in private schools depends on concentration in the school market. We conclude by analyzing whether there were spillover effects in the impact of test score disclosure.
A. Confounding Factors and Robustness
In order to test whether schools manipulated their number of ENEM takers in 2005, we first check whether there is a jump in the frequency of schools at the treatment discontinuity. Figure 4 shows that there is no discernible jump at the treatment discontinuity. This evidence is not surprising. The ENEM is a voluntary exam, and schools only learned about the test score disclosure rule several months after the 2005 ENEM took place.16
As further evidence that schools did not manipulate their number of ENEM takers in 2005, Figure 5 shows that there is no jump in the average 2005 ENEM scores of schools at the treatment discontinuity for either public or private schools; we standardize the 2005 ENEM scores of students to have a mean of zero.17
We also do not find statistically significant differences in the observable characteristics of the 2005 ENEM takers and the school observable inputs in 2005 between treated and control schools for either type of school.18
In order to check the robustness of our empirical results, we falsify the treatment discontinuity and use alternative cutoff points other than ten ENEM takers. Since the optimal bandwidth in our benchmark regression for the impact of test score disclosure on the average 2007 ENEM scores of schools is 11.21 ENEM takers, we run the same regression for all false cutoff points from ten to 20 ENEM takers. For false cutoff points below the true cutoff, we include only schools with fewer than ten ENEM takers, while for false cutoff points above the true cutoff, we include only schools with more than ten ENEM takers. We find that there is no difference in the impact of test score disclosure between public and private schools in the placebo treatment: the coefficient of the interaction term ds ⋅ privs in Equation 1 is not statistically significant for any of the false cutoff points tested. Figure 6 depicts the t-statistic for the coefficient of the interaction term ds ⋅ privs for all false cutoff points considered.
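The sample restriction used in these placebo regressions can be made precise with a small helper. This is a sketch of the selection rule only; the placebo estimation itself would rerun the benchmark regression on each selected subsample.

```python
import numpy as np

def placebo_mask(takers, false_cutoff, true_cutoff=10):
    """Select the estimation sample for a placebo regression at `false_cutoff`.

    Following the rule described in the text: placebo cutoffs below the
    true cutoff use only schools with fewer than ten ENEM takers, and
    placebo cutoffs above it use only schools with more than ten, so the
    false "jump" is always estimated within an all-control or all-treated
    sample and can never pick up the true treatment effect.
    """
    takers = np.asarray(takers)
    if false_cutoff < true_cutoff:
        return takers < true_cutoff    # untreated schools only
    return takers > true_cutoff        # treated schools only
```

Keeping treated and control schools from mixing within a placebo sample is what makes a significant placebo coefficient evidence of a specification problem rather than of the treatment itself.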
A different concern is that schools manipulated the characteristics of their 2007 ENEM takers by, for instance, “cherry picking” which of their students would take the exam. The ENEM, however, is a voluntary exam, and its stakes are high for students, as it is used as an entry exam by many colleges and universities in Brazil and is also used to select students for ProUni, a college voucher program run by the federal government. Moreover, the enrollment process for the ENEM is done through the internet and is completely outside of the schools' control. Our finding in the next subsection that test score disclosure had no impact on the observable characteristics of the ENEM takers and the proportion of ENEM takers in schools in 2007 is consistent with these observations.
Thus, there is no reason to believe that treated and control schools could have affected their students' decision to participate in the 2007 ENEM differently.
Finally, one could also conjecture that treated private schools with low average 2005 ENEM scores were more likely to close before 2007. If this were the case, then our results on the impact of test score disclosure on the average 2007 ENEM scores of private schools would be biased due to selection among treated private schools. However, among schools with fewer than 21 ENEM takers in 2005, there is no statistically significant difference between treated and control private schools in the probability of having no ENEM takers in 2006 and 2007.19
B. Observable Characteristics of ENEM Takers and Schools
Tables 3 and 4 show, respectively, estimates for differences in the observable characteristics of the 2006 and 2007 ENEM takers between treated and control schools for each type of school. We consider the following student characteristics: gender, whether the student is in the correct school grade, race, parental education (college degree or not), and family income (income above or below ten times the minimum wage). The first three columns in each table show the results for public schools, while the last three columns show the results for private schools. We ran the regressions for each type of school separately and included as covariates the remaining control variables and the metropolitan area fixed effects. We find that, except for a reduction in the proportion of male students taking the 2006 ENEM in treated private schools, test score disclosure had no significant impact on the observable characteristics of the 2006 and 2007 ENEM takers in either public or private schools.20
Tables 5 and 6 below show that while in 2006 test score disclosure had a significant impact on the proportion of teachers with college degrees in public schools (a reduction) and on the staff/student ratio in private schools (an increase), these changes did not persist in 2007. We consider the following school inputs: total school enrollment, proportion of ENEM takers, proportion of teachers with a college degree, probability of having a science lab, and ratios of teachers, computers, and staff to students. As above, the first three columns in both tables show the results for public schools, while the last three columns show the results for private schools. Notice, in particular, that there are no statistically significant differences in the proportion of ENEM takers in 2007 between treated and control schools of either type. Together with the results in the previous paragraph, this suggests that schools were not able to manipulate their students' decisions to take the 2007 ENEM.
C. Average ENEM Scores
We first consider the impact of test score disclosure on the 2006 ENEM scores of schools. Figure 7 shows the relationship between the average 2006 ENEM scores of schools and the number of ENEM takers in 2005 for both public and private schools when we consider our benchmark specification; we standardize the 2006 ENEM scores of students to have a mean of zero. We find no impact of test score disclosure on student performance in either type of school. The relatively short period of time that elapsed between the disclosure of the average 2005 ENEM scores of schools and the 2006 ENEM is the likely reason for the absence of an impact of test score disclosure on schools, as students and schools probably did not have sufficient time to adjust their beliefs and behavior in response to the newly available information.
We now consider the effect of test score disclosure on student performance in the 2007 ENEM. Figure 8 depicts the relationship between the average 2007 ENEM scores of schools and the number of ENEM takers in 2005 for both public and private schools, again considering our benchmark specification; we now standardize the 2007 ENEM scores of students to have a mean of zero. While there is no increase in the average 2007 ENEM scores of public schools above the treatment discontinuity, the increase in the average 2007 ENEM scores of private schools above the cutoff point of ten ENEM takers is evident.
Table 7 reports the coefficients of the terms ds and ds ⋅ privs in the benchmark specification (Equation 1) for the relationship between the number of ENEM takers in 2005 and the average 2007 ENEM score of schools. We consider three different bandwidths: one-half the optimal bandwidth, the optimal bandwidth, and twice the optimal bandwidth.
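To make the structure of this estimation concrete, the following is a minimal sketch of a local linear regression-discontinuity specification with a public/private interaction, in the spirit of Equation 1. The data here are simulated, the column names are hypothetical, and the data-generating process (a 0.15 standard deviation effect for private schools only) is an assumption made for illustration, not the paper's estimates.

```python
# Hedged sketch of a benchmark RD specification with a treatment x private
# interaction, estimated on simulated data. The cutoff is ten ENEM takers,
# as in the text; everything else here is illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 4000
takers = rng.integers(1, 21, size=n)           # ENEM takers in 2005 (simulated)
r = takers - 10.0                              # running variable, centered at cutoff
d = (takers >= 10).astype(float)               # treated: average 2005 score disclosed
priv = rng.integers(0, 2, size=n).astype(float)

# Assumed true effects for the simulation: 0 for public, 0.15 SD for private.
score = 0.02 * r + 0.15 * d * priv + rng.normal(0, 0.1, size=n)

# Local linear regression with separate slopes on each side of the cutoff.
X = np.column_stack([np.ones(n), d, priv, d * priv, r, d * r])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(f"effect on public schools (d):       {beta[1]: .3f}")
print(f"extra effect on private (d x priv): {beta[3]: .3f}")
```

The coefficient on d plays the role of the impact on public schools, and the coefficient on d ⋅ priv plays the role of the differential impact on private schools.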
Consistent with Figure 8, we find that regardless of the bandwidth considered, test score disclosure had no significant impact on the average 2007 ENEM scores of public schools. Recall that we centered our covariates around their mean value for public schools in the bandwidth considered, so that the coefficient of ds measures the impact of test score disclosure on the average 2007 ENEM score of public schools.
On the other hand, also consistent with Figure 8, we find that test score disclosure had a larger impact on the average 2007 ENEM scores of private schools than on the average 2007 ENEM scores of public schools and that this difference is statistically significant when we consider either the optimal bandwidth or twice the optimal bandwidth. Since we control for the interaction between the schools' and students' observable characteristics and the treatment variable, our findings suggest that the difference in the impact of test score disclosure on average 2007 ENEM scores between public and private schools is not driven by differences in the observable characteristics of these two types of school.
Taken together, the results in Table 7 show that on average, the impact of test score disclosure on the average 2007 ENEM scores of schools was 0.10 to 0.17 standard deviations of the students' ENEM score distribution higher in private schools than in public schools.21
A concern with our analysis is that the results of the 2006 ENEM were released prior to the 2007 ENEM, so that in 2007 schools also differed in their treatment status in 2006. When we narrow our sample to schools that had the same treatment status in 2005 and 2006, the difference in the impact of test score disclosure on the average 2007 ENEM scores of schools between public and private schools becomes even stronger.22
An interesting question is whether the impact of test score disclosure on the average 2007 ENEM score of private schools differed depending on their performance in the 2005 ENEM. In order to test for this possibility, we include the triple-interaction term ds ⋅ below_mediansm ⋅ privs in Equation 1, where below_mediansm is one if the average 2005 ENEM score of school s in metropolitan area m was below the median of the average 2005 ENEM scores of schools in metropolitan area m and is zero otherwise.23 The coefficient of this term measures the additional impact of test score disclosure on the average 2007 ENEM scores of private schools that had a below median average 2005 ENEM score. Note that when ranking schools according to their average 2005 ENEM scores, we included the schools that did not have their average 2005 ENEM scores released, as the comparison we are making is between schools with the same average 2005 ENEM score but different treatment status.
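The construction of below_mediansm and the triple interaction can be sketched as follows. This is a hypothetical illustration with made-up scores and field names; the key detail it reproduces from the text is that the within-area median is computed over all schools, treated or not.

```python
# Hypothetical sketch of the below_median construction: within each
# metropolitan area, rank ALL schools (treated or not) by their average
# 2005 ENEM score, flag those below the area median, and form the triple
# interaction d * below_median * priv.
from statistics import median
from collections import defaultdict

schools = [  # (metro area, avg 2005 ENEM score, d = disclosed, priv = private)
    ("A", 4.0, 1, 1), ("A", 5.0, 0, 1), ("A", 6.0, 1, 0), ("A", 7.0, 0, 1),
    ("B", 3.0, 1, 1), ("B", 9.0, 0, 0),
]

by_metro = defaultdict(list)
for metro, score, _, _ in schools:
    by_metro[metro].append(score)
medians = {m: median(v) for m, v in by_metro.items()}

rows = []
for metro, score, d, priv in schools:
    below = 1 if score < medians[metro] else 0
    rows.append((below, d * below * priv))  # (below_median, triple interaction)
print(rows)
```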
Table 8 reports the coefficient of the term ds ⋅ below_mediansm ⋅ privs for the same bandwidths of Table 7. We find that this coefficient is positive and statistically significant for all choices of bandwidth. Thus, the positive impact of test score disclosure on the average 2007 ENEM scores of private schools was even higher in private schools with a poor performance on the 2005 ENEM.
D. Competition in the School Market
The fact that test score disclosure did not affect in a systematic way the observable characteristics of ENEM takers and schools suggests that test score disclosure acted by changing the behavior of students, teachers, and managers in treated schools. Students in public and private schools face the same incentives when taking the ENEM, though. Thus, the difference in the impact of test score disclosure on average 2007 ENEM scores between public and private schools suggests that teachers and school managers in private schools reacted differently to test score disclosure than teachers and managers in public schools.
There are different factors that could explain why the reaction of teachers and school managers to test score disclosure would depend on the type of school in which they work. First, since public and private schools are different in terms of their observable characteristics, it could be that these differences explain the difference in reactions. For instance, since students in Brazilian public schools are from more disadvantaged backgrounds, teachers and managers in such schools could be less willing to react to a negative performance at the ENEM than teachers and managers in private schools. Second, as already discussed, teachers and managers in Brazilian public schools are not subject to market incentives, which could also make them less responsive to test score disclosure than teachers and managers in private schools.
The fact that the difference in the impact of test score disclosure on the average 2007 ENEM scores of schools between public and private schools survives even after one controls for the observable characteristics of ENEM takers and schools and the interaction between these characteristics and the treatment status of schools shows that the first factor discussed above cannot by itself explain our findings.
In order to test for the possibility that differences in market incentives can help explain our empirical findings, we include the triple-interaction term ds ⋅ below_HHIm ⋅ privs in our benchmark specification (Equation 1), where below_HHIm is equal to one if in 2005 the Herfindahl-Hirschman index of the school market in metropolitan area m, HHIm, is below the median of the Herfindahl-Hirschman indices of all school markets in our sample.24 The Herfindahl-Hirschman index of the school market in a given metropolitan area is the sum of the squared market shares of the schools in this metropolitan area.25 This index is a measure of market concentration, and so competitiveness, in a given school market: the lower this index, the less concentrated a school market is. We say that the school market in metropolitan area m has low concentration if below_HHIm is one; otherwise we say that it has high concentration. The coefficient of the term ds ⋅ below_HHIm ⋅ privs measures the difference in the impact of test score disclosure on the average 2007 ENEM scores of private schools between low- and high-concentration school markets.
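The index itself is a one-line computation. The sketch below uses made-up enrollment figures purely to illustrate the definition: squared enrollment shares summed over the schools in a metropolitan area, with lower values indicating a less concentrated (more competitive) market.

```python
# Minimal sketch of the Herfindahl-Hirschman index for a school market:
# the sum of squared enrollment shares over schools in a metropolitan area.
# Enrollment figures are made up for illustration.
def hhi(enrollments):
    total = sum(enrollments)
    return sum((n / total) ** 2 for n in enrollments)

concentrated = hhi([600, 300, 100])  # few schools, large shares: roughly 0.46
competitive = hhi([100] * 10)        # ten equally sized schools: roughly 0.10
print(concentrated, competitive)
```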
The results reported here provide some evidence that market incentives do indeed shape the response of private schools to test score disclosure. This, in turn, provides some evidence that our findings on the impact of test score disclosure on the average 2007 ENEM scores of schools are explained by the absence of market incentives in public schools.
E. Spillover Effects
As discussed in the previous section, spillover effects can imply that schools react to test score disclosure even if their ENEM scores are not revealed. In this case, one expects that the impact of test score disclosure on the average ENEM scores of schools would depend on the fraction of schools that are treated. In order to check for this possibility, we test whether there is any evidence that the changes in the test scores of control schools in a given metropolitan area m are correlated with the intensity of treatment in this metropolitan area, that is, the proportion of treated schools in m. Table 10 below reports the result of school-level regressions where we regress the change in the average ENEM score of control schools on the proportion of treated schools in the control schools' metropolitan area. The first column reports changes from 2005 to 2006, and the second column reports changes from 2005 to 2007. The coefficients associated with the proportion of treated schools are statistically insignificant for both regressions. This suggests that spillover effects did not play an important role in shaping the response of control schools to test score disclosure.26
V. Discussion
Our empirical findings suggest that the reason why test score disclosure had a greater impact on the average ENEM scores of private schools than on the average ENEM scores of public schools is that the former schools are subject to market incentives while the latter are not. In this section, we discuss possible reasons for this.
There are, in principle, two different channels through which test score disclosure can affect schools that are subject to market incentives. First, test score disclosure can help schools learn about their quality, leading teachers and school managers in treated schools to react to news about their schools' quality.27 Second, test score disclosure can mitigate adverse selection in a given school market by reducing the asymmetry of information between treated schools and other market participants, which can also lead teachers and school managers to change their behavior.
Andrabi, Das, and Khwaja (2017) show how a model of adverse selection, based on Wolinsky (1983), can help explain why test score disclosure increases average test scores in schools that are subject to market incentives. In their model, schools can make costly investments in quality, which affects test scores. Parents, however, can observe only a noisy signal of school quality. In this setting, one can have a separating equilibrium in which schools that make a high investment in quality separate themselves from schools that make a low investment in quality by charging higher prices (tuition fees). The markup necessary to sustain separation depends on the quality of the signal about school quality that parents observe. A higher quality signal—due to test score disclosure, for instance— reduces this markup. This, in turn, can lead to a smaller quality differential between schools. Andrabi, Das, and Khwaja (2017) discuss conditions under which this reduction in the quality differential comes from lower quality schools increasing their quality, which would imply an increase in average test scores.
An important prediction in the model of Andrabi, Das, and Khwaja (2017) is that test score disclosure leads to a reduction in the tuition fees of higher quality private schools relative to the tuition fees of lower quality private schools, a fact that is consistent with their findings in Pakistan. This prediction, however, is at odds with the finding in Firpo, Possebom, and Ponczek (2014) that in Brazil treated private schools with a good performance in the 2005 ENEM were able to increase their tuition fees by more than treated private schools with a poor performance in the 2005 ENEM. While this finding does not allow us to rule out asymmetric information as one factor explaining the impact of test score disclosure on private schools in Brazil, it suggests that other factors may be important as well. In the remainder of this section, we discuss how learning can also help explain our empirical findings, and in a way that is consistent with the findings in Firpo, Possebom, and Ponczek (2014). In the Appendix, we supplement this discussion with a model of school behavior in which learning is the only channel present.
Suppose, as in the model of Andrabi, Das, and Khwaja (2017), that schools can make costly investments in quality but now parents can observe these investments, so that there is no asymmetric information. On the other hand, suppose that schools differ in their “baseline” quality or type—the same investment leads to a higher quality in a higher type school—and that neither schools nor parents know the schools' types. Moreover, suppose that schools' revenues depend on their expected quality. Thus, schools are uncertain about the return from their investments in quality, and test score disclosure reduces this uncertainty.
Now, to fix ideas, suppose that there are two types of school, a low-type school and a high-type school. If baseline quality and investments in quality are substitutes and there are decreasing marginal returns from increasing expected school quality, then treated high-type schools reduce their investments in quality relative to control high-type schools, while treated low-type schools increase their investments in quality relative to control low-type schools. Test score disclosure increases average school quality, and thus average test scores, if treated high-type schools reduce their investments in quality by less than treated low-type schools increase their investments in quality. The model in the Appendix provides conditions under which this is the case.
Since treated high-type schools invest less in school quality than treated low-type schools, and so have a smaller marginal cost of investment, decreasing marginal returns from investments in school quality imply that the after-investment expected quality of treated high-type schools is greater than the after-investment expected quality of treated low-type schools. Thus, unlike in the adverse selection model discussed above, test score disclosure leads to greater differences in expected school quality among treated schools. To the extent that these differences in expected quality translate into differences in tuition fees, test score disclosure can lead to higher tuition fees in high-type schools.28
In reality, one would expect that schools are better informed about their investments in quality but are uncertain about their after-investment quality, so that there is both adverse selection and learning. An interesting question, which we leave for future research, is whether one can distinguish between these two forces in the data.
VI. Conclusion
An important question in the literature on school accountability is whether test score disclosure can influence the behavior of teachers and school managers in the absence of a hard accountability system that ties pay to student performance. In this paper, we take advantage of a discontinuity on the disclosure rule for the average ENEM score of schools in Brazil and of differences in market incentives faced by public and private schools in the country to provide evidence that test score disclosure by itself can affect the behavior of teachers and school managers if they are subject to market incentives.
While we find no significant difference in the impact of the disclosure of the average 2005 ENEM scores of schools on the average 2006 ENEM scores of schools between public and private schools, we find that this disclosure had a greater impact on the average 2007 ENEM scores of private schools than on the average 2007 ENEM scores of public schools. Together with the fact that the disclosure of the average 2005 ENEM scores of schools had no systematic impact on the observable characteristics of ENEM takers and schools in 2006 and 2007, these findings provide some evidence that private schools reacted differently to test score disclosure than public schools. We argue that this difference is driven by the fact that only private schools in Brazil are subject to market incentives and provide some evidence in support of this conjecture.
Even though our analysis is concerned with the impact of the disclosure of the average 2005 ENEM scores of schools in Brazil on their average ENEM scores in the years immediately after this disclosure took place, it would be interesting to know whether test score disclosure had a longer-term impact on the average ENEM scores of schools. We find that the impact of test score disclosure on the average 2008 ENEM scores of schools is statistically insignificant for both public and private schools.29 Moreover, we do not find significant effects even after narrowing the sample to schools that had the same treatment status from 2005 to 2007—close to half of the schools with less than 21 ENEM takers in 2005 changed their treatment status by 2007. We are not able to say if the impact of test score disclosure is transitory or if the lack of significant results is due to the fact that the number of schools that had their average ENEM scores released increased over time, which may have contributed to dilute the impact of test score disclosure as time passed.
Appendix
We present a stylized model of school behavior to show how differences in market incentives can help explain why the disclosure of the average 2005 ENEM scores of schools increased the average 2007 ENEM scores in treated private schools by more than it did in treated public schools. The channel that we emphasize in our model is that a treated school had more information about the achievement of its students, and thus about its quality, than a control school, and it could react to this information.
Our analysis abstracts from the impact of test score disclosure on the observable characteristics of ENEM takers and the school observable inputs by assuming that both are fixed. Doing so is reasonable given that we find no evidence that test score disclosure had a significant impact on these variables in the period of time that we consider.
A. Setup
Schools are either public or private and can make investments in their quality, which increase their average ENEM scores. The (after-investment) quality of a school is

q = q0 + e,

where e > 0 is the school's investment, and q0 > 0 is the school's intrinsic quality. Investments are costly but observable. We assume that the cost of investment is g(e) = e²/2. The assumption of quadratic investment costs simplifies the analysis without changing its substance.
The intrinsic quality of a school depends on its observable and unobservable characteristics. The observable characteristics of a school are its observable inputs and the socioeconomic characteristics of its students. Controlling for observable characteristics, the only differences in intrinsic quality across schools are idiosyncratic differences in unobservable characteristics. So, q0 = θ + v, where θ and v are the unobservable and observable components of intrinsic quality, respectively. We assume that θ is unknown to schools and can have one of two values, θℓ > 0 or θh > θℓ. Let μ ∊ (0, 1) be the probability that θ = θh. We say that a school is of high type if θ = θh; otherwise we say that it is of low type. For simplicity, we omit the dependence of q0 on v in what follows. All the results in our model should thus be viewed as statements about the impact of test score disclosure on test scores conditional on the observable characteristics of schools.
A school's revenue, R, is a function of its expected quality, q̄. The latter is q̄ = q̄0 + e, where q̄0 is the school's expected intrinsic quality, and e is its observable investment. We assume that

R(q̄) = R0 + λV(q̄),

where R0 is a constant, and λ ∊ (0, 1). The parameter λ captures the strength of market incentives faced by the school and is high if the school is private and close to zero if the school is public. The function V(q̄) is smooth, strictly increasing, and strictly concave. The assumption that V(q̄) is strictly increasing is a reduced-form way of capturing that private schools with a higher expected quality are able to charge higher tuition fees and thus obtain higher revenues.
Besides being public or private, schools also differ in the information they have before making their investment decisions. We assume that an independent random draw determines whether a school is treated. This assumption is justified by our finding that treated and control schools (whether public or private) do not differ in terms of their observable characteristics. Treatment provides a signal ξ about intrinsic quality. For simplicity, we assume that a school's average 2005 ENEM score is perfectly correlated with its intrinsic quality; that is, ξ = θ. So, a treated high-type school observes a high average 2005 ENEM score while a treated low-type school observes a low average 2005 ENEM score. Our analysis can be extended to the case in which ξ is not perfectly correlated with θ. We describe the event that a school is not treated by saying that it observes the signal ξ = ϕ.
B. The Impact of Test Score Disclosure
The objective of a school is to maximize its revenue net of the cost of investment. The expected payoff to a school with signal ξ ∊ {ϕ, θℓ, θh} that chooses investment e is

R0 + λV(μ(ξ)θh + (1 − μ(ξ))θℓ + e) − g(e),

where μ(ξ) is the school's posterior belief that it is high type when it observes the signal ξ; note that μ(θℓ) = 0, μ(θh) = 1, and μ(ϕ) = μ. The assumptions on V(q̄) and g(e) imply that for each signal ξ, a school's choice of effort is the unique solution to the first-order condition

(2) λV′(μ(ξ)θh + (1 − μ(ξ))θℓ + e) = g′(e).

Let e0* be the optimal effort for a control school and ek*, with k ∊ {ℓ, h}, be the optimal effort for a treated school of intrinsic quality θk. The following result is an immediate consequence of the first-order condition (Equation 2).
Lemma 1. eℓ* > e0* > eh*, and eℓ* − eh* converges to zero as λ converges to zero.
Lemma 1 implies that the difference between eℓ* and eh* is significant only for private schools, which is intuitive. The fact that eℓ* > e0* > eh* is a consequence of the assumption that intrinsic quality and investments in quality are substitutes and V(q) is strictly concave. So, the marginal benefit of investment to a treated school with a high average 2005 ENEM score is smaller than the marginal benefit of investment to a treated school with a low average 2005 ENEM score. On the other hand, it follows from Equation 2 that
eh* = λV′(θh + eh*) < e0* = λV′(μθh + (1 − μ)θℓ + e0*) < eℓ* = λV′(θℓ + eℓ*),

and so θh + eh* > μθh + (1 − μ)θℓ + e0* > θℓ + eℓ*. So, despite test score disclosure leading high-type schools to decrease their investment and low-type schools to increase their investment, the expected quality of a treated high-type school is nevertheless higher than the expected quality of a control school, which in turn is higher than the expected quality of a treated low-type school. To the extent that differences in expected quality lead to differences in tuition fees, test score disclosure then allows treated high-type schools to charge higher tuition fees than treated low-type schools, a result that is consistent with the findings in Firpo, Possebom, and Ponczek (2014).
The quality of a type-θk school is q0,k = θk + e0* if the school is control and q1,k = θk + ek* if the school is treated. The difference Δk = q1,k − q0,k is the impact of test score disclosure on the quality, and thus on the average 2007 ENEM score, of a school of intrinsic quality θk. In other words, Δk is the counterfactual change in the average 2007 ENEM score of a type-θk control school in case the school had the chance to observe its average 2005 ENEM score. It follows from Lemma 1 that Δk is negative if k = h and positive otherwise and that these differences are significant only for private schools. Proposition 1 summarizes this discussion.
Proposition 1. For private schools with high average 2005 ENEM scores, test score disclosure has a negative impact on average 2007 ENEM scores. For private schools with low average 2005 ENEM scores, test score disclosure has a positive impact on average 2007 ENEM scores.
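The orderings behind Lemma 1 and Proposition 1 can be checked numerically under one concrete functional form. The choice V(q) = ln(q) is our assumption for illustration only (it is smooth, strictly increasing, and strictly concave, as the model requires); with it, the first-order condition λ/(q̄0 + e) = e has a closed-form solution, and the parameter values below are arbitrary.

```python
# Numerical illustration of Lemma 1 and Proposition 1 under the assumed
# functional form V(q) = ln(q), so V'(q) = 1/q. With g(e) = e^2/2, the
# first-order condition lam/(q0bar + e) = e is a quadratic in e.
import math

def effort(q0bar, lam):
    """Solve lam * V'(q0bar + e) = g'(e) = e for V(q) = ln(q)."""
    return (-q0bar + math.sqrt(q0bar**2 + 4 * lam)) / 2

theta_l, theta_h, mu = 1.0, 2.0, 0.5       # arbitrary parameter values
lam = 0.9                                   # strong market incentives (private)
q0bar = mu * theta_h + (1 - mu) * theta_l   # expected intrinsic quality

e_l = effort(theta_l, lam)   # treated low type: posterior puts weight on theta_l
e_0 = effort(q0bar, lam)     # control school: keeps prior belief mu
e_h = effort(theta_h, lam)   # treated high type: posterior puts weight on theta_h

assert e_l > e_0 > e_h                               # Lemma 1 ordering
assert theta_h + e_h > q0bar + e_0 > theta_l + e_l   # expected quality ordering
print(e_l, e_0, e_h)
```

The treated low type invests the most and the treated high type the least, yet expected quality is still highest for the treated high type, matching the discussion above.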
Given that test score disclosure has a small impact on the average 2007 ENEM scores of public schools, in what follows we abuse notation and let e0* denote the investment of a control private school and ek* denote the investment of a type-θk treated private school. The average investment of treated private schools is ē = μeh* + (1 − μ)eℓ*. Assume that V′(q) is strictly convex. We claim that ē > e0*. Suppose not—that is, suppose that ē ≤ e0*—then

e0* = λV′(μθh + (1 − μ)θℓ + e0*) ≤ λV′(μθh + (1 − μ)θℓ + ē) < λ[μV′(θh + eh*) + (1 − μ)V′(θℓ + eℓ*)] = μeh* + (1 − μ)eℓ* = ē,

where the equalities follow from the first-order condition (Equation 2), the first inequality follows from the concavity of V(q), and the second inequality follows from the strict convexity of V′(q). Since g′(e) = e, we then have that g′(e0*) < g′(ē), a contradiction. Thus, ē > e0*, and so the expected quality of treated private schools is greater than the expected quality of control private schools.30 We have thus established the following result.
Proposition 2. Suppose that V′(q) is strictly convex. The effect of test score disclosure on the average 2007 ENEM scores is positive and significant only for private schools.
To conclude the analysis of our model, notice that an increase in λ increases the investment, and thus the average 2007 ENEM scores, of both treated and control private schools. Hence, it is not clear a priori how the difference ē − e0* changes with λ. It is possible to show (see the Online Appendix for details) that the effect of test score disclosure on the average 2007 ENEM scores of private schools can be strictly increasing in λ for some choices of the revenue function V(q). Thus, our model can generate the result that the impact of test score disclosure on the average 2007 ENEM score of private schools is greater in markets in which private schools are subject to stronger market incentives.
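Both claims of this paragraph can be illustrated under the same assumed functional form, V(q) = ln(q), whose derivative 1/q is strictly convex: the average treated investment exceeds the control investment (Proposition 2), and, for these particular (arbitrary) parameter values, the gap grows with the market-incentive parameter λ.

```python
# Assumed-functional-form check (V(q) = ln(q), so V'(q) = 1/q is strictly
# convex) of two claims: the average treated investment exceeds the control
# investment, and the gap (the disclosure effect) grows with lambda here.
# Parameter values are illustrative only.
import math

def effort(q0bar, lam):
    # first-order condition lam/(q0bar + e) = e for V(q) = ln(q), g(e) = e^2/2
    return (-q0bar + math.sqrt(q0bar**2 + 4 * lam)) / 2

theta_l, theta_h, mu = 1.0, 2.0, 0.5
q0bar = mu * theta_h + (1 - mu) * theta_l

gaps = []
for lam in (0.3, 0.6, 0.9):
    e_bar = mu * effort(theta_h, lam) + (1 - mu) * effort(theta_l, lam)
    gaps.append(e_bar - effort(q0bar, lam))

assert all(g > 0 for g in gaps)      # average treated investment exceeds control
assert gaps[0] < gaps[1] < gaps[2]   # disclosure effect increasing in lambda
print(gaps)
```

This is one example consistent with the Online Appendix claim, not a general proof that the effect is increasing in λ.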
Acknowledgments
The authors are grateful to two anonymous referees for their comments and suggestions and to Jonah Rockoff for his extensive comments on an earlier version of the paper. The authors also benefitted from conversations with Aureo de Paula, Juan Dubra, and Miguel Urquiola, and from the input of various conference and seminar participants. Braz Camargo, Sergio Firpo, and Vladimir Ponczek gratefully acknowledge financial support from CNPq. Rafael Camelo gratefully acknowledges financial support from FAPESP. The data used in this article can be accessed at the INEP's website (http://portal.inep.gov.br/microdados) or obtained directly from the corresponding author, Vladimir Ponczek (vladimir.ponczek@fgv.br).
Footnotes
Braz Camargo is a full professor of economics at the Sao Paulo School of Economics (FGV) in Brazil. Rafael Camelo is director of impact evaluation at Plano CDE Institute in Brazil. Sergio Firpo is Instituto Unibanco Professor of Economics at Insper Institute of Education and Research in Brazil and research fellow at Institute of Labor Economics (IZA). Vladimir Ponczek is an associate professor of economics at the Sao Paulo School of Economics (FGV) and coordinator of the Lab for Evaluation, Analysis and Research in Learning in Brazil (LEARN), Rua Itapeva 474, São Paulo, SP 01332-000, Brazil. This paper originally circulated under the title “Test Score Disclosure and Student Performance.”
↵1 See Figlio and Loeb (2011) for a comprehensive survey.
↵2 In what follows, we refer to the schools that had their average 2005 ENEM scores released in 2006 as the treated schools and to the remaining schools as the control schools.
↵3 We discuss alternative explanations when we present our empirical results.
↵4 We discuss possible reasons for why test score disclosure had no impact on the average 2006 ENEM score of schools later in the text.
↵5 More recently, some states in Brazil have introduced hard accountability policies in their public school systems (for example, Programa de Valorizacao pelo Mérito [Merit Valuation Program] in the state of Sao Paulo). These measures took place after 2007, though.
↵6 There is a substantial literature that studies the impact of hard accountability systems on student performance. See, for example, Hanushek and Raymond (2005), Dee and Jacob (2009), Bacolod et al. (2009), Chiang (2009), Rockoff and Turner (2010), and Muralidharan and Sundararaman (2011).
↵7 Contrary to this evidence, Mizala and Urquiola (2009) find that identifying a school in Chile as being outstanding has no effect on enrollment, tuition, and the socioeconomic characteristics of students. This is consistent with Mizala, Romaguera, and Urquiola (2007), which finds that average test scores in Chile are very volatile and highly correlated with students' socioeconomic status.
↵8 The fact that the ENEM is nonmandatory may raise questions about the possibility of schools manipulating their number of ENEM takers. We address this issue in Section IV.
↵9 http://portal.inep.gov.br/web/guest/enem-por-escola (accessed 14 July 2017).
↵10 See www.zap.com.br (accessed 14 July 2017).
↵11 The fraction of secondary students enrolled in private schools has stayed roughly the same in the 2001–2010 period; see Costa (2013) for details.
↵12 The picture is similar throughout the 2001–2010 period; see Costa (2013) for details.
↵13 Unlike in Rouse et al. (2013), we do not have information about whether schools adopted practices or policies that could have changed in response to test score disclosure.
↵14 The state capitals in our sample are: Aracaju, Belem, Belo Horizonte, Curitiba, Brasilia, Florianopolis, Fortaleza, Goiania, Joao Pessoa, Macapa, Maceio, Manaus, Natal, Porto Alegre, Recife, Rio de Janeiro, Salvador, Sao Luiz, Sao Paulo, Teresina, and Vitoria.
↵15 For students, we control for gender, age, whether a student is in the correct school grade, race, parental schooling, and family income. For schools, we control for the number of students, the proportion of teachers with a college degree, the presence of a science laboratory, and the ratios of teachers, computers, and staff to students.
↵16 We also constructed a separate histogram for each metropolitan area in our sample. We did not find a jump in the frequency of schools at the treatment discontinuity in any of these histograms. See the Online Appendix for details.
↵17 Furthermore, the difference in the impact of test score disclosure on the average 2005 ENEM score of schools between public and private schools is not statistically significant. See the Online Appendix for details.
↵18 Tables with the regression results are available in the Online Appendix.
↵19 See the Online Appendix for details.
↵20 Test score disclosure also had no consistently significant impact on the age of the 2006 and 2007 ENEM takers in both public and private schools. Since the age of ENEM takers and whether they are in the correct school grade provide essentially the same information, we chose not to report the results on age.
↵21 Since the ENEM did not use Item Response Theory until 2009, we cannot compare ENEM scores across years, making it impossible to evaluate the impact of test score disclosure on the value-added of schools.
↵22 Since the disclosure of the average 2006 ENEM scores was anticipated by schools, there could be nonrandom selection of schools into treatment status, though.
↵23 We also include the terms below_mediansm, ds ⋅ below_mediansm, and below_mediansm ⋅ privs in Equation 1.
↵24 For this exercise, we also include terms ds ⋅ below_HHIm and below_HHIm ⋅ privs in Equation 1. We do not include the term below_HHIm since this term is absorbed by the metropolitan area fixed effects.
↵25 We define the market share of school s in metropolitan area m, sharesm, as the number of its secondary students in 2005 divided by the number Nm of students in metropolitan area m in 2005. So, HHIm = Σs share²sm, where the sum runs over the Sm schools in metropolitan area m.
↵26 It is also important to notice that our benchmark regressions include metropolitan area fixed effects, which absorb any potential direct effect of the proportion of treated schools on the estimated coefficients.
↵27 The average 2005 ENEM scores of schools were released only in February of 2006. Hence, unless they failed to graduate in 2005, the senior high school students who took the ENEM in 2005 were no longer in school at the time the ENEM scores were disclosed. This makes it plausible to assume that treated and control schools had different information about their quality in subsequent years.
↵28 The above discussion, as well as the model in the Appendix, ignores spillover effects from test score disclosure by assuming that schools care only about their after-investment quality. Thus, only treated schools react to test score disclosure and their reaction does not depend on how many other schools are treated. It is possible to extend the model in the Appendix to the case in which schools care about their relative quality, so that all schools in a given school market react to test score disclosure.
↵29 See the Online Appendix for details.
↵30 The conclusion that ē > e0* holds as long as the optimal investment h(q; λ) solving Equation 2, viewed as a function of expected intrinsic quality q, is strictly convex in q. Straightforward algebra shows that h(q; λ) is strictly convex in q if h(q; 1) is strictly convex in q. The latter condition holds if g′(e) is concave. More generally, the latter condition holds if V′(q) is more convex than g′(e).
- Received January 2015.
- Accepted December 2016.