Research Article
Open Access

Reducing Parent–School Information Gaps and Improving Education Outcomes

Evidence from High-Frequency Text Messages

Samuel Berlinski, Matias Busso, Taryn Dinkelman, and Claudia Martínez A.
Journal of Human Resources, July 2025, 60 (4) 1284-1322; DOI: https://doi.org/10.3368/jhr.1121-11992R2
Samuel Berlinski is a Principal Economist at the Research Department of the Inter-American Development Bank and an IZA Research Fellow (samuelb@iadb.org). Matias Busso is a Principal Economist at the Research Department of the Inter-American Development Bank (mbusso@iadb.org). Taryn Dinkelman is the Loughrey Associate Professor of Economics at the University of Notre Dame, a Faculty Research Associate at NBER, BREAD, CEPR, and IZA, and a J-PAL affiliated professor (corresponding author: tdinkelm@nd.edu). Claudia Martínez A. is a Full Professor at the Department of Economics of Pontificia Universidad Católica de Chile, a J-PAL affiliated professor, and is affiliated with the Millennium Nucleus on Intergenerational Mobility: From Modelling to Policy (MOVI) [NCS2021072] (clmartineza@uc.cl).

Abstract

We conducted an experiment in low-income urban schools in Chile to test the effects of, and the behavioral changes triggered by, a program that sends attendance, grade, and classroom behavior information to parents via weekly and monthly text messages. Our 18-month intervention raised average math scores by 0.09 of a standard deviation and increased the share of students satisfying attendance requirements for grade promotion by 4.7 percentage points. Treatment effects were larger for students at higher risk of later grade retention and dropout. Our results demonstrate that frequently communicating existing school information to parents can shrink parent–school information gaps and improve school outcomes in a light-touch, scalable, and cost-effective way.

JEL Classification:
  • I25
  • D8
  • N36

I. Introduction

Grade retention and early dropout are two of the biggest challenges facing education systems in many middle-income countries today. In Latin America, only 46 percent of students graduate from high school on time, and only 53 percent of young people aged 20–24 have completed secondary school (Busso et al. 2017). These poor schooling outcomes contribute to persistent education gaps between low- and high-income families.

Researchers have identified absenteeism, failing grades, and classroom misbehavior as important early warning signals for grade retention and the likelihood that students will eventually drop out of school (for example, Manacorda 2012; Wedenoja 2017). While schools around the world routinely record these types of student outcomes, families often do not have timely access to this information. We examine whether increasing the frequency and ease of communication between parents and schools can improve students’ academic outcomes, particularly among those at higher risk of being retained at a given grade or of later dropout. We evaluate an intervention that leverages existing school resources and practices to improve education outcomes. We also explore several channels through which this intervention may have changed parenting practices around schooling.

In 2014 and 2015, we conducted a randomized experiment in Chile to evaluate the effects of using weekly and monthly cellphone text messages to provide parents with up-to-date information on students’ attendance, grades, and classroom behavior. The intervention focuses on students in the last five grades of primary school, years during which attendance and grades start to matter, but before the risks of grade repetition or dropout significantly increase. The text message intervention (Papás al Día) was deliberately designed to be a low-touch intervention, with no change in behavior required by schools or teachers, who were already collecting attendance, grade, and behavior information. We sustained the high-frequency text messaging over two school years to allow parents time to adapt their parenting strategies in response to an ongoing flow of student-level information.

Our main experimental sample includes about 1,000 children enrolled in seven low-income schools in a metropolitan area in Chile. After conducting baseline student and parent surveys and collecting school administrative data (Berlinski et al. 2022) on student outcomes, we randomly varied which classrooms in each school were to receive a high (75 percent) or low (25 percent) share of treated students and then randomized individual students in each classroom into the text messages treatment. Over 18 months, we delivered more than 44,000 text messages to families in our sample. Treatment messages containing information about attendance, grades, and behavior were sent to treated parents, while control parents received general all-school text messages during this time. We continued to collect administrative data throughout the two years and conducted mid- and end-line parent and student surveys. Our data allow us to measure schooling outcomes, changes in parent information sets, and changes in parenting practices.

We begin by documenting sizable gaps that exist between parents’ knowledge and school reports of students’ attendance and grades. Comparing baseline survey responses to school records, we find that 26 percent of parents were unable to report correct information about their child’s grades, and 48 percent could not approximate their child’s school attendance in the previous two weeks. Similar information gaps have been found in settings as diverse as the United States (Bergman 2021), Malawi (Dizon-Ross 2019), and Colombia (Barrera-Osorio et al. 2020). Moreover, we document that the parents of at-risk, low-achieving students are more likely to misreport grades and attendance at baseline. Narrowing this gap—between parents’ understanding of their child’s performance and actual performance as documented by the school—is a key target of our text messaging treatment. Parents who have more accurate knowledge about recent grades, attendance, and behaviors are likely to be more engaged with their child’s schooling on a day-to-day basis in ways that improve schooling outcomes (Escueta et al. 2020; JPAL 2020).

Our main results are that exposure to the messaging treatment improved math grades and attendance, with particularly large impacts on at-risk students and positive spillover effects within classrooms. Relative to control students, treated students increased their math GPA by 0.09 standard deviations, and the probability of treated students earning a passing grade in math increased by 2.7 percentage points (or 2.9 percent relative to the control mean of 93 percent). The intervention increased school attendance by 1.1 percentage points (or 1.2 percent relative to the control mean of 87 percent) and increased the share of students who satisfied the attendance requirements for grade promotion by 4.7 percentage points (or 6.4 percent, relative to a control mean of 73 percent). On average, there were no significant impacts of the treatment on recorded misbehavior in school. We find important heterogeneity in these treatment effects related to initial academic performance. Grades and attendance impacts are 40–60 percent larger, and misbehavior falls by a significant 0.2 standard deviations more, among students whose at-risk index is one standard deviation above the mean.

Exploiting aspects of the research design and using our detailed administrative data, we investigate some of the ways in which the information intervention operated on parents and students. First, using variation in the weekly and monthly frequency of text messages delivered, we examine whether the effects of messages changed over time or with the frequency of the messaging. The patterns in our data indicate that the positive effect on attendance fades out over the week: effects appear somewhat larger immediately after parents receive the text messages and decline as the days go by. This suggests that for outcomes where the student makes daily choices—to attend or not to attend school—high-frequency text messages may be more beneficial than sporadic messages. At the same time, we find that the intervention is effective throughout the school year. Despite the sustained nature of the treatment, parents do not seem to “get used to” the treatment. Although the data do not allow us to precisely estimate all of the patterns of effects related to timing and frequency of messaging, taken together, the results suggest that information treatments like the one studied here may be more effective when delivered at high frequency and in an ongoing way over time. Next, we use the random manipulation of the share of treated students in each classroom to assess spillover effects among treated students. Understanding spillovers is important for thinking about impacts when information interventions like these scale. Although our design does not allow us to test for spillovers to the control group, we find evidence of positive classroom-level spillovers among treated students. This suggests that the positive direct effect on individual grades and attendance that we measure likely underestimates the impacts of a scaled-up version of this program in which all students would be treated.1

The information intervention was targeted at improving communication between parents and schools, lowering parent monitoring costs, and enabling better parent engagement with students and with schools. We use our rich administrative data and information collected through surveys conducted with parents and students before and after the program to explore these channels. We show that exposure to the high-frequency text message treatment shrinks information gaps about math scores and misbehavior between parents and schools. Parents of at-risk students “correct” their understanding of their child’s performance to the greatest degree (although results are not statistically significant at conventional levels). And, although the information treatment was designed to deliver information about specific subjects and behaviors, we show that it may also have directed parents to pay more attention to all aspects of school performance: the treatment group performed better in nontargeted subjects (for example, language), and parent misinformation about these nontargeted subjects also improved among the treated group.

Suggestive evidence from our surveys (point estimates are not always statistically significant) indicates that treated parents used the new information they obtained about their children to guide interactions with their children at home. Treated students report significantly more family support as a result of the intervention and that their parents were more involved in school matters. Parent engagement in day-to-day school matters appears to have changed as a result of the sustained, high-frequency information intervention. Consistent with these changes in reported parental behavior, we find that a large share of parents are willing to pay for the information program. We rely on a survey experiment to assess willingness to pay for access to the information program. For all parents, demand slopes downward. More than 70 percent of parents are willing to pay for the text messaging service when offered the lowest randomized price, and this share falls as the randomized price rises.2 We cannot reject that treated parents have the same elasticity of demand for the program as control parents.

Our study makes three contributions to the literature. First, we add new evidence from Chile to a large and active literature that studies the effect of sending information to parents about their children’s activities and performance in school. In a recent review of this literature, Escueta et al. (2020) highlight a key finding that bridging information and communications gaps between parents and schools, no matter how it happens (by text messages, email, regular phone calls, regular mail, report cards, or in-person visits), often results in learning gains for students.3 However, as Angrist et al. (2020) note, information interventions tend to have high variance across settings. Our results from poor urban schools in Chile indicate learning and attendance gains, but our treatment effects differ from other similar programs in different contexts. We estimate learning gains in math (0.09 SD) that are at the lower end of the range in the literature (0.09–0.19 SD of test scores), while our attendance gains (1.1 percentage points) fall in the middle of the range (0–2.1 percentage point gains in attendance).4 An important emerging pattern from our work in Chile, from the work of Barrera-Osorio et al. (2020) in Colombia, and from Bergman and Chan (2021) in the United States is that interventions improving parent–school communications tend to have the largest test score effects for the weakest students.5 Closing information gaps between parents and schools in an effective way, starting as early as elementary school, may therefore contribute to shrinking achievement gaps in a persistent manner.

Our second contribution is to study the impacts of an information treatment that was sustained for almost two school years. The unusually long duration of our intervention contrasts with prior studies that deliver information to parents for between three and four months (for example, Bettinger et al. 2021; Angrist, Bergman, and Matsheng 2022; Gallego, Malamud, and Pop-Eleches 2020) and one school year (for example, Rogers and Feller 2018; De Walque and Valente 2018). The duration of an information treatment may matter for several reasons. Continuing the text message program over multiple years means that parents experience a persistent improvement in information and reduction in monitoring costs. Parents may have been able to adopt different types of parenting strategies than they otherwise would have after a one-time or shorter-lived treatment (for example, engaging more with schools, or providing more family support for schoolwork, as students here report).6 In addition, the value of some types of information (for example, attendance this week or grade on a recent test) likely falls over time. For example, parents may be most likely to act on truancy in the days or weeks following a reported event. A sustained information treatment allows parents to always be up to date with this type of information. Furthermore, since the novelty of receiving information might fade out over time, it is important to test for ongoing impacts in a long-term treatment. Overall, we interpret the results from our sustained treatment as providing a good sense of how parents would respond and how student outcomes would change, on average, in a realistic environment outside of an experiment.

Finally, we tackle a series of questions related to scalability of information interventions. We analyze an intervention that uses primarily existing school inputs. Our text messaging program did not require any change in teacher inputs, practices, or pedagogy for implementation. It was possible for us to implement (and evaluate) the intervention for such a long time because we leveraged existing school practices and high-frequency data already collected by teachers, without making their jobs more complex. Our paper is most closely related to Bergman and Chan (2021), who automate the process of gathering already-digitized student data.7 Our results are most relevant for poor-performing schools in urban areas of developed and middle-income developing countries, where student records are already being collected at school level.8 In settings like these, implementing a program like Papás al Día would entail a low variable cost and a one-time setup cost. The program-specific variable cost of achieving a 0.01 standard deviations increase in math grades is about US$1.21 per student per year at market prices (rising to US$1.39/year when we include a fixed setup cost for the digital platform). Compared to other interventions in Latin America designed to improve learning outcomes and attendance, a program like Papás al Día is cost-effective.9 Relevant to the question of scale-up, we find tentative evidence of positive classroom-level spillovers among treated students. Our intention-to-treat measures may therefore underestimate the impacts of a scaled-up version of this program. Programs such as Papás al Día offer a practical, effective, and low-cost example of how to bridge information gaps between old-school paper records and parent cellphones at scale.
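The cost figures above imply a simple back-of-envelope calculation. As a sketch, assuming (as the per-unit cost figure suggests) that cost scales linearly with the effect size, the variable cost of delivering the full estimated 0.09 SD math-grade gain can be computed as:

```python
# Back-of-envelope cost calculation implied by the figures in the text.
# Assumption (ours, not the paper's): cost scales linearly in effect size.
cost_per_001sd = 1.21   # US$ per student per year per 0.01 SD (variable cost)
effect_sd = 0.09        # estimated math-grade treatment effect, in SD

cost_full_effect = cost_per_001sd * effect_sd / 0.01
print(round(cost_full_effect, 2))  # about 10.89 US$ per student per year
```

This linear scaling is an illustration only; the paper reports the per-0.01 SD unit cost directly.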

II. Setting

There are 12 years of mandatory schooling in Chile, eight of primary school and four of secondary school. Our experiment focuses on children from fourth to eighth grade who attend schools in an urban setting. Children walk to their neighborhood school or take public transportation. Depending on their age, they may travel alone, with an adult, or with older siblings. They attend school for about 180 days in the year, from 8:00 a.m. to 4:00 p.m.10 After school, children return home and are supposed to do homework. Many are unsupervised when they return home. This setup is similar to that of other large urban areas around the world.

Although Chile is now a high-income country, schools still lag behind relative to those in the United States or the average OECD country. For example, average class size in Chile’s secondary schools is 35 students, while in the United States, it is 26. According to the 2018 PISA results, almost one-third of Chilean students are below the minimum proficiency level in reading, compared with 19.3 percent in the United States. More than half of Chilean students (51.9 percent) are below minimum proficiency in math, compared with 27.1 percent in the United States. As in many other urban school settings, students are highly segregated into schools by socioeconomic status (Mizala, Romaguera, and Urquiola 2007).

Recent high school graduation rates in Chile are around 90 percent, 10 percent higher than the average OECD country (OECD 2022). This figure, however, masks considerable inequalities. High school dropout in Chile is concentrated among students in lower-income quintiles. For instance, in 2017, only 79 percent of students in the lowest-income quintile completed high school, compared with more than 96 percent of students in the highest-income quintile. Attendance, grades, and classroom behavior in elementary school are key factors affecting the risk of grade retention, which, in turn, increases the probability that students will drop out of school when they grow older (for example, Manacorda 2012; Wedenoja 2017). We focus on these three variables as early warning signals of poor school outcomes later on.

To advance to the next grade, Chilean students must attend at least 85 percent of school days in a school year and obtain a passing grade of 4.0 in all subjects (on a scale from one to seven).11 As a result, there is a strong correlation between attendance, subject grades, and grade retention.12
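The promotion rule can be stated compactly. The sketch below encodes the two requirements described above (at least 85 percent attendance and a grade of at least 4.0 in every subject); the function and variable names are illustrative, not drawn from the paper or Chilean regulations.

```python
# Illustrative encoding of Chile's grade-promotion rule as described in
# the text: promotion requires attendance of at least 85 percent of
# school days AND a passing grade of 4.0 or above in every subject
# (grades run on a 1-7 scale). Names are hypothetical.

def promoted(attendance_rate: float, subject_grades: list[float]) -> bool:
    """Return True if the student satisfies both promotion requirements."""
    meets_attendance = attendance_rate >= 0.85
    passes_all_subjects = all(g >= 4.0 for g in subject_grades)
    return meets_attendance and passes_all_subjects

# A student with 90% attendance but a failing 3.8 in one subject is retained.
print(promoted(0.90, [5.1, 3.8, 6.0]))  # False
print(promoted(0.87, [4.0, 4.5, 5.2]))  # True
```

Because both conditions must hold jointly, either low attendance or one failing subject alone can trigger retention, which is consistent with the strong correlation between attendance, grades, and retention noted in the text.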

The transition from the final grade of primary school to the beginning of secondary school is a point at which students are at high risk of grade retention or, in the worst-case scenario, dropping out of the school system. Even though grade retention is an outcome of concern during lower grades, it becomes even more problematic as students progress through their school years. During Grades 1–3, about 3 percent of students repeat their grade. Starting in Grade 4, this percentage increases with each grade, finally reaching 5 percent by the end of primary school. In the first year of secondary school, the grade retention rate surges, reaching 13 percent. This pattern is observed in our sample and is common in most Latin American countries (Bassi, Busso, and Muñoz 2015).

Our intervention focuses on students in the last five grades of primary school, where the median child age is ten. It targets information for parents during the years when attendance, grades, and behavior begin to matter, but before the risks of grade repetition or dropout significantly increase.

Gaps in the information that schools and parents have about children have been identified in settings as diverse as the United States (Bergman 2021), Malawi (Dizon-Ross 2019), and Colombia (Barrera-Osorio et al. 2020). Examples in the literature suggest that most parents tend to overestimate their child’s performance in school and less-educated parents have worse information about their child’s performance in school (Barrera-Osorio et al. 2020; Rogers and Feller 2018; Bergman and Chan 2021). Parents in our sample are literate but have generally low levels of education (for example, only 53 percent of mothers have completed high school).

In our setting, we observe similar types of parent–school information gaps regarding the student’s actual grades and attendance. Parents are usually provided with information about their child’s progress once per quarter through a physical report card that details a student’s grades and number of absences. Not all report cards make it home. Teachers and principals also communicate with parents on an “as-needed” basis for certain cases of misbehavior, chronic absenteeism, and repeated low grades. Figure 1, based on data from our baseline parent survey described in Section IV, plots the share of parents whose report of the child’s grade/attendance is at odds with the child’s actual school performance before the intervention began. We define a grade as being misreported if it deviates more than 0.5 points above or below the actual grade. The share of grade misreports is plotted with a solid line. We define attendance to be misreported if the parents’ report of the child’s absence differs by two or more instances from actual absences recorded in the previous two weeks. The share of attendance misreports is plotted with a dashed line.13 These misreports are graphed against a summary measure—the (standardized) at-risk index—of whether a child is considered at risk of retention or dropping out (because of higher absenteeism, lower grades, or worse behavior in class) before the intervention.14 The histogram describes the distribution of this at-risk index.

Figure 1. Baseline Share of Misinformed Parents

Notes: The y-axis presents the (lowess-smoothed) share of parents misinformed regarding their child’s grades (solid line) and attendance (dashed line) for different levels of the at-risk index (whose histogram is shown in gray). Estimates are based on parent surveys and administrative data at baseline. See notes for Columns 2 and 4 of Table 5 for details on the construction of misinformation measures and Section IV for the index construction.

In our sample, on average, 26 percent of parents were unable to report correct information about their child’s grade, while 48 percent could not correctly report their child’s school attendance in the previous two weeks. Moreover, Figure 1 shows that misinformation is more prevalent among parents of students with higher at-risk index values, and a greater share of parents misreport attendance, relative to grades, for students at all levels of risk. About 40 percent of parents of students with a baseline math grade below 4.5 did not accurately know their children’s test scores. Similarly, 70 percent of parents of students with an attendance rate below 85 percent did not know how many days their children had missed school in the previous two weeks. This is despite 79 percent of parents in our survey declaring that they almost always check their children’s report cards. These are the types of information gaps our intervention is designed to address. The patterns in Figure 1 suggest that our intervention should be particularly relevant for those children who are the most at risk of grade retention or dropping out.
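The misinformation measures described above can be sketched in code. The thresholds follow the text (a grade misreport is a deviation of more than 0.5 points; an attendance misreport is a gap of two or more absences over the previous two weeks), but the data layout and field names are hypothetical, assuming survey responses have been merged with school records per student.

```python
# Sketch of the baseline misinformation measures. Thresholds follow the
# text; field names and the example records are illustrative only.

def grade_misreport(parent_grade: float, actual_grade: float) -> bool:
    # Misreported if the parent's report deviates more than 0.5 points.
    return abs(parent_grade - actual_grade) > 0.5

def attendance_misreport(parent_absences: int, actual_absences: int) -> bool:
    # Misreported if reported and recorded absences differ by 2 or more
    # instances over the previous two weeks.
    return abs(parent_absences - actual_absences) >= 2

# Hypothetical merged survey/administrative records for two students.
students = [
    {"parent_grade": 5.5, "actual_grade": 4.8, "parent_abs": 0, "actual_abs": 3},
    {"parent_grade": 4.0, "actual_grade": 4.2, "parent_abs": 1, "actual_abs": 1},
]

share_grade = sum(grade_misreport(s["parent_grade"], s["actual_grade"])
                  for s in students) / len(students)
share_att = sum(attendance_misreport(s["parent_abs"], s["actual_abs"])
                for s in students) / len(students)
print(share_grade, share_att)  # 0.5 0.5
```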

III. Experimental Design

In this section, we outline the basic elements of our experiment: the recruitment of schools and parents, the randomization of students and classrooms, and the intervention.

A. Recruitment of Participants

We recruited publicly funded schools from two municipalities in Santiago.15 Chile’s Quality of Education Agency rates schools based on student learning, social and personal development, and any recent changes in these measures. The schools in our sample are particularly deprived according to these ratings and are tagged by the Chilean Ministry of Education as requiring additional resources and support based on poor student outcomes. Three of our schools (42.9 percent) are in the “insufficient” category (the lowest category), and two (28.6 percent) are in the medium and medium-low categories. Nationally, only 7.6 percent of schools are ranked as “insufficient.” Schools in our sample served students of medium-low or low socioeconomic status. Learning outcomes in our schools are among the lowest in Chile: in 2015 national standardized tests, our sample schools perform between the 18th and 35th percentiles.

In recruited schools, we held a series of meetings, inviting parents of all students in Grade 4 and above to join the experiment.16,17 More than 50 percent of parents consented to participate. Consent rates by grade level were similar. Younger students, those not new to the school, and those with better baseline attendance and grades were somewhat more likely to consent.18

B. Randomization and Intervention

We assigned students to treatment in two steps. First, we stratified by school grade level and randomly allocated classrooms (sections) to include a high or low share of students whose parents would receive text messages. In high-share classrooms, 75 percent of students whose parents had consented to participate were treated; in low-share classrooms, 25 percent of students whose parents had consented were treated.19 Second, within each classroom, we randomized students whose parents had consented into treatment or control status, according to the shares allocated in the first-step randomization. Students retained their individual and classroom-level randomization status for the duration of the intervention. Teachers were not informed about which students in their classrooms were participating in the experiment or who was randomized to treatment.20
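The two-step assignment above can be sketched as follows. This is a simplified illustration under our own assumptions about the data structure (names are hypothetical): step one splits classrooms within each school-grade stratum between high-share (75 percent) and low-share (25 percent) arms; step two randomizes consented students within each classroom at that share.

```python
# Illustrative two-step randomization: classrooms to high/low treated
# shares within strata, then students to treatment within classrooms.
# Data structure and names are hypothetical.
import random

random.seed(0)  # for a reproducible illustration

def assign(classrooms_by_stratum):
    """classrooms_by_stratum: {stratum: {classroom_id: [student_ids]}}
    Returns {student_id: {"classroom", "share", "treated"}}."""
    assignment = {}
    for stratum, classrooms in classrooms_by_stratum.items():
        # Step 1: randomly allocate half the classrooms in the stratum
        # to the high-share (75%) arm, the rest to the low-share (25%) arm.
        ids = list(classrooms)
        random.shuffle(ids)
        high_share = set(ids[: len(ids) // 2])
        for cid, students in classrooms.items():
            share = 0.75 if cid in high_share else 0.25
            # Step 2: randomize consented students within the classroom.
            pupils = students[:]
            random.shuffle(pupils)
            n_treated = round(share * len(pupils))
            for i, sid in enumerate(pupils):
                assignment[sid] = {"classroom": cid, "share": share,
                                   "treated": i < n_treated}
    return assignment

# Example: one grade-level stratum with two classrooms of 8 students each.
data = {"grade4": {"A": [f"a{i}" for i in range(8)],
                   "B": [f"b{i}" for i in range(8)]}}
result = assign(data)
print(sum(v["treated"] for v in result.values()))  # 6 + 2 = 8 treated
```

Note that, as in the actual design, every student keeps a single assignment (classroom share plus individual treatment status) for the duration of the intervention.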

Figure 2 shows the timeline of the intervention and the data collection. The school year in Chile runs from March to December, with two weeks of winter vacation in July. A first welcoming message was sent to all participants in May 2014. The intervention started before the winter break and lasted through December 2014, resuming again in March 2015 and lasting until December 2015. The summer break happened from mid-December 2014 to early March 2015.

Figure 2. Timeline

Notes: The figure shows the timeline of the intervention and data collection implemented in 2014 and 2015.

All parents in the treatment group received weekly messages on attendance and monthly messages on classroom behavior and math test scores (separately).21 We told parents how many days the child had attended school out of the previous school week (usually five days), and provided parents with the number of positive, neutral, and negative classroom behaviors that teachers had recorded in the classroom notebook over the prior month. We provided monthly updates on the record of all math test scores in the semester, the average of these scores, and the classroom average score for the same tests. Hence, parents learned information about their child’s math performance, as well as how their child performed relative to the classroom average. In addition, parents in both the treatment and the control group received text messages about school meetings, holidays, and other general school matters throughout the year. We refer to these as “general” messages.22 Parents of students in the control group continued learning about their child’s academic performance through report cards that were sent home every quarter.

To create the information for these messages, we collected data on attendance, grades, and behavior from school classroom books. Our research team scanned and entered these data into a digital platform, which then automated the sending of messages each week. We sent more than 44,000 text messages over 18 months: 68 percent provided information on attendance, 16 percent on math grades, and 16 percent on classroom behavior.23
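The message content described above lends itself to simple templating once classroom records are digitized. The sketch below composes a weekly attendance message and a monthly math message; the wording and function names are our own illustration, as the paper does not reproduce the exact SMS templates used by the platform.

```python
# Illustrative templates for the two main message types described in
# the text. Exact wording used by the Papás al Día platform is assumed,
# not taken from the paper.

def weekly_attendance_sms(child: str, days_attended: int,
                          school_days: int = 5) -> str:
    # Weekly message: days attended out of the previous school week.
    return (f"{child} attended {days_attended} of {school_days} "
            f"school days last week.")

def monthly_math_sms(child: str, scores: list[float],
                     class_avg: float) -> str:
    # Monthly message: all math test scores this semester, their average,
    # and the classroom average for the same tests.
    avg = sum(scores) / len(scores)
    listed = ", ".join(f"{s:.1f}" for s in scores)
    return (f"{child}'s math tests this semester: {listed}. "
            f"Average: {avg:.1f}. Class average: {class_avg:.1f}.")

print(weekly_attendance_sms("Ana", 4))
print(monthly_math_sms("Ana", [5.0, 4.2], 4.8))
```

Reporting the classroom average alongside the child's own average, as the last template does, is what lets parents see relative as well as absolute performance.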

Our original research design included a complementary intervention to the text messaging treatment. The complementary investment consisted of a nine-minute parenting video that provided parents with advice on how to use the text message information provided by schools. In a random 50 percent of all classrooms, we allocated the parenting video to the text messaging treated parents only. This second treatment therefore worked as an add-on to the original text messaging treatment. We discuss the implications of this add-on treatment for empirical strategy and interpretation of results in Section V.A and Online Appendix IV. Online Appendix IV details the various implementation challenges in the field that led to very few parents watching the video. In the rest of this paper, we focus on estimating the effects of the text messaging treatment (with or without the add-on parenting video).

IV. Data

A. Data Sources

We use information from four data sources. First, we collected data on all students’ math grades, daily attendance, and behavior notes from classroom books for the years 2014 and 2015. These data are recorded at daily, weekly, and monthly frequencies, and we aggregate them to the annual level. Second, we use student-level records provided by Chile’s Ministry of Education. These records contain information on students’ end-of-year school performance, including test scores, annual attendance rates, and grade retention, as well as basic demographic information. They are available for our sample of schools for the period 2013–2015 and are used for allocating funding/subsidies across schools. We use the 2013 data as pre-treatment controls and to generate our baseline measure of at-risk students, and the 2014 and 2015 Ministry data to validate our main results based on classroom records. Third, for every text message we recorded the day and time stamp, the content, the name of the recipient parent, and the delivery status (that is, whether the phone number received the message). Fourth, we administered several surveys to all parents and children participating in the experiment. Surveys were administered before the intervention took place (baseline), at the end of the first academic year (midline), and at the end of the second academic year (endline). Student surveys were conducted in class, while parent surveys were sent home with children, who were encouraged to ask their parents to complete and return the surveys.24

B. Outcome Variables

We use data recorded by teachers in classroom books to measure our primary student outcomes—math grades, attendance rates, and classroom behavior—which we aggregate at the annual level. Using administrative school records, we also measure outcome variables (that is, grades, attendance rates, and an indicator for whether the student passed the grade) at an annual frequency at the end of each school year to validate our main sources.25,26

Using classroom books, we also constructed monthly math grades, attendance rates, and behavioral notes. All math grades were standardized using the corresponding grade–year control mean and standard deviation.27 In addition, we built two indicator variables for the thresholds required for promotion: 85 percent annual attendance to pass the grade and a 4.0 math grade to pass the subject. Using classroom books, we also measured negative behavior by summing all behavioral entries during the (post-treatment) school year and then standardizing the sum using the grade–year control distribution.
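As a concrete illustration of these constructions, the sketch below (with made-up grades and attendance rates; only the 4.0 passing grade and 85 percent attendance cutoffs come from the text) standardizes grades against a control distribution and builds the two threshold indicators:

```python
from statistics import mean, pstdev

def standardize(values, control_values):
    """Standardize values using the control group's mean and SD
    (here, standing in for the grade-year control distribution)."""
    mu, sigma = mean(control_values), pstdev(control_values)
    return [(v - mu) / sigma for v in values]

# Hypothetical annual math grades (1-7 scale) and attendance rates.
control_grades = [4.0, 5.0, 6.0, 5.0]
student_grades = [3.5, 5.5]

z_grades = standardize(student_grades, control_grades)

# Indicator variables for the promotion thresholds described in the text.
passed_math = [1 if g >= 4.0 else 0 for g in student_grades]
attendance = [0.90, 0.80]
met_attendance_cutoff = [1 if a >= 0.85 else 0 for a in attendance]
```

The same `standardize` step applies to the behavioral-note sums, with the grade–year control group supplying the reference distribution.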

Our secondary outcome variables were designed to capture information gaps and certain behavioral responses to the treatment among students and parents. First, we built measures of information gaps from survey questions that asked parents about their children’s recent grades, absences, and behavior. We then compared parents’ responses to students’ responses and to administrative records. These measures help us test whether the text messaging treatment improved parent–school communication at all. Second, we asked parents and children a series of questions to compute pre-specified measures (that is, several items that are aggregated into one variable, usually referred to as a “scale”) of study habits, academic efficacy, parental support, parental supervision, parental school involvement, and parental positive reinforcement. These were intended to capture any changes in home behaviors and parent–child or parent–school relationships that might result from the intervention. We administered survey items from three sources: the University of Chicago Consortium on Chicago School Research, the Manual for the Patterns of Adaptive Learning Scales (PALS) developed by the University of Michigan, and scales on positive parenting developed by the Prevention Group at Arizona State University. We aggregated categorical answers into scales using a maximum likelihood principal components estimator. We then standardized answers using the mean and standard deviation of the control group. Overall, we find that each scale has good psychometric properties.28 We asked parents and their children a similar set of questions. Scales are highly correlated both across survey waves and between children and parents, further suggesting that the quality of these scales is high (see Online Appendix Tables F.5 and F.6).
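The scale construction can be sketched roughly as follows. The paper uses a maximum likelihood principal components estimator; as a simple stand-in, this sketch scores hypothetical survey items on their first principal component (via power iteration on the covariance matrix) and standardizes the resulting scale to a control group:

```python
from statistics import mean, pstdev

def first_pc_scores(items):
    """Score each respondent on the first principal component of a
    list of item columns (power iteration on the covariance matrix)."""
    k, n = len(items), len(items[0])
    centered = [[x - mean(col) for x in col] for col in items]
    cov = [[sum(centered[a][i] * centered[b][i] for i in range(n)) / n
            for b in range(k)] for a in range(k)]
    w = [1.0] * k  # initial weight vector
    for _ in range(200):
        w = [sum(cov[a][b] * w[b] for b in range(k)) for a in range(k)]
        norm = sum(x * x for x in w) ** 0.5
        w = [x / norm for x in w]
    return [sum(w[a] * centered[a][i] for a in range(k)) for i in range(n)]

# Three hypothetical categorical survey items (columns), five respondents.
items = [[1, 2, 3, 4, 5], [2, 2, 3, 5, 5], [1, 3, 3, 4, 4]]
scale = first_pc_scores(items)

# Standardize using the control group's mean and SD
# (here, respondents 0-2 play the role of the control group).
control = scale[:3]
z_scale = [(s - mean(control)) / pstdev(control) for s in scale]
```

This is only a sketch of the aggregation idea, not the authors’ estimator: maximum likelihood principal components handles categorical items and missing responses more carefully than plain PCA.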
Finally, to assess how much parents value the information provided through our intervention, follow-up surveys asked parents about their willingness to pay for the text messages.29 Each parent was randomly assigned one of three monthly prices V: $500 (low), $1,000 (medium), or $1,500 (high), where amounts are in Chilean pesos per month and $1,000 is about US$1.50.

C. At-Risk Index

We build an index to measure each student’s risk of failing classes or dropping out later in life. Specifically, we rely on three variables measured before the intervention began: standardized attendance ($\tilde{a}_i$), math grades ($\tilde{m}_i$), and negative behavioral notes ($\tilde{b}_i$).30 The at-risk index is then defined as a simple average of these measures, $R_i = \frac{1}{3}(-\tilde{a}_i - \tilde{m}_i + \tilde{b}_i)$, with signs chosen so that higher values indicate worse outcomes, which we standardize to the control group. The higher the value of this index, the worse the student’s baseline grades, attendance, and classroom behavior. Throughout the analysis, we rely on this index to assess the differential impact of the intervention on the primary and secondary outcomes for students with different values of the index.31
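A minimal sketch of the at-risk index construction, using hypothetical baseline data (the sign convention, flipping attendance and grades so that higher values indicate higher risk, follows the description above):

```python
from statistics import mean, pstdev

def zscore(values, control):
    """Standardize values against the control group's distribution."""
    mu, sd = mean(control), pstdev(control)
    return [(v - mu) / sd for v in values]

# Hypothetical baseline data; students 0-3 play the role of the control group.
attendance = [0.95, 0.90, 0.85, 0.80, 0.70]
math_grade = [6.0, 5.5, 5.0, 4.5, 3.5]
neg_notes = [0, 1, 2, 3, 6]
ctrl = slice(0, 4)

z_att = zscore(attendance, attendance[ctrl])
z_math = zscore(math_grade, math_grade[ctrl])
z_neg = zscore(neg_notes, neg_notes[ctrl])

# Simple average with signs chosen so that higher = more at risk,
# then re-standardized to the control group.
raw = [(-a - m + b) / 3 for a, m, b in zip(z_att, z_math, z_neg)]
at_risk = zscore(raw, raw[ctrl])
```

By construction, the control group's index has mean zero, and the student with the worst attendance, grades, and behavior (student 4 here) receives the highest risk value.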

In our setting, low attendance and low grades are early warning signals for future grade retention and dropout. To explore this empirically, we used data from the Ministry of Education on the complete educational trajectories of almost 1.3 million students who were in Grades 8–12 during the 2006–2013 school years in the metropolitan area of Santiago. We estimated a simple model in which the dependent variable was an indicator for having been retained in the same grade or having dropped out of school, and the independent variables were attendance and GPA in the previous three years (two of the three components of the at-risk index that we observe for the whole population). We find that all coefficients are negative, and most are statistically significant at conventional levels.32
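A stripped-down version of this early-warning regression, a linear probability model of retention/dropout on lagged attendance and GPA, might look like the sketch below. The data are invented for illustration; the paper's finding of uniformly negative coefficients comes from the actual records of almost 1.3 million students.

```python
def ols(X, y):
    """OLS via normal equations with Gaussian elimination.
    X is a list of rows, each beginning with 1.0 for the intercept."""
    k = len(X[0])
    A = [[sum(r[a] * r[b] for r in X) for b in range(k)] for a in range(k)]
    b = [sum(r[a] * yi for r, yi in zip(X, y)) for a in range(k)]
    for i in range(k):  # forward elimination with partial pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):  # back substitution
        beta[i] = (b[i] - sum(A[i][c] * beta[c]
                              for c in range(i + 1, k))) / A[i][i]
    return beta

# Hypothetical student records: lagged attendance, lagged GPA,
# and an indicator for grade retention or dropout.
att = [0.95, 0.90, 0.85, 0.80, 0.75, 0.70, 0.65, 0.60]
gpa = [6.0, 5.5, 5.6, 5.0, 4.8, 4.2, 4.0, 3.8]
retained = [0, 0, 0, 0, 1, 0, 1, 1]

X = [[1.0, a, g] for a, g in zip(att, gpa)]
beta = ols(X, retained)  # beta = [intercept, coef_attendance, coef_gpa]
```

With such a small, collinear toy sample the individual coefficients should not be over-interpreted, but the fitted model still predicts a higher retention/dropout risk for students with weaker attendance and grades.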

D. Response Rates

Baseline data from administrative sources are available for all students in the experimental sample, except for a handful who joined the school mid-year in 2014. Administrative data are also complete for the first year of the experiment. During the second year of the experiment, due to normal student turnover, we have information for 90 percent of the students. This attrition rate is similar for treated and control students. Regarding survey data, students’ response rates were 91 percent, 89 percent, and 80 percent at baseline, midline, and endline, respectively. More data were missing for parents, particularly in the follow-up surveys. Parental response rates were 73 percent at baseline, 57 percent at midline, and 54 percent at endline. For all survey waves, response rates were similar across treated and control students and parents. In addition, respondents may have chosen to complete some items but not others. This item nonresponse affects the sample sizes of secondary outcomes measured through the midline and endline parents’ and students’ surveys.33

V. Estimation and Experimental Validity

A. Empirical Strategy

1. Intention-to-treat effects (ITT)

To identify the effect of sending parents high-frequency academic information on students’ and parents’ outcomes, we pool the two school years of the intervention and estimate individual-level regressions of the form:

$$Y_{ict} = \beta_0 + \beta_1 T_{ic} + X_i'\beta_2 + \pi_t + \gamma_c + \varepsilon_{ict} \qquad (1)$$

where $Y_{ict}$ is the outcome of student (or parent) i in classroom c of school j in year t; $T_{ic}$ is an indicator for whether a child’s parents were part of the randomized group that received the information treatment, and it is constant over time; and $\pi_t$ are year fixed effects. $X_i$ is a vector containing the baseline standardized math grade and attendance rate.34 Finally, $\gamma_c$ are classroom-level fixed effects (strata in the experimental design). Although the main randomized variation is at the student level, to be conservative we cluster standard errors at the classroom level.35 $\beta_1$ captures the intention-to-treat effect of the information sent by text messages. Because we include classroom-level fixed effects ($\gamma_c$), $\beta_1$ is identified through differences in individual-level treatment status within each classroom.36
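Because β1 is identified within classrooms, the core of Equation 1 can be illustrated by demeaning the outcome and the treatment indicator within each classroom (which absorbs the classroom fixed effects) and regressing one on the other. The sketch below uses hypothetical data, a single year, and no additional controls:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical student-level data: classroom id, treatment dummy, outcome.
classroom = ["A", "A", "A", "A", "B", "B", "B", "B"]
treated = [1, 1, 0, 0, 1, 0, 1, 0]
outcome = [5.6, 5.4, 5.1, 5.0, 4.9, 4.5, 4.8, 4.6]

# Group observations by classroom so we can demean within each stratum.
groups = defaultdict(list)
for c, t, y in zip(classroom, treated, outcome):
    groups[c].append((t, y))

# Within-classroom demeaning absorbs the classroom fixed effects gamma_c.
t_dm, y_dm = [], []
for c, t, y in zip(classroom, treated, outcome):
    ts = [x[0] for x in groups[c]]
    ys = [x[1] for x in groups[c]]
    t_dm.append(t - mean(ts))
    y_dm.append(y - mean(ys))

# ITT effect: slope of demeaned outcome on demeaned treatment.
beta1 = sum(a * b for a, b in zip(t_dm, y_dm)) / sum(a * a for a in t_dm)
```

The actual estimation additionally includes year fixed effects and baseline controls, and clusters standard errors at the classroom level; the demeaning step here only shows why identification comes from within-classroom treatment contrasts.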

2. Classroom-level spillover effects

We exploit the differential classroom-level exposure to treatment to estimate spillover effects of the intervention on the treated. Such spillovers could be important, especially if such parent–school communication programs scale up to cover all enrolled students (rather than just a randomly selected treatment group), where by definition, there would be no control group. Let Ec be an indicator variable equal to one if classroom c was randomized to have 75 percent of students treated and is equal to zero if it was randomized to have 25 percent of students treated. We estimate the parameters of the following model:

$$Y_{ict} = \eta_0 + \eta_1 T_{ic} + \eta_2 (T_{ic} \times E_c) + X_i'\eta_3 + \pi_t + \lambda_c + \varepsilon_{ict} \qquad (2)$$

The coefficient $\eta_2$ measures the differential treatment effect of the text message intervention in classrooms where a larger proportion of students were treated. Because of randomization, and assuming there are either no spillovers to the control group or equal spillovers to the control group in all classrooms, the estimate of $\eta_2$ allows us to quantify the spillover effect on treated students. This is the relevant group when thinking about scaling the program to cover all students. In our experimental design, $E_c$ is collinear with $\lambda_c$, so we cannot estimate differential spillovers among students who were randomized out of the text messages treatment.37

If there are any positive spillover effects to the control group, such as those found by Bettinger et al. (2021), our treatment effect estimates ($\hat{\beta}_1$) would capture a lower bound of the effect of text messages on all students’ outcomes. Moreover, as long as any spillovers on the nontreated are larger in classrooms where a higher share of students were treated, our estimated spillover effects on the treated ($\hat{\eta}_2$) would also represent a lower bound of the true spillover effect for this group.38
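Under this design, the spillover comparison can be sketched as follows: with the classroom fixed effect absorbing E_c, the treatment effect is estimated within low-share and within high-share classrooms, and the difference corresponds to η2. The data below are hypothetical, with one classroom of each type; the actual estimation pools all classrooms in one regression.

```python
from statistics import mean

def within_slope(treated, outcome):
    """Slope of outcome on treatment after demeaning (one classroom)."""
    t_dm = [t - mean(treated) for t in treated]
    y_dm = [y - mean(outcome) for y in outcome]
    return sum(a * b for a, b in zip(t_dm, y_dm)) / sum(a * a for a in t_dm)

# Hypothetical data: one low-share (25 percent treated) and one
# high-share (75 percent treated) classroom.
low_t, low_y = [1, 0, 0, 0], [5.2, 5.0, 4.9, 5.1]
high_t, high_y = [1, 1, 1, 0], [5.6, 5.4, 5.5, 5.0]

eta1 = within_slope(low_t, low_y)           # effect in low-share classrooms
eta2 = within_slope(high_t, high_y) - eta1  # differential (spillover) effect
effect_high = eta1 + eta2                   # total effect in high-share classrooms
```

In this toy example the treatment effect among the treated is larger where more classmates are treated, the pattern the spillover test in Table 4 looks for.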

B. Balance on Pre-Treatment Observable Characteristics

We compare the observable characteristics of students and parents assigned to the treatment and control groups before the intervention began.

Table 1 shows total observations with available data (Column 1),39 the average of each variable for the treatment group (Column 2) and the control group (Column 3), and the p-value of the null hypothesis that, conditioning on classroom (strata) fixed effects, the differences between treatment and control averages are zero (Column 4).40

Table 1

Students’ and Parents’ Pre-Treatment Characteristics

Panel A shows statistics based on administrative records. In our sample, 45 percent of students are female, and the median age is 9.8 years. Students in the treatment and control groups have similar grades at baseline, with math and language scores around 5.1 (on a 1–7 scale), similar attendance rates (89 percent), and similar levels of the at-risk index. About 95 percent passed their grade in the year prior to the experiment. Pre-treatment administrative records are missing for about 9 percent of the sample. We cannot reject equality for any of the mean characteristics of students randomized to treatment and control. The last row of the panel presents a Wald test of the joint null hypothesis that the differences in means reported in Columns 2 and 3 are zero for all variables in the panel. We cannot reject the null at conventional levels of significance.

Panels B and C show standardized parents’ and students’ scales from the baseline surveys.41 Before the intervention began, students in the treatment and control groups reported putting in similar effort when studying at home and receiving similar levels of parental supervision, involvement in their school affairs, and positive reinforcement at home. Parents in the treatment and control groups likewise report similar parenting practices at home. We reject equality at the 10 percent level for one measure: parents in the treatment group report less family support than parents in the control group. Although we cannot reject most of the null hypotheses that the average scales are similar for treated and control students, we note that in most cases the estimated means are lower in the treatment group. This could reflect the fact that many of these scales are noisy measures of a similar latent variable.

For this reason, we aggregated all these scales into parents’ and students’ indexes.42 We cannot reject equality of the mean indexes of treated and control students. Finally, we find that mothers in the treatment and control groups are equally likely to have completed high school. The final row in each panel presents the p-value of the joint test of equality of the variables listed in the panel. In both cases, we cannot reject the null at the 10 percent significance level.

C. Delivery of Text Messages

All text messages were sent to parents as planned. However, not all text messages were actually received.43 Several factors contributed to reception failure. A message was more likely to fail if the network was very busy, if some technical problem surfaced within the network, or if a parent had changed their phone number during the experiment. To maximize the chances that text messages reached parents, we sent the messages on Mondays, when the network was not as busy as on other days.44 At the beginning of the second school year during which the experiment took place, we also recontacted all consenting parents to verify or update their cellphone numbers.

Table 2 shows estimates obtained with Equation 1, where the dependent variable is the total number of messages sent (Panel A) or received (Panel B) during the course of the experiment. The variables are computed for each type of message (attendance, grades, classroom behavior, general, and all) using information from the digital platform described in Section III. Each point estimate shows the coefficient estimate of β1, which estimates the differences in the total number of text messages sent to/received by parents in the treatment group and those in the control group.

Table 2

Compliance by Type of Text Message

By the end of 2015, when the experiment had run for one and a half school years, an average of 44 more text messages per year had been sent to parents in the treatment group than to parents in the control group. Over the same period, an average of 26 messages per year had been received by parents in the treatment group. This implies that almost 60 percent of sent text messages were successfully received by the end of the intervention, a success rate similar to those reported in the literature. Bergman and Chan (2021), for instance, report that in their text messaging information intervention in West Virginia, about one-third of treated parents never received messages that were sent.

The bottom panel shows the distribution of messages sent and received by parents in the treatment and control groups. Most of the messages were about attendance because these were sent weekly, while classroom behavior and grade messages were sent monthly. These treatment messages were only sent to, and received by, parents assigned to the treatment group. By contrast, parents of students in the control group were sent (and received) general text messages at largely the same rate as those in the treatment group (Column 5).45

The data suggest that the probability of receiving text messages is unlikely to be correlated with family-level characteristics that also affect the child outcomes of interest. We might worry, for instance, that parents who have low attachment to the labor market and unstable incomes are also more likely to switch cell numbers. They would then be less likely to receive text messages about their children’s academic performance. Children in these families may also have worse school outcomes. To assess this possibility, we regressed the share of successfully delivered text messages (total received/total sent) on baseline attendance and math grades, age, gender, a composite index of the parent scales, mother’s education (as reported in Table 1), and classroom fixed effects. Students with higher baseline grades, higher attendance, or more family support and supervision are no more (or less) likely to receive text messages. Mother’s education is only weakly correlated with the share of messages received.46

Beyond the matter of whether parents received text messages that were sent, there is also the question of whether parents read the text of the messages they received. In the follow-up surveys we asked parents if they had received text messages with information on their children’s school outcomes. We found that parents in the treatment group were more likely to answer that they had received text messages regarding their child’s attendance, grades, and classroom behavior.47

VI. Results

A. Main Results: Students’ Academic Outcomes Improved

Table 3 presents the main results. We show the estimates of the intention-to-treat effects (using Equation 1) of the intervention on our primary students’ outcomes measured using classroom books: standardized math-grade outcomes at the end of each year (Column 1), an indicator for whether the annual math grade was a passing grade (above 4.0) (Column 2), yearly attendance rate (Column 3) for each year, an indicator for whether attendance was above the 85 percent cutoff required for the student to pass the grade (Column 4), and standardized total annual negative behavioral notes (Column 5).

Table 3

Treatment Effects on Grades, Attendance, and Classroom Behavior

The ITT estimates show positive and significant effects on students’ school performance. Math grades improved by 0.088 standard deviations. This positive impact on math grades pushed more students over the 4.0 cutoff for passing the subject, increasing this probability by 2.7 percentage points. The treatment also improved attendance by almost 1.1 percentage points, leading to a 4.7 percentage point increase in the number of students who met the 85 percent attendance rate threshold needed to pass the grade.48 On average, the treatment did not have an impact on the occurrence of negative classroom behaviors.

Our main results are robust across a range of specifications, sample choices, and data sources. Online Appendix Figure 2 presents estimates of the treatment effects on grades, attendance, and behavior from specifications that include and exclude baseline controls; that separate the midline and endline samples; that use samples including students who leave the study in year two (either because they are in Grade 8 in the first year or attend the one school that dropped out of our study at the end of year one); and that use outcomes data from the national ministry rather than the administrative data our research team collected directly from schools. While the effects on math grades are larger in 2014, the impact on attendance rates appears to be stronger in the second year of the intervention. Overall, while the confidence intervals move around somewhat with different choices of samples and outcomes, the point estimates for the impacts of the treatment on grades and attendance are uniformly positive. The main results in Table 3 are in the middle of the range of estimates in Online Appendix Figure 2. For each outcome, we could not reject the hypothesis that the point estimates are the same across different samples, specifications, and sources of outcomes data, and the same as in Table 3. The fact that the treatment produces stable positive impacts on our main grade and attendance outcomes is reassuring.49

Panel B of Table 3 shows estimates for students with different pre-treatment risk of failing grades or poor attendance. To estimate these effects, we interacted the at-risk index described in Section IV with the randomized treatment indicator variable (in Equation 1) and controlled for the at-risk index. The intervention had the largest impacts on math grades, attendance, and improvements in behavior for students who were more at risk before the intervention started. The treatment effects are two to three times larger for students with an at-risk index one standard deviation above the mean (which by construction of the index is zero for the control group). Figure 3 explores this result in more detail by plotting the linear prediction of the treatment effects on math grades (Panel A), attendance rates (Panel B), and classroom behavior (Panel C) for students with different levels of the at-risk index. We find that the effects for attendance and math grades are larger and statistically significant only for students at higher risk. The pattern of behavioral effects by the at-risk index also suggests larger improvements (fewer negative behavior notes) for the most at-risk students, although the confidence intervals in Figure 3 Panel C cannot reject zero. Note that the results in Table 3 Panel B are consistent with the treatment increasing the probability of the most at-risk students achieving the attendance and math grades thresholds for passing the grade and subject—precisely for the population of students who have a higher probability of dropping out in later years. Improving parent–school communication through this text messaging program seems to effectively target and improve outcomes for students who need the most support in school and at home.

Figure 3

Predicted Treatment Effect by Baseline At-Risk Index

Notes: Figure shows linear predictions and 95 percent confidence intervals of the intention-to-treat (ITT) estimates on math grades, attendance rate, and negative behavior. Computed based on coefficients from Columns 1, 3, and 5 of Table 3, Panel B, respectively. The standard error of the estimate at each percentile p is constructed as $\sqrt{\widehat{Var}(\hat{\beta}_1) + \bar{r}_p^2\,\widehat{Var}(\hat{\beta}_2) + 2\bar{r}_p\,\widehat{Cov}(\hat{\beta}_1, \hat{\beta}_2)}$, where $\bar{r}_p$ is the mean of the at-risk index in percentile p and $\hat{\beta}_1$ and $\hat{\beta}_2$ are the treatment and treatment-by-index coefficients.
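The standard error of the predicted effect at a given index value follows the usual formula for the variance of a linear combination of two coefficients; a small sketch with hypothetical variance and covariance values:

```python
import math

def predicted_effect_se(var_b1, var_b2, cov_b12, r_bar):
    """SE of beta1 + r_bar * beta2, the predicted treatment effect at
    at-risk index value r_bar (variance of a linear combination)."""
    return math.sqrt(var_b1 + r_bar ** 2 * var_b2 + 2 * r_bar * cov_b12)

# Hypothetical variance-covariance entries for the treatment coefficient
# and the treatment-by-index interaction coefficient.
se = predicted_effect_se(0.0016, 0.0009, 0.0002, 1.0)
```

In practice the variances and covariance come from the cluster-robust variance matrix of the estimated interaction model.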

B. Classroom-Level Spillovers on the Treated

In the presence of treatment spillovers among the treated, the treatment effect could vary with the share of other treated students in the classroom. This could happen, for example, if the value of skipping school falls when friends are no longer truant (Bennett and Bergman 2021). Alternatively, if a student’s friends are working harder to improve their grades, that student’s own effort to earn better grades may increase (if, for instance, there are ranking concerns, Tincani 2018). Spillover effects are important to quantify when considering possible impacts at scale. To estimate these indirect effects of the intervention, under the assumptions discussed in Section V.A, we exploit the randomization of the different shares of students who were part of the treatment group in each classroom.

Table 4 presents the ITT spillover results for the same set of outcomes as in Table 3. Note that the interaction coefficient captures the differential effect of the spillovers by comparing classrooms with high and low shares of treated students; in other words, it examines whether there is additional value to being in the text messaging program when many more classmates are also in the program. In all cases, the point estimates imply that being assigned to treatment in a high-share classroom improves the educational outcomes of treated students relative to the main effect of the treatment in low-share classrooms, although the estimates in Columns 1 and 3 are imprecise and not statistically significant. The last row presents the p-value of the null hypothesis that the treatment effect was zero in high-share classrooms, which is rejected at the 10 percent level in Columns 2, 3, and 4. This suggests positive spillovers of the intervention among treated students. With a higher share of treated peers, students are significantly more likely to meet the 4.0 passing grade cutoff and to reach the 85 percent attendance cutoff.

Table 4

Spillover Effects

The spillover results in Table 4 suggest that we would not expect any negative impacts of scaling up this intervention to cover all students. If anything, we should expect even larger impacts at scale, when everyone is treated.

C. Do Text Messages Work in the Same Way over Time?

Bergman and Chan (2021) note that there are many open questions about how parents will respond to ongoing text messaging from schools. The long duration of our treatment intervention allows us to explore how parents responded to the text messaging over time. Parents who receive text messages might forget about the content of the messages after some time, and this could affect their decisions about whether to allow their children to miss a day of school. The majority of the weekly attendance text messages were sent on Mondays. We use daily attendance data to explore whether the effectiveness of the text messages fades within the week.50

Figure 4 depicts point estimates and confidence intervals for models similar to that of Equation 1, which was modified to include an interaction of the share of text messages received with days-of-the-week indicator variables. We find a pattern suggestive of fade-out over the week. Attendance by students in the treated group is significantly higher than attendance of students in the control group on Mondays, Tuesdays, and Wednesdays; by contrast, attendance rates of the two groups are indistinguishable on Thursdays and Fridays.51 However, we cannot reject equality of the coefficient estimates. Rogers and Feller (2018) find similar results with a larger impact in the week immediately following the delivery of the treatment. This result suggests that the treatment effect of the text messages could be somewhat short-lived. Information treatments delivering information that depreciates in value over time may need to be high frequency in order to be effective.

Figure 4

Weekly Fade-Out of Attendance Treatment Effects

Notes: Coefficients are obtained from the daily intention-to-treat estimates of Online Appendix Table 4. Standard errors are clustered at the classroom level. Confidence intervals are at the 90 percent level.

A related concern is that parents could at some point stop paying attention to the content of the communication or stop internalizing the information after having received such messages over some period of time. Because our intervention lasted for one and a half school years, we can explore the treatment effects over the months of the intervention. We estimated effects by month-groups, interacting the treatment with groups of months since the beginning of the intervention. Figure 5 plots the estimates and confidence intervals on the impact on monthly attendance, monthly math grades, and monthly negative behavioral notes.

Figure 5

Treatment Effects over Time

Notes: Coefficients are obtained from the respective intention-to-treat estimates of Online Appendix Table 5. Standard errors are clustered at the classroom level. Confidence intervals are at the 90 percent level.

We find that the impact on attendance is mainly concentrated in the last months of the intervention, although we cannot reject the null that all coefficients are equal.52 In the case of math grades and behavior, there is no clear pattern in the timing of the effect. This is consistent with students/parents dynamically optimizing attendance behavior. The intervention could have more of an impact on absenteeism than grades by the end of the year because that was when parents/students started to realize that the absences had accumulated enough to matter. It could also be the case that approaching the end of the school year, attendance is easier to move than test scores. From a policy perspective, these results suggest that parents do not become immune to the intervention over the course of 18 months.

VII. Did the Text Message Intervention Reduce Parent–School Information Gaps and Change Parenting Behaviors?

Our intervention was designed to close information gaps between parents and schools and to promote parent engagement with students and with schools. In this section, we explore the underlying mechanisms that might explain why students’ school performance improved after their parents were exposed to high-frequency text messages containing student-specific information. We show that the treatment closed existing parent–school information gaps about math grades, attendance, and behavior, while also improving parent attentiveness to other, nontargeted aspects of school performance. The new information seems to have changed the way parents support and supervise their children at home. All of these changes reflect greater parent engagement with the day-to-day school activities of their children.

A. Parent–School Information Gaps Narrowed

We study whether the text messages reduced the prevailing parent information gaps regarding students’ academic performance by comparing the accuracy of information among parents in the treated and control groups. We construct different measures of the accuracy of parents’ beliefs regarding their child’s school performance. Specifically, we contrast parents’ responses with student surveys, classroom books, and school records. We then estimate treatment effects using Equation 1, in which the outcome variables are the misinformation measures.

Table 5 presents the ITT effects.53,54 Columns 1–2 measure parental misinformation regarding a student’s attendance. Surveys asked parents about their child’s absences with and without permission in the previous two weeks. We contrast parents’ responses with students’ own responses on total absences (Column 1) and with actual absences recorded in classroom books (Column 2). Columns 3–4 assess the effect of the intervention on parental information about students’ grades. Columns 5–6 capture parental misinformation about students’ misbehavior. In both cases, we contrast parents’ responses with students’ survey responses (Columns 3 and 5) and with classroom books (Columns 4 and 6). In all cases, the outcome variable is an indicator that is equal to one if the parent’s response does not match the student’s response or the administrative records.55
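Constructing these misinformation indicators is straightforward; with hypothetical parent, student, and classroom-book reports of absences over the two-week window, the mismatch dummies are:

```python
# Hypothetical reports of a child's absences in the previous two weeks:
# the parent's survey answer, the student's survey answer, and the
# classroom-book record.
parent_report = [0, 1, 2, 0]
student_report = [0, 2, 2, 1]
classroom_book = [0, 2, 2, 0]

# Misinformation indicators: 1 if the parent's answer does not match
# the student's report (or the administrative record), 0 otherwise.
misinfo_vs_student = [int(p != s) for p, s in zip(parent_report, student_report)]
misinfo_vs_book = [int(p != b) for p, b in zip(parent_report, classroom_book)]
```

These indicators then serve as the outcome variables in Equation 1, so a negative treatment coefficient means the text messages reduced parental misinformation.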

Table 5

Treatment Effects on Parental Misinformation

Panel A of Table 5 shows that all point estimates are negative. That is, text messages reduced information gaps about student attendance, grades, and classroom behavior. Parents’ reports got closer both to students’ reports and to school administrative records. Because our sample of parents who responded to the follow-up survey is relatively small, these reductions in information gaps are not always precisely estimated; nevertheless, coefficients are large and negative for all outcomes.56 The ITT estimates, for instance, show that text messages significantly reduced the probability that parents misreported the number of their child’s absences. The likelihood of such misreporting fell by 7.9 percentage points, in comparison to the results from student surveys. When we compare parents’ beliefs with classroom books, the results also show a decline in information gaps, but not to a degree that is statistically significant.

In addition, the information intervention seems to have improved the accuracy of parents’ knowledge of their child’s grades. Although not statistically significant at conventional levels, coefficients are negative and stable across outcomes. We also find a significant improvement in the precision of parents’ assessment of their child’s misbehavior at school. Overall, these results suggest that treated parents had more accurate information about their child’s grades, attendance, and classroom behavior after the treatment.

Panel B of Table 5 tests whether treatment effects on information gaps vary for students with different baseline values of the at-risk index. The intervention seems to have improved the accuracy of parents’ beliefs about their child’s grades and behavior for students with a higher at-risk index, although the results are not statistically significant.

We notice that the impact of the treatment on grades in Table 3 was larger (in percent terms) than the impact on attendance in that table; however, the parent misinformation gap shrinks more for attendance than for grades. Attendance, grades, and behavior, though, are measured in different units, and the range of possible parent responses to questions about these outcomes also differs from the range of actual outcomes (for details, see the notes to Table 5). For this reason, and because our smaller parent sample makes precise estimation challenging, we view the results in Table 5 as broadly consistent with the view that exposure to the treatment improved parents’ information sets. Closing these information gaps was one channel through which the text message intervention improved schooling outcomes.

B. Effects on Other Subjects, and Parent Misinformation about Those Subjects

In Table 6, we estimate the effects of the treatment on other, nontargeted subjects using outcomes data reported by the schools to the national ministry. Language scores increased by a significant 0.11 standard deviations, and scores in natural science and history also increased by 0.05–0.1 standard deviations (not significant). This positive impact of the treatment on nonmath subjects could have occurred through the channel of increased attendance (that is, a positive downstream impact of the treatment). However, the treatment might also have increased parental attention to school in general, thus leading to improvement in nontargeted academic subjects.

Table 6

Treatment Effects on Other Subjects’ Grades and Misinformation

In Panel B, we show suggestive evidence that the treatment may have reduced parent misinformation in general. We estimate the impact of the treatment on parental misinformation about other subjects not specifically targeted by the intervention. Across the board, parent misinformation relative to the administrative records shrinks. The coefficients for parent information gaps about language, social studies, and history are all negative. Interestingly, parent information gaps in language shrink to about the same extent as they do for math grades (Table 5, Column 1). The results in Table 6 are consistent with two plausible explanations. First, in addition to reducing information gaps on the specific topics on which parents received information, the text message intervention could have induced parents to pay more attention to how their children are doing in other (nonmath) subjects. Alternatively, because grades are correlated across subjects, parents may have updated their beliefs about their child’s performance in the targeted subject (math) and, jointly, in all other subjects.57

C. Parent Engagement at Home and School Improved

By providing parents with information over a sustained period of time, the intervention may have led students and parents to respond with changes in behaviors at home, which, in turn, might then have resulted in better outcomes at school. To examine this, in Table 7 we analyze the responses to survey questions that were put to both parents (Panel A) and students (Panel B) in an identical manner. Columns 1 and 2 measure students’ academic responses in terms of two aggregate scales (study habits and academic efficiency). Columns 3–6 look at parents’ behavioral responses, in terms of several aggregate scales designed to capture family support, supervision, involvement with school matters, and positive reinforcement at home.

Table 7

Treatment Effects on Parental Behavior at Home

These aggregate scales are built from individual survey items. The control group means of these outcomes provide a clearer picture of the status quo. About 66 percent of students in the control group considered themselves to be organized with schoolwork, and 80 percent thought they were capable of understanding difficult school content. Approximately 93 percent of parents reported having shown their children that they are proud of them and having congratulated them on school achievements. Thirty-six percent reported that their children went to school alone, and 29 percent reported communicating with their child’s teacher.58

We do not find a clear pattern or statistically significant effects of the treatment on parents’ self-reported behaviors. By contrast, treated students perceived that they received significantly more family support as a result of the intervention (0.112 standard deviations). This scale incorporated students’ answers to questions such as whether parents checked their homework, motivated them, or talked to them when needed. Moreover, the treatment also increased students’ perception of their parents’ level of school involvement (0.117 standard deviations). This perception was reflected in students’ answers to questions about whether their parents contacted the school director or teachers and whether their parents attended school meetings.

Overall, results in this section show that exposure to the text messages treatment reduced parent–school information gaps and increased student reports of parent engagement with their day-to-day school activities. Although smaller samples in the parent and student surveys limit the precision of the estimates, the pattern of results is consistent with those of Bergman (2021), Bergman and Chan (2021), and Barrera-Osorio et al. (2020), who find that the additional information provided to parents increased their contact with the school.

VIII. Cost-Effectiveness, Willingness to Pay, and Potential to Scale

A. Cost-Effectiveness

The literature on information interventions to improve learning in school settings has burgeoned in recent years, and several reviews of this work now exist. For example, JPAL (2020) reviews the results of 23 randomized evaluations from low-, middle-, and high-income countries in which information about student performance (for example, attendance, behavior, or grades) is provided to parents using text messages, emails, report cards, and videos. Escueta et al. (2020) identify 13 field experiments in which information about student performance is sent to parents through text messages and emails. Collectively, these studies show that closing knowledge gaps about a child’s education often increases parental engagement with schools, student effort in school, or both, while also improving learning outcomes. Bergman (2021) is a leading example of this type of work. In his study, parents of 462 students in Los Angeles schools were randomly assigned to receive automated texts about missing assignments and grades. After four months, the text message intervention decreased the number of missed classes by 28 percent, with a corresponding gain of 0.21 standard deviations in math grades, but no gains in English. These results are much larger than the ones we find in Chile.59 This difference reflects an emerging fact: the impacts of information interventions to improve learning in schools vary widely across settings (Angrist et al. 2020).

Where do our estimates fit with respect to this literature? Our estimated learning gains in math (0.09 standard deviations) are at the lower end of the range of effect sizes in the literature (0.09–0.19 standard deviations of test scores), while our attendance gains of 1.1 percentage points fall in the middle of the range (0–2.1 percentage point gains in attendance).60 We do find larger estimated effects for the most at-risk students in our sample of schools. For this group, the effect of the text messaging program generates grade and attendance effects at the upper end of the range of average effects in the literature.

Regarding our intervention’s cost, as Bergman and Chan (2021) point out, interventions that leverage technology to connect schools with parents on an ongoing basis are characterized by low variable costs and a one-time setup cost. In their study, the variable cost per text message was negligible, while there was a one-off fixed training cost of US$7 per student if schools did not have electronic gradebooks. In the case of Chile, the market price of sending a text message is US$0.05. With an average of six text messages sent per month for ten months, this adds up to $3.00 per student per year. In addition, the monthly subscription fee for a digital text messaging platform is $0.77 per student, or $7.70 per student per year. We estimate the cost of digital data entry to be $0.16 per student per year.61 The total variable cost per student per year (in 2021 nominal prices) is therefore $10.86. Given our effect sizes for math grades, the cost of a 0.01 standard deviation improvement in math grades would be $1.21 at market prices (10.86/9). In transitioning to this system, schools would have to incur a fixed messaging platform setup cost of $615.40 per school. Given that the average primary school in our sample had 377 students, the fixed cost per student in the first year would be $1.63. In the first year of using a platform like Papás al Día for parent–school communication, the cost of a 0.01 standard deviation improvement in math grades would therefore be $1.39, with that cost falling over time.62
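The cost arithmetic above can be checked with a short back-of-the-envelope calculation (all figures in 2021 USD, taken directly from the text):

```python
# Replicating the per-student cost figures reported in the text.
messages_per_month, months = 6, 10
cost_per_message = 0.05
messaging = messages_per_month * months * cost_per_message  # $3.00/year
subscription = 0.77 * months                                # $7.70/year
data_entry = 0.16                                           # $0.16/year
variable_cost = messaging + subscription + data_entry       # $10.86/year

effect_sd = 0.09  # math-grade effect, in standard deviations
# Cost of a 0.01 SD improvement at market prices: 10.86 / 9
cost_per_001sd = variable_cost / (effect_sd / 0.01)

setup_per_school = 615.40
students_per_school = 377
fixed_per_student = setup_per_school / students_per_school  # ~$1.63
# First-year cost per 0.01 SD, including the platform setup cost:
first_year = (variable_cost + fixed_per_student) / (effect_sd / 0.01)

print(round(variable_cost, 2))    # 10.86
print(round(cost_per_001sd, 2))   # 1.21
print(round(fixed_per_student, 2))  # 1.63
print(round(first_year, 2))       # 1.39
```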

A program like Papás al Día is cost-effective when compared to other interventions designed to improve learning outcomes. Busso et al. (2017) review results from 21 low-cost interventions designed to improve student learning in primary schools in Latin America and the Caribbean. Strategies include tracking, funding for materials, lesson plans, nonmonetary incentives, and guided technology. The authors of that study calculate the implementation cost of each intervention implemented in Colombia. The average cost per student for a 0.01 standard deviation gain in learning is US$4.42, and the median cost is US$2.00.63 In terms of cost, our intervention compares favorably to these other approaches.

B. Willingness to Pay

In addition to the program being cost-effective, most parents in our study seemed willing to pay enough to cover the costs of the intervention. In our follow-up surveys, we asked both treatment and control parents whether they would be willing to pay for a text message service that provided them with four monthly messages from schools about their child’s performance and behavior in school. This was a nonincentivized survey experiment in which we randomized the price at which parents were given a “take it or leave it” offer: a high price of 1,500 Chilean pesos (CLP, or US$2.20) per month, a medium price of 1,000 CLP (US$1.50) per month, or a low price of 500 CLP (US$0.74) per month.64 The low price covers more than twice the monthly cost of sending the messages.

Table 8 uses the survey experiment to estimate parents’ demand curve for the complete sample in Column 1. On average, 71 percent of parents said that they were willing to pay at least the minimum amount to receive text messages from the school, which comfortably covers the break-even costs of the intervention. In Column 2, we allow each experimental group to have a different response to the randomized price by including interactions between price assignment and treatment assignment.

Table 8

Parental Willingness to Pay

Overall, the demand curve for a service like the one we offered in our intervention is downward-sloping. Column 1 shows that the share of parents willing to pay for the service falls by more than 15 percentage points as the price increases from the low to the medium level, and by an additional 8.7 percentage points as the price increases from the medium to the high level (the coefficient on the high price is −0.238). We then analyze whether the treatment induced parents to value the text message program differently (Column 2). There is no evidence that treated parents value the information differently than control parents.
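To illustrate how such a demand curve is read off the survey experiment, the sketch below computes the share of “yes” responses at each randomized price on toy data. The responses here are hypothetical; the paper estimates the analogous linear probability model, with the low price as the omitted category, so each price-dummy coefficient is the drop in willingness to pay relative to the low price.

```python
# Toy survey-experiment data: (randomized price group, willing-to-pay flag).
responses = [
    ("low", 1), ("low", 1), ("low", 1), ("low", 0),
    ("medium", 1), ("medium", 1), ("medium", 0), ("medium", 0),
    ("high", 1), ("high", 0), ("high", 0), ("high", 0),
]

# Share of parents willing to pay at each randomized price.
shares = {}
for price in ("low", "medium", "high"):
    answers = [r for p, r in responses if p == price]
    shares[price] = sum(answers) / len(answers)

# With the low price omitted, the LPM price-dummy coefficients are
# simply differences in shares relative to the low-price group.
coef_medium = shares["medium"] - shares["low"]  # -0.25 in this toy data
coef_high = shares["high"] - shares["low"]      # -0.50 in this toy data

print(shares)
print(coef_medium, coef_high)
```

In the paper’s actual estimates, the analogous coefficients are roughly −0.15 (medium) and −0.238 (high).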

C. Features of Scalability

The primary goal of this project was to evaluate an intervention that leverages existing school resources and practices—rather than requiring substantial additional resources or a change in school practices—to improve student outcomes. In middle-income countries, and in poor schools in high-income countries, education expenditures are already high. There are potentially large returns to adopting low-cost interventions that can make existing school expenditures more effective. Our results indicate that a text messaging intervention to improve parent–school communication can achieve just this.

From our experience in the field, it would be relatively straightforward for a school district to scale a Papás al Día–like program by adopting three components: (i) a subscription to a text messaging platform such as the one used in our study, possibly paid for or subsidized by parents; (ii) weekly digitization of the attendance, grade, and behavior records in classroom books, which some schools already do and which could otherwise be completed by existing administrative staff; and (iii) a registry of cellphone numbers for students’ parents/guardians, updated at least once per year. Schools already collect contact details for parents, but contact lists would need to be digitized and shared with the digital messaging platform.

Schools in Chile have already started down the road of adopting text messaging technologies to improve communication with parents, even in the absence of a national policy on such programs. When we began this study in 2014, the market for digital information platforms serving schools was nascent. In the last several years, a number of companies have entered this market (for example, one supplier, Papinotas, offers various digital services to more than 2,000 Chilean schools). The results from our study suggest that expanding these types of services in upper-primary and middle schools would likely lead to small but meaningful improvements in grades and attendance, especially for students most at risk of repeating grades or dropping out later in life. Moreover, our positive results on spillovers within the treated group suggest that a scaled-up version of the program, in which all students are treated, would continue to yield positive learning and attendance gains.

IX. Conclusions

We present the results of a simple, effective, and low-cost intervention that uses existing data regularly collected by schools to improve the accuracy and timeliness of the information parents have about their children’s attendance, grades, and classroom behavior.

We showed that high-frequency text messages communicating this information to parents decreased prevailing information gaps between parents and schools and shifted some aspects of parent–school and parent–student engagement. The intervention, sustained over two school years, resulted in learning and attendance gains on average, with significantly larger gains for students most at risk of poor schooling outcomes later in life. At a broad level, our findings suggest that efforts to reduce grade retention and school dropout in later grades may be supported by early information interventions. We leave the analysis of these long-term impacts to future work.

Acknowledgments

The authors thank Julian Martinez-Correa, who provided excellent research assistance and multiple comments that improved the manuscript, as well as Anna Koh Lee, Santiago Perez Vincent, Dario Romero, and Dario Salcedo for excellent research assistance in early stages of the paper. They thank Bernardita Muñoz, Daniela Alvarado, and Paula Espinoza for superb support in fieldwork. They thank Thomas Dishion and Anne Mauricio for fabulous guidance on communicating with parents and the Family Check-Up approach. The authors gratefully acknowledge funding through J-PAL’s Post-Primary Education Initiative, the Inter-American Development Bank, the Spencer Foundation, and a Chilean FONIDE grant (No. 711272). IRB approval: MIT COUHES Protocol # 1308005856. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the Inter-American Development Bank, its Board of Directors, or the countries they represent. The study is registered as AEARCTR-0000458. The data used in this article are available online in the Harvard Dataverse (https://doi.org/10.7910/DVN/9DVYK4).

Footnotes

  • ↵1. For budgeting reasons, we do not have pure control classrooms; therefore, we are restricted to estimating spillovers within treated students.

  • ↵2. This result echoes Bursztyn and Coffman (2012), who show that Brazilian parents are willing to pay for receiving regular updates on their child’s absenteeism.

  • ↵3. Information provided to students themselves has been shown to matter for key schooling transitions at higher grades (for example, Dinkelman and Martínez A. 2014; Castleman and Page 2015; Busso and Hincapie 2017).

  • ↵4. These ranges are taken from our summary of the literature in Online Appendix I.

  • ↵5. This is not the case everywhere. In Malawi, where schools are more rural and with fewer resources than in our context, Dizon-Ross (2019) finds that better information increases inequality between students, as parents are better able to target resources towards the highest-ability children.

  • ↵6. Parent–student and parent–school communications have been found to be important for improving school outcomes of older students in the context of information interventions (for example, Kraft and Rogers 2015; Kraft and Dougherty 2013).

  • ↵7. Bergman and Chan (2021) scrape student information systems and feed this into a text messaging platform to facilitate an information intervention in 22 schools (covering Grades 6–12) in West Virginia. Rather than sending regular text messages to all parents, their intervention alerts parents to missed classes, missed assignments, and low grades.

  • ↵8. In much poorer contexts, where basic data collection is not practiced, other studies have found information interventions alone were ineffective at improving education outcomes (for example, Banerjee et al. 2010; Muralidharan and Sundararaman 2010) or were only effective once additional teaching inputs were provided (Angrist, Bergman, and Matsheng 2022).

  • ↵9. In a setting similar to ours, Barrera-Osorio et al. (2020) combined a one-time information intervention about student performance in Grades 4–6 with targeted advice to parents in Colombia. Their results indicate short-run gains on a combined math and reading test score that are close to the Papás al Día test score results, but at a considerably higher cost per student (US$7.50 per year).

  • ↵10. Most schools in Chile operate on a full-day schedule. Schools can distribute their mandated hours throughout the week and typically have classes from 8 a.m. to 4 p.m. four days a week, ending at 1 p.m. one day a week.

  • ↵11. Students who fail one subject can still advance to the next grade if they maintain an average grade of 4.5 for the remaining subjects; students who fail two subjects can also advance if they maintain an average grade above 5.0 in the remaining subjects. The 85 percent attendance requirement can be lifted by the school board under special circumstances.

  • ↵12. Using administrative data, we examined these same correlations in our sample prior to the start of the intervention. The correlation of average grade was 0.4 with attendance and −0.4 with grade retention. The correlation between school attendance and grade retention was −0.3. Even conditional on age and gender controls, and taking into account grade-level and school fixed effects, the correlations between lower attendance, lower grades, and a higher risk of failing the grade are large and statistically significant at the 5 percent level.

  • ↵13. Parents who did not respond to either question were also classified as misinformed. See notes on Columns 2 and 4 of Table 5 for details.

  • ↵14. We discuss how we construct this at-risk index in Section IV.

  • ↵15. There are two main types of public schools in Chile: pure public schools and voucher schools. In one municipality, we worked with local education officials to recruit public schools. In the second municipality, we recruited a voucher school. Our main sample consists of students in 63 classrooms across seven schools.

  • ↵16. Consent forms were distributed during an initial parent meeting or later sent home with children.

  • ↵17. Initially, students whose parents consented to participate in the experiment were in Grades 4–8 in the eight schools that participated in the study. The composition changed in the second year. Students in Grade 8 participated during the first year of the experiment, but these students could not be treated or followed into secondary school. In addition, one school decided not to continue during the second academic year because it chose to allocate internal resources to other school goals. Because randomization was done at the individual level, stratifying by classroom, the main analysis does not include either the school that dropped out of the program or the students who were in Grade 8 at baseline. In the Online Appendix, we show the main results when using this “full” sample as a robustness check.

  • ↵18. See Online Appendix II for more details regarding the sample and the characteristics of students whose parents consented to participate in the experiment and those whose parents did not consent.

  • ↵19. For budgeting reasons, we did not have a pure control group in which no student was treated. We discuss the implication of this in Section V.A.

  • ↵20. It is possible that teachers could have inferred which students were in the treatment group. We think this is unlikely given the many responsibilities teachers have for classroom activities and the size of classes in our school settings.

  • ↵21. This differs from Bergman and Chan (2021), who only sent text messages to alert parents of missing homework, tests, or classes.

  • ↵22. Online Appendix III explains in detail the intervention: production of messages, timeline, and delivery. It also provides a script of each type of message sent to parents.

  • ↵23. Behavior data were difficult to collect. In Chile, each classroom has a notebook in which teachers can make comments about particularly good or bad behaviors of specific students. For example, the teacher might write, “Samuel concentrated well in reading,” or “Taryn hit her friend during math class.” We developed a system for categorizing such behavior “notes” as positive or negative and followed these definitions in all classrooms.

  • ↵24. Online Appendix V provides more details and information on these data sources.

  • ↵25. Online Appendix VI describes in detail each of the outcome and control variables used in this paper. It shows the specific data sources and provides a description of how the variables were constructed.

  • ↵26. We relegate most of the results using these data to the Online Appendix.

  • ↵27. In computing the control mean and standard deviations, we use only information on the students who consented to participate in the study.

  • ↵28. Online Appendix VI.A describes how the scales were built. For both parents and students, we show the eigenvalue of each latent factor, the loading associated with each variable, and the Cronbach’s alpha for each survey wave.

  • ↵29. We asked: “It is possible that next year your daughter’s/son’s school can send you regular text messages with information about their school performance (attendance, grades, and classroom behavior) four times a month. However, there might not be enough funds to provide this service free of charge. Thinking about how valuable this service would be for you, please tell us whether you will be willing to pay $V pesos a month to receive four text messages a month, from April to December.”

  • ↵30. We use final attendance and math grades from the academic year prior to the beginning of the intervention and accumulated negative behavioral marks during the month prior to the start of the intervention.

  • ↵31. From the onset of the experiment, we set out to study differential treatment effects for students of different baseline achievement (attendance, grades, behavior). We did not, however, pre-specify the at-risk index or the heterogeneity analysis directly based on it.

  • ↵32. See Online Appendix Table 1.

  • ↵33. Online Appendix VII shows the response rates for the different samples, years, and data sources. It also describes attrition from and entry into the sample, and the characteristics of those students in terms of their treatment status.

  • ↵34. For a handful of students, baseline values are missing. In those cases, we impute the control baseline variables using the classroom-level mean. We add an indicator variable in the regression model equal to one for these observations.

  • ↵35. A classroom c is a unique combination of school, grade level, and classroom in the first year of the intervention.

  • ↵36. As mentioned in Section III, our original research design had a complementary investment (a parenting video) randomized to half of the classrooms. Within the video-treated classrooms, only parents who were already receiving text messages received the video. This implies that the treatment parameter in Equation 1 can in principle capture two effects: the treatment effect of the text messages and the treatment effect of the parenting video intervention times the probability of receiving that parenting intervention. In Online Appendix IV, we discuss the parenting intervention, the research design, challenges with implementation that meant very few parents watched the video, the results, and the implications for the interpretation of the parameters in Equation 1. We show evidence that our estimated treatment coefficient is mostly capturing the treatment effects of the text message intervention.

  • ↵37. Estimating Equation 2 without classroom fixed effects would not respect the research design and would not allow us to control for variations in class size (in our sample, classes vary from 20 to 44), consent rates across classrooms (mean consent rate is 54 percent), and possibly other classroom characteristics not observable in the data. This could affect the estimated treatment effect if the number of treated students has an additive impact.

  • ↵38. The assumption that there is a dose–response relationship between the size of the share of students treated in the same classroom and the spillover to the control group is a reasonable one. Avvisati et al. (2014) provide evidence consistent with spillovers increasing with the level of interaction between treated and nontreated students in the same classrooms in their parent–school intervention in French middle schools.

  • ↵39. We note that the number of observations varies throughout the manuscript for three reasons: (i) there are two samples (main sample and full sample), (ii) we analyze them in two formats (cross-sectional vs. panel data analyses), and (iii) observation numbers are sometimes affected by nonresponse (both survey nonresponse and item nonresponse).

  • ↵40. Panel A of Online Appendix Figure 1 shows that observable characteristics are similar between treatment and control students when the full sample is used or in the sample of respondents to the parent’s and student’s baseline surveys. Additionally, Panel B reports a similar balance table to that shown in Table 1; it includes an additional variable to indicate whether the classroom was randomized to receive a high or low share of treatment, and the interaction with Tic.

  • ↵41. The survey items used to build these scales can be found in Online Appendix Tables F.2 and F.3.

  • ↵42. To compute the parents’/students’ scales index, we added all the standardized scales with a positive connotation and subtracted the low family supervision scale. We then normalized by the number of scales and standardized using the control group’s mean and standard deviation.

  • ↵43. After sending a text message, cellphone companies mark that message as received or failed to be sent.

  • ↵44. During the first two months of the experiment, messages were sent on Fridays.

  • ↵45. Panels A and B of Online Appendix Table 2 report the treatment compliance in each year of the intervention (2014 and 2015). More messages were sent in 2015, when the intervention was implemented for a full school year, than in 2014, when the intervention was implemented during the second half of the school year. Panel C presents the compliance for the full sample.

  • ↵46. In Online Appendix III.C, we present and discuss these results. We also show that people who received the text messages (compliers) are very similar, based on a wide set of pre-treatment variables, to those who were sent text messages but did not receive them (noncompliers).

  • ↵47. See Panel D of Online Appendix Table 2.

  • ↵48. Larger treatment effects in Column 4 compared to Column 3 suggest the possibility of bunching around the threshold. We tested for a discontinuity in the attendance distribution in the year prior to the intervention following Cattaneo, Jansson, and Ma (2018). We rejected the null that the distribution is continuous (p-value = 0.03).

  • ↵49. We account for imperfect compliance with treatment by estimating local average treatment effects. Let Dic be an indicator variable equal to one for those treated students whose parents received at least one text message with information on each specific outcome (that is, compliers). We then include Dic—instead of Tic—in Equation 1 and instrument it in a first stage with the randomized treatment variable Tic. Online Appendix Table 3 shows the results. Point estimates are larger in absolute value than those presented in Table 3; they are scaled up by the inverse of the proportion of parents who actually received the text messages. These results are robust to other definitions of compliance with treatment, such as having received more than 75 percent of the messages.

  • ↵50. After the first two months of the intervention, we started to systematically send all the text messages on Mondays. For this part of the analysis, we restrict the sample to this period and keep only observations for those students whose parents were sent and actually received the messages on Monday.

  • ↵51. Online Appendix Table 4 shows the estimated coefficients used to construct this figure and p-values of tests of equal coefficients. We reject the null that all coefficients in Figure 4 are equal (p-value = 0.037) and the null that the treatment effect on Monday’s attendance is equal to that of Friday’s attendance—against the alternative that is lower—(p-value = 0.065).

  • ↵52. Online Appendix Table 5 reports the estimated coefficients used to construct this figure. The p-values associated with the null of equality of the estimated coefficients are 0.766 (Panel A), 0.751 (Panel B), and 0.555 (Panel C).

  • ↵53. The share of parents who are misinformed is larger for misbehavior than for attendance, and larger for attendance than for grades. This could be because misbehavior and attendance are not reported in students’ report cards, while grades are. In addition, each variable has a different range, giving parents more or less scope for mistakes in their assessments.

  • ↵54. We computed the magnitude of the information gap for those parents without missing data. The average gap in attendance/grades is equivalent to one-half of a standard deviation in the attendance/grades distribution.

  • ↵55. When comparing with classroom books, we allowed for a “mistake” of one absence and 0.5 points in the case of grades.

  • ↵56. We cannot reject equality of treatment effects on information gaps based on students’ reports and those based on administrative records.

  • ↵57. The pair-wise correlations between grades in math, language, natural science, and history are, in our sample, always larger than 0.6.

  • ↵58. See Section VI.A in the Online Appendix for details on how the scales were built, as well as the psychometric properties of each of them. Online Appendix Table 6 presents results for the individual items in each aggregate scale.

  • ↵59. We expected a smaller impact for our intervention because, in the United States, GPA depends in part on assignment submission (an outcome directly targeted in Bergman 2021), whereas in Chile grades are based only on performance on class exams.

  • ↵60. The ranges provided here are taken from our summary of the literature in Online Appendix I.

  • ↵61. The hourly minimum wage in Chile in 2021 was approximately 2.83 dollars. Assuming that it takes an administrative staff member about 5 seconds to enter the weekly attendance and grade data for each student, the total annual time allocated to data entry would be (5 seconds × 40 weeks =) 200 seconds per student. The annual cost of data entry per student therefore amounts to 0.16 dollars ((200/60) × (2.83/60)).
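The footnote’s cost arithmetic can be reproduced directly. This sketch uses the footnote’s own assumptions (5 seconds per weekly entry, a 40-week school year, a 2.83-dollar hourly minimum wage):

```python
# Back-of-the-envelope annual cost of data entry per student (footnote 61 assumptions).
HOURLY_WAGE_USD = 2.83     # Chilean hourly minimum wage, 2021 (approximate)
SECONDS_PER_ENTRY = 5      # assumed time to enter one student's weekly data
WEEKS_PER_YEAR = 40        # assumed length of the school year

annual_seconds = SECONDS_PER_ENTRY * WEEKS_PER_YEAR       # 200 seconds per student
annual_cost = annual_seconds / 3600 * HOURLY_WAGE_USD     # convert to hours, price at the wage

print(f"{annual_cost:.2f}")  # prints 0.16 (dollars per student per year)
```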

  • ↵62. The cost of putting the experiment into the field was higher, as we had to hire a team of research assistants to visit schools, photocopy classroom books, and digitize the data.

  • ↵63. Busso et al. (2017) also provide information for 52 evaluations designed to improve student learning in secondary schools around the world. The strategies for which they find evidence of success include: (i) monetary incentives to students, (ii) “no excuses” models, (iii) extended school day, and (iv) vouchers, subsidies, or scholarships for students. The weighted averages of the effect sizes on test scores are 0.16 SD, 0.14 SD, 0.08 SD, and 0.03 SD, respectively. Although this study does not report intervention costs for these alternative strategies, our text message intervention likely used fewer resources than any of these four programs and was therefore cheaper on a per-student basis. McEwan (2015) provides a meta-analysis of randomized experiments of school-based interventions on learning in primary schools and finds seven experiments that involve informational treatments. The mean effect size of these interventions is 0.049 (p-value = 0.240). Andrabi, Das, and Khwaja (2017) find that providing report cards to parents in Pakistan closes informational gaps and yields a 0.11 SD gain in student outcomes.

  • ↵64. This method of eliciting willingness to pay has two important shortcomings. First, the scenario is hypothetical, so parents have no incentive to reveal their true valuation of the service. Second, we use a take-it-or-leave-it offer, which yields a bound rather than an exact measure of willingness to pay.

  • Received November 2021.
  • Accepted September 2022.

This open access article is distributed under the terms of the CC-BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0) and is freely available online at: https://jhr.uwpress.org.

References

  1. Andrabi, Tahir, Jishnu Das, and Asim Ijaz Khwaja. 2017. “Report Cards: The Impact of Providing School and Child Test Scores on Educational Markets.” American Economic Review 107(6):1535–63.
  2. Angrist, Noam, Peter Bergman, and Moitshepi Matsheng. 2022. “School’s Out: Experimental Evidence on Limiting Learning Loss Using ‘Low-Tech’ in a Pandemic.” Nature Human Behaviour 6(1):941–50.
  3. Angrist, Noam, David K. Evans, Deon Filmer, Rachel Glennerster, F. Halsey Rogers, and Shwetlena Sabarwal. 2020. “How to Improve Education Outcomes Most Efficiently? A Comparison of 150 Interventions Using the New Learning-Adjusted Years of Schooling Metric.” World Bank Policy Research Working Paper 9450. Washington, DC: World Bank.
  4. Avvisati, Francesco, Marc Gurgand, Nina Guyon, and Eric Maurin. 2014. “Getting Parents Involved: A Field Experiment in Deprived Schools.” Review of Economic Studies 81(1):57–83.
  5. Banerjee, Abhijit V., Rukmini Banerji, Esther Duflo, Rachel Glennerster, and Stuti Khemani. 2010. “Pitfalls of Participatory Programs: Evidence from a Randomized Evaluation in Education in India.” American Economic Journal: Economic Policy 2(1):1–30.
  6. Barrera-Osorio, Felipe, Kathryn Gonzalez, Francisco Lagos, and David J. Deming. 2020. “Providing Performance Information in Education: An Experimental Evaluation in Colombia.” Journal of Public Economics 186(1):104185.
  7. Bassi, Marina, Matias Busso, and Juan Sebastián Muñoz. 2015. “Enrollment, Graduation, and Dropout Rates in Latin America: Is the Glass Half Empty or Half Full?” Economía 16(1):113–56.
  8. Bennett, Magdalena, and Peter Bergman. 2021. “Better Together? Social Networks in Truancy and the Targeting of Treatment.” Journal of Labor Economics 39(1):1–36.
  9. Bergman, Peter. 2021. “Parent–Child Information Frictions and Human Capital Investment: Evidence from a Field Experiment.” Journal of Political Economy 129(1):286–322.
  10. Bergman, Peter, and Eric W. Chan. 2021. “Leveraging Parents through Low-Cost Technology: The Impact of High-Frequency Information on Student Achievement.” Journal of Human Resources 56(1):125–58.
  11. Berlinski, Samuel, Matias Busso, Taryn Dinkelman, and Claudia Martínez A. 2022. “Replication Data for: Reducing Parent–School Information Gaps and Improving Education Outcomes: Evidence from High-Frequency Text Messages.” V1, UNF:6:zpPd2puX6SFQfxDiZe0n7Q==[fileUNF]. Harvard Dataverse. https://doi.org/10.7910/DVN/9DVYK4.
  12. Bettinger, Eric, Nina Cunha, Guilherme Lichand, and Ricardo Madeira. 2021. “Are the Effects of Informational Interventions Driven by Salience?” Working Paper 350. University of Zurich, Department of Economics.
  13. Bursztyn, Leonardo, and Lucas C. Coffman. 2012. “The Schooling Decision: Family Preferences, Intergenerational Conflict, and Moral Hazard in the Brazilian Favelas.” Journal of Political Economy 120(3):359–97.
  14. Busso, Matias, Taryn Dinkelman, Claudia Martínez, and Dario Romero. 2017. “The Effects of Financial Aid and Returns Information in Selective and Less Selective Schools: Experimental Evidence from Chile.” Labour Economics 45(C):79–91.
  15. Busso, Matías, and Diana Hincapie. 2017. “Skills Development: Breaking It Down.” In Learning Better: Public Policy for Skills Development, ed. Matías Busso, Julián Cristia, Diana Hincapié, Julián Messina, and Laura Ripani, 45–68. Washington, DC: Inter-American Development Bank.
  16. Castleman, Benjamin L., and Lindsay C. Page. 2015. “Summer Nudging: Can Personalized Text Messages and Peer Mentor Outreach Increase College Going among Low-Income High School Graduates?” Journal of Economic Behavior and Organization 115(1):144–60.
  17. Cattaneo, Matias D., Michael Jansson, and Xinwei Ma. 2018. “Manipulation Testing Based on Density Discontinuity.” Stata Journal 18(1):234–61.
  18. De Walque, Damien, and Christine Valente. 2018. “Incentivizing School Attendance in the Presence of Parent–Child Information Frictions.” World Bank Policy Research Working Paper 8476. Washington, DC: World Bank.
  19. Dinkelman, Taryn, and Claudia Martínez A. 2014. “Investing in Schooling in Chile: The Role of Information About Financial Aid for Higher Education.” Review of Economics and Statistics 96(2):244–57.
  20. Dizon-Ross, Rebecca. 2019. “Parents’ Beliefs about Their Children’s Academic Ability: Implications for Educational Investments.” American Economic Review 109(8):2728–65.
  21. Escueta, Maya, Andre Joshua Nickow, Philip Oreopoulos, and Vincent Quan. 2020. “Upgrading Education with Technology: Insights from Experimental Research.” Journal of Economic Literature 58(4):897–996.
  22. Gallego, Francisco A., Ofer Malamud, and Cristian Pop-Eleches. 2020. “Parental Monitoring and Children’s Internet Use: The Role of Information, Control, and Cues.” Journal of Public Economics 188(1):104208.
  23. JPAL. 2020. “Improving Learning Outcomes through Providing Information to Students and Parents.” J-PAL Policy Insights. https://doi.org/10.31485/pi.2756.2020
  24. Kraft, Matthew A., and Shaun M. Dougherty. 2013. “The Effect of Teacher–Family Communication on Student Engagement: Evidence from a Randomized Field Experiment.” Journal of Research on Educational Effectiveness 6(3):199–222.
  25. Kraft, Matthew A., and Todd Rogers. 2015. “The Underutilized Potential of Teacher-to-Parent Communication: Evidence from a Field Experiment.” Economics of Education Review 47(1):49–63.
  26. Manacorda, Marco. 2012. “The Cost of Grade Retention.” Review of Economics and Statistics 94(2):596–606.
  27. McEwan, Patrick J. 2015. “Improving Learning in Primary Schools of Developing Countries: A Meta-Analysis of Randomized Experiments.” Review of Educational Research 85(3):353–94.
  28. Mizala, Alejandra, Pilar Romaguera, and Miguel Urquiola. 2007. “Socioeconomic Status or Noise? Trade-Offs in the Generation of School Quality Information.” Journal of Development Economics 84(1):61–75.
  29. Muralidharan, Karthik, and Venkatesh Sundararaman. 2010. “The Impact of Diagnostic Feedback to Teachers on Student Learning: Experimental Evidence from India.” Economic Journal 120(546):187–203.
  30. OECD. 2022. “Secondary Graduation Rate (Indicator).” https://doi.org/10.1787/b858e05b-en
  31. Rogers, Todd, and Avi Feller. 2018. “Reducing Student Absences at Scale by Targeting Parents’ Misbeliefs.” Nature Human Behaviour 2(5):335–42.
  32. Tincani, Michela. 2018. “Heterogeneous Peer Effects in the Classroom.” Working Paper.
  33. Wedenoja, L. 2017. “The Dynamics of High School Dropout.” Unpublished.
Reducing Parent–School Information Gaps and Improving Education Outcomes
Samuel Berlinski, Matias Busso, Taryn Dinkelman, Claudia Martínez A.
Journal of Human Resources Jul 2025, 60 (4) 1284-1322; DOI: 10.3368/jhr.1121-11992R2



Keywords

  • I25
  • D8
  • N36