Journal of Human Resources
Research Article

Generosity and Prosocial Behavior in Healthcare Provision

Evidence from the Laboratory and Field

J. Michelle Brock, Andreas Lange and Kenneth L. Leonard
Journal of Human Resources, January 2016, 51 (1) 133-162; DOI: https://doi.org/10.3368/jhr.51.1.133

Abstract

Do health workers sometimes have intrinsic motivation to help their patients? We examine the correlation between the generosity of clinicians—as measured in a laboratory experiment—and the quality of care they provide (1) in their normal work environment, (2) when a peer observes them, and (3) six weeks after an encouragement visit from a peer. We find that clinicians defined as generous in the laboratory provide 8 percent better care in their normal work environment. On average, all clinicians provide 3 percent and 8 percent better care when observed by a peer and after encouragement, respectively. Importantly, generous clinicians react to peer scrutiny and encouragement in the same way as ungenerous clinicians.

I. Introduction

Healthcare workers are commonly described as being intrinsically motivated, and the literature on health care is full of references to prosocial terms such as “professionalism,” “esteem,” and “caring.” At the same time, all health systems invest significant resources in regulation and quality assurance, thereby declining to leave quality up to the caring instincts of providers. Furthermore, where regulation is weak, quality is also often low (Das and Hammer 2007, Das et al. 2008, Rowe et al. 2005). In particular, significant attention is paid to what has been identified as the “know-do gap”—the gap between what health workers know how to do and what they actually do for their patients (Leonard and Masatu 2010a, Maestad and Torsvik 2008, Maestad et al. 2010, WHO 2005). Thus, in these settings, the average health worker is capable of doing more for patients and could choose to do so. Quality is low, in part, because health workers are not sufficiently motivated to provide adequate effort. Does this mean that intrinsic motivations—the caring instincts—are not present or strong enough and that one must look primarily to extrinsic incentives to motivate quality from health workers?1 Or should policymakers refocus their efforts on finding intrinsically motivated health workers who will provide adequate effort without extrinsic incentives?

In this paper, we examine evidence on a particular type of intrinsic motivation, prosocial motivation, where an individual has an outward orientation and the welfare and / or opinion of others enters into his or her utility function. We look at prosocial motivation as coming from prosocial preferences, prosocial incentives, or both. Prosocial preferences can be thought of as a reflection of individuals’ context-independent values that can cause an individual to take costly actions that benefit (or harm) others (for example, altruism, positive or negative reciprocity). In contrast, prosocial incentives—such as being observed or appreciated by others—are features of the decision-making environment that increase the return to prosocial behavior, similar to the way that the wage is the extrinsic incentive for some actions. We argue that some healthcare providers have prosocial preferences and can be described as inherently caring or altruistic but that the prosocial incentives of the setting in which workers practice, regardless of their level of inherent prosocial preferences, are potentially more influential on effort choice. Using data drawn from a laboratory experiment and the field, we show evidence that altruistic health workers—as defined by behavior in the lab environment—provide higher quality care (exert more effort) for their patients. In addition, we show that changing the workplace environment to provide greater prosocial incentives increases effort, and therefore quality, for all types of health workers, even those who are not found to be altruistic in the laboratory.

To test the importance of health workers’ level of inherent prosocial preferences relative to the effect of prosocial incentives in their work environments, we examine the behavior of health workers who provide outpatient care (clinicians) in urban and peri-urban areas of the Arusha region of Tanzania. We look at four settings, each with different implied prosocial incentives to provide effort. First, we examine the performance of the clinicians in their normal workplace (baseline). Second, we measure their performance when there is a peer present to observe their activities (scrutiny). Third, we measure their effort after participation in a trial in which a Tanzanian doctor reads an encouraging statement and asks them to improve their performance on five specific items (encouragement). Finally, we measure clinicians’ inherent prosocial preferences, or generosity, in an economic laboratory experiment by examining their willingness to sacrifice on behalf of strangers using the dictator game.

The laboratory experiment allows us to distinguish clinicians who are generous to strangers in that setting and compare their performance in their normal workplace to clinicians who are not generous in the laboratory experiment. By comparing the quality of care (as measured by effort exerted to adhere to protocol items required by the patients’ symptoms) in the three different clinical environments (baseline, scrutiny, and encouragement), we can evaluate the response of all clinicians to the prosocial incentives implied by peer scrutiny and encouragement. And finally, we can compare the differential response of generous and ungenerous clinicians to the changes in prosocial incentives implied by scrutiny and encouragement.

We find that clinicians who are generous in the laboratory perform better at work. As such, prosocial preferences appear to be linked across different environments; measuring prosocial preferences in the lab allows the classification of generous health workers who expend more effort in treating their patients than do other health workers. In addition, we find that regardless of their prosocial preferences in the lab, on average clinicians respond positively to changes in prosocial incentives in the workplace, increasing their effort significantly both when subjected to peer scrutiny and when encouraged to provide better care. In the latter case, the improvements are large and significant even six weeks after clinicians received an encouragement visit. Notably, the performance increases under scrutiny and encouragement are similar for generous and ungenerous clinicians alike.

The results suggest that an underlying degree of prosocial motivation drives behaviors in the laboratory and the field: willingness to sacrifice one’s own gains for a stranger’s benefit implies willingness to exert costly effort on behalf of one’s patients. Such a view would lend support to the hypothesis that workers with prosocial preferences, as a type, are an important determinant of quality care. However, our research indicates that even those apparently without this source of motivation can be incentivized by scrutiny and encouragement.

In the following section, we outline the view of prosocial behavior and intrinsic motivation from the management and experimental economics literatures and present a descriptive model of behavior in the healthcare setting. Section III outlines the data and empirical methodology for examining the data. Section IV shows the results, and Section V discusses the implications and provides our conclusions.

II. Intrinsic Motivation and Prosocial Behavior

The term “intrinsic motivation” takes on different meanings in different literatures. In the psychology literature, it refers only to the individual’s enjoyment of doing the job as distinct from enjoying that others may benefit. Grant (2008) offers the example of a professor who enjoys the performance of lecturing (intrinsic) as opposed to a professor who enjoys seeing students learn (not intrinsic). Most of the literature in the behavioral economics field avoids the labels “extrinsic” and “intrinsic” and focuses on prosocial behavior, perhaps because it is easier to measure in the laboratory. The healthcare literature, in contrast, uses the term “intrinsic” as an umbrella term for both strict intrinsic and prosocial motivation. We use the term “prosocial” when we are referring to specific forms of motivation (altruism, generosity, and esteem-seeking), and we use the term “intrinsic” to refer to the broader sense of motivation as discussed in the health field even though it may not be strictly intrinsic as seen by the psychology literature.

Where prosocial behavior is discussed in the healthcare literature, there is little debate about its importance. A number of studies have shown that improved prosocial motivation results in higher quality care (Delfgaauw 2007, Kolstad 2013, Prendergast 2007, Serra et al. 2011). In particular, there is a focus on how to build a workforce characterized by prosocial preferences, with health workers willing to sacrifice their own well-being for that of the patient. Whereas some recent research suggests that inherently altruistic types are more desirable in health care (Coulter et al. 2007, Smith et al. 2013), the majority of the research currently focuses on inculcating professional ideals, not selecting altruistic individuals. A common feature in the literature is the belief that medical schools and training can create unconditionally prosocial health workers (Medical School Objectives Working Group 1999, Wear and Castellani 2000).2 These views overlap in the assertion that whether taught in school or preexisting, prosocial preferences are key to delivering consistently high quality care.

In contrast, recent evidence from experimental and behavioral economics can be interpreted to suggest that most prosocial behavior is context dependent (Levitt and List 2007), where the present (or salient) prosocial incentives define the context. Benabou and Tirole (2003) suggest multiple sources of context-dependent prosocial behavior, concentrating on happiness or “warm glow” (Andreoni 1989, 1990) derived from others’ perceptions, including concerns for social reputation and self-respect. Returns to prosocial behavior as such can be thought of as coming from internally realized benefits: Individuals may enjoy seeing the recipients receive something, they may enjoy the fact that the recipients know they gave them something, or they may enjoy being seen as having given to the recipients. Importantly, the “other” whose perception is an incentive for the prosocial behavior can be the recipient of generosity or a witness to the act of generosity. In addition, generosity may be conditional on the identity of the recipient or the individual observing the giving (Ellingsen and Johannesson 2008): individuals may like being seen as generous in the eyes of specific people. In this formulation, the presence of opportunities to earn social reputation, for example, act as prosocial incentives that drive the corresponding realization of prosocial behavior.

The importance of an audience as a prosocial incentive can most readily be seen in healthcare when the “other” is a peer rather than a patient: A peer observer triggers the desire to be seen as conforming to specific expectations. These shared expectations are most frequently referred to as professionalism, which is common in settings where dedication to group goals and values promotes service to a greater good (Akerlof and Kranton 2000, 2005; Cullen 1978; Freidson 2001). As with other forms of prosocial behavior, professionalism can be seen as a feature taught in medical school (a doctor becomes a professional) or as a feature of the work environment (a doctor’s behavior depends on the context). For example, Leonard and Masatu (2010b) describe a form of latent professionalism in which individuals understand professional norms but follow them only when they believe their fellow professionals can observe or evaluate their behavior. In this case, the environmental factor driving alignment with organizational goals is the opinion of the peers. Similarly, Kolstad (2013) demonstrates that access to information comparing one’s own performance to the performance of peers leads to significant improvements in quality for many surgeons.

In the healthcare setting, therefore, we can think of two potential sources of prosocial motivation that may affect effort in the workplace: patient-oriented motivation, where the behavior is the result of prosocial preferences, and peer-oriented motivation, where the behavior is dependent on prosocial incentives. If health workers obtain utility directly from the welfare of their patients, then they will exert more effort on their patients’ behalf regardless of the context. If health workers gain utility from being seen to follow the norms of their peer group, they will provide greater effort when their effort is visible to their peers, which may vary significantly by context.

A. A Descriptive Model of Effort with Prosocial Incentives

We introduce a descriptive model of effort provision to help clarify our empirical investigation. Clinicians provide effort (a) for many reasons, some of which may be described as prosocial. In order to illustrate the different sources of motivation, we distinguish monetary motivation (W for wealth) and two types of prosocial motivation: patient-based (M for moral) and esteem-seeking (R for reflective). Effort choices will depend on stimuli si (i ∈ {w, m, r}), which may impact the respective motivations: the monetary stimulus sw can be thought of as the wage, the moral stimulus sm can be thought of as exposure to others to whom one feels social obligation, and the reflective stimulus sr can be thought of as exposure to peers. Utility is assumed to be additively separable as in Levitt and List (2007) and Leonard and Masatu (2008):

U(a; sw, sm, sr) = Uw(a, sw) + Um(a, sm) + Ur(a, sr) (1)

The health worker will choose effort (a = a*) to maximize utility. It is standard to assume utility to be concave in the action a (∂2U / ∂a2 < 0) and that stimuli have a positive impact on marginal returns from actions (∂2Ui / ∂a ∂si > 0 for all i ∈ {w, m, r}). Given these assumptions, it follows that increasing stimuli from any source increases effort: ∂a*/ ∂si = −(∂2Ui / ∂a ∂si) / (∂2U / ∂a2) > 0.
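To make this comparative static concrete, the sketch below assumes a specific functional form of our own (a log benefit scaled by the sum of the stimuli, with a quadratic effort cost; this is an illustration, not the paper's model) and checks numerically that raising any one stimulus raises the chosen effort:

```python
import math

def utility(a, s_w, s_m, s_r):
    # Additively separable and concave in effort a, as assumed in the text;
    # the specific log-benefit / quadratic-cost form is our illustration.
    return (s_w + s_m + s_r) * math.log(1 + a) - a ** 2 / 2

def optimal_effort(s_w, s_m, s_r, steps=10_000, a_max=10.0):
    # Grid search for a* = argmax_a U(a; s); fine enough for illustration.
    return max((a_max * k / steps for k in range(steps + 1)),
               key=lambda a: utility(a, s_w, s_m, s_r))

# Raising the reflective stimulus s_r (e.g. adding peer scrutiny)
# raises the chosen effort level, consistent with da*/ds_i > 0.
low = optimal_effort(1.0, 1.0, 0.5)
high = optimal_effort(1.0, 1.0, 1.5)
print(high > low)  # True
```

With this form the first-order condition has the closed-form solution a* = (−1 + √(1 + 4S))/2 for S = sw + sm + sr, which the grid search approximates.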

In our empirical setting, we can observe effort (a*) but we cannot observe the current levels of wealth, moral stimuli, or reflective stimuli. However, we can study the changes in effort due to exogenously increased levels of reflective stimuli (∂a*/ ∂sr) as a result of increased exposure to peer scrutiny or encouragement. In addition, we use behavior in the lab experiment to define a set of clinicians who have strong incentives to provide effort for altruistic reasons—that is, clinicians who have higher responses to moral stimulus than other clinicians. Assuming that this moral stimulus transfers across different environments, we expect that these altruistic clinicians also face higher levels of moral stimulus in their normal workplace. This leads us to the following conjectures:

Conjecture 1 Clinicians with higher responses to moral stimulus as measured in a lab environment will provide higher levels of effort in the field.

Clinicians may also respond to the opportunity to be seen by others as conforming to professional standards and thereby respond to reflective stimuli. We examine the marginal impact of increased reflective stimuli by varying the level of scrutiny in our study:

Conjecture 2 The average clinician will increase his or her effort when faced with increased reflective stimuli (exposure to peer scrutiny): ∂a*/ ∂sr > 0.

It may also seem natural to expect that the different stimuli interact. That is, when one form of stimuli is high—all else being equal—the gain in effort from increasing another form of stimuli may be expected to be lower (∂2Ui / ∂si ∂s−i < 0).3 In other words, when an individual has high incentives to provide effort (for example, a high wage or high levels of social obligation), then increases in other stimuli (exposure to peers, for example) should not lead to large increases in effort. On the other hand, when a clinician faces low levels of stimuli overall, increases in any form of stimuli can lead to large increases in effort. We therefore expect:

Conjecture 3 Clinicians who provide low levels of effort relative to protocol requirements (a large performance gap), possibly because of a low response to existing moral stimuli or to wages, may exhibit greater responses to increases in reflective stimuli.

That is, ∂a*/ ∂sr is negatively correlated with performance and positively correlated with the performance gap across a sample of clinicians.

Conjecture 3 emphasizes the heterogeneous response by baseline effort and naturally leads to:

Conjecture 4 Clinicians with high response to moral stimuli as measured in the lab will respond less to increases in reflective stimuli than clinicians with low response to moral stimuli.

Conjecture 4 essentially combines Conjectures 1 and 3, hypothesizing a heterogeneous response to reflective stimuli by type. We investigate these conjectures using the data explained below.

III. Methodology

We studied 103 clinicians who practice healthcare in the Arusha region of Tanzania by collecting data on the quality of care in the course of their normal practices. Sixty-three of these clinicians also participated in a laboratory experiment, and this analysis focuses on these workers.

A. The Laboratory Experiment

In order to measure the generosity of clinicians independent of their work environment, we invited clinicians to participate in a laboratory experiment. The laboratory experiment took place in Arusha, Tanzania, in July 2010, after all data had been collected from the field. The subject pool consisted of 71 clinicians4 and 78 nonclinician subjects. The clinician subjects were recruited with a letter that we sent to the clinicians who participated in the field study. We recruited nonclinician subjects with printed advertisements distributed in major market areas in Arusha. Although fliers were distributed to a variety of people, the group of nonclinician subjects was ultimately a convenience sample. All of the nonclinician subjects who arrived to participate each day were allowed into the experiment. Clinician subjects were given a per diem of 35,000 Tsh in addition to what they earned in the experiment. Nonclinician subjects received a show-up fee of 5,000 Tsh. One U.S. dollar is equal to approximately 1,300 Tanzanian shillings.5

Clinician subjects gathered in a classroom and nonclinician subjects gathered on a lawn outside of the classroom, near enough that both groups could see each other but far enough that there was no communication or individual identification. This was done to preserve anonymity while ensuring that subjects understood the concept of being paired with another player. Subjects recorded decisions using paper and pen. We provided a hard copy of the experimental instructions to each participant and read them aloud before the experiment began. The instructions explained the basic guidelines of the experiment and how earnings were determined. Subjects were given the chance to ask clarifying questions after the instructions were read.

The experiment was a standard dictator game in which the dictator decides how to allocate 100 tokens between himself or herself and an anonymous partner. The dictator in each pair was always a clinician and the receiver was always someone drawn from the nonclinician pool.6 The receiver had no choice but to accept what was given. Each token was worth 150 Tsh, so that the clinician was choosing the allocation of 15,000 Tsh (approximately U.S. $12).
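The stakes of the game can be sketched directly; the function and the example allocation below are ours for illustration:

```python
# Dictator-game stakes as described: 100 tokens at 150 Tsh each,
# with roughly 1,300 Tsh to the U.S. dollar.
TOTAL_TOKENS = 100
TOKEN_VALUE_TSH = 150
TSH_PER_USD = 1300  # approximate exchange rate given in the text

def payoffs(tokens_given):
    """Return (dictator, receiver) earnings in Tsh for a given allocation."""
    assert 0 <= tokens_given <= TOTAL_TOKENS
    return ((TOTAL_TOKENS - tokens_given) * TOKEN_VALUE_TSH,
            tokens_given * TOKEN_VALUE_TSH)

d, r = payoffs(30)  # a dictator passing 30 of 100 tokens (illustrative)
print(d, r)               # 10500 4500
print((d + r) / TSH_PER_USD)  # total stake, about 11.5 USD
```

Whatever the split, the pair's joint earnings are fixed at 15,000 Tsh; only the division between dictator and receiver varies.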

B. The Field

1. Sample and data collection

We collected data on clinician performance for 103 clinicians and 4,512 patients in the urban and peri-urban area of Arusha, Tanzania.7 The field data collection ran from November 2008 until August 2010. Clinicians entered the study at different times, and the time between enrollment and the final data collection for each clinician was about six and a half weeks on average.

The sample includes clinicians working in public, private, and nonprofit / charitable facilities. The term “clinician” refers to primary health workers who provide outpatient care. They fill the role of “doctor”—they all have significant medical training, although the majority of them do not have full medical degrees.8

On each day of data collection, we interviewed all the patients seen in the four-hour window during which we visited the facility. The interviews with patients followed the Retrospective Consultation Review (RCR) instrument, which measures adherence to protocol. It is a slightly modified version of the instrument used by Leonard and Masatu (2006). Immediately after their consultation with a clinician, patients are asked a series of questions about their consultation based on the symptoms that they reported. The questions allow us to reconstruct the clinicians’ activities, specifically the extent to which they followed protocol. Even though the interviews took place within minutes of the consultation, patient recall is not perfect. It is, however, highly correlated with what actually takes place (Leonard and Masatu 2006). The questions used to establish protocol adherence are listed in Table 8 in the appendix. Given the existence of medically defined protocol, adherence to protocol provides a reasonable measure of clinician effort, and therefore of quality.

2. Workplace environment interventions

Every health worker was examined in his or her normal workplace (baseline) as well as under two interventions to the workplace environment designed to expose him or her to two different types of reflective stimuli: scrutiny and encouragement. The sequence of interventions followed a standard order: First, we measured protocol adherence under normal circumstances (the baseline); second, we measured protocol adherence when there was another clinician in the room observing (scrutiny); third, we measured protocol adherence immediately after this clinician left the room (post scrutiny); fourth, a clinician on the research team visited with the clinician subject and read an encouragement script (encouragement visit); and fifth, we measured protocol adherence about six weeks after this visit (post encouragement).

Scrutiny involves an immediate reflective stimulus: There is a peer present in the room. Previous work has shown an increase in quality of between five and ten percentage points in such circumstances (Leonard and Masatu 2006, Leonard et al. 2007). For the encouragement intervention, Dr. Beatus, a Tanzanian doctor and lecturer at a health research institution, visited each clinician and read the following script (numbers were added here for clarity but were not in the script):

We appreciate your participation on this research study. The work that you do as a doctor is important. Quality health care makes a difference in the lives of many people. Dedicated, hardworking doctors can help us all achieve a better life for ourselves and our families.

One important guideline for providing quality care is the national protocol for specific presenting symptoms. While following this guideline is not the only way to provide quality, we have observed that better doctors follow these guidelines more carefully. Some of the protocol items that we have noticed to be particularly important are (1) telling the patient their diagnosis, (2) explaining the diagnosis in plain language, and (3) explaining whether the patient needs to return for further treatment. In addition, it is important (4) to determine if the patient has received treatment elsewhere or taken any medication before seeing you, and (5) to check the patient’s temperature, and check their ears and / or throat when indicated by the symptom.

For this research, we look at doctor adherence to these specific protocol items.

We chose specific items because our previous work shows that the best clinicians frequently perform these activities but most clinicians do not. Mentioning these items also allows us to compare the performance on these items to performance on items not mentioned.

This intervention has multiple potential impacts. The most direct is that the script itself encourages clinicians to improve their quality of care, either because it inspires them or because it contains information they did not previously know. However, it may also involve the understanding that one is participating in research or that one’s actions are being observed or measured. By the time we measure the quality of care after the encouragement visit, a health worker is likely to have received up to four visits from the research team.9 Thus, encouragement involves explicit expectations and frequent contact, but, unlike the scrutiny visit, it never involves the immediate presence of a peer. By mentioning five items during the encouragement visit, we can examine whether the changes in effort are due to information: If clinicians are responding to new information, we should observe increases in quality only for those items for which information was provided.

C. Research Design

We use a within-subjects design and measure the changes in quality of care (from the baseline) as a result of our two interventions. The post-scrutiny visit was included to test whether clinicians return to their normal quality of care after the scrutiny treatment. The fact that clinicians do return to lower levels of effort after the scrutiny visit allows us to analyze the scrutiny and encouragement as two different types of interventions, not one cumulative intervention. In addition, by treating every clinician in the sample, we are required to use baseline performance as our control rather than comparing performance to a random selection of clinicians who received no intervention. We deliberately chose this path for two reasons. First, there is no reason to expect a secular trend in quality that could lead to significant increases in quality within six weeks.10 Thus the assumption of no change in the absence of treatment is reasonable. Second, previous work shows that merely measuring the quality of care can change quality without any other treatment, so any control group whose quality we measured would have been contaminated. Because the interventions occur in the same order for all clinicians, we cannot definitively rule out the possibility of spillovers from the scrutiny visit to encouragement, but we can rule out the possibility of spillover from the encouragement to scrutiny.

D. Empirical Specification

Our investigation concerns the overall level of effort (and therefore quality) provided by clinicians in the three settings (baseline, scrutiny, and encouragement) and how these compare for generous and ungenerous clinicians. As such, the dependent variable in our regressions is always the effort of the clinician, and the independent variables include the generous / ungenerous classification from the laboratory experiment and the environment in which effort was provided.

Measuring changes in effort such that we can reasonably infer that changes are associated with changes in actual quality requires some careful analysis. An outpatient consultation with a clinician involves a series of discrete interactions, most of which are required by protocol. The RCR instrument is designed to measure whether the clinician did the clinical tasks he or she is required to do by asking patients if the clinician did those items as soon after the consultation as possible. These items can involve greeting the patient and offering him or her a chair, asking the patient how long they have been suffering from particular symptoms, asking about additional symptoms, examining the patient, and explaining the diagnosis properly. The list of discrete items required by protocol differs somewhat according to the presenting symptoms of the patient. We have compiled lists of items required by protocol for four categories of presenting symptoms (fever, cough, diarrhea, and general) and two types of patients (older than or younger than five years). Overall, there are 74 different items (listed in Appendix A2), but only a subset will apply to any given patient. During the RCR interview, patients are only asked about items that apply to their symptoms and age category. Thus, our dependent variable is xijk, a dichotomous dependent variable indicating whether clinician j followed protocol for item k as required for patient i. This is modeled as a function of clinician fixed effects (Γj), item fixed effects (Γk), and patient characteristics (Zi). Zi includes four age categories, gender, and the order in which patients were seen by each clinician.
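To make the construction of xijk concrete, the sketch below expands a single patient visit into one row per required protocol item. The item names and the two-entry protocol table are hypothetical, standing in for the paper's 74-item lists:

```python
# Hypothetical protocol lists keyed by (symptom category, age group);
# the real instrument covers fever, cough, diarrhea, and general symptoms
# for patients over and under five years old.
PROTOCOL = {
    ("fever", "under5"): ["greet", "ask_duration", "take_temperature"],
    ("cough", "over5"): ["greet", "ask_duration", "examine_throat"],
}

def expand_visit(patient_id, clinician_id, symptom, age_group, items_done):
    # One row per protocol item required for this patient: x = 1 if the
    # clinician performed the item, 0 otherwise.
    return [
        {"patient": patient_id, "clinician": clinician_id,
         "item": item, "x": int(item in items_done)}
        for item in PROTOCOL[(symptom, age_group)]
    ]

rows = expand_visit(1, "c07", "fever", "under5", {"greet", "take_temperature"})
print([r["x"] for r in rows])  # [1, 0, 1]
```

Only the items applicable to a patient's symptom and age category generate observations, which is why the item count varies across patients.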

Each of our conjectures corresponds to an estimating equation. Equation 2 models the impact of being designated as generous in the laboratory experiment (GENj) on the quality of care provided in the baseline:

xijk = α + β GENj + Γk + Zi δ + εijk (2)

Because this is the baseline, we do not include the variables indicating changes in workplace environment and cannot include clinician fixed effects.

Equation 3 models the impact of our workplace interventions on the quality of care provided:

xijk = β1 SCRi + β2 POST SCRi + β3 ENCi + β4 ENCi · TRKk + Γj + Γk + Zi δ + εijk (3)

SCRi and ENCi indicate whether the clinician was subject to one of the work environment interventions at the time they treated patient i. ENCi · TRKk captures whether the item is one of the items mentioned in the encouragement visit.11 We also include a variable (POST SCRi) indicating patients seen in the time immediately after the scrutiny intervention to test if quality remains high, falls below normal, or returns to normal. Because the baseline environment is the omitted category, we can include clinician fixed effects in this regression.

Equation 4 examines the differential reaction of clinicians to the workplace interventions according to their performance gap (∆j). The performance gap for clinician j is the difference between what is required by protocol and what clinician j does for the patient, on average.12 ∆j thus captures how far from protocol clinician j performs, on average. For example, the terms SCR and SCRi · ∆j capture the impact (on protocol adherence) of being under scrutiny and the degree to which this impact varies with the performance gap, respectively.

xijk = Γj + Γk + β1 SCRi + β2 (SCRi · ∆j) + β3 POST SCRi + β4 (POST SCRi · ∆j) + β5 ENCi + β6 (ENCi · ∆j) + β7 (ENCi · TRKk) + Zi δ + εijk  (4)

We can also include clinician fixed effects in this regression because the performance gap is interacted with the interventions, not entered directly.
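The performance gap can be illustrated with a simple average, though note that the paper estimates it from a baseline regression with item fixed effects rather than a raw mean (see footnote 12). The data and names below are hypothetical.

```python
# Illustrative computation of the performance gap Delta_j used in Equation 4:
# one minus clinician j's average baseline adherence. Hypothetical data; the
# paper's estimate additionally adjusts for item fixed effects.

from collections import defaultdict

def performance_gaps(baseline_rows):
    """baseline_rows: dicts with 'clinician' and 'x' (0/1 protocol adherence)."""
    done, total = defaultdict(int), defaultdict(int)
    for r in baseline_rows:
        done[r["clinician"]] += r["x"]
        total[r["clinician"]] += 1
    return {j: 1.0 - done[j] / total[j] for j in total}

baseline = [{"clinician": "j1", "x": 1}, {"clinician": "j1", "x": 0},
            {"clinician": "j1", "x": 1}, {"clinician": "j1", "x": 1}]
gaps = performance_gaps(baseline)  # j1 completed 3 of 4 items -> gap of 0.25
```

The gap enters Equation 4 only through interactions such as SCRi · ∆j, which is why clinician fixed effects can still be included.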

Equation 5 examines the impact of the workplace interventions interacted with whether or not the clinician is generous. Because generosity is interacted with the interventions, we can also include clinician fixed effects.

xijk = Γj + Γk + β1 SCRi + β2 (SCRi · GENj) + β3 POST SCRi + β4 (POST SCRi · GENj) + β5 ENCi + β6 (ENCi · GENj) + β7 (ENCi · TRKk) + Zi δ + εijk  (5)

With all four regressions, it is important to consider the sources of variation in effort that are not driven by our interventions. Not all items are equally important, not all clinicians are equally qualified to perform each item, and the patients at one facility might be unobservably different from the patients at another. These factors make comparisons across clinicians difficult. We address these problems in five ways.

First, wherever possible, we include clinician fixed effects (Γj), allowing us to compare each clinician to himself or herself across situations (baseline compared to peer scrutiny, for example). This helps address the case mix and qualifications problems because these potential sources of bias do not change during the short period of our study.

Second, we include fixed effects for each specific item (Γk), essentially asking whether a clinician is more or less likely than the average clinician to provide a given protocol item. For example, a clinician who asks about the duration of a cough 80 percent of the time is providing below-average quality, whereas a clinician who asks about the history of vaccinations in infants 80 percent of the time is providing above-average quality. This helps to control for case mix by adjusting expectations for each type of patient; otherwise, a clinician who sees many infants would look worse than one who sees few infants simply because his or her average score is lower.

Third, because we observe a series of outcomes for each patient (corresponding to all of the required items), we can cluster the standard errors at the patient level or include a patient-level random effect.13 This allows us to control for the fact that some patients may be quite different from others (they may be more demanding or critically sick, for example), the distribution of these patients across clinicians may not be even, and the probability of performing one item is likely to be correlated with the probability of doing another for the same patient.
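The within-patient clustering logic can be sketched for the simplest possible estimator, the sample mean. This is a minimal illustration of the idea, not the paper's regression code; the data are hypothetical.

```python
# Minimal sketch of patient-level clustering: residuals are summed within
# each patient cluster before squaring, so correlated outcomes for the same
# patient inflate the standard error relative to the iid formula.

from math import sqrt

def clustered_se_of_mean(rows):
    """rows: dicts with 'patient' and 'x'. Cluster-robust SE of the mean of x."""
    n = len(rows)
    mean = sum(r["x"] for r in rows) / n
    cluster_sum = {}
    for r in rows:
        cluster_sum[r["patient"]] = cluster_sum.get(r["patient"], 0.0) + (r["x"] - mean)
    # variance of the mean = (1/n^2) * sum over clusters of (cluster residual sum)^2
    return sqrt(sum(s * s for s in cluster_sum.values())) / n

# Perfectly correlated items within each patient: clustered SE exceeds
# the iid SE of 0.25 for the same data.
rows = [{"patient": 1, "x": 1}, {"patient": 1, "x": 1},
        {"patient": 2, "x": 0}, {"patient": 2, "x": 0}]
se = clustered_se_of_mean(rows)  # sqrt(2)/4, about 0.354
```

The same principle applies to the regression standard errors: if performing one item predicts performing another for the same patient, treating items as independent understates the uncertainty.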

Fourth, in addition to examining the probability that a clinician would perform any individual required item, we examine the results by looking at average adherence to protocol for each patient, reducing the number of observations to the total number of patients (not potential items).

Finally, we include a variable in the vector of patient effects (Zi) indicating the order of patients on the day of the visit. In addition to tracking the illnesses or conditions of patients (which change over the day but are controlled for directly), this helps to deal with changes in case mix (the most severe cases are usually seen earlier in the day). This is particularly important for the scrutiny and post-scrutiny visits because they take place on the same day as the baseline and are always after the baseline. If quality is falling normally over the course of the day and we did not take this into account, we would underestimate the effort provided under scrutiny.

We include four specifications for each of the equations above, corresponding to the columns in each of the tables. The first specification is a logit model of whether the clinician performed each required item, with item-specific dummy variables. Because the standard errors are not corrected or adjusted, this specification always has smaller standard errors than the others. The second specification is a logit model of whether the clinician performed each required item, with item-specific dummy variables and patient random effects. The patient random effect captures the possibility that an unobservable patient characteristic might simultaneously increase (or decrease) the probability that a clinician performed all of the required items.14 The third specification is a linear regression of the dichotomous variable of whether the clinician performed each required item, with item-specific fixed effects and standard errors clustered at the patient level. The fourth specification is a linear regression of the proportion of required items performed for each patient (x̄ij). Because we examine average performance over all items, coefficients for tracked items (TRKk) are dropped. The patient-level regression also controls for the major symptoms reported (fever, cough, diarrhea, or general, by infant or noninfant); these controls are already embedded in the item-specific dummy variables in the other three regressions.
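The dependent variable of the fourth specification can be sketched as follows; the data and names are hypothetical.

```python
# Sketch of the fourth specification's dependent variable: x-bar_ij, the
# proportion of required items performed for each patient. Hypothetical data.

def adherence_by_patient(rows):
    """rows: dicts with 'patient' and 'x' (0/1). Returns patient -> proportion."""
    done, total = {}, {}
    for r in rows:
        p = r["patient"]
        done[p] = done.get(p, 0) + r["x"]
        total[p] = total.get(p, 0) + 1
    return {p: done[p] / total[p] for p in total}

rows = [{"patient": 1, "x": 1}, {"patient": 1, "x": 0},
        {"patient": 2, "x": 1}, {"patient": 2, "x": 1}]
proportions = adherence_by_patient(rows)  # {1: 0.5, 2: 1.0}
```

Collapsing to one observation per patient sidesteps the within-patient correlation problem that the clustering in the third specification addresses, at the cost of discarding item-level variation.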

An additional concern with the measurement of quality is that clinicians might realize we are on site collecting data and change their effort in reaction. In fact, during the scrutiny visit, this is precisely what we expect to happen: clinicians will react to our presence by increasing quality. Appendix A1 investigates the evidence for this behavior by looking for patterns in the quality of care during each site visit that would indicate clinicians had increased the quality of care in response to discovering our team. We find no evidence of any such patterns, suggesting either that no one realized our team was present until after we collected the data, or that clinicians were discovered but did not react. As we discuss later, the evidence suggests that clinicians realized the team had been present after we left, but this discovery did not allow them to “cheat” by temporarily increasing effort.

IV. Results

A. Laboratory Experiment

Table 1 presents a summary of giving in the dictator game. The average amount given was just over one-third of the tokens, but the modal gift was half, and 36.8 percent of the participants in the laboratory experiment gave at least half of their tokens to the stranger.

Table 1

Laboratory Experiment Results

The fact that the mode was half suggests a norm in which people simply divide their allocation evenly between themselves and their partner. Thus, we create a dichotomous variable indicating the clinicians who gave at least half and call these clinicians generous (that is, conforming to the generosity norm).15 About one-third of all health workers qualified as generous types in the laboratory, a higher percentage than is usually found with the dictator game in other populations. This result may be driven by inferable income differences between clinicians and recipients (who were recruited from the general population where the average daily wage is lower than it is among clinicians). Recall that the recipient sample was chosen specifically with the purpose of matching the context in which clinicians see patients. Although the income differential between dictator and recipient makes it difficult to directly compare our lab experiment results with the literature, it strengthens the comparability of this behavior with field data because the same gap exists in the field.
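The classification rule is simple enough to state in code. The one-half threshold comes from the text; the endowment size below is hypothetical (the footnotes suggest gifts were counted in tokens, with "half" corresponding to 50).

```python
# Sketch of the generosity classification: a clinician is coded as "generous"
# if he or she gave at least half of the tokens in the dictator game.
# The endowment size is a hypothetical value; only the threshold is from the text.

ENDOWMENT = 100  # hypothetical number of tokens

def is_generous(tokens_given, endowment=ENDOWMENT):
    """True if the dictator gave at least half of the endowment."""
    return tokens_given >= endowment / 2

gen_flags = [is_generous(t) for t in (0, 30, 50, 80)]
```

As footnote 15 notes, the results are robust to nearby thresholds but do not survive replacing the dichotomous indicator with the continuous amount given.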

At least two other studies attempt to characterize clinician altruism. Working with Tanzanian medical students, Kolstad and Lindkvist (2012) also use a dictator game to assess self-selection of health workers into the public versus private sector. Based on modified dictator games with four recipients, they show that those who prefer to work in the public sector have stronger prosocial preferences, as measured by the amount given, than those who prefer the private for-profit sector. Godager and Wiesen (2013) work with German medical students to measure their altruism—that is, the weight they give to patients’ health benefits. While their elicitation technique is quite distinct from that of the standard dictator game, they also identify substantial heterogeneities in the degree of altruism.

B. Effort in the Field

Table 2 shows the basic statistics for the 63 clinicians who were involved in both the laboratory experiment and the field study. Of these 63 clinicians observed in the baseline, 59 were observed under peer scrutiny and 51 under the encouragement intervention. Clinicians dropped out of the study for various reasons, but attrition was not correlated with quality.16 The average clinician completed 74 percent of the required items in the baseline, and the standard deviation of average clinician quality was 16 percentage points. The percentage of items completed during the scrutiny visit is the same as for the baseline, but recall that this number does not control for case mix and that, because the scrutiny visit occurs later in the same day, effort would normally have fallen by that point.

Table 2

Summary Statistics

1. Are generous clinicians different from other clinicians?

The purpose of the laboratory experiment was to document any norm of prosocial preferences among clinicians and to categorize clinicians according to this norm for analysis with the field data. We use subjects’ giving behavior in a standard dictator game to categorize clinicians as responsive to moral stimuli or not.

Table 3 (corresponding to Equation 2) examines the quality of care provided by clinicians in the baseline (and therefore does not include clinician fixed effects) with the key variable being the dichotomous classification of whether a clinician is generous in the lab experiment. The table shows that according to all four ways of examining quality, generous clinicians provide significantly higher quality than ungenerous clinicians in the baseline. Lab behavior is therefore informative of the relative performance of clinicians in the field. This confirms Conjecture 1 that clinicians with greater moral stimulus as measured in the lab will provide greater effort under normal circumstances. The impact is between seven and nine percentage points, about half a standard deviation of quality. Note that while we use the laboratory data to characterize clinician “type,” we do not interpret our regressions as causal. Rather, we assume that there is an underlying characteristic that affects behavior in the laboratory and the field in a similar way: preferences (innate or learned) drive behavior in both settings. We cannot definitively rule out other possible links that are not driven by prosocial behavior; nonetheless, this is an important external validity result for the dictator game because it shows a parallel between behavior in the lab and the field.

Table 3

Generosity and provision of effort in the baseline

2. Reactions to reflective stimuli (scrutiny and encouragement)

Table 4 (corresponding to Equation 3) examines Conjecture 2 for all of the clinicians who took part in the laboratory experiments (not differentiated by generosity). Unlike Table 3, each clinician is compared to himself or herself through the inclusion of clinician fixed effects. The average increase in quality due to scrutiny is between three and four percentage points, depending on the type of regression. The reaction to encouragement is about eight percentage points (as seen in Column 4). Columns 1–3 show the reaction to encouragement for items that were not mentioned in the encouragement (about five to six percentage points) and, differentially, for those that were mentioned in the encouragement script (an additional four to seven percentage points, for a total response of about ten percentage points). The overall reaction to encouragement represents an increase in quality of about half a standard deviation. Thus, Table 4 confirms our conjecture that the average clinician responds to both scrutiny and encouragement.

Table 4

Changes in Quality under peer scrutiny and encouragement

Note that after the scrutiny from the research team, the clinician returns toward his or her baseline level of effort. Effort is slightly higher than in the baseline, though this result is not significant across the regressions. This suggests that the response to scrutiny is short-lived (post-scrutiny is not significantly greater than zero) and that there is no need to readjust effort to “catch up” after the scrutiny visit (post-scrutiny is not less than zero). If clinicians believed our research project could have extrinsic ramifications for their practices, they would not have returned to low quality while we were still present at the facility (but not present in their consultation rooms). Further, it suggests that, by the time the encouragement occurs, clinicians have returned to baseline effort levels; the marginal impact of encouragement is therefore measured from the baseline and provides a good approximation of the absolute effect of encouragement.

3. Heterogeneous responses to scrutiny and encouragement

Table 5 (corresponding to Equation 4) examines Conjecture 3, that clinicians who face low levels of motivation under normal circumstances exhibit greater changes in effort when faced with additional scrutiny—the conjecture inherent in our descriptive model of motivation. Table 5 includes a measure of the baseline performance for each clinician transformed into a performance gap: the difference between what is required by protocol and the average proportion of items completed in the baseline. (If a clinician follows protocol for all of his or her patients, the average score would be 1.00 and the gap would be 0.00.) By interacting the gap with the treatments, we examine the degree to which the gap explains the change in performance when a clinician is subject to peer scrutiny or encouragement. A coefficient of 1 would suggest that the gap is fully closed, and a coefficient of 0 would suggest that the reaction to scrutiny is independent of the gap.

Table 5

Changes in Quality as a Function of the Baseline Quality

Confirming Conjecture 3, the coefficients (across all four regressions) suggest that the performance gap is highly correlated with the increase in effort: the gap closes by about one-quarter under scrutiny and almost one-half under encouragement. The coefficient on the performance gap is significantly different from both 0 and 1, implying that the reactions to scrutiny and encouragement differ across clinicians, with the better clinicians exhibiting a smaller reaction. The coefficients suggest that, after encouragement, a clinician with 75 percent adherence in the baseline (a 25 percent performance gap) will increase his or her effort by about 2 percentage points (0.25 × 0.30 − 0.055), whereas a clinician with only 50 percent adherence will increase effort by 9.5 percentage points.17
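The back-of-the-envelope calculation above can be checked directly, using the approximate coefficients quoted in the text (a gap slope of 0.30 and a level effect of −0.055 for encouragement).

```python
# Check of the worked example in the text: predicted encouragement response
# as a linear function of the baseline performance gap. Coefficients are the
# approximate values quoted in the text, not exact table entries.

def encouragement_response(gap, slope=0.30, level=-0.055):
    """Predicted change in adherence after encouragement for a given gap."""
    return gap * slope + level

r25 = encouragement_response(0.25)  # 75 percent baseline adherence
r50 = encouragement_response(0.50)  # 50 percent baseline adherence
# r25 is about 0.02 (2 points); r50 is about 0.095 (9.5 points)
```

Setting the gap near zero gives a negative predicted response, which is the near-perfect-adherence artifact discussed in footnote 17.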

4. Reactions to reflective stimuli for generous clinicians

Table 6 (corresponding to Equation 5) examines the impact of the two interventions—scrutiny and encouragement—and the differential response of generous and ungenerous clinicians. Testing whether the response to scrutiny or encouragement is similar for generous and ungenerous clinicians addresses Conjecture 4. We regressed quality on interactions of whether a clinician is generous in the laboratory with the timing of the two interventions. As with Table 5, Table 6 regressions include fixed effects for each clinician and therefore examine changes in quality, not the level of quality.

Table 6

Changes in Provision by type and intervention

The increase in quality due to peer scrutiny and encouragement for the average clinician is almost exactly the same as we found in Table 4. Because generous clinicians provide higher levels of quality overall, Conjecture 4 suggests that they might respond less to the additional stimuli inherent in peer scrutiny and encouragement. When we examine the marginal coefficients for generous clinicians, the small and insignificant coefficients show that generous clinicians do not differ from other clinicians in either the scrutiny effect or the encouragement effect. Note that not only are the coefficients insignificantly different from 0, they are also small, and the confidence intervals allow us to rule out the possibility that generous clinicians do not respond to reflective stimuli at all.

V. Conclusion

This paper offers two different ways of thinking about prosocial (intrinsic) motivation in the health sector by examining how prosocial preferences influence the quality of care provided by types of health workers and how prosocial incentives influence the quality of care provided by all health workers. We isolate a type of health worker who is generous to strangers in a laboratory setting to proxy for altruistic or patient-based prosociality. In addition, we measure the degree to which all health workers respond to prosocial incentives in the field, with two interventions that increase the exposure of health workers to their peers.

The changes in the quality of care observed in this investigation are large. The standard deviation of average quality provided is about 17 percentage points, implying that generous clinicians are half a standard deviation better than ungenerous clinicians. Encouragement also improves the average performance by half a standard deviation, and being observed by a peer increases adherence to protocol by about a quarter of a standard deviation. These differences are about three-quarters of the difference found between effective and ineffective organizations in a similar setting (Leonard et al. 2007) and significantly larger than the 0.14 standard deviation gain observed in the successful pay-for-performance scheme in Rwanda (Basinga et al. 2011). In a systematic review of the impact of audit and feedback interventions, Jamtvedt et al. (2003) find an average reduction in noncompliant behavior of 7 percent, whereas our improvements translate to approximately a 20 percent reduction.

A. Generosity

We find that behavior in the dictator game is significantly correlated with effort in the field. About one-third of all health workers qualify as generous types in the laboratory—that is, they conform to a generosity norm by sharing an allocation fairly between themselves and an anonymous partner. We can interpret this result as reflecting the prosocial attitudes of health workers toward patients. Importantly, those health workers who are generous and who conform to the fairness norm in the lab are better clinicians in their normal practices. The difference is large—almost half of a standard deviation in the distribution of quality. Our interpretation of this result is that both generosity in the laboratory and effort with actual patients are driven by the underlying prosocial preferences of individuals whether they are innate or learned in their medical training. The fact that clinicians who display prosocial preferences provide higher quality has been alluded to in previous studies (Delfgaauw 2007, Prendergast 2007, Serra et al. 2011). However, to our knowledge, this is one of the few studies in any field that has demonstrated a strong link between altruism in a laboratory and behavior in the field. If generosity—or prosocial preferences more generally—means better performance, this is both good and bad news for the health sector in countries with ineffective regulation. First, it suggests that some health workers will provide better care, even in difficult situations. However, given that it has not been possible to screen health workers by type, there is little opportunity to weed out those who are not intrinsically motivated. More importantly, healthcare facilities require systematic and reliable quality control, which precludes simply relying on the generosity of clinicians of heterogeneous types.

B. Responses to Peer Scrutiny and Encouragement

The good news from a policy perspective is that even clearly ungenerous clinicians respond to some types of prosocial incentives. In this case, we look at the power of peer influences. The average clinician in our sample increases the quality of care he or she provides when observed by a peer and when encouraged and studied over a long period. Our original view of peer effects was that clinicians would respond more to the presence of a peer than to the encouragement of a peer. This turned out to be incorrect. Even though the clinician doing the encouragement was never present in a consultation, clinicians worked harder six weeks after he visited them with the encouraging message.

Why should clinicians work harder just because they have been asked to do so? Table 4 shows that clinicians return to their normal levels of effort immediately after the scrutiny visit. Thus, the mere fact that they are being researched does not lead to increased levels of effort; only having someone watch them resulted in increases in effort. Encouragement had a greater and more lasting effect than scrutiny alone. Our interpretation is that encouragement worked because it included several contacts by the research team as well as the scrutiny of being part of a research project. It is the increased contacts with peers that stimulated an effect similar to being observed by someone in the room. In other words, the expectations implied by the encouragement script are only salient when the clinician feels that someone is paying enough attention to return multiple times.

By asking clinicians to work harder and by mentioning five clinical tasks, we were able to increase quality for the average clinician by at least half a standard deviation, which is greater than other, more intensive or expensive interventions have been able to achieve. Clinicians did more of the things we asked them to do, but they also did more of the things we never mentioned; there is no substitution away from unmentioned protocol items toward the ones mentioned in the encouragement visit. Clinicians were not paid more or promised any increases in pay. This is a large increase in quality from a simple and seemingly inconsequential intervention.

Whether or not this is a scalable intervention, these results highlight lost opportunities in the healthcare systems of many low-income countries. It should be natural and normal for all health workers to feel that their work is important and that they are accountable to their peers for the quality of care they provide. Tanzania has multiple supervision systems that are supposed to achieve exactly this aim. Yet, clearly, for most health workers these systems fall short of realizing the potential gains from encouragement and peer accountability.

C. Generosity and Scrutiny

As a final test, we examined the way that generous and ungenerous clinicians respond to changes in peer scrutiny. By analyzing heterogeneous responses to increases in peer stimuli, we demonstrated that even types of health workers with prosocial preferences respond to changes in prosocial incentives. This indicates that patient-based prosocial behavior and peer-based esteem-seeking exist side by side in the health field. Furthermore, the two types of motivation are not substitutes for each other: each one can increase the performance of health workers independent of the other.18 Thus, the focus on the right type of healthcare worker may offer less to policymakers than a focus on the right prosocial incentives through changes in the workplace environment.

In the debate over the role of extrinsic incentives, our results suggest an important way forward. Traditionally, those who believe prosocial motivation—often referred to as intrinsic motivation in the healthcare literature—is important have been concerned that extrinsic incentives may crowd out prosocial motivation. Thus, programs that pay facilities for improved quantity or quality of healthcare may experience decreased effort from previously intrinsically motivated health workers. However, our results suggest that even health workers who have prosocial preferences suffer when they work in an environment that ignores prosocial incentives, so there is less to crowd out. In addition, the focus on the characteristics of the work environment rather than the character of the clinician suggests a more nuanced view of extrinsic, and specifically monetary, incentives. A careful examination of incentive programs such as that in Rwanda (Basinga et al. 2011) shows a focus on monetary incentives with additional emphasis on autonomy, accountability, team-based recognition of effort, and significant exposure to external peers. All of these aspects could increase intrinsic motivation rather than decrease it. (See also Miller and Babiarz 2013 for a discussion of what we do and do not know about the way these programs can interact with all types of incentives.) Thus, although the focus on extrinsic motivation may have been born out of frustration with the lack of intrinsic motivation of healthcare workers, it is possible that these programs have had positive spillover impacts, ultimately enhancing intrinsic motivation.

Appendix A1 Do Health Workers React when They Discover the Team Has Arrived?

The data analyzed in this study were collected from patients by enumerators who had not met the clinicians they were studying. They would have no reason to falsify the answers of patients. Patients themselves could not have known what the study was about, and certainly could not have known the stage of the research. Thus, we believe the patients’ assessments of quality are an unbiased (though noisy) reflection of what they have seen. Our study measures the impact of encouragement combined with monitoring or studying clinicians. It does not matter whether clinicians increased quality because they were encouraged or because they expected the team to collect data. However, one concern is that if clinicians knew what day we were coming or knew that our team had arrived at the facility, the gains we observe could be unrepresentative of the true changes in quality. In particular, clinicians might hear (from patients or nurses) that the research team had arrived and then change their behavior in order that the patients might report improvements. If this is the case, then our data do not capture real gains.

Because the first few patients we interviewed would have consulted with the clinician before the team arrived, clinicians could not have altered the true quality of care for these patients, but subsequent patients might see better (false) quality. To investigate the possibility of false increases, we look for trends in the quality of care by the order of patients on the same day. Over the course of a normal day, the quality of care declines slightly for the average clinician, probably because of the changing severity of illnesses reported; those who are very sick tend to queue early at the health facility. Thus, in the baseline—when clinicians knew nothing of the study—quality declines slightly over the course of the day. On the other hand, we know that quality increases significantly when a peer enters the room. Thus, if our enumerators were “discovered,” we would observe an immediate increase in quality at the moment of discovery, and quality should rise with the order of patients in the post-study data collection.

Table A1 looks at the quality of care provided by all clinicians who were observed in both the baseline and the post study and measures the changes in quality of care with the order in which patients were seen. We examine a series of different windows that might capture the moment when an enumerator is discovered, from the first four patients up to the first eight patients, and also all patients seen on that day. All trends were negative, and there is no statistically significant difference between the trends in the baseline and post study. This suggests that health workers did not know or care that we had arrived and that the increases seen in the data are representative of what clinicians do on days when we are not at the facility observing them.

Table A1

Quality by order of patients, comparing baseline to post-study visits

Appendix A2 Retrospective Consultation Review

Appendix Table A2 summarizes the performance of clinicians in the sample for all items in the RCR instrument, reporting the level at baseline, the change in performance when under scrutiny, and the change in performance after the encouragement.

Table A2

Baseline Adherence by Item and Changes by Peer Scrutiny and Encouragement

Footnotes

  • J. Michelle Brock is a research economist at the European Bank for Reconstruction and Development in London, United Kingdom. Andreas Lange is a professor in the Department of Economics at the University of Hamburg in Hamburg, Germany. Kenneth L. Leonard is an associate professor in the Department of Agricultural and Resource Economics at the University of Maryland in College Park, Maryland, United States.

  • ↵1. There has been increased focus on extrinsic motivation using monetary incentives (see, for example, Basinga et al. 2011, Meessen et al. 2006).

  • ↵2. The taught ideals are explicitly altruistic: “I will follow that system of regimen which, according to my ability and judgment, I consider for the benefit of my patients” (the Hippocratic Oath: Adams 1849) and “The health of those in my care will be my first consideration” (Declaration of Geneva: World Medical Association 1995).

  • ↵3. Without additional restrictions on the utility function, this result cannot be derived from our simple model. However, as we show below, our empirical measures of effort have an upper bound—100 percent adherence to protocol—and therefore it is natural to think that increases in effort are increasingly costly at higher levels of effort, even with multiple sources of motivation.

  • ↵4. Some of the clinicians in the laboratory experiment did not participate in the field study. Also, the gap between data collection ending and the laboratory experiment varies considerably by clinician, since clinicians entered the study on a rolling basis. For some, a year may have passed between the interventions and the laboratory experiment.

  • ↵5. The imbalance in the show-up fees was never highlighted to participants but could have been inferred. It does parallel the power and income imbalance in a typical clinical encounter.

  • ↵6. We never used the terms clinician or patient in the experiment, but the clinicians knew they were in a group of clinicians.

  • ↵7. We sampled 100 percent of the healthcare facilities in the area with outpatient departments, though some facilities were eventually excluded based on convenience—they were either too difficult to reach for obtaining consent or had too small a patient volume.

  • ↵8. The four cadres of clinicians include assistant clinical officer (ACO), clinical officer (CO), assistant medical officer (AMO), and medical officer (MO). Each of these titles requires a specific degree. The medical training required for each depends on the degrees an individual already has. Typically, with no other degrees and four years of secondary school, it requires three years of training to become a CO. ACOs have less training. AMOs have on average 3.5 years of medical schooling. MOs have the equivalent of a U.S. M.D. degree. None of the MOs in our sample participated in the laboratory experiments, so they are not featured in this paper.

  • ↵9. In between the encouragement visit and the post-study visits, clinicians were randomized into four treatments in which they received gifts, prizes, and follow-up visits at different times. These treatments are ignored in the current study, and we examine only the long-run impact of having been encouraged and studied.

  • ↵10. Previous and subsequent research in this area has resulted in essentially identical measures of quality for the average clinician, ruling out a secular trend in quality.

  • ↵11. The direct effect of being a tracked item is included in item fixed effects.

  • ↵12. The adjustment is based on the coefficients Γj and Γk estimated from the regression xijk = Γj + Γk + eijk, run using observations exclusively from the baseline.
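The baseline regression described in this footnote is a two-way fixed-effects estimation (item effects Γj and clinician effects Γk). A minimal sketch of how such coefficients could be recovered is below; the function name, column names, and toy data are hypothetical illustrations, not the study's actual code or data.

```python
import numpy as np
import pandas as pd

def baseline_fixed_effects(df):
    """Estimate item (j) and clinician (k) fixed effects by OLS on
    dummy variables, x_ijk = G_j + G_k + e_ijk, using only the rows
    flagged as baseline observations."""
    base = df[df["baseline"]]
    # One item dummy is dropped so the design matrix has full column rank.
    X = pd.get_dummies(base["item"], prefix="item", drop_first=True, dtype=float)
    X = X.join(pd.get_dummies(base["clinician"], prefix="clin", dtype=float))
    coef, *_ = np.linalg.lstsq(X.to_numpy(), base["x"].to_numpy(), rcond=None)
    return dict(zip(X.columns, coef))

# Hypothetical toy data: two protocol items, two clinicians, baseline flag.
df = pd.DataFrame({
    "item":      ["a", "a", "b", "b", "a", "b"],
    "clinician": ["k1", "k2", "k1", "k2", "k1", "k2"],
    "x":         [1.0, 2.0, 1.5, 2.5, 1.0, 2.5],
    "baseline":  [True, True, True, True, False, False],
})
effects = baseline_fixed_effects(df)
```

The non-baseline rows are deliberately excluded, mirroring the footnote's restriction that the fixed effects are estimated from baseline observations only.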

  • ↵13. It is not possible to include patient-level fixed effects because each patient is seen only once in our data.

  • ↵14. We expect this effect to be uncorrelated with observable characteristics of the patient (age and gender) and therefore include these characteristics independently as dummy variables.

  • ↵15. Results are robust to alternative definitions of generous, including giving exactly half and giving within a small window around the 50/50 allocation. The trends we observe do not come through, however, if we use a continuous measure of generosity (that is, where generosity is measured by the number of tokens given in the dictator game rather than by giving above some threshold). Those who give more than half are not higher-quality clinicians than those who give exactly half, and those who give more than zero but fewer than 50 tokens are not higher-quality clinicians than those who give zero.

  • ↵16. Results do not change when we run regressions that exclude all attriting clinicians.

  • ↵17. A clinician with almost perfect baseline adherence appears to decrease effort. This result is driven by the asymmetry of measurement error in quality at the high end: it is difficult to overestimate quality when the baseline is 98 percent but easy to underestimate it.

  • ↵18. Our findings contrast somewhat with those of Brosig-Koch et al. (2013), who identify not only heterogeneous reactions to pay-for-performance schemes but also some crowding out of intrinsic motivation. Our results suggest that such crowding out does not occur when prosocial incentives change.

  • Received October 2013.
  • Accepted October 2014.

References

  1. Adams, Francis. 1849. The Genuine Works of Hippocrates. London: Sydenham Society.
  2. Akerlof, George A., and Rachel E. Kranton. 2000. “Economics and Identity.” Quarterly Journal of Economics 115(3):715–53.
  3. Akerlof, George A., and Rachel E. Kranton. 2005. “Identity and the Economics of Organizations.” Journal of Economic Perspectives 19(1):9–32.
  4. Andreoni, James. 1989. “Giving with Impure Altruism: Applications to Charity and Ricardian Equivalence.” Journal of Political Economy 97(6):1447–58.
  5. Andreoni, James. 1990. “Impure Altruism and Donations to Public Goods: A Theory of Warm-Glow Giving.” Economic Journal 100(401):464–77.
  6. Basinga, Paulin, Paul J. Gertler, Agnes Soucat, and Jennifer Sturdy. 2011. “Effect on Maternal and Child Health Services in Rwanda of Payment to Primary Health-Care Providers for Performance: An Impact Evaluation.” Lancet 377(9775):1421–28.
  7. Benabou, Roland, and Jean Tirole. 2003. “Intrinsic and Extrinsic Motivation.” Review of Economic Studies 70(3):489–520.
  8. Brosig-Koch, Jeannette, Heike Hennig-Schmidt, Nadja Kairies, and Daniel Wiesen. 2013. “How Effective Are Pay-for-Performance Incentives for Physicians? A Laboratory Experiment.” Ruhr Economic Papers 413.
  9. Coulter, Ian D., Michael Wilkes, and Claudia Der-Martirosian. 2007. “Altruism Revisited: A Comparison of Medical, Law and Business Students’ Altruistic Attitudes.” Medical Education 41(4):341–45.
  10. Cullen, John B. 1978. The Structure of Professionalism: A Quantitative Examination. New York: PBI.
  11. Das, Jishnu, and Jeffrey S. Hammer. 2007. “Money for Nothing: The Dire Straits of Medical Practice in Delhi, India.” Journal of Development Economics 83(1):1–36.
  12. Das, Jishnu, Jeffrey S. Hammer, and Kenneth L. Leonard. 2008. “The Quality of Medical Advice in Low-Income Countries.” Journal of Economic Perspectives 22(2):93–114.
  13. Delfgaauw, Josse. 2007. “Dedicated Doctors: Public and Private Provision of Health Care with Altruistic Physicians.” Tinbergen Institute Discussion Paper 2007-010/1.
  14. Ellingsen, Tore, and Magnus Johannesson. 2008. “Pride and Prejudice: The Human Side of Incentive Theory.” American Economic Review 98(3):990–1008.
  15. Freidson, Eliot. 2001. Professionalism: The Third Logic. Chicago: University of Chicago Press.
  16. Godager, Geir, and Daniel Wiesen. 2013. “Profit or Patients’ Health Benefit? Exploring the Heterogeneity in Physician Altruism.” Journal of Health Economics 32(6):1105–16.
  17. Grant, Adam M. 2008. “Does Intrinsic Motivation Fuel the Prosocial Fire? Motivational Synergy in Predicting Persistence, Performance, and Productivity.” Journal of Applied Psychology 93(1):48–58.
  18. Jamtvedt, Gro, Jane M. Young, Doris Tove Kristoffersen, Mary Ann O’Brien, and Andrew D. Oxman. 2003. “Audit and Feedback: Effects on Professional Practice and Health Care Outcomes (Review).” Cochrane Database of Systematic Reviews 2(2):CD000259.
  19. Kolstad, Jonathan T. 2013. “Information and Quality when Motivation Is Intrinsic: Evidence from Surgeon Report Cards.” American Economic Review 103(7):2875–910.
  20. Kolstad, Julie Riise, and Ida Lindkvist. 2012. “Pro-Social Preferences and Self-Selection into the Public Health Sector: Evidence from an Economic Experiment.” Health Policy and Planning 28(3):320–27.
  21. Leonard, Kenneth L., and Melkiory C. Masatu. 2006. “Outpatient Process Quality Evaluation and the Hawthorne Effect.” Social Science and Medicine 63(9):2330–40.
  22. Leonard, Kenneth L., and Melkiory C. Masatu. 2008. “Moving from the Lab to the Field: Exploring Scrutiny and Duration Effects in Lab Experiments.” Economics Letters 100(2):284–87.
  23. Leonard, Kenneth L., and Melkiory C. Masatu. 2010a. “Professionalism and the Know-Do Gap: Exploring Intrinsic Motivation among Health Workers in Tanzania.” Health Economics 19(12):1461–77.
  24. Leonard, Kenneth L., and Melkiory C. Masatu. 2010b. “Using the Hawthorne Effect to Examine the Gap Between a Doctor’s Best Possible Practice and Actual Performance.” Journal of Development Economics 93(2):226–43.
  25. Leonard, Kenneth L., Melkiory C. Masatu, and Alex Vialou. 2007. “Getting Doctors to Do Their Best: The Roles of Ability and Motivation in Health Care.” Journal of Human Resources 42(3):682–700.
  26. Levitt, Steven, and John List. 2007. “What Do Laboratory Experiments Measuring Social Preferences Reveal About the Real World?” Journal of Economic Perspectives 21(2):153–74.
  27. Maestad, Ottar, and Gaute Torsvik. 2008. “Improving the Quality of Health Care When Health Workers Are in Short Supply.” CMI Working Paper 2008:12.
  28. Maestad, Ottar, Gaute Torsvik, and Arild Aakvik. 2010. “Overworked? On the Relationship Between Workload and Health Worker Performance.” Journal of Health Economics 29(5):686–98.
  29. Medical School Objectives Working Group. 1999. “Learning Objectives for Medical Student Education-Guidelines for Medical Schools: Report I of the Medical School Objectives Project.” Academic Medicine 74(1):13–18.
  30. Meessen, Bruno, Laurent Musango, Jean-Pierre I. Kashala, and Jackie Lemlin. 2006. “Reviewing Institutions of Rural Health Centres: The Performance Initiative in Butare, Rwanda.” Tropical Medicine and International Health 11(8):1303–17.
  31. Miller, Grant, and Kimberly Singer Babiarz. 2013. “Pay-for-Performance Incentives in Low- and Middle-Income Country Health Programs.” NBER Working Paper 18932.
  32. Prendergast, Canice. 2007. “The Motivation and Bias of Bureaucrats.” American Economic Review 97(1):180–96.
  33. Rowe, Alexander K., Don de Savigny, Claudia F. Lanata, and Cesar G. Victora. 2005. “How Can We Achieve and Maintain High-Quality Performance of Health Workers in Low-Resource Settings?” Lancet 366:1026–35.
  34. Serra, Danila, Pieter Serneels, and Abigail Barr. 2011. “Intrinsic Motivations and the Nonprofit Health Sector.” Personality and Individual Differences 51(3):309–14.
  35. Smith, Richard, Mylene Lagarde, Duane Blaauw, Catherine Goodman, Mike English, Kethi Mullei, Nonglak Pagaiya, Viroj Tangcharoensathien, Ermin Erasmus, and Kara Hanson. 2013. “Appealing to Altruism: An Alternative Strategy to Address the Health Workforce Crisis in Developing Countries?” Journal of Public Health 35(1):164–70.
  36. Wear, Delese, and Brian Castellani. 2000. “The Development of Professionalism: Curriculum Matters.” Academic Medicine 75(6):602–11.
  37. World Health Organization. 2005. “Bridging the ‘Know-Do’ Gap: Meeting on Knowledge Translation in Global Health.” Technical Report WHO/EIP/KMS/2006.2.
  38. World Medical Association. 1995. “Declaration of Geneva.” Reprinted in Encyclopedia of Bioethics, rev. ed., ed. Warren Thomas Reich et al., 2646. Macmillan.