## Abstract

We provide evidence for the effectiveness of conferences in promoting academic impact by exploiting the cancellation—due to Hurricane Isaac—of the 2012 American Political Science Association Annual Meeting. We assembled a data set of 29,142 papers and quantified conference effects using difference-in-differences regressions. Within four years of being presented at the conference, a paper’s likelihood of becoming cited increases by five percentage points. We decompose the effects by authorship and provide an account of the underlying mechanisms. Overall, our findings point to the role of short-term face-to-face interactions in the formation and dissemination of scientific knowledge.

## I. Introduction

Modern societies commit considerable resources to academic research, and of these resources, academics generally invest a significant proportion in attending (and organizing) conferences and similar gatherings.^{1} But is this proportion being well spent? Although conferences feature prominently in the dissemination strategies for most academic projects, it is striking that there is little existing scientific evidence for, or direct measurement of, the effectiveness of such meetings in promoting the impact of academic work.

A main reason for this deficiency lies in a hard-to-escape identification problem. In general, one does not have a compelling counterfactual for the papers presented in any given conference. An ideal test of efficacy would entail deliberate randomization of paper selection for a scientific meeting.^{2} As an alternative to such an intervention, in this paper, we exploit a natural experiment: the last-minute cancellation, due to an act of nature (Hurricane Isaac), of the 2012 American Political Science Association (APSA) Annual Meeting.

The APSA meeting gathers close to 3,000 presenters every year, from more than 700 institutions. By the time of its cancellation in 2012, the conference program had been fully arranged and there was therefore a unique opportunity to identify conference effects. We test whether the cancellation lessened the academic impact of the 2012 APSA papers.

We assembled a new data set comprising 29,142 conference papers scheduled to be presented between 2009 and 2012, and we matched these to outcomes collected over the next four years: downloads from the Social Science Research Network and citations reported in Google Scholar. To quantify conference effects, we adopt a difference-in-differences approach. We examine how outcome patterns change in 2012 (first difference) in the APSA meeting series versus in a comparator meeting series (second difference): a similarly large and significant conference in the same academic field (the Midwest Political Science Association Annual Meeting) that was never cancelled.

We detect statistically significant conference effects in our indicators of visibility. Papers that were to be included in the 2012 APSA cancelled meeting became less likely to be cited—by about three percentage points within two years and by about five percentage points within four years. These estimates imply that the experience of an occurring conference increases the likelihood of an article becoming cited, over either time horizon, by about 40 percent. We present several econometric specifications and robustness checks to support the validity of our identification strategy and ensure that we are not capturing other factors, such as unobservable heterogeneity related to papers’ prospects. Notably, the findings survive in regressions that control for author fixed effects.

We consider two different mechanisms that could, in principle, be operating. The conference presentation directly advertises a paper to the session audience, but separately the authors may (through the processes of making a presentation and of reflecting on feedback received) become encouraged and enabled to further advance their work. We try to distinguish between these—“advertisement” and “maturation”—channels mainly by looking at whether citations gained (due to the conference) are more likely to come from participants in the conference (and indeed, participants in the same conference session) than from other academics in the population. We also ask: Who benefits from presenting in conferences? In other words, does the gain mainly accrue to already-established academics or to lesser-known and newcomer authors? One supposition might be that conferences are particularly valuable for less-established authors, for whom the opportunity to gain feedback and to advertise their work is needed most. A countervailing supposition might be that experienced scholars, perhaps with an existing reputation, may benefit by attracting larger audiences within the conference or by being able to utilize feedback more productively.^{3}

The sharpest evidence of a conference impact is found for articles authored by academics with low to intermediate experience and profile. For these papers, the benefit seems to arise through “maturation.” However, for papers with more established authors, we find indications of an “advertisement” gain of citations from academics participating in the same conference session. In general, our analysis suggests that social interactions during conferences generate positive impacts: for some authors, an improvement or progression of their working paper, for others, more directly ensuring their paper becomes known.

Our findings give scientific corroboration to the common perception among research funders and institutions that conferences play a significant role in disseminating and improving academic work. These results are consistent with correlations found in previous empirical work (Winnik et al. 2012; Castaldi et al. 2015; Chai and Freeman 2017), but, to the best of our knowledge, this study is the first to use quasi-experimental evidence to estimate the benefits of conferences and in this sense is wholly novel within the existing literature.^{4} More broadly, we contribute to a growing body of work that investigates the impacts of face-to-face interactions and the determinants of knowledge flow.

The remainder of the paper proceeds as follows. In Section II, we discuss the related literature and the channels underlying conference effects on academic impact. In Section III, we describe the data and methodology. We present the results in Section IV, and Section V concludes.

## II. Conferences and Academic Impact

The potential roles of conferences in scientific production are manifold, and within this study we focus only on one specific effect: the effect of the conference in promoting the visibility of the presented papers, manifesting in increased downloads and citations.

There are two clear mechanisms through which such an effect could arise. The first, more direct, mechanism may be termed “advertisement.” The presentation of a paper within the conference may lead to academics hearing about the paper who would not otherwise have done so, or to the paper becoming more salient even to the scholars who would in any case have known of its existence. In fact—due to the cancellation—the APSA sent out hard copies of the 2012 meeting program to all participants so that there remained some opportunity for academics to discover each other’s work, but it was the opportunity to learn about this work *in person* that was missed. The second, less direct, mechanism may be termed “maturation.” An academic paper may be improved or it may be progressed to more visible forms (posted in working paper series, etc.) as a consequence of the conference presentation. This could be because the processes of preparing and delivering a presentation are in themselves conducive to an academic refining the work. Again, in this study we may not be picking up the full effect because academics would have in any case prepared for the conference, the cancellation being just two days before the event. Maturation may also occur because an academic receives useful ideas, advice, and encouragement from other participants (notably the chair, discussant, other presenters, and the audience within the conference session), and the cancellation would certainly have attenuated these benefits.

The maturation and advertisement mechanisms relate, respectively, to significant recent literatures on *the formation* and *diffusion* of scientific knowledge. However, these literatures mainly consider the importance of *long-term* collocation and opportunities for face-to-face interaction.

The maturation mechanism relates specifically to established peer effects in the formation of knowledge, as explored, for example, in Waldinger (2010), Azoulay et al. (2010), Borjas and Doran (2015), and Borjas et al. (2018). In general, this literature reports positive spillovers from very productive academics to closely related peers, such as collaborators, students, and advisors.^{5}

The advertisement mechanism relates to work that seeks to understand information flows. Some existing literature—McCabe and Snyder (2015), Gargouri et al. (2010), Evans and Reimer (2009)—has explored the dissemination benefits of modern communication technologies (open access and online publication). However, another strand of the literature suggests a role for face-to-face interactions in transmission of knowledge. Orazbayev (2017) finds a negative relationship between stricter immigration policies and bilateral knowledge flow measured by academic citations. Jaffe et al. (1993), Belenzon and Schankerman (2013), and Agrawal et al. (2017) are among many significant papers that have found geographical proximity, state-collocation, and the existence of good transport links to be strong determinants of citations to patents. The seminal work of Jaffe et al. (1993) demonstrates that knowledge spillovers are closely constrained by location. Belenzon and Schankerman (2013) show that citations of university patents and publications decline sharply with distance up to 150 miles—arguably, a commuting distance over which personal interactions are more likely to occur—but are constant after that. In related literature, using evidence from natural experiments, Catalini et al. (2016) and Catalini (2018) find that low-cost air-travel links and microgeography (within-campus location), respectively, are significant determinants of collaboration. They demonstrate that face-to-face interactions are important for creating and maintaining academic partnerships.

Conferences and workshops represent opportunities for a very short-term in-person interaction, which on first consideration may seem very different, in character and in potential for effect, from the long-term opportunities mainly considered in the literature above. However, there are already hints, in existing work, that short-term face-to-face encounters may also be significant. Blau et al. (2010) showed effects from a mentoring workshop on participants’ subsequent publications and research grant applications. Boudreau et al. (2017) showed that a (within institution) 90-minute brainstorm session could substantially increase the likelihood of collaboration between participants. In Campos et al. (2018), we use the same data and setting as this current paper to estimate conference effects on authors’ future work. We *do not* find that, after the 2012 APSA cancellation, participants produced fewer quality-adjusted subsequent papers (solo or coauthored), but we do detect effects on academic collaborations. The cancellation led to a 16 percent decrease in the likelihood of individuals subsequently coauthoring a paper with another conference participant and to a relative subsequent clustering—a tendency for future new collaborations to form within existing cliques—within the coauthorship network.

## III. Data and Methodology

### A. Background: The APSA and MPSA Meetings

In investigating the effect of conferences, our analysis focuses on a specific event: the annual meeting organized by the American Political Science Association (APSA). This meeting occurs in the last week of August or the first week of September (always on the American Labor Day weekend) and comprises four days of presentations of panels, posters, workshops, evening sessions, and roundtables.

The 2012 APSA meeting was due to take place in New Orleans and was scheduled to begin August 30. However, it was cancelled with less than 48 hours’ notice due to the approach of Hurricane Isaac. By the time of this cancellation the conference program was complete and publicly available, providing a group of conference papers that did not have the conference experience. Using a difference-in-differences approach, we investigate whether the 2012 APSA papers have reduced academic visibility because of the cancellation.

We examine papers’ outcomes across eight conferences. We compare 2012 APSA papers with papers that were scheduled to be presented at the APSA meetings that did take place in the previous years, 2009–2011. To circumvent timing effects and any shocks particular to the cohort of 2012 papers, as a control for the APSA papers (the treatment group), we use papers accepted at a comparator conference: the Midwest Political Science Association (MPSA) Annual Meeting.^{6}

The APSA and the MPSA are professional associations of political science scholars in the United States. Both associations publish leading journals, *The American Political Science Review* and *The American Journal of Political Science*, respectively. Their annual meetings are the largest conferences in the field and are similar in profile and format, although the MPSA meeting has a larger number of papers presented than the APSA meeting: 4,200 versus 3,000 papers, on average. In Table A1 in the Online Appendix, we describe the top 30 and top ten most populated themes in terms of papers for the two meeting series. The themes concentrating the most papers are closely similar across the two series.

### B. Data Sources and Descriptive Statistics

#### 1. Conference papers

We assembled a data set of papers presented in the APSA and MPSA Meetings from 2009 to 2012 and corresponding outcomes. We focus on the performance of papers presented in panel sessions (which concentrate most of the participants). In both meetings, panel sessions are one hour and forty-five minutes long and usually have four papers presented, one chair, and one or two discussants.

We collected the titles of all 12,070 presented APSA papers. For the MPSA, we have two groups of papers. The first and main group is a random sample of 20 percent of all papers presented in the MPSA meetings from 2009 to 2012, a total of 3,074 papers, for which we searched for all outcomes. The second includes the entire list in the MPSA program, or 17,072 papers. We obtained this list later on and therefore collected only the later outcomes for it. For clarity, throughout this discussion we refer to the first sample—comprising all APSA papers and 20 percent of MPSA papers—as the “main paper sample (with 20 percent of the MPSA papers)” and the second sample—comprising all APSA and all MPSA papers—as the “full paper sample (with all of the MPSA papers).” Our data sets—derived from the conferences’ online programs—include, for each paper, the title, authorship, and each author’s affiliation. They also include the session within which the paper was due to be presented and information on the chair and discussant for each session.

#### 2. Participants’ characteristics

We gathered data on conference participants from three sources: the Web of Science (WoS), the Social Science Research Network (SSRN), and the conference programs.^{7} From the WoS, we determined conference participants’ prior productivity, observed in a five-year window prior to the conference, including the number (within the relevant window) of each author’s publications, citations, and publications weighted by journal impact factor. From the SSRN, we determined whether the participant had posted a working paper in the SSRN before.^{8} We linked the SSRN and WoS data to conference participants (that is, a combination of authors’ first and last name and conference edition) using individuals’ first and last names.^{9} Note that as these characteristics are conference year–dependent, they convey time-varying individual characteristics.

From the conference programs, we recovered each conference participant’s affiliation, and we associated an affiliation ranking to each author. These were taken from Hix (2004). We aggregated authors’ characteristics to the paper level to use as controls in the regressions.

#### 3. Descriptives and the matched sample

Table 1 presents averages for all conference papers and separately for papers in the APSA and MPSA meetings. Overall, 70.9 percent of the papers are solo-authored, 51.7 percent are written by academics affiliated with a top-100 institution, and 11.8 percent by academics from an institution within the top ten. Less than half of the papers are authored by recently published academics (43.7 percent), and only 16.2 percent of papers are authored by an academic with a working paper previously posted in SSRN.

There are some differences between the APSA and MPSA papers. On average, APSA papers are more likely than MPSA papers to be authored by academics with a prior publication (53.5 percent versus 36.8 percent) and are slightly more likely to have been authored by an academic from a highly ranked institution. Similar differences are observed also in authors’ numbers of publications adjusted by quality and the likelihood of having a previous paper posted in SSRN. Except for the number of authors and proportion of solo-authored papers, these differences are all statistically significant.

The difference-in-differences approach that we are using controls for systematic differences across conferences, such as different standards for paper acceptance. The key identification assumption is that there are common pre-trends in the outcome variable for APSA and MPSA papers and that, had the 2012 APSA conference taken place, outcome differences between the 2012 papers and the 2009–2011 papers would have evolved in a parallel manner for papers in both conferences. This would be violated if the APSA papers became weaker in 2012, while the MPSA papers did not (or, if the MPSA papers became stronger). It is worth noting that, since the MPSA conference takes place five months before the APSA conference, there is no possibility that cancellation of the 2012 APSA meeting in itself affected in any way the profile of papers at the 2012 MPSA meeting.^{10}

In Figure 1, we plot the papers’ characteristics described in Table 1, which are predictive of outcomes. Average characteristics seem to have changed in the same manner over the years, providing some supportive evidence for the suitability of MPSA papers as a control group in the difference-in-differences analysis.

As a robustness check, we also conduct analyses for a more homogeneous set of papers across the APSA and MPSA Annual Meetings. Using a nonparametric coarsened exact matching (CEM) approach (Iacus, King, and Porro 2011, 2012), we selected MPSA (control) papers with the same conference year and covariates described in Table 1 as the APSA (treatment) papers.^{11} The resulting matched sample is described in Table A2 in the Online Appendix and it accounts for 73.8 percent of all conference papers.
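Our CEM implementation follows Iacus, King, and Porro; as an illustration of the general idea, the sketch below coarsens covariates into bins and then matches exactly on the binned strata, keeping only strata that contain both APSA (treated) and MPSA (control) papers from the same conference year. The covariates and field names here are hypothetical simplifications, not our actual matching variables.

```python
from collections import defaultdict

def cem_match(papers, coarsen):
    """Coarsened exact matching sketch: a paper is retained only if its
    stratum (conference year + coarsened covariates) contains both
    treated (APSA) and control (MPSA) papers."""
    strata = defaultdict(lambda: {"treated": [], "control": []})
    for p in papers:
        key = (p["year"],) + tuple(coarsen(p))
        group = "treated" if p["series"] == "APSA" else "control"
        strata[key][group].append(p)
    matched = []
    for s in strata.values():
        if s["treated"] and s["control"]:  # keep common support only
            matched.extend(s["treated"] + s["control"])
    return matched

# Hypothetical coarsening: cap author count at 3, keep a binary indicator.
def coarsen(p):
    return (min(p["n_authors"], 3), p["has_prior_pub"])

papers = [
    {"series": "APSA", "year": 2012, "n_authors": 1, "has_prior_pub": 1},
    {"series": "MPSA", "year": 2012, "n_authors": 1, "has_prior_pub": 1},
    {"series": "APSA", "year": 2012, "n_authors": 4, "has_prior_pub": 0},
]
print(len(cem_match(papers, coarsen)))  # → 2: the unmatched 4-author paper is dropped
```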

#### 4. Outcomes

We collected conference papers’ outcomes from SSRN and Google Scholar. As the MPSA meeting precedes the APSA meeting by five months, we conduct our analysis using outcomes collected five months earlier for MPSA papers than for APSA papers.^{12} From Google Scholar, we collected citation counts recorded 24 months and 48 months after the 2012 MPSA and APSA conferences (in April and September, 2014 and 2016, respectively), for the main paper sample (with 20 percent of the MPSA papers).

There are significant challenges associated with tracking unpublished papers. The titles of prepublished papers often change over time, and indeed authors’ projects can develop, evolve, divide, or combine in ways that mean one cannot objectively say whether a specific working paper is the same paper that was presented at a conference or not. In order to increase our chances of finding conference papers, our main search was based on authorship and an abbreviated form of each paper’s title. Our initial search (in April and September 2014, two years after the 2012 meetings) recorded information from the first three Google Scholar hits. (In our auditing, we found that if a conference paper could be found on Google Scholar, then in more than 90 percent of the cases it appeared in the first three hits.) We developed an algorithm (explained in the Online Appendix) to verify title similarity between the papers discovered by the search and the conference paper. In constructing the citation outcome, we retained only the highest hit (that is, the first among the three Google Scholar articles) that (i) was verified by the algorithm as a title match and (ii) had exactly the same authorship as the conference paper. If none of the first three Google Scholar hits were thereby retained, we considered the paper as “not found on Google Scholar” and as having zero Google Scholar citations. To check the accuracy of our sample, two research assistants conducted manual checks on 900 randomly chosen papers (a sample approximating 5 percent of our full data set). From this sample, 96.6 percent of the articles identified on Google Scholar were considered correct.
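The title-verification algorithm itself is detailed in the Online Appendix; purely as an illustration of the kind of check involved, the sketch below scores title similarity by word-token overlap (Jaccard similarity). The 0.6 threshold is an arbitrary choice for this example, not a parameter of our actual procedure.

```python
import re

def normalize(title):
    # Lowercase, strip punctuation, and split into word tokens.
    return set(re.findall(r"[a-z0-9]+", title.lower()))

def titles_match(conference_title, hit_title, threshold=0.6):
    """Accept a search hit as the same paper when the Jaccard similarity
    of the two titles' token sets meets the threshold."""
    a, b = normalize(conference_title), normalize(hit_title)
    if not a or not b:
        return False
    return len(a & b) / len(a | b) >= threshold

print(titles_match("Voter Turnout and Rainfall", "Voter turnout and rainfall."))  # True
print(titles_match("Voter Turnout and Rainfall", "Party Systems in Europe"))      # False
```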

In the later Google Scholar search (in April and September 2016, four years after the 2012 meetings) we expanded the collection, gathering information on the first ten hits in Google Scholar.^{13} For the citation outcome we again used the highest of these hits that was also (by the same criteria as before) both a title match and an authorship match. In a second step, we also collected information on the ten first papers that cited the selected Google Scholar hit, by accessing the “Cited by” link in Google Scholar. In Figure A2 in the Online Appendix, we provide examples of these data. After excluding self-citations, we use these data to identify whether the conference paper was eventually cited by academics not in the conference, academics in the conference, and academics in the same conference session.

From SSRN, we collected counts for articles’ downloads. The SSRN downloads outcome we use is measured by the number of times a paper has been delivered by the SSRN to an interested party either electronically or as a purchased bound hard copy. At the working paper stage, this is the most-used indicator for visibility and (though SSRN also records papers’ views and citations) is the primary measure used in SSRN’s ranking of authors and papers. We initially collected these counts 15 months after the 2012 conferences (in September 2013 for MPSA papers and in January 2014 for APSA papers) and then subsequently at 12-month intervals thereafter, in each case for the main paper sample (with 20 percent of the MPSA papers). For convenience, we shall refer to these observations as “one year,” “two years,” and “three years” after the 2012 conferences. This search was based on authorship and an abbreviated form of each paper’s title. We found relatively few SSRN entries for the MPSA papers: only 103 across the four years (2009–2012).

We then conducted a later search (in September 2015 and January 2016), using the full conference paper sample (with 100 percent of MPSA papers). This search (for which we used a different web-scraping service) was based on authorship and each paper’s full title. Because these search criteria were more restrictive, we found fewer APSA papers in SSRN (2,351 as opposed to 2,892), but we nevertheless achieved our goal of increasing the size of the MPSA control group: this time identifying 445 MPSA papers. As the size of the control group is more satisfactory, we use the outcomes from this later search in our main results. In Table A3 in the Online Appendix we provide details about the differences across SSRN search samples. In Table A4 in the Online Appendix, for comparison we report the estimated conference impacts based on the earlier (“one year,” “two years,” and “three years”) searches.

Table 2 presents summary statistics for all papers’ outcomes considered in the main regressions. Panel A reports the summary statistics for SSRN outcomes observed three years after the 2012 meetings. Ten percent of conference papers are found to be posted in SSRN, and among these the average number of downloads is 95.2. When considering all papers (even those not posted in SSRN, which consequently have zero downloads), the average number of downloads is 9.14.

As shown in Panel B, two years after the 2012 meetings, 27 percent of papers are found in Google Scholar. Citations are highly skewed, with 98 percent of papers having fewer than ten citations. We therefore examine the likelihoods of a conference paper receiving at least one citation, at least two citations, at least five citations, and at least ten citations. Two years after the 2012 meetings, these thresholds are met, respectively, by 11, 8, 4.3, and 2.4 percent of papers. These proportions grow with time to 17, 12.9, 8.3, and 5.7 percent four years after the 2012 meetings.

In Figures 2–4 we provide some visual evidence for the impact of the 2012 APSA cancellation, by decomposing average outcomes by the eight conferences. We focus on the number of accumulated downloads, the percentage of papers that received at least one citation (two and four years after), and the percentages of papers found online. In the Online Appendix, Figures A4–A5, we provide figures for all remaining outcomes. There is a visible drop in outcomes for 2012 APSA papers that is not mirrored for 2012 MPSA papers, suggestive of conference effects. We examine this relationship in a more controlled way, as explained next.

### C. Regression Specifications

We first estimate the following ordinary least squares (OLS) equation, Equation 1, using as the unit of observation the paper described in the conference program. This is our baseline specification:

$$y_{ist} = \beta_0 + \beta_1\,\mathbb{1}[s = APSA] \times \mathbb{1}[t = 2012] + \beta_2\,\mathbb{1}[s = APSA] + \sum_{T} \beta_T\,\mathbb{1}[t = T] + \pi_t + \mathbf{X}_{ist}\boldsymbol{\delta} + \mathbf{Aff}_{ist}\boldsymbol{\theta} + v_{ist} \tag{1}$$

where $y_{ist}$ is the outcome of a conference paper $i$ due to be presented in year $t \in \{2009, 2010, 2011, 2012\}$ of conference series $s \in \{APSA, MPSA\}$. The term $\mathbb{1}[s = APSA]$ is a conference series dummy (set to 1 if $s = APSA$, 0 otherwise), $\mathbb{1}[t = T]$ is a conference year dummy, $\pi_t$ is an APSA-specific year-trend variable (that is, linear in $t$, included to control for any differential time trends between the APSA and MPSA meetings), and $v_{ist}$ is a random term. The vectors of covariates $\mathbf{X}_{ist}$ and $\mathbf{Aff}_{ist}$, respectively, include paper characteristics—the number of authors in the paper, the accumulated number over all paper authors of publications weighted by journal impact factor, and an indicator for whether any author had a previous paper posted in SSRN—and affiliation dummies (using the highest-ranked institution among the paper authors’ affiliations). The conference impact is revealed by the coefficient $\beta_1$. We report Huber–White robust standard errors. (It is worth noting that the results neither weaken nor lose statistical significance when standard errors are clustered at the author level.)
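Stripped of the year dummies, the APSA-specific trend, and the covariates, the difference-in-differences logic of Equation 1 reduces to comparing the 2012 change in mean outcomes across the two meeting series. The minimal sketch below illustrates this with hypothetical toy data (the full specification is estimated by regression, not by this simple four-mean comparison):

```python
def did_estimate(rows):
    """Simple 2x2 difference-in-differences on paper outcomes:
    (APSA change in 2012 vs. 2009-2011) minus (MPSA change)."""
    def mean(series, in_2012):
        vals = [r["y"] for r in rows
                if r["series"] == series and (r["year"] == 2012) == in_2012]
        return sum(vals) / len(vals)
    apsa_change = mean("APSA", True) - mean("APSA", False)
    mpsa_change = mean("MPSA", True) - mean("MPSA", False)
    return apsa_change - mpsa_change

# Hypothetical toy data: y is, say, an at-least-one-citation dummy's mean.
rows = [
    {"series": "APSA", "year": 2011, "y": 0.10},
    {"series": "APSA", "year": 2012, "y": 0.05},
    {"series": "MPSA", "year": 2011, "y": 0.10},
    {"series": "MPSA", "year": 2012, "y": 0.10},
]
print(round(did_estimate(rows), 2))  # → -0.05: the estimated cancellation effect
```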

To control for author time-invariant unobservable heterogeneity, we also analyze the data at the paper–author level^{14} and estimate Equation 2 with individual fixed effects:

$$y_{aist} = \gamma_0 + \gamma_1\,\mathbb{1}[s = APSA] \times \mathbb{1}[t = 2012] + \gamma_2\,\mathbb{1}[s = APSA] + \sum_{T} \gamma_T\,\mathbb{1}[t = T] + \pi_t + \mathbf{X}_{ist}\boldsymbol{\delta} + \varphi_a + v_{aist} \tag{2}$$

where $y_{aist}$ represents the outcome of a paper $i$ (due to be presented in year $t$ of conference series $s$), as associated with one of its authors, $a$. The terms $\varphi_a$ are author-specific fixed effects. The effects are identified because authors frequently have papers presented in multiple meetings.^{15} The regression identifies, in coefficient $\gamma_1$, the within-author gap in papers’ outcomes across the APSA and MPSA meetings in 2012 compared to previous cohorts.

It is also the case that some participants send the same paper to both the APSA and MPSA meetings (6.8 percent of papers). This might lead to an underestimate of the conference effects as the outcome sometimes also duplicates across conferences. We also provide estimated impacts for all outcomes, excluding these papers.

## IV. Results

We present several tests for the effects of conferences on papers’ academic visibility. We examine the conference effect on downloads and consider the effect on likelihoods of accumulating citations. We then test for heterogeneous effects by session and authorship characteristics and provide evidence for the underlying mechanisms.

### A. The Effect of Conferences on Papers’ Visibility

We begin by examining, in Table 3, conference effects on papers’ SSRN downloads. To avoid undue influence of a small number of papers with very large numbers of downloads, we exclude papers that accumulated more than 500 downloads. In the Online Appendix (Table A5), we detail these excluded papers and present (in Table A6) results—which are qualitatively similar—including all papers, winsorizing the data, and using alternative outlier cutoffs.^{16}
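As a simple illustration of the alternative outlier treatment mentioned above, winsorizing caps extreme download counts at a cutoff rather than dropping those papers from the sample; the sketch below uses a cutoff of 500 to mirror the exclusion threshold:

```python
def winsorize(values, upper):
    """Cap each value at an upper cutoff instead of excluding it,
    limiting the influence of heavy-tailed download counts."""
    return [min(v, upper) for v in values]

downloads = [3, 40, 95, 1200]
print(winsorize(downloads, 500))  # → [3, 40, 95, 500]
```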

Each entry in Table 3 reports OLS estimates for the difference-in-differences coefficient from Equation 1. We present results without controls in Column 1 and including controls for paper characteristics in Column 2. In Column 3, we replicate the specification in Column 2, but restricting observations to papers in the matched sample. In Row 1, we present estimates for the difference-in-differences coefficient in regressions using the overall number of SSRN downloads as the paper outcome. For this variable, papers not found in SSRN are treated as having zero downloads. The estimates are all statistically significant (*p*-value <0.01) and indicate that the 2012 APSA meeting cancellation led to a decrease of around 4.5–5.4 downloads per paper. In Rows 2 and 3 we decompose this overall effect. The cancellation may have changed the likelihood of participants posting their paper in SSRN, and it may also have affected the rate at which papers, once posted on SSRN, were subsequently downloaded. In Row 2, the entries represent estimated impacts on the probability that a paper is posted in SSRN. The difference-in-differences estimates are negative—suggesting that the cancellation led to fewer participants uploading their papers. But the coefficients are not statistically significant for the most controlled specifications (in Columns 2 and 3). In Row 3, we examine the impacts on the number of downloads, but restricting the sample to papers that were posted in SSRN. The difference-in-differences coefficients are negative, suggesting also a decrease in papers’ readership, but the point estimates are not (for the most controlled specifications) statistically significant.

In Rows 4–6, we replicate regressions, but excluding papers scheduled to be presented at both the APSA and the MPSA meetings. (The APSA meeting organizers encourage participants to upload their conference papers in SSRN and therefore, for our downloads outcome, there is a specific risk of contamination, due to a possibility that MPSA papers found in SSRN may often be papers also presented at the APSA meeting.) For this sample, the magnitudes of estimated effects, and their *t*-statistics, increase for all outcomes.

We might tentatively suppose that the overall effect on downloads (in Rows 1 and 4) arises both because authors became somewhat less likely to post their paper in SSRN and because, once posted, papers were less frequently downloaded.^{17}

Next, we examine whether the 2012 APSA meeting cancellation had an impact on the likelihood of articles accumulating citations. Again, we provide difference-in-differences estimates for several regression specifications and samples. We report results for Google Scholar outcomes measured two years after, in Table 4, and four years after the 2012 meetings, in Table 5.

Focusing first on the two-year outcomes in Table 4, we report coefficients, in Row 1, from simple OLS regressions without paper controls and, in Row 2, from specifications controlling for paper covariates. The estimates in Row 2 indicate that the APSA meeting cancellation led to decreases of more than three percentage points in the likelihoods of presented papers receiving at least one citation and at least two citations. It transpired that, within two years, just 7.1 percent and 4.5 percent of 2012 APSA papers received at least one citation and at least two citations, respectively, so the implied effect of conferences is to increase these likelihoods by 40–70 percent. We also detected conference effects on the likelihood of articles collecting larger numbers of citations: the cancellation led to a decrease of 1.9 percentage points in the likelihood of receiving at least five citations. In Row 3, we report results from Equation 2, replacing institution dummies with covariates for author fixed effects. The coefficients for conference impacts become larger in magnitude, with lower *p*-values, suggesting a possible selection of more-likely-to-be-cited authors into the 2012 APSA meeting. The estimates indicate that the conference cancellation led to decreases of 8.2, 7.2, and 4.5 percentage points, respectively, in the likelihoods of an article receiving at least one, two, or five citations. In Rows 5 and 6, we present results for the group of papers in the matched sample. While none of the estimated effects are significant from the OLS regressions (in Row 5), they become significant in specifications including author fixed effects (in Row 6), and they resemble in magnitude the impacts estimated for the full data (in Row 3).

In Table 4 we also report estimates for the effect of the conference cancellation on the likelihood of the conference paper being found, in our search, on Google Scholar at all. These coefficients, in Column 5, are all negative, and in most specifications are statistically significant, with estimated effects varying between 5 and 16 percentage points. These estimates parallel the suggestive evidence in Table 3 of a reduced likelihood of 2012 APSA papers being posted in SSRN; however, they do not appear to be an artefact of the former effect. To check for this, we created an indicator for whether the paper was found online, coding as zero those conference papers for which SSRN was the only source found on Google Scholar.^{18} The difference-in-differences estimates for this outcome are presented in Column 6, the coefficients being qualitatively similar to, and only slightly smaller in magnitude than, those in Column 5.

In Table 5, we present results for longer-run counts of citations. Four years after the 2012 meetings, the 2012 APSA coefficients are generally larger in magnitude, but imply similar relative conference effects.^{19} For example, 14.5 percent of 2012 APSA papers received at least one citation within four years, so the estimated impact of 5.7 percentage points, as reported in Column 1 Row 2, implies that the conference would have increased this likelihood by 39 percent. The estimated effects remain statistically significant for the likelihood of an article being cited at least once or twice, but not for the likelihood of being cited at least five times.^{20}

The results both for downloads and for citations largely support the hypothesis that conferences increase the visibility of presented papers. The estimates indicate that the conference presentation leads to four to seven additional downloads and increases the likelihood of the paper being cited by around 5.7 percentage points (based on estimates from Equation 1, in Table 5, Row 2). These effects could arise through mechanisms of maturation or of advertisement. In Table 3, we find some evidence that the 2012 APSA meeting cancellation affected the chance of a paper being posted in SSRN, and the results in Table 4 indicate that 2012 APSA papers became less likely to have any version online, even two years after the conference. This is suggestive evidence for a maturation effect: the conference seems to be affecting the likelihood that a project endures or progresses, so a paper develops to a stage that is ready to be made publicly available.

As a first indication as to whether advertisement effects are also in place, we consider the identity of the citing author, from citations observed four years after the 2012 meetings. A maturation effect may be expected to lead to increased citations from all academics, while an advertisement effect may be expected to lead, disproportionately, to increased citations from academics who were at the conference.

The estimates for the difference-in-differences coefficients and outcome averages are reported in Table 6, in which we use, as the dependent variable, indicators for whether a conference paper became cited by at least one other academic at the conference, at least one academic within the same session (that is, the chair, discussant, or another presenter) in the conference, and at least one academic not in the conference. We show results for the most complete specifications (analogous to Table 5, Rows 2 and 3). In Column 1, we show OLS results, and in Column 2, we present estimates from specifications adding covariates for author fixed effects. The estimated coefficients for the impact of the 2012 APSA meeting are negative, but are only statistically significant in regressions that control for author fixed effects. The estimated effect on being cited by academics not in the conferences has the lowest *p*-value (*p*-value < 0.05) and indicates an impact of 7.5 percentage points. The impact for being cited by academics in the conference (Row 1) is only significant at the 10 percent level, and indicates a decrease of 5.3 percentage points. These two impacts are very similar as proportions (approximately 45 percent) of the means for the respective dummy variables, so there is altogether no evidence—from the comparison of coefficients in Rows 1 and 2—of an advertisement effect. However, it is worth noting that the estimated effect on the likelihood of being cited by an academic within the same session, while also only significant at the 10 percent level, represents a far higher proportion (approximately 100 percent) of the mean for this variable. This hints at a possibility of advertisement, specifically between the participants in a session. We explore further evidence for this when we next consider heterogeneities in the conference effect.

### B. Heterogeneous Effects by Session and Authorship

We consider heterogeneity in the conference effect in two dimensions. First, we consider: Which sessions are most beneficial? We examine whether the assignment of a highly cited academic (henceforth, a “star academic”) to a conference session—as a chair, discussant, or presenter—determines the impact of the conference on the paper to be presented. Then, we consider: Who benefits? We investigate whether and how the conference effect varies by academics’ institutional ranking and by measures for their experience and existing profile.

It is well documented that highly productive academics generate powerful peer effects in science (Azoulay et al. 2010; Oettl 2012). In the context of conferences, a star academic might be expected to induce both maturation and advertisement effects. First, they may provide high-quality comments to presenters of works in progress. This seems particularly likely when the star academic is assigned as a discussant or chair in the session. Second, star academics may attract a larger audience to the session. This is perhaps most likely when the star academic is an author of a presented paper.^{21} Using WoS data, we identified highly cited authors in political science and matched these to the conference participants.^{22} In Table A10 in the Online Appendix, we provide summary statistics for the distribution of star academics among participants.

We consider four session categories based on the role of the star academics in the session, that is, sessions in which: (i) the chair and/or discussant is a star (*disc_chair_star*), (ii) an author of a presented paper is a star (*author_star*), (iii) the chair/discussant *and* an author of a paper are stars (*author_disc_chair_star*), and (iv) no star academic is assigned a role (*norole_star*). It should be noted that both academic meetings tend to assign discussant and chair roles to academics that are not authors of presented papers, so categories (i), (ii), and (iii) are distinct.

It is possible that conference organizers allocate more promising authors and papers to sessions with high-profile discussants or chairs. Since our intent is to identify differential effects due to the presence of the star academic (rather than on characteristics that explain the allocation of papers to high-profile sessions), we focus on the most complete specifications, including the full set of controls and author fixed effects.

In Table 7, Panel A, we repeat average impacts reported in Table 5, Row 3. In Panel B, we analyze the impact of conferences decomposed by type of session using the pooled data and splitting the 2012 APSA indicator among the four categories above. In these regressions, we also include indicators for session type, four sets of session type–APSA year-specific trends, and an indicator for whether the paper is authored by a star academic. Each Column in Panel B reports results from a separate regression. We detect statistically significant coefficients for conference impacts in determining at least one or two citations (Columns 1 and 2) for most of the sessions. It is noticeable that papers assigned to sessions with star academics in multiple roles (as discussant/chair and as a presenting author) seem to be the ones most harmed by the 2012 APSA meeting cancellation. This is perhaps not surprising: we would expect these sessions to confer the greatest benefits, both in terms of visibility and comments. Although the difference-in-differences coefficients are largest for this group, a test for difference across coefficients only shows statistically significant differences between these highest-profile sessions (*author_disc_chair_star*) and sessions where a star academic has no role as discussant or chair (*author_star* and *norole_star*) and then only for the impact in determining at least ten citations and for being cited by academics not in the conference. This may be seen as suggesting that the key mechanism underlying these differential effects is the feedback provided by the star academic.^{23}

It is interesting to note that the coefficients for effects of conferences in determining citations from academics in the same session (Column 7)—academics who will have seen the paper presented, in the occurring conferences, and who are also likely to have the most closely related research—are broadly similar across session types. They are only statistically significant (at the 5 percent level) for papers assigned to sessions where star academics have no role—these being the most common sessions, accounting for 62.4 percent of conference papers. This somewhat reinforces the suggestive evidence noted in the previous section that conferences have an informational and advertisement role within and between the participants in a session.

We may also expect some heterogeneity by authorship of conference effects. A conference gathers a group of unpublished papers. In its absence, any article has an ex ante expected readership, based (at least in part) on its authors’ characteristics: their institutional affiliation (Oyer 2006; Kim et al. 2009), the existing visibility of their previous papers, and so forth. We therefore investigate whether there are differential conference effects by such characteristics. Do conferences help “the weak” or “the strong”? For this analysis, we use article-level data and split the data on the basis of various authors’ characteristics: (i) institutional affiliation, (ii) citations of published papers,^{24} (iii) number of recent publications,^{25} and (iv) whether an author has a recent top-quartile publication.^{26}

In Table 8 we look for heterogeneous effects from subsamples divided by these four characteristics, and using long-term citations (four years after the 2012 conference) as outcomes. Each entry reports estimates for the key difference-in-differences coefficients. The estimates for the effect of the 2012 APSA meeting cancellation on citations are only negative and statistically significant for papers whose authors are affiliated with an institution outside the top ten (Rows 1–4, Columns 1–3). Curiously, the point estimates for papers whose authors are in a top-ten institution are positive (possibly suggesting a substitution of citations across authors due to conferences), but the coefficients are generally not statistically significant. Authors affiliated with mid-tier institutions became less likely to accumulate at least ten citations, and authors affiliated with institutions outside the top 100 became less likely to receive at least one citation, as a consequence of the cancellation.

Articles authored by academics with no publications, with no citations of published papers, or with no top publications also became less likely to receive at least one citation. The group of papers authored by academics with one or two previous publications became—with the largest coefficients we observe—less likely to receive at least five or at least ten citations due to the 2012 APSA meeting cancellation. For authors in all these groups, comparing the coefficients in Rows 5, 6, and 7, there is no observable tendency for the conference-generated citations to be gained largely from academics within the conference (or conference session) as opposed to in the outside population. It appears that the academics with lower and intermediate ex ante likelihoods for gathering citations—less experienced and affiliated with institutions outside the top ten—are the main beneficiaries of the overall conference effect. Moreover, for these groups the mechanism is mainly one of maturation.

For articles authored by academics in the groups with the highest ex ante prospects—those with more than two previous publications, with cited publications, or with a publication in a top journal—the pattern of conference effects seems quite different. For this group, though the 2012 APSA coefficients are generally negative, they are not generally statistically significant. However, statistically significant effects are then consistently observed in the likelihood of receiving a citation from another academic in the same conference session. This seems to provide a fairly compelling corroboration of the evidence in Tables 6 and 7 that an advertisement effect occurs within session participants. And the beneficiaries of this advertisement effect appear to be authors with relatively high levels of experience or existing profile.

## V. Conclusion

By exploiting a natural experiment, we have provided estimates for the effects of conferences on papers’ visibility and academic impact. To the best of our knowledge, no previous analysis has applied a compelling identification strategy to this issue, and the issue itself is of considerable importance because significant resources across all research fields in academia are apportioned to organizing and attending such events.^{27}

Using papers accepted in a comparator conference as a baseline group for papers in the American Political Science Association Annual Meeting, our difference-in-differences analysis suggests that a conference increases short-run visibility (as indicated by working paper downloads) and moreover boosts the likelihood of a paper becoming cited: by three percentage points after two years and by five percentage points after four years.

The gains are most noticeable for authors who are not in the very top institutions and academics (generally early in their career) who do not have previous papers that are cited or published in top journals. For these academics the conference effect seems to be driven by “maturation,” with the presented paper improving and progressing as a consequence of the personal interactions within the conference, these complementing—perhaps—similar processes that occur within an author’s own institution.

However, for higher profile authors we detect an “advertisement effect,” with the conference presentation leading to a decisive increase in the likelihood of the conference paper becoming cited by other participants in the same session. The gains may be accruing to this group due to a correlation between paper quality and an author’s recent publications or due to a “Matthew effect” of accumulated advantage. By our results, the catalyst for an advertisement benefit could lie either in the strength of the paper or in the perceived credentials of the author. But, either way, conferences seem to be facilitating a direct transmission of knowledge between academics.

Of course, our analysis is of one specific meeting: a large political science conference, with its own characteristics. But it is a reasonably modest step to suppose that in many respects the results will generalize to other conferences. Each academic field has its own character, but we might also expect to find resemblances, especially between political science and other social sciences. Indeed, many of the papers in the APSA meeting lie on the intersections between politics, economics, sociology, psychology, law, and management science. Most conferences are much smaller than that which we have analyzed, but many offer a very similar within-session experience. In less cognate disciplines, the differences in conference format and function may be larger. For example, in the biomedical sciences conferences are more numerous and are often arranged to facilitate interactions with related industries (see Ioannidis 2012). Practices of citation and collaboration also differ. We therefore cannot be sure if the impacts and mechanisms associated with meetings in such fields will be the same.

Where the APSA meeting may differ from many other conferences, even in social science, is in the assigning of a discussant to every session and also in the high proportion of early-career academics attending (reflected, in Table 1, by 46.5 percent of papers being authored by academics without previous publications). We can expect these differences to have affected the relative roles of the maturation and advertisement functions of the conference. In light of our results, we may suppose that in other meetings—without discussants but with a higher proportion of experienced academics—the importance of the advertisement effect will be greater.

Historically, in the era preceding digital communication, the importance of scientific meetings as a forum for academics to discover each other’s work seems clear. A compelling demonstration is provided by Iaria et al. (2018), who show consequences for knowledge-flow and scientific productivity arising from an interruption in opportunities to attend international scientific meetings (combined with increased delays in delivery of international journals) during and after the First World War. However, in the last 30 years the internet has transformed opportunities for academics to access working papers and to correspond (Agrawal and Goldfarb 2008; Ding et al. 2010). It is then reasonable to ask whether face-to-face interaction, as facilitated by the conference setting, continues to influence the flow of academic understanding. Our findings indicate that it does.

## Footnotes

The authors thank three anonymous referees for helpful comments and are also grateful for useful inputs from Steve Coate, David Hugh-Jones, Arthur Lupia, Will Morgan, Judit Temesvary, Fabian Waldinger, and the seminar audiences at the Universities of East Anglia, Kent, and Portsmouth and at the 2015 Royal Economic Society Meeting and 2015 Barcelona GSE Summer Forum. Excellent research assistance was provided by Chris Bollington, Raquel Campos-Gallego, Ben Radoc, Arthur Walker, and Dalu Zhang. This research was funded by the Leverhulme Trust (grant RPG-2014-107). The data used in this article will be available online, from September 2020, at: Kent Data Repository, https://doi.org/10.22024/unikent/01.01.75

Supplementary materials are freely available online at: http://uwpress.wisc.edu/journals/journals/jhr-supplementary.html

↵1. The American Economic Association advertised close to 300 meetings in 2014, and in the field of medical science there are an estimated 100,000 meetings per year (Ioannidis 2012).

↵2. One paper does achieve this: Blau et al. (2010) evaluate the impacts of CeMENT—a mentoring workshop for female assistant professors, at which participants also have a chance of having a working paper discussed by a small group of peers. However, to the extent that Blau et al. (2010) hint at any generalizability, their suggestions are with respect to other mentoring interventions rather than to other conference settings.

↵3. In other words, conferences could plausibly either mitigate or exacerbate any “famous-get-famous effect” (or the “Matthew effect”). See Merton (1968), Salganik et al. (2006), and Azoulay et al. (2013).

↵4. Winnik et al. (2012) and Castaldi et al. (2015) compare “accepted” vs. “rejected” papers, so a selection effect (the extent to which the conference committee selects for papers that are likely to have greater impact) is likely to be a confounder to any conference effect. Chai and Freeman (2017) conduct a more controlled analysis by comparing patterns of collaboration and citations among attendees of the Gordon Research Conferences with patterns among a matched group of nonconference attendees and instrumenting conference attendance by individuals’ distance to the conference.

↵5. Waldinger (2010) finds that doctoral students in Germany whose departments lost eminent scientists during the Nazi era were—by various career metrics—consequently less successful; Azoulay et al. (2010) show that scientists publish fewer papers, or papers of lower quality, after a “superstar” coauthor dies unexpectedly. Borjas and Doran (2015) document that mathematicians who became geographically separated from high-quality coauthors during the post-1992 exodus of scientists from the Soviet Union became less productive. Borjas et al. (2018) find that a positive supply shock of Chinese graduate students into American universities led to increased productivity of Chinese-American advisors (who tended to work with the students from China) and to commensurately reduced productivity of American advisors of non-Chinese heritage.

↵6. It should be noted that the conference papers are typically working papers, usually with no record of existence before the conference (indeed, as shown in Table 2, only 27 percent are found in Google Scholar two years after the 2012 conferences), so an analysis within the paper, before and after the conference, is not possible.

↵7. From the WoS, we assembled all articles published in the 155 WoS Political Science journals and in the top 20 WoS journals in Economics, Sociology, Law, History, and International Relations from 2004 to 2011. From the SSRN, we assembled a set of working papers comprising all papers posted in the SSRN Political Science Network from January 1996 to September 2015. These sets include 115,188 published papers and 113,895 working papers, respectively.

↵8. For participants in the conferences taking place in 2009, we consider the window of calendar years 2004–2008. For conferences taking place in 2010, the window comprised years of 2005–2009, and so forth.

↵9. In using this rule, we run into the issue of name ambiguity and possible misattribution of characteristics among participants. We conducted several checks to ensure that individuals’ first and last names uniquely identify conference authors with some previous history in SSRN, by crossing this information with unique SSRN author identifiers.

↵10. One specific concern related to an early campaign against holding the 2012 APSA meeting in Louisiana in response to the state’s refusal to recognize same-sex marriages. Within this campaign, 1,109 academics signed a petition advocating a boycott, approximately half of whom are in our data set. It transpired that, indeed, very few (only 30) of these registered to attend the 2012 meeting in New Orleans. However, as shown in Figure A1 in the Online Appendix, we find no evidence that the petitioners became, in turn, more likely to attend the 2012 MPSA instead (a potential threat to identification) or indeed that the petitioners differ in observables from the average conference participant in the occurring conferences. Petitioners and nonpetitioners do not differ in number of publications weighted by journal quality or in institutional ranking. These results are not shown in the paper, but are available by request.

↵11. The CEM approach consists of a one-to-one match that assigns a pair of control–treatment observations based on the exact matching on the joint support of a set of (selected) characteristics. Each individual characteristic is, however, considered in coarse terms. In applying this methodology, we transformed all variables in Table 1 to a discrete form. The specific variables we use to determine the matching are: number of article authors, whether any article author has a previous publication, whether any article author has a previous working paper in SSRN, whether the highest affiliation rank is [1, 10], [11, 100], or [101, ∞), and whether the accumulated number of publications weighted by journal impact factor is zero (56.3 percent of observations), (0, 1.65], (1.65, 3.802], (3.802, 8.668], or (8.668, ∞), (the last four ranges each being 25 percent of the nonzero observations).
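The coarsening and pairing step described in this footnote can be sketched as follows. The bin edges are taken from the footnote; the column names, function names, and the one-to-one pairing rule shown here are illustrative assumptions rather than the authors' actual implementation.

```python
# Sketch of the coarsened exact matching (CEM) step described in the footnote.
# Bin edges follow the footnote; column and function names are assumptions.
import numpy as np
import pandas as pd

def coarsen(df):
    out = pd.DataFrame(index=df.index)
    out["n_authors"] = df["n_authors"]      # number of article authors
    out["any_pub"] = df["any_pub"]          # any author with a publication
    out["any_ssrn_wp"] = df["any_ssrn_wp"]  # any author with an SSRN paper
    # Highest affiliation rank coarsened to [1, 10], [11, 100], [101, inf)
    out["rank_bin"] = pd.cut(df["aff_rank"], [0, 10, 100, np.inf], labels=False)
    # Weighted publications: zero, then the quartile cuts of the nonzero values
    out["pub_bin"] = pd.cut(df["weighted_pubs"],
                            [-np.inf, 0, 1.65, 3.802, 8.668, np.inf],
                            labels=False)
    return out

def cem_pairs(treated, control):
    """One-to-one pairing on exact equality of the coarsened strata."""
    t, c = coarsen(treated), coarsen(control)
    key_cols = list(t.columns)
    pool = {k: list(g.index) for k, g in c.groupby(key_cols)}
    pairs = []
    for i, row in t.iterrows():
        bucket = pool.get(tuple(row[key_cols]), [])
        if bucket:
            pairs.append((i, bucket.pop(0)))  # consume one control per match
    return pairs
```

A treated paper is paired with a control paper only when the two fall into the same cell on every coarsened characteristic, which is the defining feature of CEM.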

↵12. Outcomes were collected using commercial web-scraping providers. For the main sample, the service provider was Mozenda, Inc., and for the full sample, an independent professional programmer.

↵13. However, hits—from these first ten—were dropped if the conference paper had no citations. Therefore, in the later search outcomes we cannot differentiate between articles with zero citations and articles “not found in Google Scholar.”

↵14. Coauthored papers will appear as multiple observations, one for each of the authors.

↵15. When examining data at the article–author level, 76.5 percent of papers are authored by academics that participated in multiple conferences among the eight that we observe.

↵16. Results in Table 3 are based on the full article sample (with all of the MPSA papers), using outcomes recorded three years after the 2012 conferences. In the Online Appendix (Table A4) we show results based on the main sample (with 20 percent of the MPSA papers), as recorded one year, two years, and three years after the conferences. In Table A7 in the Online Appendix, we replicate results from Table 3, using a Poisson model.

↵17. In principle, an alternative explanation could be that the 2012 APSA meeting cancellation particularly deterred the authors of stronger papers—with higher prospective downloads—from posting these in SSRN. In difference-in-differences regressions for the sample of articles in SSRN, using article covariates as dependent variables, we did not find evidence that the 2012 APSA articles posted in SSRN were less likely to have been authored by more experienced (that is, published or better-published) academics, or that they differed systematically in number of authors.

↵18. In Figure A3 in the Online Appendix, we show how we recovered this information from Google Scholar.

↵19. The citation variables in Table 5 differ from Table 4 also because we use the first ten Google Scholar hits, instead of the first three Google Scholar hits. For a more controlled comparison, in Table A8 in the Online Appendix, we provide results for citations measured four years after the 2012 meetings, but using only the first three Google Scholar hits.

↵20. In addition to the analysis in Tables 4 and 5, in Table A9 in the Online Appendix, we present OLS results using the number of cites and the log of (1+cites) as dependent variables. We also present results from negative binomial and Poisson regressions explaining the number of articles’ cites.
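The robustness checks mentioned in this footnote can be sketched on simulated data. This is an illustration only: the paper's actual regressions include controls and fixed effects, and the variable names (`apsa`, `year2012`, `cites`) are assumptions.

```python
# Sketch of the count-model robustness checks on simulated data. Illustration
# only: the paper's specifications include further controls, and all variable
# names here are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1500
df = pd.DataFrame({
    "apsa": rng.integers(0, 2, n),
    "year2012": rng.integers(0, 2, n),
})
# Poisson-distributed citation counts with a built-in interaction of -0.4
mu = np.exp(0.5 - 0.4 * df["apsa"] * df["year2012"])
df["cites"] = rng.poisson(mu)

# OLS on log(1 + cites) alongside a Poisson count model of the same outcome
ols = smf.ols("np.log1p(cites) ~ apsa * year2012", data=df).fit()
poisson = smf.poisson("cites ~ apsa * year2012", data=df).fit(disp=0)
did_poisson = poisson.params["apsa:year2012"]
```

In the Poisson model the interaction coefficient is interpreted as a proportional (semi-elasticity) effect on the citation count, which is why such models complement the linear-probability estimates in the main tables.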

↵21. Neither the APSA nor MPSA Programs indicate who the presenting author is in the case of a coauthored paper. However, as shown in Table 1, 70.9 percent of papers are solo-authored.

↵22. We defined highly cited academics as those whose number of citations falls into the top 2.5 percentiles based on publications in a window of five years preceding the conference.
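The percentile cut described in this footnote amounts to the following; the function name and input format are assumptions for illustration.

```python
# Illustrative version of the "star academic" cut: flag authors whose
# pre-conference citation counts fall in the top 2.5 percent of the
# distribution. Function name and input format are assumptions.
import numpy as np

def star_flags(citation_counts):
    cites = np.asarray(citation_counts, dtype=float)
    cutoff = np.percentile(cites, 97.5)  # threshold for the top 2.5 percentiles
    return cites >= cutoff

counts = np.array([0, 1, 2, 3, 5, 8, 13, 40, 250, 900])
flags = star_flags(counts)  # only the most-cited author(s) are flagged
```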

↵23. An alternative explanation could in principle be that citations are generated by *advertising to the star academic*: that a star academic will have greater propensity than others to subsequently cite the papers he or she sees in the session. But this is not supported by the coefficients, or pattern of statistical significance, in regressions in which the dependent variable is an indicator for being cited by academics in the same session (Column 7).

↵24. The data are decomposed here by Web of Science citations for publications prior to the conference. The difference between this measure and our outcome measure (Google Scholar citations) should be noted. Google Scholar citations capture more types of scientific work (including books and unpublished papers).

↵25. We find similar results when the decomposition is based instead on publications weighted by journal impact factor.

↵26. The cutoff is based on the top-quartile impact factor journal for a sample of 155 journals in our WoS data set in 2008, which corresponds to an impact factor of approximately two.

↵27. In addition to direct conference costs, recent studies (Green 2008; Jena et al. 2015), focusing particularly on medical conferences, have noted and estimated other externalities associated with academic meetings.

- Received November 2016.
- Accepted May 2018.

This open access article is distributed under the terms of the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0) and is freely available online at: http://jhr.uwpress.org.