Over two days in April 2017, academic scholars, public policy makers, non-governmental advocates, and media experts gathered for Evidence: An Interdisciplinary Conversation About Knowing And Certainty, sponsored by the Center for Science and Society (CSS) and the Institute for Social and Economic Research and Policy (ISERP) at Columbia University. The format of the conference was designed to facilitate dialogue and cross-pollination between fields, with each session consisting of two primary speakers from one discipline and responses from two to three panelists with contrasting backgrounds. This structure allowed conference participants to examine the use of evidence within and between disciplines critically and from multiple perspectives. EVIDENCE was organized around guiding questions, including: What counts as evidence in different fields? Why do some disciplines have explicit norms surrounding evidence while others are guided by implicit customs? Why do evidentiary norms change over time in a given discipline, and how can these changes be explained? And what happens when new theories outpace a discipline’s current evidentiary practices? Together these questions served as both a baseline for exploration and the common thread throughout the eleven sessions and two keynote speeches.
After an introduction from the conference organizers (Pamela Smith, Stuart Firestein, and Jeremy Kessler), Veronica Vieland, Niall Bolger, and Shai Silberberg examined the role of reproducibility in the calibration and authenticity of evidence. Veronica Vieland and Niall Bolger highlighted the importance of reproducibility in creating and evaluating evidence, but underscored that variation across results and replications should be expected. While discussing big data, David Madigan reiterated this point, noting that the same datasets can produce contradictory interpretations. Shai Silberberg challenged the use of the term “reproducibility,” stating that it masks the various reasons that experiments cannot be replicated, which is why he uses the phrase “transparency and rigor” instead. Wendy Wagner then questioned how standards of scientific evidence are regulated and how they evolve over time. The federal funding panel questioned the five factors (significance, innovation, approach, investigators, and environment) used to evaluate a research funding proposal. The explicit preference for quantitative evidence has shifted the conception of research and, consequently, which projects are funded. For example, the BMJ (formerly known as the British Medical Journal) does not publish qualitative studies because they do not produce tangible results. Thus certain kinds of evidence are prioritized, creating explicit norms about the uses of evidentiary fact.
Participants in the journalism panel discussed the role of evidence in journalism, especially within a 21st-century context. When writing scientific articles for the general public, journalists are forced to weigh conflicting priorities: representing the integrity and complexity of issues accurately, abiding by space and time constraints, and creating mass appeal. Additionally, journalists act as a voice of authority and accuracy while often providing only ever-changing partial knowledge as the evidence develops. Nick Lemann described the press as the “ER doctors of epistemology,” working with limited information and under pressure. The lack of shared standards and definitions of evidence only exacerbates the challenges journalists face.
EVIDENCE also examined the difference between scientific standards of evidence and the role of evidence in the courtroom. As in journalism, legal scholars struggle to standardize proof, leading to a high margin of jury error. How are “preponderance of the evidence” and “beyond a reasonable doubt” conceptualized within a legal context, and what standards of proof can be applied? Without distilling evidence and testimony into statistics, how can courts weigh the facts? As Barbara Shapiro stated, “probable cause” is not a static term describing a specific quantity of evidence, but variable nomenclature shaped by time, place, and situational context. This amorphous notion of evidence positions the courtroom as an adversarial testing ground for evidentiary standards.
Annie Duke, a World Series of Poker champion, and Jennifer Mnookin, Dean of the UCLA School of Law, served as EVIDENCE’s keynote speakers. Annie Duke’s address focused on the use of evidence within poker and on how the game was instrumental in the creation of game theory. Poker serves as a unique testing laboratory, a tight, closed loop offering many opportunities for collecting evidence. However, Ms. Duke noted that emotions create short-term swings in poker, producing a self-serving bias that can color evidence collection; she stated that strong emotions generally trump evidence in many settings. Jennifer Mnookin, meanwhile, offered a history and critique of forensic evidence in the courtroom. Forensic science is culturally perceived as absolute, but in reality it produces subjective evidence that is rarely tested. As Zoe Crossland noted, statistical evidence is endowed with scientific power, granting it authority and validity. There has been very little success in altering the use of evidence within the legal system or changing evidentiary norms.
Evidence: An Interdisciplinary Conversation About Knowing And Certainty provided a venue for an interdisciplinary analysis of evidence and its boundaries within academic and professional fields. The conference critiqued and analyzed current notions surrounding evidence while also serving as an incubator, allowing participants to consider cross-discipline solutions for problematic evidentiary practices, biases, and theories. Throughout the conference, participants explored how evidence can be conceptualized, defined, compared, and applied across and within disciplines.
For more details about the conference, including a session-by-session conference report, please click HERE.