SciBeh Virtual Workshop 2024: Epistemic Boundaries
6th–7th March 2024
Interested in the boundaries of expertise? Join us at @scibeh on March 6 & 7, 2024 for an engaging virtual workshop.
SciBeh is committed to enhancing knowledge management in crisis situations within the behavioral sciences. We have organized several workshops aimed at bridging the gap between researchers and policymakers. These workshops have covered topics such as collective intelligence in science communication and creating online environments to facilitate information sharing.
In our upcoming workshop, we want to address the role and limitations of expertise in providing policy advice during crises. Societies are currently confronted with crises such as pandemics and climate change, which necessitate collective action. The complex nature of these issues inherently demands interdisciplinary expertise for effective resolution. However, offering expertise on issues that extend beyond the scope of a single discipline carries the risk of epistemic trespassing: making judgments in a field beyond one’s expertise.
In this workshop, we want to approach expertise boundaries from four different angles:
- Transfer: Exploring aspects of expertise that are transferable and applicable across various disciplines.
- Barriers: Identifying the obstacles that impede the detection of epistemic trespassing.
- Danger Zones: Examining scenarios where problematic epistemic trespassing is most likely to occur.
- Collective Intelligence: Discussing how collective intelligence can assist in identifying and mitigating instances of epistemic trespassing, while facilitating the transfer of expertise.
We hope to publish the workshop’s results in a special issue. If participants are interested in this opportunity, SciBeh will approach relevant journals and coordinate the publication process.
Gaëlle Vallée-Tourangeau, Kingston Business School
Cultivating diversity in scientific innovation: Insights and reflections from a study of grant peer review practices
In this talk, I will discuss the barriers we need to overcome to harness and foster a greater diversity of scientific ideas and expertise in the grant funding landscape. I will present and build upon recent findings from a multi-phased, mixed-methods meta-research study, funded by the Wellcome Trust, of the key quality factors peer reviewers pay attention to when evaluating applications for grant funding. Adopting the theoretical lens of actor-network theory (ANT), I will invite a critical reflection on the current relationships between human actors and non-human ‘actants’ and how they may, overtly or unwittingly, influence the grant funding process. I will conclude with a discussion of emerging new actors/actants (e.g., SciBeh, the OSF, diversity policies and new funding models) and their possible impact on reshaping the grant funding landscape.
Wataru Toyokawa, University of Konstanz
Collective intelligence from dissimilar individuals
Humans are remarkably effective social learners, which has enabled them to generate collective intelligence and cumulative cultural evolution. In an effort to better understand this complex phenomenon, recent studies have modelled social learning as the integration of social information into individual reinforcement learning. However, previous research in this domain was limited to tasks in which observer and demonstrator share the same value function, while in reality humans must also learn from others who have different value functions owing to, for example, differences in preferences, skills, or goals. To address this issue, we added social correlations to a spatially correlated multi-armed bandit task, which lets us operationalize differences in taste while maintaining a common ground truth that applies to all agents. We introduce a novel model, Social Generalization (SG), which integrates social information into the spatial generalization step. Evolutionary simulations show that SG out-competes existing models in settings where social information is not perfectly applicable to one’s own situation, suggesting the conditions under which collective intelligence can emerge despite individual differences. In the talk, I will also show results from online experiments, suggesting that humans are able to use social information in this task setting, and are best fit by the normatively optimal SG model. Our study shows that humans can use social information more flexibly than previous studies have implied, and that they replace individual exploration with socially guided exploration.
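To make the modelling approach mentioned above concrete, here is a minimal sketch of how social information can be integrated into individual reinforcement learning on a bandit task. This is an illustrative, generic frequency-dependent decision-biasing model, not the SG model itself; all parameter names (`beta`, `sigma`, `theta`, `alpha`) and the mixing scheme are assumptions for illustration only.

```python
import math

def softmax(values, beta):
    """Convert estimated values to choice probabilities (inverse temperature beta)."""
    exps = [math.exp(beta * v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

def social_choice_probs(q_values, social_counts, beta=3.0, sigma=0.3, theta=1.0):
    """Mix individual value-based choice probabilities with a frequency-dependent
    social bias: p = (1 - sigma) * softmax(Q) + sigma * f_k**theta / sum(f**theta).
    sigma is the weight on social information; theta shapes conformity strength."""
    asocial = softmax(q_values, beta)
    weights = [(c + 0.1) ** theta for c in social_counts]  # small prior avoids zeros
    total = sum(weights)
    social = [w / total for w in weights]
    return [(1 - sigma) * a + sigma * s for a, s in zip(asocial, social)]

def update_q(q_values, arm, reward, alpha=0.2):
    """Standard Rescorla-Wagner / Q-learning update for the chosen arm."""
    q_values[arm] += alpha * (reward - q_values[arm])
    return q_values
```

In this family of models, the abstract's key question is where the social term enters: here it biases the choice rule directly, whereas SG instead feeds social observations into the spatial generalization step of value estimation.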
Harry Collins, University of Cardiff
Interdisciplinarity and language
The difficulty of interdisciplinary work stems from the fact that interaction within core groups of scientists depends on a shared language, and the languages differ as one moves from narrow specialty to narrow specialty. This can cause confusion because the same words may be used in the different languages with different meanings. This is not always the case, however: there are ‘hypernormal’ sciences such as molecular biology where the language is settled by the time a PhD is completed, and one finds the same puzzles being solved everywhere. This means expertise can be found wherever one finds the science, ranging from research laboratories to industrial firms and even to automated procedures, the science being computerisable. Sciences like physics, on the other hand, are continually developing new languages at the research front, so small specialist groups cannot talk to each other. The key to knowing whether members of one group can provide their referred expertise to another group’s work, assuming there is no time to acquire the new ‘practice language’ (interactional expertise), is the level of the contribution. For example, a high energy physicist can supply big-science management skills, or mathematical analysis, to gravitational wave physicists or telescope designers, but can supply an understanding of the relationship between signal and noise, or of design principles, only once the relevant practice languages have been acquired. The ‘Periodic Table of Expertise’ and the ‘Fractal Model of Society’ provide the framework for understanding these points.
Joe Roussos & Erik Angner, Stockholm University
In previous work, we have articulated the idea of epistemic humility in terms of calibration – as a matter of finding the golden mean between under- and overconfidence – and examined several reasons why epistemic humility (so understood) is appropriate, especially in the context of scientists acting as experts in policy contexts. In our current work, we revisit this conception and these reasons. We examine the apparent tension between being epistemically humble and giving decisive guidance as an advisor. Humility does promote cautious communication and messages that convey greater uncertainty. But it is a mistake to think that this damages the position of the advisee. Instead, we maintain it is what they ought to prefer.
Christina Pagel, University College London
Some reflections on communication of data and the role(s) of the scientist
Throughout the Covid-19 pandemic I ended up doing a lot of analysis of many different types of Covid data from the UK and elsewhere and then communicating that data to the public, through online live briefings, print and broadcast media and Twitter. In this talk I will reflect on what I have learned about communication of science across different media and how I have made sense of my role as a scientist and communicator.
Matthew Fisher, Southern Methodist University
Expertise and overconfidence
In an age of information accessibility, keeping track of the limits of your knowledge poses a challenge. Across several experimental paradigms, we explore how online information retrieval can impair memory retention and lead to metacognitive miscalibration. After searching online, people mistakenly equate access to information for comprehension. Furthermore, in a separate line of empirical work we explore the “curse of expertise,” where experts exhibit a false sense of competence about familiar or specialized topics. Our research aims to offer insights into the evolving definition of expertise in the digital era.
Robert Evans, Cardiff University
Citizen science: Expertise, activism and isolation
Citizen science projects typically take one of two forms: data collection on behalf of larger scientific projects run by professionals, or self-organised projects initiated by communities in order to address some local need. When the local need is to campaign against a proposed or actual development, this bottom-up form of citizen or community science is often adversarial in nature, as local knowledge and expertise is pitted against regulatory and planning institutions. In this paper, I summarise findings from a recent project involving attempts by a community group to develop an air quality monitoring project as part of their campaign against a proposed biomass incinerator plant. The group’s experiences mirror those reported elsewhere in the literature, revealing high levels of expertise and commitment but also the difficulty and fragility of citizen science work.
Eric Kennedy, York University
Assessing Relevant Expertises in Emergencies
Conflicts between experts are no less common during emergency situations than in more day-to-day debates. Indeed, emergencies offer particularly acute contestation about who has relevant expertise and should be trusted, and whose expertise is less salient or credible. I discuss different origins of these conflicts - including competing disciplines, contested knowledge hierarchies, and public inclusion - and suggest that many of these conflicts are exacerbated by disagreement about the ultimate goals and theories of change in an emergency. I argue that making the best use of tacit expert knowledge requires more explicit processes for clarifying goals and values and for synthesizing different forms of expert judgment; a prescription that is at odds with a simplistic form of the prevalent view that ‘scientists should be more involved in policy’.
Domnique Brossard, University of Wisconsin-Madison
Dietram Scheufele, University of Wisconsin-Madison
Jamie Watson, Cleveland Clinic Center for Bioethics
The Problem of Fake Authorities: Deciding Whom to Trust in a World of Imposters
The recognition problem—or, the difficulty for non-experts to appropriately distinguish trustworthy experts from fake authorities—is a perennial challenge in expertise studies. And the more we learn about human cognition and the social distribution of knowledge, the more intractable the problem seems. Unfortunately, the extant literature on the problem frames solutions in ways that simply deny crucial features of the problem, which leaves non-experts no better off. Here, I build on recent work in social epistemology to suggest a solution that both takes the problem seriously but provides practical guidance for non-experts.
*Additional participation in our study on collective deliberation methodologies. Alongside our standard workshop, this year’s edition introduces an opportunity to participate in one of our research studies. At SciBeh, we are dedicated to advancing research that enhances efficient and collectively intelligent deliberation methodologies. Our aim is to utilize data gathered during the workshop to gain a deeper understanding of the challenges in collective deliberation processes and explore solutions through machine-supported approaches. Upon completing your registration, you will receive an email with detailed information and a consent agreement form should you decide to also participate in the study. For further information, contact [email protected].