Ilaria Ampollini (University of Trento) & Mattia Andreoletti: “Objectivity: a qualitative analysis”
The philosophical and historical debate about what objectivity is and how it is entangled with the practices of science has been notably lively and challenging over the last decades. However, to date no study has turned to researchers and scientists themselves to investigate how they perceive the concept of objectivity in their everyday practice. Hence, we want to make an empirical contribution to the contemporary debate about objectivity by exploring the topic through the opinions and direct experience of researchers across different fields. In collaboration with Ilaria Ampollini (University of Trento), a series of focus groups will investigate how scientists perceive, discuss, and pursue objectivity in their disciplines. More specifically, participants will be asked to reflect on what they consider “objective” and to what extent they think objectivity is linked to research quality. Moreover, we will investigate whether, and to what extent, they agree with definitions and interpretations given by scholars who have extensively studied scientific objectivity.
Carlo Martini (Università Vita-Salute San Raffaele) & Mattia Andreoletti: “Progressive Science or Pseudoscience: The case of medical controversies”
Science progresses through debate and disagreement, and scientific controversies play a crucial role in the growth of scientific knowledge. But not all controversy and disagreement in science is progressive. Sometimes controversies are manufactured, and what looks like genuine scientific disagreement can be a distortion of science set up by non-scientific actors (e.g., interest groups). Manufactured controversies are detrimental to science because they can hinder scientific progress and eventually bias evidence-based decisions. The first goal of this paper is to elucidate the distinction between pseudo and genuine scientific controversies. The second is to provide epistemic criteria for distinguishing pseudo from genuine scientific debates in medicine. Drawing on research in social epistemology and the literature on expertise, we discuss a series of criteria that have been proposed to distinguish genuine from pseudo-expertise. We apply these criteria to discern legitimate from pseudo-scientific controversies, testing them in a case study. The case study is based on current research on statins, drugs widely prescribed to lower cholesterol levels in order to reduce the risk of cardiovascular disease (CVD).
Mattia Andreoletti: “The terrorists and the czars. Methodological versus social solutions to the replicability crisis”
Nowadays, almost everyone seems to agree that science is facing an epistemological crisis – namely the “replicability” crisis – and that we need to take action. But as to precisely what to do or how to do it, there are no firm answers. Two groups of proposals for fixing science are currently on the table, tackling different putative causes of the crisis. Some scholars argue that the current statistical inferential framework is inadequate, and that we should therefore focus on improving statistical methods. Others claim instead that the only way to fix science is to change the scientific reward system, promoting quality rather than quantity of scientific publications. However, every positive proposal, whether methodological or social, faces a valid counterargument. This paper is meant as a modest contribution to the recent debate on scientific reforms. I take this debate as a case in point to show that the recent controversies over scientific reforms reveal a genuine philosophical disagreement on what counts as scientific objectivity. More specifically, I argue that statistical reformists embrace a new ideal of scientific objectivity, which has recently been conceptualized and described by sociologists of science as “statistical objectivity”. Social reformists, by contrast, are committed to the more traditional ideal of “procedural objectivity”. The epistemic competition between these two perspectives might help explain why so far nothing has changed. The disagreement may be resolved at this higher level of discussion.
Michał Sikorski and Mattia Andreoletti: “Epistemic and Social Functions of Replicability”
Is replicability a crucial feature of science? Many philosophers of science have discussed the limitations rather than the advantages of replicability (see e.g. Leonelli 2018; Norton 2015). Scientists, by contrast, see replicability as one of the defining features of their disciplines and consider the high rate of replication failures an “apocalypse” for science (Bishop, 2019). We try to make sense of this tension. We start by specifying what replicability means; we then defend the epistemic and social value of replicability; finally, we suggest a strategy for selecting the appropriate level of replication and present two case studies.
Noah van Dongen and Leonie van Grootel: “Systematic Review on NHST criticism”
The null hypothesis significance testing (NHST) procedure is used ubiquitously for analyzing experimental and observational data in the social sciences. However, since its infancy it has been criticized for its deficiencies and for (its ease of) misuse by scientists. The body of literature on NHST criticism has been steadily growing over the years, and it accelerated when talk of a ‘replication crisis’ started in the social sciences. Although many articles have been published on the subject, a clear overview of NHST’s deficiencies, misinterpretations, and misuses is absent, and we hope to remedy this with a systematic review.
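To illustrate one of the misuses this literature discusses, here is a minimal Python simulation (our illustration, not part of the review): running many tests at α = .05 on pure noise makes at least one “significant” result almost inevitable.

```python
# Minimal sketch (illustrative only): multiple testing at alpha = .05
# inflates the family-wise false positive rate far above 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_experiments, n_tests, alpha = 2_000, 20, 0.05

false_positive_runs = 0
for _ in range(n_experiments):
    # 20 independent t-tests on noise: every null hypothesis is true.
    p_values = [
        stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
        for _ in range(n_tests)
    ]
    false_positive_runs += min(p_values) < alpha

# Expected rate: 1 - 0.95**20 ≈ 0.64, despite the nominal 5% error rate.
print(f"Runs with at least one 'significant' result: "
      f"{false_positive_runs / n_experiments:.2f}")
```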
Michał Sikorski: “Values, Bias and Replicability”
The value-free ideal is the view that scientists should not let their non-epistemic values influence the justification of their hypotheses. The view has recently become unpopular among philosophers. I defend the ideal by showing that if we accept the uses of non-epistemic values it prohibits, we are forced to accept as legitimate scientific conduct some of the disturbing phenomena of present-day science, e.g., sponsorship bias. I will also show how the use of non-epistemic values contributes to the replication crisis.
Noah van Dongen, Matteo Colombo, Felipe Romero, and Jan Sprenger: “Semantic Intuitions – a meta-analysis”
(contact authors for more details)
At the beginning of the twenty-first century, experimental philosophy began to contribute to the debate on theories of reference through Machery, Mallon, Nichols, and Stich’s seminal article “Semantics, cross-cultural style” (2004). Their empirical results indicated a difference in semantic intuitions between Western and East Asian people. While these results have raised several questions about the evidential support for philosophical theories of the semantics of proper names, they have more generally contributed to challenging the use of intuitions in philosophy.
However, our attempt to replicate these results failed. As part of the X-phi replication project (Cova et al., 2018), we tried to replicate the original results in a high-powered study using a similar design. Our failed replication was considered a curiosity, because the authors of the original study claimed that the effect had been replicated on many occasions.
Therefore, we decided to perform a meta-analysis of all available research on semantic intuitions to obtain a better understanding of the robustness of Machery et al.’s (2004) finding and, more generally, of cross-cultural differences in semantic intuitions.
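As a rough illustration of the aggregation step in a meta-analysis (a generic fixed-effect sketch with made-up numbers, not the authors' actual analysis), study-level effect sizes can be pooled with inverse-variance weights:

```python
# Generic inverse-variance (fixed-effect) pooling of study effect sizes.
# Illustrative sketch only; these effects and variances are hypothetical.
import numpy as np

effects = np.array([0.40, 0.15, -0.05, 0.30])    # per-study effect sizes
variances = np.array([0.04, 0.02, 0.05, 0.03])   # their sampling variances

weights = 1.0 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
se_pooled = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect = {pooled:.3f} ± {1.96 * se_pooled:.3f} (95% CI)")
```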
Felipe Romero and Jan Sprenger: “The Replication Crisis: Social or Statistical Reform?”
(contact authors for more details)
Science is going through a worrisome replication crisis. Many findings in the behavioral sciences don’t replicate: in follow-up experiments, effects often diminish or disappear completely (e.g., Open Science Collaboration, 2015). And, albeit less pronounced, the same pattern has been observed in the economic and medical sciences. This crisis casts severe doubt on the reliability and trustworthiness of experimental research.
How should we change science to make it more reliable? Statistical reformists hypothesize that the reliability of experiments would be greatly improved by moving away from null hypothesis significance testing (NHST), for instance by relying more on Bayesian statistics. Social reformists, on the other hand, hypothesize that changes in inference methods alone will not make science more reliable: we also have to change the social structure of science and the current credit reward scheme.
Using a computer simulation study, we evaluate whether the replication crisis is an artifact of the currently dominant statistical framework, or whether it has deeper causes in the social structure of science. We end up advocating a middle ground between the social and statistical reformists: statistical reform will make science more reliable, but only if combined with social reform.
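A minimal sketch of one mechanism such simulations can capture (our illustration, not the authors' model): if only significant results are published, the published literature overstates the true effect, so replications look like a “decline”.

```python
# Minimal sketch (not the authors' simulation): the significance filter.
# A small true effect is studied with low power; only 'significant'
# positive results get published, so published estimates are inflated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)
true_effect, n, alpha, n_studies = 0.2, 20, 0.05, 5_000

published = []
for _ in range(n_studies):
    sample = rng.normal(loc=true_effect, scale=1.0, size=n)
    t, p = stats.ttest_1samp(sample, popmean=0.0)
    if p < alpha and t > 0:          # the significance filter
        published.append(sample.mean())

print(f"true effect:             {true_effect:.2f}")
print(f"mean published estimate: {np.mean(published):.2f}")  # inflated
```

An unbiased replication of a “published” finding will, on average, recover the much smaller true effect, which is exactly the diminishing pattern described above.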
Matteo Colombo, Georgi Duev, Michèle B. Nuijten and Jan Sprenger: “Statistical Reporting Inconsistencies in Experimental Philosophy”
Experimental philosophy (x-phi) is a young field of research at the intersection of philosophy and psychology. It aims to make progress on philosophical questions by using experimental methods traditionally associated with the psychological and behavioral sciences, such as null hypothesis significance testing (NHST). Motivated by recent discussions of a methodological crisis in the behavioral sciences, questions have been raised about the methodological standards of x-phi. Here, we focus on one aspect of this question, namely the rate of inconsistencies in statistical reporting. Previous research has examined the extent to which published articles in psychology and other behavioral sciences report statistically inconsistent NHST results. In this study, we used the R package statcheck to detect statistical inconsistencies in x-phi and compared the rates of inconsistencies in psychology and philosophy. We found that the rates of inconsistencies in x-phi are lower than in the psychological and behavioral sciences. From the point of view of statistical reporting consistency, x-phi seems to do no worse, and perhaps even better, than psychological science.
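statcheck itself is an R package; as a rough Python analogue of its core consistency check (a simplified sketch of ours, not the package's actual code or decision rule), one can recompute the p-value implied by a reported test statistic and compare it with the reported p-value:

```python
# Rough Python analogue of statcheck's core check (statcheck itself is
# an R package; this simplified sketch is ours, not its actual code).
from scipy import stats

def t_report_consistent(t: float, df: int, reported_p: float,
                        tol: float = 0.005) -> bool:
    """Recompute the two-sided p-value implied by t(df) and flag the
    report when it differs from the reported p by more than `tol`."""
    recomputed_p = 2 * stats.t.sf(abs(t), df)
    return abs(recomputed_p - reported_p) <= tol

# Hypothetical reported result: "t(28) = 2.05, p = .05".
print(t_report_consistent(t=2.05, df=28, reported_p=0.05))   # consistent
print(t_report_consistent(t=2.05, df=28, reported_p=0.005))  # flagged
```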
Jan Sprenger: “Conditional Degrees of Belief”
http://philsci-archive.pitt.edu/13515/
Conditional degree of belief is a fundamental concept in Bayesian inference. Some conditional degrees of belief—the probability of an observation given a statistical hypothesis—are closely aligned with the probability density functions (i.e., “objective chance distributions”) of the corresponding statistical models. Justifying this alignment is, however, a far from trivial task. This paper articulates and defends a suppositional analysis of conditional degree of belief: it is in line with our reasoning practices and explains, unlike other accounts, why degrees of belief often track probability density functions. My account also clarifies the role of chance-credence coordination principles in Bayesian inference. Then, I extend the suppositional analysis and argue that all probabilities in Bayesian inference should be understood as (model-relative) conditional degrees of belief. I conclude with an exploration of how this view affects the relationship between Bayesian models and their target system, and the epistemic significance of Bayes’ Theorem.
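In symbols (our gloss on the abstract, not notation quoted from the paper), the alignment the paper seeks to justify, and the model-relative reading of Bayes' Theorem that follows from it, can be rendered as:

```latex
% Our gloss on the abstract's claims, not notation from the paper itself.
% Chance-credence alignment: the conditional degree of belief in evidence
% E, supposing statistical hypothesis H, tracks the model's density for E:
\[
  p(E \mid H) \;=\; f_H(E),
\]
% so that Bayes' Theorem combines only such model-relative quantities:
\[
  p(H \mid E) \;=\; \frac{p(E \mid H)\, p(H)}{\sum_{H'} p(E \mid H')\, p(H')}.
\]
```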
Noah van Dongen and Michal Sikorski: “Objectivity for the Research Worker”
Objectivity is important for science. Unfortunately, philosophers have failed to provide a definition that can be put into practice. The popular way of thinking about objectivity in philosophy focuses on its complexity. However, such a conception of objectivity lacks justification and is unusable for researchers who want to put it into practice. Therefore, we approach objectivity by a different route. Healthcare operates without a clear and precise definition of what it is to be healthy, while clearly being capable of treating illness and injury that damage or reduce health. We argue that in science, likewise, no clear definition of objectivity is necessary, as long as we can identify and remedy the particular practices and components of methods that harm objectivity (e.g., multiple testing of data, small samples, absence of falsification attempts, and biases).