Science & More Talks

The “Science & More” talks are a series of work-in-progress talks at the Center for Logic, Language and Cognition (LLC) in Turin, run by the ERC project. We use the series to present our own work in progress and to learn about the ongoing research of our colleagues. From time to time, speakers from outside Turin also present their work.

Thematically, the talks are centered around philosophy of science, but we are also open to topics from related areas (e.g., logic, epistemology, philosophy of language) and other philosophical subdisciplines where exact methods are applied.

The talks take place in Palazzo Nuovo (Via Sant’Ottavio 20) from noon (12:00) to 13:00, usually on Wednesdays, followed by a joint lunch in one of the surrounding restaurants. They are meant to be low-key events, open to everybody and aimed at improving ongoing work through critical but constructive discussion.

Upcoming Talks

Silvia De Toffoli (Princeton University): Can we form a justified belief in mathematics based on an incorrect argument?

Wednesday, 16 June, noon, Aula 12, Palazzo Nuovo

According to the received view in philosophy of mathematics, proofs are necessary and sufficient for mathematical justification, that is, for direct, inferential justification of mathematical propositions. I challenge this view. More generally, I call into question the idea that mathematical justification is infallible. The fallibility of mathematical justification with respect to the axioms has been amply discussed in the literature. I am concerned, however, with mathematical claims conditional on the axioms. In this context, to say that mathematical justification is fallible is tantamount to saying that arguments that are not necessarily truth-preserving can confer mathematical justification. My aim in this talk is to spell out the conditions that these arguments must satisfy. That is, to clarify when justification is successfully transmitted across careful (but at times incorrect) mathematical inferences. To do so, I consider the social dimension of mathematics and how epistemic norms at play in mathematical practice present a social component. Although I will focus on the case of mathematics, the picture I will present generalizes in a straightforward fashion to other intellectual endeavors.

Past Talks

Martina Calderisi (University of Turin): Can we rationally believe that our own beliefs are irrational? How to defend the irrationalist explanation of polarization

Wednesday, 9 June, noon, Aula 10, Palazzo Nuovo

The well-known rationality debate in cognitive science has not yet been resolved. The deadlock, it turns out, extends to the interpretation and assessment of the phenomenon of increasing (political) group polarization. I do not intend to settle the matter here. Rather, in this talk I will focus on an argument recently put forward by Dorst (2020) against the irrationalist explanation of polarization. Firstly, I will present an informal version of the argument. Secondly, I will take into consideration and critically evaluate two ways in which it could be formalized. Finally, I will suggest some questions to be addressed by future research.

Paul Égré (CNRS): Truth and Falsity in Buridan’s Bridge

Wednesday, 19 May, noon

A chapter of Cervantes’s Don Quixote (II, 51) confronts Sancho Panza with a version of the Liar Paradox, also known as Buridan’s Bridge (Sophismata, chap. 8, sophism 17). The law declares that whoever comes to cross a certain bridge will be hanged at the gallows if they make a false statement as to the goal of their passing, and will pass free if they tell the truth. Then, “one man said that by the oath he took he was going to die upon that gallows that stood there, and for nothing else”. The judges face a quandary: if the man passes free, he will have lied, and must be hanged; but if he is hanged, he will have spoken truly, and should be set free. The main motivation for this paper is to examine the sense in which the problematic sentence can be considered both true and false, in line with dialetheism (Priest 2003) and with the strict-tolerant account of the Liar (Ripley 2012, Cobreros et al. 2013), but also with earlier remarks by Jacquette (1991), suggestive of a dialetheist analysis. Buridan’s own solution is that, because it pertains to a future contingent, the utterance is neither true nor false (see also Ulatowski 2003). In Cervantes’s case, Sancho judges that the utterance is both true and false, and lets the man pass freely, in a way that is more dialetheist-friendly. One issue here is whether this outcome is compatible with the man’s oath and intention to die at the gallows, assuming the utterance is true. I will discuss how an account along ST lines might deal with this problem.

María Martínez-Ordaz (Federal University of Rio de Janeiro): Understanding defective theories: an inferentialist approach to scientific understanding

Wednesday, 12 May, noon

Here, I aim to explain under which circumstances scientists’ reports of having understood a defective theory might be legitimate. In particular, I argue that scientists understand a defective theory if they can recognize the theory’s underlying inference pattern(s), and if they can reconstruct and explain what is going on in specific cases of defective theories as well as consider what the theory would do if not defective – even before finding ways of fixing it. To do so, I focus on cases in which the defective character of either entities or theoretical chunks might be both useful and an essential feature of what is being understood; and I contend that, when falsehoods or any other defective elements are included in the content of understanding, they must be joined by the (non-classical) inference patterns that allow them to remain ‘well behaved’.

Francesco De Pretis (University of Modena and Reggio Emilia): EA³: A softmax algorithm for evidence appraisal aggregation

Wednesday, 5 May, noon, Aula 10, Palazzo Nuovo

Real World Evidence (RWE) and its uses are playing a growing role in medical research and inference. Prominently, the 21st Century Cures Act – approved in 2016 by the US Congress – permits the introduction of RWE for the purpose of risk-benefit assessments of medical interventions. However, appraising the quality of RWE and determining its inferential strength are, more often than not, thorny problems, because evidence production methodologies may suffer from multiple imperfections. The problem thus arises of how to aggregate multiple appraised imperfections and perform inference with RWE. In this presentation, we develop an evidence appraisal aggregation algorithm called EA³. Our algorithm employs the softmax function – a generalisation of the logistic function to multiple dimensions – which is popular in several fields: statistics, mathematical physics and artificial intelligence. We prove that EA³ has a number of desirable properties for appraising RWE, and we show how the aggregated evidence appraisals computed by EA³ can support causal inferences based on RWE within a Bayesian decision-making framework. We also discuss features and limitations of our approach and how to overcome some of these limitations.
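The softmax function at the heart of EA³ is standard; the sketch below only illustrates how a softmax-based aggregation of appraisal scores might look (the scores and the weighting scheme are hypothetical, not the actual EA³ definition from the paper).

```python
import math

def softmax(scores):
    """Standard softmax: maps real-valued scores to positive weights summing to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def aggregate_appraisals(appraisals):
    """Hypothetical aggregation: weight each appraised quality dimension by its
    softmax share and return a single score. EA3 itself is defined in the paper;
    this only shows the role played by the softmax function."""
    weights = softmax(appraisals)
    return sum(w * a for w, a in zip(weights, appraisals))

# Example: appraisals of three quality dimensions of a body of RWE, on a 0-1 scale.
print(aggregate_appraisals([0.9, 0.6, 0.3]))
```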

Michele Pra Baldi (University of Cagliari): Paraconsistent Belief Revision: An Algebraic Investigation

Wednesday, 21 April, noon

We provide a logico-algebraic investigation of AGM belief revision based on the logic of paradox (LP). First, we define a concrete belief revision operator for LP, proving that it satisfies a generalised version of the traditional AGM postulates. Moreover, we investigate to what extent the Levi and Harper identities, in their classical formulation, can be applied to a paraconsistent account of revision. We show that a generalised Levi-type identity still yields paraconsistent-based revisions that are fully compatible with the AGM postulates. The main outcome is that, once the classical AGM framework is lifted to an appropriate level of generality, it still serves as a regulative ideal for the treatment of paraconsistent-based epistemic operators.

Enzo Crupi (University of Turin): A puzzle about reasons

Wednesday, 14 April, noon

Reasons are grounds for belief. To say that p is a reason for q is to say that some relation of support obtains between p and q in such a way that one is justified in believing that q holds in virtue of the assumption that p holds. We postulate a connection between reasons and conditionals, then survey a variety of alternative approaches including Adams’s probabilistic conditional, the Lewis/Stalnaker conditional, the classical belief revision account of conditionals, and Hans Rott’s theory of difference-making conditionals. We discuss extant options and contrast them with our favorite interpretation, based on a specific characterisation of evidential conditionals. In our assessment, we test the competing views against a puzzle which crucially involves the notion of supererogatory reasons. As a bonus, we get new insight into the logical analysis of concessives. [Joint work with A. Iacona]

Emanuele Ratti (Johannes Kepler University Linz): AI and Medicine

Wednesday, 31 March, noon

In the past few years, several scholars have been critical of the use of machine learning systems (MLS) in medicine, for three reasons in particular. First, MLSs are theory agnostic. Second, MLSs do not track any causal relationship. Finally, MLSs are black boxes. For all these reasons, it has been claimed that MLSs should be able to provide explanations of how they work. Recently, Alex John London has argued that these reasons do not stand up to scrutiny. As long as MLSs are thoroughly validated by means of rigorous empirical testing, we do not need explanations of how they work or why they have generated certain outputs. London’s view is based on three assumptions: (1) we should treat MLSs as akin to pharmaceuticals, (2) the opacity of the internal processes of MLSs is all that is in need of explanation, (3) MLSs have unlimited interoperability. In this talk, I will argue that, if London’s assumptions are questioned, we can show that at least two types of explanation about MLSs are fundamental. The first is a thorough account of the design process of MLSs, beyond black boxes. The second is an explanation by translation: in order to integrate MLSs within a system of practice in which they are supposed to work, we need to present MLS findings in a way that is compatible with the conceptual and representational apparatus used in that system of practice.

Jan Sprenger (University of Turin): Causal Attribution and Partial Liability: A Probabilistic Model

Wednesday, 24 March, noon

Probabilistic models have been successful at quantifying the strength of the causal influence of an intervention on a target event. It is unclear, however, whether such measures of causal strength transfer to the problem of causal attribution—more specifically, the degree to which a certain event X1 has contributed to the occurrence of another event Y, especially when other causes X2, X3, etc. are present. This is not a purely philosophical problem: it is relevant for the practice of tort law, whenever the legal system allows for partial liability instead of all-or-nothing liability. There is a widespread intuition in tort law, encoded in various rulings and scholarly texts, that a defendant is liable for a claimant’s loss to the degree—and only to the degree—to which the defendant’s actions caused the claimant’s loss. The question is, of course, how this degree should be quantified. In this talk I argue against a proposal by Kaiserman (2016, Proc.Arist.Soc.) and propose two alternative models with principled motivations. The idea behind both proposals is to express the “lost chance for a better result”. As a result, we obtain two probabilistic models which are isomorphic to measures of absolute and relative risk reduction. A case from Dutch tort law serves as a running example.
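For reference, the standard epidemiological measures alluded to at the end of the abstract can be written as follows (a minimal sketch of the textbook definitions, with the exposure written as x₁ and the adverse outcome as Y; these are not the attribution models proposed in the talk):

```latex
\mathrm{ARR} \;=\; P(Y \mid \neg x_1) - P(Y \mid x_1),
\qquad
\mathrm{RRR} \;=\; \frac{P(Y \mid \neg x_1) - P(Y \mid x_1)}{P(Y \mid \neg x_1)}
```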

Javier Suárez (Universität Bielefeld): Using networks to build biological explanations: The paradigmatic case of the microbiome

Wednesday, 17 March, noon

In this talk, I will present the type of explanation characteristic of microbiome research. Microbiomes are collections of microorganisms of diverse species that interact together and reside in a shared niche. While some aspects of microbiome research are mechanistic, looking to uncover the molecular basis of certain properties by decomposing the system into its component parts, other aspects of the microbiome are not amenable to this approach, because the relevant properties only exist as a consequence of the global structure of the microbiome. Network research is a prominent tool employed to study these types of properties. My goal will be to explain what network research consists in, how it differs from mechanistic research, and why it is essential for studying certain properties of the microbiome.

Alberto Tonda (Université Paris-Saclay, INRAE) and Pietro Barbiero (University of Cambridge): Emergence of Meaning in Machine Learning Embeddings

Wednesday, 10 March, noon

In machine learning, embeddings are defined as vector spaces that describe data, where (relative) positions of the data points and (relative) displacements have a meaning. Most interestingly, the meaning emerging in embeddings is often not explicitly accessible in the data used to train a machine learning algorithm, but it is rather derived as an unintended byproduct of the training process. In this talk, we present a high-level introduction to embeddings, showing a few remarkable examples from the state of the art in the field: embedding for words, photos, and paintings. We then discuss the limitations of this technique, consider possible similarities with living organisms, and make the case that even humans might possess similar vector spaces, used for cognitive processes.
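As a toy illustration of “positions and displacements having a meaning”, here is a minimal sketch with made-up three-dimensional vectors (real embeddings such as word2vec have hundreds of dimensions and are learned from data, not hand-written):

```python
import numpy as np

# Hypothetical 3-dimensional "word vectors"; real embeddings are learned from data.
vec = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.8, 0.1]),
    "woman": np.array([0.1, 0.1, 0.8]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The displacement king - man + woman lands closest to "queen":
target = vec["king"] - vec["man"] + vec["woman"]
print(max(vec, key=lambda w: cosine(vec[w], target)))
```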

Andrea Iacona (University of Turin): Relative validity

Wednesday, 3 March, noon

Validity is usually conceived as an absolute notion. In this presentation it will be shown that validity can be relativized to certain parameters. Some consequences of this relativization will be explored.

Francesca Boccuni (Vita-Salute San Raffaele University) and Luca Zanetti (IUSS Pavia): How to Hamlet a Caesar

Wednesday, 24 February, noon

Neologicism aims at providing a foundation for arithmetic on the basis of so-called Hume’s Principle (HP), which states that the number of the Fs is identical with the number of the Gs iff there is a one-to-one correspondence between the concepts F and G. Philosophically, Neologicism amounts to three main claims: (1) HP is analytic; (2) HP is a priori; (3) HP captures the nature of cardinal numbers. Nevertheless, Neologicism is faced with the so-called Caesar problem: though HP provides an implicit definition of the concept Cardinal Number, which arguably might be known a priori, HP does not determine the truth-value of mixed identity statements such as “Caesar=4”. Neologicists tackle the Caesar problem by claiming that the applicability conditions of the concept Cardinal Number can be obtained from the identity conditions determined by HP, so that the truth of mixed identity statements such as “Caesar=4” can be determined in the negative. In this talk, we will argue that the neologicist solution to the Caesar problem gives rise to what we call the Caesar Problem problem: if the Caesar problem is indeed solved as neologicists intend, then (1)-(3) cannot be jointly argued for. We will also consider some ways in which neologicists can try to solve the Caesar Problem problem, and we will argue that none of these solutions is favourable to Neologicism. Finally, we will suggest a further perspective, i.e. “Neologicism without metaphysics”, in which (3) is abandoned in favour of (1) and (2).

Lorenzo Casini (University of Turin): Meta-analyses and conflicts of interest

Wednesday, 17 February, noon

In medical research, meta-analyses of randomized controlled trials (RCTs) are praised for mitigating the problem of confounding due to the small sample size of individual RCTs. Yet, meta-analyses have also been severely criticized (Ioannidis 2016; Stegenga 2018). An underestimated limitation of meta-analyses is that most RCTs suffer from conflicts of interest (Roseman et al. 2011). The closer one gets to drug approval in the process of assessment of benefits and harms, the stronger the conflicts of interest become. Indeed, most phase III trials are sponsored by pharmaceutical companies. “Meta-level” evidence seems to suggest that conflicts of interest raise the probability of biased estimates (Friedman and Richter 2004; Kjaergard and Als-Nielsen 2002). And yet, “first-level” evidence would indicate that the very same trials are prima facie more reliable, in virtue of their better design. Intuitively, the two considerations—trusting studies with a better design and distrusting studies subject to conflicts of interest—pull in different directions, namely including vs excluding RCTs with conflicts of interest from meta-analyses. Although conflicts of interest are reported by individual studies, current protocols for meta-analyses do not specify how to deal with this information. We believe that this is an important methodological gap, which deserves more attention. We endorse the principle of total evidence. The endorsement of this principle has two consequences. First, studies with a better design cannot be disregarded if one is to accurately calculate effect sizes. At the same time, the information that several such studies may contain hidden biases cannot be neglected either. In the literature on evidence synthesis, some (Welton et al. 2009; Dias et al. 2010; Verde et al. 2020) have proposed models for adjusting the overall estimate of effect sizes based on the prior probability that a meta-analysis contains biased studies. In this paper, we review such proposals and argue that they need to be modified to account for all of our evidence, whether first-level or meta-level. [Joint work with J. Sprenger]

Elena Casetta (University of Turin): From Conserving Nature to Promoting Naturalness

Wednesday, 10 February, noon

Since the beginning of the XX century, nature has been the subject of conservation and protection actions. Yet, such a pivotal role is today called into question for at least two reasons: first, “biodiversity” has widely replaced “nature” in conservation policies (Blandin 2007); second, because of the pervasiveness of the anthropogenic impact on the biosphere, there would be no nature left to conserve (Editorial 2008). Shall we just get rid of “nature” and “natural” in conservation debates? I do not think so. In this talk, I will argue for the importance of the concept of naturalness as a guide for conservation, and I will try to provide an account of the natural/artificial distinction suited to contemporary conservation framing. I will sketch this view with the help of some case studies, and I will show some of its advantages.

Stefano Bonzio (University of Turin): Beyond de Finetti’s coherence

Wednesday, 3 February, noon

This talk is based on the ideas of the MC project, which I will carry out over the next three years. Coherence is a criterion that Bruno de Finetti introduced to provide a subjective foundation of probability theory. In his view, probability can be identified with coherent degrees of belief for events to happen. Such degrees are regarded as fair prices of gambles in a suitably defined betting situation. The betting scenario involves two players: a bookmaker, who fixes betting odds for events, and a gambler, who chooses real-valued stakes for each event. A bookmaker is coherent when they choose odds in such a way as to prevent the gambler from securing a sure win. Interestingly, de Finetti’s theorem states that a bookmaker is coherent if and only if the betting odds are chosen according to Kolmogorov’s axioms of probability.
In a recently published paper (coauthored with T. Flaminio and P. Galeazzi), we introduced a simple, yet realistic, modification of de Finetti’s betting game into a situation where a gambler is allowed to place stakes on two or more coherent books over the same set of events. In this scenario, the coherence of the books alone is not sufficient to bar the gambler from obtaining a sure win. We have therefore introduced joint-coherence as the criterion that a multiplicity of bookmakers must satisfy in order to prevent the gambler from securing a sure win.
After setting the scene by providing a geometrical characterization of joint-coherence, I will present the main research objectives of the project.
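As a simple illustration of the single-bookmaker case (a worked toy example, not part of the talk): suppose a bookmaker posts betting quotients 0.4 for an event E and 0.4 for its complement. These violate additivity (they sum to 0.8 rather than 1), and a gambler who bets on both events is guaranteed a profit whatever happens:

```python
# Betting quotients posted by an incoherent bookmaker for E and not-E (they sum to 0.8, not 1).
quotients = {"E": 0.4, "not-E": 0.4}
stakes = {"E": 1.0, "not-E": 1.0}   # the gambler bets 1 on each event

def gambler_net(world):
    """Net gain of the gambler in a given world: for each event she pays
    stake * quotient up front and receives the stake if the event is true."""
    truth = {"E": world == "E-true", "not-E": world == "E-false"}
    return sum(stakes[e] * truth[e] - stakes[e] * quotients[e] for e in quotients)

# In either world the gambler gains 0.2: a sure win, so the book is incoherent.
print(gambler_net("E-true"), gambler_net("E-false"))
```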

Erica Onnis (University of Turin): Emergence: From physics to biology

Wednesday, 27 January, noon

The twofold aim of this talk is to introduce the debate about emergence and provide examples of emergent phenomena occurring in the physical, chemical, and biological domains. The talk is divided into three parts. In the first one, I present the notion of emergence and what can be called its “Standard View”. In the second part, I provide some examples of emergent phenomena occurring in physics (emergent spacetime and quantum decay), chemistry (molecular geometry), and biology (the pigmentation patterns of Timon lepidus and some features of ant colonies). From the examination of these examples, it follows that emergence is a highly heterogeneous phenomenon whose features depend upon the ontological domain in which it appears. For these reasons, in the third part, I briefly suggest revising and broadening the Standard View in light of a more pluralist property cluster theory of emergence.

Federico Boem (University of Florence): Scientific Protocols as Recipes: A New Way to Look at Experimental Practice in the Life Sciences and the Hidden Philosophy Within

Wednesday, 20 January, noon

Experimental practice in contemporary molecular biology oscillates between the creativity of the researcher in tinkering with the experimental system and the necessity of standardizing methods of inquiry. Experimental procedures, when standardized in lab protocols, can indeed be seen as actual recipes. Considering these protocols as recipes can help us understand some epistemological characteristics of current practice in molecular biology. On the one hand, protocols represent a common ground, i.e. the possibility of reproducibility, which constitutes one of the essential properties for contemporary science to define an actual discovery. At the same time, however, protocols are flexible enough to be adapted by the individual researcher (within a space of maneuver given by the experimental system and by the practices that each discipline gives to itself) to his/her specific needs. These variations, just like those of recipes, remind us that the legitimacy of an experimental practice involves both objective and subjective constraints, and that it is articulated against a fuzzy background rather than a rigid and clear context. Moreover, looking at experiments from this perspective can provide a key to understanding how different forms of science that adopt different methodologies but investigate the same phenomena, such as computational biology, differ precisely in their use of a different “cookbook”. Indeed, given the procedural/operational realism of biologists towards phenomena, the clash of different procedures has also opened a discussion about the nature and the meaning of the obtained results. Thus, according to the recipe perspective, the methodological struggle among scientists over the nature of biological phenomena (and their ways of discovery) might be seen as a not always explicit epistemological debate arising from the practice of science itself.

Marco Giovanelli (University of Turin): Nothing but Coincidences. Adventures and Misadventures of Einstein’s Point-Coincidence Argument

Wednesday, 9 December, noon

In his 1916 review paper on general relativity, Einstein made the often-quoted remark that all physical measurements amount to a determination of coincidences, like the coincidence of a pointer with a mark on a scale. This argument, which was meant to express the requirement of general covariance, immediately gained great resonance. Philosophers such as Schlick found that it expressed the novelty of general relativity, but the mathematician Kretschmann deemed it trivial and valid in all spacetime theories. With the relevant exception of the physicists of Leiden (Ehrenfest, Lorentz, de Sitter, and Nordström), who were in epistolary contact with Einstein, the motivations behind the point-coincidence remark were not fully understood. Only at the turn of the 1960s did Bergmann (Einstein’s former assistant in Princeton) start to use the term ‘coincidence’ in a way that was much closer to Einstein’s intentions. In the 1980s, Stachel, projecting Bergmann’s analysis onto his historical work on Einstein’s correspondence, was able to show that what he started to call ‘the point-coincidence argument’ was nothing but Einstein’s answer to the infamous ‘hole argument.’ The latter has enjoyed enormous popularity in the following decades, reshaping the philosophical debate on spacetime theories. The point-coincidence argument did not receive comparable attention. By reconstructing the history of the argument and its reception, this paper argues that this disparity of treatment is not justified. Einstein’s claim that only coincidences are observable is not only a historical curiosity, but offers a philosophical insight into the nature of spacetime after general relativity.

Fabrizio Calzavarini (University of Bergamo & LLC, Turin) and Gustavo Cevolani (IMT Lucca): Abductive reasoning in cognitive neuroscience: Weak and strong reverse inference

Wednesday, 2 December, noon

Abductive inference is reasoning backward from facts to their possible explanations, or from effects to their possible causes. It is at play in a wide array of contexts, from everyday life to science. In cognitive neuroscience, researchers often infer from specific activation patterns to the engagement of particular mental processes. This “reverse inference” [RI] plays a crucial role in many applications of fMRI, both inside and outside cognitive neuroscience. In this talk, we argue that the current debate on RI overlooks an important distinction. In the philosophical debate, two separate roles for abductive inference are discussed: (i) a heuristic role, that of generating new explanatory hypotheses and assisting discovery (weak abduction); (ii) a justificatory role, that of evaluating and possibly accepting selected hypotheses (strong abduction). We suggest that the heuristic and the justificatory role of abduction in cognitive neuroscience can be usefully separated by distinguishing a weak and a strong form of RI. We claim that, although strong RI may often be fallacious, weak RI plays an essential role as a search strategy that tells us which explanatory conjecture we should single out first for further empirical inquiry – or, more generally, which suggests a short and promising (though not necessarily successful) path through the exponentially explosive search space of possible explanatory hypotheses.
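A standard Bayesian rendering of why strong RI is fragile (a sketch of the usual textbook formulation, not the analysis given in the talk): the probability that mental process M is engaged given activation A depends both on the selectivity of the activation and on the base rate of M,

```latex
P(M \mid A) \;=\; \frac{P(A \mid M)\, P(M)}{P(A \mid M)\, P(M) + P(A \mid \neg M)\, P(\neg M)}.
```

When the region also activates in many tasks not involving M (high P(A | ¬M)), the posterior remains low even if P(A | M) is high, which is one way to see why the weak, heuristic form of RI may be more defensible than the strong, justificatory one.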

Malvina Ongaro (University of Turin): A notion of relevance for rational decision modelling

Wednesday, 25 November, noon

Different aspects of a decision can be assessed in terms of their rationality. Among these, there is the selection of the elements of the model representing the decision problem. Under a notion of rationality that includes a criterion of efficiency, the agent should have in her model all and only the elements of her environment that are relevant for the decision. In this paper, I present a framework to analyse the notion of relevance for a decision in terms of reasons. I define the elements of the decision model and of the environment as propositions, which can be the object of two types of attitudes: a conative and an epistemic one. These lead to two different ways of being relevant for a decision. Broadly, I claim that a proposition is epistemically relevant if it provides reasons to expect certain consequences of an action, and it is evaluatively relevant if it provides reasons to assign some value to such consequences. Finally, I explore some applications of this analysis of relevance to the notions of preference and of rationality.

William D’Alessandro (MCMP): Modeling in Mathematics: Understanding, Explanation, Counterfactuals

Wednesday, 18 November, noon

Models are indispensable tools of scientific inquiry, and one of their main uses is to improve our understanding of the phenomena they represent. How do models accomplish this? And what does this tell us about the nature of understanding? While much recent work has aimed at answering these questions, philosophers’ focus has been squarely on models in empirical science. I aim to show that pure mathematics also deserves a seat at the table. I begin by presenting two cases: Cramér’s model of the prime numbers and the dyadic model of the integers. These cases show that mathematicians—like empirical scientists—rely on simple models to gain understanding of complex phenomena, including some that are pervasively distorted and unrealistic. There are also morals here for some much-discussed theses about scientific understanding. Two issues in particular are worth highlighting. First, modeling practices in mathematics seem to confirm that one can gain understanding without obtaining an explanation (contra [de Regt 2009], [Khalifa 2012], [Strevens 2013], [Trout 2007]). Second, these cases cast doubt on the idea that unrealistic models confer understanding by imparting counterfactual knowledge (contra [Bokulich 2011], [Grimm 2011], [Hindriks 2013], [Lipton 2009], [Rice 2016], [Saatsi forthcoming]).
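To fix ideas on the first case, here is a minimal sketch assuming only the usual textbook description of Cramér’s model, in which each integer n ≥ 3 is declared “prime” independently with probability 1/ln n. Despite being wildly unrealistic about the structure of the primes, the random model tracks their actual density fairly well:

```python
import math, random

def sieve_count(limit):
    """Number of actual primes up to limit (simple sieve of Eratosthenes)."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            for m in range(n * n, limit + 1, n):
                is_prime[m] = False
    return sum(is_prime)

def cramer_count(limit, seed=0):
    """Number of 'Cramer primes' up to limit: n >= 3 is prime with probability 1/ln(n)."""
    rng = random.Random(seed)
    return 1 + sum(rng.random() < 1 / math.log(n) for n in range(3, limit + 1))  # the 1 counts 2

limit = 100_000
print(sieve_count(limit), cramer_count(limit))  # the two counts are of the same order
```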

Michele Lubrano (University of Turin): On (not) being ad hoc: use-novelty as an epistemic virtue in mathematics

Wednesday, 11 November, noon

The epistemic virtues of mathematical proofs are receiving much attention from epistemologists of mathematics. In the recent literature, it is not hard to find interesting analyses of the depth, the elegance, the purity and the explanatoriness of proofs. I would like to enrich this picture by adding a virtue that, unjustly, has received no attention so far. Following Worrall (1985), I’ll call it use-novelty. I’ll try to define it in a precise way and show that a proof displaying such a virtue is not ad hoc. I’ll also show that the reason why mathematicians are unsatisfied with certain proofs is very often that they are ad hoc. It follows that use-novelty is an important epistemic virtue for mathematical proofs.

Mattia Andreoletti (Vita-Salute San Raffaele University): Genuine vs. Manufactured Scientific Controversies: The Case of Statins

Wednesday, 4 November, noon, Aula 11, Palazzo Nuovo

Science progresses through debate and disagreement, and scientific controversies play a crucial role in the growth of scientific knowledge. But not all controversy and disagreement in science is progressive. Sometimes controversies can be “manufactured”, and what looks like genuine scientific disagreement can be a distortion of science set up by non-scientific actors (e.g. interest groups). Manufactured controversies are detrimental to science because they can hinder scientific progress and eventually bias evidence-based decisions. The first goal of this paper is to elucidate the distinction between ‘pseudo’ and ‘genuine’ scientific controversies, and to provide a qualitative methodology, based on the literature on expertise, for distinguishing between the two. We illustrate six epistemic criteria for distinguishing pseudo from genuine scientific debates in science, and in medicine in particular. We will then apply these criteria to a case study: the controversy over statins, widely prescribed drugs used to decrease the level of cholesterol in order to reduce the risk of cardiovascular disease.

Giuliano Rosella (University of Turin): A Truthmaker Semantics for Modal Logic

Wednesday, 28 October, noon

In the present work, I introduce a modal extension of Kit Fine’s truthmaker semantics due to Johannes Korbmacher. The main idea is to combine a truthmaker model (i.e. a state model) with a Kripke frame and relativize the relation of truthmaking with respect to possible worlds. From this extended structure, we manage to get semantic clauses for modal formulas in the language. I then show how within our modal framework we can provide a semantics for the modal logic of analytic containment (AC_L) and prove some interesting relations between AC_L and some non-classical modal logics.

Samuele Iaquinto (University of Turin): A philosophically neutral semantics for perception sentences, and beyond

Wednesday, 14 October, noon

Jaakko Hintikka proposed treating objectual perception sentences, such as ‘Alice sees Bob’, as de re propositional perception sentences. Esa Saarinen extended Hintikka’s idea to eventive perception sentences, such as ‘Alice sees Bob smile’. These approaches, elegant as they may be, are not philosophically neutral, for they presuppose, controversially, that the content of all perceptual experiences is propositional in nature. The aim of our paper is to propose a formal treatment of objectual and eventive perception sentences that builds on Hintikka’s modal approach to propositional attitude ascriptions while avoiding controversial assumptions about the nature of perceptual experiences. Despite being simple and theoretically frugal, our approach is powerful enough to express a variety of interesting philosophical views about propositional, objectual, and eventive perception sentences, thus enabling the study of their inferential relationships. Our framework can also find interesting applications in other contexts: we will discuss how to extend it to attributions of knowledge by acquaintance.

Cristina Sagrafena (University of Turin): The subjective rational choice of scientific theories: Sen meets Bayes

Wednesday, 7 October, noon

Okasha (2011), applying Arrow’s impossibility theorem (1951) for social choice to scientific theory choice, argues that there is no acceptable theory choice algorithm based on Kuhn’s criteria (1977). Namely, there is no theory choice rule that satisfies the analogues of the Arrovian conditions.

The only escape route is to impose on the rule desiderata that require it to take into account more information about the preferences of each criterion over the alternative theories, following Sen’s work on social choice (1970).

Bradley (2017) highlights that Okasha, in his argument, relies on an objective notion of rational theory choice, according to which the choice is determined by agent-independent principles of rationality. In light of this, however, Sen’s escape route can be applied only in narrow domains of science, as it requires conditions that are difficult to meet.

A remedy to this problem is to use Sen’s escape route according to a subjective notion of rational theory choice. The escape-route desiderata then become principles of rationality that apply to all agents, but the choice is determined by subjective elements concerning the evaluation of the criteria and their trade-offs.

Since Bradley does not give any example of an algorithm for applying Sen’s escape route in this way, my suggestion is to use the subjective Bayesian algorithm. In fact, Bayes’ rule satisfies the escape-route desiderata, as proved by Okasha (ibid., Appendix), and it informs the prior probabilities using the subjective trade-offs among criteria that do not meet the aforementioned conditions, as argued by Salmon (1990).

However, an objection can be made to my proposal. Stegenga (2015) claims that Arrow’s theorem also applies to an algorithm that translates the information provided by the criteria into prior probabilities. Conversely, I will show that, as Salmon foresees, there can be many acceptable algorithms of that kind because of Sen’s escape route.

Claudio Calosi (Université de Genève) and Samuele Iaquinto (University of Turin): Quantum Fragmentalism

Wednesday, 17 June, noon

Fragmentalism was originally introduced as a new A-theory of time. Recently it has been advocated — or at least considered — as a possible interpretation of physical theories such as Special Relativity. In a celebrated paper, Simon suggests that fragmentalism offers a new insight into Quantum Mechanics as well. In particular, Simon contends that fragmentalism delivers a new realist account of the quantum state — which he calls conservative realism — according to which: (i) the quantum state is a complete description of a physical system; (ii) the quantum (superposition) state is grounded in its terms; and (iii) the superposition terms are themselves grounded in local goings-on about the system in question. The key insight in Simon (2018) is to identify different terms in a superposition state with Fine’s fragments — or states of affairs within those fragments. In this paper we offer an argument against this core insight. This raises the question of whether there are other viable forms of quantum fragmentalism.

Stefano Bonzio (University of Turin): A new proposal for the logical modeling of ignorance

Thursday, 11 June, 17:00

Ignorance is traditionally defined by recourse to the epistemic logic S4. In particular, an agent ignores a formula φ when s/he knows neither φ nor its negation ¬φ: ¬Kφ ∧ ¬K¬φ (where K is the epistemic operator for knowledge). In other words, ignorance is essentially interpreted as “lack of knowledge”. Contrary to this trend, we believe that the notion of ignorance can be treated as a primitive notion. In this paper, we introduce and investigate a modal logic with a primitive epistemic operator I, modeling ignorance. Our modal logic is essentially constructed on the modal logics based on three-valued logic introduced by Krister Segerberg. Such a non-classical propositional basis allows us to define a Kripke-style semantics with the following, very intuitive, interpretation: a formula φ is ignored by an agent if φ is neither true nor false in every world (epistemically) accessible to the agent. In particular, we axiomatize, and prove completeness and decidability for, the logic of reflexive (three-valued) Kripke frames, which we find the most suitable candidate for our novel proposal. We finally compare our approach with the most traditional ones, founded on the epistemic logic S4.

Silvia Tossut (University of Turin): The Psychological Science Accelerator: An Epistemological Perspective

Wednesday, 3 June, noon

The Psychological Science Accelerator (PSA) is the most recent attempt to overcome the replicability crisis in psychology. It consists of a globally distributed network of labs, whose main goal is the collection of a huge amount of comparable evidence.

In this talk, I analyze the PSA’s organizational structure and its official policies from an epistemological point of view, in order to assess whether it can be effective in counteracting the replicability crisis.

The comparison with previous projects addressing the replicability crisis in the field indicates that the PSA exhibits several elements of epistemological interest. I focus mainly on two connected assumptions of the PSA: (i) that open and transparent communication among the members can help in reaching an epistemic desideratum; (ii) that the acceleration of evidence acquisition is relevant to overcoming the crisis.

Starting from a thesis of Zollman’s network epistemology, I suggest that the relation between full communication in a network and acceleration of knowledge within it is not straightforward. Then, I argue that social epistemology can provide the instruments to analyze and improve the organizational structure of a network with an epistemic desideratum. I show what this could mean in practice in the case of the PSA.

In the conclusion, I argue that the PSA will arguably be good for psychological science, but for reasons other than acceleration.

Noah Van Dongen (University of Turin): A systematic review of the essay and opinion literature on null hypothesis significance testing

Wednesday, 27 May, noon

In the social sciences, this is a period of fierce debates on failed replications and failing statistical methods. It is our opinion that these debates in particular, and methodological development in general, could benefit from an increased understanding of the ubiquitously used statistical tool, the Null Hypothesis Significance Test (NHST). To this end, we will systematically review the literature on the misconceptions and limitations of NHST, and on the alternative methods suggested therein. Specifically, we focus on essays and opinion pieces in the academic literature that identify and discuss these issues, either in favour of or against NHST, with the use of arguments supported by references to empirical and mathematical evidence.

This project is still in progress. In this presentation, I will talk about its conception, aims, methods, and preliminary results.

Maria Paola Sforza Fogliani (University School for Advanced Studies IUSS Pavia): Pluralism about Criteria

Wednesday, 20 May, noon

According to logical pluralism (LP), more than one account of logical consequence is correct or legitimate; the contrasting view is, of course, logical monism. In particular, so far, LP has been advanced as the view that more than one logical theory complies either with (i) a given legitimacy criterion or (ii) a certain set of criteria – these being familiar parameters of the abductive methodology (e.g. simplicity, adequacy to the data, fruitfulness), or crucial philosophical requirements (e.g. necessity, normativity, formality). I’ll here argue for a different perspective, which consists in allowing for a plurality of legitimacy criteria or sets of legitimacy criteria themselves; call this – unsurprisingly – pluralism about criteria. After having presented the position, I’ll sketch a quasi-mechanical procedure that, being fed some selection criteria, is able to generate the spectrum of (nearly all) the possible views that can be maintained with respect to the number of legitimate logics out there. In the end, I’ll highlight some of the advantages that pluralism about criteria has over traditional LP, especially in answering monist objections.

Joachim Frans (Vrije Universiteit Brussel): Unificatory Understanding and Explanatory Proofs

Wednesday, 13 May, noon

One of the central aims of the philosophical analysis of mathematical explanation is to determine how one can distinguish explanatory proofs from non-explanatory proofs. There seems, however, little consensus on which proofs are genuinely explanatory. Instead of identifying proof(s) as the starting point of this task, I suggest we start by analyzing the concept understanding. More precisely, I will defend four claims: (i) understanding is a condition for explanation, (ii) unificatory understanding is a type of explanatory understanding, (iii) unificatory understanding is valuable in mathematics, and (iv) mathematical proofs can contribute to unificatory understanding. As a result, in a context where the epistemic aim is to unify mathematical results, I argue it is fruitful to make a distinction between proofs based on their explanatory value.

Matteo Plebani (University of Turin) and Michele Lubrano (University of Turin): Parts of Structures

Wednesday, 29 April, noon

“Nor is it self-contradictory that a proper part should be identical (not merely equal) to the whole, as is seen in the case of structures in the abstract sense. The structure of the series of integers, e.g., contains itself as a proper part” (Goedel, “Russell’s Mathematical Logic”, p. 130 in his Collected Works). When it comes to judging whether something is contradictory or not, Goedel is quite an authority. However, his claim is puzzling. On the one hand, he seems to be right: mathematics displays a variety of fractal-like structures. On the other hand, the idea that something is a proper part of itself sounds contradictory: what makes a part proper is precisely that it is not identical to the whole. Our aim is to show how this tension can be partially solved, by means of a definition of ‘proper part of a structure’ that connects in a natural way systems of objects with their abstract structures. We will also show that our account of what counts as part of an abstract structure has many interesting and surprising consequences.

Mattia Andreoletti (University of Turin) and Malvina Ongaro (University of Turin): Non-Epistemic Uncertainties in Evidence-Based Policy

Wednesday, 22 April, noon

The last few decades have seen the rise of a movement calling for more evidence in social policy. Within this trend, randomized experiments occupy a prominent place. The number of studies involving randomization has been constantly on the rise, receiving increasing attention in economics. However, philosophers of science have argued time and again that the drive for evidence-based policy tends to assign excessive weight to effectiveness considerations. And overconfidence in the epistemic merits of randomization leads to overlooking the non-epistemic aspects of policy-making. A complete understanding of evidence-based policy therefore requires an understanding of the latter. But while considerations on evidence abound in the philosophical literature, exhaustive analyses of non-epistemic issues are still lacking. In this paper, we aim to remedy this.

The importance of evidence in policy-making lies in its potential to reduce uncertainty. Since arguably any policy decision happens under conditions of uncertainty, resorting to evidence should lead to better decisions. Against this backdrop, we claim that (i) non-epistemic uncertainties play a key role in the assessment of the evidence and (ii) the extent to which a policy decision is determined by the evidence depends on how non-epistemic uncertainties are resolved.

Lorenzo Casini (Université de Genève): Causal Heterogeneity and Independent Components

Wednesday, 15 April, noon

Successful policy requires estimates of causal effects that are also useful predictors of possible interventions. An obstacle to finding suitable estimates is the causal heterogeneity of the population where effects are estimated. In this talk, I sketch a solution to the problem of heterogeneity based on an “independent component” (IC) representation of the data, as may be recovered by statistical techniques for blind source separation (e.g., independent component analysis). In previous work (Casini, Moneta, and Capasso 2020), I argued that the IC representation can aid causal inference by identifying “ill-defined variables” responsible for “ambiguous manipulations” (Spirtes and Scheines, 2004). Here, I suggest that this result may be brought to bear on the issue of causal heterogeneity. The key idea is that, in the presence of heterogeneity, the cause of interest transmits its effect ambiguously — because it is a coarse-grained variable with heterogeneous causal roles in different subpopulations, or because its influence is modulated by latent causes of the outcome. By uncovering hidden sources of variation, the IC representation can separate different component effects underpinning the aggregate effect. This strategy promises to help assess the reliability of causal models and to reduce the uncertainty of model-based decisions.

Casini, L., Moneta, A. and Capasso, M. (2020). Variable definition and independent components. Unpublished manuscript.

Spirtes, P. and Scheines, R. (2004). Causal inference of ambiguous manipulations. Philosophy of Science, 71(5):833–45.
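Blind source separation via ICA, as mentioned in the abstract above, is a standard technique; here is a minimal scikit-learn sketch (a generic illustration, not the procedure of Casini, Moneta, and Capasso 2020):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Two independent latent sources...
s1 = np.sin(2 * t)                        # smooth signal
s2 = np.sign(np.sin(3 * t))               # square-wave signal
S = np.c_[s1, s2] + 0.05 * rng.standard_normal((2000, 2))

# ...observed only through an unknown linear mixing.
A = np.array([[1.0, 0.5], [0.6, 1.0]])
X = S @ A.T

# FastICA recovers the independent components (up to sign and scale).
ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)
print(S_hat.shape)  # (2000, 2): candidate underlying sources of variation
```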

Cristina Sagrafena (University of Turin): Scientific Rationality is and can be Silent

Wednesday, 1 April, noon

Samir Okasha (2011), applying Arrow’s theorem (1951) to theory choice on the basis of the Kuhnian epistemic criteria (Kuhn 1977), argues that there is no acceptable theory choice rule, namely no theory choice rule that satisfies the Arrovian desiderata for social choice rephrased for theory choice. The only promising escape route seems to be enlarging the information that the theory choice rule takes in, so that its inputs provide more-than-ordinal information and permit inter-criteria comparisons, following Amartya Sen’s work on social choice (1970). Regarding the argument just sketched, Seamus Bradley (2017) points out that Okasha relies on an objective reading of rational theory choice, according to which there are agent-independent standards for choice among theories. Conversely, he proposes to use Sen’s escape route in the light of a subjective idea of rational theory choice. In fact, individual scientists’ choices among theories implicitly reveal subjective trade-offs among the virtues, which can meet the desiderata of Sen’s escape route. In this talk I ask what happens to the social choice framework applied to theory choice if we impose on the latter the desideratum of objectivity, arguably the most important desideratum for theory choice. For this purpose, I take into consideration two ideas of objectivity: objectivity as transformative criticism (Longino, 1990) and objectivity as the value-free ideal. I will show that since, in both cases, subjective trade-offs of epistemic (and contextual) values are recognized (and encouraged), Sen’s escape route should be used in light of a subjective reading of rational theory choice.

Federico Boem (University of Turin) and Stefano Bonzio (Polytechnic University of the Marche): A Logic for Scientific Attitude?

Wednesday, 25 March, noon

Individuating the (or even “a”) logic of scientific discovery appears to be a very hard task. An easier effort is to try to figure out a way to logically describe the epistemic attitude in certain experimental contexts of scientific practice. Accordingly, we claim that classical logic cannot play such a descriptive role. We propose, instead, one of the three-valued logics in the Kleene family: the one apparently classified as the least attractive, namely Hallden’s logic (for which there is no philosophical interpretation yet). We will show that, within certain experimental contexts, researchers tend to reason in ways that are not adequately represented by (classical) inferential schemes, as witnessed by the failure of modus ponens. However, this does not mean that scientists abandon a logical stance or any regulative idea of rationality. Indeed, researchers do make inferences. These inferences may seem “weak” when compared to traditional ones. However, this weakness is not detrimental. On the contrary, it nicely captures the type of scientific thinking occurring in these situations.

Giuliano Rosella (University of Turin) and Vita Saitta (University of Genoa): Truthmakers and Modality

Wednesday 18 March, noon

Truthmakers and truthmaking relations are the subject of one of the most flourishing debates within contemporary philosophy. In our work, we will focus on an inquiry concerning the truthmakers of modal truths, i.e. under what conditions something in the world makes true (is a truthmaker of) a statement of the form “possibly A” or “necessarily A”. After introducing the novel formal framework of truthmaker semantics (recently developed by Kit Fine in a series of publications [Fine, 2012, 2016, 2017]), we will propose to extend this framework by means of a new model-theoretic structure which is able to account for the truthmaker conditions of modal statements. We will argue that our approach has some philosophical advantages over Kit Fine’s system. Finally, we will: (i) explore some possible connections between our semantic framework and non-classical modal logics, and (ii) show how this framework may help us gain new insights into the different philosophical conceptions of truthmakers of modal truths.

Stefano Bonzio (Polytechnic University of the Marche): How to believe long conjunctions of beliefs: probability, quasi-dogmatism and contextualism

Wednesday, 29 January, noon, Aula 21, Palazzo Nuovo

Introduction: Federico Boem
Chair: Mattia Andreoletti
Comments by Vincenzo Crupi and Jan Sprenger

According to the Lockean thesis, a rational agent believes a proposition if and only if he assigns to it a probability which is greater than a previously fixed threshold r. The validity of this thesis is threatened by the Preface paradox, which is usually taken to show that the Lockean thesis clashes with the requirement that a rational agent should believe the conjunction of his own beliefs. In this talk, I show in which cases the Lockean thesis is compatible with such a requirement and hence can be consistently adopted as a reliable defining criterion for rational belief. In particular, relying on well-known results in probability theory, it is possible to compute the bound s such that, if a rational agent believes each of n statements with a probability at least s, then he also believes their conjunction with probability greater than the Lockean threshold r. Moreover, I show how, adopting non-standard probability, the Preface paradox is dissolved and the Lockean thesis is fully compatible with the conjunctive closure requirement. The price one has to pay for the proposed solutions to the paradox is the adoption of a view of rational belief, dubbed here “quasi-dogmatism”, according to which a rational agent should believe only those propositions of which he is “nearly certain”.
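One standard bound of this kind, sketched from elementary probability (the precise bound used in the talk may differ): if each of the n believed propositions A₁, …, Aₙ has probability at least s, then by the union bound

```latex
P\!\left(\bigwedge_{i=1}^{n} A_i\right) \;\geq\; 1 - \sum_{i=1}^{n}\bigl(1 - P(A_i)\bigr) \;\geq\; 1 - n(1 - s),
```

so any s > 1 − (1 − r)/n guarantees that the conjunction exceeds the Lockean threshold r. For example, with r = 0.9 and n = 10 beliefs, the agent must assign each conjunct a probability above 0.99.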

The talk is based on joint work with Gustavo Cevolani and Tommaso Flaminio.

Matteo Grasso (University of Wisconsin-Madison): IIT’s roadmap to a quale: how to explain any experience in 3 simple steps

Wednesday, 15 January, noon, Aula 21, Palazzo Nuovo

Liked by some, hated by many, Integrated Information Theory (IIT) is one of the main protagonists in the current debate on consciousness. In this talk I will propose a conceptualization of IIT’s methodology to explain the content of experience. After a brief introduction to IIT’s core ideas, I will propose a “roadmap to a quale”, a plan of action in 3 steps to explain any particular content of experience using the conceptual and empirical tools of IIT. As an instance of this approach, I will discuss and evaluate IIT’s recent account of the experience of visual space (Haun & Tononi, 2019). Finally, I will outline IIT’s plan for the future to tackle other contents of experience (such as phenomenal time, object invariance, and local qualities like color and pain).

Mattia Andreoletti and Michal Sikorski (University of Turin): Epistemic and Social Functions of the Replicability Principle in Experimental Sciences

Wednesday, 11 December, noon, Aula 21, Palazzo Nuovo

In the literature on the so-called “replicability crisis” there are two general sentiments about the scope and the scale of the crisis. On the one hand, methodologists and meta-researchers share a high general excitement. In this view, the crisis is appreciated as a period of intense examination and improvement of science (e.g., Vazire 2018). The crisis is bringing about many salient changes both in the methodology and in the social structure of scientific practice. According to the epistemic activists of replicability, we are witnessing a revolution that unquestionably improves science and its methods. On the other hand, philosophers of science have been more skeptical about this (e.g., Leonelli 2018; Andreoletti and Teira 2016; Norton 2015), as they have contested the “power” of replicability as a guiding principle to improve science. Aiming at replicability and acting accordingly is not a panacea for all the issues at stake in the crisis. Here we propose an alternative way to interpret the replicability principle and to evaluate the consequences of the methodological and social changes which science is undergoing. More specifically, we want to show that replication is both epistemically and socially relevant. Firstly, we will show not only that replicability is epistemically useful, but also that the turn toward replicability is consistent with traditional philosophy of science. We will provide a taxonomy that will be helpful to appreciate the epistemic import of replicating scientific experiments. Secondly, from a social standpoint, we can see recent “calls for institutionalizing replicability” as a way to restore the credibility of the scientific enterprise. In this regard, replications are inheriting some functions which were usually assigned to other institutions, such as peer review and journals, which have recently proved to be less reliable than we thought.

Carlo Debernardi (University of Florence), Eleonora Priori (University of Turin) and Marco Viola (University of Turin): Simulating epistemic bias in Academic recruiting

Wednesday, 27 November, noon, Aula di Medievale, Palazzo Nuovo

According to some authors (e.g. Gillies 2014, Viola 2017), when researchers are called to express a judgment over their peers, they might exhibit an epistemic bias that makes them favour those who belong to their own School of Thought (SoT). A dominant SoT is also most likely to provide some advantage to its members’ bibliometric indexes, because more people potentially mean more citations. In the long run, even a slight preference for one SoT over the others might lead to a monopoly, hampering the oft-invoked pluralism of research. In academic recruitment, given that those who are recruited to permanent positions will often become the recruiters of tomorrow, such biases might give rise to a self-reinforcing loop. However, the way in which this dynamic unfolds is affected by the institutional infrastructure that regulates academic recruitment. In order to reason about how the import of epistemic bias changes across various infrastructures, we built a simple Agent-Based Model using NetLogo 6.0.4, in which researchers belonging to rival SoTs compete to be promoted to professorships. The model makes it possible to represent the effect of epistemic and bibliometric biases, as well as to explore how they are affected by changes in several parameters.
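To convey the flavour of the self-reinforcing loop described above, here is a minimal Python sketch. It is not the authors’ NetLogo model: the committee size, the bias parameter and the majority-vote hiring rule are all invented for illustration.

```python
import random

def simulate(n_professors=100, n_rounds=200, bias=0.6, seed=1):
    """Toy model of School-of-Thought (SoT) bias in academic recruitment.

    Professors belong to SoT 'A' or 'B'. Each round one randomly chosen
    professor retires and a committee of three random professors hires a
    replacement: with probability `bias` each committee member votes for a
    candidate from their own SoT, otherwise they vote at random.
    Returns the share of SoT 'A' professors over time.
    """
    random.seed(seed)
    faculty = ['A'] * (n_professors // 2) + ['B'] * (n_professors - n_professors // 2)
    shares = []
    for _ in range(n_rounds):
        faculty.pop(random.randrange(len(faculty)))        # retirement
        committee = random.sample(faculty, 3)              # hiring committee
        votes = [m if random.random() < bias else random.choice('AB')
                 for m in committee]
        winner = max(set(votes), key=votes.count)          # majority vote
        faculty.append(winner)
        shares.append(faculty.count('A') / len(faculty))
    return shares

if __name__ == "__main__":
    trajectory = simulate()
    print(f"final share of SoT A: {trajectory[-1]:.2f}")
```

Running the toy model with different values of `bias` shows how even a modest preference for one’s own SoT can push the faculty composition away from parity over many hiring rounds.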

Mara Floris (University of Turin): Language and Categorisation: A Critique of Bottom-Up Approaches to the Effects of Labels on Perception

Wednesday, 6 November, noon, Aula 25, Palazzo Nuovo

In recent decades, the relationship between language and categorisation has been investigated by cognitive psychology, especially in the domain of developmental psychology. The main question is whether learning the names of objects affects the way they are categorised. There is little disagreement about the presence of such an effect, but the explanation of this phenomenon is still debated. According to the traditional view, labels affect categorisation because this is what language does: names refer.

Against the traditional view, two bottom-up explanations of this effect were proposed by Sloutsky (Sloutsky et al. 2001) and Plunkett (Plunkett et al. 2008). Both treat labels as perceptual features that increase or decrease the perceptual similarity of visual stimuli. I will discuss their positions and present some objections.

Erik Nyberg (Monash University): Automating Explanations of Evidence Impact and Model Selection

Wednesday, 30 October, noon, Aula di Medievale, Palazzo Nuovo

In a recent AI effort, as part of the CREATE-BARD project, we began to develop automated explanations of causal Bayesian network features. Our explanations have some interesting aspects when compared to the extensive prior philosophical work on scientific explanation. Rather than explaining why things happen (e.g. the causes), which has been the predominant goal of philosophers, our main goal is to explain the impact of evidence on other beliefs, including hypotheses and predictions. Yet the structure of these evidential explanations is markedly similar to causal ones, including the use of background variables and sensitivity. Rather than very simple networks, we try to explain longer, tangled paths and interactions between multiple pieces of evidence. Rather than using one probabilistic measure, as some philosophers have advocated for causal or explanatory power, our explanations will benefit from using multiple measures. Rather than assuming a network model, some of our explanations will address model selection. The implications of all this for the philosophy of scientific explanation, or vice versa, are not entirely clear.
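As a toy illustration of what “evidence impact” might amount to, the sketch below compares a hypothesis’s probability before and after conditioning on evidence in a hand-rolled two-node network, using two candidate measures. It is not the BARD system or its explanation machinery, and all numbers are invented.

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h, evidence=True):
    """Posterior P(H | E=evidence) in a minimal two-node network H -> E."""
    p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
    if evidence:
        return prior_h * p_e_given_h / p_e
    return prior_h * (1 - p_e_given_h) / (1 - p_e)

prior_h = 0.2                                    # P(H), invented
p_e_given_h, p_e_given_not_h = 0.9, 0.3          # CPT of E, invented
post = posterior(prior_h, p_e_given_h, p_e_given_not_h, evidence=True)

# Two candidate measures of the impact of observing E on H
impact_difference = post - prior_h
impact_ratio = post / prior_h
print(f"P(H) = {prior_h:.2f}, P(H|E) = {post:.2f}")
print(f"impact (difference) = {impact_difference:.2f}, impact (ratio) = {impact_ratio:.2f}")
```

Even in this minimal case the two measures rank the same evidence differently in absolute terms, which is one reason why relying on several measures rather than a single one can be attractive.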

Noah van Dongen and Jan Sprenger (University of Turin): A Fresh Look at Popper: A Bayesian Perspective on Severe Testing

Wednesday, 23 October, noon, Aula di Medievale, Palazzo Nuovo

The central concepts of Karl R. Popper’s account of hypothesis testing are “falsifiability” and “severity”. The fewer potential observations conform to a theory, the more falsifiable it is. A test of that theory, by contrast, is the more severe the more likely it is to produce observations inconsistent with the theory if it is actually incorrect. It is a common critique of Bayesian inference that these concepts, although methodologically valuable in science, drop out of the picture.

Our attempt to take a fresh look at Popper from a Bayesian perspective consists of five parts:
1) We contrast our understanding of severe testing with Mayo’s conceptualization of severity.
2) We connect these conceptualizations to statistical notions of fit and complexity.
3) We operationalize falsifiability and severity as testing specific hypotheses (i.e., a small range of parameter values) in contrast to an all-encompassing alternative hypothesis (e.g., all non-null parameter values).
4) We put this testing of specific hypotheses into practice in a Bayesian framework (see the sketch after this list).
5) We explore the potential benefits of our account for hypothesis testing in scientific practice.
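As a rough illustration of steps 3 and 4 (not the authors’ actual analysis: the binomial data model, the uniform priors and the numbers are all hypothetical), one can compare a “specific” interval hypothesis about a parameter with an all-encompassing alternative via a Bayes factor:

```python
from scipy import integrate, stats

def marginal_likelihood(k, n, lo, hi):
    """Average binomial likelihood of k successes in n trials,
    under a uniform prior for theta restricted to [lo, hi]."""
    integrand = lambda theta: stats.binom.pmf(k, n, theta)
    value, _ = integrate.quad(integrand, lo, hi)
    return value / (hi - lo)

k, n = 52, 100                                     # hypothetical data
# "Specific" hypothesis: theta confined to a narrow band around 0.5.
m_specific = marginal_likelihood(k, n, 0.45, 0.55)
# All-encompassing alternative: theta anywhere in [0, 1].
m_diffuse = marginal_likelihood(k, n, 0.0, 1.0)

print(f"Bayes factor (specific vs diffuse): {m_specific / m_diffuse:.2f}")
```

With these made-up data the sharper hypothesis is favoured because it risked more and the observed frequency falls within its narrow predicted range; had the data fallen far from 0.5, the same sharpness would have counted heavily against it.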

Fabrizio Calzavarini (University of Bergamo), Fabrizio Elia and Franco Aprà (San Giovanni Bosco Hospital, Turin), Vincenzo Crupi (LLC, University of Turin): Rationality, Biases, and Nudges in Healthcare: the Case of Hand-Hygiene

Wednesday, 16 October, noon, Aula di Medievale, Palazzo Nuovo

Recent research in cognitive science indicates that human choices do not usually arise as the logical consequences of stable preferences and beliefs, as has traditionally been assumed. Instead, they are largely driven by heuristic processes, which can be systematically biased and highly context-sensitive. This is not necessarily bad news, though: insights into the quirks and limitations of human rationality can help us improve decision outcomes through the design of suitable nudges. A nudge is a non-coercive change of the choice context which exploits inherent tendencies of agents in order to promote beneficial outcomes.

Nudging strategies have been recently applied in many domains, including healthcare.

Hand hygiene in the hospital is one of the most interesting areas in which the effective interventions investigated so far are best understood as nudges. For instance, poster campaigns with messages emphasizing the advantages of hand hygiene, rather than the risks of noncompliance, have been found to be effective in increasing hand hygiene compliance among professionals, since gain-framed messages are more psychologically persuasive in encouraging prevention behaviour than loss-framed appeals. Similarly, the placement of dispensers at the bedside of every patient, where all professionals have to work most of the time, plays a significant role in improving hand hygiene compliance, because human agents more easily perform procedural steps that are not functionally isolated from the principal course of action.

In this talk, we discuss the effectiveness of various nudging interventions to improve hand hygiene compliance rates among healthcare professionals in hospital settings. We also present the methodology and the initial results of an experiment conducted at the San Giovanni Bosco Hospital, ASL TO2, Turin, Italy.


Vlasta Sikimić (University of Belgrade): Epistemic characteristics of modern science: group structures and epistemic openness

Wednesday, 10 July 2019, noon, Aula 3, Palazzo Nuovo

Contemporary natural science is growing along several dimensions: the total number of researchers, the size of research teams, the time spent on projects, and overall funding. The structure within research groups is complex, the publishing process is complicated, and the pressure to acquire funding is high. Social epistemology of science uses different tools to tackle the question of how to optimize the scientific pursuit, such as simulations, statistical analyses, and mixed methods.

In the talk, I will focus on the question of optimal structures of research groups and on epistemically optimal communication between scientists. First, I will present results obtained by computer simulations which indicate that levels of hierarchy are helpful across different epistemic landscapes, whereas this is not the case for egalitarian and centralized groups. In particular, while the egalitarian group performs best at finding the optimal hypothesis in simple epistemic landscapes, its efficiency drops when the complexity of the landscape increases. Then, I will turn to a data-driven study on the epistemic openness of researchers. In this study, we tested connections between epistemic tolerance and scepticism about the scientific method on the one hand, and the nature of research on the other.
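For readers unfamiliar with epistemic-landscape simulations, the following toy sketch conveys the general idea only; it is not the model used in the studies reported here, and the landscape, network structures and parameters are invented. It compares a fully connected group, in which everyone can copy the best solution found by anyone, with a sparsely connected one.

```python
import random

def landscape(x):
    """Toy rugged epistemic landscape on the integers 0..99 (invented)."""
    peaks = {20: 0.6, 55: 1.0, 80: 0.8}
    return max(h - 0.02 * abs(x - p) for p, h in peaks.items())

def search(share_with_all, n_agents=10, n_steps=50, seed=0):
    """Hill climbing with social learning; returns the best value reached."""
    rng = random.Random(seed)
    positions = [rng.randrange(100) for _ in range(n_agents)]
    for _ in range(n_steps):
        new_positions = []
        for i, x in enumerate(positions):
            # fully connected group vs. a single neighbour on a ring
            peers = positions if share_with_all else [positions[(i + 1) % n_agents]]
            best_peer = max(peers, key=landscape)
            if landscape(best_peer) > landscape(x):
                candidate = best_peer                              # imitate a better peer
            else:
                candidate = max(0, min(99, x + rng.choice([-1, 1])))  # explore locally
            new_positions.append(candidate if landscape(candidate) >= landscape(x) else x)
        positions = new_positions
    return max(landscape(p) for p in positions)

print("fully connected group:", round(search(True), 3))
print("ring network group   :", round(search(False), 3))
```

Varying the ruggedness of the landscape and the connectivity of the group is what allows such models to compare group structures across simple and complex epistemic problems.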

Olav Benjamin Vassend (Nanyang Technological University): Trusting the Predictions of a Hypothesis vs Believing that the Hypothesis is True

Wednesday 26 June, 12:00-13:00, Aula 5, Palazzetto Goressio in Via Giulia di Barolo 3/a

Scientists will often trust a hypothesis for predictive purposes even if they believe that the hypothesis is false. Moreover, it is clear that trust—like belief—comes in degrees and ought to be updated in response to evidence. I argue that degrees of trust and degrees of belief are governed by different updating norms and that they are therefore two fundamentally distinct epistemic attitudes, neither one of which may be reduced to the other.

Agostino Pinna Pintor (University of Eastern Piedmont and Institut Jean Nicod, Paris): The Opacity of Embodied Cognition

Wednesday, 22 May 2019, 12:00-13:00, Aula di Medievale, Palazzo Nuovo

Over the past few decades, the idea that cognition is embodied (hereafter EC) has gained growing attention (cf. Mahon 2015). Both scientists and philosophers of mind have increasingly proposed reasons for believing cognition to be embodied. However, what exactly they take EC to mean is not always crystal clear. Despite the use of a common label, people notoriously hold different positions (cf. Wilson 2002), which are frequently formulated without precision and often contain expressions whose meaning is quite ambiguous. The work of Goldman (2012, 2014) represents a notable exception to this lack of conceptual precision. Goldman’s idea is that cases of EC are cases of b-formats reused, where “reuse” refers to the redeployment of brain structures for new functions and “b-format” picks out specific vehicles coding for bodily properties. The aim of this talk is to show how, despite its merits, the b-formats view does not solve the opacity of EC, for its core notions seem neither conceptually clearer nor empirically more fruitful than the notion of EC. The solution, I will argue, has to be found elsewhere.

Michal Sikorski (University of Turin): Values, Bias and Replicability

Wednesday, 8 May 2019, 12:00-13:00, Aula di Medievale, Palazzo Nuovo

The value-free ideal of science is the view that scientists should not use non-epistemic values when justifying their hypotheses. Recently, it has come to be widely regarded as obsolete (e.g., Douglas 2009, Elliott 2011 or Tsui 2016). I will defend the value-free ideal by showing that if we accept the uses of non-epistemic values it prohibits, we are forced to accept, as legitimate scientific conduct, some of the disturbing phenomena of present-day science (e.g., funding bias or questionable research practices). My strategy will be to demonstrate that the only difference between legitimate and problematic cases is that in the problematic ones the non-epistemic values motivate methodological decisions. Because of that, if we reject the value-free ideal we are no longer able to explain, in a principled way, why we should not accept the problematic cases. I will show how two well-known proposals, value-laden science from Douglas 2009 and a proposal concerning ontological choices from Ludwig 2015, lead to this problem, and present examples from actual scientific practice. Then, I will show that when different scientists hold different non-epistemic values, the uses of those values prohibited by the value-free ideal contribute to the replication crisis. This makes the crisis a direct consequence of value-laden science. Finally, I will present two strategies which, when followed, make the value-free ideal realizable. Firstly, following Betz 2013, scientists can avoid problematic (value-laden) methodological choices by highlighting uncertainties and formulating their results carefully. Secondly, as proposed by Levi 1960, a scientific community can adopt a scientific convention which recommends a particular solution for a given methodological problem and therefore makes a corresponding (value-laden) choice unnecessary.

Marco Viola (University of Turin): Cognitive functions and neural structures: population-bounded mappings

Wednesday, 3 April 2019, 12:00-13:00, Aula 7, Palazzo Nuovo (first floor)

For a long time, cognitive neuroscientists tried to unravel the ‘true’ function of a given brain structure, i.e., the function that accounts for all and only its activations. Nowadays, however, there is growing skepticism about the assumption that such a one-to-one (function-to-structure) mapping is to be found. Several scholars are thus suggesting that the ontology of cognitive neuroscience must be contextualist (function-to-structure-in-context). In my talk, I argue for a contextualist framework that takes into account patterned inter-individual differences. While establishing structure-function mappings has traditionally been conceived as a universal enterprise about every normal mind/brain, I seek to show how relaxing this universality claim is both necessary and fruitful: necessary, because specific populations of individuals develop idiosyncratic function-structure mappings due to ontogenetic or clinical reasons; fruitful, because it broadens the scope of cognitive functions that can be mapped.

Paola Berchialla (University of Turin) & Daniele Chiffi (Politecnico di Milano): Can you believe the results of a meta-analysis? The heterogeneity, the power and the case of small meta-analyses

Wednesday, 13 March 2019, 12:00-13:00, Aula TBA, Palazzo Nuovo, first floor

Meta-analysis is a procedure by which the results of multiple studies are combined to provide a higher level of statistical evidence. One of the key components of a meta-analysis is heterogeneity, i.e., the total variation across the included studies. Assessing heterogeneity is a crucial methodological issue, especially in small meta-analyses, since it may affect the choice of the statistical model to apply.

From a statistical perspective, power is related to all the features that impact heterogeneity, namely effect size, type I error, and sample size. Considering the statistical power of the original studies provides a different point of view for reviewing a meta-analysis. On the other hand, not considering statistical power may have an impact on study conclusions, especially when they are drawn from meta-analyses based on underpowered studies, or when differences in the statistical power of the single studies are not considered.

According to a review by Turner and colleagues, most meta-analyses include studies that do not have enough statistical power to detect a true effect. In this study, we propose a modified way to assess heterogeneity based on a posteriori statistical power, which is calculated on the basis of the actual sample size. Two motivating examples based on published meta-analyses are presented to illustrate the relationship between a posteriori power and the results of meta-analyses and the impact of the statistical power on the assessment of the heterogeneity.

Refining judgement of heterogeneity across studies based on a posteriori power may provide an informative and applicable way to improve the evaluation of meta-analytic results.
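For concreteness, the standard ingredients mentioned here (a fixed-effect pooled estimate, Cochran’s Q and I² for heterogeneity, and the a posteriori power of each study to detect the pooled effect) can be computed as in the following sketch. The effect sizes and standard errors are invented, and the authors’ proposed power-based refinement of heterogeneity is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

# Invented per-study effect estimates (e.g., log odds ratios) and standard errors
effects = np.array([0.30, 0.10, 0.55, 0.20, 0.05])
se = np.array([0.25, 0.30, 0.20, 0.35, 0.40])

# Fixed-effect pooling
w = 1.0 / se**2
pooled = np.sum(w * effects) / np.sum(w)

# Cochran's Q and the I^2 heterogeneity statistic
q = np.sum(w * (effects - pooled) ** 2)
df = len(effects) - 1
i2 = max(0.0, (q - df) / q) * 100

# A posteriori power of each study to detect the pooled effect
# (two-sided z-test, alpha = 0.05, normal approximation)
z_crit = norm.ppf(0.975)
power = norm.cdf(np.abs(pooled) / se - z_crit) + norm.cdf(-np.abs(pooled) / se - z_crit)

print(f"pooled effect = {pooled:.3f}, Q = {q:.2f}, I^2 = {i2:.1f}%")
print("per-study a posteriori power:", np.round(power, 2))
```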

Noah van Dongen and Michał Sikorski (LLC, University of Turin): Objectivity for the Research Worker

In the last few years, many problematic cases of scientific conduct have been diagnosed: some involve outright fraud (e.g., Stapel, 2012), others are more subtle (e.g., supposed evidence of extrasensory perception; Bem, 2011). We assume that these and similar problems are caused by a lack of scientific objectivity. Current theories of objectivity do not provide scientists with conceptualizations that can be effectively put into practice to remedy these issues. We propose a novel way of thinking about objectivity: a negative and dynamic approach. It is our intention to take the first steps in providing an empirically and methodologically informed inventory of factors (e.g., Simmons, Nelson & Simonsohn, 2011) that impair the scientific practice of researchers. The inventory will be compiled into a negative definition (i.e., of what is not objective), which can be used as an instrument (e.g., a check-list) to assess deviations from objectivity in scientific practice.

Fabrizio Calzavarini (University of Bergamo and LLC, University of Turin): Work on the dual structure of lexical semantic competence

Wednesday, 6 February, 12:00-13:00, Aula 22, Palazzo Nuovo, first floor

Philosophical arguments and neuropsychological research on deficits of lexical processing converge in indicating that our competence with word meaning may have two components: inferential competence, which takes care of word-word relations and is relevant to tasks such as recovering a word from its definition, pairing synonyms, semantic inference and more; and referential competence, which takes care of word-world relations, or, more carefully, of connections between words and perception of the outside world (through vision, hearing, touch). Interestingly, recent neuroimaging (fMRI) experiments found that certain visual areas are active even in purely inferential performances, and a current experiment appears to show that such activation is a function of what might be called the “imageability” of linguistic stimuli. These recent results will be presented and discussed. In addition, future studies on lexical inferential competence in congenitally blind subjects will also be presented and discussed.

[This presentation is partly based on the outcomes of a research project titled The role of visual imagery in lexical processing, funded by “Compagnia di San Paolo di Torino”, P.I. Diego Marconi].

Andrea Strollo (Nanjing University): Truth Pluralism and Many-Valued Logic: How to solve the problem of mixed inferences

Wednesday, 16 January, 11:00-12:00 (!), Aula di Medievale

According to truth pluralism, there is not a single property of truth but many: propositions from different areas of discourse are true in different ways. This position has been challenged to make sense of validity, understood as necessary truth preservation, when inferences involve propositions from different areas (and hence different truth properties). To solve this problem, a natural temptation is to replicate the standard practice in many-valued logic and appeal to the notion of designated values: validity would just be preservation of designation. Such a simple approach, however, is usually considered a non-starter, since, in this context, ‘designation’ seems to embody nothing but a notion of generic truth, namely what truth pluralists abhor. In my talk, I show how to defend this simple designation-based solution by exploring the analogy with many-valued logic even further.

Michele Lubrano (University of Turin): Mathematical Explanation: Some Reflections on Steiner’s Model

Wednesday, 12 December, 12:00-13:00, Aula di Medievale

Mathematical explanation has recently started to receive attention from philosophers interested in mathematical practice. Professional mathematicians usually distinguish between explanatory and non-explanatory proofs of theorems. Indeed, there can be proofs of a mathematical statement that, despite providing perfectly acceptable justifications of it, offer no clue as to why it holds. Other proofs, by contrast, have the virtue of telling why the statement is true, in such a way that, once one understands the argument, the statement no longer looks surprising or mysterious. The first interesting theorization of mathematical explanation was proposed by Mark Steiner in 1978. In his model, a proof of a statement S about an entity E is explanatory if and only if it makes reference to some essential properties of E. Although Steiner’s model works well in a number of examples, some critical issues have emerged over time. I would like to propose a reworking of his model, able to capture more precisely Steiner’s underlying idea (which seems to me fundamentally correct) and to meet the objections.

Giorgio Castiglione (University of Turin): Tolerance versus Dogma: Revising the Carnap-Quine Debate on Analyticity

Wednesday, 28 November, 16:00-17:00, Aula di Medievale

The Carnap-Quine debate dealt a blow to the thought of logical empiricism, dictating a new agenda for analytic philosophy. I propose an overview of the Quinean objections to the notion of analyticity, and claim that, in denouncing its vagueness, arbitrariness, and epistemological and explanatory emptiness, Quine relies on an erroneous interpretation of Carnapian positions. Showing Carnap’s explication at work, I outline some crucial methodological differences with Quine: divergent understandings of empiricism trace back to distinct conceptions of the tasks of philosophy. Finally, I present some reasons in favour of the Carnapian meta-philosophical paradigm, in which assuming an analytic/synthetic dichotomy, far from being an empiricist dogma, acts as a presupposition for tolerance.

Carlo Martini (San Raffaele University, Milan): Ad Hominem Arguments, Rhetoric, and Science Communication

Tuesday, 6 November, 12:00-13:00, Aula 25, 1st floor, Palazzo Nuovo

Science communication needs to be both accurate and effective. On the one hand, accurate scientific information is the product of strict epistemological, methodological and evidential requirements. On the other hand, effectiveness in communication can be achieved through rhetorical devices, like powerful images, figures of speech, or amplification, aimed at getting the readers’ attention and persuading them. By their nature, rhetorical tools can distort the contents of the message, and effectiveness can be achieved at the expense of accuracy.

For example, attacking a scientist’s stance on the effectiveness of a drug by referring to the scientist’s ties to the pharmaceutical industry that produces that drug does not show anything about the effectiveness (or lack thereof) of the drug. Yet, under appropriate circumstances, it may be enough to discredit the reliability of the scientist’s claims. This type of argument is called ad hominem: it attacks the source of information, not the substance of the matter, i.e., whether the drug is effective or not. Ad hominem attacks, even when fallacious, can be powerful. For instance, people opposing vaccination routinely use ad hominem attacks by alleging ties between the scientists defending the use of vaccines and the pharmaceutical industry (see Davies, Chapman and Leask 2002).

The recent controversy on the safety of vaccinations and their possible links to a number of conditions, including autism, presents a challenge for science communication. Critics of vaccines are well equipped with rhetorical arguments and a wealth of supposed evidence in support of their various claims: e.g., that vaccines can cause autism and that vaccines contain chemicals harmful to children. Anti-vaccination movements appeal to anecdotal evidence and powerful imagery to persuade the public and policy makers of their arguments, including alleging commercial ties between the medical profession and the pharmaceutical industry. Cases of bad science and pharmaceutical disasters (see Daemmrich 2002, Russell 2009) only make the anti-vaccination arguments stronger in the eyes of the public.

In this talk, I contend that evidence-focused strategies of science communication may be complemented by possibly more effective rhetorical arguments in current public debates on vaccines. I analyse the case of direct science communication – that is, communication of evidence – and argue that it is difficult to effectively communicate evidential standards of science in the presence of well-equipped anti-science movements.

Michal Sikorski, Noah van Dongen and Jan Sprenger (LLC, University of Turin): Causal Strength and Causal Conditionals

Wednesday, 24 October, 12:00-13:00, Aula di Medievale

Causal conditionals and (tendency) causal claims share several properties, and their relation has been the object of substantial philosophical discussion. That said, the discussion mainly moves on a theoretical level without being informed by empirical results. In this project, we investigate several hypotheses on the (probabilistic) predictors of the truth values of indicative conditionals from an empirical point of view and compare them to predictors of causal strength.

  1. Causal conditionals are evaluated as true only if the corresponding tendency causal claim is evaluated as true.
  2. The subjective probability p(E|C) predicts the evaluation of both the causal conditional “if C, then E” and the tendency causal claim “C causes E”.
  3. Statistical relevance predicts the assessment of a tendency causal claim as true; but not the assessment of a causal conditional (in the class of true tendency causal claims).
  4. The effect of probabilistic factors on the assessment of tendency causal claims is greater than their effect on causal conditionals.

We present the experiment that tests these hypotheses and discuss the implications of our findings.
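To make hypotheses 2 and 3 concrete, here is a minimal sketch of the two candidate predictors, the conditional probability p(E|C) and statistical relevance (ΔP). The frequencies are invented and this is not part of the experimental materials.

```python
# Invented joint frequencies for a candidate cause C and effect E
counts = {("C", "E"): 40, ("C", "not-E"): 10,
          ("not-C", "E"): 25, ("not-C", "not-E"): 25}

n = sum(counts.values())
p_e_given_c = counts[("C", "E")] / (counts[("C", "E")] + counts[("C", "not-E")])
p_e_given_not_c = counts[("not-C", "E")] / (counts[("not-C", "E")] + counts[("not-C", "not-E")])
p_e = (counts[("C", "E")] + counts[("not-C", "E")]) / n

delta_p = p_e_given_c - p_e_given_not_c   # statistical relevance (Delta-P)
print(f"p(E|C) = {p_e_given_c:.2f}, p(E|not-C) = {p_e_given_not_c:.2f}, p(E) = {p_e:.2f}")
print(f"statistical relevance Delta-P = {delta_p:.2f}")
```

The two predictors can come apart: p(E|C) can be high even when ΔP is close to zero, which is what makes their relative contributions to conditional and causal judgments an empirical question.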

Lina Lissia (LLC, University of Turin): From McGee’s puzzle to the Lottery Paradox

Tuesday 9 October, 12:00-13:00, Aula di Medievale, Palazzo Nuovo

Vann McGee (1985) provided a famous counterexample to Modus Ponens. I show that, contrary to a view universally held in the literature, assuming the material conditional as an interpretation of the natural language conditional “if …, then …” does not dissolve McGee’s puzzle. Indeed, I provide a slightly modified version of McGee’s famous election scenario in which (1) the relevant features of the scenario are preserved, and (2) both modus ponens and modus tollens fail, even if we assume the material conditional. I go on to show that in the modified scenario (which I call “the restaurant scenario”) conjunction introduction is also invalid. More specifically, I demonstrate that the restaurant scenario is actually a version of the Lottery Paradox (Kyburg 1961), and conclude that any genuine solution to McGee’s puzzle must also be a solution to the Lottery Paradox. Finally, I provide some hints towards a solution to both McGee’s puzzle and the Lottery Paradox.

Mattia Andreoletti (LLC, University of Turin): The Meanings of Replicability

Thursday, 27 September, 12:00-13:00, Aula di Medievale, Palazzo Nuovo

Throughout the last decade there has been a growing interdisciplinary debate on the reliability of scientific findings: experiments (and statistical analyses in general) are rarely replicated. Intuitively, replicability of experiments is a central dogma of science, and the importance of multiple studies corroborating a given result is widely acknowledged. However, there is no consensus on what counts as a successful replication, and researchers employ a range of operational definitions reflecting different intuitions. The lack of a single accepted definition opens the door to controversy about the epistemic import of replicability for the trustworthiness of scientific results. Disentangling the meanings of replicability is crucial to avoid potential misunderstanding.

Vincenzo Crupi (University of Turin): The logic of evidential conditionals

Wednesday, 27 June, 12:45-13:45, Aula 13, 1st floor, Palazzo Nuovo

Once upon a time, some thought that indicative conditionals could be effectively analyzed by means of the material conditional. Nowadays, an alternative theoretical construct largely prevails and receives wide acceptance, namely, the conditional probability of the consequent given the antecedent. Partly following earlier critical remarks made by others (most notably, Igor Douven), I advocate a revision of this consensus and suggest that incremental probabilistic support (rather than conditional probability alone) is key to the understanding of indicative conditionals and their role in human reasoning. There have been motivated concerns that a theory of such evidential conditionals (unlike their more traditional suppositional counterparts) cannot generate a sufficiently interesting logical system. I will present results largely dispelling these worries. Happily, and perhaps surprisingly, appropriate technical variations of Ernst Adams’s classical approach allow for the construction of a new logic of evidential conditionals which is nicely superclassical, fairly strong, and also (as it turns out) a kind of connexive logic.
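A simple toy case conveys the contrast between the two constructs. Take a fair million-ticket lottery, let C be “ticket #17 loses”, and let A be an antecedent that is probabilistically irrelevant to C (say, “it rains tomorrow”). Then

$$P(C \mid A) = P(C) \approx 0.999999 \quad\text{while}\quad P(C \mid A) - P(C) = 0,$$

so an account driven by conditional probability alone accepts “If it rains tomorrow, ticket #17 will lose”, whereas an account based on incremental support rejects it, because the antecedent lends the consequent no support at all.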

Workshop in Philosophy of Science

Friday, 25 May, 11:00-13:30
Aula 8, Palazzo Nuovo (first floor)

Instead of the usual longer presentation on Wednesdays, we have four short ones on a Friday, featuring young and promising philosophers of science affiliated with various European universities.

11:00-11:35  Mike Stuart (London School of Economics): Locating Objectivity in Models of Science
11:35-12:10  William Peden (Durham University): Selective Confirmation Answers to the Paradox of the Ravens
12:10-12:20  Coffee Break
12:20-12:55  Mattia Andreoletti (IEO Milan): Rules versus standards in drug regulation
12:55-13:30  Borut Trpin (University of Ljubljana): Some Problematic Consequences of Jeffrey Conditionalization

The abstracts can be found here.

Andrea Iacona (University of Turin): Strictness vs Connexivity

Wednesday, 9 May, 12:00-13:00, Aula 23, Palazzo Nuovo (first floor)

I will compare two views of conditionals that exhibit some interesting affinities, the strict conditional view and the connexivist view. My aim is to show that the strict conditional view is at least as plausible as the connexivist view, contrary to what the fans of connexive logic tend to believe. The first part of the talk draws attention to the similarity between the two views, in that it outlines three arguments that support both of them. The second part examines the case for the theses that characterize the connexivist view, Aristotle’s theses and Boethius’ theses, and finds that the core intuition on which it rests is consistent with the strict conditional view, so it can be accommodated within classical logic.
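For reference, the characteristic connexive principles mentioned here are standardly formulated as follows:

$$\neg(A \to \neg A), \qquad \neg(\neg A \to A) \qquad \text{(Aristotle’s theses)}$$

$$(A \to B) \to \neg(A \to \neg B), \qquad (A \to \neg B) \to \neg(A \to B) \qquad \text{(Boethius’ theses)}$$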

Noah van Dongen, Felipe Romero and Jan Sprenger (LLC/University of Turin): Semantic Intuitions—A Meta-Analysis

Wednesday, 18 April, noon, Aula 16, Palazzo Nuovo (first floor)

One of the most famous papers in experimental philosophy (Machery, Mallon, Nichols, and Stich, 2004) analyzes semantic intuitions in prominent cases taken from Saul Kripke’s seminal book “Naming and Necessity” (1970). Machery and colleagues found cross-cultural differences in semantic intuitions pertaining to the reference of proper names in Kripke’s “Gödel” and “Jonah” cases, which were transformed into vignettes that are usable for experimental research. Their paper kicked off an experimental research program on cross-cultural differences in semantic intuitions. But what is the state of the art right now, almost 15 years later?

We conduct a statistical meta-analysis of experiments which investigate systematic differences in semantic intuitions between Westerners and East Asians and present our preliminary findings. Along the way, we explain some problems we experienced in completing the project, such as the question of which studies should be included and which ones should be left out as being too remote from the original experiment.

The project is joint work with Matteo Colombo (Tilburg University).

Claus Beisbart (University of Bern): Reflective equilibrium fleshed out

Wednesday, 11 April, noon, Aula di Antica (in the Department of Philosophy and Educational Sciences, second floor of Palazzo Nuovo)

Reflective equilibrium (RE) is often taken to be the crucial method of normative ethics (Rawls), philosophy (Lewis) or understanding more generally (Elgin). Despite its apparent popularity, however, the method is only vaguely characterized, poorly developed and almost never applied to real-world problems in an open-minded way. The aim of this talk is to present an operationalization and a formal model of RE. The starting point is an informal characterization of what I take to be the key idea of RE, viz. an elaboration of one’s commitments due to pressure from systematic principles. This idea is then spelled out in the framework of the Theory of Dialectical Structures, as developed by Gregor Betz. The commitments of an epistemic subject are described as a position in a dialectical structure; desiderata for the positions are postulated; and rules for changing the commitments are expounded. Simple examples, in which the model is applied, display a number of features that are well known from the literature on RE. The talk concludes by discussing the limitations of the model. This paper is based upon work done jointly with Gregor Betz and Georg Brun.
