Science & More Talks

The “Science & More” talks are a series of work-in-progress talks at the Center for Logic, Language and Cognition (LLC) in Turin, run by the ERC project. We use them to present our own work in progress and to learn about the ongoing research of our colleagues. From time to time, speakers from outside Turin also present their work.

Thematically, the talks are centered around philosophy of science, but we are also open to topics from related areas (e.g., logic, epistemology, philosophy of language) and other philosophical subdisciplines where exact methods are applied.

The talks take place in Palazzo Nuovo (Via Sant’Ottavio 20) from noon (12:00) to 13:00, usually on Wednesdays, followed by a joint lunch in one of the surrounding restaurants. They are meant to be low-key events, open to everybody, that aim to improve ongoing work through critical but constructive discussion. The first meeting of spring 2019 is scheduled for 16 January; subsequent meetings will be announced soon.

Upcoming Talks

Marco Viola (Università di Torino): Cognitive functions and neural structures: population-bounded mappings

Wednesday, 3 April 2019, 12:00-13:00, Aula 7, Palazzo Nuovo (first floor)

For a long time, cognitive neuroscientists tried to unravel the ‘true’ function of a given brain structure, i.e. the function that accounts for all and only its activation. Nowadays, however, there is growing skepticism about the assumption that such a one-to-one (function-to-structure) mapping is to be found. Several scholars therefore suggest that the ontology of cognitive neuroscience must be contextualist (function-to-structure-in-context). In my talk, I argue for a contextualist framework that takes into account patterned inter-individual differences. While establishing structure-function mappings has traditionally been conceived as a universal enterprise about every normal mind/brain, I seek to show that relaxing this universality claim is both necessary and fruitful: necessary, because specific populations of individuals develop idiosyncratic function-structure mappings for ontogenetic or clinical reasons; fruitful, because it broadens the scope of cognitive functions that can be mapped.

Past Talks

Paola Berchialla (University of Turin) & Daniele Chiffi (Politecnico di Milano): Can you believe the results of a meta-analysis? The heterogeneity, the power and the case of small meta-analyses

Wednesday, 13 March 2019, 12:00-13:00, Aula TBA, Palazzo Nuovo, first floor

Meta-analysis is a procedure by which the results of multiple studies are combined to provide a higher level of statistical evidence. One of the key components of a meta-analysis is heterogeneity, the total variation across the included studies. Assessing heterogeneity is a crucial methodological issue, especially in small meta-analyses, since it may affect the choice of the statistical model to apply.

From a statistical perspective, power is related to all those features that impact heterogeneity, namely effect size, type I error, and sample size. Considering the statistical power of the original studies provides a different point of view for reviewing a meta-analysis. Conversely, not considering statistical power may affect study conclusions, especially when they are drawn from meta-analyses based on underpowered studies, or when differences in the statistical power of the single studies are not taken into account.

According to a review by Turner and colleagues, most meta-analyses include studies that do not have enough statistical power to detect a true effect. In this study, we propose a modified way to assess heterogeneity based on a posteriori statistical power, which is calculated from the actual sample size. Two motivating examples based on published meta-analyses illustrate the relationship between a posteriori power and the results of meta-analyses, and the impact of statistical power on the assessment of heterogeneity.

Refining judgement of heterogeneity across studies based on a posteriori power may provide an informative and applicable way to improve the evaluation of meta-analytic results.
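The abstract does not spell out the authors’ exact procedure, but the core idea of a posteriori power (recomputing power from the observed effect size and the actual sample size) can be sketched with a standard normal approximation. The function name and the fixed two-sided alpha of 0.05 are illustrative assumptions, not the paper’s method:

```python
from math import erf, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def a_posteriori_power(d: float, n_per_group: int, z_crit: float = 1.96) -> float:
    """Approximate two-sided power of a two-sample comparison, computed
    from the standardized effect size d and the actual per-group sample
    size (normal approximation; z_crit = 1.96 corresponds to alpha = 0.05)."""
    ncp = d * sqrt(n_per_group / 2.0)  # non-centrality of the test statistic
    return (1.0 - norm_cdf(z_crit - ncp)) + norm_cdf(-z_crit - ncp)

# A medium effect (d = 0.5) studied with 20 subjects per group is badly
# underpowered, while the same effect with 100 per group is well powered.
small = a_posteriori_power(0.5, 20)
large = a_posteriori_power(0.5, 100)
```

On these numbers the small study has power of roughly one third, which is exactly the situation the Turner review describes for typical meta-analyzed studies.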

Noah van Dongen and Michał Sikorski (LLC, University of Turin): Objectivity for the Research Worker

In the last few years, many problematic cases of scientific conduct have been diagnosed, some involving outright fraud (e.g., Stapel, 2012), others more subtle problems (e.g., supposed evidence of extrasensory perception; Bem, 2011). We assume that these and similar problems are caused by a lack of scientific objectivity. Current theories of objectivity do not provide scientists with conceptualizations that can be effectively put into practice to remedy these issues. We propose a novel way of thinking about objectivity: a negative and dynamic approach. It is our intention to take the first steps in providing an empirically and methodologically informed inventory of factors (e.g., Simmons, Nelson & Simonsohn, 2011) that impair the scientific practice of researchers. The inventory will be compiled into a negative definition (i.e., what is not objective), which can be used as an instrument (e.g., a checklist) to assess deviations from objectivity in scientific practice.

Fabrizio Calzavarini (University of Bergamo and LLC, University of Turin): Work on the dual structure of lexical semantic competence

Wednesday, 6 February, 12:00-13:00, Aula 22, Palazzo Nuovo, first floor

Philosophical arguments and neuropsychological research on deficits of lexical processing converge in indicating that our competence with word meaning may have two components: inferential competence, which takes care of word-word relations and is relevant to tasks such as recovering a word from its definition, pairing synonyms, semantic inference and more; and referential competence, which takes care of word-world relations or, more carefully, of connections between words and perception of the outside world (through vision, hearing, touch). Interestingly, recent neuroimaging (fMRI) experiments found that certain visual areas are active even in purely inferential performances, and a current experiment appears to show that such activation is a function of what might be called the “imageability” of the linguistic stimuli. These recent results will be presented and discussed. In addition, future studies on lexical inferential competence in congenitally blind subjects will also be presented and discussed.

[This presentation is partly based on the outcomes of a research project titled The role of visual imagery in lexical processing, funded by “Compagnia di San Paolo di Torino”, P.I. Diego Marconi].

Andrea Strollo (Nanjing University): Truth Pluralism and Many-Valued Logic: How to solve the problem of mixed inferences

Wednesday, 16 January, 11:00-12:00 (!), Aula di Medievale

According to truth pluralism there is not a single property of truth but many: propositions from different areas of discourse are true in different ways. This position has been challenged to make sense of validity, understood as necessary truth preservation, when inferences involve propositions from different areas (and thus different truth properties). To solve this problem, a natural temptation is to replicate the standard practice of many-valued logic and appeal to the notion of designated values: validity would just be preservation of designation. Such a simple approach, however, is usually considered a non-starter, since, in this context, ‘designation’ seems to embody nothing but a notion of generic truth, which is precisely what truth pluralists abhor. In my talk, I show how to defend this simple designation-based solution by exploring the analogy with many-valued logic even further.
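The notion of validity as preservation of designation that the talk starts from can be illustrated with a toy three-valued (strong Kleene) setting; this is a generic textbook illustration, not the speaker’s own framework:

```python
from itertools import product

VALUES = (0.0, 0.5, 1.0)   # three truth values of strong Kleene logic
DESIGNATED = {1.0}         # the designated values

def impl(a: float, b: float) -> float:
    """Kleene material conditional: max(1 - a, b)."""
    return max(1.0 - a, b)

def valid(premises, conclusion, n_vars: int) -> bool:
    """Designation-preserving validity: every valuation that designates
    all premises must also designate the conclusion."""
    for v in product(VALUES, repeat=n_vars):
        if all(p(v) in DESIGNATED for p in premises):
            if conclusion(v) not in DESIGNATED:
                return False
    return True

# Modus ponens (p, p -> q |= q) preserves designation in this setting...
mp = valid([lambda v: v[0], lambda v: impl(v[0], v[1])], lambda v: v[1], 2)
# ...while the unconditional law of identity (|= p -> p) does not:
# at p = 0.5 the conditional takes the undesignated value 0.5.
identity = valid([], lambda v: impl(v[0], v[0]), 1)
```

The pluralist’s worry is that the set `DESIGNATED` here behaves like a generic truth property across all areas of discourse, which is what the designation-based solution has to answer for.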

Michele Lubrano (University of Turin): Mathematical Explanation: Some Reflections on Steiner’s Model

Wednesday, 12 December, 12:00-13:00, Aula di Medievale

Mathematical explanation has recently started to receive attention from philosophers interested in mathematical practice. Professional mathematicians usually distinguish between explanatory and non-explanatory proofs of theorems. Indeed, there can be proofs of a mathematical statement that, despite providing perfectly acceptable justifications of it, offer no clue as to why it holds. Other proofs, by contrast, have the virtue of telling why the statement is true, in such a way that, once one understands the argument, the statement no longer looks surprising or mysterious. The first interesting theorization of mathematical explanation was proposed by Mark Steiner in 1978. In his model, a proof of a statement S about an entity E is explanatory if and only if it makes reference to some essential properties of E. Although Steiner’s model works well in a number of examples, some critical issues have emerged over time. I would like to propose a reworking of his model, able to capture more precisely Steiner’s underlying idea – which seems to me fundamentally correct – and to meet the objections.

Giorgio Castiglione (University of Turin): Tolerance versus Dogma: Revising the Carnap-Quine Debate on Analyticity

Wednesday, 28 November, 16:00-17:00, Aula di Medievale

The Carnap-Quine debate dealt a blow to logical empiricism and set a new agenda for analytic philosophy. I propose an overview of the Quinean objections to the notion of analyticity and claim that, in denouncing its vagueness, arbitrariness, and epistemological and explanatory emptiness, Quine relies on an erroneous interpretation of Carnap’s positions. Showing Carnap’s explication at work, I outline some crucial methodological differences with Quine: divergent understandings of empiricism trace back to distinct conceptions of the tasks of philosophy. Finally, I present some reasons in favour of the Carnapian meta-philosophical paradigm, in which assuming an analytic/synthetic dichotomy, far from being an empiricist dogma, acts as a presupposition for tolerance.

Carlo Martini (San Raffaele University, Milan): Ad Hominem Arguments, Rhetoric, and Science Communication

Tuesday, 6 November, 12:00-13:00, Aula 25, 1st floor, Palazzo Nuovo

Science communication needs to be both accurate and effective. On the one hand, accurate scientific information is the product of strict epistemological, methodological and evidential requirements. On the other hand, effectiveness in communication can be achieved through rhetorical devices, like powerful images, figures of speech, or amplification, aimed at getting the readers’ attention and persuading them. By their nature, rhetorical tools can distort the contents of the message, and effectiveness can be achieved at the expense of accuracy.

For example, attacking a scientist’s stance on the effectiveness of a drug by referring to the scientist’s ties to the pharmaceutical industry that produces that drug shows nothing about the effectiveness (or lack thereof) of the drug. Yet, under appropriate circumstances, it may be enough to discredit the reliability of the scientist’s claims. This type of argument is called ad hominem: it attacks the source of the information, not the substance of the matter – i.e., whether the drug is effective or not. Ad hominem attacks, even when fallacious, can be powerful. For instance, people opposing vaccination routinely use ad hominem attacks, alleging ties between the scientists defending vaccines and the pharmaceutical industry (see Davies, Chapman and Leask 2002).

The recent controversy over the safety of vaccinations and their possible links to a number of conditions, including autism, presents a challenge for science communication. Critics of vaccines are well equipped with rhetorical arguments and a wealth of supposed evidence in support of their various claims: e.g., that vaccines can cause autism and that vaccines contain chemicals harmful to children. Anti-vaccination movements appeal to anecdotal evidence and powerful imagery to persuade the public and policy makers of their arguments, including alleging commercial ties between the medical profession and the pharmaceutical industry. Cases of bad science and pharmaceutical disasters (see Daemmrich 2002, Russell 2009) only make the anti-vaccination arguments stronger in the eyes of the public.

In this talk, I contend that evidence-focused strategies of science communication may be complemented by possibly more effective rhetorical arguments in current public debates on vaccines. I analyse the case of direct science communication – that is, communication of evidence – and argue that it is difficult to effectively communicate evidential standards of science in the presence of well-equipped anti-science movements.

Michał Sikorski, Noah van Dongen and Jan Sprenger (LLC, University of Turin): Causal Strength and Causal Conditionals

Wednesday, 24 October, 12:00-13:00, Aula di Medievale

Causal conditionals and (tendency) causal claims share several properties, and their relation has been the object of substantial philosophical discussion. That discussion, however, proceeds mainly at a theoretical level, without being informed by empirical results. In this project, we investigate, from an empirical point of view, several hypotheses on the (probabilistic) predictors of the truth values of indicative conditionals and compare them to predictors of causal strength.

  1. Causal conditionals are evaluated as true only if the corresponding tendency causal claim is evaluated as true.
  2. The subjective probability p(E|C) predicts the evaluation of both the causal conditional “if C, then E” and the tendency causal claim “C causes E”.
  3. Statistical relevance predicts the assessment of a tendency causal claim as true; but not the assessment of a causal conditional (in the class of true tendency causal claims).
  4. The effect of probabilistic factors on the assessment of tendency causal claims is greater than their effect on causal conditionals.

We present the experiment that tests these hypotheses and discuss the implications of our findings.
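The two probabilistic predictors the hypotheses refer to, the conditional probability p(E|C) and statistical relevance, p(E|C) − p(E|¬C), can both be read off a 2×2 contingency table. A minimal sketch, with made-up counts and an illustrative function name:

```python
def predictors(n_ce: int, n_c_note: int, n_notc_e: int, n_notc_note: int):
    """From 2x2 counts (C&E, C&~E, ~C&E, ~C&~E), return the conditional
    probability p(E|C) and the statistical relevance p(E|C) - p(E|~C)."""
    p_e_given_c = n_ce / (n_ce + n_c_note)
    p_e_given_notc = n_notc_e / (n_notc_e + n_notc_note)
    return p_e_given_c, p_e_given_c - p_e_given_notc

# Toy data: 40 of 50 C-cases show the effect E, but only 20 of 50
# non-C-cases do, so C is statistically relevant to E.
p, delta_p = predictors(40, 10, 20, 30)   # p(E|C) = 0.8, relevance = 0.4
```

Hypothesis 2 ties the first quantity to both kinds of claims; hypothesis 3 predicts that only the tendency causal claim tracks the second.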

Lina Lissia (LLC, University of Turin): From McGee’s puzzle to the Lottery Paradox

Tuesday 9 October, 12:00-13:00, Aula di Medievale, Palazzo Nuovo

Vann McGee (1985) provided a famous counterexample to Modus Ponens. I show that, contrary to a view universally held in the literature, interpreting the natural language conditional “if …, then …” as the material conditional does not dissolve McGee’s puzzle. Indeed, I provide a slightly modified version of McGee’s famous election scenario in which (1) the relevant features of the scenario are preserved, and (2) both modus ponens and modus tollens fail, even if we assume the material conditional. I go on to show that in the modified scenario (which I call “the restaurant scenario”) conjunction introduction is also invalid. More specifically, I demonstrate that the restaurant scenario is actually a version of the Lottery Paradox (Kyburg 1961), and conclude that any genuine solution to McGee’s puzzle must also be a solution to the Lottery Paradox. Finally, I provide some hints towards a solution to both McGee’s puzzle and the Lottery Paradox.
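The arithmetic behind the Lottery Paradox, and the failure of conjunction introduction that the talk exploits, is easy to reproduce. The lottery size and the acceptance threshold below are illustrative choices, not the speaker’s:

```python
n = 1000          # a fair lottery with 1000 tickets and exactly one winner
threshold = 0.99  # acceptance threshold for rational belief

def p_first_k_lose(k: int) -> float:
    """In a fair n-ticket lottery with exactly one winner, the chance
    that tickets 1..k all lose is (n - k) / n."""
    return (n - k) / n

# Each single claim "ticket i loses" clears the bar: p = 0.999 > 0.99.
each_acceptable = p_first_k_lose(1) > threshold
# But conjoining individually acceptable claims soon drops below the
# threshold, so conjunction introduction fails for acceptance.
k = 1
while p_first_k_lose(k) > threshold:
    k += 1
```

Here already the conjunction of the first 10 claims is no longer acceptable, even though each conjunct is, which is the structure Kyburg’s paradox turns on.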

Mattia Andreoletti (LLC, University of Turin): The Meanings of Replicability

Thursday, 27 September, 12:00-13:00, Aula di Medievale, Palazzo Nuovo

Throughout the last decade there has been a growing interdisciplinary debate on the reliability of scientific findings: experiments (and statistical analyses in general) are rarely replicated. Intuitively, replicability of experiments is a central dogma of science, and the importance of multiple studies corroborating a given result is widely acknowledged. However, there is no consensus on what counts as a successful replication, and researchers employ a range of operational definitions reflecting different intuitions. The lack of a single accepted definition opens the door to controversy about the epistemic import of replicability for the trustworthiness of scientific results. Disentangling the meanings of replicability is crucial to avoid potential misunderstanding.

Vincenzo Crupi (University of Turin): The logic of evidential conditionals

Wednesday, 27 June, 12:45-13:45, Aula 13, 1st floor, Palazzo Nuovo

Once upon a time, some thought that indicative conditionals could be effectively analyzed by means of the material conditional. Nowadays, an alternative theoretical construct largely prevails and receives wide acceptance, namely, the conditional probability of the consequent given the antecedent. Partly following earlier critical remarks made by others (most notably, Igor Douven), I advocate a revision of this consensus and suggest that incremental probabilistic support (rather than conditional probability alone) is key to understanding indicative conditionals and their role in human reasoning. There have been motivated concerns that a theory of such evidential conditionals (unlike their more traditional suppositional counterparts) cannot generate a sufficiently interesting logical system. I will present results that largely dispel these worries. Happily, and perhaps surprisingly, appropriate technical variations of Ernst Adams’s classical approach allow for the construction of a new logic of evidential conditionals which is nicely superclassical, fairly strong, and also (as it turns out) a kind of connexive logic.
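The contrast between the two constructs can be made concrete: on the suppositional reading a conditional is acceptable when the conditional probability P(B|A) is high, while on the evidential reading the antecedent must in addition raise the probability of the consequent, i.e. P(B|A) > P(B). A minimal sketch; the thresholds, function names, and example probabilities are illustrative assumptions:

```python
def suppositional_ok(p_b_given_a: float, threshold: float = 0.9) -> bool:
    """Suppositional acceptability: high conditional probability suffices."""
    return p_b_given_a >= threshold

def evidential_ok(p_b_given_a: float, p_b: float, threshold: float = 0.9) -> bool:
    """Evidential acceptability: the antecedent must additionally provide
    incremental probabilistic support, i.e. p(B|A) > p(B)."""
    return p_b_given_a >= threshold and p_b_given_a > p_b

# A consequent that is very probable anyway: the conditional probability
# is high, but the antecedent slightly *lowers* it, so the conditional is
# suppositionally acceptable yet evidentially defective.
s = suppositional_ok(0.95)           # accepted on the suppositional view
e = evidential_ok(0.95, p_b=0.97)    # rejected on the evidential view
```

Conditionals like these, true-by-high-probability but evidentially irrelevant, are exactly the cases that motivate building the logic on incremental support rather than on conditional probability alone.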

Workshop in Philosophy of Science

Friday, 25 May, 11:00-13:30
Aula 8, Palazzo Nuovo (first floor)

Instead of the usual longer presentation on a Wednesday, we have four short ones on a Friday, featuring young and promising philosophers of science affiliated with various European universities.

11:00—11:35 Mike Stuart (London School of Economics): Locating Objectivity in Models of Science
11:35—12:10 William Peden (Durham University): Selective Confirmation Answers to the Paradox of the Ravens
12:10—12:20 Coffee Break
12:20—12:55 Mattia Andreoletti (IEO Milan): Rules versus standards in drug regulation
12:55—13:30 Borut Trpin (University of Ljubljana): Some Problematic Consequences of Jeffrey Conditionalization

The abstracts can be found here.

Andrea Iacona (University of Turin): Strictness vs Connexivity

Wednesday, 9 May, 12:00-13:00, Aula 23, Palazzo Nuovo (first floor)

I will compare two views of conditionals that exhibit some interesting affinities: the strict conditional view and the connexivist view. My aim is to show that the strict conditional view is at least as plausible as the connexivist view, contrary to what the fans of connexive logic tend to believe. The first part of the talk draws attention to the similarity between the two views by outlining three arguments that support both of them. The second part examines the case for the theses that characterize the connexivist view, Aristotle’s theses and Boethius’ theses, and finds that the core intuition on which it rests is consistent with the strict conditional view, so that it can be accommodated within classical logic.

Noah van Dongen, Felipe Romero and Jan Sprenger (LLC/University of Turin): Semantic Intuitions—A Meta-Analysis

Wednesday, 18 April, noon, Aula 16, Palazzo Nuovo (first floor)

One of the most famous papers in experimental philosophy (Machery, Mallon, Nichols, and Stich, 2004) analyzes semantic intuitions in prominent cases taken from Saul Kripke’s seminal book “Naming and Necessity” (1970). Machery and colleagues found cross-cultural differences in semantic intuitions pertaining to the reference of proper names in Kripke’s “Gödel” and “Jonah” cases, which were transformed into vignettes usable for experimental research. Their paper kicked off an experimental research program on cross-cultural differences in semantic intuitions. But what is the state of the art right now, almost 15 years later?

We conduct a statistical meta-analysis of experiments that investigate systematic differences in semantic intuitions between Westerners and East Asians, and present our preliminary findings. Along the way, we explain some problems we encountered in completing the project, such as the question of which studies should be included and which should be left out as too remote from the original experiment.

The project is joint work with Matteo Colombo (Tilburg University).

Claus Beisbart (University of Bern): Reflective equilibrium fleshed out

Wednesday, 11 April, noon, Aula di Antica (in the Department of Philosophy and Educational Sciences, second floor of Palazzo Nuovo)

Reflective equilibrium (RE) is often taken to be the crucial method of normative ethics (Rawls), philosophy (Lewis), or understanding more generally (Elgin). Despite its apparent popularity, however, the method is only vaguely characterized, poorly developed, and almost never applied to real-world problems in an open-minded way. The aim of this talk is to present an operationalization and a formal model of RE. The starting point is an informal characterization of what I take to be the key idea of RE, viz. an elaboration of one’s commitments under pressure from systematic principles. This idea is then spelled out in the framework of the Theory of Dialectical Structures, as developed by Gregor Betz. The commitments of an epistemic subject are described as a position in a dialectical structure; desiderata for the positions are postulated; and rules for changing the commitments are expounded. Simple examples in which the model is applied display a number of features that are well known from the literature on RE. The talk concludes by discussing the limitations of the model. This paper is based upon work done jointly with Gregor Betz and Georg Brun.
