Metascience and Methodology

In 2005, John Ioannidis published an earth-shaking article titled “Why Most Published Research Findings Are False”. It argued that most empirical findings in science fail to replicate, meaning that they are likely artefacts of the experimental setting or of statistical fluctuation and do not correspond to any real effect. Ioannidis proposed that this situation results from norms and institutions which reward the publication of positive results - even when the empirical methods used to establish those results are insufficient - while impeding the publication of negative results. Indeed, papers typically receive attention from peers when they claim the existence of novel and interesting effects, and academic institutions typically prefer to employ and promote the researchers who receive the most attention from their peers. In other words, empirical scientists are de facto rewarded for running many experiments and publishing only those whose results are positive. This elicited a vast wave of worry about the validity of empirical science, and opened the way for more research on how science is conducted: metascience. Metascience employs research methods, typically borrowed from the empirical sciences, to scrutinise how research itself is carried out. It generally seeks to raise the quality of scientific inquiry by pinpointing areas where improvements can be made.

For example, metascience enabled a push for pre-registering the methods and expected results of an empirical study before it is conducted. This enforces greater visibility for negative results, and notably makes p-hacking - a publishing strategy in which scientists run many statistical tests and publish whichever positive result they can get - more difficult. However, statistical rigour can only ensure the internal coherence of the specific tests we choose to apply to the available data or experimental setup. It cannot help us judge the adequacy of specific methodologies for answering specific questions, much less the relevance of the questions we choose to ask. For example, the field of behavioural economics has been characterised since its inception by the construction of elaborate setups demonstrating deviations from economic rationality in human behaviour - without ever asking whether economic rationality was a reasonable model to begin with, which other model would better explain human behaviour, or even what role the setup itself played in producing the observed behaviour. A field which relies on poorly conceived tests of ill-defined hypotheses does not need bad statistical analysis to make a null or even negative contribution to the edifice of scientific knowledge.
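The mechanics of p-hacking can be made concrete with a small simulation. The sketch below (illustrative only; the sample sizes, number of experiments, and significance threshold are arbitrary assumptions, not drawn from any particular study) constructs experiments where the null hypothesis is true by design, yet a test run enough times still yields “significant” results purely by chance - exactly the findings a selective publication strategy would surface.

```python
import random
import statistics

random.seed(0)  # fixed seed so the simulation is reproducible

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

n_tests = 100        # number of independent "experiments" run
false_positives = 0  # experiments that look significant despite a true null

for _ in range(n_tests):
    # Both groups are drawn from the same distribution, so any
    # apparent "effect" between them is pure statistical noise.
    control = [random.gauss(0, 1) for _ in range(30)]
    treatment = [random.gauss(0, 1) for _ in range(30)]
    # |t| > 2.0 roughly corresponds to p < 0.05 at these sample sizes
    if abs(welch_t(treatment, control)) > 2.0:
        false_positives += 1

print(f"{false_positives} of {n_tests} null experiments look 'significant'")
```

By construction roughly 5% of these null experiments cross the threshold; a researcher who reports only those crossings, and discards the rest, publishes pure noise as discovery. Pre-registration blocks this by committing the analysis before the data can select it.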

Kairos aims to cultivate a critical and reflexive relationship to scientific methodology, and to participate in the construction of more efficient and coherent methodologies for specifying and testing scientific hypotheses. We notably draw on epistemology and formal modelling to better understand how specific experimental setups produce data, as well as how scientific communities understand the relation of these data to their hypotheses. We have discussed in depth the role of experimental methodology in cognitive science, given the difficulties of relating behavioural data to specific mechanistic models of the mind. More generally, we seek to understand how the contextuality of natural processes constrains (and enables the construction of) human knowledge. Indeed, we recognise the importance of the context-specific semiotics developed by complex natural systems, such as minds, organisms, and ecosystems. The exercise of scientific representation requires capturing the “subjective” semiotics developed by natural systems through the “objective” semiotics developed by science. As the very attempt to frame acontextual “laws of nature” entailing the evolution of natural systems erases the contextuality of their behaviour, we need to find strategies for scientific modelling which preserve an adequate relation between the semiotics of the target system and those of the model.

Our publications

  • Friedman, Daniel, et al. 2022. “An Active Inference Ontology for Decentralized Science: From Situated Sensemaking to the Epistemic Commons.” https://zenodo.org/record/6320575.
  • Guénin–Carlut, Avel. 2020. “Cognition in Eco, Cognition in Vitro - Measurement and Explanation in Cognitive Science.” https://doi.org/10.17605/OSF.IO/ERCZ6.

This article was updated on October 23, 2023