Projects - Second Funding Phase
This is an overview of the thirteen projects funded in the second phase (2025 to 2027) of META-REP.
PI: Dr. Marc Jekel
This project aims to enhance metastudies—an approach using diverse small-scale experiments to examine replicability and generalizability—by incorporating a validity-based framework. Focusing on the constructs of intuition and affect and their causal link to cooperation, the study will assess the effectiveness of manipulations and the functional relationships between variables. It includes pre-studies to test construct validity, uses machine learning to model complex variable interactions, and conducts simulations to predict replication outcomes. A follow-up replication study will validate these predictions. The project seeks to clarify why effect sizes vary and improve methods for forecasting replication success in psychological research.
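As a rough illustration of what a simulation-based prediction of replication outcomes can look like, the following minimal sketch (assumed effect size and sample size, not the project's actual models) estimates how often a direct replication of a two-group design would reach statistical significance under an assumed true effect:

```python
# Minimal sketch with assumed numbers: predicted "replication rate" of a
# two-group design, defined here simply as the proportion of simulated
# replications reaching p < .05 given an assumed true standardized effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
true_d, n_per_group, n_sims = 0.3, 80, 5_000   # assumed effect size and design

hits = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_d, 1.0, n_per_group)
    _, p = stats.ttest_ind(treatment, control)
    hits += p < 0.05

print(f"Predicted rate of significant replications: {hits / n_sims:.2f}")
```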
PIs: Prof. Dr. Ulrich Schroeders & Dr. Kristin Jankowsky
This project addresses a growing replication crisis in machine learning (ML) applications within psychology, where methodological flaws often lead to inflated estimates of predictive accuracy. It introduces a five-step ML workflow tailored to psychological research, covering conceptualization through interpretation. The project includes: (1) a systematic review of ML practices in psychology, (2) development of a checklist and risk-of-bias tool for ML studies, (3) an experimental ML prediction challenge to test the impact of best-practice guidance, and (4) creation of an open online course. Together, these initiatives aim to improve the robustness, transparency, and reproducibility of ML-based psychological research.
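One typical flaw such a workflow guards against is data leakage from preprocessing. The sketch below is a generic illustration (not the project's workflow): all preprocessing is nested inside a cross-validation pipeline, so accuracy estimates are not inflated by information from the test folds.

```python
# Generic sketch: evaluating a prediction model with preprocessing nested
# inside cross-validation, so the scaler is re-fit on training data only.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score, KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))             # e.g., simulated questionnaire items
y = 0.5 * X[:, 0] + rng.normal(size=200)   # criterion with one true predictor

model = Pipeline([("scale", StandardScaler()), ("ridge", Ridge(alpha=1.0))])
scores = cross_val_score(
    model, X, y,
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
    scoring="r2",
)
print(f"Cross-validated R^2: {scores.mean():.2f} (+/- {scores.std():.2f})")
```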
PI: Prof. Julia M. Haaf
This project aims to improve how cognitive scientists study individual differences by developing a principled Bayesian workflow. It addresses challenges in measuring reliability and planning studies involving cognitive tasks, such as estimating the number of observations required per participant. The project will create tools for building Bayesian models of individual differences and for flexible study planning that adjusts sample sizes during data collection. An empirical study on agreement attraction in antecedent-reflexive constructions will be conducted to apply and validate these tools, ultimately enhancing the reliability and interpretability of individual differences research in cognitive science.
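The planning question targeted here can be illustrated with a toy simulation (assumed numbers, and deliberately non-Bayesian for brevity): how the reliability of per-person estimates from a cognitive task depends on the number of trials collected per participant.

```python
# Toy simulation with assumed values: reliability of individual-difference
# estimates as a function of trials per participant (more trials shrink
# trial noise and raise the estimate-truth correlation).
import numpy as np

rng = np.random.default_rng(1)
n_sub, true_sd, trial_sd = 100, 30.0, 150.0   # ms: between-person vs. trial noise

for n_trials in (20, 50, 100, 400):
    true_effect = rng.normal(0, true_sd, n_sub)                 # each person's true effect
    observed = true_effect[:, None] + rng.normal(0, trial_sd, (n_sub, n_trials))
    est = observed.mean(axis=1)                                 # per-person estimate
    r = np.corrcoef(true_effect, est)[0, 1]                     # a reliability index
    print(f"{n_trials:4d} trials per participant: r(truth, estimate) = {r:.2f}")
```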
PIs: Dr. Johannes Breuer & Prof. Dr. Mario Haim
This project aims to enhance replicability and reproducibility in Computational Communication Science (CCS), a field facing unique challenges due to data restrictions, evolving topics, and unclear replication standards. Building on findings from the first-phase META-REP project, the team will develop and test “proactive replicability” protocols that encourage reproducible practices during the publication process. These protocols will be supported by online tools that use data mining and language models to extract and assess replicability-related information from manuscripts. The project includes experimental evaluations in collaboration with a major journal and will produce guidelines and training materials for broader adoption across disciplines.
PI: Dr. Dirk Wulff
This project addresses psychology’s generalizability crisis by proposing a language-based approach to map the relationships between psychological constructs and their measures. Leveraging advanced language models, the project will use language embeddings to examine conceptual overlap between constructs and their operationalizations in self-report and behavioral tasks. This method aims to clarify the nomological network of psychological constructs, enhancing our ability to predict when findings will generalize across contexts and measures. The ultimate goal is to improve conceptual clarity and empirical validity in psychological research, moving beyond narrow task-specific findings toward more robust, generalizable science.
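A minimal sketch of the embedding idea (with placeholder vectors standing in for the output of a real language model) shows how conceptual overlap between a construct and its operationalizations could be quantified via cosine similarity; the labels below are hypothetical examples, not the project's materials.

```python
# Placeholder sketch: cosine similarity between embeddings of a construct
# description and candidate measures. Random vectors stand in for real
# sentence embeddings; with real embeddings, higher similarity would
# indicate closer conceptual overlap.
import numpy as np

rng = np.random.default_rng(2)
labels = [
    "construct: risk preference",
    "self-report item: I enjoy taking chances",
    "behavioral task: lottery choice paradigm",
]
emb = {label: rng.normal(size=384) for label in labels}   # placeholder embeddings

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ref = emb[labels[0]]
for label in labels[1:]:
    print(f"{label}: similarity to construct = {cosine(ref, emb[label]):+.2f}")
```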
PIs: Prof. Dr. Stefan Debener, Dr. Carsten Gießing, Prof. Dr. Andrea Hildebrandt & Prof. Dr. Christiane M. Thiel
This project, METEOR 2.0, advances tools and methods to improve the robustness and replicability of brain-cognition findings in cognitive neuroscience, particularly using mobile EEG and graph-based fMRI. Building on METEOR 1.0, which mapped the vast landscape of analysis choices, this phase focuses on optimizing and sustaining multiverse analysis workflows. Key goals include automating updates to the knowledge space, refining analysis through sensitivity-based multiverse deflation, and developing a modular METEOR toolbox for defining, reducing, analyzing, and visualizing multiverse pipelines. The project also promotes standardized reporting practices, supporting more transparent and replicable neuroimaging research.
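The bookkeeping behind a multiverse of pipelines can be illustrated with a small, hypothetical set of analysis decisions; real multiverses of the kind METEOR maps contain far more forking paths, and the choices below are illustrative only.

```python
# Hypothetical sketch: enumerating a small preprocessing multiverse as the
# Cartesian product of analysis decisions.
from itertools import product

choices = {
    "reference":   ["average", "mastoids"],
    "highpass_hz": [0.1, 1.0],
    "artifact":    ["ICA", "threshold"],
    "epoch_ms":    [(-200, 800), (-100, 600)],
}

pipelines = [dict(zip(choices, combo)) for combo in product(*choices.values())]
print(f"{len(pipelines)} candidate pipelines")   # 2 * 2 * 2 * 2 = 16
print(pipelines[0])                              # one fully specified pipeline
```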
PIs: Prof. Dr. Moritz Heene & Frank Renkewitz
This project investigates heterogeneity in psychological replications, focusing on more accurate and interpretable ways of measuring variability in effect sizes. Building on insights from META-REP’s first phase, it introduces the coefficient of variation (CV) of unstandardized effects as an alternative to conventional indices. The second phase pursues three goals: improving measurement of heterogeneity, expanding empirical knowledge, and explaining sources of variability. Four modules guide this work: (1) simulation studies comparing standard indices and the CV and explaining how heterogeneity in true effect sizes can arise, (2) analyses of empirical replication data across effects, treatments, and moderators, (3) extension of analyses to diverse outcome measures in the context of conceptual replications, and (4) further development of the MetaPipeX framework to integrate replications and moderator analyses. The results will sharpen our understanding of replicability and generalizability in psychology, and clarify their relation to theoretical underspecification.
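As a numerical illustration (assumed values, not the project's data, and possibly differing from the project's exact estimator), the CV relates the spread of true effects to their average size, which makes heterogeneity interpretable on the scale of the effect itself.

```python
# Numerical sketch with assumed values: coefficient of variation of
# unstandardized true effects, CV = SD / |mean|, contrasted with the
# heterogeneity SD alone.
import numpy as np

rng = np.random.default_rng(3)
mean_effect, tau = 50.0, 20.0               # e.g., ms difference in an RT paradigm
true_effects = rng.normal(mean_effect, tau, size=10_000)   # true effects across labs

cv = true_effects.std() / abs(true_effects.mean())
print(f"heterogeneity SD ~ {true_effects.std():.1f} ms, CV ~ {cv:.2f}")
# The same SD around a 200 ms mean effect would give CV ~ 0.10: variability
# is judged relative to the size of the effect, not in absolute terms.
```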
PIs: Prof. Dr. Katrin Auspurg & Prof. Dr. Josef Brüderl
This project conducts a comprehensive audit of reproducibility and robustness in research using European Social Survey data. It analyzes around 100 articles across disciplines and journal types through four steps: openness audits, reproducibility checks, correctness/congruence assessments, and robustness analyses. The goal is to identify where and why reproducibility and robustness vary, based on factors like discipline, journal impact, and author practices. By comparing practices against FAIR principles, the project aims to pinpoint key intervention points and develop practical tools to support authors, replicators, and editors in improving research transparency and reliability.
PIs: Dr. Björn Hommel & Prof. Dr. Ruben Lennartz
This project, SYNTH, leverages large language models (LLMs) to enhance research reliability by developing tools in three key areas. First, Synthetic Replications uses LLMs to generate synthetic survey data, allowing reviewers to detect replicability issues before data collection. Second, the Synthetic Nomological Net maps thousands of behavioral science instruments to identify redundancies and help researchers avoid duplicating measures. Third, the Synthetic Peer assists peer reviewers by comparing preregistration protocols with final papers, highlighting deviations. SYNTH includes a field study with Psychological Science to evaluate the Synthetic Peer and emphasizes ethical use, focusing on consent, fairness, transparency, and ensuring human oversight in all decisions.
PI: Dr. Xenia Schmalz
This project explores the bidirectional link between theory building and replicability, using reading and developmental dyslexia research as a case study. Work Package 1 systematically outlines essential steps for robust theory building, creating a generalizable template. Work Package 2 examines computational modeling as a tool to enhance theory development and replicability, identifying when it is most effective. Work Package 3 focuses on practical implementation, developing a living document to share results and recommendations globally, and discussing the potential for mandating best practices. Ultimately, the project aims to break the cycle of vague theories and low replicability, with findings applicable across social and behavioral sciences.
PI: Prof. Dr. Andreas Glöckner
This project addresses how vague theoretical specifications contribute to low replicability in psychology. It aims to develop a formal methodology for specifying theories tailored to replication contexts and apply it to 50 key social psychology theories. Using z-curve analysis and other robustness checks, the project will examine the link between theory specificity and empirical replicability. It will also create automated tools using web scraping and large language models to identify and code relevant studies, considering control and moderator variables. The outcomes include an Open Theory Database to support transparent, cumulative theory development and facilitate systematic evaluation of theories and empirical evidence in psychology.
PI: Prof. Dr. Felix Schönbrodt
This project extends an existing META-REP project by using agent-based models (ABMs) to explore how academic structures and reform proposals affect researchers' careers and the quality of scientific output. Shifting focus from academic outputs to researchers themselves, the project will simulate how demographics, incentives, institutional constraints, and career dynamics shape research behavior and replicability. Key areas include the effects of research evaluation practices, timing of interventions, and institutional conditions. Calibrated with empirical data, the models will be complemented by an evaluation of the “switch-to-open” training program, generating insights to inform policies that support trustworthy and sustainable academic systems.
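In broad strokes, and only as a toy sketch with invented parameters, an agent-based simulation of this kind models individual researchers who respond to incentives and then aggregates the consequences for the published record; the project's actual ABMs (careers, demographics, institutional constraints, calibration to empirical data) are far richer.

```python
# Toy sketch only (invented parameters, not the project's models): agents with
# varying rigor each run one study; higher publication pressure makes
# corner-cutting more likely, inflating publications relative to real effects.
import random

random.seed(4)

class Researcher:
    def __init__(self, rigor):
        self.rigor = rigor            # tendency to resist cutting corners (0-1)
        self.publications = 0

    def run_study(self, pressure):
        real_effect = random.random() < 0.2                        # base rate of true effects
        cuts_corners = random.random() > self.rigor * (1 - pressure)
        if real_effect or cuts_corners:
            self.publications += 1                                  # a "positive" result gets published
        return real_effect

for pressure in (0.0, 0.5, 0.9):
    agents = [Researcher(rigor=random.uniform(0.5, 1.0)) for _ in range(500)]
    real = sum(agent.run_study(pressure) for agent in agents)
    published = sum(agent.publications for agent in agents)
    print(f"pressure={pressure}: {published} published, {real} based on a real effect")
```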
PIs: Dr. Susanne Frick & Prof. Dr. Eunike Wetzel
This project addresses issues of transparency and heterogeneity in measurement reporting in original and replication studies. Building on findings that item-based measure reporting is often incomplete and that modifications impact replicability, it has two goals: (1) improve transparency by developing and evaluating a Measures Checklist for authors to disclose measure use and modifications, supported by a machine-learning-powered Measures Shiny app; (2) provide practical tools for replicators and metascientists, including a taxonomy of measure modifications and a Modifications Shiny app to assess their effects, as well as an investigation of how violations of measurement invariance affect replicability. The project advances META-REP’s “WHY” and “HOW” questions by enhancing reporting standards and tools for assessing the impact of measurement on replicability.