Showing posts with label replication.

Friday, December 13, 2024

Journal of Comments and Replications in Economics invites papers from Ph.D. students

I recently received this email, inviting papers from PhD students:

"I am writing you as Editor of the Journal of Comments and Replications in Economics (JCRE). JCRE is an online journal published by the German National Library of Economics (ZBW – Leibniz Information Centre for Economics). It is an open access journal with no article processing charges. Our Advisory Board includes David Autor, Anna Dreber, Richard Easterlin, Edward Leamer, David Roodman, and Jeffrey Wooldridge.

We are recruiting replication submissions from PhD students at top universities. With the end of the semester upon us, I am asking if you might be aware of any students who have done replications, either in your course or in the courses of your colleagues. If so, the Christmas break could be a great time to encourage them to prepare their replication research for submission to a journal.

We believe JCRE could be an attractive outlet for graduate students’ replication research. Our quick turnaround time and online publishing model provide an opportunity to achieve a peer-reviewed journal publication quickly, perhaps in time for next year’s job market.

The philosophy of JCRE is that replications are essential to assess the reliability of economics research. While some top journals publish replications, it is still difficult for most replications to get published in a peer-reviewed journal. JCRE provides a home for these studies.

We are asking for your help to circulate this opportunity to any students or colleagues who might be interested. The attached flyer may be helpful in this regard.

Thank you for your help. If you have any questions, please do not hesitate to contact the journal at jcre@zbw-online.eu."

Tuesday, October 8, 2024

An own-goal in replication science--retraction of a paper that reported high replicability

A 2023 paper reporting high replicability of psychology experiments has been retracted from Nature Human Behaviour. The retraction notice says, in part:
"The concerns relate to lack of transparency and misstatement of the hypotheses and predictions the reported meta-study was designed to test; lack of preregistration for measures and analyses supporting the titular claim (against statements asserting preregistration in the published article); selection of outcome measures and analyses with knowledge of the data; and incomplete reporting of data and analyses."

RETRACTED ARTICLE: High replicability of newly discovered social-behavioural findings is achievable

This article was retracted on 24 September 2024

Matters Arising to this article was published on 24 September 2024


Abstract

Failures to replicate evidence of new discoveries have forced scientists to ask whether this unreliability is due to suboptimal implementation of methods or whether presumptively optimal methods are not, in fact, optimal. This paper reports an investigation by four coordinated laboratories of the prospective replicability of 16 novel experimental findings using rigour-enhancing practices: confirmatory tests, large sample sizes, preregistration and methodological transparency. In contrast to past systematic replication efforts that reported replication rates averaging 50%, replication attempts here produced the expected effects with significance testing (P < 0.05) in 86% of attempts, slightly exceeding the maximum expected replicability based on observed effect sizes and sample sizes. When one lab attempted to replicate an effect discovered by another lab, the effect size in the replications was 97% that in the original study. This high replication rate justifies confidence in rigour-enhancing methods to increase the replicability of new discoveries.
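A note on the abstract's phrase "maximum expected replicability based on observed effect sizes and sample sizes": for a true effect of a given size, the most one can expect from a replication with a given sample size is the statistical power of the significance test, which is below 100% even when nothing is wrong with the study. Here is a minimal sketch of that calculation in Python; the effect size and sample size are hypothetical numbers chosen for illustration, not figures from the paper.

# Illustration only: the effect size and sample size are hypothetical,
# not taken from the retracted study.
from statsmodels.stats.power import TTestIndPower

d = 0.4            # assumed true effect size (Cohen's d)
n_per_group = 100  # assumed replication sample size per group

# Power of a two-sided, two-sample t-test at alpha = 0.05: the ceiling on
# how often replications of an effect this size should reach P < 0.05.
power = TTestIndPower().power(effect_size=d, nobs1=n_per_group, alpha=0.05)
print(f"Expected share of significant (P < 0.05) replications: {power:.0%}")

With numbers like these the ceiling comes out around 80%, which is why the paper benchmarked its 86% observed rate against expected replicability rather than against 100%.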

############

In general, I'm more optimistic about replications than preregistrations for identifying replicable results and testing hypotheses about them. In this case, preregistration apparently revealed that what was written up as a replication study had begun as something else, and that the goalposts had been moved ex post in inappropriate ways.
######
Somewhat related are my posts on the Einstein Foundation Award for Promoting Quality in Research.

Friday, January 19, 2024

Incentives and mis-incentives in science (Freakonomics part II)

 Freakonomics has a second post on fraud in science, and you can listen or read the transcript here:

Can Academic Fraud Be Stopped?

Two quotes stood out for me:

1. VAZIRE: Oh, I don’t mind being wrong. I think journals should publish things that turn out to be wrong. It would be a bad thing to approach journal editing by saying we’re only going to publish true things or things that we’re 100 percent sure are true. The important thing is that the things that are more likely to be wrong are presented in a more uncertain way. And sometimes we’ll make mistakes even there. Sometimes we’ll present things with certainty that we shouldn’t have. What I would like to be involved in and what I plan to do is to encourage more post-publication critique and correction, reward the whistleblowers who identify errors that are valid and that need to be acted upon, and create more incentives for people to do that, and do that well.

...

2. BAZERMAN: Undoubtedly, I was naive. You know, not only did I trust my colleagues on the signing-first paper, but I think I’ve trusted my colleagues for decades, and hopefully with a good basis for trusting them. I do want to highlight that there are so many benefits of trust. So, the world has done a lot better because we trust science. And the fact that there’s an occasional scientist who we shouldn’t trust should not keep us from gaining the benefit that science creates. And so one of the harms created by the fraudsters is that they give credibility to the science-deniers who are so often keeping us from making progress in society.


############

Earlier:

Sunday, January 14, 2024


"Why Is There So Much Fraud in Academia?" Freakonomics interviews Max Bazerman and others

Below is the latest Freakonomics podcast (and transcript), on fraud in academia.  Those most in the headlines weren't available to be interviewed, but their coauthor (and my longtime HBS colleague) Max Bazerman gives his perspective.

Also interviewed are the Data Colada authors/data sleuths Leif Nelson, Uri Simonsohn, and Joe Simmons (with some clues about the name of their blog), and Brian Nosek, who founded the prizewinning Center for Open Science (https://www.cos.io/).

Here it is:

Why Is There So Much Fraud in Academia?  Some of the biggest names in behavioral science stand accused of faking their results. Freakonomics EPISODE 572.

######

And here are two paragraphs from Max's HBS web page (linked above), suggesting more to come:

"I have been connected to one of the most salient episodes of data fabrication in the history of social science – involving the signing first effect alluded to above. I am working on understanding all known social science frauds in this millennium. Social science also struggles with a broader problem, namely the fact that many studies fail to replicate due to faulty research practices that have become common in social science. Most replication failures can be traced back to the original researchers twisting their data to conform to their predictions, rather than from outright fraud. Trying to produce “significant” results, they may run a study multiple times, in a variety of ways, then selectively report the tests that worked and fail to report those that didn’t. The result is the publication of conclusions that do not hold up as accurate. Both problems – outright data fabrication and this reporting bias that shapes results – need to be tackled, so all of us in academia can publish results that are replicable and can help create value in society.

         "The last dozen years have witnessed multiple efforts to reform social science research to make it more credible, reproducible, and trusted. I am writing a book on reforming social science, which will provide an account of recent data fabrications, and highlight strategies to move forward to create more credible and impactful scientific research."

Friday, October 6, 2023

Correcting science faster by making replication easier and more fun, by Brodeur, Dreber, Hoces de la Guardia & Miguel

If imitation is the sincerest form of flattery, we need to think of replication as one of the sincerest forms of inquiry in social science. (I'm more optimistic about the potential role of replication than I am about pre-registration.)

Here's a Comment in Nature that points out that we're not going to get lots of replications unless we can make them easier and more fun than they have been traditionally.

Replication games: how to make reproducibility research more systematic. In some areas of social science, around half of studies can’t be replicated. A new test-fast, fail-fast initiative aims to show what research is hot — and what’s not. By Abel Brodeur, Anna Dreber, Fernando Hoces de la Guardia & Edward Miguel

"we decided to try to make replication efforts in our fields of economics and political science more systematic. Our virtual, non-profit organization, the Institute for Replication, now holds one-day workshops — called replication games — to validate studies.

"Since October 2022, we’ve hosted 12 workshops across Europe, North America and Australia, with 3 more scheduled this year. Each workshop has typically involved around 65 researchers in teams of 3–5 people, re-analysing about 15 papers. The teams either try to replicate papers, by generating new data and testing hypotheses afresh, or attempt to reproduce them, by testing whether the results hold if the published data are re-analysed. For many papers in our fields of study, in which the reproduction of results often involves re-running computer codes, it’s possible to do much of this work in a single day (see ‘A typical replication games project’). Each team’s findings are released as a preprint report, and these reports will be collated and published each year as a meta-paper. 

...

"To assess large numbers of papers, collaborating with research centres and universities is essential. For example, our current goal is to reproduce and replicate studies in journals that have a high impact factor — specifically, 25% of empirical studies published from 2022 onwards in 8 leading economics journals and 3 leading political science journals, totalling about 350 papers per year. Then we plan to expand into other areas of the social sciences.

...

"Broader partnerships can expand replication efforts beyond academic papers. Earlier this year, we were invited to run replication games with the International Monetary Fund (IMF) and the World Bank, to assess economics and finance papers from the two organizations. We aim to keep running these games annually, validating not only scholarly studies but also policy-oriented reports.

"Establishing these relationships need not be time consuming. We’ve found that simply tweeting about our project and speaking about it at conferences can garner interest. That, along with word of mouth after the Oslo workshop, has been sufficient to make our project well known among economists. As a result, all the organizations that we partnered with originally contacted us — rather than the other way round — asking to get involved.

"Other researchers following in our footsteps should be aware that care is needed to avoid conflicts of interest. We receive no money from the collaborations we’re involved in, because taking payment could be viewed as unethical. At the IMF and World Bank games — where people were reproducing and replicating the work of co-workers — we decided to randomly assign participants to a study, allowed them to remain anonymous and prevented participants from assessing studies authored by direct supervisors or friends.

"It is crucial to protect researchers who check papers from career threats — particularly when an effort uncovers major errors. We recommend that an organization or institute mediates communication between the original study’s authors and the replicators, allowing the latter to remain anonymous if they wish. One of us, acting as a representative for the Institute for Replication, serves in this capacity after each replication game.

"We know that receiving an e-mail to say that someone is checking your work can be stressful. So we contact the original authors only after replicators have written up their reports, to avoid causing researchers undue worry while they wait for an effort’s results. Rather than treating the discovery of errors as a ‘gotcha’ moment, which can put authors on the defensive, we acknowledge in our correspondence that all researchers make mistakes. To help make the process collegial, we allow authors to suggest edits to the report, and ask replicators to suggest changes to the authors’ responses.

...

"We think that efforts such as ours that normalize replication will ultimately put pressure on funders and journals to play their part. We are excited to see replication efforts in our fields — and others — continue to expand. Systematic replication has the potential to make correcting science faster. Let the games begin."


Tuesday, May 16, 2017

Replication in Economics

Professor Bob Reed at U of Canterbury points me to The Replication Network, a website he co-founded with Dr Maren Duvendack of U of East Anglia to promote replication in Economics and to assemble information about replication studies.

See also their paper and the others in the Papers and Proceedings (May 2017) issue of the American Economic Review, which begins with a section on replication:
REPLICATION IN MICROECONOMICS
REPLICATION AND ETHICS IN ECONOMICS: THIRTY YEARS AFTER DEWALD, THURSBY, AND ANDERSON