Top Harvard cancer researchers accused of scientific fraud; 37 studies affected

Researchers are accused of manipulating data images with simple copy-and-paste.

Beth Mole - Jan 22, 2024 10:45 pm UTC

The Dana-Farber Cancer Institute in Boston.

The Dana-Farber Cancer Institute, an affiliate of Harvard Medical School, is seeking to retract six scientific studies and correct 31 others that were published by the institute’s top researchers, including its CEO. The researchers are accused of manipulating data images with simple methods, primarily with copy-and-paste in image editing software, such as Adobe Photoshop.


DFCI Research Integrity Officer Barrett Rollins told The Harvard Crimson that data sleuth Sholto David had contacted DFCI with allegations of data manipulation in 57 DFCI-led studies. Rollins said that the institute is "committed to a culture of accountability and integrity" and that "every inquiry about research integrity is examined fully."

The allegations involve four senior researchers: DFCI President and CEO Laurie Glimcher; Executive Vice President and COO William Hahn; Senior Vice President for Experimental Medicine Irene Ghobrial; and Harvard Medical School professor Kenneth Anderson.

The Wall Street Journal noted that Rollins, the integrity officer, is also a co-author on two of the studies. He told the outlet he is recused from decisions involving those studies.

Amid the institute's internal review, Rollins said the institute identified 38 studies in which DFCI researchers are primarily responsible for potential manipulation. The institute is seeking retraction of six studies and is contacting scientific publishers to correct 31 others, totaling 37 studies. The one remaining study of the 38 is still being reviewed.

Of the remaining 19 studies identified by David, three were cleared of manipulation allegations, and 16 were determined to have had the data in question collected at labs outside of DFCI. Those studies are still under investigation, Rollins told The Harvard Crimson. "Where possible, the heads of all of the other laboratories have been contacted and we will work with them to see that they correct the literature as warranted,” Rollins wrote in a statement.

Despite finding false data and manipulated images, Rollins stressed that this does not necessarily mean that scientific misconduct occurred, and the institute has not yet made such a determination. The "presence of image discrepancies in a paper is not evidence of an author's intent to deceive," Rollins wrote. "That conclusion can only be drawn after a careful, fact-based examination which is an integral part of our response. Our experience is that errors are often unintentional and do not rise to the level of misconduct."

The very simple methods used to manipulate the DFCI data are remarkably common among falsified scientific studies, however. Data sleuths have gotten better and better at spotting such lazy manipulations, including copied-and-pasted duplicates that are sometimes rotated and adjusted for size, brightness, and contrast. As Ars recently reported, all journals from the publisher Science now use an AI-powered tool to spot just this kind of image recycling because it is so common.
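To make the detection idea concrete, here is a minimal, hypothetical sketch of one common approach: split two published figure images into small tiles, hash each tile, and flag tiles whose hashes match exactly across the images, which is how straightforward copy-and-paste reuse can surface. It assumes Python with NumPy and Pillow installed, uses made-up file names (figure1.png, figure2.png) and an assumed tile size, and deliberately ignores the harder cases mentioned above (rotation, resizing, brightness or contrast changes); it illustrates the general technique, not the specific tool the Science journals use.

# Minimal sketch: tile-based duplicate detection between two figure images.
# Assumes NumPy and Pillow are installed; file names below are hypothetical.
import numpy as np
from PIL import Image

TILE = 32  # tile size in pixels (assumed, tunable)

def average_hash(tile):
    # Downscale the tile to 8x8 and threshold against its mean to get a 64-bit hash.
    small = np.asarray(Image.fromarray(tile).resize((8, 8)), dtype=np.float32)
    bits = (small > small.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def tile_hashes(path):
    # Map each tile hash to the (row, col) offsets where it occurs in the image.
    gray = np.asarray(Image.open(path).convert("L"))
    hashes = {}
    for r in range(0, gray.shape[0] - TILE + 1, TILE):
        for c in range(0, gray.shape[1] - TILE + 1, TILE):
            tile = np.ascontiguousarray(gray[r:r + TILE, c:c + TILE])
            if tile.std() < 2:  # skip nearly flat background tiles
                continue
            hashes.setdefault(average_hash(tile), []).append((r, c))
    return hashes

def shared_tiles(path_a, path_b):
    # Return pairs of tile positions whose hashes match exactly across the two images.
    a, b = tile_hashes(path_a), tile_hashes(path_b)
    matches = []
    for h, positions in a.items():
        for pa in positions:
            for pb in b.get(h, []):
                matches.append((pa, pb))
    return matches

if __name__ == "__main__":
    hits = shared_tiles("figure1.png", "figure2.png")  # hypothetical inputs
    print(f"{len(hits)} identical-looking {TILE}x{TILE} tile pairs found")

In practice, any flagged tile pairs would still need human review, since exact-hash matches can also come from legitimately reused controls or from uniform regions the flatness filter misses.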


Scientists Investigating Alzheimer’s Drug Faulted in Leaked Report

A professor at the City College of New York engaged in “significant research misconduct,” an expert committee concluded.

A close view of a placard with "Cassava Sciences" written on it affixed to a stone sign outside an office building.

By Apoorva Mandavilli

A neuroscientist whose studies undergird an experimental Alzheimer’s drug was “reckless” in his failure to keep or provide original data, an offense that “amounts to significant research misconduct,” an investigation by his university has concluded.

The drug, simufilam, is made by Cassava Sciences, a pharmaceutical company based in Texas, and is in advanced clinical trials. The neuroscientist, Hoau-Yan Wang, a professor at the City College of New York, frequently collaborated with Lindsay H. Burns, the company's chief scientist, on studies that outside experts and journals have called into question.

A committee was convened by the City University of New York, of which the college is a part, to investigate the work, and it concluded in a report that Dr. Burns was responsible for errors in some of the papers. But the investigators reserved their sharpest criticism for Dr. Wang, reproaching him for “long-standing and egregious misconduct in data management and record keeping.”

The report was obtained and made public by the journal Science on Thursday. Dee Dee Mozeleski, a spokeswoman for City College, declined to comment on the document but said that the school would formally release the report later this month.

Dr. Wang did not respond to a request for comment. Remi Barbier, the founder and chief executive of Cassava, said in a statement that the company would continue its clinical trials. “We remain confident in the underlying science for simufilam, our lead drug candidate,” he said.

Alzheimer’s disease affects roughly six million Americans. Simufilam has been eagerly anticipated by patients and families, and fervidly supported by a group of investors. Cassava’s stock soared after each round of reported results from its trials — at one point by more than 1,500 percent.



US Office of Research Integrity received 269 allegations of research misconduct last fiscal year

Office closed 36 cases and released nine findings of research misconduct during the period

By Dalmeet Singh Chawla, special to C&EN, February 24, 2023


The US Office of Research Integrity (ORI) received a total of 269 complaints of alleged research misconduct between Oct. 1, 2021 and Sept. 30, 2022, a new report released by the agency reveals.

During the period, the agency closed 42 cases and released nine findings of research misconduct (one involving a single person but two institutions); 10 other investigated cases yielded no such findings. The ORI declined to pursue the remaining 22 cases. In the nine cases with findings, seven involved both falsification and fabrication, one involved falsification alone, and one involved plagiarism.

The ORI defines research misconduct as “fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results.” In two of the nine cases, the researchers were banned from federal research funding for a certain period of time, and retractions or corrections were requested for four papers.

In the last fiscal year, the ORI continued 33 cases from previous years and opened 38 new ones. The ORI has also awarded three grants totaling just under $450,000 to researchers conducting studies in the area of research integrity.

In September 2022, the ORI released a request for information, asking institutions, funders, and concerned individuals for their views on the ORI’s plans to revise the 2005 Public Health Service Policies on Research Misconduct. In the new report, the ORI reveals that 31 institutions, organizations, and individuals submitted comments, which the agency will use to develop a notice for public comment.



Research Misconduct by the Numbers

The data on this page reflect the allegations of research misconduct received by the Research Integrity and Administrative Investigations Division, the research misconduct cases opened and closed, and the outcomes of closed cases, by Fiscal Year (FY). This page will be updated yearly.

FY 2023 (October 1, 2022 – September 30, 2023)

The table below shows the number of research misconduct allegations received, cases opened, and cases closed in FY 2023. Investigations may span multiple FYs.

Research Misconduct Cases FY 2023

  Plagiarism Fabrication/Falsification Mixed* Total
Allegations 33 21 N/A 54
Cases Opened 14 12 0 26
Cases Closed 10 4 1 15

* “Mixed” indicates cases that involved more than one type of allegation. 

When we receive an allegation of research misconduct, we initiate an inquiry to determine whether it has enough substance to warrant an investigation. For example, we may send the subject of the allegation a letter requesting an explanation and supporting evidence. If there is enough substance to proceed, we open a formal investigation.

Investigations involve collecting and reviewing facts, assessing the elements required for a research misconduct finding, and determining whether research misconduct occurred. We generally refer research misconduct investigations, along with any evidence we obtained during our inquiry, to the awardee institution. We also provide procedural guidance to the institution’s investigation committee. Once the institution completes its investigation, it sends us a report. We review the report for accuracy and completeness and decide whether to accept its conclusions. We may accept an institution’s report in whole or in part, request additional information, or initiate our own independent investigation.

If we conclude that research misconduct did not occur, we either close the case with no action or close the case with a warning letter. A warning letter may be sent to multiple subjects and/or to an institution.

If we conclude research misconduct occurred, we send NSF a report of investigation, which includes recommended actions. These actions can range from required training up to government-wide debarment. NSF ultimately decides whether to make a finding of research misconduct and implement our recommended actions. NSF often requires more than one action for a case in which they make a finding of research misconduct. In serious cases, NSF may issue a government-wide debarment, which prevents individuals or entities from participating in any government contracts, subcontracts, loans, grants, and other assistance programs for a specified period. Institutions may also take actions at the conclusion of their investigative process.

The table below shows the outcomes of the research misconduct cases closed during FY 2023.

Outcomes of Research Misconduct (RM) Cases FY 2023

  Plagiarism Fabrication/Falsification Mixed* Total
NSF RM Finding/Action 3 2 1 6
Included Debarment** 1 1 1 3
Closed with Warning 4 1 0 5
Closed with No Action 3 1 0 4

** “Included Debarment” is a subset of NSF RM Findings/Actions.

FY 2022 (October 1, 2021 – September 30, 2022)

The table below shows the number of research misconduct allegations received, cases opened, and cases closed in FY 2022. Investigations may span multiple FYs.

Research Misconduct Cases FY 2022  

  Plagiarism Fabrication/Falsification Mixed* Total
Allegations Received 37 16 0 53
Cases Opened 10 8 1 19
Cases Closed 15 2 2 19

The table below shows the outcomes of the research misconduct cases closed during FY 2022. 

Outcomes of Research Misconduct Cases FY 2022 

  Plagiarism Fabrication/Falsification Mixed Total
NSF Findings and Actions 5 1 0 6
Included Debarment* 1 0 0 1
Closed with Warning 8 1 2 11
Closed with No Action 2 0 0 2

* “Included Debarment” is a subset of NSF Findings and Actions. 

FY 2021 (October 1, 2020 – September 30, 2021) 

The table below shows the Research Integrity and Administrative Investigations Division’s allegations received, cases opened, and cases closed during FY 2021. 

Research Integrity and Administrative Investigations FY 2021

  Plagiarism Fabrication/Falsification Whistleblower Retaliation Other* Mixed* Total
Allegations 40 11 4 118 N/A 173
Cases Opened 19 4 3 3 1 31
Cases Closed 20 2 4 10 3 39

* “Other” indicates violations of non-research misconduct regulations (e.g., violations of reviewer confidentiality, human subjects regulations, or matters not appropriate for investigation).
* “Mixed” indicates cases that involved more than one type of allegation.

The figure below shows the outcomes of the research misconduct cases closed during FY 2021. 

[Figure: Outcomes of research misconduct cases closed during FY 2021 — 13 cases referred to the institution; 19 closed with a warning letter; 9 with NSF findings and actions; 1 with debarment; 39 cases closed in total. By allegation type: 51.3% plagiarism, 5.1% fabrication/falsification, 25.6% other, 10.3% whistleblower retaliation, 7.7% mixed.]


Q&A: The scientific integrity sleuth taking on the widespread problem of research misconduct


By Deborah Balthazar Feb. 28, 2024


Elisabeth Bik, a microbiologist by training, has become one of the world’s most influential science detectives. An authority on scientific image analysis who’s been profiled in The New Yorker for her unique ability to spot duplicated or doctored photographs, she appeared frequently in the news over the past year as one of the experts who raised research misconduct concerns that led to an investigation into, and the eventual departure of, former Stanford president Marc Tessier-Lavigne.

Bik first became interested in plagiarism as a hobby while working as a researcher at Stanford University in 2013. She later began specializing in image duplication specifically, which she believes is a more serious problem for science as a whole.


To plagiarize or be plagiarized is bad for scientists, “but it doesn’t necessarily bring a new or false narrative into science,” Bik told STAT. But “if a scientist photoshopped something, or has two images that overlap, but presents them with two different experiments, that is actually cheating.”


Bik made the decision to become a full-time science sleuth in 2019. STAT caught up with Bik, who was selected as a member of the 2024 STATUS List, while she was traveling in Taiwan to talk about her talent for spotting patterns, the impact of artificial intelligence on her work, and why she thinks journals and institutions are still too slow to address research misconduct.

This interview has been edited for clarity and brevity.

After the investigation into the Stanford president Marc Tessier-Lavigne, your work has definitely come into the mainstream. Are there any other really high-profile things that you’ve been looking at since then?

I’ve been working with [investigative journalist] Charles Piller and some other sleuths in uncovering some cases of fraud in the Alzheimer’s space. And so Marc Tessier-Lavigne sort of fell under that. But we worked on a case: [University of Southern California neuroscientist] Berislav Zlokovic. Charles Piller wrote about it in Science [in November].

That is sort of a big case, because this is a big lab with lots of money. This researcher works in Alzheimer’s, but also on stroke. And there was a clinical trial that he was getting involved in, a drug that was a result of his research. [The National Institutes of Health] halted the clinical trial because of his articles.

That is a pretty big and very immediate action. I don’t think that has happened very, very frequently that because of these misconduct investigations, a clinical trial gets paused. So that was one of the consequences of this research.


In this case, dozens of papers co-authored by Zlokovic had doctored evidence — images and data — supporting the idea that a compound he studied, 3K3A-APC, could benefit stroke patients. This is a clear example of how this kind of erroneous data can have an impact on people. Is there any way these drugs might get through?

In general, it’s hard to say, because a drug might still work, even though the people might have cheated in the lab. I’m not ruling that out. I think the chances that the drug will work are low if there was obvious cheating, looking at images that have been published. But it’s just hard to know, hard to predict.

In the beginning, you were looking at plagiarism in general. What made you want to focus directly on images?

Because once I found the first case of image duplication that I found myself, I just thought that was more serious for science as a whole. I felt plagiarism is bad for scientists, or to be plagiarized, but it doesn’t necessarily bring a new or false narrative into science. Well, if a scientist photoshopped something, or has two images that overlap, but presents them with two different experiments, that is actually cheating. And so now, those scientists then would present results as if they happened, but they didn’t happen. They were falsified or even fabricated.

Of the three forms of misconduct — which are plagiarism, falsification, and fabrication — I feel plagiarism is the least bad; it’s not as bad as falsification or fabrication. And so as soon as I found images that appear to have been duplicated and reused to represent different experiments, I felt this is much worse for science. And I have, apparently, a talent to recognize them.


Speaking of your talent, has anyone wondered how your brain works?

So I was profiled for The New Yorker, by Ingfei Chen. I was tested by [Jeremy Wilmer, a psychology researcher at Wellesley College] who had a lot of online tests. I did a lot of online tests that were designed by this person to test people’s ability to spot patterns or to recognize faces, and I turned out to be pretty bad at recognizing faces. It takes me much longer than the average person to remember faces at a conference. I feel very miserable, because I have no idea who they are.

I just don’t have that brain module, but I am good at pattern recognition and 3-D, spatial orientation. Most people see what I see, once I point it out. I think it’s a combination: I have perhaps a little bit better than average talents for spotting patterns. But I also am crazy enough to do this as a hobby, too.

When I still worked at Stanford, I scanned 20,000 papers to have an idea of how often we see these duplications. And that was when I was still full-time employed. Now I just do this full time. But back then, it was just in the evenings or on the weekends that I did that. I don’t think a lot of people would have scanned 20,000 papers, just to have an idea how often a particular phenomenon happens.

At this point, how many papers have you analyzed?

Oh, at some point, it was over 100,000. It’s hard to know because I don’t keep track of that anymore. I know how many [papers with problems] I’ve found — around 8,000. Some of those have plagiarism or other problems, such as animal ethics issues or a lack of ethical approval, but most are image problems. If we assume roughly one in 25 papers has an image problem, and I found around 8,000 — and this is a very rough calculation — I would have screened roughly 200,000 papers.

Is this how you imagined what your life would look?

I would rather just do image duplication searches, because I really enjoy the deep focus that I can get in a day, if I just do it for hours and hours in a row. I don’t mind doing that. But I also think it’s important to give talks because it’s important to share my frustrations about the lack of response sometimes, from scientific journals and institutions. Now, having the chance to go to Taiwan and talk to and meet with lots of people and just hear a lot of different viewpoints, it’s just an amazing opportunity that I don’t want to miss.

What are you giving talks on in Taiwan?

It’s just my general talk about how I got to switch my career and do this, why I think it’s important, why I think misconduct is bad. But also, what can we do better — as scientific publishing, institutions, or researchers. This was mainly talking to people who were involved in teaching graduate students research integrity classes on how not to do science fraud. Also, I usually talk about ChatGPT and artificial intelligence; how that can, on one hand, find these problems, but on the other hand, create them as well, because generative AI can generate text, and also images that are completely fake and look fairly realistic.

I’m not originally from the U.S., I’m from the Netherlands. English is not my first language. And I share that people who speak English from birth have some advantage in writing a scientific paper in English, because English is the universal language of science. It is hard if English is not your first language, and is it then allowed to use ChatGPT, or some other AI language model, to help you rewrite your text? Of course, you have this thin line: When is it just rewriting your own text? When is it completely generating it from scratch?


From the perspective of the researchers, I can see that AI would definitely make their jobs easier. Would it make your job harder in trying to determine what exactly is a real image?

I don’t think I will be able to recognize a good AI-generated image anymore. We have found some images generated two, three, four years ago, which we believe were AI-generated. But this was by a paper mill, and I think they made the error of putting all these AI-generated Western blot bands on the same background. So because they all have the exact same background, we could recognize that pattern of noise. And we found 600 papers that we believe are AI-generated, but a more primitive form of AI.

But I think there’s probably a lot of papers being produced right now that we can no longer recognize as fake. We might have an idea, thinking, that’s probably a paper mill, but you also don’t want to falsely accuse anybody. So if there’s no real duplication or something that is obviously wrong, it’s just hard to really comment on that. You also don’t want to insult anybody saying, “Oh, your paper is fake.” You have to have some real proof that a paper is fake.

You are probably one of the most visible people doing this work, especially since you do use your full name and a lot of people use pseudonyms on PubPeer. Have you faced any danger?

I’ve been threatened to be sued several times. None of that actually ever happened. But at some point, my home address was published online, in one of those complaints that was filed against me. I will be worried, a little bit, that there will be a disgruntled author whose work I’ve criticized. And of course, there have been many of those. It only takes one mad person to do something harmful.

And I’ve had a lot of insults online. But so far, I’ve stayed, relatively, in the safe zone, but one of the professors at Harvard has now sued three whistleblowers, the Data Colada team. And that definitely gave me pause.

I know that you’ve experienced a lot of frustrations. Is there a percentage of papers that people like the journal editors are just not taking a look at?

Most of the papers, the journals are not taking action. I actually almost gave up on sending the emails to editors, because it’s so much work. If I investigate a bunch of papers from a set of researchers, let’s say I find 30 papers that have problems, those might have been published in 20 different journals. And now I have to track down [the email addresses of] editors of 20 different journals.

So of my initial set, after five years, two-thirds were still not corrected or retracted. I think that number is slowly moving towards 50%. But it’s been almost 10 years since I reported them, and half of them are still not addressed. So that is just frustrating.

Is there anything you’ve seen that’s positive in what journal editors are doing to increase their scrutiny, or signs people are taking this more seriously?

Journals seem to be slowly starting to be convinced that they need to take action, but it’s still a very slow process. Institutions seem to be still lagging in how they address these cases; they seem to operate mostly in secrecy. I think with the Stanford president, that was a unique case. It was because of the writing of student journalists [at The Stanford Daily] that the whole case blew up and was actually then investigated by an outside committee. And so from my perspective, working with journalists seems to be what moves these cases forward.

Is there anything that you think that would be important to touch on that I haven’t asked you yet?

I don’t do this to break people’s careers. I do this because I care about science. I feel that is also an important part of science, and there should be a little bit more of a career in it. I’m crowdfunded. Why isn’t that part of science also being funded?

I just think it’s wonderful that I got recognition by the STATUS List. It’s very helpful to see that this type of work is appreciated, perhaps not directly by the scientific community, but by other people who think that this work is important.



NIH Extramural Nexus


Trends in Extramural Research Integrity Allegations Received at NIH

At the start of the year, we briefly touched on our efforts to address research integrity violations in our 2022 Year In Review. Today we are sharing some more information on the overall trends in research integrity allegations associated with the NIH grants process. I want to note that while we are sharing these aggregate data, NIH does not discuss grants compliance reviews of specific funded awards, recipient institutions, or supported investigators, or whether such reviews occurred or are underway.

Table 1 shows the total number of new research integrity allegation types that NIH received between calendar years (CYs) 2013 and 2022. These include allegations related to traditional research misconduct as well as professional misconduct, such as peer review violations, foreign interference, harassment, grant fraud, and other types. We generally handled an average of about 100 allegations each year up to around 2017. Over the last five years, however, the numbers rose sharply.

Table 1: Total Research Integrity Allegation Types: CYs 2013 to 2022

Calendar Year Allegations*
2013 104
2014 110
2015 89
2016 77
2017 117
2018 342
2019 549
2020 531
2021 573
2022 564

* Represents the number of allegation types received each calendar year (not the number of allegations). For instance, an allegation that involves two different types (such as research misconduct and harassment) is counted twice.

Our integrity portfolio broadened greatly around 2018 as professional misconduct became a major focus along with traditional scientific misconduct. Importantly, we also made concerted efforts with the research community over recent years to identify and address integrity issues.

Table 2 breaks down the data presented above by the type of research integrity allegation. Here are the highlights:

  • Allegations of research misconduct (fabrication, falsification, and plagiarism) were generally less than a hundred per year, but the numbers started rising in 2019. This increase may be due to our reminder in late 2018 that recipients notify NIH when they find, learn of, or suspect research misconduct that impacts or may impact an NIH-supported project. Check out our webpage on how NIH handles research misconduct allegations for more details.
  • Alleged violations of peer review rules increased from fewer than ten in 2013 when we first started tracking them, to nearly a hundred in 2022. This increase is also commensurate with many reminders to the research community about rules and violations, such as sharing and disclosing of confidential peer review materials with unauthorized individuals. Read more about these types of violations on our peer review case studies page.
  • Foreign interference allegations associated with the NIH grants process were tracked starting around 2017. The number of these types of allegations peaked around 2019 but have substantially decreased over the past few years (likely related to our outreach and increased awareness in the community of reporting requirements). Data on outcomes of foreign interference allegations are updated semi-annually on our site.
  • Allegations and notifications related to harassment (including sexual harassment), discrimination, and hostile work environments have increased substantially since we started tracking these in 2018. This rise in numbers is likely due in part to the heightened awareness and attention about harassment in the scientific workforce, together with our outreach efforts and strengthened recipient notification requirements . See our site for more data on harassment allegations and outcomes .
  • The number of grant fraud allegations has increased proportionally with the overall increase in allegations over the years. The NIH Grants Policy Statement gives examples of grant fraud, including embezzlement, misuse of property, theft of funds, and false statements. Of note, we refer the grant fraud allegations we receive to the NIH Office of Management Assessment, which is charged with receiving and handling these types of allegations.
  • The final category, referred to as “Other”, includes other types of research integrity concerns, such as misrepresentations in grant documents (e.g., false credentials in a biosketch) and intellectual property or patent disputes.

Table 2: Research Integrity Allegations by Type: CYs 2013 to 2022

Calendar Year Research Misconduct Peer Review Foreign Interference Harassment Grant Fraud Other
2013 74 1 0 0 10 19
2014 87 3 0 0 5 15
2015 59 10 0 0 8 12
2016 46 8 0 0 4 19
2017 77 7 5 0 14 14
2018 98 45 111 31 29 28
2019 134 67 187 108 31 22
2020 138 59 169 108 27 30
2021 158 55 108 162 44 46
2022 169 98 23 183 41 50

Figure 1 below is a visual representation of the same CY 2013 to 2022 data as presented in Table 2.

[Figure 1: Stacked bar chart of research integrity allegation types by calendar year, CYs 2013 to 2022, corresponding to the data in Table 2.]

In April 1999 the National Science and Technology Council issued a report on “Renewing the Government-University Partnership.” That report stated as a Guiding Principle: “The ethical obligations entailed in accepting public funds and in the conduct of research are of the highest order and recipients must consider the use of these funds as a trust. Great care must be taken to ‘do no harm’ and to act with integrity. The credibility of the entire enterprise relies on the integrity of each of its participants.” NIH strives to exemplify and promote the highest level of integrity, accountability, and stewardship. We appreciate the research community’s continued support and engagement.

If you have any concerns or come across misconduct, we strongly encourage you to report it. Appropriate contacts are available on our Report a Concern page (please also see this NIH Extramural Nexus article).



National Academies of Sciences, Engineering, and Medicine; Policy and Global Affairs; Committee on Science, Engineering, Medicine, and Public Policy; Committee on Responsible Science. Fostering Integrity in Research. Washington (DC): National Academies Press (US); 2017 Apr 11.


Appendix D Detailed Case Histories

The following five detailed case histories of specific cases of actual and alleged research misconduct are included in an appendix to raise key issues and impart lessons that underlie the committee's findings and recommendations without breaking up the flow of the report. In several cases, including the translational omics case at Duke University and the Goodwin case at the University of Wisconsin, the committee heard directly from some of those involved.

The case histories differ in length in order to devote sufficient explanation to the issues involved in each case. For example, the translational omics case at Duke University unfolded over several years and involved multiple complex issues, making a lengthier discussion necessary. Issues covered in the cases include individual and institutional conflicts of interest, data falsification and fabrication, whistleblower retaliation and protection, insufficient or abusive mentoring, ghostwriting, authorship roles, institutional and administrator responsibilities, journal responsibilities, implementation of the federal government's research misconduct policy, and the costs and impacts of research misconduct.

Some cases mentioned in the report are not included in the appendix because the shorter descriptions already sufficed to illustrate the issues being described.

  • THE WAKEFIELD MMR-AUTISM CASE

Synopsis and Rationale for Inclusion: An undisclosed conflict of interest between a principal investigator and the entity funding their research can have far-reaching effects beyond the scope of the research study. In the MMR-autism case, Andrew Wakefield had undisclosed monetary conflicts of interest and was found to have violated human subjects protection rules in research underlying an article published in the Lancet ( UK GMC, 2010 ; Triggle, 2010 ). 1 In the opinion of the British Medical Journal, Wakefield also falsified data ( Godlee et al., 2011 ). A formal retraction did not occur for over a decade, allowing ample time for the purported findings to become an important support for the anti-vaccine movement. This case not only confronts the issue of conflicts of interest but also weaknesses in institutional research governance, coauthor responsibility, and journal responsibility.

In 1998, Andrew Wakefield published a paper in The Lancet claiming that he had found a link between the measles, mumps, and rubella (MMR) 3-in-1 vaccine and regressive autism, as well as a bowel disorder, using a sample of 12 children. Within a year, an article with a sample of 498 children rebutted Wakefield's findings, followed by additional rebuttal articles for several years thereafter ( Taylor et al., 1999 ). However, Wakefield's article resonated with anti-vaccine movements in several countries, especially in the United Kingdom and United States, prompting some parents to refrain from vaccinating their children for fear of a connection to autism, contributing to decreased vaccination rates in the United States and United Kingdom and compromising the near success of eradicating these diseases from Western countries.

Six years after the 1998 article was published, 10 of the 12 coauthors retracted the paper's interpretation that the results suggested a possible causal link between the MMR vaccine and autism ( Murch et al., 2004 ). In 2010, based on the UK General Medical Council's (GMC) Fitness to Practice Panel findings, The Lancet retracted the full article ( Lancet Editors, 2010 ). Both of these retractions were prompted by the investigation by a British journalist, Brian Deer, initially published in the Sunday Times in early 2004. Deer exposed that Wakefield had undisclosed financial interest in the research results, reporting that Wakefield had negotiated a contract with a lawyer who hired him to provide evidence against the MMR vaccine to help support a lawsuit against the MMR manufacturing company ( Deer, 2011a ). Deer reported that Wakefield profited approximately $750,000 USD from the partnership ( Deer, 2011a ). In addition, Deer stated that Wakefield applied for a patent on his own measles vaccine, from which he was positioned to personally profit ( Deer, 2011a ). In addition, Deer reported that throughout the study, “Wakefield had repeatedly changed, misreported and misrepresented diagnoses, histories and descriptions of the children, which made it appear that the syndrome had been discovered” ( Deer, 2011a ). Lastly, Deer reported that the study sample was selectively recruited and not consecutively chosen as Wakefield had reported ( Deer, 2011a ; Wakefield et al., 1998 , retracted). Deer then broadcast his findings on a UK television program, excerpts of which were later broadcast in the United States during an NBC Dateline investigation on Wakefield.

In addition to Deer's findings, the GMC found that Wakefield had performed unnecessary invasive tests on children that were “against their best interests,” was not qualified to perform the tests, did not have the necessary ethics approval to conduct his study, and unethically gathered blood samples by paying children at his son's birthday party for samples ( Triggle, 2010 ; UK GMC, 2010 ). He was found guilty of more than 30 charges of serious professional misconduct and removed from the UK's medical register ( Triggle, 2010 ; UK GMC, 2010 ).

Also in 2004 and soon after Deer's investigation, The Lancet launched an investigation of the paper. Other than undisclosed parallel funding and ongoing litigation, the Lancet reported that their editors did not find evidence of intentional deception or data falsification and so did not retract the paper ( Eggertson, 2010 ). The article remained in the publication until the GMC's findings and subsequent actions in 2010, at which point The Lancet editors agreed “several elements of the 1998 paper by Wakefield et al are incorrect, contrary to the findings of an earlier investigation” and fully retracted the paper ( Lancet Editors, 2010 ). The journal's editor, Richard Horton, said that “he did not have the evidence to [retract the paper] before the end of the GMC investigation” ( Boseley, 2010 ).

In 2011, Brian Deer produced additional investigative reporting in support of his allegation that Wakefield falsified data, which was published by the British Medical Journal ( Deer, 2011b ). Deer's work was endorsed by the editors of BMJ ( Godlee et al., 2011 ).

Wakefield denies ever having committed research misconduct; in a press complaint, Wakefield insisted “he never claimed that the children had regressive autism, nor that they were previously normal . . . never misreported or changed any findings in the study, never patented a measles vaccine . . . and he never received huge payments from the lawyer” ( Deer, 2011b ). Furthermore, he claims to be a victim of conspiracy via a Centers for Disease Control (CDC) cover-up, alleging the “CDC has known for years about an association between the MMR vaccine and autism” ( Ziv, 2015 ). Wakefield's recent basis of this claim is a 2014 article by Brian Hooker published in Translational Neurodegeneration in which Hooker reevaluates data collected by the CDC and suggests African American boys who received the MMR vaccine before 24 months and after 36 months of age showed higher risks for autism ( Hooker, 2014 , retracted). However, the Hooker paper was later retracted because of conflicts of interest and questionable research methods ( Translational Neurodegeneration Editor and Publisher, 2014 ).

Following the 2004 investigation, Wakefield moved to the United States, where he is not licensed, but continues to defend the MMR-autism connection. He attempted to sue Deer and the BMJ in 2010 for defamation, but the lawsuit was dismissed ( Lindell, 2014 ). Wakefield works out of Austin, Texas, as an anti-vaccine activist, where he has received support from parents of children with autism ( Deer, 2014 ). He directed the documentary Vaxxed: From Cover-Up to Catastrophe , which was to have been shown at the 2016 Tribeca Film Festival, but was withdrawn ( Goodman, 2016 ).

In March 2011, the University College London (UCL), which took over the Royal Free Hospital where Wakefield worked at the time, announced intentions to conduct an institutional investigation on Wakefield ( Reich, 2011 ). However, over 1 year later, UCL had not completed the investigation and explained that “given the passage of time, the fact that the majority of the main figures involved no longer work for UCL, and the fact that UCL lacks any legal powers of compulsion,” an investigation would not be a worthwhile endeavor for the university ( UCL, 2012 ). Instead, UCL published a paper, MMR and the Development of a Research Governance Framework in UCL , detailing revisions made to the university's research governance framework in response to the shortcomings raised by the Wakefield case.

  • THE PAXIL CASE

Synopsis and Rationale for Inclusion: The Paxil case illustrates issues related to biomedical ghostwriting and unacknowledged conflicts of interest. In this practice, the listed authors of an article reporting on a clinical study may consist solely of prominent academicians, yet unacknowledged industry-supported researchers may have undertaken key tasks associated with the research, including aspects of concept design, subject enrollment, monitoring, data collection and interpretation, and writing the article. In extreme cases, the listed authors may not be able to confirm the integrity of the data or reported results. There have also been several notable cases over the past several decades in which suppression of negative findings or data falsification has been alleged or confirmed in industry-supported studies. Biomedical ghostwriting has been condemned by numerous scientific organizations worldwide.

Ghostwriting, “the practice whereby individuals make significant contributions to writing a manuscript but are not named as authors,” has been condemned as an “example of fraud” and “a disturbing violation of academic integrity standards, which form the basis of scientific reliability” ( Bosch and Ross, 2012 ; Stern and Lemmens, 2011 ). The practice is not currently equated with plagiarism and so is not within the Office of Research Integrity's (ORI) power to regulate. Bosch and Ross (2012) suggest that ORI include ghostwriting in its definition of research misconduct so that it can be investigated and offenders can be punished under the federal research misconduct policy.

ICMJE (2015) established criteria against which to determine appropriate assignment of biomedical authorship and recommends that those who do not meet all of the criteria only be listed in the acknowledgments sections. COPE (2011) also recommends that specific rules be implemented to prevent ghostwriting, which is explicitly defined as misconduct in their guidelines.

If data are falsified or the reported results are misleading in a clinical study and the listed authors are not able to vouch for the integrity of the data or results, using the study as a basis for treating patients may present serious health and safety risks. If fabricated or falsified results are alleged for privately funded research, institutions are not required to report the investigation results to federal agencies under the federal research misconduct policy.

One example that illustrates these two issues is a 2001 paper overstating the benefits and understating the risks of the Glaxo SmithKline (GSK) drug Paxil in off-label treatment of children ( Basken, 2012 ). Four GSK employees acted as whistleblowers, revealing “improper practices” to the U.S. government, including GSK enticing doctors with vacations and knowingly publishing misreported data ( Thomas and Schmidt, 2012 ). Although the lead authors listed on the paper were respected academics in the field, as part of Glaxo's $3 billion settlement with the federal government, the company admitted that it had hired authors who were not listed as such and that the resulting publication had misrepresented the results.

Brown University, employer of the lead author, Martin B. Keller, launched an internal investigation, the results of which were not made public ( Basken, 2012 ). No actions were taken against Keller, or the other 21 authors listed on the paper. Keller and at least five of the other authors continue to receive federal funding from the National Institutes of Health. The Journal of the American Academy of Child and Adolescent Psychiatry , which published the article, has not yet retracted it.

A recent reanalysis of Keller et al.'s 2001 study found no significant differences in efficacy between Paxil and the placebo in treating adolescents with major depression, but did find adverse emotional effects leading to increased suicidal thoughts and attempts for adolescents being treated with Paxil ( Le Noury et al., 2015 ).

In 2015, Keller and 8 of the 22 authors of the original study wrote a letter to the blog Retraction Watch rebutting many points of Le Noury et al.'s 2015 reanalysis of the study; Keller claimed that data used in the reanalysis were not available during the time of the original study. He also firmly asserted that none of the paper was ghostwritten. Keller concluded that describing the original “trial as ‘misreported’ is pejorative and wrong,” specifically from a retrospective point of view ( Keller et al., 2015 ).

At this point, it appears that key issues related to this episode may never be resolved. In addition to the Paxil case, there have been several other cases of possible biomedical ghostwriting that led to legal consequences for both medical companies and ghostwriters, indicating a heightened level of responsibility on the part of authors (see Chapter 7).

The Food and Drug Administration recently released draft guidance on publications reporting use of approved products for off-label indications: Guidance for Industry Distributing Scientific and Medical Publications on Risk Information for Approved Prescription Drugs and Biological Products—Recommended Practices . The guidelines state that scientific journals should not publish articles “written, edited, excerpted, or published specifically for, or at the request of, a drug or device manufacturer,” nor “be edited or significantly influenced by a drug or device manufacturer or any individuals having a financial relationship with the manufacturer” ( FDA, 2014 ). In addition, articles including information on pharmaceuticals should include a statement disclosing the manufacturer's interest in the drug and any financial interest between authors and the manufacturer ( FDA, 2014 ). Final guidance is expected, but has not yet been released.

  • THE GOODWIN CASE AT THE UNIVERSITY OF WISCONSIN

Synopsis and Rationale for Inclusion: Graduate students may need support and protection from repercussions that may arise as a result of research misconduct committed by their mentor. Students stand to lose years of work if their mentor is found guilty of research misconduct, and may need to find another research group to continue their work, restart their graduate research from the beginning, or leave academia completely. With this in mind, graduate students of Elizabeth Goodwin, formerly a geneticist at the University of Wisconsin, found that data had been fabricated in one of Goodwin's proposals and reported her to the university. This case demonstrates difficult choices that may confront whistleblowers, especially those in vulnerable positions such as graduate students or postdoctoral fellows, the need for institutions to support young researchers put into difficult situations through no fault of their own, and the need for better mentoring in some laboratory and institutional environments.

In fall 2005, graduate students working in the laboratory of University of Wisconsin geneticist Elizabeth Goodwin were confronted with evidence that their advisor had falsified data contained in a proposal to the National Institutes of Health ( Couzin, 2006 ). Specifically, one experiment described in the proposal had not actually been performed, and figures appeared to have been manipulated. Over a period of several months, the students sought explanations from Goodwin, with which they were ultimately unsatisfied, and discussed among themselves what they should do ( Allen, 2012 ). Recognizing that a decision to bring their concerns to university administrators would essentially shut down Goodwin's lab and have a severe negative impact on their own graduate careers, they decided that any such decision would need to be made unanimously.

Ultimately, the students decided to turn Goodwin in, which led to a university investigation finding that data in several grant applications had been falsified, a ruling confirmed by the Office of Research Integrity ( ORI, 2010 ). Goodwin also pled guilty to making false statements on government documents and was sentenced to 2 years' probation, fined $500, and ordered to pay $100,000 in restitution ( Winter, 2010 ). Several papers that Goodwin had coauthored were also investigated, but falsification was not found.

As they had anticipated, the graduate students did suffer negative impacts from the case ( Allen, 2012 ). One was able to continue work in another lab, another started a new project in a different lab at Wisconsin, and a third left Wisconsin to enter the PhD program at another institution, essentially starting over after 4 years. The remaining three students chose to pursue careers outside of academic research.

The case highlights several key issues. The first is the importance of whistleblowers to the system of ensuring research integrity. Although failure to replicate results, statistical analysis, and other mechanisms may be increasingly important in uncovering research misconduct, postdoctoral fellows and graduate students are responsible for reporting a significant percentage (up to half) of the cases involving nonclinical research that come to ORI ( Couzin, 2006 ). These whistleblowers often suffer negative consequences, primarily severe damage to their careers, even when the institution takes appropriate steps to protect them from retaliation.

In addition, former students report that in the years immediately preceding Goodwin's falsified applications, problems were apparent in the lab. Several students were not making progress on their research, with no publications to show for years of work, but were advised to continue on these “dead projects” ( Allen, 2012 ). Goodwin had also reportedly been encouraging students to overinterpret data and conceal data that conflicted with desired results ( Couzin, 2006 ). Such ineffective mentoring and promotion of detrimental research practices create a poor environment for research integrity.

  • THE HWANG STEM CELL CASE AND THE UNIVERSITY OF PITTSBURGH: COAUTHOR RESPONSIBILITIES AND INSTITUTIONAL RESPONSES

Synopsis and Rationale for Inclusion: The Hwang case raises several important research integrity issues, including data fabrication and falsification, abuse of mentorship status, whistleblower retaliation, and endangering the health of trial participants. The University of Pittsburgh's role in this case highlights the need for institutional oversight and defined standards for authorship roles. A second, more recent case at the University of Pittsburgh further demonstrates the need for oversight and institutional focus on addressing all cases of research misconduct.

One highly publicized case that raises several important research integrity issues is that of Hwang Woo-suk, whose purportedly groundbreaking stem cell research turned out to be based on fabricated experiments ( Holden, 2006 ). In his first article published in Science (in 2004), Hwang claimed to have “generated embryonic stem cells from an adult human cell,” a process often referred to as therapeutic cloning, so that cells could be transplanted “without immune rejection to treat degenerative disorders” ( Wade, 2006 ; Hwang et al., 2004 , retracted). University of Pittsburgh stem cell researcher Gerald Schatten began corresponding with Dr. Hwang in late 2003, offering editorial input and support for Hwang's 2004 paper, which had earlier been rejected by Science . Following the acceptance of the paper, Schatten and Hwang began discussing a follow-up paper in which Hwang claimed his laboratory team had “created human embryonic stem cells genetically matched to specific patients” ( Sang-Hun, 2009 ). According to Schatten, he and Hwang drafted and edited the article together; Schatten was responsible for much of the writing and was a prominent public promoter of the findings ( University of Pittsburgh, 2006 ). The article was published in Science in 2005, naming Schatten as a senior author, a role he later denied, claiming to have been no more than a coauthor.

In June 2005, immediately following the second article's publication and Hwang's announcement of a clinical trial, Young-Joon Ryu, a former researcher in Hwang's laboratory who was aware of the fabricated data, worried for the safety of trial participants. Ryu e-mailed the Korean television network Munhwa Broadcasting Corporation (MBC), recommending an investigation ( Cyranoski, 2014b ). Unfortunately, Ryu suffered for his role as a whistleblower: his identity was leaked early in the MBC investigation, and backlash from Hwang's ardent supporters led to his resignation from his position at a hospital and to a period of unemployment.

As the MBC investigation was under way, ethical concerns about Hwang's research methods were also being raised. Sun Il Roh, a coauthor of the 2005 paper and a fertility specialist at a hospital in Seoul, disclosed that 20 eggs he had provided to Hwang for the study had been paid for (a violation of human subjects protections), but that Hwang was unaware of this ( Cyranoski and Check, 2005a ). Amid this and other signs that accepted ethical procedures were not being followed, including the revelation that a young, female graduate student in Hwang's laboratory had donated eggs to the experiment (another violation of human subjects standards), Schatten asked that his name be removed from the 2005 publication and ceased working with Hwang ( Cyranoski and Check, 2005b ). Four days after Roh came forward, and after a year of denials, Hwang admitted that “his stem-cell research used eggs from paid donors and junior members of his team” ( Cyranoski and Check, 2005a ). Days later, Hwang revealed to Science that of the 11 photos used in the 2005 article, several were duplicates, “even though each was meant to show a different human cell colony” ( Wade, 2005 ). Hwang claimed that this was a mistake and that it had occurred only when Science requested higher-resolution photos, not in the original submission. Roh was interviewed in the MBC television broadcast on Hwang and revealed that “Hwang had told him ‘there are no cloned embryonic stem cells’” ( Cyranoski, 2005 ).

Following a formal investigation in 2005, a Seoul National University committee determined that both of Hwang's articles were based on fabricated data ( SNU, 2006 ). Numerous accusations ensued, with Hwang admitting to “ordering subordinates to fabricate data” but also blaming a coauthor who “admitted to switching stem cells without Hwang's knowledge” ( Cyranoski, 2014c ). Before the SNU investigation concluded, Schatten and Hwang had together requested that the paper be retracted from Science . Based on the investigation's findings, Donald Kennedy, Science editor-in-chief, retracted both the 2004 and 2005 papers, reporting that “seven of the 15 authors of Hwang et al., 2004 have agreed to retract their paper” and “all of the authors of Hwang et al., 2005 have agreed to retract their paper” ( Kennedy, 2006 ). Following the retractions, Korea's National Bioethics Committee (created in response to ethical questions concerning Hwang's early research) found that Hwang had “forced junior members of his lab to donate eggs, and that he used more than 2,221 eggs in his research” ( Nature , 2005 ). Hwang had reported using only approximately 400 eggs. Throughout the investigation, Hwang maintained that his laboratory did “create stem cells matched to individual patients,” but acknowledged that mistakes were made during the research process. His achievement of the first cloned dog, Snuppy, was never discredited ( Nature , 2005 ).

Hwang was indicted on three charges: “embezzling KRW2.8 billion [US$2.4 million], committing fraud by knowingly using fabricated data to apply for research funds, and violating a bioethics law that outlaws the purchase of eggs for research” ( Nature , 2005 ). In 2009, Hwang was convicted on two of the three charges: violating the bioethics law and embezzling government funds. The fraud charge was dropped because the “companies involved gave the money knowing that they would not benefit from the donation” ( Cyranoski, 2014a ). Hwang received a 2-year suspended prison sentence.

Today, with private funding, Hwang runs the Sooam Biotech Research Foundation, which he opened in July 2006. The laboratory clones animals with the goals of “producing drugs, curing diabetes and Alzheimer's disease, providing transplantable organs, saving endangered species and relieving grief-stricken pet owners” ( Cyranoski, 2014a ). Since opening Sooam, Hwang has published in peer-reviewed journals and has obtained a Canadian patent on a cloned cell line (NT-1), the line that was the subject of the fraudulent claims in Hwang's 2004 Science article. While Hwang attempts a comeback, he has twice been denied approval for therapeutic cloning of human embryos by the Korean health ministry and, for now, continues to clone animals.

While a subsequent investigation by a University of Pittsburgh panel found that Gerald Schatten had not been involved with the fabrication, the incident raised questions about whether Schatten's contributions to the paper merited authorship in the first place. To what extent should coauthors, honorary or otherwise, be held responsible for the fabricated results of their collaborators? Schatten argued over the definition of the term write , as he did not generate the data on which the text was based, but the panel found this argument, along with his disagreement over the definition of senior author , to be a dishonest attempt to relieve himself of responsibility ( University of Pittsburgh, 2006 ). The panel found Schatten's authorship role to be reasonable given that he wrote each draft of the paper. Schatten was also named a coauthor on Hwang's 2005 Snuppy paper; however, Schatten reported to the panel that his “major contribution to the paper” was to suggest using a professional photographer to present Snuppy ( University of Pittsburgh, 2006 ). The panel did not doubt this claim, but found it “less clear that this contribution fully justified co-authorship” ( University of Pittsburgh, 2006 ). At his own request, Schatten was not acknowledged in Hwang's 2004 paper. Beyond questions about the appropriateness of authorship, Schatten's acceptance of approximately $40,000 in honoraria, together with research proposals to Hwang's laboratory valued at more than $200,000 for a 4-month period and with implications that the grant would be continued annually, was also ethically problematic ( University of Pittsburgh, 2006 ).

The University of Pittsburgh panel's report stated that Schatten “did not exercise a sufficiently critical perspective as a scientist,” but because he likely did not “intentionally falsify or fabricate experimental data, and there is no evidence that he was aware of the misconduct,” he was found guilty of “research misbehavior” rather than “research misconduct” ( University of Pittsburgh, 2006 ). “Research misbehavior” was not used or defined in the University of Pittsburgh research misconduct policy in effect at the time, and the panel did not recommend any specific disciplinary action against him. Chris Pascal, director of the Office of Research Integrity, supported the decision, stating that “universities have a right to add refinements to categories of malfeasance” ( Holden, 2006 ). The term research impropriety is contained in the University of Pittsburgh research misconduct policy adopted in 2008 ( University of Pittsburgh, 2008 ).

  • THE TRANSLATIONAL OMICS CASE AT DUKE

Synopsis and Rationale for Inclusion: The case of Duke University researchers Joseph Nevins and Anil Potti, which stretched out over several years and attracted national media attention, illustrates deficiencies in current approaches to research integrity on the part of researchers, research institutions, government agencies, and journals ( CBS News, 2012 ). Potti's fabricated results endangered trial participants and may have contributed to public mistrust of scientific research. Institutionally, supervisors at the laboratory level and senior administrators did not respond effectively for several years despite multiple warning signs. This case also raises questions about the responsibility of a journal to respond appropriately when numerous inquiries are made about the same original article. Several parties' unresponsiveness to questions about Potti's work may have delayed the findings of research misconduct.

Omics is the study of molecules in cells, such as DNA sequences (genomics) and proteins (proteomics). Translational omics research seeks to apply this new knowledge to the creation of diagnostic tests that better detect disease and determine individualized treatment. Translational omics involves several significant challenges. Research “generates complex high-dimensional data,” and the resulting diagnostics are characterized by “difficulty in defining the biological rationale . . . based on multiple individual biomarkers” ( IOM, 2012 ). In addition, diagnostic tests differ from drugs and other medical technologies with respect to regulatory oversight; tests may be reviewed by the Food and Drug Administration or validated in a laboratory certified under the Clinical Laboratory Improvement Amendments (CLIA).

Beginning in 2006, a series of papers appearing in major journals such as Nature Medicine and the New England Journal of Medicine purported to show that the gene activity in a patient's tumor cells could be used to determine which chemotherapy drugs would be most effective for that patient. This capability would enable significant advances in cancer treatment. Since individual reactions to these drugs are heterogeneous, the drugs that are effective for one person may not be effective for another. The lead author of the papers was cancer researcher Anil Potti, who worked at Duke University in the lab of Joseph Nevins.

Soon after the first papers were published, Keith Baggerly, Kevin Coombes, and Jing Wang, bioinformaticians at the M. D. Anderson Cancer Center of the University of Texas, began working to replicate the results. They immediately encountered difficulties using the data made publicly available with the paper and began communicating with Potti and Nevins. Data provided by the Duke team to Baggerly, Coombes, and Wang contained numerous anomalies and obvious errors, making it impossible to replicate or verify the results. Correspondence submitted by the M. D. Anderson researchers to Nature Medicine in 2007 raising these issues was quickly rebutted by Potti and Nevins ( Coombes et al., 2007 ; Potti and Nevins, 2007 ). However, when Baggerly, Coombes, and Wang examined additional information provided by the Duke team, they found that significant problems remained. For example, in some cases the sensitive and resistant labels for cell lines were reversed; had the tests been used to direct treatment, patients would have received the least effective chemotherapy drug rather than the most effective one.

Over the next several years, in response to interest expressed by M. D. Anderson clinicians in utilizing the advances that continued to be reported by Potti and Nevins, Baggerly and Coombes worked with the data. In several cases where they discovered clearly incorrect results, they submitted correspondence to journals such as Lancet Oncology , Journal of Clinical Oncology , and Nature Medicine , but these were rejected without explanation ( Baggerly, 2010 , 2012 ).

In 2007, at the same time questions were being raised about the data underlying the Nevins-Potti research, Duke University and Duke University Medical Center investigators not associated with Nevins or Potti launched three clinical trials based on the results, and an additional trial was launched at Moffitt Cancer Center ( IOM, 2012 ). Duke also applied for patents, and several companies were working to commercialize the research, including one in which Potti served as a director and secretary ( Reich, 2010b ; Tracer, 2010 ). Learning about the trials in June 2009, Baggerly and Coombes prepared a critical analysis of the Duke work, which was published in the Annals of Applied Statistics after it had been rejected by a biomedical journal ( Baggerly and Coombes, 2009 ).

In January 2015, the Cancer Letter , a specialist newsletter, reported that Bradford Perez, a third-year medical student working with Potti in the Nevins lab, had become deeply concerned about the methodology and reliability of the research ( Goldberg, 2015 ). He shared these concerns in a detailed memo with Potti, Nevins, and several Duke administrators in the spring of 2008 ( Goldberg, 2015 ). In addition to providing specifics about a number of concerning practices, he asked that his name be removed from four papers based on work he had contributed to, including a paper submitted to the Journal of Clinical Oncology , and he left the Nevins-Potti laboratory ( Perez, 2008 ). Rather than catalyzing any independent assessment of the serious concerns Perez raised about the quality of the research, Duke administrators referred him back to Nevins, with no apparent follow-up by any institutional official. Nevins and Potti committed to revalidating all of their work, but it appears that this did not happen. Perez left the Nevins lab knowing he would repeat a year of his medical education, in his words, “to gain a more meaningful research experience” ( Perez, 2008 ).

As noted in a 2012 Institute of Medicine (IOM) report discussed further below, Duke “did not institute extra oversight or launch formal investigations of the three trials during the first 3 years after the original publications triggered widely known controversy about the scientific claims and after concerns started to develop about the possible premature early initiation of clinical trials” ( IOM, 2012 ). Not only did Duke's administration fail to act decisively on Perez's suspicions, but an administrator who counseled Perez on the matter did not even inform the IOM committee that Perez had come forward years earlier ( Goldberg, 2015 ; IOM, 2012 ). In response to the 2015 revelations by the Cancer Letter , Duke Medicine officials did not answer specific questions, but did state that “there are many aspects of this situation that would have been handled differently had there been more complete information at the time decisions were made” ( Goldberg, 2015 ).

National Cancer Institute (NCI) researcher Lisa McShane had also been unsuccessful in attempts to replicate the work ( Economist , 2013 ). In the fall of 2009, NCI expressed concern about the clinical trials at Duke as well as the parallel trial at Moffitt. The trials were suspended, and Duke's Institutional Review Board formed an external review panel to evaluate the concerns. The Duke trials were restarted in early 2010 after the review panel concluded that the approaches used in the trials were “viable and likely to succeed” ( IOM, 2012 ).

During the first half of 2010, NCI continued to raise questions about the research. Through a Freedom of Information Act request submitted by the Cancer Letter , it was revealed that the external review panel was not provided with several critical pieces of information, including a detailed description of the statistical methods used in the original research, and a new critique from Baggerly and Coombes based on analysis of updated data posted by Potti and Nevins ( Baggerly, 2010 ; Duke University, 2009 ). About that material, the 2012 IOM report notes that it “was never forwarded to the external statistical reviewers because of the university leadership's concerns that it might ‘bias’ the committee's review” ( IOM, 2012 ).

Several developments in July 2010 brought matters to a head. It was reported that Potti's resume claim of having been a Rhodes Scholar was exaggerated, a finding confirmed by the University of Oxford ( Goldberg, 2010 ; Singer, 2010 ). In addition, several dozen prominent biostatisticians wrote to NCI director Harold Varmus to request that the clinical trials based on the Duke research be suspended until the science could be publicly clarified ( Barón et al., 2010 ; Singer, 2010 ). In response, Duke suspended both the trials and Anil Potti's employment. The trials were ultimately terminated, and Potti left Duke. Beginning in the fall of 2010, a number of the papers reporting the Duke results were retracted.

In the time since the trials were suspended, there have been several significant developments. NCI asked the Institute of Medicine to develop principles for evaluating omics-based tests, and the IOM released its report in 2012 ( IOM, 2012 ). Drawing on lessons from the Duke case and informed by the development of other omics-based tests, the IOM report lays out a recommended development and evaluation process for these tests, and makes specific implementation recommendations to researchers, institutions, agencies, and journals ( IOM, 2012 ).

Duke University has also taken steps to respond ( Califf, 2012 ). Its Translational Medicine Quality Framework emphasizes new science and management approaches to ensure data provenance and integrity, the incorporation of adequate quantitative expertise, explicit management accountability in the institution beyond the individual lab for research affecting patient care, and enhanced conflict-of-interest reviews.

In 2015, ORI concluded that Potti had “engaged in research misconduct by including false research data,” citing specific examples of data that had been reversed, switched, or changed in a number of (now retracted) articles and other submissions ( ORI, 2015 ). While Potti did not “admit nor deny ORI's findings of research misconduct,” he stated that he has no intention of applying for PHS (Public Health Service)–funded research and agreed that, if he engages in any PHS-funded research in the future, his research will be supervised for 5 years ( ORI, 2015 ).

In this case, nearly all of the scientific checks and balances intended to uncover incorrect or fabricated research and to protect human subjects failed over the course of several years. A summary of these failings illustrates some of the U.S. research enterprise's key vulnerabilities regarding integrity. Effective steps by Duke to address the problems with Potti's work and to investigate possible misconduct were delayed for years and were finally triggered only by the disclosure of Potti's resume falsification. Those pointing out the problems were appropriately cautious about making formal allegations of misconduct, since there was a possibility that the problems were due to error or extreme sloppiness rather than falsification. Another contributing factor was the willingness of Joseph Nevins, a highly prestigious researcher, to vouch for the work and to advocate for Potti with university administrators and others.

Individual Researchers

Anil Potti's misbehavior is at the center of the case. Prior to ORI's conclusion of research misconduct, Joseph Nevins and Robert Califf had both said that it was highly likely that Potti intentionally fabricated or falsified data ( CBS News, 2012 ). In addition, Baggerly, Coombes, and Wang had documented many instances of sloppy or careless data analysis, and Perez had documented the use of unreliable predictors and the omission of data that did not show desired results. The negative impact of such sloppy and careless practices on the ability to replicate results, and ultimately on patient care, might be similar to the impact of fabrication or falsification.

In addition to problems with data and analysis, the IOM committee described a number of poor practices related to the clinical trials for the tests, including trials being undertaken simultaneously with preliminary studies ( IOM, 2012 ).

Potti's collaborators also share responsibility. For example, despite being principal investigator of the lab where the research was undertaken, as well as Potti's mentor and coauthor, Joseph Nevins did not thoroughly check the original data files until after it was revealed in July 2010 that Potti had exaggerated his credentials, more than 3 years after the data issues were first raised ( CBS News, 2012 ). Moreover, we now know from a deposition cited in court documents that Nevins “pleaded with Perez not to send a letter about his concerns to the Howard Hughes Medical Institute, which was supporting him, because it would trigger an investigation at Duke” ( Kaiser, 2015 ). Indeed, Duke administrators testified to the IOM that none of Potti's coauthors (a total of 162 across 40 papers) raised any questions or concerns about the papers or tests until they were contacted by Duke at the start of the process of determining which papers should be retracted ( IOM, 2012 ). Bradford Perez, the medical student described above, did raise concerns and removed his name from the papers he had contributed to, so his documented concerns were apparently not considered when that statement was made. Nevins remained on the faculty as a department chair until his retirement in 2013, the year after the IOM report was released.

Institutional Policies and Procedures

In addition to the failures of individual researchers, lessons can be drawn from Duke's institutional response during the controversy. Institutional shortcomings in policies and procedures, structure, systems, and oversight contributed to delays in recognizing that the science underlying the Nevins-Potti research was unsound. First, Duke's Institute for Genomic Science and Policy and its component Center for Applied Genomics and Technology, where Nevins and Potti worked, had instituted their own system for undertaking clinical trials, separate from the extensive existing infrastructure of the Duke Cancer Center ( IOM, 2012 ). This parallel pathway lacked the normal checks and balances as well as clear lines of authority and oversight.

In addition, systems for managing conflicts of interest at the individual and institutional levels were inadequate ( IOM, 2012 ). For example, the IOM committee found evidence that researchers involved in undertaking the clinical trials had unreported financial or professional conflicts of interest. Some investigators held patents on one or more of the tests or had links with one of the companies founded to market them. The institution itself, through its licensing relationships, had a financial interest in the success of the tests, as well as a reputational interest in having generated such an important new technology. Notably, the institution had created a set of video and print materials featuring the research ( CBS News, 2012 ; Singer, 2010 ).

As noted in the 2012 IOM report, universities, as “responsible part[ies]” for assuring the integrity of the science conducted under their auspices, have particularly important responsibilities. These include responsibility for hiring and promoting the faculty members conducting research, for establishing and maintaining oversight structures, and for properly responding to and resolving questions about the validity of research or allegations of misconduct when they arise. They also include responsibility for ensuring an organizational culture and climate that sets expectations for research integrity that “are transmitted by the institution and modeled by its leadership. Institutional culture starts with the dean, senior leaders, and members of their team stating how research is to be conducted, with integrity and transparency, and with clarity that shortcuts will not be tolerated and that dishonesty is the basis for dismissal” ( IOM, 2012 ).

The evidence now available, some of which came to light only through Freedom of Information Act requests and court depositions, suggests that Duke University and its leadership failed in virtually all of these responsibilities: by undertaking clinical studies outside the established review structures; by failing to pursue internal investigation of serious, documented concerns until compelled by outside parties to do so; by withholding the full Baggerly/Coombes critique from an external committee; by referring responsibility for rechecking Potti's work back to the laboratory of his (explicitly conflicted) principal investigator, Joseph Nevins; by failing to employ the full set of institutional checks and balances that were in place; and by making either incomplete or factually unsupportable statements to the IOM committee charged with examining the issue. The breadth and depth of these institutional failings are disappointing. Occurring in an institution of Duke's stature and resources, they raise troubling questions about the ability of research institutions, without more support and reinforcement, to manage complex cases involving prominent institutional researchers.

Duke suspended the trials and launched an investigation in the fall of 2009 in response to NCI concerns. However, this investigation had several serious flaws. Although the trials were resumed based on the report of the two external statistical experts, as noted above these experts were not provided with several critical pieces of information. The IOM report also raises the possibility that Nevins was improperly in direct contact with the reviewers during the inquiry ( IOM, 2012 ). As for the clinical trials undertaken on the basis of the fabricated work, 117 patients were ultimately enrolled. Duke later faced a lawsuit brought by the families of eight of these patients, which was settled in May 2015; the terms of the settlement were not disclosed ( Ramkumar, 2015 ).

In its Translational Medicine Quality Framework activity, Duke also identified as a possible problem an environment that might discourage postdoctoral fellows or graduate students from raising concerns about research within the lab or from taking their concerns to others at the university. The university reported that it has established an ombudsman's office and taken other steps to address this.

Taken together, these institutional failings raise the question of whether, in addition to strengthening policies and procedures to the extent possible, research institutions should explore new mechanisms for bringing in outside perspectives in cases where it might be difficult for an institution to objectively address allegations of misconduct or other challenges to the soundness of science. In 2016, four members of the IOM committee published a piece critical of how Duke handled the case as an institution ( DeMets et al., 2016 ).

Journal Policies and Practices

Although Nature Medicine and the Journal of Clinical Oncology did publish letters from Baggerly, Coombes, and Wang questioning the validity of the data, along with responses from Potti, they rejected further questioning of the Duke results. This is likely the result of the common journal practice of not publishing additional comments on an article that appear to repeat concerns already raised in a previously published comment, so as to avoid involving the journal in an ongoing dispute. Furthermore, other journals that had published articles reporting the Nevins-Potti work were not responsive to questions raised by Baggerly and Coombes. This stance contributed to delays in recognizing the nature and extent of the problems with the papers. The translational omics case raises the issue of how scholarly publishers, institutions, and the broader community should respond when the work underlying numerous papers in a variety of journals is called into question.

Sponsor and Regulator Policies and Practices

The IOM report identifies some ambiguities in Food and Drug Administration requirements for launching clinical trials on diagnostics as possibly contributing to the clinical trials being launched prematurely and to delays in finally shutting them down ( IOM, 2012 ). The IOM report also points out that NCI felt constrained in communicating what it knew and the extent of its concerns with Duke and others early in the case, particularly before officials were aware that the agency was supporting aspects of the clinical trials ( IOM, 2012 ). More direct and complete communication would be helpful in future cases.

  • THE RIKEN-STAP CASE

Synopsis and Rationale for Inclusion: The RIKEN-STAP case illustrates issues that may arise related to authorship roles, mentoring, and data falsification. The extent to which coauthors should be held responsible for the data and findings of papers on which they are listed is a recurring question in many research misconduct cases.

Yoshiki Sasai, a stem cell biologist at Japan's RIKEN research institute, committed suicide in August 2014 after the lead author on papers that he had coauthored, Haruko Obokata, was found guilty of research misconduct ( RIKEN, 2014 ). Obokata claimed to have found a process that reprogrammed somatic cells into pluripotent cells by exposing the cells to stress; the authors termed the process “stimulus-triggered acquisition of pluripotency (STAP)” ( Obokata et al., 2014a , retracted). Obokata collaborated with Charles Vacanti's laboratory at Brigham and Women's Hospital, where the idea of STAP had supposedly originated ( Knoepfler, 2015 ). Vacanti, professor of anesthesiology at Harvard Medical School and former chairman of the Department of Anesthesia at Brigham and Women's Hospital, was a corresponding author on one of the papers, a coauthor on the other, and Obokata's mentor while she worked as a postdoctoral research fellow at Brigham and Women's Hospital.

Shortly after Obokata's findings were published in Nature , outside researchers were unable to replicate the study or achieve similar results, prompting an internal RIKEN investigation. The investigation committee concluded that she had fabricated data in at least one of the papers ( RIKEN, 2014 ). The committee found problems with the data underlying the other papers, but was not able to conclude that fabrication or falsification had occurred because it did not have access to the original data ( RIKEN, 2014 ). The committee also found that Sasai had no involvement with the data fabrication, but that he bore a “heavy responsibility” for the incident because he did not insist that experiments be repeated even after problems with the data became obvious ( RIKEN, 2014 ).

Both Sasai and Obokata made public apologies but maintained that STAP worked. With Sasai already disgraced, the Japanese media soon began to make “unsubstantiated claims about [Sasai's] motivations” and personal life, as well as to shame him for a lack of oversight responsibility; all of this, Sasai wrote in a suicide note, drove him to take his own life ( Cyranoski, 2014c ). Vacanti also maintained “absolute confidence” in the phenomenon and released follow-up protocols to the retracted Nature papers to assist in reproducing STAP cells ( Vacanti and Kojima, 2014 ). Following RIKEN's investigation and the retraction of the Nature papers, Vacanti stepped down as chairman of the Department of Anesthesia at Brigham and Women's Hospital and took a 1-year sabbatical from his professorship at Harvard Medical School. He did not reference the STAP case in his letter of resignation from Brigham and Women's Hospital.

  • REFERENCES FOR APPENDIX D
  • Allen M. The Goodwin Lab. Aug 14, 2012. Presentation to the committee.
  • Baggerly KA. The Importance of Reproducible Research in High-Throughput Biology: Case Studies in Forensic Bioinformatics. 2010. [November 20, 2016]. Video Lecture. Available at: http://videolectures.net/cancerbioinformatics2010_baggerly_irrh/
  • Baggerly KA. Learning from the Duke Case and the IOM Translational Omics Report: Context. Jul 9, 2012. Presentation to the committee.
  • Baggerly KA, Coombes KR. Deriving chemosensitivity from cell lines: Forensic bio-informatics and reproducible research in high-throughput biology. Annals of Applied Statistics. 2009; 3 (4):1309–1334.
  • Baron AE, Bandeen-Roche K, Berry DA, Bryan J, Carey VJ, Chaloner K, Delorenzi M, Efron B, Elston RC, Ghosh D, Goldberg JD, Goodman S, Harrell FE, Galloway Hilsenbeck S, Huber W, Irizarry RA, Kendziorski C, Kosorok MR, Louis TA, Marron JS, Newton M, Ochs M, Quackenbush J, Rosner GL, Ruczinski I, Skates S, Speed TP, Storey JD, Szallasi Z, Tibshirani R, Zeger S. Letter to Harold Varmus: Concerns About Prediction Models Used in Duke Clinical Trials. Bethesda, MD: Jul 19, 2010. [March 5, 2013]. http://www.cancerletter.com/categories/documents .
  • Basken P. Chronicle of Higher Education. Aug 6, 2012. Academic Researchers Escape Scrutiny in Glaxo Fraud Settlement. http://www.chronicle.com/article/Academic-Researchers-Escape/133325/
  • BMJ (British Medical Journal). Wakefield's article linking MMR vaccine and autism was fraudulent. Jan 6, 2011. [November 12, 2016]. https://doi.org/10.1136/bmj.c7452 . [ PubMed : 21209060 ]
  • Bosch X, Ross J. Ghostwriting: Research misconduct, plagiarism, or fool's gold? American Journal of Medicine. 2012; 125 (4):324–326. [ PubMed : 22305580 ]
  • Boseley A. Lancet retracts “utterly false” MMR paper. The Guardian; Feb 2, 2010. http://www.theguardian.com/society/2010/feb/02/lancet-retracts-mmr-paper .
  • Califf RM. Translational Medicine Quality Framework: Challenges and Opportunities. Jul 9, 2012. Presentation to the committee.
  • CBS News. Deception at Duke 60 Minutes. Feb 12, 2012.
  • Coombes KR, Wang J, Baggerly KA. Microarrays: Retracing steps. Nature Medicine. 2007; 13 :1276–1277. [ PubMed : 17987014 ]
  • COPE (Committee on Publication Ethics). Code of Conduct and Best Practice Guidelines for Journal Editors. 2011. [July 23, 2013]. http://publicationethics.org/files/Code_of_conduct_for_journal_editors_Mar11.pdf .
  • Couzin J. Truth and consequences. Science. 2006; 313 :1222–1226. [ PubMed : 16946046 ]
  • Cyranoski D. Stem-cell pioneer accused of faking data. Nature. 2005 December 15; [ CrossRef ]
  • Cyranoski D. Cloning comeback. Nature. 2014a January 23; 505 (7484) [ PubMed : 24451524 ] [ CrossRef ]
  • Cyranoski D. Whistle-blower breaks his silence. Nature. 2014b January 30; 505 (7485) [ PubMed : 24476864 ] [ CrossRef ]
  • Cyranoski D. Nature. Aug 13, 2014c. Stem-cell pioneer blamed media ‘bashing’ in suicide note. http://www.nature.com/news/stem-cell-pioneer-blamed-media-bashing-in-suicide-note-1.15715 .
  • Cyranoski D, Check E. Clone star admits lies over eggs. Nature. 2005a December 1; 438 :536–537. [ PubMed : 16319847 ] [ CrossRef ]
  • Cyranoski D, Check E. Stem-cell brothers divide. Nature. 2005b November 16; 438 :262–263. [ PubMed : 16292267 ] [ CrossRef ]
  • Deer B. Exposed: Andrew Wakefield and the MMR-autism fraud. 2011a. http://briandeer.com/mmr/lancet-summary.htm .
  • Deer B. How the case against the MMR vaccine was fixed. BMJ. 2011b; 342 :77–82. [ PubMed : 21209059 ]
  • Deer B. Disgraced ex-doctor fails in his fourth attempt to gag media. 2014. http://briandeer.com/solved/slapp-introduction.htm .
  • DeMets DL, Fleming TR, Geller G, Ransohoff DF. Institutional responsibility and the flawed genomic biomarkers at Duke University: A missed opportunity for transparency and accountability. Science and Engineering Ethics. 2016 November 23; [ PubMed : 27882502 ] [ CrossRef ]
  • Duke University. Review of Genomic Predictors for Clinical Trials from Nevins, Potti, and Barry. 2009.
  • Economist, The. Trouble at the lab. Oct 19, 2013. https://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble .
  • Eggertson L. Lancet retracts 12-year-old article linking autism to MMR vaccines. Canadian Medical Association Journal. 2010; 182 (4):188–200. [ PMC free article : PMC2831678 ] [ PubMed : 20142376 ]
  • FDA (Food and Drug Administration). (Draft) Guidance for Industry: Distributing Scientific and Medical Publications on Risk Information for Approved Prescription Drugs and Biological Products—Recommended Practices. 2014. http://www.fda.gov/downloads/drugs/guidancecomplianceregulatoryinformation/guidances/ucm400104.pdf .
  • Goldberg P. Duke officials silenced med student who reported trouble in Anil Potti's lab. Cancer Letter. 2015 January 9; 41 (1)
  • Goldberg P. Prominent Duke scientist claimed prizes he didn't win, including Rhodes Scholarship. The Cancer Letter. 2010; 36 (27):1–7.
  • Holden C. Schatten: Pitt panel finds “misbehavior” but not misconduct. Science. 2006; 311 :928. [ PubMed : 16484456 ]
  • Hooker BS. Measles-mumps-rubella vaccination timing and autism among young African American boys: A reanalysis of CDC data. Translational Neurodegeneration. 2014; 3 :16. RETRACTED. [ PMC free article : PMC4128611 ] [ PubMed : 25114790 ]
  • Hwang WS, Young JR, Park JH, Park ES, Lee EG, Koo JM, Jeon HY, Lee BC, Kang SK, Kim SJ, Ahn C, Hwang JH, Park KY, Cibelli JB, Moon SY. Evidence of a pluripotent human embryonic stem cell line derived from a cloned blastocyst. Science. 2004; 303 (5664):1669–1674. RETRACTED. [ PubMed : 14963337 ]
  • ICMJE (International Committee of Medical Journal Editors). Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. 2015. [November 14, 2016]. Updated December 2015. http://www.icmje.org/recommendations/
  • IOM (Institute of Medicine). Evolution of Translational Omics: Lessons Learned and the Path Forward. Washington, DC: The National Academies Press; 2012. [ PubMed : 24872966 ]
  • Kaiser J. Duke University officials rebuffed medical student's allegations of research problems. Science. 2015 January 12; [ CrossRef ]
  • Keller MB, Birmacher B, Clarke GN, Emslie GJ, Koplewicz H, Kutcher S, Ryan N, Sack WH, Strober M. Letter to Retraction Watch. 2015. http: ​//retractionwatch ​.com/wp-content/uploads ​/2015/12/Response-to-BMJ-Article-9-1515.pdf .
  • Kennedy D. Editorial Retraction: Retraction of Hwang et al., Science 308(5729): 1777-1783. Science. 2006; 311 (5759):335.
  • Knoepfler P. Knoepfler Lab Stem Cell Blog. Sep 23, 2015. New Nature papers debunk STAP cells. http://www.ipscell.com/tag/charles-vacanti/
  • Lancet Editors. Retraction—Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. The Lancet. 2010; 375 (9713):445. doi: http://dx.doi.org/10.1016/S0140-6736(10)60175-4 . [ PubMed : 20137807 ]
  • Le Noury J, Nardo J, Healy D, Jureidini J, Raven M, Tufanaru C, Abi-Jaoude E. Restoring study 329: Efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence. BMJ. 2015; 351 :h4320. https://doi.org/10.1136/bmj.h4320 . [ PMC free article : PMC4572084 ] [ PubMed : 26376805 ]
  • Lindell M. Court: Andrew Wakefield, autism researcher, cannot sue in Texas. Statesman News. 2014 September 19;
  • Murch SH, Anthony A, Casson DH, Malik M, Berelowitz M, Dhillon AP, Thomson MA, Valentine A, Davies SE, Walker-Smith JA. Retraction of an interpretation. The Lancet. 2004; 363 (9411):750. [ PubMed : 15016483 ]
  • ORI (Department of Health and Human Services, Office of Research Integrity). Case Summary: Goodwin, Elizabeth. Aug 24, 2010.
  • ORI. Office of Research Integrity Newsletter. 4. Vol. 22. 2014.
  • ORI. ORI Case Summary: Potti, Anil. 2015. [November 22, 2016]. https://ori.hhs.gov/content/case-summary-potti-anil .
  • Perez B. Letter to Howard Hughes Medical Institute. Apr 22, 2008. [November 22, 2016]. https://issuu.com/thecancerletter/docs/duke_letters_to_hhmi .
  • Potti A, Nevins J. Reply to “Microarrays: retracing steps.” Nature Medicine. 2007; 13 :1277–1278.
  • Ramkumar A. Duke Chronicle. May 3, 2015. Duke lawsuit involving cancer patients linked to Anil Potti settled.
  • Reich ES. Self-plagiarism case prompts calls for agencies to tighten rules. Nature. 2010a; 468 :745. [ PubMed : 21150967 ]
  • Reich ES. Nature News. Sep 7, 2010b. [January 4, 2017]. Troubled geneticist rebuffed by US patent office. http://www.nature.com/news/2010/100907/full/news.2010.450.html .
  • Reich ES. Fresh dispute about MMR “fraud.” Nature. 2011; 479 :157–158. [ PubMed : 22071737 ]
  • RIKEN. Report on STAP Cell Research Paper Investigation. 2014. [November 8, 2016]. http://www3.riken.jp/stap/e/c13document52.pdf .
  • Sang-Hun C. New York Times. Oct 26, 2009. Disgraced cloning expert convicted in South Korea.
  • Singer N. Duke scientist suspended over Rhodes Scholar claims. New York Times. 2010 July 20;
  • SNU (Seoul National University). Text of the Report on Dr. Hwang Woo Suk. New York Times. 2006 January 6;
  • Stern S, Lemmens T. Legal remedies for medical ghostwriting: Imposing fraud liability on guest authors of ghostwritten articles. PLOS Medicine. 2011; 8 (8):e1001070. [ PMC free article : PMC3149079 ] [ PubMed : 21829331 ]
  • Taylor B, Miller E, Farrington CP, Petropoulos MC, Favot-Mayaud I, Li J, Waight PA. Autism and measles, mumps, and rubella vaccine: No epidemiologic evidence for a causal association. The Lancet. 1999; 353 :2026–2029. [ PubMed : 10376617 ]
  • Thomas K, Schmidt M. New York Times. Jul 2, 2012. Glaxo agrees to pay $3 billion in fraud settlement.
  • Tracer Z. Duke Chronicle. Sep 14, 2010. [January 3, 2017]. Health care companies cut ties with Potti. http://www.dukechronicle.com/article/2010/09/health-care-companies-cut-ties-potti .
  • Translational Neurodegeneration Editor and Publisher. Retraction Note: Measles-mumps-rubella vaccination timing and autism among young African American boys: a reanalysis of CDC data. Translational Neurodegeneration. 2014 October 3; 3 (22) https://translationalneurodegeneration.biomedcentral.com/articles/10.1186/2047-9158-3-22 . [ PMC free article : PMC4183946 ] [ PubMed : 25285211 ]
  • Triggle N. BBC News. May 24, 2010. [November 12, 2016]. MMR doctor struck from register. http://news.bbc.co.uk/2/hi/health/8695267.stm .
  • UCL (University College London). MMR and the development of a research governance framework. UCL News. 2012 September 13; http://www.ucl.ac.uk/news/news-articles/1209/13092012-Governance .
  • UK GMC (United Kingdom General Medical Council). Fitness to Practice Panel Hearing. Jan 28, 2010.
  • University of Pittsburgh. Summary Investigative Report on Allegations of Possible Scientific Misconduct on the Part of Gerald P. Schatten, Ph.D. Feb 8, 2006.
  • University of Pittsburgh. Research Integrity Policy. 2008. https://www.cfo.pitt.edu/policies/policy/11/11-01-01.html .
  • Vacanti CA, Kojima K. Revised STAP Cell Protocol. Brigham and Women's Hospital. 2014. http://research.bwhanesthesia.org/research-groups/cterm/stap-cell-protocol .
  • Wade N. New York Times. Dec 7, 2005. Journal defends stem cell article despite photo slip.
  • Wade N. New York Times. Nov 29, 2006. Journal faulted in publishing Korean's claims.
  • Wakefield AJ, Murch SH, Anthony A, Linnell J, Casson DM, Malik M, Berelowitz M, Dhillon AP, Thomson MA, Harvey P, Valentine A, Davies SE, Walker-Smith JA. Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. The Lancet. 1998; 351 (9103):637–641. RETRACTED. [ PubMed : 9500320 ]
  • Winter S. BioTechniques. Sep 17, 2010. [December 5, 2016]. Former Wisconsin researcher sentenced for misconduct. www.biotechniques.com/news/Former-Wisconsin-researcher-sentenced-for-misconduct/biotechniques-302891.html .
  • Ziv S. Newsweek. Feb 10, 2015. Andrew Wakefield, father of the anti-vaccine movement, responds to the current measles outbreak for the first time.

The United Kingdom General Medical Council's findings of fact from its January 2010 hearing are available in document form. Its verdict finding Wakefield guilty of serious professional misconduct and decision to strike him from the medical register are not available in document form, having been read aloud at a May 2010 hearing, so a news report of this hearing is cited.


Research Misconduct: ORI Issues Final Rule with Modernized and Streamlined Regulations

On September 17, 2024, the Office of Research Integrity (ORI) issued a final rule adopting changes to the federal regulations governing research misconduct involving federally funded research (Final Rule). The regulations had not been updated since 2005 and were in need of revision to address the complexities of modern-day research. In the Final Rule, the U.S. Department of Health & Human Services (HHS) and ORI implemented changes that may significantly affect how institutions must address and respond to allegations of research misconduct.

The Final Rule follows the previously issued notice of proposed rulemaking (Proposed Rule). Foley’s recent blog, “ ORI Proposes New Rulemaking for Research Misconduct Regulations ,” analyzed the Proposed Rule , which generated a host of comments submitted for HHS consideration. Many commenters requested that ORI reconsider proposed regulatory revisions that would have imposed burdens on institutions addressing allegations of research misconduct. In the Final Rule, and in response to such commenter requests, ORI removed many of the suggested revisions to the regulations in order to strike a “balance between ensuring a complete review of misconduct allegations and protecting the rights of respondents[.]”

The Final Rule applies to all institutions that receive Public Health Service (PHS) funding for research activities, including, but not limited to, universities, colleges, academic medical centers, medical schools, hospitals, and health care systems. All institutions that receive PHS funding and engage in research should take care to familiarize themselves with the Final Rule. The effective date of the Final Rule is January 1, 2025; however, ORI has clarified that institutional compliance will be required as of January 1, 2026, giving institutions adequate time to prepare for and effectuate compliance with the Final Rule.

In this article, we summarize certain of the significant changes to the regulations made in the Final Rule and analyze their potential impact on institutions.

Key Terms Defined

ORI adopted definitions for key terms — some as set forth in the Proposed Rule and some with revisions.

A finding of research misconduct requires the misconduct be committed  intentionally ,  knowingly , or  recklessly . ORI did not define these terms in the 2005 rulemaking. The Final Rule adopts the following definitions:

  • Intentionally means to act with the aim of carrying out the act;
  • Knowingly means to act with awareness of the act; and
  • Recklessly means to act with indifference to a known risk of fabrication, falsification, or plagiarism. (Note the Proposed Rule’s formulation had been “without proper caution despite a known risk.” ORI explained in the Final Rule that it adopted revisions to the definition to “make it specific to proposing, performing, or reviewing research, or reporting research results, rather than ‘acting’ more generally, and specific to a risk of fabrication, falsification, or plagiarism.”)

We continue to note that this knowledge standard is the same as that required to establish knowledge under the False Claims Act (FCA), making it easier for Department of Justice (DOJ) Civil Division attorneys to evaluate and investigate allegations of fraud with respect to the provision of PHS funds.

The Final Rule excludes authorship or credit disputes, as well as self-plagiarism, from the definition of plagiarism (which constitutes research misconduct).

The Final Rule adopted a definition for Institutional Record :

  • Documentation of assessment;
  • The inquiry report and any supporting records;
  • The investigation report and any supporting records;
  • Decisions by the institutional deciding official; and
  • The complete record of any institutional appeal.

The Institutional Record also includes a record index listing all records relied upon by the institution, along with a general description of records not relied upon.

Research misconduct does not include honest error . While the Proposed Rule sought to define honest error (“a mistake made in good faith”), ORI opted to remove the definition in light of comments suggesting it was unnecessary. Still, the exception for “honest error” remains consistent with the FCA, which is not intended to convert honest mistakes into fraud claims. Additionally, ORI removed proposed language that would have prohibited institutions from making a finding regarding honest error at the inquiry stage, agreeing with commenters that such a prohibition would unfairly burden institutions and respondents.

ORI likewise removed definitions of several terms suggested in the Proposed Rule (Appeal, Difference of Opinion, Research Integrity, Suspension, and Debarment) in light of comments that these terms did not help clarify the regulations.

Responsibility of Sub-Recipients

In the Proposed Rule, ORI sought to explicitly place responsibility for sub-recipient compliance with ORI regulations upon the primary PHS-funded recipients. ORI noted that commenters recommended dropping this provision, “because institutional responsibility for regulatory compliance was not clarified.” In the end, ORI removed the proposal, explaining that it “did not intend to impose a new burden on prime funding recipients.” Instead, the Final Rule requires sub-recipients to have their own assurances filed with ORI.

Multiple Respondents and Multiple Institutions

The Proposed Rule introduced an obligation for institutions to consider whether additional researchers are involved in the alleged misconduct. Specifically, institutions would have needed to consider principal investigators, co-authors on publications, co-investigators, collaborators, and laboratory members during the assessment, inquiry, and investigation stages.

ORI removed the obligatory language and finalized the regulation to allow institutions to exercise their own judgment in deciding whether to consider other researchers who may be implicated. ORI also noted commenters' concern that listing the types of potential co-respondents “created a confusing standard and could be detrimental to those individuals.” ORI agreed and removed the list.

The Final Rule leaves intact the requirement that for allegations involving multiple institutions, one institution must be designated the lead in misconduct proceedings and will be responsible for obtaining research records and witness testimonies from the other institutions. ORI intends to publish guidance on multi-institution investigations, including considerations for designating a lead institution.

Subsequent Use Exception to the Six-Year Statute of Limitations

ORI’s research misconduct rules only apply to misconduct occurring within six years prior to the date ORI or the institution received an allegation, except in the case of subsequent use of the tainted research (the Subsequent Use Exception). The Subsequent Use Exception applies when a respondent uses, republishes, or cites to a portion of the research record that allegedly has been fabricated, falsified, or plagiarized within six years of receipt of the allegation. In practice, the Subsequent Use Exception applies to instances in which tainted research occurs more than six years prior to the date HHS or the institution learned of the allegations and, as such, would not be within the scope of the ORI regulations, except the tainted research was cited to by the respondent within six years prior to receipt of the allegation.

The Final Rule adopts language in the Proposed Rule clarifying that the Subsequent Use Exception applies to re-publication, or citation in processed data, journal articles, funding proposals or data repositories, submitted or published manuscripts, PHS grant applications, progress reports, posters, presentations, and other research records.

In the Proposed Rule, ORI proposed a requirement that institutions inform ORI of the relevant facts before concluding the Subsequent Use Exception does not apply. However, commenters expressed concern about the potential cost and burdens imposed upon institutions by that requirement, and ORI removed that language. In its place, ORI finalized a requirement that institutions document (and retain) how they determined the Subsequent Use Exception did not apply.
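To make the timing interplay described above concrete, the following sketch expresses the six-year look-back and the Subsequent Use Exception as simple date arithmetic. It is an illustration only: the function names, the crude handling of leap days, and the example dates are assumptions, not anything drawn from the regulation itself.

```python
from datetime import date

def six_years_before(d: date) -> date:
    # Simple six-year look-back; Feb 29 edge cases are handled crudely in this sketch.
    try:
        return d.replace(year=d.year - 6)
    except ValueError:
        return d.replace(year=d.year - 6, day=28)

def covered_by_ori_rules(misconduct_date: date,
                         allegation_received: date,
                         subsequent_uses: list[date]) -> bool:
    """True if the alleged misconduct falls within the six-year window, or if the
    Subsequent Use Exception applies because the respondent used, republished, or
    cited the tainted record within six years of receipt of the allegation."""
    cutoff = six_years_before(allegation_received)
    if misconduct_date >= cutoff:
        return True  # within the ordinary six-year limitations period
    # Subsequent Use Exception: a qualifying use inside the window brings the
    # earlier misconduct back within the scope of the rules.
    return any(cutoff <= use <= allegation_received for use in subsequent_uses)

# Hypothetical example: data falsified in 2015, allegation received in 2024,
# but the respondent cited the tainted paper in a 2022 grant application.
print(covered_by_ori_rules(date(2015, 3, 1), date(2024, 6, 1), [date(2022, 9, 15)]))  # True
```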

Assessment, Inquiry, Investigation, and Appeal Time

In the Final Rule, ORI agreed with concerns raised by commenters related to proposed time limits to complete the assessment, inquiry, and investigation stages, along with any institutional appeal.

Specifically, commenters objected to the Proposed Rule’s requirement that institutions complete an assessment within 30 days of its initiation, noting that research institutions have historically struggled to meet such aggressive timelines. Commenters also noted that even an assessment (as opposed to the more formal inquiry or investigation) can require an involved process of evaluating allegations of misconduct, which may include reviewing the research record and interviewing the complainant, respondent, and witnesses. ORI responded by removing the 30-day requirement.

ORI also agreed to lengthen the time to complete the inquiry stage from the proposed 60 days to 90 days. If the 90-day period will be exceeded, institutions must document the reason for exceeding it. ORI also extended the time limit to complete the investigation stage from 120 days to 180 days. If the 180-day period will be exceeded, institutions must ask ORI for an extension in writing that sets forth the circumstances warranting the extension.

The Proposed Rule contained a requirement that institutions whose policies permit an institutional appeal complete any such appeal within 120 days of initiation, requesting an extension from ORI in writing if more time were needed. ORI removed this requirement in light of comments that the institutional appeal is “within the institution’s purview, not ORI’s.” However, ORI finalized a requirement that institutions promptly notify ORI if an appeal is filed after the institution has transmitted the institutional record to ORI.

In practice, many institutions struggle to complete the steps of a research misconduct investigation within ORI’s prescribed time frames. ORI’s agreement to extend many of these time frames may make it more likely that institutions can complete each stage without needing either to document reasons for the delay or to affirmatively request an extension from ORI (though institutions still should do so in case ORI raises issues).

Documentation of the Assessment Stage

In the Proposed Rule, institutions would have been required to document the completion of an assessment of a research misconduct allegation in the form of an assessment report. ORI removed this requirement in response to commenters’ concern that it would impose a significant burden on institutions. In its place, ORI will require only that institutions document the outcome of the assessment.

Changes Related to Confidentiality of Impacted Individuals and Institutions

The Proposed Rule sought to require ORI, where an investigation results in a settlement or research misconduct finding, to provide notification to the respondent, relevant institution, and HHS officials. However, in light of commenters’ concerns regarding confidentiality and regulatory overreach, the Final Rule provides that ORI may provide such information to those individuals.

The Proposed Rule also identified potential “need to know” institutions, but ORI removed that list in recognition that it “overcomplicated institutional confidentiality obligations.”

The Final Rule does not include a Proposed Rule provision that would have required an institution to inform respondents, complainants, and witnesses before they are interviewed about how their information may be disclosed.

Additionally, where a final HHS action does not contain a settlement or finding of research misconduct, the Proposed Rule had suggested permitting ORI to publish notice of the outcome “if doing so is within the best interests of HHS to protect the health and safety of the public, to promote the integrity of the PHS supported research and research process, or to conserve public funds.” This language was not adopted in the Final Rule, which only allows ORI to provide written notice to the respondent, the institution, the complainant, and HHS officials.

Streamlined Appeals Process

The Final Rule features a streamlined process for disputing ORI’s findings and administrative actions. The 2005 regulations required a Departmental Appeals Board Administrative Law Judge (ALJ) to undertake a de novo review of ORI’s findings based on evidence presented at a hearing before an ALJ. Under the Final Rule, an ALJ will instead review the administrative record, including information provided by the respondent to ORI, and determine whether ORI’s findings and HHS’s administrative actions are (or are not) based on a material error of law or fact.

ORI suggested in the Proposed Rule that ALJs be permitted to hold a limited hearing should the ALJ determine there is a genuine dispute over a material fact. However, ORI has removed the proposal in light of comments suggesting that the “research misconduct process allows for sufficient procedures to make such limited hearings unnecessary.”

Transcription

The Proposed Rule sought to require institutions to prepare transcripts of all interviews conducted during research misconduct proceedings. ORI noted that commenters objected to the broad requirement out of concern for institutional burdens and discouragement of reporting of misconduct allegations. The Final Rule retains the interview transcription requirement during the investigation phase but no longer calls for transcription during the assessment and inquiry phases.

The Final Rule introduces clarifications to regulations governing research misconduct in PHS-funded research. These clarifications reflect consideration of the changing research landscape and attempt to reduce certain burdens on research institutions. In particular, the Final Rule reflects ORI’s appreciation that many institutions struggle with the time-intensive and costly process of investigating allegations of research misconduct. Additionally, ORI further recognized the complexity of research misconduct investigations and indicated that guidance is forthcoming on topics such as practices for pursuing leads, general policies for compliance, research integrity assurances, institutional committees or consortia, multi-institution investigations, and the institutional record.

Whether the revised regulations are better suited to govern investigations involving innovative research across institutions and disciplines, and spanning many years, remains to be seen. Research institutions of all sizes should take care to review the Final Rule and consider how the revisions to the research misconduct regulations affect current and future institutional practices. Foley has an expert team of attorneys with deep experience helping institutions, including universities, academic medical centers, and health care systems, navigate internal and ORI-initiated research misconduct proceedings.

We will continue to monitor the implementation of these new regulations, as well as any guidance affecting research institutions.

Foley is here to help you address the short- and long-term impacts in the wake of regulatory changes. We have the resources to help you navigate these and other important legal considerations related to business operations and industry-specific issues. Please reach out to the authors, your Foley relationship partner, our Health Care Practice Group, or our Government Enforcement Defense & Investigations Practice Group with any questions.


Olivia Benjamin (King)


Jared H. Pliner


Lisa M. Noller


Michael J. Tuteur



Leading the charge to address research misconduct

Like all science, the field of psychology is vulnerable to fabrication, falsification, and poor research practices, but psychologists are leading the charge for change

Vol. 52 No. 6 Print version: page 71


When James DuBois, ScD, PhD, launched a training program in 2013 for researchers caught failing to comply with research protocols, plagiarizing, or falsifying and fabricating data, it was controversial, to say the least. The program’s launch was accompanied by a feature article in Nature’s news section, and much of the feedback was incensed (Cressey, D., Vol. 493, No. 197).

“Oh, my goodness, the chat for the online story!” DuBois, an applied psychologist at the Washington University School of Medicine in St. Louis, recalled. “There was so much hate.”

It’s no wonder. Misconduct flies in the face of the values of scientific research, which at its heart is about the search for truth. But the reality is that misconduct and its cousin, questionable research practices, occur on a spectrum. The most egregious cases are from outright hucksters who don’t generally qualify for second chances. The majority, though, live in a gray area where an honest mistake, publish-or-perish pressure, and lack of clear norms or quality training can lead to blunders. Anyone, no matter how well intentioned, could be vulnerable.

Psychology as a research field is susceptible to all these pressures, but psychologists are also a major force in solving the problem. With expertise in behavior change, motivations, and incentives, psychological researchers are tackling misconduct at both the individual and structural level. It’s an effort that has already led to big shifts in the field, with more likely to come.

“Psychologists are leading the charge in changing their individual behaviors and the structural components that lead to questionable research practices, including misconduct,” said Brian Nosek, PhD, a social psychologist at the University of Virginia and a proponent of the open science movement as a bulwark for improving credibility. “Most changes have occurred in journals so far. Funders and institutions have begun to change, but there’s still a lot to do.”

The scope of the problem

There are no firm numbers on how much research misconduct goes on. Attempts to quantify the problem usually either count retractions, survey scientists about their own and colleagues’ practices, or statistically reanalyze published papers. A 2019 review of studies using all three methods found that between 0.6% and 2.3% of psychologists surveyed admitted to falsifying data themselves, and between 9.3% and 18.7% said they’d witnessed another researcher doing so (Zeitschrift für Psychologie, Vol. 227, No. 1). Statistical reanalyses suggested that there were major inconsistencies in between 12.4% and 20.5% of published studies, but it’s unclear whether these were examples of outright misconduct, questionable research practices, or errors. Finally, a count of retractions found that 64.84% of retractions in PsycINFO were due to misconduct, similar to rates in other scientific fields.

It’s best to take these numbers with a grain of salt, said Armin Günther, PhD, a research associate at the Leibniz Institute for Psychology in Germany who coauthored the review. Misconduct can be tricky to define, and there are many poor practices that can easily slip by unnoticed.

“I’m not sure if it is so important to have a number, to say ‘It’s 3% of articles,’ or ‘It’s 10%,’” Günther said. “It’s more of a qualitative issue, because if you suspect that research and scientific work is basically not trustworthy, that’s a problem.”

Setting aside egregious fraudsters, misconduct can be a slippery slope—and one that is all too often incentivized by the academic structure, says Jennifer Crocker, PhD, the chief editorial advisor at APA and a social psychologist at The Ohio State University. “People’s careers, livelihoods, and reputations depend on their ability to publish in the scientific literature,” Crocker said. “There is incredible pressure for early career researchers to have not only publications but lots of them—first-authored publications, publications in top journals. And that pressure doesn’t go away over the course of a career.”

Participants in DuBois’s course, the P.I. Program at the Washington University in St. Louis, are usually referred by their universities as a requirement for continued employment or research privileges. One common reason for referrals is failure to provide adequate oversight when someone else in a lab falsifies data, DuBois said. Often, the root cause is being overextended and failing to prioritize compliance.

In a study of 120 cases of misconduct pulled from the research literature and from records of federal research oversight agencies, DuBois and his colleagues found that in these cases at least, oversight failures were common (Accountability in Research, Vol. 20, No. 5–6, 2013). Motivated reasoning and narcissistic thinking were common, with primary investigators who fabricated or falsified data often talking about the pressure to publish or get grants in the pursuit of fame. “They’re not falsifying data just to keep their career afloat, but really to become a superstar,” DuBois said.

Changing incentives

It can be fraught to link misconduct directly to publish-or-perish culture, DuBois said; after all, many researchers cope with the pressure without lapses in ethics, and individual psychological traits seem to play a role for those who don’t. Nevertheless, the past decade or so has seen calls for a change in the incentive structure that links top-tier jobs to flashy results in high-impact journals. The San Francisco Declaration on Research Assessment (DORA), first drafted in 2012, recommends that employers and funders “not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.”

A few universities have tried to put that recommendation into practice. In the Netherlands, a program called Recognition & Rewards aims to inject balance into how academics at participating universities are judged. The goal is to put more focus on aspects outside of publications, including teaching, leadership, and patient care. Even among those who support these ideals, some signatories to DORA have found change to be challenging: In 2019, a chemical systems engineer at ETH Zurich, one of the DORA supporters, had to apologize after including a requirement that postdoctoral applicants must have published in a journal with an impact factor of 10 or higher. And DORA still lacks a critical mass of signatories. Fewer than 50 psychological journals, associations, and programs around the world have endorsed the recommendations.

“Everybody knows that the impact factor is not a good criterion for the scientific work of single persons, but even though everybody knows, it is quite hard to get rid of these kinds of evaluations,” Günther said. “I hope and I think that we will go in the right direction, but it is very slow, like a big ship you have to turn in the ocean.”

If it’s hard to drop the prestige associated with big-name journals altogether, attempts to change the kind of research those journals publish seem to be making more headway. One goal of the open science movement is to ensure that journals don’t just jump on splashy positive results; to better represent the state of science, they should also publish negative results and incremental work. As a result, open science should take the pressure off researchers to produce always flashy, always significant results, reducing one major incentive that leads to questionable research practices.

Open science also elevates transparency as a core value, which makes it more difficult to fabricate or falsify data on the way to a conclusion. Research led by Daniele Fanelli, PhD, a fellow in quantitative methodology at the London School of Economics and Political Science, found that a culture of transparency and communication, paired with strong rules around misconduct, is associated with fewer misconduct-related retractions (PLOS ONE, Vol. 10, No. 6, 2015).

“Transparency does a few things,” Nosek said. “As an author, it gives me some additional accountability. As a reader, it gives you a means to evaluate the basis of my claims. And as a community, it gives us shared evidence to debate credibility and whether biases were injected in practice.”

This kind of transparency is likely familiar to many psychological researchers. In November 2020, APA signed on to the Transparency and Openness Promotion (TOP) Guidelines for all its journals, which require certain disclosure of data and research materials and encourage full data transparency when ethically possible.

Research suggests these efforts are having the desired effect. A study led by Anne Scheel, a doctoral candidate at the Eindhoven University of Technology in the Netherlands, found that registered reports in psychology—in which methods and data analysis plans are laid out prior to the study being done, and publication is guaranteed if those plans are met—return findings consistent with their hypotheses about half the time, compared with 95% of journal papers published without preregistration (Advances in Methods and Practices in Psychological Science, Vol. 4, No. 2, 2021). This is a good sign, Nosek said, because the registered reports more accurately reflect the reality of research—and they dispense with the unrealistic expectation that researchers publish a stream of positive results.

Individual interventions

Open science is a promising way to change incentives, but it can’t combat fraud alone. Any set of new rules can be abused. And total data transparency is not always possible, Crocker said. Imagine, for example, a longitudinal study of romantic dyads. “If people know that they were in the study and they know their own characteristics and responses, they can find their own data in the data set,” Crocker said. “Then they can find their partner’s data in the data set, so anything the partner says is no longer confidential.”

So other kinds of change matter, too. “Responsible Conduct of Research” courses are now de rigueur in psychology training, but there are still gaps in how much funding and attention research institutions give to research integrity, said Nicholas Steneck, PhD, a historian of science, emeritus professor at the University of Michigan, and founder of the World Conferences on Research Integrity. If anything, Steneck said, pressures are worsening on research faculty at universities: State funding of higher education never fully recovered after the 2008 recession, and federal research dollars have flatlined, meaning steeper competition for jobs and grants. “What’s that going to do?” Steneck said. “It’s going to increase pressure on researchers.”

The research community is pushing for structural changes, but institutions haven’t been as proactive, Steneck said. Many rely on volunteer-only staffs in their research integrity offices, and upper administration is often stretched too thin to monitor the university research climate. One of the main goals of the World Conferences on Research Integrity is to get institutional research leaders involved in identifying the problem and solutions, Steneck said. One of his current projects is disseminating an online research integrity training program via Oxford University Press and using it to help university leadership understand potential problem spots in their own programs: In which fields are researchers under extreme pressure to get grants? In which labs are graduate students lacking support from their mentors?

DuBois’s P.I. Program offers a unique opportunity to correct these problems on a case-by-case basis. More than 100 researchers from 57 institutions have attended so far. They’re usually referred by their institutions after a lapse of moderate severity, such as failing to catch a lab member’s data falsification or dropping outliers from their reports without any change to the conclusion.

Evidence suggests that many researchers who make these kinds of errors will not be unceremoniously booted from the field; a study by Kyle L. Galbraith, PhD, a research ethicist now at the Piedmont Athens Regional Medical Center in Georgia, found that of 284 researchers disciplined by the U.S. Office of Research Integrity, at least 47% went on to continue doing research (Journal of Empirical Research on Human Research Ethics, Vol. 12, No. 1, 2017). DuBois’s program aims to ensure that these researchers don’t continue making the same mistakes.

The program was founded with a National Institutes of Health (NIH) grant and is also funded by fees paid by participants. The three-day course is grounded in psychology. “Motivational interviewing plays a big role for us,” DuBois said. “People sometimes feel they have been treated unfairly by their institutions and that becomes a real barrier to change. We try to get them to ask themselves a question like, ‘How do you think you’re perceived at your institution? Is that helping your career?’ Pretty soon they are making exactly the changes you want to see without having to concede that it’s all their fault.”

DuBois and his team follow up with clients with a survey after a year. The results are self-reported but suggest that graduates of the program apply lessons they’ve learned in the course, such as implementing standard operating procedures, holding regular lab meetings, and reducing work stressors (Academic Medicine, Vol. 93, No. 4, 2018). Although DuBois and his team can’t audit these claims themselves, there has not been a case to his knowledge of a program graduate being disciplined a second time for misconduct.

With an eye toward prevention, one of the coleaders of the P.I. Program is now developing a management and leadership training program for early career researchers. Alison Antes, PhD, an industrial and organizational psychologist at the Washington University School of Medicine in St. Louis, has just received NIH funding to launch an online program that would offer training in these skills for running labs with integrity and compliance.

Most researchers aren’t formally trained in management, even though primary investigators take on the roles of executive management, middle management, and basic supervision all at once, Antes said. “It is really a lot that researchers are asked to manage and deal with while at the same time trying to get grants, trying to get their papers out,” she said.

Antes’s research found that scientists who are rated as exemplary by their peers use seven key practices: holding regular team meetings, providing supervision, encouraging shared ownership, ensuring good training, communicating positive attitudes about compliance, reviewing data and findings, and following standard operating procedures (PLOS ONE, Vol. 14, No. 4, 2019). Her new program aims to address failures in these domains. Leadership training goes hand in hand with management training, she added: Leadership involves bigger-picture concepts such as communicating values, setting expectations, and creating an environment that fosters good practices.

The COVID-19 pandemic made clear the costs of public distrust in the scientific process, Steneck said. But public trust must be earned, making dealing with research misconduct an urgent matter.

“If we don’t get our house in order when it comes to integrity, there’s a perfectly good reason for not believing researchers,” Steneck said. “We have to be concerned about that. We have to be above criticism, above fault.”

Part of combating misconduct is knowing what it is and how to report it. These websites can help.

  • APA Publication Practices & Responsible Authorship
  • APA Research Misconduct Information
  • U.S. Department of Health & Human Services Office of Research Integrity
  • NIH Grants & Funding Research Misconduct Overview

Further reading

Scientists’ reputations are based on getting it right, not being right. Ebersole, C. R., et al., PLOS Biology, 2016

Fostering integrity in research. National Academies of Sciences, Engineering, and Medicine, 2017

Assessing the climate for research ethics in labs: Development and validation of a brief measure. Solomon, E. D., et al., Accountability in Research, online first publication


13 September 2021

Swedish research misconduct agency swamped with cases in first year


Scientists have inundated Sweden’s new national research-misconduct investigation agency with cases, and there is no sign of a let-up in referrals.


Nature 597, 461 (2021)

doi: https://doi.org/10.1038/d41586-021-02451-4


A compilation of articles regarding Research Misconduct issues. 

This page offers news-worthy topics for the Responsible Conduct of Research and Research Misconduct. Note: Due to the nature of web page evolution, some links may be broken.

Research Misconduct News


Richard Eckert

Former Maryland dept. chair with $19 million in grants faked data in 13 papers, feds say

August 13, 2024 Retraction Watch Ellie Kincaid

A former department chair engaged in research misconduct in work funded by 19 grants from the National Institutes of Health, according to the U.S. Office of Research Integrity. 

...Of the 13 papers in which Eckert faked Western blot and microscopy image data, according to ORI, four have been corrected and one retracted, and Eckert must request corrections or retractions for the remaining eight. The 13 papers have been cited 488 times, according to Clarivate’s Web of Science. ...



Science stands on shaky shoulders with research misconduct

Research misconduct poisons the well of scientific literature, but finding systemic ways to change the current “publish or perish” culture will help.

July 4, 2024 Drug Discovery News (DDN) Stephanie DeMarco, PhD

I distinctly remember the day I saw a western blot band stretched, rotated, and pasted into another panel. Zoomed out, it looked like a perfectly normal blot; the imposter band sat amongst the others like it had always been there.

Sitting at a long table with the other graduate students on my training grant, I watched as our professor showed us example after example of images from published scientific papers that had been manipulated to embellish the data. I really appreciate that course and the other research integrity courses I took during my research training for teaching me and my peers how to spot bad science and what to do when we encounter it. It made me a better scientist when I was in the lab, and now, it makes me a better journalist.

When bad science infiltrates the publication record, researchers unwittingly build their own research programs around shaky science. Not only does this waste researchers’ time and money, but it affects real people’s lives. 


Implementing statcheck during peer review is related to a steep decline in statistical reporting inconsistencies

June 20, 2024 PsyArXiv Preprints Michele B. Nuijten and Jelte Wicherts

We investigated whether statistical reporting inconsistencies could be avoided if journals implement the tool statcheck in the peer review process. In a preregistered pretest-posttest quasi-experiment covering over 7000 articles and over 147,000 extracted statistics, we compared the prevalence of reported p-values that were inconsistent with their degrees of freedom and test statistic in two journals that implemented statcheck in their peer review process (Psychological Science and Journal of Experimental Social Psychology) and two matched control journals (Journal of Experimental Psychology: General and Journal of Personality and Social Psychology, respectively), before and after statcheck was implemented. Preregistered multilevel logistic regression analyses showed that the decrease in both inconsistencies and decision inconsistencies around p = .05 is considerably steeper in statcheck journals than in control journals, offering preliminary support for the notion that statcheck can be a useful tool for journals to avoid statistical reporting inconsistencies in published articles. We discuss limitations and implications of these findings.
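As a rough illustration of the kind of consistency check statcheck automates, the sketch below recomputes the p-value implied by a reported t statistic and degrees of freedom and compares it with the reported p-value. This is not statcheck’s code; the function name, tolerance, and example values are assumptions for illustration.

```python
from scipy import stats

def t_test_consistent(reported_t: float, df: int, reported_p: float,
                      two_tailed: bool = True, tol: float = 0.005) -> bool:
    """Recompute the p-value from a reported t statistic and degrees of freedom
    and flag an inconsistency if it differs from the reported p-value by more
    than a tolerance meant to allow for rounding to two decimal places."""
    p = stats.t.sf(abs(reported_t), df)
    if two_tailed:
        p *= 2
    return abs(p - reported_p) <= tol

# Hypothetical reported results:
# "t(28) = 2.20, p = .04" is consistent (the recomputed p is roughly .036);
# "t(28) = 2.20, p = .01" is not.
print(t_test_consistent(2.20, 28, 0.04))  # True
print(t_test_consistent(2.20, 28, 0.01))  # False
```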

How restorative justice could help to heal science communities torn apart by harassment misconduct

Could a programme used by police to tackle repeat offending be used in academia? Some institutions are keen to find out.

June 11, 2024 Nature Sarah Wild

When a scientist is found to have harassed a colleague or student, it can reverberate through the research community — sometimes for decades. Not knowing how to treat the parties involved and a lack of guidelines for how to handle a harasser’s data, research, collaborations and publications can tear specialities apart. A debate is unfolding about how institutions handle sexual-harassment and bullying-misconduct cases — and whether there’s a way to deal with them that would heal communities rather than cause more harm.

Interviews with staff and students at three elite Chinese universities revealed a sense of pressure to publish.

Elite researchers in China say they had ‘no choice’ but to commit misconduct

Anonymous interviewees say they engaged in unethical behaviour to protect their jobs — although others say the study presents an overly negative view.

June 11, 2024 Nature Smriti Mallapaty

“I had no choice but to commit [research] misconduct,” admits a researcher at an elite Chinese university. The shocking revelation is documented in a collection of several dozen anonymous, in-depth interviews offering rare, first-hand accounts of researchers who engaged in unethical behaviour — and describing what tipped them over the edge. An article based on the interviews was published in April in the journal Research Ethics.

The interviewer, sociologist Zhang Xinqu, and his colleague Wang Peng, a criminologist, both at the University of Hong Kong, suggest that researchers felt compelled, and even encouraged, to engage in misconduct to protect their jobs. This pressure, they conclude, ultimately came from a Chinese programme to create globally recognized universities. The programme prompted some Chinese institutions to set ambitious publishing targets, they say.

The article offers “a glimpse of the pain and guilt that researchers felt” when they engaged in unethical behaviour, says Elisabeth Bik, a scientific-image sleuth and consultant in San Francisco, California.

Karen Ashe of the University of Minnesota Twin Cities stands by the conclusions of her team’s 2006 paper.

Researchers plan to retract landmark Alzheimer’s paper containing doctored images

Senior author acknowledges manipulated figures in study tying a form of amyloid protein to memory impairment

June 4, 2024 Science Charles Piller

Authors of a landmark Alzheimer’s disease research paper published in Nature in 2006 have agreed to retract the study in response to allegations of image manipulation. University of Minnesota (UMN) Twin Cities neuroscientist Karen Ashe, the paper’s senior author, acknowledged in a post on the journal discussion site PubPeer that the paper contains doctored images. The study has been cited nearly 2500 times, and would be the most cited paper ever to be retracted, according to Retraction Watch data.

“Although I had no knowledge of any image manipulations in the published paper until it was brought to my attention two years ago,” Ashe wrote on PubPeer, “it is clear that several of the figures in Lesné et al. (2006) have been manipulated … for which I as the senior and corresponding author take ultimate responsibility.”

Biomedical paper retractions have quadrupled in 20 years — why?

Unreliable data, falsification and other issues related to misconduct are driving a growing proportion of retractions.

May 31, 2024 Nature Holly Else

The retraction rate for European biomedical-science papers increased fourfold between 2000 and 2021, a study of thousands of retractions has found.

Two-thirds of these papers were withdrawn for reasons relating to research misconduct, such as data and image manipulation or authorship fraud. These factors accounted for an increasing proportion of retractions over the roughly 20-year period, the analysis suggests.

“Our findings indicate that research misconduct has become more prevalent in Europe over the last two decades,” write the authors, led by Alberto Ruano‐Ravina, a public-health researcher at the University of Santiago de Compostela in Spain.

Copy-and-Paste: How Allegations of Plagiarism Became the Culture War’s New Frontier

Harvard had already found itself in the crossfire of the culture war. But with new software at their disposal and a trove of unscrutinized scholarship to dive into, the plagiarism allegations against Claudine Gay had opened up a new frontier.

May 23, 2024 The Harvard Crimson Angelina J. Parker and Neil H. Shah

Plagiarism is a cardinal offense for academics. In December, it also became the latest cudgel in the conservative culture war on Harvard and diversity, equity, and inclusion.

The development could not have come at a worse time for the University. Harvard was struggling to navigate public fallout from former President Claudine Gay’s now-infamous congressional hearing. The University was under a national microscope like never before, and politicians, alumni, and Harvard affiliates were calling for Gay’s resignation.

And amidst it all — as the Harvard Corporation met to discuss Gay’s future at the University — right-wing activist Christopher F. Rufo and journalist Christopher Brunet hit publish on a piece that would add a new element to the controversy: allegations that Gay had plagiarized large sections of her Ph.D. dissertation at Harvard.

Why Scientific Fraud Is Suddenly Everywhere

May 21, 2024 Intelligencer Kevin T. Dugan

Junk science has been forcing a reckoning among scientific and medical researchers for the past year, leading to thousands of retracted papers. Last year, Stanford president Marc Tessier-Lavigne resigned amid reporting that some of his most high-profile work on Alzheimer’s disease was at best inaccurate. (A probe commissioned by the university’s board of trustees later exonerated him of manipulating the data.)

But the problems around credible science appear to be getting worse. Last week, scientific publisher Wiley decided to shutter 19 scientific journals after retracting 11,300 sham papers. There is a large-scale industry of so-called “paper mills” that sell fictive research, sometimes written by artificial intelligence, to researchers who then publish it in peer-reviewed journals — which are sometimes edited by people who had been placed by those sham groups. Among the institutions exposing such practices is Retraction Watch, a 14-year-old organization co-founded by journalists Ivan Oransky and Adam Marcus. I spoke with Oransky about why there has been a surge in fake research and whether fraud accusations against the presidents of Harvard and Stanford are actually good for academia.

Pay researchers to spot errors in published papers

Borrowing the idea of ‘bug bounties’ from the technology industry could provide a systematic way to detect and correct the errors that litter the scientific literature.

May 21, 2024 Nature Malte Elson

In 2023, Google awarded a total of US$10 million to researchers who found vulnerabilities in its products. Why? Because allowing errors to go undetected could be much costlier. Data breaches could lead to refund claims, reduced customer trust or legal liability.

It’s not just private technology companies that invest in such ‘bug bounty’ programmes. Between 2016 and 2021, the US Department of Defense awarded more than US$650,000 to people who found weaknesses in its networks.

Just as many industries devote hefty funding to incentivizing people to find and report bugs and glitches, so the science community should reward the detection and correction of errors in the scientific literature. In our industry, too, the costs of undetected errors are staggering.

"‘Grimpact’: psychological researchers should do more to prevent widespread harm

‘Grimpact’: psychological researchers should do more to prevent widespread harm

Researchers carefully evaluate ethics for study participants, but Alon Zivony argues we need to consider wider guidelines for socially responsible science.

May 17, 2024 The British Psychological Society (the psychologist) Dr. Alon Zivony

In a recent study, US researchers claimed to have found that Black and Hispanic Americans are not more likely to be fatally shot by police officers than White Americans. Unsurprisingly, this study got a lot of attention, with over 100 news outlets covering it and millions of people discussing it on social media. Meanwhile, scientists criticised the study for its analyses, claiming they were so flawed that they invalidated the conclusions entirely. At first, the authors rejected the criticisms.

But then, something almost unprecedented happened: in response to the public debate, the authors decided to retract their paper due to 'the continued use' of the work to 'support for the idea that there are no racial biases in fatal shootings, or policing in general'. In other words, this highly visible paper was retracted, not because of flaws in the methodology, but because of ethical concerns about its adverse impacts on society.

Scientists Possess Inflated Views of Their Own Ethics

Scientists are many things. Being unbiased isn’t one of them.

May 6, 2024 Psychology Today Matt Grawitch, Ph.D.

A recent Psychology Today post by Miller (2024) discussed the results of a research study that included a sample of more than 10,000 researchers from Sweden. Respondents were provided with a description of ethical research practices (Figure 1) and asked to rate (1) how well they applied ethical research practices relative to others in their field and (2) how well researchers in their field applied ethical research practices relative to those in other fields.

The study itself was not overly complex (in fact, each rating was just a single item). When it came to rating their own application of research ethics, 55 percent rated themselves as equal to their peers, close to 45 percent rated themselves as better, and less than 1 percent rated themselves as worse. When it came to assessing others in their field, 63 percent rated their field as similar to others, 29 percent rated their field as better, and close to 8 percent rated their field as worse.

How reliable is this research? Tool flags papers discussed on PubPeer

Browser plug-in alerts users when studies — or their references — have been posted on a site known for raising integrity concerns.

April 29, 2024 Nature Dalmeet Singh Chawla

A free online tool released earlier this month alerts researchers if a paper cites studies that are mentioned on the website PubPeer, a forum scientists often use to raise integrity concerns surrounding published papers.

Studies are usually flagged on PubPeer when readers have suspicions, for example about image manipulation, plagiarism, data fabrication or artificial intelligence (AI)-generated text. PubPeer already offers its own browser plug-in that alerts users if a study that they are reading has been posted on the site. The new tool, a plug-in released on 13 April by RedacTek, based in Oakland, California, goes further — it searches through reference lists for papers that have been flagged. The software pulls information from many sources, including PubPeer’s database; data from the digital-infrastructure organization Crossref, which assigns digital object identifiers to articles; and OpenAlex, a free index of hundreds of millions of scientific documents.
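The underlying idea of cross-checking a paper’s reference list against a set of flagged articles can be sketched in a few lines. The example below uses Crossref’s public REST API to pull a paper’s references and compares their DOIs against a locally maintained set of flagged DOIs; the flagged list, function names, and example DOIs are assumptions for illustration, and RedacTek’s actual plug-in also draws on PubPeer’s database and OpenAlex.

```python
import requests

def referenced_dois(doi: str) -> list[str]:
    """Fetch the reference list for a DOI from the public Crossref API and
    return the DOIs of cited works (references without a DOI are skipped)."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    refs = resp.json()["message"].get("reference", [])
    return [r["DOI"].lower() for r in refs if "DOI" in r]

def flag_citations(doi: str, flagged_dois: set[str]) -> list[str]:
    """Return the cited DOIs that appear in a locally maintained set of
    flagged papers (for example, papers discussed on PubPeer)."""
    return [d for d in referenced_dois(doi) if d in flagged_dois]

# Hypothetical usage: check one paper against a small, locally curated flag list.
flagged = {"10.1234/example.flagged.2020"}  # placeholder DOI, not a real entry
print(flag_citations("10.5555/example.paper.2024", flagged))  # placeholder DOI
```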

So you’ve found research fraud. Now what?

Harvard dishonesty researcher Francesca Gino faked her research. But she still has a lot to teach us.

April 26, 2024 Vox Kelsey Piper

When it is alleged that a scientist has manipulated data behind their published papers, there’s an important but miserable project ahead: looking through the rest of their published work to see if any of that is fabricated as well.

After dishonesty researcher Francesca Gino was placed on leave at Harvard Business School last fall following allegations that four of her papers contained manipulated data, the people who’d co-authored other papers with her scrambled to start double-checking their published works.


One Scientist Neglected His Grant Reports. Now U.S. Agencies Are Withholding Grants for an Entire University.

April 10, 2024 The Chronicle of Higher Education Francie Diep

The National Institutes of Health, the Office of Naval Research, and the U.S. Army are withholding all of their grants from the University of California at San Diego because one scientist failed to turn in required final reports for two of his grants, according to a message sent to the campus community on Tuesday.

“This action is the result of one Principal Investigator’s extended non-submission of final technical reports for two awards,” Corinne Peek-Asa, vice chancellor for research and innovation, wrote in the message. “If you are a PI receiving a new or continuing award from one of these agencies, you will receive a notice that the award will be delayed.”

Students rally on the campus of UCLA in 2016 to protest the handling of a sexual-harassment case.

One Way to Stop ‘Passing the Harasser’? Require Colleges to Ask About It.

April 8, 2024 The Chronicle of Higher Education Michael Vasquez

Higher education has long been dogged by the “pass-the-harasser” phenomenon, in which employees found responsible for sexual misconduct have been allowed to quietly depart their colleges, only to be hired by other campuses who knew nothing of their misdeeds. Sometimes the misconduct continues.

That is slowly changing. California is the latest state to consider enacting a law that would require colleges to contact job applicants’ current or past employers to ask about policy violations. Assembly Bill 810, part of a larger package of anti-harassment legislation, has been passed by the state’s lower chamber and is now in the Senate.

Physicist Ranga Dias was once a rising star in the field of superconductivity research.

Exclusive: official investigation reveals how superconductivity physicist faked blockbuster results

The confidential 124-page report from the University of Rochester, disclosed in a lawsuit, details the extent of Ranga Dias’s scientific misconduct.

April 6, 2024 Nature Dan Garisto

Ranga Dias, the physicist at the centre of the room-temperature superconductivity scandal, committed data fabrication, falsification and plagiarism, according to an investigation commissioned by his university. Nature’s news team discovered the bombshell investigation report in court documents.

The ten-month investigation, which concluded on 8 February, was carried out by an independent group of scientists recruited by the University of Rochester in New York. They examined 16 allegations against Dias and concluded that it was more likely than not that in each case, the physicist had committed scientific misconduct. The university is now attempting to fire Dias, who is a tenure-track faculty member at Rochester, before his contract expires at the end of the 2024–25 academic year.


Why I foster multiple lines of communication with students in my lab

April 4, 2024 Science Denis Meuthen

When I started my faculty position, I was excited to be leading my own lab—and nervous. I’m legally deaf and rely on lip-reading for verbal communication. I had managed fine as a graduate student and postdoc, though not without misunderstandings and challenges. But leading a team was different. I worried about whether I would be able to communicate effectively with my lab members, and also whether they would respect me. My pronunciation is sometimes off because of my disability, which leads some people to judge my intelligence as lacking. I set out unsure how to navigate these uncertain waters. But after almost 2 years in the position, I’ve come up with a set of solutions for how best to communicate with my trainees. Many would be useful for any lab head, disabled or not.


Publishing negative results is good for science

April 2, 2024 Microbiology Society Elisabeth M. Bik

Scientists face challenges in publishing negative results, because most scientific journals are biassed in accepting positive and novel findings. Despite their importance, negative results often go unpublished, leading to duplication of efforts, biassed meta-analyses, and ethical concerns regarding animal and human studies. In this light, the initiative by Access Microbiology to collect and publish negative results in the field of microbiology is a very important and valuable contribution towards unbiassed science.

Bik, E. M. (2024). Publishing negative results is good for science. Access Microbiology, 6(4). https://doi.org/10.1099/acmi.0.000792


Universities Oppose Federal Plan to Bolster Research Misconduct Oversight

The Office of Research Integrity is considering stronger regulations for institutional investigations of alleged research misconduct. Universities say it’s too prescriptive.

April 2, 2024 Inside Higher Ed Kathryn Palmer

The federal Office of Research Integrity (ORI) is proposing changes that would give the government more oversight of investigations of research misconduct at colleges and universities.

But scores of university and research hospital leaders and the organizations representing them are opposed and say the proposed rules would be burdensome to institutions and could potentially deter people from reporting alleged research misconduct, among other perceived negative consequences.

An unpleasant surprise awaited scientists who surveyed 1,035 journal articles to prepare a review about a test commonly carried out on rats.

How papers with doctored images can affect scientific reviews

Scientists compiling a review scan more than 1,000 papers and find troubling images in some 10%.

March 28, 2024 Nature Sumeet Kulkarni

It was in just the second article of more than 1,000 that Otto Kalliokoski was screening that he spotted what he calls a “Photoshop masterpiece”.

The paper showed images from western blots — a technique used to analyse protein composition — for two samples. But Kalliokoski, an animal behaviourist at the University of Copenhagen, found that the images were identical down to the pixel, which he says is clearly not supposed to happen.

Image manipulation in scientific studies is a known and widespread problem. All the same, Kalliokoski and his colleagues were startled to come across more than 100 studies with questionable images while compiling a systematic review about a widely used test of laboratory rats’ moods.
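Checking whether two published panels are literally identical down to the pixel, as Kalliokoski describes, is straightforward to automate. The sketch below compares two image files with Pillow and NumPy; the file names are placeholders, and real image forensics (catching rotation, rescaling, cropping, or brightness tweaks) requires far more than this exact-match test.

```python
import numpy as np
from PIL import Image

def pixels_identical(path_a: str, path_b: str) -> bool:
    """Return True if two images have the same dimensions and every pixel matches.
    An exact match across supposedly independent samples is a strong signal of
    duplication, though it catches only the crudest form of image reuse."""
    a = np.asarray(Image.open(path_a).convert("RGB"))
    b = np.asarray(Image.open(path_b).convert("RGB"))
    return a.shape == b.shape and np.array_equal(a, b)

# Placeholder file names for illustration.
# print(pixels_identical("blot_sample1.png", "blot_sample2.png"))
```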

Institutions such as Harvard Medical School argue their processes are adequate and adjustments would constitute government overreach.

The Feds Want More Oversight of Scientific Research. Universities Are Fighting Back.

Research institutions are pushing back against proposed changes to misconduct, plagiarism investigations

March 28, 2024 The Wall Street Journal Melissa Korn and Nidhi Subbaraman

Research universities and hospitals are pushing back against a federal agency’s proposal to boost oversight of investigations related to fraud and plagiarism, even as many face questions over the credibility of their scientists’ work.

The Office of Research Integrity, part of the U.S. Department of Health and Human Services, oversees more than $40 billion in research funds and is calling for more transparency in research-misconduct investigations. The recommended changes come amid high-profile cases at schools including Stanford University, Harvard Medical School and the University of Rochester.

Morteza Mahmoudi is the co-founder of the Academic Parity Movement, an organization that aims to end bullying in academia.

Bullied in science: I quit my job and launched an advocacy non-profit

Ahead of the Academic Parity Movement’s annual conference, co-founder Morteza Mahmoudi describes how it supports whistle-blowers.

March 12, 2024 Nature Morteza Mahmoudi

I experienced a wide spectrum of academic bullying and eventually had to quit a job because of it. It was a heart-wrenching decision. Since my departure, I’ve found peace in a supportive work environment. I was determined to use all the available means to prevent others from facing similar situations.

So, alongside my scientific work, I study the root causes of academic bullying and harassment and seek solutions to them. I forgave my bully last year, but I still find it challenging to forgive those who protected the bully and ultimately forced my departure.


Automatically listing senior members of departments as co-authors is highly prevalent in health sciences: meta-analysis of survey research

March 11, 2024 Nature Reint A. Meursinge Reynders, David Cavagnetto, Gerben ter Riet, Nicola Di Girolamo, & Mario Malicki

A systematic review with meta-analysis was conducted to assess the prevalence of automatically listing (a) senior member(s) of a department as co-author(s) on all submitted articles in health sciences and the prevalence of degrees of support on a 5-point justification scale. Survey research was searched in PubMed, Lens.org, and Dimensions.ai until January 5, 2023. We assessed the methodological quality of studies and conducted quantitative syntheses. We identified 15 eligible surveys that provided 67 results, all of which were rated as having low quality. A pooled estimate of 20% [95% CI 16–25] (10 surveys, 3619 respondents) of researchers in various health sciences reported that a senior member of their department was automatically listed as an author on all submitted articles. Furthermore, 28% [95% CI 22–34] of researchers (10 surveys, 2180 respondents) felt that this practice was ‘never’, 24% [95% CI 22–27] ‘rarely’, 25% [95% CI 23–28] ‘sometimes’, 13% [95% CI 9–17] ‘most of the time’, and 8% [95% CI 6–9] ‘always justified’. The practice of automatically assigning senior members of departments as co-authors on all submitted manuscripts may be common in the health sciences, with those admitting to this practice finding it unjustified in most cases.

Meursinge Reynders, R.A., Cavagnetto, D., ter Riet, G. et al. Automatically listing senior members of departments as co-authors is highly prevalent in health sciences: meta-analysis of survey research. Sci Rep 14, 5883 (2024). https://doi.org/10.1038/s41598-024-55966-x
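For readers unfamiliar with how a pooled figure such as “20% [95% CI 16–25]” is produced from multiple surveys, the sketch below pools proportions with a simple fixed-effect, inverse-variance meta-analysis on the logit scale. The input numbers are invented and the published review used its own methods; this is only an illustration of the general technique.

```python
import math

def pool_proportions(events_and_totals):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale.
    Takes (number of 'yes' respondents, number of respondents) per survey and
    returns the pooled proportion with a 95% confidence interval."""
    weights, logits = [], []
    for k, n in events_and_totals:
        p = k / n
        logit = math.log(p / (1 - p))
        var = 1 / (n * p * (1 - p))      # variance of the logit of a proportion
        logits.append(logit)
        weights.append(1 / var)
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    to_p = lambda x: 1 / (1 + math.exp(-x))
    return to_p(pooled), to_p(pooled - 1.96 * se), to_p(pooled + 1.96 * se)

# Invented example: three surveys reporting how many respondents say senior
# members are automatically listed as co-authors.
print(pool_proportions([(60, 300), (45, 250), (80, 400)]))
```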

Ranga Dias and his team at the University of Rochester compressed materials in a device called a diamond anvil cell to explore superconductivity.

Superconductivity scandal: the inside story of deception in a rising star’s physics lab

Ranga Dias claimed to have discovered the first room-temperature superconductors, but the work was later retracted. An investigation by Nature’s news team reveals new details about what happened — and how institutions missed red flags.

March 8, 2024 Nature Dan Garisto

In 2020, Ranga Dias was an up-and-coming star of the physics world. A researcher at the University of Rochester in New York, Dias achieved widespread recognition for his claim to have discovered the first room-temperature superconductor, a material that conducts electricity without resistance at ambient temperatures. Dias published that finding in a landmark Nature paper [1].

Nearly two years later, that paper was retracted. But not long after, Dias announced an even bigger result, also published in Nature: another room-temperature superconductor [2]. Unlike the previous material, the latest one supposedly worked at relatively modest pressures, raising the enticing possibility of applications such as superconducting magnets for medical imaging and powerful computer chips.

Medscape

Peer Review and Scientific Publishing Are Faltering

March 7, 2024 Medscape Robert Villa, MD

A drawing of a rat with four testicles and a giant penis was included in a scientific paper and recently circulated on social media and in online publications. It graphically represents the outcome of disregarding the quality of science produced each year in favor of its quantity.

For many years, there has been talk of paper mills: publishers who print scientific journals and articles for a fee without caring about the reliability of their research. These publishers of what are called predatory journals sometimes seem not to care whether their authors even exist. The business pleases publishing groups paid by researchers, researchers who can increase the number of their publications (which is crucial for their professional evaluation), institutions that can boast of researchers who publish a lot, and sometimes even interest groups outside academia or research centers that exploit the system to give scientific legitimacy to their demands (as has sometimes happened within antivaccine movements). Serious scientists and, above all, trust in science suffer.

Lightspring/Shutterstock

Science integrity sleuths welcome legal aid fund for whistleblowers

Investor has pledged $1 million over 4 years

March 5, 2024 Science Holly Else

A Silicon Valley investor has pledged $1 million to help pay the legal costs of scientists being sued for flagging fraudulent research. Yun-Fang Juan, an engineer and data scientist by background, hopes the new Scientific Integrity Fund—the first of its kind—will make speaking up about wrongdoing less intimidating. The fund comes after a spate of cases in which high-profile scientists have retracted papers after whistleblowers made allegations of research fraud.

“As scientists, we need to be able to ask questions and raise concerns about other researchers’ work, without the risk of being sued, or going bankrupt because we have to hire a lawyer,” says prominent science sleuth Elisabeth Bik, an adviser to the fund.

Trends in US public confidence in science and opportunities for progress

March 4, 2024 PNAS Arthur Lupia, David B. Allison, Kathleen Hall Jamieson, Jennifer Heimberg, Magdalena Skipper, and Susan Wolf

In recent years, many questions have been raised about whether public confidence in science is changing. To clarify recent trends in the public’s confidence and factors that are associated with these feelings, an effort initiated by the National Academies’ Strategic Council for Research Excellence, Integrity, and Trust (the Strategic Council) analyzed findings from multiple survey research organizations. The Strategic Council’s effort, which began in 2022, found that U.S. public confidence in science, the scientific community, and leaders of scientific communities is high relative to other civic, cultural, and governmental institutions for which researchers regularly collect such data. 

Lupia, Arthur, et al. “Trends in U.S. Public Confidence in Science and Opportunities for Progress.” Proceedings of the National Academy of Sciences of the United States of America, vol. 121, no. 11, 4 Mar. 2024, https://doi.org/10.1073/pnas.2319488121. Accessed 18 Mar. 2024.

ALEX HOGAN/STAT

Q&A: The scientific integrity sleuth taking on the widespread problem of research misconduct

February 28, 2024 STAT News Deborah Balthazar

Elisabeth Bik, a microbiologist by training, has become one of the world’s most influential science detectives. An authority on scientific image analysis who’s been profiled in The New Yorker for her unique ability to spot duplicated or doctored photographs, she appeared frequently in the news over the past year as one of the experts who raised research misconduct concerns that led to an investigation into, and the eventual departure of, former Stanford president Marc Tessier-Lavigne.

Taylor & Francis Online

Responding to research misconduct allegations brought against top university officials

February 27, 2024 Taylor & Francis Online David B. Resnik, Mohammad Hosseini, & Lisa Rasmussen

Investigating research misconduct allegations against top officials can create significant conflicts of interest (COIs) for universities that may require changes to existing oversight frameworks. One way of addressing some of these challenges is to develop policies and procedures that specifically address investigation of allegations of misconduct involving top university officials. Steps can also be taken now regardless of whether such a body is created. Federal and university research misconduct regulations and policies may need to be revised to provide institutions with clearer guidance on how to deal with misconduct allegations against top officials. For their part, institutions may benefit from proactively creating and transparently disclosing their own processes for independent investigation of research misconduct allegations against senior officials.

David B. Resnik, Mohammad Hosseini & Lisa Rasmussen (2024) Responding to research misconduct allegations brought against top university officials, Accountability in Research, DOI: 10.1080/08989621.2024.2321179

ILLUSTRATION BY THE CHRONICLE; ISTOCK IMAGES

Wanted: Scientific Errors. Cash Reward.

February 21, 2024 The Chronicle of Higher Education Stephanie M. Lee

Scientific-misconduct accusations are leading to retractions of high-profile papers, forcing reckonings within fields and ending professorships, even presidencies. But there’s no telling how widespread errors are in research: As it is, they’re largely brought to light by unpaid volunteers.

A program launching this month is hoping to shake up that incentive structure.

Yves Moreau has long waged a campaign against genetics studies from China with questionable consent procedures. Credit: Lies Willaert

‘Ethics is not a checkbox exercise.’ Bioinformatician Yves Moreau reacts to mass retraction of papers from China

A genetics journal has pulled 18 studies over concerns that study participants did not give free consent

February 20, 2024 Science Dennis Normile

Last week, bioinformatician Yves Moreau of KU Leuven scored an important victory: The journal Molecular Genetics & Genomic Medicine retracted 18 papers from Chinese institutions because of ethical concerns. Moreau has long waged a solo campaign against studies that fail to get proper free and informed consent when collecting genetic samples, especially from vulnerable populations in China. He had raised questions about the now-retracted papers in 2021 and says this appears to be the largest set of retractions ever over human rights issues.

PHOTO: CAMERON DAVIDSON

Passion is not misconduct

February 13, 2024 Science H. Holden Thorp

University of Pennsylvania climate scientist Michael Mann was awarded more than $1 million in a lawsuit against bloggers who accused him of scientific misconduct in inflammatory terms, likening his treatment of data to what a noted child molester did to children. The verdict suggests that there are limits to the extent to which scientists working on politically sensitive topics can be falsely attacked. But the case also says something profound about the difference between matters of opinion and scientific interpretations that can be worked out through normal academic processes. Although Mann has expressed strong—and even intemperate—emotions and words in political discourse, the finding of the District of Columbia Superior Court boiled down to the fact that scientific misconduct is not a matter of opinion; rather, it can be established using known processes.

SARA GIRONI CARNEVALE

Vendor offering citations for purchase is latest bad actor in scholarly publishing

Unscrupulous researchers have many options for gaming citation metrics, new study highlights

February 12, 2024 Science Katie Langin

In 2023, a new Google Scholar profile appeared online featuring a researcher no one had ever heard of. Within a few months, the scientist, an expert in fake news, was listed by the scholarly database as their field’s 36th most cited researcher. They had an h-index of 19—meaning they’d published 19 academic articles that had been cited at least 19 times each. It was an impressive burst onto the academic publishing scene.

But none of it was legitimate. The researcher and their institution were fictional, created by researchers at New York University (NYU) Abu Dhabi who were probing shady publishing practices. The publications were written by ChatGPT. And the citation numbers were bogus: Some came from the author excessively citing their own “work,” while 50 others had been purchased for $300 from a vendor offering a “citations booster service.”
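For readers unfamiliar with the metric, the h-index mentioned above is easy to compute from a list of per-paper citation counts. The sketch below is a generic illustration of the definition given in the article, not code used by Google Scholar or the NYU Abu Dhabi team, and the citation counts are invented.

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank       # this paper still supports an h-index of `rank`
        else:
            break
    return h

# Invented counts for a 21-paper profile: 19 papers with >= 19 citations each.
fake_profile = [25, 24, 23, 22, 22, 21, 21, 20, 20, 20,
                20, 19, 19, 19, 19, 19, 19, 19, 19, 3, 1]
print(h_index(fake_profile))  # -> 19, matching the definition in the article
```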

ADOBE

A flurry of research misconduct cases has universities scrambling to protect themselves

February 12, 2024 STAT Angus Chen and Jonathan Wosen

There was a time when an allegation of data mishandling, scientific misconduct, or just a technical error felt like a crisis to Barrett Rollins, an oncologist and research integrity officer at Dana-Farber Cancer Institute. Now, it’s just another Tuesday.

The renowned cancer treatment and research center is in the midst of a lengthy review of possible discrepancies involving around 60 papers co-authored by four of its top researchers, including CEO Laurie Glimcher and COO William Hahn, over a period of more than 15 years. And it’s hardly alone. Over the past decade, the number of research misconduct allegations reported to the National Institutes of Health has more than doubled, climbing from 74 in 2013 to 169 in 2022. And scientific sleuths are finding plenty of other problems that don’t always qualify as outright misconduct.

Journals are making an effort to detect manipulated images of the gels used to analyse proteins and DNA. Credit: Shutterstock

How journals are fighting back against a wave of questionable images

Publishers are deploying AI-based tools to detect suspicious images, but generative AI threatens their efforts.

February 12, 2024 Nature Nicola Jones

It seems that every month brings a fresh slew of high-profile allegations against researchers whose papers — some of them years old — contain signs of possible image manipulation.

Scientist sleuths are using their own trained eyes, along with commercial software based on artificial intelligence (AI), to spot image duplication and other issues that might hint at sloppy record-keeping or worse. They are bringing these concerns to light in places like PubPeer, an online forum featuring many new posts every day flagging image concerns.

Some of these efforts have led to action. Last month, for example, the Dana-Farber Cancer Institute (DFCI) in Boston, Massachusetts, said that it would ask journals to retract or correct a slew of papers authored by its staff members. The disclosure came after an observer raised concerns about images in the papers. The institute says it is continuing to investigate the concerns.
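As a rough illustration of what the simplest kind of automated duplication check involves, the sketch below hashes fixed-size patches of a figure and flags any patch that appears more than once. It is a toy example under stated assumptions (exact copies only, a hypothetical file name); production tools such as those described above also handle rotation, rescaling, and contrast changes, and they filter out featureless background regions that would otherwise match trivially.

```python
from collections import defaultdict
from PIL import Image  # pip install pillow

def find_exact_duplicate_patches(path, patch=32):
    """Return lists of (x, y) positions whose pixel-identical patches repeat.

    Only catches exact copy-and-paste; uniform background patches will also
    match, so a real tool would discard low-variance regions first.
    """
    img = Image.open(path).convert("L")   # greyscale, e.g. a gel or blot panel
    width, height = img.size
    seen = defaultdict(list)
    for y in range(0, height - patch + 1, patch):
        for x in range(0, width - patch + 1, patch):
            tile = img.crop((x, y, x + patch, y + patch)).tobytes()
            seen[tile].append((x, y))
    return [positions for positions in seen.values() if len(positions) > 1]

# Hypothetical usage (the file name is made up):
# repeats = find_exact_duplicate_patches("figure_2_western_blot.png")
# print(repeats)
```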

Taylor & Francis Online

An accidental discovery of scientific fraud: A reconstruction

February 9, 2024 Taylor & Francis Online Marijke Schotanus-Dijkstra

Dear Professor Covan,

You have recently decided to retract the paper of Hania et al. (2022). Thank you for inviting me to explain why I was suspicious about the originality of this paper, which led to this retraction.

I am currently working on a scoping review about flourishing mental health during the menopausal transition. First, I read around 250 papers after initial screening of titles and abstracts. Second, I started the data extraction process, by which point around 40 articles had been extracted into an Excel data file. I started to input the data of the paper of Iioka and Komatsu (2015), but I discovered that in each of the columns, except for “study” and “country,” Excel already showed me the exact or almost exact answer for the information I wanted to extract. For some columns, like the age range and mean age (SD), this might be possible, as some articles use similar datasets. Yet the further I worked on the extraction for this paper, the more suspicious I got, because each column seemed to be identical to one particular article, that of Hania et al. (2022). Especially the exact same key findings for the exact same outcomes were disturbing.

Marijke Schotanus-Dijkstra (2024) An accidental discovery of scientific fraud: A reconstruction, Health Care for Women International, DOI: 10.1080/07399332.2024.2310709

A new method searches the scholarly literature for trends in authorship that indicate paper-mill activity. Credit: Zoonar GmbH/Alamy

Fake research papers flagged by analysing authorship trends

A new approach to detecting fraudulent paper-mill studies focuses on patterns of co-authors rather than manuscript text.

February 7, 2024 Nature Dalmeet Singh Chawla

A research-technology firm has developed a new approach to help identify journal articles that originate from paper mills — companies that churn out fake or poor-quality studies and sell authorships.

The technique, described in a preprint posted on arXiv last month [1], uses factors such as the combination of a paper’s authors to flag suspicious studies. Its developers at London-based firm Digital Science say it can help to identify cases in which researchers might have bought their way onto a paper.

Accusations of plagiarism, including alleged misuse of ChatGPT, should not be made lightly. Credit: Alexandre Rotenberg/Alamy

‘Obviously ChatGPT’ — how reviewers accused me of scientific fraud

A journal reviewer accused Lizzie Wolkovich of using ChatGPT to write a manuscript. She hadn’t — but her paper was rejected anyway.

February 5, 2024 Nature E. M. Wolkovich

I have just been accused of scientific fraud. Not data fraud — no one accused me of fabricating or misleadingly manipulating data or results. This, I suppose, is a relief because my laboratory, which studies how global change reshapes ecological communities, works hard to ensure that data are transparent and sharable, and that our work is reproducible. Instead, I was accused of writing fraud: passing off ‘writing’ produced by artificial intelligence (AI) as my own. That hurts, because — like many people — I find writing a paper to be a somewhat painful process. I read books on how to write — both to be comforted by how much these books stress that writing is generally slow and difficult, and to find ways to improve. My current strategy involves willing myself to write and creating several outlines before the first draft, which is followed by writing and a lot of revising. I always suggest this approach to my students, although I know it is not easy, because I think it’s important that scientists try to communicate well.

Fake research papers could jeopardise drug development, warn academics. Photograph: Westend61/Getty Images

‘The situation has become appalling’: fake scientific papers push research credibility to crisis point

Last year, 10,000 sham papers had to be retracted by academic journals, but experts think this is just the tip of the iceberg  

February 3, 2024 The Guardian Robin McKie

Tens of thousands of bogus research papers are being published in journals in an international scandal that is worsening every year, scientists have warned. Medical research is being compromised, drug development hindered and promising academic research jeopardised thanks to a global wave of sham science that is sweeping laboratories and universities.

Last year the annual number of papers retracted by research journals topped 10,000 for the first time. Most analysts believe the figure is only the tip of an iceberg of scientific fraud.

Sheets of paper floating in clouds

Impact factor mania and publish-or-perish may have contributed to Dana-Farber retractions, experts say

Learning from past errors (and misconduct) in cancer research

February 2, 2024 The Cancer Letter Jacquelyn Cobb

More than a decade ago, Glenn Begley and Lee Ellis published a paper with astounding findings: of 53 “landmark” studies, only six, or 11%, were reproducible, even with the same reagents and the same protocols—and even, sometimes, in the same laboratory—as the original study.

Begley’s and Ellis’s classic paper, published in Nature, gave rise to a movement that captured the attention of the uppermost crust of biomedical research.

Then-NCI director Harold Varmus, for example, focused on the paper—and the broader problem of reproducibility—at a 2013 meeting of the National Cancer Advisory Board (The Cancer Letter, Dec. 3, 2013). In 2014, Francis Collins and Lawrence Tabak, then-director and then-deputy director of NIH, outlined the agency's plan to address the issue of reproducibility in biomedical research. Journals and funding agencies took action. Declarations, meetings, and reports suddenly materialized, and research funders rapidly responded.

Bad incentives in academia are leading to a surge in retractions. Conditions could create a trap even for well-meaning researchers. GETTY

Surge In Academic Retractions Should Put U.S. Scholars On Notice

February 1, 2024 Forbes James Broughel

A December article in Nature highlighted an alarming new record: more than 10,000 academic papers were retracted in 2023 alone, largely stemming from manipulation of the peer review and publication processes. Over 8,000 of the retractions came from journals run by the Egyptian company Hindawi, a subsidiary of Wiley, and many were in special issues, which are collections of articles often overseen by guest editors that can have laxer standards than normal.

For now, researchers from countries like Saudi Arabia, Pakistan, Russia and China face the highest retraction rates, but it is sensible to ask: what would happen if a major scandal hit a mainstream American discipline? The idea seems less far-fetched than it used to. With disgraced ex-Stanford President Marc Tessier-Lavigne's and former Harvard President Claudine Gay's academic records fresh in public memory, a scandal involving elite American researchers and universities is all too plausible.

Hunter Moseley says that good reproducibility practices are essential to fully harness the potential of big data. Credit: Hunter N.B. Moseley

In the AI science boom, beware: your results are only as good as your data

Machine-learning systems are voracious data consumers — but trustworthy results require more vetting both before and after publication.

February 1, 2024 Nature Hunter Moseley

We are in the middle of a data-driven science boom. Huge, complex data sets, often with large numbers of individually measured and annotated ‘features’, are fodder for voracious artificial intelligence (AI) and machine-learning systems, with details of new applications being published almost daily.

But publication in itself is not synonymous with factuality. Just because a paper, method or data set is published does not mean that it is correct and free from mistakes. Without checking for accuracy and validity before using these resources, scientists will surely encounter errors. In fact, they already have.

Dana-Farber

Science sleuths are using technology to find fakery and plagiarism in published research

January 28, 2024 AP News Carla K. Johnson

Allegations of research fakery at a leading cancer center have turned a spotlight on scientific integrity and the amateur sleuths uncovering image manipulation in published research.

Dana-Farber Cancer Institute, a Harvard Medical School affiliate, announced Jan. 22 it’s requesting retractions and corrections of scientific papers after a British blogger flagged problems in early January.

The blogger, 32-year-old Sholto David, of Pontypridd, Wales, is a scientist-sleuth who detects cut-and-paste image manipulation in published scientific papers.

AI and the Future of Image Integrity in Scientific Publishing

January 22, 2024 ScienceEditor Dror Kolodkin-Gal

Scientific publishing serves as a vital medium for sharing research results with the global scientific community. The images within an article are often integral to conveying those results clearly. However, with researchers sometimes including hundreds of sub-images in a manuscript, manually ensuring all images accurately depict the data they are intended to represent can be a challenge. Here, cancer researcher and founder of an artificial intelligence (AI) image-checking software tool [1], Dr Dror Kolodkin-Gal, explores how researchers and editors can improve image integrity, and how AI can streamline the publishing process.

AI and the future of image integrity in scientific publishing https://doi.org/10.36591/SE-4701-02

Whistleblowing microbiologist wins unfair dismissal case against USGS

January 11, 2024 ChemistryWorld Rebecca Trager

A microbiologist has won her case for unfair dismissal against a US federal agency after she blew the whistle on animal welfare and biosafety failures. The US Geological Survey (USGS) hired Evi Emmenegger as a fisheries microbiologist in 1994, and in 2006 promoted her to manager of the highest biosafety level containment laboratory at the agency’s Western Fisheries Research Center (WFRC) in Seattle. But in 2017, she became a whistleblower when she filed a scientific integrity complaint that the agency dismissed before putting her on leave in January 2020 and then firing her for alleged lapses in her research – a termination that was later retracted.

Genuine images in 2024

January 5, 2024 Science H. Holden Thorp

In recent years, the research community has become increasingly concerned with issues involving the manipulation of images in scientific papers. Some of these alterations—involving images from experimental techniques such as microscopy, flow cytometry, and western blots—are inadvertent and may not change the conclusions of papers. But in rare cases, some are done deliberately to mislead readers. Image sleuths who can detect these alterations, like the scientific integrity consultant Elisabeth Bik, have risen to prominence, as has the website PubPeer, where many of the detected flaws are posted. High-profile incidents, such as one involving the laboratory of former Stanford University President Marc Tessier-Lavigne, have eroded public confidence in science and harmed careers of investigators who missed doctored images coming from their own laboratories. To address these problems, in 2024, the Science family of journals is adopting the use of Proofig, an artificial intelligence (AI)–powered image-analysis tool, to detect altered images across all six of the journals.

News Archive

DHHS, The Office of Research Integrity: Misconduct Case Summaries [HTML]

June 2024 Shaker Mousa, Ph.D., M.B.A., FACC, FACB, Albany College of Pharmacy and Health Sciences

ORI found that Respondent engaged in research misconduct by intentionally, knowingly, or recklessly falsifying and/or fabricating chick chorioallantoic membrane (CAM) assays used to determine angiogenesis activities of small molecules in two (2) published papers.

May 2024 Darrion Nguyen, Baylor College of Medicine

ORI found that Respondent engaged in research misconduct by intentionally, knowingly, or recklessly falsifying and/or fabricating experimental data and results that were included in one (1) RPPR, one (1) presentation, one (1) poster, six (6) research records, and two (2) figures of a prospective manuscript.

April 2024 Gian-Stefano Brigidi, Ph.D., University of California San Diego and University of Utah

ORI found that Respondent engaged in research misconduct by knowingly or intentionally falsifying and/or fabricating data and results by manipulating primary data values to falsely increase the n-value, manipulating fluorescence micrographs and their quantification graphs to augment the role of ITFs in murine hippocampal neurons, and/or manipulating confocal images that were obtained through different experimental conditions in twenty (20) figures of one (1) published paper and four (4) PHS grant applications, one (1) panel of one (1) poster, and seven (7) slides of one (1) presentation.

November 2023 Sarah Elizabeth Martin, Auburn University

ORI found that Respondent engaged in research misconduct by intentionally or knowingly falsifying and/or fabricating experimental data and results obtained under different experimental conditions that were included in one (1) grant application, one (1) published paper, one (1) submitted manuscript, and six (6) presentations.

October 2023 Lara S. Hwa, Ph.D., Baylor University and University of North Carolina at Chapel Hill

ORI found that Respondent engaged in research misconduct by knowingly or recklessly falsifying and/or fabricating data, methods, results, and conclusions in animal models of alcohol use disorders. Specifically, Respondent falsified and/or fabricated experimental timelines, group conditions, sex of animal subjects, mouse strains, and behavioral response data in two (2) published papers and two (2) PHS grant applications.

Research Misconduct

The scientific research enterprise is built on a deep foundation of trust and shared values. As the National Academies wrote in their 2019 On Being a Scientist report, "this trust will endure only if the scientific community devotes itself to exemplifying and transmitting the values associated with ethical scientific conduct." Falsifying, fabricating, or plagiarizing data—all forms of research misconduct—are contrary to these values and ethical conduct.

What Is Research Misconduct

Understand what research integrity is and the importance of maintaining integrity in the scientific enterprise.

Reporting a Concern

Learn how researchers, institutions, and the public can inform NIH about potential research misconduct violations.

Process for Handling Allegations of Research Misconduct

Learn about NIH's process for handling allegations of research misconduct.

NIH Expectations, Policies, and Requirements

Learn about the expectations, policies, and requirements related to research misconduct.

NIH Actions and Oversight after a Finding of Research Misconduct

Learn about the steps NIH and related agencies take when findings of research misconduct are confirmed.

Notices, Statements and Reports

Find related notices, policy references, statements, reports and other resources related to research misconduct.

Responsible Conduct of Research (RCR)

Discover resources for instruction in the responsible conduct of research.

Resources for NIH Staff

Information for NIH staff about research misconduct.

NIH also addresses the following topics as part of our efforts to foster a culture of research integrity:

  • Foreign Interference
  • Integrity and Confidentiality in Peer Review
  • Harassment, Discrimination, and Other Forms of Inappropriate Conduct
  • Fraud, Abuse, Waste and Misconduct


