
Otolaryngologist Shares Experience with Image Manipulation in Research and How to Prevent It

by Do-Yeon Cho, MD • June 12, 2022


It was 2005, and I was working at a South Korean government office in lieu of military service. The whole nation was shocked to learn that the researcher Hwang Woo-Suk, PhD, who claimed he had cloned human embryos using cells taken from different people to produce an embryonic stem cell line from each cloned embryo, had simply made it all up. It was also the first time I heard about scientific misconduct involving image manipulation, in his Science paper.


Fast forward 16 years, and I had become involved in the Committee of Responsible Conduct of Research at my institution, the University of Alabama at Birmingham. Since then, my eyes have been opened to the world of scientific fraud through image manipulation.

I was going through training and devouring multiple articles, especially ones from Elisabeth Bik, PhD, a former staff scientist at Stanford University in California. She rose to prominence for discerning image duplications across numerous scientific papers. (You can read all about her in Nature’s “Meet This Super-Spotter of Duplicated Images in Science Papers.”) I became one of her most ardent fans and continue to follow her work. Some check the daily Wordle puzzle—I check Dr. Bik’s Twitter posts. In fact, solving her Twitter quiz about spotting fraud is one of my favorite pastimes.

Identifying the Problem

As a board-certified otolaryngologist in two countries (South Korea and the United States) and a physician–scientist, I've reviewed articles for various otolaryngology journals for more than 10 years. I now feel like a detective, looking at each figure numerous times, eyes wide open, to spot any possible fraud. I don't use fancy software to detect duplicated images; I mainly change the contrast or rotate the figures to look for signs of manipulation.
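The manual checks described above (boosting contrast, rotating a panel in the mind's eye) can be approximated in a few lines of code. The sketch below is my hypothetical illustration, not a tool the article mentions: it normalizes two equal-sized image regions, which mimics a contrast stretch, and flags them as likely duplicates if one matches any 90-degree rotation or flip of the other.

```python
import numpy as np

def normalize(region):
    """Scale pixel values to zero mean, unit variance (a crude contrast boost)."""
    r = region.astype(float)
    std = r.std()
    return (r - r.mean()) / std if std > 0 else r - r.mean()

def duplicate_up_to_rotation(a, b, threshold=0.99):
    """Flag two equal-sized regions as likely duplicates if, after
    normalization, b matches any 90-degree rotation or flip of a."""
    na, nb = normalize(a), normalize(b)
    candidates = [np.rot90(na, k) for k in range(4)]
    candidates += [np.fliplr(na), np.flipud(na)]
    for cand in candidates:
        if cand.shape != nb.shape:
            continue  # non-square regions change shape under rotation
        corr = (cand * nb).mean()  # Pearson correlation of normalized regions
        if corr > threshold:
            return True
    return False
```

Because the regions are normalized first, a duplicated panel remains detectable even if its brightness or contrast was altered before reuse, which is exactly why a manual contrast change exposes these duplications.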

To my astonishment, here's what I've found: In the past six months, I've detected duplicated images in multiple submitted articles, equating to 40% of the articles (four of the 10) I reviewed. More than 90% of these manipulations were found in the immunohistochemistry staining figures.

When I noticed image duplication in an article for the first time, I initially attributed it to sloppy work. As I went through other submitted articles, however, I kept seeing duplications, one after another. In one instance, there were more than five manipulations in a single figure. It was a shock. I recognize that human error can play a role, especially when uploading multiple figures, but I had to pause and ask myself: Can it still be considered a mistake when a figure contains more than five manipulations?


Multiple articles have been corrected or withdrawn after errors or fraud were flagged on PubPeer, a scientific forum, or journal club, created in 2012 in which scientific studies are discussed after publication (J Assoc Inf Sci Technol. doi:10.1002/asi.24568). According to a 2015 Vox article, PubPeer has helped uncover scientific fraud and created a successful peer review model that could replace the hallowed, yet flawed, traditional process. A recent article in Scientometrics commented that postpublication peer review venues are a mechanism for correcting science (Scientometrics. 2020;124:1225-1239). To illustrate this point: When I type the word "sinusitis" into the PubPeer search engine, 32 articles pop up, starting with a paper published in January 2021. Three of those articles were the subject of an erratum (a short note in which authors or editors correct errors in the article), and three were retracted.

Further analysis of the PubPeer platform in a recently published article (Learn Publ. 2021;34:164-174) revealed that the journals producing the most editorial notices came mainly from biochemistry and medicine (more precisely, oncology). In addition, some cancer journals appeared to be the venues with the highest proportions of errata and retractions compared with other research areas.

Breaking Down Possible Causes

All of this information adds up to one crucial question—why? It’s discouraging that I have to change the contrast and rotate figures to detect fraud in 2022.

The authors of “Fostering Integrity in Research,” a book published by the Committee on Responsible Science at the National Academies of Sciences, Engineering, and Medicine (National Academies Press, April 11, 2017), contend that understanding why is crucial because this knowledge can inform the responses of the research enterprise and its stakeholders. They explore two different answers. First, suppose this type of misconduct is confined to specific people engaged in self-interested deception and shortcuts. In that case, the response might be limited to increased vigilance in detecting these “bad apples” and ending their research careers (the bad apple theory). Second, they posit that if other factors contribute to research misconduct and detrimental research practices (e.g., career and funding pressures, commercial conflicts of interest, institutional environments for research integrity, or incentive structures significantly shaped by funding availability), then other responses are required.

Here’s another consideration: We all live in a world of outcomes and productivity. This is especially true for the academic science community. Many researchers aren’t enthusiastic about publishing negative findings. According to a comment in the Q&A forum on Editage Insights, negative results may poorly represent research skills, which, in turn, could affect opportunities and reduce chances of recognition or grant awards. Therefore, instead of publishing all negative results, some scientists add positive results to the negative results when submitting the manuscript.

Alternatively, many investigators, including myself, prefer to pursue another way of proving our hypothesis rather than writing a manuscript that explains why the original hypothesis was wrong.

Few platforms exist for publishing negative results, as journals aren’t very open to them. This pattern likely originated from the inclination to publish innovative and novel results rather than articles that describe how and why the original hypothesis didn’t work.

Taken together, these factors may drive scientific misconduct, including the falsification and fabrication of data and figures to increase a study’s impact or statistical significance (PLoS ONE. 2013;8:e68397).

What Can Be Done?

To start, we could consider building a different platform exclusively to publish negative results. This approach would yield two advantages: It could help prevent other researchers in a similar field from making the same mistakes, and it could incentivize authors who spend a lot of time on their research projects.

The National Institutes of Health (NIH) plays an important role in providing guidance as well. Principal investigators applying for NIH research grants or mentored career development awards (K-grants) are now instructed to describe plans to address any weaknesses in the rigor of prior research within the research strategy section. NIH strives to exemplify and promote the highest level of scientific integrity, public accountability, and social responsibility in the conduct of science. For the past few years, the guidelines for the rigor and reproducibility portions of NIH grant applications have been updated almost annually.

Publishing high-quality research in a journal should involve more rigorous and transparent reporting guidelines for experimental design and statistical description, for example, providing the raw data in the public domain. The Journal Impact Factors (JIFs) of most journals, including those in our specialty, have been inflated recently by the addition of online publication citations to the calculations. Higher JIFs should be matched by stricter standards, with accurate, robust, and transparent descriptions of methods and outcomes.

Stricter standards alone might not be the answer. Eric Prager, PhD, and his colleagues argued in 2019 (Brain Behav. 2019;9:e01141) that compliance isn’t guaranteed—even with the most rigorous reporting guidelines and publication standards that include the precise application of the scientific method to ensure robust and unbiased experimental design, methodology, analysis, interpretation, and reporting of the results.

As we explore possible solutions, I believe that education and awareness of the critical elements in research design and analysis are essential to transparent and reproducible research. It’s also important to raise awareness of the available tools. For instance, NIH designs modules to train students or retrain scientists on the responsible conduct of research. We need to remind our scientists about the experimental design and how to employ correct data visualization techniques through training and education modules.

Technological tools can play a key role as well. We need more human pre-screeners, and artificial intelligence can also help detect signs of image manipulation in accepted manuscripts.
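Automated pre-screening often begins with something far simpler than deep learning: scanning a single figure for internal repeats. The toy sketch below is a hypothetical illustration, not any published screener; it tiles an image into fixed-size patches and reports pairs of patches whose pixel values are identical after rounding.

```python
import numpy as np

def find_repeated_patches(image, patch=8):
    """Naive duplicate screen: tile the image into patch x patch blocks
    and report pairs of blocks whose pixel values match after rounding."""
    h, w = image.shape
    seen = {}   # rounded patch contents -> first location found
    hits = []   # (first location, duplicate location) pairs
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            key = tuple(np.round(image[y:y + patch, x:x + patch], 3).ravel())
            if key in seen:
                hits.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return hits
```

Real screeners must be far more tolerant, surviving JPEG compression, resizing, and slight crops, but even this naive version catches verbatim copy-paste reuse within a figure.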

In sum, I don’t believe there’s one magic bullet. However, we can build a plan to significantly reduce scientific misconduct, including image manipulations. If our research community can adopt a multifaceted approach focused on prevention, awareness, and education today, we can start to see real results in the future.


Dr. Cho is an associate professor and the director of the Smell and Taste Clinic at the University of Alabama at Birmingham Heersink School of Medicine.


Filed Under: Features, Viewpoints | Tagged With: Ethics | Issue: June 2022


ENTtoday is a publication of The Triological Society.
