A new survey suggests that misconduct in research—or fraud—is not going away, and the issue still requires strong and concerted
action from all parties.
The British Medical Journal
(BMJ) received nearly 2,800 responses from clinicians and academics in the United Kingdom. Around 13% of the sample had witnessed
colleagues intentionally altering or fabricating data during their research or for the purposes of publication, while 6% were
aware of possible research misconduct at their institution that had not been properly investigated.
"While our survey can't provide a true estimate of how much research misconduct there is in the United Kingdom, it does show
that there is a substantial number of cases and that UK institutions are failing to investigate adequately, if at all," noted
Fiona Godlee, MD, BMJ Editor in Chief. "The BMJ has been told of junior academics being advised to keep concerns to themselves to protect their careers, being bullied into
not publishing their findings, or having their contracts terminated when they spoke out."
She added that the findings highlight the need for better systems to deter, detect, and investigate research misconduct. Some
solutions are already being tested, and she hopes the survey will lead to action from the research community. "Doing nothing
is not an option," she said.
The full results were presented at a meeting held in London on January 12, hosted by the BMJ and the Committee on Publication Ethics (COPE), which is seeking consensus among institutions and funders on how to tackle
misconduct. In the survey, 9,036 e-mails were delivered and 2,782 (31%) responses were received. Of the respondents,
29% regarded themselves as clinicians, 42% as academics, and 29% as both.
"This survey chimes with our experience from COPE where we see many cases of institutions not cooperating with journals and
failing to investigate research misconduct properly," stated COPE Chair, Elizabeth Wager, PhD.
According to the authors, some National Health Service bodies blocked access to the survey website. They also acknowledged that
some participants were not involved in research and had no firsthand experience of it. Furthermore, research misconduct
is a sensitive subject, and there may have been concerns over confidentiality.
On January 3, the BMJ also published a special issue about missing data in clinical research, arguing that this problem distorts the scientific
record to the extent that clinical decisions cannot be based on the best evidence. Several comprehensive papers examine the
extent, causes, and consequences of unpublished evidence, and they confirm that a large proportion of evidence from human
trials is unreported, and much of what is reported is reported inadequately.
There is a "culture of haphazard publication and incomplete data disclosure," according to an editorial by Richard Lehman,
MD, from the University of Oxford, UK and Elizabeth Loder, MD, BMJ Clinical Epidemiology Editor. They call for more robust regulation and full access to raw trial data to allow better understanding
of the benefits and harms of many kinds of treatment.
"When the word 'mandatory' turns out to mandate so little, the need for stronger mechanisms of enforcement becomes very clear,"
they wrote, adding that concealment of data is a serious ethical breach and that clinical researchers who fail to disclose
data should be subject to disciplinary action by professional organizations. "The evidence we publish shows that the current
situation is a disservice to research participants, patients, health systems, and the whole endeavour of clinical medicine."
Other studies published in the special issue highlight the difficulties researchers face when they try to assess the true
harms and benefits of common interventions. An article by Beth Hart, clinical research fellow and medical student at the Department
of Clinical Pharmacy, Institute for Health Policy Studies, University of California, San Francisco, and colleagues, found
that including unpublished data in published meta-analyses of drug trials often changed their results. She argues that access
to full trial data is needed to allow drugs to be independently assessed.
Further studies show poor adherence to requirements for compulsory trial registration and timely sharing of results. Joseph
S. Ross, Assistant Professor of Medicine, Section of General Internal Medicine, Department of Medicine, Yale University School
of Medicine, New Haven, CT, and colleagues showed that fewer than half of US National Institutes of Health funded trials are
published in a peer-reviewed journal within 30 months of completion. Andrew Prayle, NIHR doctoral research fellow, Queen's
Medical Centre, Nottingham, UK, and colleagues pointed out that only 22% of trials subject to mandatory reporting had results
available within one year of completion.
A podcast on the issues raised by these studies is available at: http://www.bmj.com/multimedia. — Philip Ward