What is ClinicalTrials.gov's Destiny?

May 8, 2012

Applied Clinical Trials

Somewhere along the line between the FDAAA legislation in 2007, which required that clinical trial results be reported to the NIH's ClinicalTrials.gov website, and today, expectations of what the website is supposed to deliver have completely diverged. Of course, those expectations are driven by the interests of whichever stakeholder is involved. But unless you dig deeper, all that's left is the headline and first paragraph. And those words usually aren't supportive of the clinical trials industry as a whole.

Even our own website featured this headline from a blog on the recent JAMA analysis of ClinicalTrials.gov: "Research Shows Most Clinical Trials are Too Small and Missing Data."

Here are some points taken from the article on consumer.healthday.com.

  • 62 percent of the trials from 2007-2010 were small, with 100 or fewer participants. Only 4 percent had more than 1,000 participants.

  • Ideally, medical studies randomly assign participants to groups, with one group getting a treatment and the other getting another treatment or an inactive placebo. However, 65 percent of cancer studies didn't randomize their participants, compared to 26 percent of cardiovascular studies.

  • “The lack of correct data slows down researchers who are trying to analyze a swath of research for studies known as meta-analyses that help doctors figure out guidelines,” explained Kay Dickersin, director of the Center for Clinical Trials at Johns Hopkins Bloomberg School of Public Health in Baltimore and co-author of an editorial accompanying the report.

Point 1: Of the 62% of trials that were “small,” how many were Phase I, how many were Phase II, and how many were for rare diseases? All of those factors would affect the enrollment size of a trial, as well as the overall percentage of participants in clinical trials. Does the general public understand that “small” is not necessarily a negative in clinical trials?

Point 2: Randomized controlled trials (RCTs) are still considered the gold standard of research, and until there are fundamental changes to how RCTs are accepted by the medical and regulatory community, they will continue. Most cancer trials do not randomize patients because of the nature of the disease; that should be pointed out rather than simply comparing them to CV studies. The only note made is that oncology studies perhaps “should be done differently.” This is a consumer article. If the word cancer is mentioned in a negative light, that is a red flag, and the difference should be explained.

Point 3: The final comment from Dickersin may be true: “researchers who are trying to analyze a swath of research for studies known as meta-analyses that help doctors figure out guidelines.” But for a database that was initially developed to “provides [sic] patients, family members, health care professionals, and other members of the public easy access to information on clinical studies on a wide range of diseases and conditions” (see the Fact Sheet on ClinicalTrials.gov), I am personally at a loss as to why researchers would use ClinicalTrials.gov as a way to develop disease-state guidelines. What I heard at the DIA EuroMeeting this past March from Sean Tunis, President and CEO of the Center for Medical Technology Policy, is that from the doctors’ point of view, clinical trial outcomes are not used in clinical practice and are not clinically important.

In the March 3, 2011 issue of the NEJM, the authors of “The ClinicalTrials.gov Results Database—Update and Key Issues” (one of whom is Nicholas Ide, the architect of the ClinicalTrials.gov database) offered their update. In the article, they note that a growing number of researchers were using ClinicalTrials.gov to analyze trends in the globalization of the clinical research enterprise, selective publication of study results, and correspondence between registered and published outcome measures.

But the authors also caution about the limitations of ClinicalTrials.gov: trials that aren’t required to be registered; records missing information because of imprecise entries and human error; and evolving registry and registration policies worldwide.

What we hear from sponsors is that complying with ClinicalTrials.gov, as well as with the multitude of registry and registration requirements from regulatory bodies in virtually every country in the world, is no easy task. In addition, it’s not as if each sponsor has one database where all the information for each of these registries is held and can easily be downloaded into the ClinicalTrials.gov format. ClinicalTrials.gov was designed as a separate system in which each sponsor must register and then find, collect, and input information. At some point, there is going to be human error. At some point, it is going to become extremely burdensome for sponsors to track and handle all of the data-input requirements for multi-country registries. At some point, ClinicalTrials.gov, just like any software system, is going to need to be upgraded, but to what, and at what cost?

Another aspect is the usability of ClinicalTrials.gov and its ability to serve the public, one of its original intents. Transparency is good. But transparency requires understanding. Most of the information collected in clinical trials is worded medically and scientifically, and the people whose job is to input that information are not medical-to-consumer writers. How easy is it to translate medical information into easy-to-read, grade 8-level material? There is a plethora of websites that offer consumer health information, but if the informed consent forms used in clinical trials come under fire for requiring too high a reading level, where does that leave ClinicalTrials.gov as a transparency tool?

I could continue down this path for a while, but the bottom line is that ClinicalTrials.gov is definitely necessary. What others want it to be, however, is not necessarily what was intended. And if it is going to be the de facto collection point for all clinical trial information (results, design, participants, consumer-friendly descriptions, a searchable interface, a meta-analysis tool for physicians and payers), then some hefty dollars and resources are going to need to go its way. Heavy public criticism is only going to alienate the public even further from the truth of clinical trials.
