To really transform global research we need to agree on a common language.
On a recent trip to the Far East, still a world away from the West, I found myself perplexed, as usual, trying to read the signs. Of course, the hotels, airports, subways, and ATMs make it easy for us English-speakers, but it's not always so easy communicating with a taxi driver, merchant, or a man on the street. English is still far less ubiquitous than we expect in much of the Orient.
Wayne R. Kubick
And this also holds for clinical research. While English still reigns as the predominant language of research, it may not be so well understood by those who participate in the research process outside the English-speaking world—coordinators, data managers, clinical monitors, physicians, and patients. While many non-English speakers are eager to do clinical studies, and hungry to learn more of the methods, standards, and processes we've developed over the years in the West, they often need translations of documents, forms, and software before they can really make use of our knowledge. People in the East are especially curious about what's going on with the FDA, since the United States is still the most important market for pharmaceuticals. What's impressive—despite the effort of translation—is how open-minded and enthusiastic they are about pushing the envelope (rather than suspiciously risk averse).
While there's interest in how the old world is doing, there's a keen recognition of the opportunities offered by the steady migration of clinical studies from developed Western countries toward the developing world, as more and more trials are being conducted in China, Korea, India, and other high-growth economies. The idea of transformational initiatives in clinical research doesn't sound quite so radical to these newcomers—rather, the attitude seems to be "why wouldn't we try something different, especially since we're starting out with a clean slate?"
Meanwhile, in the Western world, we are investing in many exploratory initiatives intended to transform the research process. And yet, despite these ongoing efforts, the costs of research keep rising unchecked, while bringing a promising new drug compound to market still takes more than a decade and over a billion dollars. At a recent FDA public hearing, Doug Peddicord, Executive Director of ACRO, questioned whether the pharmaceutical industry could really tolerate too many more transformational initiatives: "Every effort at innovation that the FDA supports, directly or through the funding of public-private collaborations, should be measured against three objectives: does it make the drug development process faster, cheaper, or more productive."
When we talk about transformational initiatives, we think of things such as incorporating more regulatory science, predictive analytics, or adaptive designs into protocols. Or we look at assessing the viability of protocol eligibility criteria against patient databases to ensure we can find patients and shorten enrollment periods. Or we look at productivity improvements through technologies or remote monitoring as companies try to do more with fewer people and try to control escalating trial costs. Many other initiatives are based on secondary use of healthcare and administrative claims data—for exploration of safety signals or epidemiological research—or even to feed clinical research data collection. These ideas have been explored for many years, and can hardly be considered truly transformational anymore, especially since so many have struggled to achieve traction or, when they do, to deliver a significant and repeatable return on investment.
Meanwhile, as we continue to try to make healthcare information more electronic and exchangeable, we still can't quite seem to make it very interoperable between patient care and research. This is due to more than the challenge of integrating technologies—the primary problem is really one of semantics. While healthcare and research both want to improve patient treatment, we just can't seem to speak the same language.
There are many excuses for this. The typical way that researchers devise protocols is based on identifying what data they want to observe and which endpoints to measure, yet they don't define these in the same languages as healthcare IT. And it's not just a matter of translation—it's also a question of data reliability. Epidemiologists are well aware of the limitations involved in secondary uses of healthcare information, which is grounded in the assignment of billable diagnoses for reimbursement purposes, a consequence of the United States' fee-for-service healthcare system. As a result, much of what's in an electronic healthcare record is more likely to be driven by reimbursement than by data quality or accuracy. Then there are the coding system variations, such as our use of MedDRA to code adverse events for clinical studies, even though MedDRA is never used in healthcare to describe problems or diagnoses.
And yet even data that should flow smoothly between systems—such as laboratory results and vital signs—tends to get stuck along the way. Healthcare systems, for instance, typically use LOINC codes to represent lab tests and their attributes—which research systems prefer to spell out as a set of variables in a dataset instead.
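To make the gap concrete, here is a minimal sketch of what bridging it could look like: a healthcare lab observation carrying a LOINC code is translated into an SDTM-style LB (laboratory) record, where research systems expect the test spelled out across dataset variables. The mapping table and record layout are illustrative assumptions, not a complete SDTM implementation.

```python
# Sketch: carrying a LOINC-coded lab observation into an SDTM-style
# LB record. The mapping table below is a hypothetical fragment; a real
# system would maintain a curated, much larger crosswalk.

LOINC_TO_SDTM = {
    "2345-7": ("GLUC", "Glucose"),    # Glucose [Mass/volume] in Serum or Plasma
    "718-7":  ("HGB",  "Hemoglobin"), # Hemoglobin [Mass/volume] in Blood
}

def to_lb_record(loinc_code, value, units):
    """Build a minimal SDTM-like LB row from a LOINC-coded observation."""
    testcd, test = LOINC_TO_SDTM[loinc_code]
    return {
        "LBTESTCD": testcd,     # short test code used in research datasets
        "LBTEST": test,         # test name spelled out as a variable value
        "LBORRES": value,       # result as originally received
        "LBORRESU": units,      # original units
        "LBLOINC": loinc_code,  # keeping the LOINC code preserves the link
    }

record = to_lb_record("2345-7", "95", "mg/dL")
```

The point of retaining the original code alongside the research variables is that the hand-off becomes reversible: either world can recover the other's representation without guesswork.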
The disconnect that exists between basic science, patient care, and clinical research has been aptly described by Chris Chute as the "chasm of semantic despair." Chris is referring primarily to the botched hand-offs in translational medicine, but it applies equally to the gap between healthcare and research information systems.
Thus, in healthcare, a lot can be lost in the flow of information between patient, physician, institution, reimburser, and researcher. In such information exchanges, the parties may think they're talking about the same thing, but often they're not. It's like the child's game of telephone—also known as "Chinese whispers"—something always gets lost along the way. It's particularly perplexing when we use the same terms with variable meanings depending on context, dependencies, or parameters. The problem is that we can't always tell when we're talking about concepts that are exactly the same, or merely similar, or not alike at all despite appearances to the contrary. This is sometimes caused by conflicting assumptions, but more often it's simply a result of using terms with imprecise meanings that have not been explicitly and unambiguously defined. And rather than spend the time to look for common concepts between these parallel but conflicting worlds, it's easier for a scientist to just describe in plain English what he wants just this time. Over and over again.
Asia makes one appreciate the need for coded data, which makes translation so much easier. Even when we do use common terminologies in research, we feel a need to create our own lexicon rather than build on systems already in place. For example, why don't we use LOINC more in research? Now that the National Library of Medicine is defining quality measures in LOINC to support the assessment of meaningful use related to implementation of EHR systems, including questions like those in the FACIT quality-of-life questionnaires for chronic disease sufferers, isn't there an opportunity to represent research questions there as well? Especially when some of these quality measures look quite similar to efficacy measurements for clinical protocols, and could become fundamental objects of comparison for comparative effectiveness research—if we only used the same language.
Moreover, as research data standards for disease areas are developed, they will be specifying clinical data elements in trials that will often parallel or overlap quality measures and value sets in healthcare.
Of course, learning to speak and read in other languages is difficult. But the lesson is clear—we need to clearly define the clinical observations and endpoints we use during research and equate those with any observations used in healthcare, and represent them, wherever possible, as coded data elements. The use of coded data not only ensures that we can pool and compare data elements for multiple primary and secondary purposes, but it also makes it possible to more easily internationalize our data. It's not practical to translate endless documents into multiple languages over and over for each study, but it's entirely possible to represent the definitions and descriptions of a coded data element in other languages and character sets, so the meaning of those codes can be instantly rendered in the language of the reader. This is a classic "do once, use many times" scenario that enables the universality of language used in healthcare and medical research.
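The "do once, use many times" idea can be sketched very simply: a coded data element is translated once, and the code is then rendered in whatever language the reader prefers. The element code and the translations below are illustrative assumptions.

```python
# Sketch: one coded data element, translated once, rendered many times.
# "SYSBP" and its labels are hypothetical illustrations.

ELEMENT_LABELS = {
    "SYSBP": {  # systolic blood pressure
        "en": "Systolic Blood Pressure",
        "ko": "수축기 혈압",
        "zh": "收缩压",
    },
}

def render(code, lang, fallback="en"):
    """Return the label for a coded element in the reader's language,
    falling back to English when no translation exists."""
    labels = ELEMENT_LABELS[code]
    return labels.get(lang, labels[fallback])

print(render("SYSBP", "ko"))  # 수축기 혈압
```

Because only the short label table is translated, not every study document, the cost of supporting a new language stays flat no matter how many studies reuse the element.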
And this is a way to sensibly expand clinical research to China, Korea, and other developing geographic areas that will bring new patients to research studies in the coming years.
Coincidentally, this also helps cross the chasm between the world of healthcare and that of research. Rather than describe the research concepts, measures, and endpoints we wish to observe merely in prose, we need to record these as coded, structured data. As much as possible we should try to use the same terminologies—or at least have consistent mapping between them. Of course we have to deal with the vast universe of vocabularies already in place, but the relationships between these concepts can be represented in systems, such as through the power of the semantic web. But we need to stop creating new languages as we go along, and learn to build more on what's already in place.
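A minimal sketch of what such cross-terminology mapping might look like, using semantic-web-style triples: a term from one vocabulary is declared equivalent to a concept in another, and equivalences can then be looked up mechanically. The identifiers below (a MedDRA-style term and a SNOMED CT-style concept) are hypothetical placeholders; "skos:exactMatch" is used in its standard SKOS sense, and a real system would use a proper triple store rather than a Python set.

```python
# Sketch: terminology relationships as (subject, predicate, object) triples.
# The identifiers are illustrative placeholders, not validated codes.

triples = {
    ("meddra:10019211", "rdfs:label", "Headache"),
    ("meddra:10019211", "skos:exactMatch", "snomed:25064002"),
}

def matches(term, store):
    """Find every concept the given term is declared equivalent to."""
    return sorted(o for (s, p, o) in store
                  if s == term and p == "skos:exactMatch")
```

Once the relationships are recorded this way, "consistent mapping" stops being a per-study translation exercise and becomes a shared, queryable asset.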
This is an essential foundational requirement of a learning healthcare system that can operate as a true ecosystem between scientific research, clinical research, and patient care—from bench to bedside and back. So, it's about time we all started getting on the same page.
Wayne R. Kubick is Chief Technology Officer for the Clinical Data Interchange Standards Consortium (CDISC). He resides near Chicago, IL, and can be reached at firstname.lastname@example.org.