Clearing the Way for the Internet of Things

Applied Clinical Trials

December 1, 2013
Volume 22
Issue 12

Clinical research processes need to be simplified before new technology can be properly utilized.

I recently acquired one of those neat new fitness bands that keeps track of my daily activity and sleep patterns, warning me when I haven't run far enough this week and praising me when I have. One of the newer versions also tracks my food and drink intake, will buzz me when I'm spending too much time as a couch potato, and even keeps track of my mood (grumpy meter alert). It doesn't directly link with my GPS watch and heart rate monitor—yet—but I'm sure the upgrade's coming for a nominal additional cost. All this real-time data about me is safely (allegedly) stored in the cloud where I can view it on my phone, tablet, or laptop.

My neighbor across the street has recently wired his house (the "connected home") with learning thermostats programmable by phone, and with smoke detectors that can send alerts—either for smoke or when their batteries need replacing, instead of that infernal beeping. He may next buy one of those refrigerators that can notify Peapod when you're running out of milk. Time recently described the Motorola pill that is swallowed and emits a password that can be read by a computer. This is not sci-fi—an ABI Research study estimates that 10 billion wireless devices are already connecting to the Internet today, with many more to come. This is commonly called the "Internet of things," a term describing the wired world where our devices track and monitor themselves, keep us informed, and cry for help when they need replacing, refilling, or repairing.

Some 15 years ago, I wrote a paper in the DIA Journal titled "The Elegant Machine," which discussed how to introduce the concept of elegance into what had become a very busy, overly complex, and ridiculously inefficient clinical data management process. The guiding principles of my machine—plan before you do, standardize data, push data capture upstream using best-fit tools, provide online access to information, and let technology drive the process—are surprisingly still relevant and, in a broad sense, not incompatible with the Internet of things. But many of the shortcomings of our research processes I criticized then are, alas, still with us.

In 1998, mobile telephones were clunky and far from ubiquitous. Data connectivity was primarily by telephone modem. Fifteen years later, smartphones can be found in the farthest corners of the world, and everyone is connected through the Internet. Yet, in clinical research, we're still generating queries, overcompensating with monitoring and documentation, and manipulating and transforming data over and over again as we move it through the research loop.

Our research machine is much closer to Rube Goldberg than elegance. For you youngsters who haven't heard of Rube Goldberg cartoons, try Wikipedia. Webster's dictionary defines Rube Goldberg as an adjective meaning "accomplishing something simple through complex means." Yep, that sounds exactly like what we do with many of our research processes, though hardly with the amusing creativity of a Rube Goldberg machine.

Now, like others my age, I regularly look for opportunities to simplify my life. I steer clear of garage and estate sales, try to avoid paper and ruthlessly purge my e-mail inbox. I even try to spend time now and then digging through drawers, closets, and garages to fill a trash bag or two. While I'm not quite so thrifty about my wine collection, I at least try to avoid creating new collections of other things I'll eventually have to shed.

But in the world of research, we embrace clutter. SOPs enforce complex documentation requirements with red-tape workflows, measured in volume rather than quality and necessity. Take our approach to computer system validation. Validation processes are typically medieval—analysts and testers generate reams of printed paper with wet signatures and paper-clipped attachments of screenshots and reports. This mountain of detailed documentation is meticulously scrutinized by the regulatory and quality assurance police, who painstakingly comb every trivial typo for non-conforming corrections or missing dates. If the documentation trail can't be measured in feet, it's generally assumed to be inadequate.

And what exactly does this accomplish? The basic premise of validation—documented evidence that a computer system meets requirements—is inherently simple. We need only describe our key needs and provide just enough evidence that the system has been shown to meet them. The rest of the mess is created by us. Surely we would all prosper if we simplified—describing the most crucial functions in plain, simple terms, and providing spare, clear evidence that we actually tried the functions, perhaps by using the tracking tools that computer systems already offer. Some overworked auditors might actually thank us for making the review so much easier.
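To make that concrete, here is a minimal sketch, in Python, of what spare, machine-generated evidence could look like: each key requirement becomes an executable check, and the results are written to a timestamped, machine-readable log rather than printed and wet-signed. The requirement IDs, the checks, and the system interface they exercise are all hypothetical illustrations, not a prescribed implementation.

    import json
    from datetime import datetime, timezone

    def check_audit_trail(system):
        """REQ-001 (hypothetical): every data change is captured in the audit trail."""
        system.update_record("subj-001", field="weight", value=72.5)
        entry = system.audit_trail()[-1]
        return entry["record"] == "subj-001" and entry["field"] == "weight"

    def check_access_control(system):
        """REQ-002 (hypothetical): unauthorized users cannot modify data."""
        return not system.can_modify(user="guest")

    CHECKS = {"REQ-001": check_audit_trail, "REQ-002": check_access_control}

    def run_validation(system, log_path="validation_evidence.jsonl"):
        # Append one machine-readable evidence record per requirement,
        # in place of printed scripts, screenshots, and wet signatures.
        # "system" is an assumed interface standing in for the application
        # under validation.
        with open(log_path, "a") as log:
            for req_id, check in CHECKS.items():
                result = {
                    "requirement": req_id,
                    "description": check.__doc__,
                    "passed": bool(check(system)),
                    "timestamp": datetime.now(timezone.utc).isoformat(),
                }
                log.write(json.dumps(result) + "\n")

The particular format matters less than the principle: the evidence is generated by exercising the system itself, and an auditor can review it in minutes rather than measure it in feet.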

The ingrained habit of excessive validation expectations—a practice ironically sustained by the job security it extends to both IT staff and auditors—is further encumbered by the timeworn tradition of sponsors auditing each technology vendor's validation. For some reason, each company feels it has to audit each technology vendor separately, in person, sifting through those documentation haystacks over and over again. The process is often punitive—a small cottage industry of contract auditors for hire feel they can only earn their pay by finding deficiencies, often trivial or irrelevant, and demanding immediate corrective action.

This costly, time-consuming, and unnecessary practice persists in an age when pharmaceutical sponsors are working together to solve other common pre-competitive problems in partnerships like TransCelerate Biopharma and the Innovative Medicines Initiative. TransCelerate Biopharma, in addition to helping define therapeutic area data standards with CFAST, is already supporting multiple projects to share information about the qualification of sites.

It's not a stretch to imagine a similar industry initiative subscribing to an independent registry of audit information about shared suppliers, such as technology vendors, CROs, and labs. We simplify when we do something once and use it many times.
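To sketch the idea (with entirely hypothetical record fields and interface), such a registry need only let one party publish an audit once and let every subscriber consult the same record instead of repeating the visit:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class AuditRecord:
        supplier: str          # e.g., a technology vendor, CRO, or lab
        audit_date: date
        scope: str
        outcome: str
        auditor: str

    class AuditRegistry:
        def __init__(self):
            self._records = []

        def publish(self, record):
            # One sponsor (or an independent auditor) files the audit once.
            self._records.append(record)

        def lookup(self, supplier):
            # Any subscriber reuses the record instead of re-auditing in person.
            return [r for r in self._records if r.supplier == supplier]

    registry = AuditRegistry()
    registry.publish(AuditRecord(
        supplier="Vendor A",
        audit_date=date(2013, 6, 1),
        scope="computer system validation",
        outcome="no critical findings",
        auditor="Independent QA Co.",
    ))
    print(registry.lookup("Vendor A"))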

You can't really blame the regulators anymore either, even though perceived regulatory risk inhibits so many companies from trying anything even remotely different. FDA has opened the door to improved efficiency with guidance documents endorsing risk-based monitoring and the use of electronic source documents. And regulators are working more closely with industry to address many of our long-standing problems. Some sacred cows—such as the belief that CRAs must review every data point on-site—are not actually prescribed in the predicate rules. Much of the bureaucracy of research has simply evolved over time and become firmly embedded in our culture.

Elsewhere, global demand for increased data transparency is making research data more available, yet their usefulness is still severely limited. It's not just that published data often do not conform to public industry standards, which makes them difficult to fully understand, much less roll up and compare from study to study. In many cases, the published data lack sufficient context for thorough understanding—the ability to view the data through the lens of the study protocol and analysis plan. Use of a standardized, structured protocol definition—which should accompany each posting of study data—would be an enormous leap forward.
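As an illustration only (the real schema would need to come from an industry standards effort, and these field names and identifiers are hypothetical), a minimal structured protocol definition accompanying a posting of study data might look like this:

    import json
    from dataclasses import dataclass, field, asdict

    @dataclass
    class ProtocolDefinition:
        study_id: str
        title: str
        phase: str
        indication: str
        primary_endpoints: list = field(default_factory=list)
        arms: list = field(default_factory=list)
        analysis_plan_ref: str = ""

    protocol = ProtocolDefinition(
        study_id="XYZ-123",   # hypothetical study and drug names throughout
        title="A Phase III Study of Drug X in Condition Y",
        phase="III",
        indication="Condition Y",
        primary_endpoints=["Change in symptom score at week 12"],
        arms=["Drug X 10 mg", "Placebo"],
        analysis_plan_ref="SAP v2.0",
    )

    # Serialize alongside the posted study data so readers can view the
    # results through the lens of the protocol and analysis plan.
    with open("protocol_definition.json", "w") as f:
        json.dump(asdict(protocol), f, indent=2)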

But just making computer-readable protocol documents available would help as well. In the same way that Google Images indexes graphics found on the web with keywords—by looking at the textual content of the page where the image was found, as well as the page title, filename, and so on—we already have technology that could extract contextual information from protocol documents, yielding metadata to associate with these postings of data for shared use by the research community. Instead of waiting for a mandate to dig back and unearth such artifacts later, the industry can simplify by making this a habit for every study.
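To make the analogy concrete, here is a toy sketch of that kind of extraction, using simple pattern matching to pull a study phase and design terms out of protocol text. The patterns are hypothetical and far cruder than real text-mining technology, but they show how a computer-readable protocol yields searchable metadata almost for free:

    import re

    # Hypothetical, deliberately simple patterns for illustration.
    PHASE_PATTERN = re.compile(r"\bphase\s+(I{1,3}V?|[1-4])\b", re.IGNORECASE)
    DESIGN_TERMS = ["randomized", "double-blind", "placebo-controlled",
                    "open-label", "crossover"]

    def extract_protocol_metadata(text):
        metadata = {}
        phase = PHASE_PATTERN.search(text)
        if phase:
            metadata["phase"] = phase.group(1).upper()
        metadata["design_terms"] = [t for t in DESIGN_TERMS
                                    if t in text.lower()]
        return metadata

    sample = ("This is a Phase III, randomized, double-blind, "
              "placebo-controlled study of Drug X.")
    print(extract_protocol_metadata(sample))
    # {'phase': 'III', 'design_terms': ['randomized', 'double-blind',
    #  'placebo-controlled']}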

Which brings us back to the Internet of things. We should already be using direct monitoring devices on study subjects. We have bottles that can track when pills have been dispensed. Companies are developing lab-on-a-chip devices that patients can use directly and that report results back to a database. Not to mention tiny swallowable cameras that stream video from inside the body, much like the Motorola password pill.
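As a sketch of how simple the reporting side could be (the event fields and local storage here are hypothetical, and a real system would transmit over an authenticated, encrypted channel), a connected pill bottle's report might amount to little more than this:

    import json
    from datetime import datetime, timezone

    def record_device_event(device_id, subject_id, event_type, value=None,
                            store_path="device_events.jsonl"):
        # Capture one device reading as a timestamped, machine-readable
        # record; a JSON-lines file stands in for the study database.
        event = {
            "device_id": device_id,
            "subject_id": subject_id,
            "event_type": event_type,   # e.g., "dose_dispensed"
            "value": value,
            "captured_at": datetime.now(timezone.utc).isoformat(),
        }
        with open(store_path, "a") as store:
            store.write(json.dumps(event) + "\n")
        return event

    # A bottle cap opening becomes a time-stamped dosing record, with no
    # transcription and no source document to monitor after the fact.
    record_device_event("bottle-042", "subj-001", "dose_dispensed")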

But we're never going to be able to use the new technologies of everyday life in research with our feet shod in the cement of antiquated processes. So it's time to get back to basics. We need to focus on the core elements of the predicate rules: protecting patient safety, maintaining good science, and ensuring the traceability and reproducibility of results.

Of course, people say it's too risky, too difficult to take the first step, and too overwhelming a problem to address anyway. Which brings up the familiar anecdote of the little boy, painstakingly tossing beached starfish one by one back into the sea, who is warned that with so many thousands in the sand he can't possibly make a difference. "Well," he says, tossing another, "I made a difference to that one."

Maybe some of us can make a difference on the next study we start. Simplifying our lives in research can make a huge difference in providing better treatments and cures to patients, even if the differences come only one at a time.

Wayne R. Kubick is Chief Technology Officer for the Clinical Data Interchange Standards Consortium (CDISC). He resides near Chicago, IL, and can be reached at [email protected].
