Personalized Medicine, Data, and Me

October 1, 2013
Wayne Kubick

Applied Clinical Trials

Volume 22, Issue 10

Is it time to realize the extraordinary promise and vision of personalized medicine?

Okay, I now know, as a former girlfriend once contended, that I am indeed part Neanderthal—but the Neanderthal 2.6% of my DNA is actually 0.1% less than that of other males like me—hah! I am also pleased to report that my DNA doesn't have variants associated with an increased risk for some 50 diseases, although I do have a marginal increase for five others, none of which, fortunately, seem quite that bad. I have a slightly increased sensitivity to Warfarin, and am a slow metabolizer of caffeine, which keeps me awake some nights. Despite the evidence on my hairbrush, I am unlikely to lose my hair anytime soon, am more of a sprinter than a distance runner, and am not bothered by bitter-tasting foods. And I definitely do not have a tendency toward alcohol flush—no matter what my wife insists. I know all this because my 23andMe genetic profile tells me so.

For those who haven't checked it out yet, 23andMe offers genetic insights for the common man over the web. While 23andMe hasn't fully sequenced my DNA, it's amazing how much it can tell me about my genetic profile from the half-teaspoon of saliva that I sent along with my $100 payment. The reports on my health risks, drug response, inherited conditions, traits, and ancestry are pretty comprehensive and often engrossing, but a lot of the results also depend on the answers I provided in a series of questionnaires I filled out to help them flesh out my genetic profile. 23andMe is also quite a sensitive and empathic confidante; it will repeatedly confirm if I really, really want to know certain results—such as a higher than normal susceptibility to some serious diseases—before it breaks the possibly bad news.

Some of this information, when shared with my physician, might directly help open the door into the realm of personalized medicine. You remember personalized medicine, which was supposed to become a rallying point for teeming, venture-funded bioinformatics start-ups that emerged just when the Human Genome Project was announcing completion of its draft human genome at the beginning of this century.

Now, more than a dozen years later, we're still waiting for the breakthroughs. Most of those promising new bioinformatics companies from 2000 are no longer with us, while the rest of us are still prescribed drugs by traditional methods, often a matter of experience-based trial and error. There have been notable inroads, particularly in oncology, such as the HercepTest, which detects over-expression of the HER2 protein to identify which breast cancer patients will most benefit from trastuzumab. But in most cases medicine has not yet gotten nearly as personal as we patients might like.

Meanwhile, drug research is still burdened with tales of once promising new drugs that fell by the wayside during clinical trials even when they showed a profound effect in one subset of patients but had no discernible effect or, worse, exhibited serious safety concerns in too many others. In my case, my 23andMe profile predicts substantially increased odds of my experiencing myopathy on statin therapy, which might have been useful information for my physician when he prescribed a statin to treat my elevated LDL cholesterol a few years ago—a statin I ultimately abandoned, along with the French fries, after a month's worth of annoying muscle aches.

If only we could explain why certain drugs work so well for some patients and not others, and create simple, predictive diagnostic tests (also known as theranostics) that can identify the positive responders while screening out those most vulnerable to their adverse events—perhaps we could reclaim from the scrap heap of discarded drugs a few that showed extreme promise for some while simultaneously introducing too much hazard for too many others. If so, maybe we'd learn that the next set of wonder drugs already exists—if we only knew how to prescribe and use them properly.

Given that 23andMe is already giving me so much insight into my risk factors and likely response to certain drugs for my $100 and spit investment, why haven't we progressed further on the personalized medicine frontier by now?

Well, to reinvigorate such failed drugs we need data—lots of data. We need to pool diverse sources and types of data together in a consistent manner so we can probe the data systematically to help identify more of those theranostic biomarkers that may inform physicians as to which type of drug is most likely to have the best benefit/risk ratio. We need to build much more understanding of the genetic profiles of patients and seek out genetic markers that may predict their response to various types of drugs to inform the physician's treatment plan with predictive evidence.
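To make the idea concrete, the kind of systematic probing described above can be sketched in a few lines. This is purely illustrative: the variant name, the patient records, and the notion of a binary "responded" flag are all hypothetical stand-ins for what would, in practice, be pooled from standardized study datasets.

```python
from collections import defaultdict

# Hypothetical pooled records merged from several standardized studies:
# each entry is (genotype_group, responded_to_drug).
pooled = [
    ("rsX_carrier", True), ("rsX_carrier", True), ("rsX_carrier", True),
    ("rsX_carrier", False),
    ("non_carrier", False), ("non_carrier", False), ("non_carrier", True),
    ("non_carrier", False),
]

def response_rates(records):
    """Compute the drug-response rate per genotype group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [responders, total]
    for group, responded in records:
        counts[group][1] += 1
        if responded:
            counts[group][0] += 1
    return {group: r / n for group, (r, n) in counts.items()}

rates = response_rates(pooled)
# A large gap between groups (here 0.75 vs. 0.25) would flag the variant
# as a candidate theranostic marker worth rigorous follow-up study.
```

Real biomarker discovery involves far more—confounders, multiplicity adjustment, validation cohorts—but the core question is exactly this simple comparison, which is only answerable once diverse data sources are pooled in a consistent, standardized form.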

As I've noted in prior columns, the data transparency movement may help make clinical study data more widely available, but the utility of such data is often hampered by insufficient data standardization while availability is obstructed by critical concerns such as data privacy. Including personal genetic info among such data is seen as even more threatening—is this something my insurer could use against me? And yet it seems that many of us customers willingly grant 23andMe consent to allow our very private data to be used for multiple research purposes. So maybe it's less of a barrier than commonly perceived—as long as we get in the habit of systematically securing the proper kind of informed consent for research that enables reuse of the data for expanded research purposes—and have adequate assurances that the data won't be misused or confidentiality breached.

Science always marches on, of course, and research is ongoing in many areas. But what does it take to finally achieve what Malcolm Gladwell calls the Tipping Point, when a formerly emerging trend, idea, or behavior finally turns the corner and develops enough sudden momentum to spread like wildfire? Think of a roller coaster slowly crawling over the summit of its first hill before accelerating into the next exhilarating moments of sheer terror. One of the criteria for a successful tip is stickiness, and what is stickier than finding ways to treat illness effectively for affected patients with minimal side effects? It's all in the magic of conjuring the right potion—if we only knew how.

But maybe the tide is finally turning. As I've noted previously, there's much more activity around the world on translational medicine looking at combining such diverse data to fuel new research activities. And there are prominent examples such as the innovative I-SPY2 trial, a large-scale adaptive trial that is exploring multiple interventions for breast cancer patients and collecting a broad range of clinical trials data including genomics, proteomics, pathology, and imaging data to help identify biomarkers associated with the most beneficial treatment combinations.

And we're finally seeing more traction in the use of data standards, with expanded interest among European and Japanese regulators, who traditionally did not require submission of data to accompany product-licensing applications. We're now seeing a more consistent effort at FDA to capitalize on CDISC standards, partly as a result of the FDASIA and PDUFA V mandates for therapeutic area data standards, and rapidly increasing adoption by sponsors as a result. And the involvement of TransCelerate BioPharma, a consortium of top sponsors, in the CFAST therapeutic area data standards initiative is a game changer—since it elevates awareness and commitment to data standards at the highest executive levels of the pharmaceutical industry.

While some will say that CDISC standards are not adequate to fully realize the vision, it's also pretty clear that they are the only proven set of global standards particularly designed to facilitate clinical research, and also the set most ready for prime time. They clearly are better than any other option, and they pass the stickiness test. They've also been designed explicitly to be complementary and compatible with healthcare standards—while still representing research in a way that's familiar to the current research community and compatible with most existing tools. Of course we may find better methods in the future through the semantic web and other emerging technologies—but such natural growth and evolution is only to be expected. What's important is to build now on what is best known, most used, and most feasible at this time—we can only go up from there.

In another of his books, Outliers, Gladwell tries to explain some of the factors that have made some people brilliant successes, while others never achieve their full potential. One of these is serendipity—the fact that a group of people working in a specific area somehow came together—or even competed—with the right tools, connections, and circumstances to succeed at a special point in time. Maybe it's finally time for the outliers of science to work together to really begin to realize the extraordinary promise and vision of personalized medicine. It took over a decade for EDC to hit its tipping point, more than seven years for CDISC SDTM to get full traction. Maybe next time we'll do it in five. But let's tip the balance now while we can.

Meanwhile, I wonder if there's a genetic basis for the headache I get from drinking certain wines. I can't wait for 23andMe to find out.

Wayne R. Kubick is Chief Technology Officer for the Clinical Data Interchange Standards Consortium (CDISC). He resides near Chicago, IL, and can be reached at [email protected].
