TransCelerate’s Placebo and SOC Data Sharing Initiative

Article

Applied Clinical Trials
TransCelerate recently launched the Placebo and Standard of Care Data Sharing Initiative to foster a data sharing infrastructure among TransCelerate member companies. The initiative focuses on leveraging empirical data to create historical patient controls, which are expected to reduce the number of patients needed in a clinical trial and shorten trial execution timelines. In this interview, Ed Bowen, Senior Director of Translational and Bioinformatics at Pfizer and the lead of this TransCelerate initiative, discusses how it will improve clinical trial productivity.

Moe Alsumidaie: Can you describe how TransCelerate plans to share the data? Who will be hosting the electronic data sharing platform?

Ed Bowen: Eleven TransCelerate member companies are actively participating in this initiative. We have conducted an RFP and are negotiating the selection of an external third party to assist us with our data sharing platform. First, the third party will assist us with mapping our data into the CDISC format for individual therapeutic areas. Second, it will harmonize the vocabularies and the data and conduct a de-identification process. Lastly, the data will be hosted through a cloud-based solution on the third party's premises; they will manage the requirements, security, and authentication.
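To make the mapping and de-identification steps concrete, here is a minimal, hypothetical Python sketch; the legacy column names, the variable map, and the identifier list are illustrative assumptions, not the vendor's actual pipeline or a TransCelerate schema.

```python
# Illustrative sketch only: rename legacy variables to CDISC SDTM-style
# names, then drop direct identifiers as a crude de-identification step.
import pandas as pd

LEGACY_TO_SDTM = {            # hypothetical legacy -> SDTM DM variable map
    "patient_id": "USUBJID",
    "birth_date": "BRTHDTC",
    "sex_code":   "SEX",
    "study_name": "STUDYID",
}
IDENTIFIERS = ["BRTHDTC"]     # fields removed during de-identification

def to_sdtm(legacy: pd.DataFrame) -> pd.DataFrame:
    """Map a legacy demography extract to SDTM-style names, then de-identify."""
    sdtm = legacy.rename(columns=LEGACY_TO_SDTM)
    return sdtm.drop(columns=[c for c in IDENTIFIERS if c in sdtm.columns])
```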

MA: The Clinical Study Data Request Portal, which some TransCelerate member companies are involved with, has policies and an approval process around data transparency for member and non-member entities. Is TransCelerate planning to implement similar policies?

EB: I want to emphasize that we have established a data sharing agreement to make sure the de-identified data is used by the member companies in approved ways intended to support exploratory research. We have a set of guidelines around acceptable use and, effectively, an authorization process to grant users access to the data; use of the data is currently restricted to member companies and must comply with those guidelines. A number of our member companies participate in other data sharing consortia, such as Project Data Sphere (PDS) and the Critical Path Institute (C-Path) CAMD initiative. I think the difference here is that all of the researchers will be from the member companies, as opposed to external researchers, who require a review committee to look at the way they would be using the data.

MA: Will this Placebo and Standard of Care Data Sharing Initiative also be accessible to external parties, such as academia, or is it only limited to TransCelerate member companies?

EB: Initially, this initiative will include access only to TransCelerate member companies, but we have agreed to reassess the policy. We have to decide exactly how we would do that, and we are also watching other trends emerging in the industry and other data sharing efforts. I think there is a desire to make the data available, but, in the short term, we want to first make sure the platform operationalizes successfully within TransCelerate members' infrastructures. Once it does, we will reassess how we can share the data with external communities. We are in dialogue with C-Path and PDS about their experiences and best practices, and we definitely want to collaborate with those organizations and other groups working in this type of arrangement; we want to be part of the broader ecosystem, and we certainly don't want to work towards our goal in a silo.

MA: How are TransCelerate's data sharing initiatives different from the PhRMA/EFPIA Principles for Responsible Clinical Trial Data Sharing and Project Data Sphere?

EB: I can certainly start with PDS, because we have been close to them; we've worked with them, and we originally discussed the idea of partnering with them on this. I'd say that at some point in the future, we could see some of these initiatives coming together, either in the form of a collaboration or another type of arrangement. PDS has been fantastic to work with -- they've been very open to sharing their best practices, and that has been very helpful in accelerating our timelines. I think the major difference is that, initially, they have not focused on mapping the data to a consistent standard so you can pull the data across studies, and that's one of our main objectives. For example, we want to provide context around safety observations in a clinical trial, as well as pull safety data across similar data sets and similar types of patients as we build a control set. If we don't have the data mapped to a consistent standard, it becomes very labor-intensive each time we try to build useful metrics and benchmarks. That's the major difference between the two.

Another differentiator is that PDS has been focused on oncology, and we want to go beyond that therapeutic area. TransCelerate member companies are in several data sharing collaborations and are sharing data with both TransCelerate and PDS. We believe there's value to member companies in minimizing redundancies in order to streamline as much as possible. Hence, we're continuing to talk with PDS and other data sharing initiatives about how we can partner to improve data sharing.

MA: What standardized data or similarities in patient data will you use to aggregate the statistical analyses and create benchmarks from both qualitative and quantitative aspects?

EB: Several member companies have developed large repositories of clinical trial data over the course of many years, some of which came through M&A. Some of the initial questions we asked were: Is it even feasible to pull this data together? And if it is feasible, what can we do with the data? We discovered that some of the data is 10, 15, or 20 years old and stored in legacy data standards from different companies, and we determined that it would be very expensive and labor-intensive to try to map that data to build a control set. However, we found that more recent data (within the last seven years) is much more useful and easier to map, and we were able to make sense of the data as we looked across different studies.

Most of the clinical domains can be pulled together across disease areas. For example, safety data such as adverse events, laboratory values, and demography aggregate relatively well across disease areas, and this data is very useful. The major exception in terms of mapping is efficacy data, for obvious reasons: comparing efficacy measurements across Alzheimer's, pain, and oncology trials is very different.

MA: Can you elaborate on the benefits of using aggregated control data in clinical trials?

EB: In some use cases, such as safety, we were able to start building control pools of healthy subjects on placebo. Having a control pool enables us to realize significant improvements in clinical trial productivity. For instance, if we're running a vaccine trial and we happen to make a safety observation, such as an allergic reaction, alopecia, or some other type of adverse event, we can go to the control data set and ask: in a medically stable, healthy control population, what would the adverse event rate be? How many allergic reactions of this type would we see per 1,000 patient-years?

This allows study teams to conduct a quick check to validate expected outcomes and combine these findings with the other contextual sources we traditionally use, such as literature searches. This approach gives us a fuller picture of patient safety, and really that's what we're striving towards -- we want safe outcomes for patients, and efficacious drugs that are safe.
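As a rough illustration of that quick check, the hypothetical Python sketch below computes an adverse event rate per 1,000 patient-years from a pooled placebo set; the column names and figures are invented for illustration, not an actual TransCelerate data set.

```python
# Illustrative only: background AE rate from a pooled placebo control set.
import pandas as pd

placebo_pool = pd.DataFrame({
    "usubjid":        ["001", "002", "003", "004"],
    "ae_term":        ["allergic reaction", None, "headache", "allergic reaction"],
    "years_followed": [1.5, 2.0, 0.8, 1.2],
})

def rate_per_1000_patient_years(df: pd.DataFrame, term: str) -> float:
    """Events matching `term` per 1,000 patient-years of placebo follow-up."""
    events = (df["ae_term"] == term).sum()
    patient_years = df["years_followed"].sum()
    return 1000 * events / patient_years

print(rate_per_1000_patient_years(placebo_pool, "allergic reaction"))
# 2 events over 5.5 patient-years -> roughly 364 per 1,000 patient-years
```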

MA: This is fascinating. Can you expand on the application of historical aggregated data control methods on current clinical trials?

EB: What really has people energized is the idea of the historical control. This rests on the fact that we have collected placebo data on patients for years. When we conduct another trial and are about to dose another 100 subjects with placebo, historical controls already tell us what happens to a patient on placebo; in Alzheimer's and pain trials, for instance, we already know what happens to patients when you give them a placebo.

C-Path did work around this via their Coalition Against Major Diseases (CAMD) initiative. When C-Path did disease modeling, they aggregated placebo arm data from 23 clinical trials and were able to build models that were eventually validated by the FDA. We see an opportunity here because a number of TransCelerate member companies have done this in proof-of-concept (POC), Phase I, and Phase IIa trials, using prior data to reduce the number of patients needed in the control arm.

When we look at benchmark metrics for how much it costs to put a patient in a clinical trial, we're looking at $25,000 to $30,000 per patient, or up to $60,000 to $70,000 per patient in the more expensive trials. Removing 100 patients from a trial is therefore equivalent to reducing trial cost by roughly $3 million to $7 million. When we're conducting studies on hard-to-find patients, we might be able to reduce the enrollment timeframe by 12 months. We've got real examples at our member companies where this has happened, and this is good for companies and patients: research dollars are scarce, and if we can reduce the cost of a trial, it allows us to reinvest those funds in additional research.
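As a back-of-the-envelope check on those figures, here is a trivial sketch using the per-patient costs quoted above; the function and numbers are purely illustrative, not actual trial budgets.

```python
# Rough illustration of the savings described in the interview.
def trial_savings(patients_removed: int, cost_per_patient: int) -> int:
    """Cost avoided by replacing enrolled control patients with historical data."""
    return patients_removed * cost_per_patient

for cost in (30_000, 70_000):
    print(f"Removing 100 patients at ${cost:,}/patient saves "
          f"${trial_savings(100, cost):,}")
# -> $3,000,000 and $7,000,000, matching the $3M-$7M range cited
```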

MA: The FDA recently launched the Lung Cancer Master Protocol, which reuses multiple study arms. Is this initiative aiming to achieve a similar approach?

EB: This approach is very exciting in rare disease applications. It is very hard to recruit for rare disease trials, and in order to get therapy to these patients faster, we want to be able to reuse the data and get the most value out of it. This isn't part of our effort, but there is talk of shared control arms, where one control arm runs concurrently across six trials. That means enrolling far fewer patients and improves the chances of finding an efficacious drug that's going to help the patient. We really want to get therapies to patients in need, and these tools can help us do it faster and more safely.

MA: Do you think that this approach will eventually eliminate the need for placebo arms in trials?

EB: No, and I wouldn't advocate for that; rather, we want to reduce the number of control patients in a trial. First, standard of care changes over time, so we want to make sure we're continuously refreshing the database by adding new active control patients. Second, we expect that patient population demographics and, with the emergence of precision medicine, patient stratification based on genotype will evolve over time, so we want to make sure the historical control data incorporates these changes. Hence, replacing a portion of the control arm is the most appropriate approach. For example, if we're running a trial that requires 150 patients in the control arm, we can use 100 patients as a historical control and enroll 50 patients as an active, concurrent control; this saves us from enrolling those 100 patients. It's much cheaper to do the research, and it's good for the 100 patients who would otherwise have been dosed with placebo.
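Here is a minimal sketch of that split, assuming a simple one-for-one substitution of historical for enrolled controls; real designs typically discount historical patients (for example, via Bayesian borrowing), so the function name and rule are illustrative assumptions.

```python
# Illustrative hybrid-control arithmetic: split a required control arm
# between historical patients and newly enrolled concurrent patients.
def hybrid_control_plan(required_controls: int, historical_available: int) -> dict:
    """One-for-one borrowing; actual designs would down-weight historical data."""
    borrowed = min(historical_available, required_controls)
    concurrent = required_controls - borrowed
    return {"historical": borrowed,
            "concurrent": concurrent,
            "enrollment_avoided": borrowed}

print(hybrid_control_plan(required_controls=150, historical_available=100))
# -> {'historical': 100, 'concurrent': 50, 'enrollment_avoided': 100}
```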

MA: What drives your passion and commitment to this project?

EB: The reason this is exciting for me is that we all have family and friends who are patients. My daughter was a pediatric oncology patient, and now she's a happy and healthy 17-year-old. Her good health was achieved through treatments based on research done 20 years ago with other kids, and she participated in a clinical trial eight years ago to help the next set of kids who would face that terrible disease. Because of this kind of work, we can save lives and give health and happiness to children and adults so they can live longer, healthier lives. That's what we're all about, and that's what motivates us to do what we do every day.
