Tackling Trial Diversity Through Higher Innovation Standards

Publication
Article
Applied Clinical Trials, March 2023
Volume 32
Issue 3

Advancing responsible AI innovation to help resolve patient disparities.

A lack of diversity in clinical trials can have serious implications for underrepresented populations, because people can respond differently to the same medication. However, improving diversity isn’t going to happen on its own. It must be intentionally and proactively addressed in all realms of research, including the technology that patients use during a trial. As artificial intelligence (AI)-powered technologies become increasingly embedded in drug research and are put in the hands of more patients, they become a critical avenue for making clinical research more inclusive.

Ensuring that algorithms are built with a diverse data foundation and holding the AI industry accountable to rectify biases will help encourage participation from diverse patients and foster trust in clinical research.

What is the danger of biased technology in clinical research?

The capabilities that make AI so promising for clinical research, such as automated data mining, predictive analytics, and augmented decision support, are the same ones that can cause it to do more harm than good if it is not built on quality, diverse data. But not all AI in clinical trials carries the same weight. For example, an improperly trained algorithm used to help gauge patient engagement could misreport data around dropout rates, a frustrating but fixable operational inaccuracy that does not impact patient safety.

On the other hand, the stakes can be much higher. If a clinical trial deploys a facial recognition algorithm that promises to work for all patients to confirm medication adherence among patients with depression, but in practice it fails on those with darker skin, it could lead sponsors to act on misleading efficacy data. That can carry serious, long-term consequences once the depression drug is deployed in real-world settings: patients with darker skin would be taking a medication that was not vetted the way it was for white patients, potentially putting them at serious health risk.

For the sake of ethics and sound drug development, we need to establish processes that safeguard the quality of our datasets and confirm data is representative of the intended population.
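
One such process is disaggregated evaluation: measuring an algorithm’s performance separately for each demographic group rather than reporting a single overall accuracy. The sketch below is illustrative only; the column names, group labels, and disparity tolerance are assumptions for the example, not a description of any particular vendor’s pipeline.

```python
# Minimal sketch: disaggregated evaluation of a hypothetical adherence-detection model,
# comparing accuracy across self-reported skin-tone groups.
import pandas as pd
from sklearn.metrics import accuracy_score

def accuracy_by_group(results: pd.DataFrame, group_col: str = "skin_tone") -> pd.Series:
    """Compute model accuracy separately for each demographic group."""
    return results.groupby(group_col).apply(
        lambda g: accuracy_score(g["adherent_true"], g["adherent_pred"])
    )

def exceeds_disparity_tolerance(per_group: pd.Series, max_gap: float = 0.05) -> bool:
    """Flag the model if the best- and worst-served groups differ by more than max_gap."""
    return (per_group.max() - per_group.min()) > max_gap

# Example usage with hypothetical evaluation results (one row per patient video):
# results = pd.read_csv("adherence_eval.csv")
# per_group = accuracy_by_group(results)
# if exceeds_disparity_tolerance(per_group):
#     print("Accuracy gap across skin tones exceeds tolerance:", per_group.to_dict())
```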

Building diversity into AI with intention

Just as the patient population enrolled in a trial needs to be representative of those who will actually take the medication, the datasets used to build AI tools need to be representative of those who will use them. Admittedly, building a truly diverse data foundation for one’s algorithm is a big undertaking. It takes trial and error, concentrated effort, and often creative approaches that go far beyond available open-source datasets.

For example, facial recognition algorithms need to account for multiple dimensions of diversity and be able to recognize different skin tones, wrinkles, freckles, people wearing hats or sunglasses; the list goes on. This means an algorithm needs to be exposed to, interrogated under, and continually evaluated against a variety of scenarios before it’s ready for deployment in a clinical trial.

Doing so also helps expose the scenarios in which the algorithm does not work, so that it can be used with caution in those situations, much like knowing a thermometer’s temperature range and only trusting its accuracy within that range.
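
As a rough illustration of that operating-range idea, the sketch below only trusts the model’s output in capture scenarios that met an accuracy floor during validation. The scenario labels, accuracy values, and fallback behavior are all assumptions made for the example.

```python
# Minimal sketch of an "operating range" check: only rely on the model in scenarios
# where validation showed acceptable accuracy; otherwise escalate to a human reviewer.
from typing import Dict

VALIDATED_ACCURACY: Dict[str, float] = {
    "indoor_good_light": 0.97,
    "indoor_low_light": 0.93,
    "outdoor_backlit": 0.81,   # below the floor: treat as outside the validated range
}
ACCURACY_FLOOR = 0.90

def in_validated_range(scenario: str) -> bool:
    """Return True only if this capture scenario met the accuracy floor in validation."""
    return VALIDATED_ACCURACY.get(scenario, 0.0) >= ACCURACY_FLOOR

def route_prediction(scenario: str, prediction: str) -> str:
    """Use the model's output inside its validated range; otherwise defer to human review."""
    return prediction if in_validated_range(scenario) else "needs_human_review"
```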

In building our own datasets, we knew from the beginning that unbiased data capture was the only way to deliver true value to patients and the drug research process. When we first built our algorithm for detecting whether patients were taking their medications, we quickly learned that the readily available, widely adopted open-source datasets we used had been built largely from fair-skinned people. In turn, our algorithm wasn’t working properly for darker-skinned patients. We then built our own diverse dataset by recruiting diverse volunteers from sources such as Craigslist to contribute videos, training the AI to recognize a patient no matter their appearance or environment. Rather than recruiting a fixed number of volunteers, we set a minimum threshold and keep training the algorithm until additional data no longer yields significant improvement.
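
That stopping rule, collecting and training until additional data no longer yields meaningful gains, could be expressed roughly as follows. The helper functions train_model and evaluate are hypothetical stand-ins, and the minimum-gain threshold is an arbitrary example value, not the actual criterion used.

```python
# Minimal sketch of a "train until improvement plateaus" loop: keep adding batches of
# newly collected, diverse training videos until validation accuracy stops improving
# by a meaningful margin.
def grow_dataset_until_plateau(data_batches, train_model, evaluate, min_gain: float = 0.005):
    """Add training data batch by batch; stop once the gain per batch drops below min_gain."""
    training_data, best_score, model = [], 0.0, None
    for batch in data_batches:
        training_data.extend(batch)
        model = train_model(training_data)
        score = evaluate(model)  # e.g., accuracy on a held-out, demographically diverse validation set
        if score - best_score < min_gain:
            break  # additional data no longer yields significant improvement
        best_score = score
    return model, best_score
```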

It is an ongoing effort to continuously improve algorithm accuracy to achieve the most effective output. This process is a testament to the creative strategies necessary to build a quality, equitable product, and a warning to AI developers who should be careful of assuming that off-the-shelf open-source software fits every population and disease state.

From transparency to trust

Eliminating biases in AI-powered tools requires elevated governance and standardization for all algorithms to reassure users that they are safe and accurate. As an industry, we could afford to be more skeptical of AI’s conclusions and encourage transparency around the development of these tools. Pulling the curtain back on the historically mysterious nature of AI means developers must prioritize traceability of an algorithm’s development so they can readily illustrate its workflow and how one component feeds into the next.

There may be a day when developers will need to deliver an audit report that traces an algorithm’s inner workings and the quality of its data, acting as a seal of approval for its broader use. Such a report, developed by the technology company itself, could include insight into how data was collected; how the algorithm was trained and tested, and which datasets were chosen for each; what hypothesis the developers set out to test; what generated a specific output; and more. Because biases can often be caught during the planning stages, accountability during the development phase is critical.
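
If such audit reports do emerge, part of their value would come from being machine-readable. Below is a minimal sketch of what such a record might capture, following the items listed above; the field names and example values are hypothetical, not a regulatory or industry standard.

```python
# Minimal sketch of a machine-readable audit record for an algorithm, capturing data
# collection, training/testing datasets, the hypothesis, and per-group results.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AlgorithmAuditRecord:
    algorithm_name: str
    version: str
    hypothesis: str                          # what the developers set out to test
    data_collection_methods: List[str]       # how the training data was gathered
    training_datasets: List[str]
    testing_datasets: List[str]
    subgroup_metrics: Dict[str, float] = field(default_factory=dict)  # e.g., accuracy by skin tone
    known_limitations: List[str] = field(default_factory=list)

# Example usage with hypothetical values:
# record = AlgorithmAuditRecord(
#     algorithm_name="adherence_detector", version="2.3.0",
#     hypothesis="Video-based dosing confirmation matches directly observed therapy",
#     data_collection_methods=["recruited volunteer videos", "consented trial footage"],
#     training_datasets=["volunteer_videos_v4"], testing_datasets=["held_out_diverse_v2"],
#     subgroup_metrics={"lighter_skin_tones": 0.96, "darker_skin_tones": 0.95},
# )
```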

The work doesn’t stop once the AI is out in the world; there is an ongoing responsibility, and investment, needed to monitor and trace its performance in the real world and refine it accordingly. If an algorithm isn’t working as planned, companies should be incentivized to fix it and to incorporate more diverse patients into their testing, rather than exposing patients to inadequate and potentially harmful algorithms.

There is a natural incentive for a company to deliver on the promise of ethical technology: doing so rewards the organization with a better product, more engaged users, a better user experience, and, ultimately, market success.

Establishing a checks-and-balances protocol for AI, one in which we can detect inaccuracies in real-world scenarios, will only help advance best practices and lead to stronger algorithms with fairer outcomes.
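
One way to make that real-world check concrete is to compare each group’s live accuracy against its validation baseline and raise a flag when it drifts. The sketch below assumes ground truth eventually becomes available for a sample of predictions; the window size, tolerance, and baselines are illustrative assumptions, not a prescribed monitoring standard.

```python
# Minimal sketch of post-deployment monitoring: track rolling real-world accuracy per
# demographic group and flag any group that falls meaningfully below its validation baseline.
from collections import defaultdict, deque
from typing import Dict, List

class SubgroupPerformanceMonitor:
    def __init__(self, baselines: Dict[str, float], window: int = 500, tolerance: float = 0.03):
        self.baselines = baselines                                   # validation accuracy per group
        self.tolerance = tolerance
        self.outcomes = defaultdict(lambda: deque(maxlen=window))    # 1 = correct, 0 = wrong

    def record(self, group: str, correct: bool) -> None:
        """Record whether a verified prediction for this group was correct."""
        self.outcomes[group].append(1 if correct else 0)

    def drifting_groups(self) -> List[str]:
        """Return groups whose live accuracy has fallen more than `tolerance` below baseline."""
        flagged = []
        for group, results in self.outcomes.items():
            if results:
                live_accuracy = sum(results) / len(results)
                if live_accuracy < self.baselines.get(group, 1.0) - self.tolerance:
                    flagged.append(group)
        return flagged
```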

Raising the bar for equitable AI-powered tools

To make clinical research more equitable, we need to resolve disparities inherent in many of the tools put in front of patients. However, there is reason to be hopeful. The mere fact that we are starting to have more honest conversations about implicit biases in AI is a step in the right direction.

With transparency comes trust, and by advancing responsible AI innovation, all patients can truly be represented in clinical trials to help fuel the development of therapies that can improve outcomes for a broader, more diverse audience. Together, we can hold each other to a higher standard so we can do right by our patients and realize the true promise of these technologies.

Ed Ikeguchi, MD, CEO, AiCure
