Interactive Voice Tools and AI Reduce Subject Burden


Applied Clinical Trials

Bill Tobia, CEO of Clinstruct; Ahmed Bouzid, CEO of Witlingo; and Brielle Nickoloff, Lead Product Marketing at Witlingo, discuss how voice will change the clinical research paradigm.

Many in the industry are discussing the use of artificial intelligence (AI) to assist in clinical trials, and as reported in a previous article, voice is the new digital experience for patients. Clinical trials are complex projects, and they often place a significant burden not only on patients, but also on principal investigators (PIs), study staff, and study monitors. Bill Tobia, CEO of Clinstruct, has partnered with Witlingo, specifically Ahmed Bouzid, CEO, and Brielle Nickoloff, Lead Product Marketing, to create a new interactive voice tool for clinical research. In this interview, they discuss how voice will change the clinical research paradigm.

Moe Alsumidaie: What is prompting the use of voice tools in clinical trials?

Bill Tobia: Clinical trials require active, compliant engagement from both the principal investigator (PI) and the participant. From the participants’ perspective, dropout and nonadherence are concerns, and voice tools can keep participants engaged by delivering accurate answers to their questions during the conduct of the trial. The answers are derived from the informed consent, thereby increasing communication, compliance, and participation. From the PI’s standpoint, the “one and done” syndrome is common: a physician runs a single trial and never participates again, which is reflected in the declining number of physicians participating in clinical trials. Executing a clinical trial is very time consuming, and physicians often lack a robust infrastructure in their offices to conduct a study efficiently. Their time is limited, and they need their protocol questions answered quickly and accurately without having to sift through lengthy protocols or wait for responses from CRAs. Voice addresses this challenge as well, because the technology is now capable of accurately deriving answers from the protocol and other study materials, improving the efficiency of study conduct. Finally, just like the PI, monitors and research coordinators can have their questions answered quickly and accurately using the same voice tool.

Moe: So, we are talking about all sorts of questions that people may ask, and no two people will phrase the same question the same way. Can you describe the technology that operates behind the scenes to provide the right answer to the right question with interactive voice systems?

Ahmed Bouzid: Like any other piece of software, the foundation of a good product is design. Design, in this case, means anticipating the questions that will be asked: not the precise wording, but the canonical questions (i.e., the types of questions a user may ask). That is the key to an excellent natural language interface. The core value of natural language and speech is to let users ask questions naturally, in their own words, so that the interpretation burden is placed on the interface, the AI, rather than on the human.

The second part is how we deal with the variations. In our case, after identifying the canonical questions, we feed our software the full text of the answers to those questions. The software builds a meaning model, or an ontology, for each of those answers and makes those ontologies searchable, not at the word level but at the semantic, ontology level. So, when a user asks a question, what happens behind the scenes is a special kind of search: one that maps the question to each answer, gives each mapping a score, and plays back the best-scoring answer to the user.
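
To make that question-to-answer mapping concrete, here is a minimal, hypothetical Python sketch. It substitutes a simple bag-of-words cosine similarity for the proprietary ontology-level semantic matching Bouzid describes, which the interview does not detail; the canonical questions, answers, threshold, and fallback message are all illustrative.

```python
from collections import Counter
import math

# Illustrative canonical questions mapped to approved answers drawn from
# study materials; these pairs are hypothetical, not from a real protocol.
FAQ = {
    "what should i do if i miss a dose":
        "If you miss a dose, contact your study coordinator for instructions.",
    "what are the common side effects":
        "The informed consent lists rash, nausea, and headache as possible side effects.",
}

def bag_of_words(text):
    """Tokenize into a word-count vector (a crude stand-in for an ontology)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_answer(user_question, faq=FAQ, threshold=0.3):
    """Map the user's question to every canonical question, score each
    mapping, and play back the best-scoring answer, as described above."""
    query = bag_of_words(user_question)
    score, match = max((cosine(query, bag_of_words(q)), q) for q in faq)
    if score < threshold:
        return "I am not sure. Please contact your study coordinator."
    return faq[match]

print(best_answer("what do i do if i missed a dose"))
```

A real system scores at the semantic level rather than the word level, so paraphrases such as “I forgot my pill” and “I missed a dose” would map to the same answer even with little word overlap.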

Moe: How does the system identify and report clinical trial adverse events (AEs)?

Brielle Nickoloff: Think of the tool as a content management system plus an AI that interacts with the user. Let us say a user launches an Alexa skill, asks it questions, and at some point, they may say, "yesterday I had a rash," which would be an example of an adverse event mention.

Now, from the vantage point of a clinical study manager, the tool serves two purposes: building the answers to the questions users might ask, and seeing what questions are actually being asked and what users are saying in general. Our system also surfaces visuals if the user is accessing it on a device that has a screen; if a patient is asking about an adverse effect and needs to contact somebody like their doctor quickly, a specific prompt can include an image of contact information, such as a phone number. Voice tools can provide sponsors with a transcript that highlights words indicating potential adverse events. Additionally, when a potential adverse event is reported, the system can notify the subject to call the PI and notify the PI to review those transcripts.
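
As a rough illustration of the transcript-flagging step, the sketch below scans utterances for a hypothetical watchlist of adverse event terms. A production system would use far richer clinical vocabularies (e.g., MedDRA) and contextual understanding, which the interview does not specify; the terms, transcript, and flagging logic here are illustrative only.

```python
import re

# Hypothetical watchlist; a real system would draw on a clinical
# vocabulary such as MedDRA rather than a hand-picked set of terms.
AE_TERMS = {"rash", "nausea", "dizziness", "headache", "fever"}

def flag_adverse_events(transcript_lines):
    """Return (utterance, matched terms) pairs for lines that may mention
    a potential adverse event, for PI review as described above."""
    flagged = []
    for line in transcript_lines:
        words = set(re.findall(r"[a-z]+", line.lower()))
        hits = words & AE_TERMS
        if hits:
            flagged.append((line, sorted(hits)))
    return flagged

transcript = [
    "When is my next visit?",
    "Yesterday I had a rash on my arm.",
]
for utterance, terms in flag_adverse_events(transcript):
    print(f"Potential AE ({', '.join(terms)}): {utterance}")
```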

Moe: How do voice tools support patient-centricity?

Bill: “Patient-centricity” is about focusing on the needs of the study participant, both medically and informationally. Voice tools support it by taking all the information that is provided to the monitor, the PI, and the participant and making it easily accessible through a human interface that people are already used to.

Moe: Why voice and why now?

Ahmed: At every stage in the evolution of interfaces, more and more people have been able to access computing intelligence. The last iteration was the smartphone, where we could access information by tapping, swiping, and typing. What we see now with voice is simply the next iteration in that evolution, where we can access information by asking and listening.

The simplest interface, voice, takes the most energy to implement, because the burden is taken off the patient and handled by the AI. At every previous stage we had to learn something unnatural: coding, windows, tapping, swiping. We were bending to the interface. With voice, the interface bends to us and engages us in our own human language, and that is what makes it so different, so exciting, and so useful in clinical trials.
