After nearly a decade of implementation and three rounds of “Question and Answer” documents to clarify procedural and scientific questions surrounding the ICH E14, the US FDA, in open meetings, stated the Thorough QT (TQT) trial “must” be replaced. This marks an extraordinary reversal of regulatory opinion. It can be estimated that over 250 TQT trials have been conducted, costing drug developers many millions of dollars.
While the TQT “process” (including pre-clinical investigations) has, unquestionably, kept QT-prolonging drugs off the market, the process has tended to treat all non-antiarrhythmic drugs the same. It is well known, however, that not all QT prolongation is pro-arrhythmic. Once a compound is shown to prolong the QT interval, drug developers face the very difficult task of proving that their QT-prolonging compound is safe.
This review covers both the problems with the TQT trial and what is being considered for the future of cardiac safety.
When administering a new chemical entity (NCE) to human beings and subsequently assessing the pro-arrhythmic potential, it is important to understand whether the problem resides with the parent drug, the metabolite, or a drug-drug interaction. The problem might also be a combination of these factors. For example, terfenadine (Seldane) was a non-sedating antihistamine that caused Torsade de Pointes (TdP), a sometimes fatal polymorphic ventricular tachycardia. Investigations revealed that the concentration of terfenadine in the blood increased when it was given along with ketoconazole, an antifungal that inhibits the CYP450 pathway through which terfenadine is metabolized.1,2 The active metabolite, fexofenadine, did not itself prolong the QT interval or cause TdP.
Since the early 1990s, TdP has risen in the consciousness of industry and the regulators. A simple MEDLINE search shows that the number of articles averaged fewer than 100 per year between 1990 and 1999, but increased to an average of over 150 per year from 2000 through 2013.
During the mid-to-late 1990s, as scientific awareness was raised, so was the concern of the regulators. The International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) introduced a number of guidance documents detailing both clinical and preclinical testing of a new chemical entity’s pro-arrhythmic potential. These include:
ICH S7A: Safety Pharmacology Studies for Human Pharmaceuticals3
ICH S7B: The Non-clinical Evaluation of the Potential for Delayed Ventricular Repolarization (QT Interval Prolongation) by Human Pharmaceuticals4
ICH E14: The Clinical Evaluation of QT/QTc Interval Prolongation and Proarrhythmic Potential for Non-Antiarrhythmic Drugs5
While the ICH S7A discusses more than just the cardiovascular system, it reminds us that cardiac safety involves more than the QT interval. It recommends that heart rate and blood pressure be tested with possible follow-up studies involving indices such as cardiac output, ventricular contractility, and vascular resistance.
The ICH S7B is the preclinical correlate of the ICH E14. The S7B reviews the non-clinical evaluation for delayed ventricular repolarization potential. It calls for both in-vitro and in-vivo investigations. The in-vitro investigations are mainly the hERG channel assay and Purkinje fiber assessment. The acronym hERG stands for “human ether-a-go-go related gene.” The gene encodes the channel that carries the rapid component of the delayed rectifier potassium current (IKr). Blockade of this channel is correlated with prolongation of the QT interval.6
The in-vivo assessments are focused on the ECG from intact animals such as the dog and monkey, as well as some others. Heart rates and other parameters can also be evaluated from the animal model.
The thorough QT trial is covered in the ICH E14. The E14 was discussed for a number of years before it was endorsed in 2005. Subsequent updates to regulatory thought have been published in question-and-answer documents. The first such document was released in 2008. The third document (r2) was released in 2014.7 These documents have clarified issues such as automated measures, baselines, gender analyses, as well as others.
The E14 is focused on ventricular repolarization in human trials. It calls for a dedicated trial and uses the corrected QT-interval (QTc) as a surrogate marker for the risk of TdP. It is important to note that the E14 does not specifically say that the QT interval causes TdP, only that the “delay in cardiac repolarization creates an electrophysiological environment that favors the development of cardiac arrhythmias…”5
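Because the raw QT interval shortens as heart rate rises, the QTc used as the E14 surrogate is a heart-rate-corrected value. As a minimal illustration, the two most common corrections, Bazett’s and Fridericia’s, can be sketched as follows (the formulas are standard; the example numbers are hypothetical):

```python
def qtc_bazett(qt_ms: float, rr_s: float) -> float:
    """Bazett correction: QTcB = QT / sqrt(RR), with QT in ms and RR in s."""
    return qt_ms / rr_s ** 0.5

def qtc_fridericia(qt_ms: float, rr_s: float) -> float:
    """Fridericia correction: QTcF = QT / cube root of RR."""
    return qt_ms / rr_s ** (1.0 / 3.0)

# Example: a measured QT of 400 ms at 75 bpm (RR = 0.8 s)
print(round(qtc_bazett(400, 0.8), 1))      # 447.2 ms
print(round(qtc_fridericia(400, 0.8), 1))  # 430.9 ms
```

Fridericia’s correction (QTcF), which over-corrects less at high heart rates than Bazett’s, is the version most often reported in TQT analyses.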
The FDA went a step further and formed the QT Interdisciplinary Review Team (QTIRT) to review both protocols and data around the QT issue. This group has been instrumental in giving pre-study advice and reviewing thorough QT data.
However, it was recognized early in the ICH E14 era that NCEs can affect more than just the QT interval on the electrocardiogram. The thorough and intensive QT trial also reports data on the QRS duration and the PR interval. These intervals reflect the activity of cardiac sodium and calcium channels, respectively.
Twelve-lead electrocardiograms contain a wealth of information about the functioning of the heart. Features of the ECG, both waveforms and intervals, correlate with the structure and function of different segments of the myocardium. The table below shows some of these structures and functions and the main abnormalities seen in clinical trials.
As practiced, most TQT trials fall within a fairly narrow set of design parameters. The TQT trial is usually a crossover or parallel study with four treatment arms: a therapeutic dose, a supra-therapeutic dose, placebo, and a positive control. Intrinsic variability of the QT interval is controlled by “the collection of multiple ECGs at baseline and during the study.”5 For most TQT studies, this has come to mean “triplicate” ECGs at each time point. Depending on the study design, there can be as few as approximately 40 completed subjects for a crossover design or more than 180 completed subjects for a parallel design. It is worth noting that most TQT trials use normal healthy volunteers.
Efforts to reduce the sample size for TQT trials have included collecting more ECGs and increasing data quality by standardizing on-site procedures. The outcome of these efforts is to reduce the standard deviation associated with the QT interval. While these efforts are important, there is a practical floor on the number of subjects recruited in a TQT trial: the positive control. Every TQT trial must demonstrate assay sensitivity. If too few subjects are recruited, the statistical test for assay sensitivity might not be met, invalidating the TQT trial.
Statistically, the thorough QT trial seeks to exclude a QTc effect of 10 ms. This is done by comparing the on-treatment QTc against the baseline- and placebo-corrected QTc (ΔΔQTc) at each study time point and showing that the upper bound of the 95% confidence interval stays below 10 ms. In addition, the plasma concentration is compared and modeled against the QTc.
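The exclusion test can be sketched in a few lines. This is a deliberately simplified version that assumes per-time-point summary statistics (mean ΔΔQTc and its standard error, both hypothetical here) and a normal approximation for the one-sided 95% bound; actual TQT analyses derive the bound from the fitted statistical model:

```python
def ddqtc_excludes_10ms(mean_ddqtc_ms: float, se_ms: float, z: float = 1.645):
    """Upper bound of the one-sided 95% CI for mean ddQTc at one time point
    (normal approximation). Returns (upper_bound_ms, excludes_10ms)."""
    upper = mean_ddqtc_ms + z * se_ms
    return upper, upper < 10.0

# Hypothetical per-time-point results: (mean ddQTc in ms, standard error in ms)
timepoints = {"1 h": (3.2, 1.8), "2 h": (5.1, 1.9), "4 h": (2.4, 1.7)}
for label, (mean, se) in timepoints.items():
    upper, ok = ddqtc_excludes_10ms(mean, se)
    verdict = "10 ms excluded" if ok else "10 ms NOT excluded"
    print(f"{label}: upper bound {upper:.1f} ms -> {verdict}")
```

A trial is declared “negative” only if every time point excludes the 10 ms threshold, which is why controlling the QT standard deviation, as discussed above, matters so much.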
While the thorough QT trial is mainly interested in the corrected QT interval, the FDA generally requests data on other ECG measurements. Because of multiple ion channel involvement, the ECG study report should include data from PR interval and QRS duration measurements. In addition, the heart rate is measured and analyzed.
In many instances, it is unclear what role multiple ion channel blockade plays in the safety analysis. For example, both verapamil and ranolazine are multiple ion channel blockers that have been deemed safe from a ventricular repolarization standpoint. Conversely, the multiple ion channel blocker propoxyphene was removed from the market because of changes to the ECG, including a prolonged PR interval, widened QRS complex, and prolonged QT interval.8
As mentioned previously, the exposure-QTc (ER) relationship is also a large part of the thorough QT trial. This analysis was proposed by the FDA and has been adopted by industry. The standard approach is outlined by Garnett.9
This approach uses a mixed-effects modeling paradigm to relate the concentration of the drug to the QTc. All data are pooled across dosing groups. A regression line is then fitted to the ΔΔQTc to determine whether the QTc change from baseline and placebo increases as the concentration of the drug increases.
An example of the method is shown in the figure below, with plasma concentration on the x-axis and ΔΔQTcF on the y-axis. In this case, the compound being tested is moxifloxacin. As expected, and as shown in the figure, the slope of the regression line is positive, meaning that as the concentration of moxifloxacin increases, the QTcF change from baseline and placebo also increases.
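The core of the ER analysis can be illustrated with a simple pooled regression. The sketch below uses ordinary least squares on hypothetical (concentration, ΔΔQTcF) pairs as a deliberately simplified stand-in for the linear mixed-effects model described by Garnett et al., which additionally includes per-subject random effects:

```python
def fit_er_slope(conc, ddqtcf):
    """Ordinary least-squares fit of ddQTcF (ms) on plasma concentration,
    pooled across dose groups. Returns (intercept, slope)."""
    n = len(conc)
    mean_x = sum(conc) / n
    mean_y = sum(ddqtcf) / n
    sxx = sum((x - mean_x) ** 2 for x in conc)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(conc, ddqtcf))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Hypothetical pooled data: concentration (ug/mL) vs ddQTcF (ms)
conc = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ddqtcf = [2.1, 4.0, 5.8, 8.1, 9.9, 12.2]
b0, b1 = fit_er_slope(conc, ddqtcf)
print(f"intercept = {b0:.2f} ms, slope = {b1:.2f} ms per ug/mL")
```

A positive, statistically significant slope, as in the moxifloxacin figure, indicates that QTc prolongation grows with exposure; the fitted line can then be used to predict the ΔΔQTc at the highest clinically relevant concentrations.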
When used in early phase development, the ER model can provide data on a wider range of exposures than is tested in the TQT trial. This allows for better modeling and can give the drug-developer early information on electrocardiographic responses to the compound.
However, the thorough QT study is not without problems. One long-recognized issue is that prolongation of the QT interval does not necessarily mean a higher risk of TdP. As a result, some good drugs that prolong the QT interval might be abandoned, or given a warning label that reduces their use, because of benign QT prolongation.
Another problem is the expense of the thorough QT trial. While the cost of the thorough QT trial might be less than 1% of the overall cost of bringing a drug to market, it is still a major expense for many companies, including small biotechs that lack the resources to spend. Thus, drugs are either abandoned because of hERG blockade or QT prolongation, or continued at the added expense of intensive testing and large Phase III trials, limiting the return on investment.
Although bad drugs are not getting on the market, and thus the TQT trial has been successful from that point of view, the TQT trial has not given us the information we hoped for; namely, a clear link from prolonged QT to TdP. We have learned that the hERG assay is a very good biomarker for prolongation of the QT interval, but that the QT interval is a very poor biomarker for predicting TdP.
Therefore, the regulators have decided it is time for a new paradigm, beginning with replacement of the thorough QT trial. Ongoing efforts involve preclinical review of multiple ion channels, collection of ECGs in early phases, and a premium on ER relationships.
A new preclinical paradigm, the Comprehensive in Vitro Proarrhythmia Assay (CiPA), will include multiple ion channel effects (MICE) evaluations, isolated cardiac myocytes, and computer modeling.
Both verapamil and ranolazine are hERG blockers that do not cause TdP. These drugs also block more than just the hERG channel. It is postulated that this multiple ion channel effect imparts a degree of cardiac safety not seen in drugs that do cause TdP.
To determine whether studying additional ion channels could help discriminate safe from non-safe compounds, Kramer et al. studied 32 torsadogenic and 23 non-torsadogenic drugs.10 Assays for the Cav1.2 and Nav1.5 channel currents were conducted in addition to the hERG assay. The details of the study are beyond the scope of this discussion, but the authors state that the MICE model is better than the single hERG assay at identifying drugs that cause TdP.
Clinically, more emphasis is being placed on exposure-response (ER) modeling. The Cardiac Safety Research Consortium (CSRC) and IQ-PHRMA have undertaken a study to determine whether or not ER modeling can replace the TQT in clinical development.11 The study includes both QT prolonging drugs and a negative control. The results are scheduled to be presented in December, 2014.
One item of interest is the use of a positive control. In TQT studies, a positive control has been used to ensure the ability of the process to find a small increase in the QT interval if one is present. The most widely used positive control is moxifloxacin. In early phase studies, using a positive control might not be statistically sound. Instead, Malik proposed the use of statistical methods based on the variances of the data to ensure data quality.11 This, or other measures, might usefully replace the positive control requirement. By adopting a statistical model for data quality, a high-quality study can be ensured without unnecessarily exposing subjects to pharmaceutical compounds.
In the pre-E14 era, ECGs were recorded and measured on paper. The quality of the ECG was suspect as collection procedures, equipment, storage, and timing of the ECGs were not standardized. The focus on the QT-interval caused the industry to re-evaluate the importance of standardization.
Current “best practices” include the recognition that ECG intervals are both time- and heart-rate dependent. Collecting ECGs for measurement of the intervals involves having a subject rest for a period of time (usually 5 or 10 minutes) and comparing ECGs in such a way as to minimize the effects of circadian rhythms.
Measurement and reading of ECGs are now done on a computer screen to maximize precision. Whereas, before digital ECG readings, the precision of the measurement was based on paper speed and line width, current precision is based on sampling rates and on-screen pixel resolution.
Finally, the digital age has brought open-architecture ECG storage. An XML ECG can be read by any number of ECG viewers and is no longer subject to a manufacturer’s proprietary storage schema. This allows a level of transparency whereby different reviewers can see and measure the same ECG with very few technical hurdles.
In a little over a decade, the QT interval has come full-circle, from a regulatory “darling” to “just another feature of the ECG.” However, before we bury the QT interval in the historical archives, it is worth noting a few of the positive results of the analysis of this measurement.
Clinically, the QT interval is now in the consciousness of clinicians. The American College of Cardiology and the American Heart Association released a position paper on preventing TdP in the hospital setting.13 Taking this paradigm further, hospitals such as the Indiana University Health Methodist Hospital have implemented a computerized clinical decision support system (CDSS) to help reduce the prescribing of QT-prolonging medications to patients with known risk factors for TdP.14 They have shown that this type of system can influence prescribing patterns in the hospital.
To conclude, the era of widespread use of the TQT trial appears to be ending. While it is difficult to foresee regulatory agencies totally abandoning a paradigm that has met with some success, shifting the burden of proof from a TQT trial to a combination of pre-clinical data and early phase human data might be beneficial. It is anticipated that the new paradigm will be less costly for the sponsor while yielding more scientifically valid data regarding proarrhythmic risk. However, it is important to note that 12-lead electrocardiograms yield information important to drug development beyond the QT interval, including signals of all-cause mortality.
The author would like to acknowledge Richard Kovacs, MD, for his guidance and review.
Tim Callahan, Chief Scientific Officer, Biomedical Systems