Art & Science of Imaging Analytics

Article

Applied Clinical Trials

March 1, 2013
Volume 22
Issue 3

Incorporating imaging analytics can improve the quality of work and minimize unexpected delays.

More often than not, when exploring options to manage image acquisition and analysis, many investigators (sponsor and CRO alike) face the "I don't even know where to begin" dilemma. Many researchers find it challenging just to stay current on the state-of-the-art in imaging and image analysis, let alone imaging-related requirements for preclinical and clinical trials. Rapid advances in image acquisition technology and continued developments in image analysis software are changing the depth and breadth of clinical research. Imaging permeates clinical and basic research across many scientific disciplines, including oncology, orthopedics, ophthalmology, cardiology, and neurology, and across entities focusing on pharmaceuticals, medical device development, cell and molecular biology, tissue engineering, and beyond.

As organizations delve into the clinical trial process, they discover that there are many options for creating an optimal trial protocol that will maximize their ability to successfully meet desired primary and secondary endpoints. A critical part of the trial protocol is the "power" calculation, which determines the number of patients required so that a true treatment effect can be reliably detected (typically with 90% to 95% power) and statistical significance concluded at the end of a trial. Inevitably, in order to reach such significance in a highly variable human population, the recommended number of patients in these studies can be quite large. Unfortunately, with large patient cohorts, the effort, time, and cost required to analyze and score all of the medical image data acquired during the trial can increase exponentially and pose a significant barrier to getting a new medical device or therapeutic to market. Oncological and orthopedic clinical trials, for example, frequently include a CT- or MRI-based imaging component as part of their secondary endpoints. Such endpoints are often vital for safety and efficacy assessments and provide essential evidence for market adoption. Unfortunately, to account for inherent patient variability in response to an implant, device, or drug, and for the subjectivity of multiple observers and scoring systems, these trials require extremely large patient numbers that consequently produce skyrocketing costs related to patient procedures, physician assessments, compliance, and radiological imaging and evaluation.

With all this in mind, a trial researcher has the daunting task of weighing the cost and effort associated with effectively evaluating the safety and efficacy of a device against the potential return on investment and predicted market adoption. Fortunately, there is a new model emerging that blends CRO manpower, software engineering, and imaging-specific expertise, and that can alleviate the cost and effort portion of this equation and even potentially aid with market acceptance. In this novel approach, proven algorithms housed in an extensive software library serve as the basis for development of customized 2D/3D/4D image analysis and visualization software tailored to the specific needs of a given clinical trial. This "software library" is akin to the spice rack that a master chef would have, and the "algorithms" are equivalent to the spices the chef would use to create fine cuisine (or "image analysis software"). As such, it is critical to the successful application of the algorithms that the engineer, or "chef," has adequate domain knowledge not only in software development, optimization, and image analysis/acquisition, but also in the translation of quantitative image metrics to relevant pathological outcomes. By properly combining these elements, image quantification that previously required months of manual delineation can now be automated, yielding increased throughput, precision, and objectivity, all adapted to an individual study's image-based endpoint requirements.

The way it has always been done

Traditionally, researchers have utilized the services of a contract research organization (CRO) to run their clinical trials. With regard to image analysis and interpretation, most of these CROs provide teams of clinical staff to secure images and perform manual scoring and data reporting. Unfortunately, this approach is labor- and cost-intensive, with significant margins for human error as a result of inherent human subjectivity and inter- and intra-observer variability. Furthermore, it is virtually impossible for even the most skilled technicians to "quantify" their observations; as a result, they can only offer qualitative assessments (i.e., "scores"). As you can probably imagine, relying solely on subjective, qualitative data not only weakens the ability to defend the results of a clinical trial, but it also necessitates a larger patient cohort to achieve endpoint statistical significance, and hence more time and cost to complete the trial. Let's take a look at this relative to a real-world analogy that almost everyone has had to deal with at one time or another.1

You are driving down the freeway and you pass under a bridge. Much too late to slow down, you see a radar gun pointed at you. As you slam on the brakes and pray, the officer jumps into his car and begins to follow. Trying to decide whether to make a getaway or break out into tears, you eventually pull over and wait for the officer to approach your car. Of course you are asked the age-old question, "Do you know how fast you were going back there?" Flashing back to grade school when the teacher asks you, "Where's your homework?," you stammer, "I'm not sure, how fast was I going officer?" Imagine your reaction if the officer replied "Well...I don't know, but on a scale from 1 to 10, I'd say you were doing about an 8." Or better yet, what if the answer was "I'm not sure exactly, but I measured the speed of 80% of the last 100 cars to pass under that bridge and they were all speeding, so statistically speaking you must have been speeding as well."

This is not how law enforcement works in the real world, so why should it be the way organizations approach clinical imaging analytics? As imaging technologies continue to produce not only more detailed image data, but also more of it, it becomes increasingly difficult to perform a comprehensive and objective analysis of the data, let alone one that involves precise and quantitative measurements.

Take, for example, preclinical trials in which histology is often used to score bone growth in an orthopedic bone scaffold. It is possible, due to the natural randomness of the healing process, that some areas of the scaffold will have more new bone growth than others. In the traditional model, some percentage (e.g., 10% to 20%) of the entire tissue volume would be histologically sectioned, stained, and analyzed manually.

So if your budget allows for the sampling of only 10% of the entire volume, you run the risk of misinterpreting the true performance of the implant. Worse yet, you leave yourself open to the possibility of a false positive, wherein the scaffold performs well according to scores in the 10% of sections analyzed, while the other 90% of the scaffold that was not evaluated actually contains no bone integration. In this case the scaffold is erroneously characterized as efficacious because your approach to imaging and image analysis lacks the resolution and comprehensiveness to provide data representing the full picture. In some studies, inter- and intra-observer variability has been documented to be as high as 100% (coefficient of variation). Combine this with the challenges of data sub-sampling and you have created a situation where a material amount of time and money has been invested only to generate data that offers a limited and potentially inaccurate view of your product's performance. Often, this can lead to misguided decisions regarding the status of the product R&D program.
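The sub-sampling risk is easy to demonstrate numerically. The following is a minimal simulation, using entirely hypothetical numbers (a 200-section scaffold with new bone concentrated in a small region), of how a 10% histology sample can badly misestimate the true extent of bone growth:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scaffold: 200 histology sections, with new bone growth
# concentrated in a small sub-region (values are bone-area fractions).
n_sections = 200
true_bone = np.zeros(n_sections)
true_bone[20:50] = rng.uniform(0.4, 0.8, size=30)  # localized ingrowth
true_mean = true_bone.mean()

# Traditional approach: score only 10% of the sections, chosen at random.
estimates = []
for _ in range(10_000):
    sampled = rng.choice(true_bone, size=n_sections // 10, replace=False)
    estimates.append(sampled.mean())
estimates = np.array(estimates)

print(f"true mean bone fraction        : {true_mean:.3f}")
print(f"10% sample, 5th-95th percentile: {np.percentile(estimates, 5):.3f}"
      f" to {np.percentile(estimates, 95):.3f}")
# With heterogeneous ingrowth, the 10% sample can substantially over- or
# under-estimate the true bone fraction, including near-zero estimates
# when the sampled sections happen to miss the region of new growth.
```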

Fortunately, advanced techniques for the co-registration and correlation of micro-computed tomography (micro-CT), clinical computed tomography (CT), and automated high resolution, large field-of-view histomorphometry (histology) image data have been developed to enable the comprehensive and quantitative analysis of the entire tissue volume or region-of-interest (ROI) in which a scaffold or implant has been placed. As a result, an investigator can compare images and analysis outputs of histology sections with corresponding slices from volumetric datasets. Employing this approach, the number of histology slides that need to be prepped, processed, and analyzed may be reduced by 50% or more while increasing the volume of the implant actually analyzed.

Similarly, in clinical practice it is common to employ trained professionals to manually score image data. This could involve "reading" MR images to assess joint integrity and tissue response to implanted devices; CT images to determine the formation or loss of bone, remodeling of articulating surfaces, and signs of impingement; positron emission tomography (PET) images to discern the location and concentration of radio-labeled cells or drugs; and CT-angiography images to evaluate blood vessel morphology and obstruction. Unfortunately, when assessed solely by a trained observer, these reads are generally both qualitative and subjective. Qualitative in that the presence of pathology is scored as either "yes" or "no," or perhaps on a scale of 0 to 5. Subjective in the sense that one image reader, based on his or her experience, might submit a score of "1" while another reader may assign a score of "3" (inter-observer variability), or a single reader might score the same image differently on two separate occasions (intra-observer variability).2, 3 With this in mind, it is not surprising that so many regulatory applications are rejected on the basis of data inconsistencies, or why it requires multiple, costly iterations to obtain market approval for a new biomedical product.

Fly the plane yourself or hire a pilot?

Recently, a number of software vendors have emerged offering universal "plug-and-play" solutions that provide researchers with "do-it-yourself" image analysis capabilities. Unfortunately, these packages present steep learning curves and ultimately result in less "plug and play" and more of a manual, brute-force approach to image analysis. Akin to jumping into the cockpit of an F-18 fighter jet and pushing the throttle forward, the thrill of loading your images into advanced software is just that...thrilling. While getting the F-18 off the ground is probably within the grasp of most everyone reading this article, flying to a specific destination and safely landing the plane is far less probable and will likely end in disaster. Similarly, critical knowledge is needed to appropriately plan for and optimize imaging analytics software for a clinical trial. Lack of foresight regarding image analysis goals and acquisition protocols in the planning stages of a clinical trial can result in flawed data, ill-conceived primary and secondary endpoints, and substantial setbacks relative to clinical trial progress and final efficacy claims. This can significantly escalate initial time and cost estimates for a successful trial.

The large amount of inter-patient variability seen in clinical studies can be a challenge to overcome, and the vast array of medical applications (pathologies, devices, etc.) that can benefit from imaging and image analysis (e.g., cardiovascular, orthopedic, oncology, in-vitro cell phenotyping and signaling) means that the reality and complexity of life science R&D demands a comprehensive, application (or study)-specific approach. It is virtually impossible to repurpose a single software package to handle an infinite number of image measurements relative to the vast array of implants, drugs, and biologics that could potentially be conceived. Commercially available software marketed as an all-in-one, comprehensive solution is most often designed, verified, and validated for specific applications, with generalized outputs defined in previous studies. Varying study parameters such as anatomical site, treatment dose, delivery, regimen, imaging vendors, and scanning protocols (resolution, field-of-view, etc.) can invalidate such software. To compensate, some packages offer user-adjustable features to provide flexibility, such that the end user can adjust various filter metrics or input parameters to fit a study's needs. Unfortunately, these software packages present steep learning curves for the average user and require an experienced biomedical imaging analytics engineer (the "master chef") to comprehend and adequately assess the consequences of adjusting specific image processing components. Lastly, it can be cost-prohibitive for a software company to develop, obtain appropriate regulatory clearance for, and subsequently market single-use software packages for every type of medical imaging and analysis application conceivable.

Extending the F-18 analogy, organizations conducting large patient trials involving multiple imaging modalities and multiple time points should consider the value of "hiring the proper pilot" to generate effective image analysis algorithms to meet study needs. Fortunately, in an emerging trend, a number of professionals with post-doctoral experience have combined their passion for biomedical imaging with computer science and software engineering to offer clinical trial researchers a well-aligned solution for their analysis needs. These professionals are available for hire as imaging analytics experts: individuals who can generate customized solutions for the imaging-intensive portions of a clinical trial. These individuals should have substantive experience in medical imaging hardware/optics; biomedical image analysis; biomedical engineering (MS or PhD degree); and software engineering (optimization, interface development, graphics programming). Imaging analytics experts are most often found in core labs at academic research institutions or within software-enabled imaging contract research organizations (ICROs). This unique blend of biomedical engineering training, image processing and analysis know-how, and programming expertise enables these engineers to provide valuable assistance in the design of acquisition protocols across multiple modalities that are optimized for extraction of quantitative parameters by uniquely adapted algorithms.

Protocol development. The selection of appropriate imaging protocols can dictate the success or failure of an image-intensive clinical trial. Experienced imaging experts can provide a trial researcher with suitable options to optimize image quality and feature prominence for a given image acquisition modality (e.g., CT, MR, X-ray, PET, CT-A, etc.). These could range from CT protocols that provide the highest resolution for a given field-of-view without imparting large radiation doses, to MR sequences and field strengths that enable preferential enhancement of specific tissue types, to X-ray views that produce the most repeatable and relevant orientations of an implanted device.

Common imaging modalities used in clinical trials include, but are not limited to, computed tomography (CT), magnetic resonance (MR), positron emission tomography (PET), single-photon emission computed tomography (SPECT), X-ray, ultrasound, and variations on these technologies. Briefly, CT and MR are 3D imaging modalities that enable clinicians to study dense (e.g., bone) and soft (e.g., cartilage, fat, etc.) tissue in three dimensions. PET and SPECT imaging require intravenous administration of radionuclide tracers and provide physicians with the ability to assess organ function or the localization and temporal distribution of a drug or biologic within the body (e.g., drug metabolism). X-ray imaging is a long-standing 2D modality that is most commonly used in bone-related studies (e.g., orthopedic implant function, fracture healing, etc.) to assess new bone growth, implant/device translation, changes in bone quality, etc. Lastly, ultrasound imaging can be used to evaluate blood flow (specific segment flow rates or turbulence, tumor perfusion, etc.), cardiac function, and to assess and visualize various soft tissues and whole organs to confirm the presence of pathology that produces measurable density variation (i.e., calcifications, plaque, necrosis, etc.).

When selecting an appropriate imaging modality for a study it is critical to consider patient safety. For example, if CT imaging is required for particular treatment/device assessment, patients will be exposed to ionizing radiation. Thus, it would be important to understand and account for the various safety rules and regulations governing the amount of radiation a patient can be exposed to for a given scanning session, what exposure frequency is permissible and how these and other parameters vary with anatomical region of interest. For CT and X-ray imaging, radiation dosage levels and guidelines are well documented.4 For further information on the guidelines for patient radiation dosage, organizations such as the American College of Radiology, the Agency for Healthcare Research and Quality, and the Radiological Society of North America are valuable resources.5

It is also important to consider the availability of acquisition equipment. If your imaging protocols require an imaging technique or instrument that is only available at one of 10 healthcare institutions, you run the risk of stalling your trial due to lack of qualified patients recruited within the defined geographical boundaries of the study. In this case an imaging expert will be able to help find the right balance between technology and expected study aims to develop appropriate protocols that are valid across multiple sites if necessary.

Complementing imaging experts, image analysis engineers may aid in the selection of optimal imaging modalities and approaches (i.e., time points, views, reconstruction filters, etc.) that provide the most suitable images for feature segmentation. By subsequently integrating customized quantitative imaging analysis into the clinical trial protocol, it is possible to significantly increase the quality of endpoint measurements and reduce the number of patients required to meet ideal alpha (p-value) and beta (power) values for a given trial. Fewer patients translate to less cost and time required for successful study completion. Optimized imaging and analysis protocols will also increase the likelihood of meeting study endpoints, potentially providing a smoother pathway for regulatory and marketing approval.
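The relationship between measurement variability and cohort size can be made concrete with the standard sample-size approximation for comparing two group means. The sketch below uses hypothetical numbers (an endpoint standard deviation that is reduced by swapping subjective scores for automated quantification) and the normal-approximation formula, not any trial-specific calculation:

```python
from scipy.stats import norm

def n_per_arm(sigma, delta, alpha=0.05, power=0.90):
    """Approximate patients per arm for a two-sample comparison of means
    (normal approximation): endpoint standard deviation `sigma`, minimum
    detectable difference `delta`, two-sided `alpha`, desired `power`."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2

# Hypothetical endpoint: detect a 5-unit difference in a quantitative
# imaging metric. Reducing measurement variability (SD of 12 -> 8 units,
# e.g., by replacing subjective scores with automated quantification)
# cuts the required cohort size by more than half, since n scales with
# the square of the standard deviation.
for sigma in (12.0, 8.0):
    print(f"endpoint SD = {sigma:4.1f} -> "
          f"~{n_per_arm(sigma, delta=5.0):.0f} patients per arm")
```

In this illustrative case, the required cohort drops from roughly 121 to roughly 54 patients per arm.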

Selection of appropriate analysis algorithm(s). Once a wish list of appropriate image-based metrics has been defined to satisfy a trial's primary and secondary efficacy endpoints, it is crucial to consult an imaging expert to determine whether or not extraction of these metrics is in fact feasible with an established algorithm/software package, by a trained observer, or by a combination of both. Obviously, software will never be used solely in place of qualified medical professionals. However, in some instances, analysis algorithms may be used to significantly improve the efficiency and workflow of a medical professional (i.e., automated time-point registration and multi-planar reformatting). In cases where an implant's performance metrics are highly complex and require quantitative precision, or where large patient cohorts necessitate analysis automation, appropriately selected algorithms can drive parameter output once those algorithms have been verified and validated by medical professionals.6, 7, 8
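As one illustration of the kind of established algorithm referenced above, structure thickness can be estimated from a binary segmentation using a Euclidean distance transform, in the spirit of the model-independent thickness methods cited in references 6 and 7. The sketch below is a simplified stand-in (not the full maximal-sphere algorithm) and uses a synthetic volume with assumed 20 µm voxels:

```python
import numpy as np
from scipy import ndimage

# Hypothetical binary segmentation of a trabecular-bone-like structure
# (True = bone voxel), e.g., thresholded from a micro-CT stack.
rng = np.random.default_rng(1)
volume = ndimage.binary_dilation(rng.random((64, 64, 64)) > 0.97, iterations=3)

voxel_size_mm = 0.02  # assumed 20 µm isotropic voxels

# Euclidean distance transform: distance of each bone voxel to the
# nearest background voxel, in millimeters.
dist = ndimage.distance_transform_edt(volume, sampling=voxel_size_mm)

# Crude thickness estimate: twice the mean distance along locally maximal
# (roughly medial) voxels. The full model-independent method fits maximal
# spheres at every point instead; this is only a sketch of the idea.
local_max = (dist == ndimage.maximum_filter(dist, size=3)) & volume
mean_thickness = 2 * dist[local_max].mean()
print(f"approximate mean structure thickness: {mean_thickness:.3f} mm")
```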

Custom algorithm development (if necessary). Depending upon the complexity of the output parameters, generalized algorithms alone may not provide effective solutions. An imaging analytics expert with intimate knowledge of study protocols and clinical goals, and with software engineering expertise, however, can very quickly assess the need for custom image analysis algorithm development. When pre-existing algorithms are unable to extract the data needed to support an efficacy claim or meet a study's endpoints, new algorithms can be developed and tailored to account for various aspects of a clinical trial (e.g., treatment type, anatomical region-of-interest, imaging modality, implant material/morphology, etc.).9

Proper controls and validation. It is of vital importance to define appropriate study controls and sound validation methodology to certify image analysis algorithms before your clinical trial protocols are finalized. It could be disastrous to generate volumes of quantitative data only to have a regulatory official dismiss them because the analysis approaches were not appropriately tested. For example, if a researcher is testing a new drug hypothesized to reduce the localized tissue damage caused by a stroke to either the left or right hemisphere of the brain, the contralateral ("unaffected") side can be used as an internal control for each patient, serving as a standard for "healthy" or "normal" tissue. Using customized image analysis algorithms to flip and spatially register the normal and pathologic sides of the brain, quantitative information on the drug's efficacy and safety can be extracted based on a patient's own anatomy rather than on randomly selected "normal" subjects. This same approach can be applied across all patients and treatment cohorts. Figure 1 demonstrates this concept in an orthopedic clinical study.10 Such an approach requires additional steps during the evaluation process for each patient; however, in most cases the routine can be automated or at the very least optimized for efficient workflow. As an example, the registration shown in Figure 1 required ~2 minutes of user-guided pre-alignment followed by ~3 minutes for automated, algorithm-based registration (mutual information algorithm). Ultimately, the ability to normalize patient metrics across a large cohort far outweighs the minor investment in time.

Figure 1. A. CT of a normal scapula. B. CT of the bilateral pathologic scapula from the same patient. C. Superimposed isosurface renderings of normal scapula (green) and mirrored/spatially co-registered abnormal bilateral scapula (blue).
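For readers who want a feel for what such a mirror-and-register step involves, the following is a minimal sketch using the open-source SimpleITK toolkit; it is not the authors' software, and the file names and optimizer settings are illustrative placeholders:

```python
import SimpleITK as sitk

# Hypothetical inputs: CT of the normal (fixed) and pathologic (moving)
# contralateral anatomy from the same patient; file names are placeholders.
fixed = sitk.ReadImage("normal_side_ct.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("pathologic_side_ct.nii.gz", sitk.sitkFloat32)

# Mirror the pathologic side across the first image axis (assumed to be
# left-right) so both sides share the same orientation before registration.
moving = sitk.Flip(moving, [True, False, False])

# Rigid registration driven by a mutual-information metric, analogous to
# the pre-alignment plus automated registration described above.
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.1)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                             minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(fixed, moving,
                                      sitk.Euler3DTransform(),
                                      sitk.CenteredTransformInitializerFilter.GEOMETRY))

transform = reg.Execute(fixed, moving)
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(aligned, "pathologic_side_registered.nii.gz")
```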

Figure 2 illustrates another method for image analysis algorithm validation. Volumetric scans of various types of orthopedic screws, either implanted into a cadaver or taken directly from the manufacturer's packaging, were acquired using a conventional CT scanner prior to the start of a clinical trial. Customized implant segmentation and analysis routines were used to extract volumetric and density measures, which, in turn, were compared to values obtained from micro-CT scans of the same screws (20 µm voxel resolution). Additionally, when possible, non-resorbable objects present at each timepoint for a given patient may provide a variability measure for algorithm performance. As an example, the metallic screw shown in Figure 2D had an acceptable ~4% variation in volume measurement across multiple timepoints using an algorithm developed for anterior cruciate ligament (ACL) tunnel analysis; a minimal version of this precision calculation is sketched after the list below. Data such as this, derived from simple pilot scanning or retrospective studies, can enable biostatisticians to confidently assess the patient population size required to demonstrate statistical significance of a study's primary and secondary endpoints.11 Regardless of how you and your imaging analytics expert decide to validate custom-tailored image analysis algorithms, it is important to consider the time required. The validation time will vary based on:

  • How novel the approach is (existence of previous trials, publications)

  • Availability of pre-existing patient data

  • Regulatory agency guidelines/feedback

  • The imaging modality

  • The variability inherent in the "trusted standard" (e.g., radiologist/pathologist score)

Depending on the level of application-specific expertise, the factors mentioned above, and the availability of resources, the time required to iteratively validate customized imaging analytics algorithms could range from a few days to a few months.
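The timepoint-to-timepoint precision check described above amounts to a coefficient-of-variation calculation on a structure that should not change. A minimal sketch, with placeholder screw volumes standing in for real segmentation outputs:

```python
import numpy as np

# Hypothetical screw volumes (mm^3) segmented from one patient's CT scans
# at successive follow-up timepoints; the metallic screw itself does not
# change, so any spread reflects imaging plus algorithm variability.
screw_volumes = np.array([395.0, 410.0, 428.0, 402.0])  # placeholder values

cv = screw_volumes.std(ddof=1) / screw_volumes.mean()
print(f"coefficient of variation across timepoints: {cv:.1%}")
# A coefficient of variation of only a few percent (the article cites ~4%
# for the ACL tunnel example) supports the precision claim for the
# segmentation algorithm and feeds directly into sample-size planning.
```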

Visualization. Medical device and drug performance can be difficult to convey to stakeholders, regulatory agencies and potential customers using only tables and graphs. By incorporating custom-tailored quantitative data visualization schemes into your clinical trial protocol during initial planning stages, you will be in a position to build a much stronger case for your product's safety and efficacy claims. Visualizations derived directly from a trial's quantitative imaging data may be used to effectively and succinctly communicate how well a device or drug met a study's primary endpoints in a WYSIWYG manner (what you see is what you get). This will impact not only the chances of receiving regulatory approval, but also the ability to convince clinicians and patients that the new product is an improvement over the current standard of care. Quantitative visualization is a significant improvement over the traditional approach of relying on artistic or digital renderings which are derived from subjective, qualitative image scoring data, or input from trained image readers.

Figure 2. A. Patient's CT scan of the tibia. B. Clinical CT of the screw prior to insertion (isosurface rendering on the left and orthogonal planar views on the right). C. Micro-CT of the screw prior to insertion (isosurface rendering on the left and orthogonal planar views on the right). D. Precision analysis of femoral metallic screw volumes segmented from temporally acquired clinical CT scans (~4% variation in volume).

Financial considerations

There is a financial case to be made for utilizing customized image analysis in clinical trials. Depending on the sophistication and volume (patient cohort size) of the analysis required, substantial cost and time savings can be realized. For example, using quantitative and comprehensive data derived from custom-tailored image analytics often translates to lower patient numbers required to reach statistical significance for safety or efficacy endpoints. This is a direct result of the improved objectivity that accompanies automated analysis, and of increased precision free of trained-user or expert-reviewer bias. Reducing cohort size in this manner can dramatically lower costs over a multi-year study. In some cases the reduction in costs can be attributed to the removal of manual steps for counting/tagging or tracing regions of interest. In other cases, the development of tools customized to a given study will not only make contracted medical professionals more efficient in their work (e.g., image scoring), but will also yield more quantitative and comprehensive data. In other words, employing such an approach can translate into more effective use of R&D and clinical research dollars.

Conclusion

The image acquisition and analysis components of R&D and clinical trials can be quite complex. Harnessing the power of imaging analytics and incorporating them into a clinical study is not a simple task, but the benefits gained can make it a worthwhile investment. Once integrated, these new approaches to imaging and analysis can provide a number of benefits:

  • Increased precision, throughput and objectivity of outcome metrics with automation of traditionally manual image scoring processes

  • Reduced likelihood of unexpected delays due to personnel turnover and expertise unavailability

  • Reduced cost and guesswork associated with clinical trials, through more comprehensive and quantitative data and potentially smaller patient cohorts

  • Improved ability to communicate and leverage study results through quantitative data renderings and visualizations (2D or 3D)—collateral for market adoption

Amit Vasanji,* PhD, is Chief Technology Officer at ImageIQ, Inc., e-mail: info@image-iq.com. Brett Hoover, MS, is Vice President at ImageIQ, Inc.

*To whom all correspondence should be addressed.

References

1. R. Taranto, "Imaging CROs and their Impact on Clinical Trials," Drug Development and Delivery: Specialty Pharma, 7 (6) 72-76 (2007).

2. J. T. Sharp, et al., "Variability of Precision in Scoring Radiographic Abnormalities in Rheumatoid Arthritis by Experienced Readers," The Journal of Rheumatology, 31 (6) 1062-1072 (2004).

3. A. M. Wainwright, J. R. Williams, and A. J. Carr, "Interobserver and Intraobserver Variation in Classification Systems for Fractures of the Distal Humerus," The Journal of Bone and Joint Surgery, 82 (5) 636-642 (2000).

4. "ACR Practice Guideline for Diagnostic Reference Levels in Medical X-ray Imaging," American College of Radiology, Resolution 3 (2008).

5. M. S. Stecker, et al., "Guidelines for Patient Radiation Dose Management," Journal of Vascular and Interventional Radiology, 20 (7 Suppl) 263-273 (2009).

6. T. Hildebrand and P. Ruegsegger, "A New Method for the Model-Independent Assessment of Thickness in Three-Dimensional Images," J Microsc, 185 (1) 67-75 (1997).

7. T. Saito and J. I. Toriwaki, "New Algorithms for Euclidean Distance Transformation of an N-Dimensional Digitized Picture with Applications," Pattern Recognition, 27 (11) 1551-1565 (1994).

8. Y. F. Tsao and K. S. Fu, "A Parallel Thinning Algorithm for 3-D Pictures," Computer Graphics and Image Processing, 17 (4) 315-331 (1981).

9. R. Shekhar, et al., "High-Speed Registration of Three- and Four-Dimensional Medical Images by Using Voxel Similarity," Radiographics, 23 (6) 1673-1681 (2003).

10. J. J. Scalise, et al., "The Three-Dimensional Glenoid Vault Model Can Estimate Normal Glenoid Version in Osteoarthritis," J Shoulder Elbow Surg, 17 (3) 487-491 (2008).

11. C. S. Winalski, A. Vasanji, and R. J. Midura, "Evolution of Tibial Tunnel Contents After Anterior Cruciate Ligament Reconstruction: A Computed Tomography Image Analysis," Society of Skeletal Radiology, presentation/abstract, Las Vegas, NV, March 14, 2010.
