A detailed plan is the key to success. But since even the best-laid plans can go awry, a backup plan is necessary.
The independent review of imaging and clinical data in oncology clinical trials is becoming increasingly important in supporting trial outcomes. The key to designing a successful independent review process centers on prospectively defining the methodology for the review.
Early in the development of an independent review process, several key documents are generated and signed off as agreements between a sponsor and imaging core lab. These documents include a project plan, Independent Review Charter, and the Investigator Site Manual.
Through these documents, the sponsor and imaging core lab define a solid review method by describing how the independent review will be performed, the types of reviewers involved, data to be collected from the investigator site, and the assessments the independent reviewers will provide.
Prospective vs. Retrospective
Often the sponsor is able to manage the review process by managing its documentation. However, while documents may represent an ideal methodology, the review process can break down due to unrealistic expectations, lack of experience and expertise, poor communication, and numerous sustainability issues that can dismantle even the most well-defined independent review.
As a core document, the project plan describes timelines and budgets, but can also be used to prospectively define the communication and data management plan between the sponsor and imaging core lab. This document also defines how investigator sites will be qualified by the lab and the methodology by which queries from the core lab to the site will be managed.
One of the ultimate goals of the project plan is to identify any risks that may impact data collection for the independent review and to form a plan for how those risks will be prevented and, should they occur anyway, mitigated.
Balancing Act: Stringent vs. Adequate
The project plan is heavily influenced by the Independent Review Charter, and ideally these two documents are developed in tandem. The charter defines the overall review process, including the image schedule; types of images and clinical data to be reviewed; the oncology assessment criteria (i.e., RECIST, WHO, IWC, etc.); the types and number of reviewers; the relationship between reviewers (i.e., double reviews with adjudication, single reviews, joint reviews); and reviewer qualifications.
Sponsors are often eager to have a charter in place for review and approval by regulatory authorities, but the key thought leaders involved in designing the charter from the sponsor side, such as the medical lead for the study and the clinical protocol manager, are often unaware of the implications of their decisions. One common pitfall in designing an independent review process is dictating stringent data collection activities and review requirements in the charter.
Early stages of independent review design are more often focused on medical and regulatory issues than on operational logistics. As a result, the push for rapid charter development to meet a regulatory deadline excludes other groups, such as the sponsor's data management, from charter development until too late. That is to say, stringent data collection activities and review requirements defined in the charter and project plan are often implemented with little forethought to the overall logistical and financial burden these requirements place on the trial. Inevitably, in many trials the rigorous data collection guidelines initially mandated are relaxed after a period of prodigious effort to enforce compliance.
Another core document required early in study start-up is the Investigator Site Manual. This document, distributed by the core lab to investigator sites, has a dual purpose: to establish exact imaging parameters and provide logistical assistance, such as shipping instructions to the site. The Investigator Site Manual typically contains far more detail on the requirements for submitting and collecting images for independent review than the clinical study protocol. Since it provides clear and concise requirements for imaging, the manual is a critical document for investigator sites.
Training and adequate monitoring are essential to assuring compliance with imaging guidelines and procedures set forth in the site manual. It is of no benefit to develop complex procedures without ensuring the investigator sites have the means and inclination to comply.
In an ideal situation the core lab will review identical information from the investigator site in the correct sequence. However, since there are two separate data collection activities (by the investigator site and imaging core lab), it is inevitable that data inconsistencies will arise.
Missing scans, unscheduled imaging visits, discrepant assessment dates, and discrepant exam types (e.g., spiral CT versus conventional) create numerous problems when comparing the two analyses. Often, differences of this sort are discovered only after an investigation into why RECIST assessments, for example, are discrepant. If discovered after an independent review, the data must be corrected and re-reviewed, a costly and time-consuming procedure.
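One way a data management group might surface such discrepancies before the independent review begins is a simple cross-check of the site-reported visit log against the core lab's received-image log. The sketch below is illustrative only; the record layout, field names, and sample values are hypothetical, and a real reconciliation would run against the sponsor's clinical database and the core lab's image tracking system.

```python
from datetime import date

# Hypothetical, simplified records keyed by (subject, visit).
site_visits = {
    ("SUBJ-001", "Week 8"): {"scan_date": date(2006, 3, 14), "modality": "spiral CT"},
    ("SUBJ-001", "Week 16"): {"scan_date": date(2006, 5, 9), "modality": "spiral CT"},
}
core_lab_images = {
    ("SUBJ-001", "Week 8"): {"scan_date": date(2006, 3, 15), "modality": "conventional CT"},
    ("SUBJ-001", "Unscheduled 1"): {"scan_date": date(2006, 4, 2), "modality": "spiral CT"},
}

def reconcile(site, lab):
    """Flag missing scans, unscheduled visits, and discrepant dates or exam types."""
    issues = []
    for key, s in site.items():
        l = lab.get(key)
        if l is None:
            issues.append((key, "scan reported by site but not received by core lab"))
            continue
        if s["scan_date"] != l["scan_date"]:
            issues.append((key, "discrepant scan dates"))
        if s["modality"] != l["modality"]:
            issues.append((key, "discrepant exam types"))
    for key in lab:
        if key not in site:
            issues.append((key, "image received without a matching site-reported visit"))
    return issues

for key, problem in reconcile(site_visits, core_lab_images):
    print(key, "->", problem)
```

Running such a check on an ongoing basis, rather than only when review results disagree, is what allows corrections to happen before, rather than after, the independent read.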
Aside from radiological data, the clinical data used during the independent review consistently becomes a source of complication. The sponsor's data management group is primarily responsible for receiving and redacting clinical information from the investigator sites for review by the independent radiologists and/or oncologists. In oncology trials where both radiologists and oncologists act as independent reviewers, clinical data can have a significant impact on the review results. Listing this data in the charter is critical so that the core lab knows what type of data it will receive. However, a type and organization of data that looks feasible during charter development very often becomes problematic by the time reviews are set to begin.
Part of the problem with preparing and organizing clinical data for the independent review process is due to the way in which protocols are written. Study protocols are often developed without much forethought to collecting clinical information that easily facilitates independent review.
In oncology trials, data such as biopsy or cytology results often have a significant impact on how data are interpreted by the independent reviewers. An oncology dossier of predetermined clinical information from the sponsor's clinical CRF must be presented in a format that is relevant and unbiased.
Typically, the oncology review component of the review process involves an oncologist reviewing all radiological results in conjunction with a dossier for the subject's case, which covers the entire range of the subject's treatment during the trial. This type of review scenario is generally the preferred option in oncology trials. However, a dossier specified to be sectioned by timepoints/visits within a subject's case may be more scientifically sound than the global oncology dossier for certain disease indications.
In a chronic lymphocytic leukemia trial, where clinical data plays as large a role as, or a larger role than, radiological data typically does in a solid tumor study, a dossier organized by timepoint may be more relevant to the clinician interpreting the data than a dossier covering the subject's entire case. Similarly, in a melanoma trial, where skin lesion photographs can be reviewed and qualitatively assessed on a timepoint/visit basis, data may be more relevant if presented by timepoint. This timepoint-by-timepoint organization may sound reasonable in theory, but it can present a challenge to the sponsor's data management groups that will actually be organizing and exporting the data to the imaging core lab. The net result is that the charter must be amended, usually far too late in the process, because of limitations in the data collection process.
Data management groups often attempt to format the independent review data similarly to the investigator data for ease of comparison. However, this traditional method often fails when confronted with differences between the two review processes, particularly when there are multiple types of reviewers. For example, a radiologist is required to perform imaging measurements and an oncologist is required to review clinical data, with both reviewers working independently. This differs from the investigator, who can fulfill both roles simultaneously. An enlarging pleural effusion, for instance, may instantly be diagnosed as benign by the investigator, who has cytology results readily at hand. However, the independent radiologist may be forced to record "unknown" for a response and defer the matter to the independent oncologist, who has access to the clinical information.
The sponsor's data management group must be cognizant of differences like these and carefully consider the collection and analysis of independent reviewer data. More importantly, they must be a key player in the charter development process, when many of the decisions on data collection are made.
Although prospectively defining a solid independent review process may appear easy enough with the many key documents in place, there are also sustainability issues that can ultimately hinder the process. Employee turnover can be detrimental to maintaining timelines and to preserving a historical memory of why certain decisions were made for a trial. A lack of understanding of the purpose of independent review also routinely leads to a communication breakdown between the core lab, the sponsor, and the investigator sites. Sponsors must consider the need for some conceptual training on the purpose and ultimate goal of the independent review process.
At the very least, the protocol may be a vehicle to highlight the existence of an independent imaging review and provide some high-level instruction to investigator sites.
While there is considerable desire by both the sponsor and the imaging core lab to limit the number of independent reviewers in an effort to control variability, reviewers often leave for unforeseen reasons during the review period. Likewise, study requirements may change, necessitating the addition of extra reviewers late in the review process.
It makes sense to assume that the ideal independent review process would use the most experienced thought leaders within the therapeutic area as independent reviewers. However, these experts often have very demanding schedules and limited time to dedicate to independent review activities, which offers little flexibility with regard to timeline constraints. It is important during early charter discussions, which are often dominated by scientific rather than operational concerns, to strike a balance between the number and availability of reviewers and the expected reviewer workload.
Prospectively defining the independent review process through documents such as the project plan, Independent Review Charter, and Investigator Site Manual—among other documents—allows the sponsor to manage the actual review process by managing the documentation.
The sponsor and imaging core lab must carefully consider the implications of these documents and set realistic expectations for how issues that arise can be easily mitigated. However, regardless of how well-defined the independent review process is, it often breaks down due to a lack of enforcement of the methods defined. Similarly, a process that is too stringent poses the risk of setting unrealistic expectations that the sponsor, investigator sites, and core lab simply cannot meet.
Poor communication as well as employee and reviewer turnover can also have detrimental effects on even the most well-planned processes. While situations may often not be anticipated, mitigation plans for what to do should issues arise can always be defined. The best defense in ensuring a successful review is to limit the occurrences that can be anticipated and prepare a solid plan for those that cannot be.
In the long run, prospectively defining how to handle common pitfalls in the independent imaging review process is far easier than trying to develop postreview contingency actions.
Stephen Bates is program director of the oncology/cardiology group in the medical imaging division of Perceptive Informatics, a PAREXEL company, 900 Chelmsford Street, Suite 309, Lowell, MA 01851. Kelie Williams,* BS, MS, MTPW, is a senior medical imaging technical writer in the imaging technical medical writing group at Perceptive Informatics, email: email@example.com.
*To whom all correspondence should be addressed.
1. P. Therasse, S.G. Arbuck, E.A. Eisenhauer, et al., "New Guidelines to Evaluate the Response to Treatment in Solid Tumors," Journal of the National Cancer Institute, 92, 205–216 (2000).
2. World Health Organization, WHO Handbook for Reporting Results of Cancer Treatment, Offset Publication No. 48 (Geneva, Switzerland, 1979).
3. B.D. Cheson, J.M. Bennett, M. Grever, et al., "National Cancer Institute-Sponsored Working Group Guidelines for Chronic Lymphocytic Leukemia: Revised Guidelines for Diagnosis and Treatment," Blood, 87, 4990–4997 (1996).