A newly published reference guide answers a wide variety of GCP questions.
At its core, good clinical practice (GCP) is a set of broad regulatory requirements, standards, and recommendations that apply to thousands of highly specific tasks, processes, and roles in the conduct of clinical research. Given the disparity between the specificity of clinical trial processes and tasks and the general GCP requirements and standards under which they occur, it is not surprising that interpreting and implementing GCP standards continues to present challenges to the pharmaceutical, biotechnology, and medical device industries.
As is true of any area in which interpretation plays such a central role, the GCP discipline is today, as much as it has ever been, characterized by many long-standing and emerging questions regarding how broad GCP standards should be applied in the real world of clinical trials. Today, emerging questions in such areas as Part 11 and electronic clinical trials, as well as the new Privacy Rule and its implications for clinical research, are taking their places alongside long-standing GCP gray areas, including the way FDA GCP standards intersect with ICH GCP principles and the federal government's Common Rule.
Recognizing this, we set out to systematically collect, catalog, and answer the most important, emerging, and difficult questions regarding the interpretation and implementation of GCP standards today. In doing so, it is our hope that the resultant work, Good Clinical Practice: A Question & Answer Reference Guide, will represent the next step in GCP training and instruction, one that begins where all others leave off. To Plato, the most important intellectual activity was a question and answer process: the dialogue. We have, in a way, entered into a print dialogue that permits us to highlight, address, and explore a vast array of GCP-related questions that have persisted for many years without definitive answers.
In creating this work, we selected a few hundred questions culled from a much larger list. In addressing these questions, we had one clear and overriding goal: To provide definitive answers where they exist, and to provide informed, thoughtful, and well-researched answers where they do not. In developing these answers, we called upon our own experiences in clinical research, reviewed hundreds of regulatory documents, and consulted directly with dozens of FDA officials and a network of colleagues, opinion leaders, attorneys, and clinical researchers.
The answers in this reference guide reflect the multiple layers of standards that may be applied under GCP. In many areas, for example, emerging best-practice standards or international standards (ICH, for example) may surpass those required or recommended by FDA GCP.
No absolute consensus
Many areas lack an absolute consensus in approach, and there is no perfect knowledge in the complicated, evolving discipline of GCP, which intersects with several other complex and evolving disciplines, including ethics, medicine and nursing, health systems, regulatory, administrative and case law, management science, information technology, biostatistics, risk analysis, public health, and health policy. Reasonable practitioners with varied backgrounds can, and sometimes do, disagree about how best to interpret and implement GCP standards and guidelines.
We anticipate that not all readers will agree with some of the answers provided in this reference guide. Put simply, we could not have endeavored to provide meaningful answers without assuming such a risk. It is our fondest hope, however, that this work can spark new discussions or further existing dialogue that will, in the end, lead to more clarity and consensus in areas in which they are so badly needed.
It is important to note that the reference guide addresses GCP as it applies to clinical research on human beings. It does not discuss GCP as it might apply to the practice of medicine or nursing, to health care administration, to veterinary or nonclinical studies, or to non-U.S. laws or customs.
Monitoring frequency

Q: In the FDA's view, what is an acceptable monitoring frequency for clinical studies?
A: Industry and even FDA officials often speak of an informal industry standard under which each clinical trial site should be visited, on average, every four to six weeks. At an April 2003 FDA workshop, for example, CDER Office of Medical Policy Director Robert Temple, M.D., mentioned that the usual industry standard of "every four weeks [for] every site [to be inspected]" may not be appropriate for large, simple safety trials, which some agency officials have been advocating in some cases.
The FDA's GCP regulations state only that the sponsor shall monitor the progress of all clinical investigations under its IND. Although it is far from definitive, the FDA's Guideline for the Monitoring of Clinical Investigations (1988) adds, under a section entitled "Periodic Visits," that the monitor should visit the investigator at the site of the investigation frequently enough to assure that:
Although the ICH's GCP guideline says a bit more under Section 5.18.3 (Extent and Nature of Monitoring), it does not provide specific advice on monitoring frequency, leaving that determination to the sponsor: "The sponsor should ensure that the trials are adequately monitored. The sponsor should determine the appropriate extent and nature of monitoring. The determination of the extent and nature of monitoring should be based on considerations such as the objective, purpose, design, complexity, blinding, size, and endpoints of the trial. In general there is a need for on-site monitoring before, during, and after the trial; however, in exceptional circumstances the sponsor may determine that central monitoring in conjunction with procedures such as investigators' training and meetings, and extensive written guidance can assure appropriate conduct of the trial in accordance with GCP."
Perhaps the greatest determinant of appropriate monitoring frequency is rate of enrollment. Sites that enroll a large number of patients in a short period of time need to be monitored far more frequently than sites that are low, slow enrollers. Depending on the complexity and frequency of protocol-required study visits, an average enrolling site might have a monitoring visit once per month or once every other month.
The frequency of monitoring visits is less important than the quality of monitoring visits, however. A rushed, perfunctory routine visit that includes only a cursory review of source documents is worthless.
Q: But in CPG 7348.811 (Inspections of Investigators), FDA instructs agency field inspectors to "briefly describe the method (on-site visit, telephone, contract research organization, etc.) and frequency of monitoring." So obviously, the FDA makes certain assessments in this regard. How does the FDA assess the adequacy of monitoring frequency (that is, what criteria are used), and what are the typical red flags that the FDA looks for in this area?
A: CDER Bioresearch Monitoring (BIMO) Program officials concede that it is extremely difficult for the agency to prospectively determine, for each and every trial, how often monitors should visit sites. For this reason, companies are left to determine what monitoring frequency is appropriate for their specific trials. Admittedly, this does not much help a sponsor wondering what standards it will be held to in terms of monitoring frequency. In informal correspondence on this question, however, CDER BIMO staff noted that, in assessing the adequacy of monitoring frequency, they will ask several questions, including:
As a practical matter, however, the agency only truly scrutinizes monitoring frequency when agency inspections uncover specific problems at a clinical site. Therefore, it is the compliance problems that become the red flag for possible inappropriate monitoring frequency. In assessing the reasons for such problems, the agency will scrutinize monitoring frequency as a potential cause or contributing factor.
In short, monitoring frequency will not receive significant scrutiny in the absence of compliance problems. Not surprisingly, CDER DSI staffers report that a sponsor has never been cited for inadequate monitoring frequency when no other problems were found with a study. Because most agency inspections are conducted retrospectively (i.e., following a study's completion), monitoring frequency is, by definition, deemed adequate when a study/site is found to be fully compliant.
Shadow charts

Q: Can a monitor review photocopies of medical records, also called "shadow charts," instead of the originals? Which one does the FDA inspector review during site inspections?
A: As a general rule, site monitors should always review original medical records, for example, actual physicians' office notes, clinic notes, and hospital medical records. Monitors often ask site staff to photocopy original records for their review. Unfortunately, this request is often made for the convenience of the monitor: either the monitor does not want to spend the time reviewing the medical records or is not able to navigate through the documentation to find pertinent data.
A fundamental problem in relying on photocopies is that the monitor cannot be certain that the documentation is complete. That is, data may have been deliberately or inadvertently deleted from pages (in the margins or on the back page of the original record, for example). In addition, there may be data in other parts of the record, however small, that was not photocopied.
When a specific original record cannot be made available, a certified copy of the original records may be used. A record is considered certified when a qualified individual, often in the medical records department, attests that the copies are accurate and complete. In its April 1999 draft industry guidance entitled "Computerized Systems Used in Clinical Trials," the FDA defines a certified copy as "a copy of original information that has been verified, as indicated by dated signature, as an exact copy having all of the same attributes and information as the original." [Authors' note: while this guidance addresses certified copies in the context of electronic records, FDA officials maintain that this definition is equally relevant to paper-based records.]
In recent informal correspondence on this issue, the FDA stated that, "obviously, the source documents are the gold standard for study monitoring . . . In general, source documents should be the basis for monitoring. If a sponsor's monitor elects to use shadow charts they are free to do so as the regulations do not specify how monitoring is to be performed. However, in our opinion they would be foolish to rely on shadow charts without establishing that the shadow charts were the equivalent of the source documents, that is, that they are a certified copy, meaning a copy of original information that has been verified, as indicated by dated signature, as an exact copy having all of the same attributes and features and information as the original. That's just an opinion. We instruct FDA investigators to audit against source documents."
In practice, the FDA may accept, for the purpose of an inspection, a photocopy when the original of a specific document is unavailable or has been destroyed. The FDA inspector and the reviewer at the agency's headquarters will decide if a photocopy is acceptable in a particular case. The agency, however, is more likely to accept a photocopy of an isolated record than photocopies of an entire research record.
If FDA field staff decide to rely on a certain number of photocopies during a site inspection, agency officials would expect the inspector to examine the adequacy of the document certification process, including the standard operating procedures for that process.
The guide is not intended to provide specific regulatory or legal advice. Readers should consult their company's standard operating procedures (SOPs), clinical and regulatory departments, and legal counsel for guidance when applying GCP standards in clinical research.