Improved Organization and Management of Clinical Trials

August 2, 2004
Frederick L. Olmstead

When this article was published in 1992, Frederick L. Olmstead was the director of the health studies and services division of NCRC, Inc., The Trial Management Organization, Bethesda, MD.

Applied Clinical Trials

Supplements-08-02-2004, Volume 0, Issue 0

The very first article ACT ever published shows how many changes the CRO industry has seen in 12 years.


The following article was published in Applied Clinical Trials' May 1992 inaugural issue. The author talked about the dilemma pharmaceutical companies faced with regard to outsourcing: legal and regulatory assaults if they kept clinical trials in-house, and major concerns about data quality, timeliness, overestimation of patient recruitment, on-time delivery of research reports, and cost overruns if they outsourced. The author's solutions included better communication of what the sponsor wants and expects, and improved computer technology applications. It has been 12 years since the article was published, and, as readers will find out, much has changed.-The Editors

Clearly, major pharmaceutical companies would conduct all trials in-house if it were feasible. Some pharmaceutical companies use hybrid systems, retaining certain elements while contracting out other elements. This most likely puts senior management of such firms in a bit of a quandary. FDA is taking an increasingly skeptical and adversarial stance regarding clinical trials and studies submitted by pharmaceutical manufacturers, regardless of how they are conducted. This attitude, coupled with the increasingly litigious nature of our society, makes it likely that FDA will attempt to "reform" the new drug approval process by imposing further time-consuming burdens on manufacturers. A likely avenue for such new regulation may lie in federal theories of so-called organizational conflict of interest.

Thus, the horns of an apparent dilemma for pharmaceutical manufacturers: contract out your clinical trials and suffer deficiencies in data quality and timeliness, or conduct the trials in-house and be subject to legal and regulatory attacks by interest groups and official regulatory bodies.

Contract research organizations must deliver the quality and reliability required by manufacturers on a timely basis and at a reasonable cost. This article summarizes the most commonly voiced complaints about their services and attempts to analyze the sources of such complaints and suggest possible remedies.

Common problems
A number of articles and surveys have addressed concerns of pharmaceutical company research staff with the performance of their outside contract researchers.1 Based on both a review of some of the current literature and the author's experience, such concerns resolve themselves into four rather broad categories: credibility, responsiveness, quality of product, and cost.

Credibility. A consistent refrain in criticisms of contract research organizations relates to the companies' tendencies toward what might charitably be described as excessive optimism. Sponsors complain that contractors

  • overestimate or exaggerate the number of patients who will enter a study
  • exaggerate the resources and expertise available for the conduct of a clinical trial
  • inaccurately estimate the time it will take to select sites, qualify investigators, and conduct recruiting
  • provide excessively optimistic completion times or dates.

There are other complaints, but this list is sufficiently illustrative. Critics of a forgiving turn of mind ascribe such statements to "an excess of enthusiasm" on the part of the contract researchers. Others, of a more ill-tempered disposition, blame them on "sales hype" or worse. The essence of the problem, however, is that pharmaceutical manufacturers and CROs enter a trial or other clinical study with undue expectations of likely performance. Even if the study is eventually successful, the two parties will nonetheless be disappointed that it did not succeed as planned. Thereafter, the manufacturer will be skeptical of contractors offering research services.

Responsiveness. Once a study has begun, there is often the feeling that the contractor will plow ahead without regard to the sponsor's concerns. Sponsors fear that the contractor will

  • fail to obtain enough sites to conduct the study, particularly if requested to expand the number of clinical sites by the sponsor
  • depart from the study protocol, or resist modification of the protocol to meet the sponsor's needs
  • not maintain adequate, timely data on trial status, and fail to report or respond to sponsor requests for data
  • not prepare research reports on the schedule or in the format needed by the sponsor
  • generally fail to maintain adequate, open, and responsive communications with the sponsor.

Of all of these, the last may be the most critical. The relationship between the sponsor and the contract researchers must be based on mutual trust, as well as the sponsor's confidence in the researcher's ability to perform.

Quality of product. This is the area in which contract research is viewed with the greatest suspicion by sponsors. Quality problems run the gamut from delivery of sloppy, unclear, and technically inferior research reports to severe protocol violation.

Sponsors complain of contractors who

  • enter patients in the study who do not meet the inclusion criteria, or who may even meet one or more criteria for exclusion
  • use particularly unreliable individuals, such as alcohol or drug abusers
  • enter the same patient twice (or even more) in the same study
  • do not properly randomize the study
  • enter data in the case report form inaccurately, under incorrect dates, substantially later than the performance of the experimental action, etc.

These examples are just a few of the many mentioned in various articles and surveys.1 The upshot of such activities is the marked degradation of the quality and usability of the clinical trial data. This may require the sponsor to conduct the trial a second or third time, delay the submission of an NDA, and lose time for marketing a new formulation, with the attendant consequences for pricing, revenues, and profitability.

Cost. This is the least understood and most contentious of the problems presented here. Under the best of circumstances, developing full, accurate, fair, and reasonable pricing is difficult. In contract research, it may be more difficult than usual because there is a multiplicity of organizational actors (the research contractor, clinical centers or hospitals, laboratories, data analysis centers, etc.), all with their own cost and price structures. It is difficult to gain the cooperation required to determine all the costs and a fair price for conducting a trial under a given protocol. Indeed, some of the collaborating organizations may be quite competitive and seek to extirpate the participation of the other cooperating organizations over the course of the study. These problems may cause the contractor to

  • develop cost overruns, followed by requests for higher funding
  • propose-or even invoice without prior discussion-costs not originally contemplated by the sponsor
  • reduce the level of service (e.g., numbers of patients evaluated, quality of reports, and so forth) to stay within a fixed contract price
  • substitute less-qualified and lower-salaried personnel on the study without informing the sponsor.

Such problems may be unavoidable by the very nature of the so-called conventional contract research process. Nonetheless, a contractor has an affirmative obligation to present the most realistic pricing feasible. To do otherwise is to fail to exercise the due diligence required of any professional.

Sources of problems
The litany of sins set forth above could lead one to conclude that contract researchers are perhaps on a par with journalists, members of Congress, and others whom Twain once referred to as the "native American criminal class." It is not my intent to give or leave that impression. Following are some potential causes for the difficulties outlined above, focusing particularly on three areas: lack of consonance of organization purposes, lack of management authority, and complexity of communications. These are not the sole sources of difficulty; rather, they are those to which I ascribe the greatest importance.

Lack of consonance of organizational purposes. Contract research involves a multiplicity of independent organizations. Few of these organizations exist for the sole purpose of conducting clinical trials. For example, the clinical sites generally have multiple missions organized around provision of care. Whatever they may consider their driving goals, they are not exclusively or primarily research organizations in most instances. Data analysis centers are often computer service bureaus, some more specialized in health care data reduction and tabulation than others. Pathology clinics and other laboratories are generally commercial ventures. Although they may appreciate the need for exquisite precision in the analysis of research specimens, it is unlikely that they will direct any special attention to the research studies if there is no contractual requirement to do so and no provision for enforcement.

In addition to these disjunctions of purpose, consider that supporting clinical trials and studies is an adjunct activity for most of the organizational participants. Moreover, because these are autonomous organizations, it is difficult for a contract researcher, as an intermediary for a sponsor, to impose any realistically enforceable singleness of purpose on this multifarious gaggle of collaborators.

Some of the consequences of this diversity include

  • nonstandard protocol execution among and between the various clinical sites
  • vastly different rates of patient accrual and data collection (this may have implications for data validity and usability)
  • data standardization difficulties attendant upon the wide variety of computer systems installed in the collaborating organizations.

In sum, the diversity of organizations potentially involved in any given clinical trial, while not totally random, may be said to verge on the chaotic.

Lack of management authority. The contract researcher lacks the power to enforce procedural dicta across the collaborating organizations. The ability of a contractor to discipline personnel for material breaches or errors in protocol execution, for example, extends solely to his or her own staff. If a collaborating organization performs poorly, the contractor has three types of remedies available:

  • attempts at persuasion, up to and including various threats
  • financial penalties and withholding of payments on invoices, if contractually allowable
  • termination of the contract.

Clearly, these range from the trivial to the draconian. A far more focused approach would be desirable.

Complexity of communications. The diverse group of organizations that collaborate on the clinical trial, whether implicitly or explicitly, are a communications network. Each potential combination of two autonomous organizations constitutes a "communications link." In any such network, the possibility for miscommunication is proportional to the complexity of the net, expressed as the number of potential links: for n organizations, n(n-1)/2. The totals escalate very rapidly. Thus, for a trial that includes the sponsoring organization, a CRO, six clinical sites, a pathology laboratory, and a data analysis center (10 organizations total), there are potentially 45 communications links, each of which adds a potential for miscommunication. If five clinical sites are added to this trial, the number of possible communications links grows to 105. That is, a 50% increase in the number of involved organizations more than doubles the network's complexity (an increase of roughly 133%). This is why attempts to speed trials by major expansion of the number of clinical sites and investigators often end in grief.
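The pairwise-link arithmetic above is the standard "handshake" count, n(n-1)/2. A short sketch in Python (the function name is illustrative, not from the article) reproduces the figures cited:

```python
def comm_links(n_orgs: int) -> int:
    """Potential pairwise communications links among n autonomous organizations."""
    return n_orgs * (n_orgs - 1) // 2

# Sponsor + CRO + 6 clinical sites + pathology lab + data analysis center = 10 organizations
print(comm_links(10))  # 45 potential links

# Adding five more clinical sites brings the total to 15 organizations
print(comm_links(15))  # 105 potential links
```

Because the count grows quadratically, each site added to a large trial creates far more new links, and thus more chances for miscommunication, than the same site added to a small one.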

Pragmatically, not all the potential "links" are necessarily realized in particular trials. For one thing, not all clinical sites are necessarily aware of the other sites involved. This is most often true when there are more than 10 sites, widely geographically dispersed. Nonetheless, the more actors involved, the greater will be the difficulties in clearly and effectively communicating trial purposes and requirements and maintaining appropriate adherence to protocol requirements.

Perhaps of equal importance to the size of the communications net in a trial is the rather common lack of explicit communication rules and procedures provided by study coordinators to each participating organization. In physical computer networks, such rules constitute the communications discipline of the net and are enforced both by the hardware configuration and the network operating system-without overt operator intervention. Unfortunately, there is no equivalent of network software for clinical trial groups.

This potentially dysfunctional complexity may be as much a source of the difficulties experienced in conducting trials as any other cause. So, the question to be addressed is: What solutions may be feasible?

A consideration of solutions
The objective of this article is not to chastise the sinner but to seek means of redemption; that is, to suggest ways in which clinical trials can be improved, particularly when conducted on a contract basis.

In-house trials. A solution obvious to some would be for pharmaceutical manufacturers to conduct all trials in-house. There are, however, two major disadvantages to doing this. The first is resources. The number of compounds being developed for which trials are needed is clearly outstripping the capacity of any one firm's resources. Perhaps of greater long-term importance is the growing hostility of FDA to free-market pharmaceutical manufacturers. For the foreseeable future, relations between FDA and companies submitting NDAs are likely to become more adversarial and contentious. The use of contract researchers seems to enhance FDA's perception of the objectivity of a trial. I believe, however, that all manufacturers should conduct at least some trials in-house, perhaps through the vehicle of a wholly owned subsidiary that retains a distinct corporate identity. This would allow development of baseline comparisons of product quality, operational efficiency, and cost, which could be used to evaluate extramural contract research. Such an entity could also provide a venue for testing compounds about which the developer wished to maintain a high degree of confidentiality.

Contract language. Another potential solution to the problems experienced with contract research lies in the legalistic approach. That is, pharmaceutical manufacturers should use rather explicit detail in contract terms and conditions to enforce certain performance standards on contract researchers. Doing so could have certain salubrious outcomes. It might well eliminate from consideration those contract researchers unwilling to apply the standards the sponsor feels are necessary. More stringent contracts might also clarify expectations and better define requirements among the parties involved.

Unfortunately, more complex contracts may at some point prove counterproductive. A telling example is found in federal government contracting. The laws and regulations enforced by federal agencies in their contracting practices are extremely detailed and one-sidedly onerous. This attempt to obtain quality by contract, however, is not particularly productive. The government experiences as many or more contractual problems as any private sector organization.

Computer technology. Still another solution is the application of improved computer technology. This holds some real promise, particularly in the light of the growing power of microprocessors and the expansion of open systems architectures. Such approaches, at least in part, address the communications issues alluded to earlier. The disadvantages lie primarily in the implementation. Particularly in very large firms, there may be information services or records management departments that are wedded to obsolescent technology or at least resistant to the technology most effective for the types of problems faced in the contract research arena. Additionally, there is the problem of disseminating the desired technology through the various collaborating research organizations. Each may have distinct views as to the types of computer systems they can or will use. Moreover, if they are involved in multiple studies, which is a common situation, they may be besieged by demands to install and implement this or that system as a condition of a contract or grant.

Organizational change at the CRO. The most attractive solution, particularly because it rests in the hands of the contract researcher, is to markedly improve overall organization and management of the CRO itself. An explicit implementation of the principles of W. Edwards Deming, the leading proponent of statistical quality control, would be felicitous. Some of the specific actions that might fall within this rubric are

  • creating an in-house data center and analytic capacity to control this critical element of the trial (some CROs have already done so)
  • considering acquisition of clinical and laboratory facilities, and dedicating them to clinical trials and research (as opposed to ongoing patient care)
  • developing detailed costing models and financial planning systems, both to provide for better bids to sponsors and to improve internal management
  • implementing a standard data communications process (e.g., a wide area network) throughout the organization
  • focusing on active and aggressive accrual of trial patients, rather than passively waiting for them to present themselves at a clinical site
  • emphasizing quality and timeliness constantly and building a focus on time and a "sense of urgency" into the corporate culture.

A number of CROs have implemented some of the approaches suggested above, though usually in piecemeal fashion. What is required is a comprehensive approach and strategy for improved performance. Karl E. Peace presented an interesting look at how such a research organization might be configured.2 In particular, strong emphasis on quality control and improved, automated data management are key elements of improvement.

There are no panaceas, but improved organization and management efforts on the part of contract researchers themselves will go far to reduce the most obvious difficulties.

1. Contract Research Organization Quality Assessment Study (Decision Research Corporation, April 1991).
2. Karl E. Peace, "TMO: The Trial Management Organization-A New System for Reducing the Time for Clinical Trials," Drug Info. J., 24, 257-264 (1990).
