Real-World Workload Needs

May 2, 2008
Applied Clinical Trials Supplements

Developing a process and management tool for scoring complexity in cancer clinical trials.

Measurement of workload is a complex, multifaceted, and important but difficult subject. In reviews of measurement methods, radiotherapy workload is described as being "poorly represented" by the use of simple parameters, as they do not contain any measure of treatment complexity.1,2


In the specialties of oncology and hematology, recruitment figures are the main focus of performance management, with little consideration given to study complexity, phase, and subject status. Yet these are key factors when considering workload and study complexity.3

There is very little published on clinical research workload measurement, complexity, and capacity. The only comprehensive study, which was undertaken in Canada, expressed the need to be able to estimate specific costs and resources associated with clinical trials as the main reason for the study.3,4 The study measured trial activity throughout the study process and identified sponsor and study phase as important factors to consider when estimating cost and resource use, but it did not result in a complexity scoring tool.

Within the UK, some work on trial complexity and parameters for scoring were presented at a recent conference.5 The components used included the areas of screening, randomization, treatment, pharmacovigilance, samples and questionnaires, and tumor assessment and treatment. An interesting inclusion was that of patient group.

In other studies, work activity over a certain number of days has been measured to try to determine research-related costs.6,7 Elsewhere, monthly workload reporting has been instigated, including the recording of the number of queries and safety reports submitted, in an attempt to quantify workload. An attempt has also been made to define the number of patients and trials that nurses and data managers could handle depending on complexity; the conclusion was that this allowed more effective planning and flexibility to meet the changing demands of clinical research.

Related work has also been undertaken by the European Organization for Research and Treatment of Cancer (EORTC) Clinical Research Coordinators Group. This work in progress is looking at the development of a workload measurement instrument, initially by doing workload analysis to determine the staff requirements for a set of tasks through reviewing the processes, frequency, and time required. The group has echoed the sections developed within this tool (i.e., set up, recruitment, and follow up) by reviewing planning, implementation, data collection, and closure (see Table 1). Further collaborative work is planned with this group.

Table 1. The Complexity Scoring Tool Broken Down by Section and Category

Therefore, although there is very little literature and work in progress, attempts have been and are being made in this important area.

Piloting studies

If a complexity scoring tool enables units to demonstrate the complexity and components of the work they undertake, then it can also act as a valuable communication tool both within and outside the organization. The tool the author used in a pilot study was developed over a year with a research team and in collaboration with research colleagues.

The tool was designed to score studies according to complexity across five categories. Each study was initially allocated a predicted category according to its requirements to allow comparison. Once developed, the complexity tool was piloted by volunteers throughout the UK, who were asked to review face validity, reliability, and repeatability by undertaking three different exercises.
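To make the approach concrete, the sketch below illustrates how a tool of this kind might allocate studies to complexity categories. All section names, item weights, and category thresholds here are hypothetical: the article does not publish the tool's actual scoring values, so this is only an illustration of the mechanism, not the piloted instrument itself.

```python
# Hypothetical illustration of a complexity scoring tool: weighted items
# are summed within each section, and the total maps to one of five
# categories. Every weight and threshold below is an assumption.

SECTIONS = ("set up", "recruitment", "follow up")

# Hypothetical weighted items per section (not the tool's real values).
WEIGHTS = {
    "set up":      {"randomization": 3, "pharmacovigilance": 4},
    "recruitment": {"screening": 2, "samples_and_questionnaires": 3},
    "follow up":   {"tumor_assessment": 3, "queries": 2},
}

# Hypothetical cut-offs mapping a total score to categories 1-5.
THRESHOLDS = [(5, 1), (10, 2), (15, 3), (20, 4)]

def section_score(section, items):
    """Sum the weights of the items a study requires in one section."""
    return sum(WEIGHTS[section][item] for item in items)

def category(score):
    """Map a numeric score to a complexity category (1 = simplest)."""
    for limit, cat in THRESHOLDS:
        if score <= limit:
            return cat
    return 5

def score_study(requirements):
    """Return per-section scores and an overall complexity category.

    `requirements` maps each section to the items the study needs,
    allowing a predicted category to be compared with the scored one.
    """
    scores = {s: section_score(s, requirements.get(s, ())) for s in SECTIONS}
    return scores, category(sum(scores.values()))

scores, overall = score_study({
    "set up": ["randomization", "pharmacovigilance"],  # 3 + 4 = 7
    "recruitment": ["screening"],                      # 2
    "follow up": ["tumor_assessment", "queries"],      # 3 + 2 = 5
})
# total = 14, which falls in hypothetical category 3
```

The point of the structure is that each section contributes a score of its own, so a study can be compared against its predicted category both overall and section by section.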

The tool was developed in response to problems and requirements related to clinical research management at a cancer center within a Clinical Trials Unit in the UK that organizes predominantly multicentered research studies coordinated elsewhere—the majority of which are Phase I to III. The pilot tool was reviewed by research teams based mainly in NHS Trusts within England and Wales; all but one of the participants worked within the specialities of oncology and hematology.

In terms of participants, from the 36 pilot packs sent out to people who had expressed an interest in piloting the tool, 18 (50%) completed the face validity exercise, 14 (39%) completed pilot tools for the reliability exercise, and eight (22%) completed pilot tools for the repeatability exercise.

Lessons learned

It was concluded that the idea and the tool were worth further development based on the findings of this project and with consideration of the following recommendations:

  • A future project should be a large, multicentered, prospective study reviewing current workload (with ongoing follow up included) that's designed by a collaborative group, with centers that have complete buy in and that are stakeholders in the tool's development.

  • Future work should include further exploration of the pilot findings and a plan for semistructured interviews with participants at different stages to ensure that the problems, concerns, and interpretations are discussed.

  • Expert advice is fundamental to the weighting and scoring.

  • Future work needs to link to and complement other work being undertaken.

  • Staff using the tool need a minimum level of knowledge, clear definitions, and comprehensive guidance. A training package should be built into future studies to ensure all centers are completing it in a uniform way.

  • The tool needs to be flexible enough to account for changes in requirements, and should be updated and reviewed yearly.

  • The tool needs to be computer based and able to link with the organization's existing IT; it also needs to include a quality control system that leaves an audit trail of changes.

  • The categorization of the projects should be expanded to include categories for each section as well as an overall project category, so that section-level results are meaningful in their own right.

The project was the first important step on the developmental journey through this complex, fascinating, and vital topic in the bid to manage, understand, and predict clinical research requirements well in the future. Good management of clinical research within the research environment is vital.

Further work

Since undertaking this work, the author has collaborated with colleagues working on a related project. A larger prospective collaborative project is now planned, working with the EORTC and the National Cancer Research Network. The aim of the project is to validate and link together the EORTC Workload Measurement and complexity instruments in order to provide a comprehensive "toolkit" of workload management and complexity tools for research staff to use at various stages depending on their differing requirements. The project is currently in the data analysis stage and will be reported shortly.

Jacqueline Briggs, RGN, MSc, is research network manager at the Peninsula Stroke Research Network–Royal Devon and Exeter Foundation NHS Trust (formerly Wales Cancer Trials Network), William Wright House, Royal Devon and Exeter Hospital (Wonford), Barrack Road, Exeter, EX2 5DW, UK, email: Jacqueline.Briggs@rdeft.nhs.uk

References

1. P. Craighead, C. Herring, C. Hillier et al., "The use of the Australian Basic Treatment Equivalent (BTE) Workload Measure for Linear Accelerators in Canada," Clinical Oncology, 13, 8-13 (2001).

2. N.G. Burnett, D.S. Routis, Murrell et al., "A tool to Measure Radiotherapy Complexity From the Basic Treatment Equivalent (BTE) Concept," Clinical Oncology, 13, 14-23 (2001).

3. K. Roche, N. Paul, B. Smuck et al., "Factors Affecting the Workload of Cancer Clinical Trials: Results of a Multicenter Study of the National Cancer Institute of Canada Clinical Trials Group," Journal of Clinical Oncology, 20 (2), 545-556 (January 15, 2002).

4. N.A. Roche, R.M. Given-Wilson, V.A. Thomas, N.P.M. Sacks, "Assessment of a Scoring System for Breast Imaging," British Journal of Surgery, 85 (5), 669-672 (1998).

5. L.A. Batt, L.K. Branston, T.S. Maughan, Assessing Cancer Trial Complexity and its Impact on Workload, Poster Presentation, NCRN Conference, 2004.

6. E. Oddone, M. Weinberger, A. Hurder, W. Henderson, D. Simel, "Measuring Activities in Clinical Trials Using Random Work Sampling: Implications for Cost-Effectiveness Analysis and Measurement of Intervention," Journal of Clinical Epidemiology, 48 (8), 1011-1018 (1995).

7. S. Owenby, "Reorganisational Plan Helps Staff Manage Workloads," Oncology Nursing Forum, 27 (1), 29 (2000).