Achieving the best possible outcomes for patients is the ultimate goal of healthcare. Quality improvement (QI) registries support that goal by helping clinicians identify ways to improve treatment processes and patient outcomes. These types of registries have been used in a broad range of condition areas, with diverse designs, data collection methods and funding models. QI registries may also generate large datasets about specific conditions, which can provide insights into treatment patterns and related outcomes. Overall, the value of a QI registry depends on developing a design that will achieve the registry’s objectives while maximizing feasibility and sustainability.
A QI registry uses individual and population-level data to improve patient care. In general, QI registries may be grouped into two major categories: registries that collect data from patients receiving a specific healthcare service or procedure (e.g., hospitalization for heart failure); and registries that collect data over a longer period of time on patients with the same disease or condition. Moreover, the scope of a QI registry may range from a single institution to multiple institutions in multiple countries.
Like all patient registries, QI registries collect data systematically and are purpose-driven. Unlike other types of registries (e.g., disease/condition registries, pregnancy registries, registries to fulfill post-marketing commitments), QI registries focus on promoting rapid cycle quality improvement by continuously feeding actionable information back to care providers to improve care. Other types of registries typically identify key research questions and collect the necessary data to address those questions from a specified number of patients over a specified period of time. The site investigators who enroll patients and enter data are often compensated for their time, and, at the conclusion of the study, the findings are presented at conferences or published in the peer-reviewed literature.
In contrast, a QI registry typically begins with a set of clearly defined quality measures (e.g., heart failure readmissions within 30 days) and collects the necessary data to calculate those measures at the physician or institution level. The sample size is not defined in advance; in fact, participating sites are often encouraged to enter as many patients as possible, and the registry has no defined endpoint. Data are collected on a continuous basis, and the participants’ performance on the quality measures is calculated frequently (or even on demand) and shared with the participants for the purpose of encouraging process change. Rather than receiving compensation for entering patient data, sites typically pay to participate in the QI registry and expect to receive value in the form of tools, reports, and possibly public recognition for their participation.
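As a concrete illustration, a measure such as 30-day readmission can be recomputed on demand from the continuously collected records. The sketch below is a simplified Python illustration with hypothetical field names; a production registry would also apply the measure's full inclusion and exclusion criteria.

```python
from datetime import date

def readmission_rate_30d(admissions):
    """Share of discharges followed by a readmission within 30 days.

    `admissions` is a list of (patient_id, admit_date, discharge_date)
    tuples for a single site. Field names are illustrative only.
    """
    # Group each patient's stays together
    by_patient = {}
    for patient_id, admit, discharge in admissions:
        by_patient.setdefault(patient_id, []).append((admit, discharge))

    discharges = readmitted = 0
    for stays in by_patient.values():
        stays.sort()  # chronological order by admission date
        # Compare each discharge with the patient's next admission
        for (_, discharge), (next_admit, _) in zip(stays, stays[1:]):
            discharges += 1
            if (next_admit - discharge).days <= 30:
                readmitted += 1
        discharges += 1  # final stay has no observed readmission
    return readmitted / discharges if discharges else 0.0

stays = [
    ("p1", date(2014, 1, 1), date(2014, 1, 5)),
    ("p1", date(2014, 1, 20), date(2014, 1, 25)),  # readmitted in 15 days
    ("p2", date(2014, 2, 1), date(2014, 2, 3)),
]
print(readmission_rate_30d(stays))  # 1 readmission / 3 discharges
```

Because the measure is computed from raw records rather than pre-aggregated counts, it can be refreshed each time a site enters new data, which is what enables the on-demand performance reporting described above.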
Due to these differences in purpose, design, and participant motivations, QI registries face different challenges than other types of registries in the design, operation, and analysis phases.
This article provides an overview of the major components of QI registries (Table 1), but focuses largely on design, particularly the design challenges and potential solutions. In addition, the following discussion explores innovative ways to leverage the data generated by QI registries.
From the outset, QI registry developers should keep in mind three principles: the registry must offer clear value to its stakeholders; participation must be feasible; and the registry must be sustainable over time.
Focusing on these three principles throughout the planning and design phases will increase the likelihood of success for the registry. While these principles apply to all types of registries, they are particularly critical for QI registries. QI registries typically charge organizations a fee to participate in the registry, so organizations must see the registry as both valuable and feasible. Sustainability is also essential, as most QI registries plan to operate continuously, with no defined endpoint.
The first step in planning and designing a QI registry involves identifying the registry stakeholders: the people who will participate in and benefit from the QI registry. Institutions that participate in QI registries must allocate internal resources to support the effort, making it crucially important that these institutions see value in participation. From the start, a QI registry must aim at producing actionable information, often in the form of quality measures that assess compliance with clinical guidelines, that can be used to improve treatment processes and patient outcomes in a demonstrable way. Registry developers must consider how the registry data will be analyzed and shared with participants on an ongoing basis to promote a cycle of continuous quality improvement.
The early planning phase should also consider practical issues, such as funding and legal and ethical requirements. The initial funding source for the registry must be identified, and a business model and plan for sustainability should be developed. Unlike other types of registries, many QI registries are open-ended, meaning that the duration of the registry is not set at the outset and ongoing sources of funding are needed. The overall cost of a QI registry depends largely on data-collection and data-entry costs. Most national registries currently use some form of electronic data capture, in which participating institutions input data remotely via a web-based system. Rather than having institutions manually enter data, some registries offer options to upload data from other sources, such as electronic health records (EHRs). With the growing use of EHRs, new approaches to data integration that take advantage of common standards for the exchange, sharing, and retrieval of electronic health information, including Health Level Seven (HL7), are being developed and implemented.
In addition, consultation with the institution’s legal department and institutional review board (IRB) early in the planning process is advisable to determine what requirements will apply to the proposed QI registry. In contrast to many other types of registries, a QI registry may or may not be considered human subjects research, depending on its design. In fact, IRBs have reached different conclusions on how to distinguish QI registries from research.1 Effective planning and interactions with legal departments and IRBs in the planning phase will help QI registry developers to make informed decisions about the registry design and avoid unexpected requirements once the registry launches.
The first step in the design phase is to determine the type of QI registry that is needed. For example, a QI registry can enhance the quality of care for patients with a particular disease, such as coronary heart disease. In this case, a QI registry would collect data on patients over time and possibly from more than one provider. Other QI registries, however, could focus on improving the treatment of a specific event, such as a myocardial infarction; this type of registry involves one time point for many patients.
The design of the registry drives the type of data that will be collected. Generally, registries that collect longitudinal patient data require identifiable patient data, whereas registries that collect data on treatment of a specific event may only require a limited dataset or even de-identified data. The data may come from a single facility or from many, and multi-facility registries may span one country or several. Moreover, the source of the data affects the basic design of a QI registry. In a registry designed around a single institution, for instance, developers can integrate the data collection with the institution’s workflow or EHR system. In contrast, the design of a QI registry that covers many institutions, and possibly multiple countries around the world, must accommodate a variety of workflows and participants.
Before collecting data, a registry’s designers also must consider how the information will be analyzed and reported, including the audience for the reports and the intended content. For example, reports may be intended for use by providers to improve treatment processes; these reports may include only data for a specific provider, or they may include comparisons to the aggregate registry data, perhaps grouped by similar-sized institutions or regions. Other reports may be intended for patients, showing changes in health status over time. Developers must also consider whether provider-level reports will be blinded or unblinded; in an unblinded report, the provider can be identified. The types of reports will depend on the registry’s objectives and the needs of stakeholders identified during the planning phase.
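As a rough sketch of such a comparative report, the hypothetical function below (names and structure are assumptions, not part of any real registry system) compares one site's measure value against the registry-wide aggregate and anonymizes peer sites when the report is blinded:

```python
def benchmark_report(site_rates, site_id, blinded=True):
    """Compare one site's measure value to the registry aggregate.

    `site_rates` maps site identifiers to a measure value (e.g., a
    30-day readmission rate). In a blinded report, peer sites are
    anonymized; only the requesting site sees its own identity.
    """
    aggregate = sum(site_rates.values()) / len(site_rates)
    rows = []
    # Rank sites from best (lowest rate) to worst
    for sid, rate in sorted(site_rates.items(), key=lambda kv: kv[1]):
        label = sid if (sid == site_id or not blinded) else "peer"
        rows.append((label, rate))
    return {"aggregate": aggregate, "rows": rows}

report = benchmark_report({"A": 0.2, "B": 0.1, "C": 0.3}, "A")
# Site A sees its own position in the ranking; B and C appear as "peer".
```

The same underlying data can thus feed both a blinded benchmark for routine feedback and an unblinded version where stakeholders have agreed to public or semi-public comparison.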
Deciding what to measure marks a crucial step in the development of a QI registry (Table 2). Many QI registries focus on process-of-care measures, but focusing on patient outcome measures is an emerging trend. Next, the designers must decide on the core dataset. The decision may be guided by various factors, but collection of only the minimum data needed to achieve the registry’s objectives should be the underlying principle. To make this decision, developers can start with the desired quality measures and work backward to select the data required to calculate the measures. That core dataset, however, may need to be expanded to support subgroup analysis or to align with quality-reporting programs. Again, consultation with stakeholders is essential to ensure that the dataset meets participant needs while remaining feasible.
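The "work backward" approach can be sketched as a simple mapping from each quality measure to the data elements needed to calculate it; the core dataset is then the union of those elements. The measure names and data elements below are hypothetical:

```python
# Hypothetical map from quality measures to required data elements
MEASURE_ELEMENTS = {
    "readmission_30d": {"patient_id", "admit_date", "discharge_date"},
    "discharge_on_statin": {"patient_id", "discharge_meds",
                            "contraindications"},
}

def core_dataset(measures):
    """Union of the data elements needed for the selected measures."""
    required = set()
    for measure in measures:
        required |= MEASURE_ELEMENTS[measure]
    return sorted(required)

print(core_dataset(["readmission_30d", "discharge_on_statin"]))
```

Maintaining an explicit measure-to-element map also makes it easy to see, when a measure is added or retired, exactly which data elements must be added or can be dropped, which supports the minimum-data principle.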
In some instances, QI registries benefit from an enhanced dataset, which allows some participants to extend data collection to support additional objectives. With this option, participants can use the same QI registry in different ways, such as for quality reporting, maintenance of certification, policy analysis, and clinical research.
The design phase also involves identifying the target population and determining if a sampling strategy is needed. Sampling may be needed if the registry will collect data on a large number of patients (e.g., all patients admitted for myocardial infarction at a large hospital). By implementing a sampling strategy, the registry enables sites with high patient volume to enroll only a fraction of eligible patients, thus reducing the data-entry burden, while still attempting to ensure that the enrolled patients provide a representative picture of the overall patient population. Sampling can also be an important component of registries that collect long-term follow-up data; for example, if a registry enrolls 500 patients per year and follows each patient for five years, the registry staff will soon need to collect follow-up data for thousands of patients each year, which may be cost-prohibitive and unsustainable. While sampling may be required for feasibility reasons, it does carry with it the possibility of introducing bias. Registries that employ sampling strategies should consider using statistical tools to predict the bias from a given sampling frame.2
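A simple sampling strategy for a high-volume site might cap enrollment per period and select patients uniformly at random. The sketch below, with an assumed helper name, illustrates the idea; real registries often use more structured schemes (e.g., systematic or stratified sampling) to manage the bias concern noted above:

```python
import random

def sample_for_enrollment(eligible_ids, target_per_period, seed=None):
    """Randomly select patients for registry enrollment.

    A high-volume site enrolls at most `target_per_period` patients,
    chosen uniformly at random so the sample remains representative
    of the site's eligible population.
    """
    rng = random.Random(seed)  # seeded for reproducible audits
    eligible = list(eligible_ids)
    if len(eligible) <= target_per_period:
        return eligible  # low-volume sites enroll everyone
    return rng.sample(eligible, target_per_period)

# A site with 100 eligible patients enrolls a random 10 this period
enrolled = sample_for_enrollment(range(100), 10, seed=1)
```

Uniform random selection is the key property: letting site staff choose which patients to enter reintroduces exactly the selection bias that sampling is supposed to avoid.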
Because the design of a QI registry involves multiple variables-and often multiple stakeholders-pilot testing can be a valuable step. Pilot testing allows the registry developers to identify and address major issues before the registry formally launches. For example, participants may find that enrollment procedures are confusing, that some required data are not routinely collected or that data definitions are not clear. These issues can be addressed promptly, through either training or modifications to the registry design, thus helping to ensure a smooth registry launch.
Last, registry developers must plan for change. Because QI registries often collect data for multiple years with no defined endpoint, the registry may need to adapt to changing treatment patterns, the introduction of new therapies, and revised guidelines and/or quality-reporting requirements. QI registries must be flexible in order to remain relevant to stakeholders in the face of a changing environment.
Keeping up the quality
The value of the registry is ultimately determined by the quality of the registry data. Registry developers must decide on a plan for quality assurance and implement it once the registry launches. Quality assurance plans for QI registries may emphasize two areas: 1) enrollment procedures; and 2) data entry. Because QI registries focus on improving care and patient outcomes and on documenting provider-level behavior around those objectives, there is a possibility that participants may “cherry pick” patients, that is, selectively enter the patients who received the appropriate care and/or had the best outcomes. This is a particular concern in registries where performance is linked to economic incentives. Quality issues may also arise from honest mistakes. For instance, manual data entry introduces a risk of typographical errors. Some errors can be identified through automated processes that perform range checks, data-format checks, and checks for logical inconsistencies, but other errors are more difficult to identify (e.g., neglecting to document a patient’s history of heart failure).
Data audits, either onsite or remote, are commonly used to assess registry data quality. With onsite audits, a trained staff person compares registry data with source documents, such as case report forms. In a remote audit, the source documents are sent to a central location for comparison with the registry data. To manage costs, registries typically audit a portion of the data from a portion of the participating sites on a regular basis. Audits may provide valuable information on data elements with low completion rates or identify areas where additional training is needed.3,4 Audit findings, particularly when they demonstrate a high quality of data within the registry, may also be useful when publishing findings from the registry.
Ultimately, for a QI registry to achieve its fundamental goal of improving patient care and outcomes, the results from a QI registry must be reported, at a minimum, to the participants, and possibly to patients and the public. The Centers for Medicare & Medicaid Services (CMS), for example, is undertaking public reporting in response to the Patient Protection and Affordable Care Act.5 Research on the effect of public reporting, though, has produced conflicting results. For example, one study concluded that public reporting significantly decreased mortality after coronary artery bypass surgery.6 However, a meta-analysis of 45 studies concluded: “The effect of public reporting on effectiveness, safety, and patient-centeredness remains uncertain.”7 As a result, designers must weigh stakeholder needs and concerns against the available evidence to reach their own conclusions about publicly reporting data from a QI registry.
Get with the guidelines
Real-world examples of QI registries illustrate key design features and ways to address common challenges. The American Heart Association and the American Stroke Association developed Get with the Guidelines®, a multi-component, hospital-based quality improvement program. In 2003, the associations launched Get with the Guidelines®-Stroke to increase the likelihood that hospitalized patients receive the most advanced treatment. This QI registry collects data on the patients and generates real-time reports on outcomes and other results that help hospitals treat patients more effectively. So far, this QI registry includes over two million patients at more than 1,600 hospitals, and many journal articles reveal the ongoing value of this registry (Figure 1).8–10
To improve in-hospital patient care, Get with the Guidelines®-Stroke incorporates a plan-do-study-act (PDSA) cycle. That is, a hospital plans quality improvement initiatives, puts them in place, and studies the results; hospital staff then act on those results by adjusting the initial initiatives. The resulting compliance reports show a hospital how it performs relative to, for example, other hospitals in the same region or similar-sized hospitals.
To accomplish these goals, this registry’s designers needed a core dataset that would provide enough information for evidence-based medicine and still be reasonable to collect. Consequently, the registry required the smallest dataset that would provide actionable results. In addition, hospitals receive no compensation for collecting data for Get with the Guidelines®-Stroke, which adds impetus to keep the process as simple as possible.
To balance these competing interests, the QI registry’s designers started with the desired recognition measures and worked backward to define the required dataset. The resulting dataset includes questions that assess a facility’s compliance with stroke guidelines for patient care, and the questions evolve to reflect advances in treatment.
This example shows the value of focusing on the desired outcomes of a QI registry during the design phase, particularly in selecting the elements of the dataset and keeping it as small and simple as possible. Moreover, the developers can update the dataset over time to keep a registry relevant.
Designing around the disease
Every QI registry must be designed to meet specific, unique needs, and that proved particularly challenging for the National Parkinson Foundation Quality Improvement Initiative. This QI registry collects longitudinal data on treatments and patient-reported outcomes for Parkinson’s disease (PD) to determine and report on the best clinical care. The National Parkinson Foundation started this registry in 2009, and it now collects data on 5,000 patients at 20 international sites.
Given PD’s incurable nature, treatments aim at reducing a patient’s symptoms and improving quality of life. Patient-reported outcomes provide the best measurements of these goals, but variations in diagnosis and treatment create challenges in selecting the best measures. In fact, this QI registry emerged from the need for evidence-based treatment standards. The resulting dataset includes many elements, from demographics and comorbidities to medications and outcomes, all collected annually on simple forms.
The registry data help researchers understand the factors that play the biggest roles in quality of life for PD patients11 and offer insights into challenges for caregivers,12 thus illustrating the value of QI registries for chronic, progressive conditions.
From quality improvement to research
QI registries may collect large amounts of data over many years in support of their primary objectives. These data, however, may be able to support other research, in the form of secondary analyses or through linkage to other datasets. For example, the data may be stratified for analysis, such as grouping patients by gender, age, or geographic location, to examine variations in treatment patterns.13 The data from a QI registry can also be combined or compared with other datasets. For example, one research study combined a QI registry on cardiovascular events with a claims database to assess the validity of claims-based definitions in Medicare patients.14
A QI registry can also be modified to support research; for example, some sites may participate in a substudy that collects long-term follow-up information, while other sites collect only the core registry data. Modifying QI registries in this manner requires clear objectives, training, and a continued focus on minimizing burden for participants.
The central goal of a QI registry is to improve healthcare outcomes for patients, and, like all types of registries, QI registries must define clear, relevant objectives, select an appropriate design, and focus on data quality. In addition, QI registries must work with stakeholders to maximize value, remain feasible, and ensure sustainability.
Michelle Leavy, MPH, Manager, Health Policy, Real-World & Late Phase Research, Quintiles; and Daniel M. Campion, MBA, Research Director, Real-World & Late Phase Research, Quintiles
1. N. Johnson, L. Vermeulen, K.M. Smith. “A survey of academic medical centers to distinguish between quality improvement and research activities,” Quality Management in Health Care, Vol 15, No 4, 2006.
2. M.C. Walsh et al. “Selection bias in population-based cancer case-control studies due to incomplete sampling frame coverage,” Cancer Epidemiology, Biomarkers & Prevention, Vol 21, No 6, 2012.
3. Y. Xian et al. “Data quality in the American Heart Association Get With The Guidelines-Stroke (GWTG-Stroke): results from a national data validation audit,” American Heart Journal, Vol 163, No 3, 2012.
4. J.C. Messenger et al. “The National Cardiovascular Data Registry (NCDR) Data Quality Brief: the NCDR Data Quality Program in 2012,” Journal of the American College of Cardiology, Vol 60, No 16, 2012.
5. Centers for Medicare & Medicaid Services. Public Reporting. http://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/physician-compare-initiative/Public_Reporting.html. Accessed November 6, 2014.
6. E.L. Hannan et al. “Improving the outcomes of coronary artery bypass surgery in New York State,” Journal of the American Medical Association, Vol 271, No 10, 1994.
7. C.H. Fung et al. “Systematic review: the evidence that publishing patient care performance data improves quality of care,” Annals of Internal Medicine, Vol 148, No 2, 2008.
8. L.H. Schwamm et al. “Temporal trends in patient characteristics and treatment with intravenous thrombolysis among acute ischemic stroke patients at Get With The Guidelines-Stroke hospitals,” Circulation: Cardiovascular Quality and Outcomes, Vol 6, No 5, 2013.
9. E. Cumbler et al. “Quality of care and outcomes for in-hospital ischemic stroke: findings from the National Get With The Guidelines-Stroke,” Stroke, Vol 45, No 1, 2014.
10. L. Schwamm et al. “Get With the Guidelines-Stroke is associated with sustained improvement in care for patients hospitalized with acute stroke or transient ischemic attack,” Circulation, Vol 119, 2009.
11. J.G. Nutt et al. “Mobility, mood and site of care impact health related quality of life in Parkinson’s disease,” Parkinsonism & Related Disorders, Vol 20, No 3, 2014.
12. O. Oguh et al. “Caregiver strain in Parkinson’s disease: national Parkinson Foundation Quality Initiative study,” Parkinsonism & Related Disorders, Vol 19, No 11, 2013.
13. Y. Xian et al. “Racial/ethnic differences in process of care and outcomes among patients hospitalized with intracerebral hemorrhage,” Stroke, DOI: 10.1161/STROKEAHA.114.005620, 2014.
14. Q. Li et al. “Validity of claims-based definitions of left ventricular systolic dysfunction in Medicare patients,” Pharmacoepidemiology and Drug Safety, Vol 20, No 7, 2011.