New Approach to System Validation

Applied Clinical Trials

February 1, 2011
Volume 20, Issue 2

Considerations in implementing a risk-based framework for computer systems validation.


One of the trends I have noticed as a computer system auditor is that adoption of the risk-based approach has been extremely slow and is far from widespread. The industry faces several challenges in implementing a risk-based approach to computer system validation. One thing to consider is that people take calculated risks every day just getting to their place of work. Risk taking is simply making a decision with the information available, which implies that not all of the information may be available when the decision is made.

Consistency

One of the interesting aspects of computer validation is that every company has a slightly different methodology for documenting its validation efforts. In fact, the only consistency among the various documentation sets is inconsistency. Every company generates essentially the same overall validation documentation, yet it never looks the same: some documents are split, others are merged, test cases differ, and user requirements vary. Even though the process is similar, each company has adopted its own documentation style, and some of these differences appear to have been driven by the industry itself.

Sponsor auditors

Part of the difficulty with the documentation relates to how the industry verifies vendor activities involving electronic information. Time and time again, companies report inconsistencies among the many different auditors. This may not seem like a major concern, but what is acceptable to one sponsor auditor becomes unacceptable to another, and the same most likely holds true for regulatory inspectors. The audit process is not perfect, but if the industry cannot agree on what validation is and what is acceptable, then no one will adopt a risk-based approach.

Requirements

The foundation for validating any computer system is the user requirements.1, 2, 3 Writing user requirements is very difficult because people often do not understand how the system should work. As a result, the requirements become cluttered and often evolve in the form of scope creep as the system is being developed. Evolving requirements are not typical within the pharmaceutical industry, but they have been observed during audits of software organizations that use faster software development methodologies. People often assume the software tool will cure all ailments in the process once it is in place; historically, however, many software projects fail to meet their expectations.4

The result of the user requirement battle is an unending epic. The important question is: how accurate are the user requirements if people do not know what they really need? Often, requirements are based not on what is needed but on what people want. This matters because user requirements frequently contain many unneeded features, and these features bias the requirements. A good risk-based approach should, where appropriate, target removing the biased user requirements from the validation testing.

Well-documented process

Although there are many difficulties within the software development life cycle and in monitoring the process, many of them can be avoided with well-written standard operating procedures (SOPs). SOPs are important for defining what documents are required during the software life cycle.3 SOPs should also guide the organization through the software validation process and through managing system changes over time. Often, however, there are gaps between the written process in the SOP and what is actually performed in practice.

Risk approach

Another difficulty stems from the FDA guidance issued in February 2003 on the risk-based approach.5 The guidance presented the opportunity to reduce validation documentation but did not offer any suggestions on how to implement a compliant risk-based approach. In addition, the FDA narrowed its interpretation of 21 CFR Part 11,6 and the result over the past few years has been very few citations against Part 11. The difficulty is that organizations do not understand how the agency will react to a risk-based validation. Who is going to take this risk in order to see how their approach holds up in an inspection?

In order to reduce the validation effort, one must evaluate the user requirements and decide which requirements will be tested and which will not. This is not as easy as it appears because there is a lot to lose if the risk-based approach proves unacceptable in the eyes of an inspector. There are many methodologies to choose from, such as hazard analysis and critical control points (HACCP), failure modes and effects analysis (FMEA), and functional risk assessment (FRA), but they are all very time consuming.7 These methods also require some working knowledge of the system, which may be vague when a new system is being considered.

Another difficulty in working with risk-based frameworks is the amount of documentation required to demonstrate that the risk-based methodology was followed. Why implement a risk-based approach if it takes the same amount of time and effort as validating the system in its entirety? This is a valid argument and one I have heard during sponsor audits. Taking this factor into consideration, any risk-based approach must be documentation lean, eliminate unneeded requirements, and be fast.

Framework for a risk-based approach

The proposed framework consists of two separate phases: the first considers the risk of the entire system, and the second reviews risk at the requirement level. It is crucial to have complete, accurate user requirements for a risk-based approach.7 The framework is designed to reduce the amount of documentation required for a validation effort based on the overall system risk. A simple example would rate the overall system risk as high, medium, or low; the corresponding validation might, arbitrarily, cover 90 percent, 80 percent, or 70 percent of the user requirements. Of course, the percentages as well as the risk categories are flexible depending on how conservative the organization chooses to be.

Using such an approach requires that some flexibility be built in when writing the procedure. This is important because it is difficult for an organization to select the perfect set of requirements every time a system is validated. When the procedure is written, the validation expectations can be expressed as a range; for example, a high-risk system may require 88 to 92 percent of the user requirements to be tested. Each requirement is selected by assessing risk using the second part of the framework. Another way to add flexibility is to allow the validation plan and validation summary to address differences in the validation approach. Furthermore, if the overall risk assessment differs greatly from the requirements analysis, then either an error was made in the overall risk rating or requirements need to be added to or removed from the validation effort to align with the procedure.
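As a rough illustration of that alignment check, the sketch below compares the share of requirements selected for testing against the range the procedure allows. The risk categories, percentage ranges, and function name are illustrative assumptions only; an organization would fix its own values in its SOP.

```python
# Illustrative sketch only: the ranges below are assumptions, not values
# prescribed by the article or by any regulation.
TARGET_RANGES = {"high": (88, 92), "medium": (78, 82), "low": (68, 72)}

def within_procedure(system_risk: str, total_requirements: int, selected: int) -> bool:
    """True if the share of requirements selected for testing falls inside
    the range the SOP allows for this overall system risk level."""
    low, high = TARGET_RANGES[system_risk]
    pct = 100 * selected / total_requirements
    return low <= pct <= high

# Example: 180 of 200 requirements selected for a high-risk system -> 90% -> True
print(within_procedure("high", 200, 180))
```

If the check fails, the validation plan or summary would document why, or the requirement-level ratings would be revisited, as described above.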

To reduce documentation and effort for the system level approach, the easiest way to determine system risk is by identifying various factors. Some of the questions could be:

  • Experience with the system?

  • Type of system implementation—new, major upgrade, minor upgrade?

  • Audit results?

  • Potential impact to personal safety?

  • How are the results going to be used?

  • Is the system going to support GxP work?

  • Probability of losing critical data?

  • Probability of corrupting data?

  • Probability of detecting the error?

  • Potential financial impact or business risk?

These are only examples of what an organization may consider important risk factors. Each risk should be categorized and then weighted by importance. One example of a weighting system could be assigning a value of five for high risk, three for medium risk, and one for low risk. Using a calculation, the overall system risk could be determined based on the factors' importance and criticality. A critical system, like a clinical trial database application, could be rated as medium risk as shown in Table 1.
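As a hedged sketch of how such a calculation might be carried out, consider the following. The 5/3/1 point values come from the example above, but the factor names, the multiplication of importance by criticality, the normalization, and the category cut-offs are assumptions for illustration, not the exact method behind Table 1.

```python
# Illustrative sketch: factor names, weighting scheme, and cut-offs are assumptions.
RISK_POINTS = {"high": 5, "medium": 3, "low": 1}

def overall_system_risk(factors: dict) -> str:
    """factors maps a factor name to (importance, criticality), each rated
    'high', 'medium', or 'low'. Importance weights the criticality rating."""
    weighted = sum(RISK_POINTS[imp] * RISK_POINTS[crit]
                   for imp, crit in factors.values())
    maximum = sum(RISK_POINTS[imp] * RISK_POINTS["high"]
                  for imp, _ in factors.values())
    score = weighted / maximum          # normalized to 0..1
    if score >= 0.7:
        return "high"
    if score >= 0.4:
        return "medium"
    return "low"

example_factors = {
    "GxP impact":                 ("high", "high"),
    "Type of implementation":     ("medium", "high"),   # new system
    "Vendor audit results":       ("medium", "low"),
    "Probability of losing data": ("high", "low"),
}
print(overall_system_risk(example_factors))  # -> 'medium'
```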

Based on the overall risk assessment, the second step is used to systematically eliminate the non-essential user requirements. This can be done by simply amending the current user requirements by adding two additional columns for determining risk. As shown in Table 2, examples of risk categories could be business/GxP compliance and patient safety/product efficacy. Each requirement should be evaluated by ranking the risk high, medium, or low. The overall result will become a grid where risk can be assessed.

In the case of a high-risk system, only the LL requirements would be excluded from the system validation in order to test approximately 90 percent of the user requirements. Once again, the values are flexible in order to meet the needs of different organizations. Rating each requirement should be performed by a cross-functional team with representation from the departments involved.7 In addition, the risk ranking should be performed toward the end of the user requirement gathering phase.
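To make the grid concrete, a minimal sketch of the requirement-level filter follows. The exclusion sets for each system risk level, the data layout, and the example requirements are assumptions for illustration; only the idea of dropping the LL items for a high-risk system comes from the text above.

```python
# Illustrative sketch: the exclusion rules and example requirements are assumptions.
EXCLUDE = {
    "high":   {("low", "low")},                                    # drop LL only
    "medium": {("low", "low"), ("low", "medium"), ("medium", "low")},
    "low":    {("low", "low"), ("low", "medium"), ("medium", "low"),
               ("medium", "medium")},
}

def requirements_to_test(requirements, system_risk):
    """requirements: list of (req_id, gxp_risk, safety_risk) tuples, with each
    risk rated 'high', 'medium', or 'low'."""
    excluded = EXCLUDE[system_risk]
    return [r for r in requirements if (r[1], r[2]) not in excluded]

reqs = [
    ("UR-001", "high", "high"),    # e.g., audit trail on data changes
    ("UR-002", "medium", "low"),   # e.g., configurable report layout
    ("UR-003", "low", "low"),      # e.g., cosmetic colour scheme
]
print(requirements_to_test(reqs, "high"))   # UR-003 (LL) is excluded
```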

In addition to testing only the most important requirements, the business team implementing the system should also look at the entire software development life cycle. If the vendor audit goes well, do not repeat what the vendor has already tested for you. There also seems to be considerable overlap between integration testing and user acceptance testing; in fact, many organizations have started to combine integration and user acceptance testing into a single test effort.

Auditability

Any risk-based approach must be defined in a standard operating procedure so that it can be executed consistently over time. The procedure also provides a reference for auditors, who can read how the risk-based method is performed and then verify that the process has been followed by reviewing the documentation. It is important to remember that the intent of a risk-based approach is not to cut corners but to apply a scientific rationale as to what truly needs to be tested.

Paradigm shift

Essentially, implementing a risk-based approach will require a change in how the industry monitors and documents computer system validation. If there is a disagreement during an audit, the auditor should cite the risk-based procedure, not the underlying data in the system. However, if the system is not tested adequately and data integrity is potentially compromised, then the finding should go against the underlying data as well as the documented risk-based process. As long as the system maintains the integrity of the data, the corrective action should be geared toward correcting the SOP to adjust the required amount of testing and toward performing additional system testing. This is an important point: the company taking the risk should be the expert in understanding the system and the underlying data.

People should start to think about big reductions in documentation. A more radical question might be: could a critical GxP system require only 80 percent, or even less, of its requirements to be tested? I do not think the industry is ready for this, but considering the 80-20 rule and the bias in user requirements, anything is possible. For example, perhaps 20 percent of the user requirements are system wants rather than true needs, where people request bell-and-whistle components. Do we really need to test all of the bells and whistles people would like to have? This is a challenge because many requirements in a user specification are not critical to the overall business process.

Improved audit results and efficiencies

I have reviewed the variety of non-compliance findings I have cited through the years. The interesting point about these citations is that they tend to target errors made in the documentation and are not always reflective of the business process. If someone implements a risk-based approach where, on average, 10 percent of the requirements are not tested, then the overall audit results should improve by roughly 10 percent. This, of course, would require that the documentation be improved as well. The overall theory is that if there is less documentation to review, there should be fewer citations during an audit.

The cost of validating a system has been estimated at approximately 50 percent to 100 percent of the software's total cost.8 Minimizing the effort by eliminating non-essential testing could therefore have a significant impact on an organization. Freeing up the time and resources that make up this additional validation cost could allow organizations to move more quickly into new and innovative technologies within the clinical field.

Richard Von Culin is Manager, Promotional Services Systems at Boehringer Ingelheim, 900 Ridgebury Rd, Ridgefield, CT 06877, e-mail: [email protected].

References

1. Food and Drug Administration, General Principles of Software Validation, Docket HFA-305, 1-24, 2002, www.fda.gov/downloads/RegulatoryInformation/Guidances/ucm126955.pdf.

2. Robert D. McDowall, Validation of Chromatography Data Systems Meeting Business and Regulatory Requirements. (RSC Publishing, Cambridge, 2005).

3. Timothy Stein, The Computer System Risk Management and Validation Life Cycle. (Paton Press, Chico, 2006).

4. B. Kaplan, K.D. Harris-Salamone, "Health IT Success and Failure: Recommendations from Literature and an AMIA Workshop," Journal of the American Medical Informatics Association, 16 (3) 291-299 (2009).

5. Food and Drug Administration, Center for Drug Evaluation and Research, Part 11, Electronic Records; Electronic Signatures—Scope and Application. Guidance for Industry (FDA, Rockville, MD, 2003).

6. Food and Drug Administration, CFR/ICH GCP Reference Guide: FDA Code of Federal Regulations, Good Clinical Practice Parts 11, 50, 54, 56, 58, 312, and 314, plus ICH Guidelines Good Clinical Practice (E6), Clinical Safety Data Management (E2A), and the European Union Clinical Trials Directive (Barnett International, 2003).

7. R. McDowall, "Practical and Effective Risk Management for Computerized System Validation," Quality Assurance Journal, 9 (3) 196-227 (2005).

8. D. Ade, "Software Validation Goes a Long Way," PharmaAsia (2006).
