From the Ground Up

Article

Applied Clinical Trials

May 1, 2008

Leveraging technology to build a scalable and reliable infrastructure for clinical trial data capture.


As biopharma firms, medical device companies, research institutions, and CROs make the transition to EDC, they must ensure that their IT infrastructures are up to the rigorous regulatory and audit requirements for these applications. Clinical trial data raises the bar for managing business-critical data.


Since Phase Forward's IT team manages operations that handle approximately 1.2 million customer transactions per day, the team has evaluated and implemented a full range of technologies and strategies to build a high-performance, scalable, and reliable infrastructure. The team ensures that the clinical trial data for which they're responsible is entered, stored, accessed, and analyzed on the best available network, leveraging the industry's top technology. They understand that running more efficient clinical trials does not rest solely on the shoulders of the EDC vendor and its proponents; IT must also work hard to leverage the latest technological innovations.

EDC user checklist

As organizations implement or expand their EDC initiatives or evaluate the option of outsourcing the functionality, the following topics provide a checklist of approaches and solutions to consider to help ensure that trial data is captured accurately and can be accessed reliably—anywhere, anytime.

Storage area network (SAN) technology. SAN architectures create networks of shared storage devices, so that all devices are available to all servers in the network. With SAN technology, remote storage devices such as disk array controllers, tape libraries, and CD arrays need not be physically cabled to a particular server in order to store data from that server. Instead, these remote devices appear to the operating system as locally attached units. Sharing storage in this manner provides greater flexibility because storage capacity can easily be shifted from one server to another. Since storage can be dynamically allocated, resources are better optimized.
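To make the idea of pooled, dynamically shifted capacity concrete, here is a toy Python sketch (not any vendor's SAN management interface) of a shared pool whose capacity can be shrunk on one server and reassigned to another; the server names and sizes are invented.

```python
# Toy model of SAN-style shared storage: capacity lives in one pool and can be
# allocated to whichever server needs it, rather than being tied to the server
# it is physically cabled to. All names and numbers are hypothetical.
class StoragePool:
    def __init__(self, total_tb: float):
        self.free_tb = total_tb
        self.allocations: dict[str, float] = {}

    def allocate(self, server: str, tb: float) -> None:
        if tb > self.free_tb:
            raise ValueError("not enough free capacity in the pool")
        self.free_tb -= tb
        self.allocations[server] = self.allocations.get(server, 0.0) + tb

    def release(self, server: str, tb: float) -> None:
        self.allocations[server] -= tb
        self.free_tb += tb

pool = StoragePool(total_tb=100.0)
pool.allocate("edc-db-01", 40.0)
pool.release("edc-db-01", 10.0)       # shrink one server's share...
pool.allocate("edc-report-02", 10.0)  # ...and shift it to another
print(pool.allocations, pool.free_tb)
```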

In addition, SANs are architected to facilitate data recovery. Should a database server go down, data can be restored to the point of failure. This approach contrasts with tape backup, which typically captures data on a regular interval (perhaps every two or three hours). If a server fails, it can only be recovered to the last tape backup, and there may be gaps in the data.
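The practical difference between the two approaches is the size of the worst-case data loss window. A brief illustration in Python, using made-up backup and replication intervals:

```python
from datetime import timedelta

def worst_case_data_loss(protection_interval: timedelta) -> timedelta:
    """Worst-case window of lost entries if the server fails just before the
    next scheduled backup or replication cycle."""
    return protection_interval

# Periodic tape backups every 3 hours: up to 3 hours of entries at risk.
print(worst_case_data_loss(timedelta(hours=3)))     # 3:00:00

# Continuous SAN replication to the point of failure: the exposure window
# shrinks to roughly the replication lag (an assumed 30 seconds here).
print(worst_case_data_loss(timedelta(seconds=30)))  # 0:00:30
```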

Real-user monitoring software. There are specialized monitoring platforms to choose from, each optimized for various components in the technology stack. For example, there are monitoring platforms that are suited for network, security, server, and storage infrastructure monitoring, and there are platforms for applications and database monitoring.

A new generation of Web site monitoring software provides a view of site performance as experienced by the end user, in real time. Operators can monitor every transaction by every user as it happens, which lets them detect, isolate, and resolve problems more rapidly and determine whether an issue is occurring in the data center or at the remote site. Real-user monitoring technology can prove invaluable in troubleshooting problems anywhere in the world. Because it works at the network level and requires no agents on users' machines, end users see the upside of improved performance with little or no change from their vantage point, and the technology's small footprint keeps it from becoming another bottleneck.

Real-time monitoring solutions such as Coradiant's TruSight and Web-I offer the advantage of being easy to implement while having no negative impact on transaction performance. A critical factor in choosing a monitoring solution is making sure it doesn't add overhead or load to the system, which would increase the chance of performance issues with your application or network. The remaining challenge is integrating infrastructure monitoring tools with end-user experience monitoring tools.

Aggregating and correlating the metrics from network, security, server, and storage infrastructure monitoring tools with the data captured by real-time, end-user monitoring provides a point-in-time view of the end user's application experience in the context of the utilization, performance, and capacity of the underlying infrastructure supporting that application. That level of integration also establishes baseline metrics against which threshold policies can be applied to trigger alerts, proactive escalations, and remediation of the infrastructure problems that may be causing poor end-user application performance.
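As a rough illustration of how such correlation might work, the following Python sketch joins hypothetical end-user latency samples with a database server utilization reading from the same time window and applies simple threshold policies. The numbers and thresholds are invented and do not represent any particular monitoring product.

```python
from statistics import quantiles

# Hypothetical samples: end-user transaction latencies (ms) captured by a
# real-user monitoring appliance, and CPU utilization (%) from server
# monitoring, both bucketed into the same five-minute window.
transaction_latency_ms = [220, 310, 180, 2400, 260, 1980, 240, 300]
db_server_cpu_pct = 91

LATENCY_P95_THRESHOLD_MS = 1500   # assumed service-level threshold
CPU_THRESHOLD_PCT = 85            # assumed infrastructure threshold

p95 = quantiles(transaction_latency_ms, n=20)[-1]  # 95th percentile latency

if p95 > LATENCY_P95_THRESHOLD_MS:
    if db_server_cpu_pct > CPU_THRESHOLD_PCT:
        print(f"ALERT: p95 latency {p95:.0f} ms with DB CPU at {db_server_cpu_pct}% "
              "-- likely a data-center-side bottleneck")
    else:
        print(f"ALERT: p95 latency {p95:.0f} ms but infrastructure looks healthy "
              "-- investigate the remote site or network path")
```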

Virtualization technology. Virtualization allows a single hardware device to run multiple operating systems and applications, which provides greater flexibility and more effective utilization of the hardware capacity. It was pioneered in the 1960s, when early versions of the technology were used to "partition" mainframes into multiple "virtual machines."

Virtualization software can be thought of as a layer on top of the operating system, which allows the hardware device (typically a server today) to be viewed as multiple virtual machines. When this approach is used across a series of servers in a data center, it provides for higher performance and resource utilization.

Organizations turn to virtualization to reduce overall operating spend, simplify application deployment, and consolidate servers. This technology delivers greater scalability and efficiency when running large numbers of trials. It also allows for the dynamic reallocation of server processing resources as needed around the world, when particular trials have peak periods of demand.
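The dynamic reallocation described above comes down to placing, or migrating, virtual machines onto the hosts with the most headroom. Here is a minimal Python sketch of that placement logic with hypothetical host names and capacities; production schedulers such as VMware's DRS weigh many more factors, including memory, affinity rules, and migration cost.

```python
# Current CPU load on a hypothetical cluster of virtualization hosts.
hosts = {
    "esx-us-east-01": {"cpu_used_ghz": 18.0, "cpu_total_ghz": 24.0},
    "esx-us-east-02": {"cpu_used_ghz": 9.5,  "cpu_total_ghz": 24.0},
    "esx-eu-west-01": {"cpu_used_ghz": 21.0, "cpu_total_ghz": 24.0},
}

def best_host_for(vm_cpu_ghz: float) -> str:
    """Pick the host with the most free CPU that can still fit the VM."""
    candidates = {
        name: h["cpu_total_ghz"] - h["cpu_used_ghz"]
        for name, h in hosts.items()
        if h["cpu_total_ghz"] - h["cpu_used_ghz"] >= vm_cpu_ghz
    }
    return max(candidates, key=candidates.get)

print(best_host_for(4.0))  # esx-us-east-02
```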

There are a number of virtualization software technologies available today, including offerings from VMware, Citrix (which acquired XenSource), and Microsoft (which acquired Connectix). VMware is the market leader and was first to market with a production-ready virtualization platform.

Phase Forward adopted VMware ESX Server virtualization technology for several reasons, including:

  • Robust physical server and virtual machine monitoring and management functionality in its Virtual Infrastructure Center platform.

  • High availability and dynamic load balancing features enabled with VMware's Distributed Resource Scheduler and VMotion technologies.

Outsourcing the data center. Many organizations with high capacity demands turn to data center infrastructure outsourcing options with managed colocation or managed service providers that specialize in data center technologies and offer 24/7 availability. These high-availability data center operations deliver Internet connectivity, operational support, redundant power, redundant cooling, security systems, and network monitoring and management. IT professionals refer to the package these data center operations deliver as ping, power, and pipe.

Whether you elect to run EDC software in your organization's data center or choose to outsource the data center operations aspect of the project, you'll need to be able to count on a robust data center at or near 100% uptime. Providers such as SunGard Availability Services, SAVVIS, Rackspace, and Terremark (formerly Data Return) offer a global footprint along with a 100% network uptime guarantee and a robust suite of disaster recovery solutions.

Planning for the worst case scenario. Countless books and articles have been devoted to disaster recovery planning, another function that can be outsourced. One element of a disaster recovery plan may include maintaining multiple data centers, with clinical trial data being digitally copied continuously to more than one location. By "mirroring" the data in this manner, you can sleep at night knowing you can retrieve it from one location should another center be inaccessible.

Another consideration is the ability to access support systems remotely should the primary work site be unavailable or inaccessible. Investments in bandwidth and infrastructure to support a remote workforce can play a key role as part of any corporate Business Continuity Plan (BCP).

Key components of a disaster recovery plan or a BCP include:

  • Identifying those business processes and associated applications and data stores that are mission critical and must be restored in a disaster situation. Not all business processes, applications, and data are necessarily mission critical.

  • Determining, for each application and data store, the Recovery Time Objective (RTO): the service-level time within which a business process and its related applications and data stores must be restored to operational readiness.

  • Determining, for each application and data store, the Recovery Point Objective (RPO): the point in time to which the application's data must be restored for operational readiness.

  • Using the RTO and RPO requirements for each business process and its associated applications and data stores as the primary inputs for defining the design tenets and costs of a disaster recovery or business continuance capability (a simple way to record and check these targets is sketched below).
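A minimal sketch, assuming a simple inventory format, of how RTO and RPO targets might be recorded per application and checked against the current backup or replication interval (application names and figures are illustrative):

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class RecoveryTarget:
    application: str
    rto: timedelta               # how quickly service must be restored
    rpo: timedelta               # how much recent data loss is tolerable
    backup_interval: timedelta   # current replication/backup frequency

# Hypothetical inventory of mission-critical applications.
targets = [
    RecoveryTarget("EDC database", timedelta(hours=4),
                   timedelta(minutes=15), timedelta(minutes=5)),
    RecoveryTarget("Document archive", timedelta(hours=24),
                   timedelta(hours=8), timedelta(hours=12)),
]

for t in targets:
    # The protection mechanism meets the RPO only if it runs at least as often
    # as the tolerable data-loss window.
    status = "OK" if t.backup_interval <= t.rpo else "GAP"
    print(f"{t.application}: RTO {t.rto}, RPO {t.rpo}, "
          f"backups every {t.backup_interval} -> {status}")
```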

A key component of any BCP is the ability of remote personnel to access, unimpeded, the systems needed to perform recovery tasks. Geographic location also needs to be considered if personnel will require direct on-site access to backup systems at an alternate location.

For mission critical corporate support systems, digital "vaulting" solutions such as LiveVault from Iron Mountain should also be considered and incorporated into active BCPs. These systems can stream data backups to a central "vault" from worldwide locations. Corporate systems can then access the central vault to recover mission critical systems should a regional office site be unavailable.

Building a truly global operation

In the world of clinical trials, trial data is an organization's lifeblood, and increasingly that lifeblood carries a global footprint. Globalization continues to trend upward for pharmaceutical companies for a variety of reasons, including the economic downturn in the United States and increased pressure to tap into treatment-naïve subject populations.

Because clinical trials are being conducted in more remote locations around the globe, PC equipment, connectivity issues, and latency can make data capture a challenge. A critical step in the planning process is a formal site assessment that checks the configurations of available laptops, Internet connectivity, and so on.
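As one example of what an assessment script might check, the following Python sketch measures TCP round-trip time from a site laptop to the EDC application endpoint. The host name is a placeholder, and a real assessment would also verify browser versions, bandwidth, and proxy settings.

```python
import socket
import time

EDC_HOST = "edc.example.com"  # hypothetical endpoint for the trial's EDC application
EDC_PORT = 443

def tcp_round_trip_ms(host: str, port: int, timeout: float = 5.0) -> float:
    """Time how long it takes to open a TCP connection to the EDC endpoint."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

# Take a handful of samples and report the median, a rough proxy for the
# latency a site coordinator would experience on each page load.
samples = [tcp_round_trip_ms(EDC_HOST, EDC_PORT) for _ in range(5)]
print(f"median RTT to {EDC_HOST}: {sorted(samples)[len(samples)//2]:.0f} ms")
```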

In addition to a formal site assessment, an adequate network infrastructure must be established. It's critically important that access to the data is fast and reliable. Web acceleration technology such as the Akamai IP Application Accelerator service helps deliver a number of performance improvements, including route optimization, packet replication, and protocol optimization. These solutions can be implemented at the domain level within a production data center and require very little (if any) end user action. The Akamai solution, for instance, can be brought online very quickly with a minimal footprint within the data center operations. You do need to budget for implementation costs such as initial hardware (routers) and set-up fees along with ongoing monthly fees for the service. These improvements have been linked to a number of key benefits, including increased adoption, improved reliability, higher resiliency, and simplified deployment.

For trials deployed in specific areas, geographic "zone server" configurations can be set up to allow maintenance and other change windows to be tailored to a particular region, minimizing downtime during local work hours.
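A small sketch of how such region-specific windows might be expressed: the snippet below converts a 2:00 a.m. local maintenance start in each zone to UTC for a global operations calendar (the zone groupings and date are illustrative).

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Hypothetical zone-server groupings and the time zone each one serves.
regions = {
    "US zone servers": "America/New_York",
    "EU zone servers": "Europe/Berlin",
    "APAC zone servers": "Asia/Tokyo",
}

utc = ZoneInfo("UTC")
for name, tz in regions.items():
    # Schedule maintenance for 02:00 local time in each region, then express
    # the window in UTC so the global operations team sees one calendar.
    local_start = datetime(2008, 5, 10, 2, 0, tzinfo=ZoneInfo(tz))
    print(f"{name}: maintenance starts {local_start.astimezone(utc):%Y-%m-%d %H:%M} UTC")
```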

Eye on the big picture

How a trial is designed has a dramatic impact on how the trial is run from start to finish, and ultimately how efficiently the data is collected and analyzed. It is, some argue, one of the most important pieces of the entire puzzle. When thinking about trial design, designers must consider how the application itself must optimize and scale.

While first-generation electronic design tools focused mainly on individual components of study design, they lacked the sophisticated, centralized design environment needed to accommodate today's increasingly complicated and geographically expanding studies. A study created from the ground up in such a centralized environment can enhance design efficiency and study component reuse; improve workflow and simultaneous global collaboration; and apply standards effectively throughout the organization.

Implementing or expanding an EDC initiative for high volumes of trial data requires an IT infrastructure that meets high standards for performance and reliability. With careful advance planning and through the use of the latest technologies, you can help to ensure that your data is accessible when you need it and will meet even the most stringent auditing requirements.

Rich Deyermond is vice president of customer care at Phase Forward, Inc., 880 Winter Street, Waltham, MA 02451, email: Richard.Deyermond@phaseforward.com
