
November 3, 2017

Scott Hamilton, PhD

Applied Clinical Trials

*An overview of quantifiable techniques when choosing the parameters in dynamic randomization for clinical trial enrollment.*

Most statisticians would agree that dynamic randomization achieves better treatment group balance than list-based stratified permuted blocked randomization, thereby increasing the precision of a randomized clinical trial. Several studies and review articles provide ample evidence in favor of dynamic randomization.^{1,2,3,4}

However, in our routine interactions with statisticians and clinical trial managers, we continue to encounter the perception that dynamic randomization carries extra risk and complexity. On the implementation side, today's modern computing platforms, thorough validation techniques, and robust randomization systems alleviate the technical risk. From a regulatory perspective, dynamic randomization has been well received at the FDA for many years; we have worked on clinical trials where the FDA explicitly requested dynamic randomization over list-based permuted blocked randomization. Any perceived regulatory risk may be a holdover from attitudes at European regulatory authorities, for example, the Committee on Proprietary Medicinal Products (CPMP). This stems from a European guidance document issued in 2003 that carried the opinion that dynamic randomization was "controversial."^{5} However, excellent responses to that guidance document^{1,2} and increased familiarity from more frequent use of dynamic randomization over the past 12 years have diminished this perceived risk. Thus, the hesitancy to utilize the benefits of dynamic randomization likely arises from perceived complexity, which may reflect a lack of familiarity with the process for choosing the randomization parameters and with how to implement the algorithm in clinical trials.

Our motivation for this article is to dispel the perception of complexity by elucidating the details of our process for choosing the randomization parameters. We also provide guidance on how we implement dynamic randomization for clinical trials in randomization and trial supply management (RTSM) systems.

We begin by comparing the parameters for stratified permuted block randomization with those required for dynamic randomization. When designing a stratified permuted block randomization, the statistician must determine the stratification factors, their levels, and the block size. To maximize statistical power, the statistician wants to preserve the treatment group ratio(s) within all groups of subjects defined by demographic and disease characteristics that affect the outcome. The statistician must obtain information about these groups from clinicians in order to define the stratification factors and levels. The statistician must also obtain a relative hierarchy of importance for the factors, for two reasons. First, most methods of dynamic randomization allow for weighting of the stratification factors so that imbalances among factors with greater weight are brought back into balance more quickly; some algorithms explicitly balance the factors in hierarchical order.^{8} Second, in permuted block randomization there are practical limits to how many stratification factors can effectively be used before the balancing performance breaks down.^{3} Given the relative importance of each factor, the statistician may recommend dropping the less important ones if there are too many. In most cases, the important factors are correlated, and balancing on one factor will achieve balance on the other correlated factors.

In permuted block randomization, choosing the block size is a trade-off between the ability to maintain the blind and the efficiency of balancing the treatment groups. The larger the block size, the less chance there is that a pattern can be detected in the treatment group allocation.^{6} However, the larger the block size, the greater the probability of treatment group imbalances over the entire study as well as within stratification factor levels. In fact, the probability of imbalances is proportional to the number of strata and the block size.^{7} The convention is to use two times the sum of the treatment group ratio. For instance, in a two-treatment group study with a 1:1 ratio, the most common block size chosen is 4; in a two-treatment group study with a 2:1 ratio, the most common block size chosen is 6. Because of the strong inclination among statisticians to follow this convention, it is extremely rare to base the choice of the block size on the results of quantifying the probability of treatment group imbalance for different block sizes.
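As a concrete reference point, the conventional block-size rule can be sketched in a few lines of code. This is an illustrative sketch only, not the generation code of any particular RTSM system; the function name and treatment labels are ours.

```python
import random

def permuted_block_sequence(n_subjects, ratio=(1, 1), seed=0):
    """Generate a treatment sequence using permuted blocks.

    The block size follows the convention described above:
    2 * sum(ratio), i.e., 4 for a 1:1 study and 6 for a 2:1 study.
    Treatments are labelled 0, 1, ... in the order given by `ratio`.
    """
    rng = random.Random(seed)
    # One block contains 2*r copies of each treatment t.
    block = [t for t, r in enumerate(ratio) for _ in range(2 * r)]
    sequence = []
    while len(sequence) < n_subjects:
        rng.shuffle(block)          # each block is an independent permutation
        sequence.extend(block)
    return sequence[:n_subjects]

# 1:1 study, blocks of size 4: every 4 consecutive allocations
# contain exactly two of each treatment.
seq = permuted_block_sequence(100, ratio=(1, 1), seed=42)
```

Within every complete block the target ratio holds exactly, which is precisely what makes the allocation pattern partially predictable when block sizes are small.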

When designing a dynamic randomization algorithm, all the same criteria apply for choosing the stratification factors as in permuted block randomization. However, there is much greater flexibility in the number of factors that one can effectively include. Unlike permuted block randomization, which balances the *j*th factor level within the (*j*-1)st factor level, and so on, dynamic randomization directly balances on the marginal distribution of each factor. This makes it possible to add more factors to the algorithm without greatly reducing the ability to balance among the other factors.^{3} When planning a dynamic randomization, there is no block size to choose. One must choose whether the allocation will be deterministic (minimization) or based on a biased-coin toss; because of the opinion in the ICH E9 guidance, it is generally advised to use a biased coin. Then the biased-coin probability must be chosen, e.g., for two treatment groups: 75/25, 80/20, 85/15, etc. Lastly, if weights will be applied to the stratification factors, the sizes of the weights need to be chosen. The next few paragraphs describe our process for choosing the biased-coin probabilities and the weights.
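To make these parameters concrete, here is a minimal sketch of one weighted, biased-coin minimization step for two arms with a 1:1 target ratio. The scoring rule shown (sum of weighted marginal counts in the new subject's factor levels) is one common variant of minimization; the function name and data structures are illustrative assumptions, not Bracket's production algorithm.

```python
import random

def minimization_assign(subject_strata, counts, weights, p_biased=0.85, rng=None):
    """Assign one subject to arm 0 or 1 by weighted minimization
    with a biased coin (two arms, 1:1 target ratio).

    subject_strata: {factor: level} for the new subject
    counts:         {factor: {level: [n_arm0, n_arm1]}} running totals
    weights:        {factor: weight}
    """
    rng = rng or random.Random()
    # Marginal-imbalance score per arm: weighted count of subjects
    # already on that arm within the new subject's factor levels.
    scores = [0.0, 0.0]
    for factor, level in subject_strata.items():
        for arm in (0, 1):
            scores[arm] += weights[factor] * counts[factor][level][arm]
    if scores[0] == scores[1]:
        arm = rng.randrange(2)                       # tie: fair coin
    else:
        preferred = 0 if scores[0] < scores[1] else 1
        # Biased coin: favor the under-represented arm with prob p_biased.
        arm = preferred if rng.random() < p_biased else 1 - preferred
    for factor, level in subject_strata.items():     # update running totals
        counts[factor][level][arm] += 1
    return arm
```

Setting `p_biased=1.0` recovers deterministic minimization; values such as 0.75 to 0.85 trade some balancing efficiency for unpredictability, as discussed above.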

Similar to choosing the block size in permuted block randomization, choosing the biased-coin probability is a trade-off between efficiently balancing the treatment groups and the level of randomness in the treatment allocation. The biased-coin spectrum runs from a fair coin at one extreme to deterministic allocation at the other. To make the choice, we simulate many possible enrollment patterns into the study under a range of biased-coin probabilities and compare the balancing characteristics of each choice. For even treatment group allocation ratios, e.g., 1:1, 1:1:1, 1:1:1:1, we look at the distribution of absolute treatment group differences overall and within each factor level. For 2:1, 3:1, 4:1, etc., we look at treatment group ratios. After calculating the treatment group differences/ratios over the simulated enrollment patterns, we may summarize them using a plot. An example is given in Figure 1.
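This kind of comparison can be sketched with a short Monte Carlo loop. For brevity, the toy version below balances only the overall 1:1 totals (an Efron-style biased coin) and ignores strata; the function name and simulation counts are illustrative, not the actual study-design tooling described in the text.

```python
import random

def simulate_imbalance(n_subjects, p_biased, n_sims=500, seed=0):
    """Mean absolute treatment-group difference |n0 - n1| for a 1:1
    study under a given biased-coin probability (illustrative sketch)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_sims):
        n = [0, 0]
        for _ in range(n_subjects):
            if n[0] == n[1]:
                arm = rng.randrange(2)              # tie: fair coin
            else:
                lagging = 0 if n[0] < n[1] else 1
                # Favor the lagging arm with probability p_biased.
                arm = lagging if rng.random() < p_biased else 1 - lagging
            n[arm] += 1
        total += abs(n[0] - n[1])
    return total / n_sims

# Compare candidate biased-coin probabilities, as in Figure 1.
for p in (0.5, 0.75, 0.85, 0.95):
    print(p, round(simulate_imbalance(100, p), 2))
```

Plotting the mean imbalance against the candidate probabilities yields the kind of curve described for Figure 1, from which an "elbow" can be chosen.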

The plot illustrates the imbalance measures for the overall study, averaged over all the simulations, for various biased-coin probabilities. In this example, it shows a clear monotonic decrease in imbalance between the treatment groups as the biased-coin probability increases. However, one can see the greatest decrease in imbalance when going from a biased-coin probability of 0.8 to 0.85. A statistician could use this as a rationale for choosing a biased-coin probability of 0.85, since there is a marked decrease in imbalance from 0.8 but no worthwhile decrease at 0.9 or 0.95. The same plot would be constructed and examined within each stratum.

The advantage of using Monte Carlo simulations of enrollment is that we can realistically model the randomization order patterns within each stratum and overall. We customize the enrollment parameters to fit the study. For instance, one of the stratification factors may be Eastern Cooperative Oncology Group (ECOG) performance status, 0–1 vs. 2–3, where 80% of the subjects enrolled will have an ECOG status of 2–3. We replicate that breakdown in our Monte Carlo simulations so that the imbalance statistics calculated from the simulation data accurately reflect what could happen in the proposed trial. When site is a stratification factor, the sponsor usually has a good idea of the number of sites, but not how many subjects each site will enroll. We have a large database of studies with many different numbers of sites and their patterns of enrollment. We use that database, the sample size, and the number of sites to create a realistic site enrollment pattern for use in the Monte Carlo simulations.
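A minimal sketch of this kind of customized enrollment simulation follows, assuming independent factors with fixed prevalences (real enrollment models, such as the site-pattern database described above, would be richer); the names and prevalence values are illustrative.

```python
import random

def simulate_enrollment(n_subjects, prevalences, seed=0):
    """Draw a sequence of subject stratum profiles for simulation.

    prevalences: {factor: {level: probability}}, e.g. reproducing the
    80% ECOG 2-3 split described in the text (illustrative values).
    Factors are drawn independently here for simplicity.
    """
    rng = random.Random(seed)
    subjects = []
    for _ in range(n_subjects):
        profile = {
            factor: rng.choices(list(levels), weights=list(levels.values()))[0]
            for factor, levels in prevalences.items()
        }
        subjects.append(profile)
    return subjects

# Simulated cohort matching the expected stratum breakdown.
cohort = simulate_enrollment(
    1000, {"ecog": {"0-1": 0.2, "2-3": 0.8}, "sex": {"M": 0.5, "F": 0.5}}
)
```

Feeding such simulated cohorts through the randomization algorithm, many times over, produces the imbalance distributions summarized in Figures 1 and 2.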

As previously mentioned, stratification factors typically have a hierarchical level of importance. For instance, it may be more important to maintain treatment group balance within ECOG status levels than within gender levels. In this case, the statistician can specify weights that will correct ECOG stratum imbalances more quickly than imbalances within gender levels. The tools for choosing the weights also utilize the simulations previously discussed: essentially, we try out a few sets of weights and examine the imbalance measures. Figure 2 illustrates how this is done. The example is a 100-subject study with three 2-level stratification factors and a 2:1 active:placebo ratio. The plot on the left in Figure 2 shows a dot for each simulation, indicating the number of randomized subjects within one of the ECOG strata; the X-axis gives the number randomized to active and the Y-axis the number randomized to placebo. With near-perfect balancing, we would see a tight cloud of points around a line with a slope of 0.5.

The simulations show that with the parameters of this randomization plan, we obtain a slope close to 0.5 and a measure of dispersion of 0.54. The medians show that, on average, there is an almost perfect 2:1 ratio. The plot on the right in Figure 2 shows the results of applying a weight of 10 to this factor. Adding the weight reduces the dispersion by 11.4%, to approximately 0.48, and the tightening of the ratios around 0.5 is also evident visually. This may provide sufficient evidence that by weighting the ECOG stratification factor, we can effectively improve the consistency of the desired 2:1 ratio.
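One simple way to quantify this kind of dispersion is the spread of the simulated placebo/active ratios. The exact statistic behind Figure 2 is not specified in the text, so the median and standard deviation used in this sketch are illustrative assumptions, as are the example counts.

```python
import statistics

def ratio_dispersion(counts):
    """Summarize the placebo/active ratio across simulated trials.

    counts: list of (n_active, n_placebo) pairs, one per simulation.
    With a 2:1 active:placebo target, the ratios should cluster
    around 0.5; the standard deviation is one simple dispersion
    measure (illustrative, not necessarily the Figure 2 statistic).
    """
    ratios = [p / a for a, p in counts if a > 0]
    return statistics.median(ratios), statistics.stdev(ratios)

# e.g., three simulated trials of one ECOG stratum under a 2:1 ratio
med, disp = ratio_dispersion([(40, 20), (38, 22), (42, 19)])
```

Computed over thousands of simulations, with and without a candidate weight, this single number gives the percentage reductions in dispersion quoted above.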

Of course, the statistician would want to look at the same plots for the other factors, such as gender, to determine the impact of weighting the ECOG factor on them. As an illustration, this is shown in Figure 3 for males. The increase in dispersion of the treatment group ratios within males after applying the weight to ECOG is approximately 520%. This increase in dispersion may outweigh the reduction in dispersion within the ECOG strata. However, the relative importance of achieving balance within gender may be far lower than that of balance within ECOG status, in which case this scenario may be acceptable. The answer is always driven by the clinical science of the therapeutic area; the statistician has to tune these balancing parameters with the science to maximize the precision of the study.

In this article, we have provided an overview of our quantifiable techniques for choosing the parameters in dynamic randomization. Prior to the start of a study, a statistician must make many decisions regarding power, analysis methods, handling of missing data, etc. Adding more certainty to the randomization plan is of great help. As with most statistical techniques, we find that once a statistician has experienced the quantitative process for choosing the randomization parameters for the first time, the mystery greatly dissipates and confidence grows. A statistician is most confident when each decision is supported by quantified evidence from data. Fortunately, we have a sound process, a wealth of experience, and the data from thousands of randomized studies to help statisticians develop dynamic randomizations that will increase the precision of their studies.

At Bracket, our mission is to bring quality, resourcefulness, and dependability to the medicine development process. Improvements in the ability to implement sound and robust dynamic randomization systems make it straightforward to apply modern simulation techniques and graphical displays of quantitative data when making decisions about the biased-coin probabilities and weights. Future discussions will explore more subtle and flexible approaches to biased-coin allocation and weighting in dynamic randomization.

**Scott Hamilton**, PhD, is Principal Biostatistician at Bracket and Associate Professor at Stanford University.

**References**

1. McEntegart, D.J. "The Pursuit of Balance Using Stratified and Dynamic Randomization Techniques: An Overview." *Drug Information Journal*, 37 (3): 293–308 (2003).

2. Buyse, M., McEntegart, D.J. "Achieving Balance in Clinical Trials: An Unbalanced View from EU Regulators." *Applied Clinical Trials*, May 2004.

3. Therneau, T.M. "How Many Stratification Factors Are 'Too Many' to Use in a Randomization Plan?" *Controlled Clinical Trials*, 14: 98–108 (1993).

4. Kalish, L.A., Begg, C.B. "Treatment Allocation Methods in Clinical Trials: A Review." *Statistics in Medicine*, 4: 129–144 (1985).

5. CPMP (2003). "Points to Consider on Adjustment for Baseline Covariates." http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2009/09/WC500003639.pdf

6. Blackwell, D., Hodges, J.L. "Design for the Control of Selection Bias." *Annals of Mathematical Statistics*, 28: 449–460 (1957).

7. McEntegart, D. "Blocked Randomization." In D'Agostino, R.B. (Ed.), *Wiley Encyclopedia of Clinical Trials*. Hoboken: John Wiley & Sons (2008). DOI: 10.1002/9780471462422.eoct301.

8. Ledford, D., Busse, W., Trzaskoma, B., Omachi, T.A., Rosén, K., Chipps, B.E., Luskin, A.T., Solari, P.G. "A Randomized Multicenter Study Evaluating Xolair Persistence of Response After Long-Term Therapy." *J Allergy Clin Immunol*. 2016 Nov 5. pii: S0091-6749(16)31274-X. DOI: 10.1016/j.jaci.2016.08.054.
