While block randomization is the more frequently used trial randomization design, dynamic randomization has been shown to produce better treatment group balance.
In a recent visit to a major biotechnology company, I asked a senior statistician how often she’s required to think about randomization design. She chuckled, “about once every two years – not enough to think much about what I’m doing”. She followed with the comment that she mostly relies on the IXRS/RTSM statistician to recommend the randomization parameters for their studies. Having worked with clinical teams on hundreds of randomization designs, my sense is that her statements would represent the experience and attitude of most study statisticians.
These sentiments would be especially true for Dynamic Randomization, where complex algorithms determine the probabilities of treatment group assignment at each randomization. Static, list-based randomization would seem fairly straightforward. However, there is more to think about with list-based randomization than most realize. By far, the primary considerations for the statistician are the block size and the stratification. The choice of block size is an attempt to minimize predictability while at the same time maximizing treatment group balance. There isn't much flexibility in the choice: the ICH E9 guideline recommends that the block size be at least 2m, where m is the number of subjects needed to satisfy the allocation ratio once. For instance, with 2 treatments and a 1:1 ratio the minimum block size would be 4. For a 2-treatment 3:1 ratio, the minimum block size would be 8. A simple way to remember the minimum block size is to sum the treatment group ratio and multiply it by 2.
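For readers who want to experiment, the rule of thumb can be captured in a couple of lines of Python (a minimal sketch for illustration, not part of any guideline):

```python
# Minimum block size rule of thumb: sum the allocation ratio, then
# multiply by 2.
def min_block_size(ratio):
    """ratio: list of integers, e.g. [1, 1] for 1:1 or [3, 1] for 3:1."""
    return 2 * sum(ratio)

print(min_block_size([1, 1]))  # 4
print(min_block_size([3, 1]))  # 8
print(min_block_size([2, 1]))  # 6
```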
Choice of block size gets tricky when stratification is imposed on the randomization. The more strata, the greater the risk of imbalance in the treatment groups, because each stratum can end enrollment with an incomplete block. For example, consider the risk of imbalance in a 2-treatment study with a 1:1 ratio, a block size of 4, a sample size of 100, and randomization stratified across 10 sites. In the extreme, if the randomization ends after the first two slots are filled in an AABB permutation within each of the 10 sites, each site carries two extra A assignments, and the final treatment group totals would be 60 in treatment group A and 40 in treatment group B. Although the loss in study power resulting from this imbalance is minimal, there are other situations where the anomaly could be more serious. For instance, if an interim analysis occurs after 25% of the enrollment completes and the treatment group totals are off by 10, it would handicap the efforts of the data monitoring committee to interpret the results. An imbalance of this magnitude in the treatment group totals could also raise trial credibility questions with journal or regulatory reviewers. Considering that the integrity of clinical research is at risk on many fronts, methods to minimize treatment group imbalance are important tools for reducing negative perceptions about the integrity of the study.
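The worst case described above, where every site stops two slots into an AABB permutation, can be checked directly. The sketch below assumes equal enrollment per site, a 1:1 ratio, and leftover slots of no more than half a block:

```python
def worst_case_totals(n_subjects, n_sites, block_size=4):
    """Worst-case (A, B) totals for a 1:1 blocked randomization
    stratified by site, when every site's incomplete final block is
    filled entirely with A's."""
    per_site = n_subjects // n_sites
    leftover = per_site % block_size      # slots in the incomplete block
    half = (per_site - leftover) // 2     # complete blocks are balanced
    return (half + leftover) * n_sites, half * n_sites

print(worst_case_totals(100, 10))  # (60, 40)
```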
A straightforward remedy to the situation presented above is to NOT over-stratify the randomization. Or, when stratification on several factors is warranted, we recommend Dynamic Randomization, which has been shown to have optimal treatment group balancing results. However, roughly 80% of the trials in our database of over 2,000 randomized studies used blocked randomization. Even with straightforward site stratification the imbalances can be problematic, as the example above illustrates. Thus, it's worth thinking more creatively about blocked randomization and considering some novel approaches.
We decided to look more closely at the set of permutations for a given treatment group ratio/block size combination and study the effects on predictability and imbalance if we remove the permutations most likely to cause imbalance. The term “predictability” refers to the probability an investigator could predict the next assigned treatment. For a more complete description of what we mean by predictability please see Shum, Hamilton, and Lo (2016), or Blackwell and Hodges (1957) for a detailed explanation. To avoid getting too technical, let’s look at a simple example to illustrate what we’re proposing. Take a 2 treatment study with a 2:1 ratio of treatment groups A to B. The minimum block size would be 6. There are 15 permutations in this set, shown in Figure 1.
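The full set of 15 permutations can be reproduced with a few lines of Python (the lexicographic ordering here may differ from the numbering used in Figure 1):

```python
from itertools import permutations

# All distinct orderings of a 2:1 block of size 6 (four A's, two B's).
blocks = sorted(set(permutations("AAAABB")))

print(len(blocks))          # 15
print("".join(blocks[0]))   # AAAABB
print("".join(blocks[-1]))  # BBAAAA
```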
The first and last permutations (1 and 15) are the most likely to create imbalance in treatment group totals. For instance, using the stratified example above, if the randomization ends after the first four slots are filled in the AAAABB permutation within each of the 10 sites, the final treatment group totals would be 80 in treatment group A and 20 in treatment group B, a 4:1 ratio instead of the desired 2:1. By removing permutation 1 we eliminate that scenario and reduce the potential maximum imbalance of the example to 70 in group A and 30 in group B, cutting the realized ratio from 4:1 to about 2.3:1, a reduction of 42%.
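The site-level arithmetic behind these totals is easy to verify. The sketch below treats each site as one complete block of six plus the first four slots of a second block, and uses AAABAB as a representative next-worst permutation once AAAABB is removed:

```python
# One site enrolls 10 subjects: a complete 2:1 block of 6 followed by
# the first four slots of a second block.
def site_counts(second_block):
    assigned = "AAAABB" + second_block[:4]
    return assigned.count("A"), assigned.count("B")

a, b = site_counts("AAAABB")   # worst case: partial block is AAAA
print(a * 10, b * 10)          # 80 20 across 10 sites
a, b = site_counts("AAABAB")   # next-worst partial block is AAAB
print(a * 10, b * 10)          # 70 30 across 10 sites
```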
The previous example occurs with very low probability, but with such costly consequences it makes most statisticians nervous anyway. To get a better idea of the typical benefit of our proposed method, we used Monte Carlo simulated samples and calculated the average reduction in imbalance when permutations 1 and 15 were eliminated from the set. It turned out that the average gain in balance was almost 9%, with only a 4% increase in predictability. If we also removed permutations 2, 3, 10, and 14, we achieved an average gain in balance of almost 22%, with only a 10% increase in predictability. We found similarly dramatic results for a study design with 2 treatments and a 4:1 ratio (block size of 10), which has 45 treatment group order permutations. When we deleted the 12 permutations most likely to create imbalance, we reduced average imbalance by 18.2%, with less than a 6% increase in predictability.
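Our simulations also track predictability, but the balance side can be sketched with a simplified Monte Carlo loop. In this illustration, imbalance is taken to be the absolute deviation of the final A total from its 2:1 target, and the permutation indices assume lexicographic order, which may differ from Figure 1:

```python
import random
from itertools import permutations

BLOCKS = sorted(set(permutations("AAAABB")))  # the 15 permutations

def avg_imbalance(n_sites=10, per_site=10, banned=(), trials=2000, seed=1):
    """Average |final A count - 2:1 target| over simulated trials,
    drawing each site's permuted blocks from the allowed set."""
    rng = random.Random(seed)
    allowed = [b for i, b in enumerate(BLOCKS) if i not in banned]
    n = n_sites * per_site
    total = 0.0
    for _ in range(trials):
        n_a = 0
        for _ in range(n_sites):
            filled = 0
            while filled < per_site:
                block = rng.choice(allowed)
                take = min(len(block), per_site - filled)
                n_a += block[:take].count("A")
                filled += take
        total += abs(n_a - 2 * n / 3)
    return total / trials

full = avg_imbalance()
trimmed = avg_imbalance(banned={0, 14})  # drop AAAABB and BBAAAA
print(round(full, 2), round(trimmed, 2))
```

With the two extreme permutations banned, the simulated average imbalance comes out smaller than with the full set, in line with the gains reported above.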
A recent study with one of our clients illustrates how we can utilize this strategy. The randomization design was 3 treatments with a 1:1:1 ratio (block size of 6) stratified by investigational center. This randomization design has 90 treatment order permutations. Based on the enrollment projections by center we decided we needed 360 randomization slots per center, or 60 blocks. Instead of generating all possible permutations and randomly eliminating 30 blocks per center, we purposefully eliminated the 30 permutations from each stratum (center) that were most likely to cause imbalance, ensuring minimal imbalances in the treatment group totals at the end of enrollment.
In general, the larger the number of possible permutations in a given design, or the greater the treatment group ratio, the more room there is to eliminate blocks most likely to cause treatment group imbalances. We use Monte Carlo simulations to find the average imbalance with specific block combinations and then calculate their associated predictability. We can then choose the combination of blocks that maximizes balance and minimizes predictability for a given study design. This proposal is novel in the literature. It is a straightforward way to increase the chances of treatment group balances with minimal impact on predictability.
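One simple proxy for "most likely to cause imbalance" is to rank each permutation by how far its worst prefix drifts from the target ratio. This is our own heuristic illustration, not the full simulation described above, but for the 2:1, block-size-6 design it singles out the same two extreme permutations:

```python
from itertools import permutations

blocks = sorted(set(permutations("AAAABB")))

def max_prefix_drift(block):
    """Largest deviation of any prefix's A count from the 2:1 target."""
    return max(abs(block[:k].count("A") - 2 * k / 3)
               for k in range(1, len(block) + 1))

ranked = sorted(blocks, key=max_prefix_drift, reverse=True)
worst = ["".join(b) for b in ranked[:2]]
print(worst)  # the two extreme permutations, AAAABB and BBAAAA
```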
The conversation with my statistician colleague about the limited time and desire to think about blocking strategies points out the importance of having a consulting resource available. Fortunately, study statisticians can rely on the IXRS/RTSM statistician to identify and recommend smarter ways to “build blocks” when randomizing a study.
Scott Hamilton, PhD, is Principal Biostatistician at Bracket; Carol Shum is a SAS Programmer at Gilead Sciences.