
Adaptive Designs for Clinical Trials of Drugs and Biologics Guidance for Industry

I. INTRODUCTION AND SCOPE

This document provides guidance to sponsors and applicants submitting investigational new drug applications (INDs), new drug applications (NDAs), biologics license applications (BLAs), or supplemental applications on the appropriate use of adaptive designs for clinical trials to provide evidence of the effectiveness and safety of a drug or biologic.2 The guidance describes important principles for designing, conducting, and reporting the results from an adaptive clinical trial. The guidance also advises sponsors on the types of information to submit to facilitate FDA evaluation of clinical trials with adaptive designs, including Bayesian adaptive and complex trials that rely on computer simulations for their design.

2: The term drug as used in this guidance refers to both human drugs and biological products unless otherwise specified.

The primary focus of this guidance is on adaptive designs for clinical trials intended to support a demonstration of the effectiveness and safety of drugs. The concepts contained in this guidance are also useful for early-phase or exploratory clinical trials as well as trials conducted to satisfy post-marketing commitments or requirements.

In general, FDA’s guidance documents do not establish legally enforceable responsibilities. Instead, guidances describe the Agency’s current thinking on a topic and should be viewed only as recommendations, unless specific regulatory or statutory requirements are cited. The use of the word should in Agency guidances means that something is suggested or recommended, but not required.

II. DESCRIPTION OF AND MOTIVATION FOR ADAPTIVE DESIGNS

A. Definition

For the purposes of this guidance, an adaptive design is defined as a clinical trial design that allows for prospectively planned modifications to one or more aspects of the design based on accumulating data from subjects in the trial.

B. Important Concepts

The following are descriptions of important concepts used in this guidance:

  • An interim analysis3 is any examination of data obtained from subjects in a trial while that trial is ongoing and is not restricted to cases in which there are formal between-group comparisons. The observed data used in the interim analysis can include one or more types, such as baseline data, safety outcome data, pharmacokinetic, pharmacodynamic or other biomarker data, or efficacy outcome data.

    3: The FDA guidance for industry E9 Statistical Principles for Clinical Trials (September 1998) defines an interim analysis as “any analysis intended to compare treatment arms with respect to efficacy or safety…” The current guidance uses a broader meaning for interim analysis to accommodate the wide range of analyses of accumulating data that can be used to determine trial adaptations.


  • A non-comparative analysis is an examination of accumulating trial data in which the treatment group assignments of subjects are not used in any manner in the analysis. A comparative analysis is an examination of accumulating trial data in which treatment groups are identified, either with the actual assigned treatments or with codes (e.g., labeled as A and B, without divulging which treatment is investigational).4 The terms unblinded analysis and blinded analysis are also sometimes used to make the distinction between analyses in which treatment assignments are and are not identified, respectively. We avoid the terms unblinded analysis and blinded analysis in this guidance because these terms can misleadingly conflate knowledge of treatment assignment with the use of treatment assignment in adaptation algorithms. An interim analysis can be comparative or non-comparative regardless of whether trial subjects, investigators, and other personnel such as the sponsor and data monitoring committee (DMC) have knowledge of individual treatment assignments or access to comparative results by treatment arm. For example, it is possible to include adaptations based on a non-comparative analysis even in open-label trials, but ensuring that the adaptations are completely unaffected by knowledge of comparative data presents additional challenges. The importance of limiting access to comparative interim results is discussed in detail in section VII. of this guidance.

    4: These definitions of the terms non-comparative analysis and comparative analysis refer to the setting of a multi-arm clinical trial. In a single-arm clinical trial, any analysis of accumulating trial data involves identification of treatment assignment information and, therefore, is considered comparable to a comparative analysis for the purposes of this guidance.


  • The term prospective, for the purposes of this guidance, means that the adaptation is planned, and its details are specified, before any comparative analyses of accumulating trial data are conducted. In nearly all situations, potential adaptive design modifications should be planned and described in the clinical trial protocol (and in a separate statistical analysis plan) prior to initiation of the trial.


  • This guidance distinguishes between those trials that are intended to provide substantial evidence of effectiveness and other trials, termed exploratory trials.5 This distinction depends on multiple features of a clinical trial, such as the clinical relevance of the primary endpoint, quality of trial conduct, rigor of control of the chance of erroneous conclusions, and reliability of estimation.

    5: A variety of terms have been used to describe different kinds of clinical trials, such as phase 1, phase 2, and phase 3 (21 CFR 312.21); pivotal; registration; and confirmatory (FDA guidance for industry E9 Statistical Principles for Clinical Trials (September 1998)). These terms will not be used in this guidance.


  • A fixed sample trial is a clinical trial with a targeted total sample size, or a targeted total number of events,6 that is specified at the design stage and not subject to prospectively planned adaptation.

    6: In settings where the primary outcome of interest is the time to event (such as death), the statistical power of the trial is determined by the total number of observed events rather than the sample size.


  • A non-adaptive trial is a clinical trial without any prospectively planned opportunities for modifications to the design.


  • Bias is a systematic tendency for the estimate of treatment effect to deviate from its true value.


  • Reliability is the extent to which statistical inference from the clinical trial accurately and precisely evaluates the treatment effect.


  • A critical component of the demonstration of the effectiveness and, in some cases, safety of a drug is the test of a null hypothesis in a clinical trial. If the null hypothesis is rejected at a specified level of significance (typically a one-sided level equal to 0.025), with demonstration of a clinically meaningful effect of the drug, the evidence generally supports a conclusion of effectiveness. Sometimes, however, the null hypothesis is rejected even though the drug is ineffective. This is called a Type I error. Typically, there are multiple scenarios for which the null hypothesis is true. We will use the term Type I error probability to refer to the maximum probability of rejecting the null hypothesis across these scenarios.
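
Two of the concepts above, the Type I error probability defined in the preceding item and the role of the number of events noted in footnote 6, can be written out with standard notation that does not itself appear in this guidance. The Type I error probability is the maximum (supremum) probability of rejecting the null hypothesis over the set of scenarios in which the null hypothesis is true:

    \alpha_{\max} = \sup_{\theta \in \Theta_0} P_{\theta}(\text{reject } H_0),

and the conventional requirement of a one-sided significance level of 0.025 is the requirement that \alpha_{\max} \le 0.025. For time-to-event outcomes, the point in footnote 6 is often expressed through the Schoenfeld approximation for the required number of events D under 1:1 randomization (a standard approximation, not a formula given in this guidance):

    D \approx \frac{4\,(z_{1-\alpha} + z_{1-\beta})^{2}}{(\log \mathrm{HR})^{2}},

where HR is the hazard ratio the trial is designed to detect and z_{1-\alpha} and z_{1-\beta} are standard normal quantiles; the number of randomized subjects influences power only through the rate at which the D events accrue.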

C. Potential Advantages and Examples

Adaptive designs can provide a variety of advantages over non-adaptive designs. These advantages arise from the fundamental property of clinical trials with an adaptive design: they allow the trial to adjust to information that was not available when the trial began. The specific nature of the advantages depends on the scientific context and type or types of adaptation considered, with potential advantages falling into the following major categories:

  • Statistical efficiency: In some cases, an adaptive design can provide a greater chance to detect a true drug effect (i.e., greater statistical power) than a comparable non-adaptive design.7 This is often true, for example, of group sequential designs (section V.A.) and designs with adaptive modifications to the sample size (section V.B.). Alternatively, an adaptive design may provide the same statistical power with a smaller expected sample size8 or shorter expected duration than a comparable non-adaptive design. A brief simulation sketch illustrating this trade-off appears after this list.

    7: An example of a comparable non-adaptive design is a fixed sample design with sample size equal to the expected sample size of the adaptive design.

    8: The expected sample size is the average sample size if the trial were repeated many times.


  • Ethical considerations: There are many ways in which an adaptive design can provide ethical advantages over a non-adaptive design. For example, the ability to stop a trial early if it becomes clear that the trial is unlikely to demonstrate effectiveness can reduce the number of patients exposed to the unnecessary risk of an ineffective investigational treatment and allow subjects the opportunity to explore more promising therapeutic alternatives.


  • Improved understanding of drug effects: An adaptive design can make it possible to answer broader questions than would normally be feasible with a non-adaptive design. For example, an adaptive enrichment design (section V.C.) may make it possible to demonstrate effectiveness in either a given population of patients or a targeted subgroup of that population, where a non-adaptive alternative might require infeasibly large sample sizes. An adaptive design can also yield improved understanding of the effect of the experimental treatment. For example, a design with adaptive dose selection (section V.D.) may yield better estimates of the dose-response relationship, which may also lead to more efficient subsequent trials.


  • Acceptability to stakeholders: An adaptive design may be considered more acceptable to stakeholders than a comparable non-adaptive design because of the added flexibility. For example, sponsors might be more willing to commit to a trial that allows planned design modifications based on accumulating information. Patients may be more willing to enroll in trials that use response-adaptive randomization (section V.E.) because these trials can increase the probability that subjects will be assigned to the more effective treatment.
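
To make the statistical efficiency point concrete, and to illustrate the expected sample size defined in footnote 8, the following minimal simulation sketch compares a two-look group sequential design with a comparable fixed sample design. The maximum sample size, the assumed treatment effect, and the two-look O'Brien-Fleming-type stopping boundaries are illustrative assumptions chosen for this sketch, not values taken from this guidance.

    import numpy as np

    rng = np.random.default_rng(2024)
    n_sims = 200_000

    n_max = 500    # maximum sample size per arm (assumed)
    theta = 0.20   # assumed standardized treatment effect
    t = 0.5        # interim analysis at half the information

    # Approximate classical two-look O'Brien-Fleming-type boundaries for an overall
    # one-sided alpha of 0.025; a real trial would derive boundaries from its own
    # prespecified alpha-spending plan.
    z_interim, z_final = 2.797, 1.977

    # Canonical joint distribution of the interim and final z-statistics:
    # E[Z(1)] = theta * sqrt(n_max / 2) for a two-arm comparison of means with unit
    # variance, E[Z(t)] = E[Z(1)] * sqrt(t), and Z(1) is built from Z(t) plus an
    # independent increment.
    drift = theta * np.sqrt(n_max / 2)
    z_t = rng.normal(drift * np.sqrt(t), 1.0, n_sims)
    z_incr = rng.normal(drift * np.sqrt(1 - t), 1.0, n_sims)
    z_1 = np.sqrt(t) * z_t + np.sqrt(1 - t) * z_incr

    stop_early = z_t > z_interim                      # stop at interim for efficacy
    reject = stop_early | (z_1 > z_final)             # overall trial success
    n_used = np.where(stop_early, t * n_max, n_max)   # per-arm sample size actually used

    print(f"group sequential: power ~ {reject.mean():.3f}, expected n per arm ~ {n_used.mean():.0f}")
    print(f"fixed design:     power ~ {(z_1 > 1.960).mean():.3f}, n per arm = {n_max}")

Under assumptions of this kind, the group sequential design achieves nearly the same power as the fixed design while using a smaller expected sample size. Setting theta to 0 in the same sketch provides a check that the assumed boundaries keep the Type I error probability near the one-sided 0.025 level.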

The following examples of clinical trials with adaptive designs illustrate some of the potential advantages:

  • A clinical trial was conducted to evaluate eliprodil for treatment of patients suffering from severe head injury (Bolland et al. 1998). The primary efficacy endpoint was a three-category outcome defining the functional status of the patient after six months of treatment. There was considerable uncertainty at the design stage about the proportions of patients in the placebo control group who would be expected to experience each of the three different functional outcomes. An interim analysis was prespecified to update estimates of these proportions based on pooled, non-comparative data in order to potentially increase the sample size. This approach was chosen to avoid a trial with inadequate statistical power and therefore helped ensure that the trial would efficiently and reliably achieve its objective. The interim analysis ultimately led to a sample size increase from 400 to 450 patients.


  • PARADIGM-HF was a clinical trial in patients with chronic heart failure with reduced ejection fraction designed to compare LCZ696, a combination of the neprilysin inhibitor sacubitril and the renin-angiotensin system (RAS) inhibitor valsartan, with the RAS inhibitor enalapril with respect to risk of the composite endpoint of cardiovascular death or hospitalization for heart failure (McMurray et al. 2014). The trial design included three planned interim analyses after accrual of one-third, one-half, and two-thirds of the total planned number of events, with the potential to stop the trial for superior efficacy of LCZ696 over enalapril based on comparative results. The addition of interim analyses with stopping rules for efficacy reduced the expected sample size and expected duration of the trial while maintaining a similar probability of trial success, relative to a trial with a single analysis after observation of a fixed total number of events. PARADIGM-HF was stopped after the third interim analysis because the prespecified stopping boundary for compelling superiority of LCZ696 over enalapril had been crossed. The group sequential design therefore facilitated a more rapid determination of benefit than would have been possible with a fixed sample design.


  • To evaluate the safety and effectiveness of a nine-valent human papillomavirus (HPV) vaccine, a clinical trial with adaptive dose selection was carried out (Chen et al. 2015). The trial randomized subjects to one of three dose formulations of the nine-valent HPV vaccine or an active control, the quadrivalent HPV vaccine. An interim analysis was carried out to select one of the three dose formulations to carry forward into the second stage of the trial. The goal of the trial was to select an appropriate dose and confirm the safety and effectiveness of that dose in a timely manner.


  • STAMPEDE was a clinical trial designed to inform the practice of medicine and simultaneously evaluate multiple treatments in prostate cancer by comparing standard androgen deprivation therapy (ADT) with several different treatment regimens that combined ADT with one or more approved therapies (Sydes et al. 2012). The trial design included multiple interim analyses to potentially drop treatment arms that were not performing well based on comparative results. The use of a common control group, along with sequential analyses to potentially terminate treatment arms, allowed the simultaneous evaluation of several treatments more efficiently than could have been achieved in multiple individual trials.


  • PREVAIL II was a clinical trial conducted to evaluate ZMapp plus the current standard of care as compared to the current standard of care alone for treatment of patients with Ebola virus disease (PREVAIL II Writing Group et al. 2016; Dodd et al. 2016). The trial utilized a novel Bayesian adaptive design in which decision rules for concluding effectiveness at interim and final analyses were based on the Bayesian posterior probability that the addition of ZMapp to standard of care reduces 28-day mortality. Interim analyses were planned after every 2 patients completed follow-up, with no action taken until a minimum number of patients (12 per group) had been enrolled. The design also allowed the potential to add experimental agents as new treatment arms and the potential to supplement or replace the current standard of care arm with any agents determined to be efficacious during the conduct of the trial.
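
The decision rule in the PREVAIL II example rests on a posterior probability calculation of the kind sketched below. The mortality counts, the independent Beta(1, 1) priors, and the simple two-arm binomial model are assumptions made only for this illustration; the actual trial prespecified its own model, priors, and decision thresholds.

    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical 28-day mortality counts, for illustration only (not PREVAIL II data)
    deaths_soc, n_soc = 15, 40       # standard of care alone
    deaths_combo, n_combo = 9, 40    # ZMapp plus standard of care

    # Independent Beta(1, 1) priors on each arm's 28-day mortality rate (an assumption
    # of this sketch) lead to Beta posteriors; sample from each posterior.
    draws = 200_000
    p_soc = rng.beta(1 + deaths_soc, 1 + n_soc - deaths_soc, draws)
    p_combo = rng.beta(1 + deaths_combo, 1 + n_combo - deaths_combo, draws)

    # Posterior probability that adding ZMapp reduces 28-day mortality
    prob_benefit = (p_combo < p_soc).mean()
    print(f"P(mortality lower with ZMapp + standard of care | data) ~ {prob_benefit:.3f}")

A design of this type would compare such a posterior probability to prespecified interim and final thresholds, with the thresholds calibrated (for example, by simulation) so that the overall design has acceptable operating characteristics.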

D. Limitations

The following are some of the possible limitations associated with a clinical trial employing an adaptive design: