Estimands & Estimation Within Clinical Trials
Ensuring trials answer the questions of interest: Implementation of the estimand framework
Team: Suzie Cro, Victoria Cornelius, Brennan Kahan, Ian White, James Carpenter, Richard Emsley, Beatriz Goulao
When evaluating the effect of a treatment in a clinical trial, different questions can be addressed. For example, does the treatment work when it is taken as prescribed? Or does it work regardless of whether all of it is received? The answers to these questions may lead to different conclusions about treatment benefit. It is therefore important to have a clear understanding of exactly what treatment effect a trial intends to demonstrate, referred to as the ‘estimand’. Trial design, conduct and analysis can then be aligned to address this. There is a need to implement the estimand framework introduced in the ICH E9 (R1) addendum across clinical trial units.
We are reviewing current practice on the use of estimands in trials and developing a workshop for clinical trial statisticians and clinicians on implementation of the framework, supported by the MRC NIHR TMRP. The aim is to show trialists how to use the estimand framework by providing implementation tools, thereby increasing uptake of the framework and ensuring trials are designed and analysed to answer the questions of interest.
Accessible statistical methods to determine treatment effects that matter to patients in randomised controlled trials
Team: Suzie Cro, Victoria Cornelius, Ian White, James Carpenter
The calculation of different types of treatment effects, including those that are more relevant for patients, has recently been brought to the forefront with the publication of an addendum to international drug trial guidelines (ICH E9 (R1): addendum on estimands and sensitivity analysis in clinical trials to the guideline on statistical principles for clinical trials). However, the guidance does not provide statistical methods for achieving this. Whilst some statistical methods have been proposed for calculating more patient-centred effects, these are limited and not widely used.
We are conducting a programme of research to develop and evaluate accessible statistical methods to estimate treatment effects in trials that are of greater relevance to patients, such as the effect of treatment if taken as intended. Alongside statistical methods development, we are working with public partners to improve the communication of statistical information from trials, so that trials provide information that is both more relevant and more understandable.
How should compliance be defined in smartphone app trials?
Team: Jack Elkes, Suzie Cro, Victoria Cornelius
The need to validate the use of smartphone apps and other digital technologies in healthcare is rapidly growing. A randomised controlled trial remains the gold standard approach, and typically an intention-to-treat analysis will be performed to determine whether the intervention is beneficial.
However, the use of an app is known to decline substantially over time, and an additional question of interest is how effective the app is in those who use it (‘compliers’). Unlike drug and behavioural interventions, compliance with digital interventions is more complex to define. There are currently no recommended approaches to defining participant compliance with smartphone app use in a trial. Such a definition is needed when calculating the benefit of treatment receipt (using complier causal inference methods).
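To illustrate the complier causal inference idea, here is a minimal sketch of the standard Wald (instrumental variable) estimator of the complier average causal effect (CACE) on simulated trial data. The sample size, 60% compliance rate, and effect size are all hypothetical, and real analyses would need a defensible definition of app compliance first:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated app trial with one-sided non-compliance: controls cannot
# access the app, and only some intervention participants use it.
n = 1000
z = rng.integers(0, 2, n)                # randomised allocation (instrument)
complier = rng.random(n) < 0.6           # 60% would use the app if offered
d = z * complier                         # app actually used
y = rng.normal(0, 1, n) + 1.5 * d        # true effect of app use = 1.5

# Intention-to-treat effect, diluted by non-compliance.
itt = y[z == 1].mean() - y[z == 0].mean()

# Proportion of compliers (difference in app use between arms).
compliance = d[z == 1].mean() - d[z == 0].mean()

# Wald / IV estimator: scale the ITT effect up to the compliers.
cace = itt / compliance
```

Under randomisation and standard IV assumptions, `cace` recovers the effect of actually using the app among compliers, whereas `itt` estimates the (smaller) effect of being offered it.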
We are conducting research to identify the different ways patients access the app based on key metrics: duration in app, pages accessed, and time of day accessed. Principal component analysis (PCA) will be used to identify clusters of different user profiles, which in turn will help us develop a strategy for defining compliance with an app.
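A minimal sketch of this kind of analysis, using simulated usage metrics with a hand-rolled PCA (via SVD) and k-means clustering; the two user profiles, their numbers, and the choice of k are hypothetical, and the project's actual pipeline may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical usage metrics for 200 users: session duration (minutes),
# pages accessed, and hour of day. Two simulated profiles: brief evening
# "browsers" and longer daytime "engagers".
browsers = np.column_stack([rng.normal(3, 1, 100),     # short sessions
                            rng.normal(4, 1.5, 100),   # few pages
                            rng.normal(20, 2, 100)])   # evening use
engagers = np.column_stack([rng.normal(15, 3, 100),    # longer sessions
                            rng.normal(12, 3, 100),    # more pages
                            rng.normal(11, 2, 100)])   # daytime use
X = np.vstack([browsers, engagers])

# Standardise, then PCA via singular value decomposition.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt.T                      # principal component scores
explained = s**2 / np.sum(s**2)        # variance explained per component

def kmeans(data, k, iters=50, seed=1):
    """Basic k-means: assign points to nearest centre, update centres."""
    r = np.random.default_rng(seed)
    centres = data[r.choice(len(data), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((data[:, None, :] - centres) ** 2).sum(axis=2),
                           axis=1)
        centres = np.array([
            data[labels == j].mean(axis=0) if np.any(labels == j)
            else centres[j]
            for j in range(k)
        ])
    return labels

# Cluster on the first two components to recover the user profiles.
labels = kmeans(scores[:, :2], k=2)
```

The recovered clusters could then be inspected against the raw metrics to propose a working definition of app compliance.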
Machine Learning methods for subgroup analysis and estimation of treatment effect heterogeneity
Team: Ellie Van Vogt, Suzie Cro, Victoria Cornelius
Traditional methods for subgroup analysis include univariate analysis of interactions coupled with thresholding, or restricting and repeating the analysis in pre-defined subgroups. The availability of historical trial data and advances in machine learning mean that we can now take a data-driven approach to subgroup analysis and search for characteristics that define heterogeneous treatment effects.
We use causal machine learning approaches to estimate the conditional average treatment effect (CATE) in historical RCTs. We employ the causal forest for this estimation and examine how the CATE varies over predictors to determine how to define subgroups with heterogeneous effects. Meta-learning algorithms combine several machine learning models, with the individual or average treatment effect as the quantity the models are trained to estimate.
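To illustrate the meta-learning idea, here is a minimal T-learner sketch on simulated RCT data: separate outcome models are fitted in each arm, and their predictions are contrasted to estimate the CATE for every participant. The linear models, simulated covariates, and effect sizes are illustrative only; the project itself uses causal forests rather than linear regression:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated RCT: x1 is a true effect modifier, x2 is noise.
n = 2000
X = rng.normal(size=(n, 2))
t = rng.integers(0, 2, n)                    # randomised treatment
tau = 1.0 + 2.0 * X[:, 0]                    # true CATE, varies with x1
y = X @ np.array([1.0, 0.5]) + tau * t + rng.normal(size=n)

def fit_ols(X, y):
    """Least-squares fit with an intercept; returns coefficients."""
    Xd = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return beta

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

# T-learner: one outcome model per arm, CATE = difference in predictions.
beta1 = fit_ols(X[t == 1], y[t == 1])
beta0 = fit_ols(X[t == 0], y[t == 0])
cate_hat = predict(beta1, X) - predict(beta0, X)

# A candidate subgroup: participants with above-median estimated benefit.
high_benefit = cate_hat > np.median(cate_hat)
```

Examining how `cate_hat` varies over the covariates (here it tracks x1) is what suggests candidate subgroups; a causal forest plays the same role with far more flexible outcome models.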
The post-hoc identification of “super-responders” in trials with positive results, or of positive responders in trials with null results, can inform recommendations for future research.