Highlights and Reflections from ISCB 2025
- xiaotan20
- Oct 21
- 3 min read
This summer, I had the opportunity to attend the 46th Annual Conference of the International Society for Clinical Biostatistics (ISCB) in Basel. It was an incredible week of learning and networking with biostatisticians from around the world. As my first academic conference, the experience was both exciting and a little overwhelming – with multiple parallel sessions running at once, there was so much to explore and absorb.
The two keynote presentations particularly stood out to me.
Prof. Ian Marschner [University of Sydney] presented on confidence distributions, which provide a frequentist analogue of Bayesian posterior distributions without requiring the specification of a prior. I found it fascinating how this approach bridges frequentist and Bayesian ideas, providing a framework to quantify the strength of treatment effects in clinical trials through interpretable confidence statements.
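To make the idea concrete, here is a small sketch of my own (not from the talk), assuming an approximately normal treatment-effect estimator with a made-up estimate and standard error: the whole confidence distribution is a single curve from which confidence intervals at any level and one-sided confidence statements can be read off, with no prior required.

```python
# A minimal sketch of a confidence distribution for a treatment effect,
# assuming an approximately normal estimator; the estimate and standard
# error are made up for illustration.
from scipy.stats import norm

theta_hat, se = 1.8, 0.7  # hypothetical point estimate and standard error

def confidence_distribution(theta):
    """Confidence assigned to the statement 'true effect <= theta'."""
    return norm.cdf((theta - theta_hat) / se)

# Posterior-like summaries, but with no prior involved:
print(1 - confidence_distribution(0.0))         # confidence that the effect exceeds 0
print(norm.ppf([0.025, 0.975], theta_hat, se))  # 95% confidence interval endpoints
```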
Prof. Erica Moodie [McGill University] illustrated how machine learning can be incorporated into classical statistical methods, such as Q-learning and inverse probability weighting, to learn individualised treatment strategies that appropriately address confounding and yield causally valid conclusions. I found it inspiring to see how these approaches demonstrate the potential of machine learning to advance precision medicine when applied within a rigorous, causally grounded framework.
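As a toy illustration of the single-stage version of this idea (my own sketch on simulated data, not an example from the talk), a Q-learning rule can be obtained by fitting any regression model for the outcome given covariates and treatment, and then recommending whichever treatment gives the better predicted outcome:

```python
# A toy single-stage Q-learning rule with an off-the-shelf ML regressor,
# fitted to simulated data; my own illustration, not an example from the talk.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 2))                               # baseline covariates
a = rng.integers(0, 2, size=n)                            # randomised treatment
y = x[:, 0] + a * (1.5 * x[:, 1]) + rng.normal(size=n)    # benefit depends on the 2nd covariate

# Q-function: expected outcome given covariates and treatment
q = RandomForestRegressor(n_estimators=200, random_state=0)
q.fit(np.column_stack([x, a]), y)

# Individualised rule: recommend the treatment with the higher predicted outcome
q1 = q.predict(np.column_stack([x, np.ones(n)]))
q0 = q.predict(np.column_stack([x, np.zeros(n)]))
rule = (q1 > q0).astype(int)
print("Proportion recommended treatment:", rule.mean())
```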
Beyond the keynotes, several sessions on causal inference, estimands, machine learning and clinical trial simulations offered valuable insights. Some highlights and key takeaways included:
The Defining Estimands for Clinical Trials session featured several insightful talks from University College London, including Prof. Ian White on the estimand for the proportional odds model, Joanna Hindley on estimands for multi-episode trials and Dongquan Bi on estimands in cluster randomised trials. These talks highlighted that selecting appropriate estimands is rarely straightforward and requires careful consideration of both the clinical context and the trial design.
Dr Wouter A.C. van Amsterdam [University Medical Center Utrecht] discussed key pitfalls in traditional predictive model evaluation, demonstrating how accurate models can inadvertently create self-fulfilling prophecies that do not translate into better clinical decision-making. He introduced prediction-under-intervention models, which directly link predictions to treatment decisions, reinforcing that high predictive accuracy alone is not enough and that models should also be evaluated for the value they add to decision-making.
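A rough sketch of the distinction, again my own toy example on simulated randomised data rather than one of the speaker's: a conventional model predicts risk under whatever treatment patients actually received, whereas a prediction-under-intervention model predicts risk with the treatment explicitly set to each option, which is what a treatment decision actually needs.

```python
# A rough contrast between a conventional risk model and a
# prediction-under-intervention model, on simulated randomised data
# (my own toy example, not one of the speaker's).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=(n, 3))
a = rng.integers(0, 2, size=n)                  # randomised treatment
logit = 0.8 * x[:, 0] - 1.0 * a                 # treatment lowers the risk
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Conventional model: risk under whatever treatment patients happened to receive
risk_model = LogisticRegression().fit(x, y)

# Prediction under intervention: include treatment, then predict with it set to 0 or 1
pui_model = LogisticRegression().fit(np.column_stack([x, a]), y)
risk_if_untreated = pui_model.predict_proba(np.column_stack([x, np.zeros(n)]))[:, 1]
risk_if_treated = pui_model.predict_proba(np.column_stack([x, np.ones(n)]))[:, 1]

print("Conventional risk, patient 0:", risk_model.predict_proba(x[:1])[0, 1].round(3))
print("Risk if untreated vs treated, patient 0:",
      risk_if_untreated[0].round(3), risk_if_treated[0].round(3))
```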
Konstantinos Sechidis [Novartis] delivered a clear, engaging walkthrough of Shapley Additive Explanations (SHAP), using a taxi-fare splitting analogy to illustrate average marginal contributions. He also shared practical recommendations on using SHAP values to identify predictive biomarkers via conditional average treatment effect (CATE) modelling. I found it insightful how explainable machine learning can bridge machine learning models and clinical understanding.
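The analogy stuck with me, so here is my own small worked version of it, with made-up fares: each rider's Shapley value is their marginal contribution to the total fare, averaged over every order in which they could have joined the cab, which is exactly how SHAP attributes a prediction to individual features.

```python
# The taxi-fare analogy made concrete: each rider's Shapley value is their
# marginal contribution to the fare, averaged over every order in which they
# could have joined the cab. The fares are made up for illustration.
from itertools import permutations

riders = ["A", "B", "C"]
fare = {frozenset(): 0, frozenset("A"): 6, frozenset("B"): 12, frozenset("C"): 30,
        frozenset("AB"): 12, frozenset("AC"): 30, frozenset("BC"): 30,
        frozenset("ABC"): 30}   # cost of the trip serving each group of riders

shapley = {r: 0.0 for r in riders}
orders = list(permutations(riders))
for order in orders:
    on_board = frozenset()
    for r in order:
        shapley[r] += (fare[on_board | {r}] - fare[on_board]) / len(orders)
        on_board = on_board | {r}

print(shapley)   # {'A': 2.0, 'B': 5.0, 'C': 23.0}, which sums to the full fare of 30
```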
Dr Kim May Lee [King’s College London] introduced the OCTAVE framework, a structured, step-by-step process for planning, implementing and validating clinical trial simulations. I found the framework particularly useful as it offers clear guidance on best practices for conducting rigorous, well-planned simulations to support the design and analysis of clinical trials.

Ellie Van Vogt [ICTU] presented their simulation study on causal machine learning for CATE estimation in randomised controlled trials, addressing challenges such as minimum sample size requirements and the handling of missing data. The results showed that CATE estimators based on machine learning can still perform well at smaller sample sizes, while also recognising the ongoing challenges of testing and validating these methods in practice.
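For context, here is a minimal sketch of the kind of estimator involved, a simple "T-learner" fitted to simulated trial data of my own; the study itself compared a broader set of causal machine learning approaches.

```python
# A minimal example of one CATE estimator: a simple "T-learner" fitted to
# simulated trial data (my own sketch; the study compared a broader set of
# causal machine learning methods).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
n = 400                                         # a deliberately small trial
x = rng.normal(size=(n, 4))
a = rng.integers(0, 2, size=n)                  # randomised treatment
y = x[:, 0] + a * (1 + x[:, 1]) + rng.normal(size=n)

# Fit a separate outcome model in each arm, then contrast their predictions
m1 = GradientBoostingRegressor().fit(x[a == 1], y[a == 1])
m0 = GradientBoostingRegressor().fit(x[a == 0], y[a == 0])
cate_hat = m1.predict(x) - m0.predict(x)        # estimated individual treatment effects
print("Estimated CATE range:", cate_hat.min().round(2), "to", cate_hat.max().round(2))
```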

I also presented my poster on a systematic review of machine learning applications in randomised controlled trials. Most of the attendees who stopped by shared an interest in machine learning and were keen to learn about both the findings and the next steps of the review. Several conversations focused on the review's limitations, such as the restricted definition of 'analysis' and the difficulty of assessing the quality of the included studies given the large number involved. One particularly valuable conversation was with a PhD student from ETH Zurich, who had once planned a similar review but decided not to proceed after facing the same challenge I encountered – the overwhelming number of studies to screen. Exchanging experiences and hearing how she had initially approached the task gave me a fresh perspective on my own process.
Throughout the week, there were many opportunities to connect with others, including the student gathering, conference dinner and the Early Career Biostatisticians’ Day. I enjoyed meeting other pre-doctoral fellows and PhD students, learning about their research, and hearing about their paths into biostatistics. Many of these introductions came through Ellie and Dong, whose support I greatly appreciated.
As someone attending my first conference, I found it challenging at times to initiate conversations, especially with more senior statisticians. However, it was an invaluable first step, and I hope to build more confidence in networking at future events. I also learned that reviewing abstracts in advance to identify key sessions and posters of interest helps enormously in navigating the packed schedule and getting the most out of each session.
Overall, ISCB 2025 was an inspiring and rewarding first conference experience. I left Basel with new insights, ideas and motivation to apply what I learned to my own research. I look forward to future opportunities to engage with the wider biostatistics and clinical trials community.