
Canadian adventures & CONSORT Harms

Rachel Phillips

Updated: Nov 11, 2019

Are you involved in the design, running, analysis or reporting of randomised controlled trials (RCTs)? "Yes.”


Have you heard of CONSORT? “Of course.”


What about the CONSORT extension for harms? … Okay, the silence is almost deafening. If you answered yes, then I apologise and congratulate you.


But have you ever used it? If you answered yes to this one: who are you? Let’s talk. Because I’m afraid to tell you you’re in the minority. But it’s time for that to change.


Let’s rewind. For those of you who don’t know, the CONSORT statement was first published in 1996 and was designed to alleviate the problems arising from inadequate reporting of RCTs. You can read more about CONSORT here: http://www.consort-statement.org/


There have since been two updates, one in 2001 and another in 2010, and I think it’s fair to say the statement has been widely accepted and adopted into current practice by the trials community.


The Harms extension was developed in 2004 with a similar remit to the original checklist, but this time focusing on a minimum standard for reporting harms-related data arising from RCTs. You can read more about the Harms extension here: http://www.consort-statement.org/extensions/overview/harms


Sadly, despite harms being such an important component of trials (I would argue on a par with the data obtained on benefits), the evidence shows that uptake has been minimal and reporting of harms has remained sub-optimal. We reported on this in a BMJ Open article that you can find here: https://bmjopen.bmj.com/content/9/2/e024537.abstract.


Reasons for this minimal uptake are perhaps less explored but it’s clear a change is needed. As luck would have it, a group of internationally renowned researchers agree and are currently working on an update of the CONSORT Harms extension.




The Delphi process to complete the update started in 2017 and has so far been through two rounds of surveys targeting the clinical trials community. Delphi surveys are used to collect feedback and opinions from a group to help develop, or in this case update, a framework or guideline. You can read more on Delphi surveys and how to conduct them here: https://www.involve.org.uk/resources/methods/delphi-survey


So, what’s my involvement in CONSORT Harms? In 2019, I was invited to join the group as they embarked on the consensus meeting. This is the final stage of the Delphi process where the team gather to compile and incorporate the feedback they have received from the earlier stages.


Preconceptions


I was initially unsure about whether to accept the invite. For one, trekking to Canada for a 48-hour meeting seems a little absurd. But I was also worried about what a second-year PhD student might be able to offer. The existing team is made up of senior academics with far more experience than me. We’re talking about renowned global researchers who are leaders in the field of trials, some of whom have been involved in CONSORT from day dot. So what could I possibly add to this discussion?


But I decided that an opportunity to sit in the room, even if only to observe and learn from this group, seemed too good to turn down. Did it matter if I didn’t have the confidence or insight to contribute? Perhaps not.


So what did I observe?


The first day started with introductions and an overview of everyone’s interest in this area. The group comprised systematic reviewers, epidemiologists, statisticians and a patient representative. There were only two of us from the UK, so straight away I felt that I might have more to contribute than I had initially thought.


Daniela Junqueira (the project coordinator) gave a concise overview of the evidence examining use of CONSORT Harms (see above – it’s barely used!) and an update on where the team are in the Delphi process. We then ploughed straight into scrutinising each item of the old checklist, alongside a revision put together by the core team (Daniela Junqueira, Liliane Zorzela and Sunita Vohra) and a brief overview of key points from the first two rounds of the Delphi survey. And then the discussions began in earnest.


Terminology featured heavily across the two days. The first question raised was whether we should be talking about harms or risks. There were strong opinions on this. If we think that “‘risk’ is what participants might face at the start of a trial but ‘harms’ accurately reflects what has happened to participants when you report the results at the end”, then is harm the best fit? Observing this discussion, it felt like this was territory that had been covered before and that many did not think needed to be revisited. I could see the argument from both sides, and whilst a consensus was reached, I think this might not be the last time we hear this debate.


Should we think of harms as pre-specified vs. non-pre-specified, anticipated vs. unanticipated, systematic vs. unsystematic, or solicited vs. unsolicited? Can non-pre-specified be anticipated? Can non-pre-specified be systematic? I’ve struggled with this terminology too and have, to date, been using pre-specified (events that are listed in advance as harm outcomes) and emerging (events that have not been pre-specified). This discussion became clearer to me when I thought about the terminology in terms of whether we were talking about outcome specification, data collection, reporting or analysis. For example, you might not pre-specify an event as an outcome at the start of a trial, but through systematic examinations you identify a harmful effect. This would be a non-pre-specified outcome that was systematically collected. Again, there were some strong opinions on this, but I’m afraid you’ll have to wait for the checklist for the final decision.


Deciding what should be included in the primary report and what analysis should be performed also ignited some lively debate. For me, the most controversial topic raised here was the question “Do we need a separate safety report?”. I think not, and whilst I am inclined to agree with the opinion that a more in-depth look at harms would be useful, I think the primary results paper should strike a balance, summarising both benefits and harms in one place. We need to ensure that interested parties, including clinicians, prescribers, trialists and patients, can get an overview of both benefits and harms in one place. There was also a lot of discussion around analysis and the information we expect to be reported, most of which I hope will feature in the explanation and elaboration section to be drafted post-meeting.


After a productive but exhausting two days, we were done and it seemed a consensus had been reached. In my opinion, this has to be attributed to the excellent chairing of Sunita and the relentless hours of work put in by both Daniela and Liliane. Sunita was able to keep us on track without it ever feeling like we were being rushed. Her skill at noticing when discussions reached saturation without resolution, and her ability to move us on without cutting us short, were second to none. Her promise of revisiting such discussions with fresh eyes at a later date was sometimes the only way to move forward, and inevitably helped us reach a consensus.


Did I contribute?


Yes. Having spent the last 20+ months working on harms, I was able to bring a unique insight to the room. It seems no one else there quite lives and breathes harms as I do. It also gave me the opportunity to connect with a rare breed of researchers that care about harms in RCTs and hopefully opened up opportunities for international collaborations. Personally, this trip was definitely worth it.


Where are we now?


There’s still a long way to go before the update is ready to launch. A lot of the knottier issues consigned to the explanation and elaboration section are still to be thrashed out. But I’m optimistic that we should be seeing something from the team in 2020.


Will it make the difference we so desperately need?


The honest answer: I’m not sure. There’s still a lot of uncertainty in my mind. Did we go far enough? Could we have been more prescriptive? Is guidance really enough to change practice? Does a standard need to be mandated? Adoption by the trials community depends on so many things. Key to it all, I believe, is the need to communicate the importance of harms data from RCTs. We need to start thinking about harms as equal in importance to benefits. A carefully planned dissemination strategy will help raise awareness, and the support of journal editors could help us demand better from trialists. Whether we’ve done enough to produce a monumental shift to improve harms reporting, only time will tell.
