ME Awareness: The PACE Trial: How a Debate Over Science Empowered a Whole Community | 09 May 2019

 

Guest blog by Carolyn E. Wilshire

There are few in the ME/CFS community who have not heard of the PACE Trial. This £5 million clinical trial was designed to test the effectiveness of cognitive behavioural therapy (CBT) and graded exercise therapy (GET) as treatments for people with “chronic fatigue syndrome” (CFS).

Publications from the trial claimed that CBT and GET could not only ease the symptoms of CFS but were enough to get over a fifth of patients back to normal. Here we tell the story of how a group of patients teamed up with researchers and statisticians to scrutinise these bold claims.

How the story unfolded

Carolyn E. Wilshire.

Our story begins in 2007, when the plan for the PACE trial was first published. The trial would assess the effectiveness of CBT and GET using questionnaires.

Several members of the ME/CFS community were concerned. If the aim of treatment is to help patients regain some function, then surely we need to measure whether treatment increases their actual activity levels.

Some wrote letters, pointing out how easy it would be to measure activity levels, given that the trial participants were going to be issued with an activity tracker anyway. These concerns went unheeded. The trial went ahead as planned.

And indeed, several years later, when the results from the questionnaires were finally published, they seemed almost too good to be true. CBT and GET were reported to be “highly effective” treatments for CFS, leading to recovery in over a fifth of the trial participants.

One group of science-minded patients – including Tom Kindlon, Robert Courtney and Alem Matthees – carefully examined the trial publications, and came across some striking oddities. For example, midway through the trial, the researchers had altered their definitions of improvement and recovery.

Could these changes have worked to favour the trial’s hypotheses? The only way to know for sure was to examine the data. So, these intrepid patients wrote data requests, first to the researchers themselves, then to their institutions, and then finally as requests under the Freedom of Information (FOI) Act.

At every turn, they were met with refusals – and worse. They were accused of harassment, of vexatious behaviour. They were described by the trial investigators as part of a “highly organised, very vocal and very damaging group of individuals”, who wanted to expose and discredit the patients who participated in the trial. But they did not give up.

“Finally, in October 2015, after a protracted court case, Alem Matthees’ request was granted. A portion of the PACE trial data would be released to the public under FOI legislation.”

What our reanalysis discovered

The group teamed up with two senior statisticians and began to analyse the data. I offered my help as a psychology researcher. My part was easy. I had joined a group who already had a rich understanding of the trial and its problems, and who had already worked with the data.

“For example, the team had discovered that, if the trial definition of recovery had not been altered, then only 3-7% of patients would have been counted as recovered, and that CBT or GET did not significantly improve the chances of recovery over medical care alone.”

Now it was just a matter of examining the other trial measures, scrutinising all the various issues and arguments, and putting it all together for publication.

Tom Kindlon.

Any high-quality clinical trial begins with a written plan (or protocol) which sets out what will be measured and how the results will be analysed. Having this plan – and sticking to it – prevents researchers from “cherry picking” the best results from the trial and hiding the weaker ones. It helps keep everyone honest.

The protocol for the PACE trial set out how success would be measured: by counting how many participants met the trial’s definition of “improvement” at the end. To count as having improved overall, patients needed to report reduced levels of fatigue and an increase in physical capabilities.

However, the researchers did not stick to this plan. They changed the primary outcome measures midway through the trial.

In our reanalysis, we set out to see what the results would have looked like if the protocol had been followed. We found that, although some CBT and GET patients reported slightly lowered fatigue or increased activity after treatment, these small benefits often fell short of the trial’s definition of “overall improvement”.
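To make the mechanics concrete, here is a minimal sketch of how a composite, protocol-style improvement criterion works. The thresholds, margins and variable names below are purely illustrative assumptions, not the actual PACE criteria:

```python
# Illustrative sketch only: the thresholds and margins here are hypothetical,
# not the PACE trial's real cut-offs or scales.

def improved(baseline_fatigue, final_fatigue, baseline_function, final_function,
             fatigue_drop=0.5, function_rise=0.5):
    """Composite 'overall improvement': fatigue must fall AND physical function
    must rise, each by a protocol-specified margin."""
    fatigue_ok = final_fatigue <= baseline_fatigue * (1 - fatigue_drop)
    function_ok = final_function >= baseline_function * (1 + function_rise)
    return fatigue_ok and function_ok

# A patient whose fatigue eased slightly, and whose function barely changed,
# does not meet the composite criterion.
print(improved(baseline_fatigue=30, final_fatigue=26,
               baseline_function=40, final_function=45))  # → False
```

Because both conditions must hold, a patient who reports feeling a little better can still fall short of “overall improvement” – which is exactly the pattern the reanalysis found.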

“Overall, CBT and GET patients were no more likely to improve than those who received medical care alone. And at long-term follow up (around one year after the end of the trial) even the small benefits that patients had reported had vanished.”

In other words, over the long-term, patients who received CBT or GET fared no better than those who received only medical care.

You might say, “well, CBT and GET patients did say they felt a little better after the treatments. Isn’t that worth something?”. The problem here is that CBT and GET workbooks actively endorsed those treatments and promised great improvements following them.

The CBT workbook described the treatment as “a powerful and safe treatment which has been shown to be effective in… CFS/ME”; the GET workbook called it “one of the most effective therapy strategies currently known”. Participants were told that if they followed the treatment faithfully, they would see improvements.

Robert Courtney, Rest in Peace.

We have known for decades that if a person is told that a treatment might really work, they will tend to focus on (and remember) experiences that seem to confirm this. So even if the treatment made no real difference, they’re inclined to say, “I think I’m a little better”.

Drug trials control for this effect by ensuring participants don’t know whether they’re receiving the active drug or a dummy pill. In a therapy trial, we obviously can’t do this. But we should certainly not “talk up” the treatment!

And as patients pointed out right at the start of the trial, we should always check whether a person’s experience is backed up by more objective measures.

For example, did patients increase their activity levels or their overall fitness? Did they come off benefits, or increase their hours of work?

On most of the objective measures included in the trial, the CBT and GET participants fared no better than those in the no-therapy group.

“The conclusion was simple. The modest, short-lived changes on self-report measures were exactly what we would expect if these therapies did not produce genuine change. There was little evidence that CBT and GET led to genuine health benefits.”

The PACE investigators’ response to our reanalysis

A team of the original PACE investigators, headed by Professor Michael Sharpe, wrote a response to our reanalysis, and we were given a chance to reply.

I’ll share just a few excerpts from this correspondence:

Sharpe’s team argued that it was okay to alter the trial measures, because the original ones were “hard to interpret”, and the modified ones were, in their opinion, “more accurate and sensitive to change”.

This was a stunning admission. PACE was a major fully funded clinical trial, and the research team included several senior biostatisticians. Was it really possible that nobody had thought to consider these issues before planning the trial?

Another, even more perplexing claim was that the altered measures were better, because they produced results that were closer to what the investigators had expected. For example, recovery rates, according to the original definition, were unexpectedly low (7% or less), but by loosening that definition, they achieved rates that more closely matched their own “clinical expectations”.
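The effect of loosening a cut-off is easy to demonstrate. In this sketch, the scores and both thresholds are invented for illustration; they are not the trial’s actual scales or values:

```python
# Hypothetical self-report scores for ten patients (higher = better function).
scores = [55, 58, 60, 62, 65, 68, 70, 75, 80, 85]

def recovery_rate(scores, threshold):
    """Fraction of patients whose score meets or exceeds the 'recovered' cut-off."""
    return sum(s >= threshold for s in scores) / len(scores)

strict = recovery_rate(scores, threshold=85)  # stricter, protocol-style definition
loose = recovery_rate(scores, threshold=70)   # loosened definition

print(f"strict: {strict:.0%}, loose: {loose:.0%}")  # → strict: 10%, loose: 40%
```

Note that the patients’ underlying scores have not changed at all; only the definition has. That is why a mid-trial change of definition can, by itself, multiply the reported recovery rate several times over.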

In other words, Sharpe and his team seemed to be admitting that they picked the measures that gave them the best result.

Our reply: “Clearly, it is not appropriate to loosen the definition of recovery simply because things did not go as expected based on previous studies. Researchers need to be open to the possibility that their results may not align with previous findings, nor with their own preconceptions. That is the whole point of a trial. Otherwise, the enterprise ceases to be genuinely informative, and becomes an exercise in belief confirmation.”

Sharpe and colleagues also did not see a problem with actively promoting CBT and GET during the therapy. This could have no effect on the outcome, they reasoned, because patients were not asked general questions about their health, but rather specific questions about their fatigue and physical function. The team seemed unaware that expectation biases affect all sorts of self-report measures, including symptom-specific ones.

Our reply: “(Sharpe et al) appear to be unaware that biases can be observed on a wide range of different kinds of self-report measures, including symptom-specific ones, and that they generally operate in the same direction across all types of self-report measures… When assessing whether self-reported measures are influenced by bias, we must examine whether they pattern in a similar way to those observed on more objective measures (e.g., estimates of physical fitness, activity levels). However, on the majority of the objective measures examined in the PACE trial, CBT and GET fared no better than the other treatment arms…”

At the end of their letter, Sharpe and colleagues went a step further. They suggested that our team were “resistant” to their findings because we do not like the implication that CFS is a “psychological” condition.

This was a bold suggestion to make, given that I’m a psychology academic of some twenty years’ standing! However, in our response we decided to focus on the trial investigators’ own biases.

Our reply: “The issue of ideological bias is an important one… the PACE trial investigators began work on the trial with the firm belief that thoughts, feelings and behaviours were the central perpetuators of CFS, and that psychological interventions could reverse the illness…. In contrast, we approached our analysis from a more conservative, sceptical perspective: we considered that a false positive conclusion regarding the benefits of CBT and GET could be harmful for patients. For example, it could limit patients’ treatment options and reduce the opportunities for future research into new treatments. Readers can consider the original findings and the reanalysis in the context of these two very different perspectives and draw their own conclusions.”

What have we gained from this debate?

This debate does not bring us closer to solving the puzzle of ME/CFS, nor does it suggest any new ways to ease patients’ suffering. But it helps highlight how existing behavioural approaches and treatments are failing patients. This is a small but important step on the road to better understanding and treating this cruel, debilitating illness.

With thanks to the ME Association for their contribution towards the article processing fee that enabled our reanalysis to be open-access.


The ME Association

Real People. Real Disease. Real M.E.

We are a national charity working hard to make the UK a better place for people whose lives have been devastated by an often-misunderstood neurological disease.

If you would like to support our efforts – particularly during ME Awareness Week – and help ensure we can continue to inform, support, advocate and invest in biomedical research, then please donate today.

Just click the image opposite and visit our JustGiving page for one-off donations, to establish a regular payment or to create your own fundraising event.

Or why not join the ME Association as a member and be part of our growing community? For a monthly (or annual) subscription you will also receive our exclusive ME Essential magazine.


ME Association Registered Charity Number 801279