Saturday, April 06, 2013

The Mother of All Depression Studies: STAR*D

Given the amount of public funding ($35 million) that went into the six-year STAR*D study, it's utterly astounding that more people haven't heard of it. The Sequenced Treatment Alternatives to Relieve Depression study was easily the single largest study of its kind, ever, involving over 4,000 patients at more than 40 treatment centers.

Well over 100 scientific papers came out of the study (so it's not like you can assimilate the whole thing by reading any one article), and most are behind a paywall, which is outrageous when you consider that the entire study was funded by U.S. taxpayer dollars. Robert Whitaker, author of Mad in America and Anatomy of an Epidemic (superb books, by the way), has put one of the summary papers up on his web site. Be warned, however, that the STAR*D findings are subject to many interpretations. See this paper by H. Edmund Pigott for added perspective. Also see these STAR*D documents.

Some of the STAR*D researchers (e.g., M. Fava, S.R. Wisniewski, A.J. Rush, and M.H. Trivedi) went on to publish upwards of 15 papers a year (sometimes 20) on the STAR*D results. How do you write that many papers? Short answer: with professional help. At least 35 STAR*D papers were, in fact, ghostwritten by freelance tech writer Jon Kilner, who lists the papers as "writing samples" on his web site.

The STAR*D study was unique in a number of ways. First, it was an uncontrolled study (which of course puts serious limitations on its validity). Second, it involved real-world patients in clinical practice, not paid volunteers. Third, it tested eleven distinct "next step" treatment options, mostly involving drugs but also including talk-therapy arms. Fourth, the primary outcome measure was remission (a fairly stringent measure of progress).

The idea of the study was to offer patients suffering from major depression intensive treatment with a well-tolerated first-line SSRI (namely Celexa) to see how many would go into remission on drug treatment alone. Those who didn't improve on Celexa were then offered a second line of treatment (involving various drug and talk-therapy options); those who didn't respond to the second line were offered a third type of treatment; and finally, anyone left was offered a fourth type of treatment.

Participants were given free medical care throughout the study. All drugs were provided at no cost by the respective manufacturers (including Viagra for those who wanted it), and patients were offered $25 as an incentive for completing follow-up assessments.

STAR*D study design. While 4,041 patients were initially enrolled, only 3,671 participants actually entered the first level of the study, which involved taking citalopram (Celexa), an SSRI chosen for its supposed high efficacy and low level of side effects. Patients who failed to get better in Level 1 could progress to Level 2, then Level 3, and finally Level 4. Attrition was a huge problem, despite free drugs, free medical care, and free Viagra.

The overall study design (and the number of participants at each level) can be seen in the accompanying flow chart. Note that every participant got Celexa in the first level of the study. Any patient who wasn't happy with his or her current treatment option was allowed to progress to the next level of the study at any time, and the choice of treatment option was up to the patient. (All meds were open-label.) This proved disastrous for the cognitive therapy (CT) arms of the treatment: only 147 patients started CT, and just 101 of them made it through to the end. So few CT patients finished, and so many of them started taking other drugs, that no scientific papers about the talk-therapy part of the STAR*D study were ever published.

Participant attrition was a big problem for the study. Around half (48%) of the study population dropped out early, despite various inducements to stay.

What were the study's main findings? STAR*D researchers reported that for the "intent to treat" group(s), remission rates were 32.9% for Level 1 of the study, 30.6% for Level 2, 13.6% for Level 3, and 14.7% for Level 4. These numbers are likely inflated (read Pigott's analysis to learn why), but even taking them at face value, they speak to the very disappointing effectiveness of conventional treatments for depression. This is especially true when you consider that studies on the natural course of depression show that half of all patients recover spontaneously within three months (see this study) to twelve months (this study), whether treated or not.
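To see what those per-level rates imply in aggregate, here's a back-of-envelope sketch (in Python) that simply chains them together. Note the two generous assumptions baked in, neither of which held in the real study: every non-remitter moves on to the next level, and nobody drops out.

```python
# Chain the reported per-level remission rates, assuming every
# non-remitter advances to the next level and nobody drops out.
# (Neither assumption held: roughly 48% of patients dropped out.)
rates = [0.329, 0.306, 0.136, 0.147]  # Levels 1-4, as reported

still_depressed = 1.0
for level, rate in enumerate(rates, start=1):
    remitted_here = still_depressed * rate
    still_depressed -= remitted_here
    print(f"Level {level}: {remitted_here:.1%} remit at this level; "
          f"cumulative {1.0 - still_depressed:.1%}")
```

Chained this way, the rates compound to about 66%, which is essentially the rosy "theoretical cumulative remission rate" of 67% that the STAR*D authors publicized. That number only exists in a world with zero attrition; in the actual study, nearly half the patients walked away.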

Another major finding of STAR*D was that no one treatment strategy (no one drug or combination of drugs) stood out as doing much better than any other. Switching from Celexa to Wellbutrin (bupropion), for example, saw 26% of patients either remit or improve on their assessment scores. Patients who were non-responsive to an SSRI (Celexa) and switched over to an SNRI (Effexor) got better 25% of the time. But 27% also got better when switched from the original SSRI (Celexa) to a different SSRI (Zoloft).
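Differences that small are almost certainly statistical noise. As a quick illustration, here's a standard two-proportion z-test on the 25% vs. 27% comparison. The group sizes below are hypothetical, chosen only to be in the right ballpark for STAR*D's Level 2 switch arms; they are not the actual ns.

```python
from math import sqrt, erf

# Hypothetical group sizes -- illustrative only, NOT the actual arm sizes.
n1, n2 = 240, 240
p1, p2 = 0.27, 0.25          # "got better" rates for two switch strategies

pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

# Two-sided p-value from the normal CDF.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.2f}")  # z ~ 0.50, p ~ 0.62: nowhere near significant
```

With arms of that rough size, a two-point spread doesn't come close to statistical significance, which is consistent with the study's own conclusion that no strategy stood out.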

Another finding was that very few of the patients who scored a remission stayed in remission. The overwhelming majority of those who got better eventually relapsed or dropped out. This fact was glossed over in all of the original STAR*D papers, then brought up by H. Edmund Pigott and others, who analyzed the data behind Figure 3 of this paper. It turns out that of the 1,518 patients who remitted and stayed in the study, only 108 (7.1%) failed to relapse. This is a stunning indictment of the effectiveness of available treatment options for depression.
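The arithmetic is worth doing yourself, because it gets even worse if you use the study's own enrollment figure as the denominator. A quick sanity check, using only numbers already quoted in this post:

```python
enrolled = 4041   # patients initially enrolled (per the study design above)
remitted = 1518   # remitted and stayed in the study (Figure 3 data)
sustained = 108   # remitted and never relapsed

print(f"Sustained remission among remitters: {sustained / remitted:.1%}")  # 7.1%
print(f"Sustained remission among enrollees: {sustained / enrolled:.1%}")  # 2.7%
```

In other words, measured against everyone who walked in the door, durable recovery happened for fewer than 3 patients in 100.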

Space prohibits a thorough critique of the STAR*D trial here, but suffice it to say there were numerous problems, not only with the study design itself (e.g., the study was uncontrolled, and patients were allowed to take anxiolytics, sleep aids, and other psychoactive drugs during their "treatment," confounding the results) but also with the way the results were spin-doctored in the literature. If you're interested in the gory details, you can read an excellent critique here and also here. There's also an interesting paper behind Springer's paywall, mentioned in this Psychology Today blog, and a worthwhile Psychology Today post here.

Personally, I don't think you have to do much critiquing of the STAR*D results to see how dismal the findings were. After six years of work and $35 million spent, NIMH merely confirmed what all of us already knew: modern antidepressants aren't terribly effective, and it doesn't much matter which one you use, because they all perform equally poorly.

But don't worry. Big Pharma will come up with something new and improved Real Soon Now.