Bridging the Gap between Science and Evidence-Based Practice

Highlights from the 2012 International Conference on Eating Disorders

Reprinted from Eating Disorders Review
May/June 2012 Volume 23, Number 3
©2012 Gürze Books

At the 2012 International Conference on Eating Disorders, May 3-6, in Austin, TX, one of the most-discussed topics was the need for and challenges of integrating scientific findings into real-world clinical practice.

In his keynote address, Scott O. Lilienfeld, PhD, Professor of Psychology at Emory University, Atlanta, told eating disorders professionals that in today’s world, good science is being crowded out by pseudoscience, and that it is often difficult to distinguish one from the other. It is imperative that eating disorders professionals find a balance between science and evidence-based practice, Dr. Lilienfeld told the audience. “Science,” he said, “is necessary, not because it’s perfect but because it helps safeguard us against bias, naïve realism, and illusory correlation.” In the same way, he added, evidence-based practice is needed because it minimizes these biases.

Dr. Lilienfeld works outside the field of eating disorders, but said his outsider’s perspective might prove helpful. He said, “Today we live in a very confusing world filled with lots of medical claims, some of which are well supported, but others are not, and it is often hard to distinguish the science from the pseudoscience. We haven’t done as good a job as we could have to distinguish good science from pseudoscience.”

“The good news is that psychotherapy works,” he said, adding that “we can demonstrate the positive effects of a variety of psychotherapies for mood disorders, anxiety disorders, sexual dysfunction, insomnia, and bulimia nervosa, for example, when compared with no treatment or with a plausible ‘placebo’ intervention.” However, researchers such as Drs. Judith Banker and Kelly Klump (2010) have reported a substantial science-practice gap in eating disorders treatment, he added.

According to Dr. Lilienfeld, this science-practice gap is caused by a number of factors, including a clash of world views and misconceptions about science in general and about evidence-based practice in particular. He added that researchers have shown that most therapists who treat clients with eating disorders do not administer scientifically supported therapies (Mussell et al., 2000), and that most clients with depression and panic attacks do not receive scientifically supported therapies (Kessler et al., 2001). In addition, Dr. Lilienfeld pointed out that 75% of licensed clinical social workers use one or more unsupported therapies, including age regression, psychodrama, and neurolinguistic programming (Pignotti and Thyer, 2009).

Dr. Lilienfeld then explored a number of popular psychological myths and misconceptions. We are not doing a good enough job of educating the public about psychotherapy, he said, adding that “good psychotherapy has been under-hyped.” And, he added, the science-practice “gap” could be better described as the science-practice “canyon” because it is so broad. Noting that “our clients deserve the best,” he urged audience members to work to close the gap. Part of the problem lies in the conflict between the romantic and empirical traditions, as outlined by psychiatrist Dr. Paul McHugh of Johns Hopkins University. Romantics, Dr. Lilienfeld said, believe that questions are best settled by intuition and clinical experience, not research. Empiricists, in contrast, believe that questions are best settled by research, not intuition or clinical experience. He believes this split probably underlies much of the science-practice gap and much of the resistance to evidence-based practice.

A number of misconceptions exist about evidence-based practice, Dr. Lilienfeld stressed, including the beliefs that it stifles creativity and that it requires a cookie-cutter approach to patients. Other misconceptions are that the approach is not helpful because all individuals are unique, or that evidence-based practice isn’t needed because “we can judge the effectiveness using our clinical experience and intuition.”

Treatment of eating disorders patients is also affected by confirmation bias, Dr. Lilienfeld said. He explained that there is a tendency to seek out “evidence that supports our individual views and thus to deny, dismiss or distort evidence that does not do so.” This tendency affects scientists at least as often as nonscientists, he noted (Mahoney, 1977). One historical example was the widespread adoption of prefrontal lobotomy in the late 1940s; in fact, the scientist who developed it was awarded the Nobel Prize in 1949.

Another bias is naïve realism, the belief that the world is exactly as we see it. Illusory correlation, a tendency to perceive correlations where none exist or to exaggerate the magnitude of actual correlations, is another confounding problem. Examples, he said, include the claimed connection between vaccinations and autism and belief in the “lunar lunacy effect,” the notion that the moon has powerful effects upon us.

There are ethical dangers from some of these biases, according to Dr. Lilienfeld, and he pointed to the true cost of lost opportunities when clients spend time, money, and effort seeking treatments that can’t help them. “I worry that these types of approaches have undermined our credibility—psychology and psychiatry are not viewed well,” Dr. Lilienfeld said. He then quoted the late physicist Richard Feynman, who defined the essence of science as “bending over backwards to prove ourselves wrong.”

Dr. Lilienfeld and some colleagues are currently writing an article on causes of spurious therapeutic effectiveness. No matter how clever or skilled we are, he said, there is frequently no way to know whether a therapy worked. Causes of spurious effectiveness include regression to the mean, placebo effects, multiple-treatment interference, maturation of the patient, and history.

All individuals in the mental health field want to help people, but we disagree about how to do it, he stressed, noting that “Science is our best hope to root out our errors and is at the same time a prescription for humility.” Poor clinical care stems from overconfidence and a reluctance to root out errors. He told the audience that evidence-based practice is important because it helps us identify and correct errors within our ‘web of belief.’

Dr. Lilienfeld urged the audience to try to heal the pernicious divide between science and practice by recognizing the proper place for romanticism: to think big, to be bold, and to listen to clinical intuition. But then, he said, clinicians must be able to turn off the romanticism and adopt a rigorous scientific approach, recognizing that everyone is prone to error.

Plenary Session II: Implementing Science into Clinical Practice

In a plenary session, “The Good, the Bad, and the Ugly of Integrating Clinical Research and Practice,” Dasha Nicholls, MBBS, MRCPsych, MD, FAED, head of the Feeding and Eating Disorders Service at Great Ormond Street Hospital for Children, London, reminded the audience of the many challenges that face clinicians in “real-world practice.” Dr. Nicholls said that in her practice, where she treats children and teens with eating disorders and very ill patients with anorexia nervosa, she may not know all the latest research, but her expertise, like that of many professionals treating eating disorders, is personal and unique.

Clinical work is complicated, she said; there is much to process, and much clinical judgment comes into play. There is a need for mutual respect and ongoing dialogue, she said, but she also called for mutual understanding of others’ ways of thinking. One of the issues with different ways of thinking is that they serve different purposes, she added. Dr. Nicholls said, “Through multiple lenses a clinician reaches a clinical judgment, because you must make a decision—patients come to see you for that opinion, and can’t wait 10 years for research to prove a treatment approach.”

Dr. Nicholls said that the role of the researcher is to take a highly focused research question, perhaps drawn from a review of the medical literature, and to look for and eliminate any possible source of bias, including outcome, contextual, and personal biases. The researcher’s role, then, is to be totally “agnostic” and open-minded about the outcome. This is the opposite of the role of the clinician, who cares a great deal about the outcome. The purposes of the researcher and the clinician are thus diametrically opposed, which creates a tension between the confidence a clinician needs and the open-minded, agnostic stance needed for good research.

Another potential problem a clinician faces is that his or her judgment will often differ from that of other clinicians; to mitigate this, she said, we build in supervision, writing, and communication with others. She noted that there is ambiguity about the role of the multidisciplinary team in the delivery of empirically supported treatments. Another issue is individual competence versus collective competence, she added. Also, some empirically supported treatments can only be tested and delivered in certain health care contexts, for example, acute hospitalization followed by day care versus inpatient treatment.

To explore the challenge of implementing evidence-based therapy in clinical practice, Dr. Nicholls conducted a brief survey of 23 of her colleagues. She said 73% described themselves as pure clinicians, and 27% identified themselves as clinician-researchers. The question she posed was, “How much of your current practice do you estimate has a sound evidence base?” The majority, 68%, said “some,” while 32% replied that “most” of their practice had a sound evidence base. Some of the reasons her colleagues gave for not implementing research in their practices were: (1) research did not ask or answer the relevant question; (2) it can be hard to identify the clinical implications of research; (3) clinicians don’t like using treatment manuals and don’t have the time to read research papers; and (4) some felt that most treatment manuals are out of date or that the research did not apply to clinical work.

When Dr. Nicholls explored the answers further, she found a number of reasons that research was not implemented or prioritized among her colleagues. Comments included: ‘It’s hard to get everyone to agree on protocols—we all have our own ideas and service constraints’; ‘It is fantastically difficult to do good research that answers the difficult questions’; and ‘There is a need to compare a manualized with a non-manualized version of the same therapy—that is, does the manual matter?’

Dr. Nicholls concluded that what she and other clinicians need are “frameworks for practice based on research, clinical experience, and the perspectives of patients and caregivers, that facilitate problem-solving at any individual patient level.” While some of this is admittedly already available, there are still huge gaps between research and clinical practice, she said. She added that there is a long way to go; what is needed, she said, is a better sense of which individuals are doing well in treatment, which are not, and better ways of understanding treatment response. Other needs include knowing when an intervention should be halted, who should deliver treatment, and better ways to evaluate therapist skill.

A Better Means of Accreditation

Craig Johnson, Chief Clinical Officer at the Eating Disorders Recovery Center of Denver and Professor of Psychiatry at the University of Colorado, proposed an action plan that would lead to increased competency among clinicians and adoption and dissemination of evidence-based therapy among eating disorders professionals. Noting that he and Dr. Chris Fairburn suggested such a path several years ago, he expressed disappointment that not much progress had been made since then.

Dr. Johnson proposed a plan of action that would create a mechanism for disseminating evidence-based therapy and that would identify to the public—a crucial point, he said—professionals who have demonstrated competence in treatment. To accomplish this, he suggested coordination between the researchers generating evidence-based therapies and the consumers of treatment. He added that those with the most influence over what treatment is delivered are those who pay for it.

Dr. Johnson praised the National Institute for Clinical Excellence (NICE) in the United Kingdom, which has identified certain areas of evidence-based treatment in eating disorders. For example, he said, cognitive behavioral therapy for bulimia nervosa has been graded as an A-level treatment; in the B-level category, where there is supporting research but no randomized controlled trials, interpersonal psychotherapy and the SSRIs are included for bulimia nervosa. He predicted that dialectical behavior therapy (DBT) will soon be included at the B level and family-based therapy at the A level in the next NICE iteration.

Noting that “we do have treatments to disseminate,” Dr. Johnson said that nobody thinks evidence-based treatments are being adopted by most clinicians in the US. One of the challenges is demonstrating expertise, and some progress has been made through the AED Credentialing Task Force and through collaborations between the Academy for Eating Disorders (AED) and the National Eating Disorders Association (NEDA). Dr. Johnson singled out professionals at NEDA who he said pushed the professional community to come forward with criteria for demonstrating expertise in the field. He said, “Dr. Mary Tantillo of NEDA deserves a Purple Heart for all her efforts.”

The crux of the problem was not that the groups couldn’t agree on professional standards but rather finding a business model to implement them, Dr. Johnson commented. While establishing the standards took a year, finding the right business model took about 7 years, he added. Dr. Johnson also noted that “we are within months of being able to partner with” the Commission on Accreditation of Rehabilitation Facilities (CARF), an independent, nonprofit organization that accredits health and human services programs.

Credentialing task forces can help with this challenge. Dr. Johnson said he would like to see the AED take on the task of credentialing individuals, although he acknowledged that there are liability concerns that should be respected. He would like to see the organization extend the scope of the credentialing task force to individual practitioners and to continue the joint venture between IAED and the Academy. Dr. Johnson predicted, “If we in the Academy do nothing, we will see a progressive loss of relevancy to the practitioners out there.”

 (In the next issue, the dialogue about efforts to close the gap between research and evidence-based practice continues with presentations by G. Terence Wilson, PhD and Ulrike Schmidt, MD, PhD, FAED.)
