Artificial Intelligence Reaches Out to Eating Disorders

Some benefits of and problems with this rapidly developing science

Artificial intelligence, or AI, is everywhere, and it has now reached eating disorders treatment. In a thoughtful editorial, three researchers from the University of Turin, Italy, explore early ways that AI can be creatively applied to the therapeutic relationship, and also point out important cautions (Eat Weight Disord. 2023. 28:50).

The original digital platforms are being replaced by large language models (LLMs), which allow a person to interact with AI conversationally, posing questions and receiving original, articulated answers. ChatGPT, launched in late 2022, has stimulated lively debate on both sides: is it helpful or harmful? On the plus side, AI makes it possible to analyze large amounts of data from a single patient or from groups of patients. A second area of discussion and debate involves active use of AI-based tools during treatment and in the patient's daily life. As Dr. Giovanni Abbate-Daga and his colleagues note, both areas “can lead to eventual therapeutic advances but can also present obstacles and ethical problems.”

Consider some benefits

One attractive element of AI is that it provides direct information in a conversational manner that many internet users are familiar with and like. Through browsers and apps, an individual is now free to use AI in his or her own way, opening flexible and unforeseen possibilities. AI could be used for and against therapy, and could also expose the patient or future patient to completely novel approaches to therapy. As time passes, treatment boundaries may become blurred, say the authors, making it more difficult to define what is appropriate in therapy and what is not. AI may provide support between treatment sessions when a patient has urges to eat, gains more weight than expected, or is working toward a treatment goal.

As a way to prevent an eating disorder, AI use by individuals who are not yet receiving treatment but are at serious risk should also be considered. A person may turn to AI first when he or she has recently developed symptoms but does not yet recognize, or resists thinking about, the possibility of having an eating disorder. Access to AI may prompt the individual to seek more information on purging, overexercise, unhealthy dieting, and other warning signs. The conversational approach provided by artificial intelligence may then lead the person to take action.

Healthcare professionals can also benefit from AI by recognizing behavioral patterns and language cues, helping them intervene at the earliest stages of an eating disorder and improving the chances of successful treatment and recovery. Through its large data collection, AI could provide access to more accurate and appropriate information on treatment and help grade the effectiveness of interventions for EDs much more quickly, and in a more articulated way, than a normal internet search. In a different but related area, a recent study showed that AI produced accurate and reproducible responses to frequent questions about bariatric surgery: 86.7% of 151 AI-generated responses were rated “comprehensive” by board-certified bariatric surgeons (Obes Surg. 2023. 33:1790). AI might also help individuals avoid feelings of isolation and shame, a major problem for some patients (J Clin Med. 2022. 11:6683).

And, consider some negatives

The authors note that current conversational AI tools can provide false information and present it as fact (Front Comput Intell Syst. 2023. 2:81), producing so-called “AI hallucinations” or delusions. Vulnerable patients may take false internet information to heart, for example, false information about weight, foods, and calories. AI may even back up such false claims by citing clinical studies that have never been conducted.

Clinicians can discuss and re-establish correct information on diets, weight, and metabolism, for example. The authors note that physicians and psychotherapists can also work together to develop their own AI models to counteract harmful information and to check AI for risks of mistakes.

There can also be ethical problems. Patients who live far from specialized treatment centers or who face geographical or financial barriers to care may rely largely on the internet between limited in-person sessions. Privacy and data security are also potentially at risk: transparency is essential, and patients must know how their personal data are being used. The authors point out that laws and norms on data-sharing are urgently needed.

Finally, the Internet cannot replace in-person sessions

The challenge for clinicians is to retain personal relationships with patients and to remember that nonverbal communication is also an essential part of therapy. Although many internet tools are being developed, they cannot replace in-person sessions. Even granting the dubious claim that AI tools can read facial and body expressions, a physician-patient relationship cannot be reproduced through artificial tools (Front Psychiatry. 2013. 10:746). The authors note one positive effect: AI can provide accessible and affordable support between sessions, and mobile applications and online platforms powered by AI algorithms can deliver evidence-based alerts.

Finally, the authors note: “Our duty is to help our patients understand how responsibly to use opportunities opened by AI, learning from them at the same time. It is better to ride the wave than to end up beneath it.”
