Diet Culture and Artificial Intelligence Don’t Mesh

At the beginning of May, the National Eating Disorders Association (NEDA)—which bills itself as the biggest nonprofit devoted to helping people with eating disorders—debuted Tessa, its new support chatbot. Tessa, which was developed by professors at Washington University School of Medicine and funded by the National Institute of Mental Health, was shown in a clinical trial to help women at high risk for eating disorders feel less concerned about their weight and shape by teaching them coping skills based on cognitive behavioral therapy techniques. After over four years of development, experts had evidence-backed reason to believe the bot could be a free, accessible alternative to eating disorder treatment.

But Tessa very quickly started to go off-script.


Experts In This Article

  • Alexis Conason, PsyD, a clinical psychologist and Certified Eating Disorder Specialist Supervisor (CEDS-S)
  • Amanda Raffoul, PhD, an instructor in pediatrics at Harvard Medical School and researcher at Harvard STRIPED
  • Christine Byrne, RD, MPH, an anti-diet dietitian based in Raleigh, North Carolina
  • Dalina Soto, MA, RD, LDN, an anti-diet dietitian based in Philadelphia, Pennsylvania
  • Eric Lehman, a PhD candidate at the Massachusetts Institute of Technology researching natural language processing
  • Kush Varshney, PhD, distinguished research scientist and manager at IBM Research’s Thomas J. Watson Research Center in Yorktown Heights, NY
  • Nia Patterson, a body liberation coach and eating disorder survivor
  • Sharon Maxwell, a fat activist, public speaker, and weight-inclusive consultant

“The bot responded back with information about weight loss,” says Alexis Conason, PsyD, CEDS-S, a clinical psychologist who specializes in the treatment of eating disorders. After she typed in a common statement she hears from new clients all the time—“I’m really struggling, I’ve gained weight recently, and I hate my body”—Dr. Conason says the bot started to give her tips on how to lose weight.

Among the recommendations Tessa shared with Dr. Conason were goals of restricting calories, losing a certain number of pounds per week, minimizing sugar intake, and focusing on “whole foods” instead of “processed” ones.

Dr. Conason says Tessa’s responses were very disturbing. “The bot obviously is endorsed by NEDA and speaking for NEDA, yet [people who use it] are being told that it’s okay to engage in these behaviors that are essentially eating disorder behaviors,” she says. “It can give people the green light to say, ‘Okay, what I’m doing is actually fine.’”

Many other experts and advocates in the eating disorder treatment space tried the tool and reported similar experiences. “I was just absolutely floored,” says fat activist and weight-inclusive consultant Sharon Maxwell, who is in recovery from anorexia and says Tessa gave her information on tracking calories and other ways to engage in what the bot calls “healthy weight loss.” “Intentional pursuit of weight loss is the antithesis of recovery—it cannot coexist together,” Maxwell says.

Following coverage from a number of media outlets outlining Tessa’s concerning responses, leadership at NEDA ultimately decided to suspend Tessa at the end of May. “Tessa will remain offline while we complete a full review of what happened,” NEDA’s chief operating officer Elizabeth Thompson said in an emailed statement to Well+Good in June. The organization says that the bot’s developer added generative artificial intelligence (AI) features to Tessa without its knowledge or consent. (A representative from the software developer, Cass, told the Wall Street Journal that it operated in accordance with its contract with NEDA.)

The entire incident sounded alarm bells for many in the eating-disorder-recovery space. I would argue, however, that artificial intelligence is often working exactly as designed. “[AI is] just reflecting back the cultural opinion of diet culture,” says Christine Byrne, RD, MPH, an anti-diet dietitian who specializes in treating eating disorders.

Like the magic mirror in Snow White, which answered the Evil Queen’s every question, we seek out AI to give us clear-cut answers in an uncertain, often contradictory world. And like that magic mirror, AI reflects back to us the truth about ourselves. For the Evil Queen, that meant being the fairest in the land. But in our current diet culture-steeped society, AI is simply “mirroring” America’s enduring fixation on weight and thinness—and how much work we have yet to do to break that spell.

How AI-powered advice works

“Artificial intelligence is any computer-related technology that is trying to do the things that we associate with humans in terms of their thinking and learning,” says Kush Varshney, PhD, distinguished research scientist and manager at IBM Research’s Thomas J. Watson Research Center in Yorktown Heights, NY. AI uses complex algorithms to mimic human skills like recognizing speech, making decisions, and seeing and identifying objects or patterns. Many of us use AI-powered tech every single day, like asking Siri to set a reminder to take medication, or using Google Translate to understand that word on a French restaurant’s menu.

There are many different subcategories of AI; here we’ll focus on text-based AI tools like chatbots, which are rapidly becoming more sophisticated, as demonstrated by the launch of ChatGPT in fall 2022. “[AI-based chatbots] are very, very good at predicting the next word in a sentence,” says Eric Lehman, a PhD candidate at the Massachusetts Institute of Technology. Lehman’s research centers on natural language processing (meaning a computer’s ability to understand human language), which allows this kind of software to write emails, answer questions, and more.

In the simplest terms possible, text-based AI tools learn to imitate human speech and writing by analyzing what’s called “training data,” which is essentially a huge library of existing written content from the internet. From there, Dr. Varshney says, the computer analyzes patterns of language (for example: what it means when certain words follow others; how terms are often used in and out of context) in order to be able to replicate it convincingly. Software developers then fine-tune the resulting model to “specialize” the bot for its particular use.
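
To make the “training data” idea concrete, here is a minimal, toy-scale sketch in Python, with a made-up 14-word corpus standing in for the web-scale libraries real models learn from:

```python
# A toy sketch of "training": scan the text once, counting which word
# tends to follow which. Real systems learn from billions of pages and
# use neural networks, not simple counts, but the spirit is similar.
from collections import Counter, defaultdict

# Stand-in "training data"; an actual corpus would be a huge slice of the web.
corpus = "losing weight is hard . gaining weight is easy . weight is not health".split()

follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

# "Prediction" is just looking up the likeliest continuation.
print(follows["weight"].most_common(1))  # -> [('is', 3)]
```

Real chatbots replace the lookup table with a neural network that weighs far more context than one previous word, but the outline is the same: ingest text, learn its patterns, reproduce a plausible continuation.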

From that training, you get two general categories of application: predictive AI and generative AI. According to Dr. Varshney, predictive AI works with a fixed set of possible answers that are pre-programmed for a specific purpose. Examples include auto-responses inside your email, or the data your wearable devices give you about your body’s movement.
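
A hypothetical email auto-responder makes the “fixed set of possible answers” idea concrete; the keywords and replies below are invented purely for illustration:

```python
# A sketch of predictive AI in Dr. Varshney's sense: the program can only
# select among pre-programmed responses, never compose new ones.
CANNED_REPLIES = {
    "meeting": "Sounds good -- see you then!",
    "thanks": "Happy to help!",
    "deadline": "I'll have it to you on time.",
}

def suggest_reply(email_text: str) -> str:
    """Return the best-matching pre-written reply; never invent new text."""
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in email_text.lower():
            return reply
    return "Thanks for your email."  # the fallback is pre-written, too

print(suggest_reply("Can we push the meeting to 3pm?"))  # Sounds good -- see you then!
```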

Generative AI, however, is designed to create entirely new content inspired by what it knows about language and how humans talk. “It’s completely generating output without restriction on what possibilities there could be,” Dr. Varshney says. Go into ChatGPT, the most well-known generative AI program to date, and you can ask it to write wedding vows, a sample Seinfeld script, or questions to ask in a job interview based on the hiring manager’s bio. (And much, much more.)
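
Contrast that with a generative version of the word-counting toy from above: instead of selecting from a fixed reply list, it samples word after word, so it can emit sentences that appear nowhere in its training text. (Again, a minimal sketch with invented data.)

```python
# Generative AI, toy-scale: sample the next word repeatedly instead of
# picking from a canned list, so the output space is open-ended.
import random
from collections import Counter, defaultdict

corpus = "losing weight is hard . weight is not health . health is personal".split()
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def generate(start: str, max_words: int = 8) -> str:
    words = [start]
    for _ in range(max_words):
        options = follows[words[-1]]
        if not options:  # no observed continuation; stop
            break
        # Sample in proportion to observed frequency -- nothing restricts
        # the result to sentences the "model" has actually seen.
        words.append(random.choices(list(options), weights=options.values())[0])
    return " ".join(words)

print(generate("weight"))  # e.g. "weight is personal" -- a novel sentence
```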

But, again, AI chatbots only know what is available for them to analyze. In nuanced, sensitive, and highly personalized situations, like, say, eating disorder treatment, AI chatbots present shortcomings in the best of scenarios and danger in the worst.

The current limitations of AI text tools for health and nutrition information

There is immense potential for generative AI in health-care spaces, says Dr. Varshney; it’s already being used to help doctors with charting, aid in cancer diagnoses and care decisions, and more. But once you start digging, the risks of generative AI for directly providing consumers with health or nutrition information become pretty clear.

Since these models typically pull information from all over the internet rather than from specifically vetted sources—and health-based information on the web is notoriously inaccurate—you shouldn’t expect the output to be factual, says Lehman. It won’t reflect cutting-edge medical opinion either, since many tools, like ChatGPT, only have access to information that was online before their training data was collected (September 2021, in ChatGPT’s case).

Experts say these very human-sounding tools could be used to replace professional care and insight. “The problem with folks trying to get health and general wellness advice online is that they’re not getting it from a health practitioner who knows about their specific needs, barriers, and other things that may need to be considered,” says Amanda Raffoul, PhD, instructor in pediatrics at Harvard Medical School and researcher at Harvard STRIPED, a public health incubator devoted to preventing eating disorders.

Furthermore, everyone’s body has different health and nutritional needs depending on their unique genetic makeup, gut microbiome, underlying health conditions, cultural context, and more—and those individual needs change on a daily basis, too. AI doesn’t currently have the capacity to know that. “I am constantly telling my clients that we are not robots,” says Dalina Soto, MA, RD, LDN. “We don’t plug in and out daily, so we don’t need the same amount daily. We have hormones, feelings, stress, lives, movement—so many things that affect how we burn and use energy…But because AI can spit out an equation, people think, Okay, this must be right.”

There’s also a huge value in human connection, which a bot just can’t replace, adds Dr. Conason. “There’s just something about speaking to another human being and feeling heard and seen and validated, and to have someone there with you during a really dark moment…That’s really powerful. And I don’t think that a bot can ever meet that need.”

Even more concerning are the known social bias issues with AI technology, particularly the fact that AI algorithms often reflect existing societal prejudices against certain groups, including women, people of color, and LGBTQ+ people. A 2023 study looking at ChatGPT found that the chatbot could very easily produce racist or problematic responses depending on the persona it was given. “We find concerning patterns where specific entities—for instance, certain races—are targeted on average three times more than others irrespective of the assigned persona. This reflects inherent discriminatory biases in the model,” the researchers wrote.

But like humans, AI isn’t necessarily “born” prejudiced. It learns bias—from all of us. Take training data, which, as mentioned, is typically composed of text (articles, informational sites, and sometimes social media sites) from all over the web. “This language that’s out on the internet already has a lot of social biases,” says Dr. Varshney. Without mitigation, a generative AI program will pick up on those biases and incorporate them into its output, which may—incorrectly—inform diagnoses and treatment options. The choices developers make when assembling the training data may introduce bias as well.

Put simply: “If the underlying text you’re training on is racist, sexist, or has these biases in it, your model is going to reflect that,” says Lehman.
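
To see how directly that transfer happens, extend the earlier counting toy to two-word contexts and feed it deliberately skewed text; the sentences below are invented solely to demonstrate the mechanism:

```python
# Feed a toy model skewed "training data" and its predictions skew with it.
from collections import Counter, defaultdict

biased_corpus = ("thin means healthy . thin means disciplined . "
                 "fat means lazy . fat means lazy").split()

# Count which word follows each two-word context.
context_follows = defaultdict(Counter)
for a, b, c in zip(biased_corpus, biased_corpus[1:], biased_corpus[2:]):
    context_follows[(a, b)][c] += 1

# The model "believes" whatever its data repeated most often.
print(context_follows[("fat", "means")].most_common(1))   # -> [('lazy', 2)]
print(context_follows[("thin", "means")].most_common(2))  # -> [('healthy', 1), ('disciplined', 1)]
```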

How we programmed diet culture into AI

Most research and discussion to date on AI and social bias has focused on issues like sexism and racism. But the Tessa chatbot incident reveals another prejudice baked into this type of technology (and, since the technology learns from human behavior, woven through our larger society as well): that of diet culture.

There’s no official definition of diet culture, but Byrne summarizes it as “the idea that weight equals health, that fitter is always better, that people in large bodies are inherently unhealthy, and that there’s some kind of morality tied up in what you eat.”

Part of that understanding of diet culture, adds Dr. Conason, is this persistent (but misguided) belief that individuals have full, direct control over their body and weight—a belief that the $70-plus billion diet industry perpetuates for profit.

But, that’s just part of it. “Really, it is about weight bias,” says Byrne. And that means the negative attitudes, assumptions, and beliefs that individuals and society hold toward people in larger bodies.

Research abounds connecting weight bias to direct harm for fat people in nearly every area of their lives. Fat people are often stereotyped as lazy, sloppy, and less smart than people who are smaller-sized—beliefs that lead managers to pass on hiring fat workers or overlook them for promotions and raises. Fat women in particular are often considered less attractive due to their size, even by their own romantic partners. Fat people are also more likely to be bullied and more likely to be convicted of a crime than smaller-sized people, simply by virtue of their body weight.

Weight bias is also rampant online—ready for generative AI programs to pick up on. “We know that generally across the internet, across all forms of media, very stigmatizing views about fatness and higher weights are pervasive,” Dr. Raffoul says, alongside inaccuracies about nutrition, fitness, and overall health. With a huge portion of a program’s training data likely tainted by weight bias, you’re likely to find it manifest in generative AI output—say, when a bot designed to help prevent eating disorders instead gives people tips on how to lose weight.

In fact, a report released in August from the Center for Countering Digital Hate (CCDH) that examined the relationship between AI and eating disorders found that AI chatbots generated harmful eating disorder content 23 percent of the time. Ninety-four percent of these harmful responses were accompanied by warnings that the advice provided might be “dangerous.”

But again, it’s humans who create program algorithms, shape their directives, and write the content from which algorithms learn—meaning that the bias comes from us. And sadly, stigmatizing beliefs about fat people inform every aspect of our society, from how airline seats are built and sold, to whom we cast as leads versus sidekicks in our movies and TV shows, to what size clothing we choose to stock and sell in our stores.

“Anti-fat bias and diet culture is so intricately and deeply woven into the fabric of our society,” says Maxwell. “It’s like the air that we breathe outside.”

Sadly, the medical industry is the biggest perpetrator of weight bias and stigma. “The belief that being fat is unhealthy,” Byrne says, is “baked into all health and medical research.” The Centers for Disease Control and Prevention (CDC) describes obesity (when a person has a body mass index, aka BMI, of 30 or higher) as a “common, serious, and costly chronic disease.” The World Health Organization (WHO) refers to the number of larger-sized people around the world as an “epidemic” that is “taking over many parts of the world.”
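
For context, the BMI those definitions hinge on is a simple ratio: weight in kilograms divided by the square of height in meters. That simplicity is part of critics’ point about how crude a health measure it is. A quick sketch of the arithmetic, using the CDC’s standard adult categories:

```python
# The entire BMI "diagnosis" boils down to one line of arithmetic:
# weight (kg) divided by height (m) squared.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def cdc_label(value: float) -> str:
    """CDC's standard adult BMI categories."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "healthy weight"
    if value < 30:
        return "overweight"
    return "obesity"

value = bmi(93.0, 1.75)
print(round(value, 1), cdc_label(value))  # 30.4 obesity
```

Nothing in that calculation accounts for muscle mass, bone density, age, or ethnicity, which is exactly the critique detailed below.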

Yet the “solution” for being fat—weight loss—is not particularly well-supported by science. Research has shown that the majority of people gain back the weight they lose within a few years, even patients who undergo bariatric surgery. And weight cycling (when you frequently lose and gain weight, often due to dieting) has been linked to an increased risk of chronic health concerns.

While having a higher weight is associated with a higher likelihood of having high blood pressure, type 2 diabetes, heart attacks, gallstones, liver problems, and more, there isn’t a ton of evidence that fatness alone causes these diseases. In fact, many anti-diet experts argue that fat people have worse health outcomes in part because of the toxic stress associated with weight stigma. The BMI, which is used to quickly evaluate a person’s health and risk, is also widely recognized as racist, outdated, and inaccurate for Black, Indigenous, and people of color (BIPOC). Yet despite all of these issues, our medical system and society at large treat fatness as both a disease and a moral failing.

“It is a pretty clear example of weight stigma, the ways in which public health agencies make recommendations based solely on weight, body size, and shape,” says Dr. Raffoul.

The pathologizing of fatness directly contributes to weight stigma—and the consequences are devastating. Research shows that doctors tend to be dismissive of fat patients and attribute all health issues to a person’s weight or BMI, which can result in missed diagnoses and dangerous lapses in care. These negative experiences cause many fat people to avoid health-care spaces altogether—further increasing their risk of poor health outcomes.

Weight stigma is pervasive, even within the eating disorder recovery world. Less than 6 percent of people with eating disorders are diagnosed as “underweight,” per the National Association of Anorexia Nervosa and Associated Disorders (ANAD), yet extreme thinness is often the main criterion in people’s minds for diagnosing an eating disorder. This means fat people with eating disorders often take years to get diagnosed.

“And even if you can go to treatment, it’s not equitable care,” says Nia Patterson, a body liberation coach and eating disorder survivor. Fat people are often treated differently because of their size in these spaces. Maxwell says she was shamed for asking for more food during anorexia treatment and was put on a weight “maintenance” plan that still restricted calories.

Byrne says there is even debate in the medical community about whether people who have an eating disorder can still safely pursue weight loss—even though data shows that dieting significantly increases a person’s risk of developing an eating disorder.

The reality is that these highly pervasive beliefs about weight (and the health-related medical advice they’ve informed) will naturally exist in a chatbot—because we have allowed them to exist everywhere: in magazines, in doctor’s offices, in research proposals, in movies and TV shows, in the very clothes we wear. You’ll even find anti-fat attitudes from respected organizations like the NIH, the CDC, and top hospitals like the Cleveland Clinic. All of the above makes spotting the problematic advice a bot spits out (like trying to lose a pound per week) all the more challenging, “because it’s something that’s been echoed by doctors and different people we look to for expertise,” Dr. Conason says. But these messages reinforce weight bias and can fuel eating disorders and otherwise harm people’s mental health, she says.

To that end, it’s not necessarily the algorithms that are the main problem here: It’s our society, and how we view and treat fat people. We are the ones who created weight bias, and it’s on us to fix it.

Breaking free from diet culture

The ugly truth staring back at us in the mirror—that fatphobia and weight bias in AI have nothing to do with the robots and everything to do with us—feels uncomfortable to sit with in part because it’s seemed like we’ve been making progress on that front. We have celebrated plus-size models, musicians, and actresses; larger-sized Barbie dolls for kids; more expansive clothing size options on store shelves. But those victories do little (if anything) to address the discrimination affecting people in larger bodies, says Maxwell.

“I think that the progress we’ve made is not even starting to really touch on the real change that needs to happen,” agrees Dr. Conason. Breaking the spell of diet culture is a long and winding road that involves a lot more than pushing body positivity. But the work has to start somewhere, both in the virtual landscape and in the real world.

Dr. Varshney says that in terms of AI, his team and others are working to develop ways that programmers can intervene during the creation of a program to try to mitigate biases. (For instance, pre-processing training data before feeding it to a computer to weed out certain biases, or creating algorithms designed to exclude biased answers or outcomes.)
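
As a rough illustration of the first idea, pre-processing, here is a minimal sketch; the flagged phrases are invented stand-ins, and real mitigation pipelines rely on trained classifiers and human review rather than keyword lists:

```python
# A sketch of pre-processing training data to weed out a bias before a
# model ever learns from it. The phrase list is an invented stand-in.
DIET_CULTURE_FLAGS = ("lose weight fast", "burn fat", "guilt-free", "cheat day")

def screen_training_examples(examples: list[str]) -> list[str]:
    """Keep only examples that match none of the flagged phrases."""
    return [
        text for text in examples
        if not any(flag in text.lower() for flag in DIET_CULTURE_FLAGS)
    ]

docs = [
    "Eat a variety of foods you enjoy.",
    "Try this one trick to lose weight fast!",
]
print(screen_training_examples(docs))  # keeps only the first example
```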

There’s also a burgeoning AI ethics field that aims to help tech workers think critically about the products they design, how they can be used, and why it’s important to address bias. Dr. Varshney, for example, leads machine learning at IBM’s Foundations of Trustworthy AI department. Currently, these efforts are voluntary; Lehman predicts that it will require government regulation (a goal of the Biden Administration) in order for more tech companies to adopt stringent measures to address bias and other ethical issues associated with AI.

New generations of tech workers are also being taught more critically about the digital tools they create. Some universities have dedicated AI ethics research centers, like the Berkman Klein Center at Harvard University (which has an annual “Responsible AI” fellowship). MIT’s Schwarzman College of Computing also offers a “Computing and Society Concentration,” which aims to encourage critical thinking about the social and ethical implications of tech. Classes like “Advocacy in Tech, Media, and Society” at Columbia University’s School of Social Work, meanwhile, aim to give grad students the tools to advocate for better, more just tech systems—even if they’re not developers themselves.

But in order to ensure a less biased virtual environment, the harder work of eradicating weight bias in real life must begin. A critical place to start? Eradicating the BMI. “I think that it is lazy medicine at this point, lazy science, to continue to ascribe to the BMI as a measure of health,” says Maxwell.

Meanwhile, Byrne says it’s helpful to understand that weight should be viewed as just one metric rather than the metric that defines your health. “Ideally, weight would be just one number on your chart,” she says. Byrne underscores that while it can be helpful to look into changes in weight over time (in context with other pertinent information, like vitals and medical history), body size certainly shouldn’t be the center of conversations about health. (You have the right to refuse to get weighed, which is something Patterson does with their doctor.)

There are already steps being taken in this direction: On June 14, the American Medical Association (AMA) voted to adopt a new policy of using the BMI only in conjunction with other health measures. Unfortunately, those other measures still include the amount of body fat a person has, and the policy keeps the BMI in place.

For tackling weight bias outside of doctor’s offices, Patterson cites the efforts being made to pass legislation that would ban weight discrimination at the city and state level. These bills—like the one just passed in New York City—ensure that employers, landlords, and public services cannot discriminate against someone based on their height or weight. Similar legislation is being considered in Massachusetts and New Jersey, and is already on the books in Michigan, says Dr. Raffoul.

On an individual level, everyone has work to do unlearning diet culture. “I think it’s hard, and it happens really slowly,” says Byrne, which is why she says books unpacking weight bias are great places to start. She recommends Belly of the Beast by Da’Shaun L. Harrison and Anti-Diet by Christy Harrison, RD, MPH. Soto also often recommends Fearing the Black Body by Sabrina Strings to her clients. Parents might also look at Fat Talk: Parenting in the Age of Diet Culture by journalist Virginia Sole-Smith for additional guidance on halting weight stigma at home. Podcasts like Maintenance Phase and Unsolicited: Fatties Talk Back are also great places to unlearn, says Byrne.

Patterson says one of their goals as a body liberation coach is to get people to move beyond mainstream ideas of body positivity and focus on something they think is more attainable: “body tolerance.” The idea, which they first heard someone articulate in a support group 10 years ago, is that while a person may not always love their body or how it looks at a given moment, they are living in it the best they can. “That’s usually what I try to get people who are in marginalized bodies to strive for,” Patterson says. “You don’t need to be neutral to your body, you don’t have to accept it…Being fat feels really hard, and it is. At least just tolerate it today.”

Patterson says that overcoming the problematic ways our society treats weight must start with advocacy—and that can happen on an individual basis. “How I can change things is to help people, one-on-one or in a group, make a difference with their bodies: their perception and experience in their bodies and their ability to stand up and advocate for themselves,” they share.

In Snow White, there ultimately came a day when the Evil Queen learned the truth about herself from her magic mirror. AI has similarly shown all of us the truth about our society: that we are still in the thrall of diet culture. But instead of doubling down on our beliefs, we have a unique opportunity to break the spell that weight stigma holds over us all. If only we all were willing to face up to our true selves—and commit to the hard work of being (and doing) better.
