Produced by the Office of Marketing and Communications
UMD Scientist Shows Generative AI Outputs Skew Negative, Even After Positive Prompting
New UMD research found that even when prompts given to generative artificial intelligence lean toward positive emotions, such as joy or excitement, the images generated from those prompts tend to evoke fear as the dominant emotion.
Illustration by Adobe Stock
Anyone active on social media has seen how their algorithms can favor negative content, with outraged posts and anxiety-inducing images often getting more engagement.
Now, a University of Maryland researcher is exploring similar biases in generative artificial intelligence (AI) models like Stable Diffusion and GPT-4o. These tools, which produce text and images based on user prompts, often create content that is more negative than the prompts themselves—a bias that could have serious consequences for our online spaces.
To research biases in image generation models, Cody Buntain, an assistant professor in the University of Maryland’s College of Information and part of the Institute for Trustworthy AI in Law & Society (TRAILS), teamed up with Maneet Mehta, a senior at Reservoir High School in Howard County, Md., who reached out to him last summer to learn about his work on ethical AI.
The duo employed the DiffusionDB dataset, which contains 14 million images generated by Stable Diffusion, a deep-learning, text-to-image AI model. They then used advanced machine learning techniques and a hybrid captioning approach to identify emotions in AI-generated images and compare them to those in corresponding text prompts.
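The article does not publish the researchers' pipeline, but the core comparison step can be sketched as follows. This is a minimal illustration assuming a toy keyword-based emotion scorer; the actual study used machine learning classifiers and a hybrid captioning approach, and `EMOTION_KEYWORDS`, `dominant_emotion` and `emotion_shift` are hypothetical names introduced here.

```python
from typing import Tuple

# Toy lexicon standing in for the trained emotion classifiers the
# researchers used; a real system scores far more nuance than this.
EMOTION_KEYWORDS = {
    "joy": {"joyful", "happy", "bright", "celebration", "smiling"},
    "excitement": {"vibrant", "dynamic", "thrilling", "energetic"},
    "fear": {"dark", "ominous", "shadowy", "menacing", "eerie"},
    "sadness": {"gloomy", "lonely", "desolate", "mournful"},
}

def dominant_emotion(text: str) -> str:
    """Return the emotion whose keywords appear most often in the text."""
    words = set(text.lower().split())
    scores = {emo: len(words & kws) for emo, kws in EMOTION_KEYWORDS.items()}
    return max(scores, key=scores.get)

def emotion_shift(prompt: str, image_caption: str) -> Tuple[str, str]:
    """Compare the dominant emotion of a text prompt with that of a
    caption describing the generated image (the captioning idea lets
    an image be scored with the same text-based machinery)."""
    return dominant_emotion(prompt), dominant_emotion(image_caption)

# A prompt leaning toward joy whose generated image reads as fearful:
shift = emotion_shift(
    "a joyful bright celebration in a park",
    "a shadowy figure under dark ominous clouds",
)
print(shift)
```

Run at scale over prompt–image pairs, tallies of such shifts are what reveal a systematic drift toward fear even when the prompts themselves are positive.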
The researchers discovered that even when prompts lean toward positive emotion, such as joy or excitement, the images generated from these prompts tend to evoke fear as the dominant emotion.
“We’ve found this to happen even if that's not your intent, even if you aren't aware of it,” said Buntain, who has an affiliate appointment at the University of Maryland Institute for Advanced Computer Studies (UMIACS), which is managing the large volume of data associated with the project.
But where does this bias come from? According to Buntain, it may be rooted in inherent human psychology.
“People tend to be more responsive to negative visuals,” he said, referencing previous research demonstrating that hostile or fearful content on social media gets more reactions. To satisfy the growing need for training data as generative AI models increase in sophistication, developers increasingly draw on social media and creator content, material that is already skewed toward negativity.
This results in what Buntain calls an “unvirtuous cycle.” AI systems are trained on this media, then over-represent negative emotions in the content they produce. This content feeds back into social media, captures the attention of even more viewers, and is funneled through to the next round of training AI systems—and the cycle repeats.
Buntain believes this may result in a spiral of negativity online. “If you're exposed to your friends who are posting negative content, you are more likely to then feel negative and post negative content yourself,” he said.
This is especially noteworthy amid rising rates of anxiety disorders in the U.S., particularly among young people. Buntain also warns that this contagion of negative emotion could worsen political polarization, making it harder for people to engage in civil discourse and making elections more contentious.
However, Buntain believes there is a way to break this cycle, primarily by empowering users of these tools.
For example, generative AI models could incorporate features like an emotional tone slider, allowing users to adjust for positive or negative content. Or they could provide feedback that acknowledges that the images the system creates are likely to be negative, and then offer to recreate content to evoke more positive emotions.
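No current model offers such a control; purely as an illustration of the proposal, an emotional tone slider could be sketched as a preprocessing step that nudges the user's prompt before it reaches the image model. The function name and the modifier phrases below are assumptions, not features of Stable Diffusion or GPT-4o.

```python
def apply_tone(prompt: str, tone: float) -> str:
    """Adjust a prompt according to a tone slider in [-1, 1],
    where -1 is most negative and +1 is most positive.

    Hypothetical UI affordance: appends mood modifiers so the
    downstream image model is steered toward the chosen tone.
    """
    if not -1.0 <= tone <= 1.0:
        raise ValueError("tone must be between -1 and 1")
    if tone > 0.5:
        return f"{prompt}, uplifting, warm, hopeful mood"
    if tone < -0.5:
        return f"{prompt}, dark, ominous, unsettling mood"
    return prompt  # near-neutral: leave the prompt unchanged

print(apply_tone("a city street at dusk", 0.8))
```

A real implementation would more likely steer the model's internal representations than edit the prompt text, but the user-facing idea, an explicit, adjustable tone control, is the same.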
“The main problem is not necessarily the creation of negative content—it’s the lack of knowledge that the systems you're using have this sort of predisposition,” he said.
Maryland Today is produced by the Office of Marketing and Communications for the University of Maryland community on weekdays during the academic year, except for university holidays.
Faculty, staff and students receive the daily Maryland Today e-newsletter.