UT Austin Researchers Identify “Brain Rot” Effect in Large Language Models

Published: November 5, 2025
[Figure from the "Brain Rot" paper]

A new study co-authored by UT Austin researchers introduces the "LLM Brain Rot Hypothesis," showing that large language models can lose reasoning ability when repeatedly trained on low-quality, engagement-driven web content such as short social-media posts. The work demonstrates measurable declines in reasoning, long-context performance, and ethical behavior, signaling that model quality can erode even without changes to architecture or scale.

Check out the full paper and accompanying coverage to learn more.