Meta's Hidden Research Reveals Years of Concerns Over Addictive Design
Internal documents reveal that Meta has long explored whether its platforms could foster addictive behavior. In 2018, researchers within the company proposed a public audit to examine design features linked to compulsive use, but the audit never materialized, despite ongoing debate about social media's impact on mental health.
The 2018 proposal called for a review of Facebook design elements that might fuel problematic use, and it envisioned collaboration with external experts, such as Tristan Harris, to lend the effort credibility. Nevertheless, the company did not proceed with the audit.
The company's internal studies found that around 3% of US Facebook users exhibited signs of 'problematic use', with teens and young adults identified as the most vulnerable group. Despite these findings, Meta has consistently maintained that problematic use does not equate to clinical addiction.
Over the years, Meta has implemented some measures aimed at user safety. In 2021, it added 'take a break' reminders for teens on Instagram; a year later, it introduced parental control tools. Then, in 2024, it bundled existing teen protections into 'Teen Accounts', which set stricter default privacy and safety options.
Publicly, Meta has maintained that no conclusive evidence ties social media use to addiction or broader mental health harm. Executives have testified that while platform use can become problematic, it should not be classified as addiction.
The company's internal research thus reflects long-running concerns about compulsive use, particularly among younger users. While Meta has rolled out some safeguards, the audit its researchers proposed in 2018 was never conducted, and no specific measures beyond the teen safety features have been documented since.