Title: Meta Accused of Creating a Dangerous Space for Children on Facebook and Instagram
A lawsuit filed in New Mexico accuses Meta of exposing minors to explicit content and making it easy for adults to contact them, putting children at risk of exploitation and harassment. The lawsuit also names Meta's CEO, Mark Zuckerberg, as a defendant.
The complaint argues that Meta violates New Mexico law by publicly claiming to prioritize child safety while permitting harmful material and conduct that endanger children on its platforms.
Meta has faced increasing scrutiny regarding its impact on younger users in recent years. The social media giant has been sued by several school districts and attorneys general over concerns related to mental health, child safety, and privacy. Arturo Bejar, a former Facebook employee and current whistleblower, testified before a Senate subcommittee last month that Meta leadership, including Zuckerberg, had ignored warnings about how their platform harmed young users for years.
Meta also sued the Federal Trade Commission last month, seeking to block the agency from reopening the landmark $5-billion data privacy settlement the company reached in 2020 and from barring the social media giant from monetizing user data.
Meta denies all allegations that its platform endangers children.
"We use advanced technology, engage child safety experts, report content to the National Center for Missing and Exploited Children, and share information and tools with other companies and law enforcement agencies, including attorneys general, to help combat child exploitation," company spokesperson Nneji said in a statement, which also noted that Meta has removed thousands of accounts, groups, and devices that violated its child safety guidelines.
Earlier this month, Meta announced that it had introduced proactive technology to detect and block accounts exhibiting suspicious behavior, along with a dedicated child safety team to improve its youth safety practices. Meta also said it offers about 30 tools to support teenagers and families, including screen time limits and the option to hide like counts on posts.
Investigation in New Mexico
In the course of the investigation, the New Mexico Attorney General's Office conducted random samplings of Instagram accounts registered to users under 12. The complaint alleges that these accounts could access explicit material through searches for "sexual or self-harm content," including "soft pornography."
In one case, a Facebook search for explicit content yielded no results, but the same search on Instagram produced "a large number of accounts," according to the complaint.
Photos of young girls posted on Instagram often attract "a series of comments from accounts of adult men, in which girls are often encouraged to contact them or send photos," the complaint states, adding that investigators found several "adult" accounts whose pages were filled with numerous photos of young girls.
"After reviewing accounts with sexually suggestive images of girls, Instagram's algorithms led investigators to other accounts containing child pornography and pornographic images," according to the complaint.
Investigators found dozens of accounts sharing child sexual abuse material, including photos of young girls in undergarments and images suggesting that children were engaged in sexual activity. In some cases, these accounts appeared to be selling such material.
The complaint also alleges that Meta's safety measures were inadequate and in some cases made it easier for users to find and view child sexual abuse material.
"A search on Instagram for 'Lolita,' whose literary roots refer to a relationship between an adult man and an adolescent girl, resulted in an Instagram warning marking content related to potential child sexual abuse, but the algorithm also suggested alternative terms like 'Lolita girls' that generated explicit content without warning," according to the complaint.
The lawsuit seeks a $5,000 fine for each alleged violation of New Mexico's deceptive practices law and an order prohibiting Meta from engaging in unfair, unreasonable, or deceptive practices.
Additional Insights:
Meta could improve its use of technological innovations to better address online threats to children by:
- Strengthening age verification: Implementing robust age verification systems that prevent children under 13 from creating accounts simply by entering a false birthdate. This would significantly reduce minors' exposure to harmful content.
- Improving algorithmic design: Modifying algorithms to minimize the recommendation of exploitative content to minors, avoiding the proactive serving of sexually explicit images to children through recommended users and posts.
- Enhancing parental controls: Developing and promoting user-friendly parental control features that allow parents to effectively monitor and control their children's online activities.
- Increasing transparency and accountability: Conducting regular third-party audits to ensure that safety measures are being implemented and that the company is adhering to its own safety standards.
- Improving data accuracy: Ensuring that information presented to parents through tools such as "time spent" features is accurate and reliable.
- Addressing red flags: Prioritizing child safety over design and engagement metrics by taking meaningful action to reduce risks posed by the platform.
By implementing these measures, Meta can significantly improve its protection of children from online threats and mitigate the negative impacts of its platforms on young users.
Source: