Several whistleblowers claim that major social media platforms allowed harmful content to spread in order to increase user engagement. Former employees say internal research revealed that controversial or outrage-driven posts attracted more attention from users.
Insiders from both Meta and TikTok described how the companies made decisions that increased the visibility of borderline harmful material, including misogynistic posts, conspiracy theories, harassment, and other disturbing content.
According to the whistleblowers, the companies prioritized user engagement and competitive advantage over safety concerns.
Pressure to Compete in the Social Media Market
An engineer who worked at Meta said managers instructed developers to allow more “borderline” content into user feeds. The goal was to compete with TikTok’s powerful video recommendation system.
The engineer explained that executives believed the platform needed stronger engagement numbers. Some employees understood this push as a response to falling stock prices.
Meta owns several major platforms, including Facebook and Instagram. Both services rely heavily on algorithms to decide which posts users see first.
Whistleblowers claim these algorithms sometimes favored content that sparked anger or outrage because such posts kept users interacting with the platform longer.
Internal Concerns About Instagram Reels
Former Meta researcher Matt Motyl said the company launched Instagram Reels in 2020 without strong safety protections.
Reels was created to compete directly with TikTok, which had rapidly gained popularity with its short-video format.
Internal studies reportedly showed that comments on Reels contained higher levels of harassment, bullying, hate speech, and violent language than comments elsewhere on Instagram.
Motyl shared research documents that highlighted concerns about how recommendation algorithms affected user behavior.
One internal report warned that the algorithm created financial incentives for content creators to prioritize engagement over audience well-being.
Safety Teams Reported Limited Support
Whistleblowers also described staffing decisions that raised concerns among employees.
According to a former senior Meta employee, the company hired hundreds of workers to expand Reels. However, safety teams struggled to secure even a few additional specialists to handle issues related to child protection and election integrity.
Critics argue that these choices show how product growth sometimes took priority over user safety.
TikTok Staffer Raises Policy Concerns
A TikTok employee shared internal complaint dashboards that track user reports. The data reportedly showed cases involving harmful posts, including content featuring children.
The staff member said company leaders sometimes prioritized issues involving political figures over reports of other harmful content.
According to the employee, management wanted to maintain strong relationships with politicians to avoid potential regulation or platform bans.
Companies Reject the Allegations
Both companies strongly deny the claims made by whistleblowers.
Meta stated that it does not deliberately promote harmful content to increase profits. The company said protecting users remains a key priority.
Meanwhile, TikTok described the accusations as fabricated and emphasized its investments in technology designed to prevent harmful material from reaching users.
Growing Debate Over Social Media Algorithms
The allegations have intensified the ongoing debate about how social media algorithms shape online behavior.
Critics argue that platforms must balance engagement metrics with user safety. Governments and regulators worldwide continue to examine how these systems influence public discourse and online well-being.
As scrutiny grows, companies such as Meta and TikTok could face increasing regulatory pressure over these decisions in the coming years.
