February 27, 2025
A recent incident sent ripples of concern through Instagram users worldwide after violent and graphic videos surfaced in the Reels feature because of a technical glitch. Meta Platforms, Instagram's parent company, has since apologized for the issue and says it has now been resolved.
The Incident
On Wednesday, a large number of Instagram users reported disturbing content unexpectedly appearing in their Reels feeds. These videos surfaced regardless of users' own "Sensitive Content Control" settings, which are meant to filter such material. Some videos carried "Sensitive Content" warnings, while others appeared without any labels at all.
Several of the videos that appeared in users' feeds depicted shootings and industrial accidents, and some came from accounts with names such as "PeopleDyingHub" and "ShockingTragedies."
The Meta Response
A Meta spokesperson said the company had fixed a glitch that caused some users' Instagram Reels feeds to show content that should not have been recommended. "We're sorry for the inconvenience," the spokesperson said. The company did not disclose further details about the cause of the glitch or how many users it affected, stating only that the issue had been fixed.
Context of Changes to Content Moderation
The incident comes against the backdrop of broader changes to content moderation at Meta announced this January, when chief executive officer Mark Zuckerberg said the company would shift from third-party fact-checking to a community-notes system similar to the one on Elon Musk's X platform.
Under the new arrangement, misinformation is flagged through notes written by users rather than by professional fact-checkers, while the most severe violations, such as drug trafficking, terrorism, and child exploitation, continue to be enforced proactively. Enforcement against other violations will rely mainly on user reports.

This policy change sparked debate, particularly over the perceived risk of greater exposure to harmful content. Critics argue that scaling back professional fact-checking and relying more heavily on automated moderation tools could allow objectionable content to slip through the cracks.
User Reactions and Ongoing Concerns
Although Meta maintains that the problem has been solved, some users have claimed that disturbing material is still appearing in their feeds. Complaints across social media platforms have surged, with users voicing frustration and concern about the strange and inappropriate videos showing up in their Reels.
Implications for User Trust and Safety of Platforms
This incident highlights a challenge social media platforms increasingly face: striking a balance between content-recommendation algorithms, user safety, and content moderation. The growing trend toward community-driven moderation and automated systems raises questions about how effective such tools can be at curbing the spread of harmful content.
Given the changes to Meta's content policies, incidents like this underscore the need for strong safeguards against exposing users to violent and insensitive material. Maintaining user trust in the platform will require sustained follow-up and careful oversight to keep such incidents from recurring.
The glitch, which exposed thousands of Instagram users to violent material in their Reels, forced Meta to issue a public apology and renewed scrutiny of its content-moderation policies.

As the company moves toward community-based moderation and increased automation, this event offers a sobering test of how effective such systems will be at preventing incidents like this and keeping the platform safe for everyone who uses it.