FOR years, Facebook reigned as the ultimate social hub for connecting people and fostering community engagement.
But today, users find their feeds overwhelmed by irrelevant and intrusive content: random posts, spam and trending scandals such as the recent Engonga story.
In recent days, videos showing Equatorial Guinea’s senior civil servant, Baltasar Ebang Engonga, in compromising situations with various women have circulated widely on social media.
The BBC reports that this scandal has become a trending topic across Kenya, Nigeria, and South Africa, underscoring the troubling influence of algorithms that flood users’ feeds with sensational content, as if everyone is actively pursuing it.
These algorithms amplify viral stories and suggest related groups, eroding the authentic connections that social media platforms should foster.
Users are left questioning the relevance of Facebook’s content suggestions, and for many, the platform has become a space where trust and enjoyment are rapidly diminishing.
According to the Pew Research Centre, concerns about inaccuracies on social media are growing. A 2023 study published in Government Information Quarterly found that frequent Facebook users were more likely to consume fake and hyper-partisan news than users of other platforms.
A South African journalist recently voiced his frustration in a post: “Dear Facebook, these random suggested posts from people and groups I don’t follow are a huge turn-off. Why do I have to keep telling you to ‘show less,’ only for them to flood my timeline again? Do better!”
Spam and excessively personal posts — such as parents posting their children’s school grades and other sensitive details — are compromising privacy and exposing both individuals and their families to harm they may not even be aware of.
The phenomenon of parents sharing content about their children online, often referred to as “sharenting,” is fraught with risks.
According to David Moepeng, Africa’s first cyberpsychologist, sharenting poses significant privacy and security threats.
“Parents need to understand that sharing photos or personal information online can lead to identity theft or online exploitation,” he explained.
“The permanence of online posts means that children’s digital footprints are being shaped before they can consent. Parents should be more cautious about what they share and consider the long-term impact of posting sensitive information about their children online. Balancing the desire to share moments with ensuring the safety and privacy of their children is crucial.”
Moepeng specialises in educating internet users across Africa on cybersecurity and the measures they can adopt to safeguard themselves and their digital assets.
While social media has the potential to connect people and offer opportunities, these benefits are often overshadowed by the platforms’ misuse.
Moreover, researchers have linked excessive social media use to mental health issues such as disrupted sleep, lower life satisfaction and diminished self-esteem.
Europe has taken steps to address some of these concerns through the Digital Services Act, although its impact is yet to be ascertained.
The Act requires platforms such as Facebook, X (formerly Twitter) and TikTok to better regulate the content they promote.
It aims to curb disinformation, illegal content and societal harm while safeguarding privacy and child safety.
In response to growing concerns about the impact of social media on young people, Australia is set to introduce legislation to ban children under 16 from social media platforms.
Prime Minister Anthony Albanese emphasised that this move is intended to protect children from the mental health risks, harmful content and social pressures associated with online spaces.
In Zimbabwe, however, legal frameworks to manage social media and AI-generated content remain insufficient.
While the Cyber and Data Protection Act addresses some aspects of liability, it falls short of addressing AI-related issues and broader social media abuses.
Moepeng further explained how Facebook’s algorithms exacerbate these problems.
“Facebook’s algorithms are designed to promote content that engages users.
However, this can inadvertently increase exposure to unwanted or malicious content, as the algorithms may prioritise highly engaging posts, even if they contain misleading or harmful information.
Unsolicited content, such as spam, can be amplified by these algorithms, making it more likely for users to encounter potentially harmful posts that could lead to cybersecurity risks, such as phishing or scams.”
Research shows that misinformation generates far more engagement on social media than fact-based reporting.
“The rise of AI-generated content has made it even harder for Facebook to differentiate between genuine and harmful posts,” Moepeng added.
“Spammers use sophisticated techniques to create content that mimics legitimate posts, challenging traditional filtering methods.”
Users, Moepeng added, can protect themselves by being more critical of posts that seem unusually repetitive or impersonal and by cross-checking information through trusted sources before engaging. Installing browser-based plugins that analyse and flag AI-generated content can also help to mitigate this issue, he said.
AI is frequently employed to mass-produce fake images and text, creating entry points for cybercriminals.
“AI-generated spam can lead to cyberattacks,” Moepeng warned.
“These posts often contain malicious links directing users to phishing sites or malware downloads, putting personal data at risk.”
Meta, Facebook’s parent company, acknowledges these challenges.
In its policy rationale, Meta stated: “We do not allow content designed to deceive, mislead or overwhelm users in an attempt to artificially increase viewership. This detracts from authentic engagement and can threaten the security of our services.”
Nevertheless, as spammers adapt to evolving detection systems, users remain vulnerable.
So long as spamming remains profitable, the problem will persist.
Engaging with harmful content is not just inconvenient — it can have severe financial repercussions.
Moepeng advises that clicking on suspicious links or downloading attachments from unfamiliar sources can expose users to phishing attempts, identity theft and malware.
“To avoid this, users should critically assess the legitimacy of the content before engaging. It’s essential to check the source, verify the credibility of the post and be wary of anything that seems overly sensational or too good to be true.”
Ultimately, the responsibility lies with both the social media platforms and their users. As AI and algorithms continue to shape the digital landscape, balancing engagement with user safety becomes an ever-greater challenge.
One thing is certain: unless social media giants address these concerns, they risk alienating the very users they rely on.