There are a lot of problems with social media. Whether you use it regularly or not, we have all seen innumerable news stories detailing the data leaks, the online arguments that end in real-world murders, and the industrial-scale manipulation of social media platforms on behalf of corporate or political interests.
There are myriad technical and psychological reasons that social media platforms have the issues that they do. Integral to the way that social media platforms work is the fact that users are required to use their real names when signing up for the service.
This contrasts with most other online services, where users are expected to adopt an online persona and use a made-up handle. The fact that social media accounts are so intimately tied to the real people behind them is a big part of why these platforms have become targets for all kinds of nefarious activity.
But while real users are required to use their real names on social media platforms, we all know that these platforms are infested with fake accounts. It is only the real, legitimate users who are impacted by the insistence on real names, and who therefore sacrifice their privacy and anonymity.
Worse still, because most real users have to use their real names and identities, they assume that the same is true of the other accounts they see on the platform.
This blinds people to the prevalence of fake accounts, whether trolls or astroturfers, that are designed to look just like regular users.
Several years ago, when Twitter was just becoming the Goliath that it is today, there was a huge scandal when it transpired that a large proportion of the followers on some celebrity accounts were in fact bots.
We all understand now that these bots and fake fans are part of social media platforms, but at the time, the revelation was shocking. In fact, the implications of that scandal can still be felt today.
Marketers now put a great deal of pressure on social media platforms to remove fake accounts so that they can get a more realistic picture of how popular their clients actually are – and how effective their marketing actually is.
What’s The Solution?
It’s hard to know where to begin with fixing social media platforms so that they can be used as intended – as tools for users to connect and communicate with one another. It is clear that the scale of the problem is simply far too great for platforms to deal with themselves.
Even a business with the resources of Facebook, one of the largest businesses in the world, is ultimately powerless against the deluge of fake accounts hitting its systems every day.
Just as other platforms, such as YouTube, have admitted defeat and found other ways of tackling content or users they don't want on their services, it is now time for social media platforms to consider more radical measures that might finally rid them of the scourge of fake user accounts.
In fact, the old adage “fight fire with fire” might just be the answer here. If you can remember the old days of the internet, you will no doubt recall just how rare it used to be to associate real names with the online accounts that we held. Back then, moderation was generally carried out by volunteers, with online platforms nominating moderators and administrators with varying degrees of powers to police their communities. Of course, when you are dealing with a platform the size of Facebook, even this system has its difficulties.
The Old Ways Were The Best Ways
Websites and online message boards used to let people create accounts with little to no authentication involved, and they did not suffer for doing so. While there may be much more determined efforts today to spam websites and create fake accounts en masse, it is clear that efforts to put up barriers are simply delaying the inevitable.
Reddit sets a good example when it comes to policing social media platforms. Reddit is divided into sub-communities that are created and maintained by the users themselves.
Every individual community has its own self-contained moderation team, and so far this approach has been effective at removing bots and fake accounts when they appear. Outright spam is a rarity on Reddit precisely because of this model.
Another source of inspiration is the Fediverse, a network of interconnected servers, each of which hosts its own content. While the servers that make up the Fediverse are entirely separate from one another, they can intercommunicate.
This means that if a network of bot or spam accounts is detected on one part of the network, it can be proactively banned from the entire network. If social media platforms aren't able to fix these issues themselves, then it is time to invest the necessary powers in their users.
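The federated banning idea can be sketched in a few lines. This is a hypothetical illustration only, not any real Fediverse protocol or API: the `Server` class, the `detect_spam` heuristic, and the peer lists are all invented for the example.

```python
class Server:
    """One instance in a hypothetical federated network (invented for illustration)."""

    def __init__(self, name):
        self.name = name
        self.peers = []        # other servers we exchange moderation data with
        self.blocklist = set() # accounts banned across the network

    def detect_spam(self, account):
        # Placeholder for local spam detection (user reports, heuristics, etc.)
        return account.startswith("bot_")

    def ban(self, account):
        """Ban an account locally, then propagate the ban to every peer."""
        if account in self.blocklist:
            return  # already seen; prevents infinite loops between peers
        self.blocklist.add(account)
        for peer in self.peers:
            peer.ban(account)

# Three servers that federate with one another
a, b, c = Server("a"), Server("b"), Server("c")
a.peers, b.peers, c.peers = [b, c], [a, c], [a, b]

# A spam account detected on one server ends up banned everywhere
if a.detect_spam("bot_1234"):
    a.ban("bot_1234")

print(all("bot_1234" in s.blocklist for s in (a, b, c)))  # True
```

The key design choice is the early return on an already-banned account: without it, two peers would bounce the same ban back and forth forever. Real federated networks (such as those built on ActivityPub) handle this with more robust mechanisms, but the principle of propagating moderation decisions between trusting servers is the same.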
Self-moderation of communities used to be commonplace and was a perfectly good system, certainly more successful than what we have today. Now is the time for radical solutions.