Unlike mainstream media and publishers, which are responsible for their content – they have both the right and the duty to edit it and ensure it complies with regulations and ethics – social media administrators are detached from their subscribers’ content. They may have a say in, and may filter, the types of advertising their platforms carry, but they are in a difficult position when it comes to regulating what their members talk about.
We recently witnessed dozens of big brands pulling out of advertising on Facebook as a form of pressure, demanding that Facebook take stronger action against racist and hateful content broadcast on its platform.
The question is whether such a boycott is enough to make Facebook react. Could Facebook actually filter racist and hateful content from showing up on its platform? And finally, is there any way at all to ensure social media is free from racism, hatred (and their cousins, hoaxes and misinformation) once and for all?
Though Facebook (and other social media) rely on advertising revenue, a boycott by major brands would set it back 10 percent or less. The majority of advertisers on social media are small ones taking advantage of the cheaper rate card and the effective audience reach; unless all (or most) of them join the boycott, this act would not hurt Facebook financially.
This stance, however, is effective in drawing attention to the issue and is a public relations nightmare for Facebook. Though Facebook is aware of the situation and has stated it would do its best to remove hateful and misinformed content, the second question still lurks: would it even be possible for any social media platform to do so?
Technically, even if platforms deploy filters that detect words, symbols or pictures indicating unwanted content, those filters can easily be circumvented by malicious perpetrators. Change “sex” to “53x”, swap a Nazi swastika for a hooked cross, and instead of a menacing firearm post a colorful photo of a water pistol. Not to mention: how do you differentiate the hateful Nazi symbol from the (original) religious Hindu swastika, or distinguish real weapons from toys? Even the best filtering programs are still far from perfect.
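The cat-and-mouse dynamic described above can be sketched in a few lines of code. This is a minimal illustration, not any platform's actual system; the blocklist and the substitution table are hypothetical, and real moderation pipelines are vastly more sophisticated:

```python
# Sketch of a naive keyword filter and why simple character
# substitutions ("leetspeak") defeat it. Purely illustrative.

BLOCKLIST = {"sex"}  # hypothetical blocked term from the example above

def naive_filter(text: str) -> bool:
    """Return True if the text contains a blocked term verbatim."""
    return any(term in text.lower() for term in BLOCKLIST)

# A tiny normalization table mapping common look-alike characters
# back to letters; a real system would need far broader coverage
# (Unicode homoglyphs, spacing tricks, images of text, and so on).
SUBSTITUTIONS = str.maketrans({"5": "s", "3": "e", "0": "o", "1": "i"})

def normalized_filter(text: str) -> bool:
    """Normalize look-alike characters before matching."""
    normalized = text.lower().translate(SUBSTITUTIONS)
    return any(term in normalized for term in BLOCKLIST)

print(naive_filter("53x"))       # False: the verbatim match misses it
print(normalized_filter("53x"))  # True: normalization catches this variant
```

Even the normalized version only catches substitutions it was told about; each new evasion trick requires another rule, which is exactly why pure filtering never closes the gap.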
Then there is the meaning behind the use of these symbols: a noose on its own may relate to types of knots, but it can also mean something entirely different and deeply offensive.
Government regulations controlling social media content may not necessarily solve the problem either; other than blatant hate speech punishable by law, there is a wide gray area between freedom of speech and despotic governments using social media data to identify and punish dissent.