Point of view
Head of Strategy & Planning
This week we were disgusted and saddened to hear of the racist abuse Black England players were subjected to in the wake of the Euro 2020 final. Since then, the large social platforms, such as Facebook and Twitter, have come under scrutiny for not acting fast enough. This is despite football clubs, players, athletes and a number of sporting bodies undertaking a four-day boycott of social media in April to highlight abuse and discrimination on these platforms.
It’s clear the tech giants are struggling to contain this issue, let alone eradicate it; the technology they have implemented isn’t foolproof, and platforms often rely on users to report hate speech. At the same time, the tech giants are grappling with users’ need to share content freely and access platforms easily.
Social media is a key component of today’s society, and we all have a role to play in ending all types of hate speech everywhere, both on social media and in person. If you see or hear a hate crime, report it (helpful links are provided at the bottom of this article). As agencies and advertisers, our role is to encourage platforms to act, whilst supporting and implementing best-practice brand safety as it is introduced.
We have been monitoring the situation closely and want to reassure advertisers about brand safety on Facebook’s platforms. As a Facebook Business Partner, we follow best-practice brand safety within the platform: we use Block Lists for any third-party placements on Audience Network and avoid any religious or current-affairs topics on that platform. In addition, to further protect our clients, we are developing proprietary technology that detects sentiment in comments on social posts, so that we can react quickly and remove ads where a user has made harmful or abusive comments.
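To give a sense of how comment-level screening of this kind can work, here is a minimal sketch in Python. All names are hypothetical and the keyword list is purely illustrative; a production system would rely on a trained sentiment model and the platform's own moderation APIs rather than simple word matching.

```python
# Illustrative comment-screening sketch. FLAGGED_TERMS, comment_risk_score
# and ads_to_pause are hypothetical names, not part of any real platform API.

# A small, illustrative list of terms that should trigger review.
FLAGGED_TERMS = {"hate", "abuse", "disgusting"}

def comment_risk_score(comment: str) -> float:
    """Return the fraction of words in a comment that match flagged terms."""
    words = [w.strip(".,!?").lower() for w in comment.split()]
    if not words:
        return 0.0
    flagged = sum(1 for w in words if w in FLAGGED_TERMS)
    return flagged / len(words)

def ads_to_pause(comments_by_ad: dict[str, list[str]],
                 threshold: float = 0.2) -> list[str]:
    """Flag ads whose comment threads contain any high-risk comment."""
    return [
        ad_id
        for ad_id, comments in comments_by_ad.items()
        if any(comment_risk_score(c) >= threshold for c in comments)
    ]

# Example: one ad with an abusive comment thread, one without.
flagged = ads_to_pause({
    "ad_123": ["Great campaign!", "pure hate and abuse"],
    "ad_456": ["Love this product"],
})
print(flagged)  # → ['ad_123']
```

In practice the interesting design decisions sit in the scoring function and the threshold: too aggressive and legitimate ads are paused, too lenient and abusive threads slip through, which is why real systems pair automated scoring with human review.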
In the meantime, Facebook have provided us with the below update, which covers the steps they have taken to improve the situation across their platforms:
No one should have to experience racist abuse anywhere, and we don’t want it on our platforms. We worked swiftly to remove comments and accounts directing abuse at England’s footballers last night and we’ll continue to take firm action against those that break our rules.
We have always taken the issue of hate speech very seriously. We don’t allow content that attacks people based on their race, religion, sex, gender identity, sexual orientation, ethnicity or national origin.
In addition to our work to remove this content, we have been working hard to find ways to prevent players and public figures from having to see this type of abuse at all. Earlier this year we announced a new feature on Instagram, built in consultation with footballers and high profile people who use our apps. When turned on, this feature will automatically filter Direct Message requests containing offensive words, phrases and emojis, into a hidden folder, so people never have to see them.
This works alongside our existing comment filtering technology and people can report this abuse without ever having to see the content of the messages.
Players and football bodies have also been clear that harmful online activity should have real world consequences and we agree. Our dedicated law enforcement outreach team is already in touch with the relevant authorities to see how we can support with the most recent incidents of abuse.
No one thing will fix this challenge overnight, but we’re committed to keeping our community safe from abuse and to working with others to address those seeking to cause this abuse in the first place.
This is clearly a very important topic, so if you have any questions please get in touch to discuss it further.
We believe that moving too slowly in digital is the biggest risk your business faces. If you are ready to move faster in digital, we are here to help.
GET IN TOUCH