Search Engine Optimisation (SEO) teams work with AI, optimising websites to help clients rank higher in Google’s search results. It’s important to think about the ‘human factor’ too, and conversations around ‘how ethical is SEO?’ often crop up.
I genuinely believe that my job as a member of iCrossing’s SEO team does good, as we help users find what they need faster and more efficiently. However, I’m aware that SEO has its flaws. “Don’t Be Evil” was once famously Google’s motto – with the company’s old code of conduct stating they offered users unbiased access to information.
One such ethical issue is that search engine AI is inherently biased, which can ultimately lead to discrimination. It’s important that SEO teams adopt anti-discriminatory practices in their work to counter this bias.
Yet SEO is, by definition, the practice of improving the positioning of web pages in search results. It’s also important to remember that Google is an advertising company, and its search results can favour its own economic interests.
Google isn’t primarily an information site. In a 2021 seoClarity survey, 81.8% of participants said they mostly trust Google’s results. Yet we forget that search engines are built to be profitable, and their decision-making protocols are biased in favour of the corporate elite. So the question arises: who are Google’s top rankings providing the best information for?
A troubling aspect of SEO is discrimination in search results. Part of my job is to analyse the keywords people use and target those terms. Yet I’m aware that Google’s search results aren’t always ‘good’: when I type ‘women should’, my top autosuggestion is ‘women should dress modestly’.
In the past, autosuggestions featured a range of sexist ideas such as:
· Women cannot: drive, be bishops, be trusted, speak in church
· Women should not: have rights, vote, work, box
· Women should: stay at home, be slaves, be in the kitchen, not speak in church
· Women need to: be put in their place, know their place, be controlled, be disciplined
It’s not only gendered queries, but also race-related queries that garner these types of results. Typing in ‘Asian girls’ on Google Image Search brings up several sexualised results, often self-named ‘gentle romantic images’, ranging from ‘Asian school girls’ to ‘Asian mail order brides’.
The public trusts Google and believes that top-ranking search results are the most popular or credible. Yet this doesn’t explain why sexualised images appear even when users searching for ‘Asian girls’ don’t include the word ‘porn’ in their query.
Admittedly, people typically use very few search terms when seeking information online and rarely use advanced search queries. Google’s algorithm is excellent at mapping simple search terms onto users’ more complex thought patterns and concepts around a topic. But again, we must remember that it’s an advertising algorithm, not an information algorithm.
Placing responsibility for ‘bad results’ onto users is a problem, since most of the results for generic racial and gendered searches are out of their control. The narrative that users and society, rather than search engines, are accountable for the algorithms fails to implicate the algorithms, the search engines, and the SEO agencies that drive and manipulate certain results to the top of the rankings.
When accused of racial stereotyping, Google typically denies responsibility or intent to harm. However, the platform is able to ‘tweak’ or ‘fix’ these aberrations or ‘glitches’ in its systems in response to these issues. So the question is, if Google isn’t responsible for its algorithm’s design, who is?
We can see positive change in the algorithms over time: typing in ‘black girls’ brings up fewer pornographic images than 10 years ago. But overall the algorithm updates are famously vague: they’re often unconfirmed, and even when we’re told more about what they entail, a lot of guesswork is involved. Not knowing exactly how Google’s algorithm works, combined with Google’s denial that there is an issue, makes it incredibly difficult to pinpoint how to resolve the problem.
This isn’t a problem of the past: on 2 December 2020, Timnit Gebru, a highly influential artificial intelligence researcher (and Black woman), announced she had been fired by Google for co-authoring a paper on the risks of large natural language processing models. As large language models are trained on exponentially increasing amounts of text, there’s a risk that racist and sexist language ends up in the training data; and because the training data is so large, it’s hard to audit. Such models also fail to capture the language and norms of communities with less access to the internet, reflecting only the practices of the richest countries and communities.
Gebru’s paper highlighted that, because Big Tech companies can already profit from models that manipulate language rather than understand it more accurately, investment keeps flowing into the former. This paper led to her dismissal.
As the adage says, admitting there’s a problem is the first step. While we can point out examples where the algorithm has improved, we must still recognise that search engines which rank and prioritise for profit compromise our capability to engage with complex concepts. Perhaps these tech giants don’t intend to be racist or sexist, but the output is what matters more than intention.
We can then start to be critical of our own role in it and apply anti-racist and anti-sexist actions to our deliverables. Below, I’ve combined some of my own ideas with suggestions that have already been made for how digital marketers can fight discrimination in the SEO industry; this list is by no means exhaustive.
· When doing keyword research on beauty products, for example, we can go further than checking Google Keyword Planner for the search volume of terms aimed at white women. We could dig into black-owned beauty web-shops, analyse what terminology they use and what pain points they aim to soothe, and bring that research into our content.
· We can challenge our clients’ buyer personas: How diverse are they? What data have they used to back up their conclusions? Have they used wide sources of data – and not just Google? Do they have diverse sample sets within their market research/focus groups informing their product development and website UX testing?
· We can take a second look at the client’s content – both written and media – and add diversity to the website audits. Does the written content target people from all backgrounds? Are the photos and videos inclusive? Can we optimise the alt text and schema in mindful ways?
· Who is behind the scenes? Do we work with enough black, Asian, female, trans, gender-fluid writers, developers and partners? Do we have a diversity advisory committee to ensure content is accessible and inclusive?
· Are we holding our suppliers accountable? As paying customers, we can reach out to our tool vendors, from Ahrefs to Conductor, to ask what their anti-racism and anti-sexism policies are.
· Check our language – why are we still saying black hat and white hat SEO? Can we replace ‘webmaster’ with ‘web leader’?
· Volunteer our time and skills to non-profits that are leading the fight against sexism and racism in their communities. By training them in SEO, we can help level the playing field.
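As a concrete starting point for the keyword-research idea above, here is a minimal sketch of how one might compare the vocabulary of a target audience’s copy against an existing keyword set, surfacing terms the current research misses. The copy snippets, stopword list, and function names are hypothetical illustrations, not real client data or a standard SEO tool:

```python
from collections import Counter
import re

STOPWORDS = frozenset({"and", "the", "for", "with", "to", "a", "of", "our", "in"})

def extract_terms(text):
    """Tokenise page copy into lowercase words, dropping common stopwords."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return [w for w in words if w not in STOPWORDS]

def distinctive_terms(target_copy, baseline_copy, top_n=5):
    """Terms frequent in the target copy but absent from the baseline.

    A crude proxy for: what terminology does this audience use that
    our current keyword set misses entirely?
    """
    target = Counter(extract_terms(target_copy))
    baseline = set(extract_terms(baseline_copy))
    return [(term, count) for term, count in target.most_common()
            if term not in baseline][:top_n]

# Illustrative copy snippets (invented for this example):
baseline = "hydrating shampoo for dry hair, smooth and shine"
target = ("sulphate-free co-wash for 4c coils, protective styles, "
          "twist-outs, moisture for natural coils and curls")

for term, count in distinctive_terms(target, baseline):
    print(term, count)
```

In practice, the baseline would be the client’s existing keyword list and the target would be scraped or exported copy from the shops being researched; the gap between the two is where the new, more inclusive keyword opportunities sit.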