When weighing the potential impacts of discrimination on society, sexism in digital ad campaigns is rarely the first thing that comes to mind. In many ways, digital ads have been at the tip of the spear of advertising democratization. While large investments can carry firms a long way, building interesting content that people engage with online can save money and reach audiences that traditional media cannot. However, an algorithm is only as good as its training data. Historical human biases, incomplete training data, and characteristics that interact with the algorithm’s code can lead to biased outcomes even with the best intentions. Tackling such a systemic problem can feel daunting; even the phrase ‘algorithm problem’ carries a default association with ‘unsolvable problem.’ But a firm understanding of how these algorithms work, and of the steps you can take to navigate their biases, can enhance your audience diversity and unlock additional market share.

Media buying algorithms are built on recommendation systems trained on the content consumers have engaged with. The advantage of this approach is that for cost-per-acquisition and cost-per-click campaigns, the system can home in on the users most likely to take the desired action within a designated attribution window (typically 1-7 days). All things being equal, this media buying method allows brands to cut waste and optimize toward users who generate revenue. At face value, any perceived inequitable results would seem to be merely the byproduct of a complex web of automated decision-making. However, this is not always the case.
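
To make that mechanism concrete, consider a minimal, hypothetical sketch of a cost-per-acquisition optimizer. The segment names, conversion rates, and impression costs below are illustrative assumptions, not any platform’s actual bidding logic:

```python
# Hypothetical sketch of a cost-per-acquisition optimizer. Segment names,
# conversion rates, and impression costs are illustrative assumptions.

segments = {
    # segment: (predicted conversion rate, cost per impression in $)
    "segment_a": (0.020, 0.012),
    "segment_b": (0.021, 0.018),  # converts slightly better, but costs more to reach
}

budget = 1000.0

def conversions_per_dollar(conv_rate, cost_per_impression):
    # The optimizer's only yardstick: expected conversions per dollar spent.
    return conv_rate / cost_per_impression

# A greedy bidder pours the whole budget into the most "efficient" segment.
best = max(segments, key=lambda s: conversions_per_dollar(*segments[s]))
print(f"All ${budget:.0f} flows to {best}")
# segment_a wins (0.020/0.012 ~ 1.67 vs. 0.021/0.018 ~ 1.17), so the
# pricier-to-reach group receives nothing despite converting better.
```

Even a small price difference is enough for an optimizer built purely on efficiency to zero out delivery to the more expensive group.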

In 2018, a study found that Facebook’s algorithm had displayed STEM career ads more frequently to young men than to young women. The root cause was that young women engage with brands more and are therefore priced higher in the ad auction. As a way to cut costs, the algorithm optimized away from those higher-cost users, who in this case were women. This case sheds light on how routine optimizations lead to biased results, even when those results do not align with a brand’s intended outcome. In this instance, there was no hard override the advertiser could apply to offset the results: due to employment regulations, businesses cannot run gender-targeted employment campaigns. However, brands can do a better job of controlling their inputs.

Brands have control over targeting through interests as well as the creative they leverage in campaigns. The Geena Davis Institute on Gender in Media found that male characters received 1.5x more screen time globally, even though female-led and gender-balanced content attracted 30% more views. These findings suggest that a more diversified approach to content creation can increase viewership and engagement among underserved segments. Content with high engagement can reduce ad serving costs by up to 50% on platforms like YouTube and Facebook. If a campaign’s content does not feature, or only briefly features, women and people of color, those individuals are unlikely to engage with it, and that reinforces the algorithm’s tendency to shift investment away from those users. In short, by creating content that underserved groups are more likely to engage with, investment can be organically shifted toward those groups, even within the algorithm’s cost-reduction parameters.
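
That feedback loop can be sketched in a few lines. Everything here, the cost formula, engagement rates, and group names, is an illustrative assumption rather than a platform benchmark:

```python
# Hypothetical feedback-loop sketch: engagement lowers serving costs, and
# budget follows the cheaper segments. The cost formula, engagement rates,
# and group names are illustrative assumptions, not platform benchmarks.

def serving_cost(base_cost, engagement_rate):
    # Assume cost falls as engagement rises (platforms reward relevance).
    return base_cost / (1.0 + 10.0 * engagement_rate)

def allocate(budget, engagement_by_segment, base_cost=0.02):
    # Spend flows inversely to cost: cheaper-to-serve segments get more of it.
    inverse_costs = {
        s: 1.0 / serving_cost(base_cost, e)
        for s, e in engagement_by_segment.items()
    }
    total = sum(inverse_costs.values())
    return {s: round(budget * w / total, 2) for s, w in inverse_costs.items()}

# Creative that barely features an underserved group yields low engagement there.
print(allocate(1000, {"group_x": 0.05, "group_y": 0.01}))
# More inclusive creative lifts group_y's engagement, and spend follows it.
print(allocate(1000, {"group_x": 0.05, "group_y": 0.04}))
```

No targeting override is needed in this toy model: lifting one group’s engagement alone pulls spend toward it, which is the point of leading with inclusive creative.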

Marketers can begin overcoming algorithmic bias by designing more inclusive creative. Next, it is important to regularly appraise and adjust prospecting strategies based on the results. Open market targeting, the practice of serving to the full audience of a social platform and leaving the decision of whom to serve entirely to an AI-powered bidder, has grown more popular over the years. However, most digital media platforms still allow brands to target via interests, and through audience tools it is possible to analyze those interest segments by demographic composition and psychographics. Forcing the platform to spend against these segments may lead to a more even distribution of investment.
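
As a starting point for that appraisal, a simple spend audit can surface skews worth investigating. The sketch below uses hypothetical segment names, demographic estimates, spend figures, and parity threshold:

```python
# Hypothetical audit sketch: compare each demographic's share of campaign
# spend to an assumed parity floor. Segment names, demographics, spend
# figures, and the threshold are illustrative, not real campaign data.

from collections import defaultdict

campaign_rows = [
    # (interest_segment, estimated_demographic, spend in $)
    ("tech_news", "men_18_34", 620.0),
    ("tech_news", "women_18_34", 180.0),
    ("stem_careers", "men_18_34", 150.0),
    ("stem_careers", "women_18_34", 50.0),
]

spend_by_demo = defaultdict(float)
for _, demo, spend in campaign_rows:
    spend_by_demo[demo] += spend

total = sum(spend_by_demo.values())
for demo, spend in spend_by_demo.items():
    share = spend / total
    flag = "  <-- investigate" if share < 0.35 else ""  # assumed parity floor
    print(f"{demo}: {share:.0%} of spend{flag}")
```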

Another, more controversial, step suggested in a 2020 note from Harvard is to add the collection and measurement of sensitive protected-class data to regular best practices. Algorithms do not inherently optimize away from minority groups and women; rather, their baseline parameters often yield unintended results. Without distinct measurement, algorithms cannot detect bias in ad serving on their own. While brands cannot collect this data on a precise one-to-one basis, regular consumer surveys can provide valuable insight into how a brand reaches, and is perceived by, diverse groups.
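
Here is a minimal sketch of what that survey-based measurement might look like, assuming hypothetical groups and recall figures:

```python
# Hypothetical survey-analysis sketch: compare aided ad recall across groups
# the brand intends to reach equally. Group names and figures are
# illustrative assumptions, not real survey results.

survey = {
    # group: (respondents who recalled the ad, respondents surveyed)
    "group_a": (420, 1000),
    "group_b": (180, 1000),
}

rates = {group: recalled / n for group, (recalled, n) in survey.items()}
for group, rate in rates.items():
    print(f"{group}: {rate:.0%} aided recall")

# A persistent recall gap between groups the brand intends to reach equally
# suggests the serving algorithm may be under-delivering to one of them,
# even when platform dashboards report healthy aggregate performance.
print(f"recall gap: {abs(rates['group_a'] - rates['group_b']):.0%}")
```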

Google made headlines last week by reorganizing its AI and diversity research under Dr. Marian Croak, a move that has been praised after the disruption in the field that began with the firing of Timnit Gebru in December. Facebook hired a new VP of Civil Rights to address the findings of a study last July that found bias in its algorithms. Amazon researchers made progress in 2020 with award-recognized work on adding fairness constraints to algorithms. Still, the advertising industry carries historical biases that continue to shape the content we produce and how we serve ads. The ANA reported that diversity numbers in advertising have scarcely shifted over the past few years of reporting. Advertisers ultimately control the inputs to these algorithms through the talent we hire, the audiences we focus on, the staffing of our teams, and the publishers we work with. And while fairness constraints within the algorithms can help mitigate these biases, if we as marketers do not emphasize changing the inputs, the outcomes will remain biased in the long run.

– Khari Motayne, Associate Director of Multicultural Strategy


To learn more about bias in algorithms and steps your team can take to mitigate their effects, please reach out to our team at [email protected].