The trouble with bikinis: How machine learning struggles with mature content

Amid persistent concerns about brand safety vulnerabilities, marketers have ratcheted up their demands for solutions that safeguard companies’ reputations. Unilever’s Keith Weed and P&G’s Mark Pritchard have been among the most outspoken corporate leaders on the issue, lending it visibility and a sense of urgency.

Among the most pressing questions in the brand safety conversation is how marketers can ensure that their companies’ content does not appear alongside graphic or otherwise inappropriate material. Technology driven by artificial intelligence (AI) will play a key role in tackling the problem – but skepticism abounds. After all, if machine learning systems struggle to distinguish a rifle from a helicopter, how can they draw more subtle distinctions – between a bikini and a bra, for instance?

While high-profile AI flops are no argument for throwing the baby out with the bathwater, they do serve as a reminder that technology is not a panacea in the effort to shore up brand safety. Without discerning humans at the helm of these systems – guiding their training and ensuring that the systems are capable of integrating new information – technology will fall short of its promise.

The problem

The marketing world is abuzz with talk of AI as a brand safety solution. But there is no silver-bullet AI solution to the issue; there is instead a diverse array of models – including commercial APIs from tech behemoths like Amazon and Google, as well as open-source alternatives. Each model has its own limitations.
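To make this concrete, here is a minimal sketch of what querying one of those commercial APIs can look like – in this case Amazon Rekognition’s image-moderation endpoint via boto3; Google Cloud Vision offers a comparable SafeSearch feature. The file name and confidence threshold are illustrative, and a production pipeline would of course add video-frame handling, batching, and error handling.

```python
# Minimal sketch: asking a commercial moderation API to label a single image.
# Assumes AWS credentials are configured; "frame.jpg" and MinConfidence=50 are
# illustrative values, not recommendations.
import boto3

rekognition = boto3.client("rekognition")

with open("frame.jpg", "rb") as f:
    response = rekognition.detect_moderation_labels(
        Image={"Bytes": f.read()},
        MinConfidence=50,
    )

# Each label comes back with a parent category and a confidence score.
for label in response["ModerationLabels"]:
    print(label["Name"], label.get("ParentName", ""), round(label["Confidence"], 1))
```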

Take a model for flagging video content deemed NSFW – Not Safe For Work. Some models consider only explicit pornographic imagery NSFW, leaving depictions of nudity or softer sexual content unflagged – even though many non-pornographic images are plainly still NSFW.

Another model, meanwhile, may adopt a more stringent approach, categorising any woman in a bikini as mature content. For most purposes, though, this is far too strict: for entertainment, travel, and lifestyle brands, that degree of restriction would be harmful, because it would exclude so much relevant poolside, beach, and fashion-related content.
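The practical upshot is that the same model output can be acceptable for one brand and unacceptable for another. Wherever the underlying model draws its lines, brands end up encoding their own policy on top of it – roughly like the sketch below, in which the category names, scores, and threshold are all hypothetical.

```python
# Hypothetical policy layer on top of a moderation model's category scores.
# Category names, scores, and the 0.7 threshold are illustrative only.

LIFESTYLE_BRAND_ALLOWS = {"swimwear"}   # travel/entertainment brands want beach content
CONSERVATIVE_BRAND_ALLOWS = set()       # a stricter brand allows none of these categories

def is_brand_safe(scores, allowed, threshold=0.7):
    """Block a placement if any sensitive category the brand has not
    explicitly allowed scores above the threshold."""
    return not any(
        score >= threshold and category not in allowed
        for category, score in scores.items()
    )

frame_scores = {"explicit": 0.02, "nudity": 0.05, "swimwear": 0.91}  # made-up model output

print(is_brand_safe(frame_scores, LIFESTYLE_BRAND_ALLOWS))     # True  - bikini content is fine here
print(is_brand_safe(frame_scores, CONSERVATIVE_BRAND_ALLOWS))  # False - same frame, stricter policy
```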

When selecting a brand safety solution, then, brands and content producers should verify that the technology underpinning each solution dovetails with their unique needs and specifications. Testing solutions to determine how they would handle specific cases a brand is likely to encounter will also offer a useful window into how well a particular model would work in the real world.
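One way to run that kind of test is a small evaluation harness over a brand’s own labelled examples – content it knows it wants blocked and content it knows it wants to keep. The sketch below is illustrative: classify() stands in for whichever candidate model or API is under review, and the false-positive rate captures the over-blocking problem described above.

```python
# Illustrative harness for comparing candidate moderation models on a brand's own test set.
# classify() and the labelled cases are placeholders for the solution being evaluated.
from typing import Callable, Iterable, Tuple

def evaluate(classify: Callable[[str], bool],
             cases: Iterable[Tuple[str, bool]]) -> Tuple[float, float]:
    """Return (false_positive_rate, false_negative_rate).
    Each case is (image_path, should_be_blocked) as judged by a human reviewer."""
    fp = fn = pos = neg = 0
    for path, should_block in cases:
        blocked = classify(path)
        if should_block:
            pos += 1
            fn += not blocked    # genuinely unsafe content that slipped through
        else:
            neg += 1
            fp += blocked        # acceptable content (e.g. beachwear) wrongly blocked
    return fp / max(neg, 1), fn / max(pos, 1)
```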

A new approach

Among the most vexing limitations of deep learning models is that they function as black boxes. Images go in, classifications come out, and users cannot see how the system arrived at its conclusions.

But cutting-edge academic researchers are at work on solutions that enable a system to explain which parts of an image led it to a particular conclusion. Say an algorithm blocks, as inappropriate, content that would actually be acceptable for a brand’s purposes. These nascent models can provide illuminating insight into what went wrong – and how to correct course.

How does this work in practice? While more rudimentary models might flag a bikini as mature content (or even nudity), the newer models can be trained to accurately assess the shape and color of the swimwear – mimicking human common sense to tell the difference between a bikini and something more risqué.
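Saliency techniques such as Grad-CAM are one concrete example of this kind of explanation: they produce a heatmap over the regions of an image that most influenced the classifier’s decision, so a human reviewer can see whether the model was actually looking at the garment or at something irrelevant. The sketch below shows the basic mechanics in PyTorch, with an off-the-shelf ResNet standing in for a real moderation model.

```python
# Grad-CAM-style sketch: which image regions drove the prediction?
# An ImageNet ResNet-18 stands in for a real moderation model; the random
# "image" tensor is a placeholder for a preprocessed video frame.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["feat"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    gradients["feat"] = grad_output[0].detach()

# Hook the last convolutional block, whose spatial activations we want to explain.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

image = torch.randn(1, 3, 224, 224)
scores = model(image)
target_class = scores.argmax().item()   # e.g. the "mature content" class in a real model
scores[0, target_class].backward()

# Weight each feature map by its average gradient, then collapse into one heatmap.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalised saliency map
```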

The implications for brand safety are encouraging: as more advanced AI solutions are developed, the risk of overcorrection diminishes – and developers’ ability to intervene can further mitigate flawed decisions by the machine. Time and again, the tech world has been reminded that biases and imperfections plague AI. That’s because AI systems are designed by biased and imperfect humans. The first step to correcting these problems is recognising that they exist – and then training AI models to assimilate new information and to mimic human discernment. A machine learning model won’t be able to tell the difference between a bra and a bikini unless a human who knows the difference teaches it how to differentiate the two.
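In practice, that teaching step usually means transfer learning on human-labelled examples. The sketch below assumes a hypothetical directory of images that reviewers have already sorted into the categories a brand cares about; the folder layout, model, and hyperparameters are all illustrative.

```python
# Illustrative fine-tuning loop: human-labelled folders teach the model the distinction.
# "labelled_images/bikini" and "labelled_images/lingerie" are a hypothetical layout
# for torchvision's ImageFolder; hyperparameters are placeholders.
import torch
from torch import nn, optim
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("labelled_images", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))  # new head for our labels

optimizer = optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:   # one pass; a real pipeline would train for several epochs
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```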

Brand safety in the age of AI

In a landmark obscenity case in 1964, U.S. Supreme Court Justice Potter Stewart famously wrote that while he could not provide a precise definition of hard-core pornography, “I know it when I see it.” Stewart’s is more or less the standard by which most people judge which content is mature and which is not – and when a brand safety breach has occurred and when one has not.

But in the digital age, it simply is not possible to have a human screener check every programmatic ad placement to ensure that it’s not adjacent to problematic content. Artificial intelligence must, therefore, step in – but AI systems are only as discerning as the humans who program and train them.
