Are humans the flaw in the AI machine?
With many predicting that 2018 will be a watershed year for artificial intelligence (AI) and machine learning (ML), we might expect that marketers will find their jobs becoming easier over the next 12 months. However, our plans to let technology take the strain could be at risk if we lose faith in these new techniques without first giving them enough time or attention.
There are plenty of people in advertising who question the impact that AI will have on the industry as a whole and dismiss discussion around the subject as simply hype. Many argue that the usefulness of AI and ML will be limited to a few niche areas of the industry. There does, however, seem to be a consensus that brand safety is one of the areas where these relatively new technologies can have a real impact.
We must be wary of the belief that AI and ML are magic bullets that will instantly fix programmatic’s brand safety issue. To enable us to get the most from AI and ML techniques, we must first ensure that we as marketers fully understand what our ultimate objectives are.
Of course, the aim is to eliminate the issue of ads being placed alongside inappropriate content, but beyond that it is to achieve campaign KPIs.
When it comes to brand safety, there is no universal definition that fits every brand. It’s a subjective area – while most brands will obviously want to avoid placements alongside extremist and illegal content, it’s more difficult to come up with hard and fast rules as to what kinds of content are controversial or inappropriate without approaching it on a brand-by-brand basis.
For example, a soft drinks company wouldn’t want to have its ads appear on a website offering information about diabetes; nor would a kitchen knife manufacturer want its advertising accompanying a news article about violent crime.
Many of the technological solutions currently used to ensure brand safety, such as keyword searching, blacklisting and whitelisting, often fall short by either letting some ads slip through the net or blocking placements that would have been perfectly safe – also known as a false positive. In the long term these outdated techniques cannot provide adequate brand safety protection, as they are less adaptable to individual brand needs and incapable of understanding nuances in meaning at the page level. Taking a more refined approach, using AI-based technologies such as semantic analysis and Natural Language Processing to gain a better understanding of emotion and context, is therefore essential.
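To see why word-level blocking produces false positives, consider a toy sketch (the blacklist and page snippets here are invented for illustration, not taken from any real tool): a naive filter blocks any page containing a blacklisted word, regardless of context, so a harmless cookery article mentioning a knife is blocked just as a violent crime report is.

```python
# Toy illustration of keyword blacklisting and its false positives.
# All keywords and page snippets are invented for this example.

BLACKLIST = {"knife", "violence", "attack"}

def keyword_block(page_text: str) -> bool:
    """Block the placement if any blacklisted keyword appears on the page."""
    words = set(page_text.lower().split())
    return bool(words & BLACKLIST)

crime_report = "Police investigate a knife attack in the city centre"
cookery_tips = "Sharpen your knife skills with our guide to chopping vegetables"

print(keyword_block(crime_report))  # True  - correctly blocked
print(keyword_block(cookery_tips))  # True  - a false positive: safe content blocked
```

The point is only that matching isolated words cannot distinguish the two pages; a contextual approach would need to look at what the page as a whole is about.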
Google, for example, has been using AI to ensure YouTube content is safe for its advertisers, saying that the technology allows it to identify and remove unwanted content quicker than a human could. More than 150,000 violent videos have been wiped from the site within just six months, according to Google – and this is just the start. This progress is obviously welcome after the multitude of brand safety scandals it suffered in 2017, and shows the potential benefits of scale that come from employing AI solutions.
Brands understand what content could threaten their reputation, but the task of implementing such tailored brand safety measures across every ad placement is where the complexity begins. A one-size-fits-all solution cannot effectively protect the brand while maximising relevant ad placements.
This is where AI technologies are key: they can process large amounts of data at the speed and scale needed to combat both inappropriate placements and missed opportunities. These techniques, however, will only ever be as good as the rules and variables that humans set for them.
While some brands’ reaction to the YouTube Brand Safety scare was to shift their focus from programmatic to native advertising, a better response would be to update their approach to this type of advertising. Blacklists, whitelists and keyword searches are not a long-term solution to brand safety – we need to build algorithms that understand nuance and context, and can continue to learn and refine themselves.
Solutions such as semantic analysis provide the ability to understand the full context of a page, reducing the risk of a brand missing out on a golden advertising opportunity due to a false positive, while ensuring inappropriate placements are avoided.
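As a purely illustrative sketch of the idea (not any vendor's implementation – the topic vocabularies and pages are invented), context can be approximated by scoring a page against whole topic word sets rather than blocking on single keywords, so a cookery page classifies as food-related even though it contains the word "knife":

```python
# Toy context scoring: classify a page by which topic vocabulary it
# overlaps with most, instead of blocking on individual keywords.
# Topic vocabularies and page snippets are invented for illustration.

TOPICS = {
    "crime":   {"police", "attack", "violence", "arrest", "crime"},
    "cookery": {"chopping", "vegetables", "recipe", "sharpen", "cooking"},
}

def dominant_topic(page_text: str) -> str:
    """Return the topic whose vocabulary overlaps most with the page."""
    words = set(page_text.lower().split())
    return max(TOPICS, key=lambda topic: len(words & TOPICS[topic]))

crime_report = "Police investigate a knife attack in the city centre"
cookery_tips = "Sharpen your knife skills with our guide to chopping vegetables"

print(dominant_topic(crime_report))  # crime
print(dominant_topic(cookery_tips))  # cookery
```

Production semantic analysis is far richer than this bag-of-words toy, but the contrast with single-keyword blocking is the same: the decision is driven by the page's overall context, not one word.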
Persisting with outdated methods that are not up to standard instead of embracing AI-based solutions – or abandoning programmatic altogether – holds back the advancement of AI as an effective tool for ensuring brand safety. The perception that semantic technologies are more complex for advertisers to use is also a misconception. If anything, they are much simpler to use, given there is no need for the manual inclusion or removal of keywords when optimising in the real world.
Although the industry’s perception of AI and ML is one of increased automation, we have to realise that these technologies are very much under our control. To advance programmatic, we need to ensure we fully understand brand safety within the context of an individual brand, rather than attempting to use broad brush strokes for the whole industry.
These learnings must then be applied to the AI and ML algorithms we use to make sure ad placements are appropriate for the brand. Otherwise, we will continue to hold back the development of AI, and in turn the entire industry as brand safety remains unachievable at the scale needed to succeed.