Artificial intelligence, data, and the legal debate: What marketers need to know now
I recently attended the ReWork Deep Learning conference in London, a fantastic event bringing world-leading academics, large multinationals and start-ups together to discuss the latest advances in this branch of artificial intelligence and how it can improve our lives. Although very few applications include AI at present, its use is predicted to increase exponentially over the next few years as the technology currently in development is released for general use.
While there were many fascinating talks about advances in healthcare and construction, two distinct themes relevant to marketers ran through almost all the talks I attended, and there is no doubt that both issues will need addressing soon.
The first issue is the moral and legal agency of AI applications. If your doctor misdiagnoses you, you have somewhere to go to complain and the potential for compensation. But what if your computer misdiagnoses you? Who is to blame? The person who sold you the machine, the person who sold the application, the engineers who wrote it, the individuals who labelled the data, or the people who created the data set?
It’s a difficult question to answer, and depending on the risk of acting on the prediction the AI gives, the answers can vary. Amazon suggesting a book in which I have no interest is a very different problem from a health app telling me not to worry about the chest pains I’m having. Marketers need to define the legal agency of the artificial intelligence they create and make that clear to all individuals at the point of use. I believe this is something we can debate and resolve before these questions become pressing - let’s decide as a community what power we are happy to devolve and the consequences of that, while these applications are still in development.
The second theme is data. We are all far more aware of our data and its potential uses these days. There was uproar earlier this year when the NHS shared the data of 1.6 million patients from three London hospitals with Google DeepMind to produce an application for monitoring kidney disease.
Here in the UK we are very protective over who has access to our personal information, and a lot of people were upset about the lack of an opt-out, particularly for marketing communications. Setting the ethical considerations of this example aside, the question remains: how can we build applications to predict early indicators of disease, or any other system for that matter, without real data? This is one of the biggest challenges in the industry right now, and it is being attacked on two fronts.
One approach is making it easier to create labelled data sets - taking copyright-free or anonymised images and manually labelling them is a time-consuming and expensive task, particularly if those images require expert knowledge to label. There is a lot of focus on creating artificial data sets, making the labelling of real data easier, and getting as much as possible out of existing data sets.
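As an illustration of that last idea - getting more out of existing labelled data - here is a minimal data-augmentation sketch, assuming images are simple grids of pixel values (a hypothetical stand-in for real image data). Each labelled example yields several transformed variants at no extra labelling cost:

```python
# A minimal sketch of data augmentation, assuming an "image" is a 2-D grid
# (list of pixel rows). The data and label below are purely illustrative.
def augment(image):
    """Return simple transformed copies of a 2-D pixel grid."""
    h_flip = [list(reversed(row)) for row in image]            # mirror left-right
    v_flip = list(reversed(image))                             # mirror top-bottom
    rot180 = [list(reversed(row)) for row in reversed(image)]  # rotate 180 degrees
    return [h_flip, v_flip, rot180]

# One hand-labelled example becomes four labelled examples (original + variants).
labelled = [([[0, 1], [2, 3]], "cat")]
augmented = [(variant, label) for img, label in labelled for variant in augment(img)]
```

Real systems apply many more transformations (crops, rotations, colour shifts), but the principle is the same: the label survives the transformation, so the labelling effort is multiplied for free.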
Another approach is to reduce the amount of labelled data required by using semi-supervised or unsupervised techniques. Here, the algorithms learn to categorise the data they are given without any real concept of what they are categorising, so all the programmer needs to do is label the end category as ‘cat’ or ‘banana’. These approaches are powerful, but they are not yet as precise as the supervised approach, which requires a lot of labelled data. While I’ve focused on images here, the same problems apply to video, sound and natural language analysis.
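To make the unsupervised step concrete, here is a toy sketch using a minimal k-means clustering implementation in pure Python (the data, cluster count and category names are all hypothetical). The algorithm groups the points with no idea what they represent; a human only labels each discovered group once at the end:

```python
import random

# Toy unlabelled data: two loose groups of 2-D points, standing in for
# feature vectors extracted from real images.
random.seed(0)
data = [(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(50)] + \
       [(random.gauss(5, 0.5), random.gauss(5, 0.5)) for _ in range(50)]

def kmeans(points, k, iters=20):
    """Minimal k-means: group points with no notion of what the groups mean."""
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p[0] - centroids[i][0]) ** 2
                                                + (p[1] - centroids[i][1]) ** 2)
            clusters[nearest].append(p)
        centroids = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
                     if c else centroids[i] for i, c in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = kmeans(data, k=2)

# Only now does a human step in: one label per discovered category,
# instead of one label per data point.
labels = {0: "cat", 1: "banana"}   # hypothetical names for the two clusters
for i, c in enumerate(clusters):
    print(f"cluster '{labels[i]}': {len(c)} examples")
```

The saving is in the last step: 100 points are categorised with two human decisions rather than 100 individual labels.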
Great advances are being made on both fronts, but we are always going to need some form of real data. As individuals, we need to make our own decisions about the data we make available and be aware of which sources of data could be used to create these types of applications.
In marketing, it seems like there are new techniques emerging constantly - yet many of these vanish as quickly as they appeared. The benefits of data-driven and truly personalised marketing are numerous, but marketers are now governed by more regulation than ever, particularly around the use of data.
Gone are the days when data protection and compliance for marketers were black and white - a customer was either opted in or out. In our recent work with a major multi-channel retailer, my team and I identified 24 different opt-in points with varied permission messaging and eight unique permission states, influenced by three separate ecommerce systems. At that level of complexity, knowing what the customer has opted into, via which channels, and at what point in time can start to look a bit grey. However, following data best practices can prevent issues and, if mistakes occur, enables quick and effective resolution.
As a first step, ensure you keep a record of the date, context and source of each new customer acquisition; as well as being good data management practice, it’s useful information for marketers when designing new customer engagement plans.
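One way such a record might be structured is sketched below. The schema and field names are hypothetical; the point is simply to keep an append-only history of consent events, so the current permission state for any customer and channel can always be derived, along with when and where it was captured:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One opt-in or opt-out event for a customer (hypothetical schema)."""
    customer_id: str
    channel: str     # e.g. "email", "sms", "post"
    opted_in: bool
    source: str      # which system or touchpoint captured it, e.g. "web_checkout"
    context: str     # the permission wording shown to the customer at the time
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Append-only log: the full history, not just the latest state.
consent_log: list[ConsentRecord] = []

def record_consent(rec: ConsentRecord) -> None:
    consent_log.append(rec)

def current_permission(customer_id: str, channel: str) -> bool:
    """Latest recorded event wins; default to opted-out if nothing is on file."""
    events = [r for r in consent_log
              if r.customer_id == customer_id and r.channel == channel]
    return max(events, key=lambda r: r.timestamp).opted_in if events else False
```

Keeping the history rather than overwriting a single flag means you can answer not just “is this customer opted in?” but “what wording did they agree to, where, and when?” - exactly the questions that arise when permissions are disputed.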
It’s also worth remembering that even where permission status is unclear but a new subscriber has purchased - for example, where a new customer has used a guest checkout - the law allows a reasonable amount of flexibility around follow-up messaging. As long as the tone is non-promotional and there’s an opportunity to unsubscribe immediately, welcome programmes can still be successful (although you should of course always check with your legal team before putting plans into action).
Most importantly, if an opt-out is received, ensure that record is updated as close to real time as possible. With complex multi-channel operations, a central marketing hub or single customer view is essential for effective permission management. The law allows an organisation plenty of time to process unsubscribes, but you can’t assume you have days available to process data syncs across disparate systems. Apart from the high incidence of deliverability issues when organisations are slow to respond, customers who continue to receive marketing messages after opting out are increasingly likely to voice their dissatisfaction via social media.
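A single customer view can be sketched as a central suppression store that every sending system checks at send time (a hypothetical design, not any particular product). An opt-out takes effect the moment it is recorded, with no batch sync to each channel system required:

```python
# Central suppression store consulted before any send (hypothetical design):
# updating one shared record immediately beats syncing copies to each channel system.
suppressed: set[tuple[str, str]] = set()   # (customer_id, channel) pairs

def process_opt_out(customer_id: str, channel: str) -> None:
    """Apply an unsubscribe the moment it arrives."""
    suppressed.add((customer_id, channel))

def may_send(customer_id: str, channel: str) -> bool:
    """Every sending system checks the hub at send time, not its own stale copy."""
    return (customer_id, channel) not in suppressed
```

The design choice worth noting is that the check happens at send time against the shared store: there is no window in which a disparate system can act on an out-of-date permission state.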