Good news, fake news: The media’s fight for encryption – and against misinformation
According to reports, senior figures in the Trump administration held a meeting last month to consider whether to pursue legislation outlawing end-to-end encryption on US-made messaging apps.
While the ultimate outcome of the meeting is yet to be seen, it brings to light a serious question that we all - governments, consumers and businesses - need to ask ourselves: what do we lose when private messaging is truly private?
On the one hand, end-to-end encryption means that private messaging conversations can't be subjected to government surveillance, data harvesting and the sort of targeted advertising that has become pervasive and a little creepy.
On the other hand, encrypted messaging makes it almost impossible for law enforcement, journalists and even the chat platforms themselves to combat hate speech, harassment, illicit activity and the dangerous spread of viral misinformation.
A recent Washington Post article explored what this dilemma means for Facebook, as it looks to position its business more squarely around private messaging while at the same time pledging to clean up its platform.
In Mark Zuckerberg’s own words: “When faced with a challenge around a trade-off between encryption and safety, I think people would want us to err a little bit more on the side of encryption.”
With these arguments around the encryption paradox not looking like they’ll end anytime soon, a growing number of organisations are tackling the challenge of viral misinformation on private messaging platforms head-on. And they’re doing it through the messaging apps themselves.
Journalism non-profit Meedan just launched a suite of tools for reporters to use during global elections, natural disasters and other time-sensitive events.
Called Check, Meedan’s platform uses Smooch to connect to WhatsApp’s Business API so that citizens can submit photos and news stories via WhatsApp for journalists to verify or debunk.
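In practice, an integration like this means a gateway forwards each inbound WhatsApp message to the platform as a webhook payload. Here's a hypothetical sketch of that hand-off, assuming a generic payload shape; the field names are illustrative, not Smooch's or WhatsApp's actual schema:

```python
# Hypothetical sketch: normalize an inbound submission forwarded by a
# messaging gateway (e.g. Smooch) into a record for a fact-checking
# queue. Payload field names are illustrative assumptions, not the
# real Smooch/WhatsApp Business API schema.

def handle_webhook(payload):
    """Extract the first message from a gateway webhook payload and
    return a normalized record: who sent it, whether it's an image or
    text, and the content to verify."""
    message = payload["messages"][0]
    return {
        "sender": message["author"],
        "kind": "image" if message.get("mediaUrl") else "text",
        "content": message.get("mediaUrl") or message.get("text", ""),
    }
```

A journalist-facing queue would then pick these records up for verification or debunking.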
Using a combination of computer vision, natural language processing and machine learning, Check automatically recognises and groups together identical and near-identical requests for image or text verification, meaning fact checkers don’t need to verify the same submission more than once.
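The core idea behind that deduplication step can be sketched in a few lines. This is a minimal illustration using Jaccard similarity over word trigrams to cluster near-identical text submissions; Check's actual pipeline uses computer vision and trained NLP models, and the threshold here is an arbitrary assumption:

```python
# Minimal sketch of grouping near-identical text submissions so each
# cluster only needs to be fact-checked once. Uses Jaccard similarity
# on word trigrams as a rough stand-in for Check's NLP/ML matching.

def shingles(text, n=3):
    """Lowercased word n-grams serving as a rough text fingerprint."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def jaccard(a, b):
    """Overlap between two shingle sets, from 0.0 to 1.0."""
    return len(a & b) / len(a | b) if a | b else 0.0

def group_submissions(submissions, threshold=0.6):
    """Greedily assign each submission to the first cluster whose
    representative it resembles; otherwise start a new cluster."""
    clusters = []  # list of (representative_shingles, [member texts])
    for text in submissions:
        sig = shingles(text)
        for rep, members in clusters:
            if jaccard(sig, rep) >= threshold:
                members.append(text)
                break
        else:
            clusters.append((sig, [text]))
    return [members for _, members in clusters]
```

With this kind of grouping, two users forwarding the same rumour with minor wording differences land in one cluster, so fact checkers answer it once.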
The integration was piloted during the Indian elections last spring. But with major elections in Australia, Argentina, Canada and the U.S. expected in the coming year, the Check platform has the potential to tackle the problem of viral misinformation on a global scale.
Meanwhile, investigative journalism startup Point is trying to fight messaging-based misinformation — and raise funds — by creating its own video game.
On their Kickstarter page, Point’s Jay McGregor explains that “Misinformer” will be a “text-based detective style mobile game” in which the player has to “crack a major misinformation-based conspiracy before an upcoming election.”
Essentially, it’s a fictionalised version of Meedan’s Check platform – putting the player in the role of “citizen journalist.”
If the game is effective, it could teach people to think twice before sharing unverified information in real-world messaging apps. As McGregor told Journalism.co.uk: “If there ever was a time to be better informed about misinformation, it’s now.”