Industry Contributor 10 Jun 2019 - 3 min read

Australians among least likely to trust information from social media

By Paul McIntyre - Executive Editor

Around three quarters of Australians don't trust information that comes from social media, according to the latest Warc data.

Key points

  • Findings based on YouGov-Cambridge Globalism Project
  • Of 23 countries surveyed, Australia's level of distrust was topped only by Great Britain, Sweden, France and Germany
  • Trust was greater than distrust in only five countries: Thailand, Saudi Arabia, India, Nigeria and Poland
  • Contrasts with the report's upward trajectory for mobile ad spending, driven by social media and video
  • Warc forecasts the Australian ad market will reach A$17.2bn in 2019, with internet taking 54.3% (A$9.343bn) and mobile taking 49.5% of internet dollars (A$4.622bn)

Fake news is taking its toll on consumer trust. And we have only seen the tip of the iceberg. Deep fakes are coming to a screen near you. In the meantime, same old (true) story: news media, particularly print, is paying the price, while social media spend continues to climb.

It's fair to question the accuracy of pollsters, given recent missteps. What's unquestionable is the rise of mobile, in terms of both dollars and time spent on devices. Warc's outlook cites predictions that 75% of all internet users will be mobile-only by 2025, underlining that the shift in focus towards mobile by global platforms and marketers in recent years has a lot more legs yet.

Given social media drives a big chunk of mobile internet traffic, lack of trust is an acknowledged challenge for most advertisers as well as platforms. And in five years' time, with AI-powered fake news at scale turbo-charging polarisation, there's a whole heap of challenge yet to rear its head.

Earlier this year, OpenAI created a text-generation tool so good at producing fake news that it withheld the full model for fear of misuse.

"The public at large will need to become more skeptical of text they find online, just as the “deep fakes” phenomenon calls for more skepticism about images," they wrote. "Today, malicious actors - some of which are political in nature - have already begun to target the shared online commons. We should consider how research into the generation of synthetic images, videos, audio, and text may further combine to unlock new as-yet-unanticipated capabilities for these actors, and should seek to create better technical and non-technical countermeasures."

The year is 2024 and nobody trusts anything they read, see, or hear. 

What do you think?
