Researchers at MIT's Media Lab found fake news stories are '70% more likely to be retweeted than the truth,' and fixing the problem will take more than banning Twitter 'bots.'
A new study finds that fake news spreads far faster than real news on social media.
By tracing how Twitter users responded to 126,000 stories between 2006 and 2017, researchers at MIT’s Media Lab found “it took the truth about six times as long as falsehood to reach 1,500 people.”
All told, “falsehoods were 70% more likely to be retweeted than the truth,” even though the accounts most responsible for circulating fake stories often had fewer followers, were less active on Twitter and were more often unverified.
“There’s two parts to the spread of false news.”
That’s Sinan Aral, the David Austin Professor of Management at MIT and one of the study’s authors.
“There is the attraction of attention, and then there’s the decision to share.”
Combing through their data, Aral and his colleagues found that false news stories had a novelty that often made them more intriguing than truthful ones.
Leaning on behavioral theory for an explanation, they reasoned that since new information helps humans better understand the world, we're more tempted by stories that make us feel like we're "in the know."
So maybe it’s not surprising that with a kernel of juicy – if fake – news in hand, so many people are then inclined to show off that new information and share it online.
Human instincts may help explain the novelty value of fake news, but surely those notorious Twitter bots make the problem worse. Right?
“I did think that bots were going to be able to explain at least some of the variance in the spread of false news compared to the truth.”
Whether they took bots out of the picture or kept them in, Aral and his colleagues said their results were basically unaffected.
“I am not saying that bots are not a problem. What I’m saying is that human beings have more responsibility than we may have thought, and that actually changes the way that we would think about solutions.”
Instead of just focusing on removing bots or better policing networks, Aral says social media companies would be wise to look at ways to change user behavior by giving users more information about the content they encounter. And some platforms seem to be heeding that advice.
Last month, YouTube began placing notices below videos uploaded by media outlets funded wholly or partially by governments. YouTube said it hoped the move would “equip users with additional information to help them better understand the sources of news content that they choose to watch.”
Read the full text of “The spread of true and false news online” in the journal Science.