DisFact #10: Interview with Twitter CEO; India's other fake news problem; Facebook scandal
Happy Sunday, readers!
I am Samarth Bansal. Welcome to DisFact, my weekly newsletter. Do let me know what I can do better, what you would like to see more of, or any other comments. If you like reading this newsletter, please spread the word. If you’ve been forwarded this email, sign up here. For previous issues, check here.
Three things today: all related to social media and politics.
First, I interviewed Twitter CEO Jack Dorsey on his first visit to India. I list my key takeaways.
Second, my story this week: how fake news and hate speech thrive on India’s regional language social media platforms like ShareChat and Helo.
Third, a note on the New York Times’ bombshell story on Facebook.
Jack Dorsey tells me how Twitter is dealing with challenges arising from human biases
“Maybe the goal to change someone’s mind is not the right goal”—Jack Dorsey, Twitter CEO
Extensive academic research shows how inherent psychological biases dictate how we process information, decide what’s true or not true and form opinions about socio-political issues. These biases influence our political discourse and feed into consequential policy decisions.
They are also at play when we interact with technology. Platforms like Facebook and Twitter have no option but to deal with them.
On Tuesday, a colleague and I interviewed Twitter CEO Jack Dorsey for the Hindustan Times. It was Dorsey’s first visit to India, and we got around 30 minutes to chat with him at Twitter’s Delhi office. Three things stood out for me, as they offered insight into how Twitter is dealing with challenges that stem, at least in part, from human biases.
1. On accusations of Twitter’s “left-leaning liberal bias”
What: Twitter has been accused of a “left-leaning liberal bias”—of suppressing the voices of right-leaning conservatives. A section of India’s right-wing Twitter has made the same charge, especially in the context of account suspensions.
What Dorsey said: “We need to operate with impartiality, not neutrality.”
The most important thing Twitter can do to assure that the platform is non-partisan is to be more transparent, Dorsey told us.
Jack Dorsey: “Every single person in the world has some sort of a bias. We are never going to remove that. But you can approach things with impartiality and we can be transparent about how that works and where mistakes were made.”
The platform was lacking transparency in the past, Dorsey admitted, but is “getting better” at it.
“We need to operate with impartiality, not neutrality,” he added. Being impartial means that the company’s actions and policy don’t inherently have a bias or favour one person over another for the wrong reasons.
As for account suspension, this means offering a detailed explanation of why an account was removed and what specific terms and conditions were violated, he said. Twitter has not done that yet.
2. On filter bubbles
What are filter bubbles: Personalised social-media feeds trap users in echo chambers, where they repeatedly hear the same voices confirming their own thoughts and beliefs.
In the words of internet activist Eli Pariser, who coined the term, filter bubbles “are algorithms that create a unique universe of information for each of us, which fundamentally alters the way we encounter ideas and information”.
Dorsey is clear about this: “Filter bubbles exist. And the current Twitter helps build them,” he said.
The solution he has in mind: let people follow topics, not just user accounts, which is how Twitter is structured right now.
Jack Dorsey: “If we enable people to follow topics, users actually get to see more perspective. Because if you only follow an account, you are likely hearing only one point of view. It’s very rare that people express one point and then express counterpoint to their point too.”
Does breaking the bubble help? The academic community is divided on whether breaking apart filter bubbles actually helps. Some argue that exposing people to alternative viewpoints may actually embolden their pre-existing beliefs.
Dorsey acknowledged the divide. His personal take: “I don’t really buy the research that says it [filter bubbles] emboldens [existing views],” he said.
Jack Dorsey: “Maybe the goal of changing someone’s mind is the wrong goal. Maybe the right goal is to be more informed about the issue and see it more comprehensively. And make a more informed choice. We need to show more perspectives. But people try to game them and that is dangerous.”
3. On fake news
Fake news is arguably Twitter’s biggest ongoing headache. Dorsey has spoken extensively about this issue over the past two years, especially after the revelations that Russian trolls and bots had infiltrated Twitter and Facebook with misinformation in the run-up to the 2016 American presidential election.
This was the challenge on which Dorsey had the fewest specifics to share.
Twitter’s focus, the CEO said, is to identify actors who intentionally mislead others into taking an action, and to stop the amplification of their content. There can’t be a perfect solution, he said.
Jack Dorsey: “We are not going to come up with a perfect solution because we will always be evolving and experimenting. We may build a solution that works today and then people will find a way to game it and wrap around it. We can’t rest on one solution. And make sure we are ten steps ahead of people who are trying to game the system to amplify misinformation.”
From its experience of recent elections in the US and Mexico, Twitter drew two key takeaways, he said.
First, providing more context for people to determine the credibility of the information they consume.
Second, having an effective real-time monitoring mechanism and maintaining streamlined contact with government agencies.
The bigger question: Is Twitter doing enough?
An October 2018 study by researchers at Stanford University and New York University brought bad news for the company: it found that “interactions with fake news stories fell sharply on Facebook while they continued to rise on Twitter.”
Such studies serve as independent audits of how platforms are dealing with the challenge and hold them accountable.
I asked Dorsey what he made of the findings. He downplayed them.
Jack Dorsey: “To set some context, researchers can only sample data: they don’t see the full set of tweets that we see. So all these are going to be biased towards particular conclusions.”
Plus, he emphasised that comparing Facebook and Twitter is not appropriate.
Jack Dorsey: “Twitter is a platform people use to get their news and see what’s happening in the world. Facebook announced not too long ago a shift in another direction which is around personal interactions, not news. It would certainly not be a leap to consider if you remove most news from the feed that all the numbers are going to go down.”
How fake news and hate speech thrive on regional language social media platforms
Beyond WhatsApp, Facebook and Twitter: This week, in our story for the Hindustan Times, Snigdha Poonam and I turn our attention to India’s regional language social media platforms — ShareChat and Helo — and show how they are littered with misinformation. We surveyed political content on the two platforms in three languages (Hindi, Kannada and Bengali) to put together this story. You can read the full story here.
A Hindustan Times investigation has revealed that regional language social media platforms such as ShareChat, with 50 million registered users, and Helo, with at least 5 million estimated registered users, are rife with misinformation and political propaganda. From blatant lies to partially true polarising content to violent hate speech, the platforms built for the “next billion” internet users face the same challenges that have put American social media giants such as Facebook, Twitter and WhatsApp under intense scrutiny. However, as India tightens control over misleading and defamatory content shared online ahead of state and national elections, the focus is almost entirely off vernacular networks.
More:
Along with blatant falsehood, HT found posts that consistently aimed to polarise the electorate along the Hindu-Muslim divide. Some were violent: “Head of every individual who will speak against my country or my God will be cut off,” reads one post in ShareChat Hindi; a “good morning” post with 22,000 views showed a blood-soaked child armed with a long knife and speaking of his “hot, Hindu blood”. Another post mentioned a fictitious exchange between a Muslim man and Narendra Modi that shows the prime minister threatening Muslims with such violence that “cemeteries would fall short”.
Asked why the platform hasn’t taken down this clearly dangerous piece of misinformation, ShareChat’s head of public policy, Berges Malu, said “This is not an incitement of violence. An incitement of violence is saying ‘there is something happening, go out and kill them’.” Malu added that “when you have politicians going on TV and saying these things every night, I can’t start limiting people’s freedom of speech because I don’t agree with their view… We can’t take a call on what is hateful or not.”
The big picture: Experts say the presence of such content on social media platforms poses an election risk. This is an ecosystem challenge and cuts across mediums (print, TV, online) and platforms (WhatsApp, Facebook, Twitter, ShareChat, Helo etc). We still do not have a clear understanding of how, and at what scale, misinformation affects electoral results. But that does not mean it’s not a concern.
“Delay, Deny and Deflect: How Facebook’s Leaders Fought Through Crisis”
“We were slow to recognise the problem”: Facebook executives have comfortably used this sentence in public statements when asked about the social network’s multiple problems: Russian interference, misinformation, user privacy, security and so on.
Now, a bombshell report published in the New York Times this week tells us why the company was “slow”.
Five NYT reporters spent six months interviewing more than 50 sources to show how Facebook's top executives responded to scandals by delaying information, obfuscating problems and deflecting blame.
The Times story suggests that Facebook knew about Russian interference long before it publicly admitted it did. The company launched a lobbying campaign — overseen by COO Sheryl Sandberg — to combat critics and divert anger toward rival technology companies like Google and Apple.
The story is arguably the biggest blow to the company’s reputation since the Cambridge Analytica scandal, which was revealed in March this year.
From the intro:
When Facebook users learned last spring that the company had compromised their privacy in its rush to expand, allowing access to the personal information of tens of millions of people to a political data firm linked to President Trump, Facebook sought to deflect blame and mask the extent of the problem.
And when that failed — as the company’s stock price plummeted and it faced a consumer backlash — Facebook went on the attack.
While Mr. Zuckerberg has conducted a public apology tour in the last year, Ms. Sandberg has overseen an aggressive lobbying campaign to combat Facebook’s critics, shift public anger toward rival companies and ward off damaging regulation. Facebook employed a Republican opposition-research firm to discredit activist protesters, in part by linking them to the liberal financier George Soros. It also tapped its business relationships, lobbying a Jewish civil rights group to cast some criticism of the company as anti-Semitic.
In Washington, allies of Facebook, including Senator Chuck Schumer, the Democratic Senate leader, intervened on its behalf. And Ms. Sandberg wooed or cajoled hostile lawmakers, while trying to dispel Facebook’s reputation as a bastion of Bay Area liberalism.
This account of how Mr. Zuckerberg and Ms. Sandberg navigated Facebook’s cascading crises, much of which has not been previously reported, is based on interviews with more than 50 people. They include current and former Facebook executives and other employees, lawmakers and government officials, lobbyists and congressional staff members.
Links:
I recommend you read the full story here.
Here is the TLDR version: 6 Takeaways From The Times’s Investigation.
You can listen to this episode on The Daily podcast: What Facebook knew and tried to hide.
Facebook has offered a rebuttal, which you can read here.
If you have not been following the Facebook story and want to know more, check out the recent documentary “The Facebook Dilemma” by PBS FRONTLINE.
The promise of Facebook was to create a more open and connected world. But from the company’s failure to protect millions of users’ data, to the proliferation of “fake news” and disinformation, mounting crises have raised the question: Is Facebook more harmful than helpful? On Monday, Oct. 29, and Tuesday, Oct. 30, 2018, FRONTLINE presents The Facebook Dilemma. This major, two-night event investigates a series of warnings to Facebook as the company grew from Mark Zuckerberg’s Harvard dorm room to a global empire. With dozens of original interviews and rare footage, The Facebook Dilemma examines the powerful social media platform’s impact on privacy and democracy in the U.S. and around the world.
Both parts are on YouTube: click for Part-1 here and Part-2 here.
Talk to me
Comments? Feedback? Suggestions? Write to me at samarthbansal42@gmail.com or hit reply to this email. And if you find this helpful, please spread the word. Thank you!