It was a rough 2017 for Facebook (despite spending record sums on lobbyists), and 2018 isn’t getting off to a great start. At Davos last month, Salesforce CEO Marc Benioff made his views on how to regulate Facebook plain:
I think that you do it exactly the same way that you regulated the cigarette industry. Here’s a product: cigarettes. They’re addictive, they’re not good for you … there’s a lot of parallels … technology has addictive qualities that we have to address … product designers are working to make those products more addictive and we need to rein that back.
As did George Soros:
[Social media companies] deliberately engineer addiction into the services they provide. This can be very harmful, particularly for adolescents …
The internet monopolies have neither the will nor the inclination to protect society against the consequences of their actions. That turns them into a menace and it falls to the regulatory authorities to protect society against them. In the US, the regulators are not strong enough to stand up against their political influence. The European Union is better situated because it doesn’t have any platform giants of its own.
The connection between tech usage and health risks is not new, but has recently gathered steam. In November, former Facebook executive Chamath Palihapitiya and founding president Sean Parker both characterised the network as a detriment to society. In the words of the latter, “God only knows what it’s doing to our children’s brains”. Earlier this week, former employees of both Google and Facebook launched a lobbying campaign to combat tech addiction among children.
If it was once difficult to separate genuine public distaste from the occasionally frenzied commentary, the damage is now clear. According to the 2018 Edelman Trust Barometer, trust in social media and search engines has fallen significantly: of the 28 countries surveyed, only Sweden and Ireland recorded lower levels of trust than Australia. Conversely, trust in traditional and online media has risen since 2017, both globally and in Australia.
While faith in platforms has declined, attitudes towards technology and telecommunications companies themselves have remained broadly positive over the past four years, according to the Edelman report. But, as a Verge survey of Americans showed in October 2017, goodwill towards tech giants is not evenly spread. In comparison to Apple, Microsoft, Google, and Amazon, a greater share of respondents say Facebook and Twitter have a negative impact on society and that they dislike using their services.
Perhaps with this dynamic in mind, in January Facebook CEO Mark Zuckerberg announced that the company would begin prioritising content shared by friends, family, and groups over that shared by pages. Although it’s not quite the same as last year’s test that saw the removal of all organic page content from news feeds in six countries, the changes do indicate a desire to shed some of the responsibilities that accompany news publishing. Almost half of all American adults get news from the platform – as Facebook tightens the spigot, whether these users seek out additional sources of news or just end up consuming less of it remains to be seen.
Of course, there’s nothing to prevent friends and family from spouting conspiracy theories. But perhaps users are better placed to assess the veracity of the sources they’re related to – better the fake news you know than the fake news you don’t. Or perhaps the changes will only reinforce existing echo chambers, as users increasingly encounter the same type of news and views.
These changes were foreshadowed in December by a blog post in which the company acknowledged the mental health risks posed by using Facebook, while arguing that the problem comes down to whether users spend time actively interacting with other users or passively consuming content. After spending US$50 million in 2016 promoting live videos among publishers, Facebook has now adjusted its algorithm to show fewer viral clips; as a result, total Facebook usage has dropped by 50 million hours a day. The changes are driven by the idea that the future of the company lies in “meaningful connections between people rather than passive consumption of content”.
It’s difficult, however, to imagine Facebook countenancing less time spent on the site per user as the way of the future. Facebook is a public company, and there are simply not enough new users left on the planet for it to keep posting growth while per-user engagement falls, except perhaps by making its user data more appealing to advertisers. Last February, Zuckerberg outlined Facebook’s future as a supranational social infrastructure underpinning a global community of communities, and there is no reason to suspect that goal has changed. Deeper engagement is still part of the plan.
But no matter how communitarian in ethos, a ubiquitous service is an inescapable service. There may be ways for Facebook to address or avoid the problems of fake news, hate speech, radicalisation, and foreign interference in elections. But if a popular narrative emerges equating deeper engagement with unhealthy tech addiction, Facebook will not be able to avoid confronting it.
As Soros notes, a concerted US regulatory push is doubtful. The current state of social media both assisted Donald Trump’s presidential victory and continues to sustain his core support. But without some truly substantive self-reform on the part of tech companies, this narrative of social media as a health risk will only fester. If and when the US political tide turns, Australian policymakers should know where they stand.