- Facebook was in the surveillance business?
- It goes after children?
- There’s evidence that Facebook was involved in US politics
- And it may even have swayed the Brexit vote
- Regulation and oversight are certain
Facebook began as a way for students at Harvard to socialise and keep in touch. It soon spread, and rapidly became the social media behemoth we all know, a platform with a truly global reach that connects friends, family, and co-workers. As in any business, there’ve been hiccups along the way. But the last two years have brought a steady stream of self-inflicted wounds. And Facebook is now reeling under the impact of these crises.
This isn’t just a question for Zuckerberg’s company – as Facebook goes, so goes Silicon Valley. And as calls mount for real regulation and oversight, these scandals are changing the future of digital privacy, electioneering, and ethics.
Facebook was in the surveillance business?
Perhaps the first sign of things to come was the company’s secret relationship with the digital surveillance company Geofeedia. Geofeedia’s business model is simple: it connects social media posts to locations, helping law enforcement find and track people in real time. A report published by the American Civil Liberties Union (ACLU) found that
“Twitter, Facebook, and Instagram provided user data access to Geofeedia, a developer of a social media monitoring product that we have seen marketed to law enforcement as a tool to monitor activists and protesters.” According to the ACLU, confidential emails, obtained legally through freedom-of-information requests, revealed that “Facebook had provided Geofeedia with access to a data feed called the Topic Feed API, which is supposed to be a tool for media companies and brand purposes, and which allowed Geofeedia to obtain a ranked feed of public posts from Facebook that mention a specific topic, including hashtags, events, or specific places. Facebook terminated this access on September 19, 2016.”
As Kalev Leetaru reports for Forbes, “Facebook predictably denied knowledge of how Geofeedia was using its data, yet when the ACLU discovered an email referencing a confidential agreement between Geofeedia and Facebook to provide the company with additional Facebook data, Facebook declined to comment further,” eventually ending the secret relationship. It’s more than possible that Facebook knew exactly what it was doing, which was why the arrangement was confidential in the first place.
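To make concrete what this kind of topic-based access looks like, here is a minimal sketch of a client polling a topic feed. The endpoint name, parameters, and response fields below are our own illustration, not Facebook’s documented API: the Topic Feed API was a private, partner-only interface whose exact contract was never published, and the access described above has since been revoked.

```python
# Hypothetical sketch of a topic-feed-style query, for illustration only.
# The endpoint, parameters, and response fields are assumptions; the real
# Topic Feed API was a private partner interface with no public contract.
import requests

ACCESS_TOKEN = "PARTNER_ACCESS_TOKEN"          # placeholder credential
BASE_URL = "https://graph.facebook.com/v2.7"   # assumed Graph API host/version


def fetch_topic_feed(topic: str, limit: int = 25) -> list:
    """Return a ranked list of public posts mentioning `topic` (hypothetical)."""
    response = requests.get(
        f"{BASE_URL}/topic_feed",              # assumed endpoint name
        params={
            "q": topic,                        # a hashtag, event, or place
            "limit": limit,
            "access_token": ACCESS_TOKEN,
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("data", [])


# A monitoring tool could poll such a feed and pair each post with location
# data to track activity around, say, a protest in near real time.
if __name__ == "__main__":
    for post in fetch_topic_feed("#protest"):
        print(post.get("id"), post.get("message", "")[:80])
```

Even a sketch this simple shows why a ranked, topic-filtered feed of public posts was so attractive to a surveillance vendor once it was combined with location data.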
It goes after children?
Facebook’s mis-steps – if indeed that’s a strong enough word – have been coming fast and furious. Consider, for instance, its grudging admission that it had been profiling Australian teens to identify their most vulnerable moments and sell them things. Leetaru notes that in 2017, it was discovered – not revealed, but discovered – that the social media platform “had been conducting secret research on more than 6.4 million Australian youth as young as 14 years old to determine when they were at their most vulnerable, experiencing feelings of being worthless or a failure in order to better target them for advertisers. When details of the secret research leaked to the press, the company’s response was that it had not followed its ethical review process, but that the failure to adhere to its ethical review process was merely a minor ‘oversight.’” That doesn’t wash, does it?
And that’s not the last time Facebook targeted children. Last December brought us Messenger Kids, a messaging app aimed at children between 6 and 12. As Casey Newton writes for The Verge, when it was unveiled, “It did not say how it would monetize the app, other than to say it would not include ads. It did not describe whether parents would have access to any of the data being gathered about their families, or how that data might be used.” The goal, though undisclosed, is probably to turn the profiles parents create for their children into full user accounts, feeding kids into the Facebook system. As Newton reasons, “should Facebook amass hundreds of millions of underage users, the company will have every incentive to offer one-click exporting of these accounts to real ones on the day the child turns 13”. Within a few months of launch, more than 100 children’s health experts had called for the app to be shut down. In light of the company’s blatant exploitation of Australian teens, it’s all too clear what it intends for these kids.
There’s evidence that Facebook was involved in US politics
That’s all pretty bad, but things get even worse. During the 2016 US presidential election, Facebook took money for ads bought by Russian agents seeking to influence voters. Only last September, a year later, did Facebook reveal, in a company blog post, that it may have unwittingly played into Russian hands. Nicholas Thompson and Fred Vogelstein write for Wired that, “as far as the company could tell, the Russians had paid Facebook $100,000 for roughly 3,000 ads aimed at influencing American politics around the time of the 2016 election. Every sentence in the post seemed to downplay the substance of these new revelations: The number of ads was small, the expense was small. And Facebook wasn’t going to release them. The public wouldn’t know what they looked like or what they were really aimed at doing.” That, too, just doesn’t pass the smell test, and it continues a clear pattern of denial, minimisation, and secrecy.
An influential security researcher, Renée DiResta, realised that this wasn’t just a mistake: “That was when it went from incompetence to malice,” Wired reports. More revelations in October gave credence to that claim. Six Russian troll accounts had been used to spread messages of racial hatred and partisan animosity that were shared more than 340 million times, all, apparently, without Facebook noticing or taking any action. Jonathan Albright, a journalist and researcher at the Tow Center for Digital Journalism, found these messages still frozen on a third-party analytics platform. As Thompson and Vogelstein describe them, “There were the posts pushing for Texas secession and playing on racial antipathy. And then there were political posts, like one that referred to Clinton as ‘that murderous anti-American traitor Killary.’ Right before the election, the Blacktivist account urged its supporters to stay away from Clinton and instead vote for Jill Stein.” The intent couldn’t be clearer, and the researchers found it hard to believe Facebook really didn’t get it. Dianne Feinstein, a US senator, told the company, “You’ve created these platforms, and now they’re being misused, and you have to be the ones to do something about it. Or we will.”
And it may even have swayed the Brexit vote
You’ve probably heard of the latest scandal surrounding Cambridge Analytica. In light of Facebook’s behaviour in Australia, Leetaru cautioned that the idea that “two Facebook researchers could conduct research on such an extraordinarily sensitive topic on such a vulnerable population and tout that research to advertisers without ever having their work ethically reviewed and, according to the company, without its formal knowledge and approval, suggests Facebook has woefully inadequate central controls over the use of its users’ data by its own researchers”. Something strikingly similar, on a far larger scale, appears to have happened with Cambridge Analytica.
Cambridge Analytica was employed by the Trump and Cruz campaigns in the 2016 election to use Facebook account data to target political ads at receptive voters. To do this, according to the Guardian’s Patrick Greenfield, it used data “harvested from more than 50 million Facebook profiles without permission to build a system … Employees of Cambridge Analytica, including the suspended CEO Alexander Nix, were also filmed boasting of using manufactured sex scandals, fake news and dirty tricks to swing elections around the world.” Those elections may well have included the Brexit vote in the UK, a suspicion Britain’s Electoral Commission is currently investigating.
That’s more than alarming. What’s even more damning is that Facebook apparently knew about this data breach in 2015 and did nothing whatsoever about it until this month. And if global elections feel too remote to worry about, consider that a former Facebook manager believes “that hundreds of millions of users are likely to have had their private information used by private companies in the same way”. He quit Facebook in 2012 out of frustration at its lack of action on this front, and he made it clear even then that there was a huge problem. As he told Paul Lewis at the Guardian, “My concerns were that all of the data that left Facebook servers to developers could not be monitored by Facebook, so we had no idea what developers were doing with the data.” We still don’t, and at this point very few people are willing to take Zuckerberg at his word.
Regulation and oversight are certain
What does this mean going forward? We think Senator Feinstein’s threat will come to pass. At the very least, we anticipate stern regulation and legal protections like the EU’s General Data Protection Regulation, which defines personal data as “any information relating to an individual, whether it relates to his or her private, professional or public life. It can be anything from a name, a home address, a photo, an email address, bank details, posts on social networking websites, medical information, or a computer’s IP address.” In practice, this law puts control of data back in the hands of the individual it describes, making this kind of predatory behaviour illegal. And we expect greater oversight of the digital domain, too, as regulators in the US take a hard look at Silicon Valley. Facebook’s own future is in doubt as well: the company has been shedding market value ever since this scandal hit the news.
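To give a sense of how broad that definition is, here is a minimal sketch, using an ordinary user record of our own invention, in which every single field counts as personal data under the wording quoted above.

```python
# Illustrative only: a made-up user record in which every field falls under
# the GDPR's definition of personal data quoted above.
from dataclasses import dataclass, fields


@dataclass
class UserRecord:
    name: str              # "a name"
    home_address: str      # "a home address"
    photo_url: str         # "a photo"
    email: str             # "an email address"
    bank_details: str      # "bank details"
    social_posts: list     # "posts on social networking websites"
    medical_info: str      # "medical information"
    ip_address: str        # "a computer's IP address"


# Under the GDPR, processing any of these fields requires a lawful basis,
# such as the explicit consent of the person the record describes.
PERSONAL_DATA_FIELDS = [f.name for f in fields(UserRecord)]
print(PERSONAL_DATA_FIELDS)
```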
We’ll be watching, and you should, too.