Technology companies like to talk about the huge, benign changes their products will bring about. Take Facebook, with its mission to “give people the power to build community and bring the world closer together”. Or Google, which wants to “organise the world’s information and make it universally accessible and useful”. Microsoft hopes to “empower every person and every organisation on the planet to achieve more”. Even Snapchat, which opens its corporate bio with the humble claim that “Snap Inc is a camera company”, can’t help but note: “We contribute to human progress by empowering people to express themselves, live in the moment, learn about the world, and have fun together.”

Such statements demonstrate the scale of ambition that is the norm among technology’s largest companies. It is rare, however, to find much time or effort dedicated to the unplanned consequences of world-beating new technologies.

There are exceptions. When Google acquired the London-based AI company DeepMind in 2014, one of the founders’ requirements was that the company set up an ethics board, to ensure that any future use of the technology was governed by strict moral principles. A year later, Elon Musk spearheaded the creation of AI research group OpenAI, arguing for a need to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return”.

But sometimes, huge upheavals arise from products and features that were not intended to alter the course of world events. Take Facebook’s decision, in 2017, to test whether putting news stories in a separate section from the – now ironically named – news feed would increase user engagement. In doing so, the company may have critically injured the independent media ecosystems of some of the developing democracies, such as Cambodia, in which it tested the feature.

But that pales in comparison to the accusation that Facebook may inadvertently have enabled the ethnic cleansing of Myanmar’s Rohingya minority. The platform saw the viral spread of “fake news” and hyper-partisan communities, but without a stable civil society to provide a countervailing force, the country exploded into violence.

Now, Facebook is trying to avoid a repetition of that sort of event, and to get out ahead of negative news stories like the Cambridge Analytica scandal, with the creation of an “Investigative Operations Team” (IOT) to spot major abuses of its platform before they happen. A team of advertising staff, researchers and former spies is working with the company “to look forward and figure out what’s coming around the corner”, Facebook’s business integrity director Lynda Talgo told BuzzFeed two weeks ago.

But you don’t need to sit inside Facebook’s cavernous Menlo Park office to get a sense of the fires the company will have to fight in the near future – or the ticking time bombs the rest of the industry is sitting on, either. So I assembled my own Investigative Operations Team, from people who have been watching the industry from afar, and asked them what warnings they would have for Silicon Valley.

Security researcher Inti De Ceukelaire has a record of spotting problems that Facebook missed. Last month, he urged the company to fix a data leak that saw the personal information of 120 million users of a popular quiz app (nametests.com) exposed to attackers. He says such issues remain common on the site – and while the company is rushing to clamp down on bad actors, it is too blasé about its “select partners”.

“Facebook never temporarily blocked nametests.com until the issue was fixed, which took about two months,” he said. But “when I launched a privacy awareness April Fools’ gag, it got blocked within a couple of hours, and later all domains associated with me got banned from Facebook.” Similarly, he warns that select partners “have special data access privileges [that] help establish Facebook’s monopoly even more: it’s not up to them to decide who’s getting data access and who’s not”.

Thomas Husson, a principal analyst at Forrester, cast the net wider than just security issues, arguing that even with all the attention the sector is giving to AI, it still poses risks. “The use of artificial intelligence and the ability to automate processes will raise many more ethical issues. Digital platforms like Facebook or Google must ensure that they avoid harmful bias and discrimination. AI-powered technologies like facial recognition or biometrics will further increase consumers’ privacy concerns.”

Concern over facial recognition is shared by Privacy International’s Frederike Kaltheuner, who warns that when companies do worry about the ramifications of the tech, they tend to think too small. “It is a problem that systems reflect the bias in data and societies. Ubiquitous facial recognition will lead to mass street-level surveillance that will disproportionately harm people who already face daily discrimination. Civil society and others must continue to fight mass facial-recognition systems being deployed within CCTV cameras.”

For Kaltheuner, the elephant in the room is the sector’s business model. “We have a problem when data exploitation is the business model. We’re choosing our words carefully here – it’s not about the fact that products and services need data to function. The problem is when products and services become the pretext for unprecedented data mining.”

The biggest risk for the sector, however, is probably the very problem that has sparked Facebook’s current bout of introspection: trust. “You can’t trust a company’s PR team, especially if that company failed to protect the data of its millions of users,” De Ceukelaire argues. “We are making steps in the right direction, but it’s well overdue. It needs to go faster, be more transparent and with less PR blabber. This doesn’t only apply for Facebook, it applies to every major tech company out there.”

In more measured tones, Forrester’s Husson makes much the same point. “Given Facebook or Google are digital platforms connecting consumers and brands, building customer trust is the ultimate differentiator,” he says. “This is going to be a growing challenge – not just because of more complex legal requirements like GDPR [General Data Protection Regulation] – but simply because consumers will increasingly demand that brands act on values.”

Trust is a particularly important resource, because it’s also lacking in Facebook and Google’s relationship with their real customers: the advertisers who account for the vast bulk of their revenue. Facebook has repeatedly had to apologise for inflating metrics that it reports back to advertisers, even as Google’s YouTube has faced attacks from advertisers over brand safety issues (such as, notoriously, running paid-for advertising over Islamic State propaganda). “In this regard, Amazon is definitely ahead of Google or Facebook,” Husson notes.

An investigations team is obviously a good way to win back trust. Perhaps a suspiciously good way. Aral Balkan, privacy campaigner and cyborg rights activist, refuses to give credence to the “damage limitation and PR exercise” that Facebook is running.

“What we have here is a factory farm for human beings, that has been publicly humiliated for how it treats its livestock, running internal audits to avoid future PR disasters,” Balkan says. Asked what he would highlight if he found himself on a similar team, he demurs: “I wouldn’t be on one of those teams to begin with as I have no desire to help a surveillance capitalist improve its public relations. I have every intention, on the other hand, of ensuring that its abuses are effectively regulated and in helping create ethical alternatives to it.”

The real measure of the team won’t be in the problems it spots, however, but in the power it has to alter the company’s plans to avoid those minefields – even if ploughing straight on may be the most profitable course of action. If the IOT does its job well, we’ll never hear about it again. For Facebook, that may be the best outcome of all.

Academic initiatives to study the unintended consequences of technology

The Centre for the Study of Existential Risk

A research centre within Crassh (Centre for Research in the Arts, Social Sciences and Humanities) at Cambridge University, dedicated to “the study and mitigation of risks that could lead to human extinction or civilisational collapse”.

Founded in 2012 by Martin Rees, the astronomer royal; Huw Price, the Bertrand Russell professor of philosophy; and Jaan Tallinn, co-founder of Skype.

Future of Humanity Institute

Founded in 2005, Oxford University’s research institute aims “to bring the tools of mathematics, philosophy, social sciences and science to bear on big-picture questions about humanity and its prospects”.

Its founder and director, the Swedish philosopher Nick Bostrom, also leads the Strategic Artificial Intelligence Research Centre and is the author of the New York Times bestseller Superintelligence: Paths, Dangers, Strategies.

Berkeley Existential Risk Initiative

Beri’s mission is “to improve human civilisation’s long-term prospects for survival and flourishing”. Established in Berkeley, California, it fosters what it calls the “x-risk ecosystem” – a network of thinktanks, non-profits, individual researchers and philanthropists.