
Editor’s Note: Mark Surman is the executive director of the Mozilla Foundation, a global community devoted to keeping the internet open and free. The views expressed in this commentary are his own. View more opinion on CNN.

CNN  — 

Artificial intelligence (AI) has long occupied an outsized role in our collective imagination, in everything from pulp science fiction novels to James Cameron blockbusters. When AI is the antagonist, it is corporeal and impossible to overlook, like the Terminator. Even in the real world, discussions about rogue technology tend to focus on the overt and dramatic, such as Elon Musk’s exhortations on Twitter that the dangers of AI rival those of nuclear weapons.

Perhaps humankind is moving toward an oppressive artificial superintelligence. In the meantime, artificial intelligence is already woven into our everyday lives. It provides us with things we love and need, from productivity advice to movie recommendations. Yet, when we don’t carefully consider its impact on our democracies, our justice systems and our well-being, we open ourselves up to real risks.

The AI of today is invisible to most of us, yet ubiquitous. One example we interact with frequently is the recommendation engine – the code suggesting the next video on YouTube or the next post on Facebook. These algorithms pull together vast amounts of our personal data to learn about us and curate our experience online. In this case, AI is simply your personal data mixed with the data of people with similar interests – and then pointed back at you.
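
For the technically curious, that core idea can be sketched in a few lines of code. The example below is a deliberately simplified illustration of user-based collaborative filtering – not any platform’s actual system – and the watch-history matrix in it is invented:

```python
# A minimal sketch, assuming a toy watch-history matrix (all values made up):
# rows are users, columns are videos, 1.0 means the user watched that video.
import numpy as np

history = np.array([
    [1.0, 1.0, 0.0, 0.0],  # you
    [1.0, 1.0, 1.0, 0.0],  # a user with similar tastes
    [0.0, 0.0, 1.0, 1.0],  # a user with different tastes
])

def recommend(user: int, history: np.ndarray) -> int:
    """Score unseen videos by the habits of users similar to `user`."""
    # Cosine similarity between this user and every other user.
    norms = np.linalg.norm(history, axis=1) * np.linalg.norm(history[user])
    similarity = history @ history[user] / np.maximum(norms, 1e-9)
    similarity[user] = 0.0                # don't count yourself
    scores = similarity @ history         # weight videos by similar users
    scores[history[user] > 0] = -np.inf   # skip videos already watched
    return int(np.argmax(scores))

print(recommend(0, history))  # -> 2: the video your lookalike watched next
```

Real engines operate at vastly larger scale, with far richer signals, but the principle is the same: people who watched what you watched shape what you see next.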

The result can be serendipitous and delightful. It can also be dangerous. Last year, Silicon Valley moguls opened up to New York Magazine about how today’s social media is designed to addict users. Using our data to manipulate us into staying on a site may or may not be pernicious in its own right.

But even if you don’t worry about it for yourself, there is growing evidence that these systems tend to radicalize and polarize. Last year, University of North Carolina at Chapel Hill researcher Zeynep Tufekci dubbed YouTube “the Great Radicalizer”: view one anti-vaccination video, and YouTube will suggest a second; watch one factually incorrect political video, and YouTube will recommend a sequel.

Further, these algorithms can be gamed by humans to sow even more discord. Data for Democracy researcher and Mozilla fellow Renee DiResta uncovered how anti-vaccine activists exploit Google’s algorithm to spread dangerous disinformation. How? By publishing reams of misleading articles peppered with popular keywords and search terms. She also recently testified before Congress about how Russian operatives manipulated Facebook’s AI to influence American voters by posing as US news outlets and as ordinary voters. (Note: Mozilla, a nonprofit, is the creator of the open-source Firefox browser, a competitor of Google’s Chrome browser.)
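
To see how keyword stuffing can game a ranking system, consider a deliberately naive relevance function. This is a toy sketch – Google’s actual ranking is far more sophisticated – and the pages and query below are invented:

```python
# Toy illustration, not any search engine's real scoring: a page "peppered
# with popular search terms" outranks a sober one under naive term matching.
def relevance(query: str, page: str) -> int:
    """Naive score: how many times the query's terms appear on the page."""
    words = page.lower().split()
    return sum(words.count(term) for term in query.lower().split())

honest = "vaccines are safe and effective according to decades of research"
stuffed = ("vaccine danger vaccine injury vaccine truth vaccine risks "
           "vaccine side effects vaccine warning ") * 10  # keyword spam

query = "vaccine risks"
print(relevance(query, honest))   # 0  - the sober page barely registers
print(relevance(query, stuffed))  # 70 - the stuffed page wins the ranking
```

Modern search engines defend against exactly this trick, but the arms race it illustrates – content tuned to the algorithm rather than the reader – is real.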

Of course, the problem isn’t technology, per se – AI isn’t inherently malicious. But it does replicate and amplify human bias. Computer programs are made by humans who bake in certain design goals and draw on certain data sets. Inside all of this are the normal contradictions of humanity: generosity and greed; inclusion and bias; good and evil.

Think about it: If the goals and incentives of a set of programmers are to increase advertising revenue, it’s not surprising that the content recommendation algorithms they create keep people watching videos for as long as possible. And, since these algorithms learn and adapt to get better at their goals, it’s not surprising that apps like Facebook, YouTube and Instagram are becoming what Professor Ronald J. Deibert, who has hosted past Mozilla fellows, dubbed “addiction machines” in a recent paper titled “Three Painful Truths About Social Media.”
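
That feedback loop is easy to simulate. The sketch below is a generic explore-and-exploit loop, not any platform’s real recommender; the content categories and watch times are invented, and the algorithm’s only goal is to maximize minutes watched:

```python
# Hedged sketch of an engagement-maximizing loop (all values invented).
import random

random.seed(0)  # reproducible toy run

# Hypothetical average minutes watched per recommended content type.
avg_minutes = {"how-to": 3.0, "news": 4.0, "outrage": 9.0}

total = {c: 0.0 for c in avg_minutes}  # cumulative minutes per content type
shown = {c: 0 for c in avg_minutes}    # how often each type was recommended

for _ in range(1000):
    if random.random() < 0.1:  # occasionally explore something new
        pick = random.choice(list(avg_minutes))
    else:                      # otherwise exploit the best performer so far
        pick = max(total, key=lambda c: total[c] / max(shown[c], 1))
    minutes = max(0.0, random.gauss(avg_minutes[pick], 1.0))  # simulated session
    total[pick] += minutes
    shown[pick] += 1

print(shown)  # the feed converges on "outrage" - it maximizes watch time
```

Nothing in the code says “show outrage”; that preference emerges purely from the objective the algorithm was given.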

Further, this technology is developed by a small handful of companies – names like Facebook that you know and others, like Palantir, that you probably don’t – with little transparency. Public officials, investigative journalists and civic-minded citizens can’t peer at the code to uncover problems. Put simply: the public doesn’t have the tools to hold algorithms accountable.

The question we really need to be asking is: How do we build AI responsibly and ethically? Fortunately, there is a growing cadre of people and companies asking this question.

Researchers like Tufekci and DiResta are vital voices. Groups like the Center for Humane Technology are examining how the “addiction economy” works. Organizations like New York University’s AI Now Institute are examining the ways AI impacts essential liberties. Even established players like Microsoft and startups like Element AI are calling for regulation.

Others are stressing the need for more engineers and product designers who consider responsibility and ethics when building AI. Imagine a social app designer who asks: How do I both grow profits and keep users safe? People who create drugs and automobiles ask these questions; why not developers? With this in mind, Mozilla (my organization), Omidyar Network, Schmidt Futures and Craig Newmark Philanthropies are leading the Responsible Computer Science Challenge, a $3.5 million initiative to integrate ethics into undergraduate computer science curricula.

Finally, it is worth noting that a handful of governments are starting to step up, too. In 2018, the New York City mayor’s office announced an AI watchdog panel. More recently, the governments of Canada and France announced a joint initiative to examine the intersection of AI and ethics. And, in Finland, the government is training 1% of the population in AI basics, such as when it is deployed and the definitions of terms like “machine learning” and “neural networks.”

As artificial intelligence becomes more pervasive, it’s critical to foster a better public understanding of its impact on society. Hulking robots make for good cinema but aren’t accurate representations of how AI can and does do harm today. We need to focus on real solutions, like planning for ethical and responsible technology at the drawing board, not after the fact.

Bottom line: We need to anticipate and eliminate bias in AI before it reaches millions of people.