Superintelligence Is a Bogeyman — Narrow AI Poses Real Threats Today

Advances in AI bring benefits, risks, and unintended consequences. The more powerful the technology, the greater the risks it can pose.

Kyle Dent
11 min read · Mar 6, 2021


Artificial intelligence (AI) is now in wide use, intersecting with our lives in ways we may not even be aware of — from simple pattern recognition to tasks that require complex planning, prediction, and reasoning. Although AI has done amazing and useful things, the increase in its deployment has revealed its double-edged nature: Every technology, from fire and nuclear power to social networks, brings both good and bad as well as unexpected consequences. The more powerful the technology, the greater the risks it can pose.

The conversation about both real and imagined harms from AI runs the gamut from unfair bias in decision making to economic meltdown to annihilation by killer robots. Assuming that no one releases swarms of AI-enabled slaughter bots to wipe out all of humankind, just how big are the risks? What ethics issues does AI raise? Are we prepared for what AI may bring in the future?

The Bigger AI Ethics Dilemma

In the 1960s, a colleague of Alan Turing named I. J. Good wrote a paper that introduced the idea of an “intelligence explosion,” in which he described an ultraintelligent machine that “can far surpass all the intellectual activities of man no matter how clever.” Good’s ultraintelligence is more commonly referred to as superintelligence these days, and it represents the ultimate level on a spectrum from artificial narrow intelligence to human-level artificial general intelligence (AGI) to artificial superintelligence. Good predicted that after we create superintelligent machines, “the intelligence of man would be left behind.”

Given the implications of an intelligence well beyond human ability, some prominent personalities have raised the question of whether it’s ethical to develop a superintelligence at all. Of particular concern to this crowd is the “AI control problem.” They reason (assuming that it’s possible to create an autonomous, self-aware intelligence) that once it exists, humanity will no longer be able to control it. What’s more, the transition from human-level intelligence to superintelligence could happen abruptly, adding urgency to the need for control mechanisms before superintelligence arrives.

Nick Bostrom, a philosopher at the University of Oxford, has been one of the more influential voices warning of a coming AI global disaster. In his book Superintelligence: Paths, Dangers, Strategies, he argues that solving the AI control problem is critical if humanity is to survive the inevitably superior abilities of future technologies. Synthetic cognitive systems that can “use their intelligence to strategize more effectively than we can” will easily displace the human race as the dominant entity on earth. His views have been amplified by luminaries like Bill Gates and the late Stephen Hawking, who have expressed their own concerns about the dangers of an out-of-control AI. Bostrom is not specific about the time frame, though his text suggests that parity with human-level intelligence will arrive in the foreseeable future.

According to Bostrom, superintelligent machines need not be malignant to represent a danger to humans. Several paths to AGI exist, many of which would spell doom for humanity even if the machines do not explicitly turn on people, “HAL” style. Existential threats aside, you do not need to conjure a superintelligent bogeyman to recognize the difficult ethical issues AI raises. Humankind is already creating narrowly intelligent machines that bring up many philosophical questions, ranging from the ethical behavior of AI programmers to the morality of replacing humans in the workplace. Ethical questions at the implementation level abound, too.

Ethics of AI Implementation and Applications

Lethal Autonomous Weapons: Lacking Human Values

AI brings a new and completely different element to human conflict and competition because an AI cannot reason about the value of human life. The idea behind lethal autonomous weapon systems (LAWS) is to locate, identify, and kill targets without input from a person. LAWS rely on a form of machine intelligence called reinforcement learning (RL), which has been remarkably effective in well-defined environments. RL, for example, drives the brains behind AlphaGo, the DeepMind AI that beat one of the top-ranked Go players in the world. In the well-understood, closed-world environments of games, RL systems can seem spooky in their superhuman ability to learn and then master play. In the open-world environments that humans occupy, though, RL-trained machines inevitably fall short. The rules of the real world are not straightforward, and unreliable or unexpected sensor inputs can cause such machines to act erratically.
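To make concrete what a “well-defined environment” looks like, here is a minimal sketch (purely illustrative, not drawn from any real weapons or game system) of tabular Q-learning on a tiny gridworld. Every state, action, and reward is enumerated up front, which is exactly the luxury that closed-world games offer and the open world does not. The GridWorld class and its parameters are invented for this example.

```python
# Minimal tabular Q-learning on a 4x4 gridworld: a toy instance of the kind of
# closed, fully specified environment where RL shines. All names here
# (GridWorld, train) are illustrative inventions, not part of any real system.
import random

class GridWorld:
    """4x4 grid; the agent starts at (0, 0) and is rewarded for reaching (3, 3)."""
    SIZE = 4
    MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

    def reset(self):
        self.pos = (0, 0)
        return self.pos

    def step(self, action):
        dr, dc = self.MOVES[action]
        row = min(max(self.pos[0] + dr, 0), self.SIZE - 1)
        col = min(max(self.pos[1] + dc, 0), self.SIZE - 1)
        self.pos = (row, col)
        done = self.pos == (3, 3)
        reward = 1.0 if done else -0.01          # small step cost, big goal reward
        return self.pos, reward, done

def train(episodes=2000, alpha=0.1, gamma=0.95, epsilon=0.1, max_steps=200):
    env, q = GridWorld(), {}                     # q maps (state, action) -> value
    for _ in range(episodes):
        state = env.reset()
        for _ in range(max_steps):
            # Epsilon-greedy choice over the four known, fixed actions.
            if random.random() < epsilon:
                action = random.randrange(4)
            else:
                action = max(range(4), key=lambda a: q.get((state, a), 0.0))
            nxt, reward, done = env.step(action)
            best_next = max(q.get((nxt, a), 0.0) for a in range(4))
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
            if done:
                break
    return q

q_table = train()  # the learned table encodes a reliable path to the goal
```

Nothing about this agent transfers outside its 4x4 world; change the sensors, the action set, or the reward, and the learned table is useless, which is the gap between closed games and the open world described above.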

Looking beyond the difficult problem of training AI to behave correctly in the face of varying inputs and environments, AI researchers still do not know how to program any agent so that its decisions and actions align with human values and preferences. Whereas a human soldier can refuse to fire on fellow human beings because an order is unlawful or conditions in the field have changed, machines lack that flexibility and are unlikely to disobey a kill order once it has been issued, regardless of the situation on the ground.

AI Control and Targets

Apart from weaponized AI, two related questions remain central to AI ethics discussions: Who owns and controls AI, and who becomes the subject of it? China has been a pioneer in deploying AI to surveil parts of its own population. Using sophisticated facial-recognition technology and an extensive network of surveillance cameras, the government singles out and tracks the Uighur population, a predominantly Muslim group within China. The AI system tries to identify Uighurs based on their physical appearance so that it can track their activities. As China has demonstrated, those in authority are likely to use AI to entrench their positions of power. Even liberal democracies must be vigilant not to cede rights to technology. Seemingly beneficial solutions can shift power relationships or, worse yet, be applied in unfair or damaging ways.

In a new reality presaged by the 2002 film Minority Report, many law enforcement agencies in the United States and Europe are turning to predictive policing tools. Based on the premise of stopping crime before it happens, predictive policing quantitatively analyzes community and neighborhood data. Vendors such as PredPol and Palantir claim to make police departments more efficient and remove bias from officers’ judgments; however, a 2016 report from the Royal Statistical Society showed just the opposite: Predictive tools disproportionately target minority communities because of biased data (among other reasons). Moreover, the common practice of feeding new data from AI-directed police stops and arrests back into the algorithm creates a feedback loop that only exacerbates the problem.
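That feedback loop is easy to reproduce in a toy simulation. The sketch below is purely hypothetical (it does not model PredPol, Palantir, or any real deployment): two districts have identical underlying crime rates, but the district that starts with more recorded incidents receives more patrols, records more incidents as a result, and the disparity in the data grows.

```python
# Toy simulation of a predictive-policing feedback loop. The districts, rates,
# and patrol counts are invented for illustration; this is not any vendor's model.
import random

random.seed(0)

TRUE_RATE = 0.3                   # identical underlying crime rate in both districts
recorded = {"A": 20, "B": 10}     # biased starting data: A is over-represented
PATROLS = 10                      # patrols allocated each day

for day in range(365):
    share_a = recorded["A"] / (recorded["A"] + recorded["B"])
    patrols = {"A": round(PATROLS * share_a)}
    patrols["B"] = PATROLS - patrols["A"]
    for district in ("A", "B"):
        # Crime is only *recorded* where patrols are sent, so the district with
        # more historical records generates still more records, and the new
        # records feed the next day's allocation.
        recorded[district] += sum(
            random.random() < TRUE_RATE for _ in range(patrols[district])
        )

print(recorded)
# After a year the data shows district A with far more recorded crime than B,
# even though the true rates were identical: the algorithm confirms its own bias.
```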

Basing algorithms on historical, likely biased data is an obvious problem, but AI developers also make judgments about many aspects of the software, such as model selection, parameter weighting, and evaluation methods — all of which can affect the fairness of predictions. For the most part, these judgments are made during development, and vendors deem them proprietary trade secrets, precluding any evaluation by outside parties. Informed conversations about the trade-offs between the promise of more efficient policing and the actual impact on communities are therefore not possible. Indeed, citizens are not involved in decisions about employing algorithmic policing in their neighborhoods. Perhaps devising ways to mitigate the injustice would tip public opinion in favor of such tools, but nuanced political discussions of that kind rarely happen.

Similarly, questions of ownership and control of AI are not just for state organizations. Businesses are also using advanced facial-recognition systems. A 2020 New York Times article reported that images of people can be fed into back-end databases to track individuals across various activities and over time. Companies such as Facebook and Google have hesitated to deploy vast stores of individuals’ faces, but a startup called Clearview AI has received a lot of attention for a controversial database it has built from billions of personal images scraped from social media and other public websites. According to the New York Times, more than 600 law enforcement agencies have already signed up to use the service, despite not knowing how the technology works or who is behind it.

Data and Privacy Issues

It is no secret that current AI tools are fueled by data. The more data, the better for making predictive models — or so the thinking goes. In the era of social media, people expect easy and free access to information and seamless communications. Internet users tend not to think about how allowing platform companies to monetize personal information may affect their lives.

How data is collected and used has only recently caught the attention of regulators, and then only in some jurisdictions. In the absence of oversight, a whole industry has grown up around private data collection and sale, including location information from smartphone apps. Companies track individuals’ movements and then sell the data to customers who use it to target ads and tailor physical retail experiences. Even financial firms are paying for private data to gain insights into and make predictions about consumer behavior. Those using the data claim that they are interested in aggregate patterns only, not in the identities of their subjects. However, the data is often collected without informed consent, sometimes through misleading messaging, and without transparency of any kind. Individuals do not know who can access the data or what their motives are.

People in the European Union have some protection through the General Data Protection Regulation (GDPR). The law applies to all companies that operate in Europe or process the data of people in the EU. It gives individuals new rights to control the data companies collect about them, to have that data deleted on request, and to limit how it is sold to third parties. On January 1, 2020, California implemented similar protections for its residents through the California Consumer Privacy Act.

Unintended Consequences

Even with the best intentions, it is difficult to get technology right and deploy it without unforeseen results. Examples of runaway automated processes already exist. Developers write incomplete instructions that omit necessary constraints, and AI often lacks the ability to adapt its decisions to changes in its environment — assuming it can even detect such changes. Often, an AI does not know that it is doing the wrong thing because its sensing data is limited or imperfect. Many AI models are brittle: their high performance on test data crumbles when they encounter input that differs even slightly from their training data.

A particularly illustrative example revealed just how shallow deep learning can be. A common task in computer vision is to detect and identify various objects in an image. In a recent study, researchers from Canada’s York University and the University of Toronto fed an image of a living room to a computer vision system. The system did a remarkably good job of categorizing things in the room, such as a chair, a person, and books on a shelf. When the researchers introduced a new object to the same scene, however, the system not only failed to correctly classify the new object but lost track of other objects it had previously identified correctly. If image-recognition systems, which are integral to automated surveillance and autonomous vehicles (AVs), among other things, are to be trusted, they will require a level of robustness that current systems lack.
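The same fragility shows up in much simpler models. The sketch below is not a reproduction of the York and Toronto study; it is a hypothetical two-class example showing how accuracy that looks excellent on in-distribution test data can drop sharply when every input is nudged by a small, constant amount.

```python
# Toy illustration of brittleness under distribution shift (synthetic data;
# not the object-detection study described above).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    """Two Gaussian classes in 2-D; `shift` nudges every feature by a constant."""
    class0 = rng.normal(loc=0.0, scale=1.0, size=(n, 2))
    class1 = rng.normal(loc=2.0, scale=1.0, size=(n, 2))
    X = np.vstack([class0, class1]) + shift
    y = np.array([0] * n + [1] * n)
    return X, y

X_train, y_train = sample(500)
model = LogisticRegression().fit(X_train, y_train)

X_test, y_test = sample(500)               # drawn from the training distribution
X_shift, y_shift = sample(500, shift=1.5)  # every input nudged by a constant 1.5

print("in-distribution accuracy:", model.score(X_test, y_test))    # roughly 0.92
print("shifted-input accuracy:  ", model.score(X_shift, y_shift))  # drops sharply
# The class structure is unchanged (only the inputs moved slightly), yet the
# fixed decision boundary the model learned no longer separates the classes.
```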

Ideally, researchers discover weaknesses in systems before they deploy those systems in the real world, but this has not always been the case. In a glaring example, the AI systems in AVs from Tesla and Uber have caused fatal accidents because researchers and designers made choices based on assumptions that turned out not to be true. The developers at Uber, for instance, had tragically disabled the car’s emergency braking system to create a smoother ride for passengers, leading to the death of a pedestrian when the AV failed to brake for her as she crossed the street.

Other uses of AI have also had adverse effects on human lives. Decision-making systems such as those used for judicial proceedings have been shown to reflect inherent biases. In other examples, health care applications have made odd recommendations, such as classifying pneumonia patients who also have asthma as low risk and recommending they be sent home when they should be admitted to intensive care. Without testing and validation of autonomous systems, other harms may be happening without anyone even knowing about them until it is too late.
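That pneumonia example is worth unpacking, because the model was not buggy; it faithfully learned a pattern in the historical data. The sketch below uses synthetic numbers (invented for illustration, not taken from the actual study) to show how treatment-confounded records can teach a model that asthma lowers risk: asthma patients were historically rushed to intensive care, so their recorded mortality is lower, and a naive risk model draws exactly the wrong conclusion.

```python
# Synthetic sketch of how treatment-confounded records can teach a model a
# dangerously wrong rule. All rates and sizes here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
asthma = rng.random(n) < 0.2
# Historical practice: nearly all asthma patients received intensive care.
icu = np.where(asthma, rng.random(n) < 0.95, rng.random(n) < 0.20)
# True risk is HIGHER with asthma, but intensive care cuts mortality sharply.
base_risk = np.where(asthma, 0.25, 0.10)
mortality_risk = np.where(icu, base_risk * 0.2, base_risk)
died = rng.random(n) < mortality_risk

# A naive model trained on (asthma -> died), ignoring treatment, learns a
# negative coefficient: "asthma is protective," i.e., send these patients home.
model = LogisticRegression().fit(asthma.reshape(-1, 1).astype(float), died)
print("asthma coefficient:", model.coef_[0][0])   # negative on this data
```

Including the treatment variable, or validating the model’s recommendations against clinicians’ expectations, would expose the problem, which is the kind of testing and validation the paragraph above calls for.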

Fairness in Machine Intelligence

The academic community is beginning to dig into these issues. The Association for Computing Machinery wrapped up its third annual conference on fairness, accountability, and transparency (ACM FAccT) earlier this year. Papers at the conference covered a range of sociotechnical applications, including topics on justice, race, and — of course — fairness. Many other conferences now include tracks that reflect an ethical view of AI research.

This effort is still in the initial stages, with no consensus on what “fairness” in machine learning means. In fact, agreement is likely to be impossible given the varying human perspectives on the issue. Still, researchers are motivated to mitigate inherent risks. Technologies with potentially dangerous side effects (entrenching existing biases, reducing accountability, and interfering with due process) must be identified and redesigned if society is to realize the benefits of AI while avoiding its harms. An equally hazardous feature of AI products is the information asymmetry that reinforces the already powerful position of AI owners over those who are subject to the technology’s decisions.
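One reason consensus is hard is that common mathematical definitions of fairness can be mutually incompatible whenever groups have different base rates. The toy numbers below are hypothetical; they show a predictor that satisfies equal opportunity (equal true-positive rates across groups) while clearly violating demographic parity (equal positive-prediction rates).

```python
# Toy illustration (hypothetical numbers) of why fairness definitions conflict.
def rates(y_true, y_pred):
    positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    tpr = sum(positives) / len(positives)   # true-positive rate
    ppr = sum(y_pred) / len(y_pred)         # positive-prediction rate
    return tpr, ppr

# Group A has a 60% base rate; Group B has a 20% base rate. The (hypothetical)
# predictor happens to be perfectly accurate in both groups.
y_true_a = [1] * 60 + [0] * 40
y_pred_a = list(y_true_a)
y_true_b = [1] * 20 + [0] * 80
y_pred_b = list(y_true_b)

print("Group A (TPR, positive rate):", rates(y_true_a, y_pred_a))  # (1.0, 0.6)
print("Group B (TPR, positive rate):", rates(y_true_b, y_pred_b))  # (1.0, 0.2)
# Equal opportunity holds (TPR of 1.0 in both groups), but demographic parity is
# violated (0.6 vs 0.2). Forcing parity here would require introducing errors
# for one group, so the two notions of "fairness" cannot both hold at once.
```

Which definition should govern a given system is a value judgment for the affected community, not something the mathematics can settle on its own.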

Perspective and Possible Solutions

Given the harms already occurring and the genuine risks posed by current and near-future AI, it seems almost irresponsible to dwell on highly unlikely scenarios such as existential danger from AGI. To be clear, there is no known path to AGI. Enough time may prove Good’s prediction correct, but all efforts to this point indicate that AI is a long way from equalling or surpassing human intelligence. The risks of greatest concern come not so much from an advanced, general machine intelligence as from AI’s inherent limitations coupled with commercial interests that rush to deploy the technology for uses beyond its capabilities.

In his writings, Gary Marcus, a well-known cognitive scientist at New York University, offers a counterweight to the hype that drives a lot of the blind faith in AI. He advises caution when confronted with claims of AI’s broad abilities. AI is narrow: It performs well on tasks it has been trained to perform, but it should not be expected to perform beyond its limits. Marcus retains an optimistic view, though, that AI can be directed in productive ways. He and Ernest Davis have recently written the book Rebooting AI: Building Artificial Intelligence We Can Trust, which describes how to

“work toward AI that is robust, reliable, capable of functioning in a complex and ever-changing world, such that we can genuinely trust it with our homes, our parents and children, our medical decisions, and ultimately our lives.”

The immediate risks, then, are primarily from existing intelligent agents sensing and acting in environments where they affect human beings, either by making decisions about them or by interacting in the world in some way — governments using AI against whole groups of people; state or nonstate actors developing and then deploying autonomous weapons; autonomous vehicles without adequate safeguards; political interests releasing disinformation bots on unsuspecting constituencies; and on a somewhat smaller scale, judges using decision support tools and police departments guided by opaque models that reinforce existing biases and oppressive actions.

Given the rate at which AI use is increasing, society is not managing the risks or grappling with these emerging ethical questions. Clearly, police departments and other public entities need to understand how these technologies work before they adopt them. Independent community review, with an eye toward civil protections already on the books, such as safeguards against unlawful search and the presumption of innocence, is necessary. Transparency in these systems is critical.

The work of making AI safe cannot be left to technologists alone. The impact on society and the questions arising from dependence on machine intelligence cut across several disciplines, including economics, law, philosophy, and sociology. When specific risks or assumptions exist, they must be made obvious to all stakeholders, including and especially those whom the technology affects. Technology development should be an ongoing interaction among those who build, sponsor, and use the technology. There is never a single ethical decision in developing a technology. Instead, there are ongoing decisions about its focus, scope, transparency, and experience throughout its life cycle that require constant ethical consideration, deliberation, discussion, and consensus.
