
We’re asking the wrong questions about AI


It’s hard to get a handle on what’s happening in artificial intelligence right now. You might read that a tech company created a chatbot so smart it’s indistinguishable from a human, or that an AI “ethics advisor” can help you make decisions. Some prognosticators will even tell you that we’re headed for an AI uprising.

Claims like these lack something crucial, according to Colin Allen, a distinguished professor of history and philosophy of science in the Kenneth P. Dietrich School of Arts and Sciences.

“I think there’s a lot of credulousness and not enough skepticism. History is being repeated by those who don’t know it,” he said.

Allen has spent more than a decade working on questions around AI ethics and leads the Machine Wisdom Project, which aims to embed the idea of “wisdom” into how AI is used. It’s a new framework for understanding artificial intelligence programs, which have a 60-year record of fooling users with their supposed humanity. Even today, Allen said, AI retains some of the same limitations it displayed in the 1960s.

“Wisdom is the interaction between what you know and what you don’t know, in the sense of being aware of the limits of that knowledge,” he said. “AI as we know it has no idea what it knows, or why it’s spewing out what it’s spewing out. It has no capacity to detect inconsistencies.” 

Instead, that higher-level thinking is the job of those who create AI — and Allen has over time shifted his focus toward broader shortcomings in the way that people make and use the technology. For instance, even an AI product that gets a passing grade on bias and other important safety criteria can cause harm, said Brett Karlan, who until September was a postdoctoral researcher in Allen’s lab and is now at Stanford University’s McCoy Family Center for Ethics and Society. 

“If you produce that technology, what if it gets used by a reactionary government to further control its people?” Karlan said. “When you ignore the broader social and political systems that involve both humans and machines, you can miss the ethical forest for the trees.”


Funding from the Templeton Foundation’s Diverse Intelligences program allowed Allen and his team to build their ideas about wisdom into a full-fledged initiative. And in June, Allen and Karlan published a paper in the Journal of Experimental and Theoretical Artificial Intelligence laying out the case for prioritizing wisdom in the AI pipeline — from a program’s conception to its design and even how the end product is advertised.

That last step is a particular concern for the duo. As part of the project, Karlan has read through the marketing material on technology companies’ websites to see how they advertise their services to other companies that might use the technology.

“The marketing material essentially becomes a handbook for how to use the materials that these large technology companies are putting out,” he said. “And that’s a real problem when it’s trying to sell you on the idea that the technologies themselves are safe and ethical.”

Money machine

These marketing materials are emblematic, Allen said, of a major barrier to a wise AI industry: Ethical marketing isn’t necessarily lucrative. A prime example is self-driving cars.

“Tesla’s been doing this dance of playing up the capabilities of the car while trying to convince drivers that they shouldn’t really take their hands off the wheel,” he explained. “They want people to believe this technology is safe — and the commercial imperative for them, of course, is to keep pushing how safe it is — but somehow they’ve got to convince drivers that it’s not that safe.”

For an upcoming paper, the duo has been developing ways to tackle this problem. One method might be to pre-launch AI in a limited way with the intent of testing its limitations. Product teams could hire psychologists who would figure out how users might use and misuse artificial intelligence. For self-driving cars, Karlan even envisions a flight-simulator-like program where drivers can experience the ways a product might fail.

These are expensive and labor-intensive solutions that companies may never arrive at if left to their own devices — instead, according to Karlan, they may require new regulations or self-policing by industry groups. “The way these technologies can be safe and ethical is in the context of a broad system with a lot of checks and balances,” he said.

And lurking behind decisions about how to make and use AI is a more basic question: Is AI even the right fit for the problem at hand? The complex programs that underlie AI have some well-known pitfalls, including that it’s often difficult or impossible to know why a program arrived at a particular conclusion. In high-stakes decisions where bias could creep in — for instance, when choosing who gets a loan and who doesn’t — companies could instead look to simpler and more transparent statistical tools.

“It’s really not obvious that we’re ever going to really understand what is going on in these very large neural networks,” Karlan said. “But there are a lot of solutions that don’t look like that and where you can, in fact, know what is exactly going on.”

In the AI game, in other words, sometimes the wisest move is not to play.


— Patrick Monahan, photo via Getty Images