Maybe You Can Drive My Car… But What Else?

Before I bought my latest car, I would tell anyone who was interested (and some people who weren’t interested at all) that the biggest marketing problem for autonomous vehicles was that people just wouldn’t trust them. We drivers would always want to be the ones in charge of making decisions about our driving.

GM’s self-driving experiment continues; Tesla has scheduled its latest update on the long-rumored “robo-taxi” for 8/8/24. I’m intrigued.

Then I got my new car. It’s not autonomous, but it has features that let you put it on cruise control and (mostly) cede control of steering and braking to the car’s computers. Within a month of getting it and trying out those features, I had done a total flip-flop. I was actually getting angry at the car’s computers for having the nerve to remind me to keep my hands on the steering wheel. I was willing to trust the car; why wouldn’t it let me?

The “who’s driving?” question is a tricky one. Back in 2016, I predicted that a showdown for supremacy between humans and machines would come within a decade. By then, I figured, computers would have gotten just sophisticated enough, taken over just enough jobs, and snatched away just enough autonomy from humans that by 2026 we would be in the middle of an impassioned, ugly, existential debate about the loss of human autonomy, a sort of neo-Luddite rebellion.

It looks like I’m going to be wrong.

Maybe AIs won’t have to fight an Armageddon-type battle for control; they could win in a walkover. Why? Machines make our lives more convenient. And we are lazy.

A big chunk of early evidence suggests that a lot of people in a lot of fields are (like me on the driving front) just fine ceding large swaths of their agency to computers.

One experiment, by Fabrizio Dell’Acqua of Harvard, provided HR professionals with two sorts of “assistants.” One group got a highly sophisticated AI tool to help them identify qualified job candidates; another group got a less sophisticated tool. The finding: the more sophisticated the tool, the more likely HR professionals were to choose the tool’s recommendations over their own judgment, AND the poorer their decisions were. (Dell’Acqua titled his study “Falling Asleep at the Wheel.”)

One study found that HR managers are willing to pick AI-selected job candidates over their own choices. (Image from Oorwin)

Another study, conducted in 2023 by Jeremy Utley of Stanford and Kian Gohar of Singularity, tested the impact of using AI to assist humans in corporate brainstorming sessions in Europe and the US. The pair predicted that using AI as a supplement to human brainstorming would unleash an unprecedented number of innovative ideas from the teams. In fact, they found the opposite. When AI was involved, the total number of ideas generated was only marginally higher, and the number of ideas judged to be actually innovative was lower: people seemed to be content letting the AI do the work for them. “The teams that had access to AI basically had ‘resting AI face’ – they were just sort of staring at the computer,” Gohar told the “You Are Not So Smart” podcast. “You watch these people during the session and they’re basically oblivious to each other’s existence,” said Utley. Rather than brainstorming, the human teams brainslept, and let ChatGPT do the work.

Other examples abound: human attorneys asking AIs to discover relevant cases to support their arguments, then not double-checking as the AIs generate completely fictitious cases; AIs making basic math errors and humans not catching them; pilots on jet airplanes continuing to let computer guidance systems steer the planes even when the computers are clearly failing.

In all these areas we may start out as AI skeptics, but we quickly become AI dependents. It’s just so easy.

In 2018, the Pew Research Center surveyed nearly 1,000 technologists, business leaders, and researchers. Several of them saw this coming. Greg Shannon, chief scientist for the CERT Division at Carnegie Mellon, warned that “some will cede their agency to AI in games, work and community, much like the opioid crisis steals agency today.” Kostas Alexandridis, the author of “Exploring Complex Dynamics in Multi-agent-based Intelligent Systems,” wrote: “Autonomy and/or independence will be sacrificed and replaced by convenience.”

This is not all bad, of course. Surely there are some things we can start letting AIs do that will free us up to do more interesting things. AIs might be able to do initial screenings to eliminate completely unqualified candidates (88% of global companies now use AI to do some version of this); brainstorming sessions might be restructured to enable AI to supplement human ideation (Utley and Gohar have some suggestions on how to do this).

Russia and Ukraine have both made extensive use of drones in their war. Does an unmanned aerial vehicle feel pain when it kills?

But our tendency to totally cede control to AI’s in some areas doesn’t bode well for how we will handle more complex areas. “My expectation is that in 2030 AI will be in routine use to fight wars and kill people, far more effectively than we can currently kill,” notes Simon Biggs, a professor at the University of Edinburgh. “As societies we will be less affected by this than we currently are, as we will not be doing the fighting and killing ourselves. Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the dissociation between our agency and the act of killing.

“We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully.”

We can’t cede our morality to a machine. We’ve got it. Machines don’t.

Every AI zealot I hear from encourages us to use AI as a tool, not a crutch; as an assistant, not an agent. Man + Machine > Man vs. Machine. We need HR professionals who keep their brains engaged as they make hiring decisions; work teams actively engaged in coming up with the next generation of insights, not letting ChatGPT do it for them; pilots who still know how to fly; attorneys who actually know enough case law to call BS on fake cases.

…And drivers who still know how to drive. After ten seconds of my hands being off the steering wheel, my car reminds me, with increasing urgency, that I am, ultimately, at least for now, in charge of driving, not the computer. Before we cede our autonomy and completely capitulate to AIs as a superior species, that’s a lesson we all need to be reminded of.

Notes:

The original Luddite rebellion: https://en.wikipedia.org/wiki/Luddite

Disempowering effect of AI on HR decision-making: https://static1.squarespace.com/static/604b23e38c22a96e9c78879e/t/62d5d9448d061f7327e8a7e7/1658181956291/Falling+Asleep+at+the+Wheel+-+Fabrizio+DellAcqua.pdf

AI depresses the quality of brainstorming: https://howtofixit.ai

Jeremy Utley and Kian Gohar on findings from brainstorming study: https://youarenotsosmart.com/2024/02/19/yanss-281-how-a-pernicious-cognitive-bias-limits-our-ability-to-use-chatbots-properly-and-to-overcome-it/#more-9139

Pew Research Center survey on AI: https://www.pewresearch.org/internet/2018/12/10/artificial-intelligence-and-the-future-of-humans/



