A recent study out of California looked at the potential influence of artificial intelligence on the typical North American city in 2030. How, researchers asked, will A.I. innovation affect education, health- and elder-care, and public safety? And how will automated machines change the way we acquire personal goods (a.k.a. shopping)?
“They will facilitate delivery of online purchases,” the One Hundred Year Study on Artificial Intelligence reads, “through flying drones, self-driving trucks, or robots that can get up the stairs to the front door.”
Um, is it just us or does the last part of that sentence give you the creeps?
Indeed, our collective fascination with a technology inspired by our own brains and bodies is often twinned with revulsion toward a potentially dystopian future ruled by superhuman cyborgs that can do everything we do, only faster, more logically and with fewer coffee breaks (you did see RoboCop, right?). While that might be an out-there overreaction, given that many of us happily benefit from relatively harmless advances in A.I. every day (say hello, Siri!), the future of the discipline depends largely on our very human responses to it.
It’s that tension between what A.I. can do and what humans are willing to let it do that is of particular interest to Jorg Denzinger, an associate professor of information and communication technologies at UCalgary. Asked to speculate on the 50-year future of the discipline, Denzinger frames his answer around ethics and societal perceptions.
“A key thing for how the future of A.I. will develop is acceptance,” he says. “Society — all of us — has a stake in the direction that knowledge-based systems take us.”
Even though robots could, for instance, handle all manner of manufacturing processes, humans stand to lose jobs we may not be willing to give up. It's the flip side of Agent Smith's directive in The Matrix: in the name of avoiding human redundancy, never send a machine to do a human's job.
Interestingly, for a guy whose career is devoted to A.I. research, Denzinger’s “big dream” for 2067 is not the dawn of routine human head transplants or an army of robo-waiters with Genuine People Personalities. Rather, his hope is more sociological and, well, more human: he wants to see a shift in society that creates space for open discussion around the norms, laws and ethics of A.I. in every field. Denzinger points to the uncharted territory of the driverless-car industry as an example of how an informed, engaged and empowered society is key to ensuring A.I. innovation is helpful, not harmful.
“There will be situations, for instance, in which a self-driving car has to make a decision relating to an imminent collision within which an unknown number of people will survive,” he says. “The more the car knows about the car with which it might collide, the better, but the question for the autonomous vehicle is: Who will survive? And why — based on what? How does a car evaluate the ‘worth’ of the people in the cars?” Researchers like Denzinger must rely on society to inform the formulas upon which such A.I. knowledge is based. “If there’s no discussion, starting with our politicians, about those kinds of ethical parameters in our society, then, when there is an accident, the industry is blamed. There’s enormous potential for disruption.”
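To make those stakes concrete, here is a minimal, entirely hypothetical sketch of how such a choice might look in code. Every name and number in it is invented for illustration; it reflects neither any real vehicle's software nor Denzinger's own work. The point is simply that the "ethical parameters" he describes end up as literal numbers someone has to pick.

```python
# A purely illustrative sketch: an imminent-collision choice reduced to
# a weighted score. All names and numbers are invented for illustration;
# no real autonomous vehicle is claimed to work this way.
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver and its (hypothetical) estimated consequences."""
    maneuver: str
    expected_survivors: float  # estimated across both vehicles
    expected_injuries: float

# These weights ARE the ethical policy. Nothing in the math says how to
# set them; that is the societal decision Denzinger is talking about.
SURVIVOR_WEIGHT = 1.0
INJURY_WEIGHT = -0.2

def score(o: Outcome) -> float:
    """Collapse an outcome into a single number the planner can rank."""
    return SURVIVOR_WEIGHT * o.expected_survivors + INJURY_WEIGHT * o.expected_injuries

def choose(options: list[Outcome]) -> Outcome:
    """Pick the maneuver with the highest score: the car's 'decision'."""
    return max(options, key=score)

if __name__ == "__main__":
    options = [
        Outcome("brake hard", expected_survivors=3.0, expected_injuries=2.0),
        Outcome("swerve left", expected_survivors=4.0, expected_injuries=3.5),
    ]
    print(choose(options).maneuver)  # whose survival counts, and how much? the weights decide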
Everyone, he says, “should think and care about these kinds of implications.” Certainly, HAL-9000 would care. Do you?