In her opening remarks, Jasanoff suggested that a discussion about A.I. and human rights should begin with the “human” part.
“What is it about the human that we consider worth protecting and worth defending?” asked Jasanoff, a renowned scholar of science and technology studies.
She cautioned against falling into the "traps" of A.I.: the ideas that technological progress is always linear, good, and needed, and that faster is necessarily better.
Though speed is privileged in the tech and finance worlds, in other sectors, such as science and medicine, slow, accumulated knowledge is more valuable. "There are lots of places where we actually value slowness and the work of the tortoise over the work of the hare," Jasanoff said. "So, are technological ways of doing democracy necessarily better?"
Defining the “I” in A.I.
Jasanoff argued that most people fixate on the "A" part of artificial intelligence: how fast and how complex we can make it. "But what is the 'I' of A.I.?" Jasanoff asked. "Is it individualist or communitarian? Is it traditional or modern? Is it hierarchical or egalitarian?"
In human society, Jasanoff pointed out, we recognize the value of different intelligences. Even if you have a bad sense of direction, maybe you have a great memory for names and faces. But when it comes to A.I., some forms of intelligence are being privileged over others.
“The kinds of intelligence we choose to develop, we don't do that in a vacuum,” Jasanoff said. “There's money attached. There are what you might call political economies of intelligence.”
Huq picked up on this thread during the discussion portion of the event, asking the audience to consider who adopts and develops A.I.
“If you look at the agencies responsible for ensuring health, safety, collecting taxes, engaging in the protection of the population, there are some [A.I.] adoptions, but they're very limited,” said Huq, a scholar of constitutional law and A.I. regulation.
This isn’t true for what Huq calls “the coercive sector,” which includes the military and police.
"Police have funds, they have a will to use those funds to extend their coercive power. And there's very little by way of regulatory constraint that stops them from doing so, even when a technology is probably not cost-justified."
As an example, Huq cited the Chicago Police Department's adoption of ShotSpotter, a costly A.I. tool meant to detect and locate gunshots, which has shown little to no effect on crime reduction.
“A.I. has already built into it directions and biases that privilege some kinds of ways of life, some kinds of assumptions about the moral world, at the expense of others,” Jasanoff said.
Bias and democracy
Among our greatest hopes for A.I. is that machines could eliminate the foibles of human judgment. Done right, A.I. could reduce inequity in the judicial system or ensure that diverse voices are represented in public discourse.
However, we’ve quickly learned that we’ve built a lot of ourselves into A.I.—including human bias.
“Technology is as much an object as it is a mirror,” said Gunkel, who studies the philosophy of technology. “It's a mirror that reflects back to us what we think about ourselves, our society and our world.”