We’re used to hearing about bioethics panels and other organizations looking at the implications of robots, artificial intelligence, cloning and other areas of technological advancement. These are obviously areas with which humans must be concerned as we move forward. But, as I’ve argued here for a while now, I think it’s important that we not save that reflection only for The Big Things — the ones that nearly make us look like gods — but apply it also to the new technologies that come into our lives every day.
Not finding much of that (though it definitely exists, and I spend much of my time teasing out those pieces among all the tech/design/culture writing in print and on the Web), I was glad to come across a good report out today from the Brookings Institution: How Humans Respond to Robots: Building Public Policy Through Good Design, by Heather Knight, which looks at the policy implications of robotics. It’s a good read. I just want to pull out a couple of quotes that I think reinforce what I’m trying to do by writing about technological solutionism and keeping our skepticism frosty. I’m no Luddite (said every tech critic ever); I just want us to think about and consider our technology before we make it a part of ourselves, our daily lives and our ecosystems.
As Knight notes, a lot of the problems that arise upon the launch of new “disruptive” tech (and I use the word as it was used in the olden days) — see SideCar and Austin — can be headed off on the front end, if handled correctly.
“But handling them [public policy implications] well at the design phase may reduce policy pressures over time.”
And this is what I’m talking about:
“From driverless cars to semi-autonomous medical devices to things we have not even imagined yet, good decisions guiding the development of human-robotic partnerships can help avoid unnecessary policy friction over promising new technologies and help maximize human benefit. In this paper, I provide an overview of some of these pre-policy design considerations that, to the extent that we can think about smart social design now, may help us navigate public policy considerations in the future.”
The above is an ideal example of what I seek to convey when I write about the tech world and the political world coming together to hash things out. We need this sort of reflection and analysis from both sides before deployment, not after (unlike the approach of Uber, Lyft, et al.). Working together, much more can be accomplished.
But she goes beyond just policy. She digs into the cultural context in which these technologies arise — and our culturally ingrained responses to robots (which, in the case of Americans, may just be contradictory).
“‘Given that Japanese culture predisposes its members to look at robots as helpmates and equals imbued with something akin to the Western conception of a soul, while Americans view robots as dangerous and willful constructs who will eventually bring about the death of their makers, it should hardly surprise us that one nation favors their use in war while the other imagines them as benevolent companions suitable for assisting a rapidly aging and increasingly dependent population.’ Our cultural underpinnings influence the media representations of robotics, and may influence the applications we target for development, but it does not mean there is any inevitability for robots to be good, bad or otherwise. Moreover, new storytelling can influence our cultural mores, one of the reasons why entertainment is important. Culture is always in flux.”
I highly recommend reading the full report.
One quibble: she can be a bit optimistic, like some technological solutionists, when she says things like:
“Our social response to machines can be an asset to their impact and usability. In the above example, the physical presence of the robot and such natural motions could subconsciously impact the other attendees, resulting in more effective and impactful communication of the remote user’s ideas.”
Or it could make people highly uncomfortable and degrade communication in the meeting due to the awkward whirring of a robot shifting its eyes back and forth (or something similar).
We have to be careful not to fall into the trap of ignoring the small things that may drive us insane — like the cars that told you to put on your seatbelt, which she mentions in her piece. We need to keep our eyes and ears open.
We need to remember that, unlike the Lexus commercial that claims we’ll “be allowed to drive” when the car does all the driving for us, ultimately, these are our lives and we must lead them. Some tools and toys are better at helping us do that than others.
In sum, what’s important is that this text exists. We need more like it. Better than it. Texts that build upon it and expand it and break it into pieces (dare I say, “deconstruct” it?) and let us laypeople look at the possible consequences and social costs of new technologies as they come into our lives. The philosophy of technology can have just as much of a corrupting influence on life as economics does.
My favorite line in the entire report?
“If you are reading this paper, you are probably highly accustomed to being human.”