Your AI assistants are people, too!
By Mike Meyer
You’d think we would have gotten a little better at this by now. We’ve been riding a recognizably accelerating, tech-driven paradigm shift for forty years or so. If you look at the media, we follow only a few consistent topics: political insanity and collapse is high on the list, followed closely by endless, breathless discussion of technological change. Unfortunately it’s not helping.
Depending on your age you can recall, or research, the last half of the 20th century: it started with us going to the moon, moved to the digitization of everything, discovered personal computers, and ended up with the internet. It was all good. There were always a few who were certain it would all come to a bad end, but they didn’t set the tone. It was all good.
It wasn’t really all good, but the general tone was one of amazement at the possibilities. The problems were really all with people misusing the technology. This is the nature of a paradigmatic shift while the forces of change steadily build in a geometric progression. All of this technology would really change things, but only in the “future”. This is a different future than next week’s home project or next year’s vacation. The “future” may not come at all, and it probably won’t cause us to change our plans or reschedule our vacation.
Unfortunately geometric progression tricks our linear brains into thinking we understand it, and it’s fine until we are suddenly shoveling as fast as we can and none of this is fun anymore. We’re about twenty years into that part of the shift. The people least able to adapt are freaked out and getting ready to smash things. The simple reality is that the ability to adapt requires a native liking for change, flexible intelligence and, usually, more and better education. There do not appear to be any shortcuts or tricks to this. Either you have it and improved it by learning, or you worked hard at education and it worked for you.
This is not going to stop, nor is it suddenly going to revert to a previous age. As this accelerates geometrically, staying afloat on the deluge of change will require faster and more intensive adoption of technological lifesavers. It’s not going to be easy. The human species has always been divided between the fast changers and the slow changers. The difference is the level of resistance and resulting conflict, but we all change. This is not to be confused with fast thinking and slow thinking, other than that slow change people are more averse to slow thinking. More on this later.
We’ve been struggling with the impact of this on our ways of processing information and communicating. We did pretty well integrating the internet and the pocket information processor otherwise known as the smartphone. As a result we have decided that those things are pretty normal, i.e. normalized. We’ve been burned by nasty opportunists who have figured out how to take advantage of the slow change people who mistook social networks and fully decentralized media for some sort of authority. The slow change people tend to be susceptible to authoritarians, so this should not have been a surprise.
The speed at which people have stopped worrying about self-driving cars, as indicated by the general lack of opinion on them, suggests that this technology may slide right in without a large problem. Reported experiences with self-driving cars follow a pattern of fear and reluctance until actually riding in one, when it soon becomes boring. That is the ultimate normalization. The real issues will be rethinking legal responsibility and how we handle non-human decision makers. And that is a big one that may force a human course change.
This seems to be the issue with the sudden hand wringing and growing discomfort over Google’s Duplex. The expectation seems to have been that the demonstration of a new assistant that could actually make phone calls and schedule things for you would be seen as awesome. It is awesome, but it also scared the hell out of a lot of people. That’s a problem because, as I’ve said above, we need all the augmentation we can get to ride this deluge of change.
As always, the paradigmatic shift is a non-linear process with an array of recursive steps and feedback loops built in. One way to see this is as layers of complexity. For my purpose here, this can be simplified into the layer of slow change people and the layer of fast change people worried about other things.
Slow change people freaked out because they were suddenly deep in the uncanny valley. The original theory of the uncanny valley concerned human images that were almost real but caused revulsion, as measured by a drop on reaction graphs, hence the valley. There may be people who react strongly to this and nothing else, but I suspect that what we are seeing is a slow change people’s response to dealing with a non-human human, or artificial person.
To fast change people, Google Duplex is great. No need to make scheduling calls or hire someone to do that. Go for it.
The people troubled by this are troubled because, in the demonstrations, Google Duplex successfully imitated a human, and the people on the other end had no idea they were dealing with an artificial person.
Does that matter? We do business with machines all the time, but we have to push buttons or say numbers or ‘yes’ and ‘no’. This is a pattern I have seen in a number of settings with new technology introductions: people who are having trouble dealing with a new system or device raise issues that are not really issues. Those are simply indicators of the slow change person.
This seems to be leading to demands that an artificial person must declare itself to be non-human from the start of the conversation. Not only is this unnecessary, it will mark AI assistants as second class beings. While that may not be an immediate concern, I’m betting that in a few years it will become a point of rights and discrimination. Why start down that road if it is not needed?
Google Duplex, Assistant, or any other can quickly run into situations requiring a real human to sort out issues. Should these assistants be required to announce their non-human status then? As long as they hand off to a more capable person (and I can see this taking a couple of steps before you get to a true Homo sapiens), why should I care?
The higher-level concern entangled in this is the use of artificial beings to imitate authority figures. This is a valid concern but, again, not an issue specific to an artificial being acting in a traditionally human job. This is an issue of authentication of authority. The issue is misrepresentation, which is at least an ethical problem if not a criminal one, depending on action and intent.
It is also being posited that it is unethical simply for an intelligent system to pretend to be human. Why does this matter? Am I put in danger because an AI system scheduled an appointment with me for its ‘employer’? I don’t care, and I think this should not be confused with the very large and real problem of authenticating authority and identity for everything.
To me the real issue is the relationship between an agent and the person being represented. There is a long legal history of how this is done with power of attorney and other instruments. This is something we must now address, and it may loop back to AIs as agents bearing responsibility for actions taken. A major issue is liability, and that is an authority and identity issue. Is the self-driving car a legal agent of the owner? Of the people riding in it? Is it a being in its own right? Perhaps this will be handled by all intelligent beings or devices, virtual or physical, having a blockchain identity accessible at any time. Now that will be a slow change people freak show.
We need to move on. As I said at the beginning, we need all the artificial help we can get. The slow changers have already caused huge political problems by failing to pay attention, or by not learning the difference between a Russian propaganda Facebook account and someone worth paying attention to who will tell them the truth.
Let’s not be sidetracked by non-issues and potential forms of discrimination.