Perceptual Computing and Emotional AI: A Discussion

As mentioned in my last post, a fascinating exchange took place between Gray Scott and Anthony Scriffignano as we discussed the need to prepare for the future at the TCS Innovation Forum in New York City. I’ve captured that dialog in this post. As a backdrop, Anthony was reacting to the topics that Gray covered, namely Perceptual Computing and Emotional AI. Here is their exchange:

Anthony

I wanted to talk about this continuum from Conversational Computing to Perceptual Computing and how we interact with this thing that we call AI. There’s a concept in computer science called “making things computable.” If you take music and you digitize it, you can run an algorithm on it. So, making something computable is making it consumable by an algorithm. At the end of the day, these are just algorithms. The problem with the way humans consume information versus the way algorithms consume information is something called Observer Effects. If I say to you, “I know what you’re thinking right now,” you say, “You can’t possibly know what I’m thinking,” and then I say, “Well, you’re thinking that I can’t possibly know what you’re thinking, right?” So, we can influence each other.
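To make Anthony’s “making things computable” concrete, here is a minimal sketch in Python (my own illustration, not from the discussion): a musical tone is digitized by sampling it, and once it is an array of numbers, any algorithm, here a Fourier transform, can consume it.

```python
# Illustrative sketch: "making music computable" by sampling it,
# then running an algorithm (an FFT) on the digitized signal.
import numpy as np

SAMPLE_RATE = 44_100   # samples per second (CD quality)
DURATION = 1.0         # seconds of audio
PITCH_HZ = 440.0       # the tone we pretend a musician played (A4)

# Step 1: digitize -- turn a continuous tone into an array of numbers.
t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
samples = np.sin(2 * np.pi * PITCH_HZ * t)

# Step 2: now that it's computable, an algorithm can consume it.
spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(len(samples), d=1 / SAMPLE_RATE)
print(f"Dominant frequency: {freqs[np.argmax(spectrum)]:.1f} Hz")  # ~440.0 Hz
```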

When we measure something, we change it: you stick the thermometer in the soup, and you cool the soup off with the thermometer. So, you’re really measuring the temperature of the whole system, soup plus thermometer.
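A quick back-of-the-envelope calculation shows the effect (the heat capacities and temperatures below are invented purely for illustration): the reading is a weighted average of soup and thermometer, so the act of measuring lowers the very temperature being measured.

```python
# Illustrative observer effect: the thermometer absorbs heat, so the
# equilibrium reading is a heat-capacity-weighted average of both bodies.
soup_heat_capacity = 800.0    # J/K, a small cup of soup (assumed)
soup_temp = 80.0              # deg C before measurement
thermo_heat_capacity = 20.0   # J/K, the thermometer (assumed)
thermo_temp = 20.0            # deg C, room temperature

reading = (soup_heat_capacity * soup_temp + thermo_heat_capacity * thermo_temp) \
          / (soup_heat_capacity + thermo_heat_capacity)
print(f"Thermometer reads {reading:.2f} C, not the original {soup_temp} C")
# -> 78.54 C: measuring the soup cooled it.
```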

Observer Effects in anthropomorphic AI are particularly pernicious. This is a problem. Alan Turing gave us the Turing Test: the point at which we will think our machines are intelligent is the point where we can no longer ask them a question that helps us distinguish whether we’re talking to a machine or a person. The problem is, even if it passes the Turing Test, it’s still a machine.

Gray

This actually came up in the movie Ex Machina. What was interesting is that the film doesn’t try to hide that its android is a machine; she actually shows you in the beginning that she’s a machine. The observer effect was almost reversed in that scenario, because the audience was being observed by the movie. As you’re watching it, you see what we would call a woman, who we know is a machine, and yet we can’t help rooting for her as a character. So, we have already been pulled into that observer effect.

Another example is what’s happening with teenagers using Instagram. When you focus a camera on a subset of our society that is still developing their emotional personhood, basically their emotional states, and you ask them to be perfect, that is going to be part of that observer effect.

I don’t think it’s intentional. Obviously, we are sort of stumbling in the dark with some of these things, but we’re starting to learn a lot about who and what we are as a species. Some of that is very dark; some of it is really enlightening and beautiful. We’ve seen movements come out of the AI age, of the technological age, that have already changed, and will continue to change, our world for the better.

I’ve actually been writing a book for a while, and in my research I discovered this idea of the transformative effect the mirror had on our culture. It changed the art world forever: suddenly, instead of painting other people, artists could paint themselves, and there was an explosion of self-portraiture. This is part of the “mirror effect.” The mirror is technology, and look what happened to us when we gazed into the surface of a piece of technology. This is just one piece of the observer effect that I think you’re talking about.

Anthony

There are also Observer Effects on the dark side. We spend a lot of time in my organization looking for malefactors, people who are going to commit financial crimes: identity theft and things like that. There’s something we call a “Black Cat” problem. That is, a blind man in a dark room looking for his black cat.

He asks you to come and help. You have a problem with the fact that the room is dark and the cat is black; he doesn’t care about that, because he’s blind. You’re looking for something that’s hard to find, in a place where it’s hard to collaborate, because you’re each confounded by different things. The best bad guys, when they suspect they’re being watched, change their behavior. So, if we use AI to model how they’ve been behaving, we’re modeling how the best ones are no longer behaving. It’s this paradox that we get ourselves into. I think the trick, as practitioners, is to realize when we’re in these situations, to realize that we’ve just invented the mirror: that we’re creating something that is going to change the environment where it operates, and therefore that change will be part of the system we’re creating. If we don’t think like that, we’ll run into some pretty big surprises, and some of them might be pretty bad for us humans.
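What Anthony describes is close to what machine-learning practitioners call concept drift under an adaptive adversary. Here is a toy sketch (the rule, amounts, and threshold are all invented for illustration) of how a detector tuned to yesterday’s behavior goes blind the moment the watched party adapts:

```python
# Toy sketch of the "Black Cat" trap: a fraud detector tuned on yesterday's
# behavior silently fails once watched fraudsters change that behavior.
historical_fraud = [9_500, 9_800, 9_900]   # past fraud: amounts just under 10k
THRESHOLD = 9_000                          # rule "learned" from history (assumed)

def flag(transaction_amount: float) -> bool:
    """Flag transactions that match the *historical* fraud pattern."""
    return transaction_amount >= THRESHOLD

# The best bad guys notice they're being watched and adapt:
# one 9,900 transfer becomes three smaller ones.
adapted_fraud = [3_300, 3_300, 3_300]

print([flag(x) for x in historical_fraud])  # [True, True, True]: yesterday's model works
print([flag(x) for x in adapted_fraud])     # [False, False, False]: today it misses everything
```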

You also used the phrase “infuse with humanity.” We all talk about Siri, and there’s an AI newscaster in China.

These are just interfaces designed, in theory, to make us more comfortable talking to a machine, but at the end of the day, we all know we’re talking to a machine. Except, when that machine gets closer and closer to behaving like a human, do we have to care for it? Does it now have any kind of rights? Then there’s this concept of agency, where the law is very clear. If I hire somebody to deliver dynamite and they’re not negligent, they just trip on something and blow something up, I’m partially responsible because I hired them. Not so with a bot. If I make a bot and that bot starts to become racist (hypothetically, though this just happened, right?), the law doesn’t say I’m responsible right now. So, we’ve got a lot of catching up to do in terms of the equivalent of human rights for digital agents, whatever you want to call those digital rights.

Gray

“When do we develop a Bill of Rights for robots?” That was the first quote attributed to me when I started in futurism. Robots are going to become so pervasive in our everyday lives, and so much more like us, that eventually there will come a stage where you start to question what you’re dealing with. I’m not just talking about humanoid robots; maybe it’s a bot online, a digital agent, that sort of thing. There’s going to be a place where it crosses the line, where you start to wonder, “Is this a real person?”

There is a question there. Lots of us are thinking about it, but no one is implementing any sort of framework around it: How will these machines be treated? How will they be able to treat us? What are the laws for companies creating bots, if those bots slander you, or if they go after your children?

Anthony

If they don’t hire you? If a human never looked at your resume? That would never happen. Oh, that happened last week.

Gray

Well, Amazon’s AI has now started firing people. An AI has started firing people! So, where is HR? Does HR have a plan for this? We’re a little behind the ball. Part of what I think is so important about this moment we’re in right now is realizing just how many new questions we have to ask ourselves.

Anthony

You used the phrase “Continuity of Consciousness,” and I think it plays into this very well. I started thinking about what that continuity might be for me. There’s a difference between how we learn (which is changing now because we use digital devices to study online) and why we learn. Will we get to a point where I don’t need to learn because the machine knows it for me? And there’s the difference between what we teach and why we teach. I think we’re at a point right now where we’re crossing some of these bridges, and maybe we’re crossing them by accident.

This type of dialog is invaluable, as education and awareness are critical steps on the journey to shape the future, rather than being shaped by it. Once again, I thank the participants in both the New York session and our session in London.

Read the original article at https://medium.com/@frankdiana/perceptual-computing-and-emotional-ai-a-discussion-3195a2663cca
