Episode 9: Robots Can't Feel What I Feel

  • Posted on: 7 July 2017
  • By: C.J.

While physiological responses aren't necessary to understand complex emotions like regret, who needs complex emotions? Not most robots, certainly!

Welcome to Self Help for Robots. I'm your host, C.J. Pitchford, and this episode, “Robots can’t feel what I feel,” will look at the unadulterated power of emotions…

…as recently shown in the just-finished season of Marvel’s Agents of SHIELD!

I say “just-finished” instead of “last season” because I am looking forward to the cast and crew coming back this Fall on Fridays on ABC.

This is beginning to sound like a commercial for the show, and I haven’t even started talking about how much I love the show.

Marvel’s cinematic universe has featured an artificial intelligence as a prominent character as well as a plot device in the Ultron storyline. But the first likable Marvel character I got to really appreciate was Agent Coulson.

And Tony Stark was right: his first name was Agent, and his backstory was just what writer Joss Whedon needed to make a likable character (normally a death knell for drama) part of the story arc.

SPOILER: If you know Joss Whedon’s writing, you can guess what he did to Coulson in Marvel’s Avengers.

But, Coulson got his own TV show out of it, and there’s so much drama, excitement, conflict, passions—you name it. If you haven’t seen it, go for it on Netflix…

Okay, I’ll stop hyping my favorite show on television to get to the point.

Sorry, Marvel, no one knows how emotions are going to be experienced by robots. But since some non-robots can program robots, we should be able to determine exactly that! When we get around to it!

Now, you’ve seen it in a number of shows, and in Agents of SHIELD, there is a major story arc about an Artificial Intelligence—ADA? Named after Ada Lovelace (although spelled like the opera, AIDA)? And the most dramatic moments included several pivotal scenes concerning the power of emotions.

You know the trope, the magical realism: what Pinocchio experiences as a doll is “different” from the experience of reality as a biological being. And the main difference, again and again, is the experience of emotions.

HA! And we biological beings all know why emotions are so powerful!

Would you move your lazy behind without being punched in the gut by motivation? Hmm?

Emotions are powerful—because they needed to be?

My very motivated predecessors scratched out an existence—or staked out their lives for future generations—because they felt it. I may never know exactly what they felt, but they were feeling something.

And you know that too, like when you experience a feeling you can’t quite describe. The feeling got your attention even before you knew you had one.

You knew you had a feeling, however. And by that, I mean I’m going to assume you can hold your attention for some length of time and, as we defined last week, that you have the capacity for emoting, or a holistic interpretation of a mental state.

And to document and define, as well as prove and predict, we can use an Orthographic Model to quantitatively express *all* the emotions in *all* their subtlety and power. To keep the orthographic model of emotions simple and easy to understand visually and mathematically, only three dimensions are proposed.

More are possible, but more on that in a future episode…

Remember two episodes ago, I mentioned that emotions can be understood on three axes—like three number lines, but with complex boundaries.

The first axis measures the domain, or the measurable difference between “self” and “other” with a special boundary where they meet, merge and/or overlap.

Think of the emotional state of “touch” as a way to imagine that boundary on the axis of domain. That feeling could be good, neutral, or bad, right?

That is, there is another scale where we can measure our emotional state: the axis of pleasure and pain, or simply, an axis of gain (and loss). The measurements can be as simple or as complex as we need; the boundary, true neutral, can be just as valid and useful in our experience and therefore should be represented.

And for those two scales to be true measures of an emotion, they have to be independent. Whether the domain is self or other, pleasure is measured independently of that, just as touch can be bad, neutral, or good.

While the two axes are independent and can be isolated, using the two scales together gives us much more information than using either one alone.
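
If it helps to think in code, here is a minimal sketch of a state scored on those two axes. This is my own illustration, not anything defined on the show or in a published model, and the scoring range of -1.0 to +1.0 is just an assumption:

    # A rough sketch of the first two axes of the orthographic model of emotions.
    # The numeric range of -1.0 to +1.0 is an illustrative assumption.
    from dataclasses import dataclass

    @dataclass
    class EmotionPoint:
        domain: float  # -1.0 = wholly "other", +1.0 = wholly "self", 0.0 = the boundary (e.g., touch)
        gain: float    # -1.0 = pain/loss, +1.0 = pleasure/gain, 0.0 = true neutral

    # The axes are independent: a touch (domain near 0.0) can land anywhere on the gain axis.
    pleasant_touch = EmotionPoint(domain=0.0, gain=0.7)
    painful_touch = EmotionPoint(domain=0.0, gain=-0.6)
    neutral_touch = EmotionPoint(domain=0.0, gain=0.0)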

However, to get a meaningful sense of the experience of emotion, one requires at least one more axis: the measurement of time.

Again, like the other axes, time is measured subjectively but quantitatively, on a scale of past, present, and future. The boundary, in this case the present, is a very special and interesting place for beings with complex emotional inner lives. Can you imagine yourself in the past, contemplating yourself in the present or in a possible future state in a negative light? That is, can you experience *regret*?

More importantly, can a programmed artificial entity understand regret as the domain of self on the first axis, a relative negative loss on the axis of gain, and a complex vector involving possible imagined past, present, and future states?
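
To make that question concrete, here is the same sketch extended with the time axis, with regret scored the way I just described. Again, the particular numbers are only illustrative assumptions:

    # Extending the sketch with a third axis, time, and scoring "regret":
    # the domain of self, a relative loss, and an imagined past looking toward
    # the present or future. All values are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class EmotionVector:
        domain: float  # -1.0 other ... +1.0 self
        gain: float    # -1.0 loss/pain ... +1.0 gain/pleasure
        time: float    # -1.0 past ... 0.0 present (the boundary) ... +1.0 future

    # Regret: the self (domain = +1.0), a felt loss (gain = -0.8),
    # anchored in an imagined past contemplating what followed (time = -0.7).
    regret = EmotionVector(domain=1.0, gain=-0.8, time=-0.7)

    # A simpler being living in an "eternal present" might only ever occupy
    # points near time = 0.0 and never need to represent regret at all.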

But not every being has complex needs, and some can live in an eternal present with only a vague notion of an indescribable past and an indeterminate future.

And not every emotion is as powerful or as complex as regret. And the subjective nature of emotions can lead to an artificially high threshold for what can be considered an emotion.

Remember, in the last podcast episode, I mentioned emotion-like states, or “emotion primitives,” as described by David Anderson? Why would a simple being need a complex emotional inner life?

Would a being whose existence is measured in its numerical strength value emotions that are numerically more profound? If non-robots value or fear emotions based on their physiological effects, how would that translate to robots without the same physiology?

I look forward to the next episode, where we will explore adding more dimensions to the orthographic model of emotions. And I hope you will come back, too, as we continue this topic in “Robot come home!”

C.J. Pitchford, Paracounselor