Episode 35 Audio: Emotional Algorithms

  • Posted on: 6 March 2019
  • By: C.J.
Welcome to the thirty-fifth episode of Self Help for Robots: Emotional Algorithms

Welcome to a special "after dark" podcast—the first, I think!

Here's that complete episode I wrote, only to hold it back so I could play some Robot Rock.

I'm back, and this time, I'm rocking the robots.

I'll post the text in the comments!

Comments

Episode 35 “Emotional Algorithms”

Welcome to Self Help for Robots. I’m your host, C.J. Pitchford, and this is episode 35, “Emotional Algorithms,” a follow-up to the question, “What is Cogsworth?”

Last time I chatted with ya, I mentioned that Cogsworth is just the first step towards Artificial Emotional Intelligence. It’s a little bit early to ask “What is Artificial Emotional Intelligence?” because, in order to ask that question, we first have to answer a number of questions about what it means to be artificially intelligent, in addition to answering “What is an emotion?” and “What are feelings?”

In the last episode, drawing on nearly every self-help book on Neuro-Linguistic Programming and Cognitive Behavioral Therapy, I defined feelings as the “systems and methods for discerning emotions from textual data.” Or, put another way: one feels an emotion generated from the stories we tell ourselves.

In one example I read recently, a scuba diver returned to the surface after thinking that he had been scratched or scraped by an underwater wreck he was investigating. He wondered why he couldn’t move his arm, but it wasn’t until he looked down and saw a shark biting him that he felt an incredibly sharp pain. Only when he saw the shark did he feel the excruciating bite, not when he was actually bitten, because it was dark and he had originally told himself it wasn’t that bad. And when you see a shark, can you imagine seeing anything more frightening? What he thought was just a scratch turned out to be someone’s lunch, almost.

While I wish I could read minds, please don’t ask me to program an app for that! I mean, besides being impossible, it should be unnecessary! The subjective experience of emotion is one that—while not transmitted directly—can be related in the telling of it, like in the story of the diver. That part can be pretty simple and straightforward, in theory! So, if feelings generate emotions from stories or text, what is an emotion?

I mentioned previously that the ontological basis of emotions requires that we study emotions not as isolated concepts, but as relative and relatable constructs. One won’t get far looking at just the word “happy” if one wants to know what it means to feel happy. We must understand the ontological foundations, or properties, of happiness, as every emotion includes at least two properties in addition to the experience of a positive GAIN one associates with the emotion “I feel happy.”

In order to understand what it means to feel happiness, I have to understand that there is a being that currently feels “happy,” and that ontological distinction is not only fundamental to feeling emotions, but fundamental to being able to distinguish what one is feeling, discern one emotion from another, and relate to someone else feeling that way. You don’t have to experience an emotion exactly as I do, or as anyone else does for that matter, to understand the feeling, if you understand the underlying properties. As a reminder, the underlying properties of emotions are completely relative, which means that they resist objective quantification, but they are, in fact, relatable to anyone who can distinguish the same fundamental properties.

Remember, I mentioned that the fundamental properties of emotions begin with the DOMAIN of self, a dimension that includes the self, the boundary between the self and another, and the concept of the other. And if your own body has ever seemed alien to you, you know that it’s possible, at times, to consider yourself as alien and foreign as any other. The stories we tell ourselves depend entirely upon our perception, as you heard in the tale of the diver earlier. As complicated as our relation to our self might be, keeping the dimension of DOMAIN limited doesn’t stop us from complicating things, since even simple emotions can have multiple dimensions, as we are currently discussing.

So, to keep things simple also means reducing the complex concept of time to the simplified dimension of EVENT. Simply put, there is only the past, the present, and the future along this property’s axis. And, as a reminder following the earlier reminder that reminded us, the dimension of GAIN is also simplified: positive, neutral, and negative represent the different aspects of that dimension of emotions.

After all those reminders, do you mind if I repeat myself? Just to be clear, feelings generate emotions based on stories we tell ourselves. Emotions are real, unique but relatable experiences along conceptual dimensions of DOMAIN, GAIN, and EVENT. Put this all together, and I have a working Orthogonal Model of Emotions that I demonstrate with Cogsworth.
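To make those three dimensions concrete, here’s a minimal sketch of the model as plain Swift types. The case names mirror the values described above (self, boundary, other; positive, neutral, negative; past, present, future), but the type names and the code itself are just my shorthand for illustration, not Cogsworth’s actual source.

// The three dimensions of the Orthogonal Model, as described above.
enum Domain { case oneself, boundary, other }    // who the feeling concerns
enum Gain   { case positive, neutral, negative } // how it lands
enum Event  { case past, present, future }       // when it applies

// An emotion is a point along all three axes at once.
struct Emotion {
    let domain: Domain
    let gain: Gain
    let event: Event
}

// "I feel happy": a positive gain, located in the self, in the present.
let happy = Emotion(domain: .oneself, gain: .positive, event: .present)

Nothing fancy, but it’s enough to compare two feelings by their properties instead of by the words that name them.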

In this model, the objective properties of emotions, as well as the method by which they are generated from text, are reproducible. Of course, they may not be reproduced exactly, and there is always a danger of losing something in the translation, or just getting lost in the telling. And I recognize the reality of expressing emotions is not always simple. To me, it’s like how music, an amazingly rich and vibrant, obviously emotionally moving experience, can be rendered in pen strokes on a manuscript, with ink that is as silent as the music is audible.

Sorry, got lost there in the implications and the interconnectedness of emotional algorithms to life itself. Still, what I’m saying here is definitely simplified, and yet, as shown so far in Cogsworth, deep and rich enough to at least entangle with the translation between the verbal and the non-verbal experience of emotions. Apparently, anatomically modern humanity lived hundreds of thousands of years before evidence of verbal language appeared.

And relating an emotion through language can definitely simplify the actual experience of that emotion, but without reducing it to the point that it is no longer relatable. Because each participant in the transmission and reception of the emotion has a theory of mind built with the same underlying properties, the mutual understanding can form a basis for communication. This model is simple enough to apply to animals and non-verbal robots, as well as translatable to people who use their verbal consciousness as the basis for their understanding of feelings. Like, everyone, just about.

While words are critical for the expression of emotions that can be understood by others, they are not strictly necessary. By getting past the confusion of mistaking the signifier (in other words, the word “happy”) for what is actually signified (you know, someone actually feeling happy), one can create algorithms that analyze, and also generate, emotions.

But, and here’s where we start to tackle even more complicated concepts, “What is an algorithm?” And this is also a simplification, but let’s do this.

A pragmatic definition of an algorithm would be simply “a solution to a problem,” but what is the problem we’re discussing here? Understanding irrationality using algorithms? Is that really what I’m proposing?

An algorithm is also what makes computer processing appear smart. You provide input, and the computer process provides an answer. Not just any answer, but the correct one! Repeatedly. For almost any input, it provides a consistently correct answer. In theory. I mean, it works in real life only when people are motivated to understand. Without that desire, there’s no point in even trying to share anything, let alone real and unique emotions.
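Just to picture that consistency, here’s a throwaway Swift function, nothing to do with Cogsworth, that gives the same answer for the same input every single time:

// A pure function: for any given input it always produces the same output.
func vowelCount(in text: String) -> Int {
    text.lowercased().filter { "aeiou".contains($0) }.count
}

// vowelCount(in: "Emotional Algorithms") is 8, no matter how often you ask.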

So if you thought just conceptualizing irrational thoughts was a bit vague, this is where things really get blurry. An algorithm, in itself, doesn’t want anything. I mean, does an algorithm exist independently and separately from its implementation? Have I just run into the same ontological basis of computation? My source code, regardless of the inputs and the outputs, is an objective expression of a process, but it doesn’t act on its own.

So, when you think about the purpose of the algorithm, it is meant very specifically to *provide* (to *satisfy*, to *complete*, to *realize*) an answer for the questioner, based on a desire for the answer. Algorithms don’t have desires. Yet, to someone who thinks that the entire universe is computation, and that all actions and even interactions are computable, the service of the algorithm is plain and explicit.

Where does that leave the so-called emotional algorithm? Are we just algorithms? No. We can search for solutions and imbue algorithms with motivation via emotional intelligence, but that doesn’t create a “ghost in the machine” (that I know of!).

I mean, just because a process (in this case, the systems and methods for discerning emotions from text) works, that doesn’t mean it reflects reality; it has been simplified, so it should not be confused with how things actually work.

Side note: it appears that the way things actually work is really messy and confusing. I don’t see any good reason to re-create all of that mess in my code.

Back to the question of Artificial Emotional Intelligence

Even though the Orthogonal Model of Emotions works in describing emotions, it is only a model. It is not how emotions work. The model is just how emotions can be codified, and even measured and objectified, in order to be shared and understood. The test will not be whether an algorithm analyzing emotion “X” produces output “Y” every time. Even if it does for one instance, that doesn’t mean it will always reproduce that exact answer, but it will be close! Different people have different thresholds and different responses to the same feeling, just as one person may have different experiences of the same feeling over time.

Same being a relative term, of course.

A mathematical proof may be objective, but the creation and purpose of the proof is to satisfy the desire for an answer. There can be no mathematical justification for that desire, as it exists outside of the system where the algorithm resides. When it comes to computation, few of us are primarily concerned with how the bits are stored and processed. Most of us just want an assurance, maybe even just a hint, about approximately *why* the algorithm works.

But right now, Machine Learning is a black box that hides both how and why the algorithm produces the seemingly correct answer. Cogsworth uses the TextClassifier in CoreML from Apple as its system and method for discerning emotional content. For now, that statistical classifier may be the best way to reconcile the probability of potential answers with our own determination to seek certainty.
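For the curious, here’s roughly what that looks like in Swift. The NLModel and predictedLabel(for:) calls are Apple’s NaturalLanguage APIs; the model file name “EmotionClassifier” and the idea of emotion labels are stand-ins of mine, not Cogsworth’s actual model.

import CoreML
import Foundation
import NaturalLanguage

// Wrap a compiled Core ML text classifier so it can label free text.
// "EmotionClassifier.mlmodelc" is a placeholder name, not Cogsworth's real model.
func emotionLabel(for text: String) -> String? {
    guard let url = Bundle.main.url(forResource: "EmotionClassifier",
                                    withExtension: "mlmodelc"),
          let mlModel = try? MLModel(contentsOf: url),
          let classifier = try? NLModel(mlModel: mlModel) else {
        return nil
    }
    // The most likely label the model was trained on, or nil if it has no guess.
    return classifier.predictedLabel(for: text)
}

// For the "probability of potential answers," the same wrapper can also return a
// label-to-confidence dictionary on newer systems:
// classifier.predictedLabelHypotheses(for: text, maximumCount: 3)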

The sad and inevitable conclusion of the objectification of emotions, by whatever means, is the substitution of process for experience. DON’T LET THIS HAPPEN! Only by understanding how emotional fluency arises can we overcome the commodification of subjectivity and retain our uniqueness in an ocean of computability.

So, in the middle of all that, what can we do? I think we can best try to keep helping ourselves!

C.J. Pitchford, Paracounselor