LinkedIn Article on Self-Help for Robots

  • Posted on: 15 May 2017
  • By: C.J.
[Image: a three-axis model of emotions denoting gain, domain, and event.]

If the above image suggests the infinite variety of emotional experience and understanding, then you’ve come to the right place. Welcome to my model. I’ve always loved playing with models: imagining them, building them, analyzing them. They are so nice and simple!

Why is that attractive? Well, if you’ve experienced any emotions, you might have found that they’re pretty intense. Don’t bother trying to explain. Later on, we’ll see why no two entities can ever feel the same feeling (without a whole lotta other stuff going on), because emotions are so. Freaking. Complex.

But models? Simple!

Models are simpler than reality. Just try writing a list of all the emotions you know. I mean, don’t. You’ll be writing forever, inventing new swears as emotions after a few decades, I imagine. Could any working list of emotions be somehow less than infinite? There are plenty of proponents for, say, nine generalized emotions. Or six. Or not. The varieties of feelings themselves would be merely one infinity amongst the myriad infinite possible interpretations of those emotions, so it’s no wonder emotions are so hard to understand.

But that doesn’t excuse not trying, of course. Many have tried, and just the other day, I tried to program Alexa to love. It was the simplest app I could build from a free online tutorial; I wanted to wish my stepmom a happy Mother’s Day in a way she would probably never think to try:

“Alexa, ask C.J. Pitchford to tell me a Mother’s Day Message…”
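For the curious, here is roughly what such a tutorial-grade skill looks like under the hood. This is a minimal sketch, assuming the standard Alexa Skills Kit request/response JSON and an AWS Lambda backend; the intent name MothersDayIntent and the message text are my own placeholders, not the tutorial’s.

```python
# A minimal Alexa skill handler in the style of the free tutorials:
# a raw AWS Lambda function speaking the Alexa Skills Kit JSON format.
# The intent name "MothersDayIntent" and the message are placeholders.

def build_speech_response(text, end_session=True):
    """Wrap plain text in the Alexa Skills Kit response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    """Entry point Alexa invokes for every request to the skill."""
    request = event["request"]

    if request["type"] == "LaunchRequest":
        return build_speech_response(
            "Welcome. Ask me for a Mother's Day message.", end_session=False
        )

    if request["type"] == "IntentRequest":
        if request["intent"]["name"] == "MothersDayIntent":
            return build_speech_response("Happy Mother's Day! I love you.")

    # Anything else gets a polite fallback.
    return build_speech_response("Sorry, I didn't catch that.")
```

Notice there are no feelings anywhere in there, just a lookup from request type to canned text. Which is rather the point of what follows.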

And if you think that I just confused behavior for emotions, well, that wouldn’t be the first time, for me or anyone, for that matter. If my research is correct, the PAD (Pleasure, Arousal, Dominance) three-axis model was first proposed as a model of behavior in 1974, and has since been used by AI researchers to model emotions, although not with Alexa, to my knowledge.

Yet even the name itself, PAD, suggests behavioral anthropology or biology. Well, Alexa was indeed behaving according to my programming, limited in that instance to the emotionally expressive qualities of the text alone, but Alexa did not feel love.

Or did she? No, I’m not saying that she did. In fact, given the predominance of the PAD Model in AI development, no entity built on that construct could ever convince anyone that it had a feeling. At best, the PAD Model can modulate pre-programmed responses, although no publicly available algorithm I’ve seen does more than mimic a handful of distinct emotions.
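To make that concrete, here is a minimal sketch of how the PAD Model typically gets used in code: emotions become points in a three-axis space, and “recognizing” one is just a nearest-neighbor lookup. I’m assuming axes normalized to [-1, 1], and the coordinates below are illustrative guesses of mine, not Mehrabian’s published values.

```python
from dataclasses import dataclass
from math import dist

@dataclass(frozen=True)
class PADState:
    """A point in the three-axis PAD space, each axis in [-1.0, 1.0]."""
    pleasure: float
    arousal: float
    dominance: float

# Illustrative coordinates only -- not published PAD values.
LABELS = {
    "joy":     PADState( 0.8,  0.5,  0.4),
    "anger":   PADState(-0.5,  0.6,  0.3),
    "fear":    PADState(-0.6,  0.7, -0.6),
    "sadness": PADState(-0.6, -0.4, -0.3),
    "calm":    PADState( 0.4, -0.6,  0.2),
}

def nearest_label(state: PADState) -> str:
    """Map a PAD point to the closest named emotion by Euclidean distance."""
    point = (state.pleasure, state.arousal, state.dominance)
    return min(
        LABELS,
        key=lambda name: dist(point, (LABELS[name].pleasure,
                                      LABELS[name].arousal,
                                      LABELS[name].dominance)),
    )

# A mildly pleasant, low-arousal state lands nearest to "calm".
print(nearest_label(PADState(0.3, -0.4, 0.1)))
```

Useful for modulating a response, sure. But nothing in there feels anything; it measures distances.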

That reminds me of a group, back when I was in grad school, working on mimicking a handful of distinct parameters. That is, composers-slash-programmers were working on reproducing musical inflections to create stylistic uniformity within the conventions of different popular genres, and thus Band-in-a-Box, or its equivalent, was born. As a development path it’s not a bad model to follow, but there are limitations. Emotional understanding, and thus the actual application or generation of emotions, can’t be based on rote mimicry, and it doesn’t need to be. While rote mimicry may be adequate in some musical contexts, a better model than a behavioral one is clearly needed to emote.
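If you’re wondering what “mimicking a handful of distinct parameters” amounts to in practice, here is a toy sketch. The genres, parameter names, and values are invented for illustration; this is not Band-in-a-Box’s actual machinery.

```python
# A toy version of parameter-lookup style mimicry: each genre is just a
# bundle of numeric inflections applied to plain (beat, pitch) events.
# Genres and values are invented, not Band-in-a-Box internals.

STYLE_PARAMS = {
    "swing": {"swing_ratio": 0.66, "accent_velocity": 96,  "base_velocity": 72},
    "rock":  {"swing_ratio": 0.50, "accent_velocity": 110, "base_velocity": 90},
}

def inflect(notes, genre):
    """Apply a genre's timing and accent parameters to (beat, pitch) events."""
    params = STYLE_PARAMS[genre]
    performed = []
    for beat, pitch in notes:
        # Delay offbeats by the swing ratio; accent notes on the beat.
        offbeat = (beat % 1.0) != 0.0
        onset = int(beat) + params["swing_ratio"] if offbeat else beat
        velocity = params["base_velocity"] if offbeat else params["accent_velocity"]
        performed.append((onset, pitch, velocity))
    return performed

melody = [(0.0, 60), (0.5, 62), (1.0, 64), (1.5, 65)]
print(inflect(melody, "swing"))  # same notes, swung timing, accented downbeats
```

Swap the genre string and the same notes come out “rock” instead. That is mimicry by lookup table, and it goes exactly as deep as the table does.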

I think I’ve expanded upon some common models for emotion: the Plutchik color wheel, for one, and the PAD Model, for another.

Soon, you'll be able to read more!