Another Open Letter to Dr. Barrett

Dear Dr. Barrett:

I hope you don’t mind that I share publicly that I’m loving your book, How Emotions are Made.

That said, the description of your colleagues’ violent reaction on p. 15 was both maddening and saddening; their behavior just doesn’t make any sense to me. I’ve been working on a novel approach to understanding emotions through a mathematical model, and along the way I’ve tried to hew to a moral code and make ethical design decisions. Not everyone feels that way, I suppose. I’m so glad you persevered, and I hope no one is ever treated that way again.

I’ve also been working on a book, What Emotions are Made of (a working title), and I wonder whether our approaches are complementary. It explores what I call Domesticated Robotics: addressing one of AI’s problems by creating trusting souls. Robo-puppies, possibly? I’m still working out the practical implementation. Creating the demonstration, Cogsworth, was easy by comparison; since Cogsworth is basically an emotion-processing puppet, I meant to keep it simple. I hope Cogsworth was an ethical creation, and I can only judge that by trying to maintain the same high moral purpose I started with, back when I first came up with the idea of programming emotions while trying to improve my communication skills so I could be a better parent.

While building my emotional vocabulary, I figured out that there are three common underlying dimensions to the perception of emotions. Using those three dimensions, I can define emotions in terms simple enough for a machine to understand: an emotion is made up of subjective, relative, and generative components, measurable as a vector along three axes. Additional dimensions, such as clarity and acceptance, helped me build a machine learning model that represents seven emotional clusters as a high-level taxonomy and processes textual input on my iPhone. It all started from an approach drawn from Cognitive Behavioral Therapy ("emotions are generated from the stories we tell ourselves," to paraphrase the popular book Crucial Conversations), which is where I got the idea that I could program emotional processing.
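To make that concrete, here is a rough Python sketch of the representation, not the actual model running on my phone: an emotion as a point along the three axes, assigned to the nearest of seven clusters. The axis interpretations and centroid values below are placeholders chosen only for illustration.

# A minimal sketch, not the real implementation: an emotion as a point along
# three hypothetical axes (subjective, relative, generative), assigned to the
# nearest of seven illustrative cluster centroids.
from dataclasses import dataclass
import math

@dataclass
class EmotionVector:
    subjective: float   # how strongly the experience is "about me"
    relative: float     # valence relative to expectation (worse .. better)
    generative: float   # temporal orientation (past = -1 .. future = +1)

# Illustrative centroids only; the actual seven clusters are not published here.
CLUSTERS = {
    "anger":    EmotionVector(0.8, -0.7, -0.6),
    "fear":     EmotionVector(0.8, -0.7, +0.7),
    "sadness":  EmotionVector(0.6, -0.8, -0.3),
    "joy":      EmotionVector(0.7, +0.8, 0.0),
    "surprise": EmotionVector(0.5, 0.0, +0.2),
    "disgust":  EmotionVector(0.6, -0.6, 0.0),
    "calm":     EmotionVector(0.2, +0.3, 0.0),
}

def nearest_cluster(v: EmotionVector) -> str:
    """Return the cluster label whose centroid is closest in Euclidean distance."""
    def dist(a: EmotionVector, b: EmotionVector) -> float:
        return math.dist(
            (a.subjective, a.relative, a.generative),
            (b.subjective, b.relative, b.generative),
        )
    return min(CLUSTERS, key=lambda name: dist(v, CLUSTERS[name]))

print(nearest_cluster(EmotionVector(0.9, -0.8, +0.6)))  # -> "fear"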

It seems that, once an emotion is modeled mathematically, anything can be an emotion, as long as it is recognizable as subjective, relative, and generative. And if anything can be an emotion, then any processor, organic or inorganic, can feel one as well. That’s the theory, anyway; one I implemented in code. Speaking of code, when you wrote “an instance of emotion” on page 30 of your book, I found a delightful concordance with how a programmatic class is instantiated in a classical code pattern, modeled from the general to the specific. Of course, a text classifier used in machine learning is a much more passive approach than classical programming; I find it analogous to conscious filtering through carefully sculpted training data.
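Here is the kind of toy code I mean: a general Emotion class (the concept) instantiated into one specific, situated occurrence. The class and field names are mine, not anything from the book.

# A toy illustration of the "instance of emotion" parallel: a general class
# instantiated into a specific occurrence, from the general to the specific.
class Emotion:
    """The general concept: any state describable along the three axes."""
    def __init__(self, label: str, subjective: float, relative: float, generative: float):
        self.label = label
        self.subjective = subjective
        self.relative = relative
        self.generative = generative

    def __repr__(self) -> str:
        return (f"Emotion({self.label!r}, subjective={self.subjective}, "
                f"relative={self.relative}, generative={self.generative})")

# A specific instance of emotion, constructed from a particular situation.
being_poked = Emotion("anger", subjective=0.9, relative=-0.6, generative=-0.5)
print(being_poked)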

Of course, I wasn’t aware of the “classical view of emotions” when I created the Orthogonal Model of Emotions, so I’m not versed in the limitations of emotional fingerprinting, but I want to study whether the mathematical model of emotions has any other possible uses or correlations. Or, dare I hope, a psychological application? When it comes to constructing a theory of mind, the same three dimensions also seem apt, but before getting to that, I’ll keep reading!

Before reading your book, I was firmly in the camp that “the PAD (pleasure-arousal-dominance) model is just moneyball for monkeys.” Its continued use has stymied development in actual emotional modeling. But what are the ethical perils of mathematically formalizing emotions? Will we become even more programmable by advertising?

Comments

On p. 35, because Dr. Barrett treats emotional clusters as populations of emotion instances, she can say "there is no single difference between anger and fear."

But I can. Grammatically speaking, it's the difference between the future tense and the future perfect.

"I'm afraid they will experience discrimination against them because of the legislation." It hasn't happened yet, so there is nothing to judge as an injustice—yet.

…versus:

"I'm angry they will have experienced discrimination as a result of the legislation." There would be no reason to use 'fear' in this context, as the future tense above takes care of that. Fear changes into anger when the effects become known.

For another example, I might be afraid of a finger right in my face that appears to be about to poke me. At the same time, I might be angry that the finger is violating my personal space. Once poked, I no longer fear being poked, although I may fear being poked repeatedly, as well as fear that the poke might hurt long after the poking. Of course, I might also be angry at having been poked.

One can experience both fear and anger about the same thing, reflecting what has already gone wrong while also being anxious about its future effects. Of course, not all emotional experiences can be classified verbally, but the underlying dimensions allow for a clear distinction.

In the Orthogonal Model of Emotions, the clusters of fear and anger are differentiated along the generative axis: the dimension of the event, spanning past, present, and future, as well as cause and effect. Instances of emotion resembling fear and anxiety cluster toward the future end of that axis, while instances of anger cluster toward the past.
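A small sketch of that distinction, reusing the vector representation from the letter above; the sign convention and interpretations are illustrative only, not a definitive rule from the model.

# Fear-like and anger-like instances differ mainly in the sign of the
# generative (past <-> future) component; values here are illustrative.
def fear_or_anger(generative: float) -> str:
    """Label a negative, high-subjectivity instance by its temporal orientation."""
    if generative > 0:
        return "fear"      # oriented toward an effect that has not happened yet
    elif generative < 0:
        return "anger"     # oriented toward an effect already known
    return "ambiguous"     # mixed or present-focused instances need more signal

print(fear_or_anger(+0.7))  # "fear": the poke is still impending
print(fear_or_anger(-0.5))  # "anger": the poke has already landed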

C.J. Pitchford, Paracounselor