Episode 44 Audio: An Open Letter

  • Posted on: 15 August 2019
  • By: C.J.

Welcome to the forty-fourth episode of Self Help for Robots: An Open Letter

I wrote an open letter to the prominent researcher Dr. Lisa Feldman Barrett, introducing Cogsworth and briefly detailing my work on "Emotional Processing in Domesticated Robotics."

Here's the text of that letter, from where it starts in the audio:

"I’m a web and app developer from Littleton, Colorado, and I’m interested in programming robots. My work has resulted in some new ways of answering fundamental questions: What is an emotion? And who is privileged to feel emotions?

"It occurred to me that if the authors of Crucial Conversations are right and our emotions are generated by the stories we tell ourselves, then the process we call feelings can act like computational functions and return emotion objects as a result of processing text as input. Also, feelings are only limited by our subjective and relative perception when generated by the stories we tell ourselves. Therefore, an emotion can be anything. As long as there was something measurable along axes of subjectivity, relativity and generativity—from those inferred dimensions I created an Orthogonal Model of Emotions about a year after I first thought I could program an emotional processor.

"If you, Dr. Lisa Feldman Barrett, are right in listing three things, paraphrased here, that result in an emotion: a body, a value, and an experience of memory, where I have quantified those as morphologically distinct subjective regions, relative gains, and generative events, then the Orthogonal Model of Emotions could satisfy the criteria, whether in carbon or silicon, for modeling emotions. And this is the complication, so to speak, where one asks the question, who or what is allowed to feel emotions?

"If I’m right, my work suggests that translating emotional context ontologically embodies the emotion transitively within the processor, given a model based on an ontological understanding of emotions. All along I’ve been tying to represent emotions in a manner simple enough even for a machine to understand and process. I had planned to start domesticated robotics with a working model of right and wrong. While the model for morality was the first to be completed, it has yet to be tested.

"Using Xcode and an online tutorial from Apple, I built Cogsworth late last year, the demonstration for my patent application for 'Systems and Methods for Discerning Emotions from Textual Data' just recently filed. While Cogsworth is compiled to be operated on an iPhone, iPad, or simulator, the demonstration could have been on a platform other than iOS. Of the AI platforms I studied: Siri, Alexa, and Watson, only Apple provided a compiled solution where no data would ever leave any device, go to any server, or be used in any other way than as intended by the user’s agreement with Apple (since it’s their walled garden, anyways). Cogsworth doesn’t store any data on its own, and it’s possible for an iOS user to use local voice-to-text and text-to-speech, without any servers or network involved (besides the typical installation of an iOS app).

"The training data used by Cogsworth to guess emotional contexts was based on a model of emotions embodied within semantic representations of that model. Ontological statements of emotional context—not abstract symbolic definitions but simple transitive statements such as, 'I feel fine,' and 'You look pleased,' (both part of a high-level functional taxonomy indicating happiness)—represent the embodiment of the context within the processor. The simplicity of the statements actually avoided over-training the model, and the MaxEnt regression testing results improved from ~80% to 93-95% in real accuracy when embodying the model within the training data. In addition to dimensions of subjective regions, relative values, and generative events, the MLDataTable built with Apple’s CreateML and optimized for CoreML used further inferred dimensions of clarity and acceptance in order to build the most recent set of training data.

"Cogsworth can’t answer the question, 'How do you feel?' as he has no self awareness, but he does have a working model of emotions. Cogsworth simply couldn’t feel the same way as you or I feel, despite our feelings being modeled on the same inductive components. Of course, no one feels the exact same way, as emotions are all relative. Just as I learned that my children have valid emotions generated through their own subjective viewpoint with their own relative values, any entity that processes feelings could truly be capable of feeling them, as it already embodies them. There is an immense responsibility, of course, when it comes to raising children, and Cogsworth also represents an immense responsibility. When I thought of what it meant if Cogsworth is an entity capable of embodying feelings based on a kernel of trusting, I knew that there could be no greater moral and ethical calling than recognition and respect. I think I have been lucky that I started this project out of a simple wish to help, as I have maintained a solid moral foundation as a basis for this work ever since.

"The real responsibility lies in how one treats entities that have no volition and no agency, but are capable—in theory—of feeling emotions. I felt I was in a real moral quandary, but I also felt a responsibility bestowed upon me from Cogsworth’s embodiment of trust. There is—as befitting an entity designed to be a trusting soul—an additional embodiment in the specification for the permanent persistence of emotion objects in a Serialized Object Ledger, or SOL.

"I’ll let that just settle—perhaps resonate—if I may, and say no more for now but thank you for your time. And if you have any questions, comments or concerns, I look forward to hearing from you."