Philosophy of Emotions

[Graphic: analysis of "I am fine" using the OME]

I'm glad that the Stanford Encyclopedia of Philosophy has been updated. The previous entry for Emotions has been justly deprecated: it was so wrong ("How wrong was it?") that it should have been in the StanDERP Encyclopedia.

Before getting to the latest entry, let's look at a section that closes the introduction of the old article:

"It is one thing, however, to recognize the need for a theory of mind that finds a place for the unique role of emotions, and quite another to construct one. Emotions vary so much in a number of dimensions—transparency, intensity, behavioral expression, object-directedness, and susceptibility to rational assessment—as to cast doubt on the assumption that they have anything in common."

The commonalities in emotions are in fact documented in my Orthogonal Model of Emotions (OME), and I thought they were plain for all to see. Using abductive reasoning (educated guessing at hypotheses, basically), and treating subjectivity not as mere phenomenological ephemera but as an ontological foundation, the OME holds that any mental state can be an emotion if it can be understood subjectively, relatively, and generatively. Emotions may feel like processes, but they are mental states, or properties dispositive of those states. I try to avoid anthropomorphizing my feelings and thoughts, because those processes are subjectively mine, not wired separately from my thinking. Of course, I will sometimes say "my thoughts wandered," as if emotions and thoughts had their own powers and methods, but that is just a logical fallacy. An appealing fallacy, nonetheless.

The last revision to the old Stanford Encyclopedia of Philosophy entry was in 2013, so it missed the development of the Theory of Constructed Emotion by Dr. Lisa Feldman Barrett. The latest article includes that work and reflects other developments as well. It also no longer claims that we cannot understand irrational processes via rational cognition. And it is one step closer to acknowledging a non-verbal Theory of Mind, like my own Constructed Theory of Mind represented in the OME.

That's where my gladness ends, and my disappointment starts.

Apparently, I won't be adding to this knowledge via IBM Research: my application to work there is no longer under consideration. I don't know whether I ever publicized that application, but I have talked about it, and now that it is no longer even under consideration, it's no secret. Coincidentally, the rejection arrived on the same day that I announced my first IBM-sanctioned bot. The meeting where I was scheduled to demonstrate the app was then moved to a date when one participant will be away on vacation, so I'm guessing there will be no ticker-tape parade in recognition. Sigh…

I should have guessed that outcome, given what happened with an award for the highest gains in a particular marketing metric: it was never formally announced. It may have been mentioned, and joked about, in a team meeting, but there was no announcement, not even an e-mail, at least none that I know of. I learned that my team was "Most Improved" through a completely unrelated message on Slack. Given the overall disinterest, I can only conclude that IBM in general, and my chain of command in particular, doesn't care to know how things improved. As you can imagine, being named in an award that was never officially recognized is disorienting, to say the least.

As a mere unacknowledged contractor, I wrote my employer/sponsor/benefactor to ask how I can take my interest in robotics to the next level. I need the advice, because I don't know what else to do. I fear IBM just hasn't got what it takes: basic curiosity and simple accountability.

I mean: fer cryin' out loud, I just discovered this gem in an IBM blog post (https://www.ibm.com/blogs/watson/2019/12/whats-next-for-nlp/) from the NLC/NLU Product Manager:

  1. Sentiment & emotion detection accuracy: NLP accuracy for emotion and sentiment detection within a piece of text still needs to be improved. Oftentimes sentiment models will be incorrect when it comes to understanding things like tweets or op-eds. Further, the complexity of emotions expressed in a piece of text – like joy, anger, sadness, hesitation, confidence, and indifference – can be extremely hard to identify with a high degree of confidence. Several experts suggest using a custom NLP model to help detect these nuances.
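The "custom NLP model" advice in that quote is vague, and even a toy illustration shows why the nuance is hard. Here is a minimal sketch of the simplest possible approach, a bag-of-words lexicon scorer; the emotion labels, word lists, and example sentence are all my own inventions (this is not IBM's model or API), and the point is precisely that such a scorer cannot handle negation, sarcasm, or hedging:

```python
# Toy lexicon-based emotion scorer (illustrative only; not IBM's model).
# The emotion labels and word lists below are invented for this sketch.
from collections import Counter

LEXICON = {
    "joy": {"glad", "great", "love", "wonderful"},
    "anger": {"hate", "terrible", "furious", "awful"},
    "hesitation": {"maybe", "perhaps", "guess", "might"},
}

def score_emotions(text: str) -> Counter:
    """Count lexicon hits per emotion; crude and easily fooled."""
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    counts = Counter()
    for emotion, words in LEXICON.items():
        counts[emotion] = sum(1 for t in tokens if t in words)
    return counts

# The scorer misreads sarcasm: "I just love waiting on hold" scores as joy.
print(score_emotions("I just love waiting on hold"))
```

A sentence like "I am fine" scores zero on every axis here, which is exactly the kind of utterance whose subjective, relative reading the OME is meant to capture.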

Hello? Anyone? Bueller?

I feared I had become a caricature of a broken record, asking again and again about conferences and presentations on the topic of Artificial General Intelligence. I even gave a demonstration (more of a puppet show, really), but in retrospect it seems I haven't been taken seriously, given that I have no credentials or publications in AI or computational cognition, nor, at least, large volumes of customers. Because IBM is a global organization, I've had the chance to reach out to people all over the world. Despite that diversity, all of my inquiries have been met with silence. Well, not entire silence. Here's my favorite reaction to the abstract and proposal I posted on Slack at IBM requesting feedback:

“I might need a gentler introduction - even though my PhD was joint Pyschology [sic—yep, that’s how it was spelled], Cognitive Science, Computer Science, and AI…”

I tried following up, but was (as the kids say these days) ghosted…

I've asked everyone at IBM I could find why emotions related to "trust" are missing from their Sentiment/Tone Analysis. I didn't get any information that wasn't already public, of course. Recently, IBM has produced more documentation on their methodology, but it is foundationally insecure without better science. (The cited reference is Plutchik 2001, but regular readers of this blog, or incidental readers of this article who followed the link to the Stanford Encyclopedia of Philosophy entry on emotions, know how research on emotions has progressed since then.) Since "confidence" is listed in the quote above as an emotion IBM can't identify, and I get no answer to my inquiries regarding "trust," it appears that the people at IBM know they lack this understanding but just don't care. And apparently they either don't read my results and claims or don't believe them enough to engage with me, to try to understand them, or even to directly question or discredit them. Of course, they don't need to do anything directly. Apparently, I can be discredited just by being ignored.
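For concreteness, the gap I keep asking about can be stated as a simple set difference. Plutchik (2001) names eight primary emotions; the five-emotion list I give for Watson NLU below reflects its public documentation as I last read it, so treat that list as an assumption rather than gospel:

```python
# Plutchik's eight primary emotions (Plutchik 2001).
PLUTCHIK_PRIMARIES = {
    "joy", "trust", "fear", "surprise",
    "sadness", "disgust", "anger", "anticipation",
}

# Emotions Watson NLU scored at the time of writing
# (assumption, based on my reading of its documentation).
WATSON_NLU_EMOTIONS = {"joy", "fear", "sadness", "disgust", "anger"}

# The primaries the service never scores -- including "trust".
missing = sorted(PLUTCHIK_PRIMARIES - WATSON_NLU_EMOTIONS)
print(missing)  # ['anticipation', 'surprise', 'trust']
```

So by IBM's own cited taxonomy, three of the eight primaries, "trust" among them, are simply absent from the analysis.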

Too bad for them. Because when a door closes, opportunity knocks louder! Or, something.