No Trust, No Robots

[Photo: East Reservoir, Lakewood, CO, at dawn, showing reflected clouds]

The title of this little blog entry might sound familiar, as I was recently introduced to the idea of "No Justice, No Robots." That, itself, is a play on the protest chant "No Justice, No Peace," and both come together in one sad episode at Google.

Much has already been written about the firing of Timnit Gebru from Google. Recently, I found this at Ars Technica: https://arstechnica.com/tech-policy/2020/12/google-embroiled-in-row-over... Diversity and justice are very big issues, and Google's actions in regard to this whole episode are indefensible.

Google's response reads like "We didn't fire you, you quit…" which sounds like resistance when spoken by an individual (with the roles inverted, of course: "You can't fire me, I quit!"). Coming from HR, though, it sounds merely petulant and defensive, especially when it cites the employee resigning over "unspecified demands," which can't really be demands if they're not specified, now, can they? Internal consistency may not be a PR person's forte; in fact, it could be a hindrance in that career. But, come on, Google! (I'm especially glad I get to use the go-to phrase of President-Elect "Handsome Joe" Biden.)

I've argued "garbage in, garbage out" as it applies to training ML models on data scraped from the Internet of Hate, or, as you might call it informally, the internet. And I've argued that bias is a human condition that must be understood before we can work with it (or its effects) in code.

When I trained an ML model using the Orthogonal Model of Emotions, I quickly learned that ontological cognition of emotions is predicated upon transitive concepts: you are what you feel, in other words. And while the web has been hugely instrumental in capturing informal written language for the first time since the graffiti of Pompeii was "archived" by Mt. Vesuvius, it doesn't represent human cognition. Nor can the internet ever include the translation of a narrative into subjective empathy, since that is an internal process. Emotions are real, I say, despite their transitive and changing nature. They exist at right angles to reality, having neither width, height, depth, nor any other classical or SI dimension, so feelings can only be measured when they are processed.
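To make that geometric picture concrete, here is a minimal sketch of the "right angles to reality" intuition. This is my own toy illustration, not code from the Orthogonal Model itself: the axis names and numbers are invented. The idea is just that the "reality" axes and the "emotion" axes form orthogonal subspaces of one state vector, so the emotional component registers a value only when it is explicitly projected out, i.e., processed.

```python
import numpy as np

# Hypothetical 4-D state: two "reality" axes (measurable in SI units)
# and two "emotion" axes, orthogonal to them by construction.
REALITY = [0, 1]   # e.g., width, height
EMOTION = [2, 3]   # e.g., valence, arousal (invented labels)

state = np.array([1.7, 0.4, 0.8, -0.3])  # made-up mixed state

def process(state: np.ndarray) -> np.ndarray:
    """'Processing' = projecting the state onto the emotion subspace.

    Until this projection happens, the emotional component contributes
    nothing to any measurement along the reality axes.
    """
    return state[EMOTION]

# A measurement along the reality axes is blind to the emotion axes...
print(state[REALITY])   # [1.7 0.4]
# ...the feeling only shows a value once it is processed.
print(process(state))   # [ 0.8 -0.3]
```

The point of the toy geometry is simply that orthogonal components never show up in each other's measurements, which is why a feeling with no width, height, or depth still has a perfectly real value along its own axes.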

Just as a reminder, the first rule of processing emotions is: "Use your words!" The rule works really well for training an ML model to classify emotional processing, a.k.a. empathy. I mean, that's what I've demonstrated in Cogsworth. There is a way forward, but we will only succeed with a profound re-conceptualization of what it means to be a thinking and feeling person.
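Cogsworth's actual pipeline isn't shown in this post, but the "use your words" rule maps naturally onto ordinary text classification. Here is a minimal sketch, assuming scikit-learn; the training sentences and the two labels ("named" vs. "acted-out" feelings) are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples: text where the speaker names a feeling ("uses
# their words") versus text that acts a feeling out without naming it.
texts = [
    "I feel hurt that my idea was dismissed in the meeting.",
    "I'm frustrated, and I need a minute before we continue.",
    "Honestly, I feel anxious about the deadline.",
    "Whatever. Do what you want.",
    "Fine. FINE. Forget I said anything.",
    "Must be nice to have your emails actually answered.",
]
labels = ["named", "named", "named", "acted-out", "acted-out", "acted-out"]

# TF-IDF features plus logistic regression: about the simplest text
# classifier that could operationalize the "use your words" rule.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["I feel overwhelmed and I want to talk about it."]))
```

Whether the right label set is "named vs. acted-out," discrete emotions, or coordinates in the orthogonal emotion space is exactly the modeling question at stake here; the sketch only shows the words-to-label plumbing.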