Ravens and Theory of Mind

HUGIN III, an Evo II, is one drone; MUNIN II, a DJI Mavic 2 Zoom, is another.

Hugin and Munin make appearances in two articles as two very different entities. Perhaps you already know the names come from Norse mythology. Myths try to teach us what science now confirms.

Almost fifteen years ago as of this writing, David Berreby wrote about a study of two ravens that clearly demonstrated a "theory of mind" in our avian cousins. It's worth revisiting: https://www.davidberreby.com/attachments/Ravensrobotsand.pdf

But today, I came across the author again because of a recent article in The New York Times (paywalled), entitled "Can We Make Our Robots Less Biased Than Us?": https://www.nytimes.com/2020/11/22/science/artificial-intelligence-robot...

The answer, of course, is no.

However, I'm involved in making "my robots" actively and consciously biased, so I'm not what anyone might call an "objective observer" here! But even before I read the article, I knew that the answer to systemic racism and bias is to work on myself (and those whom I may influence) to be "Less Biased." That is, my reasoning is deficient without correction; obversely, my privilege is invisible without being checked. I can put a check on my own racist, sexist, and other prejudices because I have to, but also because I want to.

Hugin and Munin are important concepts in my work as I build a robot psychology upon a core based on an Orthogonal Model of Emotions, where Munin is feeling and Hugin is thought (from the Old Norse munr and hugr), as in the original verse below (a rough code sketch of the idea follows the verse):

Huginn ok Muninn
fljúga hverjan dag
Jörmungrund yfir;
óumk ek of Hugin,
at hann aftr né komi-t,
þó sjámk meir of Munin.

Hugin and Munin
Fly every day
Over all the world;
I worry for Hugin
That he might not return,
But I worry more for Munin.
—Grímnismál
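
To make the orthogonality concrete, here is a minimal sketch of how the two axes might be represented in code. The class names, the valence/arousal and belief/confidence fields, and the toy decision rule are illustrative assumptions only, not the model's actual implementation.

```python
# Illustrative sketch only: the names and the toy decision rule are assumptions,
# not the actual Orthogonal Model of Emotions implementation.
from dataclasses import dataclass


@dataclass
class MuninState:
    """Munin: feeling (Old Norse munr), modeled here as a simple affective axis."""
    valence: float  # -1.0 (negative) .. +1.0 (positive)
    arousal: float  # 0.0 (calm) .. 1.0 (agitated)


@dataclass
class HuginState:
    """Hugin: thought (Old Norse hugr), modeled here as a belief with confidence."""
    belief: str
    confidence: float  # 0.0 .. 1.0


@dataclass
class RavenCore:
    """Keeps the two axes orthogonal: feeling never overwrites thought,
    but both inform any decision the core makes."""
    munin: MuninState
    hugin: HuginState

    def decide(self) -> str:
        # Toy rule: act on the belief only when thought is confident
        # and feeling is not strongly negative.
        if self.hugin.confidence > 0.7 and self.munin.valence > -0.5:
            return f"act on: {self.hugin.belief}"
        return "defer and keep observing"


core = RavenCore(
    munin=MuninState(valence=0.2, arousal=0.4),
    hugin=HuginState(belief="the operator intends a safe landing", confidence=0.9),
)
print(core.decide())  # act on: the operator intends a safe landing
```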

The article by Mr. Berreby opens with a description of the death of a Black man at the hands of police. The use of a robot in the story is completely unrelated to police bias and systemic racism against people of color. Sure, using explosives attached to a robot (or to a domesticated animal, as has been theorized in the past) is morally troubling all by itself. Then the author answers the headline's question in the negative, confirming my prejudicial pre-judgment in black and white:

While Mr. Johnson’s death resulted from a human decision, in the future such a decision might be made by a robot — one created by humans, with their flaws in judgment baked in.

The rest of the article explores the current state of problematic classification by machine-learning neural nets, which results in category errors. But a new idea, "No Justice, No Robots," also makes an appearance. The Colorado movement (I have no affiliation with it; I only just learned about it from this article) may sound extreme, but it makes sense to work on justice for all before attempting to put the results into code. The effort to make artificial intelligence "happen" shouldn't be based on shortcuts.

We should all know about the danger of "Garbage in, garbage out." It applies not only to code but also to the very real need to correct our own, individual biases, an important part of being human. Of course, we wouldn't be human without the ability to generate relative meaning based on our subjectivity. That's why my "Constructed Theory of Mind" maps to three dimensions representing those concepts: subjective awareness and relative values, with generative capacity to ground the whole in an approximation of "reality."
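
A rough sketch of how those three dimensions could be laid out as data follows; the field names and the grounding rule are assumptions for illustration, not the actual Constructed Theory of Mind.

```python
# Illustrative sketch only: field names and the grounding rule are assumptions,
# not the actual Constructed Theory of Mind.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class ConstructedTheoryOfMind:
    # Dimension 1: subjective awareness -- how strongly the agent models its own perspective.
    subjective_awareness: float
    # Dimension 2: relative values -- learned (and therefore biased) weights on observations.
    relative_values: Dict[str, float] = field(default_factory=dict)

    # Dimension 3: generative capacity -- producing new, situated meaning from an observation.
    def generate_meaning(self, observation: str) -> str:
        """Ground an observation in an approximation of 'reality' by weighing it
        against the agent's own, admittedly biased, values."""
        weight = self.relative_values.get(observation, 0.0)
        salience = abs(weight) * self.subjective_awareness
        stance = "salient" if salience > 0.5 else "background"
        return f"{observation}: {stance} (weight={weight:+.2f})"


tom = ConstructedTheoryOfMind(
    subjective_awareness=0.9,
    relative_values={"stranger approaching": 0.8, "leaf falling": 0.1},
)
print(tom.generate_meaning("stranger approaching"))  # stranger approaching: salient (weight=+0.80)
```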

I will continue to build machines who can process bias, even while I work on making my own biases less objectionable and less morally repugnant.