The State of AI

  • Posted on: 12 February 2019
  • By: C.J.
Image of code showing MLDataTable creation in CreateML

The current state of AI is both amazing and depressing. Let's get the bad news out of the way, first, shall we?

President Donald John Trump signed an executive order directing the US federal government to prioritize the development of artificial intelligence. There is no new funding and no new policy goal beyond "more of the same," along with plenty of other vagueness.

Sadly, it doesn't address the true and immediate issue of separating children from their families and putting them in cages. That monstrosity has already been unleashed, and the US now wants to push any and all regulations and precautionary measures to the side. Americans don't have the government, or the artificial intelligence, they need; they have what they deserve. The monstrous behavior is done in our country's name, and it's on all of us.

What's on my agenda? Is there any good news to be had? In the face of evil, what can be done?

I wish I could do more for the children, but I have made some progress on my emotional-intelligence demonstration project, code-named "COGSWORTH."

It looks more impressive in all caps, doesn't it? But it sounds even more impressive with the latest changes I made to the training data. How should I explain…?

Have you ever found yourself talking like a robot when speaking to Alexa or Siri? Training dictation software used to mean re-learning how to enunciate the words that would otherwise skew the intended results. I mean, I find myself using my "out of doors" voice and inserting pauses between words where I imagine it helps. The training data I create is a way to teach COGSWORTH to recognize emotion by building a theory of mind around the input text. Just as speaking abnormally to a computer sharpens the intended lexical boundaries, I found that I could improve results by writing like a robot.

The original series of training statements, based on the Orthogonal Model of Emotions and its systems and methods for determining the functional parameters of feelings, was random and contextual. I had originally found sentiment-analysis work that trains lexical regressors on online ratings, treating the stars or points attached to reviews as labels for positive or negative affect, which effectively turns online reviews into emotional statements. It's an interesting way to crowd-source the training of machine-learning regressors, but it lacks clarity and uses material for a purpose it was never intended to serve. I was able to get pretty good results by running full regressions of the data to find conflicts. For example, I had "regret" as a semantic example of both sadness as one category and guilt or shame as another. I replaced that word, and one other from the original vocabulary I had found, and achieved 96% accuracy on a full regression.
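The post doesn't include the conflict check itself, but the idea is mechanical enough to sketch. Here's a minimal Python illustration; only "regret" and the two clashing categories come from the text above, and the rest of the vocabulary is a placeholder:

```python
# Hypothetical emotion vocabulary: category -> semantic example words.
# Only "regret" (listed under both sadness and shame) is from the post.
vocabulary = {
    "sadness": ["regret", "sorrow", "grief"],
    "shame": ["regret", "guilt", "embarrassment"],
    "joy": ["delight", "glee"],
}

def find_conflicts(vocab):
    """Return each word that appears under more than one emotion category."""
    seen = {}
    for category, words in vocab.items():
        for word in words:
            seen.setdefault(word, []).append(category)
    return {word: cats for word, cats in seen.items() if len(cats) > 1}

print(find_conflicts(vocabulary))  # {'regret': ['sadness', 'shame']}
```

A word that maps to two labels gives the classifier contradictory gradients for the same feature, which is why swapping it out moved the full-regression accuracy so much.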

But the Maximum Entropy testing model was stuck at 80% validation. Until yesterday, that is!

I went back to the original vocabulary and built my textual semantic units (really just a fancy way of describing sentences) in a robotic fashion. In one sense, I was using one robot to generate text to train another robot to recognize emotions as generated by feelings. Where the original data was loose and disorganized, I built each unit algorithmically. Sentences about feelings typically address just a few areas. One is how one looks, or appears, to be feeling. Another is how one sounds like one is feeling. And of course, feelings are transitive, so "I am okay" and "I feel okay" demonstrate the identity relationship and ontological distinction that only the Orthogonal model addresses.
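The sentence-building step can be sketched the same way. The three frames (how one looks, how one sounds, and the "feel"/"am" identity pair) come from the paragraph above; the exact template wording and the sample label are my guesses:

```python
# One template per frame named above; the wording is illustrative.
TEMPLATES = [
    "I look {w}.",                  # how one appears to be feeling
    "You sound like you are {w}.",  # how one sounds like one is feeling
    "I feel {w}.",                  # transitive form
    "I am {w}.",                    # identity relationship with "feel"
]

def build_units(word, label):
    """Build one labeled 'textual semantic unit' per template."""
    return [(template.format(w=word), label) for template in TEMPLATES]

for text, label in build_units("okay", "contentment"):  # hypothetical label
    print(label, "->", text)
```

Because every vocabulary word passes through the same templates, each emotion category gets structurally identical coverage, which is the "robotic" regularity the loose original data lacked.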

When I trained Alexa on a user-defined verbal "skill," I was instructed to create 11-17 different examples for each verbal interaction. Using that as a numerical goal for each item of the semantic vocabulary, I created more than 1,500 statements to train the CoreML text-classifier model. I used the same strategies as before to increase accuracy; the last time I posted, I had achieved an 85% validation rate. Now, COGSWORTH is operating at 95% validation! By treating the training data mechanically and organizing it to aid gradient descent and the backpropagation of errors, "COGSWORTH" is handling emotions like a Labrador Retriever handling the morning newspaper.
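None of the export code appears here, but CreateML's MLDataTable can read a JSON array of records, so assembling the statements into a training file might look like the following Python sketch. The column names, vocabulary, and templates are all my assumptions:

```python
import json

def make_rows(vocab, templates):
    """Cross every vocabulary word with every template, labeled by category."""
    rows = []
    for label, words in vocab.items():
        for word in words:
            for template in templates:
                rows.append({"text": template.format(w=word), "label": label})
    return rows

# Illustrative inputs; the real vocabulary has far more entries.
vocab = {"sadness": ["sad", "unhappy"], "joy": ["happy", "glad"]}
templates = ["I look {w}.", "I sound {w}.", "I feel {w}.", "I am {w}."]

rows = make_rows(vocab, templates)
print(len(rows))  # 2 categories x 2 words x 4 templates = 16 rows
with open("cogsworth_training.json", "w") as f:
    json.dump(rows, f, indent=2)
```

On the Swift side, a file like this can be loaded with `MLDataTable(contentsOf:)` and handed to `MLTextClassifier`, naming the matching text and label columns.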

Even if the morning news is sad, indeed. Being able to recognize sadness is the only bright spot I can find!