Sunday! Sunday! Sunday!

[Screen capture of results showing labels and text]

Science! Science! Science!

Are you ready for SUNDAY SCIENCE!?!?

I have some output after evaluating reviews from ROTTEN TOMATOES using the ORTHOGONAL MODEL OF EMOTIONS (OME).

Sorry for all the SHOUTING in ALL CAPS, but this is really exciting!

For example, check out this sample (a quick code sketch follows the list):

  • label: "happiness", text: "a static and sugary little half-hour, after-school special about interfaith understanding; stretched out to 90 minutes. "
  • label: "happiness", text: "watching the chemistry between freeman and judd, however, almost makes this movie worth seeing. almost. "
  • label: "sadness", text: "... a pretentious and ultimately empty examination of a sick and evil woman. "
  • label: "envy and jealousy", text: "the country bears has no scenes that will upset or frighten young viewers. unfortunately, there is almost nothing in this flat effort that will amuse or entertain them; either. "

I'm close to being able to use machine learning to detect sarcasm and other subtle linguistic shifts! Hot dang!!

The original data was littered with issues, and it somehow gets used to "train" neural models… UGGGH! The training data for the OME, by contrast, is literally perfect: 100% validation from a complete regression, whereas overfitting typically causes models to "lose" their training to the point where 80% validation or less is somehow considered acceptable. I would never consider using the INTERNET OF HATE to train a model for emotions, or even to use it for a simplified dichotomy of positive versus negative, for what it's worth. But this data was written to evoke a response, so looking at the reviews through the OME seemed like a perfect fit.
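For anyone who wants to see the validation check I'm ranting about, here's a minimal, generic sketch of the idea: train on one slice of the data, score on a held-out slice, and compare the two numbers. This is NOT the OME pipeline, and the tiny dataset, labels, and scikit-learn model below are stand-ins purely for illustration.

```python
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Toy stand-in data -- NOT the movie-review corpus and NOT OME labels.
texts = [
    "what a warm, funny, delightful surprise",
    "flat, lifeless, and a complete waste of time",
    "the leads are charming and the jokes land",
    "a dreary slog with nothing to say",
    "gorgeous to look at and a joy to sit through",
    "clumsy, boring, and far too long",
    "a sweet little film with real heart",
    "an empty exercise that left me cold",
]
labels = ["happiness", "sadness", "happiness", "sadness",
          "happiness", "sadness", "happiness", "sadness"]

# Hold out a slice of the data that the model never sees during training.
X_train, X_val, y_train, y_val = train_test_split(
    texts, labels, test_size=0.25, random_state=0)

# Bag-of-words features plus a plain logistic regression, as a stand-in model.
vectorizer = TfidfVectorizer()
model = LogisticRegression(max_iter=1000)
model.fit(vectorizer.fit_transform(X_train), y_train)

train_acc = accuracy_score(y_train, model.predict(vectorizer.transform(X_train)))
val_acc = accuracy_score(y_val, model.predict(vectorizer.transform(X_val)))

# A big gap (say, 100% on training vs. 80% or less on validation) is the
# classic sign that a model has "lost" its training to overfitting.
print(f"training accuracy:   {train_acc:.0%}")
print(f"validation accuracy: {val_acc:.0%}")
```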

But it's just a guess! That's how the "theory of mind" works for people, too! Unless you can literally read minds, that is… The actual experiment is to replicate the evaluation across a variety of models and compare the results. I can't wait to see the evolution of the model in action!
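
And here's roughly what that comparison step could look like: the same reviews labeled by several models, then a simple pairwise agreement score. The model names and label sequences below are completely made up for illustration; the real experiment would plug in the actual OME runs.

```python
from itertools import combinations

# Hypothetical label sequences from three models, aligned to the same four
# reviews shown earlier. The model_b and model_c outputs are invented here
# just to show the comparison; only model_a echoes the sample above.
runs = {
    "model_a": ["happiness", "happiness", "sadness", "envy and jealousy"],
    "model_b": ["happiness", "sadness", "sadness", "envy and jealousy"],
    "model_c": ["happiness", "happiness", "sadness", "sadness"],
}

# Pairwise agreement: the fraction of reviews where two models picked
# the same emotion label.
for (name_a, labels_a), (name_b, labels_b) in combinations(runs.items(), 2):
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    print(f"{name_a} vs {name_b}: {matches / len(labels_a):.0%} agreement")
```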