The Bad: What AI Is Not — Part II

AI can be a major efficiency driver, but here are a few areas where it still falls short. This content is provided by CommPRO Global, Inc. (CommPRO) to give visitors the opportunity to read about events and share opinions for those interested in the integrated communications business sectors.

Be careful. If you’re assuming AI is going to solve your media analytics challenge soon, you could be in for a lot of disappointment. Worse yet, you could get bad data that will lead you to make bad decisions. Why? AI struggles with interpreting complex human communications that don’t have simple yes or no answers.

Here are some examples of where AI is not ready for prime time:

1. If the answer is not known, it can't be fed back to the computer.

For example, say you're looking to hire a new employee, and the AI system says you should make an offer to a candidate based on the data. If you hire that person and it either works out or doesn't, that's one piece of data. But what about the people you didn't hire? You will never know whether they would have worked out, and AI cannot confirm your rejections. It is hard for AI to determine the best hire when it only gets feedback on the people you chose to hire.

This is the challenge of Type I vs. Type II errors. A Type I error is a false positive: someone recommended by AI who turned out to be a bad hire. We can learn from that type of error. A Type II error, on the other hand, is a false negative: someone AI passed on who would have been good, but you'll never know that for sure. We cannot learn from that type of error. So when AI cannot be given information on Type II errors, it has only half of the learning set it needs to improve.

Another variation of the AI challenge in hiring arises when the system is exposed to entirely new data. For example, if your resumes to date have all come from East Coast schools and applicants with engineering degrees, what does your system predict when it sees a Stanford graduate with a physics degree? AI struggles to reach a conclusion when confronted with data points far outside anything it has seen before.

Can AI still learn in these circumstances? Yes, to a degree, but it does not see (and cannot learn from) the missed opportunities, and it needs enough of the new data points to begin to model and predict outcomes. The data that is collected from the hiring decision represents a fatally incomplete training set.
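The feedback gap described above can be sketched in a few lines. This is a hypothetical simulation: the screening rule, cutoff, and applicant counts are invented purely for illustration.

```python
import random

random.seed(0)

# Hypothetical illustration: 1,000 applicants, each given a screening
# score; only candidates above the cutoff are hired.
applicants = [random.random() for _ in range(1000)]

hired = [score for score in applicants if score > 0.7]
rejected = [score for score in applicants if score <= 0.7]

# Outcomes (good hire vs. bad hire) are observed only for hired
# candidates. Type II errors -- rejected candidates who would have
# succeeded -- never generate a label, so the model can never learn
# from them.
print(f"labeled outcomes:    {len(hired)}")
print(f"unobserved outcomes: {len(rejected)}")
```

However the scores fall, the larger rejected group produces no outcome labels at all, which is exactly the half of the training set the model never sees.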

2. If the data sets are small.

For example, suppose you are making a one-time life decision such as which house to buy (not the price to pay, which AI is good at, but which house will work for you and your family). The data set would not be large. The data might suggest you will like the house for its community and features. If you buy the house, regardless of whether it works out, you still have only a single piece of feedback to learn from.

It is hard to learn from tiny data sets; you need thousands, if not tens of thousands, of data points to run through machine learning before it can make informed decisions.
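A rough sketch shows why sample size matters. The 70% "true" satisfaction rate below is an invented number, used only to illustrate how little a single observation reveals.

```python
import random
import statistics

random.seed(1)

# Hypothetical illustration: estimate an underlying satisfaction rate
# (set to 70% here) from samples of very different sizes.
TRUE_RATE = 0.7

def estimate(n: int) -> float:
    """Average of n simulated yes/no outcomes drawn at the true rate."""
    return statistics.mean(1 if random.random() < TRUE_RATE else 0
                           for _ in range(n))

one_decision = estimate(1)        # a single outcome: 0 or 1, tells you little
large_sample = estimate(10_000)   # converges toward 0.7

print(one_decision, round(large_sample, 2))
```

A single data point can only ever read as all-or-nothing; only at thousands of observations does the estimate settle near the true rate.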

3. If the answer is indeterminate rather than a simple yes/no.

This is probably the biggest area where unassisted AI fails at proper classification. And it is the problem that most affects those of us seeking trustworthy media analytics.

How a person sees content frequently depends on their perspective. ‘Good’ things can, in fact, be ‘bad’ and vice-versa. And computers can’t be taught one-answer-fits-all approaches, which is what most AI-powered automated media intelligence solutions are doing today. For instance, a positive discussion of a taboo topic can be seen as a positive thing by some audiences but viewed negatively by others. Two people can read the same story and have a very different opinion of the sentiment. Their take may depend on their political or educational background, their role in a company, or even the message the company wants the public to hear.

In addition, AI can't reliably interpret many language structures and usages, including even simple phrases like "I love it," which can be serious or sarcastic. AI also struggles with double meanings and plays on words. And AI is unable to address the contextual and temporal nature of text: the words, topics, and phrases used in content change meaning over time. For example, a comparison to Tiger Woods might be positive when referring to his early career, less positive in his later career, and perhaps quite negative as a comparison to him as a husband.

4. If the subject matter is evolving.

Most AI solutions being applied to media analysis today use what can be called a 'static dictionary' approach. They choose a defined set of topical words (or Boolean queries) and a defined set of semantically linked emotional trigger words. The AI determines the topic and the sentiment by comparing the words in the content to the static dictionary. Recent studies such as "The Future of Coding: A Comparison of Hand-Coding and Three Types of Computer-Assisted Text Analysis Methods" (Nelson et al., 2017) have shown that the dictionary methodology does not work reliably and that its error increases over time.

The fundamental flaw in this AI method is that the static dictionary doesn't evolve as topics and concepts shift over time and new veins of discussion are introduced. Unless there is a way to regularly provide feedback to the AI solution, it cannot learn, and the margin of error grows and compounds quickly. It is a bit like trying to talk about Facebook to someone transported from the year 2004 who only understands structured publishing: they simply cannot understand what you are talking about in any meaningful way, because mass social media did not yet exist.
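A minimal sketch of the static-dictionary approach makes the brittleness easy to see. The word lists and scores below are invented for illustration and not drawn from any real product.

```python
# Hypothetical static dictionary: a fixed map of trigger words to scores.
SENTIMENT_DICTIONARY = {
    "love": +1, "great": +1, "win": +1,
    "hate": -1, "terrible": -1, "fail": -1,
}

def dictionary_sentiment(text: str) -> int:
    """Sum the scores of any dictionary words found in the text."""
    return sum(SENTIMENT_DICTIONARY.get(word, 0)
               for word in text.lower().split())

# Works on the vocabulary it was built with...
print(dictionary_sentiment("a great win"))                       # 2

# ...but scores sarcasm as positive, because 'love' is always +1...
print(dictionary_sentiment("oh I just love waiting on hold"))    # 1

# ...and scores zero on vocabulary that emerged after the dictionary
# was built, so error compounds as the subject matter evolves.
print(dictionary_sentiment("the rollout was a dumpster fire"))   # 0
```

Because the dictionary is frozen at build time, every new term, meme, or shifted connotation lands outside it; without a feedback loop, that gap only widens.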

As these examples show, AI struggles with interpreting complex situations with either small data sets or indeterminate answers that evolve over time. So what does this mean to us as professional communicators?

While many media monitoring and analysis providers are touting their Artificial Intelligence upgrades, almost none of them are approaching it in a manner that results in trusted analytics for their customers. Part three of my series will discuss why relying on AI can really hurt communicators.

Eric Koefoot, President and CEO, PublicRelay. PublicRelay delivers a media intelligence solution using both technology and highly-trained analysts. It is a leader on the path to superior AI analytics through supervised machine learning. Contact PublicRelay to learn more.
