Chatbots Don’t Know What Stuff Isn’t

This morning I was wondering about negation in modern logic and AI systems. The underlying problem is that these systems are trained on facts: things that are true. This makes sense, and it is how even earlier AI and logic systems, including things like Cyc, were built. But what about things that aren’t true? Well, lots of things aren’t true. In fact, far more things are false than are true. I can say I’m 60 years old. I can also say I’m not 59 years old. Or not 61 years old. Or not a million years old. The false statements about this one little fact are literally infinite.
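
To make the asymmetry concrete, here is a minimal Python sketch (all names hypothetical) of a knowledge base that stores only positive facts, the way such systems usually do. It can confirm what it knows, but for anything absent it cannot distinguish "false" from "never stated" without smuggling in an extra assumption, the closed-world assumption from classical logic databases:

```python
# Hypothetical sketch: a knowledge base that holds only positive facts.
FACTS = {("age", "me", 60)}

def is_true(fact):
    """Answer a positive query: True only if the fact was explicitly stored."""
    return fact in FACTS

def is_false(fact):
    """Answer a negated query. With only positive facts stored, absence is
    ambiguous: the fact might be false, or simply never recorded.
    Returning 'not in FACTS' silently adopts the closed-world assumption."""
    return fact not in FACTS

print(is_true(("age", "me", 60)))          # True: stored explicitly
print(is_false(("age", "me", 59)))         # True, but only by assumption
print(is_false(("age", "me", 1_000_000)))  # Same: one of infinitely many negatives
```

Nothing in the store itself licenses those last two answers; the infinity of true negatives never gets written down anywhere.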

So how does AI manage this? Just as I was wondering, I ran across this article in Quanta. The short answer: AI doesn’t manage it at all. Ask a question involving a negation and you are likely to get a completely wrong answer. This is a significant problem, and not an easy one to solve. It’s a good article for anyone interested in the possibilities, and limits, of existing artificial intelligence. It may also say something about our own ability to understand what is true and what is not.
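
One way to see the failure for yourself is to probe a model with paired questions that differ only by a negation and check whether its answers flip. A small, purely illustrative sketch, where `ask` is a hypothetical stand-in for whatever chatbot interface you have (the toy version below deliberately ignores the word "not", which is roughly the failure mode at issue):

```python
# Hypothetical probe: does a model's answer flip when the question is negated?
def ask(prompt: str) -> str:
    """Stand-in for a real chatbot call; replace with your model's API.
    This toy 'model' keys on content words and ignores negation entirely."""
    return "yes" if "60" in prompt else "no"

pairs = [
    ("Am I 60 years old?", "Am I not 60 years old?"),
    ("Is water wet?", "Is water not wet?"),
]

for positive, negated in pairs:
    a, b = ask(positive), ask(negated)
    flips = a != b  # a negation-aware model should flip its answer
    print(f"{positive!r} -> {a}; {negated!r} -> {b}; flips: {flips}")
```

Run against the toy model, no pair ever flips, which is exactly the pattern the article describes in real systems.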
