Random Thoughts and Observations on Data Science and Beyond

Is ChatGPT facing a Minsky-Papert moment?


Truth is essential to earning the trust of users.

In 1969, Minsky and Papert showed that the perceptron could not compute the simple XOR function. That result marked the beginning of the first AI winter. Are ChatGPT and other Large Language Models (not necessarily all of AI) facing such a moment?
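
The Minsky-Papert result can be seen in a few lines of code. The sketch below is my own illustration, not part of the original post; it assumes NumPy and scikit-learn's Perceptron. It trains a single-layer perceptron on the OR and XOR truth tables: OR is linearly separable and is learned perfectly, while XOR never can be, because no single line separates {(0,1), (1,0)} from {(0,0), (1,1)}.

```python
# Illustrative sketch (assumes numpy and scikit-learn are installed):
# a single-layer perceptron learns OR perfectly but can never fit XOR,
# because the XOR points are not linearly separable.
import numpy as np
from sklearn.linear_model import Perceptron

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
targets = {
    "OR":  np.array([0, 1, 1, 1]),  # linearly separable
    "XOR": np.array([0, 1, 1, 0]),  # not linearly separable
}

for name, y in targets.items():
    clf = Perceptron(max_iter=1000, tol=None, random_state=0).fit(X, y)
    print(f"{name} training accuracy: {clf.score(X, y):.2f}")
# OR reaches 1.00; XOR cannot, no matter how long we train.
```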

Since ChatGPT became available in November 2022, many users have shared experiences, especially in the last three months, about the lies that ChatGPT tells. There have been attempts to explain this away by pointing out that humans lie too! Recently, I too found that ChatGPT told me a lie: it constructed an imaginary and flawed example for a concept I had asked it to explain. ChatGPT was bluffing. So is the problem really that ChatGPT (GPT-3.5 specifically, or other LLMs in general) cannot distinguish between fact and fiction? Lying reduces reliability. Lying destroys trust. That directly limits usefulness in every area where trust is required.

In many legal jurisdictions, proving a crime generally requires establishing the intent of the accused. This is a key difference between the logic systems used in the social sciences and the physical sciences. It is because of their intent that human beings know when they are lying. Does a Large Language Model know whether it is lying? While that may not be an easy question to answer in the near future, we still need to address a related one: how do we deal with LLMs that lie, so that trust in LLMs can be built over time? The answer probably lies in addressing intent! That, and not a theoretical distinction between fact and fiction, could be the key to avoiding an LLM winter. Furthermore, accountability could be important: rewarding truthful LLMs with some form of accreditation, and penalizing liar LLMs (especially the habitual ones), could be a way forward. What do you think?