Random Thoughts and Observations on Data Science and Beyond

ChatGPT and the Turing Test. Does it really matter?


Bridges are convenient and improve productivity. Large Language Models with good interfaces can do the same.

Whether a machine can exhibit intelligence of the kind humans do is a deeply philosophical question. More narrowly, the famous Turing Test, proposed by Alan Turing in 1950, formalizes this as an Imitation Game. In this game, a computer program passes if, after five minutes of questioning, an average human interrogator has no more than a 70% chance of correctly identifying which party is the machine; in other words, the program must fool the interrogator at least 30% of the time.

  • Does ChatGPT (or any other Large Language Model) pass the Turing Test? The short answer is: No, it doesn’t. If you are convinced it does, ask ChatGPT itself. The ambivalent answer you get will surely prompt you to probe further. Next, you might want to read what Yann LeCun and MIT Technology Review have written about ChatGPT.
  • Does it really matter? In a non-academic world and in a non-adversarial* setting, it doesn’t. What matters is whether ChatGPT is useful. And here ChatGPT does extraordinarily well, because in a super-convenient interface it provides an Alexa++ capability. It even allows for very impressive abstractive summarization! (Abstractive summarization is a way to summarize without simply copying the key sentences. It is generally considered a tougher task than extractive summarization; a short code sketch follows this list.)

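To make “abstractive summarization” concrete for readers who want to experiment: the sketch below assumes the open-source Hugging Face transformers library and the facebook/bart-large-cnn model, neither of which is mentioned in this article. It is a minimal illustration, not a description of how ChatGPT works internally.

```python
# A minimal sketch of abstractive summarization.
# Assumes the Hugging Face transformers library is installed (pip install transformers)
# and uses the facebook/bart-large-cnn model as an illustrative choice.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

text = (
    "The Turing Test, proposed by Alan Turing in 1950, asks whether a machine "
    "can imitate a human well enough that an average interrogator, after five "
    "minutes of questioning, cannot reliably tell the machine from a person."
)

# The model generates a summary in its own words (abstractive), rather than
# copying the highest-scoring sentences verbatim (extractive).
result = summarizer(text, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```

Note that the output sentence is newly generated text; an extractive summarizer, by contrast, would only select and return sentences already present in the input.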
ChatGPT and other Large Language Models are truly phenomenal in what they can do in a controlled environment. It is important to have guardrails so that these models are used responsibly!

* : A setting in which an adversary tries to fool someone by having the program pretend to be human for malicious reasons.

Note: This article is meant to simplify and de-jargonize topics for non-Computer Scientists/non-Data Scientists.

Aniruddha M Godbole is an inter-disciplinary expert. He is a continuous learner. These are his personal views.
