6 Comments

Very interesting!

Language(s) - we take them for granted - are critical to human intelligence and now to AI. I don't know much about artificial general intelligence (AGI), except the idea that it is perhaps still in the making?

How ChatGPT (are there many other LLMs?) works is mind-boggling! The volume of data that can be handled here, and that the data is in the form of natural language (only English or.......? and only written data or also spoken/oral?), is fascinating. Fascinating too is how the processing to output is done through a method of prediction. Maybe it will move up to "natural selection" at some future point! That ChatGPT can hallucinate is pretty amazing, actually. After all, hallucination can be viewed as extreme imagination.

Your next book could be titled something like "Raising Temperatures for Creative AI - Fact or Fiction?" ☺


This is really interesting, as it can be - to some extent - compared to Kahneman's availability heuristic in humans. At low temperatures, the AI reaches for the most predictable and easily found information (as determined by its frequency in the dataset), resulting in more "available" words being recalled. Loved the bit at the end where ChatGPT made up the research paper 😀. Makes me wonder if it could eventually write an entire one on the spot.
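The low-temperature effect described above can be sketched with a temperature-scaled softmax, which is how sampling temperature typically works in language models. This is a minimal illustration, not ChatGPT's actual implementation; the candidate words and scores below are invented:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores into a probability distribution.
    Lower temperature sharpens it toward the top-scoring word."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-word candidates after "The cat sat on the"
words = ["mat", "floor", "moon"]
logits = [4.0, 2.0, 0.5]  # made-up model scores

cold = softmax_with_temperature(logits, 0.2)  # nearly all mass on "mat"
hot = softmax_with_temperature(logits, 2.0)   # mass spread more evenly

# At low temperature the most "available" word dominates;
# at high temperature rarer words get a real chance of being sampled.
```

Dividing the scores by a small temperature before the softmax exaggerates the gap between them, which is why low-temperature output keeps recalling the same highly "available" words.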


Interesting perspective on the connection between language usage and ChatGPT.

I have recently started educating myself on applications of ChatGPT and find new useful ways every time I practice.

I use this free website for it:

https://learnprompting.org/

Maybe some of you are also interested in that.


Thank you for a very informative and interesting article on LLMs and ChatGPT. What I would love to understand is what drove ChatGPT to provide a totally made-up response to your query. Following the logic of ChatGPT, which you explain very clearly, the response does not just deviate from reality; it is completely made up while sounding authoritative! Why doesn't the system ask you for more input, or simply recognize that it does not have a satisfactory response? Thank you. Jacques


Hi Jacques, an LLM is predicting the most likely next word, not looking up facts. So when I asked it about papers on AI-human collaboration, it starts with something like "In their paper, AI-human collaboration ..." and keeps predicting the most likely next word. If there is a lot of discussion about a specific paper (or a few papers) on this subject in its training dataset, the most likely next words will be based on that discussion. If there isn't, then we are in a space with too many possibilities, including what happened here. Here's a great example in which ChatGPT made up a paper when asked about the most cited paper in economics.

https://twitter.com/dsmerdon/status/1618816703923912704?lang=en

"ChatGPT said that it was “A Theory of Economic History” by Douglass North and Robert Thomas, published in the Journal of Economic History in 1969 and cited more than 30,000 times since. It added that the article is “considered a classic in the field of economic history”. A good answer, in some ways. In other ways, not a good answer, because the paper does not exist.

Why did ChatGPT invent this article? Smerdon speculates as follows: the most cited economics papers often have “theory” and “economic” in them; if an article starts “a theory of economic . . . ” then “ . . . history” is a likely continuation. Douglass North, Nobel laureate, is a heavily cited economic historian, and he wrote a book with Robert Thomas. In other words, the citation is magnificently plausible. What ChatGPT deals in is not truth; it is plausibility.

And how could it be otherwise? ChatGPT doesn’t have a model of the world. Instead, it has a model of the kinds of things that people tend to write. This explains why it sounds so astonishingly believable. It also explains why the chatbot can find it challenging to deliver true answers to some fairly straightforward questions."

From https://timharford.com/2023/03/why-chatbots-are-bound-to-spout-bullshit
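The "magnificently plausible" continuation mechanism Harford describes can be illustrated with a toy bigram model. This is a crude stand-in for next-word prediction (a real LLM uses a neural network over subword tokens, not word counts), and the mini-corpus of title-like phrases below is entirely invented:

```python
from collections import defaultdict, Counter

# Tiny invented "training corpus" of paper-title-like phrases
corpus = [
    "a theory of justice",
    "a theory of economic growth",
    "a survey of economic thought",
    "an economic history of europe",
    "economic history and institutions",
]

# Count which word follows which (a bigram model)
following = defaultdict(Counter)
for title in corpus:
    ws = title.split()
    for a, b in zip(ws, ws[1:]):
        following[a][b] += 1

def continue_greedily(prefix, steps):
    """Always pick the most frequent next word -- plausibility, not truth."""
    ws = prefix.split()
    for _ in range(steps):
        counts = following.get(ws[-1])
        if not counts:
            break
        ws.append(counts.most_common(1)[0][0])
    return " ".join(ws)

print(continue_greedily("a theory of", 2))
# → "a theory of economic history" -- a "title" that appears nowhere in the corpus
```

Each individual word transition is well supported by the corpus, yet the assembled whole is a fabrication, which is exactly the shape of ChatGPT's invented North and Thomas citation.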


Very clear explanation, thank you, Kartik.
