9 Comments
Christa Albrecht-Crane:

Thank you! This is fantastic. I am working on a project that critically examines the limitations of NotebookLM (which uses RAG technology) and this post helps me understand and explain to others that "hallucinations" will still occur with the fancy summarization and podcasting app. I also discovered that OpenAI offers a RAG-enhanced app called TLDR, which ostensibly "provides concise key takeaways from short articles." The marketing is so seductive!

Miriam Reynoldson:

SO seductive! You're welcome :) In no way is this a programmer's explanation of RAG, but it was so helpful for me to make sense of this technique that people were claiming "solved" hallucinations. It's magic thinking all the way down... once we understand there's no magic in the base model, they just tell us there's magic powder in the wrapper.

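[Editorial aside: a minimal sketch of what a RAG "wrapper" of the kind discussed above typically does. The function names and the toy retriever here are hypothetical placeholders, not any specific product's API. The point it illustrates is the one made in the comment: retrieval only changes the input to the model, while the answer is still generated by the same base model, so hallucinations can still occur.]

```python
# Minimal RAG sketch (hypothetical names, not a real product's API).
# The "wrapper" retrieves passages and pastes them into the prompt,
# but the answer is still produced by the same base language model.

def retrieve(query, documents, top_k=2):
    # Toy retriever: rank documents by word overlap with the query.
    def overlap(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(documents, key=overlap, reverse=True)[:top_k]

def rag_answer(query, documents, base_model):
    # Stuff the retrieved passages into the prompt ("the wrapper").
    context = "\n".join(retrieve(query, documents))
    prompt = (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    # Same probabilistic next-token generator as before; the retrieved
    # text only changes the input, it does not guarantee faithfulness.
    return base_model(prompt)
```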
Christa Albrecht-Crane:

Oh wow, I love your metaphor--"once we understand there's no magic in the base model, they just tell us there's magic powder in the wrapper." You are such a good writer (and an inspiration). I appreciate this more than you know.

Miriam Reynoldson:

You're way too kind! But, think about it... it's not a metaphor D:

Christa Albrecht-Crane:

Ah... I don't get it. I still see the metaphor when I re-read your sentence.

Miriam Reynoldson:

The way LLMs are sold implies that there's literal magic in them (emergent properties, possible consciousness) -- understanding how they've been programmed helps us recognise this is not "magic", but a magic trick :)

Christa Albrecht-Crane:

Oh yes. In fact, many products use the word "magic" in their marketing (like "Magic Write" in Canva). And understanding how they work demystifies them. I still love the "magic powder in the wrapper" imagery.
