Thank you! This is fantastic. I am working on a project that critically examines the limitations of NotebookLM (which uses RAG technology) and this post helps me understand and explain to others that "hallucinations" will still occur with the fancy summarization and podcasting app. I also discovered that OpenAI offers a RAG-enhanced app called TLDR, which ostensibly "provides concise key takeaways from short articles." The marketing is so seductive!
SO seductive! You're welcome :) In no way is this a programmer's explanation of RAG, but it was so helpful for me to make sense of this technique that people were claiming "solved" hallucinations. It's magic thinking all the way down... once we understand there's no magic in the base model, they just tell us there's magic powder in the wrapper.
Oh wow, I love your metaphor--"once we understand there's no magic in the base model, they just tell us there's magic powder in the wrapper." You are such a good writer (and an inspiration). I appreciate this more than you know.
You're way too kind! But, think about it... it's not a metaphor D:
Ah....I don't get it. I still see the metaphor when I re-read your sentence.
The way LLMs are sold implies that there's literal magic in them (emergent properties, possible consciousness) -- understanding how they've been programmed helps us recognise this is not "magic", but a magic trick :)
Oh yes. In fact, many products use the word "magic" in their marketing (like, "Magic Write" in Canva). And understanding how they work demystifies them. I still love the "magic powder in the wrapper" imagery.