Discussion about this post

Sean O

Fascinating analysis, Dan

Shrey G

Interesting takes, Dan!

“Once the “hallucination problem” becomes relatively solved, the reliance on external sources for credibility will diminish.”

— I think this is where the devil lies. Hallucinations aren’t just a bug of LLMs; they’re a fundamental part of this architecture and model paradigm. At the end of the day, LLMs are trained to maximize the likelihood of their output, so there will always be cases where they say something incorrect, especially when reality doesn’t match up with what is expected.

Chain-of-thought models do try to address this by adding enough context to change what the model considers a “likely” continuation, but hallucinations are by and large here to stay.
