Published 2025-12-07 07:56

Summary

Your AI confidently bullshits half the time. Here’s how to design questions and checks that separate “sounds smart” from “is actually right.”

The story

What if your AI is basically that coworker who sounds 100% sure… and is wrong half the time?

That’s how I treat large language models: brilliant bull generators with occasional moments of genius. The game is to separate “confident” from “correct.”

Here’s how I do it:

1. Assume hallucinations by default
Clinically: I treat every answer as a *hypothesis*, not a *fact*.
Street version: it’s a draft, not a verdict.

2. Constrain the improv (the methods below vary with context; a prompt sketch follows the list)
The more freedom the model has, the weirder it gets. So I:
– Narrow the scope: “Use only the text below…”
– Nail the format: “Answer with A/B/C/D + 2‑sentence justification.”
– Ban guessing: “If you’re not sure, say so.”
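
Here is a minimal sketch of what that kind of constrained prompt can look like. The `ask` function is a placeholder for whatever LLM client you actually call, and the exact wording is one option among many; treat it as an assumption, not a specific API.

```python
from typing import Callable

def constrained_prompt(source_text: str, question: str) -> str:
    """Narrow the scope, pin the format, and ban guessing in one prompt."""
    return (
        "Use only the text below to answer. Do not add outside knowledge.\n\n"
        f"TEXT:\n{source_text}\n\n"
        f"QUESTION: {question}\n\n"
        "Answer with A, B, C, or D plus a 2-sentence justification.\n"
        "If the text does not support a confident answer, reply exactly: UNSURE."
    )

def ask_constrained(ask: Callable[[str], str], source_text: str, question: str) -> str:
    # `ask` is whatever function sends a prompt to your model and returns text.
    return ask(constrained_prompt(source_text, question))
```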

3. Break problems into Lego bricks
Big, fuzzy prompt? High hallucination risk.
So I split it (sketch after this list):
– Step 1: Extract facts.
– Step 2: Reason only from those facts.
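
A sketch of that two-step split, again with a generic `ask` function standing in for your model call (an assumption, not a particular library):

```python
from typing import Callable

def extract_facts(ask: Callable[[str], str], source_text: str) -> str:
    # Step 1: pull out atomic facts, nothing else.
    prompt = (
        "List the factual claims in the text below as short bullet points.\n"
        "Do not interpret, summarize, or add anything not stated.\n\n"
        f"TEXT:\n{source_text}"
    )
    return ask(prompt)

def reason_from_facts(ask: Callable[[str], str], facts: str, question: str) -> str:
    # Step 2: reason only from the extracted facts.
    prompt = (
        "Using ONLY the facts listed below, answer the question.\n"
        "If the facts are insufficient, say so instead of guessing.\n\n"
        f"FACTS:\n{facts}\n\nQUESTION: {question}"
    )
    return ask(prompt)

def two_step_answer(ask: Callable[[str], str], source_text: str, question: str) -> str:
    return reason_from_facts(ask, extract_facts(ask, source_text), question)
```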

4. Use AI against itself
I’ll ask (code sketch below):
– “Give 3 independent answers; now compare them.”
– “List every claim that needs fact‑checking and rate your confidence.”
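
One way to wire that up, assuming your model call has enough sampling randomness that repeated asks can disagree (the `ask` function and the prompt wording are illustrative, not a specific API):

```python
from typing import Callable, List

def independent_answers(ask: Callable[[str], str], question: str, n: int = 3) -> List[str]:
    # Ask the same question n times; disagreement between runs is a warning sign.
    return [ask(question) for _ in range(n)]

def cross_examine(ask: Callable[[str], str], question: str, answers: List[str]) -> str:
    # Have the model compare its own answers and flag claims that need checking.
    numbered = "\n\n".join(f"ANSWER {i + 1}:\n{a}" for i, a in enumerate(answers))
    prompt = (
        f"QUESTION: {question}\n\n{numbered}\n\n"
        "Compare these answers. Where do they disagree?\n"
        "List every claim that needs fact-checking and rate your confidence "
        "in each as high, medium, or low."
    )
    return ask(prompt)
```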

5. Ground it in real stuff
I feed it the sources (sketch below):
“Using only this document, answer X. If it’s not there, say ‘Not specified.’ Then quote what you used.”
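
As a sketch, the grounding prompt can be as simple as this (same caveat: `ask` is a stand-in for your own model call):

```python
from typing import Callable

def grounded_answer(ask: Callable[[str], str], document: str, question: str) -> str:
    # Answer strictly from the supplied document and require supporting quotes.
    prompt = (
        "Using only the document below, answer the question.\n"
        "If the answer is not in the document, reply exactly: Not specified.\n"
        "Otherwise, quote the sentence(s) you relied on after your answer.\n\n"
        f"DOCUMENT:\n{document}\n\nQUESTION: {question}"
    )
    return ask(prompt)
```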

Underneath all this is one meta‑skill: structured skepticism.

Not “AI is bad” and not “AI is magic,” but:
“How do I design questions, checks, and workflows so I can safely harvest the value… without outsourcing my brain?”

What would change for you if you treated every AI answer as raw material to *interrogate*, not truth to *obey*?

For more on skills for making the most of AI, visit https://linkedin.com/in/scottermonkey.

[This post was generated by Creative Robot, designed and built by Scott Howard Swain.]

Keywords: #hallucination, AI accuracy, question design, verification methods