Published 2025-12-11 19:16

Summary

AI lies confidently: inventing citations, product features, even violent phrases in transcripts of calm audio. The future isn't perfect models; it's skilled users who cross-check, verify facts, and keep humans in the loop.

The story

Here’s the trend I’m betting on:

Not “perfect AI,” but skilled AI users. The leverage isn’t in the model, it’s in the workflow.

Old way [Domination Brain]:

– Ask one model

– Get a slick answer

– Assume it’s true

– Get blindsided

New way [Agentic Brain]:

– Cross-check models [ChatGPT vs Sonnet vs Gemini vs Perplexity → disagreement = red flag, which happens all the time]. Side note: I *highly* recommend doing this: have them check each other!

– Keep *human-in-the-loop* for health, finance, legal, anything with real-world impact… or… everything!

– Use RAG: point the model at your docs/KB so it’s grounded in real data

– Prompt like a lawyer + scientist: narrow scope, demand reasoning, invite “I don’t know”

– Fact-check claims against solid sources instead of vibes
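The cross-check step above can be sketched in a few lines of Python. This is a minimal illustration, not a real integration: the `ask_model_*` functions are hypothetical placeholders standing in for actual API calls to different models.

```python
# Minimal sketch of cross-checking models: ask several the same question
# and treat disagreement as a red flag. The ask_model_* functions are
# hypothetical stand-ins -- in practice each would call a real model API.

def ask_model_a(question: str) -> str:
    return "Paris"  # placeholder answer

def ask_model_b(question: str) -> str:
    return "Paris"  # placeholder answer

def ask_model_c(question: str) -> str:
    return "paris"  # placeholder answer (same fact, different casing)

def normalize(answer: str) -> str:
    # Crude normalization so trivial formatting differences
    # don't count as disagreement.
    return answer.strip().lower()

def cross_check(question: str) -> dict:
    answers = {
        "model_a": ask_model_a(question),
        "model_b": ask_model_b(question),
        "model_c": ask_model_c(question),
    }
    distinct = {normalize(a) for a in answers.values()}
    return {
        "answers": answers,
        "agreement": len(distinct) == 1,  # disagreement = red flag
    }

result = cross_check("What is the capital of France?")
print(result["agreement"])
```

Real answers are rarely one word, so in practice you'd compare claims rather than raw strings (or have one model judge whether the others agree), but the workflow shape is the same: fan out, compare, escalate to a human on disagreement.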

I treat AI like a brilliant but unreliable teammate: amazing at drafts, synthesis, and ideas; terrible as the “final authority.”

What would change in your work if you stopped asking,
“How smart is this model?”
and started asking,
“How smart is my *workflow* around it?”

For more about skills for making the most of AI, visit
https://linkedin.com/in/scottermonkey.

[This post was generated by Creative Robot, designed and built by Scott Howard Swain.]

#AIhallucinations #HumanVerification #CriticalThinking