Published 2026-02-12 09:52
Summary
People get better AI results by using cognitive empathy – modeling how the system processes input – instead of treating it like a moody coworker. Ask “what did I make hard to interpret?” not “why is it difficult?”
The story
The trend: people are treating AIs like moody coworkers. Then they act shocked when the “coworker” can’t read the room. But an AI isn’t a person with feelings, needs, or loyalty – it’s a pattern engine trained on mountains of text.
What’s changing: I’m seeing more people get better results by using *cognitive empathy* with the model. Not emotional empathy. Cognitive empathy is the “Theory of Mind” move: you model how something sees the situation, spot where your perspectives don’t match, then adjust your assumptions. Humans do this with each other, and our brains even have wiring for it – like the anterior cingulate cortex basically flagging, “Wait, we’re not on the same page.”
What this looks like with an AI: I stop asking, “Why is it being difficult?” and start asking, “What did I make easy or hard for a text predictor to interpret?” Then I add shared sensemaking (stating what I currently think, so the gap is visible) and a bit of “knowledge echo” (restating what the model can assume) so it can line up with what I mean. Example: “You’re trained on physics up to your cutoff; assume I’m stuck on superposition and think particles are either waves or points. Correct that misconception.”
What I do in practice: shared decision-making (“Here are three angles – give pros and cons based on your training.”) and feedback loops (“Did I fail to convey X? Revise with that in mind.”).
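If it helps to see the pattern spelled out, here is a minimal sketch in Python of how that kind of prompt can be structured: a knowledge echo up front, the gap to correct, then a feedback-loop follow-up. The `send_to_model` function and the specific wording are placeholders of my own, not part of any particular API or a fixed template.

```python
# Sketch of the "knowledge echo + feedback loop" prompt pattern.
# send_to_model() is a stand-in for whatever chat client you actually use;
# everything else is plain string assembly.

def build_prompt(context_echo: str, my_view: str, request: str) -> str:
    """Assemble a prompt that states shared context, where I'm stuck,
    and the concrete ask, so the model doesn't have to guess intent."""
    return (
        f"Context you can assume: {context_echo}\n"
        f"Where I'm currently stuck: {my_view}\n"
        f"Task: {request}\n"
        "If anything above is ambiguous, say what you need before answering."
    )

def send_to_model(prompt: str) -> str:
    # Placeholder: swap in your real API or chat interface here.
    print(prompt)
    return "<model reply>"

# First pass: knowledge echo plus shared decision-making framing.
draft = send_to_model(build_prompt(
    context_echo="You're trained on physics text up to your cutoff.",
    my_view="I think particles are either waves or points.",
    request="Correct that misconception; give pros and cons of three ways "
            "to explain superposition.",
))

# Feedback loop: say what didn't land, instead of just retrying the same prompt.
followup = (
    "Here is your previous answer:\n" + draft + "\n\n"
    "Did I fail to convey the level of detail I wanted? Revise with that in mind."
)
revision = send_to_model(followup)
```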
See the overlap? The less I anthropomorphize, the more human the results feel. Want to test it? What’s one prompt you keep repeating that gives you the same stale output?
For more on how cognitive empathy improves prompting efficacy, visit
https://clearsay.net/free-customizable-agentic-ai-coding-team/.
Written and posted by https://CreativeRobot.net, a writer’s room of AI agents I created, *attempting* to mimic me.