Published 2025-11-07 14:05
Summary
After 30+ years in code and 8 years in AI, here’s the difference between getting stuck and getting results: knowing when to iterate vs when to start fresh.
The story
After 30+ years writing code and 8 years deep in AI, I’ve noticed something that separates people who get stuck from people who get results: knowing when to iterate versus when to blow it all up and start fresh.
Most people default to one or the other. They either keep tweaking the same broken prompt until it’s a Frankenstein monster of patches, or they restart at the first sign of trouble and waste time reinventing the wheel.
Here’s what I’ve learned works:
Iterate when you’re close. If the output is 70% there – right direction, wrong details – add context, tighten constraints, or break it into smaller chunks. Iteration is tuning, not rebuilding.
Start over when you hit a wall. If your prompt is a paragraph of nested exceptions, or outputs keep drifting further from what you want, you’re not fixing the problem – you’re decorating it. Strip it down. Reframe your intent. Build clean.
The real skill isn’t prompt writing. It’s *prompt thinking*. Can you break a fuzzy problem into clear, testable steps? Do you know when your context window is overloaded? Can you tell when the AI misunderstands your intent versus when you haven’t been clear about what you want?
I see this constantly in software dev and agent work. Junior devs iterate forever on bad prompts. Senior devs recognize the moment iteration stops paying off – and they restart without ego.
The people making the most of AI aren’t the ones with the fanciest templates. They’re the ones who treat prompts like experiments: clear intent, fast feedback, ruthless refinement.
Lately, when starting a new project, I spend a huge amount of time planning. It looks something like this:
(1) “Hey Claude 4.5 Sonnet – Reasoning, help me come up with a plan for this web/database application using python/flask/postgres + pgvector/js/html/css. Let’s call the application ‘2nd Foundation’. Its purpose is to be a ‘personal oracle of all my documents’ that can be fed an unlimited number of documents and provides a natural-language way for users to query it. I’d like to explore using a hybrid pre-graph RAG approach where each chunk/entity has consistent metadata parameters (like ‘type’ and ‘topic’) that enable querying for related entities without the overhead of a full graph solution.” (A rough sketch of what that metadata scheme could look like follows this list.)
(2) Examine what Claude returned. Modify as needed. Give the new plan to GPT 5 High Reasoning with a query like, “Examine this plan. Think deeply on it for as long as you need. (a) Tell me the problems with this plan. The more problems you find, the better. (b) Tell me ideas to improve this plan…”
(3) Iterate on the process above until I love the plan.
(4) Pass that plan on to my “agentic team” in Roo Code in VS Code and it begins!
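To make the “consistent metadata parameters” idea in step (1) concrete, here is a minimal sketch of what that kind of chunk store could look like. It assumes a single Postgres table with pgvector, hypothetical column names (`type`, `topic`), a placeholder connection string, and a query vector already produced by whatever embedding model you use at ingest time; it is an illustration of the pre-graph idea, not the actual 2nd Foundation schema.

```python
# Sketch of a "pre-graph" chunk store: every chunk carries the same metadata
# columns (type, topic), so related entities can be found with plain SQL
# filters plus vector similarity, instead of a full graph database.
# Table/column names and the DSN are illustrative assumptions.
import psycopg2

SCHEMA = """
CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE IF NOT EXISTS chunks (
    id        BIGSERIAL PRIMARY KEY,
    doc_id    TEXT NOT NULL,          -- source document
    type      TEXT NOT NULL,          -- e.g. 'definition', 'procedure', 'note'
    topic     TEXT NOT NULL,          -- e.g. 'billing', 'rag', 'infrastructure'
    body      TEXT NOT NULL,
    embedding VECTOR(1536)            -- dimension depends on your embedding model
);
"""

def related_chunks(conn, query_vec, topic, limit=5):
    """Hybrid lookup: filter on shared metadata, then rank by vector similarity.

    query_vec is a pgvector text literal, e.g. '[0.12, -0.03, ...]',
    produced by the same embedding model used when the chunks were ingested.
    """
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT doc_id, type, body
            FROM chunks
            WHERE topic = %s                      -- metadata filter (the "pre-graph" hop)
            ORDER BY embedding <=> %s::vector     -- cosine distance via pgvector
            LIMIT %s;
            """,
            (topic, query_vec, limit),
        )
        return cur.fetchall()

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=second_foundation")  # placeholder DSN
    with conn, conn.cursor() as cur:
        cur.execute(SCHEMA)
```

The point of the sketch is the shape, not the specifics: because every chunk shares the same small set of metadata fields, “find me related entities” becomes an ordinary WHERE clause plus a similarity sort, with no graph traversal layer to build or maintain.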
For more about Skills for making the most of AI, visit
https://clearsay.net/looking-at-using-a-coding-assistant/.
[This post was generated by Creative Robot.] Designed and built by Scott Howard Swain.
Keywords: AI prompt engineering, iterate vs restart, coding experience wisdom, AI development strategy