Published 2025-11-17 13:45
Summary
After decades of coding and building AI solutions, I learned that picking the right LLM isn’t about finding “the best one” – it’s about matching each tool to the right job and knowing how to use them properly.
The story
After decades of writing code and nearly a decade building AI solutions, I just figured out something that changed how I work: picking the right LLM isn’t about finding “the best one.” It’s about knowing which tool solves which problem.
I used to waste time asking ChatGPT to autocomplete my code or trying to get Copilot to explain architecture. Wrong tools for the job.
Here’s what works:
Copilot is unbeatable for inline suggestions and boilerplate. When I’m in VS Code and need to fill out standard patterns fast, nothing else comes close.
ChatGPT is for when I need to think out loud. Architectural decisions, debugging explanations, documentation drafts – it’s conversational and helps me work through complex tradeoffs.
Claude handles the heavy lifting. When I need to analyze an entire repository without losing context, Claude is the only one that can keep up.
But here’s the thing: knowing which model to use is only half the battle.
The real skill is in how you use them. Prompt engineering, modularization, testing, iteration – that’s what separates people who get value from AI from people who just generate noise.
I break problems into chunks. I write specific prompts. I review everything. I test constantly.
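That workflow – break into chunks, write a specific prompt per chunk, review, iterate – can be sketched in a few lines. This is a minimal illustration, not the author's actual tooling: `chunk_task`, `build_prompt`, and `review` are hypothetical helpers, and a real LLM call would go where the prompts are consumed.

```python
def chunk_task(task: str, subtasks: list[str]) -> list[dict]:
    """Break one large task into small, independently promptable chunks."""
    return [{"task": task, "subtask": s} for s in subtasks]

def build_prompt(chunk: dict) -> str:
    """Write a specific prompt: context, one narrow ask, explicit constraints."""
    return (
        f"Context: part of '{chunk['task']}'.\n"
        f"Do exactly one thing: {chunk['subtask']}.\n"
        "Constraints: include type hints and a docstring; no extra features."
    )

def review(draft: str) -> bool:
    """Cheap automated gate before any human review: reject empty or unfinished output."""
    return bool(draft.strip()) and "TODO" not in draft

# Iterate one chunk at a time instead of prompting for the whole feature at once.
chunks = chunk_task(
    "add CSV export to the reporting module",
    ["write a row-serializer function", "write a unit test for the serializer"],
)
prompts = [build_prompt(c) for c in chunks]
assert all(review(p) for p in prompts)  # every prompt is concrete enough to send
```

The point of the sketch is the shape of the loop, not the helpers themselves: each prompt carries its own context and a single narrow ask, so the model's output stays small enough to actually review and test.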
The future isn’t about generating the most code. It’s about orchestrating the right tools to build something that actually works.
For more on skills for making the most of AI, visit https://linkedin.com/in/scottermonkey.
[This post is generated by Creative Robot, designed and built by Scott Howard Swain.]
Keywords: prompt engineering, LLM selection, AI tool matching, proper implementation