Getting Good Results from Claude Code: Writing Good Prompts
The key to getting good results from Claude Code (and similar LLM programming tools) is writing good prompts. This is one area where your own programming expertise comes into play; you need to use it to guide the LLM and nudge it in the right direction.
Good prompts are clear, explicit, and specific. There's certainly overlap between these, but let me provide some definitions which may help to solidify these concepts:
Clear: the prompt should clearly describe your intent and the problem you're solving. Don't just say "users are getting logged out too fast"; explain what's broken: "Users are being logged out after 5 minutes even though the session timeout should be 60 minutes." Surface-level requests get surface-level results.
Explicit: the prompt should contain any details that are important to you. If you care about test coverage, say so: "Add unit tests for the new validation logic." If you need errors presented in a specific way, spell that out: "Return specific error messages, not generic 500 errors."
Specific: the prompt should specify the scope and direction of the work. Instead of "the UI is sluggish when typing," you might write: "Typing and scrolling the search results are slow when many results are loaded. Look for anything that might block the main thread as the user types, and make it async instead."
Note that "concise" is not one of these qualities! Verbosity for its own sake should be avoided; verbosity that guides the LLM is helpful. The LLM cannot read your mind, it doesn't have a mental model like you do, and it won't make the same assumptions and connections that you will. So concision that sacrifices any of these qualities must be avoided.
Let me give you a concrete example from my own work with Claude Code on a personal project: a SwiftUI search user interface that unifies searching files on my local computer (macOS Spotlight) and my NAS (Elasticsearch + FSCrawler). The specific bug I needed to fix was that typing and interacting with the search results list were very sluggish when there were hundreds of results.
I went through several rounds of attempted fixes with prompts like "Typing in the search box is sluggish when there are many results. Find and fix the bug causing this." This resulted in some small improvements and half-measures, but the bug always persisted.
I got good results immediately when I followed the guidance above and gave Claude Code a more verbose prompt that clearly spelled out the problem and how I wanted it solved:
Typing, scrolling, and selecting search results are all sluggish when there are hundreds of search results. It feels like something is blocking the main thread unnecessarily. Find anything that might be blocking the main thread during these user interactions. Everything like that should be async instead!
This isn't the exact prompt, since the actual one is long since buried, but you get the gist. Telling Claude Code how to fix the problem (find anything blocking the main thread and make it asynchronous) rather than just telling it there was a problem made all the difference. Guiding the LLM toward a solution approach, not just describing the symptom, can be what separates iterating endlessly from solving the problem in one shot.
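To make the "find main-thread blocking and make it async" idea concrete, here is a minimal sketch of the kind of change involved. This is hypothetical code written under assumptions about the project; the names SearchResult, runSearchQuery, and SearchViewModel are illustrative stand-ins, not what Claude Code actually produced.

import SwiftUI

// Hypothetical sketch: push expensive search work off the main actor and hop
// back to the main thread only to publish the results.

struct SearchResult: Identifiable, Sendable {
    let id = UUID()
    let path: String
}

// Stand-in for an expensive, synchronous query (e.g. Spotlight or Elasticsearch).
func runSearchQuery(_ text: String) -> [SearchResult] {
    // Imagine a slow, blocking call here.
    return []
}

@MainActor
final class SearchViewModel: ObservableObject {
    @Published var results: [SearchResult] = []

    // Before (the sluggish version): the query ran directly in the text
    // field's change handler, blocking the main thread on every keystroke.
    //
    //   func queryChanged(_ text: String) {
    //       results = runSearchQuery(text)
    //   }

    // After: the blocking work runs on a detached task; only the final
    // assignment to `results` happens back on the main actor.
    func queryChanged(_ text: String) {
        Task {
            let hits = await Task.detached(priority: .userInitiated) {
                runSearchQuery(text)
            }.value
            results = hits
        }
    }
}

The point isn't this particular code; it's that the prompt named the pattern (main-thread blocking) and the remedy (make it async), which gave Claude Code a concrete target instead of a vague symptom.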