10 Things I Wish I Knew on Day One

These aren’t beginner tips. If you’ve been using Claude for a while and still feel like you’re leaving value on the table, this is for you.


1. Be specific — vague prompts get vague answers

Claude will always return something. The problem is that when your prompt is ambiguous, Claude has to guess what you wanted, and it guesses toward the average of what people usually want. That average is rarely what you need.

Specificity isn’t about length. It’s about eliminating ambiguity.

Vague:

Help me with my email.

Specific:

I need to decline a vendor's proposal without closing the door on
future work. Keep it under 100 words. Professional but warm tone.
No corporate filler phrases.

The second prompt takes 20 more seconds to write and produces something you might actually send.


2. Give examples of what good output looks like

You can describe the style you want in a paragraph, or you can just show an example. The example works better, every time.

If you have a previous response that nailed it, say so. If you have an external piece of writing you want to emulate, paste a sample.

Write a product update in the style of this example:
"We shipped dark mode today. You can toggle it in Settings > Appearance.
We also fixed the bug where attachments would sometimes fail silently.
If you find anything broken, reply to this email — we read every one."
Now write an update about our new file export feature.

Claude will match the voice, length, and structure of the example far more reliably than it will match a description of those things.
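If you generate prompts programmatically, the same show-don't-tell trick is easy to script. A minimal sketch in Python; the function name and sample strings are my own for illustration, not from any SDK:

```python
def few_shot_prompt(instruction: str, examples: list[str], task: str) -> str:
    """Build a prompt that shows examples of good output before stating the task."""
    shown = "\n\n".join(f'Example:\n"{ex}"' for ex in examples)
    return f"{instruction}\n\n{shown}\n\nNow: {task}"

prompt = few_shot_prompt(
    instruction="Write a product update in the style of these examples.",
    examples=["We shipped dark mode today. You can toggle it in Settings > Appearance."],
    task="Write an update about our new file export feature.",
)
```

The point is the structure: instruction, then concrete examples, then the actual request at the end.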


3. Tell Claude what NOT to do — constraints are powerful

Most people tell Claude only what they want. The other half of the instruction is what you don't want, and those constraints often matter more than the primary request.

Summarize this report for a board presentation.
Constraints:
- No jargon or technical terms
- No more than 5 bullet points
- Don't soften the negative findings — present them plainly
- Don't include any numbers that aren't in the original report

Without constraints, Claude defaults to conventions that may not match your situation. With constraints, you’re closing the gap between what Claude assumes and what you actually need.
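If you build prompts in code, constraints can be a first-class argument rather than something you retype each time. A small sketch; `constrained_prompt` is a hypothetical helper, not a library function:

```python
def constrained_prompt(task: str, constraints: list[str]) -> str:
    """Append an explicit 'Constraints:' list to a task description."""
    lines = "\n".join(f"- {c}" for c in constraints)
    return f"{task}\nConstraints:\n{lines}"

prompt = constrained_prompt(
    "Summarize this report for a board presentation.",
    [
        "No jargon or technical terms",
        "No more than 5 bullet points",
        "Don't soften the negative findings",
    ],
)
```

Keeping constraints in a list also makes it easy to reuse the same set across many prompts.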


4. “Think step by step” actually works

For any task that requires reasoning — not just retrieval or generation — asking Claude to work through the problem step by step before giving an answer measurably improves the result.

This isn’t magic. It’s the same reason it helps humans to write out their thinking before drawing a conclusion. The intermediate steps catch mistakes that would otherwise get buried in a confident-sounding wrong answer.

Without it:

Should I take the higher-paying job or stay at my current company?
[gives recommendation without showing reasoning]

With it:

I'm deciding between two job offers. Here are the details: [...]
Think through the factors I should weigh before making a recommendation.
Then give me your recommendation with the reasoning that led to it.

The second version surfaces considerations you might not have thought of, and shows you where the model’s reasoning might diverge from your priorities.


5. Don’t accept the first answer — ask Claude to improve it

The first response is a draft. Treating it as the final answer is one of the most common mistakes.

You don’t have to know exactly what’s wrong. Just say what’s off:

Good start. A few issues:
- The opening is too formal — it reads like a press release
- The second paragraph can be cut entirely
- The conclusion needs a clear call to action
Revise with those changes.

Or go meta:

What would make this response better? Then apply those improvements.

Iteration is expected. Two rounds of feedback usually gets you somewhere that one prompt never would.


6. Put your standing context in CLAUDE.md

If you’re using Claude Code, CLAUDE.md is a file Claude reads automatically at the start of every session. Anything you put in it, Claude knows without you having to say it again.

This is where you stop re-explaining yourself.

# My Project
## Context
This is a Python FastAPI backend for a B2B SaaS product.
Our users are non-technical. Assume they'll read any user-facing text.
## Conventions
- Use snake_case for variables, PascalCase for classes
- All API routes must include input validation
- Log errors to our logging service, not print statements
## Preferences
- Show me your implementation plan before writing code
- Ask if you're not sure about the scope of a change
- Keep functions under 30 lines; extract helpers if needed

Without this file, you re-explain your stack, your conventions, and your preferences on every session. With it, Claude starts with full context. This compounds over time — a good CLAUDE.md makes every session better than the last.


7. Claude Code can read your files — tell it to “look at” things

One of the biggest mistakes when using Claude Code is manually copying and pasting code into the chat. You don’t need to. Claude Code has direct access to your filesystem and will read files when you ask.

Instead of:

Here's my app.py: [paste 300 lines of code]

Say:

Look at src/app.py and explain how the authentication middleware works.

Or:

Before we change anything, read the last 3 files I modified and tell
me what you understand about the current state of the project.

This is faster, keeps your context cleaner, and means Claude is always looking at the actual file rather than a snapshot you pasted 20 minutes ago.


8. Match the model to the task

Running every task on Opus because you want the best quality is like driving a Formula 1 car to get groceries. The overhead is real and the benefit is often not.

| Task | Use this |
| --- | --- |
| Classifying emails, routing requests, simple extraction | Haiku |
| Most coding, writing, analysis, conversation | Sonnet |
| Complex multi-step reasoning, ambiguous hard problems | Opus |
| When Sonnet's answer seems incomplete or confused | Opus |

The practical version: default to Sonnet. When you notice the output quality isn’t meeting the task, switch to Opus. When you’re running something 50 times in a loop, switch to Haiku.
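That routing rule is simple enough to encode directly if you call the API programmatically. A sketch; the model ID strings below are deliberate placeholders, so substitute the current names from Anthropic's model documentation before any real API call:

```python
# Placeholder model IDs -- check Anthropic's docs for the current names.
MODELS = {
    "haiku": "claude-haiku-placeholder",
    "sonnet": "claude-sonnet-placeholder",
    "opus": "claude-opus-placeholder",
}

def pick_model(task_kind: str, retry_after_weak_answer: bool = False) -> str:
    """Route a task to a tier: Haiku for bulk work, Sonnet by default, Opus for hard cases."""
    if retry_after_weak_answer:
        # Sonnet's answer seemed incomplete or confused -- escalate.
        return MODELS["opus"]
    if task_kind in {"classification", "routing", "extraction"}:
        return MODELS["haiku"]
    if task_kind in {"hard-reasoning", "ambiguous"}:
        return MODELS["opus"]
    return MODELS["sonnet"]
```

The escalation flag mirrors the last row of the table: you only pay for Opus after Sonnet has demonstrably fallen short.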


9. Chain tasks — the output of one prompt becomes input of the next

Claude can’t do ten things well in one prompt. It can do one thing well ten times.

Instead of:

Research the competitive landscape for my product, identify our
top 3 competitors, analyze their pricing, find their weaknesses,
and write a competitive positioning document.

Do it in stages:

# Prompt 1
Research [product category]. Who are the top 5 competitors?
List them with a one-sentence description each.
# Prompt 2 (using output from Prompt 1)
For each of these competitors, what is their pricing model?
[paste the list from Prompt 1]
# Prompt 3
Based on what we know about these competitors, what are their
most common weaknesses and pain points for customers?
[paste relevant output]
# Prompt 4
Write a competitive positioning document for my product that
emphasizes the areas where competitors are weak.
[paste the accumulated research]

Each step is a focused task. The accumulated context gets richer. The final output is better than anything you could have gotten in one shot.
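Chains like this are worth scripting if you run them often. The sketch below stubs out the API call with an echo function so the pipeline shape is visible; in real use, `ask` would wrap a call to the Anthropic SDK, and `[product category]` stays a placeholder you fill in:

```python
def ask(prompt: str) -> str:
    """Stand-in for a real API call; echoes the prompt so the chain can run offline."""
    return f"[response to: {prompt[:40]}...]"

# Each stage feeds its output into the next prompt, mirroring Prompts 1-4 above.
competitors = ask("Research [product category]. Who are the top 5 competitors?")
pricing = ask(f"For each of these competitors, what is their pricing model?\n{competitors}")
weaknesses = ask(f"What are their most common weaknesses for customers?\n{pricing}")
doc = ask(
    "Write a competitive positioning document emphasizing where competitors are weak.\n"
    + "\n".join([competitors, pricing, weaknesses])
)
```

Because each stage is a plain function call, you can inspect or correct any intermediate result before it feeds the next step.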


10. When stuck, ask Claude how to ask Claude

If you’re struggling to get what you want, ask Claude to help you write the prompt.

I'm trying to get Claude to help me prepare for a difficult
performance review conversation with an employee. I keep getting
generic HR advice. How should I reframe my prompt to get more
practical, specific guidance?

Or:

Here's the prompt I used: [paste prompt]
Here's what I got: [paste response]
Here's what I actually wanted: [describe it]
How should I rewrite the prompt to get that result?

Claude knows its own tendencies. It knows what kinds of prompts produce what kinds of outputs. Treating it as a collaborator on the prompting process itself often unlocks the answer faster than trial and error.


!!! tip "The underlying principle"

    Almost every tip on this page is a version of the same idea: Claude performs best when it has clear context, clear constraints, and room to think. The gaps between what you mean and what you say are where quality gets lost. The techniques here are just ways to close those gaps.