Talking to AI Is a Skill
Here’s something most people don’t realize: the difference between a mediocre AI response and a great one is usually in the prompt, not the model.
AI responds to how you ask, not just what you ask. Two people asking the same question can get dramatically different results based on the phrasing, context, and structure they provide.
The good news: this is learnable. You don’t need to take a course. You just need a few mental models and some practice.
Level 1 — Just Ask
The easiest starting point. You don’t need to do anything fancy. Just ask clearly.
Most people actually underuse AI at this level — they don’t realize they can ask about almost anything.
```
What's the difference between a Roth IRA and a traditional IRA?

How do I write a for loop in Python?

What are some good questions to ask in a job interview for a
product manager role?
```

These work fine for general questions. The problem is when the task requires specific context that the model doesn’t have — which is most real tasks.
Level 2 — Add Context
This is where most people see a step-change improvement.
Claude doesn’t know who you are, what you’re working on, or what you actually need. If you tell it, the output improves dramatically.
Without context:
```
Help me write a follow-up email.
```

You’ll get a generic template that doesn’t match your situation.
With context:
```
I had a job interview on Monday for a senior marketing role at a
fintech startup. The interview went well but I haven't heard back
in 5 days. Write a brief, professional follow-up email that
expresses continued interest without coming across as desperate.
Keep it under 100 words.
```

You’ll get something close to ready to send.
The context you add doesn’t have to be long — it just has to be specific. Answer these implicitly with every prompt:
- Who am I (what’s my background or role)?
- What am I actually trying to accomplish?
- What do I already have or know?
- What does “good” look like?
Level 3 — Structured Prompts
For anything more complex, structure your prompt explicitly. This isn’t bureaucratic — it just removes ambiguity.
The template:
```
Role: [who you want Claude to be]
Context: [relevant background information]
Task: [what you need]
Format: [how you want the output structured]
Constraints: [what to avoid or limits to respect]
```

You don’t have to use these labels. But thinking through each piece before you write improves output consistently.
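If you build prompts programmatically, the template translates directly into a small helper. Here is a minimal sketch — the function name and field order are illustrative, not any standard API:

```python
def structured_prompt(role, context, task, fmt, constraints):
    """Assemble a prompt from the five template fields.

    Illustrative helper only -- the labels mirror the template above,
    not an official prompt format.
    """
    sections = [
        ("Role", role),
        ("Context", context),
        ("Task", task),
        ("Format", fmt),
        ("Constraints", constraints),
    ]
    # Skip any field left empty; the labels themselves are optional anyway.
    return "\n\n".join(f"{label}: {text}" for label, text in sections if text)

prompt = structured_prompt(
    role="You are a senior software engineer.",
    context="I'm a junior developer shipping a Python CSV loader.",
    task="Review the following code and suggest improvements.",
    fmt="List issues by priority (high/medium/low).",
    constraints="Keep the same overall approach.",
)
```

The point is not the code itself but the habit: forcing yourself to fill in five named slots surfaces the context you would otherwise forget to include.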
Example — getting a code review:
```
You are a senior software engineer with a focus on code readability
and maintainability.

Context: I'm a junior developer working on a Python script that
processes CSV files and loads them into a database. This is going
into production and will be maintained by others.

Task: Review the following code and suggest improvements.

Format: List issues by priority (high/medium/low). For each issue,
explain what's wrong and show the corrected version.

Constraints: Keep the same overall approach — don't rewrite it from
scratch.

[paste code here]
```

Example — planning a difficult conversation:
```
You are an experienced executive coach who helps people navigate
difficult workplace conversations.

Context: I need to tell a contractor we're ending their engagement.
They've been with us 8 months and their work has been fine —
the project is just winding down. I want to be kind and clear,
not give false hope about future work.

Task: Help me plan what to say in this conversation.

Format: Give me an opening statement, 2-3 likely responses they
might have, and how I should handle each.

Constraints: Keep the tone warm but direct. No corporate jargon.
```

Level 4 — Chain-of-Thought
For complex reasoning tasks, explicitly asking the model to think through a problem before answering improves accuracy.
Add phrases like:
- “Think through this step by step before giving your answer”
- “Walk me through your reasoning”
- “What are all the factors to consider here before making a recommendation?”
Why it works: It forces the model to “show its work” rather than jumping to the first plausible answer. The intermediate reasoning steps catch mistakes that would otherwise make it into the final response.
When to use it:
- Diagnosing a problem (bug, business issue, medical symptom)
- Complex trade-off decisions
- Math or logic problems
- Anywhere the first-instinct answer might be wrong
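If you reuse this technique often, the phrasing can be appended mechanically. A minimal sketch — the helper and its default phrase are illustrative, not a library function:

```python
COT_SUFFIX = "Think through this step by step before giving your answer."

def with_chain_of_thought(prompt, suffix=COT_SUFFIX):
    """Append a chain-of-thought instruction to an existing prompt."""
    return f"{prompt.rstrip()}\n\n{suffix}"

question = "Should I refinance at 6.1% if my current rate is 6.8%?"
print(with_chain_of_thought(question))
```

The same one-line suffix works for diagnosis, trade-off, and planning prompts alike, which is why it earns a place in a reusable helper.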
Example:
```
I'm trying to decide whether to refinance my mortgage. Current rate
is 6.8%, I could get 6.1% with a no-closing-cost refi. I plan to
stay in the house at least 5 more years.

Think through all the factors I should consider before giving me a
recommendation. Then give me your recommendation with reasoning.
```

Level 5 — System Prompts
A system prompt is a set of instructions that shapes how Claude behaves throughout an entire conversation — before the conversation even starts.
You’ve already experienced this if you’ve used any AI-powered product. When you use a customer service bot that stays on-topic, or a writing assistant that always responds in a particular style, that’s a system prompt at work.
For regular users on claude.ai: You can set persistent instructions in your profile settings. Claude will follow them across all your conversations.
Example system prompt for personal use:
```
You are my personal research assistant. My background is in finance
and I'm technically literate but not a software engineer. When I
ask about code, explain it conceptually and comment the code heavily.

When you're uncertain about a fact, say so explicitly. I'd rather
know you're unsure than get confident-sounding wrong information.

Default to concise responses — bullet points over paragraphs.
Ask clarifying questions when my request is ambiguous rather than
guessing.
```

For developers: System prompts are set via the API and are invisible to end users. They define the persona, constraints, and behavior of custom AI applications.
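For a concrete picture of the developer path, here is a hedged sketch of assembling the parameters for an Anthropic Messages API call. The helper function is illustrative (not part of the SDK), and the model name is a placeholder — check the current API documentation:

```python
def build_message_request(system_prompt, user_message,
                          model="claude-sonnet-4-0", max_tokens=500):
    """Assemble keyword arguments for a Messages API call.

    Illustrative only: the resulting dict would be passed as
    client.messages.create(**params) with the `anthropic` Python SDK.
    The model name here is a placeholder, not a recommendation.
    """
    return {
        "model": model,
        "max_tokens": max_tokens,
        "system": system_prompt,  # shapes every turn; invisible to the end user
        "messages": [{"role": "user", "content": user_message}],
    }

params = build_message_request(
    "You are my personal research assistant. Default to concise answers.",
    "Explain what a REST API is.",
)
```

Note that the system prompt travels in its own `system` field, separate from the `messages` list — which is what lets it govern the whole conversation rather than a single turn.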
Common Mistakes
These are the things that reliably produce bad results:
Being too vague
```
# Bad
Make this better.
```

```
# Good
Rewrite this for clarity. The audience is non-technical executives.
Cut it to under 200 words. Keep the main point in the first sentence.
```

Asking too many things at once
```
# Bad
Can you help me write a blog post, suggest some keywords for SEO,
give me a content calendar for the next month, and also review my
existing site?
```

```
# Good
Let's start with the blog post. Here's my topic and audience...
[handle one thing, then move to the next]
```

Not providing examples
If you want a specific style or format, show an example of what “good” looks like. Don’t just describe it — show it.
```
Write a product update email in this style:

[paste an example of an email you like]
```

Giving up after one try
The first response is a starting point, not the final answer. If it’s off, say what’s wrong:
```
Good start, but it's too formal for our audience. Make it sound more
like a friend giving advice than a consultant writing a report.
```

The 80/20 of Prompting
Five techniques that get you 80% of the way there:
1. Be specific about what you want
Vague in = vague out. The more specific your request, the more useful the output. Don’t say “write me something.” Say what it’s for, who it’s for, and what it needs to do.
2. Give examples of good output
“Write it like this” followed by an example is more effective than a paragraph of description. If you have something that worked before, use it as a model.
3. Tell it the format you want
Do you want bullet points? A table? A numbered list? Paragraphs? Three options? A single recommendation? Say it explicitly. Otherwise you’re leaving it up to the model.
4. Say who the audience is
“Explain this to a 10-year-old” and “explain this to a PhD economist” produce completely different responses. Telling Claude who will read the output shapes vocabulary, depth, and tone automatically.
5. Ask it to think step by step for complex tasks
For anything that requires reasoning — analysis, diagnosis, planning — add “think through this step by step” before asking for the answer. It catches more mistakes and produces more thorough responses.
!!! example "Try it right now"

    Take something you’ve already asked an AI that gave you a mediocre response. Rewrite the prompt using the Level 3 template (role, context, task, format, constraints). Compare the results.