Every AI tool in your stack — Cowork, Claude Code, Cursor, you name it — is downstream of a foundation model. If you only learn one this year, learn Claude. The good news for technical practitioners: the on-ramp is short, and the ceiling is high. Here is the path I used and the path I now recommend to engineers on my teams.
Start in the Chat Window
Open claude.ai, sign in, and use it for a week as your default thinking partner. Paste in a gnarly architecture diagram, an ambiguous PRD, a stack trace, a contract clause — whatever you would normally Slack a senior teammate about. The chat UI is the lowest-friction surface to build intuition for what Claude is good at (synthesis, code review, structured writing, judgment calls) and where it needs guardrails. Treat this week as calibration, not output.
Master the Prompt
Prompting is a real skill, and Claude rewards structure. Four moves get you 80% of the way: (1) put role and goal at the top, (2) wrap inputs in XML tags like <document>, <requirements>, <example>, (3) give one or two worked examples of the output you want, and (4) ask for the answer inside specific tags so it is easy to extract. Read Anthropic’s prompting guide once, then steal patterns from your own best chats. If a prompt works twice, save it.
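Those four moves can be sketched as a reusable template. This is a minimal illustration, not an API requirement: the tag names, role line, and regex extraction are all conventions you are free to change.

```python
import re

def build_prompt(document, requirements):
    """Assemble a prompt using the four moves: role, tagged inputs,
    a worked example, and a request for tagged output."""
    return (
        "You are a senior staff engineer reviewing a design doc.\n"        # (1) role and goal up top
        f"<document>\n{document}\n</document>\n"                           # (2) inputs wrapped in XML tags
        f"<requirements>\n{requirements}\n</requirements>\n"
        "<example>\nVerdict: approve. Risks: none blocking.\n</example>\n" # (3) one worked example
        "Reply with your review inside <review> tags."                     # (4) ask for tagged output
    )

def extract_review(reply):
    """Pull the answer out of the tags the prompt asked for."""
    match = re.search(r"<review>(.*?)</review>", reply, re.DOTALL)
    return match.group(1).strip() if match else None
```

The payoff of move (4) is the extraction step: downstream code never has to parse free-form prose, only the contents of a known tag.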
Hit the API
The chat window is the showroom. The API is the engine. Grab a key from console.anthropic.com, install the Python SDK with pip install anthropic, and make your first call in under five minutes:
```python
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from env

resp = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    system="You are a senior staff engineer reviewing code for clarity and risk.",
    messages=[
        {"role": "user", "content": "Review this function and flag any concerns:\n\ndef wire_transfer(amount, dest):\n    db.execute(f\"INSERT INTO txns VALUES ({amount}, '{dest}')\")"}
    ],
)
print(resp.content[0].text)
```
If you prefer the terminal, the same call in curl:
```shell
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-sonnet-4-6",
    "max_tokens": 1024,
    "system": "You are a senior staff engineer.",
    "messages": [{"role": "user", "content": "What is wrong with this SQL: INSERT INTO txns VALUES ({amount}, ...)?"}]
  }'
```
Once you can hit the API in a Jupyter notebook or a tiny Node script, you are an AI engineer — everything else is composition.
Use Tool Use and Extended Thinking
This is where Claude separates from a chatbot and becomes infrastructure. Tool use lets the model call functions you define — query a database, hit an internal API, run a calculation — and weave the results into its answer. Define a tool, hand it to the model, and let it decide when to call:
```python
tools = [{
    "name": "get_open_pull_requests",
    "description": "Returns open PRs assigned to the user.",
    "input_schema": {
        "type": "object",
        "properties": {"username": {"type": "string"}},
        "required": ["username"],
    },
}]

resp = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What PRs am I waiting on? My handle is bradfordb."}],
)
# Inspect resp.content for tool_use blocks; execute the tool, return tool_result, loop.
```
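That final comment compresses the agentic loop into one line. Here is a minimal sketch of the dispatch step, with the tool_use block shown as a plain dict for clarity; the handler, its stubbed PR data, and the block id are all invented for illustration.

```python
import json

def get_open_pull_requests(username):
    """Hypothetical local implementation of the tool declared above.
    A real version would query your Git host's API."""
    return [{"number": 481, "title": "Fix flaky retry test"}]  # stubbed data

HANDLERS = {"get_open_pull_requests": get_open_pull_requests}

def handle_tool_use(block):
    """Execute one tool_use block and build the tool_result message
    to send back to the model on the next turn."""
    result = HANDLERS[block["name"]](**block["input"])
    return {
        "role": "user",
        "content": [{
            "type": "tool_result",
            "tool_use_id": block["id"],   # ties the result to the model's request
            "content": json.dumps(result),
        }],
    }
```

Append that message to the conversation, call the API again, and repeat until the response contains no more tool_use blocks; that loop is the whole agent.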
Extended thinking gives the model space to reason before it speaks — turn it on with thinking={"type": "enabled", "budget_tokens": 4000} on hard problems. Layer in prompt caching for long static context (policy docs, codebases) and you cut latency and cost dramatically. Build one small agent that uses all three and you will understand modern AI architecture better than 90% of LinkedIn.
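Both features are request-level switches, so they compose in a single call. A sketch of one request that combines them, assuming the placement described above; the policy text and question are placeholders, and note that max_tokens must exceed the thinking budget.

```python
# Placeholder for your real long static context (policy docs, a codebase, etc.).
long_policy_doc = "…thousands of tokens of policy text…"

request = dict(
    model="claude-sonnet-4-6",
    max_tokens=4096,  # must be larger than the thinking budget below
    # Extended thinking: a private reasoning budget before the visible answer.
    thinking={"type": "enabled", "budget_tokens": 4000},
    system=[{
        "type": "text",
        "text": long_policy_doc,
        # Prompt caching: mark the static prefix so repeat calls reuse it.
        "cache_control": {"type": "ephemeral"},
    }],
    messages=[{"role": "user", "content": "Does clause 4.2 permit retries?"}],
)
# resp = client.messages.create(**request)
```

The design point: keep the cacheable material first and stable across calls, and let only the final user message vary, so every request after the first pays for the question, not the context.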
Pick a Project and Ship It
You will not learn Claude by reading about Claude. Pick a small, real project this week — a meeting-notes summarizer, a resume tailoring tool, an inbox triager, a code-review bot for your own repos — and ship it end-to-end. Constrain it to something you would actually use. The shipping is the lesson: you will hit rate limits, wrestle with token budgets, discover that your prompts were not as deterministic as you thought, and learn to write evals. That feedback loop is where competence is built.
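Writing evals does not require a framework to start: a list of labeled cases and a pass count is enough to catch regressions when you change a prompt. A toy sketch; the classifier stub and the cases are invented, and a real run_prompt would call the Messages API instead of string matching.

```python
def run_prompt(ticket_text):
    """Stand-in for your Claude-backed ticket classifier.
    Stubbed with a keyword rule so the sketch is self-contained."""
    return "bug" if "crash" in ticket_text.lower() else "feature"

# Labeled cases: (input, expected label). Grow this list from real failures.
EVAL_CASES = [
    ("App crashes on login", "bug"),
    ("Please add dark mode", "feature"),
]

def run_evals():
    """Return (passed, total) so a prompt change can be judged at a glance."""
    passed = sum(run_prompt(text) == expected for text, expected in EVAL_CASES)
    return passed, len(EVAL_CASES)
```

Run it before and after every prompt edit; the moment a score drops, you know exactly which case broke.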
In Conclusion
Learning Claude is a one-week investment that pays for years. Spend the first few days in the chat window getting calibrated, the next few writing real prompts and hitting the API, and the weekend shipping something small. By the time you finish, the rest of the AI stack — Cowork, Claude Code, Cursor, agentic frameworks — will feel like variations on a theme you already understand. Up next in this series: how to use Claude Cowork to extend that intuition into your day-to-day desktop workflow.