Claude Code

Claude Code is a command-line tool from Anthropic that allows developers to delegate coding tasks to Claude directly from their terminal. At CDS, we use Claude Code as a core part of our development workflow.

Why we use it

Claude Code allows us to work faster without sacrificing quality. It's particularly effective for:

  • Scaffolding projects — generating boilerplate, configuration files, and folder structures
  • Writing and editing content — such as the Markdown pages in this very handbook
  • Code generation and refactoring — writing implementations from requirements, modernising legacy code
  • Debugging — explaining errors, suggesting fixes, and working through complex issues
  • Documentation — generating READMEs, ADRs, and inline documentation
  • Testing — writing test suites, generating test cases, and running tests to verify changes
  • Exploratory prototyping — quickly proving out technical approaches before committing to them

Setting up a project for success

Use CLAUDE.md files

Every repository should include a CLAUDE.md file at the root. This gives Claude Code the context it needs about the project — tech stack, conventions, folder structure, build commands, and any specific instructions.

A good CLAUDE.md file includes:

  • Project overview — what it is, what it does, who it's for
  • Tech stack — languages, frameworks, key dependencies
  • Repository structure — where things live and why
  • Build and test commands — how to install, build, run, and test
  • Conventions — naming, branching strategy, coding standards, PR expectations
  • Things to avoid — common mistakes, restricted features, known pitfalls

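As a starting point, the sections above can be scaffolded into a skeleton file. This is a sketch, not a template mandate: the heading names mirror this handbook's list, and the placeholder text should be replaced with project-specific detail.

```shell
# Scaffold a CLAUDE.md skeleton at the repository root.
# Each placeholder line should be replaced with real project detail.
cat > CLAUDE.md <<'EOF'
# CLAUDE.md

## Project overview
One paragraph: what it is, what it does, who it's for.

## Tech stack
Languages, frameworks, key dependencies.

## Repository structure
Where things live and why.

## Build and test commands
How to install, build, run, and test.

## Conventions
Naming, branching strategy, coding standards, PR expectations.

## Things to avoid
Common mistakes, restricted features, known pitfalls.
EOF
```

Committing even a skeleton like this is better than nothing: the agent reads it at the start of every session, and the empty sections prompt you to fill in what it's missing.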
This handbook's own CLAUDE.md is a working example of the approach. It saves time on every session because the agent starts with full context instead of having to discover it.

As your project evolves, keep the CLAUDE.md file current. Outdated instructions are worse than no instructions — they'll actively steer the agent in the wrong direction.

First run the tests

Any time you start a new session against an existing project, begin with:

Run the tests

These three words serve several purposes:

  1. Discovery — the agent finds the test suite and learns how to run it, making it far more likely to run the tests again later to check its own work.
  2. Scale — test harnesses report how many tests exist, giving the agent a sense of the project's size and complexity.
  3. Mindset — having run the tests, the agent naturally tends to write and extend tests for its own changes.

This is a small habit with outsized impact. It sets the tone for the entire session.

Working effectively with Claude Code

Be specific with your prompts

The more context you give, the better the output. Include details about the tech stack, coding standards, and the specific outcome you want. A vague prompt produces vague code.

Instead of:

Add a search feature

Try:

Add full-text search to the products API endpoint using the existing ElasticSearch service. Follow the same patterns used in the orders search endpoint. Include unit tests.

Use red/green TDD

Test-driven development is a natural fit for coding agents. The pattern is simple:

  1. Write the tests first — describe the behaviour you want and have the agent write failing tests
  2. Confirm they fail (red) — this proves the tests are actually exercising something new
  3. Implement the code to make them pass (green) — the agent iterates until the tests go green

This approach protects against two common agent mistakes: writing code that doesn't work, and writing code that's unnecessary. It also builds a regression suite that catches future breakages.

Capable models understand "use red/green TDD" as shorthand for this entire discipline. It's a powerful four-word instruction.
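The loop is easiest to see on a toy example. The sketch below uses a hypothetical greet.sh script (both file names are made up for illustration): the test is written first and fails because the script doesn't exist (red), then the minimal implementation makes it pass (green).

```shell
# Red: write the test first, for a greet.sh that doesn't exist yet.
cat > test_greet.sh <<'EOF'
#!/bin/sh
out=$(sh greet.sh World)
[ "$out" = "Hello, World" ] || { echo "FAIL: got '$out'"; exit 1; }
echo PASS
EOF

# Running it now fails, which proves the test exercises something new.
sh test_greet.sh || echo "red: test fails as expected"

# Green: the minimal implementation that makes the test pass.
cat > greet.sh <<'EOF'
#!/bin/sh
echo "Hello, $1"
EOF

sh test_greet.sh
```

With an agent, you don't run these steps by hand: you ask it to write the failing test, confirm the failure, then iterate on the implementation until the suite passes.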

Have the agent test its own work

Automated tests aren't the only form of verification. Have the agent manually exercise the code it's written:

  • For APIs: have it make curl requests against the running service
  • For libraries: have it write and run small scripts that import and use the code
  • For web UIs: have it use browser automation tools like Playwright to interact with the interface

Issues found through manual testing should then be fixed using red/green TDD, so they end up covered by the permanent test suite.
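For the API case, the pattern looks roughly like the sketch below. Python's built-in http.server stands in for your real running service, and port 8123 is arbitrary; the point is simply that the agent can start the service, hit it with curl, and inspect the response.

```shell
# Start a throwaway local server as a stand-in for the real service.
python3 -m http.server 8123 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# Exercise the endpoint and capture just the HTTP status code.
STATUS=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8123/)

kill "$SERVER_PID"
echo "$STATUS"
```

Checking a status code is the simplest version; in practice you'd also have the agent assert on response bodies, error cases, and authentication behaviour.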

Iterate

Don't expect perfection on the first pass. Use Claude Code iteratively: generate, review, refine. This mirrors how we work as engineers anyway. If something isn't right, describe what's wrong and let the agent correct it.

Keep changes small

Agents can produce a lot of code quickly. Resist the temptation to let a single session grow into a massive changeset. Smaller, focused changes are easier to review, easier to test, and easier to revert if something goes wrong.

If a task is large, break it into stages. Have the agent work through them one at a time, verifying each step before moving on.

Review everything

Claude Code is a tool, not a replacement for engineering judgement. Always review generated code before committing. Treat it like a pull request from a colleague — it needs the same scrutiny.

Watch for:

  • Hallucinated APIs or methods — the agent may reference functions or libraries that don't exist
  • Subtle logic errors — code that looks plausible but doesn't handle edge cases correctly
  • Security issues — injection vulnerabilities, exposed secrets, overly permissive configurations
  • Over-engineering — unnecessary abstractions, unused code, features you didn't ask for
  • Style drift — code that doesn't match the project's existing conventions

The agent produces code quickly. Use the time you've saved to review it thoroughly.

Getting started

If you're new to Claude Code, install it via npm: npm install -g @anthropic-ai/claude-code. You'll need a valid API key or Anthropic account to authenticate. Run claude in any repository to start a session.