
Agentic Engineering

Coding agents — tools like Claude Code that can read, write, and execute code autonomously — are changing how software gets built. At CDS, we use these tools daily. This page captures the practices and disciplines we've adopted to get the most from them without sacrificing quality.

The core insight is straightforward: writing code is cheap now, but delivering good code is not. Agents can produce hundreds of lines in minutes, but the responsibility for correctness, clarity, and maintainability still sits with us. The patterns below help us bridge that gap.

Good code still has a cost

Agents dramatically reduce the cost of typing code into a computer. They do not reduce the cost of ensuring that code is worth keeping. Good code:

  • Works correctly and handles error cases gracefully
  • Solves the right problem, not just a problem
  • Is protected by tests that catch regressions
  • Is simple and minimal — does only what's needed
  • Has documentation that reflects the current state of the system
  • Meets non-functional requirements: security, accessibility, performance, maintainability

Every line an agent writes still needs a human to confirm it meets these standards. The speed of generation makes this discipline more important, not less.

Don't inflict unreviewed code on colleagues

This is the single most important rule for teams using coding agents.

Do not submit pull requests with code you haven't reviewed yourself.

If you open a PR with hundreds of lines that an agent produced and you haven't verified it works, you're delegating the actual work to your reviewers. They could have prompted the agent themselves — what value are you providing?

A good agentic engineering pull request:

  • Works, and you're confident it works. You've run it, tested it, and verified the behaviour yourself.
  • Is small enough to review efficiently. Several small PRs are better than one large one. Agents make splitting work into separate commits straightforward.
  • Includes context. What's the goal? Link to relevant work items or specifications. Explain implementation choices that aren't obvious.
  • Has a description you've actually read. Agents write convincing-looking PR descriptions. Review these too — it's disrespectful to expect someone to read text you haven't validated yourself.

Include evidence that you've done the work: notes on how you tested it, comments on specific decisions, screenshots, or a short video of the feature working. This goes a long way to showing reviewers that their time won't be wasted.
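
For example, a PR description along these lines gives reviewers that evidence up front (the feature and details here are invented for illustration):

```markdown
## Goal
Add retry with backoff to the webhook handler. (Link to the work item here.)

## How I verified it
- Ran the integration suite locally: all passing
- Replayed a failed delivery against staging and watched it succeed on retry
- Screenshot of the retry metrics attached below

## Decisions worth flagging
- Backoff is capped at five attempts; anything beyond that goes to a dead-letter queue
```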

Use agents to pay down technical debt

A common category of technical debt is changes that are simple but time-consuming:

  • An API design that doesn't cover an important case, requiring changes in dozens of places
  • A naming decision made early on that's now confusing but too tedious to fix everywhere
  • Duplicate functionality that's grown organically and needs consolidating
  • A file that's grown to several thousand lines and needs splitting into modules

These refactoring tasks are an ideal application of coding agents. Fire up an agent, describe the change, and let it work through the codebase. Evaluate the result in a PR. If it's good, land it. If it's close, prompt further. If it's poor, discard it.
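
A tedious rename is a good example. A prompt for it might look something like this (the identifier and instructions are purely illustrative):

```text
The helper currently named `getData` is only ever used to fetch user accounts.
Rename it to `fetchAccount` everywhere: call sites, imports, tests, and docs.
Make one commit per package so the change is easy to review, then run the
full test suite and summarise any failures.
```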

The cost of these improvements has dropped so far that we can afford a much lower tolerance for code smells and inconsistencies than we could before.

Exploratory prototyping

Almost any software development task can be approached in multiple ways. Some of the most costly technical debt comes from poor choices at the planning stage: missing an obvious solution, or picking a technology that turns out to be wrong.

Coding agents make exploratory prototyping nearly free. Need to know if Redis is the right choice for an activity feed under load? Have an agent wire up a simulation and run a load test. Want to compare two approaches to a data pipeline? Run both in parallel and evaluate the results.
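
As a sketch of what an agent might produce for the Redis question, here's a minimal Python probe of write throughput for a capped activity feed. It assumes a local Redis instance and the redis-py client; the key name, feed size, and batch size are illustrative:

```python
import time

import redis  # pip install redis

# Assumes a local Redis instance on the default port; all names are illustrative.
r = redis.Redis(host="localhost", port=6379)

FEED_KEY = "feed:user:42"  # hypothetical activity-feed key
N_EVENTS = 100_000
FEED_CAP = 1_000           # keep only the newest 1,000 events

start = time.perf_counter()
pipe = r.pipeline()
for i in range(N_EVENTS):
    # Push the newest event and trim the feed to a fixed length,
    # a common pattern for Redis-backed activity feeds.
    pipe.lpush(FEED_KEY, f"event-{i}")
    pipe.ltrim(FEED_KEY, 0, FEED_CAP - 1)
    if i % 500 == 0:
        pipe.execute()  # flush in batches to avoid unbounded buffering
pipe.execute()
elapsed = time.perf_counter() - start

print(f"{2 * N_EVENTS / elapsed:,.0f} commands/sec")
```

A prototype like this is disposable by design: the point is the number it prints, not the code itself.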

This is especially valuable during discovery and architecture phases. Instead of debating options in a meeting, prove them out with working prototypes. Since prototypes are cheap, run multiple experiments at once and pick the approach that best fits the problem.

Build and share institutional knowledge

A significant part of engineering skill is knowing what's possible and roughly how to do it. Can a web page run OCR in JavaScript alone? Can we process a 100GB file without loading it into memory? The more answers you have to questions like these, the more opportunities you'll spot.
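
To make the second question concrete, here's a minimal Python sketch of the standard streaming pattern: hashing a file of any size in fixed-size chunks so memory use stays constant. The function name, chunk size, and path are illustrative:

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file of any size in constant memory by reading 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # read(n) returns at most n bytes, so memory use never grows with file size
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(sha256_of_file("huge-export.bin"))  # illustrative path
```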

With agents, you only need to figure out a useful technique once. Document it with a working code example — in a repository, a wiki page, or a shared library — and any agent can consult that example to solve similar problems in the future.

At CDS, this is particularly valuable because our consultants move between engagements. Patterns and solutions that worked on one project become reusable assets for the next. Practical ways to do this:

  • CLAUDE.md files in every repository, capturing project conventions and context (see the sketch after this list)
  • Shared code libraries with proven patterns and utilities
  • Architecture decision records that explain not just what was decided, but what was tried and discarded
  • Working examples of techniques that might be useful on future engagements
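
As a concrete illustration, a repository-level CLAUDE.md might look something like this. The contents are invented, not a prescribed template:

```markdown
# Conventions for coding agents

- Run tests with `make test` before declaring any task complete
- All database access goes through the repository layer; never inline SQL
- Prefer small, focused commits with imperative-mood messages

## Proven techniques
- Streaming large files without loading them into memory: see examples/streaming.py
- Load-testing a Redis-backed activity feed: see examples/feed_probe.py
```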

The key idea is that agents are excellent at recombining existing working solutions. Two documented patterns that individually solve small problems can be combined by an agent into a solution for a much larger one.

The compound engineering loop

Agents follow instructions. We can evolve those instructions over time to get better results from future runs.

The most effective way to improve agent output is to end each significant piece of work with a brief retrospective:

  1. What worked well in the agent session?
  2. What needed correction or rework?
  3. What instructions or context would have prevented those issues?

Capture the answers in the project's CLAUDE.md file, shared libraries, or team documentation. Small improvements compound — each session benefits from every previous one.
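
A captured lesson can be as small as one line. For instance (an invented entry):

```markdown
## Lessons from agent sessions
- The agent kept calling pytest directly and missing our fixtures; always say
  "run tests with `make test`" at the start of a session
```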

This fits naturally into our existing agile retrospective practices. Add "how did we use agents this sprint?" as a standing retro topic and you'll see steady improvement in output quality over time.

Build new habits

Many of our engineering instincts are built around the assumption that writing code is expensive. We spend time designing, estimating, and planning to ensure our coding time is spent efficiently. At the micro level, we constantly weigh tradeoffs: is it worth refactoring that function? Writing documentation? Adding a test for this edge case?

Agents disrupt these intuitions. When the cost of trying something drops to near zero, the right response is to try it. Any time your instinct says "don't build that, it's not worth the time," consider firing off a prompt anyway. The worst outcome is you check back later and find it wasn't useful.

This doesn't mean accepting everything an agent produces. It means being willing to explore more options, prototype more aggressively, and invest in quality improvements that previously felt too expensive to justify. The goal is to use cheap code generation to produce better software, not just more of it.

AI should help us produce better code

If adopting coding agents reduces the quality of what you're shipping, something in your process needs fixing. Shipping worse code with agents is a choice. Choose to ship better code instead — with more tests, cleaner architecture, less technical debt, and documentation that actually reflects reality.