Test Approach¶
Testing is fundamental to how we deliver at CDS. We take an automation-first approach, building quality in from the start rather than inspecting it in at the end. Our testing practices are modern, pragmatic, and shaped by real delivery experience.
The purpose of testing at CDS is to create fast feedback loops, catch defects early, and support CI/CD. Everything we do in testing is oriented around helping agile delivery squads build clear, maintainable, and scalable test automation.
Core principles¶
Quality isn't a phase. It's how we work.
These are the principles that guide every testing and quality decision we make — on every engagement, for every client.
We believe quality is built in from the start, owned by the whole team, and never treated as a box to tick before release.
01 — Automation First¶
We automate where it adds real value — so your team can focus on the testing that requires human judgement.
Automation is our default, not an afterthought. We build automated tests alongside the product code — not after the fact — so regressions are caught early and confidence in the codebase is maintained continuously.
That said, automation isn't unconditional. We weigh the cost of building and maintaining a test against the value it delivers. Where functional or exploratory testing adds more value, that's the approach we take. The goal is the right coverage, not the highest number of automated tests.
02 — Shift Left¶
We embed testing in design and requirements — because defects caught early cost a fraction of those caught late.
Testers don't wait for finished code. They're involved from the moment requirements are being shaped — challenging assumptions, identifying edge cases, and making sure the team is building the right thing before a line of code is written.
Developers and testers work together throughout, not in sequence. Unit, API, contract, and integration tests form the foundation of our automation — following the test pyramid model, where fast, targeted tests at the base support the broader, higher-level checks above.
03 — Fail Fast¶
Tests are woven into the delivery pipeline. Issues surface immediately — not at the end of a sprint.
We integrate tests into CI/CD pipelines so that every change is validated automatically. A failing build stops progress until the issue is resolved. A passing pipeline isn't a formality — it's how we maintain continuous confidence in the codebase.
Non-functional failures — whether in performance, accessibility, or security — are treated with the same seriousness as functional regressions. Exceptions exist, but they are deliberate and always accompanied by a clear remediation plan.
04 — Keep It Simple, Reuse Often¶
Test code is real code. We hold it to the same standards — and build shared foundations that every team can benefit from.
We don't treat test code as second-class. It should be consistent, readable, and maintainable — subject to the same standards and processes as the production code it supports.
We invest in shared automation libraries across engagements — utilities, reporting wrappers, test data generators — so that teams spend their time on meaningful test design, not rebuilding common infrastructure from scratch on every project.
05 — The Right Tool for the Situation¶
We're tool-agnostic by design. We choose what fits — not what's most familiar.
Different clients, tech stacks, and project constraints call for different tools. We maintain preferred defaults and a broad toolkit of frameworks we're experienced with — but these are starting points, not constraints.
The best tool for the job always wins over the most familiar one. We're comfortable working within existing ecosystems, introducing new approaches where they add value, and making a clear case when a change in tooling is the right call.
06 — Coverage as a Risk Indicator, Not a Gate¶
We use coverage data to understand risk — not to hit a number.
Coverage statistics tell us where risk lives — untested code paths, overlooked edge cases, modules that have grown without corresponding test investment. That's genuinely useful information.
What we don't do is enforce arbitrary thresholds. Hard gates drive the wrong behaviour: teams write low-value tests to satisfy a metric rather than meaningful tests that catch real defects. We'd rather have 70% coverage of well-written, targeted tests than 95% coverage of tests that assert nothing useful.
Underlying all of this is a commitment to testability. Coverage data is only meaningful if the product is built to be testable in the first place. We advocate for testability from the earliest design decisions — ensuring code is structured so that behaviour can be isolated, validated, and understood. A product that is hard to test is a product that carries hidden risk. We make testability a first-class concern, not an afterthought.
07 — Non-Functional Quality is Built In, Not Bolted On¶
Accessibility, performance, security, and visual integrity are part of quality from day one — not audits before go-live.
These aren't separate workstreams or last-minute checklists. They are first-class quality concerns, tested continuously and owned by the whole team:
- Accessibility is validated as part of every regression cycle, not deferred to a pre-launch review.
- Performance is monitored against agreed baselines throughout delivery. Degradation is caught early, not discovered in production.
- Security is raised during design and requirements — vulnerabilities are far cheaper to address before build than after.
- Visual regression is automated where stable baselines can be maintained, so unintended UI changes are caught as part of normal test execution (see the sketch after this list).
Treating these as continuous concerns means problems surface while they are still cheap to fix.
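As an illustration of the visual regression point, a check can be as small as the sketch below, using Playwright's built-in screenshot assertions. The page, baseline name, and drift threshold are illustrative assumptions, not a prescribed configuration.

```typescript
// visual.spec.ts — a visual regression sketch in Playwright. On first run
// it records a baseline screenshot; later runs fail if the rendered page
// drifts beyond the allowed threshold. URL and names are placeholders.
import { test, expect } from '@playwright/test';

test('landing page matches the approved baseline', async ({ page }) => {
  await page.goto('https://staging.example.com/');
  await expect(page).toHaveScreenshot('landing-page.png', {
    maxDiffPixelRatio: 0.01, // tolerate up to 1% pixel drift
  });
});
```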
08 — Quality is Everyone's Job¶
Testers set the standard and provide the expertise. The whole team is responsible for meeting it.
Quality doesn't belong to a single function. Developers write tests. Business analysts write testable acceptance criteria. Architects consider testability in design. Everyone raises risks early and holds the team to best practices.
Our testers provide expertise, standards, and guidance — but they are not a gate at the end of the line. Quality is built in from the start. It is never bolted on at the end.
Scope of testing¶
Automation¶
The scope of our automation includes unit, component, API, integration, regression, smoke, security, accessibility, and performance tests, as well as automated acceptance tests tied to user stories. We prioritise business-critical, repetitive, and high-risk scenarios for automation first.
Test validation¶
Test validation focuses on where human judgement adds the most value: exploratory testing, UX/UI validation, accessibility auditing, and ad-hoc testing. High-value one-off scenarios that don't justify the cost of automation are also handled using appropriate testing techniques.
Types of testing¶
We apply multiple layers of testing to build confidence at every level of the system, following the test pyramid model.
Unit testing forms the foundation. Fast, isolated, and cheap to run, unit tests verify that individual components behave as expected. We use the appropriate framework for the language — xUnit and NUnit for .NET, Jest for JavaScript and TypeScript, pytest for Python, and equivalents elsewhere. Unit tests run on every commit and are expected to pass before code is merged.
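As an illustration, a unit test at this level might look like the sketch below (Jest with TypeScript; `applyDiscount` is a hypothetical function, defined inline so the example is self-contained):

```typescript
// discount.test.ts — a minimal Jest unit test sketch in TypeScript.
// `applyDiscount` is a hypothetical pure function used for illustration.

function applyDiscount(price: number, percent: number): number {
  if (percent < 0 || percent > 100) {
    throw new RangeError('percent must be between 0 and 100');
  }
  return price * (1 - percent / 100);
}

describe('applyDiscount', () => {
  it('reduces the price by the given percentage', () => {
    expect(applyDiscount(200, 25)).toBe(150);
  });

  it('rejects percentages outside 0-100', () => {
    expect(() => applyDiscount(100, 120)).toThrow(RangeError);
  });
});
```

Tests like these are fast and isolated, which is what lets them run on every commit without slowing the team down.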
API and integration testing verifies that components work together correctly. This is where we catch issues with data flows, service boundaries, and external dependencies. We use tools like Postman and Newman extensively for API-level testing, validating contracts, response structures, and error handling across service boundaries.
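Where collections need to run as part of a Node-based pipeline step rather than from the CLI, Newman's programmatic API can drive them from code. The sketch below assumes hypothetical collection and environment files; the failure-handling policy shown is one option, not a prescribed setup.

```typescript
// run-api-tests.ts — running a Postman collection via Newman's Node API.
// File names are placeholders for illustration.
import * as newman from 'newman';

newman.run(
  {
    collection: require('./orders-api.postman_collection.json'),
    environment: require('./staging.postman_environment.json'),
    reporters: ['cli', 'junit'], // JUnit output feeds CI test reporting
  },
  (err, summary) => {
    if (err) {
      throw err; // Newman itself failed to run
    }
    // Fail the process (and therefore the pipeline) if any assertion failed.
    if (summary.run.failures.length > 0) {
      process.exitCode = 1;
    }
  }
);
```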
End-to-end testing validates complete user journeys through the system. We use Playwright as our primary tool for browser-based E2E testing — it's fast, reliable, and supports multiple browsers out of the box. E2E tests are powerful but expensive to maintain, so we focus them on critical user paths rather than trying to cover every permutation.
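A minimal sketch of what a critical-path Playwright test looks like — the URL, selectors, and journey are hypothetical:

```typescript
// basket.spec.ts — a minimal Playwright E2E sketch for one critical
// user journey. The site, selectors, and test id are placeholders.
import { test, expect } from '@playwright/test';

test('adding a product updates the basket', async ({ page }) => {
  await page.goto('https://staging.example.com/products/widget');
  await page.getByRole('button', { name: 'Add to basket' }).click();
  // Assumes the page exposes a data-testid="basket-count" element.
  await expect(page.getByTestId('basket-count')).toHaveText('1');
});
```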
Performance testing ensures the system can handle expected (and unexpected) traffic. We baseline early in the sprint cycle, run load and stress tests post-merge on staging environments, and set performance SLAs for response times, throughput, and latency. Performance testing is built into the delivery lifecycle rather than left as a last-minute activity.
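As a sketch of how those SLAs become executable checks, a k6 script can encode thresholds directly so that breaching them fails the run — and therefore the pipeline. k6 scripts are written in JavaScript/TypeScript; the endpoint, load profile, and numbers below are illustrative, not our standard baselines.

```typescript
// load-test.ts — a k6 load test sketch with SLA thresholds.
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 50,        // 50 concurrent virtual users
  duration: '5m',
  thresholds: {
    // Fail the run if the agreed SLA is breached.
    http_req_duration: ['p(95)<500'], // 95% of requests under 500 ms
    http_req_failed: ['rate<0.01'],   // under 1% request errors
  },
};

export default function () {
  const res = http.get('https://staging.example.com/api/orders');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```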
Security testing is integrated throughout the pipeline. We run static analysis (SAST) on commit and pull request, and dynamic testing (DAST) as part of nightly or full pipeline runs. Teams are educated on common vulnerabilities, with the OWASP Top 10 as a baseline.
Accessibility testing combines automated checks integrated into the UI pipeline with manual audits on high-traffic flows before release. Accessibility is not an afterthought — it's part of our standard testing scope.
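A sketch of what an automated check in the UI pipeline can look like, using axe-core via the `@axe-core/playwright` package — the URL and rule tags are illustrative:

```typescript
// accessibility.spec.ts — automated accessibility scan with axe-core
// inside a Playwright test. The URL is a placeholder.
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('home page has no detectable WCAG A/AA violations', async ({ page }) => {
  await page.goto('https://staging.example.com/');
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa']) // scope the scan to WCAG 2.0 A/AA rules
    .analyze();
  expect(results.violations).toEqual([]);
});
```

Automated scans catch a useful subset of issues; the manual audits cover what tooling can't, such as keyboard flow and screen-reader experience.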
Testing in the delivery lifecycle¶
Testing isn't a phase — it's woven into every stage of our agile delivery process.
Backlog grooming includes test scenarios and a testability review. How will we test this? What does "done" look like? What are the riskiest areas that need the most coverage? These questions shape the technical approach from the outset.
Sprint planning defines test automation work per story on the project board, split into test design and automation tasks so it's visible and estimated alongside development work.
During the sprint, we automate during development, not after. Developers write tests alongside their code. Tests run locally before pushing, and the CI pipeline validates them on every commit and pull request. Code reviews include reviewing the quality and coverage of tests, not just the production code. Teams build and use shared components where applicable.
Definition of Done includes unit, API, and UI automation, as well as non-functional coverage. A story isn't done until it's tested.
Sprint review includes a demo of test coverage, giving the team and stakeholders visibility of quality alongside functionality.
Exploratory testing complements the automated suite throughout. Skilled testers think creatively about how the system might fail, test edge cases that automated tests wouldn't cover, and bring a user's perspective that pure automation misses. Automation handles the repetitive checks; people handle the thinking.
Continuous testing in the pipeline¶
Our CI/CD pipelines enforce quality automatically. A typical pipeline runs tests at multiple trigger points:
| Trigger | What runs |
|---|---|
| Code commit | Unit tests |
| Pull request | API and UI tests |
| Nightly / scheduled | Full regression, non-functional tests (performance, security, accessibility) |
| Deployment | Smoke tests to verify the deployment is healthy |
If tests fail, the pipeline fails. This is the automation-first mindset in practice — trust the pipeline and keep it green.
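As a sketch of the deployment-stage smoke check, assuming a conventional health endpoint (the URL is a placeholder; real smoke suites typically also exercise a handful of critical pages):

```typescript
// smoke.spec.ts — a post-deployment smoke test sketch using Playwright's
// API-request fixture. The health endpoint is a hypothetical convention.
import { test, expect } from '@playwright/test';

test('deployed service reports healthy', async ({ request }) => {
  const response = await request.get('https://staging.example.com/health');
  expect(response.ok()).toBeTruthy();
});
```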
Governance and standardisation¶
We maintain consistency across projects without stifling flexibility.
Centralised frameworks and libraries enable faster onboarding, easier knowledge transfer, and reduced maintenance. Common utilities, logging wrappers, reporting tools, and test data generators are shared across teams and open to contribution.
Coding standards for test code are enforced consistently. Test code is production code — it should be clean, readable, and maintainable.
Documentation standards include a test strategy per project (aligned to our organisation-wide strategy), versioned test cases with traceability to requirements, and clear READMEs. Tests themselves are treated as live documentation — they should be readable enough that a new team member can understand the system's behaviour from the test suite.
Dashboards provide visibility of automation job status, test coverage trends, and quality metrics across projects.
Metrics and reporting¶
We track and report on metrics that drive meaningful improvement:
- Test coverage — as a risk indicator, not a target (see our principles above)
- Automation pass/fail rates — to monitor test suite health and catch flaky tests
- Defects caught pre- vs post-production — the clearest measure of whether shift-left is working
- Non-functional benchmarks — performance baselines, response time trends, throughput
- Security vulnerability trends — tracking resolution rates and patterns over time
Tools we use¶
We select tools based on the client's tech stack, team capability, and project needs. Tools we have deep experience with include:
| Purpose | Tools |
|---|---|
| Unit testing | xUnit, NUnit, Jest, pytest, and language-appropriate equivalents |
| API testing | Postman, Newman |
| E2E / browser testing | Playwright |
| Performance testing | k6, JMeter, Locust |
| Security testing | SAST and DAST tooling integrated into pipelines |
| Accessibility testing | Automated checks in UI pipelines, manual audits |
| CI/CD integration | Azure DevOps Pipelines, GitHub Actions |
This isn't an exhaustive list — if a project needs something different, we'll adopt it. The principle is always to use the best tool for the job, not the most familiar one.
AI-assisted testing¶
We use AI tools to enhance our testing and development workflows. Tools like Claude Code and GitHub Copilot help us with both technical tasks — scaffolding test frameworks, generating test cases, writing automation code — and non-technical tasks like documentation and test planning.
AI is an accelerator, not a replacement. Everything AI-assisted goes through human review. We use these tools safely and responsibly, with a human always in the loop to validate output, catch errors, and apply the engineering judgement that AI can't.
We continue to explore how AI can add value in areas like visual testing, coverage analysis, and identifying patterns in test failures, adopting new capabilities where they prove genuinely useful.
Automation-first doesn't mean automation-only
Our strongest test strategies combine a robust automated suite with targeted exploratory testing. Automation catches the known risks; skilled testers find the unknown ones.