Test Approach¶
Testing is fundamental to how we deliver at CDS. We take an automation-first approach, building quality in from the start rather than inspecting it in at the end. Our testing practices are modern, pragmatic, and shaped by real delivery experience.
Testing at CDS exists to deliver fast feedback loops, catch defects early, and support CI/CD seamlessly. Everything we do in testing is oriented around helping agile delivery squads build clear, maintainable, and scalable test automation.
Core principles¶
Automation first¶
We automate wherever possible. Automated tests run faster, catch regressions earlier, and free up people to focus on the exploratory and creative testing that machines can't do. If a test can be automated, it should be.
Automation is the default approach. Manual testing applies where automation is not feasible, or where it enhances coverage through methods such as exploratory testing. We define automation as part of the Definition of Done for each story and prioritise testability as a design requirement from story grooming onwards.
Shift left¶
We push testing as far left as we can. Developers write tests alongside their code, not after the fact. Testers are embedded early in the requirements and design stage, and we foster developer-tester collaboration for early feedback. Unit, API, and integration tests form the foundation of our automation — following the test pyramid model, where the base is broad and fast, and the peak is narrow and targeted.
Fail fast¶
We integrate tests into CI/CD pipelines to catch issues as early as possible. Builds fail on critical regression or non-functional test failures. Code doesn't progress until the issue is resolved — this is non-negotiable.
Keep it simple, reuse often¶
Test code should be consistent, scalable, and readable. The same coding standards and quality rules that apply to production code apply to test code. We build and maintain common automation libraries — shared utilities, logging and reporting wrappers, standard test data generators — so that teams aren't reinventing the wheel on every engagement. These shared resources are open to contributions from all teams.
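As a sketch of the kind of helper these shared libraries contain — all names here are hypothetical, not references to a real CDS library — a seedable test data generator might look like:

```python
import random
import string
from dataclasses import dataclass
from typing import Optional


@dataclass
class TestUser:
    """Hypothetical shared fixture: a synthetic user record."""
    username: str
    email: str


def make_test_user(seed: Optional[int] = None) -> TestUser:
    """Generate a test user; passing a seed makes the data reproducible."""
    rng = random.Random(seed)
    name = "".join(rng.choices(string.ascii_lowercase, k=8))
    return TestUser(username=name, email=f"{name}@example.test")
```

Seeding the generator keeps failing runs reproducible, which is what makes a shared generator preferable to ad-hoc random data scattered across suites.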
The right tool for the situation¶
We don't mandate a fixed toolset across every engagement. Different clients, tech stacks, and project constraints call for different tools. We're experienced across a broad range of testing frameworks and choose the one that fits the situation, rather than forcing a one-size-fits-all approach.
Coverage as a risk indicator, not a gate¶
We track test coverage statistics because they're valuable for highlighting areas of risk — untested code paths, overlooked edge cases, and modules that have grown without corresponding test investment. But we deliberately don't enforce strict coverage gates. Hard thresholds encourage the wrong behaviour: teams write low-value tests to hit a number rather than meaningful tests that catch real defects. We'd rather have 70% coverage of well-written, targeted tests than 95% coverage of tests that assert nothing useful.
Scope of testing¶
Automation¶
The scope of our automation includes unit, component, API, integration, regression, smoke, security, accessibility, and performance tests, as well as automated acceptance tests tied to user stories. We prioritise business-critical, repetitive, and high-risk scenarios for automation first.
Test validation¶
Test validation focuses on where human judgement adds the most value: exploratory testing, UX/UI validation, accessibility auditing, and ad-hoc testing. High-value one-off scenarios that don't justify the cost of automation are also tested manually, using whichever technique fits the scenario.
Types of testing¶
We apply multiple layers of testing to build confidence at every level of the system, following the test pyramid model.
Unit testing forms the foundation. Fast, isolated, and cheap to run, unit tests verify that individual components behave as expected. We use the appropriate framework for the language — xUnit and NUnit for .NET, Jest for JavaScript and TypeScript, pytest for Python, and equivalents elsewhere. Unit tests run on every commit and are expected to pass before code is merged.
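For illustration, a unit test at this level is a small, deterministic function check. A pytest-style sketch — the pricing function and values are invented for the example, not taken from a real project:

```python
# test_pricing.py -- discovered and run by pytest on every commit
def apply_discount(price: float, percent: float) -> float:
    """Illustrative unit under test: apply a percentage discount, rounded to pennies."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount_basic():
    assert apply_discount(100.0, 20) == 80.0


def test_apply_discount_rounds_to_pennies():
    assert apply_discount(19.99, 15) == 16.99
```

Tests like these are cheap enough to run on every commit, which is why they form the broad base of the pyramid.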
API and integration testing verifies that components work together correctly. This is where we catch issues with data flows, service boundaries, and external dependencies. We use tools like Postman and Newman extensively for API-level testing, validating contracts, response structures, and error handling across service boundaries.
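The assertions inside a Postman collection amount to contract checks on status codes, shapes, and types. The same idea expressed as a plain Python sketch — the endpoint and its required fields are hypothetical:

```python
def check_user_contract(status_code: int, body: dict) -> list:
    """Collect contract violations for a hypothetical GET /users/{id} response."""
    errors = []
    if status_code != 200:
        errors.append(f"expected HTTP 200, got {status_code}")
    # Required fields and their expected JSON types (illustrative contract).
    for field, expected_type in (("id", int), ("email", str), ("created_at", str)):
        if field not in body:
            errors.append(f"missing required field: {field}")
        elif not isinstance(body[field], expected_type):
            errors.append(f"field '{field}' should be {expected_type.__name__}")
    return errors
```

Returning every violation rather than failing on the first mirrors how Newman reports all failed assertions in a run, which makes broken contracts faster to diagnose.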
End-to-end testing validates complete user journeys through the system. We use Playwright as our primary tool for browser-based E2E testing — it's fast, reliable, and supports multiple browsers out of the box. E2E tests are powerful but expensive to maintain, so we focus them on critical user paths rather than trying to cover every permutation.
Performance testing ensures the system can handle expected (and unexpected) traffic. We baseline early in the sprint cycle, run load and stress tests post-merge on staging environments, and set performance SLAs for response times, throughput, and latency. Performance testing is built into the delivery lifecycle rather than left as a last-minute activity.
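Whichever tool generates the load (k6, JMeter, Locust), the SLA check itself reduces to percentile thresholds over observed latencies. A sketch using nearest-rank percentiles — the 500 ms limit is an example, not a real SLA:

```python
import math


def percentile(samples, p):
    """Nearest-rank percentile of a non-empty list of latency samples (ms)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]


def meets_latency_sla(samples_ms, p95_limit_ms=500.0):
    """True if the observed p95 latency is within the illustrative SLA."""
    return percentile(samples_ms, 95) <= p95_limit_ms
```

Checking a percentile rather than the mean matters: averages hide the slow tail that real users actually experience.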
Security testing is integrated throughout the pipeline. We run static analysis (SAST) on commit and pull request, and dynamic testing (DAST) as part of nightly or full pipeline runs. Teams are educated on common vulnerabilities, with the OWASP Top 10 as a baseline.
Accessibility testing combines automated checks integrated into the UI pipeline with manual audits on high-traffic flows before release. Accessibility is not an afterthought — it's part of our standard testing scope.
Testing in the delivery lifecycle¶
Testing isn't a phase — it's woven into every stage of our agile delivery process.
Backlog grooming includes test scenarios and a testability review. How will we test this? What does "done" look like? What are the riskiest areas that need the most coverage? These questions shape the technical approach from the outset.
Sprint planning defines test automation work per story on the project board, split into test design and test automation tasks so they're visible and estimated alongside development work.
During the sprint, we automate during development, not after. Developers write tests alongside their code. Tests run locally before pushing, and the CI pipeline validates them on every commit and pull request. Code reviews include reviewing the quality and coverage of tests, not just the production code. Teams build and use shared components where applicable.
Definition of Done includes unit, API, UI automation, and non-functional coverage. A story isn't done until it's tested.
Sprint review includes a demo of test coverage, giving the team and stakeholders visibility of quality alongside functionality.
Exploratory testing complements the automated suite throughout. Skilled testers think creatively about how the system might fail, test edge cases that automated tests wouldn't cover, and bring a user's perspective that pure automation misses. Automation handles the repetitive checks; people handle the thinking.
Continuous testing in the pipeline¶
Our CI/CD pipelines enforce quality automatically. A typical pipeline runs tests at multiple trigger points:
| Trigger | What runs |
|---|---|
| Code commit | Unit tests |
| Pull request | API and UI tests |
| Nightly / scheduled | Full regression, non-functional tests (performance, security, accessibility) |
| Deployment | Smoke tests to verify the deployment is healthy |
If tests fail, the pipeline fails. This is the automation-first mindset in practice — trust the pipeline and keep it green.
Governance and standardisation¶
We maintain consistency across projects without stifling flexibility.
Centralised frameworks and libraries enable faster onboarding, easier knowledge transfer, and reduced maintenance. Common utilities, logging wrappers, reporting tools, and test data generators are shared across teams and open to contribution.
Coding standards for test code are enforced consistently. Test code is production code — it should be clean, readable, and maintainable.
Documentation standards include a test strategy per project (aligned to our organisation-wide strategy), versioned test cases with traceability to requirements, and clear READMEs. Tests themselves are treated as live documentation — they should be readable enough that a new team member can understand the system's behaviour from the test suite.
Dashboards provide visibility of automation job status, test coverage trends, and quality metrics across projects.
Metrics and reporting¶
We track and report on metrics that drive meaningful improvement:
- Test coverage — as a risk indicator, not a target (see our principles above)
- Automation pass/fail rates — to monitor test suite health and catch flaky tests
- Defects caught pre- vs post-production — the clearest measure of whether shift-left is working
- Non-functional benchmarks — performance baselines, response time trends, throughput
- Security vulnerability trends — tracking resolution rates and patterns over time
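Of these, pass/fail-rate monitoring is the most mechanical to compute: a flaky test is one whose outcome flips between consecutive runs without code changes. A sketch of that signal — the 0.3 threshold is an arbitrary example, not a CDS standard:

```python
def flakiness(results):
    """Fraction of consecutive-run transitions where a test's outcome flipped.

    `results` is a chronological list of booleans (True = pass).
    """
    if len(results) < 2:
        return 0.0
    flips = sum(a != b for a, b in zip(results, results[1:]))
    return flips / (len(results) - 1)


def is_flaky(results, threshold=0.3):
    """Flag tests whose outcome flips often across runs (illustrative threshold)."""
    return flakiness(results) >= threshold
```

Surfacing flaky tests this way protects the "keep it green" principle: a suite that fails intermittently trains teams to ignore red builds.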
Tools we use¶
We select tools based on the client's tech stack, team capability, and project needs. Tools we have deep experience with include:
| Purpose | Tools |
|---|---|
| Unit testing | xUnit, NUnit, Jest, pytest, and language-appropriate equivalents |
| API testing | Postman, Newman |
| E2E / browser testing | Playwright |
| Performance testing | k6, JMeter, Locust |
| Security testing | SAST and DAST tooling integrated into pipelines |
| Accessibility testing | Automated checks in UI pipelines, manual audits |
| CI/CD integration | Azure DevOps Pipelines, GitHub Actions |
This isn't an exhaustive list — if a project needs something different, we'll adopt it. The principle is always to use the best tool for the job, not the most familiar one.
AI-assisted testing¶
We use AI tools to enhance our testing and development workflows. Tools like Claude Code and GitHub Copilot help us with both technical tasks — scaffolding test frameworks, generating test cases, writing automation code — and non-technical tasks like documentation and test planning.
AI is an accelerator, not a replacement. Everything AI-assisted goes through human review. We use these tools safely and responsibly, with a human always in the loop to validate output, catch errors, and apply the engineering judgement that AI can't.
We continue to explore how AI can add value in areas like visual testing, coverage analysis, and identifying patterns in test failures, adopting new capabilities where they prove genuinely useful.
Automation-first doesn't mean automation-only¶
Our strongest test strategies combine a robust automated suite with targeted exploratory testing. Automation catches the known risks; skilled testers find the unknown ones.