Cabiri's approach to testing
Testing across the application explained…
Anyone who has been in a meeting with Cabiri will know by now that testing is one of the drums we bang the loudest. It's so important, in fact, that it has its very own set of tailor-made, repeatable phases within a project life cycle. We even build tests to test our tests! As a Senior Test Automation Engineer, my work alongside Dom Leone, Ayub Khan & Gab Zsapka involves the design and implementation of test frameworks – frameworks that support agile software development teams in their pursuit of excellence for their customers. Easy – with eleven years of commercial experience under my belt and an unbeatable, experienced team.
Cabiri aims to release small changes as frequently as possible and has implemented a solid pipeline that can see a requirement ready for release within two hours of work starting. A typical day can see seven or eight releases across the entire stack, an unusual day many more. Our tried-and-tested process and highly skilled team handle quality testing with efficiency and speed. To support this release cadence we have adopted the following approach. Moving from environment to environment takes up to 30 minutes, including both build and test times.
Without divulging too many trade secrets, I can give a couple of things away. To begin, one key point of difference is that Cabiri testers arrive with a variety of skill sets across the testing process, coming together as a comprehensive, tight, specialist testing team. With the exception of some isolated performance and advanced security/penetration services, we don't outsource. Cabiri brings these skills together under one roof and allocates time to become properly acquainted with each system and third-party integration under test, whether with AWS, GCP or an alternative service. Practising the shift-left testing methodology, we emphasise the need for developers to focus on quality right from the beginning of the software development life cycle rather than waiting for bugs and issues to be discovered towards the end.
…In a nutshell
Services
    Unit & integration tests
    Backwards compatibility

Graph
    Unit & integration tests
    Backwards compatibility

React web application
    Component tests
    Backwards compatibility

Application as a whole
    End-to-end functional tests
    Performance tests (ad hoc, based on changes)
    Visual regression tests on every build
    Accessibility and security overview on every build

Cross browser & device
    React should handle modern browsers and devices natively
    A customer's cross-browser requirements can be met with critical user journeys only
    These tests only run when changes are made to a critical user journey
    Non-critical journeys – manual check locally or on the build environment
    We do not exhaustively test all supported browsers
    We do test high market share supported browsers and those specifically flagged in advance

Release process
    Canary deploy with auto rollback
    30-minute manual rollback window
    Issues picked up can be rolled back or fixed forward
    Production monitoring during release (DataDog)
Flexible tooling
One size doesn't fit all, so flexible tooling packages are put together based on individual project requirements.
Jest – unit and integration tests, predominantly written by developers to support and prove their changes to services and the graph.
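For flavour, here is a minimal Jest sketch – the basket-pricing function and its types are hypothetical examples, not code from a real Cabiri project:

```ts
// basketTotal.test.ts – minimal Jest sketch.
// calculateBasketTotal and BasketItem are hypothetical examples.
interface BasketItem {
  price: number;    // unit price in pence
  quantity: number;
}

// Hypothetical implementation under test.
function calculateBasketTotal(items: BasketItem[]): number {
  return items.reduce((total, item) => total + item.price * item.quantity, 0);
}

describe('calculateBasketTotal', () => {
  it('sums price * quantity across all items', () => {
    const items: BasketItem[] = [
      { price: 250, quantity: 2 },
      { price: 100, quantity: 1 },
    ];
    expect(calculateBasketTotal(items)).toBe(600);
  });

  it('returns 0 for an empty basket', () => {
    expect(calculateBasketTotal([])).toBe(0);
  });
});
```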
Cypress Component – React web application component testing, written by developers and QA engineers as new front-end components are delivered.
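A rough sketch of what a component test can look like, assuming Cypress 10+ with its React adapter – the Button component and its props are illustrative, not taken from a real codebase:

```tsx
// Button.cy.tsx – minimal Cypress component test sketch.
// The Button component and its props are hypothetical examples.
import React from 'react';
import { mount } from 'cypress/react';
import Button from './Button';

describe('<Button />', () => {
  it('renders its label and reports clicks', () => {
    const onClick = cy.stub().as('onClick');
    mount(<Button label="Add to basket" onClick={onClick} />);

    cy.contains('Add to basket').click();
    cy.get('@onClick').should('have.been.calledOnce');
  });
});
```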
Cypress e2e – the bulk of our functional tests, written by QA engineers to prove features end to end. Cypress offers a range of features that enable the Cabiri testing team to test the application more thoroughly and with less flake. We would recommend the paid subscription here to utilise the test analysis dashboard and parallelisation features.
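A cut-down end-to-end sketch for illustration – the routes and data-test selectors are placeholders rather than a real journey:

```ts
// checkout.cy.ts – minimal Cypress e2e sketch; routes and selectors are placeholders.
describe('Checkout critical user journey', () => {
  it('lets a customer add a product and reach the payment step', () => {
    cy.visit('/products/example-product');

    cy.get('[data-test="add-to-basket"]').click();
    cy.get('[data-test="basket-count"]').should('contain', '1');

    cy.get('[data-test="go-to-checkout"]').click();
    cy.url().should('include', '/checkout');
    cy.contains('Payment details').should('be.visible');
  });
});
```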
WebDriverIO – used sparingly to plug any holes in the Cypress testing. Redirect payment methods and integration with third-party cross-browser providers are often more easily delivered in this framework. We find these tests harder to maintain and less reliable, so we keep traditional Selenium-style framework use to a minimum.
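As a sketch of the kind of case that suits WebDriverIO, a redirect payment hand-off might look roughly like this – the selectors and provider URL are placeholders, and browser/$ are the WDIO test-runner globals:

```ts
// redirectPayment.e2e.ts – minimal WebDriverIO sketch.
// Selectors and the payment provider domain are placeholders.
describe('Redirect payment method', () => {
  it('hands the customer off to the external payment provider', async () => {
    await browser.url('/checkout');

    const payButton = await $('[data-test="pay-with-redirect"]');
    await payButton.click();

    // The redirect leaves our domain, which is the part Cypress struggles with.
    await browser.waitUntil(
      async () => (await browser.getUrl()).includes('payments.example.com'),
      { timeout: 15000, timeoutMsg: 'Expected redirect to the payment provider' }
    );
  });
});
```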
Lighthouse CI – implemented to test key indexable pages, ensuring we meet Google's baseline requirements.
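A minimal lighthouserc.js sketch, assuming a locally served build – the URLs and score thresholds are examples, not our actual budgets:

```js
// lighthouserc.js – minimal Lighthouse CI sketch; URLs and thresholds are examples.
module.exports = {
  ci: {
    collect: {
      url: ['http://localhost:3000/', 'http://localhost:3000/products/example-product'],
      numberOfRuns: 3,
    },
    assert: {
      assertions: {
        'categories:performance': ['error', { minScore: 0.9 }],
        'categories:accessibility': ['error', { minScore: 0.9 }],
        'categories:seo': ['error', { minScore: 0.9 }],
      },
    },
    upload: { target: 'temporary-public-storage' },
  },
};
```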
Percy – pixel-by-pixel baseline comparison regression testing, implemented on key pages such as the product details page, listings page and homepage.
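Percy slots into the existing Cypress suite via @percy/cypress. A minimal sketch – the page routes are illustrative:

```ts
// visual.cy.ts – minimal Percy + Cypress sketch; routes are illustrative.
// Requires `import '@percy/cypress'` in the Cypress support file.
describe('Visual regression snapshots', () => {
  it('captures baselines of the key pages', () => {
    cy.visit('/');
    cy.percySnapshot('Homepage');

    cy.visit('/products');
    cy.percySnapshot('Listings page');

    cy.visit('/products/example-product');
    cy.percySnapshot('Product details page');
  });
});
```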
Gatling – a performance test solution running on cloud infrastructure against a separate performance testing environment, where third parties can be stubbed if required.
DataDog synthetic tests – critical paths run every 15 minutes against production.
GitHub – we run all workflows on Actions.