Although SOAPUI and BDD can be complementary, when adding BDD test automation to an existing test process built around SOAPUI, it can be confusing how to adjust the way these tools are used. Once a team adopts BDD in place of imperative specifications (traditional requirements that describe how to build the software), it is already using BDD scenarios to describe the behavior it wants from the software. But now what to do about test automation?
Functional Testing
Continuing to use SOAPUI on its own will produce tests and test reports with no linkage between the BDD scenarios used for planning and the test results. That means the feature documentation in the form of BDD scenarios will go out of date. In short, BDD will still give the team value by reducing or eliminating requirements misses, but it won't result in maintained documentation.
Conundrum
If the intent of the SOAPUI tests is to prove that the functionality works for a functional scenario, and you now want to use BDD, here is a suggestion (and feel free to comment/tweet/email on how this works for you): use SOAPUI as a build tool to shape and test the messages you want to pass back and forth, then cook those messages into Java code for BDD service tests. That way the BDD tests become your functional test suite (a.k.a. acceptance test suite). But maybe you're still not satisfied.
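To make that concrete, here is a minimal sketch of what one of those Java BDD service tests might look like, assuming Cucumber-JVM, JUnit 5 assertions, and Java 11's built-in HttpClient; the endpoint, payload, and step wording are placeholders standing in for whatever was prototyped in SOAPUI:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class QuoteServiceSteps {

    private final HttpClient client = HttpClient.newHttpClient();
    private String requestBody;
    private HttpResponse<String> response;

    @Given("a quote request for customer {string}")
    public void aQuoteRequestForCustomer(String customerId) {
        // The payload below is whatever message was shaped and verified in SOAPUI.
        requestBody = "<QuoteRequest><customerId>" + customerId + "</customerId></QuoteRequest>";
    }

    @When("the quote service is called")
    public void theQuoteServiceIsCalled() throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://test-env.example.com/quote")) // placeholder endpoint
                .header("Content-Type", "text/xml")
                .POST(HttpRequest.BodyPublishers.ofString(requestBody))
                .build();
        response = client.send(request, HttpResponse.BodyHandlers.ofString());
    }

    @Then("a quote is returned")
    public void aQuoteIsReturned() {
        assertEquals(200, response.statusCode());
        // Further assertions on the response body would mirror the checks done by hand in SOAPUI.
    }
}
```

The Gherkin scenario that drives these steps is the same scenario the team wrote during planning, which is what keeps the feature documentation from going stale.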
Another layer: Exo-Testing
The word “exo” comes from Greek, meaning “from outside.” Enterprises with teams delivering functionality built on top of other teams’ services want to know whether their integration points still work. One approach would be simply to execute your BDD tests against those “outsider” systems, but a number of companies report difficulty with this: their service test environments are too unstable to support running BDD tests nightly, and the tests would fail so frequently that they wouldn’t provide value in detecting regressions. If this is your situation, then:
- isolate your BDD tests with mock service objects (or virtual services), as sketched after this list
- create a test automation suite targeted to answering the question: “are the services I depend on honoring their API contracts?”
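For the first bullet, here is a rough sketch of what a stand-in for an unstable upstream service could look like, assuming WireMock as the virtual-service tool; the port, path, and canned payload are placeholders:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.post;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

import com.github.tomakehurst.wiremock.WireMockServer;

public class UpstreamServiceStub {

    private final WireMockServer server = new WireMockServer(8089); // placeholder port

    /** Start a stand-in for the unstable upstream service before the BDD suite runs. */
    public void start() {
        server.start();
        // Canned response mirroring the contract the upstream service is supposed to honor.
        server.stubFor(post(urlEqualTo("/quote"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "text/xml")
                        .withBody("<QuoteResponse><price>42.00</price></QuoteResponse>")));
    }

    /** Stop the stub after the suite finishes. */
    public void stop() {
        server.stop();
    }
}
```

The canned response should mirror the contract the upstream service is supposed to honor, which is exactly what the second bullet's exo-tests then keep an eye on.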
These exo-tests should be simple, focused on smoke testing the API rather than on behavior (BDD), so there isn’t much value in linking feature documentation to them. Tools like SOAPUI could be a good fit here. When an exo-test fails, the developer is left with the question: “do I need to change how I interact with an upstream dependency, or is this a regression?” If the answer is to change how you interact with the upstream dependency, then the mock services the BDD tests use will need to be updated too. The exo-tests become an automated check (sketched after the list below) that prompts a team to:
- report to upstream partners that something is wrong
- update how they interact with upstream dependencies
- update mock objects they are using for isolation
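A contract smoke check of that kind doesn’t need much machinery; here is a sketch using plain JUnit 5 and Java’s HttpClient rather than SOAPUI, with the upstream URL, payload, and field names as placeholders:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;

/** Exo-test: a contract smoke check run against the real upstream test environment, on its own schedule. */
class QuoteServiceContractTest {

    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    void quoteEndpointStillHonorsItsContract() throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://upstream-test.example.com/quote")) // placeholder upstream URL
                .header("Content-Type", "text/xml")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "<QuoteRequest><customerId>TEST-001</customerId></QuoteRequest>"))
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // Smoke-level checks only: the endpoint answers and the fields we depend on are still there.
        assertEquals(200, response.statusCode());
        assertTrue(response.body().contains("<price>"), "expected the contract field <price> in the response");
    }
}
```

Run on its own schedule, a failure here points at the upstream contract rather than at your own behavior, which is why it stays out of the BDD suite.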
Consumer Driven Contracts goes a step further, saying you should socialize the tests you use to check an upstream dependency with the team that owns that app, ideally by handing over an executable API spec so the upstream team can add those tests to their own test suite. This improves reaction time on the first bullet above: the upstream team learns they would break you before rolling out their release, and with that information can either notify you of the breaking change or change strategy so they don’t break the contract.
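One lightweight way to make such a test hand-over-able (a sketch of a convention, not a specific consumer-driven-contract framework such as Pact) is to resolve the provider endpoint from configuration, so the upstream team can point the very same test at their own environment in their pipeline; the property name and URLs below are placeholders:

```java
import java.net.URI;

/**
 * Shared, executable piece of the contract: both consumer and provider pipelines resolve the
 * endpoint the same way, so the same smoke test can run on either side of the dependency.
 */
public final class QuoteServiceContract {

    private QuoteServiceContract() { }

    /** Provider base URL, overridable so the upstream team can run the test against their own environment. */
    public static URI quoteEndpoint() {
        String baseUrl = System.getProperty(
                "quote.service.baseUrl",              // e.g. -Dquote.service.baseUrl=https://ci.upstream.example.com
                "https://upstream-test.example.com"); // default: the environment the consumer team tests against
        return URI.create(baseUrl + "/quote");
    }
}
```

Tools like Pact formalize this exchange, but even a shared test class plus a configurable endpoint gets the conversation started.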
If you are stuck in a situation where something wild is happening with the upstream test environments you depend on, isolate your behavior tests with mocks and put in place a simple test suite targeted at watching that the contract still holds. With discipline and diligence, you’ll protect your test automation from instability until you can get rid of the infestation at the root of the problem.
(By the way, if you’re into steampunk soap, the picture at the top is a real product.)