Dial the Focus on Behavior

How detailed should BDD scenarios get?
BDD is a powerful requirements analysis tool: it first establishes what the business wants before deciding *how* to solve the problem. Here is how to dial the focus of this macroscope during requirements analysis.

Let’s say the PO brings this user story to a team.
As a writer
I want a spelling checker
So that I can correct my spelling mistakes

With our user story specified, let’s add acceptance criteria in the form of BDD scenarios:
Scenario: dog spelled incorrectly
Given [missing one letter] is written
When spell checking
Then suggest dog
Examples:
|missing one letter|
|dg|
|og|

And so we keep going:
Scenario: cat spelled incorrectly
....

HOLD ON! We’re going to end up writing examples for the entire English dictionary at this rate, which means around 1 million scenarios for an unabridged dictionary.

too detailed?

Let’s ask some questions:

  1. Is this a novel feature? Or does everyone already know what to do, meaning we’d just be stating the obvious?
  2. What’s needed to test this? Do we have a happy path example? Are there edge cases (and if so, what are some examples)?

is it novel?

Novel means “something new and interesting.” If this feature were being built at a time when spell checkers were a new idea, perhaps we would need these behaviors written down. Before computers did spell checking, people did it, following a very old tradition of using dictionaries to standardize the spelling of words. If we built scenarios for every word, we’d simply be re-codifying knowledge that already lives in the dictionary. We’d be restating something obvious.

Let’s think further on the meaning of novel: dictionaries have existed since the 16th century, and digital computers have been around since the 1950s. If we were doing BDD in the 1970s, when computerized spell checking was novel (but the dictionary was not), then we’d twist the knob of our BDD macroscope to put the focus of analysis at a higher level:

Scenario: Suggest spelling corrections
Given a word is written
When the word is not in the dictionary
Then suggest a correction

That would have been novel behavior in the ’70s. This is looking good: we aren’t re-documenting something that is already documented, namely the dictionary. What we’re documenting is the behavior of human spell checkers, which is more novel than the dictionary itself.

We can make our scenario better by offering an example which:

  • is concrete, and
  • allows us to confirm our understanding

Scenario: Suggest corrections
Given [a word] is written
When the word is not in the dictionary
Then suggest a correction
Examples:
|a word|
|dg|
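Because the example is concrete, we can even wire it straight into automation to confirm our understanding. Here’s a minimal step-definition sketch using Python’s behave library; the myapp.spelling module and its SpellChecker class (with is_known and suggest methods) are hypothetical stand-ins for whatever the team builds or buys, and it assumes the [a word] placeholder is run as a standard scenario outline:

from behave import given, when, then

from myapp.spelling import SpellChecker  # hypothetical module under test


@given('{word} is written')
def step_word_written(context, word):
    context.word = word
    context.checker = SpellChecker()


@when('the word is not in the dictionary')
def step_word_not_in_dictionary(context):
    # Guard our assumption that the example really is a misspelling.
    assert not context.checker.is_known(context.word)


@then('suggest a correction')
def step_suggest_correction(context):
    suggestions = context.checker.suggest(context.word)
    assert suggestions, 'expected at least one suggestion for "%s"' % context.word

Each example row (just “dg” so far) flows through these steps, so a wrong assumption shows up as a failing step rather than a late surprise.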

Are we done? Maybe. Let’s ask another question and see if that generates more examples.

what’s needed to test this?

Up to this point, we drove our need for understanding from the requirements side with the question, “Is it novel?” Now let’s ask a question of one of our other amigos: “What’s needed to test this?” The Completionist will suggest adding a million examples to see if our dictionary works. If we are using a dictionary from a third party, that would be a waste of time (unless it’s a lousy dictionary), and we’d again be codifying a dictionary, just in BDD format. If your goal is building the dictionary, then it’s a good idea to create a test for every word, but let’s assume the dictionary comes from a third party.

Add more examples selected with an eye toward:

  • spot checking the happy path, and
  • confirming edge conditions.

Examples:
|a word|
|dg|
|dg.|
|dg!|
|dg?|
|dg-|

“Oh!” says someone in the room. “What behavior should we have for all special characters?” With that question, a lightbulb flashes in the room that never would have flashed if we were working with scenarios littered with superfluous, non-behavioral details such as UI clicks and text fields, or with overly complicated scenarios using Given/When/Then and dozens of Ands. Until development teams are made up of Vulcans, it’s better to keep scenarios to three to five Gherkin keywords so the behavior isn’t buried.
A new scenario is discovered:
Scenario: Abuts Non-letter
Given [a word] is written and abuts a non-letter
When the word is not in the dictionary
Then suggest a correction
Examples:
|a word|
|ct!|
|ct@|
|ct#|
....

It’s debatable whether those non-letter examples need to be explicit. If the idea is novel, it’s better to document them in a BDD scenario. If that’s unnecessary but the team still wants automated tests for them, test them with a non-BDD strategy such as unit tests.
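As a sketch of that non-BDD strategy (and only a sketch: suggest_corrections and the myapp.spelling module here are hypothetical, not a real library), a parametrized unit test can cover the non-letter cases without adding more Gherkin:

import pytest

from myapp.spelling import suggest_corrections  # hypothetical function under test


@pytest.mark.parametrize("misspelled", ["ct!", "ct@", "ct#", "ct?", "ct-"])
def test_word_abutting_non_letter_still_gets_suggestions(misspelled):
    # The trailing non-letter should not block spelling suggestions.
    assert "cat" in suggest_corrections(misspelled)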

Novel Interactions

What if the individual pieces aren’t novel but are used together in a novel way?

For example, the dictionary isn’t novel and grammar checking isn’t novel, but how about this router?
As an internet user
I want a router that spell corrects and grammar corrects transmitted information
So that I can't forget to check spelling and grammar.

Whoa! How about an example? We could go down the path of specifying the contents of a Word document:
Scenario: JFK's Word Document
Given For one tuer messure of a nation....
When...
Then...

But is it valuable to put all this detail in the BDD scenario? It would be if the spell-checking and grammar-checking details were novel. However, we’ve had spell checking and grammar checking since the seventies, so I doubt it. Let’s thumb back the zoom on this macroscope and explore the behaviors above that.

Scenario: Correct email
Given an email
And there are grammar and spelling mistakes in the email body
When transmitted
Then mistakes are corrected

Scenario: Correct attachments
Given an email
And the body has grammar and spelling mistakes
And an attached Word document has grammar and spelling mistakes
When transmitted
Then mistakes are corrected

If there are a lot of attachment types, we’ll change this to use an examples table:
Scenario: Correct attachments
Given an email
And the body has grammar and spelling mistakes
And [attached documents] have grammar and spelling mistakes
When transmitted
Then mistakes are corrected
Examples:
|attached documents|
|Word|
|Excel|
|PDF|
|email|
|PPT|
....

“What about permutations of those attachments?” asks the Tester. “And what about really big files? And is there a limit on attachments?”
The PO says, “Let’s support whatever is allowed by Outlook.”
The Dev says, “We’ll design it to do stream processing so the number, size, and permutations won’t play a factor.”
The Tester looks at Dev with a sly grin. “What about non-English languages?”
“Evil…” hisses Dev, shaking her head. “You’re just evil!”
PO shakes her head. “Hold on! Tester is right. Not all of our users speak English. Let me follow up with the stakeholder on what we need to support for this release.”
Analyst says, “That’s pretty novel. And I doubt we can suddenly do this for all the world’s languages, can we?”
Dev shakes her head, giving Tester a dirty look.
“OK,” says Analyst. “We’ll add scenarios for the ones that we do support.”

Scenario: Correct German email
Given a German email
And the body has grammar and spelling mistakes
And [attached documents] have grammar and spelling mistakes
When transmitted
Then mistakes are corrected
....

Scenario: Correct Swedish....

A Checklist

Here are characteristics of good BDD scenarios:

  • It doesn’t mention the user interface
  • Your Product Owner or business stakeholder understands it
  • It describes something novel
  • Its examples are concrete

Want to learn more?

I’m putting together a BDD training series. Some of it is free, such as podcasts, videos, and blog articles like this one sent straight to your email inbox; some of it is available for a small fee, such as books and online training products. Go here to opt in to receive periodic emails. Otherwise, enjoy the site.

