At daily standup meetings, they eye each other from opposite sides of the room. Sitting on the same side of the cubicle wall is unthinkable.
They’re united only by their desire to produce quality software products and their appreciation for coffee and energy drinks. What’s good to one side can be anathema to the other when it comes to code.
I’m talking, of course, about testing and development teams. In the interest of
improving dialogue between two very important functions in a software organization, our marketing director asked me to interview our testing team lead, Jonathan Patchell, about the ways in which developers drive his team nuts.
Patchell, a computer systems engineer, has been with Klocwork for five years and a team lead for two. He struck a fairly conciliatory tone for this interview, which sorta ruins the adversarial approach, but don’t let his diplomacy fool you. I’ve seen him suffering as the release date approaches and his demeanour changes completely.
Here are Patchell’s top dev peeves:
- Terse or no information about new features.
It’s hard to be thorough with test cases when there’s little or no information about what the feature is, the important scenarios, the potential problems, and the impact on related and unrelated systems, Patchell says.
The fix: “We have to ask the right questions during meetings and developers need to make clear what needs to be tested.” An information dump to a wiki page, casual conversation, or an email is always appreciated, he says. As Patchell puts it, “Both dev and testing need the feature to be well tested.”
- Changing things in the product that break automated testing.
When hundreds of automated test cases fail overnight, they can cause momentary panic, requiring investigation and wasting time.
The fix: Let the test team know ahead of time if something will break automated testing. The sooner the team knows about these changes, the sooner they can begin updating the test scripts, Patchell says.
- Solving problem reports without describing what was done.
The fix: Describe what was done to fix the problem, so the expected behaviour is clear.
- Not getting a build.
Once upon a time, only weekly builds were tested. Now, in keeping with the agile model, builds occur nightly, unless something critical breaks and there’s no build at all. There are almost always bug fixes waiting to be verified; a broken build delays confirmation that they’re actually fixed and impedes the discovery of new problems. This matters most at the end of the release cycle.
The fix: Stop doing that.
- Not wanting to fix stuff.
Problem reports that development gates as Would Be Nice (WBN) or Future indicate that testing and development aren’t aligned on what’s important. Sure, fixing them may only mean adding a “bit of polish to make a feature look more finished,” Patchell says, “but it can go a long way towards improving usability.”
The fix: Fix these issues if time truly permits.
- Lack of clarity about limitations or feature done-ness.
Patchell likes upfront information about what’s expected to work and what isn’t with new features, so the work can be scoped properly. With agile, partial features are often tested. A lack of this information frustrates both sides: developers, because problem reports get logged against aspects of the feature not yet implemented, and testers, who have little to go on about what’s testable and what isn’t.
The fix: “Everything can change in a day,” Patchell says. “I want to know what’s different with that feature today.”