How developers drive testers nuts–let’s count the ways

on Feb 17, 11 • by Patti Murphy

[Photo: The two sides of testing team lead Jonathan Patchell.]

At daily standup meetings, they eye each other from opposite sides of the room. Sitting on the same side of the cubicle wall is unthinkable.

They’re united only by their desire to produce quality software products and their appreciation for coffee and energy drinks. What’s good to one side can be anathema to the other when it comes to code.

I’m talking, of course, about testing and development teams. In the interests of generating more comments (er, improving dialogue) between two very important functions in a software organization, our marketing director asked me to interview our testing team lead, Jonathan Patchell, about the ways in which developers drive his team nuts.

Patchell, a computer systems engineer, has been with Klocwork for five years and a team lead for two. He struck a fairly conciliatory tone for this interview, which sorta ruins the adversarial approach, but don’t let his diplomacy fool you. I’ve seen him suffer as the release date approaches, and his demeanour changes completely.

Here are Patchell’s top dev peeves:

  1. Terse or no information about new features.
    It’s hard to be thorough with test cases when there’s little or no information about what the feature is, the important scenarios, the potential problems, and the impact on related and unrelated systems, Patchell says.
    The fix: “We have to ask the right questions during meetings and developers need to make clear what needs to be tested.” An information dump on a wiki page, a casual conversation, or an email is always appreciated, he says. As Patchell puts it, “Both dev and testing need the feature to be well tested.”
  2. Changing things in the product that break automated testing.
    When hundreds of automated test cases fail overnight, the result is momentary panic and a lot of time lost to investigation.
    The fix: Let the test team know ahead of time if something will break automated testing. The sooner the team knows about a change, the sooner it can start updating the test scripts, Patchell says. (The first sketch after this list shows one way to put that warning to work.)
  3. Solving problem reports without describing what was done.
    The fix: Describe how the problem was fixed, so the expected behaviour is clear.
  4. Not getting a build.
    Once upon a time, only weekly builds were tested. Now, in keeping with the agile model, builds occur nightly, unless a critical feature breaks and then there’s no build. There are almost always bug fixes waiting to be tested; a broken build delays confirmation that they really are fixed and gets in the way of finding new problems. This matters most at the end of the release cycle.
    The fix: Stop doing that. (The second sketch after this list shows one way to soften the blow.)
  5. Not wanting to fix stuff.
    Problem reports that development gates as Would Be Nice (WBN) or Future are a sign that testing and development aren’t aligned on what’s important. Sure, fixing them may only mean adding a “bit of polish to make a feature look more finished,” Patchell says, “but it can go a long way towards improving usability.”
    The fix: Fix these issues if time truly permits.
  6. Lack of clarity about limitations or feature done-ness.
    Patchell likes upfront information about what’s expected to work and what isn’t in a new feature, so the work can be scoped properly. With agile, partial features are often tested, and a lack of this information frustrates both sides: developers, because problem reports get logged against parts of the feature that aren’t implemented yet, and testers, because they have little to go on about what’s testable and what isn’t.
    The fix: “Everything can change in a day,” Patchell says. “I want to know what’s different with that feature today.” (The last sketch after this list shows one way to encode that answer.)
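
A note on peeve 2: one way advance warning pays off is to quarantine the affected tests instead of letting them mass-fail overnight. Here’s a minimal sketch, assuming a pytest-based suite; the function under test and the change ID are invented for illustration, not Klocwork’s actual code:

    import pytest

    def report_header(product, version):
        # Hypothetical stand-in for the code under test.
        return "%s v%s analysis report" % (product, version)

    # Development has warned that tonight's build changes the header format
    # (the change ID below is made up). Marking the test xfail turns what
    # would be a hard failure at 6 a.m. into an expected failure to revisit.
    @pytest.mark.xfail(reason="PR-1234 changes the report header format")
    def test_report_header():
        assert report_header("Widget", "2.0") == "Widget v2.0 analysis report"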
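
For peeve 4, “stop doing that” really is the fix, but infrastructure can soften the blow: if the nightly job republishes a last-good pointer only after a successful build, testers always have something installable the next morning. A rough sketch, assuming a POSIX file system and an entirely made-up directory layout:

    import subprocess
    from pathlib import Path

    NIGHTLY = Path("/builds/nightly")  # hypothetical layout

    def publish_last_good(build_dir):
        # Repoint the "last-good" symlink via an atomic rename, so testers
        # always see either yesterday's good build or tonight's new one.
        tmp = NIGHTLY / "last-good.tmp"
        if tmp.is_symlink():
            tmp.unlink()
        tmp.symlink_to(build_dir)
        tmp.replace(NIGHTLY / "last-good")

    def nightly_build(build_dir):
        # Only a successful build moves the pointer; a broken build leaves
        # the previous install for the test team to keep working against.
        result = subprocess.run(["make", "all"], cwd=build_dir)
        if result.returncode == 0:
            publish_last_good(build_dir)
        return result.returncode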
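
And for peeve 6, the “what’s testable today” answer can even live in the test suite: if developers keep a feature-status table current, tests for unfinished pieces skip instead of spawning problem reports. Another pytest sketch, with invented feature names:

    import pytest

    # Kept current by developers as pieces of the feature land.
    FEATURE_STATUS = {
        "export_csv": "done",
        "export_pdf": "in progress",
    }

    def requires_feature(name):
        # Skip, rather than fail, anything development says isn't ready,
        # so problem reports aren't logged against unimplemented behaviour.
        return pytest.mark.skipif(
            FEATURE_STATUS.get(name) != "done",
            reason="%s is not finished yet" % name,
        )

    @requires_feature("export_csv")
    def test_export_csv():
        assert True  # real checks would go here

    @requires_feature("export_pdf")
    def test_export_pdf():
        assert True  # skipped until development flips the status
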
I guess the obvious sequel to this is how testers drive developers nuts. I see a whole series: how marketing drives sales nuts, how sales drive development nuts, and how technical writers irritate everyone. Then, I can use pictures of vampires and witches too. This could be an infinite loop of posts.

4 Responses to How developers drive testers nuts–let’s count the ways

  1. Pichat says:

    Thanks for the article; it’s easy to read and helps to see a different view of development work.
    Early communication and clear documentation would be nice. Sounds familiar. Looking forward to part 2 with vampires. :)
