Arlo Belshee: What makes a good test suite? – In Detail

August 29, 2012

Continuing my answer to Arlo Belshee's question What makes a good test suite?, I would like to provide some detailed comments on the different answers I have read. If you want to see a summary, skip over to my next blog post.

  • Corey Haines summarized it with three keywords: Fast, Focused and Full.
    • Fast: This is the thing I am most often asked for by developers. No one wants to wait an hour or more for results of integration tests.
    • Focused: I see many developers argue about this one. In typical tests I have reviewed you will find a huge bunch of assertions. The problem – and this is why I think Corey's point is important: if the first assertion fails, you won't know the state of the following assertions. If you plan one day to fix the first assertion, you will be unable to estimate the time required to get the test completely green again. Thus you lose agility if your tests do not stay focused.
    • Full: This one is hard to rate: what does “full” mean? What is a “full UI test”? How can you measure completeness? You can measure Fast and you can measure Focused, but measuring completeness is rather impossible – and it typically collides with Fast. My rating for completeness: if you receive a bug report for something your tests did not uncover, they are obviously not complete.
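Corey's Focused point can be sketched in plain Java (no JUnit – the method names and values below are invented for illustration): one unfocused test with three assertions stops at the first failure, while three focused checks report the state of everything.

```java
// Sketch (plain Java, no test framework): an unfocused test with three
// assertions vs. three focused checks. Names and values are illustrative.
import java.util.ArrayList;
import java.util.List;

public class FocusedTests {
    // Hypothetical results of a method under test.
    static int width()  { return 0; }   // broken: should be > 0
    static int height() { return 10; }
    static int depth()  { return 5; }

    // Unfocused: the first failure aborts; the state of the rest stays unknown.
    static List<String> unfocusedTest() {
        List<String> results = new ArrayList<>();
        try {
            if (width() <= 0)  throw new AssertionError("width");
            results.add("width ok");
            if (height() <= 0) throw new AssertionError("height");
            results.add("height ok");
            if (depth() <= 0)  throw new AssertionError("depth");
            results.add("depth ok");
        } catch (AssertionError e) {
            results.add("FAILED at " + e.getMessage() + "; remaining assertions skipped");
        }
        return results;
    }

    // Focused: each check is its own test, so every result is reported.
    static List<String> focusedTests() {
        List<String> results = new ArrayList<>();
        results.add(width()  > 0 ? "width ok"  : "width FAILED");
        results.add(height() > 0 ? "height ok" : "height FAILED");
        results.add(depth()  > 0 ? "depth ok"  : "depth FAILED");
        return results;
    }

    public static void main(String[] args) {
        System.out.println(unfocusedTest());
        System.out.println(focusedTests());
    }
}
```

With the unfocused variant you cannot estimate how much work is hidden behind the first red assertion; the focused variant tells you immediately.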
  • J. B. Rainsberger raises issues which describe especially what makes a good test:
    • Consistent results: Especially when you develop (Web-)UI tests you will find that this is the most challenging goal to achieve. But if you fail to have consistent results, you will soon see broken-window effects in your tests: developers become blind to new test failures because, after twenty analyses that always ended with the same result, they are tired of analyzing the most recent one. A workaround some teams found: move unstable tests to extra test suites (or use JUnit Categories) so that the base of your tests is always green. Then you just need a way to deal with those unstable tests. Fix them? Only review them before a release? Or – also an option sometimes – delete them.
    • Test failures point to the mistake: If you let developers test, you will find code like assertTrue(fooBar() > 0), i.e. you will need to browse the test code (and application code) before actually knowing what the problem is. My experience: as a QA expert you need to support developers in writing tests in such a way that they don't need to write assertion messages. A good start is to use Hamcrest Matchers, as they come with very good reporting: assertThat(fooBar(), Matchers.greaterThan(0)).
    • Easy to understand tests: This matches perfectly with Corey's statement on Focused: if a test is not focused, it is often hard to understand. My recommendation: the test body should be as easy to read as a (well-written) BDD scenario, whether in natural language or in a more formalized language like Gherkin. Any other verbose code that does not help to understand the test should be moved to methods outside the test body.
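The difference between the boolean assertTrue style and the matcher style can be sketched in plain Java. The helpers below only imitate what JUnit's assertTrue and Hamcrest's greaterThan matcher report; the real libraries produce similar messages, and fooBar() is a made-up method under test.

```java
// Sketch (plain Java): why matcher-style assertions report better than
// boolean ones. These helpers imitate JUnit/Hamcrest failure messages.
public class AssertionReporting {
    static int fooBar() { return -1; }  // hypothetical method under test

    // All a boolean assertion can say: something was false.
    static String booleanStyleFailure(boolean condition) {
        return condition ? null : "expected true, got false";
    }

    // A matcher knows both the expectation and the actual value,
    // so no hand-written assertion message is needed.
    static String matcherStyleFailure(int actual, int bound) {
        return actual > bound ? null
            : "Expected: a value greater than <" + bound + "> but: was <" + actual + ">";
    }

    public static void main(String[] args) {
        System.out.println("Boolean style: " + booleanStyleFailure(fooBar() > 0));
        System.out.println("Matcher style: " + matcherStyleFailure(fooBar(), 0));
    }
}
```

The matcher-style message names the expectation and the actual value, so the developer does not have to open the test code to understand the failure.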
  • Elisabeth Hendrickson’s answer:
    • Boolean Pass/Fail: Especially in integration testing this simple boolean logic is not always helpful – in particular if you need it to rate the quality of your release. JUnit comes with three states, also interpreted by CI build applications like Jenkins: Success, Failure or Skipped. In my point of view this still isn't sufficient:
      • Success: No problem with that.
      • Failure: If you follow Corey's rule to stay focused, and thus have only one assertion, I am fine with that. Otherwise you cannot rate the severity of the failure; this would require something like: “Failed, and skipped 6 further assertions.”
      • Skipped: Especially for Web-UI tests I experience there are two flavors of skipped tests:
        • Not Applicable: The test cannot be run in the given environment; for example, it does not work with Firefox (either because you failed to make the test work in Firefox, or perhaps just because the feature you want to test does not exist in the Firefox version of your application).
        • Unknown: I like to use JUnit's assumptions (which on failure cause the test to be marked as Skipped) to verify pre-conditions. For example, if you failed to create a file required for the test, you simply cannot tell whether your feature is broken. You have to answer: I don't know.
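The three outcomes above can be modelled in a few lines of plain Java (in real JUnit, Assume.assumeTrue(...) produces the Skipped outcome; the fixture wording below is invented for illustration):

```java
// Sketch (plain Java): a failed pre-condition check ("assumption") yields
// SKIPPED -- "I don't know" -- instead of FAILED.
public class TestOutcome {
    enum Result { SUCCESS, FAILURE, SKIPPED }

    // fixturePresent: could the required fixture (e.g. an input file) be
    // created? featureWorks: did the actual check on the feature pass?
    static Result run(boolean fixturePresent, boolean featureWorks) {
        if (!fixturePresent) return Result.SKIPPED;  // assumption violated
        return featureWorks ? Result.SUCCESS : Result.FAILURE;
    }

    public static void main(String[] args) {
        System.out.println(run(false, true));   // SKIPPED: cannot tell
        System.out.println(run(true, false));   // FAILURE: feature is broken
        System.out.println(run(true, true));    // SUCCESS
    }
}
```

Only when the fixture is present does the verdict say anything about the feature itself; otherwise the honest answer is Skipped, not Failed.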
    • Maintainable and resilient to change: Maintainability is what I quote most often regarding tests. It derives from the fact that developers tend to regard tests as ballast – so if they have to deal with tests, the tests need to be easy to write and to maintain, where maintainability covers many aspects:
      • readable (see J.B.’s point)
      • one failure cause – one line to fix (assuming the test just needs to be adapted to a product change). For UI tests you should, for example, separate the description of how to access the UI from the tests themselves. Then on a UI change you just need to adapt one line.
      • J.B.’s point on exact failure reporting (Elisabeth also mentions this, see below).
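The “one failure cause – one line to fix” idea for UI tests can be sketched as a minimal page-object-style class in plain Java. The page name and selector string are invented; a real implementation would drive a browser instead of returning strings.

```java
// Sketch: keep UI access in one place so a UI change is a one-line fix.
// Selector and names are made up for illustration.
public class LoginPage {
    // If the login button's id changes, only this line changes --
    // none of the tests calling clickLogin() need to be touched.
    private static final String LOGIN_BUTTON = "#login-submit";

    static String clickLogin() {
        // A real page object would locate and click the element here.
        return "click " + LOGIN_BUTTON;
    }

    public static void main(String[] args) {
        // Two "tests" share the same single access path to the UI:
        System.out.println("test_login_succeeds: " + clickLogin());
        System.out.println("test_login_rejects_bad_password: " + clickLogin());
    }
}
```

Because every test goes through clickLogin(), a renamed button id means adapting exactly one constant instead of hunting through the whole suite.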
    • Diagnostic information: Very important; it enriches J.B.'s point on good failure reporting. Especially for integration tests it is important to collect information on the state of the overall system from all available sources.
    • Independent Tests: For integration tests this is sometimes the most challenging part. My recommendation is that every integration test cleans up the dirt it produced, for example in databases. And if possible, test artifacts should be created at a place carefully chosen for each test or test suite – for example a folder in the file system exclusively available to one test or test suite. Elisabeth also states this in her next point on “data pollution”.
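The exclusive-folder recommendation can be sketched with the standard library alone: each “test” gets its own temporary directory and removes its dirt afterwards, even if the test fails. The file name and fixture content are invented for illustration.

```java
// Sketch: each test gets an exclusive folder and cleans up what it created,
// so tests stay independent of each other.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class IsolatedArtifacts {
    static String runTestWithOwnFolder(String testName) {
        try {
            // Exclusive working directory for exactly this test:
            Path dir = Files.createTempDirectory(testName + "-");
            try {
                Path artifact = dir.resolve("data.txt");
                Files.write(artifact, "fixture".getBytes());
                // ... the actual test work would happen here ...
                return new String(Files.readAllBytes(artifact));
            } finally {
                // Clean up the dirt this test produced, even on failure:
                Files.deleteIfExists(dir.resolve("data.txt"));
                Files.deleteIfExists(dir);
            }
        } catch (IOException e) {
            return "io-error: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        // Two tests run without ever touching each other's files:
        System.out.println(runTestWithOwnFolder("testA"));
        System.out.println(runTestWithOwnFolder("testB"));
    }
}
```

Because the folder name is unique per run, two tests (or two parallel executions of the suite) can never pollute each other's data.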
    • Elisabeth also adds an interesting pointer to a paper by Dale Emery: Writing Maintainable Automated Acceptance Tests.
  • Arlo Belshee: What I like about Arlo's answer is the introduction, which takes the learning from the received answers and summarizes it into a nice definition of the completeness of tests: “everything that gives me confidence that my customers will likely love my product”.
    • A good test suite makes changes less scary and more fun. Nothing to add. I especially like to emphasize the fun part – because many developers think that testing is no fun at all. And fun perfectly summarizes things like easy to write, easy to read, easy to adapt to change and many more.
    • Arlo also mentions a word I like for the developer perspective: the flow – “I never think about them; I just fix and flow past.” Yes, a good test should not stop but support the flow a developer has while developing cool new things.
  • Ron Jeffries:
    • Two levels: Ron distinguishes programmer tests and customer tests. It is interesting to ask whether they have different severities when they fail. I would suppose that you never ever want customer tests to fail, i.e. the tests which answer Arlo's question whether customers will love the product. To me, customer tests sound like black-box testing, while programmer tests sound like white-box testing, typically used, for example, for code coverage.
    • The test as story: This matches perfectly with J.B.'s statement on easy to understand tests. I think the body of a test must be as easy to read as a book. Everything breaking the reading flow should be delegated to methods outside the test body.
  • Lisa Crispin:
    • Like some other interview partners, Lisa asked for a more detailed description of what the “test suite” actually is. Indeed we also have many test suites, ranging from selected stories for exploratory testing, to manual test sheets, to unit and integration tests, to (separate from integration tests) UI tests, and many more. Each of them might need a different definition of “good”.
  • James Shore: James took the question as the starting point for a separate blog post, so I will refer to that one here:
    • Safety net: That’s a nice metaphor. Yes, tests are the safety net for refactorings.
    • Living Documentation: Just my opinion – while comments and written manuals tend to degrade over time, a good test suite always describes the current state and intent of the software. And it is living, as it responds to refactorings.
  • Ward Cunningham:
    • Ward summarizes in short many of the already mentioned aspects.
    • He also mentions my favorite quote: Tests must be easy to create. If they are not, you will always have too few of them.
    • Durable: That captures an important point: when you write a test, don't forget that it will persist for years. In our teams we have tests that are 10 years old and older. So when writing a test, consider that someone will need to maintain it 10 years from now – and that someone might be you.
  • Llewellyn Falco:
    • Besides already-known points like speed and readable tests, Llewellyn mentions a very good one summarized under the keyword Malleability: a good test suite locks the things you don't want to change and gives freedom to what you do want to change. Always keep in mind that each test is a form of specification and fixates a behavior for now and forever. This actually conflicts with measurements like code coverage: with 100% code coverage you most likely won't be able to move without breaking a test.
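Malleability can be illustrated in plain Java with an invented greeting example: one check locks only the essential contract, the other also fixates incidental wording – the kind of rigidity that chasing 100% coverage tends to produce.

```java
// Sketch: a test can lock either essential or incidental behaviour.
// The greet() example is invented; the point is what each check fixates.
public class Malleability {
    static String greet(String name) { return "Hello, " + name + "!"; }

    // Locks the essential contract only: the greeting addresses the user.
    // The wording stays free to change.
    static boolean essentialCheck(String name) {
        return greet(name).contains(name);
    }

    // Locks the incidental wording too: any rephrasing of the greeting
    // now breaks the suite, even though the behaviour users care about
    // is unchanged.
    static boolean incidentalCheck(String name) {
        return greet(name).equals("Hello, " + name + "!");
    }

    public static void main(String[] args) {
        System.out.println(essentialCheck("Ada"));
        System.out.println(incidentalCheck("Ada"));
    }
}
```

A malleable suite consists mostly of essential-style checks; a suite written only to maximize coverage tends toward the incidental style and freezes the code.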
  • Steve Freeman:
    • Steve summarizes some important points already mentioned, like easy to understand tests and tests which point to the actual problem, and – as nearly everyone mentioned – tests must be fast.
  • Markus Gärtner:
    • He is the first not only to state that tests need to run fast, but also to share thresholds at which teams tend to declare a test suite too slow and abort it: obviously a unit test suite taking more than 15 minutes is a bad idea. For acceptance tests he defines a limit of 90 minutes. My experience: 120 minutes is definitely too slow. Some teams solved this problem by splitting their collection of tests to run on several machines in parallel.
    • As some others already did, Markus also emphasizes the importance of good test names. A test named testHelloWorld() is less helpful than hello_world_should_appear_on_screen().
    • Markus is the only one who also defines what a good manual test suite is; all others referred to automated testing. What I think is important is to “challenge the tester's mind”. Nothing is more boring than executing fixed manual test steps like “click here and there”. We have had good experiences with the approach of “Test Tours” as described in Exploratory Software Testing by James Whittaker.
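The suite splitting mentioned under Markus's point – running a slow collection of tests on several machines in parallel – can be sketched as a simple round-robin partition (plain Java; the test names are invented, and real CI servers use more elaborate schemes such as balancing by historical runtime):

```java
// Sketch: distribute a slow suite round-robin over several machines so
// wall-clock time drops roughly by the machine count.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SuiteSplitter {
    static List<List<String>> split(List<String> tests, int machines) {
        List<List<String>> buckets = new ArrayList<>();
        for (int i = 0; i < machines; i++) buckets.add(new ArrayList<>());
        // Test i goes to machine i mod machines:
        for (int i = 0; i < tests.size(); i++)
            buckets.get(i % machines).add(tests.get(i));
        return buckets;
    }

    public static void main(String[] args) {
        List<String> suite = Arrays.asList("t1", "t2", "t3", "t4", "t5");
        System.out.println(split(suite, 2));  // two machines share the suite
    }
}
```

Round-robin is only fair when tests have similar durations; otherwise balancing by measured runtime keeps the slowest machine from dominating the total.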

From → Testing
