Wednesday, September 22, 2010

JavaOne 2010: JUnit Kung Fu: Getting More Out of Your Unit Tests

My Wednesday at JavaOne 2010 began with John Ferguson Smart's standing-room-only presentation "JUnit Kung Fu: Getting More Out of Your Unit Tests." Most of the overcapacity audience responded in the affirmative when Smart asked who uses JUnit. While the majority of the audience uses JUnit 4, some do use JUnit 3. Only a small number of individuals raised their hands when asked who uses Test-Driven Development (TDD).

Smart stated that appropriate naming of tests is a significant tool in getting the most out of JUnit-based unit tests. He mentioned that JUnit 4 enables this by using annotations to mark test methods rather than requiring the method name conventions that earlier versions of JUnit imposed. In the slide "What's in a name," Smart pointed out that naming tests appropriately helps express the behavior of the application being tested. Smart likes to say he doesn't write tests for his classes. Instead, classes get tested "as a side effect" of his testing of desired behaviors. Smart recommended that you don't test "how it happens," but test "what it does." If your implementation changes, your test doesn't necessarily need to change because you're only worried about outcomes and not how the outcomes are achieved. Smart talked about how appropriately named tests are more readable for people new to the tests and also help ensure that the appropriate things (behaviors) are being tested.
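
As a quick illustration of the difference (my own sketch, not code from the session), a JUnit 4 test method's name is free to describe behavior because the @Test annotation, rather than a naming convention, marks it as a test:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class StringReversalTest {

    // JUnit 3 located tests by naming convention, forcing names like:
    //   public void testReverse() { ... }
    // With JUnit 4, the @Test annotation identifies the test, so the method
    // name can describe the behavior instead of echoing the method under test.
    @Test
    public void reversingAStringShouldProduceTheCharactersInOppositeOrder() {
        assertEquals("cba", new StringBuilder("abc").reverse().toString());
    }
}
```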

Smart outlined many naming tips in his slide "What's in a name" (only a subset is listed here):
  1. Don't use the word "test" in your tests (use "should" instead)
  2. Write your tests consistently
  3. Consider tests as production code
For Unit Test Naming Tip #1, Smart stated that "should" is very common in Behavior-Driven Development (BDD) circles. Test methods should be named to provide a context: they should state the behavior being tested and the expected outcome. I liked this tip because I find myself naming my test methods similarly, but I have always started with the word "test," followed by the name of the method being tested, followed by the expected behavior and outcome. Smart's recommendations reaffirm some of the things I have learned through experience, but they articulate how to do this more effectively than I have been doing it.
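
A minimal sketch of the naming style I understood him to recommend, with the class supplying the context and each method stating the behavior and expected outcome (my own hypothetical example, not one of Smart's):

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import java.util.ArrayList;
import org.junit.Test;

// Context: what situation is being exercised (an empty list).
public class WhenAListIsEmpty {

    // Behavior plus expected outcome, phrased with "should" rather than "test".
    @Test
    public void shouldReportThatItIsEmpty() {
        assertTrue(new ArrayList<String>().isEmpty());
    }

    @Test
    public void shouldHaveASizeOfZero() {
        assertEquals(0, new ArrayList<String>().size());
    }
}
```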

Smart stated that tests should be written consistently. He showed two common structures: "Given-When-Then" and "Arrange-Act-Assert." Smart said that he uses the classic TDD approach of writing the test against the inputs and expected outputs first and then writing the implementation.
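
For instance, a test laid out in the Given-When-Then (equivalently, Arrange-Act-Assert) structure might look like this (my own sketch):

```java
import static org.junit.Assert.assertEquals;
import java.util.ArrayDeque;
import java.util.Deque;
import org.junit.Test;

public class WhenPushingOntoAStack {

    @Test
    public void shouldReturnTheMostRecentlyPushedElementOnPop() {
        // Given (Arrange): a stack that already holds one element
        Deque<String> stack = new ArrayDeque<String>();
        stack.push("first");

        // When (Act): another element is pushed and then popped
        stack.push("second");
        String popped = stack.pop();

        // Then (Assert): the most recently pushed element comes back
        assertEquals("second", popped);
    }
}
```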

Smart's bullet "Tests are deliverables too - respect them as such" summarized his discussion of the importance of refactoring tests just as production code is refactored. Similarly, he stated that they should be as clean and readable as the production code. One of the common difficulties associated with unit tests is keeping them maintained and consistent with production code. Smart pointed out that if we treat the unit tests like production code, maintaining them won't be seen as a negative. Further, if tests are maintained as part of production maintenance, they don't fall into a sad state of disrepair. In the questions and answers portion, Smart further recommended that unit tests be reviewed in code reviews alongside the code being reviewed.

Smart spent over 20 minutes of the 60-minute presentation on test naming conventions. He pointed out at the end of that section that if there was only one thing he wanted us to get out of this presentation, it was the importance of unit test naming conventions. I appreciated the fact that his actions (devoting a third of the presentation to naming conventions for unit tests) reaffirmed his words (the one thing that we should take away).

Smart transitioned from unit test naming conventions to covering the expressiveness and readability that Hamcrest brings to JUnit-based unit testing. Smart pointed out a common weakness of JUnit's built-in assertions: when a test fails with an exception or assertion error, it can be hard to tell from the output what actually went wrong. Hamcrest expresses why an assertion failed much more clearly. Smart covered "home-made Hamcrest matchers" (custom Hamcrest matchers) and described creating these in "three easy steps." Neal Ford also mentioned Hamcrest in his JavaOne 2010 presentation Unit Testing That's Not So Bad: Small Things that Make a Big Difference.
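
A home-made Hamcrest matcher might look something like the following sketch (my own code, not Smart's; it assumes a Hamcrest library such as the core bundled with JUnit is on the classpath, and the "step" comments are my paraphrase of the three easy steps):

```java
import static org.junit.Assert.assertThat;
import org.hamcrest.Description;
import org.hamcrest.Matcher;
import org.hamcrest.TypeSafeMatcher;
import org.junit.Test;

public class CustomMatcherExample {

    // Step 1: extend a matcher base class (TypeSafeMatcher here).
    static class IsDivisibleBy extends TypeSafeMatcher<Integer> {
        private final int divisor;

        IsDivisibleBy(int divisor) {
            this.divisor = divisor;
        }

        // Step 2: implement the match itself plus a description; the
        // description is what makes the failure message readable.
        @Override
        public boolean matchesSafely(Integer number) {
            return number % divisor == 0;
        }

        public void describeTo(Description description) {
            description.appendText("a number divisible by " + divisor);
        }
    }

    // Step 3: expose a readable factory method for use with assertThat.
    static Matcher<Integer> divisibleBy(int divisor) {
        return new IsDivisibleBy(divisor);
    }

    @Test
    public void shouldReportDivisibilityReadably() {
        // If this assertion failed, the message would read roughly:
        // "Expected: a number divisible by 5 ... got: <the actual value>"
        assertThat(10, divisibleBy(5));
    }
}
```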

Only a few people in the audience indicated that they use parameterized tests. Smart talked about how parameterized tests are useful for data-driven tests and demonstrated the parameterized test support in JUnit 4.8.1. JUnit creates as many instances of the test class as there are rows in the associated table of test data, producing a set of results that can be analyzed. Smart also talked about using Apache POI to read in data from an Excel spreadsheet to use with parameterized testing. Smart referred the audience members to his blog post Data-driven Tests with JUnit and Excel (JavaLobby version) for further details.
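
For reference, a basic JUnit 4.8.x parameterized test looks roughly like this (my own sketch with hard-coded data; Smart's Excel-driven version would supply the rows from a spreadsheet via Apache POI instead):

```java
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// JUnit instantiates this class once per row returned by the @Parameters method.
@RunWith(Parameterized.class)
public class AdditionTest {

    private final int a;
    private final int b;
    private final int expectedSum;

    public AdditionTest(int a, int b, int expectedSum) {
        this.a = a;
        this.b = b;
        this.expectedSum = expectedSum;
    }

    // Each Object[] is one row of test data.
    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
            { 1, 1, 2 },
            { 2, 3, 5 },
            { -1, 1, 0 }
        });
    }

    @Test
    public void shouldAddTwoNumbers() {
        assertEquals(expectedSum, a + b);
    }
}
```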

Smart demonstrated using parameterized tests in web application testing using Selenium 2. The purpose of this demonstration was to show that parameterized tests are not limited solely to numeric calculations.
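
A rough sketch of what such a test could look like (my own illustration, not Smart's demonstration; the URLs, expected titles, and choice of HtmlUnitDriver are placeholder assumptions):

```java
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.htmlunit.HtmlUnitDriver;

// The parameters here are strings (URLs and expected page titles),
// not numbers, fed through the same Selenium 2 (WebDriver) test.
@RunWith(Parameterized.class)
public class PageTitleTest {

    private final String url;
    private final String expectedTitle;

    public PageTitleTest(String url, String expectedTitle) {
        this.url = url;
        this.expectedTitle = expectedTitle;
    }

    @Parameters
    public static Collection<Object[]> pages() {
        return Arrays.asList(new Object[][] {
            { "http://example.com/", "Example Domain" },
            { "http://example.org/", "Example Domain" }
        });
    }

    @Test
    public void shouldDisplayTheExpectedPageTitle() {
        WebDriver driver = new HtmlUnitDriver();
        try {
            driver.get(url);
            assertEquals(expectedTitle, driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}
```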

Smart next covered JUnit Rules. He specifically discussed TemporaryFolder Rule, ErrorCollector Rule, Timeout Rule, Verifier Rule, and Watchman Rule. The post JUnit 4.7 Per-Test Rules also provides useful coverage of these rules.
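
As an example of the rule I found most immediately useful, the TemporaryFolder rule can be used roughly as follows (my own sketch):

```java
import static org.junit.Assert.assertTrue;

import java.io.File;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class TemporaryFolderRuleTest {

    // The rule creates a fresh folder before each test and deletes it
    // (and its contents) afterwards, so no manual cleanup code is needed.
    @Rule
    public TemporaryFolder tempFolder = new TemporaryFolder();

    @Test
    public void shouldCreateFilesInAFolderThatIsCleanedUpAutomatically() throws Exception {
        File file = tempFolder.newFile("output.txt");
        assertTrue(file.exists());
    }
}
```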

Smart believes that recently added JUnit Categories will be production-ready once adequate tooling is available. You currently have to run JUnit Categories using JUnit test suites (the other work-around involves "mucking around with the classpath"). Smart's Grouping Tests Using JUnit Categories talks about JUnit Categories in significantly more detail.
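
The test-suite work-around looks roughly like this (my own sketch; the IntegrationTests category, AccountTest class, and suite name are hypothetical):

```java
// --- IntegrationTests.java: a category is just a marker type ---
public interface IntegrationTests {
}

// --- AccountTest.java: tag individual tests (or whole classes) with a category ---
import org.junit.Test;
import org.junit.experimental.categories.Category;

public class AccountTest {

    @Category(IntegrationTests.class)
    @Test
    public void shouldPersistAccountsToTheDatabase() {
        // slow, integration-level check ...
    }

    @Test
    public void shouldRejectNegativeOpeningBalances() {
        // fast, unit-level check ...
    }
}

// --- IntegrationTestSuite.java: the suite-based work-around for running one category ---
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.IncludeCategory;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

@RunWith(Categories.class)
@IncludeCategory(IntegrationTests.class)
@SuiteClasses({ AccountTest.class })
public class IntegrationTestSuite {
}
```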

Running tests in parallel can speed up test execution, especially when multiple CPUs are available (common today). Smart showed a slide that indicated how to set up parallel tests in JUnit with Maven. This requires JUnit 4.8.1 and Surefire 2.5 (Maven).
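
As best I can reconstruct it, the Maven setup is along the lines of the following pom.xml fragment (my assumption of the relevant Surefire 2.5 configuration, not a copy of Smart's slide):

```xml
<!-- pom.xml fragment: run JUnit test methods in parallel via Surefire 2.5 -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.5</version>
  <configuration>
    <!-- "classes" and "both" are also accepted values -->
    <parallel>methods</parallel>
    <threadCount>4</threadCount>
  </configuration>
</plugin>
```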

Smart recommended that those not using a mocking framework should start using a mocking framework to make unit testing easier. He suggested that those using a mocking framework other than Mockito might look at Mockito for making their testing even easier. He stated that Mockito's mocking functionality is achieved with very little code or formality. The JUnit page on Mockito has this to say about Mockito:
Java mocking is dominated by expect-run-verify libraries like EasyMock or jMock. Mockito offers simpler and more intuitive approach: you ask questions about interactions after execution. Using mockito, you can verify what you want. Using expect-run-verify libraries you often look after irrelevant interactions.
Mockito has similar syntax to EasyMock, therefore you can refactor safely. Mockito doesn't understand the notion of 'expectation'. There is only stubbing or verifications.
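
A small sketch of the stub-then-verify style described in that quote (my own example, not one from the session):

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.List;

import org.junit.Test;

public class MockitoStyleTest {

    @Test
    @SuppressWarnings("unchecked")
    public void shouldStubAndThenVerifyOnlyTheInteractionsThatMatter() {
        // Create the mock with no expectation-recording phase.
        List<String> mockedList = mock(List.class);

        // Stub only what the test actually needs.
        when(mockedList.get(0)).thenReturn("first");

        // Exercise the behavior (inlined here for brevity).
        String value = mockedList.get(0);
        mockedList.clear();

        assertEquals("first", value);

        // Ask questions about interactions after execution; other
        // interactions are simply ignored.
        verify(mockedList).clear();
    }
}
```
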
Like Neal Ford, Smart mentioned Infinitest. He said it used to be open source, but is now a combination of commercial/open source. The beauty of this product is that "whenever you save your file changes [your production source code], [the applicable] unit tests will be rerun."

Smart stated something I've often believed is a common weakness in unit testing. He referred to this as a "unit test trap": our tests often test, and pass against, what we coded (but not necessarily the behavior we wanted). Because the coder knows what he or she coded, it is not surprising that his or her tests validate that what he or she wrote behaves as he or she intended.

Regarding code coverage tools, Smart stated that these are useful but should not be solely relied upon. He pointed out that these tools show what is covered, but what we really care about is what is not covered by tests. My interpretation of his position here is that code coverage tools are useful for making sure that a high level of test coverage is being achieved, but further analysis needs to start from there. Developers cannot afford to become overconfident simply because they have a high level of test coverage.

Smart stated in his presentation and then reaffirmed in the questions and answers portion that private methods should not be unit tested. His position is that if a developer is uncomfortable enough with a private method to want to unit test it individually, that developer should consider refactoring it into its own testable class. For me, this fits with his overall philosophy of testing behaviors rather than testing "how." See the StackOverflow thread What's the best way of unit testing private methods? for an interesting discussion that basically summarizes the common arguments for and against testing private methods directly.
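
A sketch of the kind of refactoring I understood him to mean (my own hypothetical example; the class names and tax rate are invented):

```java
// Before (InvoiceService.java): tax logic hidden in a private method,
// reachable only indirectly through totalWithTax().
public class InvoiceService {

    public double totalWithTax(double subtotal) {
        return subtotal + calculateTax(subtotal);
    }

    private double calculateTax(double subtotal) {
        return subtotal * 0.2;   // hypothetical flat rate
    }
}

// After (TaxCalculator.java, a separate file): the logic worth testing
// directly gets its own small class with a public API, which can be unit
// tested on its own and used by InvoiceService.
public class TaxCalculator {

    public double taxOn(double subtotal) {
        return subtotal * 0.2;   // hypothetical flat rate
    }
}
```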

Smart had significant substance to cover and ran out of time (Smart quipped that we had "approximately minus 20 seconds for questions"). This is my kind of presentation! In many ways, it was like trying to drink from a fire hose, but I loved it! There are numerous ideas and frameworks he mentioned that I plan to go spend quality time investigating further. I'm especially interested in the things that both he and Neal Ford talked about.

DISCLAIMER: As with all my reviews of JavaOne 2010 sessions, this post is clearly my interpretation of what I thought was said (or what I thought I heard). Any errors or misstatements are likely mine and not the speaker's and I recommend assuming that until proven otherwise. If anyone is aware of a misquote or misstatement in this or any of my JavaOne 2010 sessions reviews, please let me know so that I can fix it.

1 comment:

Tomek said...

Thank you for this very interesting review of John Smart's session!

--
Cheers,
Tomek Kaczanowski