The Perils of Testing Code

Testing code is good, right? No! Testing code is bad. Testing functionality is good. How many of us have written a bunch of code and then turned around and written a bunch of tests for that code? Be honest! Everyone has done it, but why is it a bad thing? The problem is that far too often you end up testing what the code DOES, not what it SHOULD DO. This has always been the trap of testing your own code: you assume your code is correct, so you write tests that validate the code as written. Unfortunately, all this does is keep you honest if you later change the code and break the test. You have not proven that the code does what it's supposed to do.
Here is a quick example in RoR:

class User < ActiveRecord::Base
  def unsubscribe!
    forget_me # this will save
  end
end

class UserTest < ActiveSupport::TestCase
  test "should unsubscribe" do
    user = users(:dave)
    assert user.unsubscribe!
    assert user.unsubscribed? # Assume I have a test for this as well
    assert_nil users(:dave).remember_token
  end
end

Great, we have a test that verifies that the user is unsubscribed. The problem is that when a user unsubscribes, all of their email preferences SHOULD be turned off as well. The code is wrong, but the test will not catch that, because it only verifies the code as written.
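To make that concrete, here is a stripped-down, plain-Ruby sketch (no Rails; the `User` class, its `email_preferences` attribute, and the starting values are all invented for illustration) of what a behavior-first `unsubscribe!` should cover:

```ruby
# Plain-Ruby sketch of the intended behavior; not the real Rails model.
class User
  attr_reader :remember_token, :email_preferences

  def initialize
    @remember_token = "abc123"
    @email_preferences = { newsletter: true, digest: true }
    @unsubscribed = false
  end

  def unsubscribed?
    @unsubscribed
  end

  def unsubscribe!
    @remember_token = nil # forget_me
    @unsubscribed = true
    # The behavior the original test missed: unsubscribing must also
    # turn off every email preference, not just clear the token.
    @email_preferences.transform_values! { false }
    true
  end
end
```

A test written against the intended behavior would assert `user.email_preferences.values.none?` after unsubscribing, and it would have failed against the original code.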

This is where Test Driven Development (TDD) comes in. With TDD, you write the tests for the functionality you want to happen. Once you've described all the desired functionality as tests, you write the code necessary to make all the tests pass. Now you're testing what the code should do and not what it does. I'm on board with this, but there are a few problems in the real world.

The biggest problem that I have with TDD is that it assumes you know what the code should do in the first place. Unfortunately, when you innovate you don’t always know what you will end up with. You’re experimenting, trying new things, seeing what sticks. This is normal for many projects. Still, you need to have tests to verify things once you do settle on a direction.

How does this fit into TDD? It doesn’t fit directly because you aren’t writing tests first. What I do in this situation is the following:

  1. Once I settle on a direction and set of functionality, I attempt to write the tests for what I have without looking at the implementation code. Write the tests to verify your intended results rather than looking at the code to see what it does. I’m trying to avoid the perils of testing the code.
  2. Now that I have a direction, I can go back to writing the test followed by making the test pass.
  3. If a problem is discovered anywhere, I write a test to reproduce the issue before fixing it in the code.
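Step 3 can be sketched as follows. Assume a hypothetical bug report (the `slugify` function and the report itself are invented for illustration): `slugify("Hello  World")` came back as `"hello--world"`. First write the test that reproduces the issue, then fix the code until it passes:

```ruby
require "minitest/autorun"

# The fix: collapse runs of whitespace into a single hyphen,
# instead of replacing each whitespace character one-for-one.
def slugify(title)
  title.downcase.strip.gsub(/\s+/, "-")
end

# The regression test written BEFORE the fix, so the bug can
# never silently return.
class SlugifyTest < Minitest::Test
  def test_collapses_consecutive_spaces
    assert_equal "hello-world", slugify("Hello  World")
  end
end
```

The test fails against the buggy code, documents the expected behavior, and stays in the suite as a regression guard.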

The other issue I have with TDD is this: how do I write tests to verify what code should do if I don't know how it is supposed to do it? When writing the tests, I have to leave many method calls blank until I figure out how things are done and what I should check. Often, I go back to the steps above until I have a set of standard methods to use, or better yet, I let the tests determine what the methods should be. Either way, the scheme above still works for me.
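Letting the tests determine the methods looks like this in practice. Here is a hypothetical `Subscription` class (invented for illustration): the names `pause!` and `paused?` were dictated by writing the test first, and the bodies were filled in afterwards to make that test pass:

```ruby
# The test was written first, calling pause! and paused? before
# either existed; the implementation below catches up to it.
class Subscription
  def initialize
    @status = :active
  end

  def pause!
    @status = :paused
  end

  def paused?
    @status == :paused
  end
end

# The test that named those methods:
subscription = Subscription.new
subscription.pause!
```

Writing the call sites first tends to produce the API you actually want to use, rather than the one the implementation happened to grow.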

This hybrid approach works for me, but YMMV. The bottom line, as always, is that the code isn’t done until the tests are done as well.

Now, go write some tests!
