Pretty early in my career as a software developer, I realized the traditional approach to unit testing wasn't working for me. The back and forth of writing just enough to fail, then fixing, then writing just enough to fail again, then fixing, is too disjointed for my brain to really enjoy. Try as I might, breaking tasks down to that extreme left me frustrated, unmotivated, and generally sad. I couldn't do it.
Luckily for me, I'm an incredibly stubborn person and had no intention of quitting. Instead of giving up on testing, I experimented with the approach until I found something that made me really love writing unit tests. I love it so much that I actually jump at the chance to refactor existing test suites and fill in testing gaps! Now I don't get mad at tests — I get even. (Insert awesome guitar riff here.)
Step 1: Fine, I guess I'll write some happy tests first
Alright, let's go with a simple example of some function that takes in a word and returns the number of letters (in JavaScript because I'm lazy) and a little happy test for it using Jest.
let howManyLetters = word => word.length
it('returns the right number of letters', () => {
expect(howManyLetters('it')).toBe(2) // Pass
})
Awesome! 100% test coverage right there, friends! Time to pack up!
Step 2: Blow it up
Honestly, for something this simple that's probably where a lot of people would stop, right? Not me, though! I don't trust my code or anyone else's, so I put on my attack face and start poking.
let howManyLetters = word => word.length
it('returns the right number of letters', () => {
expect(howManyLetters('it')).toBe(2) // Pass
})
it('explodes as expected', () => {
expect(howManyLetters(undefined)).toBe(0) // Fail
})
it('works, but that is not a word', () => {
expect(howManyLetters(['it', 'works'])).toBe(2) // Pass
})
it('works, but was I allowing multiple words?', () => {
expect(howManyLetters('it works')).toBe(8) // Pass
})
This example is hilariously oversimplified, but I hope it demonstrates the goal. My approach isn't to test whether or not the function works, but all the ways it could go wrong. The larger and more complex the function, the more likely it is that something weird could happen. And I want to know what that weird thing is and account for it long before the code is in production.
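If single words really are the intent, one way to close those gaps is to harden the function itself. This is a sketch of my own guess at a fix — the decision to reject non-strings and count only the first word is an assumption for illustration, not something the original function promises:

```javascript
// Hardened sketch: only accept strings, and only count the first word.
const howManyLetters = (word) => {
  if (typeof word !== 'string') return 0;   // handles undefined, arrays, numbers...
  return word.split(' ')[0].length;         // 'it works' counts only 'it'
};

console.log(howManyLetters('it'));            // → 2
console.log(howManyLetters(undefined));       // → 0
console.log(howManyLetters(['it', 'works'])); // → 0
console.log(howManyLetters('it works'));      // → 2
```

Now the "explodes as expected" and "that is not a word" cases come back with a boring 0 instead of a crash or a silent wrong answer.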
What I said earlier about not trusting anyone's code is important too — I'll make somewhat heavy use of test doubles in my unit tests in order to force something weird to happen. Even if it's really unlikely for the other function I'm calling to fail, I'll use a double to throw something weird just to see how my code handles it. Does it throw an unhelpful exception? Can I see what failed and where? Do I get a nice log telling me what went wrong?
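Here's a minimal sketch of that idea using a hand-rolled double instead of a Jest mock. The `countLettersFromApi` function and its `fetchWord` collaborator are names I've invented for illustration:

```javascript
// A caller that depends on some collaborator it can't fully trust.
const countLettersFromApi = (fetchWord) => {
  try {
    return fetchWord().length;
  } catch (err) {
    // Log enough context to know what failed and where.
    console.error(`fetchWord failed: ${err.message}`);
    return 0;
  }
};

// A double that fails on purpose, just to see how the caller copes.
const explodingFetch = () => { throw new Error('network is on fire'); };

console.log(countLettersFromApi(() => 'hello')); // → 5
console.log(countLettersFromApi(explodingFetch)); // → 0, plus a useful error log
```

In a real Jest suite the double would typically be a `jest.fn()` with a throwing implementation, but the question it answers is the same: when the collaborator blows up, does my code tell me something useful?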
Usually what I see going into existing test suites is a focus on exercising the logic and confirming all possible happy paths are covered. Because of this, I've seen code bases with 90% coverage that are wildly painful to debug when something does go wrong. Coverage can be a good indicator of what's missing, but if you aren't staring your code down like a toddler you're trying to dress when running late, you're going to miss things. (Alternate metaphors for other life paths: like a puppy you're trying to make drop the suspicious thing it found in the woods; like a bear who figured out you have meat in your cooler; like a quarterback one touchdown away from winning in overtime.)
I've found this approach to testing scratches a "what would happen if ..." itch for me that lets me go wild on unit tests in interesting ways. With this approach, I've also found and resolved a number of surprising bugs that otherwise would have been missed (until a user stumbled on them).
Now let's go up a level.
Step 3: About those integration tests
Time to grab the function or route that calls all the other functions and routes and have a party! My general rule for integration tests is to focus on the absolute most critical paths and then just blast the failure handling. (After all, your unit tests should cover all the weird internal logic, right?)
I find this to be extremely important with REST APIs, especially as they are most likely to be seen by external users. I generally try to follow what one of my mentors called the 5 o'clock Friday Rule: no matter what goes wrong, you want enough information to fix it quickly enough to go home at 5 pm every Friday.
Keeping this in mind, I always care more about all the ways things can go wrong than any of the ways things can go right. I hate overtime. With a passion. It's the worst. My entire testing strategy is to avoid overtime.
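As a sketch of that "critical path plus failure blast" split, here's a hypothetical `getWordStats` handler (all names invented for illustration, with the database passed in so a test can swap in doubles): one check for the path that matters most, then everything that can go wrong.

```javascript
// A route-handler-ish function that pulls its data from an injected db.
const getWordStats = (db) => {
  try {
    const word = db.loadWord();
    if (typeof word !== 'string') {
      return { status: 400, body: 'expected a word' };
    }
    return { status: 200, body: { letters: word.length } };
  } catch (err) {
    // The 5 o'clock Friday Rule: say what broke.
    return { status: 500, body: `lookup failed: ${err.message}` };
  }
};

// The one critical happy path...
console.log(getWordStats({ loadWord: () => 'it' }).status); // → 200
// ...then blast the failure handling.
console.log(getWordStats({ loadWord: () => undefined }).status); // → 400
console.log(getWordStats({ loadWord: () => { throw new Error('db down'); } }).status); // → 500
```

Every failure comes back as a status and a message I can act on, instead of a stack trace at 4:55 pm.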
Nice try evil code!
Actually, I think that's the core of this whole philosophy. The code hates me and wants me to work overtime, so I'll find all the ways it can ambush me and stop it in its tracks.
Taking an adversarial approach to testing is how I got past my general dislike of repetitive tasks. Approaching the code like an opponent in a game I need to outmanoeuvre lets me see all the ways it can go terribly wrong. Plus, breaking things is fun! Just ask that toddler!