Wait...Is He Kidding?
A while back I was working with a large client to plan their Agile training curriculum for teams across the country. They wanted to have a few courses on Agile development practices ready, and I recommended my Essential TDD course as a starting point. The head of the program said, "Let's not train them on anything that advanced. Let's start with a unit-testing course."
I answered, "I find it easier to train and coach people on good unit-testing practices through the practice of Test-Driven Development. TDD makes unit-testing easier: Easier to do right, and easier to do in time to make a difference."
Unit-testing, as a stand-alone practice, is frustrating and dull. The fun work of designing the solution and writing the code is over, and now we must prove that we did it right. Usually, this means we have to re-read our own code, remember what it's supposed to do, and retrofit unit-tests onto it. Tests written after the fact are often difficult to write, and they often deliver bad news that would have been more helpful sooner. Plus, our fallible human brains tend to reach for justifications for why a failing test is wrong, rather than treating it as an indicator of a mistake (i.e., Confirmation Bias).
Many developers find Test-Driven Development to be easier, more enjoyable, more effective, and more logical. For me, it was like discovering that I'd been walking on my hands since graduating from college: Yeah, after 10 years I was pretty good at painstakingly writing simple, quality code without unit-tests, but once I got used to the "weirdness" of TDD, I saw how my old ways had limited my creativity, productivity, and understanding. I was back on my feet.
Just Like Science, Only Totally Different
Software developers are natural problem-solvers. This can be to their detriment. When we write software "on our hands," we mentally note the problem, then select the most interesting solution of the half-dozen that pop instantly into our heads. "That smells like a Composite Pattern. Yeah! I'll implement a Composite here." Yep, I used to design up-front. Only it turned out not to be a Composite, and I wrote too much prematurely generalized code, giving my product capabilities that no one asked for, yet someone paid for.
I also had extra code to maintain. Every time I added new behavior to "my" code, I had to carefully ponder the effects of my changes on the myriad intricacies of my existing beautiful design. The more the software grew, the more the complexity problems grew, and the greater the chance that I would introduce a defect. Walking on one's hands is a cool trick, sure. It's just not the most professional strategy.
Fortunately, in 1998, XP Coach extraordinaire Fred George came to visit our team, and turned my world upside-down.
In college (1984) I started out as a physics major, and thrived in all environments where scientific experimentation and discovery were the norm. Creating a hypothesis and designing experiments is somewhat akin to writing a unit-test first: State the problem first, then use that to prove the solution. Then refine the problem/solution pair through Merciless Refactoring. When I realized this, I was hooked. TDD feels more like scientific investigation, and less like hacking.
Science isn't the perfect metaphor: After all, you can't make a false hypothesis true by tweaking the Universe. But you can make a test pass by making small changes to the code (Bob Martin calls them Transformations).
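To make that concrete, here is a minimal sketch (my own illustration in Java with JUnit, using a hypothetical Price class, not anything from Bob Martin's writing) of how small transformations carry a test from red to green: the first test is satisfied by returning a constant, and a second test forces that constant to become a computation.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceTest {
    @Test
    public void totalForTwoItems() {
        assertEquals(10, new Price(5).totalFor(2));
    }

    @Test
    public void totalForThreeItems() {
        // This second test is what forces the transformation below.
        assertEquals(15, new Price(5).totalFor(3));
    }
}

class Price {
    private final int unitPrice;
    Price(int unitPrice) { this.unitPrice = unitPrice; }

    int totalFor(int quantity) {
        // First transformation: "return 10;" made the first test pass.
        // The second test forced constant -> computation:
        return unitPrice * quantity;
    }
}
```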
And It Works!
The results of the case studies indicate that the pre-release defect density of the four products decreased between 40% and 90% relative to similar projects that did not use the TDD practice. Subjectively, the teams experienced a 15–35% increase in initial development time after adopting TDD.
-- Nagappan et al., © Springer Science + Business Media, LLC 2008
I used to deliver TDD training at Microsoft for Net Objectives, and I recall two attendees from different teams who, while working on the course labs together, discovered through conversation that they were assigned to very similar products. Microsoft is one of the few organizations large enough to run a good study of similar-sized projects, with teams who have the freedom to choose aspects of their methodology. Up until this study, most of our evidence of TDD's efficacy was either academic or anecdotal.
A few things I'd like to point out from the IBM/Microsoft study quoted above:
First, a 40–90% decrease in defect density? Um...wow! (Need I say more?)
Second, note that there is a cost associated with adopting the TDD discipline. This should not come as a surprise: TANSTAAFL, y'know. Worst case, you take a 35% hit in development time and reduce defect density by 40%. Remember what defects mean to us: time spent debugging, reworking, and re-testing. Defects, a form of technical debt, are expensive! I'll take the subjective 35% initial hit to avoid incurring that debt.
Third, I'd like to focus on that one word: initial. Any discipline requires time in order to develop a level of comfort, and to see early benefits. Disciplines are, in fact, painful to adopt, and TDD is no exception. "...Twice as much code...!" developers cry. "No!" I tell them, "Closer to three times the code!" Also, on any program (green-field or existing), you will wonder, "Where do I start?" It takes time before the ubiquitous, domain-specific language starts to solidify and writing the next test becomes effortless.
Fortunately, it really doesn't take that long to see the benefits. One developer on a team that I trained this summer reported back to me after about a month: "I thought it was stupid. I'd write the simplest, stupidest little test--yes, first!--even though I knew the implementation was code I could write in my sleep. I had to focus on not falling back on old habits, even though I would occasionally curse your name. Then, barely two weeks later, one of those stupid-simple tests was failing, in an area of the code that (I thought) I wasn't even touching! That one stupid test saved my bacon. It was a simple mistake, and if I had checked it in, I could have messed up some very important customers..." His company's customers are legal firms. Like the conference T-shirt says, "TDD Saves!"
I've personally experienced a number of events where the safety-net of comprehensive unit-tests, plus the flexibility and maintainability provided through Merciless Refactoring, resulted in astounding payoffs. In no fewer than three cases (on two disparate programs), a simple "user story" contained what most teams would call "major architectural changes." In each case, we were done in less than a week. I had never experienced such extremely short cycle-times until I worked with teams who embraced TDD. (I will share these first-person stories in a future blog post.)
Of course, these benefits could perhaps be obtained by simply having a comprehensive suite of unit-tests, so how is TDD better than unit-testing?
Nutrients-First
When I was a kid, I hated eating my vegetables. My Mom would insist that I couldn't leave the table, or have dessert (if there was any) until I had consumed all those overcooked nuggets of yuck. I finally learned a trick (no, not feeding them to the dog when Mom wasn't looking - that was my siblings' trick! ;-). I discovered two very important things about vegetables: They taste better when they're hot, and they taste a lot better when you're hungry. The trick: Eat your veggies first!
TDD is like that: Write the test first. It's healthier.
We (1) describe the expected interfaces and outcomes, (2) confirm that we've asked for something new and unique, then (3) make the code meet the challenge. The next step, not to be skipped: (4) we clean up the design through refactoring, in preparation for the next tiny step towards the delivery of value.
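As a minimal sketch of that cycle (my own illustration in Java with JUnit, using a hypothetical ShoppingCart class), the four steps look like this:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ShoppingCartTest {
    // Step 1: describe the expected interface and outcome before the code exists.
    @Test
    public void emptyCartTotalsToZero() {
        assertEquals(0, new ShoppingCart().total());
    }
    // Step 2: run it and watch it fail (here it won't even compile yet),
    // confirming we've asked for something new and unique.
}

// Step 3: write just enough code to make the test pass.
class ShoppingCart {
    int total() {
        return 0;
    }
}
// Step 4: clean up the design through refactoring while the green test
// stands guard, then take the next tiny step toward delivering value.
```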
Once developers grasp why we write a single unit-test and watch it fail before writing the code that makes it pass, they eventually cannot resist this mode of thinking. When you write code Test-First, you cannot write untestable code. When you write Test-First, you cannot miss a test; you cannot forget to cover a behavior with tests. Once the test passes, you're likely done with it for the rest of time, but it remains behind as a tiny bit of "executable specification," and as a tiny investment in the future.
The alternate route, adding unit-tests to already-written code, often invokes our "Confirmation Bias": I will have a nearly subconscious tendency to trust the code more than the tests, and I may tweak a test to match what the code is already doing.
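Here's a hypothetical illustration (mine, not from any real product) of that bias at work: the discount rule below has an off-by-one bug, and a test written after the code "confirms" the buggy behavior instead of the requirement.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// A hypothetical example, not code from any real product.
class Discount {
    // Intended rule: 10% off orders of $100 or more.
    // The bug: ">" should be ">=".
    static int apply(int amount) {
        return amount > 100 ? amount - amount / 10 : amount;
    }
}

public class DiscountTest {
    @Test
    public void discountAtTheBoundary() {
        // Written *after* the code: we ran apply(100), saw 100, and
        // "confirmed" it -- enshrining the bug as the specification.
        assertEquals(100, Discount.apply(100));
        // A test written first, from the requirement, would have
        // demanded 90 here and failed against this code.
    }
}
```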
Backups are Free. Restores? Now Those Will Cost You!
I recall just such a confirmation-bias disaster. In the mid-'90s, our Enterprise Backup software's UNIX ports contained a chunk of code based on "tar," the UNIX tape-archival command. On restore, tar would mask out the super-user (setuid) execution bit. I recall seeing this code (I hadn't written it...the architect had lifted it from tar) and assuming that a smart, security-minded thing for tar to do was likely a smart thing for our product, too. Except that our product ran as root anyway, and was expected to restore a whole system to a bootable state.
Oops! The architect, the executive developer, and the UNIX developer (me) had all unknowingly conspired to ruin a customer's day ("further ruin," since the customer was trying to restore the root volume on a large server...). All for want of a clear test scenario prior to the development of this code. All the test scripts we had written around this functionality assumed that the code (and the architect) was right.
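For illustration only, here is roughly the test scenario we never wrote, sketched in Java with JUnit. The restore routine below is a hypothetical stand-in that reproduces the tar-derived behavior, not our product's actual code (and since Java's PosixFilePermission can't express the setuid bit itself, the owner-execute bit serves as a proxy). The test fails, exactly as it should have back then.

```java
import static org.junit.Assert.assertTrue;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.EnumSet;
import java.util.Set;
import org.junit.Test;

public class RestorePermissionsTest {

    // Hypothetical stand-in for the tar-derived restore routine,
    // reproducing its behavior for illustration.
    static Path restore(Path src, Path destDir) throws IOException {
        Path dest = destDir.resolve(src.getFileName());
        Files.copy(src, dest);
        Set<PosixFilePermission> perms =
                EnumSet.copyOf(Files.getPosixFilePermissions(src));
        perms.remove(PosixFilePermission.OWNER_EXECUTE); // the "security" step lifted from tar
        Files.setPosixFilePermissions(dest, perms);
        return dest;
    }

    @Test
    public void restoredFilesKeepTheirExecuteBits() throws IOException {
        Path original = Files.createTempFile("prog", "");
        Files.setPosixFilePermissions(original,
                PosixFilePermissions.fromString("rwxr-xr-x"));

        Path restored = restore(original, Files.createTempDirectory("restored"));

        // The scenario we never wrote: a restored system program must
        // still be executable, or the volume won't boot.
        assertTrue(Files.getPosixFilePermissions(restored)
                .contains(PosixFilePermission.OWNER_EXECUTE));
    }
}
```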
How human of us, to assert only that we couldn't have possibly made a mistake. That would be like designing a flawed experiment to prove a pet theory. You're hurting yourself. You're jeopardizing your career. You're walking hot pavement on your hands. Stop that!
Comments
We might have a different perspective due to our different roles: I'm not a consultant/trainer, but a salaried developer who does mentoring too. So while you are called in with management backing (they are paying for your training), I'm just doing this as part of a regular work day.
When introducing TDD to colleagues who have not done any automated testing before, I explain the rationale behind why writing tests first is better, but initially I focus on helping them make the jump from manual click-click verification to the concept of an automated test (the usual stepping stone: the throwaway main() methods they often write; see the sketch at the end of this comment). That takes some time to become comfortable with.
Once past that, we create some tests for the legacy code they are currently working on, so they get a feel for how the untestable dragon can be tamed into a test harness. This is a useful exercise anyway, since when you make a change to legacy code, you likely need to start by creating some regression coverage. From this point on, expressing new features as tests comes almost naturally.
So I do start by teaching plain old unit testing instead of TDD in order to break down the mental shift into more easily manageable baby steps.
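A minimal sketch of that stepping stone (with a hypothetical Parser class standing in for whatever the throwaway main() was poking at):

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical code under test.
class Parser {
    static int parse(String s) {
        return Integer.parseInt(s.trim());
    }
}

// Before: manual verification -- run it and eyeball the console.
class Scratch {
    public static void main(String[] args) {
        System.out.println(Parser.parse("42"));
    }
}

// After: the same check promoted to an automated test that asserts
// instead of printing, and runs on every build.
public class ParserTest {
    @Test
    public void parsesASimpleInteger() {
        assertEquals(42, Parser.parse("42"));
    }
}
```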
I assume that when you say "unit testing" you mean "writing unit tests after writing code". I have had good results teaching people to do test-first programming (writing tests before writing code, with almost no emphasis on evolutionary design) to get defect rates under control, then introducing test-driven development when they start saying, "Now that we're no longer drowning in defects, I've noticed that I have a hard time writing some of these tests. Maybe it's a design problem?"
You both bring up an interesting topic: the coaching-vs.-training question. I've worked with teams who adopted the TDD practices better through training, and others who adopted them better through coaching. It depends on the make-up and experience of the team, and also on the dedication and follow-through of the team's management.
Peter: You are using excellent techniques for introducing TDD gradually; particularly necessary when you cannot sequester the team for three days of intensive training.
Also, I used to teach TDD courses without a "Legacy" module, and found that *everyone* has legacy code. When coaching a team with legacy code, you have to start there. In the course, I had to start bringing in my own small dragon (who has developed quite a reputation as a nasty little creature!).
JBRains: Yeah, I'm trying to be provocative. Does it suit me? ;-) I can see how introducing test-first separately makes a lot of sense: They have to *have* a safety-net before they can *assume* a safety-net. In the courses, I've taken the opposite path, but for the same reason: The refactoring lab is done first. I want developers hooked on the safety-net of thorough unit-testing as early as possible.
Nice article, Rob. I share similar experiences.
Fred George is awesome, isn't he? He spoke at Agile Atlanta (XP Atlanta at the time) in the early 2000s. That experience got us to write smaller methods and smaller classes.