Overview
Automated testing is a cornerstone of agile development. An effective testing strategy will deliver new functionality more aggressively, accelerate user feedback, and improve quality. However, for many developers, creating effective automated tests is a unique and unfamiliar challenge.
xUnit Test Patterns is the definitive guide to writing automated tests using xUnit, the most popular unit testing framework in use today. Agile coach and test automation expert Gerard Meszaros describes 68 proven patterns for making tests easier to write, understand, and maintain. He then shows you how to make them more robust and repeatable--and far more cost-effective.
Loaded with information, this book feels like three books in one. The first part is a detailed tutorial on test automation that covers everything from test strategy to in-depth test coding. The second part, a catalog of 18 frequently encountered "test smells," provides troubleshooting guidelines to help you determine the root cause of problems and the most applicable patterns. The third part contains detailed descriptions of each pattern, including refactoring instructions illustrated by extensive code samples in multiple programming languages.
Product Details
| ISBN-13: | 9780132797467 |
|---|---|
| Publisher: | Pearson Education |
| Publication date: | 05/21/2007 |
| Series: | Addison-Wesley Signature Series (Fowler) |
| Sold by: | Barnes & Noble |
| Format: | eBook |
| Pages: | 944 |
| File size: | 8 MB |
Read an Excerpt
The Value of Self-Testing Code
In Chapter 4 of Refactoring [Ref], Martin Fowler writes:
If you look at how most programmers spend their time, you'll find that writing code is actually a small fraction. Some time is spent figuring out what ought to be going on, some time is spent designing, but most time is spent debugging. I'm sure every reader can remember long hours of debugging, often long into the night. Every programmer can tell a story of a bug that took a whole day (or more) to find. Fixing the bug is usually pretty quick, but finding it is a nightmare. And then when you do fix a bug, there's always a chance that another one will appear and that you might not even notice it until much later. Then you spend ages finding that bug.
Some software is very difficult to test manually. In these cases, we are often forced into writing test programs.
I recall a project I was working on in 1996. My task was to build an event framework that would let client software register for an event and be notified when some other software raised that event (the Observer [GOF] pattern). I could not think of a way to test this framework without writing some sample client software. I had about 20 different scenarios I needed to test, so I coded up each scenario with the requisite number of observers, events, and event raisers. At first, I logged what was occurring in the console and scanned it manually. This scanning became very tedious very quickly.
Being quite lazy, I naturally looked for an easier way to perform this testing. For each test, I populated a Dictionary indexed by the expected event and its expected receiver, with the name of the receiver as the value. When a particular receiver was notified of an event, it looked in the Dictionary for the entry indexed by itself and the event it had just received. If the entry existed, the receiver removed it; if not, the receiver added an entry with an error message saying the notification was unexpected.
After running all the tests, the test program merely looked in the Dictionary and printed out its contents if it was not empty. As a result, running all of my tests had a nearly zero cost. The tests either passed quietly or spewed a list of test failures. I had unwittingly discovered the concept of a Mock Object (page 544) and a Test Automation Framework (page 298) out of necessity!
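The following is a minimal Java sketch of that bookkeeping, written under the assumptions described above; the class and method names (ExpectedNotifications, expect, notified, report) are hypothetical stand-ins, since the original 1996 code is not shown in the preface.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical reconstruction of the Dictionary-based verification described
// above; none of these names come from the original project.
public class ExpectedNotifications {
    private final Map<String, String> entries = new HashMap<>();

    // Before the test runs: record that receiver should be told about event.
    public void expect(String receiver, String event) {
        entries.put(key(receiver, event), receiver);
    }

    // Called by a receiver when it actually gets notified.
    public void notified(String receiver, String event) {
        String k = key(receiver, event);
        if (entries.remove(k) == null) {
            // No matching expectation: record the surprise.
            entries.put(k, "ERROR: unexpected event notification: " + k);
        }
    }

    // After all tests: silence means success; leftover entries are failures
    // (either expected-but-missed or unexpected notifications).
    public void report() {
        if (!entries.isEmpty()) {
            entries.forEach((k, v) -> System.out.println(k + " -> " + v));
        }
    }

    private String key(String receiver, String event) {
        return event + "/" + receiver;
    }
}
```

In effect, each receiver doubles as a self-verifying observer, which is why the preface describes this as stumbling onto the Mock Object idea by accident.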
My First XP Project
In late 1999, I attended the OOPSLA conference, where I picked up a copy of Kent Beck's new book, eXtreme Programming Explained [XPE]. I was used to doing iterative and incremental development and already believed in the value of automated unit testing, although I had not tried to apply it universally. I had a lot of respect for Kent, whom I had known since the first PLoP[1] conference in 1994. For all these reasons, I decided that it was worth trying to apply eXtreme Programming on a ClearStream Consulting project. Shortly after OOPSLA, I was fortunate to come across a suitable project for trying out this development approach, namely an add-on application that interacted with an existing database but had no user interface. The client was open to developing software in a different way.
We started doing eXtreme Programming "by the book" using pretty much all of the practices it recommended, including pair programming, collective ownership, and test-driven development. Of course, we encountered a few challenges in figuring out how to test some aspects of the behavior of the application, but we still managed to write tests for most of the code. Then, as the project progressed, I started to notice a disturbing trend: It was taking longer and longer to implement seemingly similar tasks.
I explained the problem to the developers and asked them to record on each task card how much time had been spent writing new tests, modifying existing tests, and writing the production code. Very quickly, a trend emerged. While the time spent writing new tests and writing the production code seemed to be staying more or less constant, the amount of time spent modifying existing tests was increasing and the developers' estimates were going up as a result. When a developer asked me to pair on a task and we spent 90% of the time modifying existing tests to accommodate a relatively minor change, I knew we had to change something, and soon!
When we analyzed the kinds of compile errors and test failures we were experiencing as we introduced the new functionality, we discovered that many of the tests were affected by changes to methods of the system under test (SUT). This came as no surprise, of course. What was surprising was that most of the impact was felt during the fixture setup part of the test and that the changes were not affecting the core logic of the tests.
This revelation was an important discovery because it showed us that we had the knowledge about how to create the objects of the SUT scattered across most of the tests. In other words, the tests knew too much about nonessential parts of the behavior of the SUT. I say "nonessential" because most of the affected tests did not care about how the objects in the fixture were created; they were interested in ensuring that those objects were in the correct state. Upon further examination, we found that many of the tests were creating identical or nearly identical objects in their test fixtures.
The obvious solution to this problem was to factor out this logic into a small set of Test Utility Methods (page 599). There were several variations:
- When we had a bunch of tests that needed identical objects, we simply created a method that returned that kind of object ready to use. We now call these Creation Methods (page 415).
- Some tests needed to specify different values for some attribute of the object. In these cases, we passed that attribute as a parameter to the Parameterized Creation Method (see Creation Method).
- Some tests wanted to create a malformed object to ensure that the SUT would reject it. Writing a separate Parameterized Creation Method for each attribute cluttered the signature of our Test Helper (page 643), so we created a valid object and then replaced the value of the One Bad Attribute (see Derived Value on page 718).
Later, when tests started failing because we were inserting another object with the same value in a key column that carried a unique constraint, we added code to generate the unique key programmatically. We called this variant an Anonymous Creation Method (see Creation Method) to indicate the presence of this added behavior. A sketch of these variations follows.
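To make the variations concrete, here is a minimal Java sketch of such Test Utility Methods. The Customer domain class and every method name here are invented for illustration; they are not code from the project described above.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal domain stand-in so the sketch compiles; invented for illustration.
class Customer {
    private final String key;
    private String name;
    private String city;

    Customer(String key, String name, String city) {
        this.key = key;
        this.name = name;
        this.city = city;
    }

    void setCity(String city) { this.city = city; }
}

// Hypothetical Test Utility Methods illustrating the variations above.
public class CustomerFixtureHelper {
    private static final AtomicInteger nextId = new AtomicInteger(1);

    // Creation Method: returns a valid, ready-to-use object.
    public static Customer createCustomer() {
        return new Customer(nextKey(), "Jane Doe", "Calgary");
    }

    // Parameterized Creation Method: the test supplies only the attribute
    // it actually cares about.
    public static Customer createCustomerNamed(String name) {
        return new Customer(nextKey(), name, "Calgary");
    }

    // One Bad Attribute: start from a valid object, then corrupt one field
    // so the SUT's validation logic can be exercised.
    public static Customer createCustomerWithInvalidCity() {
        Customer customer = createCustomer();
        customer.setCity("");   // the single invalid attribute
        return customer;
    }

    // Anonymous Creation Method: a programmatically generated key keeps
    // tests from colliding on the database's unique constraint.
    private static String nextKey() {
        return "CUST-" + nextId.getAndIncrement();
    }
}
```

The design point is that each test states only what it cares about; every incidental detail of object construction lives in one place, so a constructor change touches one helper instead of dozens of tests.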
Identifying the problem that we now call a Fragile Test (page 239) was an important event on this project, and the subsequent definition of its solution patterns saved this project from possible failure. Without this discovery we would, at best, have abandoned the automated unit tests that we had already built. At worst, the tests would have reduced our productivity so much that we would have been unable to deliver on our commitments to the client. As it turned out, we were able to deliver what we had promised and with very good quality. Yes, the testers[3] still found bugs in our code because we were definitely missing some tests. Introducing the changes needed to fix those bugs, once we had figured out what the missing tests needed to look like, was a relatively straightforward process, however.
We were hooked. Automated unit testing and test-driven development really did work, and we have been using them consistently ever since.
As we have applied these practices and patterns on subsequent projects, we have run into new problems and challenges. In each case, we have "peeled the onion" to find the root cause and come up with ways to address it. As these techniques have matured, we have added them to our repertoire of techniques for automated unit testing.
We first described some of these patterns in a paper presented at XP2001. In discussions with other participants at that and subsequent conferences, we discovered that many of our peers were using the same or similar techniques. That elevated our methods from "practice" to "pattern"[2] (a recurring solution to a recurring problem in a context). The first paper on test smells [RTC] was presented at the same conference, building on the concept of code smells first described in [Ref].
My Motivation
I am a great believer in the value of automated unit testing. I practiced software development without it for the better part of two decades, and I know that my professional life is much better with it than without it. I believe that the xUnit framework and the automated tests it enables are among the truly great advances in software development. I find it very frustrating when I see companies trying to adopt automated unit testing but being unsuccessful because of a lack of key information and skills.
As a software development consultant with ClearStream Consulting, I see a lot of projects. Sometimes I am called in early on a project to help clients make sure they "do things right." More often than not, however, I am called in when things are already off the rails. As a result, I see a lot of "worst practices" that result in test smells. If I am lucky and I am called early enough, I can help the client recover from the mistakes. If not, the client will likely muddle through, less than satisfied with how TDD and automated unit testing worked, and the word goes out that automated unit testing is a waste of time.
In hindsight, most of these mistakes are easily avoidable given the right knowledge at the right time. But how do you obtain that knowledge without making the mistakes for yourself? At the risk of sounding self-serving, hiring someone who has the knowledge is the most time-efficient way of learning any new practice or technology. According to Gerry Weinberg's "Law of Raspberry Jam" [SoC],[4] taking a course or reading a book is a much less effective (though less expensive) alternative. I hope that by writing down a lot of these mistakes and suggesting ways to avoid them, I can save you a lot of grief on your project, whether it is fully agile or just more agile than it has been in the past, the "Law of Raspberry Jam" notwithstanding.
Who This Book Is For
I have written this book primarily for software developers (programmers, designers, and architects) who want to write better tests and for the managers and coaches who need to understand what the developers are doing and why the developers need to be cut enough slack so they can learn to do it even better! The focus here is on developer tests and customer tests that are automated using xUnit. In addition, some of the higher-level patterns apply to tests that are automated using technologies other than xUnit. Rick Mugridge and Ward Cunningham have written an excellent book on Fit [FitB], and they advocate many of the same practices.
Developers will likely want to read the book from cover to cover, but they should focus on skimming the reference chapters rather than trying to read them word for word. The emphasis should be on getting an overall idea of which patterns exist and how they work. Developers can then return to a particular pattern when the need for it arises. The first few elements of each pattern (up to and including the "When to Use It" section) should provide this overview.
Managers and coaches might prefer to focus on reading Part I, The Narratives, and perhaps Part II, The Test Smells. They might also want to read Chapter 18, Test Strategy Patterns, as it describes decisions they need to understand so they can support the developers working through these patterns. At a minimum, managers should read Chapter 3, Goals of Test Automation.
[1] The Pattern Languages of Programs conference.
[2] Technically, they are not truly patterns until they have been discovered by three independent project teams.
[3] The testing function is sometimes referred to as "Quality Assurance." This usage is, strictly speaking, incorrect.
[4] The Law of Raspberry Jam: "The wider you spread it, the thinner it gets."
Table of Contents
Visual Summary of the Pattern Language xvii
Foreword xix
Preface xxi
Acknowledgments xxvi
Introduction xxix
Refactoring a Test xlv
PART I: The Narratives 1
Chapter 1 A Brief Tour 3
About This Chapter 3
The Simplest Test Automation Strategy That Could Possibly Work 3
Development Process 4
Customer Tests 5
Unit Tests 6
Design for Testability 7
Test Organization 7
What's Next? 8
Chapter 2 Test Smells 9
About This Chapter 9
An Introduction to Test Smells 9
What's a Test Smell? 10
Kinds of Test Smells 10
What to Do about Smells? 11
A Catalog of Smells 12
The Project Smells 12
The Behavior Smells 13
The Code Smells 16
What's Next? 17
Chapter 3 Goals of Test Automation 19
About This Chapter 19
Why Test? 19
Economics of Test Automation 20
Goals of Test Automation 21
Tests Should Help Us Improve Quality 22
Tests Should Help Us Understand the SUT 23
Tests Should Reduce (and Not Introduce) Risk 23
Tests Should Be Easy to Run 25
Tests Should Be Easy to Write and Maintain 27
Tests Should Require Minimal Maintenance as the System Evolves Around Them 29
What's Next? 29
Chapter 4 Philosophy of Test Automation 31
About This Chapter 31
Why Is Philosophy Important? 31
Some Philosophical Differences 32
Test First or Last? 32
Tests or Examples? 33
Test-by-Test or Test All-at-Once? 33
Outside-In or Inside-Out? 34
State or Behavior Verification? 36
Fixture Design Upfront or Test-by-Test? 36
When Philosophies Differ 37
My Philosophy 37
What's Next? 37
Chapter 5 Principles of Test Automation 39
About This Chapter 39
The Principles 39
What's Next? 48
Chapter 6 Test Automation Strategy 49
About This Chapter 49
What's Strategic? 49
Which Kinds of Tests Should We Automate? 50
Per-Functionality Tests 50
Cross-Functional Tests 52
Which Tools Do We Use to Automate Which Tests? 53
Test Automation Ways and Means 54
Introducing xUnit 56
The xUnit Sweet Spot 58
Which Test Fixture Strategy Do We Use? 58
What Is a Fixture? 59
Major Fixture Strategies 60
Transient Fresh Fixtures 61
Persistent Fresh Fixtures 62
Shared Fixture Strategies 63
How Do We Ensure Testability? 65
Test Last--At Your Peril 65
Design for Testability--Upfront 65
Test-Driven Testability 66
Control Points and Observation Points 66
Interaction Styles and Testability Patterns 67
Divide and Test 71
What's Next? 73
Chapter 7 xUnit Basics 75
About This Chapter 75
An Introduction to xUnit 75
Common Features 76
The Bare Minimum 76
Defining Tests 76
What's a Fixture? 78
Defining Suites of Tests 78
Running Tests 79
Test Results 79
Under the xUnit Covers 81
Test Commands 82
Test Suite Objects 82
xUnit in the Procedural World 82
What's Next? 83
Chapter 8 Transient Fixture Management 85
About This Chapter 85
Test Fixture Terminology 86
What Is a Fixture? 86
What Is a Fresh Fixture? 87
What Is a Transient Fresh Fixture? 87
Building Fresh Fixtures 88
In-line Fixture Setup 88
Delegated Fixture Setup 89
Implicit Fixture Setup 91
Hybrid Fixture Setup 93
Tearing Down Transient Fresh Fixtures 93
What's Next? 94
Chapter 9 Persistent Fixture Management 95
About This Chapter 95
Managing Persistent Fresh Fixtures 95
What Makes Fixtures Persistent? 95
Issues Caused by Persistent Fresh Fixtures 96
Tearing Down Persistent Fresh Fixtures 97
Avoiding the Need for Teardown 100
Dealing with Slow Tests 102
Managing Shared Fixtures 103
Accessing Shared Fixtures 103
Triggering Shared Fixture Construction 104
What's Next? 106
Chapter 10 Result Verification 107
About This Chapter 107
Making Tests Self-Checking 107
Verify State or Behavior? 108
State Verification 109
Using Built-in Assertions 110
Delta Assertions 111
External Result Verification 111
Verifying Behavior 112
Procedural Behavior Verification 113
Expected Behavior Specification 113
Reducing Test Code Duplication 114
Expected Objects 115
Custom Assertions 116
Outcome-Describing Verification Method 117
Parameterized and Data-Driven Tests 118
Avoiding Conditional Test Logic 119
Eliminating "if" Statements 120
Eliminating Loops 121
Other Techniques 121
Working Backward, Outside-In 121
Using Test-Driven Development to Write Test Utility Methods 122
Where to Put Reusable Verification Logic? 122
What's Next? 123
Chapter 11 Using Test Doubles 125
About This Chapter 125
What Are Indirect Inputs and Outputs? 125
Why Do We Care about Indirect Inputs? 126
Why Do We Care about Indirect Outputs? 126
How Do We Control Indirect Inputs? 128
How Do We Verify Indirect Outputs? 130
Testing with Doubles 133
Types of Test Doubles 133
Providing the Test Double 140
Configuring the Test Double 141
Installing the Test Double 143
Other Uses of Test Doubles 148
Endoscopic Testing 149
Need-Driven Development 149
Speeding Up Fixture Setup 149
Speeding Up Test Execution 150
Other Considerations 150
What's Next? 151
Chapter 12 Organizing Our Tests 153
About This Chapter 153
Basic xUnit Mechanisms 153
Right-Sizing Test Methods 154
Test Methods and Testcase Classes 155
Testcase Class per Class 155
Testcase Class per Feature 156
Testcase Class per Fixture 156
Choosing a Test Method Organization Strategy 158
Test Naming Conventions 158
Organizing Test Suites 160
Running Groups of Tests 160
Running a Single Test 161
Test Code Reuse 162
Test Utility Method Locations 163
TestCase Inheritance and Reuse 163
Test File Organization 164
Built-in Self-Test 164
Test Packages 164
Test Dependencies 165
What's Next? 165
Chapter 13 Testing with Databases 167
About This Chapter 167
Testing with Databases 167
Why Test with Databases? 168
Issues with Databases 168
Testing without Databases 169
Testing the Database 171
Testing Stored Procedures 172
Testing the Data Access Layer 172
Ensuring Developer Independence 173
Testing with Databases (Again!) 173
What's Next? 174
Chapter 14 A Roadmap to Effective Test Automation 175
About This Chapter 175
Test Automation Difficulty 175
Roadmap to Highly Maintainable Automated Tests 176
Exercise the Happy Path Code 177
Verify Direct Outputs of the Happy Path 178
Verify Alternative Paths 178
Verify Indirect Output Behavior 179
Optimize Test Execution and Maintenance 180
What's Next? 181
PART II: The Test Smells 183
Chapter 15 Code Smells 185
Obscure Test 186
Conditional Test Logic 200
Hard-to-Test Code 209
Test Code Duplication 213
Test Logic in Production 217
Chapter 16 Behavior Smells 223
Assertion Roulette 224
Erratic Test 228
Fragile Test 239
Frequent Debugging 248
Manual Intervention 250
Slow Tests 253
Chapter 17 Project Smells 259
Buggy Tests 260
Developers Not Writing Tests 263
High Test Maintenance Cost 265
Production Bugs 268
PART III: The Patterns 275
Chapter 18 Test Strategy Patterns 277
Recorded Test 278
Scripted Test 285
Data-Driven Test 288
Test Automation Framework 298
Minimal Fixture 302
Standard Fixture 305
Fresh Fixture 311
Shared Fixture 317
Back Door Manipulation 327
Layer Test 337
Chapter 19 xUnit Basics Patterns 347
Test Method 348
Four-Phase Test 358
Assertion Method 362
Assertion Message 370
Testcase Class 373
Test Runner 377
Testcase Object 382
Test Suite Object 387
Test Discovery 393
Test Enumeration 399
Test Selection 403
Chapter 20 Fixture Setup Patterns 407
In-line Setup 408
Delegated Setup 411
Creation Method 415
Implicit Setup 424
Prebuilt Fixture 429
Lazy Setup 435
Suite Fixture Setup 441
Setup Decorator 447
Chained Tests 454
Chapter 21 Result Verification Patterns 461
State Verification 462
Behavior Verification 468
Custom Assertion 474
Delta Assertion 485
Guard Assertion 490
Unfinished Test Assertion 494
Chapter 22 Fixture Teardown Patterns 499
Garbage-Collected Teardown 500
Automated Teardown 503
In-line Teardown 509
Implicit Teardown 516
Chapter 23 Test Double Patterns 521
Test Double 522
Test Stub 529
Test Spy 538
Mock Object 544
Fake Object 551
Configurable Test Double 558
Hard-Coded Test Double 568
Test-Specific Subclass 579
Chapter 24 Test Organization Patterns 591
Named Test Suite 592
Test Utility Method 599
Parameterized Test 607
Testcase Class per Class 617
Testcase Class per Feature 624
Testcase Class per Fixture 631
Testcase Superclass 638
Test Helper 643
Chapter 25 Database Patterns 649
Database Sandbox 650
Stored Procedure Test 654
Table Truncation Teardown 661
Transaction Rollback Teardown 668
Chapter 26 Design-for-Testability Patterns 677
Dependency Injection 678
Dependency Lookup 686
Humble Object 695
Test Hook 709
Chapter 27 Value Patterns 713
Literal Value 714
Derived Value 718
Generated Value 723
Dummy Object 728
PART IV: Appendixes 733
Appendix A Test Refactorings 735
Appendix B xUnit Terminology 741
Appendix C xUnit Family Members 747
Appendix D Tools 753
Appendix E Goals and Principles 757
Appendix F Smells, Aliases, and Causes 761
Appendix G Patterns, Aliases, and Variations 767
Glossary 785
References 819
Index 835