Where does software development end and testing begin?
For veteran developer, agile coach, and author David Bernstein, writing quality tests is essential if developers want to build quality code.
In a recent web seminar, “Taking Agile Testing to the Extreme,” Bernstein sat down with Coveros CEO Jeffery Payne to discuss the key differences between waterfall and agile approaches to development, how developers should focus on writing quality tests, and what role automation plays in an agile development approach.
Here are key highlights from the discussion.
1. To ensure a truly agile approach, your team should be working toward potentially shippable software at the end of every sprint.
For both David and Jeff, a key difference between an agile development approach and traditional waterfall is that in agile the goal is to create potentially shippable software at the end of each sprint. But ensuring this means that testing and development have to happen in tandem rather than as separate phases.
Bernstein: I really feel like this is the difference between doing agile and Scrum and doing waterfall. Of course, in waterfall we spread out those phases, but the first step in spreading those phases out is to separate dev and test. The one thing I think Scrum got really right is potentially shippable software at the end of the sprint…A lot of people interpret that to mean they pass to QA. As soon as we do that, we’re starting to separate out those phases and we wind up in waterfall. So, the challenge is to integrate that together so our testers and developers are actually working together.
Payne: That principle of workable software or releasable software…to me, that is the lynchpin of an agile process. This idea that at every step I have something that works fundamentally changes the process.
2. While integration is a painful challenge in waterfall, it can be a powerful ally in agile.
While integration was a months-long “painful” process in waterfall, the idea of continuous integration in agile allows developers to get the valuable feedback they need to improve their work on a continuous basis.
Bernstein: One of the things I love about agile is that we took the thing we hated the most in waterfall–which was integration, because it was so painful and I remember not seeing daylight for months because I was working around the clock to integrate–and we made it part of the build and we do it continuously. Suddenly, instead of it being the bane of our existence, it becomes my most valuable form of feedback. As an agile developer, when I write code I have my continuous integration server right there with me. I’m working and I’m getting feedback from my build all the time–like constantly…Getting that level of information coming at you is just so valuable.
Payne: There’s an adage in DevOps: ‘If it hurts, do it more often.’ CI and that continuous integration process in agile, if you extend that into the delivery process, into continuous deployment or continuous delivery, making that horrible, nasty ‘I’ve got to get it into its final production state’ step something you do constantly is the same idea. It’s this idea that those things that hurt, we have to just make them the norm. We have to make them something that we practice so much that they’re not hard anymore–like integration.
Bernstein: Integration goes even beyond that to become my best ally. I get feedback and I don’t even notice anymore, because it tells me there’s a problem and I fix it. I just fixed a bug and it took me 10 seconds and I didn’t even recognize that I did it. But, if I wasn’t doing test-first development and having all the support of my tools, that might have escaped to QA and it might have taken them a while to find it.
3. To get test-driven development (TDD) right, developers should create small, specific tests of behavior rather than of specific code.
David Bernstein highlighted his approach to testing, which is focused on testing behaviors rather than specific code and ensuring those tests are small enough to test just one simple behavior.
Bernstein: For me, TDD is test-first development, where we write the test before we write the behavior, and the test that we’re writing is for a small unit of behavior. And I’m writing a test against behavior, not against a unit of code. A lot of people get confused about that…So, a unit means a unit of independent behavior and that’s what we’re writing our assertions against–not a unit of code. When we do that it makes the tests more meaningful and also simpler to write as well…A good test is a test that’s unique. It fails for only a single reason. No other test in the system fails for the same reason. Since it’s unique, typically only one test fails. Sometimes, it can cascade. But typically it’s the first failure that we see that’s causing other things to fail. When we write our tests that way we don’t wind up with tons of tests…If you always write the tests first, you’re always going to get close to 100% coverage, which is really good. But if the code coverage tools actually told you what real coverage was, people would have more like 500-800% coverage, because they run tests against the same stuff over and over and over again. That doesn’t really help, it just slows things down a bit.
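Bernstein’s distinction between a unit of behavior and a unit of code is easier to see in code. The sketch below is purely illustrative (the ShoppingCart class and the pytest tests are hypothetical and not from the webinar): each test is written first, names one behavior, and fails for only one reason.

```python
# Hypothetical sketch of test-first development against small units of behavior.
import pytest


class ShoppingCart:
    """Minimal production code, written only after the tests below existed."""

    def __init__(self):
        self._items = []

    def add_item(self, price):
        if price < 0:
            raise ValueError("price cannot be negative")
        self._items.append(price)

    def total(self):
        return sum(self._items)


def test_adding_an_item_increases_the_total_by_its_price():
    # One behavior, one assertion: this test fails only if totaling is wrong.
    cart = ShoppingCart()
    cart.add_item(5)
    assert cart.total() == 5


def test_rejects_negative_prices():
    # A separate behavior gets its own test, so each failure stays unique.
    cart = ShoppingCart()
    with pytest.raises(ValueError):
        cart.add_item(-1)
```

Written this way, a failing test points at a broken behavior by name rather than at a particular class or method, which is what keeps failures unique.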
4. When code is more testable, it’s also of higher quality. That makes writing quality tests as important for developers as writing quality code.
In his years of experience in software development, Bernstein has come to the conclusion that the more testable a piece of code is, the better the quality of that code and the more verifiable it is.
Bernstein: I don’t have enough time to prove this to you, but I’ve spent years thinking about it. For me, when my code is more testable, it’s also of higher quality. It’s also more extensible. It’s also more verifiable. All the great virtues of software come with testability. When I say testability–you can write a test for anything, but if you have to reach through the UI to exercise your code, I don’t consider that testable. So I want to expose good, testable objects or good, testable code and write against that, not trying to write through WebDriver or some other tool. That’s done at the TDD level, at the test-first level. So, I try to write small tests that cover small behaviors that take me incrementally closer to finishing my story. The little game I play with myself is: ‘How small can I make it? Can I make it absolutely teeny-tiny?’ It’s fascinating that I learn a lot about how to write better code by doing that.
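One way to picture “exposing good, testable objects” rather than reaching through the UI is to keep the behavior in a plain object and leave the UI as a thin wrapper over it. The names below (DiscountCalculator, handle_checkout_form) are invented for illustration; this is a sketch of the idea, not code from the talk.

```python
# Hypothetical sketch: expose a testable object instead of testing through the UI.

class DiscountCalculator:
    """Plain object holding the business rule; easy to exercise from a unit test."""

    def discounted_price(self, price, customer_is_member):
        if price < 0:
            raise ValueError("price cannot be negative")
        return price * 0.9 if customer_is_member else price


def handle_checkout_form(form_fields):
    # A thin UI layer (here just a form handler) delegates to the object.
    calculator = DiscountCalculator()
    price = float(form_fields["price"])
    is_member = form_fields.get("member") == "yes"
    return f"Total: {calculator.discounted_price(price, is_member):.2f}"


def test_members_get_ten_percent_off():
    # The unit test talks to the object directly: no browser, no WebDriver.
    assert DiscountCalculator().discounted_price(100, customer_is_member=True) == 90
```

Because the rule lives in DiscountCalculator, the unit test exercises it directly and instantly; a tool like WebDriver would only be needed for a handful of end-to-end checks of the UI layer itself.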
Payne: Do you refactor your tests?
Bernstein: Absolutely. In fact, I spend more time refactoring my tests than I do refactoring my code. Tests are first-class citizens. They’re quality code. I do this because my tests are so valuable to me. They’ve really got my back. They constantly show me the value of having really good tests, so I don’t mind investing in them.
Payne: I always say a good test suite is a safety net for refactoring, because if you don’t have a way to know when you broke something, when you’re refactoring or even implementing new things, then you’re going to spend a lot of time debugging and having a lot of problems downstream. You’ve got to have a safety net. That means automated tests at some level, tests you can run as part of that CI process you described, constantly checking whether what you’re doing is breaking something else or not.
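Treating tests as first-class code and as a safety net implies refactoring them the way you refactor production code. Here is a small, hypothetical before-and-after sketch (reusing the toy ShoppingCart from the earlier example, with pytest assumed): duplicated setup is pulled into a fixture so each test states only the behavior it asserts.

```python
# Hypothetical sketch of refactoring test code without weakening the safety net.
import pytest


class ShoppingCart:
    # Same minimal cart as in the earlier sketch, repeated so this file runs on its own.
    def __init__(self):
        self._items = []

    def add_item(self, price):
        self._items.append(price)

    def total(self):
        return sum(self._items)


# Before refactoring: each test repeats the same setup.
def test_total_of_two_items_before_refactor():
    cart = ShoppingCart()
    cart.add_item(5)
    cart.add_item(7)
    assert cart.total() == 12


# After refactoring: shared setup moves into a fixture, so each test
# reads as a single statement of behavior.
@pytest.fixture
def cart_with_two_items():
    cart = ShoppingCart()
    cart.add_item(5)
    cart.add_item(7)
    return cart


def test_total_of_two_items(cart_with_two_items):
    assert cart_with_two_items.total() == 12
```

The refactored tests still fail the moment the totaling behavior breaks, so the safety net Payne describes stays intact while the test code itself becomes easier to read and maintain.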