My youngest son loves to play the game “Mouse Trap”, and last Christmas vacation, I had the opportunity to play many rounds of it with him. The object of the game is to build the mouse trap contraption piece by piece as the players take turns rolling the dice and moving around the game board. One thing he insisted upon was turning the crank and running the machine every time a piece was added — even when the crank wasn’t attached to anything.
Now, I’ve been playing this game for decades, and my vast experience told me that there’s no sense in turning the crank until the machine is mostly finished. Besides, I had other things I needed to get to, and running the machine every time we added a piece made the game last longer. “We only added one tiny piece,” I’d say. “There’s no reason to test every time. What could go wrong?” At the start of one particular game, though, I decided to do my best to be a patient dad, so I gave in and agreed to do it his way.
As the game played out, with him turning the crank after each piece was added, I found that some of the assumptions we were making when adding a piece didn’t hold. Sometimes a piece had been installed backwards. Other times, installing a new piece would knock a preexisting piece out of alignment, and the machine wouldn’t work as it had before. Any time the machine didn’t work, we’d find out where the problem was, make an adjustment, turn the crank again, and make sure it worked before moving on to the next roll of the dice.
So, what started out as my indulging his fascination with the machine actually turned out to be him teaching me a thing or two about the value of continuous integration coupled with continuous testing. I was a little embarrassed that I hadn’t thought of it first, and that a 7-year-old had come by it intuitively. My old habits were dying hard. He hadn’t acquired any bad ones yet, and so it made perfect sense to him to test the system for no other reason than that something had changed. This experience solidified the answer I give when someone asks me when a software development team should test: “Any time anything changes”. (In the context of TDD, of course, the answer would be “BEFORE anything changes”.)
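That “turn the crank after every piece” discipline translates directly into code. Here’s a minimal sketch of the idea in Python; the machine, its pieces, and the function names are made up for illustration, not taken from any real test framework or game API. The point is simply that the check runs after every single change, not just at the end:

```python
def machine_runs(pieces):
    """The finished contraption 'runs' only when every required piece is in place."""
    required = {"crank", "gear", "lever"}
    return required.issubset(pieces)

def build_and_test():
    """Add one piece at a time, and 'turn the crank' after each addition."""
    pieces = set()
    for piece in ["crank", "gear", "lever"]:
        pieces.add(piece)
        # Test after every change -- if this check ever surprises us,
        # we know exactly which piece caused the problem.
        runs = machine_runs(pieces)
        assert runs == (len(pieces) == 3), f"unexpected result after adding {piece}"
    return machine_runs(pieces)

assert build_and_test()
print("machine runs")
```

Had we only checked `machine_runs` once at the end, a backwards piece added early on would have been just as invisible in code as it was on the game board: the failure would surface long after the change that caused it.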
This experience also reminded me of how we often resist doing something because it isn’t the way we’ve always done it, and how leaving the decisions up to the “experts” can result in missed opportunities. I now have serious game when it comes to “Mouse Trap”, and a stronger appreciation of continuous integration and continuous testing — all because I’ve learned to think like a 7-year-old.