So, I just lost hours (possibly days) because of a phenomenally stupid bug in a library my software uses. (The reason I'm not sure how long is that I don't yet know whether this bug has been causing the slowdown I've been chasing for a week, or whether it has just been making the real bug harder to find.)
If the makers of this library had even the crudest unit test in place for this function, they would have detected the bug instantly. But obviously they didn't. So the question is: should I have had a test like that in place myself?
My tests didn't pick up on the bug in an obvious fashion because my code is designed to be very tolerant of faults in the incoming data. (That's an absolute must in my line of work.) In particular, the bug only showed up this time because I was using the results of the broken function in a new and different way. With that change, the difference was startling -- one test run took 3 seconds beforehand, and appeared to run forever afterward. (I think I let one test run go for over a week.)
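
As an aside, one cheap guard against that particular failure mode is to give the test itself a wall-clock budget, so a run that should take 3 seconds fails loudly instead of grinding on for a week. Here's a minimal sketch in Python; process_feed() is a hypothetical stand-in for the real code under test, and the 30-second budget is only an illustration:

    import multiprocessing
    import unittest

    def process_feed(records):
        # Hypothetical stand-in for the real code under test.
        return [r for r in records if r is not None]

    def run_workload():
        process_feed([1, None, 2] * 1000)

    class TimeBoundedTest(unittest.TestCase):
        def test_finishes_within_budget(self):
            # Run the workload in a child process and give it a generous
            # multiple of the expected 3-second runtime.
            worker = multiprocessing.Process(target=run_workload)
            worker.start()
            worker.join(timeout=30)
            if worker.is_alive():
                worker.terminate()  # kill the runaway run instead of waiting a week
                worker.join()
                self.fail("workload exceeded its 30-second budget")

    if __name__ == "__main__":
        unittest.main()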
How paranoid should your unit tests be? Should you test all the functions you use in a third-party library? What about the system libraries? What about your compiler?
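I don't have a good answer, but the kind of test I'm now tempted to write looks something like this: a tiny "paranoia" test that pins down exactly the behavior my code relies on from a dependency. Everything below is hypothetical -- there's a real library and a real function behind this story, but the sketch uses stand-in names, and the stub would be replaced by an import of the actual third-party function:

    import unittest

    def decode_samples(raw):
        # Stub so the sketch runs on its own; in a real paranoia test you
        # would import the third-party function here instead.
        return [b / 256.0 for b in raw]

    class ThirdPartyParanoiaTest(unittest.TestCase):
        """Pin down the behavior my code actually depends on."""

        def test_known_input_gives_known_output(self):
            # The crudest possible check: a fixed input and the answer
            # I expect, worked out by hand once.
            self.assertEqual(decode_samples(bytes([0, 128, 255])),
                             [0.0, 0.5, 255 / 256.0])

        def test_output_length_matches_input_length(self):
            # An invariant my code silently assumes.
            raw = bytes(range(100))
            self.assertEqual(len(decode_samples(raw)), len(raw))

    if __name__ == "__main__":
        unittest.main()

It isn't much, but a check that crude might be the difference between a 3-second test failure and a week of head-scratching.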