Last week I checked out all 12 main projects that constitute Zemanta and ran their respective unit tests. To my great delight, I saw 1,054 unit tests run with only 3 of them failing. This is quite an amazing result, since just a year ago the total number of unit tests was much lower and the number of failing tests much higher. In fact, we had so many failing tests that we regularly committed code even when the tests did not pass. But over the past year we have spent quite some time fixing failing unit tests and have required that new functionality come with good code coverage.

All of this came about as a quite natural by-product of code reviews. Most of the new functionality we develop is just an extension of existing functionality. To review the code implementing new functionality, you must understand it, and the best way to understand code is to execute it. Executing a lump of code buried deep within a large system is not trivial, and unit tests are usually the best way to do it. Additionally, unit tests are very good at exposing the inputs and outputs of new functionality, which further helps in understanding it.
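To illustrate the point about tests exposing inputs and outputs, here is a minimal sketch in Python's built-in `unittest` framework. The function `extract_tags` is purely hypothetical, a stand-in for a small piece of functionality buried inside a larger system; the tests double as executable documentation of what goes in and what comes out.

```python
import unittest

def extract_tags(text):
    """Hypothetical helper: return lowercase words prefixed with '#'."""
    return [word[1:].lower()
            for word in text.split()
            if word.startswith("#") and len(word) > 1]

class ExtractTagsTest(unittest.TestCase):
    def test_finds_hashtags(self):
        # The test spells out a concrete input and its expected output,
        # which is exactly what a reviewer needs to understand the code.
        self.assertEqual(extract_tags("Read this #Python post #now"),
                         ["python", "now"])

    def test_handles_no_tags(self):
        # Edge cases are made explicit too.
        self.assertEqual(extract_tags("no tags here"), [])

if __name__ == "__main__":
    unittest.main()
```

A reviewer can run this file directly and see the behaviour, rather than tracing the call site through the whole system.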
If you haven't yet introduced code reviews into your development process, you should do so immediately. Even if you're a one-man shop, you should do code reviews: send them to yourself and wait a day before reviewing. I promise you that, quite often, you will find it really hard to grasp the ignorance of the person who sent you the review :)