Automation is essential for building reliable software. Many tools are available for the job, and most of them are open source.
The first tool to consider is the one that will automatically run all the others for every modification of your source code. Having software do this is really important, for several reasons: it guarantees that every change is built and checked the same way, it catches regressions early, and it frees developers from repetitive manual steps.
On the continuous integration tool, the standard steps are to check out the latest sources, build them, run the automated tests, and publish the results.
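Such a pipeline can be sketched in a few lines. This is a minimal illustration, not a real CI configuration; the commands are placeholders for your actual build and test commands:

```python
import subprocess

# Ordered steps of a typical continuous integration run.
# The commands below are hypothetical placeholders.
STEPS = [
    ("checkout", ["git", "pull"]),
    ("build",    ["make", "all"]),
    ("test",     ["make", "test"]),
]

def run_pipeline(steps):
    """Run each step in order; stop and report on the first failure."""
    for name, cmd in steps:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"step '{name}' failed, aborting")
            return False
    print("all steps passed")
    return True
```

A real CI server such as Jenkins does exactly this, plus scheduling, history, and notifications.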
While peer code review is necessary for software changes, doing part of the job with a tool helps save time and increases quality. There are several static source code analysis tools on the market, and they find an impressive number of defects that developers have left behind, sometimes because they were junior developers, sometimes because the algorithm is complex and the defect was not obvious.
One of the most important points to check when evaluating this kind of tool is the false-positive rate. If the rate is too high, your developers will waste precious time analyzing defects that are not real problems. However, a zero false-positive rate is probably not achievable: if a tool produces no false positives, it most likely also finds very few defects. Moreover, it is important to analyze false positives; they are often linked to complex source code that even a human would have trouble understanding. That is why some advise fixing the code behind false positives as well: it simplifies the source code and helps developers fully understand what the program does.
It is important to note that static source code analysis can help find existing bugs in your software, but also errors that are not yet bugs and can become bugs after a later modification.
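As an illustration (hypothetical code, not taken from any real project), consider a helper that is correct for every caller today but carries exactly this kind of latent defect:

```python
def average(values):
    """Return the arithmetic mean of a list of numbers.

    Latent defect: if a future caller passes an empty list, this
    raises ZeroDivisionError. A static analyzer can flag the
    unguarded division before any caller actually triggers it.
    """
    return sum(values) / len(values)

def safe_average(values):
    """Guarded version: the defect is fixed before it becomes a bug."""
    if not values:
        return 0.0
    return sum(values) / len(values)
```

Today every call site may pass a non-empty list, so `average` never fails; the analyzer still reports it, because one new caller is enough to turn the error into a bug.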
I used Coverity on a one-million-SLOC code base. The first time I ran it, I was really impressed by the findings: it improved quality from the very first run. After that, it is necessary to integrate the tool with the continuous integration system in order to maintain the quality level.
When managing a software project with several teams and many developers scattered over several sites, it is sometimes difficult to maintain code quality: it is necessary to ensure that changes are technically correct and fit the coding rules and the software architecture. For this, remote developers need to communicate about a proposed change or the correction of a defect. Teams often start by setting up a process based on patch files sent by email, but this method has several drawbacks: reviews are not tracked centrally, comments are scattered across mail threads, and it is hard to know which version of a patch was finally accepted.
One way to overcome these problems is to use a tool that automates certain tasks and centralizes and standardizes code reviews. ReviewBoard, an open-source web tool, does this job quite well.
ReviewBoard can be used in two ways: pre-delivery and post-delivery reviews. I use both approaches: pre-delivery when there is an identified risk in the delivery or doubt about the solution; post-delivery to systematically inform the maintainer of a module that a delivery was made in their scope, and thus avoid unpleasant surprises that would otherwise be discovered much later, with more damage.
In this mode, the developer who wants to submit a code change generates a patch file and manually creates a new code review request in ReviewBoard. This is done simply by uploading the file through the tool's web interface. Once the review request is created, you just add the reviewers to whom you want to submit the change. When the request is published, the reviewers are automatically notified by email.
ReviewBoard provides a useful command line tool for publishing new code review requests. It is therefore very easy to integrate into a continuous integration system (e.g. Jenkins), creating a review request for each new modification detected in the code base. In practice, I wrote a script that lets developers subscribe to a particular piece of code: a subscriber is then automatically added as a reviewer whenever a change is delivered in their scope, and notified of the change by email.
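The core of such a subscription script is a mapping from code-base paths to subscribers. Here is a minimal sketch; the mapping format and the names are hypothetical, and the resulting reviewer set would then be attached to the review request published through ReviewBoard's command line tool:

```python
# Map path prefixes of the code base to the developers who
# subscribed to them (hypothetical example data).
SUBSCRIPTIONS = {
    "src/network/": ["alice"],
    "src/storage/": ["bob", "carol"],
}

def reviewers_for_change(changed_files, subscriptions):
    """Return the set of subscribers whose scope covers at least
    one file touched by the change."""
    reviewers = set()
    for path in changed_files:
        for prefix, names in subscriptions.items():
            if path.startswith(prefix):
                reviewers.update(names)
    return reviewers
```

For example, a change touching only `src/network/` would notify only `alice`, while a change spanning both directories would notify all three subscribers.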
Once the code review request has been created, it is possible to annotate the modified code line by line or by groups of lines, and to make general comments on the change. The reviewer then publishes the review, and the author of the request is notified by email. The author can then fix the delivery and resubmit a change, and the process starts again.
In terms of ergonomics, the tool is again well done: the ReviewBoard code viewer supports syntax highlighting, and reading the code and its modifications is pleasant. You can really concentrate on the review without having to fight the tool.
Sometimes you will have to set up a better development chain on an old code base with many quality defects. In that case, it is often not possible to fix every quality issue before adopting the new chain. A good approach is to take the current quality level as the reference, the target being to always do better, and never worse.
Let's take the example of compilation warnings. Say you have 1024 compilation warnings in the software: any change that raises the count above 1024 is rejected, and whenever a change lowers the count, the new, lower number becomes the reference.
Communicate this rule to your developers, and you will see the number of warnings decrease until it reaches 0. This can take several weeks or several months, depending on your software.
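This "never worse, always better" ratchet can be enforced mechanically in the continuous integration job. A minimal sketch of the decision logic (in practice the baseline would be persisted between builds, in a file or in the job's configuration):

```python
def check_warning_ratchet(current_count, baseline):
    """Apply the 'never worse, always better' rule.

    Returns (accepted, new_baseline): the change is rejected when it
    adds warnings; whenever the count drops, the lower value becomes
    the new reference for subsequent builds.
    """
    if current_count > baseline:
        return False, baseline                      # worse than the reference: reject
    return True, min(current_count, baseline)       # ratchet the baseline down
```

Starting from 1024 warnings, a build with 1030 warnings is rejected, while a build with 1000 warnings is accepted and lowers the reference to 1000, so even 1010 is rejected afterwards.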