False positives in modern static analyzers

on May 22, 09 • by Alen Zukich • with 1 Comment


This is in response to Jason’s post about false positives.  First of all, there is a general misconception about false positives.  Modern static source code analysis tools have changed the game.  They are not the Lint tools of the past: the focus on deep inter-procedural analysis means static tools today are expected to produce more real issues than false reports.

With that said, Jason is right: a large code base that has never been run through static analysis will produce a large number of issues no matter how accurate the tool is.  Static analysis tools do provide a number of ways to manage this (Jason talks about one), but it also makes sense to fold the results into your code reviews.  You may be looking at legacy code, but if you are doing a code review then you must have changed something in that legacy code.  Having those bugs visible to you during the review means findings that were easy to ignore suddenly apply.


One Response to False positives in modern static analyzers

  1. Andrew says:

    Interestingly, some organizations use “false positives” as an opportunity to flag code that should be rewritten. Modern static analysis tools are pretty sophisticated, so code that trips up the analysis can be indicative of code that is overly complex and difficult to maintain. The rocket scientists at NASA use the “Power of 10” rules, one of which states that not only should they fix problems reported by static analysis tools, but also “fix” the code so warnings/false positives go away. This process may not be for everybody, and sometimes there are real false positives which should be ignored, but the point is that false positives may be indicative of something more than a deficiency in the analysis. For more on the Power of 10, visit http://codeintegrity.blogspot.com/2010/08/power-of-10-for-safety-critical-code.html

