Approaches such as peer code review and static analysis software are generally focused on spotting and fixing unexpected or preventable flaws in the development process, but not every software security issue is necessarily the result of poorly executed design or oversight. Internal stakeholders have been known, on occasion, to sabotage programs in ways that either knowingly expose them to outside attackers or directly undermine relevant systems. For companies already faced with a continuous flow of more standard application security issues such as SQL injection flaws, addressing the possibility of internal sabotage tends to go on the back burner, a recent Dark Reading article reported.
“We still see the very basic SQL injection and very basic shopping cart negative number manipulation-type examples on high risk applications at Fortune 500 companies, ones that are spending a lot of money on application security,” Nish Bhalla, CEO of consultancy Security Compass, told the publication. “So if you add another layer of complexity to say, ‘Hey, by the way, not only is that a concern but you should be looking for Easter eggs and other things you have to hunt for,’ that’s usually not going to go over well.”
Nonetheless, flawed programs can have disastrous consequences, and employees may have any number of reasons for sabotage, ranging from settling a personal grudge to planting errors that can later be exploited for personal gain. For example, in 2009 a programmer at Fannie Mae was convicted of planting malicious code that would have erased all the data from the mortgage lender’s 4,000 servers after he was fired for a non-malicious coding error. The attack was prevented when another engineer detected the code. However, catching such incidents can be a major challenge, particularly for smaller development teams without extensive testing and review protocols, experts told CRN following the attack.
Preventing sabotage with checks and balances
Security experts told Dark Reading that it is worth taking the time to outline internal processes that can help catch sabotage attempts. Instituting a system of checks and balances is an important first step: for instance, no single person should be solely responsible for submitting final builds or for controlling the audit logs. One organization Dark Reading profiled audits and logs every code check-in and requires developers to use a unique signing key to discourage tampering. Another common tactic is to use peer code review and source code analysis software to catch coding issues during the development process.
“One of the things that a lot of companies do is they pair up programmers so that one person is always looking at the code that another person is writing,” software CEO Dan Stickel told Dark Reading. “That’s actually useful on a lot of different levels. It’s useful to try to prevent such sabotage, of course, but it’s also useful to catch normal QA problems and make people more creative.”
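The audit-and-sign approach described above can be illustrated with a minimal sketch. The developer names, keys, and log format below are purely hypothetical (a real deployment would use proper cryptographic keys such as GPG, issued individually), but the core idea is the same: every check-in is recorded with a signature tied to one developer's unique key, so tampering or impersonation is detectable after the fact.

```python
import hashlib
import hmac

# Hypothetical per-developer signing keys; in practice these would be
# real cryptographic keys (e.g. GPG) issued to each individual.
DEVELOPER_KEYS = {
    "alice": b"alice-secret-key",
    "bob": b"bob-secret-key",
}

def sign_checkin(developer: str, diff: bytes) -> str:
    """Sign a code check-in with the developer's unique key."""
    key = DEVELOPER_KEYS[developer]
    return hmac.new(key, diff, hashlib.sha256).hexdigest()

def verify_checkin(developer: str, diff: bytes, signature: str) -> bool:
    """Verify that a logged check-in really came from that developer."""
    expected = sign_checkin(developer, diff)
    return hmac.compare_digest(expected, signature)

# Audit-log entry: (developer, diff, signature)
diff = b"fix: add bounds check in parser"
entry = ("alice", diff, sign_checkin("alice", diff))

# Verification passes for the genuine author...
print(verify_checkin(*entry))                    # True
# ...and fails if someone impersonates another developer.
print(verify_checkin("bob", diff, entry[2]))     # False
```

Because no one developer controls both the log and the keys, this kind of record supports the checks-and-balances principle: the person who writes the code is not the only person who can vouch for where it came from.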
According to Dawn Cappelli, principal engineer at CERT, the greatest risk of insider attack actually comes during the maintenance phase. While many organizations closely monitor initial development cycles, it can be much easier for a rogue programmer to implant malicious code during an update. One defense organizations can adopt is to use scanning tools such as static analysis software to screen every update. Paired with manual code review, this approach can catch both intentionally planted and accidental vulnerabilities, strengthening the overall process and mitigating the threat of insider sabotage.
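As a toy illustration of screening every update, the sketch below flags a few dangerous constructs in changed Python code. It is a stand-in for a real static analysis tool, not a substitute for one; the flagged call names and the sample "update" are illustrative only.

```python
import ast

# A few constructs worth flagging in an incoming update. Real static
# analyzers check far more; these simply illustrate the screening step.
SUSPICIOUS_CALLS = {"eval", "exec", "system"}

def scan_update(source: str) -> list:
    """Return (line, call_name) for each suspicious call in updated code."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # Handles both bare calls (eval) and attribute calls (os.system).
            name = getattr(func, "id", getattr(func, "attr", ""))
            if name in SUSPICIOUS_CALLS:
                findings.append((node.lineno, name))
    return findings

# A hypothetical update that smuggles a dangerous call in beside a
# routine change -- exactly the maintenance-phase scenario above.
update = """
def greet(name):
    return "Hello, " + name

def backdoor(cmd):
    import os
    os.system(cmd)
"""

print(scan_update(update))  # → [(7, 'system')]
```

Run automatically against each check-in, a screen like this cannot prove intent, which is why the article pairs it with manual review: the tool narrows attention to suspicious changes, and a second human decides whether they belong.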
Software news brought to you by Klocwork Inc., dedicated to helping software developers create better code with every keystroke.