Even the greatest software experts are mere mortals, which means some vulnerabilities will inevitably exist in the code they write. Unfortunately, not every company has a security program or the in-house resources to implement one, and the problem is compounded by the widespread adoption of open source technology, driven partly by the perception that it’s ‘free’ software. The likelihood of vulnerabilities occurring in open source software increases significantly because few people take responsibility for finding and fixing flaws in the code. Open source software isn’t free, and it comes with inherent security risks that executive leadership teams are starting to realise. In today’s market, the key questions for CIOs are: Where are our vulnerabilities? Is the software we write secure? Are the open source libraries we use secure? And if we fix one problem, how can we be sure we don’t have others just like it?
Bugs bother even the biggest businesses
Even world-leading organisations such as Google and Microsoft accept that it is impossible to prevent vulnerabilities from cropping up in source code. They also recognise that a fundamental problem with traditional security research is that there are simply not enough security engineers in the world to secure every line of code.
One way to deal with this is variant analysis, a process in which a known vulnerability is used as a seed to find similar problems in the code. Once the root cause of a critical bug has been identified, security teams often carry out manual audits to identify other occurrences of the same class of problem across the codebase. This may sound straightforward, but at the scale modern enterprises operate, it becomes a tedious and time-consuming undertaking. A security team focused solely on diagnosing and fixing a single bug remains blind to its variants, and has no way of preventing those spin-off vulnerabilities from causing problems in the future.
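To make that concrete, the sketch below shows what codifying a root cause as a reusable check might look like in a declarative code-query syntax of the kind discussed later in this piece. It is an illustration under that assumption, not any vendor’s shipped check: once an overflow has been traced back to an unbounded strcpy call, a single query can surface every other call to the same API across the codebase for review.

    import cpp

    // Illustrative variant-analysis check: having diagnosed one buffer overflow
    // caused by an unbounded strcpy, flag every other strcpy call in the
    // codebase so the same root cause can be reviewed wherever it occurs.
    from FunctionCall call
    where call.getTarget().hasName("strcpy")
    select call, "Possible variant of a known unbounded-copy vulnerability."

Run across a whole codebase, a check like this turns a one-off diagnosis into a repeatable test that keeps flagging new variants as the code changes.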
Some enterprises will argue that open sourcing makes sense for the simple reason that the more people reviewing code with fresh eyes before it goes into production, the more likely any vulnerabilities will be detected. While this can help, two critical factors remain: the margin for human error still exists, and no security strategy can ever fully address every security bug that needs to be found and fixed in open source code. Product security teams can certainly be highly effective, but they are often simply unable to keep up with the sheer number and scale of code changes. Already stretched, they can also struggle to give developers effective ways to prevent vulnerabilities from reaching production and impacting customers.
Speed of development
No matter how good an enterprise’s security strategy and researchers are, the scale of the problem is hard to manage because of the frequency of changes required to maintain modern software. Windows is made up of tens of millions of lines of code, Google uses around two billion lines across its internet services, and even intelligent vehicles run on around 100 million lines, all of which require routine updates and ongoing analysis to ensure stability and security. No product security team can keep pace with codebases of that size and locate every zero-day and vulnerability variant before the code has to ship.
The key to scaling security expertise is never doing the same research twice. As a result, organisations are turning to code analysis platforms, such as Semmle, that make it possible to automate variant analysis. This technology enables security engineers to identify vulnerabilities without tedious and time-consuming manual code inspection. It can find every occurrence of a vulnerability and alert developers so that they can fix issues before they end up in the wild, compressing a task that would take days into hours.
Developer community insights
The power behind Semmle’s LGTM.com code analysis platform – which uses AI techniques to present actionable recommendations for improvement to developers and managers – comes from a combination of deep semantic code search and data science insights gleaned from its community of 500,000 developers. The platform is powered by QL, a query engine that lets security researchers treat code as data in order to spot critical errors and variants that would be virtually impossible to find any other way. By writing a QL query that codifies the diagnosis of a vulnerability, it is possible to automate the analysis across multiple codebases, and even to analyse new code changes to stop mistakes ever making their way into production.
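To give a flavour of what treating code as data looks like in practice, here is a second hedged sketch in the same QL-style syntax (again illustrative, not a query taken from the LGTM platform). It asks a structural question of a C codebase: which printf calls take a format string that is not a constant literal, a classic root cause of format-string vulnerabilities.

    import cpp

    // Illustrative 'code as data' query: flag printf calls whose format
    // argument is not a constant string literal, the structural pattern
    // behind many format-string vulnerabilities.
    from FunctionCall call
    where
      call.getTarget().hasName("printf") and
      not call.getArgument(0) instanceof StringLiteral
    select call, "Format string is not a constant literal."

Because the question is expressed over the structure of the code rather than its text, the same query keeps working as the codebase grows and can be re-run automatically on every new change.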
Machine learning is used to assess the accuracy of each query based on how often the alerts it produces are actually fixed. This approach keeps ‘false positives’ (common in other types of software security checks) to an absolute minimum. Security teams can then focus on finding new flaws and vulnerabilities that they know represent genuine problems, making in-house security teams far more efficient and effective.
Prevention versus cure
Of course, it could be argued that prevention is better than cure and that the main focus should be on writing code that is free of flaws in the first place. In an ideal world that sounds plausible, but the harsh reality of software development is that the same coding mistakes are made repeatedly over the course of a project’s lifetime. The issue is complicated further by similar mistakes appearing across multiple projects.
Even with well-designed application programming interfaces (APIs), errors are bound to occur. Only through regular, continuous analysis can an enterprise hope to identify mistakes that were not spotted initially. The best time to do this is during code review, while developers are still focused on the changes and can act on any deeper issues the analysis surfaces.
However, we have clearly established that preventing all code flaws is a pipedream – it is impossible to stop every vulnerability from being introduced into source code, and enterprises must brace themselves for the worst-case scenario. They must make it as hard as possible for vulnerabilities to be exploited, and recognise that solutions now exist that automate software code analysis to enhance enterprise security and improve business performance.
Collaboration
It must be emphasised that no enterprise should believe it is operating in a vacuum. The most secure businesses understand that collaboration is key to success, and that making security the number one priority across the entire organisation is vital. Whatever the size of an enterprise’s budget, and however many staff it has on its books, data and system security should always come first. The average breach takes more than six months to identify and a further 66 days to contain, costing an organisation an average of $3.62 million, according to a 2017 EY (Ernst & Young) survey. In addition, almost 90 percent of companies surveyed said they need up to 50 percent more security budget, yet only 12 percent expect an increase of more than 25 percent.
While most enterprises may not have the budgets or resources of the industry heavyweights, they should take comfort in the fact that powerful tools are available to keep flawed code to a bare minimum. Peace of mind is often said to be priceless in business, but it is worth considerably more when combined with genuine, tangible data security, maximum operational efficiency and minimum revenue loss.