If you like this content please consider subscribing!
Also, you can find my book on building an application security program on Amazon or Manning
When I got my start in cybersecurity many moons ago, I was thrust into one of the early tools of the application security testing space: Static Application Security Testing (SAST). At the time, at least to me, this encapsulated where we were heading in the AppSec space. No more relying on testing the application in a running environment through penetration testing or a black box testing tool. We had the ability to scan the actual code for vulnerabilities. This was where the term shift-left really took on meaning.
While static analysis has been around since the 1990s, security applications of static analysis in the enterprise didn't really take hold until the 2000s, as the web application scene exploded. In those days, SAST was proclaimed the slayer of vulnerabilities early in the lifecycle. The reality quickly became that it was a finder of all things wrong with the code - security-related or not, true positives and false. There were a lot of false positives. A lot.
The basics of SAST
SAST is a white box testing technique that meticulously examines an application's source code, bytecode, or binary code. Unlike dynamic testing, which requires actually executing the application, SAST is performed without running the app. This key characteristic allows it to be integrated into the SDLC and to enhance security from the earliest stages of development.
The primary function of SAST is to analyze application code for potential security vulnerabilities that could be exploited by attackers. This analysis helps identify a range of security issues. To name a few of the common findings:
SQL Injections: These occur when an attacker inserts malicious SQL statements into input fields, tricking the database into revealing or modifying data it shouldn't.
Buffer Overflows: This involves writing past the bounds of a buffer and overwriting adjacent memory of an application, potentially allowing attackers to execute arbitrary code.
Cross-site Scripting (XSS): XSS attacks enable attackers to inject client-side scripts into web pages viewed by other users, affecting user interactions with the application.
Secrets in code: Occasionally, passwords, API keys, encryption keys, and other secrets make their way into source code, either by accident or in code never intended for a production environment.
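To make the first of these concrete, here is a minimal sketch of the kind of pattern a SAST tool flags versus the fix it recommends, using Python's standard `sqlite3` module. The table, function names, and data are all illustrative, not from any real scanner's test suite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Classic SAST finding (CWE-89): user input concatenated into SQL.
    # Passing name = "x' OR '1'='1" returns every row in the table.
    return conn.execute(
        "SELECT role FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats name as data, never as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()
```

A scanner looks for exactly this structural difference: attacker-influenced strings flowing into a query string versus being passed as bound parameters.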
You can see the promise (and maybe oversell) of SAST. Employing SAST tools can offer significant benefits when implemented properly, with the right processes, right tooling, and right tuning. A lot has to go well. If it does, SAST can minimize the chances of a vulnerability making it out into a production environment near you. If it doesn’t go well, it can be a hot mess (technical term).
So why doesn’t this work?
Bottom line: False positives and speed of scanning.
“Static application security testing is not known for its blinding speed. In fact, as security tools go, SAST typically gets a bad rap. I’ve done my fair share of complaining about SAST tools, their speed, and their abundance of false positives. I’ve seen others liken it to the shotgun approach. Not very precise, but effective if you are looking for results. This can become exacerbated if the organization has not taken the time to properly tune the SAST tool. This can be a recipe for disaster. It produces a lot of results that then need to be triaged and processed. What’s more, it adds a lot of time to the build process, further upsetting the development team.” The Application Security Program Handbook – Derek Fisher
SAST has not aged well from its early days, in my opinion (see above). I've been asked before what tools belong in a modern AppSec program. SAST is usually far down the list for me, and not one I would look to integrate into an early AppSec program. One of the main reasons is the issue of false positives: instances where the tool incorrectly identifies a piece of code as vulnerable. These inaccuracies can lead to unnecessary delays in development timelines and may cause frustration among developers, leading to a loss of confidence in the tool. So, balancing the sensitivity of SAST tools to minimize false positives while maintaining thorough vulnerability detection is crucial for effective implementation.
The other challenge is the time scans take to complete. Given how SAST works, it needs to read all the code, make determinations about entry points, whether a variable or parameter has been sanitized, whether that random string is a secret, and so on. This takes time, and how much depends on the size of the code base. Some SAST tools allow incremental scans, meaning the tool scans only the recently changed and checked-in code rather than the complete project.
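The incremental idea is simple to sketch. Assuming (hypothetically) we track a content hash per file between scans, the expensive per-file analysis only has to run on the delta:

```python
def changed_files(previous_hashes, current_hashes):
    """Return files whose content hash changed, plus any new files."""
    return [
        path for path, digest in current_hashes.items()
        if previous_hashes.get(path) != digest
    ]

def incremental_scan(previous_hashes, current_hashes, scan_file):
    # scan_file is the (expensive) per-file analysis; we only pay its
    # cost for the changed files, not the entire code base.
    return {
        path: scan_file(path)
        for path in changed_files(previous_hashes, current_hashes)
    }
```

The trade-off, which real tools wrestle with, is that a change in one file can alter data flows through unchanged files, so incremental results are faster but can be less complete than a full scan.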
One good plug here for OWASP is the OWASP Benchmark. The OWASP Benchmark Project is a Java-based open-source test suite designed to evaluate SAST and other AST (application security testing) types of tools. It measures the effectiveness of these tools in terms of accuracy, coverage, and speed by using a fully operational web application with thousands of test cases mapped to specific CWEs. Each vulnerability included is exploitable, providing a fair and rigorous testing environment. The project also includes scorecard generators for various AST tools. In other words, if you're looking to kick the tires on a SAST tool, the test cases provided with Benchmark are a good place to start.
Times are changing
We know what we don’t want, so what do we in fact want? Well, low false positives are the holy grail of most ASTs. Why integrate a tool that brings with it an immense amount of noise? Historically, weeding out false positives has largely been a manual effort of examination and elimination, but one capability is becoming more prominent in many ASTs: reachability. Reachability analysis looks at whether a vulnerability in the codebase is actually accessible and exploitable in the operational environment. This targeted approach allows organizations to focus on the threats that can genuinely impact the application, rather than spending resources on vulnerabilities that, while present, do not pose an immediate risk.

More importantly, this cuts down on one of the most common pieces of feedback from developers on vulnerabilities found in code: that the code flagged as vulnerable is never actually executed (i.e., not reachable). Determining whether a piece of code is executed takes developer time, and the security team often requires convincing. Automating this through reachability analysis is a significant time savings for both the development and security teams.
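One common way to approximate reachability is a traversal of the application's call graph from its entry points: a finding in a function no entry point can ever reach is strong evidence it is noise. The sketch below is my own simplification, with a hand-built call graph and made-up finding records, not how any particular vendor implements it:

```python
from collections import deque

def reachable(call_graph, entry_points):
    """Breadth-first search: which functions can actually execute?"""
    seen, queue = set(entry_points), deque(entry_points)
    while queue:
        fn = queue.popleft()
        for callee in call_graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

def triage(findings, call_graph, entry_points):
    """Split findings into actionable (reachable) and suppressed noise."""
    live = reachable(call_graph, entry_points)
    actionable = [f for f in findings if f["function"] in live]
    suppressed = [f for f in findings if f["function"] not in live]
    return actionable, suppressed
```

A finding in `legacy_export`, a function nothing calls from `main`, would land in the suppressed bucket automatically, which is exactly the "this code never runs" conversation the analysis is replacing.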
This advancement in AppSec, identifying exploitable external reachability, is a crucial enhancement to the precision of SAST methodologies. It prioritizes vulnerabilities that are reachable from the internet, which are of particular concern due to their accessibility to external attackers. Here’s how this process typically works:
External Reachability Analysis: focuses on identifying which parts of the application can be accessed from outside the organization's internal network. By determining which vulnerabilities are exposed to the internet, security teams can prioritize these for remediation since they pose a direct risk of being exploited by external threats.
Source-to-Sink Flow Analysis: maps out the paths that data takes through the application, from its entry point (source) to its execution point (sink). By analyzing these paths, SAST tools can detect where sensitive data might be exposed to vulnerabilities that could potentially be exploited from external sources. This flow analysis helps in understanding the context and potential impact of each vulnerability.
Application Architecture Context: incorporates knowledge of the application’s architecture into the security analysis process. Understanding how different components of the application interact, where your crown jewels are, and how they are exposed to the external world, allows for a more targeted approach in vulnerability management.
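The source-to-sink flow analysis in the second point can be illustrated with a toy taint tracker. Real SAST tools do this over an abstract syntax tree or data-flow graph; this sketch only models assignment, sanitization, and sink calls, and the `SOURCES`/`SINKS` names are placeholders I chose for illustration:

```python
SOURCES = {"request.get"}   # where attacker-controlled data enters
SINKS = {"db.execute"}      # where tainted data becomes dangerous

def analyze(statements):
    """Walk (op, target, value) statements, propagating taint from
    sources through assignments and flagging tainted data at sinks."""
    tainted = set()
    findings = []
    for i, (op, target, value) in enumerate(statements):
        if op == "assign":
            if value in SOURCES or value in tainted:
                tainted.add(target)     # taint flows into the target
            else:
                tainted.discard(target) # overwritten with clean data
        elif op == "sanitize":
            tainted.discard(target)     # e.g. escaped or parameterized
        elif op == "call" and target in SINKS and value in tainted:
            findings.append((i, target, value))
    return findings
```

The point of the exercise: the same sink call is a finding or a non-finding depending entirely on the path the data took to get there, which is the context a plain pattern match lacks.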
Note: Some newer SAST offerings are including AI capabilities with code reachability further enhancing the ability to identify what is exploitable.
By prioritizing vulnerabilities that are externally exploitable, security teams can allocate resources and effort more effectively. This enhanced approach to SAST not only improves the detection of vulnerabilities but also ensures that remediation efforts are focused on the most critical weaknesses, optimizing the security resources and efforts within an organization.
Does SAST still work in modern SDLCs?
Given that SAST can be slow and produce a fair number of false positives, it can be easy to question its applicability in the modern SDLC. Heck, I do it often. However, we are moving beyond the olden days of slow and clumsy SAST tools that scan the entire code base for each build of the application.
Ideally, to be effective, we want to get SAST as close to the developer as possible. Shifting left is a powerful, primarily preventative strategy: it stops new security issues from advancing into production stages by scanning the code before an artifact is created. There are a few places where SAST can fit in the modern SDLC:
In the integrated development environment (IDE): SAST in the local developer environment involves integrating security checks directly into the development tools that developers use every day, such as IDEs. This integration allows for a seamless and proactive approach to identifying potential security vulnerabilities as the code is being written, providing immediate feedback to developers.
At the pull request (PR) stage: PRs serve as critical junctures for manual code review before merging into a branch, ensuring early vulnerability detection. Additionally, PRs offer full application context, unlike local IDEs, enabling analysis to assess how changes might impact system security comprehensively. While this is a manual interaction, results from the local SAST scan should be included in the PR for the reviewer’s reference.
In the continuous integration (CI) pipeline: optimized scanning practices are essential for balancing security with development efficiency. Focusing on newly introduced code in PRs and during CI processes, rather than all historical issues, reduces scan times and provides developers with relevant, actionable feedback. This approach prevents security assessments from slowing down CI pipelines, fostering faster and more agile development cycles.
Another lesson we’ve learned over the years with SAST is that policies and processes need to be adapted to the risk the organization actually has. This requires tuning the SAST tool to be more or less strict depending on the stage of development.
The strictness should be a setting that determines what risk level is allowed to proceed. For instance, perhaps in the IDE all findings with CVSS scores of 4 and above are flagged, meaning more findings are surfaced. Later in the development lifecycle, only findings with a CVSS of 8 and above block the build. Using this methodology, vulnerabilities found earlier in the lifecycle are less critical but plentiful, while the ones found later are fewer but more critical. It’s not a perfect science, but it is a good place to start.
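This stage-dependent strictness amounts to a small policy table. The sketch below uses the thresholds from the example above (4 in the IDE, 8 in CI); the intermediate "pr" value and the finding records are my own illustrative assumptions, not a prescribed standard:

```python
# Hypothetical blocking thresholds, tightening toward production.
BLOCKING_THRESHOLDS = {
    "ide": 4.0,  # noisy: surface most findings while code is cheap to fix
    "pr": 6.0,   # illustrative middle step, not from the text
    "ci": 8.0,   # strict: only criticals block a production-bound build
}

def gate(findings, stage):
    """Return the findings that should block at this SDLC stage."""
    threshold = BLOCKING_THRESHOLDS[stage]
    return [f for f in findings if f["cvss"] >= threshold]
```

The same scan output flows through every stage; only the gate changes, which is what keeps a medium-severity false positive from ever blocking a production build.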
Adapting the strictness really depends on the organization and how mature their program is or what their risk appetite may be. Others may argue that there is a better way to tune the strictness, but in my opinion the organization would likely want more noise in the development environment than later in the process. The reason I say this is because you do not want to potentially block a build that is destined for a production environment over a false positive. You would much rather focus on more impactful ones closer to production.
The other side of this coin is tuning for less noise in the IDE, allowing the developer to move faster while writing code. Still, my opinion stands: deferring a potentially broader set of vulnerabilities to later in the SDLC is counterproductive.
However, this is where reachability, and a SAST tool that removes or limits false positives, is critical to balancing speed and risk. Having confidence that the results of a SAST scan coming out of the development environment highlight only the findings that matter allows the developer to fix just the ones that are impactful to the organization. It also reduces the possibility of discovering build-blocking vulnerabilities later in the process. This may leave quite a few vulnerabilities on the table, but they should be provably false or provably unreachable by the SAST tool.
Is SAST really dead?
No, of course not. SAST remains viable in the SDLC for several key reasons, despite its challenges. SAST's ability to analyze code directly, without requiring application execution, is a powerful preventative measure, allowing developers to identify vulnerabilities early on. This shift-left approach is fundamental in stopping security issues from advancing into later stages of development, ultimately preventing them from reaching production.
SAST tools offer flexibility in how they can be integrated into the SDLC, supporting different stages of the development process. In the local development environment, SAST tools integrated into IDEs allow for immediate feedback on security issues. This provides an educational component, reinforcing secure coding practices and preventing vulnerabilities from being introduced in the first place. In other words, building secure muscle memory.
In the CI/CD pipeline, SAST tools can be incorporated to offer security checks at key junctures of the build process. Here the tooling can be tuned to the organization’s risk appetite and used to block potentially high-risk vulnerabilities from making it into production.
While these are fairly standard SAST capabilities, our focus needs to be on the ability to determine whether the vulnerabilities are actually reachable. This greatly reduces the number of false positives and allows teams to focus on what is actually exploitable. With reachability being offered by more and more AST tools, we’re approaching a brave new world where false positives could (conceivably) be something that only us old timers talk about.
Before you go!
If you found this valuable, please consider subscribing or sharing.