DAST Shortcomings – Previously we discussed the challenges of cloud-native application security testing. Such apps run in an ever-changing environment and are built from loosely coupled microservices deployed across disparate environments and layered infrastructures. Thus, finding and prioritizing vulnerabilities in cloud-native apps is almost impossible using a dynamic application security testing (DAST) tool.
DAST tools were developed 20 years ago, when legacy applications 1) existed as a single monolithic block, 2) ran on a single, bare metal server, and 3) were highly static in nature. They provided value, especially in conjunction with manual penetration testing and static code analysis.
But that’s yesterday’s news.
DAST communicates with a target web application through a web frontend to identify potential security vulnerabilities. But it’s blind; it has no information about the inner workings of the application and its infrastructure. Unlike static application security testing (SAST), DAST doesn’t have access to source code.
In layperson's terms, a DAST scanner stands outside a black box, performs an attack action, and observes how the target app responds. Detecting a reaction and correlating it with the originating action isn't always simple, and sometimes isn't possible at all, e.g., when the reaction occurs inside the application or one of its backend components.
Since DAST has no inner visibility, its test creation process is somewhat naive. It relies on guesswork (heuristics) rather than being tailored to the application's business logic. This limitation also affects its ability to assess whether a test is successful: results are based only on the HTTP responses received from the tested application.
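To make this blindness concrete, here is a minimal sketch of a blackbox probe in the spirit of a DAST check, assuming Python with the requests library. The target URL, parameter name, and payload are hypothetical, and real scanners use large payload dictionaries and far richer heuristics:

```python
# Illustrative blackbox probe: send a payload, observe only the HTTP response.
# Target URL, parameter, and payload are hypothetical.
import requests

TARGET = "https://app.example.com/search"   # hypothetical endpoint
PAYLOAD = "' OR '1'='1"                     # classic SQL-injection probe

def probe(url: str, param: str, payload: str) -> None:
    baseline = requests.get(url, params={param: "hello"}, timeout=10)
    attack = requests.get(url, params={param: payload}, timeout=10)

    # All the scanner can see is the HTTP response: status code, headers, body.
    # A failure inside a backend microservice that never surfaces here goes unnoticed.
    if attack.status_code >= 500 or "SQL syntax" in attack.text:
        print(f"Potential injection at {url} via parameter '{param}'")
    elif attack.status_code != baseline.status_code:
        print(f"Anomalous response ({baseline.status_code} -> {attack.status_code})")
    else:
        print("No externally observable reaction")

probe(TARGET, "q", PAYLOAD)
```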
How DAST Works – A DAST tool works in multiple phases. Users must first install it and perform its initial setup. This is usually done on a remote machine outside the environment of the application to be tested.
The next step is to have it crawl the application’s web interfaces and collect as many entry points as possible; these are used later to mount attacks, similar to how hacker tools function. This phase is extremely important, since only discovered entry points and web interfaces are subsequently tested. It’s why DAST users invest substantial time and resources to make sure thorough coverage is achieved. This is usually done by augmenting automated crawling with recorded manual interaction and authentication configuration for wider coverage.
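The crawling phase can be pictured with a short sketch, again assuming Python with requests and BeautifulSoup. The start URL is hypothetical, and a production DAST crawler would also replay recorded logins, execute JavaScript, and enforce scope rules:

```python
# Illustrative crawler: collect in-scope links and form targets as entry points
# for the later testing phase. Start URL is hypothetical.
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl(start_url: str, max_pages: int = 50) -> set[str]:
    seen, queue, entry_points = set(), [start_url], set()
    host = urlparse(start_url).netloc

    while queue and len(seen) < max_pages:
        url = queue.pop()
        if url in seen:
            continue
        seen.add(url)
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        soup = BeautifulSoup(resp.text, "html.parser")

        # Links and form actions become candidate entry points.
        for tag, attr in (("a", "href"), ("form", "action")):
            for element in soup.find_all(tag):
                target = urljoin(url, element.get(attr) or "")
                if urlparse(target).netloc == host:  # stay in scope
                    entry_points.add(target)
                    queue.append(target)
    return entry_points

print(crawl("https://app.example.com/"))  # hypothetical target
```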
Based on its crawling, the tool generates scenarios for its testing phase in the form of HTTP requests against each discovered entry point. A “successful” test is one that causes an unexpected result, e.g., a server crash, an irregular HTTP error, or an anomalous component response. When that occurs, the corresponding HTTP request is marked as a potential vulnerability and reported to the security team. When testing a cloud-native web application, this process must be repeated for each microservice.
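A simplified version of that testing phase might look like the following sketch, again in Python with requests. The payloads, parameter name, and anomaly checks are hypothetical and deliberately crude; the point is that "success" is judged purely from the HTTP response:

```python
# Illustrative testing phase: replay each discovered entry point with a small
# payload list and flag unexpected responses. Payloads and checks are hypothetical.
import requests

PAYLOADS = ["' OR '1'='1", "<script>alert(1)</script>", "../../etc/passwd"]

def test_entry_point(url: str) -> list[dict]:
    findings = []
    for payload in PAYLOADS:
        try:
            resp = requests.get(url, params={"input": payload}, timeout=10)
        except requests.RequestException:
            findings.append({"url": url, "payload": payload, "signal": "no response"})
            continue
        # "Success" here means an unexpected result: a server error or the
        # payload reflected back unescaped.
        if resp.status_code >= 500:
            findings.append({"url": url, "payload": payload,
                             "signal": f"HTTP {resp.status_code}"})
        elif payload in resp.text:
            findings.append({"url": url, "payload": payload,
                             "signal": "payload reflected"})
    return findings

# For a cloud-native app, this loop would have to be repeated per microservice.
for finding in test_entry_point("https://app.example.com/search"):  # hypothetical
    print(finding)
```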
Visibility, Visibility, Visibility – A cloud-native app is like a foreign country you've never visited: you can't expect to find your way around without a map. Cloud-native apps are highly complex, so it's nearly impossible to find and assess their vulnerabilities without an initial mapping.
Ultimately, it comes down to visibility (or the lack thereof), which affects all of the following:
Test creation – DAST tests are usually created without regard to real-world business logic and/or are executed out of order, so they're oblivious to actual application flows. This leads to too many tests on one hand and low testing coverage on the other.
Flow analysis – Cloud-native application vulnerabilities behave more like flows than singular vulnerabilities (think of a line rather than a point); we often refer to them as “multi-service vulnerabilities.” Understanding the flow between microservices and their execution order is mandatory, not only for test creation but also for assessment. When a vulnerability is found, it's important to know whether the affected component is accessible from the internet, as this affects the risk calculation. It's also important to note whether user input is validated or sanitized before it reaches the vulnerable microservice (see the sketch after this list).
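The effect of flow context on risk can be sketched with a toy scoring function. The data model and weights below are entirely hypothetical; they only illustrate that the same finding scores very differently depending on internet exposure and upstream input sanitization:

```python
# Illustrative only: a toy model of contextual risk scoring for a finding in a
# multi-service flow. Fields and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class FlowContext:
    internet_facing: bool   # is the entry service reachable from the internet?
    input_sanitized: bool   # is user input validated/sanitized before this hop?
    base_severity: float    # e.g., a CVSS-like base score for the finding

def contextual_risk(ctx: FlowContext) -> float:
    score = ctx.base_severity
    score *= 1.5 if ctx.internet_facing else 0.5   # exposure raises risk
    score *= 0.4 if ctx.input_sanitized else 1.0   # upstream sanitization lowers it
    return round(min(score, 10.0), 1)

# Same finding, two very different flows:
print(contextual_risk(FlowContext(True, False, 7.5)))   # exposed, unsanitized -> high
print(contextual_risk(FlowContext(False, True, 7.5)))   # internal, sanitized -> low
```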
A major pain point is that DAST tools were originally created for penetration testers, not for application security teams or developers. They have progressed to offer some value for developers, but they still haven't made the full shift to being developer-friendly, and the two technical gaps described above (test creation and flow analysis) remain.
On a conceptual level, DAST is still important for cloud-native application testing. However, existing solutions must evolve to keep pace with modern dev teams and CI/CD processes. When dealing with today’s highly distributed, complex, cloud-native applications, legacy DAST simply isn’t good enough.
In the meantime, I welcome you to see Oxeye's cloud-native application security testing in action.
Eliminate uncertainty from the application security process, and save your development and AppSec teams time.