Given the intricacies of cloud native apps, it's critical for organizations to prioritize security during their build phase
Legacy applications were built as monoliths - single entities that were simple to develop, deploy and test. This approach was the most common model for decades.
But as technology has changed, monolithic applications increasingly fall short. They can be difficult to scale when various modules have conflicting resource requirements, and reliability suffers as well: a bug in any module (e.g., insecure deserialization) can potentially bring down the entire process.
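The insecure-deserialization risk mentioned above is easy to demonstrate. In the deliberately simplified Python sketch below (the environment-variable marker is an illustrative stand-in for a real attack), deserializing untrusted bytes executes attacker-chosen code, so one vulnerable module endangers the whole process:

```python
# Sketch: why insecure deserialization is dangerous. pickle.loads() will
# run code embedded in the payload via __reduce__.
import pickle


class Malicious:
    """Attacker-crafted object: __reduce__ tells pickle what to call on load."""

    def __reduce__(self):
        # Here the payload merely sets a marker via exec(); a real attacker
        # could run os.system, open sockets, exfiltrate secrets, etc.
        return (exec, ("import os; os.environ['DESERIALIZATION_DEMO'] = 'code ran'",))


# Bytes as they might arrive over the network from an untrusted client.
untrusted_bytes = pickle.dumps(Malicious())


def load_blindly(data: bytes):
    """A vulnerable module: deserializes whatever it is given."""
    return pickle.loads(data)  # executes the embedded payload
```

In a monolith, this one flaw compromises the single process that runs everything; splitting the application into services at least contains the blast radius.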
Today we’re experiencing the digital transformation - an epic trend where technology is being infused into every aspect of business operations. It entails looking at each area of the business and assessing how new technologies can enhance and speed its processes.
Microservices are a relatively new development concept in which small, scalable modules can be independently deployed and tested without the risk of downtime. It’s the software architecture model needed by businesses operating in today’s digital transformation world. Google’s “5 Principles for Cloud-Native Architecture” concludes that “almost all cloud architectures are based on… microservices...”
Microservice application advantages include:
- Modules can be developed, deployed, and tested independently
- Individual services scale on their own, even when their resource requirements differ
- A failure in one service is isolated rather than bringing down the entire application
- Updates can be rolled out without application-wide downtime
Hence it's easy to see why shifting software development to the cloud via microservices has become the default for many organizations.
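To make the idea concrete, a microservice can be as small as a single HTTP endpoint that is built, tested, and deployed on its own. The following is a hypothetical sketch using only the Python standard library; the `/health` path and handler names are illustrative assumptions, not a production design:

```python
# Minimal sketch of a microservice: one small, independently deployable
# HTTP endpoint with a single focused responsibility.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A /health endpoint - the kind of narrow contract a single
        # microservice typically exposes to its orchestrator.
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # suppress per-request logging in this sketch


def make_server(port: int = 0) -> HTTPServer:
    """Create (but do not start) the service; port 0 picks a free port."""
    return HTTPServer(("127.0.0.1", port), HealthHandler)
```

Because the service owns nothing beyond this endpoint, it can be versioned, scaled, and redeployed without touching any other part of the system.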
Using microservices will likely increase operating overhead. Significantly more resources could be required due to a greater number of deployments. More time and effort might be needed to create your infrastructure. And all services potentially need clustering for failover and resilience.
Your solution might have dozens of components, becoming increasingly complicated as you add new features. Ultimately it could consist of 20, 30, or more services, each running multiple processes. Here a best practice is to address the added overhead using automation, in turn placing a premium on staff skilled at DevOps and other infrastructure automation methods.
Now consider <a href="https://www.docker.com/resources/what-container" rel="nofollow" target="_blank">containers</a>, where an executable is packaged with all its dependencies and configuration such that it can run self-sufficiently. This sounds a lot like a microservice and is why containers fit microservices so well. A 2020 <a href="https://www.cncf.io/wp-content/uploads/2020/11/CNCF_Survey_Report_2020.pdf" rel="nofollow" target="_blank">Cloud Native Computing Foundation</a> report notes that 92% of surveyed organizations used containers in production - up from 84% the previous year.
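The “executable packaged with all its dependencies and configuration” idea is what a container image definition captures. Below is a hypothetical Dockerfile sketch; the base image, file names, and port are assumptions for illustration:

```dockerfile
# Hypothetical Dockerfile: package one service with everything it needs.
# Pinned base image supplies the runtime.
FROM python:3.12-slim
WORKDIR /app

# Dependencies are baked into the image, so it runs self-sufficiently.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# The service's own code - a single process per container.
COPY service.py .
EXPOSE 8080
CMD ["python", "service.py"]
```

The resulting image runs identically on a laptop, a CI runner, or a production cluster, which is exactly the self-sufficiency microservices need.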
Using containers presents another advantage for microservices architectures. Existing tools make it easy to run, scale, and manage (i.e., orchestrate) containers while abstracting them from the underlying hardware. The foremost of these is Kubernetes (a.k.a. K8s), an open source tool that has essentially become the de facto standard.
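Orchestration with Kubernetes is declarative: you describe the desired state and the control plane maintains it. The following Deployment manifest is a hypothetical sketch - the service name, image registry, replica count, and port are all illustrative assumptions:

```yaml
# Hypothetical Kubernetes Deployment: declare the desired state and let
# the control plane keep three replicas of the container running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-service
spec:
  replicas: 3                 # scale by changing one number
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
        - name: payments
          image: registry.example.com/payments:1.4.2   # hypothetical image
          ports:
            - containerPort: 8080
```

If a container crashes or a node fails, Kubernetes restarts or reschedules replicas to match the declared state - the failover and resilience work that would otherwise fall to your team.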
“Cloud Native Computing” has emerged as a catchall phrase for the various tools and techniques required by software developers to build, deploy, and maintain modern applications on a cloud infrastructure. Cloud native applications present a fundamentally new and exciting approach to designing and building software.
Cloud native app development typically includes marrying cloud platforms, microservices, containers, Kubernetes, immutable infrastructure, declarative APIs, and continuous delivery technology using DevOps and agile methodology. Related technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds.
Adopting a cloud native approach is clearly a smart move for business modernization. Yet it also presents some real concerns. With cloud native, there are many more components interacting with one another behind the scenes to make a given solution function. While this makes for more dynamic applications, it creates an exponentially larger attack surface for nefarious actors to target.
Gartner says, “The shift to DevOps places an emphasis on speed and rapid iterations, making traditional security tools too slow, cumbersome, and reactive to use for managing cloud risk.” It has been common for security to lag behind any new technology, but given the intricacies of cloud native apps, it's critical for organizations to prioritize security during their build phase.
Security teams used to be able to scan code using application security testing (AST) tools in a fairly intuitive manner. But a different approach is required for cloud native applications, where pieces of code 1) are distributed among components built by developer teams, open-source communities, and cloud providers and 2) run in different parts of the infrastructure: Docker and Kubernetes orchestration files, AWS/Google Cloud Platform/Azure infrastructure files, and more. This means every component must be evaluated, along with how the components communicate with one another.
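One way to see why a single scanning approach no longer suffices is to look at what a typical cloud native repository actually contains. The sketch below is a simplified illustration (the file-name patterns and category labels are assumptions, not how any real scanner works): each artifact type requires its own form of analysis.

```python
# Sketch: a cloud native repository mixes application code, container
# definitions, orchestration manifests, and cloud infrastructure files.
# Each category needs a different kind of security analysis.
from pathlib import PurePath


def classify_artifact(path: str) -> str:
    """Map a file to the (simplified) kind of scanning it needs."""
    name = PurePath(path).name.lower()
    if name == "dockerfile" or name.endswith(".dockerfile"):
        return "container image definition"
    if name.endswith((".yaml", ".yml")):
        return "orchestration/config manifest"      # e.g., Kubernetes
    if name.endswith(".tf"):
        return "cloud infrastructure (Terraform)"
    if name.endswith((".py", ".js", ".go", ".java")):
        return "application source code"
    return "other"


# A toy repository layout - four files, four distinct attack surfaces.
repo = ["svc/app.py", "Dockerfile", "deploy/k8s.yaml", "infra/vpc.tf"]
coverage = {f: classify_artifact(f) for f in repo}
```

A legacy AST tool that only understands the “application source code” bucket leaves the other three - and the interactions between them - unexamined.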
Most modern applications are built and deployed using cloud native technologies. But along with the significant number of components used in cloud native apps come a great number of known vulnerabilities and threat vectors. All pose security challenges to existing AST risk assessment tools. New organizational practices, such as DevSecOps and AppSec, emphasize the need to integrate security into every stage of the software development lifecycle (SDLC).
Application security testing helps make apps more resistant to security threats by identifying code weaknesses and vulnerabilities. AST tools assist developers in identifying security issues and enforcing security best practices early on in the development stage - before software moves to production.
That said, AST tools must evolve to keep pace with the increasingly complex nature of cloud native architectures. Cloud native application security testing must take into account the following requirements, all of which are essential:
- Coverage of all the pieces of an application: custom code, open-source dependencies, and cloud provider components
- Analysis of infrastructure artifacts, such as Docker and Kubernetes orchestration files and AWS/Google Cloud Platform/Azure infrastructure files
- Evaluation of how components communicate with one another, not just each component in isolation
- Integration into every stage of the SDLC, fast enough to match the speed and rapid iterations of DevOps
Any cloud native application risk assessment solution that doesn’t include all of the above is doomed to fail, generate many false positives and negatives, and waste your time.
Organizations worldwide are building and deploying cloud native applications, where their architecture differs greatly from yesterday’s monolithic counterparts. What used to be a custom code block installed on a single, bare metal server or a virtual machine has morphed into hundreds of small, independent pieces of code. These are installed on loosely-coupled microservices, executed as orchestrated containers, and deployed in the cloud.
The challenges legacy AST tools face when assessing vulnerabilities are well understood. Cloud native application security testing requires a different paradigm for how vulnerabilities are found, assessed, and resolved. Future analysis will reveal the shortcomings of such legacy solutions when scanning cloud native applications.
Stay tuned for additional posts on this topic, in which we’ll take a deeper dive.