The Traditional Approach
The traditional approach to web application testing has been to test only the application itself, and optionally to carry out external network testing against the server that it runs on. Although this approach can provide a degree of assurance, it misses many of the key areas that need to be considered for modern cloud-based applications.
One of the most fundamental issues with cloud hosted environments is that they are frequently built and configured by the developers themselves, rather than by a dedicated infrastructure team. While the DevOps-style approach can have significant benefits in terms of application delivery, it can also mean that environments are built by people who don’t have much experience building and configuring secure infrastructure. Unless they are supported with appropriate training and security baselines to use, or given a pre-hardened environment within which to work, this frequently results in significant security weaknesses being introduced into the cloud environment.
If the testing is being carried out purely for compliance reasons, then external network and web application testing would normally be enough to tick the box. However, if you want to go beyond this to obtain a greater level of assurance that the application is actually secure, there are a range of other areas that need to be considered.
A More Modern Approach to Cloud Applications
As well as gaining an understanding of what functionality the application has and how it works, it is also important for the testers to understand the environment and the context around the application. This should include:
- A high-level overview of the environment.
- A more technical deep dive into the specific services used.
- A review of the inbound and outbound connectivity of the environment.
- A discussion of the key risks and concerns around the platform.
The more information that can be shared at this stage, the easier it will be to put together an accurate scope of work, and to ensure that the testing that’s carried out is aligned with the requirements. If there is an accreditor involved in the process, it’s vital to get them involved in this early stage, so that they can sign off on the scope of work.
Testing of the web application itself is a well-understood process, and as always, it's better to use a whitebox approach. This means giving the tester full access to the application, multiple sets of credentials, documentation, and ideally a copy of the application source code.
Although a full code review is not usually appropriate (or cost effective), access to the source code allows targeted reviews of key functionality (such as the login, session management and cryptographic functions), and also means that issues can be more easily verified or struck off as false positives.
Unauthenticated testing of any externally exposed services is a typical part of any penetration test, but in a cloud environment this can have some added layers of complexity.
When an Infrastructure-as-a-Service (IaaS) deployment model is used, unauthenticated testing can still provide reasonable value, as there may be virtual machines within the environment that are inconsistently deployed and exposing inappropriate or vulnerable services. However, if a Platform-as-a-Service (PaaS) or Software-as-a-Service (SaaS) approach is used, then unauthenticated testing is much less valuable - because the underlying network services exposed will be configured and hardened by the cloud provider. In many cases it's not possible to restrict network access to these services, and you may also encounter services such as Azure Bastion hosts that are externally (public) facing by design.
Therefore, when commissioning external network testing, it's important to communicate which IP addresses and hostnames are pointing to services that are managed and configured by the client (IaaS), and which ones are managed by the provider (PaaS and SaaS), so that testing and reporting are conducted with the correct context and give accurate recommendations. This information can sometimes be provided directly to the testers, but they can also determine it themselves if they have administrative access to the cloud environment.
The underlying platform that is hosting the web application should be reviewed from a whitebox perspective to ensure that it is securely configured and managed. In some cases this might just be a virtual machine with IIS or Apache running on it, which can be reviewed in the same way as any other server. However, there are a variety of other platforms that might be used, such as:
- Serverless infrastructure (Azure Functions or AWS Lambda)
- PaaS platforms (Azure Web Apps or AWS Elastic Beanstalk)
- Containers (Azure Kubernetes Service or Amazon Elastic Kubernetes Service)
One of the key benefits of these platforms is that they require less configuration to use, which means that there’s less that can be misconfigured. However, they’re still worth reviewing, and usually present a number of security related settings around TLS, HTTP headers and access control.
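As a minimal sketch of what such a review might automate, the check below flags security-related response headers that are missing from a captured HTTP response. The header list and the sample response are illustrative assumptions, not an exhaustive baseline.

```python
# Illustrative check: flag missing security-related response headers on a
# deployed endpoint. The expected-header list is a sketch, not a full baseline.

EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
]

def missing_security_headers(response_headers: dict) -> list:
    """Return the expected headers absent from a response (case-insensitive)."""
    present = {name.lower() for name in response_headers}
    return [h for h in EXPECTED_HEADERS if h.lower() not in present]

# Example: headers captured from a hypothetical PaaS-hosted application.
headers = {
    "Content-Type": "text/html",
    "strict-transport-security": "max-age=31536000",
}
print(missing_security_headers(headers))
# ['Content-Security-Policy', 'X-Content-Type-Options']
```

In practice the headers would be captured from the live service, and the expected list tuned to the organisation's own security baseline.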
Where a containerised approach is used, although the underlying nodes are managed by the cloud provider, the containers that are run on them can still have all of the usual issues (a lack of hardening, missing updates, outdated dependencies, etc), so need to be carefully reviewed.
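Two of the most common container issues mentioned above can be spotted directly from a Dockerfile, as this sketch shows: an unpinned base image, and the absence of a USER directive (meaning the container runs as root). The sample Dockerfile is made up, and a real review would also cover package updates and dependency scanning.

```python
# Illustrative check: spot two common container hardening issues in a
# Dockerfile - an unpinned base image and running as root by default.

def dockerfile_findings(dockerfile: str) -> list:
    findings = []
    lines = [l.strip() for l in dockerfile.splitlines() if l.strip()]
    for line in lines:
        if line.upper().startswith("FROM"):
            image = line.split()[1]
            # No tag, or the mutable :latest tag, means unpredictable builds.
            if ":" not in image or image.endswith(":latest"):
                findings.append(f"unpinned base image: {image}")
    # Without a USER directive the container runs as root by default.
    if not any(l.upper().startswith("USER") for l in lines):
        findings.append("no USER directive - container runs as root")
    return findings

sample = """\
FROM python:latest
COPY app /app
CMD ["python", "/app/main.py"]
"""
print(dockerfile_findings(sample))
# ['unpinned base image: python:latest', 'no USER directive - container runs as root']
```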
Web applications typically rely on a variety of supporting infrastructure, such as:
- DNS servers.
- File storage.
- Load balancers.
- Message queues.
- NoSQL data stores.
- SQL databases.
Although these provide a significantly smaller attack surface than building the equivalent services on virtual machines, they are a common source of weaknesses in the environment. File storage in particular needs to be carefully reviewed, as a simple misconfiguration can result in the Storage Accounts or S3 Buckets being exposed to the Internet.
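The kind of storage misconfiguration described above can often be found by inspecting the bucket policy itself. The sketch below flags AWS S3 policy statements that allow access to any principal; the policy document is a fabricated example for illustration only.

```python
import json

# Illustrative check: flag S3 bucket policy statements that grant access
# to any principal. The policy below is a made-up example.

def public_statements(policy_json: str) -> list:
    policy = json.loads(policy_json)
    flagged = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        is_public = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        if stmt.get("Effect") == "Allow" and is_public:
            flagged.append(stmt.get("Sid", "<no Sid>"))
    return flagged

policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "PublicRead", "Effect": "Allow", "Principal": "*",
         "Action": "s3:GetObject", "Resource": "arn:aws:s3:::example-bucket/*"},
    ],
})
print(public_statements(policy))  # ['PublicRead']
```

An equivalent review on Azure would look at the Storage Account's public access settings rather than a JSON policy, but the principle is the same.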
Unless the environment is very simple, it probably has multiple virtual networks (Azure vNets or AWS VPCs). These networks should be reviewed to ensure that they enforce appropriate segregation between different parts of the system using Network Security Groups (NSGs), and also between different environments such as development, test and production.
The externally facing NSGs or firewalls should also be carefully reviewed to ensure that only appropriate services are externally facing - much like you would in a normal firewall review. The additional complexity comes from the fact that rules can be implemented at multiple levels (virtual machine, virtual network, external IP, etc). There can also be different resources that implement these restrictions, such as Azure supporting both Network Security Groups and Azure firewall instances.
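A core part of that review can be sketched as a simple rule scan: flag inbound allow rules that expose management ports to any source. The rule dictionaries below loosely mirror an exported NSG rule set; the field names are assumptions for illustration, not a provider's actual export format.

```python
# Illustrative check: flag inbound rules that expose management ports to
# the Internet. Field names are assumed for illustration.

MGMT_PORTS = {"22", "3389"}
ANY_SOURCE = {"*", "0.0.0.0/0", "Internet"}

def risky_rules(rules: list) -> list:
    """Return names of inbound allow rules opening management ports to any source."""
    return [
        r["name"] for r in rules
        if r["direction"] == "Inbound"
        and r["access"] == "Allow"
        and r["source"] in ANY_SOURCE
        and r["port"] in MGMT_PORTS
    ]

rules = [
    {"name": "allow-https", "direction": "Inbound", "access": "Allow",
     "source": "*", "port": "443"},
    {"name": "allow-rdp", "direction": "Inbound", "access": "Allow",
     "source": "0.0.0.0/0", "port": "3389"},
]
print(risky_rules(rules))  # ['allow-rdp']
```

Because rules can exist at multiple levels, a real review would merge the effective rule set from all of them before running checks like this.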
There may also be privileged access into the environment for developers, in the form of jumpboxes, services exposed to specific IP addresses, or links to the corporate network (Azure ExpressRoute or AWS Direct Connect). These can provide both a route for an attacker to gain access to the cloud environment from the corporate network, and potentially a route to compromise the internal network from the cloud environment. Which of these is a greater concern will depend on the specifics of the environment, but both should be taken into consideration.
Many applications will have various components of the build pipeline hosted in the same cloud environment as the application. This environment will commonly include a Continuous Integration / Continuous Deployment (CI/CD) server, as well as other resources such as automatic scanning tools, container repositories, and sometimes even the issue tracker and version control system. There may also be dedicated developer workstations within the environment, which are frequently unmanaged and unhardened.
These components are often considered to be just development systems, and as such are rarely hardened to the same degree as production services. Additionally, some of the applications used are designed to be run in trusted environments, so may not implement many of the security features that would be expected.
They can also have highly privileged access to the rest of the environment, particularly if they include orchestration functionality, or an infrastructure-as-code tool such as Terraform is used. This means that a compromise of the build pipeline may not only compromise the application, but also the entire cloud environment.
When a defence-in-depth approach is used, an attacker should not be able to fully compromise the environment just because they have managed to gain an initial foothold. Assumed breach testing allows you to verify this, by creating a virtual machine inside the environment to simulate a compromised server, and attempting to attack other parts of the environment from there. It can also be used to target other networks that have trusted relationships with the cloud environment, such as the corporate network.
An attacker with an initial foothold can commonly be simulated by creating a new virtual machine within the same environment using the Kali Linux image from the cloud provider’s marketplace or image library, because this will have all of the necessary tools preloaded and requires minimal configuration.
Exactly where the virtual machine should be placed will depend on the specifics of the environment, but the placement should generally reflect which systems are most likely to be compromised. This would typically be externally facing systems such as web servers or jumpboxes.
Cloud hosted systems require many of the same security services as on-premises deployments, such as:
- Central authentication.
- Central log collection.
- Patch management.
- Vulnerability scanning.
In some cases these may be implemented using specific services on the cloud platform (such as Azure Log Analytics or AWS CloudTrail for central logging), but in other cases they will be built on servers within the environment. Assuming that they are in place, they should undergo the usual testing - although they may also be shared with other environments and thus out of scope.
IAM and Privileges
All of the security configuration and hardening that has been applied to the individual resources on the cloud platform is completely irrelevant if an administrative account on the tenant is compromised. As such, it is vital to review the security of the tenant, especially related to:
- Access and API keys.
- Conditional access policies and MFA.
- Users and permissions (IAM).
Depending on the structure of the organisation, these might be managed by a central team, and thus be outside of the scope of the assessment. However, it's important to establish who has responsibility for which areas, as they can easily fall through the cracks if two different teams both think that the other is responsible for them.
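One simple, concrete check in the access-key area is key age: long-lived keys that are never rotated are a recurring finding. The sketch below flags keys older than a rotation threshold; the key records are fabricated, and real data would come from the provider's IAM API.

```python
from datetime import datetime, timedelta

# Illustrative check: flag access keys older than a rotation threshold.
# The key records below are made up for the example.

def stale_keys(keys: list, max_age_days: int = 90,
               now: datetime = None) -> list:
    """Return IDs of keys created before the rotation cutoff."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=max_age_days)
    return [k["id"] for k in keys if k["created"] < cutoff]

keys = [
    {"id": "old-key", "created": datetime(2023, 1, 1)},
    {"id": "new-key", "created": datetime(2024, 6, 1)},
]
print(stale_keys(keys, now=datetime(2024, 7, 1)))  # ['old-key']
```

The 90-day threshold is an assumption; the right value depends on the organisation's own key rotation policy.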
This list is by no means exhaustive, and depending on the specifics of the environment, some areas may not apply. However, it should serve as an example of some of the additional areas that need to be taken into account to carry out a more comprehensive assessment of the security of a cloud hosted application.