The AWS Network Firewall was introduced in November 2020 as a new AWS managed service, making it easier to deploy advanced network protection rules across your AWS environments.
Learn more and see how we implemented the AWS Network Firewall as an egress filtering system in the networking account. We used Suricata rules to allow only a defined set of domains to be reachable by the backend services across the internet.
The AWS Network Firewall is a managed virtual firewall built into the AWS platform. It protects Amazon Virtual Private Clouds (VPCs) from network threats and scales to meet the needs of growing cloud infrastructures.
It contains a flexible rules engine backed by Suricata, the leading independent open-source threat detection engine. You can also import pre-existing rules from other systems.
The AWS Network Firewall can span multiple Availability Zones (AZs) to ensure high availability.
Large cloud ecosystems consisting of multiple AWS accounts often have a centralized networking account, as numerous AWS whitepapers recommend.
The usual tasks for central networking accounts are:
- providing central egress internet access, for example via AWS NAT Gateways
- providing a central ingress entry point from the internet and the intranet
- terminating intranet connectivity such as AWS Direct Connect or VPN connections
- routing cross-account traffic, for example via AWS Transit Gateway
The connection types provided by the networking account often require additional security measures, like filtering the ingress and egress traffic to allow the necessary traffic only.
All traffic must be routed symmetrically to the AWS Network Firewall endpoint to utilize the AWS Network Firewall. The firewall endpoint is like an AWS PrivateLink VPC interface endpoint; the difference is that you can use it as a route table target. To do so, you must deploy the AWS Network Firewall into a separate subnet of your networking account's VPC. These firewall subnets should not contain any other services. Otherwise, you will be unable to inspect the traffic in them.
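As a minimal sketch of what that routing looks like in code (using boto3; the route table and firewall endpoint IDs below are placeholders, and the endpoint ID can be read from the firewall's sync states):

import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Send all outbound traffic from the Transit Gateway attachment subnet
# to the AWS Network Firewall endpoint, which acts as the route target.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",    # placeholder: TGW attachment subnet's route table
    DestinationCidrBlock="0.0.0.0/0",
    VpcEndpointId="vpce-0123456789abcdef0",  # placeholder: firewall endpoint in the firewall subnet
)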
The AWS best practices for Network Firewall deployment models are a worthwhile read.
The use case described here consists of multiple AWS accounts without direct internet or intranet connectivity. Consequently, the workloads deployed in these accounts need a central networking account. The networking account and the workload accounts are attached to an AWS Transit Gateway, which provides the cross-account connectivity.
To ensure a clean traffic flow, we separated the workload accounts into groups, each attached to its own Transit Gateway route table.
Every workload account that requires egress internet or intranet access is routed to the centralized networking account's VPC.
The architecture for this networking account consists of four subnets:
- a public subnet
- a private subnet
- an intranet subnet
- a firewall subnet
The networking account utilizes the classic concept of public and private subnets. The AWS Internet Gateway is attached to the VPC, and the AWS NAT Gateways are placed in the public subnet.
With this setup, the public subnet offers direct internet access, which all attached workload accounts use via the AWS NAT Gateways placed there.
The central ingress proxy, an AWS Fargate deployment in the network account's private subnet, also uses the public subnet to expose its internet-facing AWS Application Load Balancer.
If you need more information about AWS Fargate and how to use it, refer to the AWS Fargate documentation.
The private subnet contains the AWS Fargate deployment mentioned above. It provides a central proxy system powered by NGINX to allow ingress traffic from the intranet and the internet to the backend applications in the workload accounts. It is never exposed to any externally connected partner via AWS Direct Connect or VPN and is not directly attached to the internet.
The intranet subnet is a copy of the public subnet containing an AWS NAT gateway and the Application Load Balancer. It acts as a second entry point for the network account.
In our use case, the intranet terminates in this subnet. Consequently, we ensured that our CIDR ranges do not overlap with the intranet CIDR ranges. This way, we can avoid NAT entirely within our internal AWS structure.
This subnet handles traffic from and to the solution’s intranet. Its Application Load Balancer also points to the AWS Fargate deployment in the private subnet.
The AWS Network Firewall is the only service deployed in the firewall subnet.
In this scenario, the flow for egress traffic is straightforward: incoming traffic from the workload accounts arrives via the AWS Transit Gateway, which sends it to the firewall subnet. The AWS Network Firewall applies the configured rules and decides whether to drop or accept the traffic.
If it accepts the traffic, the AWS Network Firewall forwards it to the AWS NAT Gateway services in the public subnets, allowing the outgoing connection to pass to the internet.
To define the acceptance criteria the AWS Network Firewall checks traversing traffic against, you can configure multiple rules and organize them into rule groups.
You can differentiate between stateless and stateful rules within the firewall.
Stateless rules allow or drop traffic based on specific protocols or CIDR ranges. The AWS Network Firewall supports the standard stateless five-tuple rule specification for network traffic inspection: protocol, source, source port, destination, and destination port.
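To illustrate, here is a minimal sketch of a stateless rule group created with boto3; the rule group name and CIDR range are assumptions for this example. It passes TCP traffic from the VPC to any destination on port 443:

import boto3

nfw = boto3.client("network-firewall", region_name="eu-central-1")

# A single stateless five-tuple rule: pass TCP traffic from the VPC
# CIDR range (placeholder) to any destination on port 443.
nfw.create_rule_group(
    RuleGroupName="pass-https-outbound",  # placeholder name
    Type="STATELESS",
    Capacity=10,
    RuleGroup={
        "RulesSource": {
            "StatelessRulesAndCustomActions": {
                "StatelessRules": [
                    {
                        "Priority": 1,
                        "RuleDefinition": {
                            "MatchAttributes": {
                                "Protocols": [6],  # TCP
                                "Sources": [{"AddressDefinition": "10.0.0.0/16"}],
                                "SourcePorts": [{"FromPort": 0, "ToPort": 65535}],
                                "Destinations": [{"AddressDefinition": "0.0.0.0/0"}],
                                "DestinationPorts": [{"FromPort": 443, "ToPort": 443}],
                            },
                            "Actions": ["aws:pass"],
                        },
                    }
                ]
            }
        }
    },
)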
Stateful rules, on the other hand, offer far more configuration options. They use the Suricata-compatible intrusion prevention system (IPS) rule specification. Suricata is an open-source network IPS that includes a standard rule-based language for stateful network traffic inspection. At the time of writing, the AWS Network Firewall supports Suricata version 6.0.2.
We used Suricata rulesets to specify a whitelist of domains that the traffic from the workload accounts is allowed to reach.
You can configure the sample ruleset to allow whitelistedpage.com as follows:
pass tls $HOME_NET any -> any any (tls.sni; dotprefix; content:".whitelistedpage.com"; nocase; endswith; msg:"matching TLS allowlisted FQDNs"; priority:1; flow:to_server, established; sid:1; rev:1;)
pass http $HOME_NET any -> any any (http.host; dotprefix; content:".whitelistedpage.com"; endswith; msg:"matching HTTP allowlisted FQDNs"; priority:1; sid:2; rev:2;)
drop http $HOME_NET any -> any any (msg:"not matching any HTTP allowlisted FQDNs"; priority:1; sid:3; rev:1;)
drop tls $HOME_NET any -> any any (msg:"not matching any TLS allowlisted FQDNs"; priority:1; flow:to_server, established; sid:4; rev:1;)
These rules allow HTTP and HTTPS traffic to whitelistedpage.com and its subdomains. Any other HTTP or HTTPS traffic is dropped with appropriate log messages.
Of course, Suricata supports protocols other than HTTP and TLS; please check the Suricata online documentation for details.
It is important to note that there is a variable called $HOME_NET, which by default points to the CIDR range of your VPC. If you also want to pass traffic from other VPCs through the Network Firewall, you need to adjust this variable to fit your network's CIDR ranges.
At the time of writing, this was not yet possible via the AWS console, but you can use the AWS CLI to do it.
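As a sketch of how this could look with boto3 (the AWS CLI exposes the same API), here is the Suricata ruleset from above deployed as a stateful rule group with $HOME_NET widened; the rule group name and CIDR range are placeholders:

import boto3

nfw = boto3.client("network-firewall", region_name="eu-central-1")

# The Suricata ruleset from above, passed verbatim as a rules string.
suricata_rules = (
    'pass tls $HOME_NET any -> any any (tls.sni; dotprefix; '
    'content:".whitelistedpage.com"; nocase; endswith; '
    'msg:"matching TLS allowlisted FQDNs"; priority:1; '
    'flow:to_server, established; sid:1; rev:1;)\n'
    # ... the remaining pass/drop rules from above go here ...
)

nfw.create_rule_group(
    RuleGroupName="egress-domain-allowlist",  # placeholder name
    Type="STATEFUL",
    Capacity=100,
    RuleGroup={
        "RuleVariables": {
            "IPSets": {
                # Widen HOME_NET beyond the firewall VPC's own CIDR range
                # so traffic from the attached workload VPCs also matches.
                "HOME_NET": {"Definition": ["10.0.0.0/8"]}
            }
        },
        "RulesSource": {"RulesString": suricata_rules},
    },
)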
You can easily extend this configuration with other domains you want to whitelist, according to the needs of the backend applications in the workload accounts.
The Suricata rule language also allows many more configurations, which you can find in the Suricata documentation itself.
For a central solution that can block traversing traffic, some form of logging is crucial.
You can configure the AWS Network Firewall for alert logs and flow logs. Valid options for the log destinations are:
- Amazon S3
- Amazon CloudWatch Logs
- Amazon Kinesis Data Firehose
For our use case, we configured CloudWatch Logs to forward the log messages to an attached SIEM solution.
The types of logs that you can utilize are:
- alert logs, which record traffic matching stateful rules with an alert or drop action
- flow logs, which record the network flows passing through the firewall's stateful engine
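A minimal sketch of this logging setup with boto3 (the firewall ARN and the flow log group are placeholders; the alert log group matches the one used later in this article):

import boto3

nfw = boto3.client("network-firewall", region_name="eu-central-1")

# Send alert logs (rule matches) and flow logs to CloudWatch Logs.
nfw.update_logging_configuration(
    FirewallArn="arn:aws:network-firewall:eu-central-1:123456789012:firewall/egress-firewall",  # placeholder
    LoggingConfiguration={
        "LogDestinationConfigs": [
            {
                "LogType": "ALERT",
                "LogDestinationType": "CloudWatchLogs",
                "LogDestination": {"logGroup": "/aws/network-firewall/alert"},
            },
            {
                "LogType": "FLOW",
                "LogDestinationType": "CloudWatchLogs",
                "LogDestination": {"logGroup": "/aws/network-firewall/flow"},  # placeholder
            },
        ]
    },
)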
We implemented the AWS Network Firewall after the initial network setup. We set up the firewall in ‘transparent mode’ between the workload traffic and the internet. Transparent mode means we implemented the firewall without any rules to block traffic.
Instead, we had a single AWS Contributor Insights rule (below) pointing to our AWS CloudWatch log group, which captured all HTTP and HTTPS egress traffic logged to CloudWatch.
{
    "Schema": {
        "Name": "CloudWatchLogRule",
        "Version": 1
    },
    "AggregateOn": "Count",
    "Contribution": {
        "Filters": [],
        "Keys": [
            "$.event.tls.sni"
        ]
    },
    "LogFormat": "JSON",
    "LogGroupNames": [
        "/aws/network-firewall/alert"
    ]
}
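For reference, a Contributor Insights rule like this can be created programmatically with boto3's CloudWatch client; the rule name here is a placeholder:

import boto3
import json

cloudwatch = boto3.client("cloudwatch", region_name="eu-central-1")

# The Contributor Insights rule definition shown above, as a JSON string.
rule_definition = json.dumps({
    "Schema": {"Name": "CloudWatchLogRule", "Version": 1},
    "AggregateOn": "Count",
    "Contribution": {"Filters": [], "Keys": ["$.event.tls.sni"]},
    "LogFormat": "JSON",
    "LogGroupNames": ["/aws/network-firewall/alert"],
})

cloudwatch.put_insight_rule(
    RuleName="egress-tls-sni",  # placeholder name
    RuleState="ENABLED",
    RuleDefinition=rule_definition,
)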
To get the first draft of the allowed domains, we collected the logs for a certain amount of time and used AWS Contributor Insights to create a visualization of the log events.
We used this visualization to verify the list before the solution's go-live and to check whether we had missed any important domains we needed to add.
We discussed the findings with the owners of the workloads that sent the requests through the AWS Network Firewall. This way, we could eliminate almost all the problems that usually occur when disabling public internet access and adding a ruleset that allows whitelisted domains only.
The AWS Network Firewall is an easy-to-setup solution for controlling egress traffic to the internet.
The logs and metrics also support a seamless migration, sparing users from access problems they did not have before the firewall was in place.