
Break Glass: Secure Cloud Log Data Access

Amazon S3: Using native AWS services to securely and compliantly access log data.

May 10, 2021 · Karol Havrillay

Protecting log data reliably with Amazon S3

Gathering and storing log data from virtual machines and other sources in the Deutsche Telekom AWS cloud environment is not only a best practice but also a strict security requirement. Once we have the data in place, we need to tackle one more requirement – how can we access the data in a secure and compliant manner?

Log files: Situation and Requirements


The Logging and Monitoring Team within Deutsche Telekom Cloud Security is responsible for the secure collection and storage of log data from EC2 instances across the cloud environment, such as syslog logs and Windows system logs.

Such data is subject to strict regulations, so accessing it is generally prohibited.

When read access is required, e.g. in the case of an audit or an investigation, a formal request must be submitted to the Logging and Monitoring Team. The team in turn initiates the so-called “Break Glass” process to grant access to the data. Furthermore, a number of internal and external stakeholders must be notified of each such occurrence. The Logging and Monitoring Team also wanted a user-friendly interface that would allow them to easily open and close access to the data.

The procedure is formally called “Break Glass.” In general, the term refers to obtaining privileged access to data: to bypass the normal access restrictions, a defined process must be initiated, and the subsequent actions are then subject to various alerting and control mechanisms. In this specific case, the relevant stakeholders, such as the Works Council, must be notified by email every time the process is triggered.

Amazon S3: Original Configuration

The S3 bucket containing the sensitive data was configured according to best practices and requirements: versioning, encryption, WORM (Write Once Read Many) object locking in compliance mode, and so on. An S3 bucket policy attached to the bucket specified who may upload the data and under which conditions, limiting uploads to principals of the DTIT AWS organization. However, the policy did not limit read access to the data in any way. Technically, anyone with enough permissions in the so-called “logging” AWS account could access the data without anyone noticing.
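As an illustration, a minimal sketch of such an upload-only bucket policy; the bucket name and organization ID are placeholders, not the real values:

```python
import json

# Sketch of the original bucket policy: uploads are limited to principals
# of one AWS organization via the aws:PrincipalOrgID condition key.
# Bucket name and organization ID below are illustrative placeholders.
ORIGINAL_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOrgUploads",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-logging-bucket/*",
            "Condition": {
                "StringEquals": {"aws:PrincipalOrgID": "o-example123"}
            },
        }
    ],
}

print(json.dumps(ORIGINAL_POLICY, indent=2))
```

Note that nothing in this policy constrains `s3:GetObject`, which is exactly the gap the solution closes.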

Solution Proposal

[Figure: high-level architecture diagram of the proposed solution]

The solution designed by T-Systems consists of multiple layers:

Improving security of the S3 bucket:
The bucket policy was extended with a statement that denies any s3:GetObject action unless it is requested by a specific IAM role. In conjunction with a Service Control Policy (SCP), this prevents anyone in the logging account from accessing the data, even someone with administrative access.
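The deny statement could look roughly like the following sketch; the role ARN, account ID, and bucket name are hypothetical placeholders:

```python
# Hypothetical sketch of the added deny statement: s3:GetObject is blocked
# for every principal except one dedicated break-glass read role.
CLOSED_ACCESS_STATEMENT = {
    "Sid": "ClosedAccess",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::example-logging-bucket/*",
    "Condition": {
        # Matches every caller whose ARN is NOT the break-glass role.
        "ArnNotEquals": {
            "aws:PrincipalArn": "arn:aws:iam::111122223333:role/break-glass-read"
        }
    },
}
```

An explicit Deny always wins over any Allow a principal may otherwise have, which is why this works even against administrative access within the account.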

User interface layer:
Two Service Catalog products were provisioned in the so-called “administrative” account; they execute the break glass process of opening or closing access to the data. The products are backed by a custom Lambda resource which modifies the aforementioned S3 bucket policy via a trust relationship with a role in the logging account. This gives the team’s staff an easy way to execute data access requests, including a historical overview of past executions.
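A simplified sketch of the toggle logic such a Lambda-backed resource might implement; all names and ARNs are illustrative, and the real function would additionally assume the cross-account role and call `put_bucket_policy`:

```python
import copy

# Placeholder identifiers -- not the real role or bucket.
ROLE_ARN = "arn:aws:iam::111122223333:role/break-glass-read"
BUCKET_ARN = "arn:aws:s3:::example-logging-bucket"

# The first statement of the bucket policy is the one the Lambda manages:
# "ClosedAccess" denies reads to everyone but the break-glass role,
# "OpenAccess" explicitly allows reads for that role.
CLOSED = {
    "Sid": "ClosedAccess",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": f"{BUCKET_ARN}/*",
    "Condition": {"ArnNotEquals": {"aws:PrincipalArn": ROLE_ARN}},
}
OPEN = {
    "Sid": "OpenAccess",
    "Effect": "Allow",
    "Principal": {"AWS": ROLE_ARN},
    "Action": "s3:GetObject",
    "Resource": f"{BUCKET_ARN}/*",
}


def build_bucket_policy(open_access: bool, other_statements: list) -> dict:
    """Return the full bucket policy: the managed first statement,
    followed by the untouched upload statements."""
    first = OPEN if open_access else CLOSED
    return {
        "Version": "2012-10-17",
        "Statement": [copy.deepcopy(first)] + list(other_statements),
    }
```

Keeping the managed statement in the fixed first position is what later lets the reactive layer identify the current state from its SID alone.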

Reactive layer:
A CloudWatch Events rule was provisioned in the logging account; it is triggered by the PutBucketPolicy and PutBucketAcl API calls. Once triggered, it inspects the bucket policy in the API call and parses the SID of the first statement, which is managed by the aforementioned Lambda function and is either “OpenAccess” or “ClosedAccess.” Finally, the rule sends a notification to an SNS topic through an input transformer, which produces a human-readable email describing the policy currently applied to the S3 bucket. All relevant internal and external stakeholders, such as the Works Council and the Privacy and Security Manager, are subscribed to the SNS topic and are thus informed about any permission change on the bucket.
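The parsing step can be sketched as follows; the event shape mirrors what CloudWatch Events delivers for a CloudTrail PutBucketPolicy call, and all names are placeholders:

```python
import json


def extract_access_state(event: dict) -> str:
    """Pull the SID of the first bucket-policy statement out of a
    PutBucketPolicy CloudTrail event. With the managed statement kept
    first, the result is either "OpenAccess" or "ClosedAccess"."""
    policy = event["detail"]["requestParameters"]["bucketPolicy"]
    if isinstance(policy, str):  # the policy may arrive JSON-encoded
        policy = json.loads(policy)
    return policy["Statement"][0]["Sid"]


# Illustrative event with placeholder bucket name.
sample_event = {
    "detail": {
        "eventName": "PutBucketPolicy",
        "requestParameters": {
            "bucketName": "example-logging-bucket",
            "bucketPolicy": {
                "Version": "2012-10-17",
                "Statement": [{"Sid": "OpenAccess", "Effect": "Allow"}],
            },
        },
    }
}

print(extract_access_state(sample_event))  # prints "OpenAccess"
```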

Infrastructure layer:
All resources in the logging and administrative accounts are deployed as Infrastructure as Code in the form of CloudFormation stacks. All resources follow the principle of least privilege, both in their permissions and, in the case of IAM roles, in their trust policies.

Preventive layer:
A custom SCP was created and attached to the logging account to prevent tampering with the deployed resources, which would break the break glass process. Pun intended.
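Such an SCP might look roughly like the sketch below. Note that SCPs carry no Principal element, so the exemption for the managing role is expressed as a condition; all names are placeholders:

```python
# Hypothetical SCP sketch: deny changes to the bucket policy and ACL
# unless they are made by the dedicated policy-managing role, so that
# nobody in the logging account can quietly disable the break glass setup.
EXAMPLE_SCP = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ProtectBreakGlass",
            "Effect": "Deny",
            "Action": [
                "s3:PutBucketPolicy",
                "s3:PutBucketAcl",
                "s3:DeleteBucketPolicy",
            ],
            "Resource": "arn:aws:s3:::example-logging-bucket",
            "Condition": {
                "ArnNotLike": {
                    "aws:PrincipalArn": "arn:aws:iam::*:role/break-glass-policy-manager"
                }
            },
        }
    ],
}
```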

Monitoring layer:
An additional requirement was raised during implementation by the Privacy and Security Manager assigned to the project: an extra control to monitor changes to the S3 bucket policy, checking that only one of the two predefined “OpenAccess” or “ClosedAccess” policies is attached to the bucket. If the policy in a PutBucketPolicy API call matches neither predefined policy, the default “ClosedAccess” policy is applied. This control is still in progress and out of scope of the T-Systems engagement; it is mentioned here only to give the reader a complete picture of the solution.
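The intended check reduces to a simple comparison against the two predefined documents; the policies shown here are stand-in placeholders:

```python
# Sketch of the planned remediation check: any policy that matches neither
# predefined document is replaced by the closed (safe) default.
# Both policy documents below are illustrative placeholders.
OPEN_POLICY = {"Version": "2012-10-17", "Statement": [{"Sid": "OpenAccess"}]}
CLOSED_POLICY = {"Version": "2012-10-17", "Statement": [{"Sid": "ClosedAccess"}]}


def remediate(applied_policy: dict) -> dict:
    """Return the policy that should end up on the bucket."""
    if applied_policy in (OPEN_POLICY, CLOSED_POLICY):
        return applied_policy  # a predefined policy: nothing to do
    return CLOSED_POLICY       # anything else: fall back to closed
```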

Conclusion

The proposed solution is secure (approved by the assigned Privacy and Security Manager), user-friendly (a non-technical user can operate the break glass procedure), lightweight (it uses native AWS services), and extensible and scalable: since it is built with parameterized Infrastructure as Code, it can be extended to cover additional S3 buckets or deployed to additional AWS accounts.

About the author
Karol Havrillay

Lead Cloud Architect, T-Systems Public Cloud
