Best Practices for Securing Your Backups with AWS

In the past few years, cloud technologies have gone from new and controversial to standard practice. A range of cloud services is now commonplace, from simple data storage to scalable computing infrastructures and versatile on-demand applications. The key promises are high availability, durability, and protection.

With regard to data storage, the cloud is an ideal fit for the “3-2-1” protection approach, since the technology automatically replicates data across multiple data centers in different geographical regions. Cloud platforms are designed to deliver extremely high levels of durability and availability. You do not have to worry about a disaster striking your area and compromising the local data center; copies of your data are spread around the globe.

While popular SaaS products continue to spread, enterprises are seeking more custom-tailored applications on infrastructure-as-a-service (IaaS) platforms like Amazon Web Services (AWS), with data security as a key concern.

Cyber threats are evolving rapidly, but the majority of security incidents in the cloud occur through the fault of the customer, not the cloud provider. With the right approach, any enterprise can apply security best practices to its AWS cloud backups and reduce the potential risks.

AWS Shared Responsibility Model

Moving storage or migrating entire infrastructures to the AWS cloud does make the cloud provider responsible for part of the security, but many of the threats remain the responsibility of the customers themselves. As with most cloud providers, responsibility for data security is shared between AWS and the cloud customer. As the cloud provider, Amazon assumes responsibility for the security of the AWS infrastructure, which underpins the protection of customers’ critical data and applications. AWS also detects instances of fraud and abuse and notifies its customers of such incidents.

Meanwhile, the customer bears responsibility for securely configuring their infrastructure in AWS. They must make sure that access to sensitive data from inside or outside the company is properly restricted, and that they comply with the recommended data protection policies.

AWS Backup Storage Options

AWS offers a broad range of storage solutions tailored to customers’ various needs. Among them, Amazon Simple Storage Service (Amazon S3) is the most popular cloud storage platform. S3 is designed to store and retrieve data from any source – for instance, web or mobile applications, websites, or IoT (Internet of Things) sensors.

Where security and compliance are concerned, S3 provides powerful capabilities for meeting even the strictest regulatory requirements. S3 also offers a simple and convenient way to implement the “3-2-1” approach to protecting your infrastructure’s data.

Another option, Amazon Elastic Block Store (Amazon EBS), is intended for organizations considering migrating their infrastructures to the cloud. EBS provides persistent block storage volumes designed for use with Amazon Elastic Compute Cloud (Amazon EC2) instances. EBS volumes are automatically replicated within their Availability Zone. This technology protects your data from component failures and provides extremely high availability and durability. EBS volumes deliver consistent, low-latency performance for your workloads and let you scale your infrastructure up (or down) within minutes.

Securing Backups in AWS – Best Practices

While AWS offers a wide variety of services, the primary ones within IaaS are Elastic Compute Cloud (EC2), Amazon Virtual Private Cloud (VPC), and Amazon S3. As mentioned, the Shared Responsibility Model assigns the customer full responsibility for configuring the security controls. To keep your data in AWS intact and protected, follow these best practices across five key areas:

  • Security Monitoring
  • Secure Authentication
  • Secure Configuration
  • Inactive Entities
  • Access Restrictions

Security Monitoring

1. Enabling CloudTrail

The CloudTrail service records API activity for all AWS services, including global services that are not region-specific, such as IAM and CloudFront.

2. Using CloudTrail log file validation

This feature serves as an additional layer of protection for the integrity of the log files. With log file validation turned on, any changes made to the log file after delivery into the S3 bucket become traceable.

3. Enabling CloudTrail multi-region logging

CloudTrail provides AWS API call history, which allows security analysts to track environment changes, audit compliance, investigate incidents, and make sure that security best practices are followed. By enabling CloudTrail in all regions, organizations can detect unexpected or suspicious activity in otherwise unused regions.
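
As a rough illustration of items 1–3 above, the following boto3 sketch creates a multi-region trail with log file validation and then starts logging. The trail and bucket names are placeholders, and the S3 bucket is assumed to already exist with a bucket policy that allows CloudTrail to write to it:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Create a trail that covers all regions and signs its log files,
# so any later tampering with delivered logs can be detected.
cloudtrail.create_trail(
    Name="org-audit-trail",                # placeholder trail name
    S3BucketName="org-cloudtrail-logs",    # pre-existing bucket with a CloudTrail bucket policy
    IsMultiRegionTrail=True,               # capture API activity in every region
    EnableLogFileValidation=True,          # produce digest files for integrity checks
)

# A trail does not record anything until logging is started.
cloudtrail.start_logging(Name="org-audit-trail")
```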

4. Integrating CloudTrail service with CloudWatch

The CloudWatch component offers continuous monitoring of log files from EC2 instances, CloudTrail, and other sources. CloudWatch can also collect and track metrics to help you immediately detect threats. This integration facilitates real-time as well as historic activity logging in relation to user, API, resource, and IP address. You can set up alarms and notifications for abnormal or suspicious account activity.
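
A hedged sketch of this integration, assuming the trail name from the earlier example and a pre-created CloudWatch Logs group and delivery role; all ARNs are placeholders, and the role must allow CloudTrail to write to CloudWatch Logs:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Send CloudTrail events to a CloudWatch Logs group so that metric
# filters and alarms can be built on top of them.
cloudtrail.update_trail(
    Name="org-audit-trail",
    CloudWatchLogsLogGroupArn="arn:aws:logs:us-east-1:111122223333:log-group:cloudtrail-events:*",
    CloudWatchLogsRoleArn="arn:aws:iam::111122223333:role/cloudtrail-to-cloudwatch",
)
```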

5. Enabling access logging for CloudTrail S3 buckets

This feature helps prevent attackers from penetrating deeper into CloudTrail S3 buckets, which contain the log data captured by CloudTrail and used for activity monitoring and incident investigations. Keep access logging enabled for CloudTrail S3 buckets so that you can track access requests and quickly detect unauthorized access attempts.
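
A minimal sketch of enabling server access logging on the CloudTrail bucket with boto3; both bucket names are placeholders, and the target logging bucket must permit the S3 log delivery service to write to it:

```python
import boto3

s3 = boto3.client("s3")

# Turn on server access logging for the bucket that stores CloudTrail logs.
s3.put_bucket_logging(
    Bucket="org-cloudtrail-logs",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "org-s3-access-logs",
            "TargetPrefix": "cloudtrail-bucket/",
        }
    },
)
```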

6. Enabling access logging for Elastic Load Balancer (ELB)

Enabling ELB access logging allows the ELB to record and save information about each TCP or HTTP request. This data can be extremely useful for security analysis and troubleshooting. For instance, ELB logging data can help you analyze traffic patterns that may be indicative of certain types of attacks.
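
For an Application Load Balancer, access logging is switched on through load balancer attributes. A sketch with boto3; the load balancer ARN and bucket name are placeholders, and the bucket policy must allow ELB log delivery:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Enable access logging for an Application Load Balancer.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/example/1234567890abcdef",
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "org-elb-access-logs"},
        {"Key": "access_logs.s3.prefix", "Value": "production"},
    ],
)
```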

7. Enabling Redshift audit logging

Amazon Redshift audit logging records details about user activities, such as queries and connections made in the database. By enabling it, you can perform audits and support post-incident forensic investigations for a given database.
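
A minimal boto3 sketch of enabling Redshift audit logging; the cluster identifier and bucket name are placeholders, and the bucket must permit the Redshift logging service to write to it:

```python
import boto3

redshift = boto3.client("redshift")

# Enable audit logging (connection and user activity logs) for a cluster.
redshift.enable_logging(
    ClusterIdentifier="analytics-cluster",
    BucketName="org-redshift-audit-logs",
    S3KeyPrefix="audit/",
)
```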

8. Enabling Virtual Private Cloud (VPC) flow logging

VPC flow logging is a network monitoring feature that provides visibility into VPC network traffic. It can be used to detect abnormal or suspicious traffic, give security insights, and alert you to anomalous activities. Enabling VPC flow logs allows you to identify security and access issues such as unusual volumes of data transfer, rejected connection requests, and overly permissive security groups or network Access Control Lists (ACLs).
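
A hedged example of creating VPC flow logs that deliver to CloudWatch Logs; the VPC ID, log group name, and IAM role ARN are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Capture all accepted and rejected traffic for a VPC and deliver it
# to a CloudWatch Logs group.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-delivery",
)
```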

Secure Authentication

1. Multifactor authentication (MFA) for deleting CloudTrail S3 buckets

If your AWS account is compromised, the first step an attacker would likely take is deletion of CloudTrail logs to cover their intrusion and delay detection. Setting up MFA for deleting S3 buckets with CloudTrail logs makes log deletion much harder for a hacker, thus reducing their chances of remaining unnoticed.
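
MFA Delete is configured together with versioning on the bucket and must be requested with the root account’s credentials. A sketch with boto3; the bucket name, MFA device ARN, and one-time code are placeholders:

```python
import boto3

s3 = boto3.client("s3")  # must be called with the root account's credentials

# Enable versioning together with MFA Delete on the CloudTrail log bucket.
s3.put_bucket_versioning(
    Bucket="org-cloudtrail-logs",
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)
```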

2. MFA for the “root” account

The first user account created when signing up for AWS is called the root account. The root account is the most privileged user type, with access to every AWS resource. This is why you should enable MFA for the root account as soon as possible. A good practice for root account MFA is avoiding attaching the credentials to a user’s personal device. For this purpose, you should have a dedicated mobile device that is stored remotely. This introduces an additional layer of protection and ensures that the root account is always accessible, regardless of whose personal devices are lost or broken.

3. MFA for IAM users

If your account is compromised, MFA becomes the last line of defense. All users with a console password for the Identity and Access Management (IAM) service should be required to go through MFA.

4. Multi-mode access for IAM users

Enabling multi-mode access for IAM users allows you to split users into two groups: application users with API access and administrators with console access. This reduces the risk of unauthorized access if IAM user credentials (access keys or passwords) are compromised.

5. IAM policies assigned to groups or roles

Do not assign policies and permissions to users directly. Instead, provision users’ permissions at the group and role level. This approach makes managing permissions simpler and more convenient. You also reduce the risk that an individual user receives excessive permissions or privileges by accident.
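
A short boto3 sketch of group-level provisioning; the group name, policy ARN, and user name are hypothetical:

```python
import boto3

iam = boto3.client("iam")

# Grant permissions at the group level instead of attaching policies to users.
iam.create_group(GroupName="backup-operators")
iam.attach_group_policy(
    GroupName="backup-operators",
    PolicyArn="arn:aws:iam::111122223333:policy/BackupOperatorAccess",
)
iam.add_user_to_group(GroupName="backup-operators", UserName="jane.doe")
```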

6. Rotation of IAM access keys on a regular basis

The more often you rotate access key pairs, the less likely your data can be improperly accessed with a lost or stolen key.
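
A hedged sketch of how stale keys could be identified and deactivated with boto3, assuming a 90-day rotation interval and a placeholder user name:

```python
import boto3
from datetime import datetime, timezone

iam = boto3.client("iam")
MAX_KEY_AGE_DAYS = 90  # assumed rotation interval
user = "jane.doe"      # placeholder user name

# Deactivate access keys older than the rotation interval.
for key in iam.list_access_keys(UserName=user)["AccessKeyMetadata"]:
    age = (datetime.now(timezone.utc) - key["CreateDate"]).days
    if age > MAX_KEY_AGE_DAYS:
        # In a full rotation you would first create a new key, switch the
        # application over, then deactivate and finally delete the old key.
        iam.update_access_key(
            UserName=user, AccessKeyId=key["AccessKeyId"], Status="Inactive"
        )
```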

7. Strict password policy

Unsurprisingly, users tend to create overly simple passwords because they want something that is easy to remember. However, such passwords are often also easy for someone else to guess. Implementing and maintaining a strict password policy is a good way to protect accounts from brute-force login attempts. The policy details may differ, but you should require passwords to have at least one upper-case letter, one lower-case letter, one number, one symbol, and a minimum length of 14 characters.

People tend to use the same password across multiple services, which puts both them and the organization at high security risk. Thus, you should configure the IAM password policy to remember the last 24 passwords for each user and disallow their re-use. Enable password expiration, but set the password validity period to at least 90 days; forcing password changes too frequently introduces new risks (e.g., credential interception or phishing).
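
The policy described above maps directly onto the IAM account password policy. A minimal sketch with boto3:

```python
import boto3

iam = boto3.client("iam")

# Enforce the password rules described above for the whole AWS account.
iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    RequireNumbers=True,
    RequireSymbols=True,
    PasswordReusePrevention=24,   # remember the last 24 passwords
    MaxPasswordAge=90,            # expire passwords after 90 days
)
```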

Secure Configuration

1. Restricting access to CloudTrail S3 bucket

Do not grant access to CloudTrail logs to every user or administrator account; any of those accounts could be exposed to phishing attacks. Limit access to the few specialists who actually need it. This reduces the probability of unwarranted access.

2. Encrypting CloudTrail log files

There are two requirements for decrypting CloudTrail log files at rest. First, the Customer Master Key (CMK) policy must grant decryption permission. Second, permission to access the S3 buckets must be granted. Only users whose job duties require it should receive both permissions.
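
A minimal sketch of pointing an existing trail at a customer managed KMS key with boto3; the trail name and key ARN are placeholders, and the key policy must allow CloudTrail to use the key:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Encrypt CloudTrail log files with a customer managed KMS key.
# Readers then need kms:Decrypt in addition to S3 access.
cloudtrail.update_trail(
    Name="org-audit-trail",
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
)
```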

3. Encrypting the EBS database

Ensuring that the EBS database is encrypted provides an additional layer of protection. Note that this can only be done at the moment you create the EBS volume; encryption cannot be enabled later. Thus, if there are any unencrypted volumes, you must create new encrypted volumes and transfer your data there from unencrypted ones.
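
One way to perform that migration is to snapshot the unencrypted volume, copy the snapshot with encryption enabled, and create a new encrypted volume from the copy. A hedged boto3 sketch; all IDs and the Availability Zone are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Snapshot the unencrypted volume and wait for the snapshot to complete.
snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Copy the snapshot with encryption turned on.
encrypted = ec2.copy_snapshot(
    SourceSnapshotId=snap["SnapshotId"],
    SourceRegion="us-east-1",
    Encrypted=True,  # the copy is encrypted even though the source is not
)

# Create a new, encrypted volume from the encrypted snapshot copy.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    SnapshotId=encrypted["SnapshotId"],
    Encrypted=True,
)
```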

4. Reducing ranges of open ports for EC2 security groups

Large ranges of open ports give an attacker more potential entry points to discover through port scanning.

5. Configuring EC2 Security Groups to restrict access

Granting too many permissions to access EC2 instances is bad practice. Never allow large IP ranges to access EC2 instances. Instead, be specific: include only the exact IP addresses in your access list.
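
A short example of an ingress rule restricted to a single administrative address; the security group ID and the /32 address are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow SSH only from one known administrative IP address.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [
                {"CidrIp": "203.0.113.10/32", "Description": "admin workstation"}
            ],
        }
    ],
)
```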

6. Avoiding use of root user accounts

When you sign up for an AWS account, the email and password you use automatically becomes the root user account. The root user is the most privileged user in the system, enjoying access to all services and resources in your AWS account without exception. The best practice is to use this account only once, when creating the first IAM user. Thereafter, you should keep the root user credentials in a secure place, locked away from anybody’s access.

7. Using secure SSL versions and ciphers

When making connections between the client and the Elastic Load Balancing (ELB) system, avoid using outdated SSL/TLS versions or deprecated ciphers. These can create an insecure connection between the client and the load balancer.

8. Encryption of Amazon Relational Database Service (RDS)

Encrypting the Amazon RDS creates an extra layer of protection.
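
A hedged sketch of creating an encrypted RDS instance with boto3; all identifiers and credentials are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Storage encryption must be chosen when the RDS instance is created.
rds.create_db_instance(
    DBInstanceIdentifier="backup-catalog-db",
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,
    MasterUsername="admin",
    MasterUserPassword="use-a-secrets-manager-instead",  # placeholder credential
    StorageEncrypted=True,  # encrypt the instance and its automated backups
)
```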

9. Avoiding access key use with root accounts

Create role-based accounts with appropriate permissions and access keys. Never use access keys with the root account; doing so opens an obvious path for the account to become compromised.

10. Rotating SSH keys on a regular basis

Periodically rotate SSH keys. This best practice reduces the risks associated with employees sharing SSH keys, whether in error or through negligence.

11. Minimizing the number of discrete security groups

Organizations should keep the number of discrete security groups as low as possible. This reduces the risk of misconfiguration, which can lead to account compromise.

Inactive Entities

1. Minimizing the number of IAM groups

Deleting unused or stale IAM groups reduces the risk of accidentally provisioning new entities with older security configurations.

2. Terminating unused access keys

Best practices dictate that access keys that remain unused for over 30 days should be terminated. Keeping unused access keys around for longer inevitably increases the risk of a compromised account or insider threat.
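
A rough boto3 sketch of finding keys that have not been used for over 30 days; it only prints candidates rather than deactivating them:

```python
import boto3
from datetime import datetime, timezone

iam = boto3.client("iam")
UNUSED_DAYS = 30  # threshold suggested above
now = datetime.now(timezone.utc)

# Report access keys that have not been used within the threshold.
for user in iam.list_users()["Users"]:
    for key in iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]:
        last_used = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
        # Fall back to the creation date if the key has never been used.
        date = last_used["AccessKeyLastUsed"].get("LastUsedDate", key["CreateDate"])
        if (now - date).days > UNUSED_DAYS:
            print(f"Candidate for deactivation: {user['UserName']} / {key['AccessKeyId']}")
```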

3. Disabling access for inactive IAM users

Similarly, you should disable the accounts of IAM users who have not logged in for over 90 days. This reduces the likelihood of an abandoned or unused account being compromised.

4. Deleting unused SSH Public Keys

Delete unused SSH public keys to decrease the risk of unauthorized SSH access from unrestricted locations.

Access Restrictions

1. Restricting access to Amazon Machine Images (AMIs)

Making your Amazon Machine Images (AMIs) publicly accessible lists them among the Community AMIs, where any AWS account holder can use them to launch EC2 instances. AMIs often contain snapshots of organization-specific applications along with configuration and application data, so carefully restricting access to them is highly recommended.
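
A minimal sketch of removing the public launch permission from an AMI with boto3; the image ID is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2")

# Remove the public ("all") launch permission so the AMI no longer
# appears among the Community AMIs.
ec2.modify_image_attribute(
    ImageId="ami-0123456789abcdef0",
    LaunchPermission={"Remove": [{"Group": "all"}]},
)
```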

2. Restricting inbound access on uncommon ports

Restrict access on uncommon ports because they can become potential weak points for malicious activity (e.g., brute-force attacks, hacking, DDoS attacks, etc.).

3. Restricting access to EC2 security groups

Access to EC2 security groups should be restricted. This further reduces exposure to malicious activity.

4. Restricting access to RDS instances

With unrestricted RDS instance access, any entity on the internet can establish a connection to your database. This exposes an organization to malicious activity such as SQL injection, brute-force attacks, or hacking.

5. Restricting outbound access

Unrestricted outbound access can also expose an organization to cyber threats. You should allow outbound access only to specified entities – for example, specific ports or specific destinations.

6. Restricting access to well-known protocol ports

Access to well-known ports must be restricted. If you leave them uncontrolled, you open your organization up to unauthorized data access – e.g., CIFS through port 445, FTP through ports 20/21, MySQL through port 3306, etc.
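
As an illustration, the following boto3 sketch revokes a rule that leaves MySQL (port 3306) open to the whole internet; the security group ID is a placeholder, and the same pattern applies to other well-known ports:

```python
import boto3

ec2 = boto3.client("ec2")

# Revoke an ingress rule that exposes MySQL (3306) to 0.0.0.0/0.
ec2.revoke_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }
    ],
)
```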

Concluding Thoughts

Following these best practice recommendations contributes to keeping your primary and backup data in AWS safe from most of the potential cyber threats.

However, even vigilantly implementing these policies cannot offer complete data protection; you can minimize risks, but not eliminate them entirely. There is always a risk of unauthorized intrusion into your AWS environment – for example, by hacking or a virus. Similarly, your data could be damaged or erased by a trusted user with bad intentions or simply through human error.

For this reason, you need a failsafe in place.

NAKIVO Backup & Replication is a powerful backup solution seamlessly integrated with AWS. Backing up or replicating your AWS EC2 instances with NAKIVO Backup & Replication allows you to restore your data even if your instance was damaged and the damage was then replicated by AWS itself. You can choose from up to 1,000 recovery points for a backup (or 30 recovery points for a replica). The software is designed to satisfy even the most stringent customer requirements.

By combining cutting-edge technologies with the best backup solutions, you can build automation around your entire virtual environment. This can increase performance, production efficiency, and data security levels. At the same time, you can benefit from reduced resource consumption, maintenance work, and risks associated with human error.

Download the free trial of NAKIVO Backup & Replication and try out all the advanced features in your own cloud environment.
