In this day and age of security concerns and malware, organizations and IT staff face an attack surface that is increasingly broad and complex to mitigate. These concerns are especially important when businesses consider their disaster recovery strategies. The threat of data loss or corruption due to malware infection is rising at an alarming rate, forcing organizations to rethink how they handle potential data loss and recovery. A specific kind of malware known as ransomware is driving many of these concerns.
Ransomware is a variant of malware that slyly and often quietly makes its way into an environment via an infected email attachment, website script, or infected download, and then encrypts user files, including network file shares. To retrieve their data, unsuspecting end users are forced to buy the decryption key from the attacker using a digital currency such as Bitcoin, which allows the ransomware creators to protect their identity. Victims range from individual home users to large enterprise environments, including banks, government authorities, and hospitals, to name a few.
All of the concerns that ransomware brings to the forefront for enterprises lead to serious questions about security as well as disaster recovery and business continuity. Is it enough for enterprises to rely on endpoint antivirus, company policies, firewalls, and other means to protect themselves? What are the consequences of getting infected? Why are backup copies so important when thinking about business continuity and disaster recovery? What are some best practices for protecting enterprise environments from data loss as a result of a ransomware infection? First, let’s take a look at what a ransomware infection looks like.
What a ransomware infection might look like
To begin with, let’s take a step back and get a basic understanding of what encryption is and how ransomware makes use of this technology. Encryption in general is the process of taking readable data and encoding it so that it is unreadable without the “key” to unlock it. The specific type of encryption we are talking about with ransomware is public/private key encryption. The ransomware on an infected computer connects to a command and control server, which generates a public/private key pair. The server sends the public key to the infected computer, which uses it to encrypt the files; the two keys work together as a pair, and only the matching private key can decrypt what the public key has encrypted.
The command and control servers house that private key, which must be used to unlock the user’s files. To obtain this key, the infected end user must pay the ransom demanded by the attackers. So while it is a relatively simple mechanism for locking up files, it is also very effective.
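To make the mechanism concrete, here is a toy sketch of the public/private key flow described above, written in Python using textbook RSA with deliberately tiny primes. This is not real cryptography, and the numbers, names, and per-byte encryption are illustrative assumptions only; it simply shows that data encrypted with the public key is unreadable without the private key the attacker withholds.

```python
# Toy illustration of the public/private key flow ransomware abuses.
# NOT real cryptography: the primes are tiny and the scheme is textbook RSA.

# "Command and control" side: generate the key pair.
p, q = 61, 53
n = p * q                            # modulus, shared by both keys
e = 17                               # public exponent -> public key (e, n)
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent -> private key (d, n)

def encrypt(data: bytes) -> list:
    """Encrypt with the PUBLIC key -- all the infected host ever receives."""
    return [pow(b, e, n) for b in data]

def decrypt(blocks: list) -> bytes:
    """Decrypt with the PRIVATE key -- held only on the attacker's server."""
    return bytes(pow(c, d, n) for c in blocks)

ciphertext = encrypt(b"quarterly-report.xlsx contents")
assert decrypt(ciphertext) == b"quarterly-report.xlsx contents"
```

In practice, ransomware typically encrypts files with a fast symmetric cipher and uses the public key only to lock up that symmetric key, but the victim’s dependence on the attacker-held private key is the same.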
Below are a few screenshots of an infected system. Notice the abnormal files that exist in the root directories.
The contents of the “.png” and “.txt” file show a typical “ransom note” to the end user. As you can see, the end user is informed of their files being encrypted. They are directed to navigate to a secret server to obtain the private key and decryption program at which point they will be presented with the price to obtain the key and program.
Why are antivirus, policies, firewalls, and other measures not enough?
There is no question that endpoint protection/antivirus is important for any enterprise environment’s overall security strategy. However, when organizations rely completely on endpoint protection to secure business continuity, they are putting themselves at great risk of data loss. Why is that the case? Endpoint protection is simply not enough.
Ransomware and other malware variants are constantly changing. Since most endpoint protection software is signature-based, these signatures need to be updated constantly to keep up with the ever-changing malware landscape. Even so-called “next-gen” malware protection is not 100% accurate and reliable. It is not a question of “if” an enterprise environment will be infected with malware, but “when” it will happen. Information technology teams need to plan accordingly and have a strategy to protect data even if key infrastructure servers, including backup servers, become infected.
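As a rough illustration of why signatures lag behind new variants, consider a minimal hash-based detector sketched in Python. This is a simplification for illustration only; real engines use far more sophisticated signatures and heuristics, but the limitation is the same: only samples already catalogued are matched.

```python
# Minimal sketch of why signature-based detection misses new variants.
import hashlib

# SHA-256 "signatures" of previously analysed samples (hypothetical values).
known_bad_signatures = {
    hashlib.sha256(b"ransomware-variant-A").hexdigest(),
}

def is_flagged(payload: bytes) -> bool:
    """Flag a payload only if its hash matches a catalogued sample."""
    return hashlib.sha256(payload).hexdigest() in known_bad_signatures

# A catalogued sample is caught...
assert is_flagged(b"ransomware-variant-A")
# ...but a trivially repacked variant slips through untouched.
assert not is_flagged(b"ransomware-variant-B")
```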
Company policies, both written and programmatic/technical in nature, also fall short of being the end-all solution for protecting enterprise environments from malware infection. While created to help mitigate some of the behavioral threats that come from employee activities that may place an environment at risk, they are not foolproof either.
Written policies can easily be bypassed by employees, who may even inadvertently violate company policy without realizing it. Enforced technical policies, such as Group Policy in an Active Directory domain environment, are likewise not completely effective in thwarting malware infections. While software restriction policies can be helpful and a good part of the overall strategy against malware infection, the restriction rules need to be constantly reviewed and updated to keep up with the changing nature of ransomware and other malware variants.
What about firewalls and other hardware appliances that protect the gateway? The traditional firewall as such has always been viewed as a staple in most enterprise environments. Firewalls are usually the first line of defense inspecting ingress traffic coming into the local area network from the Internet. Many new firewalls also have the “next-gen” designation touting unified threat management capabilities. Firewalls essentially inspect all traffic coming into the enterprise and based on various rules and processing engines, make determinations about whether traffic is legitimate or malicious.
The problem, however, is that firewalls only inspect traffic passing through the gateway, meaning that traffic that stays inside the enterprise network flows regardless of firewall inspection. Firewall inspection can be a good starting point in an organization’s overall security stance, but it has many shortcomings when we think about ransomware and how infections can happen.
Organizations often tread a fine line: firewall rules must be strong enough to catch bad traffic yet relaxed enough to keep “false positives” manageable. This can lead to whitelisting or other relaxed rules that open a path for malware to make its way in.
Also, many organizations, from SMBs to enterprises, may not yet have rolled out their own PKI infrastructure to allow the inspection of SSL traffic. Why is this important? Malware creators are getting smarter about making their way into internal networks, and many use SSL-encrypted communication to deliver malicious payloads. Most firewalls require certain elements to be in place (such as a trusted SSL certificate on all endpoint workstations) so that they can play “man in the middle” and inspect SSL traffic. If SSL traffic is not being inspected by the firewall, it remains encrypted and unreadable, and most firewalls simply pass it through. If malware is part of this unreadable payload, it is passed through with it.
Many may argue that each of these measures, and others, are in themselves only part of the security solution. However, as top IT security professionals in the field will attest, enterprise security is never perfect. There will always be the possibility of breaches and security issues.
Organizations must therefore look at backups, and specifically offsite backups, as part of their disaster recovery/security incident strategy.
Consequences of a possible ransomware infection
Let’s imagine the worst-case scenario. A zero-day ransomware attack makes its way through the firewall inside encrypted SSL traffic. Since the malicious code is a zero-day attack, detection rules and patterns have yet to be pushed out by the AV vendors. Your workstations are completely vulnerable to the code, which has now made its way into your organization.
What’s worse, an administrator’s machine becomes infected with the ransomware, and all of his or her mapped drives are now in the process of being encrypted. One of the mapped drives on the administrator’s workstation points to the backup server’s storage drive.
Now, not only are critical file systems and servers being infected and encrypted, but the very server that houses the remedy for the destructive ransomware – the backup server – is encrypted too.
This scenario lays bare a critical flaw in the backup strategy of many organizations – relying only on onsite backups. Without copying data offsite to another location, organizations risk suffering the fate described in the scenario above.
The potential consequences of a ransomware infection of this magnitude are significant. Even an outage of customer data for a few hours can result in tremendous cost to the business both in financial impact as well as the reputation impact of service interruption.
If an organization has no offsite copy of its backup data, it may have no choice but to pay the ransom demanded to recover crucial data. This cost can be significant, and there is no guarantee that the data will be decrypted successfully once the ransom is paid. Nor can a future infection by the same or a different variant of ransomware be ruled out.
The impact on business is very real, and it is a scenario that all organizations need to plan and prepare for. The likelihood of being infected at some point with malware, and specifically ransomware, is very high for most organizations.
Backup Copy – Necessary part of the overall DR plan
Creating a backup copy of your backups is an essential part of the overall backup strategy of enterprise environments today. A backup copy stored offsite, or onsite at another location, creates another failsafe to keep your data safe if a ransomware infection compromises your primary backups.
Let’s take a look at how the Backup Copy functionality in NAKIVO Backup & Replication 6.1 can be set up to maintain a backup copy of your VM backups. The process is extremely easy to set up and creates an additional layer of data protection that can ultimately save a business from a ransomware disaster.
In this lab demonstration, we will simply link two NAKIVO 6.1 virtual appliances together and use one as the backup target for the primary appliance. To configure a secondary linked appliance we simply need to add the target transporter and repository to the primary appliance.
We have the option to add either a Local/Offsite target or an Amazon EC2 instance. Here we will select the Local/Offsite. After that we simply configure the IP address or hostname of our target NAKIVO 6.1 appliance.
This will add our additional transporter that we will use to configure the backup copy repository.
Now that the additional target transporter for our backup copy has been created, we can add the target repository. To do that we go to Repositories >> Add Backup Repository >> Add existing backup repository.
Next, we can name the repository, select the assigned transporter, type, set encryption and other options. To target the secondary appliance, we select local folder on assigned transporter and choose the Assigned transporter in the dropdown box.
Now we have a configured backup copy repository that we can target for our backup copy jobs.
The Backup Copy job is set up under the Create >> Backup Copy menu in the dashboard.
Once we select Backup Copy, the wizard for creating the backup copy job begins. The first step is to choose the VM(s) that we want to include in the job.
Here we select the backup repository to target for the copy job.
Now the beauty of the NAKIVO 6.1 job chaining is that we can schedule the backup copy job to run immediately following the primary backup job.
The final step contains many useful settings that we can take advantage of, including network acceleration for slow WAN links, encryption, and screenshot verification.
After the backup copy job completes, we now have our data safely in two locations.
Best Practices and Recap
For reasons stated above, common security mechanisms in place in most organizations today are simply not sufficient to be 100% safe from malware infections and especially ransomware. Organizations must protect themselves from the potential of all onsite backups being compromised by ransomware in a worst case scenario.
Offsite backups are an essential failsafe, ensuring backups are safe from the same destructive processes that compromise the data being protected.
If an organization has no offsite DR facility, then backups to the cloud should be considered as a means to safely store data outside the scope of a potential malware infection. Retention policies can also be leveraged to make sure data is kept for the period that makes sense to the business and allows recovery point objectives to be met.
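As a sketch of how a time-based retention policy might be expressed, the following Python snippet keeps daily restore points for 14 days and weekly restore points for 8 weeks. The tiers and numbers are example assumptions for illustration, not NAKIVO’s actual retention engine.

```python
# Sketch of a simple tiered retention policy: keep every restore point for
# 14 days, then only weekly (Sunday) points until they are 8 weeks old.
from datetime import date, timedelta

def restore_points_to_keep(points, today):
    """Return the subset of restore-point dates the policy retains."""
    keep = set()
    for p in points:
        age = (today - p).days
        if 0 <= age <= 14:                    # daily tier
            keep.add(p)
        elif age <= 56 and p.weekday() == 6:  # weekly tier: keep Sundays
            keep.add(p)
    return keep

today = date(2016, 6, 30)
points = [today - timedelta(days=i) for i in range(90)]
kept = restore_points_to_keep(points, today)
```

Recent points stay recoverable at fine granularity while older points are thinned out, which is how retention keeps storage bounded while still meeting recovery point objectives.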
NAKIVO Backup & Replication 6.1 makes the process of copying backups either offsite or to the cloud as easy as scheduling the job to run after a successful backup happens. The backup copy job will be kicked off immediately and will make sure that your data is protected offsite via the copy.
We live in a new era of malware infections and there is no doubt that ransomware and other variants will continue to become more sophisticated. Organizations must keep up in making sure not only that all the proper security measures are in place, but also that the disaster recovery strategies meet the demands of the business. This includes the potential for recovering from a worst case scenario where even onsite backups are compromised.