Authored by Morey Haber, VP of Technology with input from Scott Carlson, Technical Fellow
The model for how security is delivered has changed in a world of Infrastructure-as-a-Service (IaaS) and Software-as-a-Service (SaaS). To illustrate my point, think about the three traditional layers at which security is delivered:
- For the underlying foundation (infrastructure)
- For servers/operating systems
- For applications/users
When you move to an IaaS or SaaS model, you eliminate one or two of the layers that you, as the customer, need to worry about. Sounds pretty good, huh? Not so fast. While nearly every cloud provider is very good at securing its own infrastructure, the next two layers could leave you wide open to attack, depending on what you are protecting and your level of skill. If you are deploying security systems, either by buying services from a managed service provider (MSP) or installing them yourself, you should consider what you are getting.
I would contend there are three considerations you should weigh in a cloud-based SaaS security system. (Oh, and you have some responsibility here, too.)
- Continually monitoring for security and for configuration vulnerabilities
Because of the inherently dynamic nature of cloud environments, security and configuration vulnerabilities need to be assessed at instance instantiation (power on), at runtime, and at decommission or destruction (power off) of the instance or worker process. Why? This allows any new non-persistent instance to be assessed for risk, verifies that no vulnerabilities are present during operation, and ensures no tampering or exploitation occurred during teardown.
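As a sketch, the three assessment points can be modeled as lifecycle phases that all run the same scan. The `Phase` names, the `assess` helper, and the `scan` callable are illustrative assumptions, not any particular scanner's API:

```python
from enum import Enum

class Phase(Enum):
    LAUNCH = "launch"      # instance instantiation (power on)
    RUNTIME = "runtime"    # periodic checks during operation
    TEARDOWN = "teardown"  # decommission or destruction (power off)

def assess(instance_id, phase, scan):
    """Run the supplied scan at one lifecycle phase; `scan` is a
    hypothetical callable returning a list of finding strings."""
    findings = scan(instance_id, phase.value)
    return {"instance": instance_id, "phase": phase.value,
            "findings": findings, "clean": not findings}

# Stub scanner for illustration: reports config drift only at runtime.
def stub_scan(instance_id, phase):
    return ["config-drift: port 22 open"] if phase == "runtime" else []

launch_report = assess("i-0abc123", Phase.LAUNCH, stub_scan)
runtime_report = assess("i-0abc123", Phase.RUNTIME, stub_scan)
teardown_report = assess("i-0abc123", Phase.TEARDOWN, stub_scan)
```

The point of assessing all three phases with one code path is that a non-persistent instance never escapes coverage just because it happened to be short-lived.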
Monitoring should start as soon as you stand up your cloud. The great thing is that, because they want to bill you, the cloud provider always knows what you have. Use this information to build a monitoring strategy that watches the edge, automation APIs, management dashboards, and the systems themselves. Do not wait until your image “goes into production” to start monitoring. Most attacks start before a workload goes into production, before it gets “locked down.” Locking down later cannot be the strategy in the cloud: everything should be “default deny and default off,” always.
Standard credentialed vulnerability assessment scans, or vulnerability assessment agents that can produce delta reports from the instances’ public and private TCP/IP interfaces, can produce the output necessary to meet this requirement. Make sure that vulnerability scanning is performed upon launch and at runtime, with essentially continuous scanning. Unless the instance has no access to the internet, a vulnerability scanner that detects a new item in inventory, scans it, and then prohibits launch when a critical vulnerability is found is a must.
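A launch gate of this kind can be sketched in a few lines. The `Finding` shape and severity labels below are assumptions, not any specific scanner's schema:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    severity: str  # "low" | "medium" | "high" | "critical"

def may_launch(findings):
    """Prohibit launch if the credentialed scan reports any critical finding."""
    return not any(f.severity == "critical" for f in findings)

clean_scan = [Finding("CVE-2017-0001", "medium")]
bad_scan = [Finding("CVE-2017-0002", "critical"), Finding("CVE-2017-0003", "low")]

allow = may_launch(clean_scan)  # medium findings alone do not block launch
block = may_launch(bad_scan)    # one critical finding blocks launch
```

In practice the scan results would come from your scanner's report, and the boolean would feed the orchestration step that actually starts or refuses the instance.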
- Monitoring all instances (AWS, Azure, etc.)
When you deploy an IaaS environment, you need to make sure that no one who can deploy into your cloud can open up additional paths to compromise. Cloud users almost always extend beyond infrastructure people to development and support staff, and maybe even business staff who want to move quickly. When you let more people deploy, even if they follow standards, you need to monitor every ingress point, all automation activity, all manual activity, and all egress points. Control, monitor, and respond should be built into an environment from the start, not bolted on at the end.
The APIs for AWS, Azure, Rackspace, Google, IBM, and even Oracle allow for the enumeration of all running and powered-off instances, including public- and private-facing resources. It is in the best interest of every organization doing business in the cloud to identify vendors that use these APIs as part of their solutions, so that best practices for asset management, vulnerability assessment, patch management, privileged access, logging and auditing, etc. are ALL properly identified and included in management and security practices. This ensures that monitoring does not miss any resources, and that potentially rogue resources or shadow IT in the cloud can be managed when using your business accounts.
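The enumerate-then-reconcile idea reduces to set arithmetic: anything the provider's API reports that your security tooling does not cover is a candidate rogue or shadow-IT resource. A minimal sketch, with made-up instance IDs (in practice the first set would come from a call such as EC2's describe-instances):

```python
def find_unmanaged(api_inventory, managed_assets):
    """Instances the provider API reports but security tooling does not cover."""
    return set(api_inventory) - set(managed_assets)

# From the cloud provider's enumeration API (includes powered-off instances).
api_inventory = {"i-web-01", "i-web-02", "i-dev-rogue"}

# Under vulnerability scanning, patching, and privileged access management.
managed_assets = {"i-web-01", "i-web-02"}

rogue = find_unmanaged(api_inventory, managed_assets)
```

Running this reconciliation on a schedule, per account, is what closes the gap between what billing knows exists and what security actually watches.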
- Handling ransomware
Honesty and full disclosure: being secretive, as in the recent Yahoo breach, will only end badly and potentially invite legal investigations from government watchdogs like the SEC. This does not mean you blast out the problem; handle it responsibly. If you are a cloud provider, however, and files you are hosting for a client are infected with ransomware, ensure that your segmentation practices are not allowing propagation and that backup solutions can indeed restore the files without adding risk or re-infecting the environment. If you detect ransomware in a client’s files, processes to isolate the environment and inform the client are a requirement. This is potentially a value-add, since many on-premises file storage solutions fail to alert in a timely fashion that users have been infected.
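The isolate-then-inform process can be sketched as a small workflow into which the storage and notification integrations are injected. `quarantine` and `notify` are hypothetical hooks, not any real provider API:

```python
def handle_ransomware(client, share, quarantine, notify):
    """On detection: isolate the affected share first to stop propagation,
    then responsibly inform the client. Returns the ordered action log."""
    actions = []
    quarantine(share)
    actions.append(f"quarantined {share}")
    notify(client, share)
    actions.append(f"notified {client}")
    return actions

# Stub integrations that just record calls, for illustration.
calls = []
log = handle_ransomware(
    "acme-corp", "/shares/acme/finance",
    quarantine=lambda s: calls.append(("quarantine", s)),
    notify=lambda c, s: calls.append(("notify", c)),
)
```

The ordering matters: isolation before notification ensures the share is no longer spreading infected files while the disclosure conversation happens.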
Your responsibility for security IN the cloud: as groups, roles, devices, etc. change, oversights and misconfigurations open vulnerabilities.
- Anything you deploy should always protect itself, because you don’t own the underlying infrastructure.
- Infrastructure as code should come from a clean repository and a locked-down account, and master images should be scanned often enough to ensure they are free from vulnerabilities.
- Do not think you know more than the cloud provider. Almost every default out there is “default deny.” Instead of locking things down when you are done, leave everything fully closed and then open only what you need.
- Everything should be monitored with the built-in tools. If it exists, monitor it.
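The “default deny” point above lends itself to a mechanical check. As a sketch, given ingress rules as (port, source CIDR) pairs, flag anything open to the world that is not on an explicit public allow-list; the allow-list here is an assumption, not a provider default:

```python
ALLOWED_PUBLIC_PORTS = {443}  # assumption: only HTTPS is intentionally public

def world_open_violations(rules):
    """Return ports exposed to 0.0.0.0/0 that are not explicitly allowed."""
    return sorted(port for port, cidr in rules
                  if cidr == "0.0.0.0/0" and port not in ALLOWED_PUBLIC_PORTS)

rules = [
    (443, "0.0.0.0/0"),    # public HTTPS: on the allow-list
    (22, "0.0.0.0/0"),     # public SSH: violates default deny
    (5432, "10.0.0.0/8"),  # internal-only database: fine
]
violations = world_open_violations(rules)
```

A check like this, run against every security group or firewall policy on every deployment, turns “leave it fully closed and open only what you need” from advice into an enforced gate.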
Understanding these three cloud security considerations (and your responsibility, too) can help ensure accountability and fend off unwanted cyber attacks in your cloud infrastructure. If you are interested in learning more about how BeyondTrust can help cloud providers and MSPs improve their clients’ security, contact us today. In the meantime, start by scanning your environment for privileged users and accounts, or possible IoT risks.
Morey J. Haber, Chief Security Officer, BeyondTrust
Morey J. Haber is the Chief Security Officer at BeyondTrust. He has more than 25 years of IT industry experience and has authored three books: Privileged Attack Vectors, Asset Attack Vectors, and Identity Attack Vectors. He is a founding member of the industry group Transparency in Cyber, and in 2020 was elected to the Identity Defined Security Alliance (IDSA) Executive Advisory Board. Morey currently oversees BeyondTrust security and governance for corporate and cloud-based solutions, and regularly consults for global periodicals and media. He originally joined BeyondTrust in 2012 as part of the eEye Digital Security acquisition, where he had served as a Product Owner and Solutions Engineer since 2004. Prior to eEye, he was Beta Development Manager for Computer Associates, Inc. He began his career as a Reliability and Maintainability Engineer for a government contractor building flight and training simulators. He earned a Bachelor of Science degree in Electrical Engineering from the State University of New York at Stony Brook.