This blog post is republished with the permission of Network Computing.
We face many challenges when trying to manage the vulnerabilities in our IT systems, not least the deluge of information presented to us from a multitude of sources. When we consider the need to look for vulnerabilities across all of our IT systems - not just servers, workstations and infrastructure, but also mobile, tablet and cloud - the task can seem overwhelming.
We need tools that not only help us identify existing system vulnerabilities, but also help us decide which to tackle first. Most vulnerability scanners present the same kinds of information - severity, CVSS score and so on - which, while useful, doesn't give us a definitive view of importance. High severity vulnerabilities might seem the best place to start remediation, but are they really the biggest risk to our environment?
Many are, to all intents and purposes, academic, in that exploiting them requires an attacker to develop the exploit from the ground up and/or have physical access to the system in question. Tooling that allows you to quickly identify the vulnerabilities that have known exploits, and which assets they apply to, enables you to generate a meaningful to-do list without needing to wade through the data and undertake the correlation yourself. Being able to track each vulnerability - who is managing it and what its status is - really helps.
Even in very small environments there can be hundreds or thousands of high severity vulnerabilities, and some teams never get them all closed because new vulnerabilities are continually being uncovered. Yet many medium and low severity vulnerabilities have multiple known exploits that have been incorporated into attack tools, some of them well known and easily available. The importance of eliminating these soft targets cannot be stressed enough.
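To make the idea concrete, here is a minimal sketch of exploit-aware prioritisation: vulnerabilities with known exploits come first, and CVSS score is only the tie-breaker within each group. The field names (`cvss`, `known_exploits`) and the sample findings are illustrative assumptions, not any particular scanner's output format.

```python
def prioritise(vulns):
    """Order vulnerabilities so those with known exploits come first,
    then by CVSS score within each group (highest first)."""
    return sorted(vulns, key=lambda v: (-v["known_exploits"], -v["cvss"]))

findings = [
    {"id": "CVE-A", "cvss": 9.8, "known_exploits": 0},  # severe but academic
    {"id": "CVE-B", "cvss": 5.4, "known_exploits": 3},  # medium, weaponised
    {"id": "CVE-C", "cvss": 7.5, "known_exploits": 1},
]

for v in prioritise(findings):
    print(v["id"], v["cvss"], v["known_exploits"])
```

Note that the medium severity CVE-B floats to the top of the to-do list because it has multiple known exploits, while the critical-looking CVE-A drops to the bottom.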
This still yields a lot of data, and in many environments there is more to be done. What we really need is a tool that can go beyond straight correlation and begin to analyse the incoming data to identify the users and systems that are high risk in our environment, firmly based on perceived business risk and not some fictional ideal. A tool that can baseline your environment and highlight the users and systems that deviate from the norm - even within the context of those users or systems themselves - is likely to really help.
Rapid visibility of the highest-risk elements in your environment, with the ability to see the underlying events that contribute to the assessment, is important. This effectively eliminates the need to wade through lists of vulnerabilities trying to identify what to fix next, and allows you to tackle the highest risk assets with the most exploitable vulnerabilities first - meaning that you're always closing the biggest holes first and delivering the best protection to your organisation.
When you combine tools like this with time-based reporting and risk-based assessments, not only do you gain efficiency in your activity, but it also becomes easier to communicate that success to your business leaders. Giving your management teams statistics on the number of high, medium and low severity vulnerabilities you have, and how many you fixed last week/month/year, means nothing to them. In many ways they mean little to us as well, unless we can relate them to IT risk. A statement such as "we reduced average asset risk by 50 per cent last year" will suddenly grab their attention and their understanding. That is talking business rather than technical - bridging one of the biggest gaps in our industry. Adding the ability to forecast risk reduction against future activity allows us to become full participants in business operations.
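The kind of headline figure described above is easy to derive once per-asset risk scores exist. The following sketch assumes invented sample scores for the same set of assets at the start and end of a reporting period; how a real tool scores each asset is outside the scope of this example.

```python
def average_risk(scores):
    """Mean risk score across all assets."""
    return sum(scores) / len(scores)

def risk_reduction(before, after):
    """Percentage reduction in average asset risk between two periods."""
    b, a = average_risk(before), average_risk(after)
    return (b - a) / b * 100

last_year = [80, 60, 70, 90]   # per-asset risk scores, start of year (sample data)
this_year = [40, 30, 35, 45]   # same assets after a year of remediation

print(f"Average asset risk reduced by {risk_reduction(last_year, this_year):.0f} per cent")
# prints: Average asset risk reduced by 50 per cent
```

A single trend line built from numbers like these communicates far more to a management audience than a raw count of open vulnerabilities ever will.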
IT has promised the ideal of "working smarter, not harder" for decades - now it's delivering on that promise to the business.
Learn more about threat analytics for privileged users and critical assets.