If your vulnerability management (VM) processes are like most, you're drowning in information and wondering whether your scanning and reporting tools are revealing true risks or sending every tiny issue your way for review.
Unfortunately, getting alerts for every low-level vulnerability and false positive is still standard practice. To free themselves from this onslaught of data, many IT teams are searching for a better way to make sure their vulnerability programs deliver real value amidst all the noise.
BeyondTrust recently joined Dave Shackleford, founder of Voodoo Security and SANS senior instructor, for a webcast looking at practical guidance for making your VM program more effective today. Here’s a summary of key takeaways from the presentation, plus a link to the webcast recording.
Finding context for what’s most important
One of the first challenges in effective vulnerability management is isolating what's important from the reams of vulnerability data you collect. Scanners often detect low-level findings that aren't important in the big picture, such as scan details, OS fingerprinting, SSL/TLS ciphers, self-signed certificates, and web server internal IP disclosure.
Sometimes – though not always – this is redundant information you don't need from your vulnerability scan, and you can reduce VM noise by suppressing it. Shackleford provides three guiding questions to help you decide which vulnerability factors to suppress and which to keep (a simple suppression sketch follows the list):
- Is the information important to report to stakeholders?
- Is the vulnerability or information useful for remediation?
- Will we act upon this, and where does it fall in terms of priorities?
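To make the idea concrete, here is a minimal sketch of what suppressing noisy findings before reporting might look like. The finding fields, severity labels, and suppression list are illustrative assumptions, not a specific scanner's output format; adapt them to whatever your scanner actually exports.

```python
# Hypothetical example: filter informational checks out of scan results
# before they reach stakeholders.

# Informational checks that rarely matter in the big picture (per the list above)
SUPPRESSED_CHECKS = {
    "OS fingerprinting",
    "SSL/TLS cipher enumeration",
    "Self-signed certificate",
    "Web server internal IP disclosure",
}

def keep_finding(finding):
    """Apply the three guiding questions as a coarse filter."""
    if finding["check"] in SUPPRESSED_CHECKS:
        return False          # not worth reporting to stakeholders
    if finding["severity"] == "info":
        return False          # nothing to remediate or act upon
    return True               # actionable -- keep it for prioritization

# Example scan export (made-up data)
findings = [
    {"host": "10.0.0.5", "check": "OS fingerprinting", "severity": "info"},
    {"host": "10.0.0.5", "check": "Outdated OpenSSL", "severity": "high"},
    {"host": "10.0.0.9", "check": "Self-signed certificate", "severity": "low"},
]

actionable = [f for f in findings if keep_finding(f)]
print(actionable)  # only the outdated OpenSSL finding survives
```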
The most important information to gather and analyze is the information that is most useful to your stakeholders. Processing more data than your context requires can obstruct the analysis of data that is actually valuable – wasting time and money.
Identifying participants in the VM process
Your organization may have five or more groups that need to be in the vulnerability data loop, such as system owners and system custodians, departmental support staff, developers and QA teams, security teams, and business unit management teams.
Once you have identified these key users, you need to figure out what kind of data they want and what kind of data is most valuable to them. For example, for participants collecting system inventory data, IP addresses and system DNS names might be the most useful; for others, process and service inventory data might matter more. Looking at the context of the data with regard to the participants involved in the process will help you decide your next steps.
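One way to picture this is tailoring each finding to the audience consuming it. The sketch below is a rough illustration under assumed audience names and field lists, not a prescribed schema; align it with the participants you identified in your own process.

```python
# Hypothetical example: project a full vulnerability record down to the
# fields each audience actually uses.

AUDIENCE_FIELDS = {
    "system_owners": ["host", "dns_name", "ip_address"],      # inventory view
    "support_staff": ["host", "process", "service", "port"],  # operational view
    "security_team": ["host", "check", "severity", "cvss"],   # risk view
}

def view_for(audience, finding):
    """Return only the fields relevant to the given audience."""
    return {k: finding.get(k) for k in AUDIENCE_FIELDS[audience]}

finding = {
    "host": "web01", "dns_name": "web01.example.com", "ip_address": "10.0.0.5",
    "process": "nginx", "service": "https", "port": 443,
    "check": "Outdated OpenSSL", "severity": "high", "cvss": 8.1,
}

print(view_for("system_owners", finding))
print(view_for("security_team", finding))
```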
Weaving VM into day-to-day operations
The final and most important challenge to overcome is weaving VM into your organization's broader day-to-day operations. Shackleford notes that an important aspect of making VM approachable is delivering regular, condensed vulnerability reports. These can include the top ten or twenty issues found, explicit technical details, remediation guidance and risks, and alternatives and options. The purpose of this report is to prioritize the data for your stakeholders and make the value of your VM software and process clear.
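As a rough illustration of a condensed "top N" summary, here is a minimal sketch. The field names, the CVSS-based ranking, and the output format are assumptions for illustration only; a real report would also carry the remediation guidance and alternatives described above.

```python
# Hypothetical example: rank issues by worst CVSS score and count affected hosts.
from collections import Counter

findings = [
    {"host": "web01", "check": "Outdated OpenSSL", "cvss": 8.1},
    {"host": "db02",  "check": "Unpatched database engine", "cvss": 9.0},
    {"host": "web01", "check": "Weak TLS configuration", "cvss": 5.3},
]

def top_issues(findings, n=10):
    """Return the top-n issues ranked by worst CVSS, with affected-host counts."""
    counts = Counter(f["check"] for f in findings)
    worst = {}
    for f in findings:
        if f["check"] not in worst or f["cvss"] > worst[f["check"]]:
            worst[f["check"]] = f["cvss"]
    ranked = sorted(worst.items(), key=lambda kv: kv[1], reverse=True)
    return [(check, score, counts[check]) for check, score in ranked[:n]]

for check, score, hosts in top_issues(findings, n=10):
    print(f"{check}: CVSS {score}, {hosts} host(s) affected")
```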
Much of the process of making VM more effective and accessible is to provide context and prioritization for your key stakeholders. If you want to learn more about keeping your vulnerability management processes efficient and effective in the current threat environment, check out the complete webcast below:
Surviving the Vulnerability Data Maelstrom with Dave Shackleford from BeyondTrust on Vimeo.

Chris Burd
Chris brings over 20 years of technology sales and marketing experience to BeyondTrust, where he is responsible for corporate communications and digital marketing. Prior to BeyondTrust, Chris led marketing communications at Core Security, where he managed the company's positioning, branding, and inbound marketing initiatives.