The title alone should spark some controversy with information technology administrators. While modern versions of Windows can run for months at a time before a required reboot, that was not always the case, and even today, going months without one is still not a security best practice.
Early desktop and server versions of Windows suffered from design flaws, memory leaks, and poor drivers that required frequent reboots. Administrators would implement tools to automatically cycle services, reboot hosts, and even flush temporary files, just to keep the system operating for as long as possible before a reboot. This may sound completely foreign to newer administrators (just like using a pay phone might be to Gen Z), but Windows was not always as robust and as secure as it is today.
The early days of distributed computing required all sorts of tools and workarounds to maintain availability. Fortunately, times have changed, but periodic reboots are still required and the longer you wait, the higher your cyber risk exposure.
Microsoft Has Been Consistent in Releasing Security Patches—You Should Be Consistent in Applying Them
To understand the problem, let's review some of the key risks. Microsoft's patch schedule is unofficially known as Patch Tuesday. Since October 2003 (yes, 15 years ago), Microsoft has scheduled patch releases on the second Tuesday of each month. Barring exceptions for zero-day patches and updates to security tools like Defender, the release schedule provides a predictable cadence for information technology and security teams to assess for vulnerabilities and missing security patches, and to apply patches, which many times require a reboot.
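The cadence is simple enough to compute: the second Tuesday falls exactly one week after the first Tuesday of the month. As a minimal sketch (the function name is illustrative, not part of any Microsoft API), in Python:

```python
from datetime import date, timedelta

def patch_tuesday(year: int, month: int) -> date:
    """Return the second Tuesday (Patch Tuesday) of the given month."""
    first = date(year, month, 1)
    # Days from the 1st to the first Tuesday; Monday=0, so Tuesday is weekday 1.
    offset = (1 - first.weekday()) % 7
    # The second Tuesday is one week after the first.
    return first + timedelta(days=offset + 7)

# The very first Patch Tuesday: October 14, 2003.
print(patch_tuesday(2003, 10))  # -> 2003-10-14
```

A scheduling tool built on this could, for example, queue maintenance windows a fixed number of days after each release date.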
Depending on configuration management, downtime due to a reboot, potential incompatibilities, and change control requirements within an organization, these patches can be delayed for weeks or months. This is the obvious risk: the longer it takes to apply these patches and reboot, the higher the risk of potential exploitation. Applying the patches alone without rebooting (in most cases) does not protect the host, and the resulting incomplete state of remediation can open other attack vectors.
As simple as it sounds, patches from Microsoft should be applied shortly after the Patch Tuesday release. If an organization waits more than 30 days to remediate critical vulnerabilities, it risks falling out of compliance with regulations like PCI DSS. While security professionals may argue that most devices are not in PCI DSS scope, and therefore not subject to the 30-day rule, I would encourage them to reconsider their security policies. Attacks against critical resources rarely target that infrastructure directly any longer. Modern attacks typically leverage unpatched endpoints, poor privileged access management practices, and configuration mistakes, which allow a threat actor to gain a foothold and move laterally to extract sensitive information via an endpoint.
Since 2003, the motivations of threat actors have evolved considerably. Ten to 15 years ago, script kiddies and other attackers had a more mischievous bent, causing cyber disruptions for bragging rights. Today, more common motives include monetizing stolen information, hacktivism (hacking for a cause, such as to embarrass a target), and state-sponsored cyber warfare to impair a target's infrastructure and economy, or to destabilize it politically.
Microsoft has remained consistent in releasing security updates approximately every 30 days. The longer the lag time before an organization applies the patches, the greater the window of cyber risk. I encourage organizations to plan for Microsoft Windows reboots every 30 days as a part of their change management practices. And most importantly, apply the patches before the scheduled reboots on desktops, servers, and even in the cloud. This does not necessarily mean applying the updates as soon as they come out. While immediately applying updates provides the best protection, it also presents a heightened risk of incompatibilities, which may not be a good tradeoff. With this in mind, strive to apply the patches on a monthly schedule, even if it takes a month or two to test for incompatibilities from previous releases.
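The 30-day reboot check itself is trivial to automate. The sketch below assumes a policy window of 30 days and takes the last boot time as an input; the function name and constant are illustrative, not any product's API. On a live Windows host, the last boot time might be obtained from, for example, psutil's `boot_time()` or WMI, but passing it in keeps the check portable:

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical policy window: every host should reboot at least every 30 days.
REBOOT_INTERVAL = timedelta(days=30)

def reboot_due(last_boot: datetime, now: Optional[datetime] = None) -> bool:
    """Return True when a host's uptime meets or exceeds the policy window."""
    now = now or datetime.now()
    return (now - last_boot) >= REBOOT_INTERVAL
```

A configuration-management job could run this per host and queue any flagged machines for the next approved maintenance window.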
The simplest recommendation from this blog is, as the title states: reboot your Windows machines every 30 days, and apply the latest business-approved patches before each reboot, to minimize the risk from vulnerabilities and potential exploitation. The longer you delay, the higher the likelihood of an undesirable security outcome.
BeyondTrust Makes Vulnerability Management Seamless
At BeyondTrust, we can help simplify and optimize the vulnerability management lifecycle—from vulnerability assessment, to vulnerability scanning, to risk prioritization, to remediation and beyond. Our Enterprise Vulnerability Management solution can assess for missing patches using a network scanner or agent-based technology. Its unique integration with SCCM and WSUS allows for automated patch deployments to Windows hosts based on its findings, and for the scheduling of deployments and reboots to maintain compliance and minimize risk. This can streamline the process of rebooting Windows every 30 days and make the task an efficient, effective, and standard business practice for your organization.
The Forrester Wave™: Vulnerability Risk Management, 2018 (analyst research report)
Change the Game in Vulnerability Management (white paper)
Morey J. Haber, Chief Technology Officer and Chief Information Security Officer at BeyondTrust
Morey J. Haber is Chief Technology Officer and Chief Information Security Officer at BeyondTrust. He has more than 25 years of IT industry experience and has authored four Apress books: Privileged Attack Vectors (2 editions), Asset Attack Vectors, and Identity Attack Vectors. In 2018, Bomgar acquired BeyondTrust and retained the BeyondTrust name. He originally joined BeyondTrust in 2012 as a part of the eEye Digital Security acquisition. Morey currently oversees BeyondTrust strategy for privileged access management and remote access solutions. In 2004, he joined eEye as Director of Security Engineering and was responsible for strategic business discussions and vulnerability management architectures for Fortune 500 clients. Prior to eEye, he was Development Manager for Computer Associates, Inc. (CA), responsible for new product beta cycles and named customer accounts. He began his career as a Reliability and Maintainability Engineer for a government contractor building flight and training simulators. He earned a Bachelor of Science degree in Electrical Engineering from the State University of New York at Stony Brook.