Last week marked an exciting first: ransomware on a Mac, disclosed by Palo Alto Networks after seeing it with a client. What’s simultaneously encouraging and disappointing is that it could have been prevented, without relying on detection, by a highly recommended best practice: application allow listing (yes, allow listing is possible today; we do it with many of our clients, as do other providers).
Just 10 years ago, malware attacks came straight at us as scripts or executables, delivered on USB devices, through email, or via links on websites. We compensated by securing removable devices, filtering email attachments and content, and securing web browsers as much as we could without entirely disrupting productivity.
Today, attackers resort to more sophisticated tactics, penetrating applications that are exposed to content coming from the internet. Meta-analysis of today’s malware threats reveals that attacks tend to occur in multiple stages. The first stage often exploits in-memory vulnerabilities in web browsers, browser plugins and the applications that open business documents. The intent of the first stage is usually to download and execute a secondary payload, typically a script or executable. This pattern is necessary because it’s increasingly difficult to “stay in memory” while carrying out sophisticated lateral movement or data theft.
So we’re right back where we were 10 years ago: stopping malware scripts and binaries from running!
Why do we continue to fail at this, when it can be stopped quite effectively with a well-implemented application allow listing program? Allow listing has been notoriously difficult to operationalize in the enterprise due to the vast range of software in use and the continual change. How do we know which applications to trust, and which to stop?
To address this issue of trust, the concept of code-signing certificates was introduced. Microsoft Authenticode is the most prevalent, and Apple has its own. In theory, code signing proves that the application you’re installing or running came from a provider verified as a real entity accountable for the product. Current cryptographic signatures are, for all practical purposes, impossible to forge.
The provider has a “secret” (a private cryptographic key, conceptually a very long, complex password) used to sign the application with algorithms everyone knows, and anyone can check the result with a public verification algorithm. It’s fine that the algorithms are public, because it’s the secret that makes the scheme secure. This is much like pressing your royal seal into the wax on an envelope in medieval times: recreating the royal seal was out of reach then, and today it’s not practical to derive the key used to sign software. That’s why people steal the code signing certificates instead. With a stolen certificate, attackers can make their malware appear as though a valid software vendor produced it. Trusting any signed software would be a mistake; trusting any software signed by our known vendors could be a mistake too.
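To make the sign-and-verify dance concrete, here’s a toy sketch using textbook RSA with deliberately tiny, well-known primes. This is an illustration of the concept only, not real code signing, which uses keys thousands of bits long plus a certificate chain; every name and number here is made up for the example.

```python
# Toy illustration (NOT real code signing): a textbook RSA signature over a
# file hash. The algorithms are public; only the private exponent d is secret,
# yet anyone holding the public key (n, e) can verify the signature.
import hashlib
import math

# Two small, well-known primes; real signing keys are thousands of bits long.
p, q = 999983, 1000003
n = p * q                                 # public modulus
e = 65537                                 # public exponent
d = pow(e, -1, math.lcm(p - 1, q - 1))    # private exponent: the "secret"

def digest(code: bytes) -> int:
    # Condense the binary to an integer smaller than n (truncated SHA-256).
    return int.from_bytes(hashlib.sha256(code).digest()[:4], "big")

def sign(code: bytes) -> int:
    # Only the holder of the secret d can produce this value.
    return pow(digest(code), d, n)

def verify(code: bytes, signature: int) -> bool:
    # Anyone can check using only the public key (n, e).
    return pow(signature, e, n) == digest(code)

app = b"application binary"
sig = sign(app)
print(verify(app, sig))             # True: signed by the secret holder
print(verify(app, (sig + 1) % n))   # False: a forged signature fails
```

Note what the sketch does not protect against: if an attacker steals `d`, every signature they produce verifies perfectly, which is exactly the stolen-certificate problem described above.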
The KeRanger ransomware relied on a stolen certificate to bypass Apple’s Gatekeeper protections. Once it was discovered, Apple revoked the certificate, so it was no longer trusted and the malware would not be allowed to run on OS X. It’s a good thing this malware wasn’t designed to hide in the background and quietly steal your corporate data; it might not have been discovered so easily! Of course, our malware-writing friends can steal another code signing certificate and carry on with the same style of attack.
The problem is that we’re verifying applications based on a single secret. In the security industry, we’re actively looking to replace passwords (something that can be stolen) as the sole means of authenticating users because it’s too easily circumvented. Why are we not thinking the same way about the applications that inherit the privileges of users once they log on?
I love that Microsoft is making strides in security with Windows 10’s Device Guard feature (among others). However, Device Guard is fundamentally based on the premise that code signing certificates and the chain of trust remain secure. I don’t have to circumvent Windows security to bypass Device Guard; I just need to steal someone’s valid code signing certificate and use it. If you’re able to operationalize the full capabilities of Device Guard, it’s unlikely you’ll fall victim. I’ll let you know when I see a client with a fully realized Device Guard implementation: great tech, big fan here, though I’m not seeing a practical way to use it in the enterprise just yet. To their credit, Microsoft has a great whitepaper on securing code signing certificates and using them safely, as do others such as CA and Thawte.
What’s my point? Application authentication needs a reboot. Let’s add another measure of authenticity beyond code signing, since on its own it amounts to single-factor authentication. We should authenticate both the vendor (the holder of the certificate) and the application itself. I’d been thinking maybe I should start a company and do this, but it occurred to me that we already have authorities on such matters, including the US National Institute of Standards and Technology (NIST), which maintains the National Software Reference Library (NSRL). It’s exactly what it states: a reference library of authentic software, used in legal matters and wherever you have to be really sure which application you have at hand. Whether or not the application is code-signed, you can verify it is exactly what it claims to be by comparing its hash (a unique condensed representation of the application binary) to the known hash for that application.
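A minimal sketch of that hash check, with a made-up reference set standing in for an NSRL-style library (the real NSRL distributes reference data sets; nothing here is its actual format or API):

```python
# Verify a binary by hash: compute the file's SHA-256 and look it up in a
# reference set of known-good hashes. The reference data here is invented.
import hashlib
import tempfile
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Hash the binary in chunks so large executables needn't fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical excerpt of a reference library of authentic software.
known_good = {
    hashlib.sha256(b"trusted app v1.0 binary").hexdigest(),
}

# Simulate an application binary on disk.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"trusted app v1.0 binary")
    path = Path(f.name)

print(file_sha256(path) in known_good)   # True: matches the reference library
```

The key property: this check works whether or not the binary is code-signed, because it compares the artifact itself against an independently maintained reference, not a certificate.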
Another potential approach is for the certificate-issuing authority to verify both the certificate’s validity and that the application hash was registered by the certificate owner (with strong authentication of the certificate owner to the CA using separate secrets). I saw that Verisign now offers code signing services. This is a step toward being an authority on both publisher identity (the certificate) and the artifacts knowingly signed by the publisher. This approach also greatly reduces the risk of a software provider being compromised and losing control of its certificates. It does have the implication of consolidating all application authentication into a single mechanism, though, and I prefer the idea of independent verification via different channels.
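Here’s a sketch of what that two-factor application check could look like. The vendor name and registry are hypothetical, and the signature check is stubbed to always pass, modeling an attacker who holds a stolen but still-valid certificate; the point is that the second factor catches what the first cannot.

```python
# "Two-factor" application authentication sketch: run a binary only if
# (1) its code signature verifies AND (2) its hash appears among the
# artifacts the vendor registered with the issuing authority.
import hashlib

# Factor 2: hashes each certificate owner registered with the CA, delivered
# over a channel separate from the code-signing secret. Data is invented.
CA_REGISTRY = {
    "AcmeSoft": {hashlib.sha256(b"acme-editor-2.1 binary").hexdigest()},
}

def signature_valid(binary: bytes, vendor: str) -> bool:
    # Stand-in for real verification (e.g. Authenticode). Stubbed to True:
    # the attacker is using a stolen, not-yet-revoked certificate.
    return True

def allowed_to_run(binary: bytes, vendor: str) -> bool:
    signed_ok = signature_valid(binary, vendor)             # factor 1
    registered = (                                          # factor 2
        hashlib.sha256(binary).hexdigest() in CA_REGISTRY.get(vendor, set())
    )
    return signed_ok and registered

genuine = b"acme-editor-2.1 binary"
malware = b"ransomware signed with AcmeSoft's stolen certificate"
print(allowed_to_run(genuine, "AcmeSoft"))   # True: signed AND registered
print(allowed_to_run(malware, "AcmeSoft"))   # False: signed, never registered
```

A stolen certificate defeats factor 1, but the attacker would separately have to register the malware’s hash with the CA, authenticated as the vendor, to defeat factor 2.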
If Google can organize the world's information and make it universally accessible and useful, it should be possible to organize the world’s software and verify it’s legitimate.