Over the past several years, as sophisticated nation-state cyberthreats have become a reality and attacks on critical infrastructure have become commonplace, the concept of offensive cybersecurity has gained more mainstream traction. For almost all organizations, public and private, traditional defensive cybersecurity has been, and will always be, the only viable approach. Adopting an offensive approach, much like a boxer cornering an opponent in the ring, carries risks that may be unacceptable, or even illegal, if the punch is not delivered with extreme precision. Actively fighting back is a very risky move in cybersecurity.
By definition, offensive cybersecurity is a proactive approach that involves launching cyberattacks against adversaries to disrupt or cripple their operations and to deter future attacks. This approach is sometimes referred to as “hacking back” and relies on accurately attributing the attacks being conducted against you. In general, the targets of cyber offensives are threat actors that have been identified as launching cyberattacks against you or your organization.
As every security professional should know, hacking back is not a trivial exercise, and the approach can be riddled with flaws. Currently, the practice of hacking back remains illegal in the United States because it would violate the Computer Fraud and Abuse Act (CFAA), first enacted in 1986. However, a bill was recently introduced in the U.S. Congress that would allow organizations to take offensive actions against intruders in their IT networks.
The largest problem with any offensive cybersecurity strategy is the risk, or perceived risk, of launching an attack by mistake. A full-fledged cyber offensive could inflict devastation comparable in scale to a conventional war or a nuclear bomb. This is not farfetched. If an attack struck critical infrastructure (CI) or agencies like the FAA, we could see poisoned water supplies, massive loss of power, and even manipulation of civil aircraft. These are the risks of any offensive attack at scale.
For instance, imagine a company mistakenly targeting CI because it believes the attack it is experiencing originates there. What if CI was “owned” and used to launch DDoS attacks, as with the Mirai botnet, and someone decided to hack back against the CI itself? I think you can see the flaws of this offensive, retaliatory approach. Moreover, the ramifications of an inappropriate “punch” back could trigger an escalation that many organizations are not prepared to handle, either technically or legally.
Next, consider the growing use of artificial intelligence (AI), particularly with regards to IT security automation and orchestration. AI is based on machine learning algorithms: programs that learn from examples and formulate results derived from statistics or other models. While AI lacks any concept of good or bad, it can be programmed with parameters to differentiate between “good” and “bad” behaviors or desired outcomes. The problem is that AI, like a young child, can learn bad behavior and initiate a very undesirable response, much like a temper tantrum. If AI is allowed to automatically attack back, a cyberwarfare scenario could quickly escalate beyond control. This is not a doomsday SkyNet scenario; it is more like multiple network cards jabbering on a 10BASE2 (yes, old school) network, drowning out all legitimate communications.
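To make that escalation dynamic concrete, here is a minimal, hypothetical sketch of two automated “attack back” policies feeding each other with no human brake. The retaliation factor and traffic units are invented purely for illustration:

```python
# Hypothetical simulation: two organizations with automated "attack back"
# policies. Each retaliates to observed hostile traffic with proportionally
# more traffic of its own, so the feedback loop escalates on its own.

def simulate(rounds: int, retaliation_factor: float = 1.5) -> None:
    traffic_a_to_b = 1.0  # initial suspicious traffic (arbitrary units)
    traffic_b_to_a = 0.0
    for r in range(1, rounds + 1):
        # B's automation "hacks back" in proportion to what it observed.
        traffic_b_to_a = retaliation_factor * traffic_a_to_b
        # A's automation does exactly the same in return.
        traffic_a_to_b = retaliation_factor * traffic_b_to_a
        print(f"round {r}: A->B {traffic_a_to_b:8.1f}  B->A {traffic_b_to_a:8.1f}")

simulate(rounds=6)
```

After only a handful of rounds, the traffic grows geometrically. No one “decided” to escalate; the automated policies simply fed each other.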
As a real-world example, consider streaming video. The desired result is clear: multicast packets delivered to every target subscribing to the stream. If an in-line network device corrupts those packets due to a hardware or software fault, or another attack, the received packets could be malformed. AI could misconstrue these malformed packets as an attack, or as the attempted exploitation of a vulnerability. Web content filtering solutions today can easily make this mistake when something as simple as the source of the video stream is not recognized. Think this sounds crazy? It is exactly what signature-based intrusion detection system (IDS) solutions do today. If intrusion prevention system (IPS) engines are empowered to take automated actions on that data, the result could be termination of the stream or, worse, an attack back against the source.
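To illustrate how a corrupted but benign packet can trip a signature, here is a toy sketch of signature-based matching. The byte patterns and packet contents are entirely hypothetical and far simpler than any real IDS ruleset:

```python
# Hypothetical sketch of naive signature-based detection: a corrupted but
# benign multicast video packet can match a byte-pattern "exploit" signature,
# so an automated IPS action would fire on a hardware fault, not an attack.

SIGNATURES = {
    b"\x90\x90\x90\x90": "NOP sled (possible shellcode)",
    b"\xff\xff\xff\xff": "malformed length field (possible overflow)",
}

def inspect(packet: bytes) -> str | None:
    for pattern, name in SIGNATURES.items():
        if pattern in packet:
            return name  # signature hit: the packet *looks* hostile
    return None

# A well-formed RTP-style video payload...
good_packet = b"\x80\x60" + b"video-payload" * 4
# ...and the same packet after an in-line device corrupts a few bytes.
corrupted_packet = good_packet[:10] + b"\xff\xff\xff\xff" + good_packet[14:]

for pkt in (good_packet, corrupted_packet):
    verdict = inspect(pkt)
    print("ALERT:" if verdict else "clean:", verdict or "no signature matched")
```

The corrupted packet raises an alert even though no attacker was involved. Wire that verdict directly to an automated offensive response, and a flaky network device becomes a casus belli.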
The scenario of triggered automated responses outlined above is why even conventional warfare is governed by strict controls and rules of engagement. Automated threat responses, especially offensive ones, should always be verified and never trusted as is. Otherwise, the implications could be life-threatening.
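The same verification principle can be expressed in code. Below is a minimal, hypothetical sketch (the action names and the “safe” list are assumptions, not any vendor’s API) of an automation policy that executes defensive, reversible actions on its own but queues anything offensive or disruptive for human review:

```python
# Hypothetical sketch: detections may alert or quarantine automatically, but
# any offensive/disruptive action is queued for a human decision instead of
# executing on its own. All names here are illustrative.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    kind: str      # e.g. "alert", "quarantine_host", "terminate_stream"
    target: str
    reason: str

SAFE_AUTOMATED = {"alert", "quarantine_host"}   # defensive and reversible
pending_review: list[ProposedAction] = []       # queue for a human analyst

def dispatch(action: ProposedAction) -> None:
    if action.kind in SAFE_AUTOMATED:
        print(f"auto-executing {action.kind} on {action.target}: {action.reason}")
    else:
        # Anything disruptive or offensive never runs unattended.
        pending_review.append(action)
        print(f"held for human review: {action.kind} on {action.target}")

dispatch(ProposedAction("quarantine_host", "10.0.0.12", "signature hit"))
dispatch(ProposedAction("terminate_stream", "239.1.1.1", "malformed packets"))
```

The design choice is deliberate: automation handles scale, while the irreversible decisions stay with a human.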
While automation in many forms is helping IT and IT security solve issues of scalability and efficiency, due caution should always be exercised with technologies that offer full automation, especially of an offensive nature. This caution should be even greater for automation technologies and platforms governed by AI, where the logic that initiates a response may not even be explainable. Some highly sensitive areas of decision-making are probably always best left to humans, as imperfect as we are.
In reality, the Internet is fragile; using it for a cyber offensive or for hack-back initiatives is a terrible idea. Actions and reactions can rapidly spiral out of control, and AI with automation could make it substantially worse. In this security professional’s opinion, stick with the best defensive IT security technologies and avoid the hype, legal issues, and potential harm of taking an offensive cybersecurity posture. Leave the offensive approach to your government and its cybersecurity programs.
Additional Reading
What Public Disclosures are Revealing about Cyber War Threats (blog)
Mapping BeyondTrust Solutions to NERC Critical Infrastructure Protection (CIP) (white paper)
Four Pillars to Securing UK Critical National Infrastructure (white paper)

Morey J. Haber, Chief Security Officer, BeyondTrust
Morey J. Haber is the Chief Security Officer at BeyondTrust. He has more than 25 years of IT industry experience and has authored three books: Privileged Attack Vectors, Asset Attack Vectors, and Identity Attack Vectors. He is a founding member of the industry group Transparency in Cyber, and in 2020 was elected to the Identity Defined Security Alliance (IDSA) Executive Advisory Board. Morey currently oversees BeyondTrust security and governance for corporate and cloud-based solutions and regularly consults for global periodicals and media. He originally joined BeyondTrust in 2012 as part of the eEye Digital Security acquisition, where he had served as a Product Owner and Solutions Engineer since 2004. Prior to eEye, he was a Beta Development Manager for Computer Associates, Inc. He began his career as a Reliability and Maintainability Engineer for a government contractor building flight and training simulators. He earned a Bachelor of Science degree in Electrical Engineering from the State University of New York at Stony Brook.