In physics, the observer effect asserts that the mere observation or measurement of a system or event unavoidably alters the very thing being observed or measured. In other words, the tools or methods used for measurement interfere in some way with the system or event they are measuring.
As one example, consider the measurement of voltage across a circuit or battery. A voltmeter must draw a very small, but measurable, amount of current in order to take the reading. This lowers the overall current (I) ultimately available to the system. If the meter were intrusive, or did not have a sufficiently high internal resistance (R), then by Ohm's law (V = IR) the available current and the voltage (V) being measured could be impacted as well.
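As a rough illustration (the component values below are arbitrary, chosen only to show the effect), the loading caused by a voltmeter can be estimated by treating the meter's internal resistance as a divider with the source resistance of the circuit being measured:

```python
# Illustrative only: arbitrary values showing how a voltmeter's internal
# resistance changes the very voltage it is trying to measure.

def measured_voltage(v_source, r_source, r_meter):
    """Voltage seen across a meter with finite internal resistance.

    The meter forms a divider with the source resistance, so the reading
    is always slightly lower than the unloaded (true) voltage.
    """
    return v_source * r_meter / (r_source + r_meter)

v_source = 12.0              # volts (ideal, unloaded)
r_source = 1_000.0           # ohms of source/output resistance
ideal_meter = 10_000_000.0   # 10 megohm input: nearly invisible to the circuit
poor_meter = 10_000.0        # 10 kilohm input: an intrusive measurement

print(measured_voltage(v_source, r_source, ideal_meter))  # ~11.999 V
print(measured_voltage(v_source, r_source, poor_meter))   # ~10.909 V
```

The less intrusive the meter, the closer the observed value is to the true one; the same principle carries over to any measurement, including security measurements.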
While the effects of observation are commonly negligible, the object under observation still experiences a change. Sometimes these changes can alter our perception of the entire system because the measurement itself turns out to be far more intrusive than anticipated or than it was initially designed to be. This effect can be found in domains ranging from physics and electronics to digital marketing. This blog will delve into the role the observer effect plays in the realm of cybersecurity.
The Cybersecurity Observer Effect
Every measurement used for a cybersecurity check impacts the overall system. This is true for everything from a simple antivirus check to the resources consumed by logging. CPU utilization, load time, memory, network traffic, and more can each be altered in the course of providing a security measurement for some activity.
Ideally, a security measurement operates with little to no impact, but how often is this really the case? Can true, no-impact security actually be successfully implemented within an environment? The answer may surprise you.
As we have established, every IT security measurement alters a system and, in so doing, adds time to a process. If the measurements run serially, each one must contribute its piece of required information to the overall measured response before the next can begin, so their delays add up before an observable outcome can be calculated.
However, when measurements and logical decisions are performed in parallel (provided the system has enough resources to perform them simultaneously), the time needed to take a measurement can be reduced and the perceived impact curtailed, because the measurement completes within a fixed, finite timeframe. This is basically parallel processing. To achieve no-impact security (truly, we are talking about minimal-impact or frictionless, zero-impact security), security measurements and operational logic should occur alongside regular processes rather than being required before any user action can proceed.
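To make the serial-versus-parallel point concrete, here is a minimal sketch; the check names and timings are invented purely for illustration. Running independent security checks concurrently bounds the added latency to roughly the slowest single check rather than the sum of all of them:

```python
# Illustrative sketch: independent security checks run serially vs. in parallel.
# The check names and sleep durations are made up to show the timing difference.
import time
from concurrent.futures import ThreadPoolExecutor

def antivirus_scan():
    time.sleep(0.3)   # pretend this takes 300 ms
    return "av: clean"

def policy_check():
    time.sleep(0.2)   # pretend this takes 200 ms
    return "policy: allowed"

def audit_log_write():
    time.sleep(0.1)   # pretend this takes 100 ms
    return "audit: recorded"

checks = [antivirus_scan, policy_check, audit_log_write]

# Serial: total added latency is the sum of all checks (~600 ms).
start = time.perf_counter()
serial_results = [check() for check in checks]
print(f"serial:   {time.perf_counter() - start:.2f}s", serial_results)

# Parallel: added latency is roughly the slowest single check (~300 ms).
start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    parallel_results = list(pool.map(lambda check: check(), checks))
print(f"parallel: {time.perf_counter() - start:.2f}s", parallel_results)
```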
Consider the following two scenarios…
Scenario 1:
When multi-factor or two-factor authentication is required before access to a resource, the security tool introduces new steps into the workflow to validate the user. In addition to traditional credentials, a second factor is included to provide physical validation of the user. This adds time and resources, as well as a level of annoyance, for end users. Single sign-on (SSO) technology mitigates some of this annoyance by requiring two-factor authentication only once for a group of resources and then passing authentication through, since the user is already considered trusted.
As described above, the first two-factor launch initiated a workflow that validates the user for subsequent applications, rather than requiring them to complete two-factor authentication serially, over and over again. The single sign-on process now runs in parallel with the user's normal operations and, in fact, has a lower impact than requiring credentials each time an application is launched, even without two-factor authentication. The measurement of the user's trust was intrusive once, with additional steps, but everything afterward was made easier because of the high confidence in that initial measurement.
The alternative method is highly intrusive: it would require credentials and two-factor authentication for every application the end user launches, even while operating within the currently defined contextual policy. This underscores the necessity of parallel processing.
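A hedged sketch of the idea behind Scenario 1 follows; it is not any particular SSO or identity provider's API, and the session lifetime and function names are assumptions. The expensive two-factor validation happens once, a trust window is cached, and subsequent application launches simply reuse it until it expires:

```python
# Conceptual sketch of "measure trust once, reuse it": not a real SSO/IdP API.
import time

SESSION_LIFETIME = 8 * 60 * 60   # assumed 8-hour trust window (arbitrary)
_trusted_until = {}              # user -> expiry timestamp

def two_factor_login(user):
    """High-impact measurement: credentials plus a second factor."""
    print(f"{user}: prompt for password and push/OTP approval...")
    _trusted_until[user] = time.time() + SESSION_LIFETIME

def launch_app(user, app):
    """Low-impact path: reuse the earlier measurement while it is still valid."""
    if _trusted_until.get(user, 0) < time.time():
        two_factor_login(user)            # only intrudes when trust has expired
    print(f"{user}: opening {app} via SSO pass-through")

launch_app("alice", "email")       # triggers the one intrusive 2FA step
launch_app("alice", "crm")         # silent: trust is reused
launch_app("alice", "ticketing")   # silent: trust is reused
```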
Scenario 2:
Consider the password management capabilities within privileged access management (PAM) solutions. These solutions can automatically rotate passwords and certificates on a schedule, or based on usage, such that they are ever-changing and not a liability if known by an insider threat or external threat actor.
If a user or administrator needs to use these credentials, the typical workflow involves authenticating into the password safe or vault (hopefully using the two-factor authentication discussed above) and retrieving the credentials needed to perform a specific task. From a workflow perspective, simply measuring when privileged credentials are accessed by a user and providing the current password is intrusive to the end user: it means additional mouse clicks, time, and applications to complete the task. While this is the primary use case for measuring privileged access, it provides little security if we cannot reliably determine when and where the credentials are being used. This is a high-impact model that needs to change.
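To illustrate why the manual retrieval model is high impact, consider the steps a user performs before real work even begins. The steps and the function below are a hypothetical sketch, not any specific PAM vendor's workflow:

```python
# Hypothetical sketch of a manual password check-out workflow; the steps
# below are placeholders, not a real PAM product's procedure.

def manual_privileged_login(user, target_host):
    steps = [
        "open the password safe web console",           # an extra application
        "authenticate to the safe (ideally with MFA)",  # extra credentials
        f"search for the managed account on {target_host}",
        "check out the current password",               # the measurement point
        "copy the password into the RDP/SSH client",    # extra clicks
        f"finally connect to {target_host}",
    ]
    return steps

for step in manual_privileged_login("alice", "db-server-01"):
    print("-", step)
# Every step before the last one is observer-effect overhead: it exists to
# measure and control privileged access, not to accomplish the user's task.
```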
Another core capability of PAM platforms is session management. This capability provides a gateway, or proxy technology, into a host for monitoring sessions and potentially documenting all security and user activity. Session management is essentially a low-impact method of observing what actually happens during a privileged session, but to be effective it requires the remote connection to occur through the proxy rather than laterally.
Without proper access control lists (ACLs), password retrieval from a safe and subsequent remote access can occur with minimal security measurement. This is an undesirable state. When we consider password management and session management as a solution working in tandem, we can solve both problems and create a very low-impact security implementation.
With a technology called Password Safe Direct Connect, the tools used for a remote session can be configured to automatically authenticate to a target using security best practices and to communicate through a proxy that manages privileged risk, measures for inappropriate activity, and ensures the workflow does not negatively impact the user.
Here is how this low-impact technology works... With most remote access technologies, a user creates a profile or saved connection with key traits for the connection. This includes things like screen resolution, credentials, and custom connection string information. The latter is what is important here.
As a part of that custom connection string, the host, username, and several other switches are passed to the privileged access management session proxy to authenticate and start a remote session. This transfer includes information regarding the locally logged in user and other traits needed for proper user authentication.
Once the profile is launched, the credentials are automatically retrieved from the safe or vault and injected into the connection string, a measurement of their retrieval and usage is recorded, and the session is (optionally) monitored for inappropriate activity.
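Here is a minimal sketch of the injection idea. The profile fields, vault lookup, and audit logging below are invented for clarity and are not BeyondTrust's actual Direct Connect syntax; the point is that the saved profile carries references rather than secrets, and the proxy substitutes freshly retrieved credentials and records the measurement before the session starts:

```python
# Hypothetical illustration of credential injection at a session proxy.
# The data shapes and functions are assumptions made for this sketch only.
import datetime

VAULT = {"db-server-01/administrator": "S3cr3t-Rotated-Daily!"}  # stand-in for the safe

def inject_credentials(profile):
    """Resolve the managed account, record the measurement, build the session."""
    account_key = f"{profile['host']}/{profile['managed_account']}"
    password = VAULT[account_key]          # retrieval happens here, not by the user

    # The security measurement rides alongside the workflow instead of adding steps.
    print(f"[audit] {datetime.datetime.now().isoformat()} "
          f"{profile['requesting_user']} checked out {account_key}")

    return {
        "target": profile["host"],
        "username": profile["managed_account"],
        "password": password,              # injected; never typed or seen by the user
        "monitor_session": True,
    }

profile = {
    "requesting_user": "alice",
    "host": "db-server-01",
    "managed_account": "administrator",
}
session = inject_credentials(profile)
print(f"connecting to {session['target']} as {session['username']} through the proxy")
```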
The end user continues to use the same tools they use every day, like MS RDP or PuTTY, with negligible or low impact beyond the initial setup of their saved profiles. This is another parallel security implementation.
Privileged password management by itself can be intrusive to the password retrieval workflow. Session monitoring by itself is vulnerable to security flaws like lateral movement. When used together, and augmented to handle both measuring security and performing security operations, the two solutions can create something very close to a no-impact security solution.
Final Thoughts on Mitigating the Observer Effect in Cybersecurity
The observer effect presents an ongoing concern for cybersecurity practitioners. Many solutions can have a high impact on the runtime within an environment and create undesirable delays, single points of failure, and changes that negatively impact users, operations, and productivity.
Measuring and implementing security will always have some impact, but the goal is to make it as imperceptible as possible, especially to end users. While zero impact is truly unattainable, little to no impact after the initial setup is definitely achievable.
When you evaluate security solutions, whether from a single vendor or multiple vendors, ask how the solutions can operate in parallel or be used in tandem to create a no-impact environment. After all, if they all run serially or have a high impact, users will not only reject them, but your ability to obtain accurate cybersecurity measurements will also suffer.
Additional Reading
Boost Productivity & Lower Risk with BeyondTrust Endpoint Privilege Management
Combining Privileged Access Management & Active Directory Audit for a Stronger Cyber Defense

Morey J. Haber, Chief Security Officer, BeyondTrust
Morey J. Haber is the Chief Security Officer at BeyondTrust. He has more than 25 years of IT industry experience and has authored three books: Privileged Attack Vectors, Asset Attack Vectors, and Identity Attack Vectors. He is a founding member of the industry group Transparency in Cyber, and in 2020 was elected to the Identity Defined Security Alliance (IDSA) Executive Advisory Board. Morey currently oversees BeyondTrust security and governance for corporate and cloud-based solutions and regularly consults for global periodicals and media. He originally joined BeyondTrust in 2012 as part of the eEye Digital Security acquisition, where he had served as a Product Owner and Solutions Engineer since 2004. Prior to eEye, he was Beta Development Manager for Computer Associates, Inc. He began his career as a Reliability and Maintainability Engineer for a government contractor building flight and training simulators. He earned a Bachelor of Science degree in Electrical Engineering from the State University of New York at Stony Brook.