Privilege Management for Windows 23.7 Performance Report


The aim of this document is to provide data on the agreed performance metrics of the Privilege Management for Windows desktop client, compared with the previous release.

The content of this document provides general guidance only. Results in a live environment can differ due to many factors, such as hardware configuration, Windows configuration and background activity, third-party products, and the nature of the Privilege Management policy being used.

Performance Benchmarking

Test Scenario

Tests are performed on dedicated VMs hosted in our data center. Each VM is configured with:

  • Windows 10 21H2
  • 4-core 3.3 GHz CPU
  • 8 GB RAM

Testing was performed using the GA release of 23.7.

Test Name

QuickStart policy with single matching rule

Test Method

This test involves a modified QuickStart policy where a single matching rule is added, which auto-elevates an application based on its name. Auditing is also turned on for the application. The application is a trivial command line app and is executed once per second for 60 minutes. Performance counters and Privilege Management for Windows activity logging are collected and recorded by the test.
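The measurement cadence described above (one launch per second, with summary statistics aggregated at the end) can be sketched in Python. This is an illustrative harness only, not the actual test code: the command, iteration count, and interval are placeholders, and the real test also collects Windows performance counters and Privilege Management activity logs, which this sketch omits.

```python
import statistics
import subprocess
import sys
import time

def run_once(cmd):
    """Launch the target command once and return wall-clock latency in ms."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return (time.perf_counter() - start) * 1000.0

def run_harness(cmd, iterations, interval_s):
    """Execute cmd once per interval and aggregate latency statistics."""
    samples = []
    for _ in range(iterations):
        samples.append(run_once(cmd))
        time.sleep(interval_s)
    return {
        "mean": statistics.mean(samples),
        "min": min(samples),
        "max": max(samples),
    }

if __name__ == "__main__":
    # A trivial stand-in for the dummy EXE used in the real test;
    # the real run uses iterations=3600 and interval_s=1.0 (60 minutes).
    print(run_harness([sys.executable, "-c", "pass"],
                      iterations=3, interval_s=0.1))
```

Shrinking the iteration count and interval, as in the `__main__` block above, makes the sketch quick to run locally while preserving the shape of the real 60-minute test.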

The QuickStart policy is commonly used as a base by our customers and can be applied using our Policy Editor Import Template function. It was chosen because it is our most common use case. The application being elevated is a dummy EXE created specifically for this testing; it terminates quickly and has no UI, making it ideal for the testing scenario.


Listed below are the results from running the tests on 23.7 and our previous release, 23.6. Due to the nature of our product, results are sensitive to OS and general computer activity, so some fluctuation is to be expected.

Rule Matching Latency

Shows the time taken for the rule to match. No significant difference in mean latency was observed.

Series                                    Mean   Min    Max
23.6 Process Matching Rule Latency (ms)   5.77   3.54   19.93
23.7 Process Matching Rule Latency (ms)   5.51   3.52   24.37

Privilege Management for Windows performance visual showing rules latency results.

Processor Time (Defendpoint)

Percentage of processor time used by the Defendpoint service. A small decrease in mean processor time was observed, with a slightly higher maximum.

Series                       Mean   Min    Max
23.6 % Processor Time (DP)   3.06   2.02   4.82
23.7 % Processor Time (DP)   2.84   1.55   5.14

Privilege Management for Windows performance visual showing processor time results.

Private Bytes (Defendpoint)

Shows the total private memory consumed by the Defendpoint process. A small increase in memory consumption was observed.

Series                    Mean            Min             Max
23.6 Private Bytes (DP)   14,083,881.69   11,878,400.00   15,073,280.00
23.7 Private Bytes (DP)   14,957,248.84   12,779,520.00   15,843,330.00

Privilege Management for Windows performance visual showing private bytes usage.

I/O Write Bytes/sec (Defendpoint)

Shows disk I/O used by the Defendpoint service. A small increase was observed for both I/O writes and reads.

Series                          Mean    Min    Max
23.6 I/O Write Bytes/sec (DP)   25.74   0.00   567.85
23.7 I/O Write Bytes/sec (DP)   41.76   0.00   2,283.15

Privilege Management for Windows performance visual showing disk I/O write results.

I/O Read Bytes/sec (Defendpoint)

Shows disk I/O read bytes per second used by the Defendpoint service.

Series                         Mean        Min         Max
23.6 I/O Read Bytes/sec (DP)   28,117.99   26,066.16   31,837.98
23.7 I/O Read Bytes/sec (DP)   28,238.12   24,547.62   34,688.76

Privilege Management for Windows performance visual showing disk I/O read results.

Memory Testing

For each release we run a series of automation tests (covering application control, token modification, DLL control, and event auditing) using a build with memory leak analysis enabled, to ensure there are no memory leaks. We use Visual Leak Detector (VLD) version 2.51 which, when enabled at compile time, replaces various memory allocation and deallocation functions to record memory usage. When a service built with this functionality stops, an output file listing all leaks VLD has detected is saved to disk. Running these builds under automation generates an output file for each test scenario within a suite.

The output files are then examined by a developer, who reviews the results, looking for anything notable. Due to the number of automation tests, only suites that test impacted areas are run, for example, EventAuditJson is run if a release contains ECS auditing changes. If nothing concerning is found, then the build continues to production. For 23.7, our testing did not find anything that would cause us to withhold the release.
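A first pass over the per-scenario output files could be scripted before developer review, for example by flagging any report that is not clean. In the sketch below, the summary strings are assumptions about the VLD report format, not confirmed output; check them against the report files produced by the VLD version in use.

```python
import re
from pathlib import Path

# Assumed VLD summary lines; verify against your VLD version's actual output.
NO_LEAKS = "No memory leaks detected."
LEAK_COUNT = re.compile(r"detected (\d+) memory leak")

def triage(report_text):
    """Return the leak count a report claims, 0 if clean, -1 if unrecognised."""
    if NO_LEAKS in report_text:
        return 0
    match = LEAK_COUNT.search(report_text)
    return int(match.group(1)) if match else -1

def triage_directory(path):
    """Map each report file under path to its leak count for developer review."""
    return {p.name: triage(p.read_text(errors="replace"))
            for p in Path(path).glob("*.txt")}
```

Anything mapped to a non-zero count would then go to a developer for manual inspection, as described above.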

For more information, please see Visual Leak Detector.