There can’t be a crazier unplanned wrinkle in the history of desktop computing than the way the rise of Windows PCs and client-server applications gradually turned ordinary PC users into reluctant software administrators.

PC users were suddenly put in charge of applications, sometimes with full admin rights. At the time this seemed a good thing: users were assumed to be a force for change, and handing them control would help the client-server and PC movement overthrow the dusty rooms full of green screens and COBOL coders of the pre-Internet era.

Such naive idealism is long gone now but, incredibly, the Windows world has struggled to patch up the mistake. It first asked developers to stop building applications that demanded admin rights to work, and more recently, when this proved unworkable, imposed controls such as User Account Control (UAC) in Vista.

This is better than a free-for-all, but it raises the question of where the lines get drawn. How are application privileges elevated, and when is that deemed to be a good idea? It might often seem easier simply to lock down all privileges, but that brings problems of its own in the form of an energy-sapping barrage of UAC prompts.

So what, then, are the downsides of simply persisting with poor or no application management?

Application privileges pose a hazard on three broad fronts, starting quite simply with the way they are routinely exploited by malware to gain control of a victim’s PC, often using quite basic social engineering attacks.

The second is that employees abuse application privileges, either to reconfigure or install applications and plug-ins that a business would rather they did not or, in extreme cases, to deliberately bypass security for their own ends.

Both of these are caused by a failure to restrict application privileges, but the problem can work the other way when too many privileges are removed and the application controls that do exist are wielded as blunt instruments. Organizations also suffer, often without realizing it, when staff can’t access applications legitimately because restrictions have been set too tightly.

All three create problems that are hard to quantify and very easy to underestimate. Malware is often theoretical until it strikes, and when it does, the key role played by privilege escalation in particular often goes unrecognized.

Likewise, where application access has become a hassle for employees this can be hidden behind a wall of silence. Staff might simply shrug and accept the issue as ‘part of the way IT works’ and so management never has to confront the hidden toll on productivity.

Today, for the first time, organizations have a way of fighting back: application management. Admins can define which applications get run (and which don’t) using the principle of least privilege, sandboxing legacy applications that need admin rights while allowing users to carry out harmless reconfigurations for themselves. Every action becomes part of an audit trail, giving admins insight into application use.
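To make the idea concrete, the core of such a policy engine can be sketched in a few lines. This is a minimal illustration only, assuming a simple allowlist that maps each application to a privilege decision; the names, structure, and decisions here are hypothetical and do not reflect any vendor’s actual product or API.

```python
# Hypothetical sketch: allowlist-based application launch policy with an
# audit trail. Unknown applications default to "blocked" (least privilege);
# legacy apps that demand admin rights are marked for sandboxing instead.
from datetime import datetime, timezone

# Illustrative policy: application name -> how it is allowed to run.
ALLOWLIST = {
    "winword.exe": "standard",     # runs with ordinary user rights
    "legacyapp.exe": "sandboxed",  # needs admin APIs, so isolate it
}

audit_log = []  # every decision is recorded, whatever the outcome


def check_launch(app_name, user):
    """Return the policy decision for launching app_name and record it."""
    decision = ALLOWLIST.get(app_name, "blocked")
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "app": app_name,
        "decision": decision,
    })
    return decision


# Usage: allowed, sandboxed, and unknown applications each get a
# recorded decision rather than silently inheriting admin rights.
print(check_launch("winword.exe", "alice"))
print(check_launch("legacyapp.exe", "alice"))
print(check_launch("unknown.exe", "bob"))
```

The key design point is the default: anything not explicitly listed is blocked rather than allowed, and every request, granted or not, lands in the audit trail admins can later inspect.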

Had developers from the pre-Internet age had any inkling of the security risks they were taking, they would no doubt have built this layer of grown-up application control in from the start, but such is hindsight. Today, no application should be considered secure or productive without it.