The Hidden Layer: Behavioral Visibility and Insider Risk
We spend millions hardening the perimeter. Meanwhile, the person sitting three desks away already has the keys.
A few years back, I sat in a post-incident review where a company had lost several gigabytes of customer data. Sensitive stuff. The kind that ends careers and triggers regulators. Everyone in that room assumed an external breach: sophisticated threat actor, zero-day exploit, nation-state, the usual script. It took almost two weeks before anyone seriously looked at the access logs of a recently resigned senior analyst who had quietly spent her last month downloading everything she could reach. No malware. No phishing. Just a company laptop, valid credentials, and nobody watching.
That is not an unusual story. It is, frustratingly, a very ordinary one.
We talk about insider risk with a lot of confidence for an industry that consistently fails to detect it until after the damage is done. The gap is not really about technology; we have decent tools at this point. It is about where organisations choose to look, and what they have convinced themselves the threat actually looks like.
The problem with “trust but verify”
Most security architectures are built outward-facing. The mental model is a castle: strong walls, a moat, controlled gates. External threat comes in, you stop it. That model made reasonable sense when most sensitive work happened on-premises and data did not move easily. It makes considerably less sense now.
The person already inside the castle (authenticated, badged, trusted) does not trigger the moat. They walk straight past every control you spent your budget on. And because their activity looks, in isolation, completely normal, the systems built to catch anomalies often have nothing to compare it against. You cannot detect deviation without first knowing what normal looks like. Most organisations have never seriously defined that.
I have reviewed security programmes at organisations with mature threat intelligence functions, solid perimeter controls, and 24/7 SOC coverage, yet no coherent answer to the question of what a typical day of data access looks like for their finance team, or their legal function, or the contractors with broad system permissions they onboarded three years ago and largely forgot about.
The data was always there. The logs existed. What was missing was someone asking the right question of the right dataset at the right time, and a programme structured around doing that consistently.
Not all insiders are villains
Here is where a lot of insider risk programmes go wrong from the start: they are built to catch the spy. The disgruntled employee selling secrets. The fraudster. And yes, those people exist, and the consequences when they act can be severe. But they are a small fraction of the actual insider risk population.
The much larger category is the negligent insider: someone who emails a client file to their personal account because it is easier to work from home that way, who uses an unapproved cloud storage tool because the approved one is slow, who shares login credentials with a colleague covering for them during leave. No malice. Real damage. The distinction matters enormously for how you build your programme, because the controls and interventions for negligent behaviour look very different from those designed to catch deliberate exfiltration.
Then there is the compromised insider: someone whose credentials have been taken over by an external actor through phishing or password reuse. Technically an outsider operating with insider access. Your controls need to be able to distinguish between these categories, and most do not.
What behavioural visibility actually means in practice
When I use the term behavioural visibility, I am not talking about reading employees’ emails or building a surveillance operation. That path leads to legal exposure, destroyed trust, and counterintuitively worse security outcomes, because a culture of surveillance is one of the organisational conditions that actually increases insider risk. People who feel watched and distrusted do not become more loyal.
What I mean is something more specific and more defensible: building a consistent, documented, proportionate picture of what normal activity looks like across roles, systems, and time, so that genuine deviations surface as signal rather than noise. Access volumes, system interaction patterns, off-hours activity, data movement. Not the content of what someone is doing, but the shape of it.
UEBA (User and Entity Behaviour Analytics) tools have become genuinely useful for this over the last few years. Not perfect. They still produce alert volumes that can overwhelm a small security team if they are not tuned carefully. But the underlying capability is sound: establish a baseline, flag statistically significant deviation, route it to a human for contextual judgement. The machine finds the anomaly. The analyst decides what it means.
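To make that loop concrete, here is a minimal sketch of the baseline-and-deviation idea. It is illustrative only, not how any particular UEBA product works: the AccessDay fields, the 30-day history requirement, and the three-standard-deviation threshold are all assumptions chosen for the example.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class AccessDay:
    user: str
    date: str
    records_accessed: int  # e.g. files or rows touched that day (hypothetical field)

def flag_deviation(history: list[AccessDay], today: AccessDay,
                   min_days: int = 30, threshold: float = 3.0) -> bool:
    """Flag today's volume if it sits more than `threshold` standard
    deviations above this user's own trailing baseline."""
    volumes = [d.records_accessed for d in history if d.user == today.user]
    if len(volumes) < min_days:
        return False              # not enough history to call anything "normal"
    baseline, spread = mean(volumes), stdev(volumes)
    if spread == 0:
        return today.records_accessed > baseline
    z = (today.records_accessed - baseline) / spread
    return z > threshold          # surfaces to an analyst; never auto-acted on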
The piece that most programmes underinvest in is that last step. Behavioural signals need context to be interpreted accurately, and that context often sits outside the security function entirely: in HR, in line management, in finance. The analyst who suddenly starts accessing systems outside their normal scope at 11pm on a Tuesday might be a threat. They might also be covering for a sick colleague, or working on a project that just expanded in scope.
You need a pathway to find out which, quickly, without tipping your hand prematurely or treating someone as a suspect based on a statistical anomaly.
Detection without context is noise. The organisations that lead in insider risk management are those that build bridges between their security, HR, and legal functions, treating this as a human problem that requires a technical lens, not the reverse.
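A rough sketch of what that bridge can look like in code, assuming a deliberately simplified alert and HR record; the field names and triage tiers here are hypothetical, not a standard schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Alert:
    user: str
    description: str          # e.g. "off-hours access outside normal scope"

@dataclass
class HRContext:
    resignation_date: Optional[str]  # notice periods are a classic risk window
    recent_role_change: bool         # expanded scope can explain new access
    covering_absence: bool           # a legitimate reason for unusual hours

def triage(alert: Alert, hr: HRContext) -> str:
    """Combine the statistical signal with HR context before anyone is
    treated as a suspect."""
    if hr.resignation_date is not None:
        return "priority-review"       # departing staff plus anomaly: look now
    if hr.recent_role_change or hr.covering_absence:
        return "verify-with-manager"   # a plausible benign explanation exists
    return "analyst-queue"             # anomaly with no known explanation yet
```

The design choice worth noticing is that the HR context changes the routing of the alert, not the verdict; the decision still ends with a person.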
The conversation nobody wants to have
Building a serious insider risk programme requires a conversation that most organisations avoid: the one about how joiners, movers, and leavers processes are genuinely broken at the access rights layer. People accumulate permissions as they move through roles and never shed them. Contractors get provisioned for a project and remain in the directory long after the project ends. Privileged access gets granted for an emergency and never reviewed.
None of that is dramatic. It is just accumulated technical debt in the identity layer, and it is the substrate on which insider risk grows. You cannot run effective behavioural analytics against a system where nobody is confident what access rights should look like in the first place.
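As a sketch of what reconciling that debt can look like, the following compares what a user actually holds against a role baseline and checks for contractor accounts past their end date. The role profiles and field names are assumptions for illustration; real entitlement data is far messier than this.

```python
from datetime import date
from typing import Dict, Optional, Set

# Hypothetical role baselines; real entitlement catalogues are far larger.
ROLE_BASELINE: Dict[str, Set[str]] = {
    "finance-analyst": {"erp-read", "reporting"},
    "contractor-dev": {"repo-read", "ci-run"},
}

def excess_entitlements(role: str, granted: Set[str]) -> Set[str]:
    """Permissions held beyond what the current role calls for, typically
    left over from previous roles or one-off emergency grants."""
    return granted - ROLE_BASELINE.get(role, set())

def is_stale_account(contract_end: Optional[date], today: date) -> bool:
    """True for accounts that outlived the engagement they were created for."""
    return contract_end is not None and contract_end < today
```

Running something like this regularly is less glamorous than behavioural analytics, but it is what makes the baselines meaningful in the first place.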
The organisations doing this well are not necessarily the ones with the biggest security budgets. They are the ones that treat insider risk as a cross-functional problem: security, HR, legal, IT, and senior leadership sharing ownership of something that cannot be solved by any one of them alone. That is harder than buying a tool. It is also the only thing that actually works.