Nine out of ten awareness reports I see start in the exact same place: the phishing simulation click rate. Down from last quarter, up, flat. It gets presented on a slide with a big percentage, a green or red color, and an implicit takeaway: “we’re doing fine” or “we’re not.”
And with that, the security meeting moves on.
The problem is that this metric, on its own, says almost nothing about the real state of your awareness program. According to the Verizon DBIR 2025, roughly 60% of confirmed breaches involve a human action: a click, a social engineering call, sending data to the wrong recipient. If human risk still accounts for more than half of incidents, is the click rate alone enough to tell whether your program is working?
It’s like assessing someone’s health by measuring only their temperature. It can be a useful data point, but if it’s the only thing you look at, you’ll draw dangerously incomplete conclusions.
What does the click rate actually measure?
The click rate measures how many users clicked a link inside a phishing simulation. Nothing more, nothing less.
It doesn’t measure whether users learned anything. It doesn’t measure whether they changed their behavior outside the simulation. It doesn’t measure whether your organization is safer than it was six months ago. It measures one specific action, at one specific moment, against one specific scenario.
On top of that, this metric is extremely sensitive to factors that have nothing to do with your users’ awareness levels:
- Simulation difficulty. A phishing email impersonating the CEO with internal context will get more clicks than a generic “update your password” message. If you sent an easy one and the rate dropped, it’s not because users got better. It’s because the phishing was more obvious.
- False positives. Security tools, sandboxes, and bots interact with links before users do. If you don’t filter them out, your metrics are contaminated with noise that doesn’t represent human behavior (see the sketch after this list).
- Send timing. A Monday at 9 a.m. doesn’t produce the same results as a Friday at 5 p.m. And that doesn’t reflect differences in awareness, just in available attention.
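To make the false-positive point concrete, here’s a minimal sketch of that filtering, assuming a hypothetical event log where each click carries a send time, a click time, and a user agent. The field names and thresholds are illustrative, not any specific platform’s schema:

```python
from datetime import datetime, timedelta

# Hypothetical click events exported from a phishing-simulation platform.
clicks = [
    {"user": "ana",  "sent_at": datetime(2025, 6, 2, 9, 0),
     "clicked_at": datetime(2025, 6, 2, 9, 0, 2),
     "user_agent": "URLScanBot/1.0"},
    {"user": "luis", "sent_at": datetime(2025, 6, 2, 9, 0),
     "clicked_at": datetime(2025, 6, 2, 9, 41),
     "user_agent": "Mozilla/5.0"},
]

SCANNER_MARKERS = ("bot", "scan", "preview")   # assumption: a simple heuristic list
MIN_HUMAN_DELAY = timedelta(seconds=10)        # clicks faster than this are suspect

def looks_human(click):
    """Discard clicks that fire almost instantly or come from known scanners."""
    too_fast = click["clicked_at"] - click["sent_at"] < MIN_HUMAN_DELAY
    is_scanner = any(m in click["user_agent"].lower() for m in SCANNER_MARKERS)
    return not (too_fast or is_scanner)

human_clicks = [c for c in clicks if looks_human(c)]
print(f"{len(human_clicks)} of {len(clicks)} clicks look human")
```

Only the surviving clicks should feed the click rate you report; everything else is tooling, not behavior.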
Some organizations spend weeks preparing the perfect simulation and then put all their attention on that single metric. It’s like training for a marathon and only timing the first kilometer.
Which metrics reflect real behavior change?
If what you’re after is evidence that your awareness program is driving sustained behavior change (which is, ultimately, the real goal), you need to look at a broader set of indicators. None is perfect on its own, but combined they tell a much more complete story.
Phishing report rate. Of all available metrics, this is probably the most underrated. A user who reports a suspicious email is showing active behavior: they didn’t just spot the threat, they did something about it. That’s security culture in action.
Time to first report. It’s not enough for users to report; it matters when they do. If the first report arrives within minutes, the organization has a real window to react. If it arrives three days later, the damage from an actual attack would already be done. This metric measures collective response speed.
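Both metrics fall out of the same campaign log. A minimal sketch, assuming a hypothetical list of report timestamps for a single simulation (the numbers are made up):

```python
from datetime import datetime

# Hypothetical campaign data; field names and values are illustrative.
sent_at = datetime(2025, 6, 2, 9, 0)
delivered = 200                          # simulation emails delivered
report_times = [                         # when each user reported the email
    datetime(2025, 6, 2, 9, 7),
    datetime(2025, 6, 2, 9, 12),
    datetime(2025, 6, 2, 10, 30),
]

report_rate = len(report_times) / delivered
time_to_first_report = min(report_times) - sent_at

print(f"Report rate: {report_rate:.1%}")              # 1.5%
print(f"First report after: {time_to_first_report}")  # 0:07:00
```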
Repeat-offender rate. Are the same users falling for the same persuasion techniques over and over? Or are they improving over time? The repeat-offender rate shows whether awareness actions are having an effect on the users who need it most. A program that doesn’t move this metric has a design problem, not a user problem.
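One way to compute it, sketched below under the assumption that every simulation failure is logged as a (user, technique, campaign) record: a repeat offender is anyone who failed the same persuasion technique in more than one campaign. The log format is hypothetical:

```python
from collections import Counter

# Hypothetical failure log: (user, persuasion_technique, campaign_id).
failures = [
    ("ana", "urgency", "Q1"), ("ana", "urgency", "Q2"),
    ("luis", "authority", "Q1"), ("eva", "reward", "Q2"),
]

# Count the distinct campaigns in which each (user, technique) pair failed.
campaigns_failed = Counter()
for user, technique, campaign in set(failures):
    campaigns_failed[(user, technique)] += 1

repeat_offenders = {user for (user, _), n in campaigns_failed.items() if n > 1}
all_users = {user for user, _, _ in failures}
print(f"Repeat-offender rate: {len(repeat_offenders) / len(all_users):.0%}")
```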
Correlation between awareness and simulation. This metric crosses two dimensions: what users know (assessed through awareness content) and what users do (measured in simulations). If a user completed every module but still falls for simulations, there’s a gap between knowledge and behavior. If another user completed nothing but reports correctly, there’s a survivorship bias at play. Correlation reports let you spot these patterns and act on them.
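A minimal sketch of that cross, assuming two hypothetical per-user figures: training modules completed and simulations failed. The values are made up, and statistics.correlation requires Python 3.10+:

```python
from statistics import correlation  # Python 3.10+

# Hypothetical per-user figures; names and values are illustrative.
users = {
    "ana":  {"modules_completed": 8, "simulations_failed": 3},
    "luis": {"modules_completed": 1, "simulations_failed": 4},
    "eva":  {"modules_completed": 7, "simulations_failed": 0},
}

completed = [u["modules_completed"] for u in users.values()]
failed = [u["simulations_failed"] for u in users.values()]

# Near -1: more training tracks fewer failures. Near 0: knowledge
# isn't translating into behavior.
print(f"Knowledge-behavior correlation: {correlation(completed, failed):.2f}")

# Flag the gap described above: fully trained users who still fail.
for name, u in users.items():
    if u["modules_completed"] >= 7 and u["simulations_failed"] >= 2:
        print(f"{name}: completed the training but still falls for simulations")
```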
Real program coverage. What percentage of users is actually exposed to the awareness program? Not those who received an email, but those who completed the content and participated in simulations. A program with a 95% open rate but 30% completion has an engagement problem the click rate won’t show you.
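Real coverage is an intersection, not a union. A quick sketch with illustrative numbers that mirror the 95%-open, 30%-completion example above:

```python
# Hypothetical sets of user IDs; sizes are illustrative.
all_users      = set(range(100))
opened_email   = set(range(95))   # 95% open rate
completed      = set(range(30))   # finished the awareness content
in_simulations = set(range(60))   # received at least one simulation

# Real coverage: users who both completed content and were simulated.
covered = completed & in_simulations
print(f"Open rate: {len(opened_email) / len(all_users):.0%}")  # 95%
print(f"Real coverage: {len(covered) / len(all_users):.0%}")   # 30%
```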
The mistake of comparing yourself to others
There’s a frequent temptation among CISOs: hunting for an external benchmark to decide whether their click rate is “normal.” If my organization is at 15% and the industry average is 20%, am I doing fine?
Not necessarily. And we’ve analyzed this before: external benchmarking in awareness is, in most cases, misleading. There are too many variables (organization size, industry, program maturity, simulation difficulty, filtering tools, send frequency) for a direct comparison to carry real meaning.
The only benchmarking that counts is the one you run against yourself, period over period. Did the report rate improve? Did the repeat-offender rate drop? Did time to first report shrink? Those internal trends, measured consistently, are what actually speak to your program’s effectiveness.
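Tracking that can be as simple as comparing the same indicators across periods, remembering that for some metrics the good direction is up and for others it’s down. A sketch with made-up quarterly figures:

```python
# Hypothetical quarterly figures for the same organization.
quarters = {
    "Q1": {"report_rate": 0.08, "repeat_rate": 0.22, "mins_to_first_report": 95},
    "Q2": {"report_rate": 0.14, "repeat_rate": 0.17, "mins_to_first_report": 40},
}

for metric, better_when_higher in [("report_rate", True),
                                   ("repeat_rate", False),
                                   ("mins_to_first_report", False)]:
    before, after = quarters["Q1"][metric], quarters["Q2"][metric]
    improved = (after > before) == better_when_higher
    print(f"{metric}: {before} -> {after} ({'improved' if improved else 'worsened'})")
```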
How to build a dashboard that tells the full story
If I had to recommend a minimum dashboard to present to leadership, I’d include these indicators:
- Click rate by scenario type: not a global average, but segmented by persuasion technique (urgency, authority, reward). This shows where the real vulnerabilities are; a sketch follows this list.
- Report rate and time to first report: the metric that demonstrates active behavior and response capability.
- Repeat-offender rate: users who fall more than once for the same scenario type. This pinpoints where to focus interventions.
- Awareness-simulation correlation: to show whether training actions are (or aren’t) impacting observable behavior.
- Program coverage: percentage of users who actually participated, not just those reached.
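As a closing illustration, here’s the segmented click rate from the first bullet, computed over a hypothetical result log; the rest of the dashboard comes from calculations like the ones sketched earlier:

```python
from collections import defaultdict

# Hypothetical simulation results: (user, persuasion_technique, clicked).
results = [
    ("ana", "urgency", True), ("luis", "urgency", False),
    ("eva", "authority", True), ("ana", "reward", False),
    ("luis", "authority", True), ("eva", "reward", False),
]

sent = defaultdict(int)
clicked = defaultdict(int)
for _, technique, did_click in results:
    sent[technique] += 1
    clicked[technique] += did_click

for technique in sent:
    print(f"{technique}: {clicked[technique] / sent[technique]:.0%} click rate")
```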
What I’d leave out: global averages without context, comparisons with other organizations, and any metric you can’t tie to a concrete improvement action.
Platforms like SMARTFENSE let you consolidate these metrics in one place, correlating awareness actions with simulation outcomes and reporting behavior. That turns scattered data into an actionable view of human risk.
Measuring well matters as much as raising awareness well
The click rate isn’t a bad metric. It’s an incomplete one. The problem shows up when it becomes the only lens through which we evaluate an awareness program.
A mature program measures behaviors, not just point-in-time actions. It measures its own trends, not someone else’s averages. And it measures the impact of its interventions, not just exposure to them.
If your next awareness report is going to start with the click rate, at least make sure it doesn’t end there.