A couple of years ago, when someone in a company talked about deepfakes, they were almost always thinking of one very specific case: the doctored video of a CEO, the cloned voice on a call to the CFO. It was a leadership concern, tied to people with public visibility and signing authority. The rest of the organization could go through their day without the topic taking up too much space.
That map is out of date. Generative AI went from being an extraordinary event to sitting quietly inside flows where any of us makes everyday decisions: a hire, a purchase, a recommendation coming from a profile that looks trustworthy. And our reflex, the old one, is still the same. If the image looks flawless, we assume it’s real. That is exactly what the new generation of fraud has learned to exploit.
What do we do in the face of this shift? The usual response is the most natural move in corporate training: add another module. More articles, more bullet points, more explanations of how generative AI works. People end up knowing more about the topic, but when the moment to decide arrives, they fall back on trusting what they see.
What we need to train is a reflex
There’s a point here that’s worth looking at calmly. We tend to make quick, almost automatic decisions when what’s in front of us looks familiar. Faced with a flawless image, a recognizable voice or a polished profile, we go on autopilot. That’s why, when the adversary controls the quality of the image, the battle to spot the fake with the naked eye is lost from the start.
What we need to install is a different habit: the ability to never decide based on an image alone. It sounds simple put that way, but it’s a deep change in behavior. It means placing a moment of doubt where there used to be “blind” trust. And a reflex of that kind doesn’t change simply by reading. It changes by living through the mistake once, in an environment where getting it wrong costs nothing.
It all starts with the diagnosis
Before suggesting new habits to someone, it helps to know what they actually do today. When they see a profile photo on a professional network, do they zoom in? Do they run a reverse image search? Faced with a discounted offer, do they read the URL before clicking? When their “director” calls unexpectedly asking for an urgent transfer, do they confirm through another channel before moving?
Without knowing where this person stands today, any program ends up assuming a gap that may not be the real one. The initial diagnosis serves that purpose: it measures the habit so the content that follows can be calibrated to what they actually do or don’t do. It works more as a snapshot of the starting point than as an evaluative quiz.
Then, expose
Once we know how the person decides today, the next step isn’t to tell them more things. It’s to put them in front of a decision, in a safe environment, and let them get it wrong. A mistake experienced firsthand installs doubt where there used to be automatic trust.
From there, yes, the information about how AI works, which verification steps make sense and which signals to look at can come in. But that information lands when the person has already understood, in their own body, that their eye isn’t enough. Reading it after getting it wrong is very different from reading it cold.
The loop closes with a measurement of the reflex
The end of the journey can’t be a “module completed” certificate. It has to be a measurement of the new reflex: Do they now spot the warning signs when they appear? Do they know what to do when they see them? Do they report? Compared with the initial diagnosis, that measurement gives the program lead something more interesting than a completion percentage. It gives them a curve of change, which is what ends up mattering.
What does this cycle feel like from the employee’s experience?
There’s a very simple challenge that illustrates the effect well. Two images appear side by side and the prompt is to choose which one was generated by AI. Most people get it wrong. That single mistake does, almost immediately, what thirty minutes of theory don’t: it installs doubt where there used to be full trust.
![Real or AI? Two cat images side by side](/media/images/marketing/blog/shared/1/apoyo-real-o-ia-gato.en-1777491084.png)
In another setting, the person enters a conversation that unfolds step by step. Each reply opens or closes doors: share a piece of information and the conversation moves one way; hold it back and it moves elsewhere. When it ends, they see the full picture of what happened, the decisions that opened risk and the ones that closed it. What they take away looks more like a feeling than a rule: a sense of when a conversation is starting to “pull” on them, and from where.
There is also a more playful version of the same principle, taken to a short videogame of just a few minutes, where every decision has a visible consequence and immediate feedback. In the time it would take to read an infographic, the person has gone through several controlled mistakes and started to register, almost without noticing, the new reflex.
![The Real or AI video game](/media/images/marketing/blog/shared/1/apoyo-videojuego-real-o-ia.en-1777490553.png)
Awareness that scales with the AI curve
If anything has become clear in this last stretch, it’s that generative AI isn’t going to sit still and wait for the security team to catch up. Any visible indicator we learn today can become outdated quickly. That’s why training that lives as a one-off event once a year stops making sense. Awareness works as living infrastructure, sustained over time and connected to the context where risk actually shows up.
What doesn’t mutate as fast is the reflex. The habit of pausing, asking and verifying before deciding keeps working even as the image improves, the voice gets sharper and the contact’s profile looks flawless. And that’s what the pedagogical method aims to train, deepfake by deepfake, decision by decision.
The signs are there; you just have to know where to look and what questions to ask. If you’d like to see how this approach is being applied to the family of threats opened up by generative AI, we invite you to explore the platform and have a conversation with us about how it translates into your program.