In recent years, I’ve been coming across some questions about using psychological profiling as a tool to guide decisions in awareness-raising plans.
Here’s the idea: figure out our users’ psychological profiles using various techniques and algorithms. Then, use this info to deliver awareness content tailored to each person’s needs. Additionally, run specific phishing simulations that exploit the weaknesses of each profile.
We can determine a psychological profile through simple surveys or, in a less transparent way, using chatbots or artificial intelligence.
But here’s the kicker: using these profiles in awareness-raising isn’t backed by enough evidence of effectiveness. Plus, a person’s psychological profile is clinical information. So, if we’re planning to test our users psychologically, store this info, and make decisions based on it, we need to be extremely careful.
Let’s break this down in the context of the GDPR:
Access to Information
The awareness platform can typically be accessed by:
- Our organization’s security or tech manager.
- Analysts or the team in charge of our organization’s awareness plan.
- The partner staff providing the managed awareness service.
If the platform contains info about our users’ psychological profiles, we’re in a dicey situation. Normally, only specialized psychology personnel within the organization should handle this sensitive information, keeping it under wraps: only they should see any info, reports, or data related to users’ psychological profiles.
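As a rough illustration (not part of the original analysis), a platform could enforce this separation with a simple role check before exposing any profile data. The roles and function names below are hypothetical assumptions, a minimal sketch rather than a real product’s API:

```python
from enum import Enum


class Role(Enum):
    SECURITY_MANAGER = "security_manager"
    AWARENESS_ANALYST = "awareness_analyst"
    MANAGED_SERVICE_PARTNER = "managed_service_partner"
    ORGANIZATIONAL_PSYCHOLOGIST = "organizational_psychologist"


# Hypothetical rule: only specialized psychology personnel may read profile data.
PROFILE_READERS = {Role.ORGANIZATIONAL_PSYCHOLOGIST}


def can_view_psych_profile(role: Role) -> bool:
    """Return True only for roles allowed to access psychological profile data."""
    return role in PROFILE_READERS


def get_psych_profile(user_id: str, requester_role: Role) -> dict:
    """Return a user's profile only if the requester is authorized; otherwise refuse."""
    if not can_view_psych_profile(requester_role):
        raise PermissionError("Psychological profiles are restricted to psychology staff.")
    # ... load the profile from storage (omitted in this sketch) ...
    return {"user_id": user_id, "profile": "..."}
```

The point of the sketch is only that the security manager, the awareness analysts, and the managed-service partner would all be denied by default.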
Transparency
Users in our organization need to understand how their psychological profiles will be created.
Here, we might face some challenges. If, for example, we’re using a survey, we can explain it clearly. But if our awareness platform builds profiles using algorithms or AI, it’s really tough for us to understand and convey this process clearly to everyone in our organization.
This matters because that understanding is what enables individuals to exercise their privacy rights, and it is also the basis for the next point.
Consent
If we’re dealing with users’ health info, we need their explicit consent. This means they must understand why we’re determining and storing this info, how we’ll use it, and what it means for their privacy. This consent must be demonstrable and traceable.
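To make “demonstrable and traceable” concrete, here is a minimal sketch of what a consent record might capture. The fields are illustrative assumptions, not a schema mandated by the GDPR:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid


@dataclass(frozen=True)
class ConsentRecord:
    """Illustrative record of when, how, and for what purpose consent was given."""
    user_id: str
    purpose: str          # e.g. "psychological profiling for tailored awareness content"
    notice_version: str   # version of the privacy notice the user actually saw
    granted: bool
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))


# Example: storing a consent decision so it can later be demonstrated if challenged
record = ConsentRecord(
    user_id="u-1024",
    purpose="psychological profiling for tailored awareness content",
    notice_version="2024-01",
    granted=True,
)
```

Keeping the purpose and the exact notice version alongside the timestamp is what makes the consent traceable back to what the user was actually told.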
Discrimination and Bias
If we’re using psychological profiling to group our users and make decisions (like sending a certain phishing simulation or tailoring an awareness message), we need to be wary. This practice is ethically questionable and might lead to unfair or discriminatory decisions based on biases or stereotypes.
Conclusion
If we’re going to use our users’ psychological profiles in our awareness program, we need to be super careful. Clinical data is highly confidential. If we don’t take proper measures to protect our users’ privacy, we risk violating their fundamental rights.
Now, as cybersecurity managers, it’s time to ask ourselves: Is our users’ psychological profile really relevant to achieving our awareness program’s objectives? Is there evidence supporting its use for developing a safe culture? Do we have the necessary tools and legal advice to ensure our program complies with regulations like the GDPR?
Key concepts like Due Diligence and Due Care are crucial when answering these questions. We hope they’ll help you make professional and informed decisions about your awareness-raising program.
Source
This article is based on an analysis by Marcelo Temperini, a lawyer specializing in IT Law, Privacy, and Cybercrime, in his article “The dangers of psychological testing in fraud prevention and GDPR compliance”.