
Addressing Bias in Insider Risk Monitoring

Written by Katie Crowell | Sep 29, 2023 8:46:52 PM

TL;DR:

  • Bias in insider risk monitoring can lead to unfair judgments and assumptions about employees’ trustworthiness.
  • Monitoring bias can manifest in unequal monitoring, selective attention, attribution bias, group identity bias, and confirmation bias.
  • Legacy approaches to insider risk management do not effectively address bias.
  • Reveal, a data-driven approach, eliminates bias by focusing on activities with sensitive data rather than individual identities.
  • Reveal uses pseudonymization and scoped investigations to protect user privacy while detecting and mitigating threats.

The steps organizations can take to defend sensitive data against external threats and malicious insiders are similar, even complementary. In both cases, teams are looking for activities that could put data at risk. From a defender’s viewpoint, it doesn’t matter if the threat actor is an employee, partner, or hacker posing as a legitimate user.

However, an additional factor comes into play when monitoring for insider threats, one that can get organizations into trouble if it isn’t addressed correctly: bias.

Bias in Insider Risk Monitoring

Monitoring bias is the unwarranted, selective attention to specific employees or departments regardless of their actual behavior. This can lead to unfair judgments and assumptions about an individual's trustworthiness and more intrusive monitoring than necessary. It can also lead to breaches when certain people are given a free pass for activity that might otherwise raise a red flag. 

What is Monitoring Bias?

Monitoring bias can impact how organizations assess insider risks, leading to inconsistencies and inaccuracies in identifying potential threats. Monitoring bias can manifest itself in several ways:

  1. Unequal Monitoring: Organizations might focus more on monitoring certain employees, departments, or positions while neglecting others. This can create blind spots and leave the organization vulnerable to insider threats from less-monitored areas.
  2. Selective Attention: Monitoring bias may lead to a disproportionate focus on specific activities or behaviors, overlooking other relevant indicators of potential insider risks.
  3. Attribution Bias: Attribution bias occurs when certain employees are consistently viewed as low-risk or high-risk, regardless of their actual behavior. This can lead to false assumptions about an individual's trustworthiness or risk profile.
  4. Group Identity Bias: Employees from certain demographic groups or backgrounds may be perceived as having higher risks based on stereotypes or prejudices, leading to unfair assessments.
  5. Confirmation Bias: Monitoring bias can lead to overemphasis on data that confirms preconceived notions while ignoring or downplaying contradictory information.

Monitoring bias can be manifested in many ways

This type of discrimination can also cause teams to overlook risky activity from other people or groups. According to a paper by the non-profit Intelligence and National Security Alliance, unjustified monitoring of an individual because of bias can lead to:

  • Increased risk from a false sense of trust as threat hunters and SOC teams focus on the wrong issues and suspects.
  • Wasted resources from a disproportionate focus on specific users or activities.
  • Legal liability if bias is found against a protected class of people or if monitoring violates privacy laws such as the European Union’s General Data Protection Regulation (GDPR).
  • Reputational damage from negative publicity about a biased investigation.

Legacy Approaches Don’t Address Bias

Legacy Data Loss Prevention and Insider Risk Management solutions were designed for a time when employees worked inside the corporate firewall and all applications ran locally. They rely on intrusive techniques such as keystroke logging, screen recording, and web monitoring to view and log the actions of individual users. The ability to monitor individuals and ascribe specific activities to them, including website visits, personal email, and “time on task,” encourages bias and, by focusing on productivity, tends to miss the bigger picture of data protection.

Reveal: Eliminate Bias and Improve Data Protection

A Data-Driven Approach Mitigates Bias

Bias requires the ability to attribute individual actions to individual employees. A better approach is to focus on the activities involving sensitive data that could put regulated data, trade secrets, and other intellectual property at risk.

Reveal takes a different approach. Reveal watches each user’s activity with sensitive data – not their identity – to build a baseline of normal behavior and then flag activity that deviates from that pattern. This data-driven approach relies on analytics to identify potential insider risk indicators rather than relying solely on subjective assessments.
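
For illustration, the sketch below shows what a behavior-based baseline of this kind could look like in principle. The event counts, threshold, and scoring logic are assumptions made for this example, not Reveal’s actual detection model.

    # Illustrative sketch only: a per-profile baseline of sensitive-data
    # activity, scored by deviation from that profile's own history rather
    # than by who the user is. Fields and thresholds are assumptions.
    from statistics import mean, stdev

    def build_baseline(daily_event_counts):
        """Summarize a profile's historical sensitive-data event counts."""
        return {"mean": mean(daily_event_counts), "stdev": stdev(daily_event_counts)}

    def anomaly_score(baseline, todays_count):
        """Z-score-style deviation from the profile's own baseline."""
        spread = baseline["stdev"] or 1.0  # guard against zero variance
        return (todays_count - baseline["mean"]) / spread

    # The same rule applies to every profile, so a flag depends only on
    # behavior with sensitive data, never on identity or group membership.
    history = [12, 9, 14, 11, 10, 13, 12]   # prior days' sensitive-data events
    baseline = build_baseline(history)
    if anomaly_score(baseline, todays_count=85) > 3.0:
        print("Flag for review: activity deviates sharply from this profile's baseline")

Because every profile is scored against its own history with one shared rule, there is no hand-picked watch list for bias to creep into.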

Pseudonymization Masks Individual Identities 

Reveal uses pseudonymization to detect and mitigate threats without compromising users’ privacy or introducing bias into the monitoring of users’ activities. Reveal employs data security techniques that let you control whether operators see users’ actual or pseudonymized profiles in the Reveal UI. With pseudonymized user profiles, identifying information is either replaced with pseudonyms or hidden, giving operators the information required to uncover risks while maintaining strict user confidentiality.
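
As a rough illustration, pseudonymization can be as simple as replacing an identifier with a keyed, stable alias. The function and key handling below are assumptions for the example, not a description of Reveal’s implementation.

    # Illustrative sketch only: keyed pseudonymization that maps a user
    # identifier to a stable alias, so analysts see behavior without names.
    import hashlib
    import hmac

    SECRET_KEY = b"store-and-rotate-this-in-a-vault"  # placeholder secret

    def pseudonymize(user_id: str) -> str:
        """Map a user identifier to a stable, non-reversible alias."""
        digest = hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()
        return "user-" + digest[:12]

    event = {"user": "jane.doe@example.com", "action": "uploaded file to personal cloud"}
    event["user"] = pseudonymize(event["user"])  # analysts see only the alias
    print(event)

The same input always maps to the same alias, so patterns stay visible across events while the person behind them remains hidden until disclosure is justified.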

Scoped Investigations Further Protect User Privacy

When suspicious activity justifies a deeper investigation, authorized users can request a “scoped investigation” of a user. Scoped Investigations empower organizations to meet employee privacy expectations and comply with information security regulations by limiting, by default, the information accessible to security analysts for forensic analysis. They grant time-bound, revocable, and audited data access so that comprehensive investigations are available only to authorized personnel.
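
The sketch below illustrates the general shape of such a grant: time-bound, revocable, and audited. The class, field names, and workflow are hypothetical examples, not Reveal’s API.

    # Illustrative sketch only: a time-bound, revocable, audited access grant
    # of the kind a scoped investigation implies. Names are assumptions.
    from dataclasses import dataclass, field
    from datetime import datetime, timedelta, timezone

    @dataclass
    class InvestigationGrant:
        analyst: str
        subject_profile: str          # pseudonymized profile under investigation
        expires_at: datetime
        revoked: bool = False
        audit_log: list = field(default_factory=list)

        def can_access(self) -> bool:
            """Allow access only while the grant is active, and audit every check."""
            allowed = not self.revoked and datetime.now(timezone.utc) < self.expires_at
            self.audit_log.append((datetime.now(timezone.utc), self.analyst, allowed))
            return allowed

    grant = InvestigationGrant(
        analyst="soc.analyst",
        subject_profile="user-3f9a1c2b7e04",
        expires_at=datetime.now(timezone.utc) + timedelta(hours=72),
    )
    assert grant.can_access()      # within the approved window
    grant.revoked = True           # approval withdrawn
    assert not grant.can_access()  # access denied, and both checks are audited

Access expires on its own, can be pulled back at any time, and leaves an audit trail, which keeps deeper investigations the exception rather than the default.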

Reveal Protects Sensitive Data and User Privacy

By separating individual identities from specific actions, Reveal eliminates bias. This allows it to provide clear and objective criteria for monitoring and risk assessment based on job roles and access privileges rather than personal judgments or assumptions. By addressing monitoring bias, whether intentional or unintentional, organizations can enhance their ability to identify and manage insider risks effectively while maintaining a fair and trusted work environment.

Let the team at Next show you what an unbiased investigation can yield. Get a demo and learn how your security team can focus on threats.