Cognitive bias in Threat Hunting tasks

As any analyst knows, Threat Hunting by its very nature relies on generic approaches to anomaly detection. Unlike the reactive posture of rule-based security, proactive analysis delegates a significant share of detection to the analyst. This means that, as with a conventional intelligence analyst, errors of interpretation tend to occur: the sheer variety of cases encountered daily forces the brain to classify events as legitimate or malicious in hundredths of a second.

According to Richards Heuer’s definition in “Psychology of Intelligence Analysis”, an analyst has limits in interpreting information, determined by his or her personality, beliefs and cognitive biases. After identifying an anomaly, the analyst must be able to make a prediction. That is, a security alert is ultimately the interpretation of a detection and its association with a possible threat.

And not only this: as Steven Rieber argues in “Intelligence Analysis and Judgmental Calibration”, the analyst must also be capable of weighing the criticality of an anomaly, a judgment that likewise remains subjective, expressed in the form of a subjective probability.
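Rieber’s notion of judgmental calibration can be made concrete with a Brier score, which measures how closely an analyst’s stated probabilities match what actually happened. The following is a minimal sketch; the two track records are invented for illustration, not taken from any study:

```python
def brier_score(forecasts):
    """Mean squared error between stated probability and outcome (1 = event
    occurred, 0 = it did not). Lower scores mean better calibration."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical track records: (stated probability of malicious activity, outcome).
well_calibrated = [(0.9, 1), (0.8, 1), (0.2, 0), (0.1, 0)]
overconfident   = [(0.99, 1), (0.99, 0), (0.95, 1), (0.90, 0)]

print(brier_score(well_calibrated))  # lower = better calibrated
print(brier_score(overconfident))    # high confidence, often wrong
```

An analyst who always says “99% sure” but is wrong half the time scores far worse than one whose 80% judgments come true roughly 80% of the time.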

As we can see, then, reasoning plays a central role not only in detecting the anomaly, but also in the degree of “maliciousness” attributed to it.

To illustrate this process, Dr. Lucía Halty uses the simile of a chess game. A player does not analyze every potential move during the game, because that is impossible for a human being. Instead, the player focuses on the moves that, a priori, offer the greatest probability of gaining an advantage.

Reasoning is thus divided into two methodologies, the algorithmic and the heuristic: while an algorithmic process considers every possibility of the problem, a heuristic approach focuses only on the most relevant parts.
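The chess analogy can be sketched in code. In this toy model (the move scores and search parameters are entirely made up for illustration), the algorithmic approach enumerates every line of play, while the heuristic approach keeps only the most promising lines at each turn, at the cost of no longer being guaranteed to find the true optimum:

```python
from itertools import product

# Toy model: each "move" has a score 0-9 and a line of play is a sequence
# of moves whose value is the sum of its scores (purely illustrative).
MOVES = list(range(10))   # 10 candidate moves per turn
DEPTH = 4                 # look 4 moves ahead

def algorithmic_best():
    """Exhaustive search: evaluates every possible line of play."""
    lines = list(product(MOVES, repeat=DEPTH))   # 10^4 = 10,000 lines
    return max(sum(line) for line in lines), len(lines)

def heuristic_best(beam_width=3):
    """Beam search: at each turn, keep only the most promising lines.
    In general this can miss the optimum; here the additive scores make
    the greedy choice safe."""
    beams = [((), 0)]
    explored = 0
    for _ in range(DEPTH):
        candidates = [(line + (m,), score + m)
                      for line, score in beams for m in MOVES]
        explored += len(candidates)
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]   # prune everything else
    return beams[0][1], explored

print(algorithmic_best())   # (36, 10000): best value, lines examined
print(heuristic_best())     # (36, 100): same value, 100x fewer evaluations
```

The human analyst, like the beam search, trades completeness for speed, which is precisely what opens the door to the biases discussed below.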

With this in mind, when searching for anomalies in Threat Hunting tasks, the analyst’s brain starts from a few generic clues and applies a set of rules that serve as “clues of maliciousness”. In other words, it operates under “rules” that it uses to weigh up how malicious an anomaly is. Simply because we are human, we apply a heuristic approach, and that approach generates different problems.

In his study “Critical Thinking and Intelligence Analysis”, David Moore develops this concept by examining two types of heuristic analysis: representativeness and availability.

Heuristic analysis of representativeness

Heuristic analysis of representativeness is based on the degree of similarity between an event A and an event B; that is, how much one resembles the other. An example could be the detection of connections to a cloud resource, which we associate with information exfiltration.

This type of analysis can lead to errors due to the base rate fallacy, which occurs when overall (base rate) information is not given its due weight and only specific details are considered.

For example: is it normal in the organization to make connections to the cloud? Is there additional browsing that suggests the activity is not automated? Who makes the connections?
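The base rate fallacy has a precise form in Bayes’ theorem. A minimal sketch of the cloud-connection example, with illustrative numbers rather than measured rates: even a fairly reliable “clue of maliciousness” yields a low posterior probability when actual exfiltration is rare in the organization.

```python
def posterior_malicious(base_rate, true_positive_rate, false_positive_rate):
    """Bayes' theorem: P(malicious | clue fired)."""
    p_clue = (true_positive_rate * base_rate
              + false_positive_rate * (1 - base_rate))
    return true_positive_rate * base_rate / p_clue

# Assumed numbers, for illustration only: 0.1% of cloud connections are
# actual exfiltration; the clue flags 90% of malicious connections but
# also 5% of benign ones.
p = posterior_malicious(base_rate=0.001,
                        true_positive_rate=0.90,
                        false_positive_rate=0.05)
print(f"{p:.1%}")  # 1.8% — far below the intuitive ~90%
```

The intuition “the clue is 90% accurate, so this is probably exfiltration” ignores the denominator: almost all flagged connections come from the much larger benign population.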

Heuristic analysis of availability

In the case of heuristic analysis of availability, an analyst will be able to detect a given case to the extent that his or her experience warns of the possibility of malicious activity. In other words, an analyst’s capacity for heuristic analysis grows with his or her knowledge of certain malicious actions. There are two lines of action for acquiring that experience:

  • The first line results from theoretical knowledge of what is malicious. For example, knowing that a certain malicious actor often uses PowerShell scripts to perform lateral movement.
  • The second line results from knowledge of a real organizational environment. In other words, knowing that, in addition to malicious actors, system administrators can also use PowerShell scripts in legitimate tasks.
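The two lines of experience can be combined in a triage rule. This is only a sketch: the event fields, account names and suspicious substrings below are hypothetical, not a real detection signature.

```python
# Environment knowledge (hypothetical): accounts known to run admin scripts.
KNOWN_ADMINS = {"svc_deploy", "jsmith_adm"}

# Threat knowledge (illustrative): substrings often seen in malicious
# PowerShell usage, such as encoded commands or download cradles.
SUSPICIOUS_PATTERNS = ("-enc", "downloadstring", "invoke-mimikatz")

def triage_powershell(event):
    """Combine both lines of experience: threat knowledge flags suspicious
    command lines; environment knowledge lowers priority for known admins."""
    cmd = event["command_line"].lower()
    if not any(p in cmd for p in SUSPICIOUS_PATTERNS):
        return "benign"
    # Suspicious pattern from an unknown account: escalate immediately;
    # from a known admin account: review, since it may be legitimate work.
    return "review" if event["user"] in KNOWN_ADMINS else "escalate"

print(triage_powershell({"user": "jsmith_adm",
                         "command_line": "powershell -enc SQBFAFgA"}))  # review
print(triage_powershell({"user": "mallory",
                         "command_line": "powershell -enc SQBFAFgA"}))  # escalate
```

Note that the known-admin allowance is itself a heuristic, and exactly the kind of shortcut the biases below can exploit: an attacker operating from a compromised admin account would be deprioritized.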

The mere fact of using a heuristic approach, intrinsic to the human factor, introduces a series of cognitive biases:

  • Confirmation bias, which Brasfield defines as the tendency of an analyst to look for reasons that confirm a hypothesis rather than those that contradict it, ultimately causing him or her to discard information that could refute it. Of course, this kind of bias can be found in any human activity.
  • Overconfidence, whereby an analyst who is completely convinced that a certain attack or malicious activity will occur, or has occurred, will be led by that perception to find evidence for it.

Therefore, within Threat Hunting tasks, the analyst must be aware of the cognitive biases at play and be able to re-evaluate detections objectively, in order to minimize errors in interpreting the results.

References

Analysis – Lucía Halty Barrutieta.

Fundamental Concepts of Intelligence – Antonio M. Díaz Fernández.

Psychology of Intelligence Analysis – Richards J. Heuer Jr.

Intelligence Analysis and Judgmental Calibration – Steven Rieber.

Critical Thinking and Intelligence Analysis – David T. Moore.
