What can Artificial Intelligence do for us? (Or against us)

AI is one of the technologies that will have the greatest social and economic impact in the coming years (an impact we are already experiencing). Some studies (PwC, Accenture) estimate its global impact through 2030 at around 16 trillion USD, led by China and the USA.

As expected, everyone is jumping on this bandwagon, and the world of cybersecurity is no exception. But how realistic is the application of AI to cybersecurity? Let's take a look at the possibilities from both sides of the battlefield: the attacker's and the defender's.

Machine Learning (ML) techniques can be applied to detect vulnerabilities in software using fuzzing tools. Attackers already do this, and developers can do it too, preventing vulnerabilities from ever reaching live environments.
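To make the idea concrete, here is a minimal sketch of coverage-guided mutation fuzzing in Python. The `target` function and its bug are invented for the example, and the `coverage` function is a crude stand-in for real instrumentation; actual ML-assisted fuzzers, which learn which inputs and mutations are promising, are far more sophisticated:

```python
import random

def target(data):
    # Hypothetical program under test: crashes on the sequence b"FUZ".
    if len(data) > 0 and data[0] == ord("F"):
        if len(data) > 1 and data[1] == ord("U"):
            if len(data) > 2 and data[2] == ord("Z"):
                raise RuntimeError("crash: magic sequence reached")

def coverage(data):
    # Crude stand-in for instrumentation: which branches were taken.
    hits = set()
    if len(data) > 0 and data[0] == ord("F"):
        hits.add(1)
        if len(data) > 1 and data[1] == ord("U"):
            hits.add(2)
    return frozenset(hits)

def fuzz(seed, iterations=200000):
    random.seed(0)  # reproducible run
    corpus = [seed]
    seen = {coverage(seed)}
    for _ in range(iterations):
        data = bytearray(random.choice(corpus))
        # Mutate one random byte (or append one if the input is empty).
        if data:
            data[random.randrange(len(data))] = random.randrange(256)
        else:
            data.append(random.randrange(256))
        data = bytes(data)
        try:
            target(data)
        except RuntimeError:
            return data  # crashing input found
        cov = coverage(data)
        if cov not in seen:
            # New behavior observed: keep this input as a mutation seed.
            seen.add(cov)
            corpus.append(data)
    return None

crash = fuzz(b"AAAA")
```

The key idea, shared by tools like AFL, is that inputs exercising new program behavior are retained and mutated further, so the fuzzer gradually works its way deeper into the target.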

One of the most successful recent AI advances is the ability to automatically generate "credible" text that is hard to distinguish from text written by humans. This technique is already being used to generate fake content on news sites and to post comments on social networks.

Using it to generate text tailored to the specific target of a spear-phishing attack will allow spear phishing to be scaled to the level of mass phishing, with a consequent increase in the success rate of this type of attack.

Another common use case for ML is the detection of anomalies in network traffic or, more generally, in logs of any kind. Typically, neural networks are used: a baseline of normal behavior is established, and deviations from that baseline are flagged. Here there is an arms race: defenders improve anomaly detection while attackers study those techniques in order to exploit their weaknesses.
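As a toy illustration of the baseline idea, here is a sketch that uses a simple z-score instead of a neural network; the traffic figures are invented for the example, and real systems model far richer features than a single counter:

```python
from statistics import mean, stdev

def build_baseline(window):
    # Learn "normal" behavior from a historical window of observations
    # (e.g. requests per minute on a network segment).
    return mean(window), stdev(window)

def is_anomalous(value, baseline, threshold=3.0):
    # Flag values more than `threshold` standard deviations
    # away from the baseline mean.
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

normal_traffic = [120, 115, 130, 125, 118, 122, 128, 119, 124, 121]
baseline = build_baseline(normal_traffic)
```

Note that an attacker who knows the baseline can deliberately stay under the threshold, which is precisely the arms race described above.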

In the field of industrial cyber security, it is possible to use AI to build Digital Twins that serve as experimental laboratories to prepare attacks on industrial infrastructures.

From a defender’s point of view, the most interesting use of AI is detection. If it can reduce the time an APT remains undetected in an infrastructure, which can currently be measured in months or even years, that alone would be a great advance. Nowadays, monitoring our infrastructures generates an enormous amount of information that often is not properly analyzed due to insufficient resources. AI can change this situation by automating the analysis.
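As a sketch of what automated log triage can look like, here is a crude frequency-based filter (no actual ML) that surfaces rare log lines for an analyst instead of forcing them to read everything; the log messages are invented for the example:

```python
import re
from collections import Counter

def template(line):
    # Collapse IP addresses and numbers so lines with the same "shape"
    # are counted together.
    line = re.sub(r"\d+\.\d+\.\d+\.\d+", "<IP>", line)
    return re.sub(r"\d+", "<N>", line)

def rare_lines(log_lines, max_fraction=0.05):
    # Surface lines whose template is rare across the whole log.
    counts = Counter(template(line) for line in log_lines)
    total = len(log_lines)
    return [line for line in log_lines
            if counts[template(line)] / total <= max_fraction]

logs = ["login ok user %d from 10.0.0.%d" % (i, i) for i in range(20)]
logs.append("privilege escalation attempt by user 99")
suspicious = rare_lines(logs)
```

Even a filter this simple can shrink the volume an analyst must review; ML-based approaches refine the same principle with learned models of normal log behavior.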

If a high level of AI reliability is achieved, we could even consider (partially) automated response in the event of an incident, at least in its first stages, drastically reducing the damage caused.

However, we cannot overlook the risks: AI systems are complex, and their misuse or misunderstanding can lead to incidents such as the unnecessary shutdown of facilities, as happened in a German hospital after a ransomware attack. As the technology becomes more accessible, we may also see a proliferation of more sophisticated attacks launched by script kiddies using advanced tools.

In any case, we cannot avoid the use of AI in the defense of infrastructures. Attackers will use it, and that requires us, at least, to understand the tools they use.


Comments

  1. AI seems really interesting and looks like something that could be very helpful, but unless there are HUGE improvements in IT security, it's not completely reliable. If anything relies on it, it would be really problematic if it were successfully attacked. But the future will tell.
    Good article.

  2. Miguel A. Juan says

    Thank you for your comment, Ana. Judging by what happened in the past in so many areas, these advances will be applied as they develop, so we had better improve the cybersecurity aspect or we'll be in trouble.