Artificial intelligence (AI) is set to have an impact on many of our lives. Indeed, it is reckoned that, in as little as a year from now, AI will be present in the majority of new software. Even though it is in its relative infancy, AI is already showing its effect in more and more areas, from retail to healthcare. Not least of these is cyber security.
Ever since the first computer viruses appeared in the 1980s, there has been a constant arms race between attackers and defenders. As operating systems and security software have become more resilient, so cyber criminals have upped their game to find effective ways of attacking their targets. More importantly, attacks have moved on from the early attempts at disruption and pranks to a concerted effort to steal and extort. Cyber crime is now a serious and very lucrative business and in many cases is every bit as professional as the organisations it’s attacking.
So, where does AI fit into all this? We already know that AI works well where there is a lot of data to process and a need to spot emerging patterns. In essence, though, pattern matching has been at the root of signature-based security systems for years. Where AI is already adding something new is in behavioural analysis: spotting patterns of activity that could indicate criminal behaviour, rather than simply looking for the malware itself. But there are fears that AI is a double-edged sword that could be used to power cyber crime too.
We know that cyber threats are constantly changing. The idea of zero-day attacks that slip past conventional signature-based detection has been around for a long time. The benefit of AI is that it can 'learn' how an attack behaves, allowing it to identify new and previously unseen threats by the way in which they operate rather than by what they look like.
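To make the distinction concrete, here is a minimal sketch in Python of the difference between looking for what a threat is (a signature) and watching what it does (behaviour). Everything in it, from the byte patterns to the activity features and the use of scikit-learn's IsolationForest, is an illustrative assumption rather than a description of any real product.

```python
# A toy contrast between signature-based and behaviour-based detection.
# All byte patterns, feature names and sample values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# --- Signature approach: match known-bad byte patterns ---
KNOWN_BAD_SIGNATURES = [b"\xde\xad\xbe\xef", b"EVIL_PAYLOAD"]

def signature_scan(file_bytes: bytes) -> bool:
    """Flags a file only if it contains a previously catalogued pattern."""
    return any(sig in file_bytes for sig in KNOWN_BAD_SIGNATURES)

# --- Behavioural approach: learn what 'normal' activity looks like ---
# Each row describes one process: [file writes/min, network connections/min,
# registry writes/min]. The values are made up for illustration.
normal_activity = np.array([
    [3, 1, 0], [5, 2, 1], [4, 1, 0], [6, 2, 1], [2, 0, 0],
])

model = IsolationForest(random_state=0).fit(normal_activity)

def behaviour_scan(activity: list) -> bool:
    """Flags a process whose behaviour deviates from the learned baseline,
    even if its code matches no known signature."""
    return model.predict([activity])[0] == -1

# A never-before-seen ransomware-like process: a burst of file writes.
print(signature_scan(b"freshly packed, unknown binary"))  # False: no match
print(behaviour_scan([450, 3, 40]))                       # True: anomalous
```

The signature scanner can never flag the second case, because nothing in its catalogue matches; the behavioural model flags it purely because the activity is far outside the envelope it learned.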
But of course the bad guys are going to be using AI too. It allows them to analyse how defensive systems work and to build malware that fools those defences into treating it as legitimate. Security researchers have already created an AI tool that can adjust malware in this way.
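The underlying trick comes from adversarial machine learning: nudge a malicious sample's observable traits until the detector's score tips over to 'benign', while leaving the payload intact. The sketch below shows the idea against a toy linear detector; the trait names, weights and scores are all invented for illustration and are not a description of the research tool mentioned above.

```python
# Toy illustration of feature-space evasion against a linear malware
# detector. All trait names and weights are invented for illustration.

# Hypothetical detector: a positive score means 'flag as malicious'.
WEIGHTS = {
    "is_packed": 2.0,               # packing often hides malicious code
    "imports_crypto_api": 1.5,      # common in ransomware
    "has_valid_signature": -2.5,    # signed code looks trustworthy
    "long_publisher_history": -1.5, # established publishers look benign
}
BIAS = -0.5

def score(traits: set) -> float:
    """Sum the weights of the traits a binary exhibits."""
    return BIAS + sum(WEIGHTS[t] for t in traits)

malware = {"is_packed", "imports_crypto_api"}
print(score(malware))       # 3.0 -> flagged as malicious

# The attacker probes the detector, learns which traits lower the score,
# and bolts them on (e.g. signing with a stolen certificate) without
# touching the malicious payload itself.
evasive = malware | {"has_valid_signature", "long_publisher_history"}
print(score(evasive))       # -1.0 -> waved through as benign
```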
There is also evidence that phishing emails are being improved by AI to look more legitimate and to encourage people to open and respond to them. Bad grammar and crudely cut-and-pasted logos are largely a thing of the past.
What makes applying AI to cyber security uniquely difficult is that the target doesn't stand still. In medical research, for example, once the AI has learned what to look for, it can quite happily go away and keep looking. In security, things are rather more difficult.
Firstly, cyber criminals don't play by the same rules. They are likely to come up with a new attack vector, or one that has been modified in an unexpected way. You also can't afford to underestimate the sophistication of the people and technology behind the attacks, particularly where nation-state actors are involved. Finally, most tasks to which AI is applied, self-driving cars for example, come with a plethora of data. In the cyber security field there may be relatively little, and it may not hang around long enough to deliver a clear picture.
What AI can do, however, is considerably reduce the attack surface. If AI understands how 'good' software behaves, it can extrapolate rules from that behaviour and keep them updated as things change, making it much harder for attacks to penetrate. In the long term, this approach of verifying the 'good' is likely to prove far more effective than the traditional cyber security method of looking out for the 'bad'.
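As a rough illustration of that 'allow only the known good' idea, here is a minimal Python sketch. The programs, the behaviour profiles and the events being checked are hypothetical; in practice the profiles would be learned and continuously refreshed by the AI rather than written by hand.

```python
# A minimal sketch of 'known-good' enforcement: instead of hunting for bad
# patterns, learn what each program normally does and block anything
# outside that envelope. Profiles and events below are hypothetical.

# Learned baseline: which actions each trusted program normally performs.
KNOWN_GOOD = {
    "winword.exe": {"open_document", "write_document", "print"},
    "chrome.exe":  {"network_connect", "write_cache", "render_page"},
}

def check_event(process: str, action: str) -> str:
    profile = KNOWN_GOOD.get(process)
    if profile is None:
        return "BLOCK: unknown program"          # default deny
    if action not in profile:
        return f"BLOCK: {process} never does '{action}'"
    return "ALLOW"

# A macro-laden document making Word spawn a shell stands out immediately,
# even though no signature for the attack exists yet.
print(check_event("winword.exe", "write_document"))   # ALLOW
print(check_event("winword.exe", "spawn_shell"))      # BLOCK: out of profile
print(check_event("dropper.exe", "network_connect"))  # BLOCK: unknown program
```

The attacker now has to make malicious activity look exactly like the normal behaviour of a known program, a far harder task than simply mutating code until it no longer matches a signature.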