Scaling Security with GPUs/DPUs for AI and Machine Learning
- By Kevin Deierling
- Dec 07, 2020
On March 25, before the full impact of the coronavirus had hit home and the world was still full of hope, Google News carried an eye-wateringly appealing headline: Cybercrime Gangs Promise to Stop Attacks on Healthcare Organizations During the COVID-19 Crisis.
Apparently, some ransomware operators, such as DoppelPaymer, were even promising free decryption services for any healthcare organization they encrypted by mistake. Some hope!
As the article went on to explain, even if the gangs did as promised, the supply chain for healthcare is so complex that an attack on any other organization might end up affecting it. And now, half a year later, the bitter reality is that cybercriminals have – like hyenas stalking the weakest in a herd – taken every advantage of businesses and individuals already crippled by lockdown.
Sending staff to work from home means extending the business network edge to homeworkers who, even if diligent about securing their laptops, are likely to be sharing a Wi-Fi network with other family members. In fact, everyone has been spending more time online, and criminals have rapidly adapted their methods to the new social environment. Anxious and less tech-savvy consumers, desperately seeking the latest Covid-19 news and advice, are especially vulnerable to phishing scams – including e-mails designed to look like they come from official government authorities or financial institutions. There have also been bogus requests for charitable donations.
So why promise to halt all cyber-attack activities? The article suggested that, in this emergency climate, any cybercrime against a healthcare organization would be so offensive that it would provoke the federal government to throw every security capability it had against the gangs. They were just waving a white flag to save their skins.
If anything, we can expect to emerge from this pandemic with businesses – weakened by the security challenges and economic setbacks – even more vulnerable to attack. Meanwhile the criminal gangs will be fortified with a surge in revenue and a massive harvest of compromised addresses and other data to mine for future attacks.
It is hard to defend against so many unknowns and so many types of attack. Something must be done – something radical and not just more of the same. What might artificial intelligence (AI) and machine learning (ML) have to offer, especially when they are designed to be orchestrated with, and act alongside, the network?
The promise of AI and ML
Watch a martial arts demonstration and see that we humans have evolved a brilliant capacity for recognizing and responding at high speed to threats from any direction. With two hands, we can even do quite well against threats from two directions at once. But attack us from three or more directions at once, and our ability to survive falls off dramatically as the number of threats increases.
Cybercrime can attack from any number of directions: phishing e-mails, deepfake personalized messages and campaigns, malware in the network, fake accounts, denial of service attacks, infected websites and more. What’s more, these attacks are rapid-fire: a University of Maryland study noted an average of 2,224 attacks per day on its Internet-connected computers – a near-constant rate of one attack every 39 seconds.
But speeds like that are nothing to an AI system that can react thousands or millions of times per second. No AI can yet match the subtlety and perception of the finest human intuition, but machine learning is steadily narrowing even that gap. A massive historic ML database can mine information about every piece of malware detected to date. Known malware can be detected immediately, and, when presented with a new type – whether a tweaked variant of old malware or an entirely new variety – the AI can check it against the database, examine the code and block the attack in a fraction of a second. Because of the AI’s scanning speed, this can even happen when the malicious code has been concealed amongst a mass of benign or useless code – another common trick.
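To make that two-stage idea concrete, here is a minimal Python sketch (purely illustrative, not any vendor's actual API): an exact hash lookup catches malware already in the historic database, while a previously trained classifier scores anything new or tweaked. The helper names, the byte-histogram feature and the 0.9 threshold are all assumptions chosen for the example.

```python
# Illustrative sketch, not a production scanner: a hash lookup catches
# already-known malware instantly; a previously trained ML model scores
# anything new or tweaked. `known_hashes` and `model` are supplied by the
# caller (e.g. a scikit-learn classifier trained on historic samples).
import hashlib
import numpy as np

def byte_histogram(data: bytes) -> np.ndarray:
    """256-bin byte-frequency histogram: a simple, widely used code feature."""
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return counts / max(len(data), 1)

def scan(sample: bytes, known_hashes: set, model) -> str:
    """Return 'block' or 'allow' for a candidate file or payload."""
    digest = hashlib.sha256(sample).hexdigest()
    if digest in known_hashes:                   # legacy malware: exact match
        return "block"
    features = byte_histogram(sample).reshape(1, -1)
    score = model.predict_proba(features)[0, 1]  # probability the sample is malicious
    return "block" if score > 0.9 else "allow"   # tweaked variant or new family
```

Both steps are cheap enough to run inline on files or traffic as they arrive, which is what makes a response within a fraction of a second plausible.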
Another problem is a thief’s ability to learn the victim’s patterns of behavior in order to pick a weak point or time in the defense. ML, too, can track users’ activities daily and create a model of typical behavior. By analyzing this information, AI can not only warn of potential weaknesses but also note anomalous activity and react accordingly. AI learns to assess the relevance and consequences of a change of behavior, and to develop a proportionate response in real time, without needing to over-react by closing down the whole network.
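As a hedged illustration of that behavioral baseline, the sketch below uses scikit-learn's IsolationForest, one common anomaly-detection choice. The session features (login hour, data transferred, distinct hosts contacted) and the numbers themselves are invented for the example, not drawn from any particular deployment.

```python
# Illustrative sketch of a per-user behavioral baseline using a generic
# anomaly detector. Feature columns and values are placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

# One row per observed session: [login_hour, MB_transferred, distinct_hosts]
baseline_sessions = np.array([
    [9, 120, 4], [10, 95, 3], [14, 150, 5], [11, 110, 4], [16, 80, 2],
])

detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(baseline_sessions)              # learn what "typical" looks like

new_session = np.array([[3, 4200, 40]])      # 3 a.m. login, huge transfer, many hosts
if detector.predict(new_session)[0] == -1:   # -1 means the detector flags an anomaly
    print("anomalous session: alert, throttle or isolate the account")
```

The proportionate-response point maps onto the last line: the flagged session can be rate-limited or quarantined on its own, without taking the whole network offline.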
How to support sophisticated AI
AI has been under development for decades, so why are these capabilities not already widely deployed?
The answer is that ML needs massive volumes of data in order to detect relevant patterns, and even the most advanced Central Processing Units (CPUs) were not up to such tasks. In recent years, however, Graphics Processing Units (GPUs) have been added to the data center to greatly accelerate the learning process. GPUs – originally created to render sophisticated 3-D graphics for video games – turn out to be far better suited than CPUs to the massively parallel computation that ML training demands.
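The speed-up comes from parallelism rather than any exotic programming model. The sketch below shows an ordinary PyTorch training step that runs unchanged on a CPU but is dispatched to a GPU when one is available; the model, the batch and the dimensions are placeholders chosen for illustration.

```python
# Illustrative only: a generic training loop that uses a GPU if present.
# The same matrix math runs on a CPU, just far more slowly at scale.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder classifier: 256 input features -> benign/malicious logits
model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 2)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

features = torch.randn(1024, 256, device=device)      # placeholder feature batch
labels = torch.randint(0, 2, (1024,), device=device)  # placeholder labels

for _ in range(10):                                    # a few training steps
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()
```

On real security telemetry the batches are vastly larger, which is exactly where the GPU's thousands of parallel cores pay off.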
Even more significant has been the very recent addition of Data Processing Units (DPUs) to the network. These lift from the CPUs most of the burden of managing an increasingly complex network, freeing those cycles for application compute. They also greatly increase the efficiency and responsiveness of the data center, enabling the sort of power needed to support sufficiently sophisticated AI and ML. This approach works, and is already being used to provide upgraded cyber defense for the world’s most sophisticated secure systems.
How relevant are these sophisticated solutions to the average medium-to-large enterprise currently threatened by cyberattacks? The news is good. Although these techniques have been developed and proven in the most sophisticated hyperscale data centers, the components needed – GPUs and DPUs – are relatively inexpensive and can even be added to enhance existing networks. They have deliberately been designed as “open systems” in the sense that they are compatible with the most prevalent networking hardware and operating systems – without locking the network into any one supplier.
Conclusion
The combination of AI and ML offers business and public services a powerful new weapon in the war against cybercrime. Although we can be sure that criminals are already looking for ways to circumvent it, this time they are up against an intelligent, learning, and very fast-acting opponent.