Xudong Zhang, assistant professor of computer science, has explored how to apply AI models to cybersecurity. Typically, a company or organization uses firewalls or intrusion detection systems that rely on predefined patterns – signatures – to flag potential criminal activity. But as attack techniques evolve to include AI, those systems may become less effective.
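To make the contrast concrete, here is a minimal sketch (not drawn from Zhang's work or any particular product) of what signature-based detection amounts to; the signatures and payloads are invented for illustration:

```python
# Sketch of signature-based detection. The signature list and payloads
# below are hypothetical, purely for illustration.
KNOWN_SIGNATURES = [
    b"\x4d\x5a\x90\x00",        # e.g., a byte pattern from known malware
    b"cmd.exe /c powershell",   # e.g., a suspicious command string
]

def matches_known_signature(payload: bytes) -> bool:
    """Flag a payload only if it contains a predefined pattern."""
    return any(sig in payload for sig in KNOWN_SIGNATURES)

# A novel, machine-generated payload with no matching signature slips through:
print(matches_known_signature(b"freshly mutated payload"))  # False
```

The weakness is built in: the defense can only recognize what it has already seen, which is exactly what AI-assisted attackers can exploit by generating variants that match no existing signature.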
For instance, AI could be used to automate aspects of an attack, such as creating phishing emails, developing malware or spam that can evade traditional defenses, or finding and quickly exploiting weaknesses in a system, Zhang says.
But AI can also be deployed to fight these more sophisticated attacks. By training AI models on what normal behavior looks like, cybersecurity experts can monitor network traffic, user behavior and system activity for anomalies. The more the models learn about what an AI-generated attack looks like, and the patterns associated with it, the better security systems will be at responding to threats in real time, Zhang says.
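One common way to realize this idea is anomaly detection: fit a model on historical "normal" data, then flag observations that deviate from it. The sketch below uses scikit-learn's IsolationForest as a stand-in; the feature names and numbers are hypothetical, and a real deployment would draw on far richer telemetry:

```python
# Minimal anomaly-detection sketch: learn "normal" behavior from
# historical data, then flag deviations. Features are hypothetical:
# [packets/sec, mean bytes/packet, failed logins/min].
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic baseline of normal traffic observations
normal = rng.normal(loc=[120, 800, 0.5], scale=[15, 100, 0.5], size=(5000, 3))

# Train the model to learn the shape of normal behavior
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score two new observations: one typical, one resembling an attack
# (bursts of small packets plus a spike in failed logins)
new = np.array([[125, 790, 1.0],
                [900, 50, 40.0]])
# IsolationForest labels inliers 1 and outliers -1
print(model.predict(new))  # -> [ 1 -1]
```

Note that nothing here requires a signature of the attack: the second observation is flagged simply because it looks unlike the traffic the model was trained on, which is what lets this approach generalize to novel, AI-generated threats.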
The main hurdle is having a machine powerful enough to process all the data needed to train such a model, and that kind of computing power is likely not far off. “When you create an AI model, it’s like a baby, and the parents teach the baby,” he says. “If we train this model with a large amount of data, then this model will be very powerful.”