Using AI to Predict Cyber Attacks
The next person to join your cybersecurity team may not be a person at all. It may be a robot. That is what Darktrace, a cybersecurity artificial intelligence (AI) company, has made possible with its PREVENT technology.
PREVENT is a bot that can think like a cybercriminal and predict the areas of your infrastructure that are most likely to get attacked. PREVENT can also simulate attacks, perform penetration tests, and determine which threats pose the biggest risk for your organization—all before an attack happens.
How Artificial Intelligence and Machine Learning Can Help Predict Cyber Attacks
AI technology can connect and correlate seemingly unrelated pieces of data at a speed and scale no human analyst can match.
To illustrate, let’s say you’re studying data for information about a vulnerability. The dataset is huge, and it contains volumes of threat information. Although you’re combing through the data to learn about a vulnerability, elsewhere in the dataset lie insights about a threat actor’s preferred attack methods. Given the mountain of information you have to read and digest, you may not even notice the information about the attack methodology.
On the other hand, if a machine examines 100,000 pages of data, which it can do very quickly, it can discover that a threat actor favors macOS vulnerabilities, for instance. The machine learning system can also take two previously unconnected datasets, correlate them, and forecast a threat. For example, it can pinpoint the fact that a threat actor prefers to attack macOS systems and then predict how this can impact your organization based on the operating systems and applications your company runs.
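The correlation step described above can be sketched in a few lines. This is a toy illustration with made-up data: the actor name, hosts, and operating systems are all hypothetical, and real platforms correlate far richer signals than a single OS field.

```python
# Hypothetical sketch: join threat-intel observations with an asset
# inventory to predict which local systems an actor is most likely to target.
from collections import Counter

# Dataset 1: observed attacks attributed to a threat actor (illustrative data)
observed_attacks = [
    {"actor": "APT-X", "target_os": "macOS"},
    {"actor": "APT-X", "target_os": "macOS"},
    {"actor": "APT-X", "target_os": "Windows"},
]

# Dataset 2: your organization's asset inventory (illustrative data)
assets = [
    {"host": "design-01", "os": "macOS"},
    {"host": "hr-01", "os": "Windows"},
    {"host": "build-02", "os": "macOS"},
]

def predict_targets(attacks, inventory, actor):
    """Rank local assets by how often the actor has hit that OS elsewhere."""
    prefs = Counter(a["target_os"] for a in attacks if a["actor"] == actor)
    total = sum(prefs.values())
    return sorted(
        ((host["host"], prefs[host["os"]] / total) for host in inventory),
        key=lambda pair: pair[1],
        reverse=True,
    )

# macOS hosts rank above the Windows host because APT-X favors macOS
print(predict_targets(observed_attacks, assets, "APT-X"))
```

The interesting part is not the arithmetic but the join: neither dataset predicts anything on its own, yet combining them produces an actionable ranking.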
Is AI a Benefit or Threat to Cybersecurity?
AI-powered cybersecurity solutions come with benefits in terms of speed and accuracy, but AI can also be used for nefarious purposes.
AI and Hackers
As is the case with many technologies, AI can be a double-edged sword. Hackers can also profit from AI’s evolution because it enables them to launch sophisticated cyberattacks. AI can help them more easily find and exploit gaps in a computer system or network.
The Privacy Issue
AI data is increasingly personalized, and end-user privacy can be put at risk. AI privacy challenges include:
- Data persistence: Data that exists longer than the systems that created it, especially because of low data storage costs
- Data spillovers: Data collected on subjects who are not part of a data collection project
- Data repurposing: Using data beyond what it was originally intended for
Advantages of AI in Cybersecurity
AI and machine learning make it easier to track down hackers using automated threat detection. They empower security teams to respond more efficiently than with traditional software-driven or manual procedures. Here are some key advantages:
Better IT Asset Management
AI can be used for IT asset inventory, which is a precise and thorough list of all devices, users, and apps in your environment. AI-based solutions can then estimate how and where your organization will most likely be compromised based on these assets and their degree of threat exposure. With this kind of tool, you can direct resources to areas of your network with the greatest risks.
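A minimal version of that exposure estimate might look like the following. The asset names, fields, and weights are entirely hypothetical; a production tool would learn these weights from threat data rather than hard-code them.

```python
# Hypothetical sketch: score inventoried assets by threat exposure so
# defenders can direct resources to the riskiest parts of the network.
inventory = [
    {"name": "web-gw", "internet_facing": True, "unpatched_cves": 4, "holds_pii": False},
    {"name": "hr-db", "internet_facing": False, "unpatched_cves": 1, "holds_pii": True},
    {"name": "dev-vm", "internet_facing": False, "unpatched_cves": 0, "holds_pii": False},
]

def exposure_score(asset):
    """Toy weighting: exposure grows with internet reach, patch debt, and data value."""
    score = 0
    score += 5 if asset["internet_facing"] else 0   # reachable from outside
    score += 2 * asset["unpatched_cves"]            # each known gap adds risk
    score += 3 if asset["holds_pii"] else 0         # valuable data attracts attackers
    return score

riskiest = max(inventory, key=exposure_score)
print(riskiest["name"], exposure_score(riskiest))  # the internet-facing gateway wins
```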
Leveraging Bot-Based Defenses
Today, a significant share of internet traffic comes from harmful bots. Bots can be a serious threat, causing everything from account takeovers using stolen passwords to the creation of fake accounts and data fraud.
Automated threats can’t be countered solely using manual methods. With AI and machine learning, it’s possible to differentiate between good bots (such as search engine crawlers) and harmful ones—as well as between humans and bots pretending to be human.
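To make the distinction concrete, here is a deliberately simplified classifier. The crawler names, rate threshold, and decision rules are illustrative assumptions; real bot-management systems combine machine learning over many more signals (TLS fingerprints, mouse movement, IP reputation) than these three.

```python
# Hypothetical sketch: separate likely-good bots, likely-bad bots, and humans
# using a few simple request features.
def classify_client(user_agent, requests_per_min, honors_robots_txt):
    """Return a coarse label for a client based on toy heuristics."""
    ua = user_agent.lower()
    known_crawlers = ("googlebot", "bingbot")
    # Good bots identify themselves and respect crawling rules.
    if any(name in ua for name in known_crawlers) and honors_robots_txt:
        return "good bot"
    # Request rates far beyond human browsing speed suggest automation.
    if requests_per_min > 120:
        return "suspected bad bot"
    return "likely human"

print(classify_client("Googlebot/2.1", 60, True))
print(classify_client("Mozilla/5.0", 900, False))
print(classify_client("Mozilla/5.0", 12, False))
```

Note the last case: a bad bot that throttles itself and fakes a browser user agent passes as human here, which is exactly why the article argues heuristics alone are not enough.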
Identifying New Threats
AI can be used to identify a wide range of online threats and criminal activity. The staggering number of new malware variants released each week is simply too much for traditional software systems, but an AI-powered cybersecurity platform can ingest and process enormous volumes of threat data in seconds.
For example, AI-powered tools are trained to detect malware, recognize patterns, and identify malware or ransomware attacks before they impact a system. This can be done, for instance, through natural language processing (NLP), a branch of AI that collects data by reading research on cyber threats, news stories, and articles.
As a result, AI can produce critical data on novel threats, cyberattack methodologies, and the kinds of defense tactics that are most likely to defeat threats. Armed with these insights, security teams can better prioritize important decisions based on what criminals will likely use to attack systems and which methods will most likely succeed.
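A crude approximation of the NLP step described above is shown below: scanning free-text threat reports for known technique keywords and ranking what is trending. The report texts and keyword list are invented for illustration; real NLP pipelines use trained language models rather than a fixed vocabulary.

```python
# Hypothetical sketch: pull attack techniques out of free-text threat
# reports, approximating what an AI pipeline does at far larger scale.
import re
from collections import Counter

reports = [
    "New ransomware strain spreads via phishing emails with macro attachments.",
    "Actor exploited an unpatched VPN appliance, then deployed ransomware.",
    "Phishing campaign targets finance staff with credential-harvesting pages.",
]

TECHNIQUES = ["ransomware", "phishing", "credential", "macro", "vpn"]

def extract_techniques(texts):
    """Count technique mentions across reports to surface trending methods."""
    counts = Counter()
    for text in texts:
        words = set(re.findall(r"[a-z]+", text.lower()))
        counts.update(t for t in TECHNIQUES if t in words)
    return counts.most_common()

print(extract_techniques(reports))  # ransomware and phishing surface first
```

Ranked output like this is what lets a security team prioritize: the techniques mentioned most often across fresh reporting are the ones most likely to be used against them next.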
Better Endpoint Security
Virtual private networks (VPNs) and antivirus programs can safeguard users from malware and ransomware attacks launched remotely, but these tools often use existing signatures to detect threats. This forces IT teams to ensure their defense mechanisms have the latest and greatest threat signature data—or leave their organization exposed.
If antivirus software is not updated, for example, or the software manufacturer is unaware that virus definitions are out of date, this can result in a breach.
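The weakness of signature-based detection is easy to demonstrate. In this sketch the "malware" is just a placeholder byte string, but the mechanism is real: hash-based signatures match only byte-identical samples, so any new variant sails past an out-of-date list.

```python
# Hypothetical sketch: why signature-based detection fails on anything new.
import hashlib

# A stale signature database that knows only one old sample.
known_signatures = {
    hashlib.sha256(b"malware-sample-v1").hexdigest(),
}

def is_flagged(file_bytes):
    """Flag a file only if its hash matches a known signature."""
    return hashlib.sha256(file_bytes).hexdigest() in known_signatures

print(is_flagged(b"malware-sample-v1"))  # True: the old variant is caught
print(is_flagged(b"malware-sample-v2"))  # False: a trivial change evades the list
```

Behavior-based AI detection sidesteps this by scoring what a program does rather than what it hashes to, which is why it can catch variants no signature has been written for.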
How Is AI Used in Cyberattacks?
The first step in defending AI-driven security tools is understanding how fragile they can be. Using an AI system may seem simple, but even subtle modifications can gradually push the system in the wrong direction, allowing threats to slip by.
This is because AI tools are only as good as the datasets used to train them. Tampering with training or input data, for example, can quickly produce errors that weaken your defenses and expose your organization to threats.
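The effect of tampered training data can be shown with a toy classifier. Everything here is invented for illustration: a nearest-centroid model over two-dimensional feature vectors stands in for a real ML-based filter, and the "poison" is a handful of mislabeled samples an attacker slips into the training set.

```python
# Hypothetical sketch: a few poisoned training examples quietly degrade
# an AI defense until it stops flagging real attack traffic.
def centroid(points):
    """Average each feature dimension across a list of feature vectors."""
    return [sum(vals) / len(points) for vals in zip(*points)]

def train(samples):
    """samples: list of (features, label). Returns per-class centroids."""
    by_label = {}
    for feats, label in samples:
        by_label.setdefault(label, []).append(feats)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, feats):
    """Classify by nearest class centroid (squared Euclidean distance)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(feats, model[label]))
    return min(model, key=dist)

clean = [([0.9, 0.8], "malicious"), ([0.8, 0.9], "malicious"),
         ([0.1, 0.1], "benign"), ([0.2, 0.2], "benign")]
# Attacker injects bogus "malicious" samples that drag that class's
# centroid away from what real attack traffic looks like.
poisoned = clean + [([-0.5, -0.5], "malicious")] * 4

sample = [0.9, 0.9]  # clearly attack-like traffic
print(predict(train(clean), sample))     # flagged as malicious
print(predict(train(poisoned), sample))  # the poisoned model now misses it
```

The poisoned samples never touch the benign class directly; they simply shift where the model thinks "malicious" lives, which is why this kind of tampering is hard to spot from outputs alone.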
Reverse-Engineering Data Sets
Hackers can reverse-engineer the datasets used to train AI systems. As a result, AI can be used by cybercriminals to pinpoint weak networks, devices, and applications. Malicious actors can then leverage this information to build more effective attack strategies. For example, they can fine-tune their social engineering attacks after using historical data to identify the kinds of employees who are most likely to be manipulated.
Analyzing the Behaviors of Potential Victims
AI can make it easy for hackers to obtain sensitive data because it can discover behavior patterns and personal vulnerabilities. For example, an AI system powered by NLP can be used to study a target’s online activity on social media networks.
A hacker can then use the data produced to figure out which contacts the victim trusts the most or has the closest relationships with. The NLP system can also determine the phrases and punctuation someone uses. Then the attacker can craft an email or message that appears to come from that person, sounding exactly like the person the victim feels they can trust.
This is just one example. Unfortunately, AI will only help hackers get better at launching attacks on social media sites, through email, and even over the phone.
As another example, an attacker can use AI to publish deepfake content on social media to spread misinformation. Because the deepfake seems so real, people may be enticed to click phishing links or download malware and other dangerous content that can compromise their personal security or that of an organization.
Recent Cyber Attacks and Settlements
Recent cyber attacks further underscore the need to use AI to gain an advantage over attackers.
Excellus Health Hacked
In response to possible violations of the Health Insurance Portability and Accountability Act (HIPAA), Excellus Health Plan agreed to pay a $5.1 million fine to the Office for Civil Rights of the U.S. Department of Health and Human Services in January 2021.
In 2015, hackers breached Excellus’ systems, potentially exposing the electronic protected health information (ePHI) of over 9.3 million people. A probe into Excellus’ cybersecurity program revealed possible HIPAA violations, including failing to conduct an enterprise risk assessment and implement appropriate security measures.
WhatsApp Hit with a €225 Million Fine
The Data Protection Commission (DPC) of Ireland fined WhatsApp €225 million in September 2021 for violating the GDPR’s transparency requirements with regard to both users and non-users of WhatsApp services. Max Schrems, a privacy advocate who filed a complaint against WhatsApp in 2018 over potential data sharing between WhatsApp and various Facebook entities, welcomed the decision.
University of California Loses $1.14 Million
The University of California, San Francisco (UCSF) was the target of a ransomware attack in which hackers demanded a $3 million ransom. The malware encrypted several of the university’s servers and both stole and encrypted crucial data. UCSF ultimately paid a $1.14 million ransom, one of the larger cyber attack settlements recently recorded, though it was later discovered that no data had been exposed.
Teaming Up with AI to Prevent Attacks
AI provides both security teams and hackers the tools they need to do their jobs, but if used correctly, AI can give IT teams at least a slight edge. The data needed to predict—and prevent—a wide range of cyber attacks is out there. With AI, security professionals can turn mountains of figures and phrases into cyber protection strategies.