AI-Powered Cyberattacks are Expanding: Watch out for These Threats

There’s good news and bad news in the world of cyberattacks. The good news? Major cyberattacks are falling, according to the Nash Squared Digital Leadership Report. Between 2019 and 2023, the share of companies reporting a major attack dropped from 32% to 23%. Cybersecurity is improving, and it’s paying dividends for the companies that have invested in it. But what about AI-powered cyberattacks?

But let’s not forget the bad news: our definition of a “major” attack is changing. 

Cyberattacks have become so common that we think of them as part of daily life. A DDoS (Distributed Denial of Service) attack was once a catastrophic emergency that might pull in executive leadership. These days, a cybersecurity expert might regard a DDoS attack as an average Thursday morning.

But that’s not the only key element that’s changing. We also have to watch for new threats rising thanks to the proliferation of Generative AI technology. AI-Powered Attacks are on the rise!

New Threats from Generative AI: A Second Frontier in Cybersecurity and AI-Powered Cyberattacks

Generative AI is becoming increasingly sophisticated. It’s so hot that every business is trying to find a way to leverage it. Chatbots are becoming more helpful. Content generation is becoming faster, with writers leveraging generative AI as research assistants and editors. Many of us can tap a button on our phone and let AI write emails for us. All we have to do is review the content and send.

Those are all positive developments. But cybercriminals are watching Generative AI, too, and it’s giving them ideas. According to The SSL Store, 85% of security professionals attribute the rise in cyberattacks to Generative AI.

AI-Powered Phishing attacks

What’s happening here? The chief threat of AI-powered cyberattacks is scale. One clever individual might not be a match for a large organization and its cybersecurity team. But one clever individual backed by Generative AI can mimic real humans at scale, which is exactly what phishing attacks need to succeed. With Generative AI, a single bad actor can launch wave after wave of phishing attempts until one works, and an organization can quickly find itself overwhelmed.

AI poisoning

AI might make phishing attacks more sophisticated, but at least the principles are recognizable. There’s one area where an organization’s reliance on AI can lead to an entirely new paradigm of cyberattacks, however: AI poisoning. Think of it as “poisoning the well.”

Through 2022, 30% of all AI cyberattacks leveraged this style. To understand AI poisoning (also known as “training data poisoning”), it helps to understand Generative AI in the first place. A service like ChatGPT can use its algorithms to identify patterns from content and spit out something wholly original. But note that it has to have content to draw from in the first place—like water in the well. AI poisoning “poisons” this well to change the content the AI spits out. 

The resulting output could be problematic in various ways. It could be downright false, providing you with incorrect information. It could be malicious, injecting offensive language that makes the AI unusable. Or it could be subtle, introducing biases that are harder to detect but just as damaging. There have already been cases where lawyers used ChatGPT to cite cases that turned out not to exist, undermining their arguments in court. Understanding where and how to limit the use of these technologies will only become more important going forward.
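To make the “poisoned well” concrete, here is a toy sketch with entirely synthetic data and a deliberately simple nearest-centroid “model” (nothing like a real Generative AI system): flipping the labels on a handful of training points is enough to reverse the model’s answers.

```python
# Toy illustration of training-data poisoning. All data is synthetic and
# the "model" is a deliberately simple nearest-centroid classifier.

def centroid(points):
    """Average of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(data):
    """Build one centroid per label from (point, label) pairs."""
    by_label = {}
    for point, label in data:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    """Classify a point by its closest centroid."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda label: dist2(model[label], point))

# Clean training set: "spam" clusters near (0, 0), "ham" near (10, 10).
clean = [((0, 0), "spam"), ((1, 1), "spam"), ((0, 1), "spam"),
         ((10, 10), "ham"), ((9, 10), "ham"), ((10, 9), "ham")]

# Poisoned copy: an attacker quietly flips the labels in the training data.
poisoned = [(p, "ham" if lbl == "spam" else "spam") for p, lbl in clean]

query = (1, 0)  # sits squarely in the spam cluster
print(predict(train(clean), query))     # -> spam
print(predict(train(poisoned), query))  # -> ham
```

The attacker never touches the model code, only the well it drinks from, which is why this class of attack is so hard to spot after the fact.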

Malware attacks

Generative AI is especially adept at coding: both analyzing and creating snippets of code. Any nefarious actor with Generative AI can use the software to build malware. So far, this hasn’t been a major threat because Generative AI is not a particularly original or creative coder. 

But AI is capable of rapid improvement. It can learn to code better quickly, giving any nefarious actor the ability to build malware with Generative AI. And if the AI is sophisticated enough, that malware can be harder to block than the malware humans were building before.

AI gives ordinary people the ability to generate amazing things without the skills these creations might have previously required. A person with minimal skills can log into a Generative AI account and ask for writing and ideas far exceeding what they might otherwise generate. But remember that the reverse is also true: Generative AI could give people without coding skills the ability to build sophisticated malware at scale.

Quantum computing and the AI risk

Generative AI gives nefarious actors the ability to produce attacks at scale. But there’s a hard technical limit to all of this generative AI: computing power. Quantum computing can give companies a leg up, allowing them the processing power to boost cybersecurity practices. But it can just as easily work the other way. It can offer cybercriminals a method to scale their processing power.

Let’s take a specific example: encryption. Encryption works largely because cybercriminals don’t have the computing power to break it. But what if cybercriminals gain the processing power to attack traditionally encrypted data? Secure transactions, like e-commerce purchases and online banking, would suddenly be exposed.
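To see why raw computing power is the crux, here is a toy sketch: a deliberately weak XOR “cipher” with a 16-bit key (nothing like real encryption such as AES) falls to a simple brute-force loop, while the same loop against a 128-bit key would be hopeless on classical hardware.

```python
# Toy demonstration that key length, i.e. attacker compute, is what keeps
# encryption safe. XOR with a short key is NOT real encryption; it just
# makes the brute-force arithmetic visible.

def xor_encrypt(plaintext: bytes, key: int, key_bits: int) -> bytes:
    """XOR the plaintext with a repeating key (XOR is its own inverse)."""
    key_bytes = key.to_bytes((key_bits + 7) // 8, "big")
    return bytes(b ^ key_bytes[i % len(key_bytes)] for i, b in enumerate(plaintext))

def brute_force(ciphertext: bytes, known_prefix: bytes, key_bits: int):
    """Try every possible key; feasible only while 2**key_bits stays small."""
    for key in range(2 ** key_bits):
        candidate = xor_encrypt(ciphertext, key, key_bits)
        if candidate.startswith(known_prefix):
            return key, candidate
    return None

KEY_BITS = 16                      # 65,536 keys: trivial for any laptop
secret_key = 0xBEEF
ct = xor_encrypt(b"transfer $500", secret_key, KEY_BITS)
key, pt = brute_force(ct, b"transfer", KEY_BITS)
print(key == secret_key, pt)       # the 16-bit key falls almost instantly

# At 128 bits the same loop would need 2**128 tries, which is utterly
# infeasible classically. That margin is exactly what a leap in attacker
# computing power threatens to erode for some algorithms.
```

The point is not the toy cipher itself but the arithmetic: every extra key bit doubles the attacker’s work, and every leap in attacker compute claws some of that margin back.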

Many organizations are looking at new computing power to accelerate their data processing and AI projects. Technologies like HPE Cray, HPE HPC and other intelligent data platforms are helping organizations realize their business goals, and this processing power may also give them a leg up in the cybersecurity battle.

AI model theft

If your business uses a proprietary AI model to power its Generative AI, you’ll want to protect it. But there’s a vulnerability here: once you’ve trained your model and deployed it in the cloud, cybercriminals who gain access to your systems can reverse engineer the models you used to build the AI. In other words, they can steal your AI.

Understanding AI-Powered Cyberattacks to Beat Them

To prevent these attacks, you must first understand how cybercriminals are changing their strategies as AI grows more sophisticated. To do that, you’ll have to clamp down on some key elements in your security protocols:

  • Make sure your functions are confidential, meaning that only authorized users can view, access, or store key data such as the systems on which your AI is trained.
  • Build redundancies into your system so that everything your business relies on remains available whenever you need it. Even though you plan on allowing zero AI cyberattacks, you should have a contingency plan in place for when one slips through.

AI-powered cyberattacks might look intimidating from a cybersecurity perspective right now. But AI is also working on your side. Use strong encryption while it still outpaces the capabilities of AI. Keep your user authentication methods up to date and install the latest patches promptly. Implement monitoring systems that look for anomalies, and address them as soon as you identify them. AI might sound like a persistent threat, but cybersecurity isn’t going anywhere, either. Work with the Comport team to get started on your AI-powered cybersecurity strategy today!
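As a concrete illustration of anomaly monitoring, here is a minimal sketch using made-up hourly traffic counts and a simple z-score check; real monitoring platforms are far more sophisticated, but the principle of flagging sharp deviations from a baseline is the same.

```python
# Minimal anomaly-monitoring sketch: flag any hour whose traffic deviates
# sharply from the baseline. The counts below are invented for illustration.
from statistics import mean, stdev

def find_anomalies(counts, threshold=2.5):
    """Return the indices whose z-score exceeds the threshold."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts)
            if sigma > 0 and abs(c - mu) / sigma > threshold]

# Typical traffic hovers near 100 requests/hour; hour 6 spikes to 900,
# the kind of surge a scripted, AI-scaled attack can produce.
hourly = [101, 98, 103, 97, 102, 99, 900, 100, 96, 104]
print(find_anomalies(hourly))  # -> [6]
```

A flagged index is a prompt for investigation, not proof of an attack; the value of monitoring is catching the spike while there is still time to respond.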

Extend the capabilities of your IT team with Comport’s technology services and solutions.

Contact an expert


                            ComportSecure Streamlines Managed IT Services

                            Take advantage of ComportSecure’s comprehensive managed cloud services and team of experts to transform your cloud. Contact us today to take your cloud solutions to the next level.