The Role of Generative AI in Cybersecurity

By Antonio Sanchez, Principal Cybersecurity Evangelist, Fortra.


Business is moving forward, and security must move along with it. Thanks largely to generative AI models like ChatGPT, things are moving faster than ever. This is both positive and problematic.

While AI that can research an acquisition and order lunch for the office is an obvious boon, there are other angles to consider when weighing the overall picture. And yet, regardless of how you measure it, generative AI is here to stay.

As a security professional myself, I find the best way to deal with inevitable challenges is to arm myself with knowledge. By understanding how threat actors are leveraging ChatGPT to do their bidding, we can best prepare to counter in kind.

Judging by its potential, generative AI will have a huge impact on the future of cybersecurity. By studying its capabilities, uses, and abuses, security practitioners can begin to craft what that impact will be.

ChatGPT: Moving the Needle

Once the genie is out of the bottle, no one wants to put it back. The truth is, there are so many incredible uses for ChatGPT (and other models like it) within the scope of business that it wouldn't make sense to put it back even if we could.

With ChatGPT, for example, one can craft creative ad copy or lead generation emails. There is no good reason to send a message with grammatical errors, and if anyone had questions about how to write a professionally toned memo, those queries are now moot.

The Risks of Generative AI

Now, for the bad part.

Remember those perfect emails? ChatGPT can spin them up for professional phishers, too (and does). What about malware? ChatGPT's got us covered there as well. No more cranking out ransomware by hand (that's so 2004). On top of RaaS kits and the broader crime-as-a-service economy, generative AI has made spinning up novel attacks that much easier, and researchers have demonstrated AI-generated malware variants designed to slip past EDR tools.

And let's not forget deepfakes, relentless probing for sensitive data, the possibility of probing for API vulnerabilities (something that takes a long time sans AI), and quick-and-easy training for aspiring cybercriminals. Add to that the fact that anything ChatGPT ingests may become part of what it can reproduce – meaning someone asking the right questions on the other side of the world could conceivably end up with your source code, or anything else you've exposed it to. The list of nefarious deeds is bounded only by imagination.

Defending Against AI, With AI

Thankfully, the knife cuts both ways. This genie grants wishes to whoever rubs the lamp, so security practitioners just need to get creative. Luckily for us, that’s always been one of our strong suits.

Here are ways defenders have been leveraging that same dynamic power to thwart AI-driven cybercriminal activity.

1. Plugging the cyber talent gap | With ChatGPT, practitioners can get to the bottom of analysis problems quickly, throwing out questions like "What is the baseline behavior for this asset?" Less time analyzing data means more time making critical decisions.

2. Automating basic tasks | From searching logs to hunting threats to even patching vulnerabilities and pen testing, the list of “things doable by AI” is widening with generative AI at the helm.

3. Security program analysis | When on the brink of making a decision regarding the future of your security program – say, which tool to add in next – generative AI can help deliver the stats on the current state of your program, so you know better what you need.

4. Security analytics | Necessary, pre-decision busywork like creating YARA rules, IDS signatures, and even search queries can be offloaded to generative AI models. That way, your team spends more time deciphering the data and less time just getting to it.

5. Fighting AI with AI | As it was so plainly stated in Forbes, “We no longer have a choice—we are being forced to fight AI cyber threats with AI cybersecurity.” That might very well be true. As generative AI spins up exploits beyond the capacity of current tools, it’s going to take the power that only AI-driven tools can produce to beat it at its own game.
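To make item 4 concrete, here is a minimal sketch of how a team might template a request to a generative model to draft a YARA rule from a list of indicators. The function name, prompt wording, and workflow are illustrative assumptions, not something prescribed in this article; the resulting prompt would be sent to whatever model your stack uses, and any rule it returns should be reviewed by an analyst before deployment.

```python
# Minimal sketch (illustrative assumptions throughout): build a prompt asking a
# generative model to draft a YARA rule from known indicator strings. The model
# call itself is omitted; we only template the request.

def build_yara_prompt(rule_name: str, description: str, indicators: list[str]) -> str:
    """Return a prompt asking a generative model for a draft YARA rule."""
    indicator_lines = "\n".join(f"- {ind}" for ind in indicators)
    return (
        f"Draft a YARA rule named '{rule_name}'.\n"
        f"Purpose: {description}\n"
        f"Match any of these indicator strings:\n{indicator_lines}\n"
        "Include a meta section marking the rule as a draft for analyst review."
    )

prompt = build_yara_prompt(
    rule_name="suspicious_loader",
    description="Flag samples carrying a known loader marker",
    indicators=["LoadLibraryA", "cmd.exe /c"],
)
print(prompt)
```

The point of templating rather than free-form chatting is repeatability: the same indicators always produce the same request, which makes the model's drafts easier to compare and audit.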

It is a bright future for defenders because the security implications of generative AI are themselves nearly inexhaustible. The technology available to one is available to all, and we’re scrappier and more resourceful than most. But the truth is, given the malicious uses, capacities, and potential of generative AI today, we’re going to have to be.
