Cyber vendors or cyber criminals: who will win the AI race?

By Brett Raybould, EMEA Solutions Architect, Menlo Security.


It is almost a year since OpenAI officially launched ChatGPT in November 2022, prompting a collective gasp. A year is not a long time, but when it comes to artificial intelligence it seems like a lifetime ago; a lot has happened since then.

ChatGPT uses a form of AI that generates new content rather than simply categorising what it sees. The result: lucid, creative, and persuasive text, produced entirely by software. It is just one example of this type of AI, known as a large language model (LLM). Meta (formerly Facebook) released its own model, LLaMA, as open source software, and Google has one called Bard. In fact, anyone can use open source LLMs to create their own specialised version.

But where there is new technology there is also risk, and cyber criminals will almost certainly look for ways to exploit it. Because many of these LLMs are open source, criminals have already produced 'dark' versions designed to help create and launch attacks. WormGPT was among the first to appear, and with attackers quick to innovate, other programs like Evil-GPT are already pushing the scope of what can be done.

The likes of WormGPT and FraudGPT are trained to produce fraudulent phishing text and malicious code. Asking them to write an SMS message or an email that lures a recipient into clicking a malicious link can deliver convincing results.

They can produce clear, grammatically accurate text that makes phishing emails even more convincing. This opens the technique up to more non-native speakers and potentially lowers the barrier to entry for attackers who are not particularly skilled. It also means that scammers can create emails at scale.

This is especially useful for snowshoe spammers, who bulk-register domains to launch thousands of short-run phishing campaigns.

As more and more cyber criminals take advantage of AI tools, we can expect attack volumes to rise. This makes it even more important to understand what is happening in the web browser so that security vendors and partners can prevent these attacks automatically.

AI attacks targeting the web browser

AI-assisted malware generation is a real concern for organisations, with most attacks delivered via the browser and an increasing number of them evading detection by traditional tools in the security stack. These include:

Smuggling malware under the radar: One of the most evasive browser-based threats is HTML smuggling. This avoids file scanners by using JavaScript to build files with malicious capabilities directly in the browser (see the sketch after this list).

Bypassing email security: Traditional email scanners look for malicious links, but attackers increasingly use non-email browser communications such as social media messaging to reach victims.

Avoiding URL filters: Link scanners use a domain's reputation when evaluating whether to allow access to it. Attackers increasingly deliver malicious content from trustworthy domains that allow user uploads, such as Office 365, making phishing attacks and malware harder for traditional tools to detect.

Evading HTTP inspection: Firewalls that inspect page content will miss malicious content that JavaScript constructs using the browser's rendering engine.
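To make the in-browser construction technique concrete, here is a minimal, deliberately benign sketch of the HTML smuggling mechanism: the file is assembled client-side from fragments, so network-level scanners only ever see script text, never the finished payload. All names and content here are illustrative; a real attack would assemble something malicious rather than plain text.

```typescript
// Benign illustration of HTML smuggling: assemble a file inside the
// browser so it never crosses the network in downloadable form.
const chunks: string[] = ["This could be ", "any content, ", "built piece by piece."];

// Reassemble the payload entirely in browser memory.
const payload = new Blob(chunks, { type: "text/plain" });

// Trigger a download through a temporary anchor element. A firewall or
// email scanner inspecting the traffic would only have seen JavaScript.
const link = document.createElement("a");
link.href = URL.createObjectURL(payload);
link.download = "assembled.txt";
document.body.appendChild(link);
link.click();
URL.revokeObjectURL(link.href);
link.remove();
```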

Using AI security to fight AI threats

If attackers have already worked out how to use artificial intelligence to hone their skills and deliver more effective attacks, then we as an industry must keep up and fight AI fire with AI.

We can do this by using AI-powered classification algorithms to gather and analyse all of the data about an attack, including what is happening in the web browser and any data about web traffic targeting the company's network. As the main attack vector today, the browser is a focal point where we can understand more about attack methods.

AI algorithms also thrive on large data sets, and the more relevant the data they have to train on, the more accurate the results will be. AI-powered security tools have plenty of available data in the form of web traffic and the behaviour of that traffic inside the browser. By training on that data to spot abnormal or malicious activity, these tools can provide defence directly in the browser.
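As a simplified illustration of that idea, the sketch below trains a toy logistic-regression classifier on hypothetical browser-traffic features and then scores new sessions. The feature names, training samples, and hyperparameters are all invented for the example; production tools use far larger data sets and richer models.

```typescript
// Toy anomaly classifier: logistic regression trained on numeric
// features extracted from browser traffic. Everything here (features,
// samples, thresholds) is hypothetical.
type Sample = { features: number[]; label: 0 | 1 }; // 1 = malicious

// Hypothetical features: [scriptToTextRatio, blobDownloads, domainAgeScore]
const trainingData: Sample[] = [
  { features: [0.1, 0, 0.9], label: 0 },
  { features: [0.8, 1, 0.1], label: 1 },
  { features: [0.2, 0, 0.7], label: 0 },
  { features: [0.9, 1, 0.2], label: 1 },
];

const sigmoid = (z: number): number => 1 / (1 + Math.exp(-z));

// Gradient-descent training; the last weight is the bias term.
function train(data: Sample[], epochs = 2000, lr = 0.1): number[] {
  const w: number[] = new Array(data[0].features.length + 1).fill(0);
  for (let e = 0; e < epochs; e++) {
    for (const { features, label } of data) {
      const z = features.reduce((s, x, i) => s + w[i] * x, w[w.length - 1]);
      const err = sigmoid(z) - label;
      features.forEach((x, i) => (w[i] -= lr * err * x));
      w[w.length - 1] -= lr * err;
    }
  }
  return w;
}

// Risk score in [0, 1] for a new traffic sample.
function score(w: number[], features: number[]): number {
  const z = features.reduce((s, x, i) => s + w[i] * x, w[w.length - 1]);
  return sigmoid(z);
}

const weights = train(trainingData);
console.log(score(weights, [0.85, 1, 0.15])); // high score: flag as malicious
```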

New AI-based security tools can use computer vision to 'see' images that scammers insert into emails or web pages to fool scanners. They can apply sophisticated URL risk scoring mechanisms, combining them with an analysis of web page elements. When passed through constantly updated machine learning models, this data can determine the intent of a website in real time while detecting highly evasive attacks.
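A URL risk-scoring mechanism of the kind described above might combine signals like the following. The weights, thresholds, and signal names here are hypothetical, standing in for the output of trained models rather than any vendor's actual scoring:

```typescript
// Illustrative (hypothetical) URL risk scorer combining URL heuristics
// with signals derived from page-element and image analysis.
interface PageSignals {
  url: string;
  hasPasswordField: boolean;   // possible credential harvesting
  externalFormAction: boolean; // form posts to a different domain
  brandLogoDetected: boolean;  // e.g. flagged by a computer-vision model
}

function urlRiskScore(s: PageSignals): number {
  let score = 0;
  const { hostname } = new URL(s.url);

  if (/^\d{1,3}(\.\d{1,3}){3}$/.test(hostname)) score += 0.3;   // raw IP host
  if (hostname.split(".").length > 4) score += 0.2;             // deep subdomain nesting
  if (/login|verify|secure|account/i.test(s.url)) score += 0.1; // lure keywords
  if (s.hasPasswordField && s.externalFormAction) score += 0.3;
  if (s.brandLogoDetected) score += 0.3; // known brand imagery on an unfamiliar domain

  return Math.min(score, 1); // 0 = benign, 1 = maximum risk
}

console.log(
  urlRiskScore({
    url: "https://secure-login.example-files.top/verify",
    hasPasswordField: true,
    externalFormAction: true,
    brandLogoDetected: true,
  })
); // => 0.7: high risk, likely phishing
```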

Using tools like these enables cybersecurity vendors to shine a light on what is actually happening inside the browser.

The real concern is just how far the technology will go, with bots like DarkBART already promising features such as combined text and image responses.

It’s important we constantly evolve our technology to keep up with attack tools and methods. By incorporating AI capabilities now, organisations will have a better chance of preventing sophisticated, automated attacks before they happen.
