By Sharon Atieno
Advanced security practices and tools are needed to defend against cybercriminals as they increasingly employ generative artificial intelligence (Gen AI), a technology that can produce various types of content, including text, imagery, audio, and synthetic data.
This is according to Trend Micro Incorporated, a global cybersecurity company, which recorded 1.8 million malware detections targeting Kenyan businesses and consumers in 2023.
“The speed and scalability of AI is increasing the sophistication of social engineering, while also making it quicker and easier for cybercriminals to trawl through large datasets for information to exploit,” says Zaheer Ebrahim, Solutions Architect, Middle East and Africa at Trend Micro Incorporated.
“To guard against these attacks, defenders need to understand the nature of the threats they are facing and evolve their security practices accordingly.”
Before Gen AI’s breakthrough, cybercriminals had two main phishing strategies. One was to mass-blast a huge number of targets and hope to catch a few vulnerable users. The other was to extensively research specific users and target them manually—a high-effort, high-success method known as ‘harpoon phishing’ or ‘whale phishing’.
The company notes that Gen AI is converging those two models, making it easy for attackers to send targeted, error-free, and tonally convincing messages at mass scale and in multiple languages. This is already branching beyond emails and texts to include persuasive audio and video ‘deepfakes’, an even greater threat to businesses.
The rise of readily available app-style interfaces like HeyGen has made these new techniques difficult to counter. Cybercriminals with no coding knowledge or special computing resources can produce customised, high-resolution outputs that humans cannot reliably detect as fake.
Experts at the company predict that ill-intentioned large language model (LLM) development efforts are likely to persist in 2024, accompanied by new tools for malware authorship and other tasks.
“As information theft increases, a whole new cybercriminal service, ‘reconnaissance as a service’ (ReconaaS), is likely to emerge. Certain bad actors will use AI to extract useful personal information from stolen data and sell it to other cybercriminals for ultra-targeted attacks,” a press statement reads.
Another tactic likely to become widespread is the targeting of AI apps themselves; this can be combated by ensuring public AI tools are not trained on user inputs.
To combat these tactics, Trend Micro Incorporated recommends combining zero-trust approaches with the use of AI to strengthen security.
Under a zero-trust approach, identities must always be verified, and only the necessary people and machines can access sensitive information or processes, for defined purposes and at specific times. This limits the attack surface and slows attackers down.
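In code terms, such a zero-trust check boils down to a deny-by-default rule: every request must match an explicit, time-bound, purpose-bound grant. The sketch below illustrates the idea only; the policy fields, service names, and time windows are invented for this example and do not reflect any specific vendor's product.

```python
from dataclasses import dataclass
from datetime import datetime, time

@dataclass(frozen=True)
class AccessPolicy:
    """One least-privilege grant: who may access what, for which purpose, and when."""
    principal: str      # verified identity (person or machine); hypothetical example names below
    resource: str       # the sensitive data or process being protected
    purpose: str        # the defined purpose the grant covers
    window_start: time  # start of the permitted time window
    window_end: time    # end of the permitted time window

POLICIES = [
    AccessPolicy("billing-service", "customer-db", "invoice-run",
                 time(1, 0), time(3, 0)),
]

def is_allowed(principal: str, resource: str, purpose: str,
               now: datetime, identity_verified: bool) -> bool:
    """Deny by default: a request succeeds only if it matches an explicit grant."""
    if not identity_verified:  # identities must always be verified first
        return False
    return any(
        p.principal == principal
        and p.resource == resource
        and p.purpose == purpose
        and p.window_start <= now.time() <= p.window_end
        for p in POLICIES
    )

# A request outside its stated purpose or time window is refused,
# which is what limits the attack surface and slows attackers down.
print(is_allowed("billing-service", "customer-db", "invoice-run",
                 datetime(2024, 5, 1, 2, 0), identity_verified=True))   # True
print(is_allowed("billing-service", "customer-db", "data-export",
                 datetime(2024, 5, 1, 2, 0), identity_verified=True))   # False
```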
Additionally, cybersecurity awareness training backed up with defensive technologies is encouraged. AI and machine learning can be used to detect sentiment and tone in messages or evaluate web pages to prevent fraud attempts that might slip by users.
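As a rough illustration of the kind of machine-learning screening described above, the sketch below trains a toy text classifier to flag messages with an urgent, coercive tone. The training messages, labels, and threshold are invented for demonstration; a production system would use far larger labelled datasets and richer models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: urgent, coercive phishing tone (1) vs. routine business tone (0).
messages = [
    "URGENT: your account will be suspended, verify your password now",
    "Final warning: wire the payment immediately or face legal action",
    "Act now! Click this link to reclaim your blocked funds",
    "Attached is the agenda for Thursday's project meeting",
    "Thanks for the report, let's review the figures next week",
    "Reminder: the office closes early on Friday",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features plus logistic regression: a simple tone classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Score an incoming message; anything above a chosen threshold is flagged for review
# before it ever reaches a user who might be fooled by it.
incoming = "Urgent: confirm your password immediately to avoid suspension"
score = model.predict_proba([incoming])[0][1]
print(f"suspicion score: {score:.2f}")
```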
“Over the coming year local businesses should expect to see cybercriminals leverage AI in new and sophisticated ways. However, defenders can use the technology to their own advantage, combining AI with zero-trust security frameworks and a strong security culture to combat evolving criminal tactics,” notes Ebrahim.