By Milliam Murigi

The rapid spread of Artificial Intelligence (AI) tools in public hands is transforming not only how people work and communicate, but also how cybercriminals operate.

According to Carsten Maple, Professor of Cyber Systems Engineering at the University of Warwick’s Cyber Security Centre (CSC) and a fellow of the Alan Turing Institute (ATI), the growing accessibility of AI tools is enabling attackers to carry out faster and more sophisticated fraud schemes.

“AI systems are now widely accessible, marking what analysts describe as a ‘massive change in the democratization of AI’,” said Prof. Maple, who was speaking during the MOSIP Connect 2026 Conference.

“Tools such as conversational bots and autonomous AI agents, once confined to research labs or major corporations, are increasingly available to individuals, including malicious actors. While this accessibility fuels innovation, it also expands opportunities for misuse.”

According to him, attackers are already using AI to accelerate and scale cyberattacks. Increased computing power allows fraud attempts to be executed faster and across entire digital infrastructure stacks, particularly in identity systems that underpin government services, financial platforms, and public databases.

One growing concern, he noted, is the rise of high-fidelity impersonation. Advanced deepfake technologies, once associated mainly with viral videos of political figures, are now being deployed in highly targeted attacks. Criminals can generate realistic audio, images, or video to impersonate government officials, technical staff, or authorized personnel, potentially gaining access to sensitive systems.

“AI is posing new risks for Digital Public Infrastructure (DPI), particularly by enabling attackers to scale fraud, impersonation, and system intrusions at unprecedented speed and sophistication, which means security systems must evolve just as rapidly to keep pace,” Prof. Maple said.

DPI refers to the foundational digital systems that enable governments and societies to deliver services and conduct transactions securely and efficiently. These typically include digital identity platforms, payment systems, and data-sharing frameworks that support everything from accessing healthcare and social services to voting, banking, and online verification.

Recent international cases highlight the threat. Mexico, for instance, has reportedly been forced to strengthen its digital ID program after a surge in AI-driven attacks. Several other nations, including Ukraine, India, and Israel, have experienced significant DPI attacks, with many incidents in 2024 and 2025 driven or amplified by AI and automated tools.

“AI is also being used to probe and bypass security defenses,” said Prof. Maple. “Techniques such as automated pattern analysis can identify weak points in detection systems, allowing attackers to subtly alter their behavior and slip past safeguards designed to flag suspicious activity.”

He described this as amplifying “systemic risk”: vulnerabilities in one layer of infrastructure can cascade across connected services. A single breach could affect sectors ranging from government services and healthcare to financial institutions and private enterprises, exposing society to far-reaching consequences if defenses are not rapidly updated.

Despite these concerns, Prof. Maple emphasized that AI is not only a threat but also a critical tool for defense. It can help countries detect and respond to threats earlier, strengthen measures against fraud and impersonation, enhance the resilience of biometric systems through continuous model evaluation, and improve the detection of attempted attacks.

“For example, some research teams at ATI are developing tools capable of detecting manipulated images of IDs or altered selfies with accuracy exceeding current industry standards,” he revealed.

It is against this background that the institute has developed the Digital Identity Systems Trustworthiness Assessment Framework (DISTAF) through its Trustworthy Digital Infrastructure (TDI) programme. DISTAF is designed to evaluate and enhance the privacy, security, and ethical reliability of national digital identity systems.

The framework, according to Mirko Bottarelli from the same institute, provides structured guidelines that help organizations, particularly in the public sector, assess the trustworthiness of their digital infrastructures, ensuring they are secure, privacy-preserving, and ethically sound.

“By offering clear benchmarks and assessment tools, DISTAF aims to support governments and institutions in building digital identity systems that are both resilient to AI-driven threats and aligned with public trust,” said Bottarelli.