In 2025, AI played a growing role in cybersecurity, shaping both attacker methods and defensive strategies. According to the Kaseya INKY Email Security Report, AI-generated content became widely used in phishing attacks.
The report outlines how AI can be used to produce highly convincing phishing messages at scale, undermining traditional warning signs such as poor grammar, suspicious domains, and obviously malicious links. As a result, defenders increasingly need to assess the intent and context of an email rather than relying solely on these conventional indicators.
Phishing continues to be a leading vector for cyberattacks, accounting for 26% of cybercrime-related complaints filed with the FBI and contributing to substantial financial losses, including $2.8 billion associated with Business Email Compromise. A large proportion of these attacks—up to 82%—target organisations with fewer than 1,000 employees, underscoring how exposed small and mid-sized businesses (SMBs) are to cyber threats.
The report also states that among the more than 4.5 billion emails processed by the INKY system in 2025, 281 unique brands were impersonated. Using AI-generated layouts, attackers can replicate legitimate communications from financial and retail organisations far more closely than before.
AI is also being applied in defensive tools. INKY reports using generative AI-driven detection models, including intent-based labelling, multi-label classification, and computer vision techniques, to improve threat identification. These approaches are being developed alongside changes in phishing methods, such as the use of calendar invitations, protected document prompts, and callback phone numbers.
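The report does not describe INKY's models in detail, but the idea of multi-label intent classification—where a single email can carry several malicious intents at once—can be illustrated with a minimal sketch. The labels and keyword lists below are hypothetical examples chosen to echo the lure types the report mentions (calendar invitations, protected document prompts, callback phone numbers); a production system would use trained models rather than keyword matching.

```python
# Minimal sketch of multi-label intent labelling for emails.
# The labels and keyword lists are hypothetical, not INKY's actual
# model, which the report describes as generative-AI-driven.

INTENT_KEYWORDS = {
    "credential_harvest": ["verify your account", "password expires", "sign in to view"],
    "callback_scam": ["call us at", "phone support", "contact this number"],
    "calendar_lure": ["meeting invitation", "calendar invite", ".ics"],
    "protected_document": ["protected document", "secure file", "view attachment"],
}

def label_intents(body: str) -> set[str]:
    """Return every intent label whose keywords appear in the email body.

    An email can match several labels at once, which is what makes the
    task multi-label rather than single-class.
    """
    text = body.lower()
    return {
        label
        for label, phrases in INTENT_KEYWORDS.items()
        if any(phrase in text for phrase in phrases)
    }

email = (
    "Your password expires today. Open the protected document "
    "and sign in to view it, or call us at 555-0100."
)
print(sorted(label_intents(email)))
# → ['callback_scam', 'credential_harvest', 'protected_document']
```

The point of the multi-label framing is visible in the example: one message combines a credential lure, a protected-document prompt, and a callback number, and the classifier reports all three rather than forcing a single verdict.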
As AI continues to shape both attack techniques and security tools, organisations will need to keep updating their defences to match the pace of evolving phishing activity.