New capabilities for attackers
Generative models let attackers produce personalized, high-quality messages, fake audio or video snippets, and believable pretexts at scale. These capabilities shorten the time needed to craft a convincing scam and increase the likelihood that victims will comply.
Common AI-enabled attack patterns
Personalised phishing
Using public data and leaked records, attackers create messages that reference recent events, colleagues' names, or expected workflows, increasing credibility.
Spear-impersonation and voice cloning
With only short voice clips or deepfake video, attackers can impersonate executives and pressure employees into approving urgent transfers or changing account settings.
How to defend people and processes
Verification workflows
Design simple, mandatory verification steps for sensitive tasks: multi-channel confirmation, transaction thresholds, and documented approvals. Train staff to follow these consistently even under pressure.
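A verification policy like this can be encoded directly in software so it cannot be skipped under pressure. The sketch below is illustrative only: the threshold amount, channel names, and data structures are assumptions, not a prescribed implementation, and a real system would pull these values from your organisation's approval policy.

```python
from dataclasses import dataclass, field

# Hypothetical policy values for illustration; real thresholds and
# channels come from your organisation's documented approval policy.
TRANSFER_THRESHOLD = 10_000             # amounts above this need a second approver
REQUIRED_CHANNELS = {"email", "phone"}  # confirmation must arrive via every channel

@dataclass
class TransferRequest:
    amount: float
    confirmed_channels: set = field(default_factory=set)
    approver_ids: list = field(default_factory=list)

def is_verified(req: TransferRequest) -> bool:
    """Return True only if every mandatory verification step has passed."""
    # Multi-channel confirmation: the requester must be confirmed on
    # every required channel, not just the one the request arrived on.
    if not REQUIRED_CHANNELS.issubset(req.confirmed_channels):
        return False
    # Documented approval: at least one approver on record.
    if not req.approver_ids:
        return False
    # Transaction threshold: large transfers need a second approver.
    if req.amount > TRANSFER_THRESHOLD and len(req.approver_ids) < 2:
        return False
    return True
```

The point of making the check a single function is that "urgent" requests cannot route around it: a deepfaked phone call from an executive still fails unless the out-of-band confirmations and approvals are actually on record.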
Limit information exposure
Reduce public-facing data that attackers use to craft pretexts. Review directory listings, public repos, and social profiles for unnecessary details.
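Reviews like this can be partly automated. As a minimal sketch, assuming you can export the text of pages or repos before publishing, a simple scan can flag email addresses and phone-like numbers that attackers could fold into a pretext; the regexes here are deliberately rough and will produce false positives.

```python
import re

# Rough patterns for illustration only: they catch common cases but are
# not exhaustive, so treat hits as candidates for human review.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def find_exposed_details(text: str) -> dict:
    """Return email addresses and phone-like strings found in text."""
    return {
        "emails": EMAIL_RE.findall(text),
        "phones": PHONE_RE.findall(text),
    }
```

Running this over directory pages, README files, and public profiles before they go live gives a cheap checkpoint for details that do not need to be public.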
Tools and training
Use AI-aware security awareness training that includes examples of AI-generated scams. Apply email filtering and data loss prevention that detect unusual language or request patterns. See our practical guidance on spotting phishing: How to Spot and Avoid Phishing Attacks.
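To make "unusual language or request patterns" concrete, here is a deliberately simple heuristic scorer. Real filtering products use trained models rather than keyword lists; the categories and terms below are assumptions chosen to show the idea of flagging messages that combine urgency, secrecy, and a payment request, the classic social-engineering triad.

```python
# Illustrative heuristic only: production filters use trained classifiers,
# but a category count shows the shape of the signal they look for.
URGENCY = {"urgent", "immediately", "asap", "right away"}
SECRECY = {"confidential", "discreet", "don't tell", "between us"}
PAYMENT = {"wire", "transfer", "gift card", "invoice", "payment"}

def suspicion_score(message: str) -> int:
    """Count how many risk categories (0-3) the message triggers."""
    text = message.lower()
    return sum(
        any(term in text for term in category)
        for category in (URGENCY, SECRECY, PAYMENT)
    )
```

A message scoring 2 or 3 is a natural candidate for quarantine or an extra verification prompt, while a score of 0 passes through, which is exactly the triage role such filters play ahead of human review.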
Balance technology with human checks
Automation helps filter low-skill attacks, but high-quality AI-driven social engineering requires human verification and process hardening. Regularly rehearse incident response and update playbooks when new attack patterns appear.
Where this fits with Esrok
This article strengthens our security cluster alongside our phishing and account recovery guidance. For account recovery threats see Account recovery scams explained. For technical protections, review our Security pillar.