The telltale signs an average person relies on to spot a scam email, such as misspellings, grammar errors, and a lack of cultural context, will be harder to find in 2024. Attackers will continue to use generative AI and large language models (LLMs) in phishing, SMS, and other social engineering operations to make the content, including voice and video, appear more legitimate.
According to the Google Cloud Cybersecurity Forecast 2024 report, generative AI will also aid malicious activity at scale. With access to names, organizations, job titles, departments, or health data, attackers may not even need to use malicious LLMs, as there is nothing inherently malicious about using gen AI to draft an invoice reminder.