ChatGPT creates mutating malware that evades detection by EDR

A global sensation since its initial release at the end of last year, ChatGPT has won over consumers and IT professionals alike while stirring up cybersecurity nightmares about how it can be used to exploit system vulnerabilities. A key problem, cybersecurity experts have demonstrated, is the ability of ChatGPT and other large language models (LLMs) to generate polymorphic, or mutating, code that evades endpoint detection and response (EDR) systems.

A recent series of proof-of-concept attacks shows how a benign-seeming executable file can be crafted so that, on every run, it makes an API call to ChatGPT. Rather than simply reproducing already-written code snippets, ChatGPT can be prompted to generate a freshly mutated version of the malicious code on each call, making the resulting exploits difficult for cybersecurity tools to detect.
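To make the mechanism concrete, the sketch below shows the runtime-generation pattern in Python, assuming the standard OpenAI chat-completions REST endpoint. The prompt, model name, and task are illustrative stand-ins (a deliberately benign string function, not the proof-of-concept's actual payload), and the returned code is printed rather than executed.

```python
# A minimal sketch of the runtime code-generation pattern described above.
# Assumptions: the public OpenAI chat-completions endpoint, an API key in the
# environment, and a benign placeholder prompt. The response is never executed.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"


def fetch_fresh_snippet() -> str:
    """Request a newly worded implementation on every call.

    Because the model rewrites the code each time, no two runs return
    byte-identical output -- the property that defeats signature-based
    matching in EDR tools.
    """
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{
                "role": "user",
                # Benign stand-in task; the attacks described above would
                # request malicious logic here instead.
                "content": "Write a Python function that reverses a string, "
                           "using unusual variable names and structure.",
            }],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # In the proof-of-concept attacks, the returned code would be executed
    # in memory at this point; here it is only displayed.
    print(fetch_fresh_snippet())
```

The detail that matters for defenders is the last step: because the code arrives over an ordinary HTTPS call to a reputable API and changes on every run, there is no static signature to match, so detection has to focus on the behavior of executing dynamically fetched code.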
