For cybercriminals looking to attack businesses, email continues to be the preferred attack vector. Despite a rapidly changing technology landscape with new innovations such as ChatGPT, cybercriminals are opting to refine their existing email-based tactics rather than create new methods altogether. This is largely because email provides cybercriminals with a direct line…

A global sensation since its initial release at the end of last year, ChatGPT has stirred up cybersecurity nightmares among consumers and IT professionals alike about how it can be used to exploit system vulnerabilities. A key problem, cybersecurity experts have demonstrated, is the ability of ChatGPT and other large language models (LLMs) to…

OX Security, a leader in software supply chain security, today announced the launch of OX-GPT, the first ChatGPT integration designed to improve software supply chain security. With the new integration, OX now presents developers with customized fix recommendations and cut-and-paste code fixes, enabling quick remediation of critical security issues across the software supply…

Italy is temporarily blocking the artificial intelligence software ChatGPT in the wake of a data breach as it investigates a possible violation of stringent European Union data protection rules, the government’s privacy watchdog said Friday. The Italian Data Protection Authority said it was taking provisional action “until ChatGPT respects privacy,” including temporarily limiting the company…

Microsoft on Wednesday rolled out an AI-powered security analysis tool to automate incident response and threat hunting tasks, showcasing a security use case for the technology behind the popular chatbot developed by OpenAI. The new tool, called Microsoft Security Copilot, is powered by OpenAI’s newest GPT-4 model and will be trained on data from Redmond’s massive trove of telemetry…

A proof-of-concept, artificial intelligence (AI)-driven cyberattack that changes its code on the fly can slip past the latest automated security-detection technology, demonstrating the potential for creating undetectable malware. Researchers from HYAS Labs demonstrated the proof-of-concept attack, dubbed BlackMamba, which exploits a large language model (LLM), the technology on which ChatGPT is based…

Employees are submitting sensitive business data and privacy-protected information to large language models (LLMs) such as ChatGPT, raising concerns that artificial intelligence (AI) services could incorporate the data into their models and that the information could be retrieved at a later date if proper data security isn’t in place for the service. In a recent…

Cybercriminals have lost little time in turning the artificial intelligence capabilities of ChatGPT to malicious purposes, using it to generate malware scripts. Security researchers at Check Point found members of the low-level hacking community Breach Forums posting the results of their interactions with the OpenAI-developed tool over the past few weeks. They include a machine-learning…