Advanced persistent threats (APTs) aligned with China, Iran, North Korea, and Russia are all using large language models (LLMs) to enhance their operations. New blog posts from OpenAI and Microsoft reveal that five major threat actors have been using OpenAI software for research, fraud, and other malicious purposes. After identifying them, OpenAI shuttered all their…

Generative artificial intelligence-enabled ransomware and nation-state hacks in the United Kingdom will “almost certainly” surge after this year, the National Cyber Security Centre warned. And British lawmakers called on the government to roll out measures to prevent AI scams. In a report evaluating the cyber risk posed by artificial intelligence, the NCSC evaluated…

Researchers uncovered a critical vulnerability in the graphics processing units of popular devices that could allow attackers to access data from large language models. The flaw, dubbed LeftoverLocals, affects the GPU frameworks of Apple, AMD and Qualcomm devices. Researchers at security firm Trail of Bits, who uncovered the flaw, said it stems from how the affected…

The British data regulator is set to analyze the privacy implications of processing scraped data used for training generative artificial intelligence algorithms. The Information Commissioner’s Office on Monday announced that it’s soliciting comments from AI developers, legal experts, and other industry stakeholders on how privacy rights might be affected by developments in generative AI. Since…

Visa’s newest security product applies AI to customer transactions, scoring them for the probability of fraud. Payment network Visa will offer a new AI-powered system designed to combat token fraud, analyzing transactions for patterns that could indicate fraudulent activity and helping protect financial institutions against losses. The new product, dubbed Visa Provisioning Intelligence, is now…

The new AI Safety Initiative has attracted participation from tech heavyweights Microsoft, Amazon, Google, OpenAI and Anthropic, and plans to work on tools, templates and data for deploying AI/LLM technology in a safe, ethical and compliant manner. “The AI Safety Initiative is actively developing practical safeguards for today’s generative AI, structured in a way…

Bringing its security and data analysis capabilities to a new potential audience, data security and multicloud data management provider Cohesity is now taking signups for access to its Turing generative AI features via Amazon’s Bedrock front-end for cloud-based AI. Cohesity Turing’s AWS-available features, the company announced Monday, will center on three main areas. The first…

Microsoft launches the Secure Future Initiative to usher in a “next generation” of cybersecurity and better protect customers against escalating threats. Microsoft has announced the launch of the Secure Future Initiative (SFI) to improve the built-in security of its products and platforms. The new initiative will bring…

The European Union will soon set up a dedicated office to oversee the implementation of the AI Act, especially by big-tech companies such as OpenAI, said a key European lawmaker. The European Parliament in June approved regulations intended to mitigate AI’s potential for negative effects on society. The AI Act entered final negotiations this month…