Tag: openai
-
ChatGPT Down as Users Report “Gateway Time-out” Error
by
in SecurityNews
ChatGPT Down: Users report “Gateway time-out” errors. OpenAI’s popular AI chatbot is experiencing widespread outages. Stay updated on the service disruption. First seen on hackread.com Jump to article: hackread.com/chatgpt-down-as-users-report-gateway-time-out-error/
-
Google, OpenAI Push Urges Trump to Ease AI Export Controls
by
in SecurityNews
AI Giants Also Like ‘Fair Use’ Exemptions for Copyrighted Material. OpenAI and Google laid out visions for regulation in response to the Trump administration’s AI Action Plan, which aims to help the United States maintain its technological lead over China. Both companies want Biden-era export controls lightened. First seen on govinfosecurity.com Jump to article: www.govinfosecurity.com/google-openai-push-urges-trump-to-ease-ai-export-controls-a-27739
-
AI Operator Agents Helping Hackers Generate Malicious Code
Symantec’s Threat Hunter Team has demonstrated how AI agents like OpenAI’s Operator can now perform end-to-end phishing attacks with minimal human intervention, marking a significant evolution in AI-enabled threats. A year ago, Large Language Model (LLM) AIs were primarily passive tools that could assist attackers in creating phishing materials or writing code. Now, with the…
-
Invisible C2, thanks to AI-powered techniques
by
in SecurityNews
Tags: ai, api, attack, breach, business, chatgpt, cloud, communications, control, cyberattack, cybersecurity, data, defense, detection, dns, edr, email, encryption, endpoint, hacker, iot, LLM, malicious, malware, ml, monitoring, network, office, openai, powershell, service, siem, soc, strategy, threat, tool, update, vulnerability, zero-trust
Invisible C2, thanks to AI-powered techniques. Just about every cyberattack needs a Command and Control (C2) channel, a way for attackers to send instructions to compromised systems and receive stolen data. This gives us all a chance to see attacks that are putting us at risk. LLMs can help attackers avoid signature-based detection. Traditionally, C2…
-
OpenAI’s Operator AI agent can be used in phishing attacks, say researchers
by
in SecurityNews
First seen on scworld.com Jump to article: www.scworld.com/news/openais-operator-ai-agent-can-be-used-in-phishing-attacks-say-researchers
-
Symantec Demonstrates OpenAI’s Operator Agent in PoC Phishing Attack
Symantec demonstrates OpenAI’s Operator Agent in PoC phishing attack, highlighting AI security risks and the need for proper cybersecurity. First seen on hackread.com Jump to article: hackread.com/symantec-openai-operator-agent-poc-phishing-attack/
-
Symantec Uses OpenAI Operator to Show Rising Threat of AI Agents
by
in SecurityNews
Symantec threat researchers used OpenAI’s Operator agent to carry out a phishing attack with little human intervention, illustrating the looming cybersecurity threat AI agents pose as they become more powerful. The agent learned how to write a malicious PowerShell script and wrote an email with the phishing lure, among other actions. First seen on securityboulevard.com…
-
DeepSeek R1 Jailbroken to Create Malware, Including Keyloggers and Ransomware
by
in SecurityNews
Tags: ai, chatgpt, cyber, cybercrime, exploit, google, intelligence, malicious, malware, openai, ransomware, tool
The increasing popularity of generative artificial intelligence (GenAI) tools, such as OpenAI’s ChatGPT and Google’s Gemini, has attracted cybercriminals seeking to exploit these technologies for malicious purposes. Despite the guardrails implemented by traditional GenAI platforms to prevent misuse, cybercriminals have circumvented these restrictions by developing their own malicious large language models (LLMs), including WormGPT, FraudGPT,…
-
Breach Roundup: The Ivanti Patch Treadmill
by
in SecurityNews
Also: Patch Tuesday, Equalize Scandal Figure Dies and Polymorphic Extension Attack. This week: Ivanti EPM customers should patch, Patch Tuesday, fake web browser extensions, North Korean Android malware, and a key figure in Italy’s Equalize scandal dead of a heart attack. Also, an Apache Camel flaw, OpenAI’s agent automates phishing and Apple patched another zero day. First seen…
-
OpenAI Operator Agent Used in Proof-of-Concept Phishing Attack
by
in SecurityNews
Researchers from Symantec showed how OpenAI’s Operator agent, currently in research preview, can be used to construct a basic phishing attack from start to finish. First seen on darkreading.com Jump to article: www.darkreading.com/threat-intelligence/openai-operator-agent-proof-concept-phishing-attack
-
Hackers Exploit Microsoft Copilot for Advanced Phishing Attacks
by
in SecurityNews
Hackers have been targeting Microsoft Copilot, a newly launched Generative AI assistant, to carry out sophisticated phishing attacks. This campaign highlights the risks associated with the widespread adoption of Microsoft services and the challenges that come with introducing new technologies to employees, according to a report by Cofense. Microsoft Copilot, similar to OpenAI’s ChatGPT, is…
-
Attackers Can Manipulate AI Memory to Spread Lies
Tested on Three OpenAI Models, ‘Minja’ Has High Injection and Attack Rates. A memory injection attack dubbed Minja turns AI chatbots into unwitting agents of misinformation, requiring no hacking and just a little clever prompting. The exploit allows attackers to poison an AI model’s memory with deceptive information, potentially altering its responses for all users.…
-
Manus mania is here: Chinese ‘general agent’ is this week’s ‘future of AI’ and OpenAI-killer
by
in SecurityNews
Prompts see it scour the web for info and turn it into decent documents at reasonable speed. First seen on theregister.com Jump to article: www.theregister.com/2025/03/10/manus_chinese_general_ai_agent/
-
MINJA sneak attack poisons AI models for other chatbot users
by
in SecurityNews
Nothing like an OpenAI-powered agent leaking data or getting confused over what someone else whispered to it. First seen on theregister.com Jump to article: www.theregister.com/2025/03/11/minja_attack_poisons_ai_model_memory/
-
UK CMA Halts Review of Microsoft, OpenAI Partnership
by
in SecurityNews
Probe into Microsoft’s $13 Billion OpenAI Investment Launched in 2023. The U.K. antitrust regulator won’t open an investigation into a partnership between computing giant Microsoft and artificial intelligence company OpenAI. The U.K. Competition and Markets Authority concludes that there is no relevant merger situation. First seen on govinfosecurity.com Jump to article: www.govinfosecurity.com/uk-cma-halts-review-microsoft-openai-partnership-a-27666
-
GPT-4.5 Scores EQ Points, but Not Much Else
by
in SecurityNews
Model Appears to Be a Way Station on the Road to Something Greater. OpenAI on Thursday released its latest generative AI model, but don’t call it the next big thing just yet. More thoughtful, persuasive and emotionally intelligent, GPT-4.5 aims to feel less like an algorithm and more like a conversation partner. First seen on…
-
Microsoft targets AI deepfake cybercrime network in lawsuit
by
in SecurityNews
Microsoft alleges that defendants used stolen Azure OpenAI API keys and special software to bypass content guardrails and generate illicit AI deepfakes for payment. First seen on techtarget.com Jump to article: www.techtarget.com/searchsecurity/news/366619781/Microsoft-targets-AI-deepfake-cybercrime-network-in-lawsuit
-
Microsoft files lawsuit against LLMjacking gang that bypassed AI safeguards
by
in SecurityNews
LLMjacking can cost organizations a lot of money: LLMjacking is a continuation of the cybercriminal practice of abusing stolen cloud account credentials for various illegal operations, such as cryptojacking, the abuse of hacked cloud computing resources to mine cryptocurrency. The difference is that large quantities of API calls to LLMs can quickly rack up huge costs, with…
-
Microsoft names alleged credential-snatching ‘Azure Abuse Enterprise’ operators
by
in SecurityNews
Crew helped lowlifes generate X-rated celeb deepfakes using Redmond’s OpenAI-powered cloud, Microsoft claims. First seen on theregister.com Jump to article: www.theregister.com/2025/02/28/microsoft_names_and_shames_4/
-
Does terrible code drive you mad? Wait until you see what it does to OpenAI’s GPT-4o
by
in SecurityNews
Model was fine-tuned to write vulnerable software, then suggested enslaving humanity. First seen on theregister.com Jump to article: www.theregister.com/2025/02/27/llm_emergent_misalignment_study/
-
Researchers Jailbreak OpenAI o1/o3, DeepSeek-R1, and Gemini 2.0 Flash Models
by
in SecurityNews
Researchers from Duke University and Carnegie Mellon University have demonstrated successful jailbreaks of OpenAI’s o1/o3, DeepSeek-R1, and Google’s Gemini 2.0 Flash models through a novel attack method called Hijacking Chain-of-Thought (H-CoT). The research reveals how advanced safety mechanisms designed to prevent harmful outputs can be systematically bypassed using the models’ reasoning processes, raising urgent questions…
-
‘OpenAI’ Job Scam Targeted International Workers Through Telegram
by
in SecurityNews
An alleged job scam, led by “Aiden” from “OpenAI,” recruited workers in Bangladesh for months before disappearing overnight, according to FTC complaints obtained by WIRED. First seen on wired.com Jump to article: www.wired.com/story/openai-job-scam/
-
OpenAI Purges ChatGPT Accounts: China and North Korea Weaponizing AI for Propaganda
by
in SecurityNews
OpenAI has confirmed that it has begun blocking accounts linked to Chinese and North Korean users who have… First seen on securityonline.info Jump to article: securityonline.info/openai-purges-chatgpt-accounts-china-and-north-korea-weaponizing-ai-for-propaganda/
-
OpenAI bans ChatGPT accounts used by North Korean hackers
by
in SecurityNews
OpenAI says it blocked several North Korean hacking groups from using its ChatGPT platform to research future targets and find ways to hack into their networks. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/security/openai-bans-chatgpt-accounts-used-by-north-korean-hackers/
-
OpenAI cracks down on malicious ChatGPT usage
by
in SecurityNews
First seen on scworld.com Jump to article: www.scworld.com/brief/openai-cracks-down-on-malicious-chatgpt-usage
-
China Using AI-Powered Surveillance Tools, Says OpenAI
by
in SecurityNews
Report Also Flags Threats Linked to North Korea, Iran. Chinese influence operations are using artificial intelligence to carry out surveillance and disinformation campaigns, OpenAI said in its latest threat report. The report details two major Chinese campaigns that misused AI tools, including OpenAI’s own models, to advance state-backed agendas. First seen on govinfosecurity.com Jump to…