
OpenAI Hardens ChatGPT Atlas Browser Against Prompt Injection Attacks

On December 22, 2025, OpenAI announced a security update for the ChatGPT Atlas browser agent aimed at reducing the risk of prompt injection attacks. The update follows internal red teaming that identified new exploit classes, leading to the deployment of an adversarially trained model designed to better resist malicious instructions embedded in web content. OpenAI describes this update as part of a continuous "proactive rapid response loop" to address evolving security challenges in agentic browsing.

Read More

Australia’s Social Media Ban: High Court to Hear Reddit and Constitutional Challenges

Australia’s landmark social media minimum age regime, which officially commenced on 10 December 2025, is heading to the High Court for a major constitutional test. The legal challenge, supported by the Digital Freedom Project and Reddit, disputes the validity of the under-16 ban and raises significant questions about the privacy of age assurance technologies. With key hearings scheduled for early 2026, the outcome will determine how the law defines social media platforms and the extent to which the government can regulate digital access for young Australians.

Read More

University of Sydney Cyber Incident: 27,500 Affected by Code Library Data Breach

A cyber incident at the University of Sydney has resulted in the unauthorized access and download of historical data files containing personal information for approximately 27,500 individuals, including current and former staff, students, and alumni. While no evidence of data misuse has been detected as of late December 2025, cybersecurity authorities warn that such breaches increase the risk of sophisticated, AI-driven impersonation and social engineering scams. The University has secured the affected environment and is in the process of notifying those impacted.

Read More

Grok Misinformation Around the Bondi Beach Attack Shows How Fast AI Can Mislead During Breaking News

Following the Bondi Beach shooting that left 16 people dead, scrutiny has turned to the role of artificial intelligence in breaking news. While NSW Police worked to update casualty figures, reports from multiple outlets indicate that xAI’s Grok assistant produced incorrect information during the event. The chatbot reportedly misidentified the bystander who disarmed a gunman and cast doubt on the authenticity of verified scene footage, highlighting the challenges of using AI tools for real-time information verification.

Read More

NAB Blocks AUD 100k Loss in 'Kevin Costner' AI Deepfake Scam

A National Australia Bank customer nearly transferred A$100,000 after receiving a convincing deepfake video call mimicking actor Kevin Costner. The incident highlights the escalating sophistication of AI-driven impersonation fraud and underscores the critical need for new banking verification controls and for the government's upcoming Scams Prevention Framework.

Read More

Australia’s Under-16 Social Media Ban: AUD 49.5m Fines & 10 Key Apps Listed

Australia enters the enforcement phase of its under-16 social media ban on December 10, 2025. With penalties reaching AUD 49.5 million, platforms like Facebook and TikTok must now demonstrate reasonable steps to block young users. This report details the legal scope, the 10 affected services, and the privacy safeguards mandated by the new legislation.

Read More

OpenAI Cuts Ties With Mixpanel After Nov 8 Breach Exposes API User Data

OpenAI has terminated its relationship with third-party analytics provider Mixpanel after a targeted phishing campaign exposed limited personal information linked to API users. While OpenAI confirms that ChatGPT consumer data and passwords were not affected, the incident has prompted a broader security review of the company's vendor ecosystem to address supply chain risks.

Read More

FBI Warns of USD 262M in Account Takeover Losses as AI Scams Rise

The Federal Bureau of Investigation (FBI) has reported that cybercriminals have stolen more than USD 262 million from US bank accounts in 2025 through account takeover (ATO) fraud. With more than 5,100 victims reporting incidents this year, the average loss per case has exceeded USD 51,000. Authorities warn that attackers are increasingly leveraging generative AI and deepfake technology to bypass security measures and impersonate trusted contacts.
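
As a quick sanity check on these figures (a minimal sketch, using the rounded totals quoted above rather than the FBI's exact numbers), dividing USD 262 million across roughly 5,100 victim reports does imply an average loss just over USD 51,000:

```python
# Sanity check on the account-takeover figures quoted above.
# Values are the rounded totals from this summary, not exact FBI data.
total_losses_usd = 262_000_000   # reported 2025 ATO losses (approximate)
victim_reports = 5_100           # reported number of victim complaints (approximate)

average_loss = total_losses_usd / victim_reports
print(f"Implied average loss per case: USD {average_loss:,.0f}")
# Prints: Implied average loss per case: USD 51,373
```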

Read More

Deepfakes and Scams Drive USD 4.6 Billion in Global Losses

By late 2025, artificial intelligence has shifted from a background tool to a central driver of financial crime. With global crypto scam losses reaching USD 4.6 billion and deepfakes accounting for 40% of high-value fraud, this report analyzes the mechanics behind artist "proof sketch" theft, celebrity deepfakes, and "digital arrest" schemes, while outlining practical defenses for creative professionals and investors.

Read More

Cloudflare Outage: 6-Hour Failure Disrupts ChatGPT, Claude and Millions of Websites Worldwide

Cloudflare suffered a major outage on 18 November 2025, triggering global 5xx errors and disrupting AI platforms such as ChatGPT and Claude, along with many mainstream websites. The incident, caused by a configuration error in Cloudflare’s bot-management system, lasted nearly six hours before full recovery.

Read More

ChatGPT Atlas Launch Triggers Security Concerns After Prompt Injection Flaws Emerge Within 7 Days

OpenAI’s ChatGPT Atlas browser has drawn early criticism after researchers identified prompt injection vulnerabilities just seven days after launch. While the AI-powered browser promises faster, more intuitive web navigation, experts caution that its agentic features may introduce new security and privacy challenges for everyday users.

Read More

AI Firewalls Gain Momentum as Breach Costs Hit USD 4–5M and Zero-Day Risks Stay Low

AI-driven firewalls are becoming more widely used as organisations seek faster threat detection and stronger network resilience. With global breach costs averaging USD 4–5 million and most incidents linked to stolen credentials or known vulnerabilities, companies are adopting AI tools to manage growing data volumes and evolving attack techniques. This report examines the technology’s capabilities, business impact and emerging trends.

Read More

OpenAI’s Safety Router Sparks Debate as 1M Weekly Chats Trigger Emotional Distress Flags

OpenAI’s introduction of a safety routing feature in ChatGPT has sparked widespread debate among users, professionals, and digital rights advocates. Supporters view the change as a protective measure for individuals in distress, while critics argue it reduces user control and lacks transparency. The controversy highlights broader tensions in how AI systems balance safety, autonomy, and trust.

Read More

85% of Americans Fear AI Is Making Fraud Harder to Detect, Survey Finds

A new survey of more than 2,000 American adults reveals widespread concern that artificial intelligence is enabling more convincing and harder-to-detect scams. With emotional stress and financial losses rising across age groups, consumers increasingly expect banks to strengthen security measures while maintaining fast and convenient services.

Read More

Solidus AI Tech Launches NOVA AI Browser Tool to Counter US$2 Billion Web3 Hack Losses

Solidus AI Tech has introduced NOVA AI, an AI-powered browser extension that aims to improve security across Web3 platforms. The tool identifies phishing risks, scans smart contracts for vulnerabilities, and monitors multi-chain activities in real time, offering an added layer of protection for crypto users and developers.

Read More

Australia Issues First Sanction Over AI-Generated False Legal Citations

An Australian solicitor has faced professional sanctions for relying on false AI-generated legal citations in a family law case. The Victorian Legal Services Board varied his practising certificate, restricting him to supervised work. The decision sets a precedent for regulating AI use in the legal profession and highlights the risks of unverified reliance on emerging technology.

Read More

Anthropic Report Highlights AI Misuse in Cyber Extortion, Fraud and Ransomware

Anthropic released its August 2025 Threat Intelligence Report, documenting cases where threat actors exploited its Claude AI model in cyber extortion, fraudulent remote employment schemes, ransomware development, and phishing attempts. The findings illustrate how AI lowers technical barriers for malicious operations, while also emphasizing the company’s detection measures and collaboration with authorities.

Read More

Kite AI Details Security Vulnerabilities in 'Agentic Internet'

Kite AI has released an analysis of security vulnerabilities in the agentic internet, where autonomous AI agents operate with memory and identity. The company identifies risks such as memory tampering, identity spoofing, and data poisoning, while proposing cryptographic and blockchain-based defenses. These insights come alongside new funding, product integrations, and forecasts of strong market growth.

Read More

OpenAI Tightens Security Measures Amid Espionage and DeepSeek Allegations

OpenAI has stepped up its internal security protocols to safeguard sensitive AI technology, introducing biometric fingerprint access, air-gapped systems, and restricted employee access to critical algorithms. The changes come as U.S. officials raise concerns over foreign espionage and as Chinese startup DeepSeek faces allegations of intellectual property misuse, highlighting intensifying competition in the global AI sector.

Read More

European Parliament Study Advocates Strict Liability for High-Risk AI Systems

A new study commissioned by the European Parliament calls for a dedicated strict liability regime for high-risk AI systems. The report argues that existing EU rules, including the revised Product Liability Directive, remain insufficient to address AI's unique risks. It warns that, without harmonized liability rules, national divergences could undermine accountability, innovation, and public trust.

Read More