Cloudflare, one of the world’s largest internet infrastructure providers, faced its most serious outage in more than six years on Tuesday. The disruption lasted almost six hours and caused thousands of websites, apps, and online services to become slow, unstable, or completely unreachable. The incident was not caused by a cyberattack but by an internal technical fault that spread across Cloudflare’s massive Global Network.
Cloudflare powers a large part of the modern internet. Its Global Network spans more than 120 countries and interconnects over 13,000 networks, including major ISPs, cloud providers, enterprises, and government organizations. The company provides content delivery (CDN), website security, DDoS protection, and performance optimization. Any disruption in Cloudflare's systems can ripple across the web, and this outage was a clear example of that.
According to Cloudflare CEO Matthew Prince, the problem began at 11:28 UTC when engineers pushed a routine update to database access permissions. While the change appeared simple, it created an unexpected side effect inside Cloudflare’s Bot Management system.
The permissions update caused a database query to produce duplicate metadata entries. These duplicates were written into a configuration file known internally as a “feature file,” which the Bot Management service uses to analyze and block malicious traffic.
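Conceptually, the failure resembles the following minimal Rust sketch (the struct, database names, and query logic are illustrative, not Cloudflare's internals): a metadata lookup that filters only on the table name returns one row per database that exposes that table, so a permissions change that makes an additional database visible silently duplicates the result set.

```rust
// Illustrative only: not Cloudflare's schema or query. The post-mortem
// describes a metadata query that filtered on the table name but not the
// database, so when a permissions change exposed an additional database,
// every column came back twice.
#[derive(Debug)]
struct ColumnMeta {
    database: String,
    table: String,
    column: String,
}

fn main() {
    // After the permissions change, the same logical column is visible
    // through two databases (names here are hypothetical).
    let rows = vec![
        ColumnMeta { database: "default".into(), table: "features".into(), column: "bot_score".into() },
        ColumnMeta { database: "extra".into(), table: "features".into(), column: "bot_score".into() },
    ];

    // Filtering only on the table name returns one row per database,
    // doubling the number of entries written into the feature file.
    let matched: Vec<&ColumnMeta> = rows.iter().filter(|r| r.table == "features").collect();
    println!("{} entries for one logical column", matched.len()); // prints: 2
}
```

The usual fix for this class of bug is to filter on the database as well as the table, or to deduplicate the rows before writing them out.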
Normally, this file contains around 60 features. But because of the faulty output, the file grew to more than 200 features, crossing a hard limit built into the system. This oversized file caused the software to crash, triggering a cascading failure that affected traffic routing across Cloudflare’s network.
Cloudflare explained that the feature file exceeded its 200-feature maximum, a limit that exists to prevent uncontrolled memory usage. When the limit was breached, the Bot Management module, which is written in Rust, panicked. That panic crashed the proxy responsible for handling and routing internet traffic.
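The dynamic is easy to reproduce in a few lines of Rust. This is a hedged sketch, not Cloudflare's code (the constant, loader, and error handling are stand-ins), but it shows how unwrapping an error from an over-limit file converts a recoverable condition into a process-killing panic:

```rust
// A hedged sketch, not Cloudflare's code: the constant, loader, and
// error handling are stand-ins. The point is how unwrapping an error
// turns an over-limit configuration file into a hard crash.
const MAX_FEATURES: usize = 200; // hard cap to bound preallocated memory

fn load_features(entries: &[String]) -> Result<Vec<String>, String> {
    if entries.len() > MAX_FEATURES {
        return Err(format!(
            "feature file too large: {} entries > limit of {}",
            entries.len(),
            MAX_FEATURES
        ));
    }
    Ok(entries.to_vec())
}

fn main() {
    // Duplicated metadata pushed the file well past the limit.
    let oversized: Vec<String> = (0..250).map(|i| format!("feature_{i}")).collect();

    // Calling .unwrap() on the Err converts a recoverable failure into
    // a panic that takes the whole process down with it.
    let _features = load_features(&oversized).unwrap(); // panics here
}
```

Running this prints a panic message and aborts, which is the "panic state" behavior the post-mortem describes.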
As a result, users across the internet began encountering 5xx HTTP errors, which indicate server-side failures. Some areas saw brief restoration while others continued to fail, because the feature file was regenerated every five minutes: depending on whether the cluster node running the query had received the updated permissions, the output was either correct or faulty, so the entire network kept flipping between working and failing states.
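That oscillation can be simulated in a few lines. In this hypothetical sketch, a regeneration job fires every five minutes on some node; nodes that had already received the permissions update emit an oversized file, while the rest emit a correct one:

```rust
// Hypothetical simulation of the flip-flop: a regeneration job runs
// every five minutes on some database node. Nodes that had received the
// permissions update produced an oversized file; the rest produced a
// correct one, so the network oscillated between failing and recovering.
const MAX_FEATURES: usize = 200;

fn generated_feature_count(node_has_new_permissions: bool) -> usize {
    if node_has_new_permissions { 240 } else { 60 } // duplicated vs. normal
}

fn main() {
    // Which node handles each five-minute cycle is effectively arbitrary.
    let cycles = [false, true, false, true, true, false];
    for (i, updated) in cycles.iter().enumerate() {
        let count = generated_feature_count(*updated);
        let status = if count > MAX_FEATURES { "panic -> 5xx errors" } else { "OK" };
        println!("t+{:>2} min: {:>3} features -> {}", i * 5, count, status);
    }
}
```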
Because Cloudflare sits in front of millions of websites, the outage had a broad and noticeable impact. The following services were disrupted:
Core CDN services
Website security tools
Cloudflare Turnstile
Workers KV
Dashboard and admin panel access
Email security
Cloudflare Access (Zero Trust authentication)
For many businesses, this meant their websites loaded slowly or did not load at all. Cloudflare’s security and performance tools also became temporarily inaccessible.
By 14:30 UTC, engineers had identified the root cause and restored core traffic. They rolled the problematic file back to a known good version, which immediately stabilized the system. Full restoration across all products was complete by 17:06 UTC, ending the nearly six-hour disruption.
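One common safeguard against this class of failure, and a plausible reading of what rolling back to a "known good version" means in practice, is to validate a freshly generated file before promoting it and to keep the previous good copy as a fallback. A minimal sketch, with illustrative names only:

```rust
// Illustrative names only: one common safeguard is to validate a freshly
// generated configuration before promoting it, keeping the previous good
// copy as a fallback so a bad generation never reaches production.
const MAX_FEATURES: usize = 200;

struct FeatureFile {
    entries: Vec<String>,
}

fn is_valid(file: &FeatureFile) -> bool {
    !file.entries.is_empty() && file.entries.len() <= MAX_FEATURES
}

fn main() {
    let known_good = FeatureFile { entries: (0..60).map(|i| format!("f{i}")).collect() };
    let candidate = FeatureFile { entries: (0..240).map(|i| format!("f{i}")).collect() };

    // Promote the candidate only if it passes validation; otherwise keep
    // serving from the last known good file.
    let active = if is_valid(&candidate) { &candidate } else { &known_good };
    println!("serving {} features", active.entries.len()); // prints: serving 60 features
}
```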
Matthew Prince addressed customers in a detailed post-mortem, emphasizing that the outage was purely the result of an internal configuration issue — not a cyberattack.
He stated:
“The issue was not caused, directly or indirectly, by a cyber attack or malicious activity of any kind. Instead, it was triggered by a change to one of our database systems’ permissions.”
Prince also apologized for the scale of the disruption:
“We are sorry for the impact to our customers and to the Internet in general. Given Cloudflare’s importance in the Internet ecosystem, any outage of any of our systems is unacceptable.”
This incident is now officially Cloudflare's worst outage since 2019. The company has experienced smaller disruptions over the years, some affecting only the dashboard or individual services, but nothing of this magnitude. This time, a large portion of the core traffic flowing through its network was directly impacted.
Earlier this year, in June, Cloudflare handled another major outage that affected Zero Trust WARP services and authentication systems and coincided with a broader disruption across Google Cloud regions. However, that outage was not as widespread or severe as the one that occurred this week.
The incident highlights how vital Cloudflare has become to global internet operations. A single faulty configuration file triggered a chain reaction that affected millions of users worldwide. Although services were restored quickly, the outage demonstrated how complex, sensitive, and interconnected today’s internet infrastructure truly is.
As Cloudflare continues its post-incident analysis, the company says it will strengthen safeguards to prevent similar failures and improve resilience across its distributed network.