Cable cuts, storms, and DNS: a look at Internet disruptions in Q4 2025

2026-01-26

11 min read

In 2025, we observed over 180 Internet disruptions spurred by a variety of causes – some were brief and partial, while others were complete outages lasting for days. In the fourth quarter, we tracked only a single government-directed Internet shutdown, but multiple cable cuts wreaked havoc on connectivity in several countries. Power outages and extreme weather disrupted Internet services in multiple places, and the ongoing conflict in Ukraine impacted connectivity there as well. As always, a number of the disruptions we observed were due to technical problems – with some acknowledged by the relevant providers, while others had unknown causes. In addition, incidents at several hyperscaler cloud platforms and Cloudflare impacted the availability of websites and applications. 

This post is intended as a summary overview of observed and confirmed disruptions and is not an exhaustive or complete list of issues that have occurred during the quarter. These anomalies are detected through significant deviations from expected traffic patterns observed across our network. Check out the Cloudflare Radar Outage Center for a full list of verified anomalies and confirmed outages. 
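As a rough illustration of how such deviations can be surfaced (a simplified sketch only, not the detection logic Radar actually uses), a week-over-week comparison of traffic levels can flag large drops:

```python
from typing import List

def detect_traffic_drops(traffic: List[float], points_per_week: int,
                         drop_threshold: float = 0.5) -> List[int]:
    """Flag indices where traffic falls below a fraction of the level seen
    at the same point one week earlier. This is a naive baseline comparison,
    not Cloudflare's production anomaly-detection logic."""
    anomalies = []
    for i in range(points_per_week, len(traffic)):
        baseline = traffic[i - points_per_week]
        if baseline > 0 and traffic[i] < drop_threshold * baseline:
            anomalies.append(i)
    return anomalies

# Hourly request counts: one steady week, then a sharp two-hour drop.
hourly = [1000.0] * (24 * 7) + [980.0, 400.0, 120.0, 950.0]
print(detect_traffic_drops(hourly, points_per_week=24 * 7))  # -> [169, 170]
```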

Government-directed

Tanzania

The Internet was shut down in Tanzania on October 29 as violent protests took place during the country’s presidential election. Traffic initially fell around 12:30 local time (09:30 UTC), dropping more than 90% lower than the previous week. The disruption lasted approximately 26 hours, with traffic beginning to return around 14:30 local time (11:30 UTC) on October 30. However, that restoration proved to be quite brief, with a significant decrease in traffic occurring around 16:15 local time (13:15 UTC), approximately two hours after it returned. This second near-complete outage lasted until November 3, when traffic aggressively returned after 17:00 local time (14:00 UTC). Nominal drops in announced IPv4 and IPv6 address space were also observed during the shutdown, but there was never a complete loss of announcements, which would have signified a total disconnection of the country from the Internet. (Autonomous systems announce IP address space to other Internet providers, letting them know what blocks of IP addresses they are responsible for.)
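Announced address space can also be checked independently of Radar. As a minimal sketch, the public RIPEstat announced-prefixes API (a third-party service, not Cloudflare Radar) can be used to count the prefixes an ASN currently originates; the response fields shown follow RIPEstat's documentation, and the ASN below is a documentation-reserved placeholder:

```python
import requests

# RIPEstat is a public, third-party API. The response shape below follows
# its documentation and may change over time.
ASN = "AS64496"  # documentation-reserved placeholder; substitute the ASN of interest

resp = requests.get(
    "https://stat.ripe.net/data/announced-prefixes/data.json",
    params={"resource": ASN},
    timeout=30,
)
resp.raise_for_status()
prefixes = [entry["prefix"] for entry in resp.json()["data"]["prefixes"]]
ipv4 = [p for p in prefixes if ":" not in p]
ipv6 = [p for p in prefixes if ":" in p]
print(f"{ASN}: {len(ipv4)} IPv4 and {len(ipv6)} IPv6 prefixes currently announced")
```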

Tanzania’s president later expressed sympathy for the members of the diplomatic community and foreigners residing in the country regarding the impact of the Internet shutdown. Internet and social media services were also restricted in 2020 ahead of the country’s general elections.

Cable cuts

Digicel Haiti

Digicel Haiti is unfortunately no stranger to Internet disruptions caused by cable cuts, and the network experienced two more such incidents during the fourth quarter. On October 16, traffic from Digicel Haiti (AS27653) began to fall at 14:30 local time (18:30 UTC), reaching near zero at 16:00 local time (20:00 UTC). A translated X post from the company’s Director General noted: “We advise our clientele that @DigicelHT is experiencing 2 cuts on its international fiber optic infrastructure.” Traffic began to recover after 17:00 local time (21:00 UTC), and reached expected levels within the following hour. At 17:33 local time (21:34 UTC), the Director General posted that “the first fiber on the international infrastructure has been repaired” and service had been restored. 

On November 25, another translated X post from the provider’s Director General stated that its “international optical fiber infrastructure on National Road 1” had been cut. We observed traffic dropping on Digicel’s network approximately an hour earlier, with a complete outage observed between 02:00 - 08:00 local time (07:00 - 13:00 UTC). A follow-on X post at 08:22 local time (13:22 UTC) stated that all services had been restored.

Cybernet/StormFiber (Pakistan)

At 17:30 local time (12:30 UTC) on October 20, Internet traffic for Cybernet/StormFiber (AS9541) dropped sharply, falling approximately 50% below the level seen at the same time a week prior. At the same time, the network’s announced IPv4 address space dropped by over a third. The cause of these shifts was damage to the PEACE submarine cable, which suffered a cut in the Red Sea near Sudan.

PEACE is one of several submarine cable systems (including IMEWE and SEA-ME-WE-4) that carry international Internet traffic for Pakistani providers. The provider pledged to fully restore service by October 27, but traffic and announced IPv4 address space had recovered to near expected levels by around 02:00 local time on October 21 (21:00 UTC on October 20).

Camtel, MTN Cameroon, Orange Cameroun

Unusual traffic patterns observed across multiple Internet providers in Cameroon on October 23 were reportedly caused by problems on the WACS (West Africa Cable System) submarine cable, which connects countries along the west coast of Africa to Portugal. 

A (translated) published report stated that MTN informed subscribers that “following an incident on the WACS fiber optic cable, Internet service is temporarily disrupted” and Orange Cameroun informed subscribers that “due to an incident on the international access fiber, Internet service is disrupted.” An X post from Camtel stated “Cameroon Telecommunications (CAMTEL) wishes to inform the public that a technical incident involving WACS cable equipment in Batoke (LIMBE) occurred in the early hours of 23 October 2025, causing Internet connectivity disruptions throughout the country.” 

Traffic across the impacted providers initially fell at around 05:00 local time (04:00 UTC) before recovering to expected levels around 22:00 local time (21:00 UTC). Traffic across these networks was quite volatile during the day, dropping 90-99% at times. It isn’t clear what caused the visible spikiness in the traffic pattern; it may reflect attempts to shift Internet traffic to other submarine cable systems that connect to Cameroon. Announced IP address space from MTN Cameroon and Orange Cameroun dropped during this period as well, although Camtel’s announced IP address space did not change.

Connectivity in the Central African Republic and Republic of Congo was also reportedly impacted by the WACS issues.

Claro Dominicana

On December 9, we saw traffic from Claro Dominicana (AS6400), an Internet provider in the Dominican Republic, drop sharply around 12:15 local time (16:15 UTC). Traffic levels fell again around 14:15 local time (18:15 UTC), bottoming out 77% lower than the previous week before quickly returning to expected levels. The connectivity disruption was likely caused by two fiber optic outages, as an X post from the provider during the outage noted that they were “causing intermittency and slowness in some services.” A subsequent post on X from Claro stated that technicians had restored Internet services nationwide by repairing the severed fiber optic cables.

Power outages

Dominican Republic

According to a (translated) X post from the Empresa de Transmisión Eléctrica Dominicana (ETED), a transmission line outage caused an interruption in electrical service in the Dominican Republic on November 11. This power outage impacted Internet traffic from the country, resulting in a nearly 50% drop in traffic compared to the prior week, starting at 13:15 local time (17:15 UTC). Traffic levels remained lower until approximately 02:00 local time (06:00 UTC) on November 12, with a later (translated) X post from ETED noting “At 2:20 a.m. we have completed the recovery of the national electrical system, supplying 96% of the demand…”

A subsequent technical report found that “the blackout began at the 138 kV San Pedro de Macorís I substation, where a live line was manually disconnected, triggering a high-intensity short circuit. Protection systems responded immediately, but the fault caused several nearby lines to disconnect, separating 575 MW of generation in the eastern region from the rest of the grid. The imbalance caused major power plants to trip automatically as part of their built-in safety mechanisms.”

Kenya

On December 9, a major power outage impacted multiple regions across Kenya. Kenya Power explained that the outage “was triggered by an incident on the regional Kenya-Uganda interconnected power network, which caused a disturbance on the Kenyan side of the system” and claimed that “[p]ower was restored to most of the affected areas within approximately 30 minutes.” However, impacts to Internet connectivity lasted for nearly four hours, between 19:15 - 23:00 local time (16:15 - 20:00 UTC). The power outage caused traffic to drop as much as 18% at a national level, with the traffic shifts most visible in Nakuru County and Kiambu County.

Military action

Odesa, Ukraine

Russian drone strikes on the Odesa region in Ukraine on December 12 damaged warehouses and energy infrastructure, with the latter causing power outages in parts of the region. Those outages disrupted Internet connectivity, resulting in traffic dropping by as much as 57% as compared to the prior week. After the initial drop at midnight on December 13 (22:00 UTC on December 12), traffic gradually recovered over the following several days, returning to expected levels around 14:30 local time (12:30 UTC) on December 16.

Weather

Jamaica

Hurricane Melissa made landfall on Jamaica on October 28 and left a trail of damage and destruction in its path. Associated power outages and infrastructure damage impacted Internet connectivity, causing traffic to initially drop by approximately half, starting around 06:15 local time (11:15 UTC), ultimately reaching as much as 70% lower than the previous week. Internet traffic from Jamaica remained well below pre-hurricane levels for several days, and ultimately started to make greater progress towards expected levels during the morning of November 4. It can often take weeks or months for Internet traffic from a country to return to “normal” levels following storms that cause massive and widespread damage – while power may be largely restored within several days, damage to physical infrastructure takes significantly longer to address.

Sri Lanka & Indonesia

On November 26, Cyclone Senyar caused catastrophic floods and landslides in Sri Lanka and Indonesia, killing over 1,000 people and damaging telecommunications and power infrastructure across these countries. The infrastructure damage resulted in disruptions to Internet connectivity, and resultant lower traffic levels, across multiple regions.

In Sri Lanka, regions outside the main Western Province were the most affected, and several provinces saw traffic drop between 80% and 95% as compared to the prior week, including North Western, Southern, Uva, Eastern, Northern, North Central, and Sabaragamuwa.

In Indonesia, Aceh and other provinces on Sumatra saw the biggest Internet disruptions. In Aceh, traffic initially dropped over 75% as compared to the previous week. Elsewhere on Sumatra, North Sumatra was the most affected, with an initial drop of around 30% as compared to the previous week, before starting to recover more actively the following week.

Known or unspecified technical problems

Smartfren (Indonesia)

On October 3, subscribers to Indonesian Internet provider Smartfren (AS18004) experienced a service disruption. The issues were acknowledged by the provider in an X post, which stated (in translation), “Currently, telephone, SMS and data services are experiencing problems in several areas.” Traffic from the provider fell as much as 84%, starting around 09:00 local time (02:00 UTC). The disruption lasted for approximately eight hours, as traffic returned to expected levels around 17:00 local time (10:00 UTC). Smartfren did not provide any additional information on what caused the service problems.

Vodafone UK

Major British Internet provider Vodafone UK (AS5378 & AS25135) experienced a brief service outage on October 23. At 15:00 local time (14:00 UTC), traffic on both Vodafone ASNs dropped to zero. Announced IPv4 address space from AS5378 fell by 75%, while announced IPv4 address space from AS25135 disappeared entirely. Both Internet traffic and address space recovered two hours later, returning to expected levels around 17:00 local time (16:00 UTC). Vodafone did not provide any information on their social media channels about the cause of the outage, and their network status checker page was also unavailable during the outage.

Fastweb (Italy)

According to a published report, a DNS resolution issue disrupted Internet services for customers of Italian provider Fastweb (AS12874) on October 22, causing observed traffic volumes to drop by over 75%. Fastweb acknowledged the issue, which impacted wired Internet customers between 09:30 - 13:00 local time (08:30 - 12:00 UTC).

Although not an Internet outage caused by connectivity failure, the impact of DNS resolution issues on Internet traffic is very similar. When a provider’s DNS resolver is experiencing problems, switching to a service like Cloudflare’s 1.1.1.1 public DNS resolver will often restore connectivity.
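As a simple illustration of that workaround (a sketch assuming the third-party dnspython package; the hostname is a placeholder), one can compare resolution through the system-configured resolver with resolution through 1.1.1.1:

```python
import dns.resolver  # pip install dnspython

def try_resolve(hostname: str, nameservers=None) -> str:
    """Resolve a hostname, optionally via explicit nameservers."""
    resolver = dns.resolver.Resolver(configure=(nameservers is None))
    if nameservers:
        resolver.nameservers = nameservers
    try:
        answer = resolver.resolve(hostname, "A", lifetime=5)
        return ", ".join(rr.address for rr in answer)
    except Exception as exc:
        return f"lookup failed: {exc}"

host = "www.example.com"  # placeholder hostname
print("system resolver:", try_resolve(host))               # ISP-assigned resolver
print("1.1.1.1        :", try_resolve(host, ["1.1.1.1"]))  # Cloudflare public resolver
```

If the first lookup fails while the second succeeds, the problem is likely with the provider’s resolver rather than with connectivity itself.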

SBIN, MTN Benin, Etisalat Benin

On December 7, a concurrent drop in traffic was observed across SBIN (AS28683), MTN Benin (AS37424), and Etisalat Benin (AS37136). Between 18:30 - 19:30 local time (17:30 - 18:30 UTC), traffic dropped as much as 80% as compared to the prior week at a country level, nearly 100% at Etisalat and MTN, and over 80% at SBIN.

While an attempted coup had taken place earlier in the day, it is unclear whether the observed Internet disruption was related in any way. From a routing perspective, all three impacted networks share Cogent (AS174) as an upstream provider, so a localized issue at Cogent may have contributed to the brief outage.  

Cellcom Israel

According to a reported announcement from Israeli provider Cellcom (AS1680), on December 18, there was “a malfunction affecting Internet connectivity that is impacting some of our customers.” This malfunction dropped traffic nearly 70% as compared to the prior week, and occurred between 09:30 - 11:00 local time (07:30 - 09:00 UTC). The “malfunction” may have been a DNS failure, according to a published report.

Partner Communications (Israel)

Closing out 2025, on December 30, a major technical failure at Israeli provider Partner Communications (AS12400) disrupted mobile, TV, and Internet services across the country. Internet traffic from Partner fell by two-thirds as compared to the previous week between 14:00 - 15:00 local time (12:00 - 13:00 UTC). During the outage, queries to Cloudflare’s 1.1.1.1 public DNS resolver spiked, suggesting that the problem may have been related to Partner’s DNS infrastructure. However, the provider did not publicly confirm what caused the outage.

Cloud platforms

During the fourth quarter, we launched a new Cloud Observatory page on Radar that tracks availability and performance issues at a region level across hyperscaler cloud platforms, including Amazon Web Services, Microsoft Azure, Google Cloud Platform, and Oracle Cloud Infrastructure.

Amazon Web Services

On October 20, the Amazon Web Services us-east-1 region in Northern Virginia experienced “increased error rates and latencies” that affected multiple services within the region. The issues impacted not only customers with public-facing Web sites and applications that rely on infrastructure within the region, but also Cloudflare customers that have origin resources hosted in us-east-1.

We began to see the impact of the problems around 06:30 UTC, as the share of error (5xx-class) responses began to climb, reaching as high as 17% around 08:00 UTC. The number of failures encountered when attempting to connect to origins in us-east-1 climbed as well, peaking around 12:00 UTC.

The impact could also be clearly seen in key network performance metrics, which remained elevated throughout the incident, returning to normal levels just before the end of the incident, around 23:00 UTC. Both TCP and TLS handshake durations got progressively worse throughout the incident—these metrics measure the amount of time needed for Cloudflare to establish TCP and TLS connections respectively with customer origin servers in us-east-1. In addition, the amount of time elapsed before Cloudflare received response headers from the origin increased significantly during the first several hours of the incident, before gradually returning to expected levels.
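A single-sample approximation of these handshake metrics can be taken from any client. The following is a rough sketch only: the hostname is a placeholder, and Radar’s published metrics are aggregated across many connections rather than measured this way.

```python
import socket
import ssl
import time

def measure_handshakes(host: str, port: int = 443):
    """Return (TCP connect time, TLS handshake time) in milliseconds for a
    single connection attempt to the given host."""
    ctx = ssl.create_default_context()

    t0 = time.perf_counter()
    sock = socket.create_connection((host, port), timeout=10)
    t1 = time.perf_counter()                                 # TCP three-way handshake complete

    tls_sock = ctx.wrap_socket(sock, server_hostname=host)   # performs the TLS handshake
    t2 = time.perf_counter()
    tls_sock.close()

    return (t1 - t0) * 1000.0, (t2 - t1) * 1000.0

tcp_ms, tls_ms = measure_handshakes("example.com")  # placeholder origin hostname
print(f"TCP connect: {tcp_ms:.1f} ms, TLS handshake: {tls_ms:.1f} ms")
```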

Microsoft Azure

On October 29, Microsoft Azure experienced an incident impacting Azure Front Door, its content delivery network service. According to Azure's report on the incident, “A specific sequence of customer configuration changes, performed across two different control plane build versions, resulted in incompatible customer configuration metadata being generated. These customer configuration changes themselves were valid and non-malicious – however they produced metadata that, when deployed to edge site servers, exposed a latent bug in the data plane. This incompatibility triggered a crash during asynchronous processing within the data plane service.”

The incident report marked the start time at 15:41 UTC, although we observed the volume of failed connection attempts to Azure-hosted origins begin to climb about 45 minutes prior. The TCP and TLS handshake metrics also became more volatile during the incident period, with TCP handshakes taking over 50% longer at times, and TLS handshakes taking nearly 200% longer at peak. The impacted metrics began to improve after 20:00 UTC, and according to Microsoft, the incident ended at 00:05 UTC on October 30.

Cloudflare

In addition to the outages discussed above, Cloudflare also experienced two disruptions during the fourth quarter. While these were not Internet outages in the classic sense, they did prevent users from accessing Web sites and applications delivered and protected by Cloudflare when they occurred.

The first incident took place on November 18, and was caused by a software failure triggered by a change to one of our database systems' permissions, which caused the database to output multiple entries into a “feature file” used by our Bot Management system. Additional details, including a root cause analysis and timeline, can be found in the associated blog post.

The second incident occurred on December 5, and impacted a subset of customers, accounting for approximately 28% of all HTTP traffic served by Cloudflare. It was triggered by changes being made to our request body parsing logic while attempting to detect and mitigate a newly disclosed industry-wide React Server Components vulnerability. A post-mortem blog post contains additional details, including a root cause analysis and timeline.

For more information about the work underway at Cloudflare to prevent outages like these from happening again, check out our blog post detailing “Code Orange: Fail Small.”

Conclusion

The disruptions observed in the fourth quarter underscore the importance of real-time data in maintaining global connectivity. Whether it’s a government-ordered shutdown or a minor technical issue, transparency allows the technical community to respond faster and more effectively. We will continue to track these shifts on Cloudflare Radar, providing the insights needed to navigate the complexities of modern networking. We share our observations on the Cloudflare Radar Outage Center, via social media, and in posts on blog.cloudflare.com. Follow us on social media at @CloudflareRadar (X), noc.social/@cloudflareradar (Mastodon), and radar.cloudflare.com (Bluesky), or contact us via email.

As a reminder, while these blog posts feature graphs from Radar and the Radar Data Explorer, the underlying data is available from our API. You can use the API to retrieve data to do your own local monitoring or analysis, or you can use the Radar MCP server to incorporate Radar data into your AI tools.
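For example, a minimal sketch of retrieving recent confirmed outages from the API might look like the following. The endpoint path, parameters, and response fields are taken from the public Radar API documentation as of this writing; verify them against the current API reference, and supply a Radar-scoped API token.

```python
import os
import requests

API_TOKEN = os.environ["CLOUDFLARE_API_TOKEN"]  # a Radar-scoped API token

resp = requests.get(
    "https://api.cloudflare.com/client/v4/radar/annotations/outages",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    params={"dateRange": "28d", "format": "json", "limit": 25},
    timeout=30,
)
resp.raise_for_status()

for outage in resp.json()["result"]["annotations"]:
    # Field names reflect the documented response shape at the time of writing;
    # adjust to whatever the current API reference specifies.
    print(outage.get("startDate"),
          outage.get("locationsDetails"),
          outage.get("outage", {}).get("outageCause"))
```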

Cloudflare's connectivity cloud protects entire corporate networks, helps customers build Internet-scale applications efficiently, accelerates any website or Internet application, wards off DDoS attacks, keeps hackers at bay, and can help you on your journey to Zero Trust.

Visit 1.1.1.1 from any device to get started with our free app that makes your Internet faster and safer.

To learn more about our mission to help build a better Internet, start here. If you're looking for a new career direction, check out our open positions.

Radar, Internet Shutdown, Internet Traffic, Outage, Internet Trends, AWS, Microsoft Azure, Consumer Services
