An Amazon Web Services disruption on Oct. 20 knocked out balance displays in MetaMask and other Ethereum wallets and slowed Base network operations, exposing how cloud infrastructure dependencies ripple through decentralized systems when a single provider fails.

AWS reported a fault in its US-EAST-1 region starting at 03:11 ET, with DNS and EC2 load-balancer health monitoring failures cascading into DynamoDB and other services.

Amazon declared full mitigation by 06:35 ET and complete restoration by evening, though backlog clearing extended into Oct. 21.

Coinbase posted an active incident, noting an “AWS outage impacting multiple apps and services,” while users reported that MetaMask balances were displaying zero and that Base network transactions were experiencing delays.

The mechanical link runs through Infura, MetaMask’s default RPC provider. MetaMask documentation directs users to Infura’s status page during outages because the wallet routes most read and write operations through Infura endpoints by default.

When Infura’s cloud infrastructure wobbles, balance displays and transaction calls can misreport even though funds remain secure on-chain.
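
To make the failure mode concrete, the sketch below (not MetaMask’s actual code; the endpoint URLs are placeholders) reads an address balance over standard Ethereum JSON-RPC and falls back to a second endpoint instead of rendering a misleading zero when the first call fails.

```typescript
// Minimal sketch: read an ETH balance via JSON-RPC (eth_getBalance), falling back
// to a second endpoint rather than showing "0" when the first one errors out.
// Both URLs are placeholders/assumptions, not a statement of how MetaMask routes calls.
const RPC_ENDPOINTS = [
  "https://mainnet.infura.io/v3/<YOUR_PROJECT_ID>", // assumed primary endpoint
  "https://rpc.fallback-provider.example",          // assumed secondary endpoint
];

async function getBalanceWei(address: string): Promise<bigint> {
  let lastError: unknown;
  for (const url of RPC_ENDPOINTS) {
    try {
      const res = await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          jsonrpc: "2.0",
          id: 1,
          method: "eth_getBalance",
          params: [address, "latest"],
        }),
      });
      const json = await res.json();
      if (json.error) throw new Error(json.error.message);
      return BigInt(json.result); // hex string -> wei
    } catch (err) {
      lastError = err; // try the next endpoint instead of reporting zero
    }
  }
  // Surface the outage explicitly rather than rendering a misleading zero balance.
  throw new Error(`All RPC endpoints failed: ${lastError}`);
}

getBalanceWei("0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045")
  .then((wei) => console.log(`Balance: ${Number(wei) / 1e18} ETH`))
  .catch(console.error);
```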

The disruption affected Ethereum and layer-2 networks that rely on Infura’s RPC infrastructure, creating UI failures that mimicked on-chain problems despite consensus mechanisms continuing to function.

Base chain metrics from Oct. 21 show $17.19 billion in total value locked, approximately 11 million transactions per 24 hours, 842,000 active addresses daily, and $1.37 billion in DEX volume over the prior day.

Short outages of six hours or less typically reduce DEX volume by 5% to 12% and transaction counts by 3% to 8%, with TVL remaining stable because the issues are cosmetic rather than systemic.

Extended disruptions lasting six to 24 hours can result in a 10% to 25% decrease in DEX volume, an 8% to 20% decrease in transactions, and a 0.5% to 1.5% decrease in bridged TVL, as delayed bridging operations and risk-off rotations to Layer 1 take hold.

However, transaction counts and DEX volumes remained broadly steady between Oct. 20 and Oct. 21. DEX volumes were $1.36 billion and $1.48 billion, respectively, while transactions totaled 10.9 million and 10.74 million.
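
As a rough check against the bands above, the day-over-day moves implied by those figures can be computed directly (a back-of-the-envelope sketch using only the numbers quoted in this article):

```typescript
// Day-over-day change using the Oct. 20 vs. Oct. 21 figures quoted above.
const pctChange = (before: number, after: number) =>
  ((after - before) / before) * 100;

console.log(pctChange(1.36e9, 1.48e9).toFixed(1));  // DEX volume: ≈ +8.8% (it rose)
console.log(pctChange(10.9e6, 10.74e6).toFixed(1)); // Transactions: ≈ -1.5%
// The roughly 1.5% transaction dip is below the 3% to 8% reduction typical of short outages.
```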

Base daily transactions dropped 8% from 11.2 million to 10.3 million during the Oct. 20-21 AWS outage before recovering to 11 million by Oct. 23.

Base experienced a separate incident on Oct. 10 involving safe head delays from high transaction volume, which the team resolved quickly.

That episode demonstrated how layer-2 networks can hit finality and latency constraints during demand spikes independent of cloud infrastructure issues.
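
For operators who want to watch this directly, OP-stack rollup nodes expose an optimism_syncStatus RPC method that reports the unsafe, safe, and finalized L2 heads. The sketch below measures the unsafe-to-safe head gap that a “safe head delay” would widen; the node URL is a placeholder, and the response field names reflect op-node’s format as commonly reported and may vary by version.

```typescript
// Sketch: measure the gap between the unsafe and safe L2 heads on an OP-stack
// rollup node. The URL is an assumed local op-node endpoint; field names
// (unsafe_l2, safe_l2) may differ across op-node versions.
const OP_NODE_RPC = "http://localhost:9545"; // placeholder op-node RPC address

async function safeHeadLag(): Promise<number> {
  const res = await fetch(OP_NODE_RPC, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "optimism_syncStatus",
      params: [],
    }),
  });
  const { result } = await res.json();
  // A persistently growing gap is the "safe head delay" symptom Base reported.
  return Number(result.unsafe_l2.number) - Number(result.safe_l2.number);
}

safeHeadLag().then((lag) =>
  console.log(`Unsafe head is ${lag} blocks ahead of the safe head`)
);
```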

Stacking those demand-side pressures with external infrastructure failures compounds the risk profile for networks running on centralized cloud providers.

| Date & time (UTC) | Service | Update | Symptom | Resolved? |
| --- | --- | --- | --- | --- |
| Oct 20, 07:11 | AWS (us-east-1) | Outage identified; internal DNS and EC2 load-balancer health-monitor fault | Global API/connectivity errors across major apps | “All 142 services restored” by 22:53; some backlogs lingered into Oct 21 |
| Oct 20, 07:28 → Oct 21, 00:57 | Coinbase status | Incident opened → resolved | Users unable to log in, trade, or transfer; “funds are safe” messaging | Recovered; monitoring through the evening of Oct 20 (PDT) |
| Oct 20, 19:46 | Decrypt tracker | MetaMask balances showing zero; Base/OpenSea struggling as AWS issues persist; Infura implicated | Wallet UI misreads and RPC errors across ETH & L2s | Ongoing through the afternoon; recovery staggered by provider queues |
| Oct 10, 21:40 (context) | Base status | “Safe head delay from high tx volume” (unrelated to AWS) | Finality/latency lag (“safe head” behind) | Resolved same day; shows L2 latency limits independent of cloud events |

Cloud concentration surfaces as a systemic weakness

The AWS event refreshes longstanding concerns about cloud provider concentration in crypto infrastructure.

Prior AWS incidents in 2020, 2021, and 2023 revealed complex interdependencies across DNS, Kinesis, Lambda, and DynamoDB services that propagate to wallet RPC endpoints and layer-2 sequencers hosted in the cloud.

MetaMask’s default routing through Infura means a cloud hiccup can appear chain-wide to end users, despite on-chain consensus operating normally.

Solana’s five-hour network halt in 2024, caused by a software bug, demonstrated user tolerance for brief downtime when recovery is executed cleanly and communication remains transparent.

Optimism and Base have previously logged unsafe and safe head stalls on their OP-stack architecture, issues that teams can resolve through protocol improvements.

The AWS disruption differs in that it exposes infrastructure dependencies outside the control of blockchain protocols themselves.

Infrastructure teams will likely accelerate multi-cloud failover plans and expand RPC endpoint diversity following this incident.

Wallets may prompt users to configure custom RPCs rather than relying on a single default provider.
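
In practice, that endpoint diversity can be as simple as probing several RPC URLs with eth_blockNumber and routing to whichever is responsive and closest to the chain tip. The sketch below uses placeholder endpoint URLs and is illustrative, not a provider recommendation.

```typescript
// Sketch: rank candidate RPC endpoints by block height and responsiveness.
// Endpoint URLs are placeholders; swap in whichever providers you actually use.
const CANDIDATES = [
  "https://mainnet.infura.io/v3/<YOUR_PROJECT_ID>",
  "https://rpc.provider-two.example",
  "https://rpc.provider-three.example",
];

interface Probe { url: string; blockNumber: number; latencyMs: number }

async function probe(url: string): Promise<Probe | null> {
  const start = Date.now();
  try {
    const res = await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "eth_blockNumber", params: [] }),
      signal: AbortSignal.timeout(3000), // treat slow endpoints as unhealthy
    });
    const json = await res.json();
    return { url, blockNumber: parseInt(json.result, 16), latencyMs: Date.now() - start };
  } catch {
    return null; // endpoint down or timed out
  }
}

async function pickHealthiest(): Promise<string> {
  const probes = (await Promise.all(CANDIDATES.map(probe)))
    .filter((p): p is Probe => p !== null);
  if (probes.length === 0) throw new Error("No healthy RPC endpoints");
  // Prefer the highest block (least stale), then the lowest latency.
  probes.sort((a, b) => b.blockNumber - a.blockNumber || a.latencyMs - b.latencyMs);
  return probes[0].url;
}

pickHealthiest()
  .then((url) => console.log(`Routing requests to ${url}`))
  .catch(console.error);
```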

Layer-2 teams typically publish post-mortems and service-level objective revisions within one to four weeks of major incidents, potentially elevating client diversity and multi-region deployment priorities in upcoming roadmaps.

What to watch

AWS will release a post-event summary detailing root causes and remediation steps for the US-EAST-1 disruption.

Base and Optimism teams should publish incident post-mortems addressing any sequencer or RPC impact specific to OP-stack chains.

RPC providers, including Infura, face pressure to commit publicly to multi-cloud architectures and geographic redundancy that can withstand single-provider failures.

Centralized exchanges that posted incidents during the AWS outage, including Coinbase, may experience spread widening and volume shifts to decentralized exchanges on less-affected chains during future cloud disruptions.

Monitoring exchange status pages and Downdetector curves during infrastructure events provides real-time signals for how centralized and decentralized trading venues diverge under stress.
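
One lightweight way to do that: many exchange status pages (Coinbase’s appears to be one) run on Atlassian Statuspage, which publishes a JSON summary at /api/v2/status.json. The polling sketch below assumes that format and URL, and the interval is arbitrary.

```typescript
// Sketch: poll a Statuspage-style endpoint during an infrastructure event.
// The URL is an example assumption; any Statuspage-backed site exposes the
// same /api/v2/status.json summary with an indicator and description.
const STATUS_URL = "https://status.coinbase.com/api/v2/status.json";

async function checkStatus(): Promise<void> {
  const res = await fetch(STATUS_URL);
  const { status } = await res.json();
  // indicator is one of "none", "minor", "major", "critical" on Statuspage.
  console.log(`${new Date().toISOString()} ${status.indicator}: ${status.description}`);
}

// Poll every 60 seconds (the interval is an arbitrary choice for the sketch).
checkStatus();
setInterval(checkStatus, 60_000);
```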

The event confirms that blockchain’s decentralized consensus cannot fully insulate user experience from centralized infrastructure chokepoints.

The RPC layer concentration remains a practical weak point, where cloud provider failures translate into wallet display errors and transaction delays that undermine confidence in the reliability of Ethereum and layer-2 ecosystems.
