4. Supply chain and dependencies
Sixteen hours after the Shock, recovery has ground to a halt. At first, progress was real: a few internal services responded again, and some systems that had gone dark now answer health checks. The IT team thought the worst was behind them. Then everything stalled.
The nightly database snapshots had completed just in time. On paper, nothing is lost. In practice, those databases were managed services that were never operated by the team. Faced with raw data files, no one knows how to recreate (let alone operate) a database cluster from scratch.
Anything that requires a build is blocked. Most of the source code has been reassembled from the local backups of a few old-school engineers - but the thousands of dependencies it requires are inaccessible.
Worse, the services that are technically back remain unreachable. The company no longer controls its DNS. Domains cannot be updated, traffic cannot be rerouted, certificates cannot be reissued. Some systems may be alive - but they are invisible. The company is completely cut off from the outside world.
Failure mode
Modern businesses are not standalone systems. They are assemblies of services, stitched together through APIs, SDKs, identity layers, and managed platforms. This architecture enables speed and scale, but it also creates hidden, compounding dependencies that become dangerous under widespread disruption.
Most SaaS failures do not start inside the company:
- a single identity provider gating access to all operational tools,
- DNS or CDN outages severing access to otherwise healthy systems,
- online CI/CD platforms becoming unavailable,
- container image registries becoming unreachable (e.g. Docker registries, Kubernetes image registries…),
- build dependencies becoming inaccessible (e.g. open-source repositories, PyPI, npm…),
- critical third-party APIs becoming unavailable (e.g. LLMs, OCR services…),
- logging and monitoring systems failing when they are most needed,
- payment, messaging, or notification providers failing.
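The build-dependency failure mode in particular is easy to underestimate. As a rough sketch (the lockfile excerpt below is illustrative, not from a real project), even a tiny pip-style requirements file names several packages that all become unbuildable the moment the upstream repository is unreachable:

```python
# Sketch: enumerate the external packages a pip-style requirements file
# would pull from PyPI. The file contents below are illustrative.
def external_dependencies(requirements_text):
    deps = []
    for line in requirements_text.splitlines():
        line = line.strip()
        # skip blanks, comments, and local/editable installs
        if not line or line.startswith("#") or line.startswith("-e"):
            continue
        # keep only the package name (strip environment markers,
        # version pins, and extras)
        name = line.split(";")[0]
        for sep in ("==", ">=", "<=", "~=", ">", "<", "["):
            name = name.split(sep)[0]
        deps.append(name.strip())
    return deps

sample = """\
# illustrative lockfile excerpt
requests==2.31.0
numpy>=1.26
-e ./internal-lib
uvicorn[standard]~=0.29
"""
# Every listed package is unreachable if PyPI is down.
print(external_dependencies(sample))  # → ['requests', 'numpy', 'uvicorn']
```

Running the same kind of inventory against a real lockfile (and its transitive dependencies) usually surfaces hundreds or thousands of packages, which is why a local mirror or vendored cache matters.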
Objective
Preparedness in this area has three core objectives:
- Visibility: understand which vendors are critical to operate and recover. Make hidden dependencies visible.
- Control: retain the ability to act even when key vendors fail.
- Recoverability: avoid dependency patterns that make recovery impossible under stress.
The goal is not eliminating vendors. It is preventing any single vendor failure from becoming existential.
Solutions
Map critical vendor dependencies
SaaS tools have become deeply integrated into development workflows: developers code in the cloud, run their version control online, and so on. Because these services work reliably most of the time, teams fail to model what happens when they are unavailable.
Preparedness starts with an explicit dependency map that extends beyond infrastructure. Whereas ISO 27001 and SOC 2 assess vendor dependency to preserve service continuity in the event of an isolated provider failure, preparedness aims to define a recovery path toward a Minimal Survivable Service under conditions of systemic provider unavailability.
Create a simple table: list the vendors you currently use, define whether each one is required for your Minimal Survivable Service, and identify a replacement or a workaround. Then assess the steps required to be ready - whether you take them now or on the spot.
Make sure you go beyond the obvious infrastructure and SaaS tools. Review identity, monitoring, billing, third-party APIs, build dependencies, support tools, analytics, and so on.
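The table can also live in code or config so it can be reviewed and diffed alongside the rest of the system. A minimal sketch - the vendor names, classifications, and workarounds below are purely illustrative examples, not recommendations:

```python
# Sketch of a vendor dependency map. Vendors, MSS classification, and
# workarounds are illustrative, not recommendations.
VENDORS = [
    # (vendor, category, required for MSS?, replacement/workaround)
    ("Auth0",      "identity",   True,  "local break-glass admin accounts"),
    ("Cloudflare", "DNS/CDN",    True,  "secondary DNS provider on standby"),
    ("GitHub",     "source/CI",  True,  "mirrored bare repositories on-premises"),
    ("OpenAI",     "LLM API",    True,  None),
    ("Datadog",    "monitoring", False, None),
]

def unmitigated_mss_risks(vendors):
    """Vendors required for the Minimal Survivable Service
    that have no identified replacement or workaround."""
    return [name for name, _, required, workaround in vendors
            if required and workaround is None]

# Each entry here is a single-vendor failure that blocks recovery.
print(unmitigated_mss_risks(VENDORS))  # → ['OpenAI']
```

Whatever the format, the useful output is the same: the short list of vendors that are both MSS-critical and unmitigated, because each of them is a single point of failure for recovery.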
We have started to list sovereign alternatives to the main SaaS tools here.
- Map critical SaaS and infrastructure dependencies.
- For each critical component required for the MSS, decide: is the lock-in acceptable?
Mandatory XKCD here: