Cross-chain bridges have lost more money to exploits than any other category of DeFi protocol. Ronin, Wormhole, Nomad, Harmony, Multichain, Poly Network: the list is long, the dollar amounts are enormous, and the underlying causes are repetitive.
This is the explainer we give to teams designing or operating bridges, and to teams whose protocols depend on them. Understanding the architecture is the first step to not being on the next list.
The fundamental problem
Blockchains cannot natively read each other's state. Ethereum has no idea what just happened on Polygon; Arbitrum has no idea about Bitcoin. So a "bridge" between two chains is, by necessity, a system that attests that an event occurred on chain A and unlocks corresponding value on chain B.
The attestation is the security model. Every bridge, regardless of marketing, comes down to: who or what attests, and what stops them from lying?
There are four common architectures:
1. Multi-sig of validators
A set of operators (5, 9, or 21 keys, typically) sign messages confirming events on chain A. Chain B trusts the signatures.
This is the simplest design and the most exploited. Ronin ($625M) lost five of nine validator keys to a phishing attack. Multichain ($130M+) was effectively a single signer, contradicting its marketing. The threat model is: steal the threshold of keys, drain the bridge.
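The threshold check at the heart of this design can be sketched in a few lines. This is a hedged, illustrative model, not any specific bridge's code: real bridges verify ECDSA signatures against validator public keys, while this sketch substitutes a keyed hash so it stays self-contained. The names (`VALIDATORS`, `THRESHOLD`, `verify_attestation`) are invented for illustration.

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Stand-in for validator keypairs; a real bridge holds public keys on chain B
# and validators sign with the corresponding private keys.
VALIDATORS = {f"val{i}": f"secret{i}" for i in range(9)}
THRESHOLD = 5  # 5-of-9, the Ronin configuration

def sign(validator: str, event: bytes) -> str:
    # Keyed-hash stand-in for a real signature.
    return h(VALIDATORS[validator].encode() + event)

def verify_attestation(event: bytes, signatures: dict) -> bool:
    """Chain B accepts the event only if >= THRESHOLD distinct validators signed it."""
    valid = {v for v, sig in signatures.items()
             if v in VALIDATORS and sig == sign(v, event)}
    return len(valid) >= THRESHOLD

event = b"lock(ETH, 100, chainA->chainB)"
sigs = {v: sign(v, event) for v in list(VALIDATORS)[:5]}
```

Notice what the check does not do: it says nothing about how the five keys are stored or who operates them. Steal the threshold and `verify_attestation` happily approves anything, which is exactly what happened to Ronin.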
2. Light client of the source chain
Chain B runs a light-client implementation of chain A as a smart contract, verifying chain A's consensus directly. No trusted intermediaries; the bridge inherits chain A's security.
This is the strongest model in principle but is operationally heavy. Light clients need to keep up with chain A's headers, which can be expensive and complex (especially for chains with frequent reorgs or PoS slashing logic). Implementations are subtle and have shipped bugs.
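The core data structure is a contract on chain B that tracks chain A's header chain. A minimal sketch, with the hard part deliberately elided: a real light client must also verify chain A's consensus (PoW difficulty, or PoS sync-committee signatures), which is where the expense and the subtle bugs live. All names here are illustrative.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Header:
    number: int
    parent_hash: str
    state_root: str  # used later to verify Merkle proofs of events on chain A

    def hash(self) -> str:
        return hashlib.sha256(
            f"{self.number}|{self.parent_hash}|{self.state_root}".encode()
        ).hexdigest()

class LightClient:
    """On-chain tracker of chain A's headers; consensus checks omitted."""
    def __init__(self, genesis: Header):
        self.headers = {genesis.hash(): genesis}
        self.head = genesis

    def submit_header(self, header: Header) -> bool:
        # Accept only headers that extend a header we already know.
        # A real implementation would also verify chain A's consensus here.
        if header.parent_hash not in self.headers:
            return False
        self.headers[header.hash()] = header
        if header.number > self.head.number:
            self.head = header
        return True
```

Once the header chain is tracked, an event on chain A is proven with a Merkle proof against a known header's `state_root`, so no party ever attests to anything; the proof either verifies or it does not.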
3. Optimistic with fraud proofs
Chain B accepts attestations from a set of relayers, but anyone can submit a fraud proof during a challenge window (typically days). If unchallenged, the attestation finalises.
Nomad's $190M loss came from this category, but not from the optimistic logic itself: it came from a botched upgrade that allowed any input to pass verification. The architecture is sound; the implementation has to be exact.
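The optimistic lifecycle is a small state machine: pending, then either rejected by a valid fraud proof inside the window, or finalised after it. A hedged sketch, with invented names and an illustrative window length:

```python
import enum

CHALLENGE_WINDOW = 7200  # blocks, roughly a day; illustrative

class Status(enum.Enum):
    PENDING = 1
    FINALISED = 2
    REJECTED = 3

class OptimisticBridge:
    def __init__(self):
        self.attestations = {}  # message -> [submitted_block, Status]

    def submit(self, msg: str, block: int):
        self.attestations[msg] = [block, Status.PENDING]

    def challenge(self, msg: str, block: int, fraud_proof_valid: bool):
        submitted, status = self.attestations[msg]
        # Anyone may challenge, but only inside the window and with a valid proof.
        if (status is Status.PENDING
                and block < submitted + CHALLENGE_WINDOW
                and fraud_proof_valid):
            self.attestations[msg][1] = Status.REJECTED

    def finalise(self, msg: str, block: int) -> bool:
        submitted, status = self.attestations[msg]
        if status is Status.PENDING and block >= submitted + CHALLENGE_WINDOW:
            self.attestations[msg][1] = Status.FINALISED
            return True
        return False
```

The security of the whole scheme rests on the fraud-proof verification inside `challenge` being exact, and on at least one honest watcher actually watching during the window.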
4. Zero-knowledge proofs
Chain B verifies a ZK proof that a specific event occurred on chain A. The cryptography substitutes for trust. This is the newest and most expensive class of bridges; the trust model is excellent but the bug surface is large and the operational maturity is still developing.
Why bridge bugs are catastrophic
A successful bridge holds the locked supply of every asset that has been bridged through it. A small bug anywhere in the trust pipeline drains the lot: not just the next user's transaction, but every user's deposit since the bridge launched.
This is what makes bridges different from other protocols. A reentrancy in a lending protocol drains a position. A reentrancy in a bridge drains the bridge. The economic stakes are not comparable.
The repetition in incident reports
Read bridge post-mortems for a year and the same root causes appear:
Operational compromise of validator keys. Multi-sig validators are run by humans on machines. Phishing, supply-chain compromise, social engineering: the same vectors that hit any organization, applied to keys that hold nine or ten figures. The fix is not "more validators"; it is the discipline of key management and wallet hygiene, applied to the most adversarial environment in software.
Verification logic that admits invalid inputs. Nomad's bug was a single line in an upgrade that initialised the verification root to zero, which made every unproven message (whose root defaulted to zero) pass verification. The bug was subtle, the audit missed it, and once one address noticed and started exploiting, copy-paste exploiters drained the rest within hours.
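A simplified reconstruction of that bug class shows how small the distance is between a safe bridge and a drained one. This is a hedged sketch in the spirit of the incident, not Nomad's actual Solidity; the names are illustrative.

```python
ZERO = "0x" + "00" * 32  # the default value of an unproven message's root

class Replica:
    def __init__(self, committed_root: str):
        # Verification accepts a root iff it appears in this mapping.
        # The bug class: an upgrade passes ZERO as the committed root,
        # so the default root of every unproven message becomes acceptable.
        self.confirmed = {committed_root: 1}

    def acceptable_root(self, root: str) -> bool:
        return self.confirmed.get(root, 0) != 0

    def process(self, message: str, proven_root: str) -> bool:
        # Messages that never went through proving carry the zero root.
        return self.acceptable_root(proven_root)

safe  = Replica(committed_root="0xabc")  # normal initialisation
buggy = Replica(committed_root=ZERO)     # the single bad line
```

The safe replica rejects a forged message; the buggy one processes anything, which is why the exploit was copy-pasteable: attackers only had to replay a successful transaction with their own address substituted in.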
Upgrade paths controlled by a small group. Even when the contract logic is correct, the upgrade path often is not. A bridge whose admin keys can change the contract is only as secure as those admin keys.
Incomplete coverage. A bridge between five chains has more than five attack surfaces: it has the cross-product of the chains plus the relay logic. Audits that cover one chain, not the integration, miss the cracks.
What good bridge security looks like
If you are operating a bridge, or your protocol depends on one, these are the things to verify:
Limit per-epoch flow
A bridge that can move 100% of its TVL out in one block has built no defense in depth. Real bridges cap outflows per epoch, by chain, by asset, sometimes per address. The cap is the difference between a $10M loss and a $500M loss.
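A per-epoch cap is a small piece of state, which makes its absence hard to excuse. A minimal sketch with illustrative numbers and invented names; a production version would track caps per asset and decide whether over-cap withdrawals queue or revert:

```python
class FlowCap:
    """Per-epoch outflow limiter for one asset; numbers are illustrative."""
    def __init__(self, cap_per_epoch: int, epoch_blocks: int):
        self.cap = cap_per_epoch
        self.epoch_blocks = epoch_blocks
        self.spent = {}  # epoch index -> amount already withdrawn

    def try_withdraw(self, amount: int, block: int) -> bool:
        epoch = block // self.epoch_blocks
        if self.spent.get(epoch, 0) + amount > self.cap:
            return False  # over the cap: queue or reject, never pay out
        self.spent[epoch] = self.spent.get(epoch, 0) + amount
        return True
```

With a $10M-per-epoch cap, an attacker who fully compromises the attestation layer still extracts $10M per epoch instead of the whole TVL in one block, and each epoch boundary is a chance for monitoring to catch the drain and pause the bridge.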
Time-lock the upgrade path
Upgrades to bridge contracts should require a multi-day timelock with public visibility. The community needs time to see and react. An immediate-execution upgrade path is where Nomad died.
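The queue-then-execute pattern is what the timelock enforces. A hedged sketch (names and delay are illustrative; on-chain versions such as OpenZeppelin's TimelockController also gate who may queue and execute):

```python
class Timelock:
    DELAY = 172_800  # blocks, roughly multi-day; illustrative

    def __init__(self):
        self.queued = {}  # upgrade id -> earliest execution block

    def queue(self, upgrade: str, block: int):
        # Queuing emits a public record; the community's clock starts here.
        self.queued[upgrade] = block + self.DELAY

    def execute(self, upgrade: str, block: int) -> bool:
        eta = self.queued.get(upgrade)
        if eta is None or block < eta:
            return False  # not queued, or the review window has not elapsed
        del self.queued[upgrade]
        return True
```

The point is not the code; it is that between `queue` and `execute` there is a multi-day window in which every dependent protocol, watcher, and user can read the pending change and exit or object.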
Diverse validator set
If you are running a multi-sig bridge, validators should be operationally independent: different organizations, different infrastructure providers, different geographies, different signing devices. Five validators on the same Kubernetes cluster is one validator.
Continuous monitoring of cross-chain invariants
The total locked on chain A should equal the total minted on chain B, at every block, for every asset. A divergence is the first signal of an attack, sometimes the first signal of a bug, and it should page someone immediately. Wallet Surveillance on the bridge contracts and validator operational addresses is part of how we deliver this.
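The invariant itself is one comparison per asset, which is why there is no excuse for not checking it continuously. A minimal sketch, assuming the monitor can read both chains' balances each block; the function name and shapes are illustrative:

```python
def check_invariant(locked_on_a: dict, minted_on_b: dict) -> list:
    """Return assets whose locked supply on chain A != minted supply on chain B."""
    diverged = []
    for asset in sorted(set(locked_on_a) | set(minted_on_b)):
        if locked_on_a.get(asset, 0) != minted_on_b.get(asset, 0):
            diverged.append(asset)
    return diverged  # any entry here should page the on-call immediately
```

The subtlety in practice is not the comparison but the read: the two chains finalise at different rates, so the monitor has to compare at consistent snapshots (or tolerate a bounded in-flight amount) to avoid false pages.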
Bug bounty proportional to TVL
A $1M bounty on a $1B bridge is not an incentive, it is a number that an attacker laughs at. Serious bridges have bounties that scale with the value at risk: 5-10% of recoverable value, capped reasonably. Immunefi hosts bridge programs with eight-figure caps, and they have paid out.
A documented incident response plan
If the worst happens, who pauses the bridge? Who notifies the chains? Who notifies the dependent protocols? Who runs the on-chain trace? The plan exists, in writing, before the incident. We help teams write theirs as part of the Incident Response engagement.
For users
If you are an end user moving funds across chains, three rules:
- Treat the bridge as a counterparty. You are accepting bridge risk for the duration of the bridged position. Understand the bridge's architecture and accept it deliberately.
- Don't park funds in bridge representations of assets. USDC.e is not USDC; bridged ETH is not ETH. Use the canonical asset on the chain you are operating on, and bridge in only when you intend to use the funds immediately.
- Diversify bridges. If you are moving large amounts repeatedly, spread the flow across routes. A single bridge compromise is bad. Concentrating all your cross-chain exposure on one bridge is worse.
Where this is heading
The space is converging on better architectures. ZK bridges are maturing. Light-client bridges are seeing more deployments. Per-epoch flow caps are becoming standard. Immutable bridge contracts (no upgrade path) are being deployed for high-value lanes.
But the operational layer, the keys, the validators, the monitoring, the response, is still a discipline that has to be practiced. The next bridge incident will not be a novel category; it will be a familiar one, in a new project. Don't be the new project.