
How to Read a Smart Contract Audit Report

Audit reports look reassuring at first glance. The signal lives in the parts most readers skip: the scope statement, the medium-severity findings, and what is conspicuously absent.

Most DeFi users, and a surprising number of investors, read audit reports the same way: open the PDF, check whether the critical findings count is zero, close the PDF, decide the protocol is safe.

This is a useful filter for screening out unaudited code. It is a terrible one for distinguishing safe protocols from unsafe ones.

Here is how we read smart contract audit reports, ours and other firms', when we are doing real diligence.

Page one: the scope statement

Before any finding, before even the executive summary, find the scope statement and read it. Twice.

A scope statement says, in concrete terms: which Solidity files were reviewed, at which commit hash, by which auditors, over which window of time, with which assumptions held constant.

The good ones list specific files with specific line counts. The bad ones say "the protocol's smart contracts." If the scope is vague, the audit is vague. If the scope is one paragraph, the audit is one paragraph.

Specifically check:

  • Are the files in scope the files actually deployed? Compare to mainnet. Protocols sometimes audit one branch and ship another.
  • Is the upgrade path in scope? A protocol whose implementation is audited but whose proxy is not is a protocol whose admin can swap in unaudited code at will.
  • Is the deployment configuration in scope? Wrong constructor arguments, wrong initial owner, wrong oracle address: these mistakes have lost real money, and they live below the contract logic.
  • Are integrations in scope? "We assumed the Chainlink feed is correct" is a real assumption; whether it's the right one depends on the protocol's actual integration.
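The first check in the list above can be done mechanically. A minimal sketch: compare the audited build's runtime bytecode against what is deployed on mainnet, after stripping the compiler metadata suffix. This assumes standard Solidity behavior, where a CBOR-encoded metadata blob whose length is stored in the final two bytes is appended to the runtime bytecode; fetching the deployed code (for example via an RPC eth_getCode call) is left out of the sketch.

```python
def strip_metadata(runtime_bytecode: str) -> str:
    """Drop the trailing Solidity metadata blob from runtime bytecode.

    Solidity appends a CBOR-encoded metadata section whose byte length is
    stored in the final two bytes. Stripping it lets two builds of the same
    source compare equal even when compiled on different machines.
    """
    code = bytes.fromhex(runtime_bytecode.removeprefix("0x"))
    meta_len = int.from_bytes(code[-2:], "big")
    return code[: len(code) - meta_len - 2].hex()

def audit_matches_deployment(audited: str, deployed: str) -> bool:
    """True when the audited build and on-chain code agree, modulo metadata."""
    return strip_metadata(audited) == strip_metadata(deployed)
```

If the function returns False, the protocol deployed something other than what was in scope, and everything else in the report is describing a different codebase.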

A scope statement that explicitly enumerates exclusions is a sign of a serious audit. A scope statement that hand-waves is a sign of a marketing document.

Page two onwards: not just the criticals

Every report ranks findings by severity: critical, high, medium, low, informational. Most readers focus on critical and high.

The signal is often in the mediums.

Medium-severity findings are the ones the auditors thought were real but did not yet have a concrete exploit for. They are reentrancy patterns that might be exploitable depending on how the protocol is integrated. They are oracle assumptions that might fail under conditions the team thought unlikely. They are access-control patterns that might be bypassed in an upgrade.

A protocol with five medium findings, all answered with "acknowledged, not fixing because we believe the impact is limited", is a different protocol from one where all five mediums were patched. The latter is more careful; the former is making bets you may not have made.

Read the team's responses to mediums and informationals. The pattern of responses tells you how the team thinks about security. "Will fix" is a mature response. "Disagrees" with no justification is a less mature one. Both are valid in some contexts; the texture matters.
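To make that pattern visible across a whole report, it helps to tabulate responses by severity. A sketch, assuming you have extracted findings into (severity, response) pairs by hand; the shape is hypothetical, since real reports are not machine-readable:

```python
from collections import Counter

# Hypothetical normalized shape for findings pulled out of a report:
# (severity, team_response) pairs. Real reports need manual extraction.
Finding = tuple[str, str]  # e.g. ("medium", "acknowledged")

def response_profile(findings: list[Finding]) -> dict[str, Counter]:
    """Group team responses by severity so the pattern shows at a glance."""
    profile: dict[str, Counter] = {}
    for severity, response in findings:
        profile.setdefault(severity, Counter())[response] += 1
    return profile
```

A mediums row that reads mostly "fixed" and an informationals row that reads mostly "acknowledged" is the texture of a team that triages; a mediums row that reads mostly "disputed" is the texture of a team that argues.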

The post-fix reaudit

A first-draft audit report is interesting; a post-fix re-audit is what you actually want to read.

The first-draft report describes the code as it was when the audit started. The team applies fixes. The re-audit describes the code as it will be when the team deploys. In the re-audit, each first-draft finding should say "Fixed in commit X" or "Acknowledged, not fixing for reason Y."

If the protocol publishes the first-draft report but not the re-audit, you are reading marketing, not security. If the re-audit shows new findings introduced by the fixes, that is also useful information: fixes regularly introduce new bugs.
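The draft-versus-reaudit comparison is a set difference. A minimal sketch, again assuming a hypothetical normalized shape (finding IDs from the draft, plus an ID-to-resolution map from the re-audit), not any real report format:

```python
def reaudit_delta(draft_ids: set[str], reaudit: dict[str, str]) -> dict[str, list[str]]:
    """Compare first-draft finding IDs against re-audit resolutions.

    `reaudit` maps finding ID -> resolution text ("fixed in abc123",
    "acknowledged", ...). Hypothetical shape; real reports vary.
    """
    return {
        # Draft findings the re-audit never mentions: the worrying bucket.
        "unaddressed": sorted(draft_ids - reaudit.keys()),
        # Findings that first appear in the re-audit: often introduced by fixes.
        "new_in_reaudit": sorted(reaudit.keys() - draft_ids),
    }
```

Both buckets should usually be small; a large "unaddressed" bucket means the re-audit silently dropped findings, which is its own finding.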

What the report cannot tell you

Audit reports are bounded by their scope, which means they are silent on a lot of risk:

Operational security. The most secure contract in the world is exploitable if the deployer EOA has its private key sitting in a Discord backup. Audits do not check this. We do, as part of Wallet Setup and ongoing Wallet Surveillance.

Economic design. Whether the tokenomics make the protocol stable, whether the LP incentives create sustainable liquidity, whether the staking mechanism is incentive-compatible: these are economic questions, not bug questions. A protocol with no findings can still die from bad design.

External dependencies. Audit reports usually say "we assumed Chainlink works correctly", "we assumed the integrated lending protocol behaves as documented", "we assumed bridge X is honest." These assumptions are real risks, just not in the audit's scope.

The next deploy. An audit at commit 0xabc is not an audit at commit 0xdef. Protocols that ship continuously must have continuous review, not annual snapshots. Bug bounties and code-review hygiene are the missing layer.

Three signals of a serious audit

Beyond the scope statement, three signals suggest you are reading a serious audit:

1. The findings include things you do not understand

A serious audit surfaces issues that require domain knowledge. If you read the report and every finding is intuitively obvious, the auditor probably did not look very hard. Real findings often involve specific reentrancy variants, oracle manipulation under flash-loan conditions, signature replay across forks, EIP-specific edge cases. They take a paragraph to explain.

2. The recommendations are specific

"Add input validation" is a non-recommendation. "Add require(amount > 0 && amount <= MAX_AMOUNT) at the start of deposit(), with revert reason INVALID_AMOUNT, and emit a corresponding error event" is a recommendation. A report full of vague recommendations is a report from auditors who did not engage with the code.

3. The auditors are named, with bios

The cover page lists "John Smith, Senior Auditor, [profile link]", not "Audit Team." The named auditors have prior reports under their name and a track record of finding things in production. If the report is anonymous, the firm is hiding either junior staff or the absence of staff.

Three signals of an unserious audit

Conversely, signs that the audit may not have done much:

  • No findings or only informational findings. Real code has real issues. A clean report on a complex protocol means either the code is exceptional (rare) or the auditors were not looking hard (common).
  • A short engagement window. A 5-day audit on a 5,000-line codebase is a code review, not an audit. The review work scales with the code complexity, not the auditor's calendar.
  • The same firm audits everyone, every time. Firms with bottomless pipelines often deliver shallow work. Real audit teams have capacity constraints.

What to do with the report

Reading a report does not make a protocol safe to use. It tells you what you can defensibly believe about a specific commit at a specific moment in time.

Combine with:

  • A bug bounty with a payout proportional to the value at stake.
  • Continuous monitoring of the protocol's actual on-chain behavior and admin keys.
  • An incident-response plan for the bugs that nobody, including the auditors, found.

These are the three legs of a security posture. The audit is one leg. Reports without the other two are reports without legs.

The protocols that survive their first year are the ones that read their audit reports the way they would read any other expert report, with full attention to what the report covers and to what it explicitly does not.


Have a project that needs a second pair of eyes? Talk to us.