
A Practical Guide to Smart Contract Audits


Getting a smart contract audit has become table stakes for any DeFi project that wants to be taken seriously. But audits are widely misunderstood. Teams treat them as a stamp of approval. They're not. An audit is a structured search for vulnerabilities. What it finds depends heavily on who's looking and what they're looking for.

We've had the Defimec protocol contracts audited three times by three different firms. Each found things the previous ones missed. That's not a criticism of any firm — it's how security research works. Fresh eyes see different things. This piece is about how to get actual value from the process.

What an audit actually covers

A typical smart contract security audit reviews the code for several categories of vulnerabilities:

  • Reentrancy: Cases where external calls allow re-entry into a function before state is updated
  • Access control: Functions that should be restricted but aren't, or permissioning logic that can be circumvented
  • Integer overflow/underflow: Arithmetic operations that wrap around — less common now with Solidity 0.8+ but still relevant in assembly-heavy code
  • Flash loan attack vectors: Logic that can be manipulated using borrowed-and-returned capital within a single transaction
  • Front-running: Transaction ordering dependencies that create economic exploitation opportunities
  • Logic errors: Code that does what it says but not what you intend

That last category is the hardest and most valuable. Finding a reentrancy bug is mostly pattern matching — good static analysis tools catch many of them automatically. Finding a logic error that only matters given a specific combination of market conditions requires deep understanding of both the code and the domain.

What an audit doesn't cover

Several things fall outside the typical audit scope and are worth being explicit about:

  • Economic design attacks: Auditors review code correctness, not whether the tokenomics or incentive model can be exploited
  • Oracle risk: Most audits assume oracle inputs are valid — price feed manipulation isn't a code bug
  • Governance attacks: On-chain governance mechanisms can be manipulated without any bug in the contract
  • Upgradeable contract risks: Proxy patterns introduce risks around storage layout collisions that aren't always flagged
  • Off-chain dependencies: Infrastructure components, key management, admin private keys — outside contract scope
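The oracle-risk gap deserves a concrete illustration. The sketch below (Python, with invented numbers) models a constant-product pool to show why a spot price read from a pool can be moved dramatically within a single transaction, which is precisely the manipulation an audit of the consuming contract will not flag as a bug:

```python
# Why spot-price oracles are manipulable: a constant-product pool's
# price moves with any trade, so flash-loaned capital can skew it
# within one transaction. A TWAP over past blocks resists this.
# All figures below are invented for illustration.

def spot_price(reserve_token, reserve_usd):
    return reserve_usd / reserve_token

# Pool starts at 1,000 tokens vs 1,000,000 USD: price 1000
r_token, r_usd = 1_000.0, 1_000_000.0
honest = spot_price(r_token, r_usd)

# Attacker flash-buys tokens; the invariant k = x * y holds
k = r_token * r_usd
r_usd += 4_000_000.0       # push 4M USD into the pool
r_token = k / r_usd        # 200 tokens remain
manipulated = spot_price(r_token, r_usd)

print(honest, manipulated)  # 1000.0 vs 25000.0: a 25x move in one tx
```

The contract consuming this price can be entirely correct as code; the exploit lives in the economic assumption that the oracle input is honest.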

The Nomad bridge exploit in 2022 is a good example of a logic error that an audit missed. The contracts were audited. The bug — a trusted root set to zero that allowed any message to be accepted as valid — was present and not caught. This wasn't negligence on the auditor's part; it was a subtle interaction between initialization logic and message validation that required exactly the right framing to spot.
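A toy reconstruction of that interaction, in Python rather than Solidity, with names and structure simplified from public post-mortems rather than taken from the actual Replica code:

```python
# Simplified model of the Nomad-style bug: unproven messages map to
# the zero root, and initialization accidentally marked the zero root
# as confirmed, so any message validated. Names are illustrative.

ZERO_ROOT = "0x00"

confirm_at = {}      # root -> timestamp at which it becomes acceptable
message_root = {}    # message hash -> proven Merkle root

def initialize(committed_root):
    # The fatal deployment: committed_root was the zero value.
    confirm_at[committed_root] = 1

def acceptable_root(root):
    return confirm_at.get(root, 0) != 0

def process(message_hash):
    # An unproven message falls through to ZERO_ROOT...
    root = message_root.get(message_hash, ZERO_ROOT)
    # ...which initialize() marked as acceptable.
    return acceptable_root(root)

initialize(ZERO_ROOT)
print(process("never-proven-message"))  # True: forged message accepted
```

Each piece looks reasonable in isolation; only the combination of the initialization value and the default lookup produces the exploit, which is why framing mattered so much.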

How to get more from the audit process

Write a spec first

Auditors can only evaluate whether code does what it claims to do if they know what it claims to do. A clear specification of intended behavior — written before the audit — gives auditors a baseline for logic error detection. Without it, they're guessing at intent.

Our practice is to write a behavior specification for each contract and include it with the audit brief. Auditors have told us this consistently produces more useful findings than code-only reviews.
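As a hypothetical illustration (this is not Defimec's actual spec format), a behavior spec can even be made executable: invariants written as code that both the team and the auditors can run against a model of the contract:

```python
# Hypothetical fragment of a behavior spec, written as executable
# invariants against a toy vault model. All names and rules here are
# illustrative, not taken from any real specification.

class Vault:
    def __init__(self):
        self.shares = {}
        self.assets = 0

    def deposit(self, user, amount):
        assert amount > 0, "spec: deposits must be positive"
        self.shares[user] = self.shares.get(user, 0) + amount
        self.assets += amount

    def withdraw(self, user, amount):
        assert amount <= self.shares.get(user, 0), "spec: no over-withdrawal"
        self.shares[user] -= amount
        self.assets -= amount

def invariant_solvency(v):
    # Spec: the vault always holds at least the sum of user claims.
    return v.assets >= sum(v.shares.values())

v = Vault()
v.deposit("alice", 50)
v.deposit("bob", 30)
v.withdraw("alice", 20)
print(invariant_solvency(v))  # True
```

Even a prose spec helps; a runnable one additionally gives the auditor test scaffolding to probe the real contract against.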

Use multiple auditors

We mentioned this above but it's worth emphasizing: each audit firm has different tools, methodologies, and areas of expertise. A firm that specializes in formal verification will catch things that a firm relying primarily on manual review will miss, and vice versa. The marginal cost of a third audit is low relative to the risk reduction.

Ask for severity reasoning

Audit reports classify findings by severity — critical, high, medium, low, informational. The classification matters less than the reasoning behind it. Ask the auditor to explain why a finding is high rather than critical. The explanation often reveals assumptions about attacker resources and conditions that are worth stress-testing.

Treat remediation as a second audit

Fixing findings introduces new code. New code has new risks. When addressing audit findings, have the remediation reviewed before deployment — either by the original auditor or an independent reviewer. Several post-audit exploits have come from improperly implemented fixes, not from undiscovered vulnerabilities in the original code.

Formal verification is different from auditing

Formal verification mathematically proves that code satisfies a set of properties. It's not the same as an audit. An audit finds bugs that exist. Formal verification proves that certain categories of bugs cannot exist, given correct specifications. The two complement each other.

We use Lean 4 for formal verification of the core protocol logic. The scope is narrower than an audit — we specify and verify the most critical properties around fund custody and state transitions. But for those properties, the assurance is categorical rather than probabilistic.
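To show the shape of such a proof, here is a toy Lean 4 example (illustrative only, not Defimec's actual verification code): a state transition that moves funds between two balances, with a theorem that the total is conserved:

```lean
-- Illustrative only: a toy custody invariant in Lean 4.
-- `transfer` moves `amt` from balance `a` to balance `b`;
-- the theorem proves the total balance is conserved.

structure State where
  a : Nat
  b : Nat

def transfer (s : State) (amt : Nat) (h : amt ≤ s.a) : State :=
  { a := s.a - amt, b := s.b + amt }

theorem transfer_conserves (s : State) (amt : Nat) (h : amt ≤ s.a) :
    (transfer s amt h).a + (transfer s amt h).b = s.a + s.b := by
  simp [transfer]
  omega
```

The proof holds for every state and every amount, not just the cases a test suite happens to exercise; that universality is what "categorical rather than probabilistic" means in practice.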

Not every team has the resources for formal verification. But any team shipping contracts that hold material value should at minimum treat the audit as a starting point, not an endpoint.
