April 1, 2026

Designing Solidity Contracts That Don’t Fight Formal Verification

This guide provides concrete examples of patterns that are difficult for the Certora Prover to reason about. It helps teams shape their Solidity smart-contract code to be more verification-friendly while maintaining good engineering practices.

1. Solidity: What Makes Code Hard for the Prover?

When verifying Solidity contracts, we rely heavily on summaries in CVL to over-approximate function behavior. A summary captures what a function does: how it reads or modifies storage, returns values, and interacts with other contracts, without requiring the prover to reason about its full implementation every time.

Many Solidity constructs blur the boundary between what a function does and how it does it. Patterns that mix computation with storage mutation, hide storage access behind assembly, or entangle contracts through complex call graphs make it difficult to write sound summaries. In these cases, the prover must inline behavior, track low-level effects, or reason about tightly coupled components, significantly degrading performance.

A related challenge is "munging the code": situations where otherwise-correct code must be refactored purely to accommodate verification. This usually indicates that computation, state mutation, and control flow are too entangled for robust verification. The goal of this guide is to help teams avoid patterns that force verification-driven refactoring.

1.1 Mixing Computation with Storage Updates

Issue: Functions that compute values and update storage in the same block introduce side effects, making summarization harder.

Hard Example (Avoid):

function accrueInterest() external {
    uint256 timeElapsed = block.timestamp - lastAccrualTime;
    uint256 interestFactor = timeElapsed * ratePerSecond;
    uint256 newDebt = totalDebt + (totalDebt * interestFactor / 1e18);
    totalDebt = newDebt;                // storage write
    lastAccrualTime = block.timestamp;  // storage write
}

Preferred (Split Logic):

function calculateAccruedDebt(
    uint256 currentDebt,
    uint256 lastAccrual,
    uint256 rate,
    uint256 currentTime // pure functions cannot read block.timestamp, so the caller passes it in
) public pure returns (uint256) {
    uint256 timeElapsed = currentTime - lastAccrual;
    return currentDebt + (currentDebt * timeElapsed * rate / 1e18);
}

function accrueInterest() external {
    totalDebt = calculateAccruedDebt(totalDebt, lastAccrualTime, ratePerSecond, block.timestamp);
    lastAccrualTime = block.timestamp;
}

CVL Summary (for the pure function):

function calculateAccruedDebt(uint256 d, uint256 t, uint256 r, uint256 ts) external returns (uint256) => accrueGhost(d, t, r, ts);

Why it helps: (1) The prover can summarize calculateAccruedDebt as a pure mathematical function with precise arithmetic of the right type, and the stateful wrapper becomes trivial to verify. (2) It also lets us abstract away hard math that is irrelevant to a given property and would otherwise slow down the SMT solvers unnecessarily: for such properties, we can summarize the function to simply return a nondeterministic value of the correct return type.
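As a language-agnostic illustration, the same split can be modeled in Python (all names here are illustrative, not part of any real contract): the pure calculation depends only on its arguments, and the stateful wrapper reduces to a single write of its result.

```python
# Python model of the split (illustrative names, not a real contract):
# the pure calculation is an ordinary function of its inputs, and the
# stateful wrapper reduces to one write of its result.
WAD = 10**18

def calculate_accrued_debt(current_debt: int, time_elapsed: int, rate: int) -> int:
    """Pure: easy to summarize as a mathematical function."""
    return current_debt + current_debt * time_elapsed * rate // WAD

class Vault:
    def __init__(self, debt: int, last_accrual: int, rate: int):
        self.total_debt = debt
        self.last_accrual = last_accrual
        self.rate = rate

    def accrue_interest(self, now: int) -> None:
        """Stateful wrapper: trivial to verify once the math is summarized."""
        self.total_debt = calculate_accrued_debt(
            self.total_debt, now - self.last_accrual, self.rate
        )
        self.last_accrual = now

v = Vault(debt=1_000 * WAD, last_accrual=0, rate=10**9)
v.accrue_interest(now=100)
assert v.total_debt == 1_000 * WAD + 10**14  # 1e21 * 100 * 1e9 / 1e18 = 1e14
```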

1.2 Overly Monolithic Contracts / Spaghetti Architecture

Issue: Complex call chains, especially cycles like Vault→Strategy→Vault, complicate modular reasoning and can lead to reentrancy vulnerabilities.

Hard Example (Avoid):

contract Vault {
    Strategy strategy;

    function deposit(uint256 amount) external {
        asset.transferFrom(msg.sender, address(this), amount);
        strategy.invest(amount); // Strategy calls back into Vault
    }

    function reportProfit(uint256 profit) external {
        // Called by Strategy during invest()
        totalAssets += profit;
    }
}

contract Strategy {
    Vault vault;

    function invest(uint256 amount) external {
        uint256 earned = _deployCapital(amount);
        vault.reportProfit(earned); // Circular: Vault → Strategy → Vault
    }
}

Preferred (Unidirectional Dependencies):

library ProfitCalculator {
    function calculateProfit(uint256 principal, uint256 current) internal pure returns (uint256) {
        return current > principal ? current - principal : 0;
    }
}

contract Strategy {
    function invest(uint256 amount) external returns (uint256 profit) {
        // Stateless: compute and return, don't call back
        uint256 newValue = _deployCapital(amount);
        return ProfitCalculator.calculateProfit(amount, newValue);
    }
}

contract Vault {
    Strategy immutable strategy;

    function deposit(uint256 amount) external {
        asset.transferFrom(msg.sender, address(this), amount);
        uint256 profit = strategy.invest(amount); // One-way call
        totalAssets += profit;                    // Vault updates own state
    }
}

Why it helps: Single-direction call graphs (Vault → Strategy, never reverse) allow modular verification. Each contract can be verified independently.
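A minimal Python model of the same one-way flow (names are illustrative, the 2% yield is a stand-in for _deployCapital) makes the modularity visible: Strategy computes and returns, and only Vault touches Vault's state.

```python
# Python model of the unidirectional Vault -> Strategy flow (illustrative names).
def calculate_profit(principal: int, current: int) -> int:
    """Stateless helper, mirroring the ProfitCalculator library."""
    return current - principal if current > principal else 0

class Strategy:
    def invest(self, amount: int) -> int:
        """Computes and returns a profit; never calls back into the vault."""
        new_value = amount * 102 // 100  # stand-in for _deployCapital
        return calculate_profit(amount, new_value)

class Vault:
    def __init__(self, strategy: Strategy):
        self.strategy = strategy
        self.total_assets = 0

    def deposit(self, amount: int) -> None:
        profit = self.strategy.invest(amount)  # one-way call
        self.total_assets += profit            # Vault updates only its own state

vault = Vault(Strategy())
vault.deposit(1_000)
assert vault.total_assets == 20  # 2% stand-in yield on 1000
```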

1.3 Assembly Interacting with Storage

Issue: Inline assembly with sload/sstore creates opaque behavior that breaks storage analysis and summary generation.

Hard Example (Avoid):

function getPrice(bytes32 tokenSlot) external view returns (uint256 price) {
    assembly {
        price := sload(tokenSlot) // Opaque: prover can't track which slot
    }
}

Preferred (Restrict Assembly to Memory Operations):

mapping(address => uint256) public prices; // Explicit storage

function getPrice(address token) external view returns (uint256) {
    return prices[token]; // Prover can track this
}

// Assembly is fine for pure memory operations
function efficientHash(bytes32 a, bytes32 b) internal pure returns (bytes32 h) {
    assembly {
        mstore(0x00, a)
        mstore(0x20, b)
        h := keccak256(0x00, 0x40)
    }
}

1.4 Unnecessary Assembly for Gas Micro-Optimizations

Issue: Assembly blocks that replicate behavior Solidity can express natively force the prover into opaque reasoning or require manual "munging" (rewriting production code just for verification). This is especially common in gas-optimized token standards.

Hard Example (Avoid):

// From a real ERC-1155 implementation: creates two length-1 arrays
// Saves ~200 gas by skipping Solidity's default zero-initialization
function _asSingletonArrays(
    uint256 element1,
    uint256 element2
) private pure returns (uint256[] memory array1, uint256[] memory array2) {
    assembly {
        array1 := mload(0x40)
        mstore(array1, 1)
        mstore(add(array1, 0x20), element1)
        array2 := add(array1, 0x40)
        mstore(array2, 1)
        mstore(add(array2, 0x20), element2)
        mstore(0x40, add(array2, 0x40))
    }
}

Preferred (Plain Solidity):

function _asSingletonArrays(
    uint256 element1,
    uint256 element2
) private pure returns (uint256[] memory array1, uint256[] memory array2) {
    array1 = new uint256[](1);
    array1[0] = element1;
    array2 = new uint256[](1);
    array2[0] = element2;
}

Why it helps: The assembly version manipulates the free memory pointer and raw memory slots, which can be opaque to the prover. The prover then often falls back to a HAVOC approximation: it conservatively assumes any memory could have been affected, so even storage variables may appear to change arbitrarily. The Solidity version is functionally identical, fully transparent, and requires zero summarization. The ~200 gas saved is negligible in virtually all real-world usage but creates a significant verification burden. When you encounter assembly like this in code you're verifying, this is the kind of "munge" that's often needed: replacing the assembly with equivalent Solidity so the prover can reason about it.

1.5 Large Parameter Passing and Overloaded Semantics

Issue: Parameters with overloaded meaning (e.g., sign determines mode) make summarization ambiguous.

Hard Example (Avoid):

// Negative = withdraw, positive = deposit, zero = claim rewards
function execute(int256 amountOrAction) external {
    if (amountOrAction < 0) {
        _withdraw(uint256(-amountOrAction));
    } else if (amountOrAction > 0) {
        _deposit(uint256(amountOrAction));
    } else {
        _claimRewards();
    }
}

Preferred (Explicit Actions):

enum Action { Deposit, Withdraw, ClaimRewards }

function execute(Action action, uint256 amount) external {
    if (action == Action.Deposit) _deposit(amount);
    else if (action == Action.Withdraw) _withdraw(amount);
    else _claimRewards();
}

Why it helps: Each action has unambiguous semantics. The prover can reason about Action.Deposit independently of withdrawal logic.

1.6 Problematic Byte Handling and abi.encodePacked

Issue: Packed encoding creates ambiguous boundaries that can cause hash collisions and are hard to reason about.

Hard Example (Avoid):

// Can collide: ("ab", "c") vs ("a", "bc") both produce "abc"
bytes32 h = keccak256(abi.encodePacked(str1, str2));

Preferred:

bytes32 h = keccak256(abi.encode(str1, str2)); // Length-prefixed, unambiguous
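The collision is easy to demonstrate outside Solidity. The Python sketch below uses simplified stand-ins for the two encodings (plain concatenation vs. length-prefixed), not the actual ABI rules:

```python
# Packed encoding concatenates arguments with no boundary information,
# so distinct argument tuples can produce the same bytes (hence the same hash).
def encode_packed(*args: bytes) -> bytes:
    return b"".join(args)  # ambiguous boundaries, like abi.encodePacked

def encode(*args: bytes) -> bytes:
    # Simplified stand-in for abi.encode: length-prefix each argument.
    return b"".join(len(a).to_bytes(32, "big") + a for a in args)

assert encode_packed(b"ab", b"c") == encode_packed(b"a", b"bc")  # collision
assert encode(b"ab", b"c") != encode(b"a", b"bc")                # no collision
```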

1.7 Complex Math (mulDiv, sqrt, exponentials)

Issue: Inline complex math requires the prover to reason about overflow, precision, and rounding, often intractable without manual summaries.

Hard Example (Avoid):

function getAmountOut(uint256 amountIn, uint256 reserveIn, uint256 reserveOut) external pure returns (uint256) {
    uint256 amountInWithFee = amountIn * 997;
    uint256 numerator = amountInWithFee * reserveOut;
    uint256 denominator = reserveIn * 1000 + amountInWithFee;
    return numerator / denominator; // Prover struggles with this chain
}

Preferred (Use Standard Library + Wrap):

import {Math} from "@openzeppelin/contracts/utils/math/Math.sol";

function getAmountOut(uint256 amountIn, uint256 reserveIn, uint256 reserveOut) external pure returns (uint256) {
    uint256 amountInWithFee = amountIn * 997;
    uint256 denominator = reserveIn * 1000 + amountInWithFee;
    return Math.mulDiv(amountInWithFee, reserveOut, denominator);
}

CVL Summary:

function Math.mulDiv(uint256 x, uint256 y, uint256 d) internal returns (uint256) => mulDivGhost(x, y, d);

// With axioms for expected properties (Math.mulDiv floors the exact quotient,
// so the result is bracketed by it):
axiom forall uint256 x. forall uint256 y. forall uint256 d.
    d != 0 => (mulDivGhost(x, y, d) * d <= x * y && x * y < (mulDivGhost(x, y, d) + 1) * d);
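Since Math.mulDiv computes the floor of x·y/d with a full-width intermediate product, Python's arbitrary-precision integers model the intended ghost function directly, and the floor-division axioms can be sanity-checked numerically:

```python
# Math.mulDiv(x, y, d) floors the exact quotient x*y/d, computed with a
# full-width intermediate product; Python integers model this directly.
def mul_div(x: int, y: int, d: int) -> int:
    assert d != 0
    return (x * y) // d

U256_MAX = 2**256 - 1
for x, y, d in [(3, 7, 2), (U256_MAX, U256_MAX, U256_MAX), (10**18, 997, 1000)]:
    r = mul_div(x, y, d)
    # Floor-division axioms the CVL ghost should satisfy:
    assert r * d <= x * y < (r + 1) * d
```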

1.8 Indirect Storage via ERC-7201 Patterns in Libraries

Issue: Storing structs at computed slots via assembly (ERC-7201 style) makes storage opaque. When access happens through libraries, the prover can't resolve address fields for linking or hook on struct members.

Hard Example (Avoid):

library VaultStorage {
    bytes32 constant SLOT = keccak256("vault.storage.v1");

    struct State {
        address asset;
        address strategy;
        uint256 totalShares;
    }

    function load() internal pure returns (State storage $) {
        bytes32 slot = SLOT;
        assembly { $.slot := slot }
    }
}

contract Vault {
    function deposit() external {
        VaultStorage.State storage s = VaultStorage.load();
        // Prover can't link s.strategy to actual Strategy contract
        IStrategy(s.strategy).invest(...);
    }
}

Preferred (Single Contract-Level Accessor):

contract Vault {
    struct VaultState {
        address asset;
        address strategy;
        uint256 totalShares;
    }

    bytes32 private constant VAULT_STATE_SLOT = keccak256("vault.storage.v1");

    function _state() internal pure returns (VaultState storage $) {
        bytes32 slot = VAULT_STATE_SLOT;
        assembly { $.slot := slot }
    }

    function deposit() external {
        VaultState storage s = _state();
        // All access through single accessor in main contract
        IStrategy(s.strategy).invest(...);
    }
}

Why it helps: A single accessor in the contract (not a library) gives the prover one place to reason about storage layout. Field access becomes trackable.

1.9 SMT-Solving Deep Dive: Mixing Nonlinear Arithmetic with Bitwise Operations

Issue: SMT solvers don't reason about all of math in one unified engine. Internally, they decompose problems into specialized theories, each an independent decision procedure optimized for a specific domain. Two relevant theories whose combination remains a hard, unsolved problem to this day are:

  • Nonlinear integer arithmetic (NIA): handles multiplication, division, mulmod, and modular arithmetic. Already undecidable in general (per Matiyasevich's theorem), so the solver relies on heuristics, i.e., incomplete search that can time out.
  • Bitvector theory (BV): handles fixed-width operations like shifts (>>, <<), bitwise AND/OR/XOR, and two's complement overflow. Decidable but exponential in bit-width: for 256-bit values the search space is enormous.

When both theories appear in the same formula, the solver must run a theory-combination procedure (typically Nelson-Oppen or similar). This means the NIA solver and the BV solver exchange equalities and disequalities about shared variables, iterating until they agree. Each exchange can trigger re-solving in both theories, and because NIA already relies on heuristics, the cross-theory communication often pushes it into search paths it cannot resolve, leading to timeouts even on functions that look short.

This is a common pattern in DeFi math libraries that implement gas-efficient mulDiv with full 512-bit precision. These functions typically use mulmod for the high-precision product, then bit-shift and mask to perform Newton's method for division: mixing nonlinear arithmetic (multiplication, modular reduction) with bitvector operations (shifts, AND, two's complement tricks) in a single block. The solver sees both theories entangled in every constraint, and the theory combination overhead compounds with the already-difficult nonlinear reasoning.

Example: A mixed-theory mulDiv with bitwise rounding

Consider a real-world pattern from DeFi math libraries: a mulDiv variant that uses mulmod for the exact remainder of the full-precision product, then applies a branchless bitwise rounding trick:

function mulDivRound(
    uint256 a,
    uint256 b,
    uint256 denominator,
    bool roundUp
) internal pure returns (uint256 result) {
    unchecked {
        result = a * b / denominator;                  // NIA: multiplication + division
        uint256 remainder = mulmod(a, b, denominator); // NIA: modular arithmetic
        // Branchless round-up via bitwise ops:
        // (remainder | -remainder) has its top bit set iff remainder != 0
        uint256 hasRemainder = (remainder | (0 - remainder)) >> 255; // BV: OR, SHR
        result += hasRemainder & (roundUp ? 1 : 0);                  // BV: AND
    }
}

The solver must encode mulmod (NIA theory) and the bitwise OR, shift, and mask operations (BV theory) into a single satisfiability query. Every shared variable (like remainder) forces the Nelson-Oppen procedure to synchronize the two solvers: the NIA solver proposes a value for remainder, the BV solver checks whether the bit operations are consistent, disagrees, sends back constraints, and the loop repeats. With 256-bit values, the BV solver alone faces 2²⁵⁶ possible states per variable, and coordinating with NIA's heuristic search on top of that is what causes the blowup.
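To see what the solver is being asked to prove, here is a Python model of one common formulation of this branchless round-up trick, with all operations masked to 256-bit two's complement (variable names are illustrative):

```python
# Python model of the branchless round-up, masked to 256-bit two's complement.
# The trick: (remainder | -remainder) has its top bit set iff remainder != 0.
MASK = 2**256 - 1

def mul_div_round(a: int, b: int, d: int, round_up: bool) -> int:
    result = (a * b) // d                 # NIA: multiplication + division
    remainder = (a * b) % d               # NIA: models mulmod(a, b, d)
    neg_rem = (-remainder) & MASK         # two's-complement negation
    has_remainder = ((remainder | neg_rem) >> 255) & 1        # BV: OR, SHR
    return result + (has_remainder & (1 if round_up else 0))  # BV: AND

# The bit-level computation agrees with the arithmetic specification:
for a, b, d in [(7, 3, 2), (10, 10, 4), (2**200, 3, 7)]:
    floor = (a * b) // d
    assert mul_div_round(a, b, d, False) == floor
    assert mul_div_round(a, b, d, True) == floor + (1 if (a * b) % d else 0)
```

Proving that last agreement for all 256-bit inputs is exactly the cross-theory query that is hard for the solver.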

Advanced Technique: Decompose, Verify Separately, Prove Equivalence

When the function can't be simplified (it's a battle-tested math library you don't want to modify), there's a more powerful approach: decompose the function into theory-pure stages, verify each stage in isolation, and then prove that their composition equals the original.

This doesn't always work as it requires the function to have a natural decomposition point where the arithmetic and bitwise stages are sequential rather than deeply interleaved. But when it applies, it lets you verify functions that would otherwise be too hard for SMT-solving.

Step 1: Decompose into theory-pure functions

/// @dev Stage 1: pure arithmetic (NIA only)
function _mulDivTruncate(
    uint256 a,
    uint256 b,
    uint256 d
) internal pure returns (uint256 result, uint256 remainder) {
    result = a * b / d;
    remainder = mulmod(a, b, d);
}

/// @dev Stage 2: pure bitwise rounding (BV only)
function _applyRounding(
    uint256 result,
    uint256 remainder,
    bool roundUp
) internal pure returns (uint256) {
    unchecked {
        // (remainder | -remainder) has its top bit set iff remainder != 0
        uint256 hasRemainder = (remainder | (0 - remainder)) >> 255;
        return result + (hasRemainder & (roundUp ? 1 : 0));
    }
}

/// @dev Composed: calls both stages sequentially
function mulDivRound_decomposed(
    uint256 a,
    uint256 b,
    uint256 d,
    bool roundUp
) internal pure returns (uint256) {
    (uint256 result, uint256 remainder) = _mulDivTruncate(a, b, d);
    return _applyRounding(result, remainder, roundUp);
}

Step 2: Verify properties of each stage independently

Because each stage stays in one theory, the solver handles them efficiently:

// Verify the arithmetic stage (solver uses NIA only)
rule truncateIsSound(uint256 a, uint256 b, uint256 d) {
    require d != 0;
    require a * b <= max_uint256; // rule out 256-bit overflow in this sketch
    uint256 result; uint256 remainder;
    result, remainder = _mulDivTruncate(a, b, d);
    assert result == (a * b) / d;    // exact truncation
    assert remainder == (a * b) % d;
}

// Verify the rounding stage (solver uses BV only)
rule roundingAddsAtMostOne(uint256 result, uint256 remainder, bool roundUp) {
    require result < max_uint256; // rule out wraparound of the +1
    uint256 rounded = _applyRounding(result, remainder, roundUp);
    assert rounded >= result;
    assert rounded <= result + 1;
    assert !roundUp => rounded == result; // no rounding when not requested
}

Step 3: Prove equivalence with the original

The key step: prove that the decomposed version produces identical output to the original for all inputs:

rule decompositionIsEquivalent(uint256 a, uint256 b, uint256 d, bool roundUp) {
    require d != 0;
    uint256 original = mulDivRound(a, b, d, roundUp);
    uint256 decomposed = mulDivRound_decomposed(a, b, d, roundUp);
    assert original == decomposed;
}

This equivalence rule still mixes theories (it references both implementations), but it's a much simpler query: the solver just needs to check whether two uint256 return values can ever differ, not understand why either implementation is correct. In practice, the solver can often discharge this by observing that the decomposed version is just an inlined rewriting of the original, with identical intermediate variables.
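The whole decomposition argument can also be quick-checked numerically. The Python model below (a sketch, not CVL) re-implements both versions with 256-bit masking and tests their agreement on random inputs:

```python
# Numeric quick-check of the decomposition (a model of the CVL argument).
import random

MASK = 2**256 - 1

def mul_div_truncate(a: int, b: int, d: int):
    """Stage 1: arithmetic only (models the NIA-pure Solidity stage)."""
    return (a * b) // d, (a * b) % d

def apply_rounding(result: int, remainder: int, round_up: bool) -> int:
    """Stage 2: bitwise only (models the BV-pure Solidity stage)."""
    neg_rem = (-remainder) & MASK                       # two's-complement negation
    has_remainder = ((remainder | neg_rem) >> 255) & 1  # 1 iff remainder != 0
    return result + (has_remainder & (1 if round_up else 0))

def mul_div_round(a: int, b: int, d: int, round_up: bool) -> int:
    """Monolithic version: both theories entangled in one body."""
    result, remainder = (a * b) // d, (a * b) % d
    neg_rem = (-remainder) & MASK
    return result + ((((remainder | neg_rem) >> 255) & 1) & (1 if round_up else 0))

random.seed(0)
for _ in range(1000):
    a, b = random.randrange(2**128), random.randrange(2**128)
    d = random.randrange(1, 2**128)
    up = random.random() < 0.5
    composed = apply_rounding(*mul_div_truncate(a, b, d), up)
    assert mul_div_round(a, b, d, up) == composed
```

A passing random check is of course no proof; the equivalence rule above is what closes the argument for all inputs.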

Once equivalence is established, you can summarize the original function using the properties you proved on the decomposed stages. In subsequent rules that call mulDivRound, the prover uses the ghost summary and never touches the mixed-theory implementation again:

function _.mulDivRound(uint256 a, uint256 b, uint256 d, bool roundUp) internal => mulDivGhost(a, b, d, roundUp);

// Properties inherited from the decomposed verification:
axiom forall uint256 a. forall uint256 b. forall uint256 d. forall bool roundUp.
    d != 0 => mulDivGhost(a, b, d, roundUp) >= (a * b) / d;
axiom forall uint256 a. forall uint256 b. forall uint256 d. forall bool roundUp.
    d != 0 => mulDivGhost(a, b, d, roundUp) <= (a * b) / d + 1;
axiom forall uint256 a. forall uint256 b. forall uint256 d.
    d != 0 => mulDivGhost(a, b, d, false) == (a * b) / d;

When this technique applies: The function must have a natural decomposition point: a place where the arithmetic result is fully computed before bitwise operations begin. This works for mulDiv variants, fee calculations with rounding, and price conversions with bit-packed precision. It does not work when arithmetic and bitwise operations are deeply interleaved (e.g., Newton's method where each iteration mixes both), though even then you can sometimes decompose at the iteration boundary.

When it doesn't apply: Fall back to a pure ghost summary, asserting only the properties your higher-level rules need. You lose the ability to verify the complete implementation's correctness, but you can still verify the protocol-level properties that use the function.

2. General Guidelines

  • Separate pure computation from storage updates so functions admit simple summaries.
  • Keep call graphs unidirectional; avoid cycles like Vault→Strategy→Vault.
  • Restrict inline assembly to memory operations; never hide storage access behind sload/sstore.
  • Prefer abi.encode over abi.encodePacked when hashing multiple dynamic values.
  • Route complex math through standard, summarizable library functions such as Math.mulDiv.
  • Keep nonlinear arithmetic and bitwise operations in separate functions wherever possible.

3. Conclusion

By adopting these patterns and avoiding the problematic ones, engineering teams make their systems far easier to verify formally. The result: faster verification cycles, stronger guarantees, and clearer communication between developers and verification engineers.

Key takeaway: If you find yourself extensively refactoring code just to make verification work, that's a signal the original patterns were too entangled. Write code that admits simple, natural summaries from the start.
