A practical guide to the trade-offs between the three major polynomial commitment families powering modern zero-knowledge proof systems.

This blogpost was jointly written by Matteo Campagnaro, Leonardo Callegari, Mattia Galassi and Prof. Mauro Conti from the University of Padua, Italy, and Martín Ochoa from zkSecurity.
Zero-knowledge proof systems let one party (the prover) convince another (the verifier) that a statement is true, without revealing anything beyond the statement's validity. They are used today in blockchain scaling (rollups), private transactions, and verifiable computation. At the core of most modern proof systems lies a common primitive: the polynomial commitment scheme (PCS).
Why polynomials? Because most computations in modern proof systems can be encoded as polynomial identities. Once expressed that way, the prover commits to a polynomial (binding it without revealing it) and later opens it at selected points to convince the verifier. A PCS makes this commit-then-open workflow efficient and secure.
In practical constructions, when a SNARK verifies a computation or a STARK proves an execution trace, a polynomial commitment scheme is doing the heavy lifting underneath. But not all PCS are the same. Three families dominate practice today: KZG, IPA/Halo, and FRI. Each makes fundamentally different design trade-offs, and this post breaks down what those trade-offs are and when each scheme shines.
The Three Contenders
KZG: Small Proofs, Trusted Setup
KZG commitments, introduced by Kate, Zaverucha, and Goldberg in 2010, are built on an elegantly simple algebraic idea. To prove that a polynomial $\phi(x)$ evaluates to $y$ at a point $z$, the prover shows that $(x - z)$ divides $\phi(x) - y$. That is:
$$ \phi(z) = y \iff (x - z) \mid (\phi(x) - y) $$

Equivalently, there exists a quotient polynomial $q(x)$ such that:

$$ \phi(x) - y = q(x) \cdot (x - z) $$

This divisibility check is enforced through elliptic curve pairings. The prover commits to $\phi(\alpha)$ and $q(\alpha)$ at a secret point $\alpha$ embedded in the public parameters, and the verifier checks the relation via a bilinear pairing equation:

$$ e\!\big(C / g^{y},\; g\big) \stackrel{?}{=} e\!\big(w,\; g^{\alpha - z}\big) $$

where $C = g^{\phi(\alpha)}$ is the commitment and $w = g^{q(\alpha)}$ is the opening witness. This lets the verifier confirm the claim with just two pairing operations, regardless of how large the polynomial is.
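Before any pairings enter the picture, the identity itself is plain polynomial arithmetic. The sketch below (Python over a toy prime field; no curves, no commitments, all values illustrative) checks that $(x - z)$ divides $\phi(x) - y$ exactly when $y = \phi(z)$:

```python
# Toy check of the KZG identity phi(x) - y = q(x) * (x - z) over a prime field.
# No elliptic curves or pairings: this only illustrates the underlying algebra.
P = 2**61 - 1  # stand-in modulus, not a real pairing-friendly field

def poly_eval(coeffs, x):
    """Evaluate a polynomial (low-to-high coefficients) via Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def divide_by_linear(coeffs, z):
    """Synthetic division by (x - z); returns (quotient, remainder)."""
    q = [0] * (len(coeffs) - 1)
    rem = 0
    for i in range(len(coeffs) - 1, 0, -1):
        cur = (coeffs[i] + rem) % P
        q[i - 1] = cur
        rem = (cur * z) % P
    return q, (coeffs[0] + rem) % P

phi = [5, 0, 3, 7]            # phi(x) = 5 + 3x^2 + 7x^3 (arbitrary example)
z = 11
y = poly_eval(phi, z)         # honest claimed evaluation
shifted = [(phi[0] - y) % P] + phi[1:]   # coefficients of phi(x) - y
q, rem = divide_by_linear(shifted, z)
assert rem == 0               # (x - z) divides phi(x) - y exactly

# Spot-check q(x) * (x - z) == phi(x) - y at a random point.
r = 12345
assert poly_eval(q, r) * (r - z) % P == poly_eval(shifted, r)
```

A dishonest $y$ leaves a non-zero remainder; the pairing equation detects exactly this failure, but in the exponent.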
The result: constant-size opening proofs (a single group element, $\approx 48$ bytes on BLS12-381) and $\mathcal{O}(1)$ verification.
The catch: KZG requires a trusted setup, a ceremony that generates a structured reference string (SRS) of the form $(g, g^\alpha, g^{\alpha^2}, \dots, g^{\alpha^t})$. If anyone learns $\alpha$, they can forge proofs. In practice, this is mitigated through multi-party computation ceremonies (like Ethereum's "Powers of Tau"), where security holds as long as a single participant is honest.
IPA/Halo: No Trusted Setup, Recursion-Friendly
Inner Product Argument (IPA) schemes take a different path. Instead of pairings and toxic waste, they rely on Pedersen vector commitments over standard elliptic curve groups — no structured reference string needed.
The core idea: suppose a prover wants to convince a verifier that two vectors $\mathbf{a}, \mathbf{b} \in \mathbb{Z}_q^d$ have a certain inner product $z = \langle \mathbf{a}, \mathbf{b} \rangle$, where $d$ is the length of the vectors (for a PCS, the number of coefficients of the committed polynomial). Given independent generators $\mathbf{G}, \mathbf{H}$ and a generator $U$, the prover commits:
$$ C = \langle \mathbf{a}, \mathbf{G} \rangle + \langle \mathbf{b}, \mathbf{H} \rangle + z \cdot U $$

A naive opening would require sending both full vectors, that is $\mathcal{O}(d)$ communication. The breakthrough is a folding technique: the prover splits each vector into left and right halves, computes cross-term commitments $L$ and $R$, and the verifier responds with a random challenge $u$. Using $u$, both sides fold the vectors and generators into half-sized versions:

$$ \mathbf{a}' = \mathbf{a}_L + u^{-1} \cdot \mathbf{a}_R, \quad \mathbf{b}' = \mathbf{b}_L + u \cdot \mathbf{b}_R $$

The new commitment $C' = u^{-1} \cdot L + C + u \cdot R$ is algebraically consistent with the original, but now involves vectors of half the length. After $\log d$ rounds of folding, the vectors reduce to single scalars that can be checked directly. This brings communication from $\mathcal{O}(d)$ down to $\mathcal{O}(\log d)$.
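To see why the fold is consistent, it helps to track the scalars alone. The sketch below (Python over a toy prime field; group elements and generators omitted, all values illustrative) performs one folding round and checks that the folded inner product picks up exactly the cross terms, mirroring $C' = u^{-1} \cdot L + C + u \cdot R$ on the commitment side:

```python
# Scalar-only sketch of one IPA folding round (no group elements).
P = 2**61 - 1  # toy field modulus

def inner(a, b):
    return sum(x * y for x, y in zip(a, b)) % P

def fold(left, right, s):
    return [(l + s * r) % P for l, r in zip(left, right)]

a = [3, 1, 4, 1]
b = [5, 9, 2, 6]
z = inner(a, b)                # the claimed inner product

h = len(a) // 2
aL, aR = a[:h], a[h:]
bL, bR = b[:h], b[h:]

# Cross terms the prover fixes (via commitments L, R) before seeing u.
L_term = inner(aR, bL)
R_term = inner(aL, bR)

u = 7                          # verifier's random challenge (arbitrary here)
u_inv = pow(u, P - 2, P)       # field inverse

a_new = fold(aL, aR, u_inv)    # a' = a_L + u^{-1} a_R
b_new = fold(bL, bR, u)        # b' = b_L + u b_R

# The folded claim is z shifted by the cross terms, matching
# C' = u^{-1} L + C + u R in the exponent.
assert inner(a_new, b_new) == (z + u_inv * L_term + u * R_term) % P
```

Because $L$ and $R$ are committed before $u$ is chosen, a cheating prover cannot steer the folded claim; repeating the round $\log d$ times shrinks the vectors to single scalars.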
Halo took this further by enabling recursive proof composition without a trusted setup, a breakthrough for building proof systems that can verify their own proofs. Halo also introduced an optimization where the verifier can derive the final folded generators directly from the originals in $\mathcal{O}(d)$ total time, avoiding per-round recomputation.
The trade-off: proofs grow to $\mathcal{O}(\log d)$ size (typically 1.5–3 KB), and verification requires a linear-size multi-scalar multiplication, making it heavier than KZG's two pairings. But the elimination of trusted setup and native support for recursion make IPA-based schemes compelling for long-lived and upgradable systems.
FRI: Post-Quantum, Hash-Based
FRI (Fast Reed-Solomon Interactive Oracle Proof of Proximity) abandons elliptic curves entirely. The prover evaluates a degree-$d$ polynomial over a large evaluation domain, commits to the evaluations via Merkle trees, and then recursively folds the evaluation table. At each round $i$, a random challenge $\alpha_i$ is used to combine pairs of evaluations:
$$ f_{i+1}(y) = \frac{f_i(x) + f_i(-x)}{2} + \alpha_i \cdot \frac{f_i(x) - f_i(-x)}{2x} $$

where $y = x^2$. Intuitively, this decomposes $f_i$ into its even and odd parts and recombines them using the random challenge $\alpha_i$. Each step halves both the domain size and the degree bound. After $\log_2 d$ rounds, the verifier spot-checks consistency along random paths through the folded layers.
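The folding formula is the even/odd coefficient split in disguise: if $f_i(x) = f_e(x^2) + x \cdot f_o(x^2)$, then $f_{i+1} = f_e + \alpha_i \cdot f_o$. The sketch below (Python over a toy prime field, all values illustrative) checks the pointwise formula against the coefficient view for one round:

```python
# One FRI folding round: pointwise formula vs. even/odd coefficient split.
P = 2**61 - 1          # toy field modulus
alpha = 987654321      # verifier's random challenge (arbitrary here)

def poly_eval(coeffs, x):
    """Evaluate a polynomial (low-to-high coefficients) via Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

f = [4, 7, 2, 9, 1, 3, 5, 8]   # degree-7 polynomial, low-to-high
f_even = f[0::2]                # f_e: coefficients at even powers
f_odd = f[1::2]                 # f_o: coefficients at odd powers
f_next = [(e + alpha * o) % P for e, o in zip(f_even, f_odd)]  # f_e + alpha*f_o

x = 31337
inv2 = pow(2, P - 2, P)
inv2x = pow(2 * x, P - 2, P)
lhs = poly_eval(f_next, x * x % P)   # f_{i+1}(y) at y = x^2
rhs = ((poly_eval(f, x) + poly_eval(f, -x % P)) * inv2
       + alpha * (poly_eval(f, x) - poly_eval(f, -x % P)) * inv2x) % P
assert lhs == rhs
assert len(f_next) == len(f) // 2    # degree bound halves each round
```

In the real protocol the prover only has Merkle-committed evaluations, not coefficients, which is why the pointwise form of the fold matters.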
Soundness relies on the distance properties of Reed-Solomon codes: two distinct polynomials of degree at most $d$ can agree on at most $d$ evaluation points, so any cheating prover will be caught with high probability by random sampling.
FRI's security rests on collision-resistant hash functions and coding-theoretic distance, with no algebraic hardness assumptions. This makes it plausibly post-quantum secure and fully transparent (no setup ceremony at all).
The cost: proofs are significantly larger, scaling as $\mathcal{O}(\log^2 d)$ and typically landing in the 40–200 KB range.
Head-to-Head Comparison
| | KZG | IPA / Halo | FRI |
|---|---|---|---|
| Proof size | $\mathcal{O}(1)$ — $\approx 48$ B | $\mathcal{O}(\log d)$ — 1.5–3 KB | $\mathcal{O}(\log^2 d)$ — 40–200 KB |
| Verifier complexity | $\mathcal{O}(1)$ — 2 pairings | $\mathcal{O}(\log d)$ rounds + MSM * | $\mathcal{O}(\log^2 d)$ — hash checks |
| Prover complexity | $\mathcal{O}(d)$ — MSM-dominated | $\mathcal{O}(d)$ — MSM-dominated | $\mathcal{O}(d \log d)$ — FFTs + hashing |
| Setup | Trusted (SRS) | Transparent / None | Transparent |
| Assumptions | Pairings, bilinear groups | Discrete log (EC) | Hash functions, coding theory |
| Post-quantum | No | No | Yes |
* IPA verification involves $\mathcal{O}(\log d)$ rounds, but each round's MSM work and the final check can bring naive total cost to $\mathcal{O}(d)$. Modern implementations like Halo amortize this across recursive verification layers.
A key insight from this table: there is no universally best scheme. KZG wins on succinctness, FRI wins on trust assumptions and quantum resilience, and IPA carves out a middle ground with transparent setup and recursion support.
What the Benchmarks Say
Theory is one thing, but how do these schemes actually perform on real hardware? Using Remco Bloemen's polynomial commitment benchmark, we can compare prover-side commitment throughput across laptops, servers, and mobile devices.
Laptop (Apple M1 Max)

Prover-side commitment throughput (field elements/sec) vs. polynomial size on an Apple M1 Max. FRI-based implementations (Winterfell, Plonky2) dominate at most sizes, with Winterfell sustaining $\sim 10^7$ elements/sec. MSM-based schemes (KZG, IPA) plateau around $10^6$ elements/sec. Figure from Remco Bloemen, CC-BY-4.0.
On a laptop, FRI-based schemes win convincingly. Hash operations and field arithmetic are fast and cache-friendly, giving Winterfell (FRI/Blake3) roughly 10x higher throughput than the best KZG implementations at moderate polynomial sizes.
Server (AWS HPC6a, 96 Cores)

Same measurement on server hardware. With 96 cores, MSM-based schemes (especially gnark-KZG) scale dramatically and close the gap with FRI. The larger memory also supports much bigger polynomials. Figure from Remco Bloemen, CC-BY-4.0.
The story changes on beefy servers. Multi-scalar multiplications parallelize well across many cores, so KZG implementations like gnark close the gap significantly. FRI still leads, but the advantage shrinks from 10x to roughly 2–3x.
Mobile (iPhone 13 Pro, A15)

On mobile hardware, memory limits cap polynomial sizes at $\sim 2^{24}$. FRI schemes (Winterfell, Plonky2) maintain a large throughput advantage under constrained resources. Figure from Remco Bloemen, CC-BY-4.0.
On constrained mobile hardware, FRI's advantage is even more pronounced. Memory limits and thermal throttling hit MSM-based schemes hard, while FRI's hash-based operations remain efficient. This makes FRI particularly attractive for client-side proving in privacy-preserving mobile applications.
The Takeaway from Benchmarks
- In this benchmark, FRI achieves the highest single-machine throughput, especially with limited cores, thanks to fast hash-based operations and good cache behavior. Results may differ across implementations and workloads.
- MSM-based schemes (KZG, IPA) scale better with parallelism. On 96-core servers, they become competitive.
- Memory is often the bottleneck, not compute. FRI is more memory-hungry and hits limits sooner at very large polynomial sizes.
On-Chain Verification: A Different Game
For blockchain applications, prover speed is only half the story. Verification cost on-chain, measured in gas, is often the binding constraint. The following table estimates the on-chain cost of verifying each PCS on Ethereum L1; actual costs vary by implementation and calldata pricing.
| Scheme | Dominant Cost | Approx. Gas |
|---|---|---|
| KZG | Pairing precompiles | $\sim 100\text{k}$–$200\text{k}$ |
| IPA / Halo | EC ops in VM | $\mathcal{O}(d)$ — scales linearly |
| FRI | Hashing + calldata | $\sim 700\text{k}$–$3\text{M}$ |
KZG verification requires just two pairing operations. Post EIP-1108, the ecPairing precompile costs $34{,}000 \times k + 45{,}000$ gas (where $k$ is the number of pairings), placing a single KZG opening check at around 113k gas.
For IPA/Halo-style polynomial commitments, we do not give a fixed gas estimate because verification is degree-dependent, dominated by large multi-scalar multiplications that scale linearly with the polynomial degree. No Ethereum precompile exists for this operation, making direct on-chain IPA verification prohibitively expensive in practice. Systems that use IPA-based proving internally typically rely on recursion, aggregation, or alternative backends like KZG when targeting on-chain verification.
FRI verification involves hash checks along Merkle authentication paths which are relatively cheap at ~50k–100k gas. However, FRI proofs are large (40–200 KB), and calldata dominates: at 16 gas per non-zero byte, proof data alone costs 640k–3.2M gas. This is consistent with independent estimates placing raw STARK verification on Ethereum in the multi-million-gas range. In practice, systems like StarkEx amortize this across thousands of transactions or wrap STARKs inside SNARKs to bring costs down, at the expense of reintroducing pairing-based assumptions.
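As a quick sanity check on these figures (EIP-1108 pairing pricing and 16 gas per non-zero calldata byte; the proof sizes are the quoted ranges, assuming worst-case all-non-zero bytes):

```python
# Back-of-the-envelope gas arithmetic for the figures above.
def pairing_gas(k):
    """ecPairing precompile cost post EIP-1108, k = number of pairings."""
    return 34_000 * k + 45_000

kzg_verify = pairing_gas(2)        # one KZG opening check = two pairings
assert kzg_verify == 113_000

# FRI calldata at 16 gas per non-zero byte (worst case, all bytes non-zero).
fri_calldata_low = 40_000 * 16     # 40 KB proof
fri_calldata_high = 200_000 * 16   # 200 KB proof
assert fri_calldata_low == 640_000
assert fri_calldata_high == 3_200_000
```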
When to Use What
Choose KZG when:
- Verification cost must be minimal (on-chain settlement, Ethereum rollups)
- Proof size is critical (bandwidth-constrained environments)
- A one-time trusted setup ceremony is acceptable
- You're building on infrastructure with pairing precompile support
Choose IPA/Halo when:
- Trusted setup is unacceptable but proof size still matters (logarithmic-size proofs, far smaller than FRI's)
- The system must be upgradable without re-running ceremonies
- Verification happens client-side, offline, or in environments where higher verifier cost is acceptable
- You are building privacy-preserving authentication, credential, or identity systems
Choose FRI when:
- Post-quantum security is a priority
- Full transparency is required
- You want strong recursion support in a FRI/STARK-oriented stack
- The prover runs on constrained or low-core hardware
- You are building STARKs or can tolerate larger proofs
The Bottom Line
Polynomial commitment schemes are not interchangeable components. They encode deep trade-offs between proof size, verification cost, trust assumptions, and future-proofing. The "right" choice depends on your deployment context: the target hardware, whether you're verifying on-chain, how much you care about post-quantum security, and whether a trusted setup is feasible.
Beyond KZG, IPA, and FRI, emerging alternatives include code/folding-based PCS (Brakedown, BaseFold, Binius), transparent pairing-based PCS (Dory), and hidden-order-group PCS (DARK). These are promising, but today they are less widely deployed in production than the big three. As proof systems mature and these newer constructions gain traction, the trade-offs will continue to shift. But for now, understanding the KZG/IPA/FRI triangle is essential for anyone building with zero-knowledge proofs.
Acknowledgements
We thank Nicolas Mohnblatt for helpful feedback on this blogpost.