# How to Calculate Polymarket Slippage with Historical L2 Order Books
Estimate realistic fill prices on Polymarket using historical L2 depth, weighted fills, and basis-point slippage reporting by regime.

A 7,000-contract buy on a typical Polymarket market at a quoted ask of 0.530 doesn't cost 0.530. It costs 0.5356 — because the book only has 2,000 contracts at the best level, 3,000 at the next, and you walk into a third level for the remaining 2,000. That's ~58 basis points of slippage on a single trade. If your expected edge is 40 bps, you just went negative before accounting for fees.
This is the calculation most Polymarket backtests skip. The L2 data to do it properly is available through the historical order book endpoint — here's how to use it.
## Why "best quote" isn't your fill price
On a centralized limit order book, you see depth. You know that 2,000 contracts sit at 0.530 and the next level is at 0.535. On Polymarket, the same structure exists — it's a CLOB — but most research tools only surface the top-of-book price, which creates the illusion that you can fill at the mid or best quote regardless of size.
The right question when your signal fires isn't "what was the last price?" It's "how much was available at each level at the exact moment I wanted to trade?" That's what the L2 snapshot gives you.
## Define slippage consistently — once, and in writing
Before you write any code, pick a reference price and stick with it throughout your research. The two most common choices are the bar midpoint (the average of best bid and best ask at signal time) and the best quote on your side (best ask for buys, best bid for sells). Either is defensible; mixing them between strategy versions is not.
For a buy order: slippage = (avg_fill − reference_price) / reference_price × 10,000
For a sell order: slippage = (reference_price − avg_fill) / reference_price × 10,000
Report in basis points. It's easier to compare across markets and regimes than raw price differences, and it makes the "does this strategy survive execution costs?" question concrete.
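The cost of mixing conventions is easy to see in numbers. A quick sketch with illustrative quotes (the prices here are made up and are not the ones from the worked example below): the same fill scores very differently depending on which reference you pick.

```python
# Two common reference-price conventions, applied to the same buy fill.
best_bid, best_ask = 0.525, 0.530
avg_fill = 0.5356  # weighted average fill from walking the ask ladder

mid_ref = (best_bid + best_ask) / 2   # convention 1: bar midpoint
ask_ref = best_ask                    # convention 2: best quote on the buy side

slip_vs_mid = (avg_fill - mid_ref) / mid_ref * 10_000
slip_vs_ask = (avg_fill - ask_ref) / ask_ref * 10_000

print(f"vs midpoint: {slip_vs_mid:.1f} bps")   # ~153.6 bps
print(f"vs best ask: {slip_vs_ask:.1f} bps")   # ~105.7 bps
```

Roughly 154 bps under one convention, roughly 106 bps under the other, for the identical trade. Neither is wrong; switching between them mid-research is.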
## The weighted fill function
```python
def weighted_fill(levels, target_size):
    """
    Walk a sorted order book to estimate fill price for target_size contracts.

    levels: list of [price, size] pairs, sorted best-to-worst
            (asks: ascending price; bids: descending price)
    target_size: number of contracts to fill

    Returns: (avg_fill_price, filled_quantity, unfilled_quantity)
    """
    remaining = float(target_size)
    filled = notional = 0.0
    for price, size in levels:
        take = min(remaining, float(size))
        notional += take * float(price)
        filled += take
        remaining -= take
        if remaining <= 0:
            break
    if filled == 0:
        return None, 0.0, float(target_size)
    avg_fill = notional / filled
    unfilled = max(0.0, float(target_size) - filled)
    return avg_fill, filled, unfilled


def slippage_bps(avg_fill, reference_price, side):
    if avg_fill is None or reference_price <= 0:
        return None
    if side == "buy":
        return (avg_fill - reference_price) / reference_price * 10_000
    if side == "sell":
        return (reference_price - avg_fill) / reference_price * 10_000
    raise ValueError("side must be 'buy' or 'sell'")
```
This is deliberately simple. Transparent code is easier to audit and easier to defend to a skeptical reader of your research.
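The partial-fill return path deserves a quick demonstration on a thin book. The function is restated in compact form so the snippet runs standalone, and the ladder is illustrative:

```python
def weighted_fill(levels, target_size):
    """Compact restatement of the walk-the-book fill estimator above."""
    remaining, filled, notional = float(target_size), 0.0, 0.0
    for price, size in levels:
        take = min(remaining, float(size))
        notional += take * float(price)
        filled += take
        remaining -= take
        if remaining <= 0:
            break
    if filled == 0:
        return None, 0.0, float(target_size)
    return notional / filled, filled, float(target_size) - filled

# A thin book: only 2,500 contracts visible against a 4,000-contract request.
avg_fill, filled, unfilled = weighted_fill([[0.610, 1500], [0.620, 1000]], 4000)
print(f"{avg_fill:.3f} {filled:.0f} {unfilled:.0f}")  # 0.614 2500 1500
```

Note that 1,500 contracts come back as `unfilled` rather than being silently assumed filled; that distinction matters in the edge cases below.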
## The worked example in full
Here's the ask ladder at decision time:
| Level | Price | Size |
|---|---|---|
| 1 | 0.530 | 2,000 |
| 2 | 0.535 | 3,000 |
| 3 | 0.542 | 5,000 |
Requested buy size: 7,000 contracts.
Fill path:
- 2,000 contracts at 0.530
- 3,000 contracts at 0.535
- 2,000 contracts at 0.542
Weighted average fill:
(2,000 × 0.530 + 3,000 × 0.535 + 2,000 × 0.542) / 7,000 = 0.5356
If your reference price was 0.5325 (say, the bar midpoint at signal time, which can differ from this snapshot's quotes if the book moved between signal and snapshot), slippage is (0.5356 − 0.5325) / 0.5325 × 10,000 ≈ 58 bps.
In Python, using the functions above:

```python
ask_levels = [[0.530, 2000], [0.535, 3000], [0.542, 5000]]
ref_price = 0.5325  # bar midpoint at signal time

avg_fill, filled, unfilled = weighted_fill(ask_levels, 7000)
slip = slippage_bps(avg_fill, ref_price, "buy")

print(f"Avg fill: {avg_fill:.4f}")
print(f"Filled: {filled:.0f} / 7000 contracts")
print(f"Unfilled: {unfilled:.0f}")
print(f"Slippage: {slip:.1f} bps")

# Avg fill: 0.5356
# Filled: 7000 / 7000 contracts
# Unfilled: 0
# Slippage: 57.7 bps
```
That 58 bps number isn't an edge case. On most mid-size Polymarket markets, a 7,000-contract order at normal spreads will land somewhere between 30 and 80 bps depending on depth conditions. If your strategy's gross edge estimate is below that range, it doesn't survive execution.
## Pulling historical L2 snapshots from the API
```python
import os

import requests

API_KEY = os.environ["POLYMARKETDATA_API_KEY"]
BASE = "https://api.polymarketdata.co/v1"
HEADERS = {"X-API-Key": API_KEY}

slug = "will-btc-exceed-100k-by-march-2025"  # replace with your target

resp = requests.get(
    f"{BASE}/markets/{slug}/books",
    headers=HEADERS,
    params={
        "start_ts": "2025-01-01T00:00:00Z",
        "end_ts": "2025-01-07T00:00:00Z",
        "resolution": "5m",
    },
    timeout=30,
)
resp.raise_for_status()
snapshots = resp.json()["data"]

# Each snapshot has: t (timestamp), bids [[price, size], ...], asks [[price, size], ...]
snap = snapshots[100]
print(f"Snapshot: {snap['t']}")
print(f"Top 3 asks: {snap['asks'][:3]}")
print(f"Top 3 bids: {snap['bids'][:3]}")
```
The `resolution` parameter controls how frequently you get a snapshot. For most execution modeling, 5-minute snapshots give you good granularity without pulling enormous payloads. For high-frequency signal work, drop to 1-minute.
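Putting the pieces together, here is a sketch of how snapshots feed the fill model. The two inline snapshots stand in for the API response (same `t`/`bids`/`asks` shape as above), and the walk-the-ladder logic is restated so the snippet runs standalone:

```python
# Two inline snapshots standing in for the API response.
snapshots = [
    {"t": "2025-01-01T00:00:00Z",
     "bids": [[0.525, 4000]],
     "asks": [[0.530, 2000], [0.535, 3000], [0.542, 5000]]},
    {"t": "2025-01-01T00:05:00Z",
     "bids": [[0.520, 3000]],
     "asks": [[0.535, 1000], [0.545, 2000]]},
]

ORDER_SIZE = 7000  # simulated buy size per snapshot

def buy_slippage_bps(snap, size):
    """Walk the ask ladder; reference price is the snapshot midpoint."""
    mid = (snap["bids"][0][0] + snap["asks"][0][0]) / 2
    remaining, filled, notional = float(size), 0.0, 0.0
    for price, level_size in snap["asks"]:
        take = min(remaining, float(level_size))
        notional += take * price
        filled += take
        remaining -= take
        if remaining <= 0:
            break
    if filled == 0:
        return None  # nothing visible to fill against
    return (notional / filled - mid) / mid * 10_000

# Note: the second snapshot only part-fills (3,000 of 7,000 visible); a real
# pass should also carry the unfilled remainder, per the edge cases below.
series = [(s["t"], buy_slippage_bps(s, ORDER_SIZE)) for s in snapshots]
for t, bps in series:
    print(t, f"{bps:.1f} bps")
```

Running this per snapshot over a backtest window turns a single slippage estimate into a time series you can segment by regime.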
## Four slippage edge cases that trip up real backtests
**Partial fills.** If your requested size exceeds the visible depth, `unfilled > 0`. Don't assume the remainder got filled somewhere — model it as unfilled and decide how your strategy handles it. "The position is half-on" is a real scenario that needs an explicit policy.

**Snapshot lag.** The book at `t_book` is your best approximation of the book at `t_signal`, but there's a gap. During fast-moving windows — news drops, event results, large block trades — a 5-minute-old snapshot can be wildly stale. Track `t_signal − t_book` and treat any trade with lag above 10 minutes as data quality unknown.

**Event-window spread shocks.** Spreads on Polymarket prediction markets can widen dramatically around catalysts. A market sitting at a 3-cent spread most of the week might jump to 8+ cents in the 30 minutes before a major announcement. Your slippage model needs to capture this, not average over it.

**Asymmetric side behavior.** Buy slippage and sell slippage often behave differently on the same market. Directional sentiment, liquidity provision patterns, and event proximity all create asymmetries. Segment your slippage analysis by side and by market type rather than pooling everything.
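For the snapshot-lag rule specifically, a small helper makes the policy explicit and testable. `book_lag_ok` is a hypothetical name, and the 10-minute threshold is the one suggested above:

```python
from datetime import datetime

MAX_LAG_S = 600  # seconds; 10 minutes, per the rule above

def book_lag_ok(t_signal: str, t_book: str, max_lag_s: int = MAX_LAG_S) -> bool:
    """Return True if the book snapshot is recent enough to trust at signal time."""
    sig = datetime.fromisoformat(t_signal.replace("Z", "+00:00"))
    book = datetime.fromisoformat(t_book.replace("Z", "+00:00"))
    # The book must predate the signal and be no more than max_lag_s stale.
    return 0 <= (sig - book).total_seconds() <= max_lag_s

print(book_lag_ok("2025-01-01T00:07:00Z", "2025-01-01T00:05:00Z"))  # True  (2 min lag)
print(book_lag_ok("2025-01-01T00:20:00Z", "2025-01-01T00:05:00Z"))  # False (15 min lag)
```

Trades that fail the check go into a "data quality unknown" bucket rather than into your slippage statistics.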
## A minimal slippage dashboard
Once you've run this on a real strategy, the numbers to track are: median slippage in bps, P90 slippage in bps, fill ratio by size bucket (what fraction of 500-contract, 1,000-contract, 5,000-contract orders actually filled), and slippage broken out by volatility regime. That last cut usually shows you the most — calm markets and event windows are often different enough to need separate execution policies.
If P90 slippage is more than 2× the median, your strategy is surviving on favorable conditions and occasionally getting crushed. That's a different risk profile than it looks on paper.
All data from the polymarketdata.co API. Historical order book endpoint docs at polymarketdata.co/docs.