Solana Compute Units Explained: How CUs Affect Your Fees
Compute units (CUs) are Solana's measure of computational resources consumed during transaction execution. Every operation — arithmetic, memory access, syscall, and cross-program invocation — costs a predetermined number of CUs. If a transaction doesn't request a limit, each of its instructions is budgeted a default of 200,000 CUs, and no transaction can exceed the absolute cap of 1.4 million CUs.
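Those defaults are only a fallback: a transaction opts into its own budget by including ComputeBudgetProgram instructions. Here is a minimal sketch with @solana/web3.js (v1.x), where the keypairs, the lamport amount, and the 50,000-CU limit are illustrative placeholders rather than recommendations:

```ts
import {
  ComputeBudgetProgram,
  Keypair,
  SystemProgram,
  Transaction,
} from "@solana/web3.js";

const payer = Keypair.generate();     // placeholder signer
const recipient = Keypair.generate(); // placeholder recipient

const tx = new Transaction().add(
  // Cap this transaction at 50,000 CUs instead of the per-instruction default.
  ComputeBudgetProgram.setComputeUnitLimit({ units: 50_000 }),
  // Bid 1,000 micro-lamports per requested CU as the priority fee.
  ComputeBudgetProgram.setComputeUnitPrice({ microLamports: 1_000 }),
  // The actual work: a simple SOL transfer.
  SystemProgram.transfer({
    fromPubkey: payer.publicKey,
    toPubkey: recipient.publicKey,
    lamports: 1_000_000,
  }),
);
```

Both budget instructions apply to the transaction as a whole, so each only needs to appear once, conventionally at the front of the instruction list.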
CUs directly affect your priority fee costs. The priority fee formula is ceil(compute_unit_price × compute_unit_limit ÷ 1,000,000) lamports, where compute_unit_price is denominated in micro-lamports per CU (hence the division by 1,000,000). If you request 200,000 CUs at a price of 1,000 micro-lamports/CU, your priority fee is 200 lamports. Reducing your CU request to 50,000 reduces the fee to 50 lamports.
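The same arithmetic as a small helper, useful for sanity-checking fee estimates before you send (the function name is purely illustrative):

```ts
// Priority fee in lamports for a given CU limit and CU price.
// cuPriceMicroLamports is the price in micro-lamports per compute unit.
function priorityFeeLamports(cuLimit: number, cuPriceMicroLamports: number): number {
  return Math.ceil((cuLimit * cuPriceMicroLamports) / 1_000_000);
}

console.log(priorityFeeLamports(200_000, 1_000)); // 200 lamports
console.log(priorityFeeLamports(50_000, 1_000));  // 50 lamports
```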
Every unused compute unit you request still costs money in priority fees — precise CU limits are the single best way to reduce your Solana transaction costs.
How to Optimize Your Compute Unit Limit
The most impactful optimization for reducing Solana fees is setting a realistic compute unit limit with the ComputeBudgetProgram's SetComputeUnitLimit instruction. Use the simulateTransaction RPC method to measure actual CU consumption, then set your limit to approximately 110–120% of the simulated value to allow a safety margin without wasting compute budget.
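Here is one way that flow can look using @solana/web3.js's legacy Transaction API. It is a sketch under assumptions: the transfer stands in for your real instructions, the 1,000 micro-lamport price is illustrative, and the 15% margin is picked from the 110–120% range above.

```ts
import {
  ComputeBudgetProgram,
  Connection,
  Keypair,
  SystemProgram,
  Transaction,
} from "@solana/web3.js";

// Build a transaction whose CU limit is derived from a simulation rather than
// hard-coded. The transfer below is a placeholder for your real instructions.
async function buildWithTightCuLimit(connection: Connection, payer: Keypair): Promise<Transaction> {
  const recipient = Keypair.generate(); // placeholder recipient
  const transfer = SystemProgram.transfer({
    fromPubkey: payer.publicKey,
    toPubkey: recipient.publicKey,
    lamports: 1_000_000,
  });

  // First pass: simulate without a custom limit to measure real CU consumption.
  const probe = new Transaction().add(transfer);
  probe.feePayer = payer.publicKey;
  probe.recentBlockhash = (await connection.getLatestBlockhash()).blockhash;
  const sim = await connection.simulateTransaction(probe);
  if (sim.value.err) {
    throw new Error(`Simulation failed: ${JSON.stringify(sim.value.err)}`);
  }
  const consumed = sim.value.unitsConsumed ?? 200_000; // fall back to the default

  // Second pass: rebuild with a limit at ~115% of measured usage plus a price bid.
  const limit = Math.ceil(consumed * 1.15);
  return new Transaction().add(
    ComputeBudgetProgram.setComputeUnitLimit({ units: limit }),
    ComputeBudgetProgram.setComputeUnitPrice({ microLamports: 1_000 }),
    transfer,
  );
}
```

Note that adding the two ComputeBudgetProgram instructions makes the final transaction consume slightly more than the probe measured, which is exactly what the safety margin covers.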
Common CU consumption benchmarks: a simple SOL transfer uses ~300 CUs, a basic SPL token transfer uses ~4,000–6,000 CUs, a Jupiter swap uses ~80,000–120,000 CUs, and complex multi-hop DeFi routes can use 200,000–400,000 CUs. Knowing your program's CU profile lets you set precise limits.
Developers should note that the priority fee is calculated on the requested CU limit, not actual usage. A transaction requesting 400,000 CUs but using only 50,000 still pays the priority fee on the full 400,000 request. This is why accurate CU estimation is a critical optimization, especially for high-frequency applications.
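Plugging the numbers from that example into the formula, at an illustrative 1,000 micro-lamports per CU and with a hypothetical right-sized limit of 60,000 CUs (usage plus a ~20% margin), makes the waste concrete:

```ts
// Fee is charged on the requested limit, not the ~50,000 CUs actually used.
const feeOnOverRequest = Math.ceil((400_000 * 1_000) / 1_000_000); // 400 lamports
const feeRightSized = Math.ceil((60_000 * 1_000) / 1_000_000);     // 60 lamports
console.log({ feeOnOverRequest, feeRightSized }); // the tighter limit pays 85% less
```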




