How this works
PCR efficiency tells you how much of the theoretical maximum amplification each cycle is actually achieving. The theoretical maximum is exact doubling per cycle — efficiency 100% means each cycle multiplies the product by exactly 2. The practical way to measure this is a standard curve: prepare a tenfold serial dilution series of your template, run qPCR on each dilution, and plot Ct against log₁₀(input concentration). The points should fall on a straight line. The slope of that line is what feeds this calculator.
The formula is E = 10^(−1/slope) − 1, where E is the efficiency expressed as a fraction (multiply by 100 for percent). The reasoning is geometric. If amplification is perfect, each tenfold drop in starting material adds exactly log₂(10) = 3.322 cycles to the Ct, because that's how many doublings you need to make up the missing factor of 10. So a "perfect" qPCR has a standard-curve slope of −3.322. A slope steeper than that (more negative, e.g. −3.6) means you need more than 3.322 cycles to make up each tenfold dilution step — i.e. amplification is less than perfect doubling, efficiency below 100%. A slope shallower than −3.322 (e.g. −3.0) means you're apparently gaining ground faster than doubling can explain, which is biologically impossible and indicates inhibitors, primer-dimer or template contamination. The 10^(−1/slope) algebra just inverts the geometry to give you the per-cycle multiplier directly.
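The slope-to-efficiency conversion is a one-liner; here is a minimal sketch in Python (the function name is ours, not part of any calculator API):

```python
import math

def pcr_efficiency(slope):
    """Efficiency fraction E from a standard-curve slope (Ct vs log10 input)."""
    return 10 ** (-1.0 / slope) - 1.0

# A perfect slope of -log2(10) ≈ -3.322 corresponds to exact doubling:
perfect = -math.log2(10)
print(round(pcr_efficiency(perfect) * 100, 1))  # 100.0
```

Steeper slopes give E below 1.0 (sub-doubling), shallower slopes give E above 1.0 (the impossible ">100%" case).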
Acceptable working range is 90-110% efficiency, with R² ≥ 0.98 across the standard curve points. Below 90% indicates the reaction is struggling — suboptimal primers, template too dilute, inhibitors present — and the resulting Ct values are noisier than they should be. Above 110% almost always means an artefact, most commonly primer-dimer amplifying in the most-dilute wells, which drags their Ct down and flattens the slope. R² below 0.98 means individual standard-curve points are off the line, often because the dilutions weren't made cleanly or the high-dilution end is hitting the limit of detection. If you're going to use ΔΔCt fold change downstream, efficiency for the target and reference primers should ideally agree to within about 5 percentage points; if not, switch to the Pfaffl method, which puts the actual measured efficiencies into the formula.
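These acceptance thresholds can be wrapped in a small quality gate; the function and its verdict strings are our own sketch of the checks above, not the calculator's output:

```python
def assess_standard_curve(slope, r_squared):
    """Quality gate using the 90-110% efficiency and R² ≥ 0.98 thresholds.
    Returns (efficiency_fraction, verdict)."""
    eff = 10 ** (-1.0 / slope) - 1.0
    if r_squared < 0.98:
        return eff, "poor fit: points off the line, check dilutions / limit of detection"
    if eff < 0.90:
        return eff, "low efficiency: suspect primers, dilute template, inhibitors"
    if eff > 1.10:
        return eff, "apparent efficiency >110%: suspect primer-dimer or contamination"
    return eff, "ok"

eff, verdict = assess_standard_curve(-3.32, 0.999)
print(round(eff * 100, 1), verdict)  # 100.1 ok
```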
The formula
E = 10^(−1/slope) − 1, and the per-cycle amplification factor f = 10^(−1/slope) = 1 + E.

Slope is the slope of the line on a Ct vs log₁₀(input concentration) plot, fitted across at least 4-5 points of a tenfold serial dilution. It will always be negative (Ct decreases as concentration increases). E is the efficiency as a fraction; multiply by 100 to express it as a percent. f is the per-cycle amplification factor: 2.000 means perfect doubling, 1.800 means each cycle multiplies the product by 1.8. R² should be ≥ 0.98 for the standard curve to be considered well fitted; the calculator accepts an R² value as a quality indicator but does not use it in the efficiency math itself.
Example calculation
- Run a 10-fold serial dilution standard curve, get a slope of −3.32 with R² = 0.999.
- E = 10^(−1/−3.32) − 1 = 10^0.3012 − 1 = 2.001 − 1 = 1.001 ≈ 100%.
- Per-cycle factor f = 2.001 — the reaction is doubling per cycle as designed. Verdict: good. Safe to use ΔΔCt.
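The arithmetic in this example can be checked directly:

```python
slope = -3.32
E = 10 ** (-1.0 / slope) - 1   # efficiency as a fraction
f = 1 + E                      # per-cycle amplification factor
print(round(E, 3), round(f, 3))  # 1.001 2.001
```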
Frequently asked questions
What slope range is acceptable?
A slope of exactly −3.322 corresponds to 100% efficiency. Acceptable working range is roughly −3.10 to −3.60, which maps to efficiencies of 90-110%. Outside that range you should troubleshoot before trusting downstream Ct comparisons. Slope steeper than −3.60 (e.g. −3.8 or −4.0) means efficiency below 90% — the reaction is amplifying poorly; check primer design, look for inhibitors carried over from RNA prep, and consider whether your highest-Ct points are too dilute to be in the linear range. Slope shallower than −3.10 (e.g. −2.9 or −3.0) means apparent efficiency above 110% — biologically impossible; almost always primer-dimer contamination of the high-dilution wells, or pipetting drift in the dilution series. R² also needs to be ≥ 0.98; otherwise the slope itself is not well determined.
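You can check the mapping at the window edges numerically — note they land just outside 90% and 110%, which is why the slope range is stated as rough rather than exact:

```python
def eff_pct(slope):
    """Percent efficiency from a standard-curve slope."""
    return (10 ** (-1.0 / slope) - 1) * 100

for s in (-3.60, -3.322, -3.10):
    print(s, round(eff_pct(s), 1))
# -3.6 gives ~89.6%, -3.322 gives 100.0%, -3.1 gives ~110.2%
```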
How many standard-curve points should I use?
At least four, ideally five, all in technical triplicate. Five points covering five orders of magnitude (e.g. 10⁵, 10⁴, 10³, 10², 10¹ copies per reaction) gives the slope plenty of leverage and lets you spot which dilutions, if any, are falling out of the linear range. Three points or fewer makes the slope estimate noisy enough that the resulting efficiency can swing 10+ percentage points just from pipetting variation in the dilutions. The dilutions themselves matter as much as the count — make a single master dilution stock, then aliquot freshly each time rather than re-diluting from a working stock that has been freeze-thawed multiple times. The most-dilute point is the one most likely to fail; if you see your highest-Ct triplicate spreading across multiple cycles, that point is below your limit of quantification and you should drop it from the regression.
My efficiency is 85% — should I just keep going?
Probably not for publication-quality work, and definitely not for ΔΔCt comparisons against a reference primer with markedly different efficiency. 85% means each cycle is multiplying by about 1.85 instead of 2, which compounds across 30+ cycles into a meaningful systematic bias in your apparent fold change. There are two practical paths. (1) Troubleshoot: redesign the primers (most common fix — try a primer-design tool with strict secondary-structure and Tm matching), check for inhibitors (a 1:10 dilution of your sample should run with a Ct about 3.32 cycles higher; if the shift is much smaller, inhibitors are present), and verify the dilution series was made fresh rather than carried over from a degraded working stock. (2) If 85% is what you have and you must press on, switch from Livak ΔΔCt to the Pfaffl method, which uses the actual measured efficiency for each primer pair. Pfaffl gives a less biased fold change but does not fix sensitivity issues — you'll still need more starting template than a 100%-efficient assay would.
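The 1:10 dilution check in (1) is easy to script; the 0.5-cycle tolerance below is our assumption, not a published standard:

```python
import math

def looks_inhibited(ct_neat, ct_diluted, dilution=10, tolerance=0.5):
    """True if diluting 1:dilution shifts Ct by noticeably less than the
    expected log2(dilution) cycles (~3.32 for 1:10) — dilution relieving
    inhibition is a classic sign that inhibitors are present."""
    expected = math.log2(dilution)
    return (ct_diluted - ct_neat) < expected - tolerance

print(looks_inhibited(22.0, 25.3))  # False — shift of 3.3 cycles, close to 3.32
print(looks_inhibited(22.0, 24.1))  # True — only 2.1 cycles, suspect inhibitors
```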
How does PCR efficiency relate to ΔΔCt fold change?
The 2^(−ΔΔCt) Livak formula assumes both target and reference primers amplify at exactly 100% efficiency — i.e. each cycle doubles the product for both. When efficiencies are close (say target 98% and reference 102%, both within ~5 points of 100%), the assumption introduces a small enough error that Livak ΔΔCt is fine for most discovery work. When efficiencies differ meaningfully (e.g. target 88% and reference 105%) the simple ΔΔCt formula is no longer valid: the same Ct difference no longer corresponds to the same fold change for both genes. The fix is the Pfaffl method, which generalises the formula to ratio = (E_target)^(−ΔCt_target) / (E_ref)^(−ΔCt_ref), where each E is the per-cycle amplification factor f = 1 + efficiency from this calculator (one per primer pair) and ΔCt = Ct(treated) − Ct(control). In practice, validate efficiencies for each primer pair before relying on any qPCR fold change, and either match efficiencies (by improving the worse primer) or use Pfaffl when they don't match.
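A sketch of the Pfaffl ratio, taking efficiency fractions as input and converting them to per-cycle factors internally (the function name is ours):

```python
def pfaffl_ratio(e_target, e_ref, dct_target, dct_ref):
    """Pfaffl relative expression ratio.
    e_* are efficiency fractions (1.0 = 100%), converted to per-cycle
    factors 1 + e. dct_* = Ct(treated) - Ct(control) for each primer pair."""
    return (1 + e_target) ** (-dct_target) / (1 + e_ref) ** (-dct_ref)

# At 100% efficiency for both pairs this reduces to Livak 2^(-ΔΔCt):
print(pfaffl_ratio(1.0, 1.0, -2.0, 0.0))  # 4.0
# With mismatched efficiencies (88% vs 105%), the same Cts give a different answer:
print(round(pfaffl_ratio(0.88, 1.05, -2.0, 0.0), 2))  # 3.53
```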