How this works
A z-score (also called a standard score) tells you how far a single value sits from the mean of its distribution, measured in standard deviations. The formula is z = (x − μ) / σ, where x is the raw value, μ is the population mean, and σ is the population standard deviation. A z of 0 means the value sits exactly at the mean; +1 means one standard deviation above; −2 means two standard deviations below. The sign tells you the direction; the magnitude tells you how unusual the value is.
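The arithmetic is a one-liner; a minimal sketch in Python (the function name and the guard on σ are illustrative, not part of any particular library):

```python
def z_score(x: float, mu: float, sigma: float) -> float:
    """Standard score: how many standard deviations x lies from the mean."""
    if sigma <= 0:
        raise ValueError("standard deviation must be positive")
    return (x - mu) / sigma

print(z_score(78, 65, 10))   # 1.3
```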
The usefulness of a z-score is that it strips away the units and the scale of the original measurement. A test score of 78 doesn't mean much on its own — but a z of +1.2 tells you the student did better than about 88% of the cohort, regardless of whether the test was out of 100 or out of 250. The same logic applies to lab measurements (a patient's lab value relative to a reference range), quality control (how far a part deviates from spec), finance (a return relative to the historical mean), and any other context where you want to compare values from differently-scaled distributions on equal footing.
The percentile and probability outputs in the calculator above assume the underlying distribution is approximately normal — that is, bell-shaped and symmetric around the mean. Many real-world measurements are close enough to normal that this works fine: heights, blood pressure, exam scores in a large class, manufacturing tolerances. If your data is skewed (income, reaction times, biological growth) the z-score itself is still well-defined, but the percentile interpretation is wrong — you'd need a different reference distribution. A quick sanity check: if your raw data has a long tail or is bounded at zero, the normal-distribution percentile shown is approximate at best. For strictly normal data, the percentile is exact (give or take rounding) and the rules of thumb are: ±1σ contains 68% of values, ±2σ contains 95%, ±3σ contains 99.7%.
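The 68/95/99.7 rule is easy to verify numerically: Φ(z) can be written with the standard library's error function, so no external packages are needed. A quick check, assuming a strictly normal distribution:

```python
import math

def phi(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Probability mass within ±1σ, ±2σ, ±3σ of the mean
for k in (1, 2, 3):
    print(f"within ±{k}σ: {phi(k) - phi(-k):.4f}")
# within ±1σ: 0.6827
# within ±2σ: 0.9545
# within ±3σ: 0.9973
```

The quoted 68%, 95%, and 99.7% are the rounded versions of these exact values.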
The formula
z = (x − μ) / σ, and the percentile is Φ(z). Here x is the raw value, μ is the population mean, and σ is the population standard deviation. Φ(z) is the cumulative distribution function of the standard normal distribution — it gives the probability that a normally-distributed variable falls at or below z. The calculator computes Φ(z) using the Abramowitz-Stegun approximation, which is accurate to about seven decimal places — more than enough for any practical use. If you only have a sample mean and sample standard deviation rather than the population parameters, the resulting "z" is technically a t-statistic and the normal-distribution percentile will be slightly off for small samples (n < 30); for large samples the difference is negligible.
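For reference, here is a sketch of the classic Abramowitz-Stegun polynomial approximation (formula 26.2.17 in their handbook, with its standard coefficients; the calculator's own implementation may differ in detail). Its absolute error is below 7.5 × 10⁻⁸:

```python
import math

def phi_as(z: float) -> float:
    """Standard normal CDF via the Abramowitz-Stegun polynomial
    approximation (26.2.17); absolute error below 7.5e-8."""
    if z < 0:
        return 1.0 - phi_as(-z)   # the CDF is symmetric, so mirror negative z
    t = 1.0 / (1.0 + 0.2316419 * z)
    poly = t * (0.319381530 + t * (-0.356563782 + t * (1.781477937
           + t * (-1.821255978 + t * 1.330274429))))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return 1.0 - pdf * poly

print(round(phi_as(1.30), 4))   # 0.9032
```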
Example calculation
- A student scores 78 on a test. The class mean is 65 with a standard deviation of 10.
- z = (78 − 65) / 10 = +1.30 — i.e. 1.3 standard deviations above the mean.
- Φ(1.30) ≈ 0.9032, so the student outperformed about 90% of the class (10% scored higher).
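The steps above can be reproduced in a few lines, using the error function from Python's standard library for Φ:

```python
import math

x, mu, sigma = 78, 65, 10
z = (x - mu) / sigma
percentile = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(f"z = {z:+.2f}, percentile ≈ {percentile:.1%}")   # z = +1.30, percentile ≈ 90.3%
```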
Frequently asked questions
When can I trust the percentile output?
When the underlying distribution is approximately normal — bell-shaped and roughly symmetric around the mean. For exam scores in a large cohort, IQ tests, height, blood pressure, manufacturing tolerances, and most aggregated measurements, this assumption is good enough that the percentile is accurate within a percentage point or two. For data that is skewed (income, reaction times, anything bounded at zero with a long upper tail), the z-score is still well-defined as a distance-from-mean measure, but the percentile interpretation can be badly wrong — a z of +2 in a heavy-tailed distribution might correspond to the 90th percentile, not the 97.7th that the normal table implies. A quick rule of thumb: if you can plot a histogram of your data and it doesn't look like a bell, use the z-score for the magnitude only and don't quote the percentile.
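The gap between the normal percentile and the true percentile is easy to see with a simulated skewed sample — a lognormal distribution here, chosen purely for illustration:

```python
import random

random.seed(42)                        # deterministic sketch
data = [random.lognormvariate(0, 1) for _ in range(100_000)]

mean = sum(data) / len(data)
sd = (sum((v - mean) ** 2 for v in data) / len(data)) ** 0.5

cutoff = mean + 2 * sd                 # the value sitting at z = +2
frac_below = sum(v <= cutoff for v in data) / len(data)
print(f"empirical percentile at z = +2: {frac_below:.1%}")
# noticeably below the 97.7% a normal table would predict
```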
Should I use population or sample standard deviation?
Use the population standard deviation (σ) when you genuinely have data for the entire population — exam scores for every student in the cohort, lab values for every patient in the registry. Use the sample standard deviation (s) when you only have a subset and you're using it to estimate the population parameter. For practical purposes the calculator doesn't care which you enter — the arithmetic is identical — but the interpretation of the percentile differs. With population σ, the resulting z is a true z-statistic and the normal-distribution percentile is exact (assuming normality). With sample s in a small sample (n < 30), the resulting "z" is technically a t-statistic, and the normal percentile will slightly understate the tail probability. For most practical lab and classroom uses with reasonable sample sizes, the difference is small enough to ignore.
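Python's standard library keeps the two conventions as separate functions, which makes the distinction concrete — `statistics.pstdev` divides by n, `statistics.stdev` by n − 1 (the scores below are illustrative):

```python
import statistics

scores = [61, 65, 70, 58, 72, 64]      # illustrative cohort, mean = 65

print(statistics.pstdev(scores))       # population σ (divide by n)     ≈ 4.83
print(statistics.stdev(scores))        # sample s     (divide by n - 1) ≈ 5.29
```

The sample version is always the larger of the two, and the gap shrinks as n grows.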
What does a negative z-score mean?
It means the raw value sits below the mean. The magnitude is the same as for a positive z — a z of −1.5 is exactly as far from the mean as +1.5, just on the other side. The percentile interpretation flips: a z of −1.5 puts the value at roughly the 7th percentile (only 7% of the distribution is lower), while +1.5 puts it at the 93rd. Negative is not "bad" by itself — for some metrics low is better (cholesterol, error rates, response times) and for others high is better (test scores, fitness measures). The sign just tells you direction; you decide whether direction is favourable based on context.
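The symmetry is exact for the normal CDF: Φ(−z) = 1 − Φ(z). A quick check using the error function from the standard library:

```python
import math

def phi(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(f"{phi(-1.5):.4f}")    # 0.0668 → roughly the 7th percentile
print(f"{phi(+1.5):.4f}")    # 0.9332 → the 93rd percentile
print(phi(-1.5) + phi(1.5))  # the two always sum to 1
```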
Is a z-score the same as a t-score or standardised score?
Almost. A z-score uses the population standard deviation σ; a t-statistic uses the sample standard deviation s and refers to a t-distribution rather than a normal distribution. For samples of n ≥ 30, the t-distribution is very close to the normal distribution and the two are interchangeable in practice. "Standardised score" is an umbrella term that usually means a z-score but is sometimes used for any rescaling that produces zero-mean / unit-variance values. Two related-but-distinct conventions you may meet: the T-score in psychometrics (mean 50, SD 10 — a linear rescaling of z to avoid negative numbers) and the Stanine (1-9 scale, mean 5). All of these are simple linear transformations of the underlying z, so once you have z, converting between them is one multiplication and addition.
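Since each convention is a linear map of z, conversion is mechanical. A sketch using the standard parameters mentioned above (T-score: mean 50, SD 10; stanine: mean 5, SD 2, rounded and clamped to 1-9):

```python
def t_score(z: float) -> float:
    """Psychometric T-score: mean 50, SD 10."""
    return 50 + 10 * z

def stanine(z: float) -> int:
    """Stanine: mean 5, SD 2, rounded and clamped to the 1-9 scale."""
    return max(1, min(9, round(5 + 2 * z)))

print(t_score(1.3))    # 63.0
print(stanine(1.3))    # 8
```

Going the other way is the same map inverted, e.g. z = (T − 50) / 10.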