Scientific Notation Converter

Convert between standard decimal numbers and scientific notation (a × 10ⁿ). Also shows engineering notation (powers of 3) for SI-friendly output.

How this works

Scientific notation expresses any real number as a × 10ⁿ, where 1 ≤ |a| < 10 and n is an integer. It exists because plain decimals can't legibly cover the range science deals with: the mass of an electron is 0.000000000000000000000000000000910938 kg, which is unreadable, but 9.10938 × 10⁻³¹ kg is fine. The same notation expresses both galaxy distances (10²⁰ m and beyond) and atomic spacings (10⁻¹⁰ m), a span of more than 30 orders of magnitude.

The rules are mechanical. Move the decimal point so exactly one non-zero digit sits to the left of it; the number of moves gives |n|, with n positive if you moved left and negative if you moved right. 4500 → move left 3 places → 4.5 × 10³. 0.000456 → move right 4 places → 4.56 × 10⁻⁴. Going back is the inverse: 7.2 × 10⁵ means move the decimal point 5 places right → 720000. The same number can be written many ways (4.5 × 10³ = 45 × 10² = 0.45 × 10⁴), but standard scientific notation requires 1 ≤ |a| < 10, which makes the form unique.
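
The shift-counting rule above can be sketched in a few lines of Python. This is a minimal illustration, not part of any library; `to_scientific` is a hypothetical helper name, and it leans on log₁₀ rather than literally moving a decimal point:

```python
import math

def to_scientific(x: float) -> tuple[float, int]:
    """Return (a, n) with x == a * 10**n and 1 <= |a| < 10 (for x != 0)."""
    if x == 0:
        return 0.0, 0                     # zero has no normalized form
    n = math.floor(math.log10(abs(x)))    # how many places the point moves
    a = x / 10**n                         # coefficient after the shift
    return a, n

# to_scientific(4500)     -> (4.5, 3)
# to_scientific(0.000456) -> (~4.56, -4), up to float rounding
```

Note that floating-point arithmetic means the coefficient may come back as e.g. 4.5600000000000005 rather than exactly 4.56.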

Engineering notation is a variant where n is restricted to multiples of 3, lining up with SI prefixes (kilo = 10³, mega = 10⁶, giga = 10⁹, milli = 10⁻³, micro = 10⁻⁶, nano = 10⁻⁹, etc.). 4.5 × 10³ stays as 4.5 × 10³ (= 4.5k), but 4.5 × 10⁴ becomes 45 × 10³ in engineering form. Useful when units are involved — "45 kHz" reads naturally; "4.5 × 10⁴ Hz" doesn't. This calculator shows both forms simultaneously so you can pick whichever fits the context.
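
The exponent-snapping in engineering notation is just "round n down to the nearest multiple of 3, and scale the coefficient to compensate." A Python sketch, assuming a hypothetical `to_engineering` helper and a small SI-prefix table (neither is from a standard library):

```python
import math

# Partial SI prefix table for illustration.
SI_PREFIXES = {9: "G", 6: "M", 3: "k", 0: "", -3: "m", -6: "µ", -9: "n"}

def to_engineering(x: float) -> tuple[float, int]:
    """Return (a, n) with x == a * 10**n, 1 <= |a| < 1000, n a multiple of 3."""
    if x == 0:
        return 0.0, 0
    n = 3 * math.floor(math.floor(math.log10(abs(x))) / 3)
    return x / 10**n, n

a, n = to_engineering(45000)                      # 4.5 × 10⁴ → 45 × 10³
print(f"{a:g} {SI_PREFIXES.get(n, f'e{n}')}Hz")   # prints "45 kHz"
```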

The formula

Scientific: a × 10ⁿ where 1 ≤ |a| < 10
Engineering: a × 10ⁿ where 1 ≤ |a| < 1000 and n is a multiple of 3
From decimal: count digit-shifts; sign of n depends on direction (left = positive).
From scientific to decimal: shift the decimal point n places (right if n > 0).

a is the coefficient (mantissa). n is the power of 10. Standard scientific notation enforces 1 ≤ |a| < 10 to make the form unique. Engineering notation enforces n ∈ {…, -6, -3, 0, 3, 6, 9, …} to align with SI prefixes.
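
Both directions reduce to a single multiplication or division by a power of 10, which makes round-trips easy to sanity-check (the variable names here are illustrative):

```python
import math

# Round-trip check: scientific form back to decimal is just a * 10**n.
x = 0.000456
a, n = 4.56, -4                    # its scientific form
assert math.isclose(a * 10**n, x)  # equal within float precision
```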

Example calculation

  • Convert 0.000456 to scientific notation.
  • Move decimal 4 places right to get 4.56 (one digit before decimal). Sign of n is negative because we moved right. Result: 4.56 × 10⁻⁴.
  • Engineering form: −4 isn't a multiple of 3, so shift to −6: 0.456 × 10⁻⁶ → 456 × 10⁻⁶ (= 456 micro-units, e.g. 456 µm).
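
The exponent shift in the last step is plain integer arithmetic, which a quick Python check makes explicit (hypothetical variable names):

```python
import math

a, n = 4.56, -4                  # scientific form of 0.000456
n_eng = 3 * math.floor(n / 3)    # nearest multiple of 3 at or below n: -6
a_eng = a * 10 ** (n - n_eng)    # coefficient scaled by 10² to compensate
print(round(a_eng, 9), n_eng)    # prints "456.0 -6"
```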

Frequently asked questions

What's the "E" notation on calculators (e.g. 4.56E-4)?

Same as scientific notation, just typed in plain ASCII. "E" stands for "× 10^". So 4.56E-4 = 4.56 × 10⁻⁴ = 0.000456. Programming languages, spreadsheets, and most calculators use this format because superscripts aren't available on a keyboard. Lowercase e (4.56e-4) is identical. Don't confuse with the mathematical constant e ≈ 2.718, which is unrelated.
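
Python, like most languages, parses this form directly, so the ASCII and superscript spellings denote the same value:

```python
# "E"/"e" notation parses straight to a float.
x = float("4.56E-4")
assert x == 4.56e-4 == 0.000456
print(x)  # prints 0.000456
```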

How many significant figures should I keep?

Match the precision of your input. If you measured something to 3 significant figures (e.g. 2.34 cm), the result should also be reported to 3 sig figs (e.g. 2.34 × 10¹ mm, not 2.340 × 10¹ mm). For multi-step calculations, carry one or two extra digits through intermediate steps and round at the end. Reporting more digits than your data justifies is a common error: it overstates measurement confidence.
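
Rounding to significant figures (as opposed to decimal places) follows directly from the exponent in scientific notation. A sketch, where `round_sig` is an illustrative helper, not a built-in:

```python
import math

def round_sig(x: float, sig: int) -> float:
    """Round x to `sig` significant figures (not decimal places)."""
    if x == 0:
        return 0.0
    # Decimal places to keep so that `sig` digits survive, from the exponent.
    digits = sig - 1 - math.floor(math.log10(abs(x)))
    return round(x, digits)

print(round_sig(0.00045678, 3))  # prints 0.000457
print(round_sig(123456.0, 3))    # prints 123000.0
```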

When do I use scientific vs engineering notation?

Scientific (1 ≤ |a| < 10) is the default in academic physics, chemistry and pure-math contexts. Engineering (n is a multiple of 3) is preferred where SI prefixes apply: electronics (45 kΩ, 220 µF), telecommunications (2.4 GHz), distances (5 km, 3 mm), and engineering specs in general. They're the same number, just different "framing": scientific emphasises the most-significant digit; engineering emphasises the prefix-aligned magnitude. Most fields conventionally pick one and stick with it.

Related calculators