# Scientific notation

| Quantity | Value |
| --- | --- |
| Number of water molecules in one drop of water | 1.5 × 10^{21} |
| Number of stars in the Universe | 2 × 10^{23} |
| Number of grains of sand in the Sahara desert | 1.5 × 10^{24} |

## Introduction

Names for large or small numbers are not commonly used by mathematicians, scientists and engineers.
Names of large numbers are more common in popular science and news articles intended for a general audience.
In finance "million", "billion" and "milliard" are commonly used, but names of larger numbers are rarely found.
Scientists and technicians usually express (very) large or (very) small numbers in a compact form: the *scientific notation*.
When a number represents a quantity rather than a count,
the unit of measurement combined with an SI prefix is often used: "2.36 millimeters" instead of "2.36 thousandths of a meter"
(2.36 × 10^{−3} meters in scientific notation).
*Normalized scientific notation* and
*engineering notation* are forms of scientific notation.

In scientific notation, numbers are written in the form `m` × 10^{n},
where the exponent `n` is an integer and
`m` is a decimal number. For instance, 12 345 000 000 = 12.345 × 10^{9}.
Scientific notation is fundamentally the same as the usual presentation of a number in the decimal numeral system.
Earlier we discussed that 3 000 000 = 3×1 000 000, pronounced as "three million".
Writing this as 3 × 10^{6} is writing the number in scientific notation.
We simply write a million, 1 000 000, as a power of ten:
10^{6}.

Most calculators and computer programs present large and small results in scientific notation by using "*E-notation*".
In this case `m`E`n` or `m`e`n` is used to represent
`m` × 10^{n}.
For example, 1.6 × 10^{−35} displays as 1.6E−35 or 1.6e−35
and a googol
displays as 1E100 or 1e100 in E-notation.
Typing a number in E-notation on a calculator usually requires pressing a key labeled `EXP`, `EE`, `EX` or similar.
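Many programming languages accept E-notation directly. As a small illustration (shown here in Python, chosen only for brevity), numbers can be written, parsed and printed in E-notation:

```python
# E-notation literals: m e n means m × 10^n.
googol = 1e100
small = 1.6e-35               # 1.6 × 10^-35

# Strings in E-notation parse directly to floats.
assert float("1.6E-35") == small

# The "e" format specifier prints a number back in E-notation.
print(f"{12345000000:e}")     # 1.234500e+10
print(f"{0.00062:.1e}")       # 6.2e-04
```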

In computing a similar concept is used, called *floating-point representation*.
Floating-point representation is the digital analogue of scientific notation, and generally refers to the way numbers are stored in computer memory.

## Normalized scientific notation

The standard form of scientific notation is the
*normalized scientific notation*.
Normalized scientific notation is often simply called "scientific notation", "standard form" (in the UK) or "exponential notation".
In normalized scientific notation `m`
is a real number with exactly one non-zero digit before the decimal point, i.e., 1 ≤ |`m`| < 10,
where |`m`| is the absolute value
of `m`.

Examples:

380 = 300 + 80 = (3 × 10^{2}) + (0.8 × 10^{2}) = 3.8 × 10^{2}

8442.056 = 8.442056 × 10^{3}

0.5 = 5 × 10^{−1}

0.52 = 5.2 × 10^{−1}

−0.003 123 45 = −3.12345 × 10^{−3}

A rule of thumb: The exponent indicates how many digits the decimal point shifts to the left (positive exponent) or to the right (negative exponent). For example, in 380 the decimal point moves two places to the left, thus the exponent must be 2.
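The rule of thumb above can be sketched in code. The following Python function (a hypothetical helper, not part of any standard library) computes the `m` and `n` of the normalized scientific notation; small floating-point errors in `m` are possible:

```python
import math

def normalize(x):
    """Return (m, n) such that x == m * 10**n and 1 <= |m| < 10 (x must be non-zero)."""
    n = math.floor(math.log10(abs(x)))  # how many places the decimal point shifts
    m = x / 10**n
    return m, n

print(normalize(380))      # (3.8, 2): the decimal point moved two places to the left
print(normalize(0.00062))  # m close to 6.2, n == -4
```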

The exponent `n` in a normalized scientific notation equals the
*order of magnitude* of the number, although
*order of magnitude* may also be defined with other (smaller) intervals of `m`, i.e. other than 1 ≤ |m| < 10.
The order of magnitude is used to roughly compare numbers.
For example, 5.36 × 10^{9} is *two orders of magnitude* larger than 9.22 × 10^{7}, which
means that it is closer to 100 times larger than to 1 000 times larger.
Two numbers of the same order of magnitude have roughly the same scale:
a larger value is less than ten times a smaller value of the same order of magnitude.
Scientific notation allows orders of magnitude to be more easily compared.
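Since the exponent of the normalized form equals the order of magnitude, comparing magnitudes amounts to comparing exponents. A minimal Python sketch:

```python
import math

def order_of_magnitude(x):
    # Exponent n of the normalized scientific notation m × 10^n.
    return math.floor(math.log10(abs(x)))

a, b = 5.36e9, 9.22e7
print(order_of_magnitude(a) - order_of_magnitude(b))  # 2 -> two orders of magnitude apart
print(a / b)  # the actual ratio, roughly 58, i.e. closer to 100 than to 1000
```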

## Engineering notation

Another frequently used form of scientific notation in applied science is
*engineering notation*.
In engineering notation the exponent `n` is restricted to multiples of 3, which means 1 ≤ |`m`| < 1000.
This way engineering notation explicitly matches corresponding names for large numbers and corresponding
SI prefixes.
For example:

120 × 10^{6} is "a hundred and twenty million".

12.36 × 10^{−9} s = 12.36 ns (12.36 nanoseconds).

Often scientific calculators can display in engineering notation, usually by invoking `ENG` mode.
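Restricting the exponent to a multiple of 3 is straightforward to compute. A sketch in Python (the helper name `engineering` is ours, and `m` may carry small floating-point errors):

```python
import math

def engineering(x):
    """Return (m, n) with x == m * 10**n, n a multiple of 3 and 1 <= |m| < 1000."""
    n = math.floor(math.log10(abs(x)) / 3) * 3
    m = x / 10**n
    return m, n

print(engineering(120e6))     # (120.0, 6): "120 × 10^6"
print(engineering(12.36e-9))  # m close to 12.36, n == -9: 12.36 ns
```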

## Significant figures

A number may represent a value obtained by some measurement. A measurement is always carried out with a certain
precision.
The *significant figures*
(or the *significant digits*) of such a number are digits that carry meaningful contributions to
the number's precision. The left-most significant digit in a number is the *most significant*,
the right-most significant digit in a number is the *least significant*.

For example, due to precision of measurement the population of a country might only be precise to the nearest hundred thousand. The result of the measurement is presented as 17 000 000 inhabitants. So, only the first 3 digits (170) are significant. The third significant digit just coincidentally happens to be zero.

However, the precision is not clear from the presented measurement result itself. The number 17 000 000 may as well be precise to the nearest, say, ten thousand, resulting in 4 significant digits (1700). The last two significant digits just coincidentally happen to be zero. When we write this number in scientific notation, the precision becomes explicit:

1.70 × 10^{7} (3 significant digits).

1.700 × 10^{7} (4 significant digits).

17.00 × 10^{6} (4 significant digits).

So, representing a number in scientific notation provides a way to indicate the number of significant figures. When writing a number in scientific notation, the number is generally rounded to its significant figures.
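Scientific-notation formatting in most programming languages makes the significant figures explicit. In Python, for example, the precision of the `e` format sets the number of digits after the decimal point, so `.2e` gives 3 significant digits:

```python
population = 17_000_000
print(f"{population:.2e}")  # 1.70e+07  -> 3 significant digits
print(f"{population:.3e}")  # 1.700e+07 -> 4 significant digits
```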

The number of significant digits of a measured value is independent of the size (magnitude) of a number.
Leading zeros and the position of the decimal sign (decimal point or comma) do not matter.
Leading zeros in a number are never significant, they are just place holders.
For example, 0.00062 has two significant figures: 6 and 2.
Also in this case scientific notation provides a way to write only and all significant digits:
0.00062 = 6.2 × 10^{−4}.

Trailing zeros may or may not be significant. They are significant if they carry meaningful contributions to the number's precision. They are insignificant if they simply serve as placeholders.

"Precise to the nearest ten" means that the tens place is the right-most significant figure.
The right-most significant place is dictated by the measurement's precision or uncertainty.
For instance, if we measure a length with a ruler with the smallest interval between marks at 1 mm,
we need to estimate lengths that end somewhere in between two of these marks.
The estimated digit is the last significant digit. This digit holds some uncertainty.
Say we measured a length of 11.8 mm, then digit 8 (contributing 0.8 mm) was estimated, somewhere between 11 and 12 mm (closer to 12 than to 11).
Digits 1, 1 and 8 are the number's significant digits. A reasonable *uncertainty* when measuring with a ruler could be ±0.2 mm.
So, the measurement is 11.8±0.2 mm, meaning that the actual length falls between 11.6 mm and 12.0 mm.

Uncertainties are specified to one or at most two digits, as more digits in an uncertainty are unlikely to be reliable or meaningful: 11.8±10.2 mm is a rather meaningless measurement.

The quantity should be expressed with the same precision as the uncertainty: 11.8±0.002 mm is incorrect, as it does not make sense to state the uncertainty more precisely than the measurement itself. So, the digit positions of the last significant figures in the quantity and in the uncertainty are the same: 11.8±0.2 mm and 11.8±1.2 mm are correct.

By the way: "significant digit" does *not* mean that such digit necessarily represents the actual value!
It carries meaningful contributions to the number's precision.
For example, let's say we have a measurement of 3.96 km with an uncertainty of ± 0.05 km.
This means that the actual distance falls between 3.91 km and 4.01 km.
Yet, all three digits 396 are significant. They carry significance since they indicate the actual distance within the range of uncertainty.

### Rounding

Trailing non-significant digits (*spurious* digits) can be introduced by calculations giving the number more precision than the original data allows.
In the next example, the calculated result (1.15) should have 2 significant figures and
therefore needs to be rounded to its right-most significant place.

2.3 × 0.5 = 1.15 ≈ 1.2

In the example above, 1.15 is the exact result of the multiplication. But the last digit 5 implies a larger precision than possible, given the precision of the measurements. Therefore the result should be approximated by rounding to a less precise number.

There are several ways to round a number. The most commonly used is rounding to the nearest. Rounding 542.349 to 4 significant figures (rounding to the nearest tenth) results in 542.3, and rounding 542.351 to 4 significant figures results in 542.4. This is because 542.349 is nearer to 542.3 than to 542.4, and 542.351 is nearer to 542.4 than to 542.3.

Rounding to the nearest requires some tie-breaking rule for those cases where digits after the last significant figure form exactly half the value of the last significant place. "Round half up" is the most often used tie-breaking rule: 82150 rounded to 3 significant figures (rounded to the nearest 100) results in 82200.
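Note that the built-in `round` in many languages (including Python) uses a different tie-breaking rule, round half to even. Round-half-up rounding to significant figures can be sketched with Python's `decimal` module (the helper name `round_sig` is ours):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_sig(x, sig):
    """Round x to `sig` significant figures, breaking ties by rounding half up."""
    d = Decimal(str(x))
    # adjusted() is the exponent of the most significant digit, so this
    # quantizes to the place of the last significant figure.
    place = d.adjusted() - (sig - 1)
    return d.quantize(Decimal(1).scaleb(place), rounding=ROUND_HALF_UP)

print(round_sig(82150, 3))    # 8.22E+4, i.e. 82200
print(round_sig(542.349, 4))  # 542.3
print(round_sig(542.351, 4))  # 542.4
```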

As mentioned before, leading zeros in a number are never significant: 0.0125 rounded to 2 significant digits is 0.013.

Suppose we round 5 036 789 to the nearest multiple of 100 000. The rounded number is 5 000 000, which has 2 significant digits (5 and 0); all the trailing zeros after them are insignificant. But this is not obvious from the number itself. By writing the number in scientific notation, we can indicate the significant digits:

5.0 × 10^{6} (2 significant digits).