Normalized number

In applied mathematics, a number is normalized when it is written in scientific notation with one nonzero decimal digit before the decimal point. Thus, a real number when written out in normalized scientific notation is as follows:

$\pm d_0.d_1 d_2 d_3 \dots \times 10^n$

where $n$ is an integer, $d_0, d_1, d_2, d_3, \ldots$ are the digits of the number in base 10, and $d_0$ is not zero. That is, its leading (i.e. leftmost) digit is not zero and is followed by the decimal point. This is the standard form of scientific notation. An alternative style is to have the first non-zero digit after the decimal point.
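As a concrete illustration, the following minimal Python sketch computes the mantissa and exponent of this normalized form; the helper name normalize_base10 is hypothetical and not part of the article.

```python
import math

def normalize_base10(x: float) -> tuple[float, int]:
    """Return (mantissa, exponent) such that x == mantissa * 10**exponent
    and 1 <= abs(mantissa) < 10.  (Hypothetical helper for illustration.)"""
    if x == 0:
        raise ValueError("zero has no normalized form")
    # floor(log10(|x|)) is the power of ten of the leading nonzero digit.
    exponent = math.floor(math.log10(abs(x)))
    mantissa = x / 10**exponent
    return mantissa, exponent
```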

Examples

As examples, the number $x = 918.082$ in normalized form is

$9.18082 \times 10^2,$

while the number $-0.00574012$ in normalized form is

$-5.74012 \times 10^{-3}.$

Clearly, any non-zero real number can be normalized.
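Using the normalize_base10 sketch above, these two examples can be checked directly (results are subject to ordinary floating-point rounding):

```python
print(normalize_base10(918.082))       # approximately (9.18082, 2)
print(normalize_base10(-0.00574012))   # approximately (-5.74012, -3)
```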

Other bases

The same definition holds if the number is represented in another radix (that is, base of enumeration), rather than base 10. In base $b$ a normalized number will have the form

$\pm d_0.d_1 d_2 d_3 \dots \times b^n,$

where again $d_0 \neq 0$, and the "digits" $d_0, d_1, d_2, d_3, \ldots$ are integers between 0 and $b - 1$.
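The base-10 sketch above generalizes directly to an arbitrary base. The following is again an illustrative sketch, not a definitive implementation; the guard at the end only compensates for rounding in math.log at exact powers of the base.

```python
import math

def normalize(x: float, base: int = 10) -> tuple[float, int]:
    """Return (mantissa, exponent) such that x == mantissa * base**exponent
    and 1 <= abs(mantissa) < base.  (Hypothetical helper for illustration.)"""
    if x == 0:
        raise ValueError("zero has no normalized form")
    exponent = math.floor(math.log(abs(x), base))
    mantissa = x / base**exponent
    # math.log can round at exact powers of the base; nudge back into range.
    if abs(mantissa) >= base:
        mantissa /= base
        exponent += 1
    elif abs(mantissa) < 1:
        mantissa *= base
        exponent -= 1
    return mantissa, exponent
```

For example, normalize(12.0, 2) returns (1.5, 3), reflecting $12 = 1.5 \times 2^3$.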

In many computer systems, floating-point numbers are represented internally using this normalized form for their binary representations; for details, see Normal number (computing). Converting a number to base two and normalizing it are the first steps in storing a real number as a binary floating-point number in a computer, though bases of eight and sixteen are also used. Although the point is described as "floating", for a normalized floating-point number its position is fixed, the movement being reflected in the different values of the power.
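The normalized binary form of a value can be inspected with Python's standard-library math.frexp and float.hex; the numbers shown in the comments below are approximate.

```python
import math

x = 918.082

# math.frexp returns (m, e) with x == m * 2**e and 0.5 <= |m| < 1,
# i.e. the "first nonzero digit after the point" convention in base 2.
m, e = math.frexp(x)
print(m, e)      # roughly 0.8966, 10  ->  x == 0.8966... * 2**10

# float.hex shows the IEEE 754 normalized significand 1.xxx... * 2**e.
print(x.hex())   # roughly '0x1.cb0a...p+9'  ->  x == 1.7931... * 2**9
```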
