Single-precision floating-point format

Single-precision floating-point format is a computer number format that occupies 4 bytes (32 bits) in computer memory and represents a wide dynamic range of numeric values by using a floating radix point.

In IEEE 754-2008 the 32-bit base-2 format is officially referred to as binary32. It was called single in IEEE 754-1985. In older computers, different floating-point formats of 4 bytes were used, e.g., GW-BASIC's single-precision data type was the 32-bit MBF floating-point format.

One of the first programming languages to provide single- and double-precision floating-point data types was Fortran. Before the widespread adoption of IEEE 754-1985, the representation and properties of floating-point data types depended on the computer manufacturer and computer model.

Single-precision binary floating-point is used due to its wider range over fixed point (of the same bit-width), even if at the cost of precision. A signed 32-bit integer can have a maximum value of 2^31 − 1 = 2,147,483,647, whereas the maximum representable IEEE 754 floating-point value is (2 − 2^−23) × 2^127 ≈ 3.402823 × 10^38. All integers with 7 or fewer decimal digits can be converted to an IEEE 754 floating-point value without loss of precision, some integers with up to 9 significant decimal digits can be converted without loss of precision, but no more than 9 significant decimal digits can be stored. As an example, the 32-bit integer 2,147,483,647 converts to 2,147,483,648 in IEEE 754 form.
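
This last example can be checked with a short Python sketch using the standard struct module; packing with the '>f' format rounds a value to the nearest binary32:

```python
import struct

# Python's float is binary64; packing with '>f' rounds it to the nearest
# binary32 value, and unpacking returns that value as a binary64 again.
as_f32 = struct.unpack('>f', struct.pack('>f', 2147483647.0))[0]
print(as_f32)   # 2147483648.0
```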

Single precision is termed REAL in Fortran, float in C, C++, C#, Java, Float in Haskell, and Single in Object Pascal (Delphi), Visual Basic, and MATLAB. However, float in Python, Ruby, PHP, and OCaml and single in versions of Octave before 3.2 refer to double-precision numbers. In most implementations of PostScript, the only real precision is single.

IEEE 754 single-precision binary floating-point format: binary32

The IEEE 754 standard specifies a binary32 as having:

  • Sign bit: 1 bit
  • Exponent width: 8 bits
  • Significand precision: 24 bits (23 explicitly stored)
  • This gives from 6 to 9 significant decimal digits precision (if a decimal string with at most 6 significant decimal digits is converted to IEEE 754 single-precision form and then converted back to the same number of significant decimal digits, then the final string should match the original; and if an IEEE 754 single-precision number is converted to a decimal string with at least 9 significant decimal digits and then converted back, then the final number must match the original).

    The sign bit determines the sign of the number, which is the sign of the significand as well. The exponent is either an 8-bit signed integer from −128 to 127 (2's complement) or an 8-bit unsigned integer from 0 to 255, which is the accepted biased form in the IEEE 754 binary32 definition. If the unsigned integer format is used, the exponent value used in the arithmetic is the exponent shifted by a bias – for the IEEE 754 binary32 case, an exponent value of 127 represents the actual zero (i.e. for 2^(e − 127) to be one, e must be 127). Exponents range from −126 to +127 because exponents of −127 (all 0s) and +128 (all 1s) are reserved for special numbers.

    The true significand includes 23 fraction bits to the right of the binary point and an implicit leading bit (to the left of the binary point) with value 1, unless the exponent is stored with all zeros. Thus only 23 fraction bits of the significand appear in the memory format, but the total precision is 24 bits (equivalent to log10(2^24) ≈ 7.225 decimal digits). The bits are laid out as follows: bit 31 is the sign, bits 30–23 are the exponent, and bits 22–0 are the fraction.
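
    The 6-to-9-digit behaviour described above can be illustrated with a small Python sketch using the standard struct module; the helper name to_f32 is illustrative, not part of the standard library:

```python
import struct

def to_f32(x: float) -> float:
    # Round a Python float (binary64) to the nearest binary32 value.
    return struct.unpack('>f', struct.pack('>f', x))[0]

x = to_f32(0.1)                # nearest binary32 value to 0.1
s = format(x, '.9g')           # print with 9 significant decimal digits
print(s)                       # 0.100000001
print(to_f32(float(s)) == x)   # True: 9 digits are enough to recover the value
```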

    The real value assumed by a given 32-bit binary32 data with a given sign, biased exponent e (the 8-bit unsigned integer), and a 23-bit fraction is

    (−1)^b_31 × (1.b_22 b_21 … b_0)_2 × 2^((b_30 b_29 … b_23)_2 − 127),

    which in decimal yields

    value = (−1)^sign × (1 + Σ_{i=1}^{23} b_{23−i}·2^−i) × 2^(e − 127).

    Consider, for example, the bit pattern 0 01111100 01000000000000000000000 (3E20 0000 in hexadecimal). In this example:

  • sign = b_31 = 0,
  • (−1)^sign = (−1)^0 = +1 ∈ {−1, +1},
  • e = (b_30 b_29 … b_23)_2 = Σ_{i=0}^{7} b_{23+i}·2^i = 124 ∈ {1, …, (2^8 − 1) − 1} = {1, …, 254},
  • 2^(e − 127) = 2^(124 − 127) = 2^−3 ∈ {2^−126, …, 2^127},
  • 1.b_22 b_21 … b_0 = 1 + Σ_{i=1}^{23} b_{23−i}·2^−i = 1 + 1·2^−2 = 1.25 ∈ {1, 1 + 2^−23, …, 2 − 2^−23} ⊂ [1; 2 − 2^−23] ⊂ [1; 2),

    thus:

    value = (+1) × 1.25 × 2^−3 = +0.15625.

    Note:

  • 1 + 2^−23 ≈ 1.000 000 119,
  • 2 − 2^−23 ≈ 1.999 999 881,
  • 2^−126 ≈ 1.175 494 35 × 10^−38,
  • 2^+127 ≈ 1.701 411 83 × 10^+38.
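
    A minimal Python sketch of the decoding formula above, applied to the example bit pattern; the function name decode_binary32 is illustrative, and it handles normal numbers only:

```python
# Decode a normal binary32 value given its 32-bit pattern as a Python int.
def decode_binary32(bits: int) -> float:
    sign = (bits >> 31) & 0x1            # b31
    exponent = (bits >> 23) & 0xFF       # e, the biased 8-bit exponent
    fraction = bits & 0x7FFFFF           # b22...b0 as an integer
    significand = 1 + fraction / 2**23   # implicit leading 1 plus the fraction
    return (-1)**sign * significand * 2**(exponent - 127)

print(decode_binary32(0x3E200000))       # 0.15625
```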

    Exponent encoding

    The single-precision binary floating-point exponent is encoded using an offset-binary representation, with the zero offset being 127; this is also known as the exponent bias in the IEEE 754 standard.

  • Emin = 01H−7FH = −126
  • Emax = FEH−7FH = 127
  • Exponent bias = 7FH = 127
  • Thus, in order to get the true exponent as defined by the offset-binary representation, the offset of 127 has to be subtracted from the stored exponent.

    The stored exponents 00H and FFH are interpreted specially.

    The minimum positive normal value is 2^−126 ≈ 1.18 × 10^−38 and the minimum positive (denormal) value is 2^−149 ≈ 1.4 × 10^−45.
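
    These limits can be checked by reconstructing the values from their bit patterns with Python's struct module; a small sketch:

```python
import struct

# Smallest positive normal (0x00800000) and subnormal (0x00000001) binary32
# values, compared with 2**-126 and 2**-149.
min_normal = struct.unpack('>f', bytes.fromhex('00800000'))[0]
min_subnormal = struct.unpack('>f', bytes.fromhex('00000001'))[0]
print(min_normal == 2**-126)      # True  (~1.18e-38)
print(min_subnormal == 2**-149)   # True  (~1.4e-45)
```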

    Converting from decimal representation to binary32 format

    In general, refer to the IEEE 754 standard itself for the strict conversion (including the rounding behaviour) of a real number into its equivalent binary32 format.

    Here we can show how to convert a base 10 real number into an IEEE 754 binary32 format using the following outline:

  • consider a real number with an integer and a fraction part such as 12.375
  • convert and normalize the integer part into binary
  • convert the fraction part using the following technique as shown here
  • add the two results and adjust them to produce a proper final conversion

    Conversion of the fractional part: consider 0.375, the fractional part of 12.375. To convert it into a binary fraction, multiply the fraction by 2, take the integer part, and re-multiply the new fraction by 2 until a fraction of zero is found or until the precision limit is reached, which is 23 fraction digits for the IEEE 754 binary32 format.

    0.375 × 2 = 0.750 = 0 + 0.750 => b_−1 = 0; the integer part represents the binary fraction digit. Re-multiply 0.750 by 2 to proceed:

    0.750 × 2 = 1.500 = 1 + 0.500 => b_−2 = 1

    0.500 × 2 = 1.000 = 1 + 0.000 => b_−3 = 1, fraction = 0.000, terminate

    We see that (0.375)_10 can be exactly represented in binary as (0.011)_2. Not all decimal fractions can be represented in a finite-digit binary fraction. For example, decimal 0.1 cannot be represented exactly in binary, so it can only be approximated.
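
    The multiply-by-2 technique can be sketched in a few lines of Python (the function name fraction_bits is illustrative):

```python
# Emit up to max_bits binary fraction digits of frac, stopping early if the
# remaining fraction becomes exactly zero.
def fraction_bits(frac: float, max_bits: int = 23) -> str:
    bits = []
    for _ in range(max_bits):
        frac *= 2
        bit = int(frac)        # the integer part is the next binary digit
        bits.append(str(bit))
        frac -= bit
        if frac == 0:
            break
    return ''.join(bits)

print(fraction_bits(0.375))    # '011'  (terminates)
print(fraction_bits(0.1))      # 23 digits; 0.1 has no finite binary expansion
```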

    Therefore, (12.375)_10 = (12)_10 + (0.375)_10 = (1100)_2 + (0.011)_2 = (1100.011)_2

    Since the IEEE 754 binary32 format requires real values to be represented in (1.x_1 x_2 … x_23)_2 × 2^e format (see Normalized number, Denormalized number), 1100.011 is shifted to the right by 3 digits to become (1.100011)_2 × 2^3

    Finally we can see that: (12.375)_10 = (1.100011)_2 × 2^3

    From which we deduce:

  • The exponent is 3 (and in the biased form it is therefore 130 = 1000 0010)
  • The fraction is 100011 (looking to the right of the binary point)
  • From these we can form the resulting 32 bit IEEE 754 binary32 format representation of 12.375 as: 0-10000010-10001100000000000000000 = 41460000H

    Note: consider converting 68.123 into IEEE 754 binary32 format: Using the above procedure you expect to get 42883EF9H with the last 4 bits being 1001. However, due to the default rounding behaviour of IEEE 754 format, what you get is 42883EFAH, whose last 4 bits are 1010.
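
    Both conversions can be checked with Python's struct module, which applies the default IEEE 754 rounding when packing to binary32:

```python
import struct

# Packing with '>f' rounds to the nearest binary32 value (round to nearest,
# ties to even) and lets us inspect the resulting bit pattern as hex.
print(struct.pack('>f', 12.375).hex())   # '41460000'
print(struct.pack('>f', 68.123).hex())   # '42883efa' (rounded up in the last bit)
```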

    Ex 1: Consider decimal 1. We can see that: (1)_10 = (1.0)_2 × 2^0

    From which we deduce:

  • The exponent is 0 (and in the biased form it is therefore 127 = 0111 1111 )
  • The fraction is 0 (everything to the right of the binary point in 1.0 is 0, i.e. 000...0)
  • From these we can form the resulting 32 bit IEEE 754 binary32 format representation of real number 1 as: 0-01111111-00000000000000000000000 = 3f800000H

    Ex 2: Consider a value 0.25. We can see that: (0.25)_10 = (1.0)_2 × 2^−2

    From which we deduce:

  • The exponent is −2 (and in the biased form it is 127+(−2)= 125 = 0111 1101 )
  • The fraction is 0 (everything to the right of the binary point in 1.0 is zero)
  • From these we can form the resulting 32 bit IEEE 754 binary32 format representation of real number 0.25 as: 0-01111101-00000000000000000000000 = 3e800000H

    Ex 3: Consider a value of 0.375. We saw that 0.375 = (1.1)_2 × 2^−2

    Hence after determining a representation of 0.375 as (1.1)_2 × 2^−2 we can proceed as above:

  • The exponent is −2 (and in the biased form it is 127+(−2)= 125 = 0111 1101 )
  • The fraction is 1 (the single digit to the right of the binary point in 1.1 is 1)
  • From these we can form the resulting 32 bit IEEE 754 binary32 format representation of real number 0.375 as: 0-01111101-10000000000000000000000 = 3ec00000H

    Single-precision examples

    These examples are given in bit representation, in hexadecimal and binary, of the floating-point value. This includes the sign, (biased) exponent, and significand.

    3f80 0000 = 0 01111111 00000000000000000000000 = 1
    c000 0000 = 1 10000000 00000000000000000000000 = −2
    7f7f ffff = 0 11111110 11111111111111111111111 = (1 − 2^−24) × 2^128 ≈ 3.402823466 × 10^38 (max finite positive value in single precision)
    0080 0000 = 0 00000001 00000000000000000000000 = 2^−126 ≈ 1.175494351 × 10^−38 (min normalized positive value in single precision)
    0000 0000 = 0 00000000 00000000000000000000000 = 0
    8000 0000 = 1 00000000 00000000000000000000000 = −0
    7f80 0000 = 0 11111111 00000000000000000000000 = infinity
    ff80 0000 = 1 11111111 00000000000000000000000 = −infinity
    3eaa aaab = 0 01111101 01010101010101010101011 ≈ 1/3

    By default, 1/3 rounds up, instead of down like double precision, because of the even number of bits in the significand. The bits of 1/3 beyond the rounding point are 1010... which is more than 1/2 of a unit in the last place.
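
    A few of the rows above can be reproduced from their hexadecimal bit patterns with Python's struct module; a short sketch:

```python
import struct

# Reinterpret selected bit patterns from the list above as binary32 values.
for pattern in ('3f800000', 'c0000000', '7f7fffff', '00800000', '7f800000', '3eaaaaab'):
    print(pattern, struct.unpack('>f', bytes.fromhex(pattern))[0])
# 3f800000 1.0
# c0000000 -2.0
# 7f7fffff 3.4028234663852886e+38
# 00800000 1.1754943508222875e-38
# 7f800000 inf
# 3eaaaaab 0.3333333432674408
```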

    Converting from single-precision binary to decimal

    We start with the hexadecimal representation of the value, 41c80000, in this example, and convert it to binary:

    41c8 0000_16 = 0100 0001 1100 1000 0000 0000 0000 0000_2

    then we break it down into three parts: sign bit, exponent, and significand.

  • Sign bit: 0
  • Exponent: 1000 0011_2 = 83_16 = 131
  • Significand: 100 1000 0000 0000 0000 0000_2 = 480000_16
  • We then add the implicit 24th bit to the significand:

  • Significand: 1100 1000 0000 0000 0000 0000_2 = C80000_16
  • and decode the exponent value by subtracting 127:

  • Raw exponent: 83_16 = 131
  • Decoded exponent: 131 − 127 = 4
  • Each of the 24 bits of the significand (including the implicit 24th bit), from bit 23 down to bit 0, represents a value starting at 1 and halving with each subsequent bit, as follows:

    bit 23 = 1
    bit 22 = 0.5
    bit 21 = 0.25
    bit 20 = 0.125
    bit 19 = 0.0625
    bit 18 = 0.03125
    .
    .
    bit 0 = 0.00000011920928955078125

    The significand in this example has three bits set: bit 23, bit 22, and bit 19. We can now decode the significand by adding the values represented by these bits.

  • Decoded significand: 1 + 0.5 + 0.0625 = 1.5625 = C80000_16/2^23
  • We then multiply this significand by 2 raised to the power of the decoded exponent to get the final result:

    1.5625 × 2^4 = 25

    Thus

    41c8 0000 = 25

    This is equivalent to:

    n = (−1)^s × (1 + m/2^23) × 2^(x − 127)

    where s is the sign bit, x is the raw (biased) exponent, and m is the 23-bit stored significand interpreted as an integer.
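
    The worked example can be confirmed directly with Python's struct module:

```python
import struct

# The four bytes 41 c8 00 00, read as a big-endian binary32 value, give 25.0.
print(struct.unpack('>f', bytes.fromhex('41c80000'))[0])   # 25.0
```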

    Precision limits on integer values

  • Integers in [−16777216, 16777216] can be exactly represented
  • Integers in [−33554432, −16777217] or in [16777217, 33554432] round to a multiple of 2 (see the sketch after this list)
  • Integers in [−2^26, −2^25 − 1] or in [2^25 + 1, 2^26] round to a multiple of 4
  • ....
  • Integers in [−2^127, −2^126 − 1] or in [2^126 + 1, 2^127] round to a multiple of 2^103
  • Integers in [−2^128 + 2^104, −2^127 − 1] or in [2^127 + 1, 2^128 − 2^104] round to a multiple of 2^104
  • Integers greater than or equal to 2^128, or less than or equal to −2^128, are rounded to "infinity".
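
    This rounding of large integers can be observed with a short Python sketch (the helper name to_f32 is illustrative):

```python
import struct

def to_f32(x: float) -> float:
    # Round to the nearest binary32 value and return it as a Python float.
    return struct.unpack('>f', struct.pack('>f', x))[0]

print(to_f32(16777216.0))   # 16777216.0  (2**24, exact)
print(to_f32(16777217.0))   # 16777216.0  (rounds to a multiple of 2)
print(to_f32(33554435.0))   # 33554436.0  (rounds to a multiple of 4)
```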

    Optimizations

    The design of the floating-point format allows various optimisations, resulting from the easy generation of a base-2 logarithm approximation from an integer view of the raw bit pattern. Integer arithmetic and bit-shifting can yield an approximation to the reciprocal square root (fast inverse square root), commonly required in computer graphics.
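
    A minimal Python transcription of the fast inverse square root trick, using struct to reinterpret the bits; the magic constant 0x5F3759DF and the single Newton step follow the well-known binary32 formulation, and this is an illustrative sketch rather than production code:

```python
import struct

def fast_inverse_sqrt(x: float) -> float:
    i = struct.unpack('>I', struct.pack('>f', x))[0]   # reinterpret binary32 bits as uint32
    i = 0x5F3759DF - (i >> 1)                          # initial guess from the exponent trick
    y = struct.unpack('>f', struct.pack('>I', i))[0]   # reinterpret back as a float
    return y * (1.5 - 0.5 * x * y * y)                 # one Newton-Raphson refinement step

print(fast_inverse_sqrt(4.0))   # roughly 0.5, within a fraction of a percent
```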
