
Decimal128 floating-point format

In computing, decimal128 is a decimal floating-point computer numbering format that occupies 16 bytes (128 bits) in computer memory. It is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial and tax computations.

Decimal128 supports 34 decimal digits of significand and an exponent range of −6143 to +6144, i.e. ±0.000000000000000000000000000000001×10^−6143 to ±9.999999999999999999999999999999999×10^6144. (Equivalently, ±0000000000000000000000000000000001×10^−6176 to ±9999999999999999999999999999999999×10^6111.) Therefore, decimal128 has the greatest range of values compared with the other IEEE basic floating-point formats. Because the significand is not normalized, most values with fewer than 34 significant digits have multiple possible representations; 1×10^2 = 0.1×10^3 = 0.01×10^4, etc. Zero has 12288 possible representations (24576 if both signed zeros are included).
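
These counts follow directly from the field sizes described below; a quick Python sanity check (illustrative only, not tied to any particular library):

    # Each value's exponent can take 3 * 2**12 values: 2 bits come from the
    # combination field (00, 01, 10) and 12 from the exponent continuation.
    exponent_values = 3 * 2**12
    assert exponent_values == 12288

    # A significand of 0 pairs with any exponent, so +0 alone has 12288
    # encodings; counting -0 as well doubles that.
    assert 2 * exponent_values == 24576

    # Largest finite value: 34 nines scaled by 10**6111.
    max_decimal128 = (10**34 - 1) * 10**6111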

Decimal128 is a relatively new decimal floating-point format, formally introduced in the 2008 revision of IEEE 754 and in ISO/IEC/IEEE 60559:2011.

Representation of decimal128 values

IEEE 754 allows two alternative representation methods for decimal128 values. The standard does not specify how to signify which representation is used, for instance in a situation where decimal128 values are communicated between systems.

In one representation method, based on binary integer decimal, the significand is represented as a binary-coded positive integer.

The other representation method is based on densely packed decimal for most of the significand (except the most significant digit).

Both alternatives provide exactly the same range of representable numbers: 34 digits of significand and 3×2^12 = 12288 possible exponent values.

In both cases, the most significant 4 bits of the significand (which actually only have 10 possible values) are combined with the most significant 2 bits of the exponent (3 possible values) to use 30 of the 32 possible values of 5 bits in the combination field. The remaining combinations encode infinities and NaNs.

In the case of Infinity and NaN, all other bits of the encoding are ignored. Thus, it is possible to initialize an array to Infinities or NaNs by filling it with a single byte value.
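
For example, filling a 16-byte buffer with the byte 0x78 produces +Infinity, and 0x7C produces a quiet NaN (assuming the bytes are read big-endian). A Python sketch of the bit-level check:

    # 0x78 = 0b01111000: sign 0, then the 5-bit pattern 11110 -> infinity.
    bits = int.from_bytes(bytes([0x78] * 16), "big")
    assert (bits >> 122) & 0x1F == 0b11110

    # 0x7C = 0b01111100: sign 0, then 11111 -> NaN (bit 121 = 0, so quiet).
    bits = int.from_bytes(bytes([0x7C] * 16), "big")
    assert (bits >> 122) & 0x1F == 0b11111 and (bits >> 121) & 1 == 0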

Binary integer significand field

This format uses a binary significand from 0 to 10^34 − 1 = 9999999999999999999999999999999999 = 1ED09BEAD87C0378D8E63FFFFFFFF₁₆ = 011110110100001001101111101010110110000111110000000011011110001101100011100110001111111111111111111111111111111111₂. The encoding can represent binary significands up to 10×2^110 − 1 = 12980742146337069071326240823050239, but values larger than 10^34 − 1 are illegal (and the standard requires implementations to treat them as 0 if encountered on input).
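
Both constants are easy to verify in Python:

    # 34 nines in hexadecimal, and the largest encodable binary significand.
    assert 10**34 - 1 == 0x1ED09BEAD87C0378D8E63FFFFFFFF
    assert 10 * 2**110 - 1 == 12980742146337069071326240823050239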

As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7 (0000₂ to 0111₂), or higher (1000₂ or 1001₂).

If the 2 bits after the sign bit are "00", "01", or "10", then the exponent field consists of the 14 bits following the sign bit, and the significand is the remaining 113 bits, with an implicit leading 0 bit:

s 00eeeeeeeeeeee   (0)ttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 01eeeeeeeeeeee   (0)ttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 10eeeeeeeeeeee   (0)ttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt

This includes subnormal numbers where the leading significand digit is 0.
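
A minimal Python sketch of this first case, assuming bits128 holds the 16-byte encoding as a big-endian integer (the function name is illustrative):

    def decode_bid_small(bits128):
        """First case: the 2 bits after the sign are 00, 01, or 10."""
        sign = bits128 >> 127
        exponent = (bits128 >> 113) & 0x3FFF       # 14 bits after the sign bit
        significand = bits128 & ((1 << 113) - 1)   # remaining 113 bits
        return sign, exponent, significand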

If the 2 bits after the sign bit are "11", then the 14-bit exponent field is shifted 2 bits to the right (it begins after both the sign bit and the "11" bits), and the represented significand is in the remaining 111 bits. In this case there is an implicit (that is, not stored) leading 3-bit sequence "100" in the true significand.

s 1100eeeeeeeeeeee   (100)t tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 1101eeeeeeeeeeee   (100)t tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 1110eeeeeeeeeeee   (100)t tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt

The "11" 2-bit sequence after the sign bit indicates that there is an implicit "100" 3-bit prefix to the significand. Compare having an implicit 1 in the significand of normal values for the binary formats. Note also that the "00", "01", or "10" bits are part of the exponent field.

For the decimal128 format, all of these significands are out of the valid range (they are at least 2^113 ≈ 1.038×10^34, which exceeds 10^34 − 1), and are thus decoded as zero, but the pattern is the same as for decimal32 and decimal64.

In the above cases, the value represented is

(−1)^sign × 10^(exponent − 6176) × significand
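
Combining the two cases with this formula gives a small decoder for finite values (a sketch under the same big-endian integer assumption; special values are treated separately below). Fraction keeps the result exact:

    from fractions import Fraction

    def decode_bid_finite(bits128):
        sign = bits128 >> 127
        if (bits128 >> 125) & 0b11 != 0b11:          # cases 00, 01, 10
            exponent = (bits128 >> 113) & 0x3FFF
            significand = bits128 & ((1 << 113) - 1)
        else:                                        # case 11
            exponent = (bits128 >> 111) & 0x3FFF
            significand = (0b100 << 111) | (bits128 & ((1 << 111) - 1))
        if significand > 10**34 - 1:                 # non-canonical: treat as 0
            significand = 0
        return (-1)**sign * Fraction(significand) * Fraction(10)**(exponent - 6176)

    # The value 1 is significand 1 with biased exponent 6176:
    assert decode_bid_finite((6176 << 113) | 1) == 1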

If the four bits after the sign bit are "1111" then the value is an infinity or a NaN, as described above:

s 11110 xx...x   ±infinity
s 11111 0x...x   a quiet NaN
s 11111 1x...x   a signalling NaN
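
A sketch of the corresponding classification logic (the function name is illustrative):

    def classify_decimal128(bits128):
        after_sign5 = (bits128 >> 122) & 0x1F      # 5 bits after the sign bit
        if after_sign5 == 0b11110:
            return "-infinity" if bits128 >> 127 else "+infinity"
        if after_sign5 == 0b11111:
            return "signalling NaN" if (bits128 >> 121) & 1 else "quiet NaN"
        return "finite"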

Densely packed decimal significand field

In this version, the significand is stored as a series of decimal digits. The leading digit is between 0 and 9 (3 or 4 binary bits), and the rest of the significand uses the densely packed decimal (DPD) encoding.

Unlike the binary integer significand version, where the exponent changed position and came before the significand, this encoding combines the leading 2 bits of the exponent and the leading digit (3 or 4 bits) of the significand into the five bits that follow the sign bit.

The twelve bits after that are the exponent continuation field, providing the less significant bits of the exponent.

The last 110 bits are the significand continuation field, consisting of eleven 10-bit declets. Each declet encodes three decimal digits using the DPD encoding.

If the first two bits after the sign bit are "00", "01", or "10", then those are the leading bits of the exponent, and the three bits after that are interpreted as the leading decimal digit (0 to 7):

s 00 TTT (00)eeeeeeeeeeee   (0TTT)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]
s 01 TTT (01)eeeeeeeeeeee   (0TTT)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]
s 10 TTT (10)eeeeeeeeeeee   (0TTT)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]

If the first two bits after the sign bit are "11", then the second two bits are the leading bits of the exponent, and the last bit is prefixed with "100" to form the leading decimal digit (8 or 9):

s 1100 T (00)eeeeeeeeeeee   (100T)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]
s 1101 T (01)eeeeeeeeeeee   (100T)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]
s 1110 T (10)eeeeeeeeeeee   (100T)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]

The remaining two combinations (11110 and 11111) of the 5-bit field are used to represent ±infinity and NaNs, respectively.

The DPD/3BCD transcoding for the declets is given by the following table. b9...b0 are the bits of the DPD, and d2...d0 are the three BCD digits.

DPD encoded value                 Decimal digits
b9 b8 b7 b6 b5 b4 b3 b2 b1 b0     d2 d1 d0
a  b  c  d  e  f  0  g  h  i      0abc 0def 0ghi   (three digits 0-7)
a  b  c  d  e  f  1  0  0  i      0abc 0def 100i
a  b  c  g  h  f  1  0  1  i      0abc 100f 0ghi
g  h  c  d  e  f  1  1  0  i      100c 0def 0ghi
g  h  c  0  0  f  1  1  1  i      100c 100f 0ghi
d  e  c  0  1  f  1  1  1  i      100c 0def 100i
a  b  c  1  0  f  1  1  1  i      0abc 100f 100i
x  x  c  1  1  f  1  1  1  i      100c 100f 100i
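
The decode direction of this table transcribes directly into Python (a sketch; the helper name dpd_decode is illustrative):

    def dpd_decode(declet):
        """Decode one 10-bit DPD declet into an integer 0-999."""
        b = lambda i: (declet >> i) & 1            # b(9) is bit b9, b(0) is b0
        if b(3) == 0:                              # row 1: three small digits
            return 100 * (declet >> 7) + 10 * ((declet >> 4) & 7) + (declet & 7)
        key = (b(2), b(1))
        if key == (0, 0):                          # row 2
            d2, d1, d0 = declet >> 7, (declet >> 4) & 7, 8 + b(0)
        elif key == (0, 1):                        # row 3
            d2, d1, d0 = declet >> 7, 8 + b(4), 4*b(6) + 2*b(5) + b(0)
        elif key == (1, 0):                        # row 4
            d2, d1, d0 = 8 + b(7), (declet >> 4) & 7, 4*b(9) + 2*b(8) + b(0)
        else:                                      # rows 5-8: b3 b2 b1 == 111
            key2 = (b(6), b(5))
            if key2 == (0, 0):                     # row 5
                d2, d1, d0 = 8 + b(7), 8 + b(4), 4*b(9) + 2*b(8) + b(0)
            elif key2 == (0, 1):                   # row 6
                d2, d1, d0 = 8 + b(7), 4*b(9) + 2*b(8) + b(4), 8 + b(0)
            elif key2 == (1, 0):                   # row 7
                d2, d1, d0 = declet >> 7, 8 + b(4), 8 + b(0)
            else:                                  # row 8
                d2, d1, d0 = 8 + b(7), 8 + b(4), 8 + b(0)
        return 100 * d2 + 10 * d1 + d0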

The 8 decimal values whose digits are all 8s or 9s have four codings each. The bits marked x in the table above are ignored on input, but will always be 0 in computed results. (The 8×3 = 24 non-standard encodings fill in the gap between 10^3 = 1000 and 2^10 = 1024.)
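
Using dpd_decode from the sketch above, the redundancy is easy to confirm exhaustively:

    from collections import Counter

    # Decode all 1024 declets and count how often each 3-digit value appears.
    counts = Counter(dpd_decode(d) for d in range(1024))
    quadruplicated = sorted(v for v, c in counts.items() if c == 4)
    assert quadruplicated == [888, 889, 898, 899, 988, 989, 998, 999]
    assert all(c == 1 for v, c in counts.items() if v not in quadruplicated)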

In the above cases, with the true significand as the sequence of decimal digits decoded, the value represented is

(−1)^signbit × 10^(exponentbits₂ − 6176₁₀) × truesignificand₁₀
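
An end-to-end sketch of the DPD variant for finite values, reusing dpd_decode and the same big-endian integer convention (combination field handling as described above; special values are excluded):

    def decode_dpd_finite(bits128):
        sign = bits128 >> 127
        comb = (bits128 >> 122) & 0x1F             # 5-bit combination field
        if comb >> 3 != 0b11:                      # leading digit 0-7
            exp_top, lead = comb >> 3, comb & 0b111
        else:                                      # leading digit 8 or 9
            exp_top, lead = (comb >> 1) & 0b11, 8 + (comb & 1)
        exponent = (exp_top << 12) | ((bits128 >> 110) & 0xFFF)
        significand = lead
        for i in range(10, -1, -1):                # eleven declets, high to low
            significand = significand * 1000 + dpd_decode((bits128 >> (10 * i)) & 0x3FF)
        # value = (-1)**sign * significand * 10**(exponent - 6176)
        return sign, exponent - 6176, significand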
