Code Explanation:
1. Import the Decimal class
from decimal import Decimal
The decimal module provides the Decimal class for exact decimal arithmetic with user-controllable precision.
Unlike binary floating-point numbers (float), Decimal represents decimal fractions such as 0.1 exactly, so it avoids the rounding errors that come from binary representation.
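To see the difference directly (a minimal illustration, not part of the snippet being explained), compare the two results in the interpreter:

from decimal import Decimal
print(0.1 + 0.2)                        # 0.30000000000000004 (binary float rounding)
print(Decimal("0.1") + Decimal("0.2"))  # 0.3 (exact decimal arithmetic)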
2. Add two Decimal numbers
a = Decimal("0.1") + Decimal("0.2")
Here, "0.1" and "0.2" are passed as strings to Decimal.
Decimal("0.1") represents exactly 0.1 in decimal.
Similarly, Decimal("0.2") is exactly 0.2.
Adding them gives exactly Decimal("0.3"); the addition introduces no rounding error.
So a = Decimal("0.3").
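As a side illustration of why the strings matter (the Decimal(0.1) form below is not in the original code), constructing a Decimal from a float captures the float's binary error, while constructing it from a string does not:

from decimal import Decimal
print(Decimal("0.1"))  # 0.1 -- exact, built from the string
print(Decimal(0.1))    # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True -- the sum is exact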
3. Add two floats
b = 0.1 + 0.2
0.1 and 0.2 are stored as binary floating-point.
In binary, 0.1 and 0.2 cannot be represented exactly.
When added, the result is actually 0.30000000000000004.
So b ≈ 0.30000000000000004, not exactly 0.3.
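A quick way to inspect this (again just a sketch, not part of the original snippet) is to print the stored values with more digits:

b = 0.1 + 0.2
print(repr(b))               # 0.30000000000000004
print(format(0.1, ".20f"))   # 0.10000000000000000555 -- 0.1 itself is not exact in binary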
4. Compare the values
print(a == Decimal("0.3"), b == 0.3)
First comparison:
a == Decimal("0.3") → Decimal("0.3") == Decimal("0.3") → True.
Second comparison:
b == 0.3 → 0.30000000000000004 == 0.3 → False (due to floating-point rounding error).
5. Final Output
True False
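
Putting the steps together, the complete snippet and its output are:

from decimal import Decimal

a = Decimal("0.1") + Decimal("0.2")   # exact decimal arithmetic
b = 0.1 + 0.2                         # binary floating-point arithmetic

print(a == Decimal("0.3"), b == 0.3)  # True False

If a tolerant float comparison is needed, math.isclose(b, 0.3) returns True and is generally a safer choice than == for floats.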

