Floating Point Arithmetic: Issues and Limitations

I just started a Computer Science course using Python, got to the part about floating points, and clicked the link to the Python page about it (15. Floating Point Arithmetic: Issues and Limitations — Python 3.10.4 documentation).

I find it extremely hard to follow what they’re trying to say there, and I was wondering how critical it is that I really understand this at my current level of learning.

The gist of it is that when you store certain decimal values in a Python program, what Python stores as the value is an approximation and not an exact value. The supplementary reading may be a bit… advanced for the point you’re at, but if you can come back to it later on with more knowledge of the math it’ll make more sense.
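You can see the approximation for yourself in an interactive session. This is a small sketch using the standard-library `decimal` module, which can reveal the exact binary value Python actually stores for `0.1`:

```python
from decimal import Decimal
import math

# Decimal(0.1) converts the stored binary float exactly, exposing
# the approximation hiding behind the tidy "0.1" you normally see.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# Those tiny errors accumulate, so exact equality checks can surprise you:
print(0.1 + 0.2 == 0.3)           # False
print(0.1 + 0.2)                  # 0.30000000000000004

# math.isclose is the usual way to compare floats for "equal enough":
print(math.isclose(0.1 + 0.2, 0.3))  # True
```

The takeaway: avoid comparing floats with `==`; use a tolerance-based comparison like `math.isclose` instead.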

TL;DR: If you ask Python to output a value and you get a long run of digits after the decimal point, e.g. 1.1000000000000001, it’s likely just the quirky way floating-point arithmetic works and not necessarily a problem in your code.
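And when that long run of digits shows up in output you want to present nicely, you don’t need to change the math; just control the display. A quick sketch using string formatting and `round` (both standard Python):

```python
x = 1.1 + 2.2
print(x)            # 3.3000000000000003 — the raw stored value

# Format only for display; the underlying value is unchanged:
print(f"{x:.2f}")   # 3.30
print(round(x, 2))  # 3.3
```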