Zeros and "Zeros"

One of the largest sources of bugs, outages, and other problems in computer systems is using a single concept for multiple unrelated or only loosely related purposes. A particularly common case of such concept reuse is using the value zero to indicate both the number zero and the absence of any information about the value. It's intuitive to think this way, because in the real world the number zero indicates the absence of something. But transferring this rationale to computer systems has devastating effects, because arithmetically the number zero and an absent value behave very differently. The first has no effect on basic operations such as addition and subtraction, while the second renders most arithmetic operations meaningless.
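
Here is a minimal sketch of how this plays out in practice. The Rust snippet below (with made-up sensor readings) contrasts averaging data where 0 stands in for "no reading" against the same data with absence modeled explicitly as `Option`:

```rust
// Hypothetical temperature readings where 0.0 is (ab)used to mean "no reading".
fn main() {
    let with_sentinel: [f64; 4] = [20.0, 0.0, 22.0, 21.0];
    let naive_avg = with_sentinel.iter().sum::<f64>() / with_sentinel.len() as f64;
    // The fake zero participates in the arithmetic and drags the result down.
    println!("naive average: {naive_avg}"); // 15.75

    // The same data with absence modeled explicitly as Option<f64>.
    let with_option: [Option<f64>; 4] = [Some(20.0), None, Some(22.0), Some(21.0)];
    // Keep only the values that are actually present.
    let present: Vec<f64> = with_option.iter().flatten().copied().collect();
    let honest_avg = present.iter().sum::<f64>() / present.len() as f64;
    // The missing value is excluded from the arithmetic instead of poisoning it.
    println!("honest average: {honest_avg}"); // 21.0
}
```

Note that the naive version doesn't crash or warn; it just quietly produces a wrong answer, which is exactly why this class of bug survives for so long.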

Programmers have two defining traits that hold for almost all of them: optimism and laziness. Taken together, these characteristics explain very well why programmers have been shooting themselves in the foot for the past 60 years by conflating zero and "zero". Optimism makes programmers believe that nothing will ever go wrong and that things will behave as expected. Where a pessimist would consider all the potential cases and brace for the worst, the optimist just does it and hopes for the best. Laziness only exacerbates the issue, since most programming languages make it easy to use the number 0 as an indicator of an undefined value, while making it quite difficult (for basic types at least) to model absence properly.
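
For contrast, here is a sketch of the "proper" path the lazy sentinel avoids, again in Rust (the discount lookup is illustrative, not from any real API). Once absence is a distinct type, the compiler forces every caller to decide what a missing value means before any arithmetic can happen:

```rust
// None means "no discount on record" -- not "a discount of 0",
// which a sentinel value would silently conflate with it.
fn lookup_discount(code: &str) -> Option<u32> {
    match code {
        "WELCOME" => Some(10),
        _ => None,
    }
}

fn main() {
    let price: u32 = 100;

    // This line would not compile: Option<u32> has no `-` operator,
    // so the undefined case cannot leak into the arithmetic unnoticed.
    // let total = price - lookup_discount("UNKNOWN");

    // The caller must state a policy for the missing case explicitly.
    let total = price - lookup_discount("UNKNOWN").unwrap_or(0);
    println!("total: {total}"); // 100 -- absence handled deliberately, not by accident
}
```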

Computer science knows exactly what should be done to prevent such problems. Unfortunately, our field lacks the rigor that is well established in other branches of engineering. Those fields learned the hard way (mostly by piling up corpses) why doing things properly matters. Software development, on the other hand, still pretends that what we do has no influence on the real world.