Negative Zero

My wife brings up the following story any time she wants to make the point that I’m pedantic: When one of my daughters was in second grade, her math teacher told the class that any number divided by zero was one. I dashed off an impassioned email to the teacher, insisting that the result had to be undefined. Supposedly this is evidence that I’m sometimes difficult to be around.

Turns out the joke might be on me — although it’s still hard to support the second-grade teacher’s answer. I recently learned a bunch of things I didn’t know about floating point math (a quick Swift check of each rule follows the list):

  • There is a value for negative zero, separate from regular (positive?) zero. These two zeroes are defined to be equal to each other and yet they are distinct values.
  • x ÷ 0.0, for x ≠ ±0.0, is not an error. Instead, the result is either positive infinity or negative infinity, following the usual sign convention.
  • The case of ±0.0 ÷ ±0.0 is an error (specifically it’s “not a number” or NaN).
  • –0.0 + –0.0 = –0.0, –0.0 + 0.0 = 0.0, and –0.0 × 0.0 = –0.0
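To make these rules concrete, here’s a minimal Swift sketch that exercises each of them; the comments show how Swift’s standard library happens to print the results on my Mac:

```swift
let posZero = 0.0
let negZero = -0.0

// The two zeroes compare as equal, yet carry different signs.
print(posZero == negZero)          // true
print(posZero.sign, negZero.sign)  // plus minus

// Dividing a nonzero value by zero produces a signed infinity.
print(1.0 / posZero)               // inf
print(1.0 / negZero)               // -inf
print(-1.0 / posZero)              // -inf

// Zero divided by zero is NaN.
print((posZero / posZero).isNaN)   // true

// Sums and products of signed zeroes.
print((negZero + negZero).sign)    // minus
print((negZero + posZero).sign)    // plus
print((negZero * posZero).sign)    // minus
```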

These rules stem from the IEEE 754 “Standard for Floating-Point Arithmetic,” which standardized floating point representations across platforms. The most recent version of the standard was completed in 2008 but the original version was issued in 1985, so this behavior is not new. The rules above are true in both C (gcc) and Swift on my Mac, and also true in Swift on an iPhone. Python on the Mac supports negative zero for floats, but throws an exception when you attempt to divide by zero of any sign.

There are a couple of surprising corollaries to these rules:

  • Because 0.0 and -0.0 must compare as equal, the test (x < 0.0) does not return true for every negative number—it fails for negative zero. Therefore, to determine the sign of a zero value, you need to use the platform’s built-in sign facility, for instance the Double.sign property in Swift. Or I guess you could bit-manipulate the raw representation of the double, which is very much a C programmer’s answer. (A short Swift sketch of both corollaries follows this list.)
  • If a = b ÷ c, it does not necessarily follow that b = a × c, because this also fails for the case where c is zero of either sign.
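Both corollaries are easy to check; the sketch below is in Swift, and the bit pattern shown assumes the usual IEEE 64-bit layout in which the sign occupies the most significant bit:

```swift
// Corollary 1: (x < 0.0) misses negative zero; the sign property does not.
let x = -0.0
print(x < 0.0)                                 // false
print(x.sign == .minus)                        // true
print(x.bitPattern == 0x8000_0000_0000_0000)   // true: only the sign bit is set

// Corollary 2: a = b / c does not guarantee b == a * c when c is a zero.
let b = 1.0, c = 0.0
let a = b / c        // +inf
print(a * c)         // nan, not 1.0
```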

I’m not a number theorist, but I find the concepts above surprising.

One immediate problem: Infinity is not a number in the way that zero or 3.25 or π are numbers. Rather, infinity is a concept. It is true that the rational numbers are countably infinite—but infinity is not a member of the set of rational numbers.

Furthermore, from a number theory perspective, division by zero is nonsensical. You can understand why if you get precise about what division means. Technically, “division” is “multiplication by a number’s inverse,” where the inverse satisfies: a × a⁻¹ = 1. Zero is the only number in the set of real numbers that simply has no multiplicative inverse. And since this inverse doesn’t exist, we can’t go around multiplying by it.
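Spelling that out: a multiplicative inverse for zero would have to be some real number b satisfying

0\times b=1

But zero times any real number is zero, never one, so no such b exists.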

But surely the people who designed floating point numbers knew all this. So I got to wondering why the behavior described above came to be written into the IEEE standard.

To start, let’s consider the problem that floating-point math is trying to address. The real numbers are uncountably infinite, and yet we wish to represent this entire set within the bounds of finite computer memory. With a 64-bit double, there are 2⁶⁴ possible symbols, and the designers of the IEEE standard were trying to map these symbols onto the set of real numbers in a way that was both useful to real-world applications and also economically feasible given the constraints of early-80s silicon. Given the basic requirements, clearly approximations were going to be used.

The reasoning for negative zero appears to date to a 1987 paper¹ by William Kahan, a Berkeley professor who is considered the “father of floating point” and who later won the Turing Award for his work in drafting IEEE 754. It turns out that the existence of negative zero is intimately tied to the ability to divide by zero.

Let’s start by discussing the usual reason that division by zero is not allowed. A naïve way to define division by zero starts from the observation that:

\lim\limits_{x\rightarrow0^{+}}\dfrac{1}{x}=\infty

In other words, as x gets smaller, the result of 1/x gets larger. But this is only true when x approaches 0 from the positive side (which is why there’s that little plus sign above). Running the same thought experiment from the negative side:

\lim\limits_{x\rightarrow0^{-}}\dfrac{1}{x}=-\infty

As a result, the two-sided limit of 1/x as x approaches 0 is undefined because there is a discontinuity (what Kahan calls a slit) in the function 1/x.

However, by introducing a signed zero, Kahan and the IEEE committee could work around the difficulty. Intuitively, the sign of a zero is taken to indicate the direction the limit is being approached from. As Kahan states in his 1987 paper:

Rather than think of +0 and -0 as distinct numerical values, think of their sign bit as an auxiliary variable that conveys one bit of information (or misinformation) about any numerical variable that takes on 0 as its value. Usually this information is irrelevant; the value of 3+x is no different for x := +0 than for x := -0…. However, a few extraordinary arithmetic operations are affected by zero’s sign; for example 1/(+0) = +∞ but 1/(–0) = –∞.
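Kahan’s paper is actually about branch cuts of complex elementary functions, but the same idea is easy to poke at in Swift. The atan2 example below is my own illustration in that spirit (not something taken from the paper); the sign of the zero is the only thing that distinguishes a point just above the negative real axis from one just below it:

```swift
import Foundation   // for atan2

let posZero = 0.0, negZero = -0.0

// The sign of the zero records which side the "slit" was approached from.
print(1.0 / posZero)   // inf
print(1.0 / negZero)   // -inf

// atan2 uses that one bit to pick the branch: pi versus -pi.
print(atan2(posZero, -1.0))   //  3.141592653589793
print(atan2(negZero, -1.0))   // -3.141592653589793
```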

I’ve made my peace with the concept by adopting a rationalization proposed by my partner Mike Perkins: The 2⁶⁴ available symbols are clearly inadequate to represent the entirety of the set of real numbers. So, the IEEE designers set aside a few of those symbols for special meanings. In this sense, ∞ doesn’t really mean “infinity”—instead, it means “a real number that is larger than we can otherwise represent in our floating-point symbol set.” And therefore +0 doesn’t really mean “zero,” but rather “a real number that is larger than true 0 but smaller than any positive number we can represent.”

Incidentally, while researching this issue, I discovered that even Kahan doesn’t love the idea of negative zero:

On signed zero: “Well, the signed zero was a pain in the ass that we could eliminate if we used the projective mode. If there was just one infinity and one zero you could do just fine; then you didn’t care about the sign of zero and you didn’t care about the sign of infinity. But if, on the other hand, you insisted on what I would have regarded as the lesser choice of two infinities, then you are going to end up with two signed zeros. There really wasn’t a way around that and you were stuck with it.” (From an interview of Kahan conducted in 2005.)

I’m not certain if writing a blog post ten years later makes up for railing against a poor second-grade teacher. For her part, my daughter, now in high school, just rolled her eyes when I started talking about division by zero at dinner. So maybe that “difficult to be around” thing is hereditary.

 

¹ Kahan, W., “Branch Cuts for Complex Elementary Functions, or Much Ado About Nothing’s Sign Bit,” in The State of the Art in Numerical Analysis (Eds. Iserles and Powell), Clarendon Press, Oxford, 1987.