I was quite disappointed to read a BBC news story about a new “theory” that supposedly solves the difficulty of dividing zero by zero (thanks to the blog “Web Pruned by Dawdling Monkeys” for pointing it out). The article discusses a Dr. James Anderson from the United Kingdom who has declared that this value should be called “nullity” and that with this definition, he has solved a “1,200-year-old problem.” While I was annoyed enough to write about it myself, I would actually recommend you read more about this incident at “Good Math, Bad Math,” which is much better written than what I could do.
First, some background. In mathematics zero divided by zero is considered indeterminate (and is undefined, as well). In approximate lay terms, this is because if we try to evaluate it in different ways, we get different results. In general, division by zero is not defined in mathematics. One way to look at this is to think of division as the inverse of multiplication. When we ask what twelve divided by three is, another way to word it would be to ask what number, when multiplied by three, gives twelve.
So say you want to divide twelve by zero. You could ask what number, when multiplied by zero, gives twelve. No real (or complex) number fits this. Any number, when multiplied by zero, gives zero. How about zero divided by zero? What number, when multiplied by zero, gives zero? In this case, any real number will, and the result is therefore indeterminate.
This Dr. Anderson apparently felt this was a problem, and so has defined the quantity zero divided by zero to be “nullity.” That’s fine, but it’s pointless as well. In fact, computer scientists have already come up with a better name: in early programming languages, division by zero would cause the program to instantly crash. Nowadays, dividing by zero usually returns the value “not a number” or NaN. It doesn’t cause the program to crash, but it also clearly indicates that this is not a number. And just as you’d expect, it can’t be used in computations with numbers, or compared to numbers, and so on. If you try to add three and NaN, the answer will be NaN as well, which makes sense: adding three to something that is not a number, like a chair, clearly will not be a number. Of course, what your program does with the NaN depends on how well you wrote its error tolerance; if it asks you how many people you wish to divide the money among and you say zero, it should alert you that that is invalid input, not report that each person gets “NaN” dollars.
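You can see this NaN behavior directly in Python. (One caveat: plain Python raises an exception for `0.0 / 0.0` rather than returning NaN, so the sketch below constructs the NaN value directly; libraries that follow IEEE 754 quiet-NaN semantics, like NumPy, return NaN from the division itself.)

```python
import math

# NaN ("not a number"), per IEEE 754 floating-point arithmetic.
# Plain Python raises ZeroDivisionError for 0.0 / 0.0, so we build
# the NaN value directly here.
nan = float("nan")

print(3 + nan)          # nan -- NaN propagates through arithmetic
print(nan == nan)       # False -- NaN compares unequal even to itself
print(math.isnan(nan))  # True -- the reliable way to detect it
```

The last line is the important one in practice: because NaN is unequal even to itself, you must use a function like `math.isnan` to check for it rather than an equality comparison.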
Now, I would not normally devote the time or energy to debunk this sort of thing, but what really disturbed me was the BBC article about it. It presents this as plausible or useful; not only is this “theory” ludicrous, but there is not even any independent commentary from other mathematicians. It is quite irresponsible, I feel, to simply present someone’s “revolutionary idea” without offering any perspective. In fact, I even double-checked the URL to be sure it actually was a BBC article. I then assumed it was some sort of satire, the way CNN features stories from The Onion, but those at least are clearly labeled as satire, and this appears to be a genuine article. The article is so bad that I would like to critique it line-by-line:
Dr James Anderson, from the University of Reading’s computer science department, says his new theorem solves an extremely important problem — the problem of nothing.
Of course, it does nothing of the sort. This is not a “problem.” It’s not as if we have been wondering for centuries what zero divided by zero is, and all along the answer was just to give it a name. And again, naming it doesn’t mitigate any of the difficulties with these quantities.
“Imagine you’re landing on an aeroplane and the automatic pilot’s working,” he suggests. “If it divides by zero and the computer stops working — you’re in big trouble. If your heart pacemaker divides by zero, you’re dead.”
It is true that division by zero could derail a program, but so can any faulty calculation. The division will produce the NaN result, and while it may not crash the program, it will probably not be able to produce any meaningful information either. In general, if you’re dividing by zero in a program, it means you haven’t written the program properly. In any case, what will the program do when it gets a result of “nullity?” It has no more meaning than NaN.
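In other words, the fix is to validate the input, not to give the bad quotient a new name. A minimal sketch of the money-dividing example from earlier (the function name is my own invention):

```python
def dollars_per_person(total, people):
    """Split a total among people, rejecting invalid input up front."""
    # Catch the bad input before it ever reaches the division.
    if people <= 0:
        raise ValueError("number of people must be positive")
    return total / people
```

A program written this way reports a meaningful error to the user at the point of input, instead of quietly propagating a NaN (or a “nullity”) through the rest of its calculations.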
Watch a video report from BBC South Today’s Ben Moore, then let Dr Anderson talk you through his theory in simple steps on the whiteboard:
I did watch the videos, and I’m sorry I did. They offer nothing new; Dr. Anderson certainly did not have any better explanation than what’s in the article, and the reporter just demonstrates his lack of qualification to be reporting on this sort of thing.
Computers simply cannot divide by zero. Try it on your calculator and you’ll get an error message.
Yes. And if my calculator were to say “nullity” instead of “error,” would it matter?
But Dr Anderson has come up with a theory that proposes a new number — “nullity” — which sits outside the conventional number line (stretching from negative infinity, through zero, to positive infinity).
Well, he’s given it a new name, and though he declared it to be a number, it doesn’t seem to have any use or connection to the other numbers.
The theory of nullity is set to make all kinds of sums possible that, previously, scientists and computers couldn’t work around.
I think the scientists were doing just fine, and again, it doesn’t help solve anything.
“We’ve just solved a problem that hasn’t been solved for twelve hundred years — and it’s that easy,” proclaims Dr Anderson having demonstrated his solution on a whiteboard at Highdown School, in Emmer Green.
Actually, what he had done (it’s shown on the video; the reporter appears clueless) is use some dubious mathematics to convert 0⁰ to 0/0, which of course then equals “nullity.” 0⁰ is another indeterminate quantity. (Briefly, 0 to any [nonzero] power is 0, but any nonzero number to the 0 power is 1; for this and other reasons, its value is indeterminate.) Interestingly, 0⁰ is often defined as being 1, because it allows for some useful mathematical manipulation in several contexts. Like Dr. Anderson’s idea, it is rather arbitrary, but it’s actually useful to define it this way. In fact, redefining it as “nullity” removes that functionality.
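Many programming languages adopt exactly that convention. Python, for instance, defines zero to the zero power as 1:

```python
import math

# Python follows the common convention that 0 to the 0 power is 1,
# both for integers and for floats.
print(0 ** 0)          # 1
print(0.0 ** 0.0)      # 1.0
print(math.pow(0, 0))  # 1.0
```

This choice is what makes formulas like the binomial theorem and power-series expansions work without special cases at zero.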
“It was confusing at first, but I think I’ve got it. Just about,” said one pupil.
This is what really bothered me. It’s one thing to talk about this to a newspaper, but he’s teaching it to tenth-grade students!
“We’re the first schoolkids to be able to do it — that’s quite cool,” added another.
No, it’s not cool at all. I feel bad for these students. He should be teaching it to higher-level math students who would have the ability to properly evaluate it instead of to high-school students who will blindly accept what he teaches.
Despite being a problem tackled by the famous mathematicians Newton and Pythagoras without success, it seems the Year 10 children at Highdown now know their nullity.
Newton and Pythagoras succeeded just fine. And this “Highdown” should reevaluate how it selects its lecturers.
Readers can leave comments at the end of the BBC article, and so many people have left critical comments that they’ve now updated the article to state that Dr. Anderson will be addressing concerns at the BBC on Tuesday. I also left a complaint for the BBC in their complaints section over the sloppy journalism. I hope they will revise their policies.
Incidentally, if you’re interested, a slightly more formal way to evaluate these expressions is to use limits. In place of the zero, we put a variable (or expression), and then slowly adjust the value of the variable towards zero to see if we can spot a trend. For one divided by zero, we can consider the limit of one divided by x as x approaches zero. One divided by one is one, one divided by one-half is two, one divided by 0.1 is 10, one divided by 0.0001 is 10,000, and so on. The closer x gets to zero, the larger the value of the fraction gets, and so we may indicate this by reporting the value as infinity (∞). Of course, if we had started x as a negative number, one divided by x would be values like -2, -10, -10,000 and so on, but still becoming arbitrarily large in magnitude.
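The trend described above is easy to probe numerically; a quick sketch:

```python
# Probe the limit of 1/x as x approaches zero from the right.
for x in [1.0, 0.5, 0.1, 0.0001]:
    print(x, 1.0 / x)  # quotients: 1, 2, 10, 10000, ... growing without bound
# Approaching from the left (negative x) gives -1, -2, -10, -10000, ...
```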
However, zero divided by zero cannot be evaluated in this manner. Depending on what expressions we choose for the zeros, we’ll get different limits. An obvious pair of examples is as follows: First, consider x divided by x. As x approaches zero, this fraction will always be 1, so the limit is 1. This makes sense. But if we had instead chosen 2x divided by x, then the limit would have been 2. And if we choose a fraction like x divided by x², the limit will head towards infinity.
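The same numerical probe makes the disagreement plain:

```python
# Three expressions that all approach 0/0, but with different limits.
for x in [0.1, 0.01, 0.001]:
    print(x / x, (2 * x) / x, x / x**2)
# x/x stays at 1, 2x/x stays at 2, and x/x**2 blows up like 1/x.
```

Since the answer depends entirely on how you approach the quotient, no single value can be assigned to 0/0, which is exactly what “indeterminate” means.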