I’ve gotten a few emails about a math professor who claims to have solved the problem of dividing by zero.
With the caveat that I am not a professional mathematician, I’m pretty sure this is silly. For one, any time a major, front-page scientific or mathematical result reported by a mainstream news organization does not contain some version of the phrase “this discovery, which was published in [name of major peer-reviewed journal],” you are probably looking at a news organization that is not doing its job. Until I see a group of mathematicians look over his results and say that they are consistent and significant extensions of the current body of mathematics, I’m really not buying it.
The other reason I’m not buying it is that I don’t see how you can “solve” that problem. The article says that the whole idea is that this can make divide-by-zero crashes go away. This doesn’t really make sense. When I worked with LabVIEW, one thing I noticed was that the result of dividing by zero was propagated through the system as “NaN” — Not a Number. This didn’t make anything work better. It just pointed out that I had made a mistake somewhere. If you have set up an equation where you are trying to divide by zero, you have done something WRONG. You can make the system fail gracefully or not, but that’s a matter of crash handling. Just spitting out the result “Nullity” doesn’t fix things. Sure, you could then make the next part of the program handle “Nullity” as a special case. But that’s not mathematical, that’s algorithmic.
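To make that concrete, here’s a minimal Python sketch of the NaN-propagation behavior I’m describing. The `safe_divide` helper is hypothetical (it’s not LabVIEW’s API), but it mimics the IEEE-754-style behavior: the error marker flows through later arithmetic, and downstream code still has to special-case it.

```python
import math

def safe_divide(x, y):
    # Hypothetical helper: return NaN instead of raising,
    # mimicking how dataflow systems like LabVIEW propagate
    # a divide-by-zero instead of crashing.
    if y == 0.0:
        return math.nan
    return x / y

result = safe_divide(1.0, 0.0)

# NaN propagates through any later arithmetic, so the
# mistake isn't "fixed" -- the error signal just travels.
later = result * 2.0 + 5.0
print(math.isnan(later))  # True

# Downstream code still has to handle it as a special case.
# That's crash handling / algorithm design, not mathematics.
if math.isnan(later):
    later = 0.0  # or log, or raise -- a policy decision
```

Renaming NaN to “Nullity” wouldn’t change any line of this: the special case still exists, and somebody still has to decide what to do with it.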
Now, it’s true that this sounds similar to the way the mathematical community responded to the idea of treating the square root of minus one as a number, which was only really accepted in the 19th century, or the way negative numbers were dealt with before that. But I don’t think dividing by zero is really the same thing. Until someone can do more than just use a word like “Nullity” to mean “Undefined” (I remember trying that in 10th grade), this isn’t a useful concept, and it’s not going to stop programs from crashing when the programmer writes a function that’s supposed to spit out a real number and doesn’t. We’ve been using symbols to refer to the result of dividing by zero for years now. They don’t mean anything mathematically, and they don’t solve any problem.