Benford’s Law
Pick a number, any number. Well, not just any number — go pick one out of a book, newspaper, TV, street sign, anywhere, at random. Pick the population of Mumbai out of the almanac. Look up the salaries of executives in your town. Root through your neighbor's trash and pull out a cancelled check, to see how much they paid for that giant gargoyle sitting in their living room. What's the first digit of that number? Chances are, about a third of the time, that digit will be a "1". Amazingly enough, for lots of the numbers we come across in daily life, the first digit is much more likely to be a "1" than a "9". If you read that last sentence with a stifled yawn, you might as well stop reading, because it doesn't get any better. But if you're like me, you're incredulous that the chances of getting any particular first digit could be anything other than dead even across 1 through 9. So basically, if you're like me, you're a huge dork. Well, push up your glasses, move that stack of computer printout data over (oop, don't knock over the Mountain Dew!), and take a seat, for I'm about to relate (in a nasal voice) the tale of Benford's Law, the law of probabilities for the first digit of lots of common numbers.
Again, if you’re like me (and let’s be frank, who else would be reading this far?), you have carpelled your tunnel on a computer or two, and have come across the concept of random numbers. And not just “pick a number between one and three!”, but instead random numbers spit out of your computer, or from data that you collected, or in that statistics class you took as your “elective”. When I ask my computer for a supply of random numbers, any particular number is as likely to pop out as any other — think of rolling dice, lottery numbers, roulette wheels, or the random-number generator on your Dell workstation running Redhat Linux (you know, the computer you’re writing a couple shareware games on, in your spare time).
For picking numbers at random, from anywhere around you in life, you'd think the randomness would tend to even things out, that you wouldn't see any patterns in what comes back. Every conceivable number should be just about as likely as any other number. If you're free to choose your number from anywhere, not just street signs, you'd think you'd see just about equal chances of getting any number whatsoever. And you'd think this would apply to the first digit of that number, too — you'd expect to see "1" just about as often as "9". But amazingly enough, real life doesn't work that way — apparently, things don't "average out" enough to get rid of a really interesting pattern in the chances of getting certain numbers for the first digit.
For lots of numbers you come across in daily life, from the atomic weight of elements, to the expense accounts of corporate executives, to the atomic weight of corporate executives, the first digit of that number can be anybody from 1 to 9, but these values are not equally likely. In fact, "1" is the most likely, "2" a bit less than "1", and each succeeding number less and less likely until we get to pathetic "9". We're over six times as likely to see a "1" as a "9". Personally, I was floored when I first heard of this. I mean, floored. It changed my life, man. Well, no it didn't actually, at least not as much as breath mints, deodorant, and other tools for improving social communication skills. But seriously, it is pretty remarkable.
I’ve been trying unsuccessfully to use this in a bar bet. Here’s how I think it may go:
[Scene: a dank, gritty bar, just like the one from Patrick Swayze’s “Roadhouse”. Our hero is talking to a large hairy biker, who is wearing a leather jacket, the chain from a chainsaw for a belt, and a human ear in his beard.]
Our hero: So, Butch my friend, I’ve got a gentleman’s wager for you.
Butch: Yeah? Let’s hear it.
Our hero: I’ll bet you a lager that the probability distribution for the value of the first digit of many numbers is not uniform!
Butch: [Sound of eyes blinking]
[Cut to the hospital, where our hero is being visited by his D&D guild members…]
Our hero: The doctor said I’ll make a full recovery.
Friend: So where did he put your 68-sided dice again?
[End Scene]
That’s the opening scene in the screenplay I’m currently working on1. I hear Jackie Chan is interested…
The discovery of this strange fact has to qualify as one of the easiest ways to get yourself a "Law" in the history of science — first Simon Newcomb, then Frank Benford several decades later, noticed that the pages of a book of logarithm tables were a lot more dirty and worn towards the front, meaning that people had been looking up the log of numbers starting with "1" or "2" a lot more than "8" or "9". (See, kids, back in the dark ages, people didn't have calculators with a "log" button [See, kids, a "logarithm" is… uh, let's drop it.]) Whilst perusing a logarithm or two, as a break from hour-long photography posing, and dropsy, people had been inadvertently proving this unusual law by leaving finger grease behind in logarithm table books. When you had to take the logarithm of a number, you'd have to look in a book containing a giant table of the logarithm of every possible number. The books were ordered numerically, by the first digit of the number you wanted to take the logarithm of, so looking at which pages were the most used was a nice indirect way of telling how often each digit comes up in everyday life.
Frank Benford pinned down the mathematical formula for the probability of each digit (Newcomb had actually jotted it down first, but Benford backed it up with piles of real-world data), and his reward was to get a law named after him. For a given digit d (which can be one through nine), the probability is given by this equation: P(d) = log10(1 + 1/d). So you wind up with a probability of 30.1% for the number one, 17.6% for two, and a little less for each successive digit, down to nine with a probability of 4.6%. I don't know about you, but my guess would have been that they would all have equal probability, so the chance of getting any given digit would be 1/9th. If you actually would have guessed that formula, or if you can derive it, you need to stop reading this article and get back to writing your PhD thesis.
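If you'd rather see the whole table than take my word for it, a few lines of Python (my own quick sketch, nothing official) will print the probability for each digit straight from that formula:

```python
import math

# Benford's Law: the probability that a number's first digit is d
# is P(d) = log10(1 + 1/d), for d = 1 through 9.
for d in range(1, 10):
    print(f"digit {d}: {math.log10(1 + 1 / d):.1%}")
```

Run it and you get the lineup described above: 30.1%, 17.6%, 12.5%, 9.7%, 7.9%, 6.7%, 5.8%, 5.1%, 4.6%.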
It turns out the second digit in a number isn't completely random, either — the probability distribution for the second digit is similar to the distribution for the first, just not quite as skewed (and this time "0" joins the lineup as a possibility). In fact, there's a non-uniform distribution for every digit on down the line, though they flatten out quickly, so that by the time you get a few places down the line the digits are pretty close to uniformly distributed. At each place, the small digits are the most likely, the digit "9" is least likely, and everybody else is somewhere in between. So that implies that a number like 1,111,111,111…(and so on) should be just about the most likely number in the entire world. Sure enough, that happens to be my cholesterol count.
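If you want to check the second-digit claim yourself, here's another little sketch (again my own code, not from any of the papers) that gets the second-digit probabilities by summing the Benford formula over every possible leading digit:

```python
import math

# Probability that the SECOND digit is d (0 through 9 this time):
# sum log10(1 + 1/(10*k + d)) over every possible first digit k.
for d in range(10):
    p = sum(math.log10(1 + 1 / (10 * k + d)) for k in range(1, 10))
    print(f"second digit {d}: {p:.1%}")
```

The spread runs from about 12% for "0" down to about 8.5% for "9": still tilted, but a lot flatter than the 30.1%-to-4.6% spread for the first digit.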
So why is this true? For quite a long time after the law's discovery, no one had quite nailed down exactly why so many sets of numbers follow it, and more importantly, which numbers should follow it. Some numbers don't — zip codes, phone numbers, social security numbers, and many other random sets of numbers don't follow Benford's Law. Some in fact have completely uniform distributions for the first digit. A few people came up with intuitive explanations for the law, but it wasn't until recently that this law was really put onto solid mathematical foundations. Not that this will help me explain it — even with a rigorous proof behind it, the explanation is not what I'd call "intuitive" unless you're really comfortable with logarithms1. It ain't no coincidence that the law was first noticed in a table of logarithms, as the distribution itself is logarithmic.
Imagine that for some reason you have decided to go on a systematic, intense program to gain a serious amount of weight. "I'm going to be a fatass", you think to yourself, "that is what I'm going to do with my life." Let's say you devote yourself fanatically to the cause, and you start when you're, say, 100 pounds. And, let's say for the sake of argument that you can keep it up — your strict program of deep-fried twinkies pays off. Your first goal is to get from 100 to 200 pounds — it's a tall order, but you pour eggnog on your pancakes, and you get it done. To grow from 100 to 200 pounds, you've got to double your weight, but you've only gone from "1" to "2" in the first digit of your weight. Fast-forward a couple of years, to when you're at 800 pounds aiming for 900. It's going to be a lot easier to gain 100 pounds now, since 100 pounds is only 1/8 of your weight. Instead of doubling your weight, you only have to bump it up by a bit over ten percent. When you're sitting at 112 pounds, it's a long slog up to that next first-digit number. When you're up near 7, 8, and 9, it's easy to jump up another number. And once you get up to 9? You're just a couple hollandaise martinis away from 1,000 pounds — no problem. Well, now you're a thousand pounds, and to get to the next higher first-digit number, you're going to have to crack 2,000 pounds!
So if massive weight gain is your goal, you're going to be spending a lot of your time with a 1 in the first place of your weight. It's just the nature of measuring something with numbers — it's a whole lot farther from 1 to 2 than from 8 to 9, in proportional terms2. And the numbers that happen to obey Benford's Law often tend to be measurements of something, be it weight, salary, length of rivers, the Dow Jones average, etc. Numbers that have units, like "pounds", or "meters", or "cholesterol points". Phone numbers aren't measurements of anything — they're essentially chosen at random, so it's no surprise that their first-digit probabilities don't obey Benford's Law. Think of the Dow Jones average, as another example, and picture to yourself how quickly it shot through the numbers above 7,000, and how long it's been stuck between 10,000 and 20,000. It was no big deal back in the 90's to see it jump around between 7,000 and 9,000 over the span of a few months, but it's not going to grow to above 20,000 for a loooong time. So the Dow Jones tends to spend a disproportionate amount of time with a 1 in the first digit, and comparatively less time with higher digits like 7, 8, and 9. If you think of all the other flavors of numbers you see in your daily life that behave the same way, you will begin to understand why so many numbers obey Benford's Law.
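If you'd rather watch that happen than take my word for it, here's a toy simulation (mine, with a made-up one-percent-per-step growth rate) that puts our aspiring heavyweight on the scale at every step and tallies the first digit of the reading:

```python
import math
from collections import Counter

def first_digit(x: float) -> int:
    """Leading decimal digit of a positive number, via its base-10 logarithm."""
    return int(10 ** (math.log10(x) % 1.0))

weight = 100.0          # starting weight, in pounds
counts = Counter()
steps = 3000            # enough steps to climb through a dozen decades
for _ in range(steps):
    counts[first_digit(weight)] += 1
    weight *= 1.01      # gain one percent per step, every single step

for d in range(1, 10):
    print(f"digit {d}: simulated {counts[d] / steps:.1%}   "
          f"Benford {math.log10(1 + 1 / d):.1%}")
```

The simulated column lands close to Benford's percentages, because steady percentage growth is exactly the kind of proportional behavior the law feeds on.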
Crystal clear, right? If that explanation didn't do it for you, here's another: it turns out that measurement data needs to follow Benford's Law precisely because measurements have units, or dimensions. When you measure the prizewinning watermelon you're growing in your backyard for the county fair, you measure the length in feet, or inches, or millimeters. You can measure your speed on the highway in miles per hour, or kilometers per hour. Back in college, friends of mine in physics class would always compute answers to homework questions in the most awkward units they could think of — for a problem dealing with the speed of a baseball, they'd give the answer in "leagues per fortnight". You can always change the units of a measurement to anything you want — if you want, you can measure your height in angstroms, or light-years, or whatever scale you choose. Now, changing units could also change the probabilities of the first digit — a distribution that made 1 the most common digit might, after conversion, make 7 the most common instead. It turns out that the probability distribution given by Benford's Law is the only one that sticks around when you change units — any other distribution you choose would get all messed up when you switch to new units.
So what? Not exactly a compelling reason, perhaps, but it’s another bit of evidence to suggest why the distribution exists. It’s a general feeling in science and engineering that nothing should change just because the units are changed — after all, we humans made up units convenient to ourselves, so why should anything about the universe depend on what units we like? It would be unusual and quite suspicious if the first-digit distribution did change with units, so it seems more natural that the distribution stays fixed for all unit systems.
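Here's one way to poke at that units claim without any heavy math (a sketch of my own, nothing from Benford or anyone else): cook up a pile of numbers that already follow the law, "change units" by multiplying every one of them by some arbitrary conversion factor, and check that the first-digit percentages barely budge.

```python
import math
import random
from collections import Counter

def first_digit(x: float) -> int:
    return int(10 ** (math.log10(x) % 1.0))

def digit_freqs(values):
    counts = Counter(first_digit(v) for v in values)
    return {d: counts[d] / len(values) for d in range(1, 10)}

random.seed(0)
# Numbers of the form 10**u, with u drawn uniformly, follow Benford's Law.
lengths_in_feet = [10 ** random.uniform(0, 5) for _ in range(100_000)]
lengths_in_meters = [0.3048 * x for x in lengths_in_feet]   # same data, new units

feet, meters = digit_freqs(lengths_in_feet), digit_freqs(lengths_in_meters)
for d in range(1, 10):
    print(f"digit {d}: feet {feet[d]:.1%}   meters {meters[d]:.1%}   "
          f"Benford {math.log10(1 + 1 / d):.1%}")
```

Swap 0.3048 for whatever leagues-per-fortnight-grade conversion factor you like; the three columns stay glued together.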
My attempts to explain why the first-digit distribution should follow Benford's Law aren't really proofs — I haven't yet really explained exactly where this mathematical pattern comes from. And I ain't going to. There's quite a lot of math involved, and frankly I haven't read it all the way through. Fortunately for all you readers, Dr. Ted Hill at Georgia Tech has developed a detailed proof of why Benford's Law applies to lots of numbers out there in the world, and has explained which groups of numbers the law should apply to. Dr. Hill has gone even further, showing how Benford's Law is a sort of "master distribution", one that emerges when you combine lots of numbers from many different distributions. It turns out that if you pool together a bunch of randomly-chosen numbers, each of which comes from one of a bunch of arbitrary probability distributions, the collective group of numbers will obey Benford's Law for the first digits. It's reminiscent of the central limit theorem, which says that when you add up lots of independent random values, the sum tends toward a Gaussian distribution (a "bell curve"). Head on over to his webpage for the full details.
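I can't reproduce the proof here, but a crude simulation in the same spirit (my own toy code, emphatically not Dr. Hill's construction) is easy enough: draw samples from a grab-bag of different distributions, each with its own randomly chosen scale, dump them all into one pile, and look at the pile's first digits.

```python
import math
import random
from collections import Counter

def first_digit(x: float) -> int:
    return int(10 ** (math.log10(x) % 1.0))

random.seed(1)
pool = []
for _ in range(2000):                      # lots of randomly chosen distributions
    kind = random.choice(["uniform", "exponential", "lognormal"])
    scale = 10 ** random.uniform(-3, 3)    # each with its own random scale
    for _ in range(50):                    # a handful of samples from each one
        if kind == "uniform":
            x = random.uniform(0, scale)
        elif kind == "exponential":
            x = random.expovariate(1 / scale)
        else:
            x = random.lognormvariate(math.log(scale), 1.0)
        if x > 0:
            pool.append(x)

counts = Counter(first_digit(x) for x in pool)
for d in range(1, 10):
    print(f"digit {d}: pooled {counts[d] / len(pool):.1%}   "
          f"Benford {math.log10(1 + 1 / d):.1%}")
```

None of the individual distributions obeys the law on its own, but the pooled pile lands close to Benford's percentages (spreading the scales over several decades, as I did here, does a fair chunk of the work, which is roughly the flavor of Hill's "pick the distributions at random" condition).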
Now, I know what you're thinking. "Well, this is all well and good, but how can I apply this law to my job as an accountant for the mob?" Just like so many obscure mathematical topics before it, Benford's Law has found a practical home in the world of finance, in particular as a way to weed out fraud. People who cook their financial books are most likely unfamiliar with Benford's Law, and so the fake numbers they enter in their account statements will have wildly incorrect distributions for the first digits. There are lots of little tricks you can use to differentiate between genuine data and a typical person's attempt to fake it — it turns out we have a pretty poor sense of how to make convincingly random-looking data. A quick analysis run on a computer will tell you whether the data in a financial statement follows Benford's Law, and if it doesn't, that's a pretty good indication that some of the numbers have been faked. This technique has already been used to pick out fraudulent finances here and there, and it's particularly effective since it's so hard to design your fake data to conform to the law. So as you write up your fake books, remember to keep track of those first-digit occurrences.
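The check itself is nothing fancy. Here's a bare-bones sketch of the idea (mine, not any real auditing package): tally the first digits of the amounts on the books and compare the counts against Benford's percentages with a chi-square statistic, flagging the books when the mismatch gets too big.

```python
import math
from collections import Counter

def first_digit(x: float) -> int:
    return int(10 ** (math.log10(abs(x)) % 1.0))

def benford_check(amounts, threshold=20.09):
    """Chi-square test of first-digit counts against Benford's Law.

    20.09 is the 99th-percentile cutoff for chi-square with 8 degrees of
    freedom; a bigger statistic means the books look suspicious.
    """
    counts = Counter(first_digit(a) for a in amounts if a != 0)
    n = sum(counts.values())
    expected = {d: n * math.log10(1 + 1 / d) for d in range(1, 10)}
    chi2 = sum((counts[d] - expected[d]) ** 2 / expected[d] for d in range(1, 10))
    return chi2, chi2 > threshold

# A cooked ledger where every "random" amount starts with a 5 or a 6
fishy = [5120.00, 6433.10, 5890.25, 6012.00, 5555.55] * 200
statistic, suspicious = benford_check(fishy)
print(f"chi-square = {statistic:.1f}, suspicious = {suspicious}")
```

In real forensic work the datasets are bigger and the tests fancier, but the principle is the one in this paragraph: faked digits drift away from the logarithmic curve, and a few lines of arithmetic will catch it.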
So there we have it — a strange, counterintuitive law that was right under our noses all the time. I'm particularly fond of this little mathematical tidbit, since it was in retrospect so easy to discover — after all, you can confirm it yourself anytime you want by simply picking numbers out of the newspaper. Tripping over an odd fact by flipping through a logarithm book strikes me as about the easiest way to get a law named after you. I'm being unfair to Dr. Benford, of course, since he did the hard work of testing the law against piles of real-world data, but still — it appeals to the lazy side of me, as you'd think that Laws this easy to find would have been discovered way back in the 17th century. It makes you wonder what other "Laws" are out there for the discovering. If I'm going to get a law named after me, I had been thinking that I'd have to spend my nights in an observatory in Chile or underneath a giant accelerator in Switzerland. But now I'm inspired to search around for statistical anomalies in hobbies a little closer to my interests. Stay tuned, for if there are any laws to be found watching VH-1's "I Love the 80's", I'll publish them here first, and thank all of you readers in my Nobel acceptance speech…
Footnotes
1. The title of my screenplay is “Comfortable With Logarithms”. I’m also working on the script for the sequel: “Benford’s Law 2: Benford’s Revenge”. We’re hoping to get Russell Crowe as Benford, with guest star Samuel L. Jackson as the thumb-soiled logarithm book. “MMM MMM! THAT’S A TASTY THEOREM!”
2. Those with experience with logarithmic data will recognize this — the time spent at any particular value for the first digit tends to be logarithmic, not uniform. Most people are pretty unfamiliar with logarithmic patterns like this, as most of the numbers you come across in daily life don't have this property. The value of the Dow Jones average isn't logarithmic, after all — if you look at the entire number and not just the first digit, then it's a bit more like a true uniformly random number. Even so, occasionally you'll see logarithmic numbers pop up in everyday life. Sometimes the experts will go ahead and take the logarithm of the number, which has the effect of evening out the data and making it more intuitive for the general public — think of earthquake scales and sound intensity levels (decibels). Both of these measurements happen to be logarithmic in their raw form, so the dweebs in charge take a "log10" to convert to units that are uniformly distributed. (The experts in the audience will recognize that I'm blatantly confusing "exponential" with "logarithmic", but to them I say, "you need to relax at least 15 dB".)