Mike Zullo’s investigation into the authenticity of President Obama’s birth certificate has been plagued by inexpert testimony. First Zullo presented image analysis by people who didn’t know what they were talking about. Next he presented contextual criticism of the document based on faulty memory, a lack of expertise in vital records, and a falsified historical document. In his debunked certificate numbering scheme, he accepted the results of investigators who knew nothing about vital records. Finally Zullo found a real expert, a handwriting analyst, who seemed to agree with Zullo, but who had no known background in the field of electronic documents and high-end compression algorithms. (The Reed Hayes report was never shown to the public.)
Most recently, in his December 15, 2016, attempt, Zullo seemed to be making a statistical claim, even though he did not qualify his sources as statisticians, and his refusal to disclose the methodology and analysis used confounds peer review.
What I will do in this article is talk in general about the statistical fallacies that may underlie Zullo’s argument, and then in Part 2 present my own experiment and analysis.
Debunking many birther claims is within the reach of the non-expert. If a birther says “X” is impossible, it is only necessary to show an example of “X” to prove it false. This business of the date stamp angles, though, is going to require some expertise. Plausible-sounding statistical arguments can be wrong. As I frequently say, “I am not a real doctor, but I have a Master’s Degree, in Science!” For this debunking, I am going to play the expert card, my MS in Mathematics from Clemson University.
The week I was born, a man made 28 passes in a row at a Las Vegas dice table, something said to have only one chance in ten million of happening. Remarkable? The fact is that millions play dice every year, and when something like this happens, it makes the newspapers. If an experiment is tried enough times, then unlikely outcomes become likely to occur. Ignoring the number of trials is the fallacy. Unusual events pique our interest, but they should not surprise us. These things happen every week.
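For readers who like to see the arithmetic, here is a tiny Python sketch of why the number of trials matters. The per-streak probability is the “one chance in ten million” figure from the newspaper story; the attempt counts are purely illustrative guesses on my part, not casino statistics.

```python
# Chance that a "one in ten million" streak shows up at least once,
# as a function of how many times the game is played.
p = 1e-7  # per-attempt probability of the streak, as reported

for attempts in (1, 1_000_000, 10_000_000, 50_000_000):
    at_least_once = 1 - (1 - p) ** attempts
    print(f"{attempts:>10,} attempts -> P(streak happens) = {at_least_once:.3f}")
```

With ten million attempts the “one in ten million” streak already has better than a 60% chance of happening to somebody, somewhere.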
A great example is the Pick 3 lottery number winner in Illinois the night after Obama was elected President: 666. What are the odds of that? Would you say “one in a thousand” (.001)? Notice that the event happened the day after the election, not on the day of the election. If it had happened on the day of the election, the same claim of oddity would have been made. So isn’t the question better asked as “what is the probability that 666 would come up within two days of the election?” The .001 probability becomes .001999. But wouldn’t an anomaly have been declared if the number came up on Obama’s birthday? Inauguration day? The day the Electoral College voted? And would a claim have been made if the number came up in Hawaii’s lottery? If a longer winning number started or ended with “666”? The question becomes: “what is the probability that 666 could come up in some context related somehow to Barack Obama over some period of time?” There are many significant Obama events and many things that can be coincidental with them. So the actual question is: if you look at every detail of Obama’s life and every item coincident to it, what is the probability that a few odd things will be found? (And what about my 666 watch story?)
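The same formula makes the point with the lottery: the more drawings you are willing to count as “eerie,” the more likely some 666 coincidence becomes. The counts of qualifying drawings below are my own illustrative choices, not a survey of actual lottery results.

```python
# Each Pick 3 drawing has a 1/1000 chance of coming up 666.
p = 1 / 1000

# Widen the set of drawings you would have called "eerie" and the
# probability of some 666 coincidence grows accordingly.
for drawings in (1, 2, 10, 50, 365):
    prob = 1 - (1 - p) ** drawings
    print(f"{drawings:>3} qualifying drawings -> P(at least one 666) = {prob:.6f}")
```

Two drawings give the .001999 figure above; allow a year’s worth of “meaningful” dates and the probability climbs to roughly 30%.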
Another statistical fallacy, the prosecutor’s fallacy, was made by Christopher Monckton when he used calculations of the improbability of a combination of anomalies he thought were in Obama’s birth certificate as evidence that it was a fake.
At its heart, the [prosecutor’s] fallacy involves assuming that the prior probability of a random match is equal to the probability that the defendant is guilty.
For instance, if a perpetrator is known to have the same blood type as a defendant and 10% of the population share that blood type, then to argue on that basis alone that the probability of the defendant being guilty is 90% makes the prosecutor’s fallacy (in a very simple form).
His case was further undermined by the fact that his anomalies weren’t anomalous and that he used calculations for independent events, when the events were correlated.
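To see how badly the fallacy misleads, here is a rough numerical sketch of the blood-type example above. The suspect-pool size is a made-up figure for illustration, and it assumes each person in the pool is equally likely to be guilty before the blood evidence is considered.

```python
# Prosecutor's fallacy sketch: a 10% random-match rate does not make
# the defendant 90% likely to be guilty.
N = 100_000            # assumed pool of plausible suspects (illustrative)
match_rate = 0.10      # share of the population with the matching blood type

guilty_and_match = 1.0                      # the true perpetrator always matches
innocent_and_match = (N - 1) * match_rate   # expected innocent people who also match

p_guilty_given_match = guilty_and_match / (guilty_and_match + innocent_and_match)
print(f"P(guilty | blood type matches) = {p_guilty_given_match:.5f}")
# about 0.0001 -- roughly 1 in 10,000, nowhere near 90%
```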
Improbable events occur in our lives all the time. What were the chances that my wife visiting Kiev (population 2.8 million) in the Ukraine would have a chance meeting on the street with another graduate of Auburn University, when neither of them was wearing any emblem of that school? Do the math!
Here’s another example: The old philosophical problem of proving a negative. The example is the proposition, “all ravens are black.” It can’t be proven because it is false, but white ravens are rare. What is the probability that I would come across not one but three of them? If I decided to go looking in my back yard, the answer is “extremely small” (I don’t get ravens at all), but if I went looking for “white raven” in Google Images, not so unusual. And to tweak the result further, in all honesty I wasn’t looking for a set of three when I started out. I changed the rules after the fact.
The story has gone around that college math professors get extra income by betting their classes that at least two people in the class will have the same birthday. Would you take that bet? Let’s do the math:
We’ll pick our first student and compare that birthday with the second student’s. The second student has a 1 in 365 chance of matching, and a 364/365 chance of not matching. For the third student there are 363 dates that won’t match the first two, so the chance of theirs being different is 363/365. One multiplies the two fractions together to get the compound probability of the teacher losing the bet for a class of three, and continues the same way for each additional student. The amazing result is that at 23 students, the odds are roughly 50/50 that there will be a match. For a class size of 30 the professor has about a 71% chance of winning, and with a class of 100, the chance of the professor losing is about one in three million. The object of the story is that improbable events are likely to occur in large samples.
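Anyone who wants to check the professor’s odds can run this short calculation (it ignores leap years and assumes birthdays are spread evenly over the year):

```python
# Exact birthday-problem odds (ignoring leap years, assuming uniform birthdays).
def prob_shared_birthday(n):
    """Probability that at least two of n people share a birthday."""
    p_all_different = 1.0
    for k in range(n):
        p_all_different *= (365 - k) / 365
    return 1 - p_all_different

for n in (23, 30, 100):
    print(f"{n:>3} students -> P(shared birthday) = {prob_shared_birthday(n):.7f}")
```

The output shows about 50.7% for 23 students, 70.6% for 30, and 99.99997% for 100.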
The same mistake, ignoring sample size, leads to false identifications associating two online personas. What are the chances that two different people posted photos from the same PhotoBucket account, live in the same state, and have the initials “RB”? I don’t know the odds, but there are two.
Let’s bring the examples a little closer to home. Let’s say I have a Hawaiian birth certificate, and an instance on one form where two characters have a certain spatial relationship, and I then find another form where the two letters are in exactly the same relationship to each other. Let’s say that through some argument (which may be fallacious) you determine that the odds are one in a thousand that the pairs would correspond. Have I found a very unlikely event? The answer is no for at least two reasons. First, I picked the pair after I found the correspondence. There are 196 typed characters on Obama’s birth certificate, yielding 19,110 pairs of characters to compare. So finding a one-in-a-thousand event in a sample of 19,110 is not unlikely at all; it’s almost inevitable (better odds than the one in three million in our birthday example).

The second error is assuming that the positions of the characters are independent. In fact a typewriter is designed to consistently put characters in exactly the same relation to other characters, line by line, day by day, in a grid that is 6 lines per inch vertically and 10 characters per inch horizontally. So rather than compute a probability assuming that the spacing is random (that the events are independent), we should be asking how probable it is that the pair of characters sits in the same relative position on both forms, given that they were typed on the same model of typewriter (and based on font analysis, it appears that the same model of Kapi’olani Hospital typewriter typed all of its birth certificates), and likely the same typewriter.
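Here is a quick check of that 19,110-pair arithmetic, taking the one-in-a-thousand figure at face value and (wrongly, as just noted) treating the pairs as independent:

```python
from math import comb

pairs = comb(196, 2)    # 196 typed characters -> 19,110 distinct pairs
p_match = 1 / 1000      # the (arguendo) one-in-a-thousand correspondence

# If each pair independently had that chance of lining up, finding at least
# one "match" somewhere on the form is a near certainty.
p_at_least_one = 1 - (1 - p_match) ** pairs
print(pairs)                      # 19110
print(f"{p_at_least_one:.10f}")   # about 0.9999999951
# And the independence assumption is itself wrong: a typewriter's fixed grid
# makes matching spacing the expected outcome, not a coincidence.
```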
Let me emphasize that we do not know anything about the methodology, analysis, or assumptions made in the reports that Zullo talks about but refuses to release. They may be sophisticated or naive, but they are almost inevitably wrong unless you assess a massive multigenerational cover-up of the facts of Obama’s birth, involving multiple administrations of both parties in the Hawaii Department of Health, 1961 Honolulu newspapers, numerous White House staffers, and the President of the United States, as having a probability greater than one in a thousand. I doubt that we will ever see the analyses from Mike Zullo. I cannot critique what I haven’t seen, and the confidentiality of his relationships with his experts is a screen that Mike Zullo hides behind.
What I do know is that the analysts Zullo consulted did their work based on samples that Zullo supplied, samples that could have been selected to skew the outcome. For example, we know that Reed Hayes was given the White House birth certificate PDF to look at, while not being shown the photographs of it, photographs that call into question his conclusion that no paper document existed. Hayes naively saw the pixelated portion of Stanley Ann Dunham Obama’s signature as proof that the signature was done in two parts, when in fact this was a Xerox layer-separation artifact that Hayes would not have seen had he been given other images of the birth certificate to work with. Was ForLab shown all of the date stamp samples in my article, or was the very close 1959 date angle omitted, and the very different Nordyke certificate, stamped by a different clerk, included?
Stay tuned for Part 2 where Doc gets his hands dirty with a real experiment.