Poll Fault
Kos publishes a critique of the polling firm he used for the last year or so, and comes to the (correct, in my opinion) conclusion that he was defrauded.
A combination of random sampling error and systematic differences should make the M (men) results differ a bit from the F (women) results, and in almost every case they do differ. In one respect, however, the numbers for M and F do not differ: if one is even, so is the other, and likewise for odd. Given that the M and F results usually differ, knowing that, say, 43% of M were favorable (Fav) to Obama gives essentially no clue as to whether, say, 59% or 60% of F would be. Thus knowing whether the M Fav number is even or odd tells us essentially nothing about whether the F Fav number would be even or odd.
Thus the even-odd property should match about half the time, just like the odds of getting both heads or both tails if you tossed a penny and a nickel. If you were to toss the penny and the nickel 18 times (like the 18 entries in the first two columns of the table), you would expect them to show about the same number of heads, but you would rightly be shocked if they showed exactly the same random-looking pattern of heads and tails.
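Here is a small sketch of that arithmetic (mine, not part of the original post), assuming each penny-nickel pair agrees independently with probability 1/2:

```python
# A small sketch (not from the original post) of the coin-toss arithmetic:
# if each penny-nickel pair agrees independently with probability 1/2,
# about half of the 18 pairs should agree, and all 18 agreeing is a long shot.
from random import random

N_PAIRS = 18                      # rows in the first two columns of the table

# Chance that every one of the 18 pairs agrees purely by luck
p_all_match = 0.5 ** N_PAIRS
print(f"P(all {N_PAIRS} pairs agree) = {p_all_match:.2e}")   # ~3.8e-06, 1 in 262,144

# Typical behaviour: simulate 10,000 tables and count agreements per table
trials = 10_000
matches_per_table = [
    sum(random() < 0.5 for _ in range(N_PAIRS)) for _ in range(trials)
]
print(f"Average agreements per table: {sum(matches_per_table) / trials:.1f}")  # ~9 of 18
```

About nine of the eighteen pairs should agree on average; the chance that all eighteen agree is roughly one in 262,000.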
Were the results in our little table a fluke? The R2K weekly polls report 778 M-F pairs. For their favorable ratings (Fav), the even-odd property matched 776 times. For unfavorable ratings (Unf), there were 777 matches.
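To put a number on that, here is a quick calculation (my sketch, an exact binomial tail; it is not from Kos's critique) of the chance of seeing 776 or more even-odd matches among 778 independent 50/50 pairs:

```python
# A back-of-the-envelope sketch (mine, not part of Kos's critique) of how
# improbable 776 parity matches in 778 pairs would be if each pair matched
# independently with probability 1/2.
from math import comb, log10

N, K = 778, 776                                   # pairs; observed parity matches

tail = sum(comb(N, k) for k in range(K, N + 1))   # ways to get K or more matches
log10_p = log10(tail) - N * log10(2)              # log10 of tail / 2**N
print(f"P(at least {K} of {N} matches) is about 10^{log10_p:.0f}")   # roughly 10^-229
```

Even looking at the Fav column alone, the chance of that much agreement arising by luck is on the order of 10^-229, so coincidence is not a serious explanation.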
That's pretty obviously hanky-panky; indeed, it suggests that the three outliers were mistakes by the person compiling--err, creating--the poll results.