Quote: John Charnock "The authors of the report did not try to judge whether any decision was correct or not. It was a statistical analysis of how the number of decisions going each team's way depended on the nationality of the referee. After allowing for variables such as the quality of opposition, which team was at home, et cetera, the conclusion was that teams were favoured by referees of the same nationality. Some referees were worse than others, and there was less bias shown in games that were televised live. I am not a statistician, but on reading quickly through the paper it looked like a fair analytical study."
You can collect statistical data from anything and find trends and patterns in it. It is the nature of statistical data.
That does not mean the statistical data has any value in determining cause and effect in actual events, unless it is grounded in a corroborating analysis of those actual events.
i.e. - if you match penalty counts against the nationality of the referee, it will either show that refs give more penalties against their own nation, or it will show that they give fewer. Identical results are most unlikely, though the variation may be very small or, for whatever reason (correlation or coincidence), it might be quite large.
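To put a number on that point, here's a toy simulation (entirely hypothetical figures, no real match data) of a referee with zero bias awarding 200 penalties at random between "own nation" and "other nation". An exactly even split is the rare outcome; one side or the other nearly always comes out ahead, which is the raw material for a story in either direction.

```python
import random

def simulate_splits(n_penalties=200, n_trials=10_000, seed=1):
    """Count how often an unbiased 50/50 referee produces an exactly even
    split of penalties between the two sides. Spoiler: rarely."""
    rng = random.Random(seed)
    even = 0
    for _ in range(n_trials):
        # Each penalty goes against "own nation" with probability 0.5.
        own = sum(rng.random() < 0.5 for _ in range(n_penalties))
        if own == n_penalties // 2:
            even += 1
    return even / n_trials

print(f"Exactly even splits: {simulate_splits():.1%}")
```

So roughly nineteen times out of twenty, even a perfectly fair referee "leans" one way or the other in the totals.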
It's interesting you say "After allowing for variables such as the quality of opposition, which team was at home, et cetera"... this highlights some of the problems in drawing conclusions from such statistical data. What are the effects of those variables? Does a higher-quality opposition concede fewer penalties because they are better (and so higher in the league), or is a team higher-placed in the league there because they concede fewer penalties and so win more games? Does being at home increase your penalty count, or does it decrease it? Are referees sympathetic to home crowds, or antagonised by them? And will the decisions and patterns of behaviour of one official make the slightest difference to the patterns from another? In other words, how can you isolate one factor from another... where's the evidence in the study? And where's the bias in the interpretation of results?
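That chicken-and-egg problem has a standard name: confounding. A quick sketch (all numbers invented for illustration) shows how a hidden variable like underlying team quality can make league points and penalty counts strongly correlated even though neither causes the other:

```python
import random

def spurious_correlation(n_teams=1000, seed=2):
    """Hypothetical model: unobserved 'quality' drives league points UP
    and penalty counts DOWN. Points and penalties never touch each other,
    yet they end up strongly correlated."""
    rng = random.Random(seed)
    points, penalties = [], []
    for _ in range(n_teams):
        quality = rng.gauss(0, 1)                    # the hidden confounder
        points.append(2.0 * quality + rng.gauss(0, 1))
        penalties.append(-1.5 * quality + rng.gauss(0, 1))
    # Pearson correlation, computed by hand (no external libraries).
    n = n_teams
    mx, my = sum(points) / n, sum(penalties) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(points, penalties)) / n
    sx = (sum((x - mx) ** 2 for x in points) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in penalties) / n) ** 0.5
    return cov / (sx * sy)

print(f"points-vs-penalties correlation: {spurious_correlation():+.2f}")
```

A naive reading of that output would be "good teams are let off by referees" or "conceding penalties costs you league position", and the data alone cannot tell you which, if either, is true.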
Stats will always be subject to interpretation and presentation, but you cannot evaluate the decisions of an official based on stats and averages. You cannot, in fact, do it without observing the performance of the official on individual decisions in individual games.
If I looked at the penalty count against Lancashire teams and against Yorkshire teams, it would either be higher or lower. If it were higher, I could say Lanky sides were unfairly punished by the Leeds-based RFL. If it were lower, I could say that referees lean west. Or I could say that the higher proportion of good Lancashire teams (Wigan, Saints, Wire vs Salford, 3-1 high-quality vs poor quality) means that their sides have better discipline than the more evenly-split Yorkshire teams. Or I could say that the higher rainfall in western England means matches are slower and less open, so there is less time in which to concede penalties, and there are fewer offsides because there are fewer quick ptbs in heavier conditions.
But - and here is the most important point about my hypothetical study, and about the real one mentioned in the Independent - it would all be spurious, made-up b0llocks.