No, not really; I just made that number up. However, if I had put that percentage into a graph or a colourful pie chart, then most people would not question it (and I am including myself in that!).
If someone had told me a week ago that I would be sitting down to write a blog post on statistics because it interested me, I'd probably have laughed and carried on with my day. But that is exactly what I am going to do. My memories of statistics in school are of staring at bar charts and line graphs with a blank expression, wondering how this could ever be worth learning.
Kennelly (2010) suggests that we have been raised to think that numbers represent absolute fact, and that in maths there is one and only one correct answer. Much less emphasis is put on the fact that in the real world, numbers don't give any information without units or some other frame of reference. We are almost hard-wired into believing that the numbers we see in everyday life were measured with infinite precision, and that can lead to some major misinterpretations! I remember sitting in a class in high school with my tutor, who (as an English teacher) was very vocal about his issues with maths. He said, "If you go to Stonehenge one year and the tour guide says it's 5,000 years old, then go back a year later, the tour guide won't say it's 5,001 years old." Although arguably bitter about how numbers are sometimes dubious, he was (in hindsight) correct. We can never really know the age of Stonehenge that precisely, because of the methods used to determine it.
This leads me onto the actual reliability of statistics, and how simple maths can have wide-reaching effects, such as changing our perceptions and shaping our unconscious biases. Statistics are undoubtedly persuasive. So much so that people, businesses, and whole countries base some of their most important decisions on this data. But any set of statistics might have something lurking inside it that can turn the results completely upside down. One particular study appeared to show that smokers had a higher survival rate over 20 years than non-smokers. The cynic in me suggests that this study is entirely fictional. However, it was a genuine study. What didn't they tell us? The group of non-smokers was significantly older on average. It is therefore important that we carefully study statistics, and the groups they describe, before letting them shape our perceptions.
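To see how an age imbalance can flip a result like this, here is a minimal sketch in Python. The numbers are entirely hypothetical (they are not from the actual study); they simply show the mechanism, often called Simpson's paradox, by which smokers can fare worse within every age band yet still appear to fare better overall.

```python
# Hypothetical numbers, invented purely for illustration (not the real study's data).
groups = {
    # (smoking status, age band): (survived 20 years, total surveyed)
    ("smoker", "18-64"):     (850, 1000),
    ("non-smoker", "18-64"): (270,  300),
    ("smoker", "65+"):       ( 60,  200),
    ("non-smoker", "65+"):   (400, 1000),
}

def survival_rate(status, band=None):
    """Survival rate for a smoking status, optionally within one age band."""
    survived = total = 0
    for (s, b), (lived, n) in groups.items():
        if s == status and (band is None or b == band):
            survived += lived
            total += n
    return survived / total

for band in ("18-64", "65+"):
    print(f"{band}: smokers {survival_rate('smoker', band):.0%} vs "
          f"non-smokers {survival_rate('non-smoker', band):.0%}")
# Within BOTH age bands, non-smokers survive more often, yet overall...
print(f"Overall: smokers {survival_rate('smoker'):.0%} vs "
      f"non-smokers {survival_rate('non-smoker'):.0%}")
# ...smokers look healthier, because most of the non-smokers sit in the older band.
```

Run it and the smokers "win" overall (about 76% vs 52%) despite losing in every age band: the comparison is really between a younger group and an older one.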
While numbers don't actually lie, they can be used to mislead with 'half-truths' (Kennelly, 2010). This is known as the "misuse of statistics", and it is often individuals or companies seeking to profit by distorting the truth. For example, I am an avid user (and promoter) of good old hand sanitiser. My mum even used to suggest using it instead of touching the dirty taps and handles in public toilets (gross). Why? Because it kills 99.9% of germs, of course! But in fact (to my horror), when reading up about statistics, I found out that it may kill as few as 46% of germs in real-world settings rather than in a laboratory. This means only one thing. I am going to have to break the news to mum. Just kidding, this is one statistic that I am just going to have to pretend I did not see; not every Monday morning has to be filled with bad news.
Unfortunately, the murky web of statistical deceit only gets deeper from here. A 1998 paper in the real-life journal The Lancet claimed that the measles, mumps, and rubella (MMR) vaccination for babies was causing an increase in autism. It was later confirmed that the author had lied about his findings. Unfortunately, this statistical misinformation made huge numbers of people afraid to immunise their babies, which led to an increase in diagnoses of measles across the US. As well as this, a 2009 investigative survey by Dr. Daniele Fanelli from The University of Edinburgh found that 33.7% of scientists surveyed admitted to questionable research practices, including modifying results to improve outcomes, subjective data interpretation, withholding analytical details, and dropping observations because of 'gut feelings' (Lebied, 2018).
So how can statistics be made to be misleading, either on purpose or by accident?
Faulty polling
The way that questions are phrased can have a huge impact on how a participant answers them. Specific wording can have a persuasive effect and cause individuals to answer in the way that the question-asker requires for their study. For example, compare being asked 'Do you believe that you should be taxed so other people don't have to go to work?' with 'Do you think that the government should help people who cannot find work?' Both are loaded questions. A better question would be 'Do you support government assistance programmes for the unemployed?'
Flawed correlations
The big issue with correlations is that if you measure enough variables, eventually some of them will appear to correlate. With enough data, a study can be manipulated to 'prove' a correlation that does not really exist, or to imply causation that the correlation alone cannot support, as the sketch below shows.
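As a minimal illustration (a sketch, not any particular study's method), this Python snippet generates 100 columns of pure random noise and then hunts for the strongest pairwise correlation. With thousands of pairs to choose from, an apparently impressive relationship almost always turns up.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(50, 100))   # 50 observations of 100 unrelated variables

corr = np.corrcoef(data, rowvar=False)   # 100 x 100 matrix of pairwise correlations
np.fill_diagonal(corr, 0)                # ignore each variable's correlation with itself

# Find the pair of "unrelated" variables with the strongest apparent link.
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"Strongest 'relationship': variables {i} and {j}, r = {corr[i, j]:.2f}")
# With 4,950 pairs to choose from, an |r| of around 0.5 is routine here,
# even though every variable is independent noise by construction.
```

Report only that one pair and it looks like a finding; report the 4,950 comparisons that produced it and the "discovery" evaporates.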
Misleading data visualisation
Graphs and charts can be misleading if they do not use an appropriate scale, start the axis at 0, or use an appropriate basis of calculation, such as a consistent time period. Basically, if the graph looks good, people may believe it.
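For instance, here is a minimal matplotlib sketch (with made-up sales figures, purely for illustration) of the classic truncated-axis trick: the same data looks either dramatic or flat depending on where the y-axis starts.

```python
import matplotlib.pyplot as plt

years = [2019, 2020, 2021, 2022]
sales = [100, 101, 102, 103]        # hypothetical figures: roughly 1% annual growth

fig, (misleading, honest) = plt.subplots(1, 2, figsize=(8, 3))

misleading.bar(years, sales)
misleading.set_ylim(99, 104)        # axis starts just below the data: tiny changes look huge
misleading.set_title("Y-axis starts at 99")

honest.bar(years, sales)
honest.set_ylim(0, 110)             # axis starts at 0: growth is shown in proportion
honest.set_title("Y-axis starts at 0")

plt.tight_layout()
plt.show()
```

Both charts plot identical numbers; only the left one would make a headline.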
Selective bias
This is about the nature of the people surveyed: for example, asking only young people whether the legal age limit for drinking should be lowered.
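To make the effect concrete, here is a small simulation with invented opinion rates (hypothetical numbers, purely for illustration): polling only the youngest group gives a very different answer from polling a representative sample.

```python
import random

random.seed(1)
# Hypothetical support for lowering the drinking age, by age group.
support_rate = {"18-24": 0.80, "25-54": 0.45, "55+": 0.25}
population_share = {"18-24": 0.15, "25-54": 0.55, "55+": 0.30}

def poll(groups, n=1000):
    """Simulate n respondents drawn from the given age groups."""
    votes = 0
    for _ in range(n):
        group = random.choices(groups, weights=[population_share[g] for g in groups])[0]
        votes += random.random() < support_rate[group]   # respondent says 'yes'
    return votes / n

print(f"Young respondents only: {poll(['18-24']):.0%} in favour")
print(f"Representative sample:  {poll(list(support_rate)):.0%} in favour")
```

The biased poll reports roughly 80% support; the representative one lands nearer 44%, simply because of who was asked.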
So what can I conclude from these surprising findings (other than to trust no one!)? If anything, this has made me more determined and passionate about teaching maths. It is clearly important to teach children the real-world value of numbers and to engage them, because the consequences range from something as small as misunderstanding how many germs you're actually killing to something as large as a dangerous misperception of immunisation. If I had been taught with this knowledge, perhaps I would have paid more attention to those seemingly 'boring and useless' graphs.