I cannot help laughing when someone says that they only believe in science and what papers published in peer-reviewed journals say, as if 100% of what they describe were truthful and correct.
Even leaving aside the various well-known cases of outright scientific fraud (and the many we do not yet know about), it is crazy to think scientists are always correct and simply cannot make mistakes or be subject to bias. Anyone working in academia (or data analysis) is painfully aware this is not the case (see the Recommended Resources below).
I am reminded of something that happened probably a decade ago now.
A UK university professor was chastised for using the wrong kind of statistical analysis on the data in the studies her group had been publishing. She responded with an apology for the mistake, adding that she was a bit puzzled: her team had been publishing papers along the same lines (the same type of data, the same type of statistical analysis) for five years, and various peer-reviewed journals had, up until then, happily kept publishing their papers without comment.
What does this illustrate?
– (non-statistics) professors and their teams may make mistakes in areas outside their main expertise, such as statistics (the first sketch after this list shows how one such mistake quietly inflates false positives)
– professors may not be doing their own stats for their own papers (or they may really suck at stats, or be subject to bias from looming deadlines, publication requirements, the need for further funding and such)
– research assistants and research students should be up to scratch on statistics or get the required help from someone who is. A statistics specialist should probably be enlisted for each group project. This does not always happen, it seems, though specialist statisticians and Master's programmes in statistics have been cropping up all over the UK in recent years
– the team leader should probably oversee and check everything, but in practice this may not be happening, for various reasons including time constraints
– these mistakes may go unspotted for a long time, so incorrect results are deemed correct, spread around the research universe and referenced by other researchers, corrupting the integrity of the “total body of evidence”
– peer-reviewed journals and their vetting processes are far from infallible
– you always need to use your own head rather than rely on other people’s interpretation of the data (possibly even questioning their data sample)
– sometimes the data can be questionably “massaged” so as to give welcome results. This may be harder to spot (without full access to the initial dataset), but the practice is apparently common in large companies and sometimes government organisations (a toy illustration follows this list)
– exercise caution in accepting any results
– don’t just read the results section (as someone with limited understanding once recommended); critically check the details of all aspects of the study and how the researchers arrived at their conclusions
– if the correctness of a study is important to you, really dig in: obtain the original data and information about the methods used, and try to reproduce the results. This is a very informative process.
– if you cannot reproduce the results, don’t feel too bad. It may be yet another case of the replication crisis (the back-of-the-envelope calculation after this list shows why failed replications are so common)!
– if a paper uses a lot of jargon, a high degree of linguistic obfuscation and fewer first-person pronouns than expected, you may have another reason to be suspicious of the results it reports
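To make the first point concrete, here is a minimal sketch (in Python, assuming numpy and scipy are installed; the data are simulated, not from any real study) of a classic non-statistician's mistake: treating repeated measurements from a few subjects as if they were independent observations. There is no true group difference in the simulation, yet the naive pooled test "finds" one far more often than the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
runs = 2000
false_pos_pooled = false_pos_means = 0

for _ in range(runs):
    # 5 subjects per group, 20 repeated measurements per subject;
    # each subject gets a random baseline (a subject-level random effect).
    a = rng.normal(0, 1, size=(5, 1)) + rng.normal(0, 1, size=(5, 20))
    b = rng.normal(0, 1, size=(5, 1)) + rng.normal(0, 1, size=(5, 20))

    # Wrong: pool all 100 correlated measurements as independent points.
    if stats.ttest_ind(a.ravel(), b.ravel()).pvalue < 0.05:
        false_pos_pooled += 1
    # Better: one number per independent unit (the subject mean).
    if stats.ttest_ind(a.mean(axis=1), b.mean(axis=1)).pvalue < 0.05:
        false_pos_means += 1

print(f"false-positive rate, pooled measurements: {false_pos_pooled / runs:.2f}")
print(f"false-positive rate, subject means:       {false_pos_means / runs:.2f}")
```

The pooled test badly underestimates the standard error, because the 100 measurements per group really carry the information of only 5 independent subjects.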
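And here is a toy illustration of the kind of "massaging" mentioned above (again Python; the dataset is pure noise generated on the spot, and the subgroup labels are entirely arbitrary): slice a null dataset into enough subgroups and, by chance alone, one of them will usually hand you a publishable p-value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200
outcome = rng.normal(size=n)                       # no real effect anywhere
treated = rng.integers(0, 2, size=n).astype(bool)  # arbitrary "treatment" label
subgroup = rng.integers(0, 20, size=n)             # 20 arbitrary subgroups

for g in range(20):
    t = outcome[(subgroup == g) & treated]
    c = outcome[(subgroup == g) & ~treated]
    if len(t) < 2 or len(c) < 2:
        continue  # skip subgroups too small to test
    p = stats.ttest_ind(t, c).pvalue
    if p < 0.05:
        print(f"subgroup {g}: 'significant' treatment effect, p = {p:.3f}")

# With 20 looks at pure noise, about one "discovery" is expected by chance.
# A paper reporting only that subgroup, and not the other 19, looks clean.
```

Without access to the full dataset and the full list of analyses that were tried, a reader has no way to tell that lucky subgroup from a genuine finding.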
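Finally, a back-of-the-envelope calculation in the spirit of Ioannidis's "Why Most Published Research Findings Are False" (listed in the resources below; the prior and power figures are illustrative assumptions, not measurements), showing why failed replications should surprise no one:

```python
prior = 0.10   # assumed share of tested hypotheses that are actually true
power = 0.50   # assumed typical statistical power of a study
alpha = 0.05   # conventional significance threshold

true_pos = prior * power          # true hypotheses that reach p < alpha
false_pos = (1 - prior) * alpha   # false hypotheses that reach p < alpha
ppv = true_pos / (true_pos + false_pos)

print(f"P(finding is true | p < {alpha}) = {ppv:.2f}")  # ~0.53 here
```

Barely better than a coin flip, and that is before any bias, data massaging or pseudo-replication is added to the mix.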
The list above is by no means exhaustive. For a more technical explanation of what can go wrong at all stages of a statistical analysis, and how to avoid these mistakes and biases, please see the Recommended Resources below.
Tags: research-quality, scientific-method, statistics
Recommended Resources:
- Statistics Done Wrong
- How to Lie with Statistics
- Mistakes Happen in Science (dynamicecology.wordpress.com)
- Good Scientists Make Mistakes
- Why Most Published Research Findings Are False: Problems in the Analysis
- How Scientists Fool Themselves and How They Can Stop
- Fifteen Common Mistakes Encountered in Clinical Research
- Top 6 Common Statistical Errors Made By Data Scientists
- Replication crisis: http://www.bbc.co.uk/news/science-environment-39054778 and https://en.wikipedia.org/wiki/Replication_crisis
- Stanford researchers uncover patterns in how scientists lie about their data