Scientific research is a far more difficult and competitive field than most laypeople assume. Not only do you have to pick the right techniques and make sure your research is valid, but you also have to contend with countless criticisms and attempted takedowns by other researchers. So it's no surprise that researchers have to be very good at what they do.
But when a study attempts to discredit all the other studies in a given field, you know things are going to get messy. That is exactly what happened here: a 2015 team replicated 100 psychology studies in an attempt to show that most of them did not hold up, and now a second team has re-examined that work and disproved the 2015 meta-analysis itself.
2015 meta-analysis
Ok, so this might sound like an undecipherable jumble, but once you know the backstory of the 2015 study, it's quite simple. Let's start with the meta-analysis known as the Reproducibility Project.
Back in 2015, a team of researchers set out to show that most psychological studies were poorly performed. By attempting to replicate 100 studies published in leading journals, the team concluded that fewer than 40% of them could be reproduced in a subsequent experiment.
So, the team replicated every single one of the 100 studies and stirred up a huge wave of controversy after publishing its conclusion. Contradicting a single study is one thing, but accusing an entire science of flawed methodology is another matter entirely.
2016 analysis of the meta-analysis
As expected, arguments broke out, and many people took the study's findings at face value. To this day, certain groups dismiss psychological findings because of it, much like the whole "vaccines cause autism" debacle.
So, a second team launched a separate research effort to verify the conclusions of the 2015 study. By replicating the same 100 experiments chosen by the first team, the second team achieved a nearly 100% reproducibility rate.
Ironically, the second team found no evidence of falsified data in the 2015 study, so the most plausible explanation is that the original team used the wrong procedures when attempting to replicate the studies. In other words, the team trying to prove that an entire field relied on faulty methodology ended up with bad results because of its own faulty methodology.
Of course, the original team contests the second team's findings, to little avail. They were objectively proven wrong, but sadly that isn't something the public tends to care about. Just as with the "vaccines = autism" and climate change denial groups, once false information reaches the masses, it keeps spreading like a cancer even after the source has been discredited.
Image source: Wikimedia