Uh oh: More evidence that “scientific” research is flawed …

Still more re: why “the science” is losing the public’s trust.


It’s not a new issue! From the HomaFiles archives… circa 2015

In a prior post, we reported that Dr. John Ioannidis, a director of Stanford University’s Meta-Research Innovation Center, estimated that about half of published results across medicine were inflated or wrong.

For details, see Uh-oh: Most published research findings are false…

Now, the NY Times is reporting on findings published in the journal Science which conclude that more than half of the studies published in three of the most prominent psychology journals are seriously flawed … their results can’t be replicated.

The Times says:

The report appears at a time when the number of retractions of published papers is rising sharply in a wide variety of disciplines.

Scientists have pointed to a hypercompetitive culture across science that favors novel, sexy results and provides little incentive for researchers to replicate the findings of others, or for journals to publish studies that fail to find a splashy result.


Here’s the basis for the conclusion that the majority of the studies reported flawed results …


According to the Times…

The project began in 2011, when a University of Virginia psychologist decided to find out whether suspect science was a widespread problem.

He and his team recruited more than 250 researchers, identified 100 psychology studies published in 2008, and rigorously redid the experiments in close collaboration with the original authors.

The analysis was done by research psychologists, many of whom volunteered their time to double-check what they considered important work.

In most cases, sample sizes were enlarged to boost statistical power, i.e., the odds of detecting a genuine effect.
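The sample-size point is worth unpacking: larger samples raise a study’s statistical power, i.e. the probability that a real effect will show up as significant. As a rough back-of-the-envelope illustration (my own normal-approximation sketch, not a calculation from the study), here’s how power changes with sample size for a modest effect:

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(d, n, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test with
    n subjects per group and standardized effect size d."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)       # ~1.96 for alpha = 0.05
    return 1 - NormalDist().cdf(z_crit - d * sqrt(n / 2))

# A smallish real effect (d = 0.3) with 30 subjects per group:
print(round(power_two_sample(0.3, 30), 2))    # ~0.21: the effect is usually missed
# The same effect with 120 subjects per group:
print(round(power_two_sample(0.3, 120), 2))   # ~0.64: much better odds
```

With only 30 subjects per group, a genuine small effect reaches significance barely one time in five … which is exactly why the replication teams ran bigger samples before declaring a result “failed.”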


The bottom line: Most of the studies didn’t pass statistical muster.

While there was no evidence of fraud, and none of the conclusions proved to be directionally false … the majority of the major conclusions couldn’t be statistically confirmed.

“Strictly on the basis of significance — a statistical measure of how likely it is that a result did not occur by chance — 35 of the 100 studies held up, and 62 did not.”

That’s a big deal since “the vetted studies were considered part of the core knowledge by which scientists understand the dynamics of personality, relationships, learning and memory.”

And, experts opine that the non-replicability problem could be even worse in other fields, including cell biology, economics, neuroscience, clinical medicine, and animal research.

Uh oh.



Follow on Twitter @KenHoma            >> Latest Posts
