"Replication crisis" in science

The race to publish, or publish-or-die, is, I think, a terrible methodology across all of academia. The substance of the publications is what matters, not the quantity, no matter what field you're in. This fecks around with that. I'd rather have no papers (social sciences or even hard sciences) but a great PhD thesis than a few mediocre articles as a prerequisite for the PhD. If you can do both, then fantastic, but not everyone works the same way. It's a massive problem, no doubt, when instead of applying this market-driven insanity to writing about cultural fields you apply it to medicine and find out, later down the line, that the stuff was junk.
 
1) This is so true in many fields, and the solution is beyond simple. It already happened in AI-related fields about 15 years ago, yet for no good reason no other field is doing it.

Make the peer-review system double-blind. The authors do not know who is reviewing their paper. The reviewers do not know whose paper they are reviewing. The area chair / action editor might know the reviewers but does not know the authors. The authors must declare domain conflicts, which means that the reviewers and action editor cannot be from the same institution as the authors or from institutions they work closely with.

Far from perfect, but it minimizes the amount of scientific cheating going on in the review process. (See the toy sketch below, after point 2, for what such a conflict check might look like.)


2) Agree. The amount of fighting over who gets the last authorship on collaborative papers is insane, even in cases where the egomaniac PI has done the grand total of feck all.
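For what it's worth, here is a toy sketch of the kind of conflict filter described in point 1. The names and data structures are entirely made up for illustration; real systems (OpenReview, conference management tools, journal platforms) do this differently and with far more nuance. The idea is just that any candidate reviewer who shares an institution with an author, or closely collaborates with one, is never considered for assignment.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Person:
    name: str
    institution: str
    close_institutions: frozenset  # institutions declared as close collaborators


def eligible_reviewers(authors, candidates):
    """Return candidate reviewers with no domain conflict against any author.

    A conflict exists if the reviewer's institution, or one of the reviewer's
    declared close institutions, matches an author's institution or one of the
    author's declared close institutions.
    """
    conflicted = set()
    for a in authors:
        conflicted.add(a.institution)
        conflicted |= a.close_institutions
    return [
        r for r in candidates
        if r.institution not in conflicted and not (r.close_institutions & conflicted)
    ]


# Example: reviewers from the authors' institution or close partners are dropped.
authors = [Person("A. Author", "Uni X", frozenset({"Lab Y"}))]
candidates = [
    Person("R. One", "Uni X", frozenset()),           # same institution -> conflicted
    Person("R. Two", "Uni Z", frozenset({"Lab Y"})),  # close collaborator -> conflicted
    Person("R. Three", "Uni W", frozenset()),         # no conflict
]
print([r.name for r in eligible_reviewers(authors, candidates)])  # ['R. Three']
```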

———

Depending on the field, standards are beyond unrealistic. Again, I'm talking about AI, because that's what I'm familiar with, but to get accepted for a PhD at a top university nowadays you must have 2+ first-author top-tier papers. Which is insane, because five years ago that was enough to get the PhD degree itself. China is extremely competitive: there are bachelor's students who manage to get 3-5 first-author papers during their undergraduate studies. Not very long ago, that was enough for a top-tier postdoc position, and it is still enough for a research scientist role at a FAANG company.

Btw, I have an intern at exactly that level (now a second-year PhD student) who has more top-tier papers, a higher h-index, and more citations than me. Complete insanity.
Journals that adhere to the COPE guidelines do much of this anyway.

Peer reviewers are essentially doing the work for free, and in some fields there are only a few of them. The system is based on honesty, but of course there are bad eggs, as there are in every walk of life.

The whole system is outdated now: it's still geared towards print publications, whilst today's world is online and publishing in quantities that were unthinkable 15-20 years ago. Impact factors still carry a lot of weight, but they're not really the best measure anymore; no one has come up with anything better yet, though.
 
Double-blind review is the same, as far as I know, in the bio sciences as well. Well, in principle it is, but in reality there is no way it can be. I guarantee you I could identify the lab that wrote most papers, even if the authors are hidden. The intro section is almost always 5 million words that basically say "look what I've done in the past", to the point that it can be worded as "in our previous paper..." with a citation. Either that, or papers with the same last author cited continuously are a dead giveaway. Also, the lazy methods and materials sections where it's a small blurb referencing a previous paper do my head in.
 
I'd rather have no papers (social sciences or even hard sciences) but a great PhD thesis than a few mediocre articles as a prerequisite for the PhD.
Yeah, that's not going to happen. My department, 20 years ago, had a two-first-author-paper requirement to graduate. It's why the average time to graduation was 6 years. PhD students are cheap labor, and a well-trained, seasoned student (year 4 and on) is an invaluable and cheap resource. Better to set up hurdles to keep them around.

There is a reason PhD Comics is so depressing

[PhD Comics strip: phd071610s.gif]
 
PhD students are cheap labor, and a well-trained, seasoned student (year 4 and on) is an invaluable and cheap resource. Better to set up hurdles to keep them around.
This I agree with. Not just PhDs but all postgrads, really, if you mean tutoring, reviewing, and other work rendered for free. It's a mad system, because most of these people are on a barely liveable wage (unless you have a hefty grant or come from money).
 
As it happens, there is an interesting article on this in Nature this week. Here is the blurb from Nature Briefing:

Which universities top the retraction chart?

Nature’s first-of-its-kind analysis reveals the institutions that have the highest retraction rates of scientific articles worldwide. Jining First People’s Hospital in China tops the charts at more than 100 papers retracted from 2014-2024 — 50 times the global average — with institutes from Saudi Arabia, India and Pakistan also featuring in the data, which includes lists of universities with the most retractions. Retraction data alone can’t act as an absolute indication of the countries, fields or institutions associated with low-quality work. However, this kind of analysis “could lead to some positive action” if institutions respond by examining what is leading to the patterns, says integrity sleuth Dorothy Bishop.
The full article is here: https://www.nature.com/articles/d41586-025-00455-y

It contains a whole bunch of interesting graphs and data, but to quote just two:
[Two charts from the article.]
 
It would be interesting to know which journals these are being published in. The actual numbers are quite small overall: apparently 40K retractions in the last decade out of 50 million articles, or less than 0.1% of published articles.
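Just to spell out the arithmetic behind that figure (taking both counts as the rough estimates they are, not exact numbers):

```python
# Rough sanity check of the "<0.1%" figure above; both counts are the
# poster's approximate estimates, not exact numbers.
retracted = 40_000         # retractions over the last decade (approx.)
published = 50_000_000     # papers published in the same period (approx.)
print(f"{retracted / published:.2%}")  # 0.08% -- indeed under 0.1%
```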
 
Make the peer-review system double-blind. [...] Far from perfect, but it minimizes the amount of scientific cheating going on in the review process.

Double-blindedness wouldn't help that much at all.

The average bio paper is between 8 and 20 pages, plus another 5-10 pages of supplementary information. The data that comes with it is a few GB of sequencing.
Peer review is an unpaid job which must be completed in 2 weeks, alongside regular work. It includes reading the paper and understanding the concept behind it (in some cases doing a literature search to make sure the authors aren't missing or misrepresenting past research); reading the paper's results, figures, and supplementary figures very closely to see if they actually are what they claim to be; sometimes poring over the Methods section to understand how the data was processed to generate those figures; and finally a quick once-over to see if the whole thing is coherent.

So, in this process, which takes at least 1-2 full working days, the reviewer hasn't looked at the raw data or done their own processing of it to see whether it produces the same figures as in the paper. And that's just one source of potential fraud: Western blots are faked with impunity and there's no data trail to follow for them, gels can easily be image-manipulated in general, and so on.

As a reviewer, I tend to look at the paper itself and see if it makes sense internally and against the known literature, not recreate the entire experiment. I'm not sleuthing around for image manipulation or outright faked data processing. And that's regardless of single- or double-blindedness.
 
I did not say that it solves the problem, but it minimizes the amount of cheating. It stops the collusion rings which we know happen in the field. You can still fake the results, of course.
 
The amount of cheating is minimal at best: less than 0.1% of published papers in the last decade. The real puzzle to be solved is not cheating but how to get a more robust peer-review system in place; a system that depends on the goodwill of volunteers is just not practical today.
 
It does seem a mountain to climb. I mean, look what it took to "fix" the PNAS direct submission issue! We used to choose those papers for journal club to get a laugh and to find examples of what NOT to do when writing a paper.
 
I work for one of the main publishing companies, and there have been countless initiatives and tools developed over the years, not all of them successful.

When I started in 1997, electronic publishing was in its infancy and everything revolved around the print version; now it's the complete opposite. Half the time we were inventing the process as we went along!