The Role of Preprints in Life Science Research with bioRxiv Co-Founder Dr. Richard Sever

Dr. Richard Sever is Assistant Director of Cold Spring Harbor Laboratory Press at Cold Spring Harbor Laboratory in New York, and he co-founded the preprint servers bioRxiv and medRxiv in 2013 and 2019, respectively.
After receiving a degree in biochemistry from Oxford University, Richard obtained his PhD at the MRC Laboratory of Molecular Biology in Cambridge, UK. He then moved into editorial work, serving as an editor at Current Opinion in Cell Biology and, later, Trends in Biochemical Sciences. He subsequently served as an executive editor of the Journal of Cell Science before moving to Cold Spring Harbor Laboratory in 2008. In this article, which includes the edited transcript from Dr. Sever’s Lab Coats & Life™ Podcast episode with STEMCELL’s Director of Brand & Scientific Communications, Dr. Nicole Quinn, and co-host of the Stem Cell Podcast, Dr. Daylon James, Richard discusses his involvement with open access publishing and explores how preprints can help advance science, the role of funding, and the future of peer review.
In the ever-evolving landscape of scientific research, the need for collaboration and rapid dissemination of findings is more critical than ever. Preprint servers have emerged as a revolutionary platform in this context, offering researchers a way to share their work swiftly and openly prior to submission to a peer-reviewed journal. This article explores the role of preprints and their impact on the scientific community; it also addresses common concerns while highlighting their potential to democratize access to scientific knowledge. Preprint publishing in the life sciences is transforming the way researchers communicate and collaborate, underscoring the importance of understanding its benefits and challenges.
Podcast published April 2024.
The following interview has been edited for clarity and brevity. The views expressed in this interview are those of the individuals and do not necessarily reflect the views of STEMCELL Technologies.
The Rise of Preprint Servers in Life Sciences
When and how did preprint publishing begin?
Richard Sever: I'm one of those people who left the lab pretty soon after doing a PhD. So most of my working life has been as an editor of journals. But most recently I've been spending my time on preprint servers, and specifically the preprint server bioRxiv, which now has a sibling, medRxiv. It's an opportune time to chat because we just celebrated our 10th anniversary, and it's interesting looking back. I think there's quite a few people who didn't think we would ever get to 10 years.
Nicole Quinn: I remember attending an AAAS panel discussion in 2016 or '17 about open access and preprints, and it was the first time I'd ever heard of a preprint. At the time, I remember thinking, I hope it works, but I don't know if it's going to. So congratulations on 10 years. It's made a huge impact in how science is communicated. What were some of the hurdles and milestones along the way?
RS: We have more than a quarter of a million preprints across bioRxiv and medRxiv, so that's a nice landmark to pass after 10 years. It's been an interesting journey and one that you couldn't necessarily have forecast. It's really important to remember that bioRxiv was predated by arXiv in physics, computational science, and math, and they've been doing it for a long time. That started in 1991, when Paul Ginsparg launched arXiv. There were a number of conversations over the years about, could this be done in biology? Could this be done in medicine? A lot of people said no. Not only did people say no, people tried and failed. Then we launched bioRxiv in 2013, and it's interesting to think about why it succeeded [when past attempts had failed]. People had said in the past that biology is different; physics produces easily verifiable conclusions whereas biology is messy and completely different. They thought that everyone would get totally confused and that the field would disintegrate. Then it was interesting talking to people like Paul Ginsparg, who said, “Well, these people don't know anything about physics. Physics is not easily verifiable. It's messy, too.” We were lucky to be in Cold Spring Harbor because we had people who had been physicists, who then became biologists. They thought we ought to be doing this in biology as well. That was a real turning point for me.
There was also a lot of fear among individual users about how journals might react. People were worried that if they put their preprint online, journals were not going to look at it because it was previously published. So that was a big fear, and that fear for some people continues to this day even though, in the interim, it became very clear that all the journals were not only fine with preprints, but they had to be fine with it because the movement is growing.
People were worried that if they put their preprint online, journals were not going to look at it because it was previously published. So that was a big fear, and that fear for some people continues to this day even though, in the interim, it became very clear that all the journals were not only fine with preprints, but they had to be fine with it because the movement is growing.
Dr. Richard Sever
RS: My colleague, John Inglis, and I talked more about the idea and decided to give it a go, while really listening to the community. We got a good reception, particularly in the population genetics community; it was clear that they had been thinking about this, too. I think there's a general level of dissatisfaction with science publishing that everybody hears about, and in particular, that it takes so long. You go to a conference and you hear somebody talking about work, and you think, wow, that's amazing. But then the paper would appear a year later. I think people were just generally thinking, with the World Wide Web, why does it have to be like this? Of course, the answer is, it doesn't. You can decouple making the information available from the process of peer review, which had all been conflated in the journal world.
What has the impact been on publishing in the life sciences?
RS: A lot of scientists, particularly in genomics, evolutionary biology, and computational neuroscience, got immediately involved. There was a wave of those kinds of people who got involved and started posting papers on bioRxiv. Then there was almost an osmotic process throughout the community. A stem cell biologist would say, “Oh, I just read this genomics paper about some aspect of development,” and then people in developmental biology and cell biology began to do it. Next, funders started realizing that it was a good thing.
It became most evident [that preprints were needed] during the COVID-19 pandemic. We had launched medRxiv, for clinical information, six months before, expecting that it would have slow growth like bioRxiv did and ramp up over a number of years. However, six months later, there were 10 million people looking at it every month, and we had 4,000 papers, because everybody understood that for COVID work and understanding the virus, getting the information out really, really quickly was important. That became a real demonstration of the effectiveness of preprints and showed that they could work in medicine.
Addressing Concerns About Preprints in Life Sciences
Why do we need preprint servers?
NQ: I think a year is generous when you say it takes a year to publish, as sometimes it’s a lot more than that. You also mentioned conferences, and I think we're lucky if we see something at a conference, because conferences are pretty exclusive places to be as well, which highlights how research can be hidden for some time.
Daylon James: You both underscored it there: it's the speed. I think one of the major responses to the question, “Why do we need a preprint server?” is that it takes too long to get the stuff out. I heard you in a talk you gave online about how Steve Quake at the Biohub Network had done some back-of-the-envelope calculations and came around to the idea that preprints can accelerate science by roughly 5 times over the course of a decade. It's obvious why it seeds collaboration and benefits young scientists, as it’s great for visibility, and there's no paywall on these preprints. It seems like a no-brainer. But there's the counterargument that I think a lot of people make: that speed is really haste, that maybe the work is a little bit less vetted, or that there is less rigor. It's more about just getting it out there, making the claim, than actually supporting it. I'm sure there have been a lot of haters out there who have been telling you why it's a bad idea or why it's going to fail or how it's going to lower the standard.
What are the arguments against preprints?
DJ: What's your response to those people? When people talk about the muddying of the waters or the reduction of rigor, is anyone pointing towards any specific evidence? Or are they all just saying, “Oh, this could happen?” What are the arguments that you hear against the preprint?
RS: I think the possibility that there's information out there that is wrong is a concern. But my feeling is that that is already the case without preprints. When we were planning medRxiv [a few years after launching bioRxiv], I was talking to my dad, who's an older generation physician, about this concern, and he said to me, “Well, I get what you're doing with bioRxiv, but I'm not sure about medRxiv. I worry about the potential for misinformation coming out.” We talked about it a little bit, and then he came back to me about three weeks later. When I spoke with him on the phone, he said, “Oh, I've been thinking about your medRxiv thing. I've decided it isn't a problem, actually, because all the crap gets published somewhere anyway.” It's flippant, and I don't want to seem like I don't care about this, but I do think we have to set it against the backdrop of the fact that you can now put any information on the web. By not doing preprints, you aren’t stopping things from getting out.
But not only that, you can pretty much publish anything somewhere these days. There's really been a race to the bottom with journal publications. There was a discussion on X [formerly Twitter] recently about this dreadful paper that's appeared in a peer-reviewed journal, which everybody thinks is complete nonsense. There are lots of examples of that. The good thing about a preprint is it comes with a massive sign on the front, basically saying, this has not been peer-reviewed. We're not making any claims about it at all. It could all be wrong. So I think it focuses people's attention on this issue, but it's very transparent in that aspect. I do find it hard to find examples of preprints that provide evidence that this is a problem. I actually find it quite easy to find examples of journal articles which purport to be peer-reviewed and are incredibly misleading.
People sometimes think we just put every paper we get online, but actually everything is looked at by a scientist. It's not peer review, but we look to see whether there's something that's potentially dangerous if it were wrong.
Dr. Richard Sever
It’s also worth emphasizing that one of our major concerns with bioRxiv and medRxiv was always that some papers might be dangerous, and that if they were wrong, there could be bad consequences, and so we read all the papers. People sometimes think we just put every paper we get online, but actually everything is looked at by a scientist. It's not peer review, but we look to see whether there's something that's potentially dangerous if it were wrong. In those cases, we turn them away.
How do you dispel myths about preprint servers?
RS: Really, the worry is that it might take you down the wrong direction experimentally. But people point out that preprints are being read, for the most part, by people who are perfectly equipped to peer review them anyway. It’s like a conference, but with six billion potential people in the audience. Most of the people reading preprints are reading arcane bits of science and biology or health science. So most people who read them will be experts, equipped to evaluate them. If they aren't, then they should ask somebody else. I find it hard to point to specific instances where there's been a problem.
DJ: I want to elevate that point because I think that's something that's lost on many scientists I've spoken to about preprints; there’s this idea that they are like a beefed-up Twitter [now X] where everyone can post whatever they think or whatever they've found. Whereas, in fact, there is a screening process. I think it's important to emphasize, with medRxiv in particular, that misinformation that might be exploited by bad actors is excluded, whereas, as you said, sometimes you have peer-reviewed journals that are happy to publish it.
The Evolution of Preprints
Are the types of articles that are going to bioRxiv changing?
RS: One change that’s clear is that different disciplines adopt preprints at different rates. Looking back at the original arXiv, it started with high-energy physicists; condensed matter physicists and various others came later. There are still some areas of physics that haven’t adopted preprints. We saw something very similar with bioRxiv. Initially, it was mainly genomics and genetics, and then bioinformatics. But as I said, the developmental biologists came along and then the cell biologists. So the population of papers is changing, but mainly because more people are getting on board in different subjects. I think around 70 to 80% of these papers go on to appear in journals.
What’s happening with the 20 to 30% of papers in preprint that aren’t published later in journals?
RS: That 70 to 80% figure is calculated after two years. So some fraction of the 20 to 30% will appear in a journal [because, as mentioned earlier, we know it often takes longer than two years to publish]. But there will be a class of paper that doesn't appear in a journal. What are those? Some of them may be so bad that they never get into a journal, although I don’t think it’s many. Some of them may have morphed and become part of something else. But some will be in a category of paper where people say, “I want to get this information out there. I'm done with it now. I don't need to send it to journals. It doesn't need to be peer-reviewed.” There are certain types of study where people know that it’s enough that it's out there for people to read. I think it's going to be really interesting to watch that fraction. Some people predict that that could become 50% of papers [in preprint]. If we lose this obsession with the branding and impact factor, you may have a lab that puts half of their papers on bioRxiv and sends the others to a journal because there's some reason that they need to go through the formal peer-review process.
Clearly, during COVID, there was a certain type of paper where people wanted to get the results out and read fast. By the time it could be peer reviewed it would have been totally out of date. They were of pretty variable quality. We saw some modeling of the epidemic. If a lab had started modeling the epidemic in March of 2020, how useful are those predictions if the paper is published in December of 2020? So there are various reasons why some papers never move on to formal peer review. It's always interesting when I talk to scientists about the future of peer review and I ask, “How many papers should be peer reviewed?” I get answers from zero to 100%. Some people think peer review is so broken and so wrong that we shouldn't do formal peer review of any papers. Other people think that absolutely every single thing should be peer reviewed and that the entire field will disintegrate if the papers aren't. I think that'll be very interesting to watch over the next few years.
Have preprints helped to debunk any published results?
DJ: I was interested in this supposed superconductor, LK-99. There was a series of preprint papers that nucleated a massive effort across many scientists, who ultimately showed that it wasn't a superconductor, explained why it wasn't, and explained why the false result emerged. They figured it all out completely outside the sphere of formal peer review. Although, in a way, it was true peer review.
RS: Yes, that was on arXiv, as it’s basically a physics paper. But we have had examples on bioRxiv where people have repeated other people's experiments. There was a great example by a former Cold Spring Harbor scientist, Yaniv Erlich, who debunked a paper that Craig Venter had published in PNAS claiming that their group could predict appearance from a genome sequence. By the end of the same day that Venter’s paper appeared in PNAS, Erlich had posted a paper on bioRxiv refuting that finding. He showed that what they were really doing was predicting someone's average ethnicity. That was real-time peer review.
How did the practice of peer review originate?
RS: I recently wrote an article for PLOS Biology on the history of science publishing. A lot of people think that 300 or 400 years ago, when journals were invented, peer review appeared on day one and was practiced like it is today. But actually, it's a relatively recent intervention. Most journals only started doing it after the Second World War. It's a formal process that's been enacted. But actually, peer review is the repetition and analysis of work over a period of time. That's really what peer review means. There is a possibility for a preprint server to do that because people can replicate the work or fail to replicate the work and explain what the problems were. There have been a handful of really great examples on bioRxiv where people have done that. One was about this tiny little eukaryote; there was a paper in PNAS that said it had huge amounts of horizontal gene transfer, the likes of which had never been seen in a eukaryote. Within a couple of months, there was a paper on bioRxiv that showed that this was completely wrong, and that it was some form of contamination in the sequencing. That was the best form of peer review.
Most journals only started doing [peer review] after the Second World War. It's a formal process that's been enacted. But actually, peer review is the repetition and analysis of work over a period of time. That's really what peer review means. There is a possibility for a preprint server to do that because people can replicate the work or fail to replicate the work and explain what the problems were.
Dr. Richard Sever
Peer Review and Its Future in Preprint Publishing
Should we be moving towards preprints entirely?
NQ: Earlier, you talked about the race to the bottom, and that there are more and more journals, including predatory journals, that are just trying to publish anything. I’ve always wished that there was a journal of negative results showing what didn't work.
RS: I think there actually is a journal like that. What's interesting is that people don't put papers in it because they don't think it will benefit their careers.
NQ: Maybe it won't benefit their careers, but I believe the point is to benefit science. There's a huge number of papers that could say, “This didn't work. Don't go down this road.” I think you're also onto something there with the small pieces of information that don't make it into that big paper that you end up submitting, but that are still useful pieces of information that should be learned and used and built upon by other scientists. Peer reviewing is also a huge endeavor. I don't peer review anymore, but back when I was doing research, I would dread that invitation to peer review something. Not only because it was a huge time commitment, but because I felt an enormous amount of pressure to get it right, knowing that there are only a couple of people who are going to be looking at the paper. Whereas on a preprint server, you're now doing crowdsourced review. I would say that's true peer review, instead of review by a handful of selected people.
RS: The great thing about bioRxiv is it doesn't constrain what you do next because it decouples the peer review from the dissemination. I think that's where the opportunities are. I think there's a real chance now for people to experiment and ask the question that you're asking, which is really, “What should peer review look like?”
What should peer review look like?
RS: There are a couple of things that immediately spring to mind. One is the fact that peer review looks pretty standard whatever field you're in; you send a paper to two or three people and nag them for 14 days to get a decision or written analysis of the paper. I think back to what we were saying earlier about peer review; maybe if the information is out on the preprint server, you have much more freedom, in both the timing and the mechanism of peer review. I always think of the examples of people doing synthetic biology and building tools. Maybe the best peer review of those is somebody actually testing them in a lab. If the information is already out there, you have the freedom in time to do that, because you don't have to do it really quickly, which touches on the burden of peer review.
We have these simultaneous problems with lots of scientists saying, “I'm so burdened with peer review. I'm asked all the time. Peer review is broken. The system is overloaded.” But, as a journal editor, I know that many journal editors get emails from people saying that they would love to be asked to peer review papers or that they never get asked. So there are a lot of early career scientists who are never asked, who would love to be asked. There are lots of people outside of Europe and North America who would like to be asked and are never asked. So that's another example of not necessarily crowdsourcing, but being able to have a much broader net cast. It could solve some of those problems and increase equity by involving more people. It could get us to a point where we can do a better job and decide what exactly it is that peer review is doing. I mentioned that article I wrote recently in PLOS Biology, but one of the key points of that article is that peer review conflates a huge number of things.
There are a lot of early career scientists who are never asked [to peer review a paper], who would love to be asked. There are lots of people outside Europe and North America who would like to be asked and are never asked. So that's another example of not necessarily crowdsourcing, but being able to have a much broader net cast. It could solve some of those problems and increase equity by involving more people.
Dr. Richard Sever
RS: The peer reviewer is examining the science and asking, “Is this correct? Or, do I think it's correct?” But there's also, “Is this any good? Is this important?” It becomes a filter of some sort. You can say, “I only want to read the good papers, so I read the ones in certain journals because I believe the peer review system has put them there.” However, what's interesting is that a number of studies have shown that most papers don't change very much from preprint to peer-reviewed journal article. So you could argue that peer review isn't doing anything. But it is doing something: even if it doesn't change the paper, it decides what journal that paper goes into, which people use as a filter for what to read. Lots of people will dispute whether that's a good filter, but it absolutely is a filter that people use.
That's something that we could look at and ask: now that we have decoupled dissemination from peer review, can we deconflate aspects of peer review, such as the quality of the article, as opposed to the level of broad interest, or “impact,” a word that people throw around? Another point I always want to make is that there are other things that journals and preprint servers do that are important, and will be increasingly important, around the verification of information. We need to do a better job. It's hard to ensure that the person who said they wrote the article did write the article, or that the data in the article are bona fide and were not created by ChatGPT or the latest large language model. So there are all sorts of things that I think are not talked about enough: the kinds of evaluation of content that are not quite peer review, but may be increasingly important in the whole post-truth world.
Decoupling Impact Factor: Rethinking the Value of Scientific Publications
Where is publishing in academia going next?
NQ: The PLOS Biology paper that you published is a really comprehensive history of publishing and has some food for thought about where things are going next. Thank you for publishing that and putting the effort into that.
DJ: I loved reading that. I want to quote from the abstract: “Journal brand and Impact Factor have meanwhile become quality proxies that are widely used to filter articles and evaluate scientists in a hypercompetitive prestige economy.” That last point is what I want to focus on. The hypercompetitive prestige economy seems like a zero-sum game: there's a perceived loss to scientists who want to show they are doing esteemed research by publishing their big result in a high-impact journal. But while that might seem to be a disincentive to adopt the preprint model, the two aren't mutually exclusive. I think what we're talking about here is that you can get results out in a preprint and get your line in the sand. Then you're saying that it is still important to review the results, for those legacy proxy reasons of deciding what results have high impact or importance. I don't love this as a scientist, but it's how we all live and breathe. It's how we advance in our careers as scientists. I would love to live in a world where we don't, but it seems like we need these proxies to judge ourselves in this hypercompetitive prestige economy.
I'm aware of this, but I didn't understand the timing of how the big prestige journals (Nature, Science, Cell) gobbled up all the more niche subject matter with their sibling journals, like Cell Stem Cell, Nature Medicine, and Nature Biotech, to the point where the big three or four now essentially own all prestige science. I think that's a bit of a money grab on the publishers' side; I don't know why else they would do it. But it does raise the question of whether a similar thing could happen with preprints. As specialty preprint servers emerge, is there ultimately going to be a point where you have prestige preprint servers? Are papers that get into bioRxiv going to have a different level of impact than papers on other, more niche, preprint servers?
Will preprint servers ever have an impact factor?
RS: One of the things that people said to me early on is that there are great papers on bioRxiv. I said that I'm glad there are great papers on bioRxiv, but let's be clear: the goal is not to have only great papers on bioRxiv. The goal is to have all the papers about biology and medicine on bioRxiv. So that means that, on average, they'll be mediocre, because most papers are. But what you do is decouple that filtering, and so maybe you can make it better. There are a number of problems that people have with this idea that if you get a paper in Nature, you get a job immediately. Should those kinds of brands be used as the quality proxy? There's a whole debate about that. But even if you don't have too much of a problem with the concept itself, you ought to have a problem with the fact that that decision is made very early, based on the opinions of two or three people.
Even if you don't mind that, there is the problem that you have this immutable quality indicator that is assigned before anybody gets to read the paper. Even 20 years later, a Nature paper is still a Nature paper. So people think it must be a great paper because it was in Nature, even if for the previous 19 years, people have said it’s garbage and they don't know why it was published in Nature. It still has the word Nature. I think that's one thing we want to get away from. But as you said, I think people always want proxies, and one can't be naive about that.
Another thing that I think people need to be careful of is what happens if you do get away from third-party proxies, objective proxies for want of a better word. In that scenario you mentioned, Craig Venter gets the paper in PNAS; but if PNAS goes away, you have to worry that people will simply say, “Oh, well, it's a Craig Venter paper, therefore it's good.” It goes back to what we talked about before: that hypercompetitive prestige economy comes about because there's a real narrowing of the pipeline once you get up to permanent academic positions.
Even 20 years later, a Nature paper is still a Nature paper. So people think it must be a great paper because it was in Nature, even if for the previous 19 years, people have said it’s garbage and they don't know why it was published in Nature. It still has the word Nature. I think that's one thing we want to get away from.
Dr. Richard Sever
What is creating the “hypercompetitive prestige economy” in academia?
RS: Everybody's fighting for permanent academic positions and for grants. That is why you have this prestige economy. Simultaneously, people are deluged with information, so they want signals. If you remove the signal “Nature,” people find something else. They say, “Oh, this is a paper by Craig Venter,” or, “This is from somebody at Harvard.” So we have to be aware of those kinds of things. We can hope that through this decoupling process, and by having more longitudinal assessments, we can get to a point where there are better ways [to know if research being published is good quality] and processes that are more multidimensional, rather than having a brand [as the signal]. It's quite confusing to me that scientists value their objectivity but turn out to be suckers for a brand, just like everybody who buys Nike trainers and wears Calvin Klein underwear. It's always interesting to me that people complain about impact factors, but whenever Nature launches a new journal, before it even gets an impact factor, everybody has decided that “Nature X” is the best journal in field X apart from Nature. And that happens immediately, before it publishes any papers.
There are very smart people at Nature. They have good editors, and they've got a good track record. But Nature has built its brands to become very, very powerful. We have to understand that that is a reaction to, and a capitalization on, the academic career structure. That's the one thing I always think we should be very careful not to lose sight of; often, people point the finger at publishers or journals, but they're just responding to the career pyramid and the structure of academia. I joke that 50 years ago, people didn't worry about which journal their article went into because they were guaranteed to get a job anyway.
NQ: As the director of brand at STEMCELL, you just spoke right to my heart when you mentioned that “a brand is how people feel about you.” I'm always told that scientists are trained to not pay attention to their feelings, and that it's really about the data. But scientists also buy Apple products and Nike products, just like people are tied to Nature and Cell.
The Shifting Relationship Between Journals and Preprint Servers
Are journals now watching bioRxiv?
NQ: You spoke before about the fact that around 70% of papers on bioRxiv end up in journals. If there's a paper that's getting a lot of traffic, comments, or attention, are journal editors now going and seeking out the authors and saying, “Hey, please submit,” or, “You're pre-approved”? Or even waiving the fees?
RS: Yes, and it is amazing because it happened really quickly. We launched at the end of 2013, and very early in 2014, I remember being contacted by someone. I think it was a guy called Leonid Kruglyak, and he showed me an email that said, “Dear Leonid, we saw your paper on bioRxiv. Would you be interested in submitting to our very high-profile journal?” Bear in mind that Cell, Science, and Nature have loads of editors who go around conferences, hearing talks, looking at posters, trying to find really interesting work and encouraging the authors to submit. So obviously they would do this. I think that is quite common. The challenge is that that's a very targeted approach by smart editors who think this is good science. On the other end of the spectrum, however, there are clearly predatory journals that will email people who post on bioRxiv asking them to submit. Many scientists have experience with this. Because I'm not an active scientist, I don't publish papers that often. But the minute that I publish a paper like the one you mentioned in PLOS Biology, within the next couple of days, I get an invitation to write a review on forestry or gastroenterology, or something that I have no expertise in whatsoever, which serves to reveal just how much of this is spam.
This is a funny example: Every now and then, there'll be some instance where something goes wrong at bioRxiv and it misfires and we have to remove something. We actually got an email from a predatory journal that said, “Dear bioRxiv, we were very interested in your paper, ‘file removed because of technical error.’ We would like to encourage you to submit to our journal.” I thought that was absolutely fantastic.
NQ: Back to the prestige of journals, if Nature came knocking after you put something up on bioRxiv, it could save somebody a lot of time with reformatting and help accelerate science. Perhaps scientists can just put their papers in one place and say, “Let's see if somebody comes to us.”
Could scientists self-promote in preprint servers?
DJ: Thinking about this new forum for exposing research to editors, I wonder whether, in the modern era of algorithms and social media at their worst subtly manipulating people's feelings, there's sometimes no real correlation between the attention things [that we see online] receive and whether or not they have value. Someone, who shall remain nameless, told me that they gamed bioRxiv to draw attention to an article, which I think perhaps isn't unethical, but it raises some questions. In this case, the preprint went up, and then there was a bit of an echo chamber because it was shared with collaborators and also with people who were very much against the idea. Within that relatively small sphere, let's say 10 researchers, a ton of discourse was created.
That caught the attention of the editors at a major journal. They invited this person to submit. I'm not saying there's something wrong with that. But it seems like in that system, being able to promote your science, or to self-promote, becomes a really important skill for its advancement. For someone like myself, who has zero social media presence and cringes at the idea of sharing anything because I assume that anything I think is fundamentally wrong, [promoting myself would be difficult]. I don't know if that's an idea that has always been in science, but I think it's relatively new: you get out there, get it on the preprint, and then promote it. What are your thoughts on this idea that the value of science, from an editorial standpoint, is judged not so much by the editor's own experience? I think it's necessary, because these editors can't know everything. If people are talking about it, it must be important. I just wonder if that changes the game and whether you think that's a positive or negative change.
In terms of marketing the work from individuals, people do that an awful lot already. We have to. Many PIs spend their whole time going around the world giving talks to people.
Dr. Richard Sever
RS: I think there are pluses and minuses. On the plus side, one of the things that social media does, for example, is allow people to have a voice who didn't necessarily have one before. Part of what was key to the success of bioRxiv was that it arrived at a time when there was a lot more peer-to-peer conversation. I might never have met you before, but you could put your paper on Twitter, and I could reply to you and say, “Oh, that's really interesting, but had you considered the possibility of this artifact in your phase separation section?” or something like that. I could also then build some reputation as somebody who thinks about this, and people would follow me and say, “Oh, well, he knows about this stuff, so I'll follow him,” rather than have my interest dictated by whom the editor had selected to write the news and views articles. There is a downside, though, because the most awful aspects of social media don't spare science. The same things happen. There are phenomena within social media that are terrible, and they manifest in scientific discussions. So that's a bit of a problem.
I think in terms of marketing the work from individuals, people do that an awful lot already. We have to. Many PIs spend their whole time going around the world giving talks to people. I remember talking to a friend of mine once about what he had to do in the next couple of weeks; he said, “I've got to give a talk at Harvard, then I've got to give a talk at Stanford, then I'm off elsewhere.” I thought, you don't have to. You're doing this because you think there is value in it; the naive perspective is that it's beneficial for these people to hear from you, while the more selfish view is that it sustains your reputation as a thinker in the field. People say yes to every invitation to give a talk because that spreads their scientific brand around the country. So there are undoubtedly bad aspects, and some of the worst features of these social media platforms are manifest in science.
Funding Responsible Screening and Permanent Hosting for Preprint Servers
How are preprint servers currently funded?
NQ: We know that bioRxiv has been generously funded by Cold Spring Harbor and the Chan Zuckerberg Initiative. But somebody's paying to host these papers and paying to vet them in the way that they are vetted. We know that there's a whole funding apparatus, and for-profit apparatus, behind the traditional journals that is maybe a little undesirable in certain situations. How is this going to move forward? How does this work in the future?
RS: I think that's a very good question and it's something we think a lot about. The first thing to say that's really important is, in the grand scheme of things, it's not very much. It costs millions of dollars each year to run these things. But the scientific publishing industry is a US$10 billion-a-year industry. A journal article costs in the region of US$4,000 to US$10,000 a paper. There are 2 million papers published each year. That's how you get to a multi-billion-dollar industry. The cost of hosting a preprint is tens of dollars, not thousands. The logic is that research costs hundreds of thousands of dollars. The journal article costs thousands. The preprint costs tens. So I think it would be a terrible failure of the scientific community if we cannot find tens of dollars for papers. It really is the change found at the bottom of the sofa in the grand scheme of things. So the question is, how do you get the tens of dollars per paper to allow you to do this responsible screening and permanent hosting and all the features of bioRxiv?
What’s the future of preprint funding?
RS: Right now, we have generous funding from Cold Spring Harbor and the Chan Zuckerberg Initiative. Going forward, we want to diversify that and have more people contribute. Personally, it makes far more sense to me to have a relatively small number of groups that represent the scientific community writing large cheques, rather than trying to administer hundreds of thousands of payments of US$10. We don't want a customer-facing thing where you have to pay to put your paper on bioRxiv. We never wanted that. The whole point is that it's free to read and it's free to post.
We think that can be done because the total cost is relatively small in the grand scheme of things. If the current scientific ecosystem is funding billions of dollars for publishing papers, and tens or hundreds of billions of dollars for all the research, we ought to be able to find a group of funders who could earmark this very small amount of money, relatively speaking, to fund preprints. There's then another question about the whole peer review process and what that looks like. But the nice thing about having the preprint ecosystem is that you can take care of that infrastructure piece of the papers and then have a conversation about how you fund peer review, how it evolves, and who pays, that's not constrained by having to cover the hosting of the paper online as well.
I think we need to really—as a scientific community—do some thinking about how science is accessed and who's paying for it and who's benefiting from it, and how that money gets distributed.
Dr. Nicole Quinn
NQ: I agree. I'd add that, yes, the Article Processing Charges (APCs) are funding this multibillion-dollar industry, but also the subscription fees. I think a lot of people in the academic world probably assume that those are paid by their institution. But coming from a small- to medium-sized biotech world, we don't have that access anymore. As soon as you move into the for-profit world, you no longer get those academic subscriptions. I would argue that science is moving forward with startups at this point, and startups can't afford to have subscriptions to these journals, and the subscription fees also go up when you're not an academic. That's coming back to open access and the discussion around open access and who pays. But I think we need to really—as a scientific community—do some thinking about how science is accessed and who's paying for it and who's benefiting from it, and how that money gets distributed.
What’s the effect of journal subscription fees on start-ups?
RS: I've heard from a number of small organizations in biotech that they can't afford the subscriptions. The flip side is that people in the nonprofit sector will say the for-profits should contribute in some way. It would be nice if there were a way for them to contribute without it being this huge amount. A small biotech company cannot afford all the subscriptions that Harvard has, with its multimillion-dollar library budget.
Final Words of Advice
What’s the value of preprint publishing for early-career researchers?
RS: The one thing I always want to underscore is the value of all this for early career researchers. Going back to that hypercompetitive prestige economy; that's all fueled by the labor of young scientists who are in very precarious positions career-wise. Preprints afford all sorts of opportunities for them. They can get their work out more quickly. They can get that new job more quickly. They get exposure, and they're in control of the timing of that exposure.
Because of this evolving phenomenon of people doing peer review of preprints, there's an opportunity for a lot of early career researchers to get involved in that. I'm particularly enthusiastic about that because I was one of those people many years ago who chose not to become an academic. But I would go to a job interview and they would ask, “What can you do?” I’d say, “I can run SDS-PAGE and I can sequence stuff, although I'm not going to be doing that in my new job. And here I've got a PhD thesis that you're not going to read.”
So it would have been nice to have had a bunch of peer reviews that I could make public, and I could say, “Look, I read across all these other fields. I can talk about other things.” I've always thought that this whole notion of preprint peer review is something that potentially benefits young scientists, both those who are going to stay in science and those who are going to move out of benchwork as well.
NQ: Yes, it’s about accessibility. Accessibility across all sectors of who can contribute to science and who gets a seat at the table. I think that open access is one of the keys to providing that.
DJ: You've certainly normalized the idea of preprints. When I was a trainee, it was confusing to think that someone would want me to just share my work. But nowadays it's almost implicit that, rather than waiting for the work to reach some point of maturity or letting it bounce around in review forever, it's important to get it out there for your own benefit and particularly to benefit young scientists. Any resistance comes from legacy scientists for whom this whole notion of getting scooped, and of prestige journals, is still the norm and for whom the hypercompetitive economy is baked in. It's going to take this younger generation of scientists to change the paradigm. I think the really important step that you've made in the past decade plus is that there's zero resistance to the idea of preprints now, which I could never have predicted. I would never have predicted that in such a short time, people could have such a different notion, going from self-preservation and looking over your shoulder to an open science economy. I wouldn't say we're there yet with an open science economy, but at least it's no longer anathema.
You can find Dr. Richard Sever on X @chsperspectives.
Don't forget to sign up to our email list at www.labcoatsandlife.com.
To get show notes, episode summaries, and links to useful information, or to learn more about STEM mentorship, see www.stemcell.com/labcoatsandlife.
You can also reach out to us on X via @STEMCELLTech, or via email at info@labcoatsandlife.com with feedback or to suggest guests.
Have guest suggestions? Let us know!