Publications and open science experiences in times of COVID-19 – two wildly different experiences

SARS-CoV-2 and COVID-19 have brought out both the best and the worst in medical research. There are numerous examples that highlight this, such as the deplorable state of COVID-19 prediction models and the controversy around a quite well known meta-researcher. I won’t go into those topics. Instead, I will share two of my own experiences that illustrate both sides of the coin when it comes to going the extra mile by adopting some open science practices.

On the good side of things, there is our effort to implement an ordinal scale that measures functional outcome after COVID-19. After our letter to the editor, in which we only proposed the idea, we kept the project going by publishing a manual to standardize efforts. On top of that, we were lucky that many researchers and clinicians were willing to contribute time and effort with translations and adaptations for their own patients. To this day, we have 20 translations, and many more in the making. Beyond this, the scale is now included in several clinical guidelines and relevant clinical studies. This is a story of success – not for us, but for the open character of science in pandemic times. We had an idea, were able to share it quickly in a traditional journal with a letter to the editor, and afterwards used OSF to keep the project going.

The bad side of things is the little call-to-action I wrote with my colleague DM. He observed an interesting pattern in the percentage of patients admitted to the hospital with COVID-19 – with up to a twofold difference, as you can see in the graph above. The lower the weekly SARS-CoV-2 infection numbers, the higher the percentage of patients admitted to the hospitals. Our argument at the time was that, in preparation for the third wave of infections in the Netherlands, medical professionals (GPs, ER doctors, home nurses, etc.) should try to keep the hospitals as empty as possible and keep beds ready for the large numbers of patients that we knew were coming our way. Because it was more of an opinion piece, we skipped a detailed methods section, but we provided everything (code, data, graph, methods description) on a dedicated OSF page, which we kept up to date.

We offered our thoughts as a comment to the Dutch journal “Huisarts en Wetenschap”, which turned it down after a two-week review period. Two thoughts on this: 1) I am not sure why there was peer review at all, as it was a call-to-action and not original research, and 2) the reason they gave us for the rejection (after revision!) was that the message was perhaps too complicated in these already confusing times – I will leave this without further comment. Next up was NTVG, a more general medical journal in the Netherlands. The peer review process (again!) gave us another delay of two weeks before the paper was accepted. But you might have guessed it – the third wave had already started and rendered our call-to-action redundant. In the end, we talked to the editor and mutually decided to withdraw the paper; at a later moment in time, we might actually write a research paper on the causes behind the variation in the percentage of positively tested patients admitted to the hospital.
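For the curious, the pattern itself boils down to a simple weekly calculation. Here is a minimal sketch in Python with made-up numbers and hypothetical column names – the real code and data are on the dedicated OSF page:

```python
import pandas as pd

# Made-up weekly counts, for illustration only; the real code and data
# live on the dedicated OSF page.
df = pd.DataFrame({
    "week":       [1, 2, 3, 4, 5, 6],
    "infections": [25000, 18000, 12000, 8000, 5000, 3000],  # weekly SARS-CoV-2 cases
    "admissions": [1000, 810, 600, 480, 350, 240],          # COVID-19 hospital admissions
})

# Percentage of SARS-CoV-2-positive patients admitted to the hospital, per week
df["pct_admitted"] = 100 * df["admissions"] / df["infections"]

# The observed pattern: the fewer infections, the higher the admission percentage
print(df[["week", "infections", "pct_admitted"]])
print("Correlation:", round(df["infections"].corr(df["pct_admitted"]), 2))
```

With these invented numbers the admission percentage runs from 4% in the busiest week to 8% in the quietest – the kind of twofold difference the graph showed.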

All in all a very dissatisfying experience, especially in comparison with the PCFS story. The weird thing here is that in both cases we did the same thing – we had an idea and wanted to share it with those who might be triggered by it. We started with dedicated OSF project pages, their content growing over time as the story developed. In both cases we were commended for using OSF to share additional material in support of the letter to the editor. Still, the outcomes are very different. This may of course simply be because the ideas are inherently different, but the delay in the publishing process not only killed the second project, it also killed some of my enthusiasm for open science. Luckily, it is only some, and I hope – and actually expect – that future experiences will help me regain the rest.

medRxiv: the pre-print server for medicine

Pre-print servers are a place to share your academic work before actual peer review and subsequent publication. They are not completely new to academia, as many disciplines have adopted pre-print servers to quickly share ideas and keep the academic discussion going. Many have praised the informal peer review that you get when you post on a pre-print server, but I primarily like the speed.

But medicine is not one of those disciplines. Up until recently, the medical community had to use bioRxiv, a pre-print server for biology. Very unsatisfactory, as the fields are just too far apart, and the idiosyncrasies of the medical sciences bring some extra requirements (e.g. ethical approval, trial registration, etc.). So here comes medRxiv, from the makers of bioRxiv, with support from the BMJ. Let’s take a moment to let the people behind medRxiv explain the concept themselves.

source: https://www.medrxiv.org/content/about-medrxiv

I love it. I am not sure whether it will be adopted by the community at the same pace as in some other disciplines, but doing nothing will never be part of the way forward. Critical participation is the only way.

So, that’s what I did. I wanted to be part of this new thing and convinced my co-authors to give the pre-print concept a try. I focused my efforts on the paper in which we describe the BeLOVe study. This is a big cohort we are currently setting up, which in a way makes it well suited for a pre-print server: we can describe what we want, at the level of detail of our choice, without restrictions on word count, appendices, tables or graphs. The speed is also welcome, as we want to inform the world of our efforts while we are still in the pilot phase and still able to tweak the design here or there. And that is actually what happened: after being online for a couple of days, our pre-print already sparked some ideas by others.

Now we have to see how much effort it took us, and how much benefit we drew from this extra effort. It would be great if all journals would permit pre-prints (not all do…) and if submitting to a journal would just be a “one click” kind of effort after jumping through the hoops for medRxiv.

This is not my first pre-print. For example, the paper that I co-authored on the timely publication of trials from Germany was posted on bioRxiv. But being the guy who actually uploads the manuscript is a whole different feeling.

Results dissemination from clinical trials conducted at German university medical centers was delayed and incomplete.

My interests are broader than stroke, as you can see from my tweets as well as my publications. I am interested in how the medical scientific enterprise works – and, more importantly, how it can be improved. The latest paper looks at both.

The paper, with the relatively boring title “Results dissemination from clinical trials conducted at German university medical centers was delayed and incomplete”, is a collaboration with QUEST, carried out by DS and his team. The short form of the title might just as well have been “RCTs don’t get published, and even if they do, it is often too late.”

Now, this is not a new finding, in the sense that older publications also showed high rates of non-publication. Newer activities in this field, such as the trial trackers for the FDAAA and the EU, confirm this idea. The cool thing about these newer trackers is that they rely on continuous data collection through bots that crawl all over the interwebs looking for new trials. This upside has a couple of downsides though. First, because they are constantly being updated, these trackers do not work that well as a benchmarking tool. Second, they might miss some obscure types of publication, which might lead to an underestimation of reporting. Third, to keep the trackers simple, they tend to use only one definition of what counts as “timely publication”, even though neither the field nor the guidelines are conclusive on this.

So our project is something different. To get a good benchmark, we looked at whether trials executed by/at German university medical centers were published in a timely fashion. We collected the data automatically as far as we could, but also did a complete double check by hand to make sure we didn’t skip publications (hint: we did; hand searching is important, potentially because of the language issue). Then we put all the data in a database and built a Shiny app, so that readers can decide for themselves which definitions and subsets they are interested in. The bottom line: on average, only ~50% of trials get published within two years after their formal end. That is too little and too slow.
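To make concrete how such a benchmark depends on the chosen definition of “timely” – the issue the trackers sidestep – here is a minimal sketch in Python with made-up trial records and hypothetical column names; the real analysis lives in the database, the OSF project and the Shiny app:

```python
import pandas as pd

# Made-up extract of a trial database, for illustration only.
trials = pd.DataFrame({
    "trial_id":    ["t1", "t2", "t3", "t4", "t5"],
    "completion":  pd.to_datetime(["2012-01-15", "2012-03-01", "2012-06-30",
                                   "2013-02-10", "2013-05-20"]),
    "publication": pd.to_datetime(["2013-06-01", None, "2015-09-15",
                                   "2014-08-01", None]),
})

def pct_published_within(df: pd.DataFrame, months: int) -> float:
    """Share of trials with a publication within `months` of formal completion.
    Unpublished trials count as not timely."""
    delay_days = (df["publication"] - df["completion"]).dt.days
    timely = delay_days.notna() & (delay_days <= months * 30.44)  # avg. month length
    return 100 * timely.mean()

# The headline number shifts with the definition of "timely":
for months in (12, 24, 36):
    print(f"Published within {months} months: {pct_published_within(trials, months):.0f}%")
```

Shifting the threshold changes the headline number considerably, which is why being able to pick your own definition in the app matters.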

Shiny app

This is a cool publication because it provides a solid benchmark that truly captures the current state. Now it is up to us, and the community, to improve our reporting. We should track progress in the upcoming years with automated trackers, and in five years or so do the whole manual tracking exercise once more. But that is not the only reason why it was so inspiring to work on this project; it was the diverse team of researchers from many different groups that made the work fun to do. The discussions we had on the right methodology were complex and even led to an ancillary paper by DS and his group. The fact that this publication was published in the most open way possible (open data, pre-print, etc.) was also a good experience.

The paper is here on PubMed, the project page on OSF can be found here, and the pre-print is on bioRxiv – and let us not forget the Shiny app where you can check out the results yourself. Kudos go out to DS and SW, who really took the lead in this project.

FVIII, Protein C and the Risk of Arterial Thrombosis: More than the Sum of Its Parts.

source: https://www.youtube.com/watch?v=jGMRLLySc4w 

Peer review is not a pissing contest. Peer review is not about finding the smallest of errors and delaying publication because of them. Peer review is not about being right. Peer review is not about rewriting the paper under review. Peer review is not about asking for yet another experiment.

Peer review is about making sure that the conclusions presented in the paper are justified by the data, and about helping the authors produce the best possible report of what they did.

At least, that is what I try to remind myself of when I write a peer review report. So what happened when I reviewed a paper presenting data on the two hemostatic factors protein C and FVIII in relation to arterial thrombosis? These two proteins are known to have a direct interaction with each other. But does this also translate into the situation where patients with a combination of the two risk factors “have both, get extra risk for free”?

There are two approaches to test such so-called interaction: statistical and biological. The authors presented one approach, while I thought the other approach was better suited to analyze and interpret the data. Did that result in an academic battle of arguments, or perhaps a peer review deadlock? No, the authors were civil enough to entertain my rambling thoughts and comments with additional analyses and results, but convinced me in the end that their approach had more merit in this particular situation. The editor of Thrombosis and Hemostasis saw all of this unfold and agreed with my suggestion that an accompanying editorial on this topic could help readers understand what actually happened during the peer review process. The nice thing about this is that the editor asked me to write that editorial, which can be found here; the paper by Zakai et al. can be found here.
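For readers unfamiliar with the distinction: biological interaction is usually assessed on the additive risk scale, while statistical interaction typically means a product term on the multiplicative scale – and the same data can show interaction on one scale but not the other. Here is a minimal sketch with made-up relative risks, explicitly not the numbers from the paper by Zakai et al.:

```python
# Minimal sketch of the two scales on which interaction is assessed in
# epidemiology. All numbers are made up for illustration only.

# Relative risks of arterial thrombosis versus the doubly unexposed group
rr_fviii = 1.5   # high FVIII only
rr_low_pc = 1.4  # low protein C only
rr_both = 2.0    # both risk factors present

# Additive scale ("biological" interaction): the relative excess risk due to
# interaction (RERI). RERI > 0 means more risk than the sum of the parts.
reri = rr_both - rr_fviii - rr_low_pc + 1
print(f"RERI (additive scale): {reri:.2f}")  # 0.10 -> slightly super-additive

# Multiplicative scale ("statistical" interaction, e.g. a product term in a
# regression model): a ratio > 1 means more risk than the product of the parts.
ratio = rr_both / (rr_fviii * rr_low_pc)
print(f"Interaction ratio (multiplicative scale): {ratio:.2f}")  # 0.95 -> sub-multiplicative
```

With these made-up numbers the combined exposure is super-additive yet sub-multiplicative, so whether you call it “interaction” depends entirely on the scale you pick – exactly the kind of thing authors and reviewers can end up debating.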

All this taught me a thing or two about peer review: cordial peer review is always better (duh!) than a peer review street brawl, and sharing aspects of the peer review process can help readers understand a paper in more detail. Open peer review – especially the variants where reviewers are not anonymous and reports are open to readers after publication – is a way to foster both practices. In the meantime, this editorial will have to do.