Retracting our own paper

Over the last couple of weeks I wrote a series of emails I never thought I would need to write: I gave the final okay on the wording of a retraction notice for one of the papers I worked on during my time in Berlin. Let me provide some more insight than a regular retraction notice does.

Let’s start with the paper that we needed to retract. It is a paper in which we investigate the so-called smoking paradox – the idea that those who smoke might benefit more from thrombolysis treatment for stroke. Because of the presumed mechanisms, as well as the direct method of treatment delivery, IA thrombolysis is of particular interest here. The paper, “The smoking paradox in ischemic stroke patients treated with intra-arterial thrombolysis in combination with mechanical thrombectomy–VISTA-Endovascular”, looked at this presumed relation, but we were not able to find evidence in support of the hypothesis.

But why then the retraction? To study this phenomenon, we needed data rich in people who were treated with IA thrombolysis, together with solid data on smoking behavior. We found this combination in a dataset from the VISTA collaboration. VISTA was founded to collect useful data from several sources and combine them to strengthen international stroke research where possible. But something went wrong: the variables we used did not actually represent what we thought they did, due to a combination of limited documentation, sub-optimal data management, and the like. In short, a mistake by the people who managed the data made us analyze faulty data. The data managers identified the mistake and contacted us. Together we looked at whether we could fix the error (i.e. prepare a correction to the paper), but the number of people who received the treatment of interest in the corrected dataset is just too low to analyze the data and get a somewhat reliable answer to our research question.

So, a retraction is indicated. The co-authors, VISTA, as well as the people on the ethics team at PLOS were all quite professional and looked for the most suitable way to handle this situation. This is not a quick process, by the way – from the moment we first identified the mistake, it took ~10 weeks to get the retraction published. This is because we first wanted to make sure that retraction was the right step and get all the technical details regarding the issue; then we had to inform our co-authors and get their formal OK on the request for retraction; then we got in touch with the PLOS ethics team; then we had two rounds of formal OKs on the final retraction text; and only then did the retraction notice go into production. The final product is only the following couple of sentences:

After this article [1] was published, the authors became aware of a dataset error that renders the article’s conclusions invalid.

Specifically, due to data labelling and missing information issues, the ‘IAT’ data reflect intra-arterial (IA) treatment rather than the more restricted treatment type of IA-thrombolysis. Further investigation of the dataset revealed that only 24 individuals in the study population received IA-thrombolysis, instead of N = 216 as was reported in [1]. Hence, the article’s main conclusion is not valid or reliable as it is based on the wrong data.

Furthermore, due to the small size of the IA-thrombolysis-positive group, the dataset is not sufficiently powered to address the research question.

In light of the above concerns, the authors retract this article.

All authors agree with retraction.

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0279276

Do you know what is weird? You know you are doing the right thing, but still… it feels as if it is not the sciency thing to do. I now have to recognize that retracting a paper, even when it is to correct a mistake without any scientific fraud involved, triggers feelings of anxiety. What will people think of me when I have a retraction on my track record? Rationally, I can argue the issue and explain why it is a good thing to have a retraction on your record when it is required. But still, those feelings pop up in my brain from time to time. When that happens, I just try to remember the best thing that came out of this new experience: my lectures on scientific retractions will never be the same.

When good intentions collide: double-blind peer review vs preprints

Papers get rejected all the time. Desk rejections are a different breed. “Out of scope”, “no interest”, “we just had a similar paper like this”, “not interested”… I have heard it all. In fact, I like desk rejections – they allow you to move on and find a better, or at least a different, platform for your work. But the desk rejection I got today is different:

The easy availability and promotion of the preprint mean that our practice of blinded peer review is not possible for your manuscript.

Quote from a desk rejection email

The email had further comments explaining the desk rejection, but those are not relevant for this blog post. It also doesn’t matter which journal rejected which preprint, and it doesn’t matter who the editor was. The evidence / merits / consequences of double-blind peer review also do not matter, to be honest, because, in essence, it is just a fact – a preprint excludes double-blind peer review.

I have two observations on this:

First, if a journal wants to have double-blind peer review as part of its procedures, one might conclude that this is its prerogative. “You do you”, and what not. The problem is that the scientific enterprise doesn’t work if all individuals just start doing things their own way. There has to be some commonality, a common approach when we think about what it means “to do science”. Just consider all the variations in the publishing and peer review system: preprints, registered reports, post-publication peer review, blinded, optionally open, mandatory open. I know that innovation requires (temporary) variation and failures, but one thing is sure: it doesn’t get simpler.

Second, is blind peer review even possible anymore? Preprints are an obvious problem, but more and more snippets of our research can be found online. For example: post-COVID conferences are now increasingly online, preregistrations of projects exist and should be findable, and they might be linked to FAIR databases.

These observations are not new, and definitely not just mine, as they were in part inspired by this nice little exchange on Twitter. Nonetheless, they do make me wonder – when do we all know enough to shift from an older way of doing science to a newer one?

PS, after the desk rejection, I exchanged some emails with the editor in which we both explored whether and how the issue could even be solved if we wanted to. The jury is still out on whether we are going to resubmit, also because there were more relevant points and comments. But the desk rejection taught me that good intentions sometimes collide – a very relevant lesson in an ever-changing research enterprise.

The world conference on research integrity: a conference like no other

“The World Conferences on Research Integrity foster the exchange of information and discussion about responsible conduct of research”, at least that is what they say about themselves on their website. And that is indeed what I experienced. People with various backgrounds (researchers, whistleblowers, publishers, ombudsmen, and policymakers) talking about research integrity, using one of the broadest definitions I have seen.

This opened my eyes and sparked my interest, but it also made me wonder whether the discussion/audience was too broad for a single conference. Besides this, the conference was set up to look beyond the usual by devoting, rightfully, a lot of time and attention to the role of the global south. Words like equity, fairness, and capacity building were often used in various sessions of this conference with the tagline “Fostering Research Integrity in an Unequal World”. I understand the need, and I support the idea of making this the main theme, yet I have the feeling that the theme was spread too thin over the whole conference and thereby lost some of its power.

Anyway, I was not only there to consume. I chaired a couple of sessions (I especially liked oral session 15!) and I gave a talk on our newest project. That project aims to build an open, indexed, searchable, and complete overview of all cases of scientific misconduct allegations in the Netherlands. In my talk, I gave the arguments for such a platform and provided a first overview of the design requirements. We decided to strengthen our message by sharing the slides and our preprint on our OSF page – a decision I do not regret, as you might understand from the stats in the tweet below.

But the most important thing about the conference was meeting others. There is simply no good substitute (online or otherwise) for haphazard meetings with interesting people while waiting in the coffee line.

From Latour to Twitter: Social Media and the Scientific Enterprise

I have a love/hate relationship with Twitter – it feeds me with scientific knowledge, I learn about new developments in science and I stay in touch with friends and their interesting projects. But it also gives me anxiety and low-key FOMO: I see research papers and op-eds being published that I have had in my head for months, people celebrating grants and prizes that I didn’t get, and meetings and conferences described in a way that I am sure I missed the most important meeting of the year. And, being an epidemiologist, there is the usual COVID-19 vitriol from trolls and #doyourownresearch “researchers”.

So, in that sense, #AcademicTwitter is in no way different from normal Twitter. Why then am I going to advocate that Ph.D. candidates should all become (semi-)active Twitter users in an upcoming course of the Dutch Society for Thrombosis and Hemostasis? The answer starts with Latour’s “Laboratory Life”.

In this book, the authors partake for quite some time in academic research at the Salk Institute. This approach yields a useful sociological insight into a single lab – What goes in? What does it produce? How do the people in the lab work towards shared or individual goals? What are those goals anyway? But with “The Construction of Scientific Facts” as its subtitle, I think we should think a bit bigger. After all, can a single lab from 1986 paint the full picture of how we do and organize our research in 2022?

From this book, I take that science takes place in a much broader context than a single experiment, lab, or even institute. It is this broader context and the interplay between all players that I want to help improve, slowly but surely, with the QI program that wants to “improve the way we do and organize our research”. Let us take a look at how social media plays a role in this goal: since a large part of how we do research is online, individual scientists need an online presence. Various online platforms have popped up in the last decade, ranging from author disambiguation services to scientific profiles and open science platforms. These platforms are tools for science, and therefore should be in the toolbox of every scientist. They are also very useful for meta-research, if I may add. However, they often lack the connectivity with peers and members of the public that is needed to debate that other question – how do we organize our research?

The Scientific Enterprise is subject to change. Especially with recent technological, methodological, moral, organizational, and societal developments driving the rate of change, how we organize our research ten years from now will be different from today. To appreciate the rate of change, we only have to look at this 10-year-old report from the Royal Society that calls for a more open scientific enterprise and compare it to the current implementation of open science practices, later reports, as well as the “recognition and rewards” movement in Dutch academia.

That debate, which helps shape the future of the Scientific Enterprise, is of course part of symposia, meetings, conferences, and small talk at the water cooler. But it is also held online – on social media, especially on Twitter. So, for that reason alone, I think that all researchers should become “active” on Twitter. Please note that it depends on your goals whether “active” here actually means actively engaging – sometimes just knowing what the debate is all about is more than enough. So take your time, and actively ask yourself some questions to help identify your goals before you actually start with social media. Make sure that how you are going to use social media is in line with those goals. Make it a nice place to spend your time usefully, where you consume content that is in line with your goals, and where you only post what you want to share. After all, even though it is part of the online scientific enterprise, academic social media is still social media.

My slide-deck for this talk can be found on OSF: osf.io/xzwq9/

A fair warning – I think you should determine your goals before you start to develop your online presence, otherwise you will end up dancing TikTok dances almost 4 hours a day without even realizing it. (The picture is part of the slide deck on OSF: osf.io/xzwq9/)

Updates on my COVID-19 activities

Irrespective of what specific field they are working in, epidemiologists have had an interesting two years. On the one side, we might get flak for the countermeasures taken to battle the corona pandemic; on the other side, we no longer need to explain what an epidemiologist actually does. You win some, you lose some.

But we are not all working on COVID-19 24/7. Still, even when you are not an infectious disease epidemiologist, chances are that your work has been affected. Take my case, for example. My expertise lies in clinical epidemiology, especially of non-communicable diseases – I know a lot about how to study the causes and consequences of diseases that are not transferable between people. Even though I leave the hardcore infectious disease epidemiology to the experts, my skill set can bring useful insights and contributions. I have listed my COVID-19 contributions below.

  • The PCFS: mentioned on this blog before, in May 2020 we came up with an ordinal outcome that we think is useful in measuring, and thereby understanding, the long-term consequences of COVID-19 (aka long COVID). The scale has been adopted in various ways (clinically, in guidelines, observational studies, and trials). We published a couple of papers, and are now looking into the use and misuse of the PCFS in the current literature, with student JL in the lead. More can be found on this OSF page, and in this earlier post.
  • COVID-19 hospital admissions in NL – the Dutch hospital admission numbers are released daily. That in and of itself is useful, but their context is important. We sketch out this context, especially in light of a so-called code black, an ominous yet valid way of saying that we need to take non-medical factors into account to triage patients, as we only have limited resources on the ICU and wards. More on this can be found on this OSF page. The paper is submitted to a Dutch journal and expected to be published in the next couple of days.
  • The WHO definition of long COVID – I have contributed in various meetings/Delphi rounds etc. to the WHO definition of long COVID, published in October 2021. This definition reads: Post COVID-19 condition occurs in individuals with a history of probable or confirmed SARS-CoV-2 infection, usually 3 months from the onset of COVID-19, with symptoms that last for at least 2 months and cannot be explained by an alternative diagnosis. Common symptoms include fatigue, shortness of breath, and cognitive dysfunction, but also others, and generally have an impact on everyday functioning. Symptoms may be new onset following initial recovery from an acute COVID-19 episode or persist from the initial illness. Symptoms may also fluctuate or relapse over time.
  • Post corona core outcome set – I am a participant in the PC-COS effort to come to a core outcome set that helps to study all that happens after COVID-19. Standardization of outcomes is critical, otherwise we are comparing apples to oranges (or even mushrooms for that matter!). If you want to learn more, take a look at the COMET website with some information on the study, or the PC-COS project page. Or just take a look at the video below:

New position for a PhD candidate in the QI team

We are prepping a new PhD candidate position for the team. Since we are still in the planning phase, this is not a formal job opening (yet). We are currently looking into what type of individuals would like to join the team, in order to make the final job description a bit clearer. Since this is not grant money, there is no fixed project, and thereby no fixed job description. The candidate could jump onto one of the following projects, but their own ideas and interests are very welcome, especially at this planning stage.

  • Jurisprudence for research integrity platform
    The candidate will initially work on setting up, and subsequently analyzing, an open and searchable jurisprudence database for research integrity. This database will include all ~500 outcomes of investigations into scientific misconduct in the Netherlands and will grow by ~50 cases per year. The candidate will work with various disciplines (lawyers, data scientists) to qualitatively and quantitatively analyze clusters and patterns over time, with methodologies ranging from expert interviews and surveys to natural language processing, both in isolation and as mixed methods. These analyses, together with a description of landmark cases, will form the jurisprudence platform that will be used during future investigations of scientific fraud.
  • Differences in data handling and analysis
    Many meta-research projects that look at whether methodologies and statistics are used correctly focus on analyses of publications or existing databases of research output. The individual researcher is often hard to reach, and those who do participate in this type of research are not likely to be representative. The candidate will execute several smaller subprojects that combine principles of the “many analysts” and “multiverse analysis” approaches to study the use and misuse of methods and statistics in selected medical specializations. The lessons learned will hopefully lead to the development of a platform designed to lower the threshold of participation and thus increase representativeness.
  • Authors behind retractions
    There are various papers describing the frequency and some characteristics of scientific articles that were retracted from the literature. The focus is less often on the people behind those retractions. Can we distinguish the culprits from the innocent bystanders? What are the causes and consequences of retractions for the individual researcher? Are there differences between scientific fields? The candidate will approach these questions with the various epidemiological tools and study designs that we have at our disposal.

Some provisional requirements and other details

Open from: Jan 1st 2022 at the earliest. Please get in touch.
Open to: People who hold (or almost hold) an MSc degree in a relevant discipline, which includes – but is not limited to – biomedical sciences, medicine, biology, or even law. Although not a formal requirement, some experience with quantitative research and knowledge of open science practices are welcomed.
Language: For some subprojects it is pivotal to be fluent in both Dutch (mother tongue or at least C1) and English (mother tongue or at least C1).

What we offer – The QI activities are driven by a small interdisciplinary team of researchers and policymakers from the medical sciences. Evidence driven, we aim to improve the way we do science through research and policy changes, in collaboration with numerous researchers from within and outside the LUMC. For this project, we team up with prof. dr. Frits Rosendaal (LUMC, co-chair CWI Leiden) and dr. Yvonne Erkens (Leiden Law School, co-chair CWI Leiden). The research of the team is formally embedded within the department of clinical epidemiology (dr. Bob Siegerink / Frits Rosendaal). Take a look at our relatively new OSF pages – https://osf.io/2syfm/ and https://osf.io/ku9rh/ – for more information on our activities.

Interested? Please get in touch with primary supervisor Bob Siegerink (b.siegerink@lumc.nl / bobsiegerink.com / @bobsiegerink) and send a short motivational statement, and perhaps a short resume that outlines your educational background, research experience, and relevant software skills. Your responses will help us further define the roles/activities and corresponding requirements of the new candidate in the team.

PhD defense

The thesis of BO can be found in the Leiden repository

Some days ago I was a member of the opposition committee, this time for candidate BO. She did a fantastic job of combining qualitative and quantitative research on the effect of the academic and scientific training we give our medical students, both within and outside the regular curriculum. She showed, in various studies, how the approach chosen in the LUMC seems to stimulate young MDs to do research. I won’t go into too much detail, because I am afraid I won’t do the work justice, to be honest – BO was not only diligent in her methodology, but also strong in showing the strong and weak spots in her methods and reasoning. So, if you want to know more, you can find her thesis, with the inspiring title ‘Future physician-scientists: let’s catch them young! Unraveling the role of motivation for research’, in the repository of Leiden University.

The Dutch PhD defense sometimes seems more of a ritual than an actual exam. This is because the outcome is clear to all involved – if the candidate is allowed to defend (the defense is sometimes referred to as the viva), the candidate will get her doctorate. I think the only way you will not be awarded your doctorate is if you actually pass out, or something else massive happens that renders answering any question impossible. The questions are thus meant as an exchange of thoughts, and function as a way to showcase your work, not so much to challenge or test it. Or is it?

I was not the only one thinking this was excellent work. BO got Cum Laude, the highest evaluation, which is only given to about the top 5% of candidates. Cum Laude not only requires more external evaluators (all asked for their opinions in secrecy), but is also dependent on the defense itself. Indeed, it is possible for a candidate not to perform well enough, in which case the candidate just passes the defense like everybody else. So it is always a surprise for the candidate to hear those words at the end of what sometimes really is an exam: you have passed the exam with the highest honors.

Publishing for science or science for publications?

Sometimes, open science seems to be the thing that is going to solve all the nasty bits that are part of the current scientific enterprise. No added value from peer review? Registered report! Duplicate work? Preprints and preregistration! People working on the same problem as you are? Share data!

But recently I came across a stark reminder that that is not the whole story. I was asked to review a short research letter from people who were looking to gather all evidence to answer a particular clinical question. While doing so, they stumbled upon a deluge of meta-analyses: 20 meta-analyses published in 17 different journals. “What a body of evidence” is perhaps the first reaction, but the problem was that all these analyses were based on only 4 trials.

The authors pointed towards the enormous waste of energy and even talked about an epidemic of meta-analyses. I see in this a great example of how the root cause lies in science being done for publications, rather than publications being made for the betterment of science. This is the root cause of many problems in the scientific enterprise. And as long as that is not addressed, open science practices are just a stopgap – something which might work for a bit, but it does not address the root cause and therefore does not solve the problem. If you want to read the whole argument, with some details on what open science practices were actually in place for these 20 meta-analyses, the paper is only a 5-minute read.

This commentary is a joint production with FRR. The paper was first a preprint on our OSF page and is now published in the Journal of Thrombosis and Haemostasis, and can be accessed here (open access).

A small kerfuffle in Dutch academia over changes in rewards and recognition for work in academia

There is a change in the Dutch academic air. There is a broad movement, fuelled by those who pay for most of the grants, to change how we reward and recognize individual contributions to academia. The bottom line is simple: journal impact factors and the number of publications might be used to assess these contributions, but they are just not the right metrics. They incentivize behavior that is not desirable, and most of all, a narrow focus on scientific excellence means that we are not valuing other important work in academia. To strive for scientific excellence might be a good thing, but teaching, patient care, and outreach should also be valued.

Against that background, a group of 170 (mostly senior) researchers published an open letter arguing against this new approach. Their main argument – impact factors and indices are, despite their flaws, the current international standard, and leaving them behind will jeopardize the international standing of our researchers. A lot can be said of that point of view – it is old-fashioned, it is short-sighted, and it is the story of only the winners of the old system. But then again, they are right, aren’t they?

Yes and no. Yes – impact factors are indeed a hallmark of various hiring and promotion procedures in a large part of international academia. No – more and more organizations are coming around and see that the way we evaluate contributions to science should go beyond counting how many “impact points” one has collected. DORA has been signed by many Dutch organizations like VSNU, NFU, KNAW, NWO, and ZonMw, but most importantly, the European Research Council just joined as well. And also, the various new formats for recognition and rewards in the Netherlands simply add the possibility to explain your success story, which might or might not include a publication in Nature.

This makes the movement not fringe, weird, or dismissive of those who were successful in the old system. It adds possibilities. The premise of their main point – that the Netherlands is one of the first countries to make this change – is true, and this has consequences. It is up to Dutch academia to move from idea to policy in a healthy way. That means that some of the ideas of these researchers need to be taken into account in order not to create a split civitas academica.

These ideas are the backbone of a response that a bunch of young members of the scientific enterprise wrote – me included. The little kerfuffle even reached the regular news media, with a short piece in the Volkskrant.

Know, be consistent and open about when, who, and how to count when you use the PCFS – do not forget the dead!

The post COVID-19 functional status (PCFS) scale has been adopted in 100+ clinics and research projects. I think this is a great testimony to the power that science and collaboration bring to this pandemic. But a new tool also comes with a way of thinking that is perhaps standard in stroke research but not so obvious outside that field. I might have underestimated that when we proposed the PCFS. To provide some guidance, just think about this: know, and be consistent and open about, when, who, and how you count when assessing the PCFS in your clinic or cohort. And yeah, do not forget the dead.

  • WHEN: when you count the PCFS in your patients is perhaps the most important aspect of the PCFS if you want to make use of its full potential. When you assess the PCFS must be standardized. It would be best if it were standardized between studies or clinics (e.g. @discharge and 4 & 8 weeks thereafter, as we suggest in the original proposal), but this might not be suitable in all instances. If you can’t keep to this, try at least to keep the moment of assessing the PCFS constant, with a narrow time window, within one data collection. As COVID-19 patients are likely to improve over time, it matters when each patient is interviewed. Irrespective of whether you were able to keep the time window as tight as possible, make sure to report the details of the window in your papers. Even better – share the data.
  • WHO: if you only assess the PCFS in the survivors of COVID-19, you build in a selection. And when there is a selection, selection bias is around the corner. The clearest example comes from comparing patients admitted to the ICU with those who were not. If we do not count the dead in the ICU population, but we do in the other group, it might well be that the PCFS distribution among those with a PCFS assessment appears to favor the ICU group. All patients who enter the cohort need a PCFS assessment, including the dead. Again, whatever you did, make sure you describe who was and who wasn’t assessed in the methods and results of your papers.
  • HOW: in our proposal we give the option of an interview or a self-assessment questionnaire. We don’t have enough evidence to support one over the other. We think interviews provide a little more depth, but the bottom line is that professionals should choose the one that fits their needs best. The two assessment methods will provide different scores, and that is just fine, as long as it is clear to others what you actually did. But be aware: mixing the two types in one study or cohort can bring some bias – see above. Make sure you provide an adequate description of what you did, even when you follow the methods proposed in our manual – the PCFS is not completely standardized in the literature, so you need to bring your colleagues up to speed.
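The WHO point can be illustrated with a small simulation. This is a hypothetical sketch, not based on any real cohort: all numbers are invented, and coding death as a worst rank of 5 (one step beyond PCFS grade 4) is just one possible convention for keeping the dead in the analysis.

```python
import random

random.seed(42)

# Invented scenario: ICU patients die far more often, but among
# survivors both groups happen to have identical PCFS distributions
# (grades 0 = no limitations ... 4 = severe limitations).
def simulate(n, death_risk):
    grades = []
    for _ in range(n):
        if random.random() < death_risk:
            grades.append(5)                     # died: ranked worst of all
        else:
            grades.append(random.choice([0, 1, 2, 3, 4]))
    return grades

icu = simulate(10_000, death_risk=0.30)
ward = simulate(10_000, death_risk=0.05)

def mean_grade(grades, include_dead):
    kept = [g for g in grades if include_dead or g != 5]
    return sum(kept) / len(kept)

# Survivors-only comparison: both groups look roughly the same,
# hiding the sixfold difference in mortality.
print(mean_grade(icu, include_dead=False), mean_grade(ward, include_dead=False))

# Counting the dead restores the difference between the groups.
print(mean_grade(icu, include_dead=True), mean_grade(ward, include_dead=True))
```

In the survivors-only analysis the two groups are indistinguishable, exactly the selection problem described above; only when the dead are ranked into the outcome does the ICU group show the worse result it should.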

In a clinical setting it is easy to take these three variables into account when discussing a single patient. After all, you only need to put the PCFS in the context of one individual; at most, you need to consider the PCFS measured over multiple time points. But if you want to learn from your experiences, it is best to make the assessment as standardized as possible. It will help you interpret the data of an individual patient quicker, see patterns within and perhaps between patients, and, as a final kicker, might make it possible to do some research with that valuable data.

More information on the PCFS can be found on https://osf.io/qgpdv/.

Publications and open science experiences in times of COVID-19 – two wildly different experiences

SARS-CoV-2 and COVID-19 have brought out both the best and the worst in medical research. There are numerous examples that highlight this, such as the deplorable state of COVID-19 prediction models and the controversy around a quite well-known meta-researcher. I won’t go into those topics. Instead, I will share two of my own experiences that illustrate both sides of the coin when it comes to going the extra mile by adopting some open science practices.

On the good side of things there is our effort to implement an ordinal scale to measure functional outcome after COVID-19. After our letter to the editor in which we only proposed the idea, we kept the project going by publishing a manual to standardize efforts. On top of that, we were lucky that many researchers and clinicians were willing to contribute time and effort with translations and adaptations for their own patients. To this day, we have 20 translations, and many more in the making. Next to this, the scale is now included in several clinical guidelines and relevant clinical studies. This is a story of success – not for us, but for the open character of science in pandemic times. We had an idea, were able to share it quickly in a traditional journal with a letter to the editor, and afterwards used OSF to keep the project going.

The bad side of things is the little call-to-action I wrote with my colleague DM. He observed an interesting pattern in the percentage of patients admitted to the hospital with COVID-19 – with up to a twofold difference, as you can see in the graph above. The lower the weekly SARS-CoV-2 infection numbers, the higher the percentage of patients admitted to the hospitals. Our argument at the time was that, in preparation for the third wave of infections in the Netherlands, medical professionals (GPs, ER docs, home nurses, etc.) should try to keep the hospitals as empty as possible and keep the beds ready for the large numbers of patients that we knew were coming our way. Because it was more of an opinion piece, we skipped a detailed methods section, but we provided everything (code, data, graph, methods description) on a dedicated OSF page which we kept up to date. We offered our thoughts as a comment to the Dutch journal “Huisarts en Wetenschap”, which after a two-week review period decided to turn it down. Two thoughts on this: 1) I am not sure why there was peer review at all, as it was a call-to-action and not original research, and 2) the reason they gave us for the rejection (after revision!) was that the message was perhaps too complicated in these already confusing times – I will leave this without further comment. Next up was the journal NTVG, a more general medical journal in the Netherlands. The peer review process (again!) gave us another delay of two weeks before the paper was accepted. But you might have guessed it – the third wave had already started and rendered our call-to-action redundant. In the end, we talked to the editor and mutually decided that we would withdraw our paper, and that at a later moment in time we might actually write a research paper on the causes behind the variation in the percentage of positively tested patients admitted to the hospital.

All in all a very dissatisfying experience, especially in comparison with the PCFS story. The weird thing here is that in both cases we did the same thing – we had an idea and wanted to share it with those who might be triggered by it. We started with some information on OSF, with dedicated project pages whose content grew over time as the story developed. In both cases we were commended for using OSF to share additional material in support of the letter to the editor. Still, the outcomes are very different. This may of course simply be because the ideas are inherently different, but the delay in the publishing process not only killed the second project but also killed some of my enthusiasm for open science. Luckily, it is only some, and I hope / am sure that some future experiences will help me regain some of that.

Quality and Integrity @ LUMC – new job, new projects

A lot has happened in the last six months. It all started with moving from Berlin back to my alma mater in Leiden, but also with shifting from being a group leader in neurovascular epidemiology to a position as leader of the program Quality and Integrity of Biomedical Research. This program is a combined effort of the department of clinical epidemiology and the directorate of research policy, both at the LUMC. The goal of the program is to both understand and improve the quality of research. The tools of the trade will be research-on-research on one side and policymaking on the other – with the combination being our secret weapon.

The projects

Over the last six months a lot of ideas have already taken shape. Here are some examples:

  • Retractors and retractees – in this project, in collaboration with the people at Retraction Watch and the CWTS, we will look not only at the retracted articles themselves but also at the people behind the retractions, the “retractors and the retractees”.
  • Research on researchers – a lot of meta-research is based on publications and other “made to look good” output. This approach leaves the actual process during the research relatively untouched. I am looking to change that – meta-research should go from research on published research to research on researchers. This idea is also similar to the retractees project which focuses on the individual behind the retraction.
  • Understanding misconduct – in this project I will collect the decisions of all committees for scientific misconduct active in the Netherlands. By collecting these and subsequently making them available, we can not only measure the variability of the decisions and understand patterns over time, but also help to standardize future decisions and rulings.
  • Cohort of PhD students – all LUMC Ph.D. candidates enroll in the graduate school, and with ~150-200 Ph.D. candidates per year, I think this is the biggest graduate school in the university. I want to build a cohort of Ph.D. students, to follow their successes over time and improve the way we train and prepare these young scientists for a future in science. I am also looking forward to building some much-needed training modules on the topic of quality and integrity.
  • Redesigning IRBs @ LUMC – when you do research in the Netherlands, there are various laws and guidelines that you have to adhere to. An independent committee that looks at your research plans before you actually start is needed. The current system at the LUMC is outdated, and I have the task of helping to redesign it.

The people

The last six months have also given me the opportunity to slowly build a team again. I have written about building a team before, with some lessons I drew from my experiences from starting my new job in Berlin six years ago. That time, I took over a team. This time, I am starting from scratch.

I will run the team together with FRR from clinical epidemiology and JT from the directorate of research. I am quite happy with ALL, a young and bright Ph.D. student on the team. She will be working with me on most of the projects mentioned above, specifically the retractors project. Other Ph.D. students from the department are also likely to join the QI program for a project, as an addition to their own research. The same goes for various colleagues from within the department, from elsewhere in the LUMC, or even from outside the university (some old colleagues at QUEST come to mind).

I am now supervising two students (AdK and LP), both on the retractees project. I am keen to have two to four students working at any given time on either the research side or the policy side of things. If you are a student looking for an interesting project, don’t be afraid to get in touch!

The dreams for the future

One big dream of mine is to organize some summer labs: bring a group of roughly 10 young minds together into one room to tackle a problem within 4-6 weeks, pressure cooker style. I hope that this summer will be my first edition, but I am unsure if this is a realistic idea given the direction of the COVID-19 pandemic in the Netherlands.

Another ultimate dream is to build up a network, both within the LUMC and within the university, not unlike what we did with BEMC. Not just a social network for drinks, or a network of peers to learn from best practices, but a network that can help change policy for the better and both broaden and deepen research projects.

The other stuff

I will continue to work with some collaborators in the neurovascular field (both in Leiden and at the CSB). Some general epidemiological projects might also just find their way onto my desk, but to make the best out of QI, I will need to focus. The only thing I now need to learn is to say “no” more often.

Onward!

New paper – Coagulation factor XII, XI, and VIII activity levels and secondary events after first ischemic stroke

Another paper focussed on coagulation. After our paper on FVIII and cognitive function, a paper on the genetic determinants of FXI and FXII, and various papers on intrinsic coagulation proteins in the etiology of first stroke and myocardial infarction (for example here, here and here), it was time to take the next step: the risk of secondary events.

But studying causal effects in patient groups comes with its own difficulties, such as the critical timing of the blood draw to avoid acute phase reactants showing up as causal factors, as well as the phenomenon of index event bias. JLR, who took the lead in this project, made it all work, and the team indeed found that increased FXI and FVIII were related to higher cardiovascular risk in the patients included in the PROSCIS-B study. Even though the figure only shows unadjusted results, you kind of get the whole idea just by looking at this Kaplan-Meier.

Figure 2 from the paper.

Does this mean that we should be measuring or even targeting FXI and FVIII immediately in all stroke patients? No, not really, at least not yet. The HRs are modest (HR of ~2) and therefore not likely to lead to enormous improvement of stroke prediction models – perhaps we can expect a modest contribution in the future, but only in combination with some other biomarkers which are measured in a simple and reliable way. Targeting FXI and FVIII for treatment is an interesting option, but we are nowhere near a final stage. These proteins (esp. FVIII) are critical in proper hemostasis, and therefore messing with FVIII could lead to some serious side effects. This is less of a problem for FXI, but still, some more work needs to be done, including the identification of the groups of patients that will have the greatest benefit.

The paper, again with the wonderful JLR in the lead, started out as a preprint on medRxiv, was published in the JTH, is open access, and can be found on my Publons profile and Mendeley profile.

New Paper – Smoking Does Not Alter Treatment Effect of Intravenous Thrombolysis in Mild to Moderate Acute Ischemic Stroke

In 2013, one of my then very new colleagues in Berlin published a very interesting paper with the title “Smoking-thrombolysis paradox: recanalization and reperfusion rates after intravenous tissue plasminogen activator in smokers with ischemic stroke”. When you translate this to something less technical, you will conclude that stroke patients who smoke seem to react better to acute treatment. The drug that they get administered seems to perform better at opening up the blood vessels in the brain so that the oxygen-rich blood can flow again. It sounds weird, even to us at the time, but there is an – albeit unlikely – biological scenario that could actually explain this finding.

Irrespective of the biology, this first finding was too preliminary to draw strong conclusions, as it was based on imaging outcomes only. It didn’t say anything about whether the patients who were active smokers ended up having fewer symptoms. So that’s where this project came in. Using data from the Dutch PSI study, we were able to study the effect of the treatment and whether that effect was actually different in patients who smoked.

Background: The smoking-thrombolysis paradox refers to a better outcome in smokers who suffer from acute ischemic stroke (AIS) following treatment with thrombolysis. Source

The short answer is no – we didn’t find evidence that patients who were active smokers would actually have more benefit from thrombolysis. If we had found this effect, the RR should have been much more extreme in the +/+ category than in the +/- category. In fact, all that we saw showed that there was no real difference. But there are some serious limitations to our study, the main one being that the patients included in this dataset might not have been the best subset of stroke patients to study this phenomenon. So, even though we didn’t see evidence of the phenomenon, we can’t rule it out and conclude, as so often, “that future research is needed”. Before you ask: yes, we indeed did that future research ourselves. That paper is currently under review, but I can already tell you that the conclusion is not going to change a lot.
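To make the +/+ versus +/- comparison concrete, here is a minimal sketch of the effect-modification logic behind it. The counts are entirely hypothetical, chosen only for illustration – they are not the numbers from the PSI study.

```python
# Sketch of the effect-modification logic behind the +/+ vs +/- comparison.
# All counts below are hypothetical, for illustration only.

def risk_ratio(events_treated, n_treated, events_control, n_control):
    """Relative risk of a good outcome, treated vs. untreated."""
    return (events_treated / n_treated) / (events_control / n_control)

# Smokers (+): treated vs untreated
rr_smokers = risk_ratio(60, 100, 40, 100)      # 0.6 / 0.4 = 1.5
# Non-smokers (-): treated vs untreated
rr_nonsmokers = risk_ratio(55, 100, 44, 100)   # 0.55 / 0.44 = 1.25

# If the smoking-thrombolysis paradox were real, this ratio of RRs
# (a multiplicative interaction measure) should be clearly above 1.
ratio_of_rrs = rr_smokers / rr_nonsmokers
print(round(rr_smokers, 2), round(rr_nonsmokers, 2), round(ratio_of_rrs, 2))
```

In our data, that ratio was close to 1, which is what "no real difference" between the +/+ and +/- categories boils down to.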

Our paper, with AK in the lead, was published in Frontiers in Neurology last September (sorry for the delay), and can be found on my Publons profile and Mendeley profile.

New paper – Endothelial and leukocyte-derived microvesicles and cardiovascular risk after stroke

Kaplan Meier for quartiles of endothelial risk factors, taken from our paper.

Microvesicles have been a topic in cardiovascular research for some years now, mainly in cardiology. These vesicles originate from various cell types, and their function remains largely unclear – are they active parts of the body’s system, or are they mere bystanders?

Irrespective of that, if these MV are related to cardiovascular risk in cardiology patients, it is interesting to know to what extent they are related to cardiovascular risk in stroke patients. If so, that would be an indication that they are actually part of the causal mechanism, or perhaps a good biomarker that might help stratify patients into meaningful subgroups.

So what did we do? We teamed up with cardiologists specialized in MV to measure various subtypes in 600+ patients with a first-ever ischemic stroke. We then looked at the risk of recurrent events and all-cause mortality over a span of three years. Our findings tell a clear story – the higher the levels of MV, the higher the risk. The interpretation, however, remains as unsure as when we started. We still do not know whether these MV are a cause or a bystander. More research, including some hardcore basic research, will be needed to further elucidate this distinction. In any case, the HRs are not too impressive in this mild to moderate stroke cohort, so don’t expect MV to be added to any risk screening panel anytime soon, especially as the measurement is quite laborious.

Our paper, with SH in the lead, is published in Neurology, and can be found here as well as on my Publons profile and Mendeley profile.

PhD defense – The early identification of patients with an unfavorable prognosis

“things change”

When I teach undergraduates what kinds of research questions one might want to answer, I sometimes use the mental image of a patient in the doctor’s office. The questions that are asked can be roughly categorized as “What is wrong with me?”, “How did I get it?”, “What will happen from now on?”, and “What can we do about it?”. Students with a quick mind will recognize the concepts of diagnosis, etiology, prognosis, and treatment hidden in these questions. I usually treat these as quite separate questions, for each of which different study designs and statistical techniques are preferable.

But that changed after I prepared for the PhD defense of SB. As a member of the “oppositie”, I was tasked with examining the PhD candidate on her knowledge and the work presented in her thesis. Titled “The early identification of patients with an unfavorable prognosis”, this thesis focussed on the theme of whether treatment response could be incorporated in prediction models in order to improve the prediction of outcome. I think this is an interesting concept, potentially heavily underutilized and at the very least heavily understudied.

SB started out by showing that in psychiatry the majority of care is consumed by a minority of patients, thereby conceptually proving the need to identify patients with an unfavorable outcome. The next chapters test whether adding information on treatment response improves prediction for three different diseases: depression, asthma, and high blood pressure.

Being the last of the opposition committee to examine the candidate, I was able to ask more about the methodological details (presence of co-linearity, the calibration/validation of the final models, the use of complete case analyses, etc.). These introductory questions led us to the broader question of her approach to including information about treatment response. SB consistently used the delta of two absolute measurements as an indicator of treatment response, and I asked whether that could be replaced, or even complemented, by using the delta in the variation of certain measurements. The bottom line is of course that we don’t know, but that is also not the objective of a Dutch PhD ceremony. The objective is to start a conversation during which the candidate can show that he or she masters the material and is ready to become an independent researcher. And that is what SB did, congratulations!
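For readers wondering what the difference between these two indicators could look like in practice, here is a minimal sketch with made-up blood-pressure measurements (the values are mine, not from the thesis):

```python
import statistics

# Hypothetical repeated blood-pressure measurements for one patient,
# before and after the start of treatment (all values are made up).
before = [152.0, 148.0, 155.0, 150.0]
after = [138.0, 141.0, 139.0, 140.0]

# The indicator used in the thesis: the delta of two absolute
# measurements (here: the change in mean level).
delta_level = statistics.mean(after) - statistics.mean(before)

# The alternative raised during the defense: the delta in the
# *variation* of the measurements, e.g. the change in standard deviation.
delta_variation = statistics.stdev(after) - statistics.stdev(before)

print(delta_level, delta_variation)
```

In this toy example the treatment not only lowers the level but also makes the measurements more stable; the point of my question was that the second signal is thrown away when only deltas of absolute values are used.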

So what did I learn from all this? Perhaps using the four categories of research questions is too restrictive, and sometimes, by combining the ideas and concepts from questions on treatment and prediction, we can actually improve, and better distribute, the care we can provide for all our patients.

New paper – Coagulation factor VIII, white matter hyperintensities and cognitive function: Results from the Cardiovascular Health Study.

Overview of the “long” timeline of the CHS data collection, taken from the paper discussed.

The newest addition to our publications is this paper on the role of high levels of coagulation factor VIII in cognitive function, as well as white matter hyperintensities in the brain. The theory behind it is this: since hypercoagulability is related to young, overt stroke, would hypercoagulability perhaps also be linked to non-overt cerebrovascular mishap? Hypercoagulability here is measured as high levels of FVIII, one of the most potent risk factors for thrombosis, and the cerebrovascular mishap is the presence and intensity of white matter lesions. This paper has a long history in three very different, but meaningful ways.

The first “long” aspect is that you need a very long, and complex, follow-up to study this. Where clinical stroke is a sudden-onset type of disease, what we are studying in this project has a far more gradual character. So not months, but years. Decades, even! And not a lot of studies have the prerequisites to study this question: first, there must be the possibility to measure FVIII, which requires citrate samples. But most long-term follow-up studies do not have citrate. Second, there must be MRI data available throughout the study. However, most large longitudinal studies only have MRI at baseline as an exposure measure. Third, the studies must go on for a very long time, which comes with the complication that participation in these types of studies often dwindles over the years. So, all in all, there was really only one cohort that had all the data already collected and ready to analyze: the Cardiovascular Health Study. So we requested access to the data, with a focus on FVIII, cognitive functioning, and multiple measures of white matter lesions in order to assess worsening of lesions, and were ready to analyze!

Interestingly, this brings us to the second “long” aspect of this paper – getting access to and using the CHS data. The idea for this paper came roughly in 2013, when I got in touch with some CHS researchers for the first time. That is a long time ago from today, end of 2020. So what took us so long? Well, first there was the move to Berlin. I decided to take this project with me, but emigrating to a different country and starting a new job at the same time tends to put some delays on ongoing projects. A second reason is that the CHS data is open for non-CHS researchers to use, under one very strict condition – CHS researchers don’t just hand over their data, they help you set up and execute your plan. This approach is not completely “open science”, but it might be better. After all, it does ensure that the knowledge and experience that come from actually collecting the data are taken into account when you prepare for and actually analyze the data. But that process takes time, especially when working with collaborators several time zones away.

A third and final “long” aspect was the time between first submission and final publication. Our paper got rejected by 4 different journals before we got accepted at PLOS ONE. This is definitely not a record, but the delay isn’t pretty. The reasons for rejection differed between these journals, but the fact that this was a “null” finding in a general population cohort certainly did not help.

The paper, with the title “Coagulation factor VIII, white matter hyperintensities and cognitive function: Results from the Cardiovascular Health Study”, is published in PLOS ONE. You can also find it at the usual places. JLR took the lead on the project after I moved to Berlin, masterfully navigating and combining all the comments and input from this group of co-authors. Well done to all.

PhD defense – Thrombosis prophylaxis after knee arthroscopy or during lower leg cast immobilization

Thesis cover. Source

A couple of weeks ago I was part of the “promotiecommissie” of RvA at Leiden University. This is the committee that evaluates the thesis of a PhD candidate, and which subsequently is also part of the opposition during the in-person defense, sometimes known as the “viva”.

I was quite impressed by the work described in the thesis. Not just because the individual projects described in each chapter were solid, but also because of how one clinical problem was approached from several angles, with each research question answered with a different methodology. And for every chapter it was clear how it contributed to the central theme: is the current practice of venous thrombosis prophylaxis in certain orthopedic patients justified?

The candidate started out with a description of risk factors of venous thrombosis related to either lower leg cast immobilization or arthroscopy of the knee, thus establishing that there is indeed an increased risk and that certain risk factors contribute to this risk. A survey amongst colleagues subsequently showed that the prophylactic treatment given to these patients differs quite substantially. This relatively simple element of the thesis is crucial, as it shows equipoise for the treatment – even though there is some evidence from trials, the evidence is weak and methodologically unsound (they mostly use ultrasound-diagnosed venous thrombosis, not a clinical diagnosis), resulting in highly varied practices.

So the stage is set for a trial. In fact, two trials, one for each of the two patient groups, are presented in Chapters 5 and 6 of the thesis. Impressive stuff, which found its way into the NEJM and showed that treatment with LMWH is in fact not better than placebo. Given this result, this would normally be the end of it, but “compared to placebo” should raise some eyebrows. Is that the right comparison group? In this case, you can argue that it is, but even if you don’t think so, just go to chapter 7, where in another group at high risk of venous thrombosis a comparison with compression stockings is made – again, no evidence that LMWH is better. The candidate presented this as an IV analysis. Interesting thought, but I disagreed – the rationale behind the comparison between centers has some IV elements to it, but there is no actual IV analysis being done. Potato, potato perhaps, but hey, it is a PhD defense! The last two chapters were the first step towards a prediction model for venous thrombosis in orthopedic patients (prediction in a case-control design, no validation). The idea behind this is that if you can identify the high-risk group among all patients, treatment with LMWH might still be useful.

But for now, the evidence is clear – no LMWH in these orthopedic patients for the prevention of venous thrombosis. And that brings me to the lesson I took from this thesis – it is possible, and necessary, to evaluate medical practices already in place. It is the whole premise behind the book “Ending Medical Reversal”. I got that book as a gift from a colleague in Berlin, but I never got around to starting it. After reading this thesis, though, I grabbed the book and read it cover to cover in just two days. An easy read, with interesting ideas on medical reversals, their causes, and how to prevent them from happening in the future. Some of my questions during the defense were even based on the book – for example, whether a cluster-randomized trial design should not be the gold standard in medical reversal research.

But the bottom line of the book+thesis combo is clear: there are a lot of medical practices used on a daily basis that should be re-evaluated. Except for one: “LMWH for the prevention of venous thrombosis in all patients with below the knee immobilization or arthroscopy” can be taken off the list.

The full text of the thesis can be found here.

PhD defenses – finding myself on the other side of the table

The traditions and ceremonies surrounding PhD theses and their defenses differ per country. Now that I have moved back to the Netherlands, my guess is that I will be participating in more Dutch PhD defenses, not as a candidate or paranymph, but on the other side of the table as a member of the “oppositie- / promotiecommissie”. The promotion committee is the committee that actually reads your thesis and judges whether you will be allowed to defend it in public. That defense consists of a 45-minute session in which you need to debate your thesis with the opposition committee. As a side note: these committees overlap, but are in fact separate. There is also a difference in duties – when you are in the “promotiecommissie”, you are expected to read and evaluate the whole thesis in much detail, which naturally takes up quite some time. The members of the “oppositiecommissie” typically divide up the work, as each of them only gets to discuss the thesis with the candidate for 5-10 minutes during that 45-minute “viva”.

Anyway, in the last two months, I have been a member of two of those committees. Yes, that does take away some of your time for research, but it is not time lost. In pre-COVID times, these defenses were big happenings (I described the whole ceremony before). They were a great way to catch up with old friends, and of course you learn a lot from the research presented by and discussed with the candidate. Interestingly, you meet a lot of new individuals as well – and with that, a lot of new research ideas and collaborations just might develop. However, PhD defenses are now “COVID-19 proof”, which is just a euphemism for “Zoom”, and a lot of the cool stuff that made PhD defenses worthwhile is now lost.

Although a disappointing state of affairs, I have decided not to let this Zoom/COVID-19 situation spoil my opportunity to learn. And to track that, I will write a post every time I am a member of a PhD committee. The topics will be quite varied, and there might be some critical notes here or there, but I will finish every time with the lesson I learned while reading the thesis.

The post COVID-19 Functional Status – an update

A binary outcome is standard practice in most clinical research, and as such, regression models like the binary logistic and the Cox proportional hazards model are among the most used in the literature. This is not the case in the stroke literature, where ordinal outcomes are now standard practice in clinical trials as well as observational studies and registries. The idea behind this is that with more levels in your outcome, it is possible to pick up more subtle yet still meaningful effects.

Based on this idea, I helped to propose and develop an ordinal scale for “post venous thrombosis” research. I described this effort briefly in a previous blog post, “Three new papers – part III”. That post also describes the “post COVID-19 functional status” scale, or the PCFS. The name is quite self-explanatory, I think, so I won’t dive into too much detail on the scale itself. I do want to describe what happened next: our proposal was published, and we got quite some traction. Over 70 colleagues contacted us saying that they were interested in using the PCFS. And they delivered.

The PCFS is now available in 14 languages, is included in at least 4 national guidelines, and is part of 1 published paper and 1 pre-print. For an up-to-date overview, you can take a look at the PCFS section on this page, or even better, the dedicated OSF website.

https://osf.io/qgpdv/

New paper: Long-Term Mortality Among ICU Patients With Stroke Compared With Other Critically Ill Patients

Stroke patients can be severely affected by the clot or bleed in their brain. With the emphasis on “can”, because the clinical picture of stroke is varied. The care for stroke patients is often organized in stroke units, specialized wards with the required knowledge and expertise. I forgot who it was – and I have not looked for any literature to back this up – but an MD colleague once told me that stroke units are the best “treatment” for stroke patients.

Why am I telling you this? Because the next paper I want to share with you is not about mildly or moderately affected patients, nor is it about the stroke unit. It is about stroke patients who end up at the intensive care unit. Only 1 in 50 to 100 ICU patients is actually suffering from a stroke, so it is clear that these patients do not make up the bulk of the patient population. All the more reason to bring some data together and get a better grip on what actually happens with these patients.

That is what we did in the paper “Long-Term Mortality Among ICU Patients With Stroke Compared With Other Critically Ill Patients”. The key element of the paper is the sheer volume of data available to study this group: 370,386 ICU patients, of whom 7,046 (1.9%) were stroke patients (of whom almost 40% had intracerebral hemorrhage, a proportion far higher than its natural occurrence).

The results are basically best summed up in the Kaplan-Meier curve found below – it shows that in the short run the risk of death is quite high (this is, after all, an ICU population), but also that there is a substantial difference between ischemic and hemorrhagic stroke. Hidden in the appendix are similar graphs in which we also plot different diseases that are more prevalent in the ICU (e.g. traumatic brain injury, sepsis, cardiac surgery) to provide MDs with a better feel for the data. Next to these KMs, we also model the data to adjust for case mix, but I will keep those results for those who are interested and actually read the paper.
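For those unfamiliar with how such a curve is constructed, the Kaplan-Meier estimator can be sketched in a few lines. The follow-up times and event flags below are invented for illustration and have nothing to do with the NICE data.

```python
# Minimal Kaplan-Meier estimator, just to illustrate how the survival
# curves in the figure are built. Data below are made up.

def kaplan_meier(times, events):
    """Return [(time, S(time))] at event times; events: 1=death, 0=censored."""
    # Sort by time; at tied times, process deaths before censorings.
    data = sorted(zip(times, events), key=lambda te: (te[0], -te[1]))
    at_risk = len(data)
    survival = 1.0
    curve = []
    for t, e in data:
        if e == 1:
            survival *= (at_risk - 1) / at_risk
            curve.append((t, survival))
        at_risk -= 1  # both deaths and censorings leave the risk set
    return curve

# 5 patients: follow-up time (days) and whether death was observed
curve = kaplan_meier([5, 10, 10, 20, 30], [1, 1, 0, 1, 0])
print(curve)
```

The resulting step function drops at each observed death, while censored patients simply stop contributing to the risk set, which is exactly why an ICU cohort shows the steep early drop described above.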

Source: https://journals.lww.com/ccmjournal/Fulltext/2020/10000/Long_Term_Mortality_Among_ICU_Patients_With_Stroke.30.aspx

Our results are perhaps not the most world-shocking, but they are helpful for the people working in the ICUs, because they get some more information about patients they don’t see that often. This type of research is only possible if somebody collects this type of data in a standardized way – and that is where NICE came in. The “National Intensive Care Evaluation” is a Dutch NGO that actually does this. Nowadays, most people know this group from the news, where they give/gave updates on the number of COVID-19 patients in Dutch ICUs. Our study was only possible because this infrastructure was already in place.

MKV took the lead in this paper, which was published in the journal Critical Care Medicine with DOI: 10.1097/CCM.0000000000004492.

PhD position for Q&I

As you might have read in my previous post, I start September 1st at the LUMC in Leiden, the Netherlands, where I will be working to improve the quality and integrity of (biomedical) science – locally, and hopefully beyond the walls of the LUMC as well.

There will also be a 4-year PhD position available, starting as early as September 1. The PhD candidate will be supervised by me and prof. dr. Frits Rosendaal. Even though the theme is fixed (i.e. Q&I of science), we are still working on the exact topic and projects. For now, all is quite open. Please note that we are open to applications from various fields, such as medicine, any of the biomedical sciences, medical humanities, meta-research, law, ethics, psychology, etc.

If you know somebody who might be interested, please share this email with that person directly. If not, please share it with people who might. Those who are interested or want more information can get in touch by sending an email with a short(!) bio-sketch to b.siegerink@gmail.com with “PhD position Q&I” in the subject line.

Three new papers – part III

As explained here and here, I am temporarily combining the announcements of published papers in one blog post to save some time. This is part III, where I focus on ordinal outcomes. Of all the recent papers, these are the most exciting to me, as they really bring something new to the fields of thrombosis and COVID-19 research.

Measuring functional limitations after venous thromboembolism: Optimization of the Post-VTE Functional Status (PVFS) Scale. I have written about our call to action, and this is the follow-up paper, with research primarily done at the LUMC. With input from patients as well as 50+ experts through a Delphi process, we were able to optimize our initial scale.

Confounding adjustment performance of ordinal analysis methods in stroke studies. In this simulation study, we show that ordinal data from observational studies can also be analyzed with a non-parametric approach. The benefit: it allows us to analyze the data without needing the proportional odds assumption and still obtain an easy-to-understand point estimate of the effect.

The Post-COVID-19 Functional Status (PCFS) Scale: a tool to measure functional status over time after COVID-19. In this letter to the European Respiratory Journal, colleagues from Leiden, Maastricht, Zurich, Mainz, Hasselt, Winterthur, and of course Berlin, and I propose to use a scale that is basically the same as the PVFS scale to monitor and study the long-term consequences of COVID-19.

Three new papers published – part II

In my last post, I explained why I am at the moment not writing one post per new paper. Instead, I group them. This time with a common denominator, namely the role of cardiac troponin in stroke:

High-Sensitivity Cardiac Troponin T and Cognitive Function in Patients With Ischemic Stroke. This paper finds its origins in the PROSCIS study, in which we studied other biomarkers as well. In fact, there is a whole lot more coming. The analyses of these longitudinal data showed a – let's say 'medium-sized' – relationship between cardiac troponin and cognitive function. There are a whole lot of caveats: a presumptive learning curve, and not a big drop in cognitive function to work with in the first place – after all, these are only mildly to moderately affected stroke patients.

Association Between High-Sensitivity Cardiac Troponin and Risk of Stroke in 96 702 Individuals: A Meta-Analysis. This paper investigates several populations – the general population, increased-risk populations, and stroke patients. The number of individuals in the title might therefore be a little deceiving – I think you should really only look at the results with those separate groups in mind. Not only might the biology be different, the methodological aspects (e.g. heterogeneity) and interpretation (relative risks combined with high absolute risks) also differ.

Response by Siegerink et al to Letter Regarding Article, "Association Between High-Sensitivity Cardiac Troponin and Risk of Stroke in 96 702 Individuals: A Meta-Analysis". We did the meta-analysis as much as possible "by the book": we pre-registered our plan and published accordingly, all to discourage ourselves (and our peer reviewers) from going on a hunt for specific results. But then there was a letter to the editor with the following central point: because the cut-offs used for cardiac troponin in the subgroup of patients with atrial fibrillation are so different, pooling these studies in one analysis does not make sense. At first glance it looks like the authors have a point: it is difficult to draw a very strict interpretation from the results that we got. This paper describes our response. Hint: upon closer inspection we do not agree, and we make a good counterargument (at least, that's what we think).

Three new papers published

Normally I publish a new post for each new paper that we publish. But with COVID-19, normal does not really work anymore. Still, I don't want to completely throw my normal workflow overboard. Therefore, a quick update on a couple of publications, all in one blog post, yet without a common denominator:

Stachulski, F., Siegerink, B. and Bösel, J. (2020) 'Dying in the Neurointensive Care Unit After Withdrawal of Life-Sustaining Therapy: Associations of Advance Directives and Health-Care Proxies With Timing and Treatment Intensity', Journal of Intensive Care Medicine. A paper about the role of advance directives and treatment intensity in the neurointensive care unit. Not a topic I normally publish about, as the severity of disease in these patients is luckily not what we usually see in stroke patients.

Impact of COPD and anemia on motor and cognitive performance in the general older population: results from the English longitudinal study of ageing. This paper makes use of the ELSA study – an open-access database – and hinges on the idea that sometimes two risk factors only lead to the progression of disease or symptoms if they act jointly. This idea of interaction is often "tested" with a simple statistical interaction model. There are many reasons why this is not the best thing to do, so we also looked at biological (or additive) interaction.

Thrombo-Inflammation in Cardiovascular Disease: An Expert Consensus Document from the Third Maastricht Consensus Conference on Thrombosis. This is a hefty paper, with just as many authors as pages, it seems. But this is not a normal paper – it is the consensus statement of the thrombosis meeting in Maastricht last year. I really liked that meeting, not only because I got to see old friends, but also because a number of ideas and papers were the product of it. This paper is, of course, one of them. After this one, some papers on the development of an ordinal outcome for functional status after venous thrombosis will follow. But they will be part of a later blog post.

New paper – Improving the trustworthiness, usefulness, and ethics of biomedical research through an innovative and comprehensive institutional initiative

I report often on this blog about new papers that I have co-authored. Every time I highlight something that is special about that particular publication. This time I want to highlight a paper that I co-authored, but also didn’t. Let me explain.

https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3000576#sec014

The paper, with the title, Improving the trustworthiness, usefulness, and ethics of biomedical research through an innovative and comprehensive institutional initiative, was published in PLOS Biology and describes the QUEST center. The author list mentions three individual QUEST researchers, but it also has this interesting “on behalf of the QUEST group” author reference. What does that actually mean?

Since I have reshuffled my research, I am officially part of the QUEST team, and therefore I am part of that group. I gave some input on the paper, like many of my colleagues, but nowhere near enough to justify full authorship. That would, after all, require the following four(!) elements, according to the ICMJE:

  • Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND
  • Drafting the work or revising it critically for important intellectual content; AND
  • Final approval of the version to be published; AND
  • Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

This is what the ICMJE says about large author groups: “Some large multi-author groups designate authorship by a group name, with or without the names of individuals. When submitting a manuscript authored by a group, the corresponding author should specify the group name if one exists, and clearly identify the group members who can take credit and responsibility for the work as authors. The byline of the article identifies who is directly responsible for the manuscript, and MEDLINE lists as authors whichever names appear on the byline. If the byline includes a group name, MEDLINE will list the names of individual group members who are authors or who are collaborators, sometimes called non-author contributors, if there is a note associated with the byline clearly stating that the individual names are elsewhere in the paper and whether those names are authors or collaborators.”

I think that this format should be used more, but that will only happen if people take collaborator status seriously as well. Other "contribution solutions" can help give some insight into what it means to be a collaborator, such as a detailed description like in movie credits, or a standardized contribution table. We have to start appreciating all forms of contributions.

On the value of data – routinely vs purposefully

I listen to a bunch of podcasts, and "The Pitch" is one of them. In that podcast, entrepreneurs from start-up companies pitch their ideas to investors. Not only is it amusing to hear some of these crazy business ideas, but the podcast also helps me understand how professional life works outside of science. One thing I learned is that it is ok, if not expected, to oversell by about a factor of 142.

Another thing that I learned is the apparent value of data. The value of data seems to be undisputed in these pitches. In fact, the product or service the company is selling or providing is often only a byproduct: collecting data about their users which subsequently can be leveraged for targeted advertisement seems to be the big play in many start-up companies.

I think this type of "value of data" is what it is: the data are worth whatever investors want to pay for them. But it got me thinking about the value of the data that we actually collect in medical research. Let us first take a look at routinely collected data, which can be very cheap to collect. But what is its value? The problem is that routinely collected data are often incomplete, rife with error, and can lead to enormous biases – both information bias and selection bias. Still, some research questions can be answered with routinely collected data – as long as you make a real effort to think about your design and analyses. So there is value in routinely collected data, as they can provide a first glance into the matter at hand.

And what is the case for purposefully collected data? The idea is that these data are much more reliable: trained staff collect them in a standardised way, resulting in datasets without many errors or holes. The downside is the "purpose", which often limits the scope and thereby the amount of data collected per included individual. This is most obvious in randomised clinical trials, in which millions of euros are often spent to answer one single question. Trials often do not have the precision to provide answers to other questions. So it seems that the data can lose their value after answering that single question.

Luckily, many efforts have been made to let purposefully collected data keep some of their value even after they have served their purpose. Standardisation efforts between trials now make it possible to pool the data and thus obtain higher precision. A good example from the field of stroke research is the VISTA collaboration, i.e. the "Virtual International Stroke Trials Archive". Here, many trials – and later some observational studies – are combined to answer research questions with a precision that would otherwise never be possible. This way we can answer questions with high-quality, purposefully collected data in numbers otherwise unthinkable.

This brings me to a recent paper we published with data from the VISTA collaboration: "Early in-hospital exposure to statins and outcome after intracerebral haemorrhage". The underlying question – whether and when statins should be initiated or continued after ICH – is clinically relevant but also limited in scope and impact, so is a trial justified? We took the easier and cheaper route and analysed the data from VISTA. We conclude that

… early in-hospital exposure to statins after acute ICH was associated with better functional outcome compared with no statin exposure early after the event. Our data suggest that this association is particularly driven by continuation of pre-existing statin use within the first two days after the event. Thus, our findings provide clinical evidence to support current expert recommendations that prevalent statin use should be continued during the early in-hospital phase.


And this shows the limitations of even well-collected data from RCTs: as long as the exposure of interest is preferentially given to a certain subgroup (i.e. confounding by indication), you can never really be certain about treatment effects. To solve this, we would need to break the bond between exposure and any other clinical characteristic, i.e. randomize. That remains the gold standard for studying the intended effects of treatments. Still, our paper provides a piece of the puzzle and gives more insight, from data that retained some of their value thanks to standardisation and pooling. But there is no dollar value we can put on medical research data – routinely or purposefully collected alike – as it all depends on the question you are trying to answer.

Our paper, with JD in the lead, was published last year in the European Stroke Journal, and can be found here as well as on my Publons profile and Mendeley profile.

The story of a paper on the relationship between cancer and stroke that is both new and not so new.

Science is not quick. In fact, it is slow most of the time. Therefore, most researchers work on multiple papers at the same time. This is not necessarily bad, as parallel activities can be leveraged to increase the quality of the different projects. But sometimes this approach leads to significant delays. Imagine a paper that is basically done, and then during the peer review process, all the lead figures in the author team get different positions. Perhaps a Ph.D. student moves institutes for a post-doc, or junior doctors finish their training and set up their own practices, or start their demanding clinical duties in an academic medical center. All these steps are understandable and good for science in general but can hurt the speediness of individual papers.

This happened, for example, with a recently published paper from the Dutch PSI study. I say "recently published" because the work started more than five years ago and has been more or less finished for most of that time. In this paper, we show that cancer prevalence is higher in stroke patients. But not all cancers are affected: it is primarily bladder cancer and head-and-neck cancers. This might be explained by the shared risk factor smoking (bladder cancer, respiratory tract) and perhaps cancer treatment (central nervous system / head-and-neck cancer). Not world-shocking results with direct clinical implications, but relevant if you want a clear understanding of the consequences of cancer.


Now don't get me wrong, I am very glad that we, in the end, got all our ducks in a row and found a good place for the paper to be published. But the story is also a good warning: it was the willpower of some in the team that made this happen. Next time such a situation comes around, we might not have the right people with the right amount of willpower to keep going with a paper like this.

How to avoid this? Is the pre-print the solution? I am not sure. On the surface it indeed seems the answer, as it at least gives others the chance to see the work we did. But I am a firm believer that some form of peer review is necessary – just 'dumping' papers on a pre-print server is really a non-solution, and I am afraid that such a culture will only diminish the drive to get things formally published once manuscripts are already in the public domain. Post-publication peer review then? I am also skeptical here, as the idea of pre-publication peer review is so deeply embedded in the current scientific enterprise that I do not see post-publication peer review playing a big role anytime soon. The lack of incentives for peer review – let alone post-publication peer review – is really not helping us make the needed changes any sooner.


Luckily, there is a thing called intrinsic motivation, and I am glad that JW and LS had enough of it to get this paper published. The paper, with the title "Cancer prevalence higher in stroke patients than in the general population: the Dutch String-of-Pearls Institute (PSI) Stroke study", is published in the European Journal of Neurology and can be found on PubMed, as well as on my Mendeley and Publons profiles.

Helping patients to navigate the fragmented healthcare landscape in Berlin: the NAVICARE stroke-atlas

The cover of the Berlin Stroke Atlas

Research on healthcare delivery can only do so much to improve the lives of patients. Identifying the weak spots is important to start off with, but it is not going to help patients one bit if they don't get information that is actually useful, let alone in time.

It is for that reason that the NAVICARE project not only focusses on doing research but also provides information for patients, as well as bringing healthcare providers together in the NAVICARE network. The premise of NAVICARE is that we somehow need to help patients navigate the fragmented healthcare landscape. We do so by using stroke and lung cancer as model diseases – prototypical diseases that help us focus our attention.

One deliverable is the stroke atlas: a document that lists different healthcare providers – and people and organizations who can help you in the broadest sense possible – once you or your loved one is affected by a stroke. This stroke atlas, in conjunction with our personal approach at the stroke service point of the CSB/BSA, will help our patients. You can find the stroke atlas here (in German, of course).

But this is only a first step. The navigator model is currently being developed further, for which NAVICARE received additional funding this summer. I will not be part of those next steps (see my post on my reshuffled research focus), but others at the CSB will.

Five years in Berlin and counting – reshuffling my research

I started to work at the CSB about five years ago. I took over an existing research group, CEHRIS, which provided services to other research groups in our center: data management, project management, and biostatisticians working on both clinical and preclinical research were all part of this team. My own research was a bit on the side, including old collaborations with Leiden and a new Ph.D. project with JR.

But then, in the early summer of 2018, things started to change. The generous funding under the IFB scheme ran out, and CSB 3.0 had to switch to a skeleton crew. Now, for most research activities this had no direct impact, as funding for many key projects did not come from the CSB 2.0 grant. However, a lot of the services that let our researchers perform at peak capability were hit. This included my team: CEHRIS, the service group ready to help other researchers, was no more.

But I stayed on, and I used the opportunity to focus my efforts on my own interests. I detached myself from projects I had inherited but was not so engaged with, and engaged myself with projects that interested me. This was, of course, a process of many months, starting at the end of 2017. I feel it is now time to share that I have a clear idea of what my new direction is. It boils down to this:

My stroke research focuses on three projects in which we collect(ed) data ourselves: PROSCIS, BSPATIAL, and BELOVE. The data collection in each of these projects is in a different phase, and more papers will be coming out of them sooner rather than later. Next to this, I will also help analyze and publish data from others – that is, after all, what epidemiologists do. My methods research remains a bit of a hodgepodge, where I still need to find focus and momentum. The problem here is that funding for this type of research has been lacking so far and will always be difficult to find – especially in Germany. But many ideas that came from the stroke projects have ripened into methodology working papers and abstracts, hopefully resulting in fully published papers quite soon. The third pillar is formed by the meta-research activities that I undertake with QUEST. Until now, these activities were a bit of a hobby, always on the side. That has changed with the funding of SPOKES.

SPOKES is a new project that aims to improve the way we do biomedical research, especially translational research. Just pointing at the problem (meta-research) or issuing new top-down policies (ivory tower alert) is not enough. There has to be time and money for early- and mid-career researchers to chip in as well. SPOKES is going to facilitate that by making both available. This starts with dedicated time and money for myself: I now work one day a week with the QUEST team. I will provide more details on SPOKES in a later post, but for now I will just say that I am looking forward to this project within the framework of the Wellcome Trust Translational Partnership.

So there you have it, the three new pillars of my research activities in a single blog post. I have decided to drop the name CEHRIS to show that the old service-focussed research group is no more. I struggled with choosing a new name, but in the end I settled for the German standard "AG-Siegerink". Part lack of imagination, part laziness, and part underlining that there are three related but distinct research lines within the group.

Up to the next 5 years!?

STEMO, our stroke ambulance, has had a bumpy ride…

STEMO in front of our clinic, source.

Phew, there has been quite some excitement when it comes to STEMO, the stroke ambulance in Berlin. The details are too specific – and way too German – for this blog, but the bottom line is this: during our evaluation of STEMO, we noticed that it was not always used as it should be. And if you do not use a tool as you should, it is not half as effective. So we keep trying to improve how STEMO is used in Berlin, even while the evaluation is ongoing.

We needed to take these changes into account, so we wrote a new plan to evaluate STEMO, which was published open access in the new BMC journal Neurological Research and Practice. The money to continue the evaluation was secured and we thought we were ready to go. But then reality set in: during budget negotiations, a lower committee of the Berlin Senate simply said "NO" to STEMO. A day later, however, the Mayor of Berlin used a "Machtwort", an informal veto, to declare that STEMO would be kept in the budget in order to finish the formal evaluation.

A true rollercoaster, which shows how directly our research has an impact on society. The numerous calls, tweets, and emails we have received in support of our now three STEMO ambulances over the last couple of weeks underline this even more (just the fact that a complete stranger started a petition with all the nuances of the case taken into account is mind-boggling!). But the science has to speak, and we still need to definitively evaluate the effectiveness of STEMO when used as it should be – something we will do over the next months with renewed energy in the whole team.

Auto-immune antibodies and their relevance for stroke patients – a new paper in Stroke

KM for CVD + mortality after stroke, stratified by serostatus for the anti-NMDA-R auto-antibody. Taken from (doi: 10.1161/STROKEAHA.119.026100)

We recently published one of our projects embedded within the PROSCIS study. This follow-up study, which includes 600+ men and women with acute stroke, forms the basis of many active projects in the team (one published, many coming up).

For this paper, PhD candidate PS measured auto-antibodies to the NMDA receptor. Previous studies suggested that having these antibodies might be a marker of, or even induce, a kind of neuroprotective effect. That is not what we found: we showed that seropositive patients, especially those with the highest titers, have a 3- to 3.5-fold increased risk of a worse outcome, as well as an almost 2-fold increased risk of CVD and death following the initial stroke.

Interesting findings, but some elements of our design do not allow us to draw very strong conclusions. One of them is the uncertainty about the patient's seropositivity status over time. Are the antibodies induced over time? Are they transient? PS has come up with a solid plan to answer some of these questions, which includes measuring the antibodies at multiple time points just after stroke. In PROSCIS we only have one blood sample, so we need to use biosamples from other studies that were designed with multiple blood draws. The team of AM was equally interested in the topic, so we teamed up. I am looking forward to following up on the questions that our own research brings up!

The effort was led by PS, and most praise should go to her. The paper is published in Stroke and can be found online via PubMed, or via my Mendeley profile (doi: 10.1161/STROKEAHA.119.026100).

Update January 2020: There was a letter to the editor regarding our paper. We wrote a response.

Now hiring!

The text below is the English version of the official and very formal German text.

The QUEST center is looking for a project manager for the SPOKES project. SPOKES is part of the Wellcome Trust translational partnership program and aims to "Create Traction and Stimulate Grass-Root Activities to Promote a Culture of Translation Focused on Value". SPOKES will be looking for grassroots activities from early- and mid-career scientists who want to sustainably increase the value of the research in their own field.

The position will be located within the QUEST Center for Transforming Biomedical Research at the Berlin Institute of Health (BIH). The goal of QUEST is to optimize biomedical research in terms of sound scientific methodology, bio-ethics and access to research.

SPOKES is a new program organized by the QUEST Team at the Berlin Institute of Health. SPOKES enables our own researchers at the Charité / BIH to improve the way we do science. Your task is to identify and support these scientists. More specifically, we expect you to:

  • Promote the program within the BIH research community (interviews, newsletters, social media, events, etc)
  • Find the right candidates for this program (recruiting and selection)
  • Organize the logistics and help prepare the content of all our meetings (workshops, progress meetings, symposia, etc)
  • Support the selected researchers in their projects where possible (design, schedule and execute)

Next to this, there is an opportunity to perform some meta-research yourself.

We are looking for somebody with

  • A degree in biomedical research (MD, MSc, PhD or equivalent)
  • Proficiency in both English and German (both minimally C1)
  • Enthusiasm for improving science – if possible demonstrated by previous courses or other activities

Although no formal training as a project manager is required, we are looking for people who have some experience in setting up and running projects of any kind that involve people with different (scientific) backgrounds.

Intrinsic Coagulation Pathway, History of Headache, and Risk of Ischemic Stroke: a story about interacting risk factors

Yup, another paper from the long-standing collaboration with Leiden. This time it was PhD candidate HvO who came up with the idea to look at the risk of stroke in relation to two risk factors that each independently increase the risk. So what is new about this paper? It is about the interaction between the two.

Migraine is a known risk factor for ischemic stroke in young women. Previous work also indicated that increased levels of intrinsic coagulation proteins are associated with an increased risk of ischemic stroke. Both roughly double the risk. So what does the combination do?

Let us take a look at the results of the analyses in the RATIO study. High antigen levels of coagulation factor XI are associated with a relative risk of 1.7. A history of severe headache doubles the risk of ischemic stroke. So what can we expect if both risks just add up? Well, we need to take into account the baseline risk that everybody has, which is an RR of 1. Then we add the excess risk, in terms of RR, of the two risk factors: for FXI this is (1.7 − 1 =) 0.7; for headache it is (2.0 − 1 =) 1.0. So we would expect an RR of (1 + 0.7 + 1.0 =) 2.7. However, we found that women who had both risk factors had a 5-fold increase in risk – more than can be expected under additivity.
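The arithmetic can be written out explicitly. The RRs are those reported above; the RERI (relative excess risk due to interaction) line is my addition as one common way to quantify the excess, not a quantity from the paper itself:

```python
# Expected joint relative risk under additivity vs. the observed joint RR.

rr_fxi = 1.7        # high factor XI antigen levels
rr_headache = 2.0   # history of severe headache

# Under additivity, the excess risks add up on top of the baseline RR of 1
expected_joint = 1 + (rr_fxi - 1) + (rr_headache - 1)
print(expected_joint)  # 2.7

# The observed joint RR was about 5, so the excess attributable to the
# combination (RERI = RR11 - RR10 - RR01 + 1) is:
observed_joint = 5.0
reri = observed_joint - rr_fxi - rr_headache + 1
print(round(reri, 1))
```

A positive RERI is exactly the "third risk entity" discussed below: risk beyond what either factor contributes on its own.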

For those keeping track, I am of course talking about additive interaction, sometimes referred to as biological interaction. This concept is quite different from statistical interaction, which – for me – is a useless thing to look at when your underlying research question is causal in nature.

What does this mean? You could interpret it as follows: some women only develop the disease because they are exposed to both risk factors. In some way, that combination becomes a third 'risk entity' that increases the risk in the population. How that works on a biochemical level cannot be answered with this epidemiological study, but some hints do exist in the literature, as we discuss in our paper.

Of course, some caveats have to be taken into account. In addition to the standard limitations of case-control studies, two things stand out. First, because we study the combination of two risk factors, the precision of our study is relatively low. But then again, what other study is going to answer this question? The absolute risk of ischemic stroke in the general population is too low to perform prospective studies, even ones enriched with loads of migraineurs. Second, the questionnaires used do not allow us to conclude that the women who reported severe headache actually had migraine. Our assumption is that many – if not most – did. And even though mixing 'normal' headaches with migraines in one group would only lead to an underestimation of the true effect of migraine on stroke risk, we still have to be careful and therefore stick to the term 'headache'.

HvO took the lead in this project, which included two short visits to Berlin supported by our Virchow scholarship. The paper has been published in Stroke and can be seen ahead of print on their website.

Migraine and venous thrombosis: Another important piece of the puzzle

Asking the right question is arguably the hardest thing to do in science, or at least in epidemiology. The question you want to answer dictates the study design, the data you collect, and the type of analyses you use. Often, especially in causal research, this means scrutinizing how you frame your exposure/outcome relationship. After all, there need to be positivity and consistency, which you can only ensure through "the right research question". Of note, the third assumption for causal inference, i.e. exchangeability, conditional or not, is something you pursue through study design and analysis. But there is another part of an epidemiological research question that makes all the difference: the domain of the study, as is so elegantly displayed by the cartoon "Today's Random Medical News" or the Twitter hashtag "#inmice".

The domain is the type of individuals to whom the answer is relevant. Often, the domain has a one-to-one relationship with the study population, but not always: sometimes the domain is broader than the study population at hand. A strong example: you could use young male infants to get a good estimate of the distribution of genotypes in a case-control study of venous thrombosis in middle-aged women. I am not saying that such a case-control study has the best design, but there is a case to be made for it, especially if we can safely assume that the genotype distribution is not sex-chromosome dependent and has not shifted across generations.

The domain of the study is not only important if you want to know to whom the results of your study are actually relevant, but also if you want to compare the results of different studies. (As a side note, keep in mind the absolute risks of the outcome that come with the different domains: they highly affect how you should interpret the relative risks.)
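That side note about absolute risks can be made concrete with a quick back-of-the-envelope calculation. The numbers below are purely illustrative assumptions, not taken from any study: the same relative risk translates into very different absolute risk increases when the baseline risk differs between domains.

```python
# Illustrative (assumed) baseline risks of the outcome in two domains.
baseline_risks = {
    "young women": 0.0001,   # 0.01% baseline risk
    "elderly":     0.0100,   # 1% baseline risk
}
relative_risk = 2.0  # same relative risk in both domains

for domain, baseline in baseline_risks.items():
    risk_exposed = baseline * relative_risk
    risk_difference = risk_exposed - baseline
    # A relative risk of 2 doubles the risk in both domains, but the
    # number of extra cases per person differs by two orders of magnitude.
    print(f"{domain}: absolute risk difference = {risk_difference:.4%}")
```

A doubling of a tiny risk remains a tiny absolute increase, while the same doubling in a high-risk domain matters far more in absolute terms.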

Sometimes, studies look like they fully contradict each other. One study says yes, the other says no. What to conclude? Who knows! But are you sure both studies actually answer the same question? Comparing the way the exposure and the outcome are measured in the two studies is one thing – an important thing at that – but it is not the only thing. You should also take potential differences and similarities between the domains of the studies into account.

This brings us to the paper by KA and myself that just got published in the latest volume of RPTH. In fact, it is a commentary written after we reviewed a paper by Folsom et al., who did a very thorough job of analyzing the relationship between migraine and venous thrombosis in the elderly. They convincingly show that there is no relationship, in apparent contrast to previous papers. So we asked ourselves: why did the study by Folsom et al. report findings in apparent contrast to previous studies?

There is, of course, the possibility of just chance. But beyond this, we should consider that the analyses by Folsom et al. look at the long-term risk in an older population. The other papers looked at a shorter term, and in a younger population in which migraine is most relevant, as migraine often goes away with increasing age. KA and I argue that both studies might just be right, even though they are in apparent contradiction. Why should it not be possible that there is a transient increase in thrombosis risk when migraines are most frequent and severe, and no long-term increase in risk in the elderly, an age at which most migraineurs report less frequent and severe attacks?

The lesson of today: do not look only at the exposure or the outcome when you want to bring the evidence of two or more studies into one coherent theory. Look at the domain as well, or you might just dismiss an important piece of the puzzle.

Results dissemination from clinical trials conducted at German university medical centers was delayed and incomplete.

My interests are broader than stroke, as you can see from my tweets as well as my publications. I am interested in how the medical scientific enterprise works – and, more importantly, how it can be improved. The latest paper looks at both.

The paper, with the relatively boring title “Results dissemination from clinical trials conducted at German university medical centres was delayed and incomplete”, is a collaboration with QUEST, carried out by DS and his team. The short form of the title might just as well have been “RCTs often don’t get published, and even when they do, it is often too late.”

Now, this is not a new finding, in the sense that older publications also showed high rates of non-publication. Newer activities in this field, such as the trial trackers for the FDAAA and the EU, confirm this idea. The cool thing about these newer trackers is that they rely on continuous data collection through bots that crawl all over the web looking for new trials. This upside has a couple of downsides, though. First, with the constant updating, these trackers do not work that well as a benchmarking tool. Second, they might miss some obscure types of publication, which would make reporting rates look lower than they actually are. Third, to keep the trackers simple, they tend to use only one definition of what counts as “timely publication”, even though neither the field nor the guidelines are conclusive on this.

So our project is something different. To get a good benchmark, we looked at whether trials conducted by/at German university medical centers were published in a timely fashion. We collected the data automatically as far as we could, but also did a complete double check by hand to ensure we didn’t skip publications (hint: we did miss some – hand searching is important, potentially because of language issues). Then we put all the data in a database and made a shiny app so that readers themselves can decide which definitions and subsets they are interested in. The bottom line: on average, only ~50% of trials get published within two years after their formal end. That is too little and too slow.
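The core metric is simple enough to sketch. Below is a minimal, hypothetical illustration of how such a benchmark could be computed – the records, field names, and cutoff are assumptions for illustration, not the actual study database or its exact definitions (which, as noted, the shiny app lets you vary).

```python
from datetime import date

# Hypothetical trial records: completion date and date of first results
# publication (None if never published). Purely illustrative data.
trials = [
    {"completion": date(2014, 3, 1),  "published": date(2015, 9, 1)},
    {"completion": date(2014, 6, 15), "published": None},
    {"completion": date(2015, 1, 10), "published": date(2018, 2, 1)},
    {"completion": date(2015, 5, 20), "published": date(2016, 11, 30)},
]

def published_within(trial, years=2):
    """True if results were published within `years` of formal completion."""
    pub = trial["published"]
    if pub is None:
        return False
    return (pub - trial["completion"]).days <= years * 365.25

# Share of trials meeting the chosen "timely publication" definition.
rate = sum(published_within(t) for t in trials) / len(trials)
print(f"{rate:.0%} published within 2 years")  # → 50% for this toy data
```

Changing the `years` parameter is exactly the kind of definitional choice the trackers fix in advance and the shiny app leaves to the reader.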


This is a cool publication because it provides a solid benchmark that truly captures the current state. Now it is up to us and the community to improve our reporting. We should track progress in the upcoming years with automated trackers, and in five years or so do the whole manual tracking once more. But that is not the only reason why it was so inspiring to work on this project; it was the diverse team of researchers from many different groups that made the work fun to do. The discussions we had on the right methodology were complex and even led to an ancillary paper by DS and his group. And the fact that this work was published in the most open way possible (open data, preprint, etc.) was also a good experience.

The paper is here on Pubmed, the project page on OSF can be found here and the preprint is on bioRxiv, and let us not forget the shiny app where you can check out the results yourself. Kudos go out to DS and SW who really took the lead in this project.

Joining the PLOS Biology editorial board

I am happy and honored that I can share that I am going to be part of the PLOS Biology editorial board. PLOS Biology has a special model for its editorial duties, with the core of the work being done by in-house staff editors – all scientists turned professional science communicators/publishers. They are supported by the academic editors – scientists who are active in their field and can help the in-house editors with insight and insider knowledge. I will join the team of academic editors.

When the staff editors asked me to join the editorial board, it quickly became clear that they invited me because I might be able to contribute to the meta-research section of the journal. After all, next to some of the peer review reports I wrote for the journal, I published a paper on missing mice, the idea behind sequential designs in preclinical research, and more recently the role of exact replication.

Next to the meta-research manuscripts that need evaluation, I am also looking forward to just working with the professional and smart editorial office. The staff editors have already teased that a couple of innovations are coming up. So, next to helping meta-research forward, I am looking forward to helping shape and evaluate these experiments in scholarly publishing.

Kuopio Stroke Symposium

Kuopio in summer

Every year there is a Neurology symposium organized in the quiet and beautiful town of Kuopio in Finland. Every three years, just like this year, the topic is stroke and for that reason, I was invited to be part of the faculty. A true honor, especially if you consider the other speakers on the program who all delivered excellent talks!

But these symposia are about much more than just the cold hard science and prestige. They are also about making new friends and reconnecting with old ones. Leave that up to the Finns, whose decision to get us all on a boat, and later in a sauna, after a long day in the lecture hall proved to be a stroke of genius.

So, it was not for nothing that many of the talks boiled down to the idea that the best science is done with friends – in a team. This is true whether you are running a complex international stroke rehabilitation RCT or investigating the lower risk of CVD morbidity and mortality amongst frequent sauna visitors. Or, in my case, studying the role of hypercoagulability in young stroke – a pdf of my slides can be found here.

My talk in Augsburg – beyond the binary

@BobSiegerink & Jakob Linseisen discussing the p-values. Thank you for your visit and great talk pic.twitter.com/iBt5ZQxaMi— Sebastian Baumeister (@baumeister_se) 3 May 2019

I am writing this as I sit in the train on my way back to Berlin. I was in Augsburg today (2x 5.5 hours in the train!), a small university city near Munich in the south of Germany. SB, fellow epidemiologist and BEMC alumnus, invited me to give a talk in their Vortragsreihe.

I had a blast – in part because this talk posed a challenge: they have a very mixed audience. I really had to think long and hard about how I could deliver a stimulating talk with a solid attention arc for everybody in the audience. Take a look at my slides to see whether I succeeded: http://tiny.cc/beyondbinary

My talk at Kuopio stroke symposium

In six weeks or so I will be traveling to Finland to speak at the Kuopio stroke symposium. They asked me to talk about my favorite subject: hypercoagulability and ischemic stroke. Although I am still working on the last details of the slides, I can already provide you with the abstract.

The categories “vessel wall damage” and “disturbance of blood flow” from Virchow’s Triad can easily be used to categorize some well-known risk factors for ischemic stroke. This is different for the category “increased clotting propensity”, also known as hypercoagulability. A meta-analysis shows that markers of hypercoagulability are more strongly associated with the risk of first ischemic stroke than with myocardial infarction. This effect seems to be most pronounced in women and in the young, as the RATIO case-control study provides a large portion of the data in this meta-analysis. Although interesting from a causal point of view, understanding the role of hypercoagulability in the etiology of first ischemic stroke in the young does not directly lead to major actionable clinical insights. For this, we need to shift our focus to stroke recurrence. However, literature on the role of hypercoagulability in stroke recurrence is limited. Some emerging treatment targets can, however, be identified. These include coagulation Factors XI and XII, for which small-molecule and antisense oligonucleotide treatments are now being developed and tested. Their relatively small role in hemostasis, but critical role in pathophysiological thrombus formation, suggests that targeting these factors could reduce stroke risk without increasing the risk of bleeds. The role of Neutrophil Extracellular Traps – negatively charged long DNA molecules that could act as a scaffold for coagulation proteins – is also not completely understood, although there are some indications that they could be targeted as a co-treatment for thrombolysis.

I am looking forward to this conference, not in the least to talk to some friends, get inspired by great speakers and science and enjoy the beautiful surroundings of Kuopio.

postscript: here are my slides that I used in Kuopio