New Paper – Smoking Does Not Alter Treatment Effect of Intravenous Thrombolysis in Mild to Moderate Acute Ischemic Stroke

In 2013, one of my then brand-new colleagues in Berlin published a very interesting paper with the title “Smoking-thrombolysis paradox: recanalization and reperfusion rates after intravenous tissue plasminogen activator in smokers with ischemic stroke”. Translated into something less technical: stroke patients who smoke seem to respond better to acute treatment. The drug they are administered seems to perform better at opening up the blood vessels in the brain so that oxygen-rich blood can flow again. It sounded weird, even to us at the time, but there is an – albeit unlikely – biological scenario that could actually explain this finding.

Irrespective of biology, this first finding was too preliminary to draw strong conclusions, as it was based on imaging outcomes only. It didn’t say anything about whether the patients who were active smokers ended up having fewer symptoms. So that’s where this project came in. Using data from the Dutch PSI study, we were able to study the effect of the treatment and whether that effect was actually different in patients who smoked.

Background: The smoking-thrombolysis paradox refers to a better outcome in smokers who suffer from acute ischemic stroke (AIS) following treatment with thrombolysis. Source

The short answer is no – we didn’t find evidence that patients who were active smokers actually had more benefit from thrombolysis. If there were such an effect, the RR in the +/+ category should have been much more extreme than the effect in the +/- category. In fact, all that we saw indicated that there was no real difference. But there are some serious limitations to our study, the main one being that the patients included in this dataset might not have been the best subset of stroke patients to study this phenomenon. So, even though we didn’t see evidence of the phenomenon, we can’t rule it out and conclude, as so often, “that future research is needed”. Before you ask: yes, we indeed did that future research ourselves. That paper is currently under review, but I can already tell you that the conclusion is not going to change a lot (hint: nothing changes).
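To make that comparison concrete, here is a minimal sketch – with made-up counts, not the PSI data – of how such effect modification would show up, assuming the “+/-” notation denotes smoking status / thrombolysis status. If smoking modified the treatment effect, the risk ratio for a good outcome would be clearly larger in smokers than in non-smokers:

```python
# Hypothetical illustration (made-up counts, NOT the PSI data).

def risk_ratio(events_treated, n_treated, events_untreated, n_untreated):
    """Risk ratio for a good outcome: thrombolysis vs no thrombolysis."""
    return (events_treated / n_treated) / (events_untreated / n_untreated)

# Smokers: treated (+/+) vs untreated (+/-)
rr_smokers = risk_ratio(60, 100, 40, 100)       # 1.50
# Non-smokers: treated (-/+) vs untreated (-/-)
rr_nonsmokers = risk_ratio(55, 100, 45, 100)    # 1.22

# A ratio of risk ratios close to 1 means no effect modification by smoking.
print(rr_smokers / rr_nonsmokers)
```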

Our paper, with AK in the lead, was published in Frontiers in Neurology last September (sorry for the delay), and can be found on my Publons profile and Mendeley profile.

New paper – Endothelial and leukocyte-derived microvesicles and cardiovascular risk after stroke

Kaplan-Meier curves for quartiles of endothelial risk factors, taken from our paper.

Microvesicles have been a topic in cardiovascular research for some years now, mainly in cardiology. These vesicles originate from various cell types, and their function remains largely unclear – are they active parts of the body’s system, or are they mere bystanders?

Irrespective of that, if these MV are related to cardiovascular risk in cardiology patients, it is interesting to know to what extent they are related to cardiovascular risk in stroke patients. If so, that would be an indication that they are actually part of the causal mechanism, or perhaps a good biomarker that might help stratify patients into meaningful subgroups.

So what did we do? We teamed up with cardiologists specialized in MV to measure various subtypes in 600+ patients with a first-ever ischemic stroke. We then looked at the risk of recurrent events and all-cause mortality over a span of three years. Our findings tell a clear story – the higher the levels of MV, the higher the risk. The interpretation, however, remains as unsure as when we started. We still do not know whether these MV are a cause or a bystander. More research, including some hardcore basic research, will be needed to further elucidate this distinction. In any case, the HRs are not too impressive in this mild to moderate stroke cohort, so don’t expect MV to be added to any risk screening panel anytime soon, especially as the measurement is quite laborious.
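For readers who want to see the shape of such an analysis, here is a minimal sketch of hazard ratios per MV quartile from a Cox proportional hazards model – on synthetic data, not the study data, using the lifelines library:

```python
# Minimal sketch (synthetic data, NOT the study data): HRs per MV quartile.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 600
mv = rng.lognormal(mean=0.0, sigma=1.0, size=n)            # hypothetical MV levels
quartile = pd.qcut(mv, 4, labels=False)                    # 0 = lowest quartile
time = rng.exponential(scale=36 / (1 + 0.2 * quartile))    # months of follow-up
event = rng.binomial(1, 0.3, size=n)                       # recurrent event / death

# Dummy-code quartiles so each HR is relative to the lowest quartile.
dummies = pd.get_dummies(pd.Series(quartile), prefix="q", drop_first=True).astype(float)
data = pd.concat([pd.DataFrame({"time": time, "event": event}), dummies], axis=1)

cph = CoxPHFitter()
cph.fit(data, duration_col="time", event_col="event")
cph.print_summary()   # exp(coef) column = HR for q_1..q_3 vs the lowest quartile
```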

Our paper, with SH in the lead, is published in Neurology, and can be found here as well as on my Publons profile and Mendeley profile.

PhD defense – The early identification of patients with an unfavorable prognosis

“things change”

When I teach undergraduates what kind of different research questions one might want to answer, I sometimes use the mental image of a patient in the doctor’s office. The questions that are asked can be roughly categorized into “What is wrong with me?”, “How did I get it?”, “What will happen from now on?”, and “What can we do about it?”. Students with a quick mind recognize the concepts of diagnosis, etiology, prognosis, and treatment hidden in these questions. I usually treat these as quite separate questions, each with its own preferable study designs and statistical techniques.

But that changed after I prepared for the PhD defense of SB. As a member of the “opposition”, I was tasked with examining the PhD candidate on her knowledge and the work presented in her thesis. Titled “The early identification of patients with an unfavorable prognosis”, this thesis focussed on the theme of whether treatment response could be incorporated in prediction models in order to improve the prediction of outcome. I think this is an interesting concept that is potentially heavily underutilized and certainly heavily understudied.

SB started out by showing that in psychiatry the majority of care is consumed by a minority of patients, thereby conceptually proving the need for identifying patients with an unfavorable outcome. The next chapters test whether adding information on treatment response improves prediction for three different diseases: depression, asthma and high blood pressure.

Being the last of the opposition committee to examine the candidate, I was able to ask more about the methodological details (presence of collinearity, the calibration/validation of the final models, the use of complete case analyses, etc.). These introductory questions led us to the broader question of her approach to including information about treatment response. SB consistently used the delta of two absolute measurements as an indicator of treatment response, and I asked whether that could be replaced or even complemented by using the delta in the variation of certain measurements. The bottom line is, of course, that we don’t know, but that is also not the objective of a Dutch PhD ceremony. The objective is to start a conversation during which the candidate can show that he or she masters the material and is ready to become an independent researcher. And that is what SB did, congratulations!

So what did I learn from all this? Perhaps using the four categories of research questions is too restrictive, and sometimes, by combining the ideas and concepts from questions on treatment and prediction, we can better distribute the care we can provide for all our patients.

New paper – Coagulation factor VIII, white matter hyperintensities and cognitive function: Results from the Cardiovascular Health Study.

Overview of the “long” timeline of the CHS data collection, taken from the paper discussed.

The newest addition to our publications is this paper on the role of high levels of coagulation factor VIII in cognitive function, as well as white matter hyperintensities in the brain. The theory behind it: since hypercoagulability is related to young, overt stroke, would hypercoagulability perhaps also be linked to non-overt cerebrovascular mishap? Hypercoagulability here is measured as high levels of FVIII, one of the most potent risk factors for thrombosis, and the cerebrovascular mishap is the presence and intensity of white matter lesions. This paper has a long history in three very different, but meaningful ways.

The first “long” aspect is that you need a very long, and complex, follow-up to study this. Where clinical stroke is a sudden-onset type of disease, what we are studying in this project has a far more gradual character. So not months, but years. Decades even! And not a lot of studies have the prerequisites to study this question: first, there must be the possibility to measure FVIII, which means citrate. But most long-term follow-up studies do not have citrate. Second, there must be MRI data available throughout the study. However, most large longitudinal studies only have MRI at baseline as an exposure measure. Third, the studies must go on for a very long time, which comes with the complication that participation in these types of studies can dwindle over the years. So, all in all, there was really only one cohort that had all the data already collected and ready to analyze: the Cardiovascular Health Study. So we requested access to the data with a focus on FVIII, cognitive functioning and multiple measures of white matter lesions in order to assess worsening of lesions, and were ready to analyze!

Interestingly, this brings us to the second “long” aspect of this paper – getting access to and using the CHS data. The idea for this paper came roughly in 2013, when I got in touch with some CHS researchers for the first time. That is a long time ago from today, end of 2020. So what took us so long? Well, first there was the move to Berlin. I decided to take this project with me, but immigrating to a different country and starting a new job at the same time tends to put some delays on ongoing projects. A second reason is that the CHS data is open for non-CHS researchers to use, under one very strict condition – CHS researchers don’t just hand over data, they help you set up and execute your plan. This approach is not completely “open science”, but it might be better. After all, it does ensure that the knowledge and experience that comes from actually collecting the data is taken into account when you prepare for and actually analyze the data. But that process takes time, especially when working with collaborators several time zones away.

A third and final “long” aspect was the time between first submission and final publication. Our paper got rejected by 4 different journals before we got accepted at PLOS ONE. This is definitely not a record, but the delay isn’t pretty. The reasons for rejection differed between these journals, but the fact that this was a “null” finding in a general population cohort certainly did not help.

The paper, with the title “Coagulation factor VIII, white matter hyperintensities and cognitive function: Results from the Cardiovascular Health Study”, is published in PLOS ONE. You can also find it at the usual places. JLR took the lead on the project after I moved to Berlin, masterfully navigating and combining all the comments and input from this group of co-authors. Well done to all.

PhD defense – Thrombosis prophylaxis after knee arthroscopy or during lower leg cast immobilization

Thesis cover. Source

A couple of weeks ago I was part of the “promotiecommissie” of RvA at the University of Leiden. This is the committee that evaluates the thesis of a PhD candidate, and which subsequently is also part of the opposition during the in-person defense, sometimes known as the “viva”.

I was quite impressed by the work described in the thesis. Not just because the individual projects described in each chapter were solid, but also because one clinical problem was approached from several angles, with each research question answered with a different methodology. And for every chapter it was clear how it contributed to the central theme: is the current practice of venous thrombosis prophylaxis in certain orthopedic patients justified?

The candidate started out with a description of risk factors for venous thrombosis related to either lower leg cast immobilization or arthroscopy of the knee, thus establishing that there is indeed an increased risk and that certain risk factors contribute to this risk. A survey amongst colleagues subsequently showed that the prophylactic treatment given to these patients differs quite substantially. This relatively simple element of the thesis is crucial, as it shows equipoise for the treatment – even though there is some evidence from trials, the evidence is weak and methodologically unsound (they mostly use ultrasound-diagnosed venous thrombosis, not a clinical diagnosis), resulting in highly varied practices.

So the stage is set for a trial. In fact, two trials, one for each of the two patient groups, are presented in Chapters 5 and 6 of the thesis. Impressive stuff, which found its way into the NEJM, and which showed that treatment with LMWH is in fact not better than placebo. Given this result, this would normally be the end of it, but “compared to placebo” should raise some eyebrows. Is that the right comparison group? In this case, you can argue that it is, but even if you don’t think so, just go to chapter 7, where in another group at high risk of venous thrombosis a comparison with compression stockings is made – again, no evidence that LMWH is better. The candidate presented this as an IV analysis. Interesting thought, but I disagreed – the rationale behind the comparison between centers has some IV elements in it, but there is no actual IV analysis being done. Potato, potahto perhaps, but hey, it is a PhD defense! The last two chapters were the first step towards a prediction model for venous thrombosis in orthopedic patients (prediction in a case-control design, no validation). The idea behind this is that if you can identify the high-risk group among all patients, treatment with LMWH might still be useful.

But for now, the evidence is clear – no LMWH in these orthopedic patients for the prevention of venous thrombosis. And that brings me to the lesson that I took from this thesis – it is possible, and necessary, to evaluate medical practices already in place. It is the whole premise behind the book “Ending Medical Reversal”. I got that book as a gift from a colleague in Berlin, but I never got around to starting it. After reading this thesis, I grabbed the book and read it cover to cover in just two days. An easy read, with interesting ideas on medical reversals, their causes, and how to prevent them from happening in the future. Some of my questions during the defense were even based on the book – for example, whether a cluster-randomized trial design should not be the gold standard in medical reversal research.

But the bottom line of the book+thesis combo is clear: there are a lot of medical practices used on a daily basis that should be re-evaluated. Except for one: “LMWH for the prevention of venous thrombosis in all patients with below the knee immobilization or arthroscopy” can be taken off the list.

The full text of the thesis can be found here.

PhD defenses – finding myself on the other side of the table

The traditions and ceremonies surrounding PhD theses and their defenses differ per country. Now that I have moved back to the Netherlands, my guess is that I will be participating in more Dutch PhD defenses, not as a candidate or paranymph, but on the other side of the table as a member of the “oppositie-/promotiecommissie”. The promotion committee is the committee that actually reads and judges your thesis and decides whether you will be allowed to defend it in public. That defense consists of a 45-minute session in which you need to debate your thesis with the opposition committee. As a side note, these committees overlap, but are in fact separate. There is also a difference in duties – when you are in the “promotiecommissie” you are expected to read and evaluate the whole thesis in much detail, which naturally takes up quite some time. The members of the “oppositiecommissie” typically divide up the work, as you only get to discuss the thesis with the candidate for 5-10 minutes during that 45-minute “viva”.

Anyway, in the last two months, I have been a member of two of those committees. Yes, that does take away some of your time for research, but it is not time lost. In pre-COVID times, these defenses were big happenings (I described the whole ceremony before). They were a great way to catch up with old friends, and of course you learn a lot from the research presented by and discussed with the candidate. Interestingly, you meet a lot of new people as well – and with that, a lot of new research ideas and collaborations just might develop. However, PhD defenses are now “COVID-19 proof”, which is just a euphemism for “Zoom”, and a lot of the cool stuff that made PhD defenses worthwhile is now lost.

Although this is a disappointing state of affairs, I have decided not to let Zoom/COVID-19 spoil my opportunity to learn. And to track that, I will write a post every time I am a member of a PhD committee. The topics will be quite varied, and there might be some critical notes here or there, but I will finish every time with the lesson I learned while reading the thesis.

The Post-COVID-19 Functional Status scale – an update

A binary outcome is standard practice in most clinical research and, as such, regression models like binary logistic regression and the Cox proportional hazards model are among the most used in the literature. This is not the case in the stroke literature, where ordinal outcomes are now standard practice in clinical trials as well as in observational studies and registries. The idea behind this is that with more levels in your outcome, it is possible to pick up more subtle yet still meaningful effects.
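As a concrete illustration of how such an ordinal outcome is typically modeled, here is a minimal sketch of a proportional-odds (ordinal logistic) model – on synthetic data, not data from any of our studies – using statsmodels:

```python
# Minimal sketch (synthetic data): a proportional-odds model for an
# ordinal, mRS-like outcome with six ordered levels.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 500
treated = rng.binomial(1, 0.5, size=n)
# Hypothetical: treatment shifts the whole outcome distribution downwards.
latent = rng.logistic(size=n) - 0.4 * treated
outcome = pd.cut(latent, bins=[-np.inf, -2, -1, 0, 1, 2, np.inf],
                 labels=False)                 # ordered levels 0 (best) to 5

mod = OrderedModel(outcome, treated.reshape(-1, 1), distr="logit")
res = mod.fit(method="bfgs", disp=False)
print(res.params[0])   # log of the common odds ratio across all cut-points
```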

Based on this idea, I helped to propose and develop an ordinal scale for “post venous thrombosis” research. I described this effort briefly in a previous blog post, “Three new papers – part III”. That post also describes the “Post-COVID-19 Functional Status” scale, or the PCFS. The name is quite self-explanatory I think, so I won’t dive into too much detail on the scale itself. I do want to describe what happened next: our proposal was published, and we got quite some traction. Over 70 colleagues contacted us saying that they were interested in using the PCFS. They delivered.

The PCFS is now available in 14 languages, is included in at least 4 national guidelines, and is part of 1 published paper and 1 pre-print. For an up-to-date overview, you can take a look at the PCFS section on this page, or even better, the dedicated OSF website.

https://osf.io/qgpdv/

New paper: Long-Term Mortality Among ICU Patients With Stroke Compared With Other Critically Ill Patients

Stroke patients can be severely affected by the clot or bleed in their brain. With the emphasis on “can”, because the clinical picture of stroke is varied. The care for stroke patients is often organized in stroke units, specialized wards with the required knowledge and expertise. I forgot who it was – and I have not looked for any literature to back this up – but an MD colleague once told me that stroke units are the best “treatment” for stroke patients.

Why am I telling you this? Because the next paper I want to share with you is not about mildly or moderately affected patients, nor is it about the stroke unit. It is about stroke patients who end up in the intensive care unit. Only 1 in 50 to 100 ICU patients is actually suffering from a stroke, so it is clear that these patients do not make up the bulk of the patient population. All the more reason to bring some data together and get a better grip on what actually happens with these patients.

That is what we did in the paper “Long-Term Mortality Among ICU Patients With Stroke Compared With Other Critically Ill Patients”. The key element of the paper is the sheer volume of data that was available to study this group: 370,386 ICU patients, of whom 7,046 (1.9%) were stroke patients (almost 40% of these with intracerebral hemorrhage, a proportion far higher than its natural occurrence).

The results are basically best summed up in the Kaplan-Meier curves found below – they show that in the short run the risk of death is quite high (this is, after all, an ICU population), but also that there is a substantial difference between ischemic and hemorrhagic stroke. Hidden in the appendix are similar graphs where we also plot different diseases that are more prevalent in the ICU (e.g. traumatic brain injury, sepsis, cardiac surgery) to provide MDs with a better feel for the data. Next to these KMs we also model the data to adjust for case-mix, but I will keep those results for those who are interested and actually read the paper.

Source: https://journals.lww.com/ccmjournal/Fulltext/2020/10000/Long_Term_Mortality_Among_ICU_Patients_With_Stroke.30.aspx
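For the curious, this is roughly how such stratified Kaplan-Meier curves are produced – a minimal sketch on synthetic data (not the registry data), using the lifelines library:

```python
# Minimal sketch (synthetic data): Kaplan-Meier curves for ischemic vs
# hemorrhagic stroke in an ICU cohort, with administrative censoring.
import numpy as np
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(7)
ax = plt.subplot(111)
for label, scale in [("ischemic stroke", 40.0), ("intracerebral hemorrhage", 20.0)]:
    t = rng.exponential(scale=scale, size=300)   # hypothetical months until death
    observed = t < 36                            # censor everyone alive at 3 years
    t = np.minimum(t, 36)
    KaplanMeierFitter(label=label).fit(t, event_observed=observed).plot_survival_function(ax=ax)
ax.set_xlabel("months since ICU admission")
ax.set_ylabel("survival probability")
plt.show()
```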

Our results are perhaps not the most earth-shattering, but they are helpful for the people working in ICUs, because they get some more information about patients that they don’t see that often. This type of research is only possible if somebody collects this type of data in a standardized way – and that is where NICE comes in. The “National Intensive Care Evaluation” is a Dutch NGO that actually does this. Nowadays, most people know this group from the news when they give/gave updates on the number of COVID-19 patients in ICUs in the Netherlands. This is only possible because this infrastructure was already in place.

MKV took the lead in this paper, which was published in the journal Critical Care Medicine with DOI: 10.1097/CCM.0000000000004492.

PhD position for Q&I

As you might have read in my previous post, I start September 1st at the LUMC in Leiden, the Netherlands, where I will be working to improve the quality and integrity of (biomedical) science – locally, and hopefully beyond the walls of the LUMC as well.

There will also be a 4-year PhD position available, starting as early as September 1. The PhD candidate will be supervised by me and prof. dr. Frits Rosendaal. Even though the theme is fixed (i.e. Q&I of science), we are still working on the exact topic and projects. For now, all is quite open. Please note that we are open to applications from various fields, such as medicine, any of the biomedical sciences, medical humanities, meta-research, law, ethics, psychology, etc.

If you know somebody who might be interested, please share this message with that person directly. If not, please share it with people who might know someone. Those who are interested or want more information can get in touch by sending an email with a short(!) bio-sketch to b.siegerink@gmail.com with “PhD position Q&I” in the subject line.

Leaving Berlin, returning to Leiden

Minerva, patron of Leiden University, photographed by Erwin Olaf (Lakenhal collection)

It is time.

After almost six years in Berlin, it is time to move on. And when I say move on, I mean move back to Leiden to work at my old alma mater, Leiden University / the Leiden University Medical Center. The move is mainly driven by personal reasons – it will be great for my family to be closer to our friends and extended families.

But there is also an exciting job waiting for me, focussed on the theme of the Quality and Integrity of science. For 50% of my time, I will be appointed as an assistant professor at the department of clinical epidemiology and set up a Q&I (meta)research line. The other 50% of my time, I will be working at the “directorate of research”, the team that supports LUMC researchers in general and the dean specifically. I will be responsible for the new program “Quality and Integrity of science”. The idea behind that program is that I will come up with, execute and evaluate several interventions – big and small, some visible, some not – to improve how science is executed at the LUMC.

I cannot provide any details, as they are simply not yet known. First, it is time to wrap up my different projects here, all whilst working under corona pandemic circumstances. That makes these last weeks bittersweet – looking forward to a new chapter, whilst realizing what a great time I had in Berlin. I learned so much, was able to do so many things, and worked with so many interesting and smart people.

I will miss Berlin dearly.

Three new papers – part III

As explained here and here, I am temporarily combining the announcements of published papers into one blog post to save some time. This is part III, where I focus on ordinal outcomes. Of all the recent papers, these are the most exciting to me, as they really bring something new to the field of thrombosis and COVID-19 research.

Measuring functional limitations after venous thromboembolism: Optimization of the Post-VTE Functional Status (PVFS) Scale. I have written about our call to action, and this is the follow-up paper, with research primarily done in the LUMC. With input from patients as well as 50+ experts through a Delphi process, we were able to optimize our initial scale.

Confounding adjustment performance of ordinal analysis methods in stroke studies. In this simulation study, we show that ordinal data from observational studies can also be analyzed with a non-parametric approach. The benefits: it allows us to analyze the data without needing the proportional odds assumption and still get an easy-to-understand point estimate of the effect. A minimal sketch of this kind of non-parametric approach follows below.
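To give a flavor of what such a non-parametric analysis can look like – an illustration on synthetic data, not necessarily the exact estimand of the paper – consider the probability that a random patient from one group scores higher than a random patient from the other, derived from the Mann-Whitney U statistic:

```python
# Minimal sketch (synthetic data): a non-parametric comparison of ordinal
# outcomes that does not rely on the proportional odds assumption.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
group_a = rng.integers(0, 7, size=200)   # hypothetical mRS-like scores, 0-6
group_b = rng.integers(0, 7, size=200)

# In scipy >= 1.7 the returned statistic is U for the first sample.
u, p = mannwhitneyu(group_a, group_b, alternative="two-sided")

# Point estimate: P(A > B) + 0.5 * P(A = B); 0.5 means no difference.
prob_superiority = u / (len(group_a) * len(group_b))
print(f"probability of superiority = {prob_superiority:.2f}, p = {p:.2f}")
```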

The Post-COVID-19 Functional Status (PCFS) Scale: a tool to measure functional status over time after COVID-19. In this letter to the European Respiratory Journal, together with colleagues from Leiden, Maastricht, Zurich, Mainz, Hasselt, Winterthur, and of course Berlin, we propose to use a scale that is basically the same as the PVFS to monitor and study the long-term consequences of COVID-19.

Three new papers published – part II

In my last post, I explained why I am currently not writing one post per new paper. Instead, I group them. This time with a common denominator, namely the role of cardiac troponin in stroke:

High-Sensitivity Cardiac Troponin T and Cognitive Function in Patients With Ischemic Stroke. This paper finds its origins in the PROSCIS study, in which we studied other biomarkers as well. In fact, there is a whole lot more coming. The analyses of these longitudinal data showed a – let’s say ‘medium-sized’ – relationship between cardiac troponin and cognitive function. A whole lot of caveats apply – a presumptive learning curve, and not a big drop in cognitive function to work with anyway. After all, these are only mild to moderately affected stroke patients.

Association Between High-Sensitivity Cardiac Troponin and Risk of Stroke in 96 702 Individuals: A Meta-Analysis. This paper investigates several patient populations – the general population, increased-risk populations, and stroke patients. The number of individuals in the title might therefore be a little bit deceiving – I think you should really only look at the results with those separate groups in mind. Not only do I think that the biology might be different; the methodological aspects (e.g. heterogeneity) and interpretation (relative risks with high absolute risks) are also different.

Response by Siegerink et al to Letter Regarding Article, “Association Between High-Sensitivity Cardiac Troponin and Risk of Stroke in 96 702 Individuals: A Meta-Analysis”. We did the meta-analysis as much as possible “by the book”. We pre-registered our plan and published accordingly, all to discourage ourselves (and our peer reviewers) from hunting for specific results. But then there was a letter to the editor with the following central point: in the subgroup of patients with atrial fibrillation, the cut-offs used for cardiac troponin are so different that pooling these studies together in one analysis does not make sense. At first glance, it looks like the authors have a point: it is difficult to get a very strict interpretation from the results that we got. This paper describes our response. Hint: upon closer inspection, we do not agree and make a good counterargument (at least, that’s what we think).

Three new papers published

Normally I publish a new post for each new paper that we publish. But with COVID-19, normal does not really work anymore. Still, I don’t want to completely throw my normal workflow overboard. Therefore, a quick update on a couple of publications, all in one blog post, yet without a common denominator:

Stachulski, F., Siegerink, B. and Bösel, J. (2020) ‘Dying in the Neurointensive Care Unit After Withdrawal of Life-Sustaining Therapy: Associations of Advance Directives and Health-Care Proxies With Timing and Treatment Intensity’, Journal of Intensive Care Medicine. A paper about the role of advance directives and treatment in the neurointensive care unit. Not a topic I normally publish about, as the severity of disease in these patients is luckily not what we usually see in stroke patients.

Impact of COPD and anemia on motor and cognitive performance in the general older population: results from the English longitudinal study of ageing. This paper makes use of the ELSA study – an open-access database – and hinges on the idea that sometimes two risk factors only lead to the progression of disease/symptoms if they work jointly. This idea of interaction is often “tested” with a simple statistical interaction model. There are many reasons why this is not the best thing to do, so we also looked at biological (or additive) interaction, as sketched below.
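For those unfamiliar with the distinction, here is a minimal sketch – on synthetic data with hypothetical variable names, not the ELSA data – of a multiplicative interaction term in a logistic model next to the relative excess risk due to interaction (RERI) on the additive scale:

```python
# Minimal sketch (synthetic data): multiplicative vs additive interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 2000
copd = rng.binomial(1, 0.2, size=n)
anemia = rng.binomial(1, 0.2, size=n)
# Hypothetical outcome with extra risk when both exposures are present.
p = 0.05 + 0.05 * copd + 0.05 * anemia + 0.10 * copd * anemia
y = rng.binomial(1, p)
df = pd.DataFrame({"y": y, "copd": copd, "anemia": anemia})

fit = smf.logit("y ~ copd * anemia", data=df).fit(disp=False)
or_ = np.exp(fit.params)   # odds ratios

# RERI, approximating risk ratios by odds ratios (assumes a rare outcome):
or_both = or_["copd"] * or_["anemia"] * or_["copd:anemia"]
reri = or_both - or_["copd"] - or_["anemia"] + 1
print(f"RERI = {reri:.2f}  (> 0 suggests interaction on the additive scale)")
```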

Thrombo-Inflammation in Cardiovascular Disease: An Expert Consensus Document from the Third Maastricht Consensus Conference on Thrombosis. This is a hefty paper, with just as many authors as pages, it seems. But this is not a normal paper – it is the consensus statement of the thrombosis meeting in Maastricht last year. I really liked that meeting, not only because I got to see old friends, but also because a number of ideas and papers were the product of this meeting. This paper is, of course, one of them. After this one, some papers on the development of an ordinal outcome for functional status after venous thrombosis will follow – but they will be part of a later blog post.

New paper – Improving the trustworthiness, usefulness, and ethics of biomedical research through an innovative and comprehensive institutional initiative

I report often on this blog about new papers that I have co-authored. Every time I highlight something that is special about that particular publication. This time I want to highlight a paper that I co-authored, but also didn’t. Let me explain.

https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3000576#sec014

The paper, with the title “Improving the trustworthiness, usefulness, and ethics of biomedical research through an innovative and comprehensive institutional initiative”, was published in PLOS Biology and describes the QUEST center. The author list mentions three individual QUEST researchers, but it also has this interesting “on behalf of the QUEST group” author reference. What does that actually mean?

Since I reshuffled my research, I am officially part of the QUEST team, and therefore I am part of that group. I gave some input on the paper, like many of my colleagues, but nowhere near enough to justify full authorship. That would, after all, require the following 4(!) elements, according to the ICMJE:

  • Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND
  • Drafting the work or revising it critically for important intellectual content; AND
  • Final approval of the version to be published; AND
  • Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

This is what the ICMJE says about large author groups: “Some large multi-author groups designate authorship by a group name, with or without the names of individuals. When submitting a manuscript authored by a group, the corresponding author should specify the group name if one exists, and clearly identify the group members who can take credit and responsibility for the work as authors. The byline of the article identifies who is directly responsible for the manuscript, and MEDLINE lists as authors whichever names appear on the byline. If the byline includes a group name, MEDLINE will list the names of individual group members who are authors or who are collaborators, sometimes called non-author contributors, if there is a note associated with the byline clearly stating that the individual names are elsewhere in the paper and whether those names are authors or collaborators.”

I think that this format should be used more, but that will only happen if people take the collaborator status seriously as well. Other “contribution solutions” can help to give some insight into what it means to be a collaborator, such as a detailed description like in movie credits or a standardized contribution table. We have to start appreciating all forms of contributions.

On the value of data – routinely vs purposefully

I listen to a bunch of podcasts, and “The Pitch” is one of them. In that podcast, entrepreneurs of start-up companies pitch their ideas to investors. Not only is it amusing to hear some of these crazy business ideas, but the podcast also helps me understand how professional life works outside of science. One thing I learned is that it is OK, if not expected, to oversell by about a factor of 142.

Another thing that I learned is the apparent value of data. The value of data seems to be undisputed in these pitches. In fact, the product or service the company is selling or providing is often only a byproduct: collecting data about their users, which can subsequently be leveraged for targeted advertising, seems to be the big play in many start-up companies.

I think this type of “value of data” is what it is: whatever the investors want to pay for that type of data is what it is worth. But it got me thinking about the value of the data that we actually collect in medical research. Let us first take a look at routinely collected data, which can be very cheap to collect. But what is the value of that data? The problem is that routinely collected data is often incomplete, rife with error, and can lead to enormous biases – information bias as well as selection bias. Still, some research questions can be answered with routinely collected data – as long as you make some real effort to think about your design and analyses. So, there is value in routinely collected data, as it can provide a first glance into the matter at hand.

And what is the case for purposefully collected data? The idea is that such data is much more reliable: trained staff collect the data in a standardised way, resulting in datasets without many errors or holes. The downside is the “purpose”, which often limits the scope and thereby the amount of data collected per included individual. This is most obvious in randomised clinical trials, in which often millions of euros are spent to answer one single question. Trials often do not have the precision to provide answers to other questions. So it seems that the data can lose its value after answering that single question.

Luckily, many efforts have been made to let purposefully collected data keep some of its value even after it has served its purpose. Standardisation efforts between trials now make it possible to pool the data and thus obtain higher precision. A good example from the field of stroke research is the VISTA collaboration, i.e. the Virtual International Stroke Trials Archive. Here, many trials – and later some observational studies – are combined to answer research questions with a precision that would otherwise never be possible. This way we can answer questions with high-quality, purposefully collected data in numbers otherwise unthinkable.

This brings me to a recent paper we published with data from the VISTA collaboration: “Early in-hospital exposure to statins and outcome after intracerebral haemorrhage”. The underlying question – whether and when statins should be initiated/continued after ICH – is clinically relevant but also limited in scope and impact, so is it justified to start a trial? We took the easier and cheaper route and analysed the data from VISTA. We conclude that

… early in-hospital exposure to statins after acute ICH was associated with better functional outcome compared with no statin exposure early after the event. Our data suggest that this association is particularly driven by continuation of pre-existing statin use within the first two days after the event. Thus, our findings provide clinical evidence to support current expert recommendations that prevalent statin use should be continued during the early in-hospital phase.

link

And this shows the limitations of even well-collected data from RCTs: as long as the exposure of interest is preferentially given to a certain subgroup (i.e. confounding by indication), you can never really be certain about the treatment effects. To solve this, we would need to break the bond between the exposure and any other clinical characteristic, i.e. randomize. That remains the gold standard for studying intended effects of treatments. Still, our paper provided a piece of the puzzle and gave more insight, from data that retained some of its value due to standardisation and pooling. But there is no dollar value that we can put on medical research data – routinely or purposefully collected alike – as it all depends on the question you are trying to answer.

Our paper, with JD in the lead, was published last year in the European Stroke Journal, and can be found here as well as on my Publons profile and Mendeley profile.

The story of a paper on the relationship between cancer and stroke that is both new and not so new.

Science is not quick. In fact, it is slow most of the time. Therefore, most researchers work on multiple papers at the same time. This is not necessarily bad, as parallel activities can be leveraged to increase the quality of the different projects. But sometimes this approach leads to significant delays. Imagine a paper that is basically done, and then during the peer review process, all the lead figures in the author team get different positions. Perhaps a Ph.D. student moves institutes for a post-doc, or junior doctors finish their training and set up their own practices, or start their demanding clinical duties in an academic medical center. All these steps are understandable and good for science in general but can hurt the speediness of individual papers.

This happened, for example, with a recently published paper from the Dutch PSI study. I say recently published because the work started more than 5 years ago and has been more or less finished for the majority of that time. In this paper, we show that cancer prevalence is higher in stroke patients. But not all cancers are affected: it is primarily bladder cancer and head and neck cancers that show this effect. This might be explained by the shared risk factor smoking (bladder cancer, respiratory tract) and perhaps cancer treatment (central nervous system / head and neck cancer). Not earth-shattering results with direct clinical implications, but relevant if you want a clear understanding of the consequences of cancer.


Now don’t get me wrong, I am very glad we got all our ducks in a row in the end and found a good place for the paper to be published. But the story is also a good warning: it took the willpower of some in the team to make this happen. Next time such a situation comes around, we might not have the right people with the right amount of willpower to keep going with a paper like this.

How to avoid this? Is “pre-print” the solution? I am not sure. On the surface, it indeed seems the answer, as it would at least give others the chance to see the work we did. But I am a firm believer that some form of peer review is necessary – just ‘dumping’ papers on a pre-print server is really a non-solution, and I am afraid that in such a culture the drive to get things formally published will only diminish once manuscripts are already in the public domain. Post-publication peer review then? I am also skeptical here, as the idea of pre-publication peer review is so deeply embedded within the current scientific enterprise that I do not see post-publication peer review playing a big role anytime soon. The lack of incentives for peer review – let alone post-publication peer review – is really not helping us make the needed changes any sooner.


Luckily, there is a thing called intrinsic motivation, and I am glad that JW and LS had enough of it to get this paper published. The paper, with the title “Cancer prevalence higher in stroke patients than in the general population: the Dutch String-of-Pearls Institute (PSI) Stroke study”, is published in the European Journal of Neurology and can be found on PubMed, as well as on my Mendeley and Publons profiles.

Helping patients to navigate the fragmented healthcare landscape in Berlin: the NAVICARE stroke-atlas

The cover of the Berlin Stroke Atlas

Research on healthcare delivery can only do so much to improve the lives of patients. Identifying the weak spots is important to start with, but it is not going to help patients one bit if they don’t get information that is actually useful, let alone in time.

It is for that reason that the NAVICARE project not only focusses on doing research but also on providing information for patients, as well as bringing healthcare providers together in the NAVICARE network. The premise of NAVICARE is that somehow we need to help patients navigate the fragmented healthcare landscape. We do so by using stroke and lung cancer as model diseases, prototypical diseases that help us focus our attention.

One deliverable is the stroke atlas: a document that lists different healthcare providers – and people and organizations – who can help you in the broadest sense possible once you or your loved one is affected by a stroke. This stroke atlas, in conjunction with our personal approach at the stroke service point of the CSB/BSA, will help our patients. You can find the stroke atlas here (in German, of course).

But this is only a first step. The navigator model is currently being developed further, for which NAVICARE has received additional funding this summer. I will not be part of those steps (see my post on my reshuffled research focus), but others at the CSB will.

Five years in Berlin and counting – reshuffling my research

I started working at the CSB about 5 years ago. I took over an existing research group, CEHRIS, which provided services to other research groups in our center. Data management, project management and biostatisticians who worked on both clinical and preclinical research were all included in this team. My own research was a bit on the side, including old collaborations with Leiden and a new Ph.D. project with JR.

But then, in the early summer of 2018, things started to change. The generous funding under the IFB scheme ran out, and CSB 3.0 had to switch to a skeleton crew. Now, for most research activities this had no direct impact, as funding for many key projects did not come from the CSB 2.0 grant. However, a lot of the services that made our researchers perform at peak capability were hit. This included my team: CEHRIS, the service group ready to help other researchers, was no more.

But I stayed on, and I used the opportunity to focus my efforts on my own interests. I detached myself from projects I had inherited but was not so engaged with, and I engaged myself with projects that interested me. This was, of course, a process over many months, starting at the end of 2017. I feel it is now time to share with you a clear idea of my new direction. It boils down to this:

My stroke research focuses on three projects in which we collect(ed) data ourselves: PROSCIS, BSPATIAL, BELOVE. The data collection in each of these projects is in a different phase, and more papers will be coming out of these projects sooner rather than later. Next to this, I will also help to analyze and publish data from others – that is, after all, what epidemiologists do. My methods research remains a bit of a hodgepodge where I still need to find focus and momentum. The problem here is that funding for this type of research has been lacking so far and will always be difficult to find – especially in Germany. But many ideas that came from stroke projects have ripened into methodology working papers and abstracts, hopefully resulting in fully published papers quite soon. The third pillar is formed by the meta-research activities that I undertake with QUEST. Until now, these activities were a bit of a hobby, always on the side. That has changed with the funding of SPOKES.

SPOKES is a new project that aims to improve the way we do biomedical research, especially translational research. Just pointing at the problem (meta-research) or issuing new top-down policy (ivory tower alert) is not enough. There has to be time and money for early- and mid-career researchers to chip in as well. SPOKES is going to facilitate that by making both available. This starts with dedicated time and money for myself: I now work one day a week with the QUEST team. I will provide more details on SPOKES in a later post, but for now I will just say that I am looking forward to this project within the framework of the Wellcome Trust Translational Partnership.

So there you have it, the three new pillars of my research activities in a single blog post. I have decided to drop the name CEHRIS to show that the old service-focussed research group is no more. I struggled with choosing a new name, but in the end I settled for the German standard “AG-Siegerink”. Part lack of imagination, part laziness, and part underlining that there are three related but distinct research lines within the group.

Up to the next 5 years!?

STEMO, our stroke ambulance, has had a bumpy ride…

STEMO in front of our clinic, source.

Phew, there has been quite some excitement when it comes to STEMO, the stroke ambulance in Berlin. The details are too specific – and way too German – for this blog, but the bottom line is this: during our evaluation of STEMO, we noticed that STEMO was not always used as it should be. And if you do not use a tool like you should, it is not half as effective. So we keep trying to improve how STEMO is used in Berlin, even while the evaluation is ongoing.

We needed to take these changes into account, so we wrote a new plan to evaluate STEMO, which was published open access in the new BMC journal Neurological Research and Practice. The money to continue the evaluation was secured, and we thought we were ready to go. But then reality set in: during budget negotiations, a lower committee of the Berlin Senate simply said “NO” to STEMO. A day later, however, the Mayor of Berlin used a “Machtwort”, an informal veto, to say that STEMO will be kept in the budget in order to finish the formal evaluation.

A true rollercoaster, which shows how directly our research has an impact on society. The numerous calls, tweets and emails we have received in support of our now 3 STEMO ambulances over the last couple of weeks underline this even more (just the fact that a complete stranger started a petition that takes all the nuances of the case into account is mind-boggling!). But the science has to speak, and we still need to definitively evaluate the effectiveness of STEMO when used like it should be – something we will do over the next months with renewed energy in the whole team.

Auto-immune antibodies and their relevance for stroke patients – a new paper in Stroke

KM for CVD + mortality after stroke, stratified by serostatus for the anti-NMDA-R auto-antibody, taken from our paper (doi: 10.1161/STROKEAHA.119.026100)

We recently published one of our projects embedded within the PROSCIS study. This follow-up study, which includes 600+ men and women with acute stroke, forms the basis of many active projects in the team (1 published, many coming up).

For this paper, PhD candidate PS measured auto-antibodies against the NMDA receptor. Previous studies suggested that having these antibodies might be a marker of, or even induce, a kind of neuroprotective effect. That is not what we found: we showed that seropositive patients, especially those with the highest titers, have a 3-3.5-fold increase in the risk of a worse outcome, as well as an almost 2-fold increased risk of CVD and death following the initial stroke.

Interesting findings, but some elements of our design do not allow us to draw very strong conclusions. One of them is the uncertainty about the seropositivity status of the patient over time. Are the antibodies actually induced over time? Are they transient? PS has come up with a solid plan to answer some of these questions, which includes measuring the antibodies at multiple time points just after stroke. Now, in PROSCIS we only have one blood sample, so we need to use biosamples from other studies that were designed with multiple blood draws. The team of AM was equally interested in the topic, so we teamed up. I am looking forward to following up on the questions that our own research brings up!

The effort was led by PS, and most praise should go to her. The paper is published in Stroke and can be found online via PubMed or via my Mendeley profile (doi: 10.1161/STROKEAHA.119.026100).

Update January 2020: There was a letter to the editor regarding our paper. We wrote a response.

Now hiring!

The text below is the English version of the official and very formal German text.

The QUEST center is looking for a project manager for the SPOKES project. SPOKES is part of the Wellcome Trust translational partnership program and aims to “Create Traction and Stimulate Grass-Root Activities to Promote a Culture of Translation Focused on Value”. SPOKES will be looking for grassroots activities from early- and mid-career scientists who want to sustainably increase the value of the research in their own field.

The position will be located within the QUEST Center for Transforming Biomedical Research at the Berlin Institute of Health (BIH). The goal of QUEST is to optimize biomedical research in terms of sound scientific methodology, bio-ethics and access to research.

SPOKES is a new program organized by the QUEST Team at the Berlin Institute of Health. SPOKES enables our own researchers at the Charité / BIH to improve the way we do science. Your task is to identify and support these scientists. More specifically, we expect you to:

  • Promote the program within the BIH research community (interviews, newsletters, social media, events, etc)
  • Find the right candidates for this program (recruiting and selection)
  • Organize the logistics and help prepare the content of all our meetings (workshops, progress meetings, symposia, etc)
  • Support the selected researchers in their projects where possible (design, schedule and execute)

Next to this, there is an opportunity to perform some meta-research yourself.

We are looking for somebody with

  • A degree in biomedical research (MD, MSc, PhD or equivalent)
  • Proficiency in both English and German (both minimally C1)
  • Enthusiasm for improving science – if possible demonstrated by previous courses or other activities

Although no formal training as a project manager is required, we are looking for people who have some experience in setting up and running projects of any kind that involve people with different (scientific) backgrounds.

Intrinsic Coagulation Pathway, History of Headache, and Risk of Ischemic Stroke: a story about interacting risk factors

Yup, another paper from the long-standing collaboration with Leiden. This time, it was PhD candidate HvO who came up with the idea to take a look at the risk of stroke in relation to two risk factors that each independently increase that risk. So what is the new part of this paper? It is about the interaction between the two.

Migraine is a known risk factor for ischemic stroke in young women. Previous work also indicated that increased levels of the intrinsic coagulation proteins are associated with an increase in ischemic stroke risk. Both roughly double the risk. So what does the combination do?

Let us take a look at the results of the analyses in the RATIO study. High antigen levels of coagulation factor XI are associated with a relative risk of 1.7. A history of severe headache doubles the risk of ischemic stroke. So what can we expect if both risks are just added up? Well, we need to take into account the baseline risk that everybody has, which is an RR of 1. Then we add the extra risk, in terms of RR, of the two risk factors. For FXI this is (1.7 − 1 =) 0.7. For headache it is (2.0 − 1 =) 1.0. So we would expect an RR of (1 + 0.7 + 1.0 =) 2.7. However, we found that the women who had both risk factors had a 5-fold increase in risk, more than what would be expected.
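For those who like to see the bookkeeping spelled out, here is the same calculation in a few lines – the RERI (relative excess risk due to interaction) is simply the observed RR minus what additivity predicts, rearranged:

```python
# The arithmetic from the paragraph above, spelled out.
rr_fxi      = 1.7   # high FXI antigen levels
rr_headache = 2.0   # history of severe headache
rr_both     = 5.0   # observed in women with both risk factors

expected_additive = 1 + (rr_fxi - 1) + (rr_headache - 1)   # = 2.7
reri = rr_both - rr_fxi - rr_headache + 1                  # = 2.3
print(expected_additive, reri)   # RERI > 0: more risk than the sum of the parts
```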

For those keeping track: I am of course talking about additive interaction, sometimes referred to as biological interaction. This concept is quite different from statistical interaction, which – for me – is a useless thing to look at when your underlying research question is of a causal nature.

What does this mean? You could interpret this as saying that some women only develop the disease because they are exposed to both risk factors. In some way, that combination becomes a third ‘risk entity’ that increases the risk in the population. How that works on a biochemical level cannot be answered with this epidemiological study, but some hints do exist in the literature, as we discuss in our paper.

Of course, some caveats have to be taken into account. In addition to the standard limitations of case-control studies, two things stand out. First, because we study the combination of two risk factors, the precision of our study is relatively low. But then again, what other study is going to answer this question? The absolute risk of ischemic stroke is too low in the general population to perform prospective studies, even when enriched with loads of migraineurs. Second, the questionnaires used do not allow us to conclude that the women who reported severe headache actually had migraine. Our assumption is that many – if not most – do. And even though mixing ‘normal’ headaches with migraines in one group would only lead to an underestimation of the true effect of migraine on stroke risk, we still have to be careful and therefore stick to the term ‘headache’.

HvO took the lead in this project, which included two short visits to Berlin supported by our Virchow scholarship. The paper has been published in Stroke and can be seen ahead of print on their website.

Migraine and venous thrombosis: Another important piece of the puzzle

Asking the right question is arguably the hardest thing to do in science, or at least in epidemiology. The question that you want to answer dictates the study design, the data that you collect and the type of analyses you are going to use. Often, especially in causal research, this means scrutinizing how you should frame your exposure/outcome relationship. After all, there needs to be positivity and consistency, which you can only ensure through “the right research question”. Of note, the third assumption for causal inference, i.e. exchangeability, conditional or not, is something you can pursue through study design and analyses. But there is a third part of an epidemiological research question that makes all the difference: the domain of the study, as is so elegantly displayed by the “Today’s Random Medical News” cartoon and the Twitter hashtag “#inmice”.

The domain is the type of individuals to which the answer has relevance. Often, the domain has a one-to-one relationship with the study population. This is not always the case, as sometimes the domain is broader than the study population at hand. A strong example: you could use young male infants to get a good estimate of the distribution of genotypes for a case-control study of venous thrombosis in middle-aged women. I am not saying that such a case-control study has the best design, but there is a case to be made, especially if we can safely assume that the genotype distribution is not sex-chromosome dependent and has not shifted through the generations.

The domain of the study is not only important if you want to know to whom the results of your study are actually relevant, but also if you want to compare the results of different studies. (As a side note, keep in mind the absolute risks of the outcome that come with the different domains: they highly affect how you should interpret the relative risks.)

Sometimes, studies look like they fully contradict each other. One study says yes, the other says no. What to conclude? Who knows! But are you sure both studies actually answer the same question? Comparing the way the exposure and the outcome are measured in the two studies is one thing – an important thing at that – but it is not the only thing. You should also take potential differences and similarities between the domains of the studies into account.

This brings us to the paper by KA and myself that just got published in the latest volume of RPTH. It is a commentary written after we reviewed a paper by Folsom et al., who did a very thorough job analyzing the relationship between migraine and venous thrombosis in the elderly. They convincingly show that there is no relationship, in apparent contrast to previous papers. So we asked ourselves: why did the study by Folsom et al. report findings in apparent contrast to previous studies?

There is, of course, the possibility of just chance. But next to this, we should consider that the analyses by Folsom et al. look at the long-term risk in an older population. The other papers looked at a shorter term, and in a younger population in which migraine is most relevant, as migraine often goes away with increasing age. KA and I argue that both studies might just be right, even though they are in apparent contradiction. Why should it not be possible that there is a transient increase in thrombosis risk when migraines are most frequent and severe, and no long-term increase in risk in the elderly, an age at which most migraineurs report less frequent and severe attacks?

The lesson of today: do not look only at the exposure or the outcome when you want to bring the evidence of two or more studies into one coherent theory. Look at the domain as well, or you might just dismiss an important piece of the puzzle.

medRxiv: the pre-print server for medicine

Pre-print servers are a place to share your academic work before actual peer review and subsequent publication. They are not completely new to academia, as many different disciplines have adopted pre-print servers to quickly share ideas and keep the academic discussion going. Many have praised the informal peer review that you get when you post on pre-print servers, but I primarily like the speed.

But medicine is not one of those disciplines. Up until recently, the medical community had to use bioRxiv, a pre-print server for biology. Very unsatisfactory, as the fields are just too far apart, and the idiosyncrasies of the medical sciences bring some extra requirements (e.g. ethical approval, trial registration, etc.). So here comes medRxiv, from the makers of bioRxiv with support of the BMJ. Let's take a moment to listen to the people behind medRxiv explain the concept themselves.

source: https://www.medrxiv.org/content/about-medrxiv

I love it. I am not sure whether it will be adopted by the community at the same pace as in some other disciplines, but doing nothing will never be part of the way forward. Critical participation is the only way.

So, that's what I did. I wanted to be part of this new thing and convinced my co-authors to use the pre-print concept. I focussed my efforts on the paper in which we describe the BeLOVe study. This is a big cohort we are currently setting up, and in a way it is therefore well suited for pre-print servers: they allow us to describe what we want, to the level of detail of our choice, without restrictions in word count, appendices, tables or graphs. The speediness is also welcome, as we want to inform the world about our efforts while we are still in the pilot phase and still able to tweak the design here or there. And that is actually what happened: after being online for a couple of days, our pre-print already sparked some ideas by others.

Now we have to see how much effort it took us, and how much benefit we drew from this extra effort. It would be great if all journals would permit pre-prints (not all do…) and if submitting to a journal would just be a "one click" kind of effort after jumping through the hoops for medRxiv.

This is not my first pre-print. For example, the paper that I co-authored on the timely publication of trials from Germany was posted on bioRxiv. But being the guy who actually uploads the manuscript is a whole different feeling.

REWARD | EQUATOR Conference 2020 in Berlin

https://www.reward-equator-conference-2020.com

Almost 5 years ago something interesting happened in Edinburgh. REWARD and EQUATOR teamed up and organized a joint conference on "Increasing value and reducing waste in biomedical research". Over the last five years, that topic has dominated meta-research and research improvement activities all over the world. Now, 5 years later, it is time for another REWARD and EQUATOR conference, this time in Berlin. And I have the honor to serve on the local organizing committee.

My role is so small that the LOC is currently not even mentioned on the website. But the website does show some other names, promising a great event! It starts with the theme, which is "Challenges and opportunities for Improvement for Ethics Committees and Regulators, Publishers, Institutions and Researchers, Funders – and Methods for measuring and testing Interventions". That is not as sexy a title as 5 years ago, but it shows that the field has outgrown the alarmist phase and is now looking for real and lasting changes for the better – a move I can only encourage. See you in Berlin?

https://www.reward-equator-conference-2020.com

Results dissemination from clinical trials conducted at German university medical centers was delayed and incomplete.

My interests are broader than stroke, as you can see from my tweets as well as my publications. I am interested in how the medical scientific enterprise works – and more importantly, how it can be improved. This latest paper looks at both.

The paper, with the relatively boring title "Results dissemination from clinical trials conducted at German university medical centres was delayed and incomplete", is a collaboration with QUEST, carried by DS and his team. The short form of the title might just as well have been "RCTs don't get published, and even if they do, it is often too late."

Now, this is not a new finding, in the sense that older publications also showed high rates of non-publication. Newer activities in this field, such as the trial trackers for the FDAAA and the EU, confirm this idea. The cool thing about these newer trackers is that they rely on continuous data collection through bots that crawl all over the interwebs to look for new trials. This upside has a couple of downsides though: with their constant updating, these trackers do not work that well as a benchmarking tool. Second, they might miss some obscure types of publication, which might lead to an underestimation of reporting. Third, to keep the trackers simple, they tend to use only one definition of what counts as "timely publication", even though neither the field nor the guidelines are conclusive on this.

So our project is something different. To get a good benchmark, we looked at whether trials executed by/at German university medical centers were published in a timely fashion. We collected the data automatically as far as we could, but also did a complete double check by hand to ensure we didn't skip publications (hint: we did; hand searching is important, potentially because of the language issue). Then we put all the data in a database and made a shiny app, so that readers themselves can decide which definitions and subsets they are interested in. The bottom line: on average, only ~50% of trials get published within two years after their formal end. That is too little and too slow.

shiny app
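To make concrete how much the definition matters, here is a minimal sketch in Python of the kind of benchmark question the shiny app lets you explore. The column names and toy dates are hypothetical, not the project's actual schema:

```python
# Toy benchmark: share of trials with results available within a given
# window after completion, under a strict and a lenient definition.
import pandas as pd

trials = pd.DataFrame({
    "completion_date":      pd.to_datetime(["2014-01-15", "2014-06-01", "2015-03-10"]),
    "publication_date":     pd.to_datetime(["2015-11-30", None,         "2018-01-20"]),
    "summary_results_date": pd.to_datetime([None,         "2015-02-01", None]),
})

def share_reported(df, months, include_summary_results=True):
    """Share of trials with results disseminated within `months` of completion."""
    cols = ["publication_date"]
    if include_summary_results:
        cols.append("summary_results_date")
    first = df[cols].min(axis=1)                     # earliest dissemination, NaT-aware
    delay_days = (first - df["completion_date"]).dt.days
    return (delay_days <= months * 30.44).mean()     # never-reported counts as False

print(share_reported(trials, 24))                                 # lenient definition
print(share_reported(trials, 24, include_summary_results=False))  # journal article only
```

Change the window or the definition and the headline number moves, which is exactly why the app leaves that choice to the reader.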

This is a cool publication because it provides a solid benchmark that truly captures the current state. Now it is up to us, and the community, to improve our reporting. We should track progress in the upcoming years with automated trackers, and in 5 years or so do the whole manual tracking once more. But that is not the only reason why it was so inspiring to work on this project; it was the diverse team of researchers from many different groups that made the work fun to do. The discussions we had on the right methodology were complex and even led to an ancillary paper by DS and his group. And publishing this work in the most open way possible (open data, preprint, etc.) was also a good experience.

The paper is here on PubMed, the project page on OSF can be found here, and the preprint is on bioRxiv – and let us not forget the shiny app where you can check out the results yourself. Kudos go out to DS and SW, who really took the lead in this project.

Joining the PLOS Biology editorial board

I am happy and honored to share that I am going to be part of the PLOS Biology editorial board. PLOS Biology has a special model for their editorial duties, with the core of the work being done by in-house staff editors – all scientists turned professional science communicators/publishers. They are supported by the academic editors – scientists who are active in their field and can help the in-house editors with insight/insider knowledge. I will join the team of academic editors.

When the staff editors asked me to join the editorial board, it quickly became clear that they invited me because I might be able to contribute to the meta-research section of the journal. After all, next to some of the peer review reports I wrote for the journal, I published a paper on missing mice, one on the idea behind sequential designs in preclinical research, and more recently one about the role of exact replication.

Next to the meta-research manuscripts that need evaluation, I am also looking forward to just working with the professional and smart editorial office. The staff editors already teased that a couple of new innovations are coming up. So, next to helping meta-research forward, I am looking forward to helping shape and evaluate these experiments in scholarly publishing.

Kuopio Stroke Symposium

Kuopio in summer

Every year there is a neurology symposium organized in the quiet and beautiful town of Kuopio in Finland. Every three years, just like this year, the topic is stroke, and for that reason I was invited to be part of the faculty. A true honor, especially if you consider the other speakers on the program, who all delivered excellent talks!

But these symposia are much more than just the hard cold science and prestige. They are also about making new friends and reconnecting with old ones. Leave that up to the Finns, whose decision to get us all on a boat and later in a sauna after a long day in the lecture hall proved to be a stroke of genius.

So it was not for nothing that many of the talks boiled down to the idea that the best science is done with friends – in a team. This is true whether you are running a complex international stroke rehabilitation RCT, or investigating the lower risk of CVD morbidity and mortality amongst frequent sauna visitors. Or, in my case, the role of hypercoagulability in young stroke – a pdf of my slides can be found here.

My talk in Augsburg – beyond the binary

@BobSiegerink & Jakob Linseisen discussing the p-values. Thank you for your visit and great talk pic.twitter.com/iBt5ZQxaMi— Sebastian Baumeister (@baumeister_se) 3 May 2019

I am writing this as I am sitting in the train on my way back to Berlin. I was in Augsburg today (2x 5.5 hours in the train!), a small university city next to Munich in the south of Germany. SB, fellow epidemiologist and BEMC alumnus, invited me to give a talk in their lecture series.

I had a blast – in part because this talk posed a challenge for me, as they have a very mixed audience. I really had to think long and hard about how I could deliver a stimulating talk with a solid attention arc for everybody in the audience. Take a look at my slides to see if I succeeded: http://tiny.cc/beyondbinary

My talk at Kuopio stroke symposium

In 6 weeks or so I will be traveling to Finland to speak at the Kuopio stroke symposium. They asked me to talk about my favorite subject, hypercoagulability and ischemic stroke. Although I am still working on the last details of the slides, I can already provide you with the abstract.

The categories "vessel wall damage" and "disturbance of blood flow" from Virchow's triad can easily be used to categorize some well-known risk factors for ischemic stroke. This is different for the category "increased clotting propensity", also known as hypercoagulability. A meta-analysis shows that markers of hypercoagulability are more strongly associated with the risk of first ischemic stroke than with myocardial infarction. This effect seems to be most pronounced in women and in the young, as the RATIO case-control study provides a large portion of the data in this meta-analysis. Although interesting from a causal point of view, understanding the role of hypercoagulability in the etiology of first ischemic stroke in the young does not directly lead to major actionable clinical insights. For this, we need to shift our focus to stroke recurrence. However, literature on the role of hypercoagulability in stroke recurrence is limited. Some emerging treatment targets can however be identified. These include coagulation factors XI and XII, for which small molecule and antisense oligonucleotide treatments are now being developed and tested. Their relatively small role in hemostasis, but critical role in pathophysiological thrombus formation, suggests that targeting these factors could reduce stroke risk without increasing the risk of bleeds. The role of neutrophil extracellular traps, negatively charged long DNA molecules that could act as a scaffold for coagulation proteins, is also not completely understood, although there are some indications that they could be targeted as a co-treatment for thrombolysis.

I am looking forward to this conference, not least to talk to some friends, get inspired by great speakers and science, and enjoy the beautiful surroundings of Kuopio.

postscript: here are my slides that I used in Kuopio

Should you drink one glass of alcohol to reduce your stroke risk?

The answer: no. For a long time there has been doubt whether or not we should believe the observational data suggesting that limited alcohol use is in fact good for you. You know, the old "U-curve" association. Now, with some smart thinking from the Kadoorie guys from China/Oxford as well as some other methods experts, the ultimate analysis has been done: a Mendelian randomization study published recently in the Lancet.

If you wanna know what that actually entails, you can read a paper I co-wrote a couple of years ago for NDT, or the version in Dutch for the NTVG. In short, the technique uses genetic variation as a proxy for the actual phenotype you are interested in. This can be a biomarker, or in this case, alcohol consumption. A large proportion of the Chinese population has genetic variations in the genes that code for the enzymes that break down alcohol in your blood. These genetic markers are therefore a good indicator of how much you can actually drink – at least on a group level. And since in most regions of China drinking alcohol is the norm – at least for men – how much you can drink is actually a good proxy of how much you actually do drink. Analyze the risk of stroke according to the unbiased genetically determined alcohol consumption instead of the traditional questionnaire-based alcohol consumption and voila: no U-curve in sight –> no protective effect of drinking a little bit of alcohol.
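For intuition, here is a minimal sketch of that logic with simulated data; every number below is made up for illustration, and the real paper of course uses far more sophisticated methods:

```python
# Mendelian randomization in miniature: genotype g shifts alcohol intake x,
# a hidden confounder u distorts the naive estimate, but the Wald ratio
# (genotype-outcome effect divided by genotype-exposure effect) recovers
# the true causal effect of x on y.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
g = rng.binomial(2, 0.3, n)                   # alcohol-metabolism variant (0/1/2)
u = rng.normal(0, 1, n)                       # unmeasured confounder
x = 5 - 2 * g + u + rng.normal(0, 1, n)       # drinks/week, reduced by the variant
y = 0.05 * x - 0.5 * u + rng.normal(0, 1, n)  # true harmful effect of alcohol: 0.05

naive = np.cov(x, y)[0, 1] / np.var(x)        # confounded: alcohol looks protective
beta_gx = np.cov(g, x)[0, 1] / np.var(g)      # genotype -> exposure
beta_gy = np.cov(g, y)[0, 1] / np.var(g)      # genotype -> outcome
print("naive estimate:", naive)               # negative, the spurious 'protective' signal
print("Wald ratio:    ", beta_gy / beta_gx)   # ~0.05, the true harmful effect
```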

Why am I writing about that study on my own blog? I didn't work on the research, that is for sure! No, it is because the Dutch newspaper NRC actually contacted me for some background information, which I was happy to provide. The science section of the NRC has always been one of the best in the Netherlands, which made it quite an honor as well as an adventure to get involved like that. The journalist, SV, did an excellent job of wrapping all that we discussed in that 30-40 minute video call into just under 600 words, which you can read here (Dutch). I really learned a lot helping out and I am looking forward to doing this type of work again sometime in the future.

Go beyond the binary outcome!

You were just diagnosed with a debilitating disease. You try to make sense of what the next steps are going to be. You ask your doctor: what do I need to do to get back to being a fully functioning adult, as much as humanly possible? The doctor starts to tell you what to do to reduce the risk of future events.

That sounds logical at first sight, but in reality it is not. The question and the answer are disconnected on various levels: what is good for lowering your risk is not necessarily the thing that will bring functionality back into your life. Also, they are about different time scales: getting back to a normal life is about weeks, perhaps months, while keeping recurrence risk as low as possible is a long-term game – lifelong, in fact.

A lot of research in various fields has mixed these two things up. The effects of acute treatment are evaluated in studies with 3-5 years of follow-up. Or reducing recurrence risk is studied in large cohorts with only 6-12 months of follow-up. I am not arguing that this is always a bad idea, but I do think that a better distinction between these concepts could help some fields make progress.

We make that distinction in stroke. For a while now we have adopted the so-called modified Rankin scale as the primary outcome in acute stroke trials. It is a 7-category ordinal scale, often measured at 90 days after the stroke, that actually tells us whether the patient completely recovered (mRS 0) or died (mRS 6), and anything in between. This made so much sense for stroke that I started to wonder whether it would also make sense for other diseases.
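To see why the ordinal outcome is attractive, here is a minimal sketch with simulated data (the treatment effect and cut-offs are invented for illustration): a proportional-odds model uses all seven mRS categories, while the conventional dichotomization throws part of that information away.

```python
# Ordinal (proportional odds) analysis of a simulated mRS outcome versus
# the conventional dichotomization at mRS 0-2 = good outcome.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(7)
n = 2000
treated = rng.integers(0, 2, n)
latent = rng.normal(0, 1, n) - 0.3 * treated   # treatment shifts the whole distribution
mrs = pd.cut(latent, bins=[-np.inf, -1.5, -1, -0.5, 0, 0.5, 1, np.inf],
             labels=False)                      # 0 (recovered) .. 6 (dead)

# one common odds ratio for shifting to a worse mRS category
ordinal = OrderedModel(mrs, treated[:, None], distr="logit").fit(method="bfgs", disp=False)
print("common OR (worse mRS):", np.exp(ordinal.params[0]))

good = (mrs <= 2).astype(int)                   # dichotomized outcome
binary = sm.Logit(good, sm.add_constant(treated)).fit(disp=False)
print("dichotomized OR (good outcome):", np.exp(binary.params[1]))
```

Both estimates point the same way, but the ordinal model uses every transition on the scale, which typically buys precision – and thus smaller trials for the same power.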

I think it does. In a paper published a couple of months ago in RPTH by JLR and me, we call upon the greater thrombosis community to consider looking beyond a binary outcome. I stand by this idea, and for that reason I brought it up again at the Maastricht Consensus Conference on Thrombosis. During that conference another speaker, EK, said that the field needed a new way to capture functionality after VTE. You guessed it: we got together over coffee, shared ideas, recruited SB as a third critical thinker, and we came up with this: a call to action to improve measuring functional limitations after venous thromboembolism.

This is not just a call from us for others to take action; it is the start of new research activity by EK, SB and myself. First, we need the input of other experts on the scale itself. Second, we need to standardize the way we actually score patients, then test this and get the patients' perspective on the logistics and questions behind the scale. Third, we need to know the reliability of the scale and how the logistics work in a true RCT setting. Only when we have completed all these steps will we know for certain whether looking beyond the binary outcome indeed brings more actionable information when you talk to your doctor and ask yourself "how do I increase my chances of getting back to a fully functioning adult, as much as humanly possible?"

Replication: how exact do you want to be?

Doing exactly the same experiment a second time around doesn't really tell you much. In fact, if you quickly glance over the statistics, it might look like you might as well do a coin flip. Wait… what? Yup, a coin flip. After all, if the true effect is exactly the effect the original – just barely significant – experiment observed, doing the exact same experiment will give you a 50/50 chance of detecting it (50% power).


The kernel of truth is of course that a coin flip never adds new useful information. But what does an exact replication experiment actually add? This is the question we try to answer in our latest paper in PLOS Biology, where we explore the added value of replications in biomedical research (see figure). The bottom line is that doing the exact same thing (including the same sample size) really has only limited added value. To understand what the power implications for replication experiments actually are, we developed a shiny app where readers can play around with different scenarios. Want to learn more? Take a look here: s-quest.bihealth.org/power_replication
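For the simplest case – a normally distributed test statistic and a true effect exactly equal to the just-significant observed one – the coin-flip claim can be verified in a few lines. This is a sketch under those assumptions, not the full framework from the paper:

```python
# If the original study just reached z = 1.96 and the true effect equals
# the observed one, an exact replication's z-statistic is centered on 1.96:
# the chance of crossing the threshold again is exactly 50%.
from scipy.stats import norm

z_alpha = norm.ppf(0.975)                          # two-sided alpha = 0.05
print(1 - norm.cdf(z_alpha - z_alpha))             # 0.5: the coin flip

# power only rises if the replication is larger than the original,
# because the expected z-statistic scales with sqrt(n)
for k in [1, 2, 4]:                                # replication n = k * original n
    print(k, round(1 - norm.cdf(z_alpha - z_alpha * k ** 0.5), 3))
```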


The project was carried by SP and resulted in a paper published in PLOS Biology (find it here). The paper got some traction on news sites as well as on twitter, as you can see from this altmetric overview.

Reusing open data

I was thrilled when I learned that the QUEST Center at the BIH was going to reward open data reuse with awards. The details can be found on their website, but the bottom line is this: open science does not only mean opening up your data, but also actually using open data. If everybody opens up their data but nobody actually uses it, the added value is quite limited.

For that reason I started some projects back in 2015/2016 designed to see how easy it actually is to find data that could be used to answer a question you are actually interested in. The answer: not always that easy. The required variables might not be there, and even if they are, it is quite complex to start using a database that was not built by yourself. To understand the value of your results, you have to understand how the data were collected. One study proved to be so well documented that it was a contender: the English Longitudinal Study of Ageing. One of the subsequent analyses that we did was published in a paper – mentioned before on this blog – and that paper is the reason why I am writing this post: we received the open data reuse award.

The award has 1000 euros attached to it, money the group can spend on travel and consumables. Now, do not get me wrong, 1000 euros is nothing to sneeze at. But it is not going to be the major driver in your decision whether to reuse open data or not. Still, the award is nice and, I hope, effective in stimulating open science, especially as it can stimulate the conversation and critical evaluation of the value of reusing open data.

Long journey, short(ish) story

This is a short story about a long journey – a journey that started in 2013, if I am not mistaken. In that year, we decided to link the RATIO case-control study to the data from the Central Bureau of Statistics (CBS) in the Netherlands, allowing us to turn the case-control study into a follow-up study.

The first results of these analyses were already published some time ago under the title "Recurrence and Mortality in Young Women With Myocardial Infarction or Ischemic Stroke". To get these results into that journal, we were asked to reduce the paper to a letter. We did, and hope we were able to keep the core message clean and clear: the risk of arterial events, after arterial events, remains high over a long period of time (15+ years) and remains true to type.

Just last week (!) we published another analysis of the data, in which we contrast the long-term risk for those with a presumably hypercoagulable blood profile to those who do not show a tendency towards clotting. The bottom line: if anything, there is a dose-response relationship between hypercoagulability and arterial thrombosis for ischemic stroke patients, but not for myocardial infarction patients. This is all in line with the conclusions on the role of hypercoagulability and stroke based on data from the same study. But I have to be honest: the evidence is not that overwhelming. The precision is low, as seen from the broad confidence intervals, and with regard to the point estimates, no clinically relevant effects were seen. Then again, it is a piece of the puzzle that is needed to understand the role of hypercoagulability in young stroke.

main figure from the paper: Q4 vs Q1 is almost doubling in risk
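For readers who want to see the shape of such a quartile analysis, here is a minimal sketch with simulated data using the lifelines package; the marker, effect sizes and follow-up are all invented, not taken from the paper:

```python
# Quartiles of a coagulation marker -> hazard ratios vs the lowest quartile.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(9)
n = 400
marker = rng.lognormal(0, 0.5, n)                        # hypercoagulability marker
quartile = pd.qcut(marker, 4, labels=["Q1", "Q2", "Q3", "Q4"])

hazard = 0.02 * 1.25 ** quartile.codes                   # mild dose-response on the hazard
time = rng.exponential(1 / hazard)                       # event times in years
df = pd.DataFrame({"years": np.minimum(time, 15),        # censor at 15 years follow-up
                   "event": (time <= 15).astype(int)})
df = pd.concat([df, pd.get_dummies(quartile, drop_first=True).astype(float)], axis=1)

cph = CoxPHFitter().fit(df, duration_col="years", event_col="event")
print(np.exp(cph.params_))                               # HRs for Q2-Q4 vs the Q1 reference
```

With a hazard ratio of 1.25 per quartile step, Q4 vs Q1 lands at roughly 1.95 – the "almost doubling" pattern of the figure – though with n = 400 the confidence intervals are, as in the paper, wide.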

There is a lot to tell about this publication: how difficult it was to get the study data linked to the CBS to get to the 15-year follow-up, how AM did a fantastic job organizing the whole project, how quartile analyses are possibly not the best way to capture all the information that is in the data, how we had tremendous delays because of peer review – especially at the last journal – how bad some of the peer review reports were, how one of the peer reviewers was a commercial enterprise – which for some time paid people to do peer review – how the peer review reports are all open, and how it was to get the funding to keep the paper from being locked away behind a paywall.

But I want to keep this story short and not dwell too much on the past. The follow-up period was long, and the time it took us to get this published was long, so let us keep the rest of the story as short as possible. I am just glad that it is published and can finally be shared with the world.

Pre-prints start to sound better and better…

Finding consensus in Maastricht

source https://twitter.com/hspronk

Last week, I attended and spoke at the Maastricht Consensus Conference on Thrombosis (MCCT). This is not your standard, run-of-the-mill conference where people share their most recent research. The MCCT is different and focuses on the larger picture, giving faculty the (plenary) stage to share their thoughts on opportunities and challenges in the field. Then, with the help of a team of PhD students, these thoughts are further discussed in a break-out session. It is all wrapped up by a plenary discussion of what was discussed in the workshops. Interesting format, right?

It was my first MCCT, and beforehand I had difficulty envisioning how exactly this format would work out. Now that I have experienced it all, I can tell you that it really depends on the speaker and the people attending the workshops. When it comes to the 20-minute introductions by the faculty, I think that just an overview of the current state of the art is not enough. The best presentations were all about the bigger picture and had either an open question, a controversial statement or some form of "crystal ball" vision of the future. It really is difficult to "find consensus" when there is no controversy, as was the case in some plenary talks. Given the break-out nature of the workshops, my observations are limited in number. But from what I saw, some controversy (even if only constructed for the workshop) really did foster discussion amongst the workshop participants.

Two specific activities stand out for me. The first is the lecture and workshop on the post-PE syndrome and how we should be able to monitor the functional outcome of PE. Given my recent plea in RPTH for more ordinal analyses in the field of thrombosis and hemostasis – learning from stroke research with its mRS – we not only had a great academic discussion, but immediately made plans for a couple of projects where we could actually implement this. The second activity I really enjoyed was my own workshop, where I not only gave a general introduction to stroke (prehospital treatment and triage, clinical and etiological heterogeneity, etc.) but also focused on the role of FXI and NETs. We discussed the role of DNase as a potential co-treatment for tPA in the acute setting (talking about "crystal ball" type discussions!). Slides from my lecture can be found here (PDF). An honorable mention has to go out to the PhD students P and V, who did a great job supporting me during the prep for the lecture and workshop. Their smart questions and shared insights really shaped my contribution.

Now, I said it was not always easy to find consensus, which means that it isn't impossible. In fact, I am sure that the themes that were discussed all boil down to a couple of opportunities and challenges. A first step was made by HtC and HS from the MCCT leadership team in the closing session on Friday, which will prove to be a great springboard for the consensus paper that will help set the stage for future research in our field of arterial thrombosis.

Messy epidemiology: the tale of transient global amnesia and three control groups

Clinical epidemiology is sometimes messy. The methods and data that you might want to use might not be available or just too damn expensive. Does that mean that you should throw in the towel? I do not think so.

I am currently working in a more clinically oriented setting, as the only researcher trained as a clinical epidemiologist. I could tell you about being misunderstood and feeling lonely as the only one who has seen the light, but that would just be lying. The fact is that my position is one of privilege and opportunity, as I work together with many different groups on a wide variety of research questions that have the potential to influence clinical reality directly and bring small but meaningful progress to the field.

Sometimes that work is messy: not the right methods, a difference in interpretation, a p-value in table 1… you get the idea. But sometimes something pretty comes out of that mess. That is what happened with this paper, which just got published online (e-pub) in the European Journal of Neurology. The general topic is the heart-brain interaction, and more specifically to what extent damage to the heart actually plays a role in transient global amnesia. The idea that there might be a link comes from some previous case series, as well as the clinical experience of some of my colleagues. The next step would of course be to do a formal case-control study, and if you want to estimate true rate ratios, a lot of effort has to go into the collection of data from a population-based control group. We had neither the time nor the money to do so, and upon closer inspection, we also did not really need that clean control group to answer some of the questions that would bring progress to the field.

So instead, we chose three different control groups, perhaps better referred to as reference groups, all three with some neurological disease. Yes, there are selections at play for each of these groups, but we could argue that those selections might be similar across the groups. If these selection processes are indeed similar for all groups, strong differences in patient characteristics or biomarkers suggest that other biological systems are at play. The trick is not to hide these limitations but, like a practiced judoka, to leverage these weaknesses and turn them into strengths. Be open about what you did and show the results, so that others can build on that experience.

So that is what we did. Compared to patients with migraine with aura, vestibular neuritis or transient ischemic attack, patients with transient global amnesia are more likely to exhibit signs of myocardial stress. This study was not designed – nor will it even be able – to uncover the cause of this link, nor do we pretend that our odds ratios are in fact estimates of rate ratios or something fancy like that. Still, even though many aspects of this study are not "by the book", it did provide some new insights that help further thinking about, and investigation of, this debilitating and impactful disease.

The effort was led by EH, and the final paper can be found here on PubMed.

Genetic determinants of activity and antigen levels of contact system factors

One of my slides with a cartoon of the intrinsic coagulation system. I know, the reality is way more complicated, but still, I like the picture!
The contact system, or intrinsic coagulation system, has for a long time been an undervalued part of the thrombosis and hemostasis field. Not by me: I love FXI & FXII. Not just now that FXI is suddenly the "new kid on the block" as the new target for antithrombotic treatment through ASOs, but ever since I started my PhD in 2007/2008. As any of my colleagues from back then will confirm, I couldn't shut up about FXI and FXII, as I thought that my topic was the only relevant topic in the world. Although common amongst young researchers, I do apologize for this now that I have 20/20 hindsight.

Still, it is only natural that some of my work continues to be focused on those slightly weird coagulation proteins. Are they relevant to hemostasis? Are they relevant in pathological thrombus formation? What is their role in other biological systems? Questions that the field is only slowly getting answers to. Our latest contribution is an analysis of genetic variations in the genes that code for these proteins, estimating whether the levels of activation and antigen are in fact – in part – genetically determined.

This analysis was performed in the RATIO study, from which we primarily focused on the control group. That control group is relatively small for a genetic analysis, but given that we have a relatively young group, the hope was that the noise would not be too bad to pick up some signals. Additionally, given the previous work in the RATIO study, I think this is the only dataset with comprehensive phenotyping of the intrinsic coagulation proteins, as it includes measures of protein activity, antigen and activation.

The results, which we published in the JTH, are threefold. First, we were able to confirm previously reported associations between known genetic variations and phenotype. Second, we were able to identify two new loci (i.e. KLKB1 rs4253243 for prekallikrein and KNG1 rs5029980 for HMWK levels). Third, we did not find evidence of strong associations between variation in the studied genes and the risk of ischemic stroke or myocardial infarction. Small effects can however not be ruled out, as the sample size of this study does not yield very precise estimates.

The work was spearheaded by JLR, with tons of help from HdH, and in collaboration with the thrombosis group at the LUMC.

The paper is published in the JTH, and as always, can also be found at my Mendeley profile.

Getting your life back on track after stroke: returning to work

https://goo.gl/CbNPSE

Stroke severity and incidence might be stabilizing, or even decreasing, over time in western countries, but this is certainly not true for other parts of the world. And here is something to think about: with increasing survival, people will suffer longer from the consequences of stroke. This is of course especially true if the stroke occurred at a young age.

To understand the true impact of stroke, we need to look beyond the increased risk of secondary events. We need to understand how the disease affects day-to-day life, especially in the long term in young stroke patients. The team in Helsinki (HSYR) took a look at the pattern of young stroke patients returning to work. The results:

We included a total of 769 patients, of whom 289 (37.6%) were not working at 1 year, 323 (42.0%) at 2 years, and 361 (46.9%) at 5 years from IS.

That is quite shocking! But what about the pattern over time? For that we used lasagna plots, something like heatmaps for longitudinal epidemiological data. The results are above: the top panel shows the data just as they are in our database, while in the lower panel the rows are sorted to help interpret the results a bit better.
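For those who want to try lasagna plots at home, here is a minimal sketch in Python with simulated data; only the marginal percentages are borrowed from the quote above, and everything else (including the independence of the time points) is a simplification:

```python
# Lasagna plot: one row per patient, one column per follow-up moment,
# cell shade = status (working / not working). Sorting rows reveals layers.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
n_patients = 100
times = ["1y", "2y", "5y"]
p_not_working = [0.376, 0.420, 0.469]              # marginals from the quote above
status = np.column_stack([rng.binomial(1, p, n_patients) for p in p_not_working])

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4), sharey=True)
ax1.imshow(status, aspect="auto", cmap="Greys")    # raw database order
order = np.lexsort(status.T[::-1])                 # sort rows by status pattern
ax2.imshow(status[order], aspect="auto", cmap="Greys")
for ax, title in [(ax1, "unsorted"), (ax2, "sorted")]:
    ax.set_title(title)
    ax.set_xticks(range(len(times)))
    ax.set_xticklabels(times)
ax1.set_ylabel("patients")
plt.show()
```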

The paper can be found here, and I am proud to say that it is open access – but as always, you can also just check my Mendeley profile.

Aarnio K, Rodríguez-Pardo J, Siegerink B, Hardt J, Broman J, Tulkki L, Haapaniemi E, Kaste M, Tatlisumak T, Putaala J. Return to work after ischemic stroke in young adults. Neurology 2018; 0: 1.

Cardiac troponin T and severity of cerebral white matter lesions: quantile regression to the rescue

quantile regression of high vs low troponin T and white matter lesion quantile

A new paper, this time venturing into the field of the so-called heart-brain interaction. We often see stroke patients with cardiac problems, and vice versa. And to make it even more complex, there is also a link to dementia! What to make of this? Is it a case of the chicken and the egg, or just confounding by a third variable? How do these diseases influence each other?

This paper tries to get a grip on the matter by zooming in on a marker of cardiac damage, i.e. cardiac troponin T. We looked at this marker in our stroke patients. In theory, stroke patients should not have increased levels of troponin T; yet, they do. More interestingly, the patients who exhibit high levels of this biomarker also show high levels of structural changes in the brain, so-called cerebral white matter lesions.

But the problem is that patients with high levels of troponin T are different from those without any marker of cardiac damage. They are older and have more comorbidities, so a classic case for adjustment for confounding, right? But then we realized that both troponin and white matter lesions are heavily skewed. You could log-transform the variables before running a linear regression, but then the interpretation of the results gets a bit complex if you want clear point estimates as answers to your research question.

So we decided to go with quantile regression, which models the quantile cut-offs while keeping all the benefits of multivariable regression. The results remain interpretable, and we don't force our data into a distribution where they don't fit. From our paper:

In contrast to linear regression analysis, quantile regression can compare medians rather than means, which makes the results more robust to outliers [21]. This approach also allows to model different quantiles of the dependent variable, e.g. 80th percentile. That way, it is possible to investigate the association between hs-cTnT in relation to both the lower and upper parts of the WML distribution. For this study, we chose to perform a median quantile regression analysis, as well as quantile regression analysis for quintiles of WML (i.e. 20th, 40th, 60th and 80th percentile). Other than that, the regression coefficients indicate the effects of the covariate on the cut-offs of the respective quantiles of the dependent variable, adjusted for potential covariates, just like in any other regression model.
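As an illustration of the approach – with simulated data and invented effect sizes, not the study's actual numbers – quantile regression is readily available in statsmodels:

```python
# Quantile regression of white matter lesion burden (wml) on troponin,
# fitted at several quantiles of the skewed outcome distribution.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 500
df = pd.DataFrame({"troponin": rng.lognormal(0, 1, n),
                   "age": rng.normal(70, 10, n)})
# skewed outcome whose upper tail depends more strongly on troponin
df["wml"] = 0.05 * df["age"] + 0.3 * df["troponin"] + df["troponin"] * rng.exponential(1, n)

for q in [0.2, 0.4, 0.5, 0.6, 0.8]:
    fit = smf.quantreg("wml ~ troponin + age", df).fit(q=q)
    print(f"q={q}: troponin coefficient = {fit.params['troponin']:.2f}")
```

In this toy setup the troponin coefficient grows with the quantile – the same qualitative pattern we describe next.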

Interestingly, the results show that the association between high troponin T and white matter lesions is strongest in the higher quantiles. If you want to stretch this to a causal statement, it means that high troponin T has a more pronounced effect on white matter lesions in stroke patients who are already at the high end of the white matter lesion distribution.

But we shouldn't stretch it that far. This is a relatively simple study, and the clinical relevance of our insights still needs to be established. For example, our unadjusted results indicate that the association in itself might be strong enough to help predict post-stroke cognitive decline. The adjusted numbers are less pronounced, but still, it might be enough to help prediction models.

The paper, led by RvR, is now published in the Journal of Neurology, and can be found here, as well as on my Mendeley profile.

von Rennenberg R, Siegerink B, Ganeshan R, Villringer K, Doehner W, Audebert HJ, Endres M, Nolte CH, Scheitz JF. High-sensitivity cardiac troponin T and severity of cerebral white matter lesions in patients with acute ischemic stroke. J Neurol 2018.