New paper: Long-Term Mortality Among ICU Patients With Stroke Compared With Other Critically Ill Patients

Stroke patients can be severely affected by the clot or bleed in their brain. Emphasis on “can”, because the clinical picture of stroke is varied. Care for stroke patients is often organized in stroke units, specialized wards with the required knowledge and expertise. I forgot who it was – and I have not looked for any literature to back this up – but an MD colleague once told me that stroke units are the best “treatment” for stroke patients.

Why am I telling you this? Because the next paper I want to share with you is not about mildly or moderately affected patients, nor is it about the stroke unit. It is about stroke patients who end up in the intensive care unit. Only 1 in 50 to 100 ICU patients actually suffers from stroke, so it is clear that these patients do not make up the bulk of the patient population. All the more reason to bring some data together and get a better grip on what actually happens to these patients.

That is what we did in the paper “Long-Term Mortality Among ICU Patients With Stroke Compared With Other Critically Ill Patients”. The key element of the paper is the sheer volume of data available to study this group: 370,386 ICU patients, of whom 7,046 (1.9%) were stroke patients (almost 40% of these with intracerebral hemorrhage, a share far higher than its natural occurrence).

The results are best summed up in the Kaplan-Meier curve below: it shows that in the short run the risk of death is quite high (this is, after all, an ICU population), but also that there is a substantial difference between ischemic and hemorrhagic stroke. Hidden in the appendix are similar graphs in which we also plot other diseases that are more prevalent in the ICU (e.g. traumatic brain injury, sepsis, cardiac surgery) to give MDs a better feel for the data. Next to these KM curves we also model the data to adjust for case-mix, but I will keep those results for those who are interested and actually read the paper.
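For readers who want a feel for how such a curve is constructed: the Kaplan-Meier estimator can be written down in a few lines. The sketch below is mine, with made-up follow-up data, not the data or code from the paper; it only illustrates how censoring is handled.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve from follow-up times and event
    indicators (1 = death observed, 0 = censored)."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []  # (time, estimated survival probability)
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for time, e in data if time == t and e == 1)
        at_this_time = sum(1 for time, _ in data if time == t)
        if deaths:
            surv *= 1 - deaths / n_at_risk  # step down at each death time
            curve.append((t, surv))
        n_at_risk -= at_this_time  # censored patients leave the risk set
        i += at_this_time
    return curve
```

Note that censored patients shrink the risk set without producing a step in the curve, which is exactly why censoring-aware estimates differ from naive proportions.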


Our results are perhaps not the most earth-shattering, but they are helpful for the people working in ICUs, because they get some more information about patients they don’t see that often. This type of research is only possible if somebody collects this type of data in a standardized way – and that is where NICE came in. The “National Intensive Care Evaluation” is a Dutch NGO that actually does this. Nowadays, most people know this group from the news, where they give/gave updates on the number of COVID-19 patients in Dutch ICUs. That is only possible because this infrastructure was already in place.

MKV took the lead in this paper, which was published in the journal Critical Care Medicine with DOI: 10.1097/CCM.0000000000004492.

Three new papers – part III

As explained here and here, I am temporarily combining the announcements of published papers in one blog post to save some time. This is part III, where I focus on ordinal outcomes. Of all recent papers, these are the most exciting to me, as they really bring something new to the fields of thrombosis and COVID-19 research.

Measuring functional limitations after venous thromboembolism: Optimization of the Post-VTE Functional Status (PVFS) Scale. I have written about our call to action, and this is the follow-up paper, with research primarily done at the LUMC. With input from patients as well as 50+ experts through a Delphi process, we were able to optimize our initial scale.

Confounding adjustment performance of ordinal analysis methods in stroke studies. In this simulation study, we show that ordinal data from observational studies can also be analyzed with a non-parametric approach. The benefit: it lets us analyze without needing the proportional odds assumption while still getting an easy-to-understand point estimate of the effect.
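To make the idea concrete, here is a minimal sketch of one non-parametric effect measure of this kind: the probability that a randomly picked treated patient has a better (lower) ordinal score than a randomly picked control, with ties counted as half. This is my own illustration with hypothetical scores, not the exact estimator or code from the paper.

```python
def prob_superiority(treated, control):
    """P(treated patient scores better than control patient),
    ties counted as half. Lower ordinal score = better outcome.
    No proportional odds assumption is needed."""
    wins = ties = 0.0
    for t in treated:
        for c in control:
            if t < c:
                wins += 1
            elif t == c:
                ties += 1
    return (wins + 0.5 * ties) / (len(treated) * len(control))
```

A value of 0.5 means no difference between the groups; values towards 1 favour the treated group, which makes it an easy-to-explain single point estimate.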

The Post-COVID-19 Functional Status (PCFS) Scale: a tool to measure functional status over time after COVID-19. In this letter to the European Respiratory Journal, written with colleagues from Leiden, Maastricht, Zurich, Mainz, Hasselt, Winterthur, and of course Berlin, we propose to use a scale that is basically the same as the PVFS scale to monitor and study the long-term consequences of COVID-19.

Three new papers published – part II

In my last post, I explained why I am currently not writing one post per new paper. Instead, I group them, this time with a common denominator: the role of cardiac troponin in stroke.

High-Sensitivity Cardiac Troponin T and Cognitive Function in Patients With Ischemic Stroke. This paper finds its origins in the PROSCIS study, in which we studied other biomarkers as well. In fact, there is a whole lot more coming. The analyses of these longitudinal data showed a – let’s say ‘medium-sized’ – relationship between cardiac troponin and cognitive function. There are a whole lot of caveats: a presumptive learning curve, and not a big drop in cognitive function to work with anyway. After all, these are only mildly to moderately affected stroke patients.

Association Between High-Sensitivity Cardiac Troponin and Risk of Stroke in 96 702 Individuals: A Meta-Analysis. This paper investigates several populations: the general population, increased-risk populations, and stroke patients. The number of individuals in the title might therefore be a little deceiving – I think you should really only look at the results with those separate groups in mind. Not only might the biology be different; the methodological aspects (e.g. heterogeneity) and interpretation (relative risks with high absolute risks) also differ.

Response by Siegerink et al to Letter Regarding Article, “Association Between High-Sensitivity Cardiac Troponin and Risk of Stroke in 96 702 Individuals: A Meta-Analysis”. We did the meta-analysis as much as possible “by the book”. We pre-registered our plan and published accordingly, all to discourage ourselves (and our peer reviewers) from going on a “hunt for specific results”. But then there was a letter to the editor with the following central point: in the subgroup of patients with atrial fibrillation, the cut-offs used for cardiac troponin are so different that pooling these studies in one analysis does not make sense. At first glance, it looks like the authors have a point: it is difficult to get a very strict interpretation from the results that we obtained. This paper describes our response. Hint: upon closer inspection, we do not agree and make a good counterargument (at least, that’s what we think).

On the value of data – routinely vs purposefully

I listen to a bunch of podcasts, and “The Pitch” is one of them. In that podcast, entrepreneurs of start-up companies pitch their ideas to investors. Not only is it amusing to hear some of these crazy business ideas, but the podcast also helps me understand how professional life works outside of science. One thing I learned is that it is OK, if not expected, to oversell by about a factor of 142.

Another thing that I learned is the apparent value of data. The value of data seems to be undisputed in these pitches. In fact, the product or service the company is selling or providing is often only a byproduct: collecting data about their users which subsequently can be leveraged for targeted advertisement seems to be the big play in many start-up companies.

I think this type of “value of data” is what it is: whatever investors want to pay for that type of data is what it is worth. But it got me thinking about the value of the data that we actually collect in medical research. Let us first take a look at routinely collected data, which can be very cheap to collect. But what is the value of that data? The problem is that routinely collected data is often incomplete, rife with error, and can lead to enormous biases – information bias as well as selection bias. Still, some research questions can be answered with routinely collected data – as long as you make some real effort to think about your design and analyses. So there is value in routinely collected data, as it can provide a first glance into the matter at hand.

And what is the case for purposefully collected data? The idea is that this data is much more reliable: trained staff collect the data in a standardised way, resulting in datasets without many errors or holes. The downside is the “purpose”, which often limits the scope and thereby the amount of data collected per included individual. This is most obvious in randomised clinical trials, in which millions of euros are often spent to answer one single question. Trials often do not have the precision to provide answers to other questions. So it seems that the data can lose its value after answering that single question.

Luckily, many efforts have been made to let purposefully collected data keep some of its value even after it has served its purpose. Standardisation efforts between trials now make it possible to pool the data and thus obtain higher precision. A good example from the field of stroke research is the VISTA collaboration, i.e. the Virtual International Stroke Trials Archive. Here, many trials – and later some observational studies – are combined to answer research questions with a precision that would otherwise never be possible. This way we can answer questions with high-quality, purposefully collected data in numbers otherwise unthinkable.

This brings me to a recent paper we published with data from the VISTA collaboration: “Early in-hospital exposure to statins and outcome after intracerebral haemorrhage”. The underlying question – whether and when statins should be initiated or continued after ICH – is clinically relevant but also limited in scope and impact, so is it justified to start a trial? We took the easier and cheaper route and analysed the data from VISTA. We concluded that

… early in-hospital exposure to statins after acute ICH was associated with better functional outcome compared with no statin exposure early after the event. Our data suggest that this association is particularly driven by continuation of pre-existing statin use within the first two days after the event. Thus, our findings provide clinical evidence to support current expert recommendations that prevalent statin use should be continued during the early in-hospital phase.


And this shows the limitations of even well-collected data from RCTs: as long as the exposure of interest is preferentially given to a certain subgroup (i.e. confounding by indication), you can never really be certain about the treatment effects. To solve this, we would really need to break the bond between the exposure and any other clinical characteristic, i.e. randomize. That remains the gold standard for intended effects of treatments. Still, our paper provided a piece of the puzzle and gave more insight, from data that retained some of its value thanks to standardisation and pooling. But there is no dollar value that we can put on medical research data – routinely or purposefully collected alike – as it all depends on the question you are trying to answer.

Our paper, with JD in the lead, was published last year in the European Stroke Journal, and can be found here as well as on my Publons profile and Mendeley profile.

Messy epidemiology: the tale of transient global amnesia and three control groups

Clinical epidemiology is sometimes messy. The methods and data that you might want to use might not be available, or just too damn expensive. Does that mean that you should throw in the towel? I do not think so.

I am currently working in a more clinically oriented setting, as the only researcher trained as a clinical epidemiologist. I could tell you about being misunderstood and feeling lonely as the only one who has seen the light, but that would just be lying. The fact is that my position is one of privilege and opportunity, as I work together with many different groups on a wide variety of research questions that have the potential to influence clinical reality directly and bring small but meaningful progress to the field.

Sometimes that work is messy: not the right methods, a difference in interpretation, a p value in table 1… you get the idea. But sometimes something pretty comes out of that mess. That is what happened with this paper, which just got published online (e-pub) in the European Journal of Neurology. The general topic is the heart-brain interaction, and more specifically to what extent damage to the heart actually plays a role in transient global amnesia. The idea that there might be a link stems from some previous case series, as well as the clinical experience of some of my colleagues. The next step would of course be a formal case-control study, and if you want to estimate true rate ratios, a lot of effort has to go into collecting data from a population-based control group. We had neither the time nor the money to do so, and upon closer inspection, we also did not really need that clean control group to answer some of the questions that would bring progress to the field.

So instead, we chose three different control groups, perhaps better referred to as reference groups, all three with some neurological disease. Yes, there are selections at play for each of these groups, but we could argue that those selections might be similar for all groups. If these selection processes are indeed similar across groups, strong differences in patient characteristics or biomarkers suggest that other biological systems are at play. The trick is not to hide these limitations but, like a practiced judoka, to leverage these weaknesses and turn them into strengths. Be open about what you did and show the results, so that others can build on that experience.

So that is what we did. Compared with patients with migraine with aura, vestibular neuritis, and transient ischemic attack, patients with transient global amnesia were more likely to exhibit signs of myocardial stress. This study was not designed to understand the cause of this link – nor will it even be able to – and we do not pretend that our odds ratios are in fact estimates of rate ratios or anything fancy like that. Still, even though many aspects of this study are not “by the book”, it did provide some new insights that help further thinking about, and investigation of, this debilitating and impactful disease.

The effort was led by EH, and the final paper can be found here on PubMed.

FVIII, Protein C and the Risk of Arterial Thrombosis: More than the Sum of Its Parts.


Peer review is not a pissing contest. Peer review is not about finding the smallest of errors and delaying publication because of them. Peer review is not about being right. Peer review is not about rewriting the paper under review. Peer review is not about asking for yet another experiment.


Peer review is about making sure that the conclusions presented in the paper are justified by the data presented, and about helping the authors produce the best possible report on what they did.

At least, that is what I try to remind myself of when I write my peer review reports. So what happened when I reviewed a paper presenting data on two hemostatic factors, protein C and FVIII, in relation to arterial thrombosis? These two proteins are known to interact directly with each other. But does this also translate into a “have both, get extra risk for free” situation when the two risk factors are combined?

There are two approaches to test such interaction: statistical and biological. The authors presented one approach, while I thought the other was better suited to analyze and interpret the data. Did that result in an academic battle of arguments, or perhaps a peer review deadlock? No, the authors were civil enough to entertain my rambling thoughts and comments with additional analyses and results, but convinced me in the end that their approach had more merit in this particular situation. The editor of Thrombosis and Haemostasis saw all this going down and agreed with my suggestion that an accompanying editorial on this topic would help readers understand what actually happened during the peer review process. The nice thing is that the editor asked me to write that editorial, which can be found here; the paper by Zakai et al can be found here.

All this taught me a thing or two about peer review: cordial peer review is always better (duh!) than a peer review street brawl, and sharing aspects of the peer review process can help readers understand the paper in more detail. Open peer review – especially the variant where reviewers are not anonymous and reports are open to readers after publication – is a way to foster both practices. In the meantime, this editorial will have to do.


new paper: pulmonary dysfunction and CVD outcome in the ELSA study

This is a special paper to me, as it is 100% the product of my team at the CSB. Well, 100%? Not really. This is the first paper from a series of projects in which we work with open data, i.e. data collected by others who subsequently shared it. A lot of people talk about open data and how all the data created should be made available to other researchers, but not a lot of people talk about using that kind of data. For that reason we picked a couple of data resources to see how easy it is to work with data that was not initially collected by ourselves.

It is hard, as we have now learned. Even though the studies we focussed on (the ELSA study and UK Understanding Society) have a good description of their data and methods, understanding them takes time and effort. And even after putting in all that time and effort, you might still not know all the little details and idiosyncrasies in the data.

A nice example lies in the exposure that we used in these analyses: pulmonary dysfunction. The data for this exposure were captured in several different datasets, in different variables. Reverse engineering a logical and interpretable concept out of these data points was not easy. This is perhaps also true for data that you collect yourself, but then at least this thinking is more or less done before data collection starts and no reverse engineering is needed.

So we learned a lot. Not only about the role of pulmonary dysfunction as a cause of CVD (hint: it is limited), or about the different sensitivity analyses that we used to check the influence of missing data on the conclusions of our main analyses (hint: limited again), or the need to update an exposure that progresses over time (hint: relevant), but also about what it is like to use data collected by others (hint: useful, but not easy).

The paper, titled “Pulmonary dysfunction and development of different cardiovascular outcomes in the general population”, with IP as the first author, can be found here on PubMed or via my Mendeley profile.

predicting DVT with D-dimer in stroke patients: a rebuttal to our letter

Some weeks ago, I reported on a letter to the editor of Thrombosis Research on the question whether D-dimer indeed improves DVT risk prediction in stroke patients.

I was going to write a whole story on how one should not use a personal blog to continue the scientific debate. As you can guess, I ended up writing a full paragraph where I did exactly that. So I deleted that paragraph, and I am going to do something that requires some action from you: I am just going to leave you with the links to the letters and let you decide whether the issues we bring up, and the corresponding rebuttal of the authors, help to interpret the results of the original publication.

How to set up a research group

A couple of weeks ago I wrote down some thoughts I had while writing a paper for the JTH series on Early Career Researchers. I was asked to write how one sets up a research group, and the four points I described in my previous post can be recognised in the final paper.

But I also added some reading tips to the paper. Reading on a particular topic helps me not only to learn what is written in the books, but also to get my mind into a certain mindset. So, when I knew that I was going to take over a research group in Berlin, I read a couple of books, both fiction and non-fiction. Some were about Berlin (e.g. Cees Nooteboom’s Berlijn 1989/2009), some were focussed on academic life (e.g. Porterhouse Blue). They helped get my mind in a certain gear and prepared me for what was coming. In that sense, my bookcase says a lot about me.

The number one on the list of recommended reads is the standard management best-sellers, as I wrote in the text box:

// Management books There are many titles that I can mention here; whether it is the best-seller Seven Habits of Highly Effective People or any of the smaller booklets by Ken Blanchard, I am convinced that reading some of these texts can help you in your own development as a group leader. Perhaps you will like some of the techniques and approaches that are proposed and decide to adopt them. Or, like me, you may initially find yourself irritated because you cannot envision the approaches working in the academic setting. If this happens, I encourage you to keep reading because even in these cases, I learned something about how academia works and what my role as a group leader could be through this process of reflection. My absolute top recommendation in this category is Leadership and Self-Deception: a text that initially got on my nerves but in the end taught me a lot.

I really think that is true. You should not only read books that you agree with, or whose story you enjoy. Sometimes you can like a book not for its content but for the way it makes you question your own preexisting beliefs and habits. But it is true that this sometimes makes it difficult to actually finish such a book.

Next to books, I am quite into podcasts, so I also wrote:

// Start up. Not a book, but a podcast from Gimlet media about “what it’s really like to get a business off the ground.” It is mostly about tech start-ups, but the issues that arise when setting up a business are in many ways similar to those you encounter when you are starting up a research group. I especially enjoyed seasons 1 and 3.

I thought about including the sponsored podcast “Open for Business” from Gimlet Creative, as it touches upon some very relevant aspects of starting something new. But for me the jury is still out on the “sponsored podcast” concept – it is branded content from Amazon, and I am not sure to what extent I like that. For now, I do not like it enough to include it in the list in my JTH paper.

The paper is not online yet due to the summer break, but I will provide a link asap.

– update 11.10.2016 – here is a link to the paper. 





Does d-dimer really improve DVT prediction in stroke?


Good question, and even though thromboprophylaxis is already given according to guidelines in some countries, I can see the added value of a prediction rule with good discrimination. Especially finding those patients with low DVT risk might be useful. But whether to use d-dimer for this is a whole other question. To answer it, a thorough prediction model needs to be set up both with and without d-dimer, and only a direct comparison of these two models will provide the information we need.

In our view, that is not what the paper by Balogun et al did. And after critical appraisal of the tables and text, we found some inconsistencies that prohibit the reader from understanding what exactly was done and which results were obtained. In the end, we decided to write a letter to the editor, especially to prevent other readers from mistakenly adopting the authors’ conclusion, namely that “D-dimer concentration within 48 h of acute stroke is independently associated with development of DVT. This observation would require confirmation in a large study.” Our opinion is that the data from this study need to be analysed properly to justify such a conclusion. One of the key elements in our letter is that the authors never compare the AUC of the model with and without d-dimer. This is needed, as that comparison would provide the bulk of the answer to whether or not d-dimer should be measured. The only clue we have is the ORs of d-dimer, which range between 3 and 4 – not really impressive when it comes to diagnosis and prediction. For more information on this, please check the paper by Pepe et al on the misuse of the OR as a measure of interest for diagnosis/prediction.
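To see why ORs of 3-4 say little about discrimination, consider a hypothetical binary marker (the numbers are invented for illustration, not taken from the Balogun et al data). For a binary marker, the AUC equals the probability that a random case ranks above a random control, counting ties as half:

```python
def odds_ratio(p_cases, p_controls):
    """Odds ratio of a binary marker, given its prevalence
    among cases and among controls."""
    return (p_cases / (1 - p_cases)) / (p_controls / (1 - p_controls))

def auc_binary_marker(p_cases, p_controls):
    """AUC of a binary marker: P(random case is marker-positive while
    a random control is negative), plus half the probability of a tie."""
    ties = p_cases * p_controls + (1 - p_cases) * (1 - p_controls)
    return p_cases * (1 - p_controls) + 0.5 * ties

# A marker present in 50% of cases and 20% of controls has an OR of 4,
# yet the AUC works out to only 0.65.
```

So even a fourfold odds ratio lifts the AUC from 0.5 to only about 0.65, which is exactly the point Pepe et al make.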

A final thing I want to mention is that our letter was the result of a mini-internship of one of the students in the Master programme at the CSB and was drafted in collaboration with our Virchow scholar HGdH from the Netherlands. Great teamwork!

The letter can be found on the website of Thrombosis Research as well as on my Mendeley profile.


Cardiovascular events after ischemic stroke in young adults (results from the HYSR study)


The collaboration with the group in Finland has turned into a nice new publication, with the title

“Cardiovascular events after ischemic stroke in young adults”

This work, with data from Finland, was primarily done by KA and JP. KA came to Berlin to learn some epidemiology with the aid of the Virchow scholarship, which is where we came in. It was great to have KA as part of the team, and even better to have been working with their great data.

Now onto the results of the paper: as in the RATIO follow-up study, the risk of recurrence after young stroke remained elevated for a long time after the event in this analysis of the Helsinki Young Stroke Registry. But unlike the RATIO paper, this dataset had more information on the patients, for example the TOAST criteria. This means we were able to identify that the group with large artery atherosclerosis (LAA) had a very high risk of recurrence.

The paper can be found on the website of Neurology, or via my mendeley profile.

Pregnancy loss and risk of ischaemic stroke and myocardial infarction


Together with colleagues I worked on a paper on the relationship between pregnancy, its complications, and stroke and myocardial infarction in young women, which just appeared online on the BJH website.

The article, which analyses data from the RATIO study, concludes that only women with multiple pregnancy losses have an increased risk of stroke (OR 2.4) compared with those who never experienced a pregnancy loss. The work was mainly done by AM and is a good example of an international collaboration in which we benefitted from the expertise of all team members.

The article, with the full title “Pregnancy loss and risk of ischaemic stroke and myocardial infarction” can be found via PubMed, or via my personal Mendeley page.

Statins and risk of poststroke hemorrhagic complications

Easter brought another publication, this time with the title

“Statins and risk of poststroke hemorrhagic complications”

I am very pleased with this paper, as it demonstrates two important aspects of my job. First, I was able to share my thoughts on comparing current users vs never users. As has been argued before (e.g. by the group of Hernán) and as articulated in a letter to the editor I wrote with colleagues from Leiden, such a comparison brings an inherent survival bias: you are comparing never users (i.e. those without an indication) with current users (those who have the indication, can handle the side effects of the medication, and stay alive long enough to be enrolled into the study as users). This matters, of course, only if you want to test the effect of statins, not if you are interested in the mere predictive value of being a statin user.

The second thing about this paper is the way we were able to use data from the VISTA collaboration, a large pool of data from previous stroke studies (RCTs and observational). I believe such ways of sharing data bring science forward. Should all data be shared online for all to use? I am not sure about that, but the easy-access model of the VISTA collaboration (which includes data maintenance, harmonization, etc.) is certainly appealing.

The paper can be found here, and on my mendeley profile.


– update 1.5.2016: this paper was the topic of a comment in the @greenjournal. See also their website.

update 19.5.2016: this project also led to first author JS being awarded the young researcher award at ESOC2016.



Causal Inference in Law: An Epidemiological Perspective


Finally, it is here. The article I wrote together with WdH, MZ and RM was published in the European Journal of Risk and Regulation last week. And boy, did it take time! This whole project, an interdisciplinary effort in which epidemiological thinking was applied to questions of causal inference in tort law, took more than 3 years – with only a couple of months of writing… the rest was waiting and waiting and waiting, and some peer review. But more on that later.

First, some content. In the article we discuss the idea of proportional liability, which adheres to the epidemiological concept of multi-causality. But the article is more: as this is a journal for non-epidemiologists, we also provide a short and condensed overview of study design, bias, and other epidemiological concepts such as counterfactual thinking. You might have recognised the theme from my visits to the Leiden Law School for some workshops. The EJRR editorial describes it as: “(…) discuss the problem of causal inference in law, by providing an epidemiological viewpoint. More specifically, by scrutinizing the concept of the so-called “proportional liability”, which embraces the epidemiological notion of multi-causality, they demonstrate how the former can be made more proportional to a defendant’s relative contribution in the known causal mechanism underlying a particular damage.”

Getting this thing published was tough: the quality of the peer review was low (dare I say zero?), communication was difficult, the submission system was flawed, etc. But most of all, the editorial office was slow – the first submission was in June 2013! This could be a non-medical-journal thing, I do not know, but still: almost three years. And all this for an invited article that was planned to be part of a special edition on the link between epidemiology and law, which never came. Due to several delays (surprise!) of the other articles for this edition, it was decided that our article would not wait for the special edition any longer. Therefore, our cool little insight into epidemiology now seems lost between all those legal and risk-regulation articles. A shame if you ask me, but I am glad that we are not waiting any longer!

Although I do love interdisciplinary projects, and I think the result is a nice one, I do not want to go through this process again. No more EJRR for me.

Oh, one more thing… the article is behind a paywall and I do not have access through my university, nor did the editorial office provide me with a link to a pdf of the final version. So, to be honest, I don’t have the final article myself! Feels weird. I hope EJRR will provide me with a pdf quite soon. In the meantime, anybody with access to this article, please feel free to send me a copy!

Where Have All the Rodents Gone? The Effects of Attrition in Experimental Research on Cancer and Stroke



We published a new article in PLOS Biology today, with the title:

“Where Have All the Rodents Gone? The Effects of Attrition in Experimental Research on Cancer and Stroke”

This is a wonderful collaboration between three fields: statistics, epidemiology, and lab research. Together we took a look at what is called attrition in preclinical labs, that is, the loss of data in animal experiments. This could be because the animal died before the needed data could be obtained, or just because a measurement failed. This loss of data can be translated to the concept of loss to follow-up in epidemiological cohort studies, and from that field we know that it can lead to a substantial loss of statistical power and perhaps even bias.

But it was unknown to what extent this is also a problem in preclinical research, so we did two things. We looked at how often papers indicated there was attrition (with an alarming number of papers that did not provide the data for us to establish whether there was attrition), and we ran simulations of what happens when there is attrition in various scenarios. The results paint a clear picture: both the loss of power and the bias are substantial. Their degree of course depends on the attrition scenario, but the message of the paper is clear: we should be aware of the problems that come with attrition, and reporting on attrition is the first step in minimising this problem.
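To build some intuition, here is a toy simulation in the spirit of the paper – the numbers and scenarios are invented for illustration, not taken from the actual analyses. A hypothetical two-arm experiment loses two of ten treated animals, either at random or because the worst responders die before measurement:

```python
import random
import statistics

random.seed(42)

def experiment(n=10, effect=1.0, drop_biased=False, n_sims=2000):
    """Average estimated treatment effect after attrition in the treated arm."""
    estimates = []
    for _ in range(n_sims):
        control = [random.gauss(0, 1) for _ in range(n)]
        treated = [random.gauss(effect, 1) for _ in range(n)]
        if drop_biased:
            # outcome-dependent attrition: the two worst-scoring treated
            # animals are lost, e.g. they die before measurement
            treated = sorted(treated)[2:]
        else:
            # random attrition: two animals lost for unrelated reasons
            random.shuffle(treated)
            treated = treated[2:]
        estimates.append(statistics.mean(treated) - statistics.mean(control))
    return statistics.mean(estimates)

print(experiment(drop_biased=False))  # ~1.0: unbiased, just noisier
print(experiment(drop_biased=True))   # clearly above 1.0: effect overestimated
```

Random attrition leaves the effect estimate unbiased (it only costs power), while outcome-dependent attrition systematically inflates it – which is exactly why reporting on attrition matters.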

A nice thing about this paper is that it coincides with the start of a new research section in the PLOS galaxy, called “meta-research”: a collection of papers that all focus on how science works, behaves, and can or even should be improved. I can only welcome this, as more projects on this topic are in our pipeline!

The article can be found on pubmed and my mendeley profile.

Update 6.1.16: WOW, what media attention for this one. Interviews with outlets from the UK, US, Germany, Switzerland, Argentina, France, Australia etc., German radio, the Dutch Volkskrant, and a video; more via the corresponding Altmetric page. Also interesting is the post by UD, the lead in this project and chief of the CSB, on his own blog “To infinity, and beyond!”


New article published – Ankle-Brachial Index and Recurrent Stroke Risk: Meta-Analysis

Another publication, this time on the role of the ABI as a predictor of stroke recurrence. This is a meta-analysis, which combines data from 11 studies, allowing us to see that ABI was moderately associated with recurrent stroke (RR 1.7) and vascular events (RR 2.2). Not that much, but it might be just enough to improve some of the risk prediction models available for stroke patients when ABI is incorporated.
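As a side note on the mechanics: a fixed-effect meta-analysis of relative risks like this boils down to averaging the log RRs with inverse-variance weights, recovering each standard error from the reported confidence interval. A minimal sketch – the three studies and their numbers below are invented for illustration, not the actual studies from the paper:

```python
import math

# Hypothetical (RR, 95% CI lower, 95% CI upper) per study
studies = [(1.5, 1.0, 2.3), (2.1, 1.2, 3.7), (1.4, 0.9, 2.2)]

log_rrs, weights = [], []
for rr, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from the CI width
    log_rrs.append(math.log(rr))
    weights.append(1 / se ** 2)                      # inverse-variance weight

pooled = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
print(f"pooled RR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se_pooled):.2f}"
      f"-{math.exp(pooled + 1.96 * se_pooled):.2f})")
```

A real analysis would also check heterogeneity and possibly use a random-effects model, but the weighting principle is the same.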

This work, the product of the great work of some of the bright students at the CSB (JBH and COL), is a good start in our search for a good stroke recurrence risk prediction model. This will be a major topic in our future research in the PROSCIS study, which is led by TGL. I am looking forward to the results of that study, as better prediction models are needed in the clinic; this is especially true as more precise data and diagnoses might lead to better subgroup-specific risk prediction and treatment.

The article can be found on pubmed and my mendeley profile and should be cited as

Hong J Bin, Leonards CO, Endres M, Siegerink B, Liman TG. Ankle-Brachial Index and Recurrent Stroke Risk. Stroke 2015; : STROKEAHA.115.011321.

First results from the RATIO follow up study

Another article got published today in JAMA Int Med, this time the results from the first analyses of the RATIO follow-up data. For these data, we linked the RATIO study to the Dutch national bureau of statistics (CBS) to obtain 20 years of follow-up on cardiovascular morbidity and mortality. We first submitted a full paper, but later downsized it to a research letter of only 600 words. This means that only the main message (i.e. cardiovascular recurrence is high, persistent over time, and disease specific) is left.

It is a “Leiden publication”, where I worked together with AM and FP from Milano. Most of the credit of course goes to AM, who is the first author of this piece. The cool thing about this publication is that the team worked very hard on it for a long time (data linking and analyses were not an easy thing to do, nor was cutting from 3,000 words to 600 in just a week or so), and that in the end all the hard work paid off. But next to the hard work, it is also nice to see the results being picked up by the media. JAMA Int Med put out an international press release, whereas the LUMC is going to publish its own Dutch version. In the days before the ‘online first’ publication I already answered some emails from writers for medical news sites, some with up to 5.000K views per month. I do not know if you think that’s a lot, but for me it is. Several websites cover this story, and perhaps more are to come; why not just take a look at the Altmetric of this article?

– edit 26.11.2015: a Dutch press release from the LUMC can be found here. – edit: oops, one outlet has published a great report/interview, but used a wrong title: “Repeat MI and Stroke Risks Defined in ‘Younger’ Women on Oral Contraceptives”. Not all women were on OC, of course.

Of course, @JAMAInternalMed tweeted about it


The article, with the full title Recurrence and Mortality in Young Women With Myocardial Infarction or Ischemic Stroke: Long-term Follow-up of the Risk of Arterial Thrombosis in Relation to Oral Contraceptives (RATIO) Study can be found via JAMA Internal Medicine or via my personal Mendeley page.

As I reported earlier, this project is supported by a grant from the LUF den Dulk-Moermans foundation, for which we are grateful.

New article published: the relationship between ADAMTS13 and MI

Plasma ADAMTS13 levels and the risk of myocardial infarction: an individual patient data meta-analysis

This article is a collaboration with a lot of people. Initiated by the Milan group, we ended up with quite a diverse group of researchers to answer this question, because of the method that we used: the individual patient data (IPD) meta-analysis. The best thing about this approach: you can pool the data from different studies while adjusting for potential sources of confounding in a similar manner across studies (given that the data are available, that is). On their own, these studies showed some mixed results. But in the end, we were able to use the combined data to show that there was an increased MI risk, but only for those with very low levels of ADAMTS13. So, here you see the power of IPD meta-analysis!
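The core idea of pooling individual data while keeping studies separate can be shown with the simplest stratified estimator, the Mantel-Haenszel pooled odds ratio – a stand-in for the regression-based IPD analysis, and the counts below are invented for illustration:

```python
# Each tuple holds (a, b, c, d) for one study:
# a/b = exposed cases/controls, c/d = unexposed cases/controls.
studies = [
    (20, 80, 10, 90),
    (15, 45, 12, 48),
    (30, 70, 18, 82),
]

num = den = 0.0
for a, b, c, d in studies:
    n = a + b + c + d
    num += a * d / n   # Mantel-Haenszel numerator term per stratum
    den += b * c / n   # Mantel-Haenszel denominator term per stratum

or_mh = num / den  # pooled OR, stratified by study
print(f"Mantel-Haenszel OR = {or_mh:.2f}")
```

With full individual-level data you can stratify (or model) on any shared covariate, not just study membership – that is precisely what makes IPD pooling more flexible than combining published estimates.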

The credits for this work go primarily to AM, who did a great job of getting all PIs on board, analysing the data and writing a good manuscript. The final version is not online yet, but you can find the pre-publication on PubMed.



New article published – Conducting your own research: a revised recipe for a clinical research training project


A quick update on a new article that was published on Friday in the NTVG. This article with the title

“Conducting your own research: a revised recipe for a clinical research training project”

– gives a couple of suggestions for young clinicians/researchers on how they should organise their epidemiological research projects. This paper was written to commemorate the retirement of prof JvdB, who wrote the original article back in 1989. I grew quite fond of this article, as it combines insights from 25 years back with quite recent insights (e.g. STROBE and cie Schuyt), and resulted in an article that will help young researchers rethink how they plan and execute their own research projects.

There are 5 key suggestions that form the backbone of this article: limit the research question, conduct a pilot study, write the article before you collect the data, streamline the research process, and be accountable. As the article is in Dutch only at this moment, I will work on an English version. First drafts of this manuscript, each discussing one of the 5 recommendations, might appear on this website. And how about a German version?

Anyway, it has to be mentioned that if it was not for JvdB, this article would never have come to light. Not only because he wrote the original, but mostly because he is one of the most inspiring teachers of epidemiology.

New article published – but did I deserve it?

One of these dots is me standing on a platform waiting for my train!

This website is to keep track of all things that sound ‘sciency’, and so all the papers that I contributed end up here with a short description. Normally this means that I am one of the authors and I know well ahead of time that an article will be published online or in print. Today, however, I got a little surprise: I got notice that I am a co-author on a paper (pdf) which I knew was coming, but I didn’t know that I was a co-author. And my amazement grew even more the moment that I discovered that I was placed as the last author, a place reserved for senior authorship in most medical journals.

However, there is a catch… I had to share my ‘last authorship’ position with 3,186 others, an unprecedented number!

You might have guessed that this is not just a normal paper and that there is something weird going on here. Well, weird is not the right word; unusual is the word I would like to use, since this paper is an example of something that I hope will happen more often: citizen science. Citizen science is where ordinary people, without any particular background or training, help in a scientific experiment of some sort by helping just a little to obtain the data after some minimal instruction. This is wonderfully exemplified by the iSpex project, where I contributed not as an epidemiologist, but as a citizen scientist. If you want to know more, just read what I have written previously on this blog in the post ‘measuring aerosols with your iPhone’.

So the researchers who initiated the iSpex project have now analysed their data and submitted the results to the journal Geophysical Research Letters, and as a bonus made all contributing citizen scientists co-authors. Cool!

Now let’s get back to the question stated in the title… Did I deserve an authorship on this paper? Basically, no: none of the 3,187 citizen scientists fulfil the criteria for authorship that I am used to (i.e. ICMJE), nor the criteria of the journal itself. I am no exception. However, I do believe that it is quite clear to any reader what the role of these citizen scientists was in this project. So this new form of authorship, i.e. ‘gift authorship to a group of citizen scientists’, is a cool way to keep the public engaged with science. A job well done!

New publication “Graphical presentation of confounding in directed acyclic graphs”


A new publication became available, again an ‘educational’. However, this time the topic is new: the application of directed acyclic graphs (DAGs), a technique widely used in different areas of science. Ranging from computer science and mathematics to psychology, economics and epidemiology, this specific type of graph has been shown to be useful for describing the underlying causal structure of the mechanisms of interest. This comes in very handy, since it can help to determine the sources of confounding for a specific epidemiological research question.

But isn’t that what epidemiologists do all the time? What is new about these graphs, except for fancy concepts such as colliders, edges, and backdoor paths? Well, the idea behind DAGs is not new; there have been diagrams in epidemiology for years, but each epidemiologist has his or her own specific way of drawing the relationships between the various factors. Did you ever get stuck in a discussion about whether something is a confounder or not? If you don’t get it resolved by talking, you might want to draw out your point of view in a diagram, only to find that your colleague is used to a different way of drawing epidemiological diagrams. DAGs resolve this. There is a clear set of rules that each DAG should comply with, and if it does, it provides a clear overview of the sources of confounding and identifies the minimal set of variables to account for all confounding present.

So that’s it… DAGs are a nifty method for speaking the same idiom while discussing the causal questions you want to resolve. The only thing that you and your colleague can now fight over is the validity of the assumptions encoded in the DAG you just drew. And that is called good science!
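To make the backdoor idea concrete, here is a toy simulation (my own illustration, not from the paper) of the simplest confounding DAG, X ← C → Y, with no arrow from X to Y. The crude X–Y association is non-zero, but adjusting for C, the variable that blocks the backdoor path, removes it:

```python
import random

random.seed(1)

def slope(xs, ys):
    """OLS slope of ys regressed on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

n = 20000
# DAG: C -> X and C -> Y, no arrow X -> Y.
C = [random.gauss(0, 1) for _ in range(n)]
X = [c + random.gauss(0, 1) for c in C]
Y = [c + random.gauss(0, 1) for c in C]

crude = slope(X, Y)  # confounded: ~0.5 although X has no effect on Y

# Adjust for C by residualising X and Y on C (Frisch-Waugh), which
# blocks the backdoor path; the adjusted slope is ~0.
rx = [x - slope(C, X) * c for x, c in zip(X, C)]
ry = [y - slope(C, Y) * c for y, c in zip(Y, C)]
adjusted = slope(rx, ry)

print(f"crude: {crude:.2f}, adjusted for C: {adjusted:.2f}")
```

The DAG tells you in advance that {C} is the minimal adjustment set here; the simulation merely confirms what the graph already encodes.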

The paper, with first author MMS, appeared in the methodology series of the journal Nephrology Dialysis Transplantation; it can be found here in PDF, and also on my mendeley account.

New publication in NTVG: Mendelian randomisation

Together with HdH and AvHV I wrote an article on Mendelian randomisation for the Dutch NTVG in the Methodology series, which was published online today. This is not the first time: I have written in the NTVG before for this up-to-date series (not 1 but 2 papers on the crossover design), and I have also written on Mendelian randomisation before. In fact, that was one of the first ‘educationals’ I ever wrote. The weird thing is that I have never formally applied Mendelian randomisation analyses in a paper. I did apply the underlying reasoning in a paper, but no two-stage least squares analyses or similar. Does this bother me? Only a bit, but I think this just shows the limited value of formal Mendelian randomisation studies: you need a lot of power and untestable assumptions, which greatly reduces the applicability of this method in practice. However, the underlying reasoning gives a good insight into the origin and effects of confounding (and perhaps even other forms of bias) in epidemiological studies. That’s why I love Mendelian randomisation; it is just another tool in the epidemiologist’s toolbox.
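For those curious what such a formal analysis looks like, here is a toy sketch with invented data (my own illustration, not from the article): a genetic variant G influences the exposure but affects the outcome only through it, so the ratio of the G–outcome and G–exposure slopes (the Wald estimator, the simplest form of two-stage least squares) recovers the causal effect even in the presence of an unmeasured confounder:

```python
import random

random.seed(7)

def slope(xs, ys):
    """OLS slope of ys regressed on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

n = 50000
true_effect = 0.3
G = [random.choice([0, 1, 2]) for _ in range(n)]   # genotype: the instrument
U = [random.gauss(0, 1) for _ in range(n)]         # unmeasured confounder
X = [0.5 * g + u + random.gauss(0, 1) for g, u in zip(G, U)]        # exposure
Y = [true_effect * x + u + random.gauss(0, 1) for x, u in zip(X, U)]  # outcome

naive = slope(X, Y)                # confounded by U: overestimates the effect
wald = slope(G, Y) / slope(G, X)   # instrumental-variable (Wald) estimate: ~0.3

print(f"naive OLS: {naive:.2f}, Mendelian randomisation (Wald): {wald:.2f}")
```

The large n hints at the power problem: the Wald estimate is far noisier than the naive one, and the whole exercise rests on the untestable assumption that G affects Y only via X.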

The NTVG paper can be found here on their website (here in pdf) and also on my mendeley account.

New article: the intrinsic coagulation proteins and the risk of arterial thrombosis

I got good news today! A manuscript on the role of the intrinsic coagulation factors in the causal mechanisms leading to myocardial infarction and ischaemic stroke has been accepted for publication by the JTH. It took some time, but in the end I’m very glad that this paper was published in the JTH, because its readership is both clinical and biomedical: just the place where I feel most at home.

The basic message? These factors do contribute to the risk of ischaemic stroke, but not to the risk of myocardial infarction. This is mostly the case for coagulation factor XI, which is a nice finding, because it could be a new target for anti-thrombotic therapies.

The article is now in print and will be made available soon. In the meantime, you can refer to my thesis, in which this research is also described.

New publication: LTTE in the American Journal of Epidemiology

At the department of Clinical Epidemiology of the LUMC we have a continuous course/journal club in which we read epi literature and books in a nice little group. The group, called Capita Selecta, has a nice website which can be found here. Some time ago we read an article that proposed to include dormant Mendelian randomisation studies in RCTs, to figure out the causal pathways of a treatment for chronic diseases. This could be most helpful when there is a discrepancy between the expected effect and the observed effect. During the discussion of this article we did not agree with the authors, for several reasons. We (AGCB, IP and myself) decided to write a LTTE with these points. The journal was nice enough to publish our concerns, together with a response by the authors of the original article. The PDF can be found via the links below, which will take you to the website of the American Journal of Epidemiology. The PDF of our LTTE can also be found on my mendeley profile.

original article
letter to the editor
response by the author

New publication in NTVG: patient crossover studies

Recently another paper became available online. Although accepted a couple of months ago and not yet in print, the paper on patient crossover studies can now be read and downloaded from the NTVG website. This paper, with first author REJR, is a continuation of the paper on crossover trials on which I’ve blogged earlier. Together, these articles provide a comprehensive overview of the possibilities of using a study subject as its own control.

New article published: review on obesity and venous thrombosis

Together with colleagues I worked on a review of the role of obesity as a risk factor for venous thrombosis. I’m second author on the article, which came online last week; most of the work was done by SKB from Norway, who is visiting our department for a full year.

The article is written from an epidemiological point of view and discusses several points that are worth mentioning here. First of all, obesity is an ill-defined concept: are we only talking BMI, or do other measures of obesity also need to be taken into account? Second, even when defined, the results are not always easy to interpret. In causal research there are a couple of conditions that need to be fulfilled before one can answer the question whether something is a risk factor for disease. For example, BMI can be reduced by means of exercise, diet or disease, which all three have completely different effects on thrombosis risk. We discuss all these epidemiological problems, together with the existing body of evidence, in the new article in Seminars in Thrombosis and Hemostasis. These questions are not only important for our understanding of thrombotic disease, but also for grasping the causal role of obesity in (cardiovascular) disease. This research question has in the past couple of years been put on the research agenda of the NEO study, on which perhaps more in the future.

The article, with the full title “Role of Obesity in the Etiology of Deep Vein Thrombosis and Pulmonary Embolism: Current Epidemiological Insights” can be found via PubMed, or via my personal Mendeley page.

The protective effects of statins on thrombosis recurrence: a letter to the editor of the European Heart Journal

Recently, Biere-Safi et al. published the results from their analyses of the PHARMO database describing the relation between statin use and the recurrence of pulmonary embolism (PubMed). This article was the topic of a heated debate at our department: is it really possible that statin use halves the risk of recurrence in this patient group? During this discussion we found some issues that could lead to an overestimation of the underlying true protective effect. We described these issues in a letter to the editor, which has been accepted as an e-letter. Some journals use e-letters to facilitate a faster and more vivid debate after a publication, but unfortunately these e-letters can only be found on the website of the publisher and not, for example, in Web of Science or PubMed. This could mean that these critical parts of the scientific debate have a smaller reach, which is a pity.

Nonetheless, the text of our e-letter can be found on the website of the Eur Heart J, or via my Mendeley account.

Paper published in Arthritis Care & Research now quoted in NTVG

The Arthritis Care & Research paper which I co-authored (PubMed) attracted attention from the guys at the NTVG. This paper, originally a collaboration between the Rheumatology department and the department of Clinical Epidemiology, describes the relationship between BMI as a proxy for obesity and treatment response in patients with rheumatoid arthritis, as is described in the news section of the NTVG website. The text of the news item from the NTVG website can also be read on this website if you ….

Continue reading “Paper published in Arthritis Care & Research now quoted in NTVG”

New article accepted for publication in NTVG

A new article has been accepted by the Nederlands Tijdschrift voor Geneeskunde. The article, with the title “patient crossover studies” or “case-crossover studies”, is an educational in the Methodology series of the journal. REJR is the first author of this article, and she did a great job of explaining the similarities and differences between this observational study design and the experimental version of this within-person comparison. These crossover trials were discussed by TNB and JGvdB in a previous article in the same series, on which I wrote earlier.

Paper published in Arthritis Care & Research

A paper which I co-authored has been indexed in PubMed. This paper is a collaboration between the Rheumatology department and the department of Clinical Epidemiology. LH and MvdB have done a great job of describing the relationship between BMI as a proxy for obesity and treatment response in patients with rheumatoid arthritis.

Ref: Heimans L, van den Broek M, le Cessie S, Siegerink B, Riyazi N, Han KH, Kerstens PJSM, Huizinga TWJ, Lems WF, Allaart CF. High BMI is associated with decreased treatment response to combination therapy in recent onset RA patients – a subanalysis from the BeSt study. Arthritis Care & Research. 2013

Article published in NTVG on crossover study

Today, an educational article on crossover studies, written by TNB, JGvdB and myself, was published in the NTVG. The article appeared in the methodology series, which explains specific topics for the general physician: it explains the basic concepts of the crossover trial, but also advocates its statistical efficiency, as can be seen in the graph above. The article is published under open access and is therefore freely accessible. There is a catch… it’s published in Dutch.
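The efficiency argument can be illustrated with a toy simulation (my own numbers, not the graph from the article): when between-person variation is large, a crossover design, in which each subject serves as his own control, yields a far more precise effect estimate than a parallel-group design with the same number of subjects:

```python
import random
import statistics

random.seed(3)

def effect_sd(crossover, n=20, effect=0.5, sims=2000):
    """Empirical SD of the estimated treatment effect for one design."""
    estimates = []
    for _ in range(sims):
        base = [random.gauss(0, 2) for _ in range(n)]  # stable per-person level
        if crossover:
            # each subject gets both treatment and control;
            # the between-person baseline cancels in the difference
            diffs = [(b + effect + random.gauss(0, 1))
                     - (b + random.gauss(0, 1)) for b in base]
            estimates.append(statistics.mean(diffs))
        else:
            # parallel groups: n treated subjects vs n different controls
            base2 = [random.gauss(0, 2) for _ in range(n)]
            treated = [b + effect + random.gauss(0, 1) for b in base]
            control = [b + random.gauss(0, 1) for b in base2]
            estimates.append(statistics.mean(treated) - statistics.mean(control))
    return statistics.stdev(estimates)

print(f"parallel SD:  {effect_sd(crossover=False):.2f}")
print(f"crossover SD: {effect_sd(crossover=True):.2f}")
```

The larger the between-person spread relative to the measurement noise, the bigger the gain; this is exactly why within-person comparisons can get away with far fewer subjects.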

More information on my publications can be found on this website, and an up-to-date list of publications can be found on my Mendeley profile.