Impact of your results: Beyond the relative risk

I wrote about this in an earlier post: JLR and I published a paper in which we explain that a single relative risk, irrespective of its form, is just not enough. Some crucial elements go missing in this dimensionless ratio. The RR lets us forget about the size of the denominator, the clinical context, and the crude binary nature of the outcome. So we have provided some methods and ways of thinking to go beyond the RR in a tutorial published in RPTH (now in early view). The content and message are nothing new for those trained in clinical research (one would hope). Even those without formal training will have heard most of the concepts discussed in a talk or poster. But with all these concepts in one place, together with an explanation of why they provide a tad more insight than the RR alone, we hope to trigger young (and older) researchers to think about whether one of these measures would be useful. Not for them, but for the readers of their papers. The paper is open access (CC BY-NC-ND 4.0), and can be downloaded from the website of RPTH, or from my mendeley profile.

How you quantify the impact of your results matters. A bit.

This is not about altmetrics. Nor is it about the emails you get from colleagues or patients. It is about the impact of a certain risk factor. A single relative risk is close to meaningless: as a ratio it is dimensionless, and without the context hidden in the numerator and denominator it can be tricky to interpret results. Together with JLR I have a paper coming up in which we plead for using one of the many measures that go beyond the simple relative risk. This will be published in RPTH, the relatively new journal of the ISTH, where I also happen to be on the editorial board.
Venn diagram illustrating the intersections of the independent predictors and poor outcome 12 months after stroke.
One of those ways is to report the population attributable risk: the percentage of cases that can be attributed to the risk factor in question. It is often explained like this: if we had a magic wand and used it to make the risk factor disappear, X% of the patients would not develop the disease. Some interpret this as the causal fraction, which is not completely correct if you dive really deep into epidemiological theory, but still, you get the idea. In a paper based on PROSCIS data, with first author CM at the helm, we tested several ways to calculate the PAR of five well-known and established risk factors for bad outcome after stroke. Understanding why one patient has a bad outcome and another does not is one of the things we really struggle with, as many patients with well-established risk factors just don't develop a poor outcome. Quantifying the impact of risk factors, and arguably more importantly, ranking them, is a good tool to help MDs, patients, researchers and public health officials decide where to focus. However, when we compared the PARs calculated by different methods, we found quite some variation. The details are in the table below, but the bottom line is this: it is not a good sign when your results depend on the method, as similar methods should give similar results. But upon closer inspection (and somewhat reassuringly) the order of magnitude as well as the ranking of the 5 risk factors stays roughly the same.
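To make the method-dependence concrete, here is a minimal Python sketch of two textbook PAR formulas that can legitimately disagree once adjusted estimates are involved. The numbers are made up for illustration, not the PROSCIS estimates.

```python
# Two classic ways to compute a population attributable risk for a single
# binary risk factor. They coincide for a crude RR, but behave differently
# once the RR is adjusted for confounders.

def levin_par(prevalence, rr):
    """Levin's formula: uses the exposure prevalence in the whole population."""
    return prevalence * (rr - 1) / (prevalence * (rr - 1) + 1)

def miettinen_par(case_prevalence, rr):
    """Miettinen's formula: uses the exposure prevalence among the cases,
    and remains valid with a confounder-adjusted RR."""
    return case_prevalence * (rr - 1) / rr

rr = 2.0        # relative risk of poor outcome for the exposed (hypothetical)
p_pop = 0.30    # 30% of the population exposed (hypothetical)
p_cases = 0.45  # 45% of the cases exposed (hypothetical)

print(round(levin_par(p_pop, rr), 3))       # -> 0.231
print(round(miettinen_par(p_cases, rr), 3)) # -> 0.225
```

Close here, but with stronger confounding or a rarer exposure the two can drift apart, which is exactly the kind of variation we observed.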
So, yes, it is possible to measure the impact of your results. These measures do depend on the method used, which in itself is somewhat worrying, but given that we don't have a magic wand that we expect to remove a fraction of the disease with two-decimal precision, the PAR is a great tool to get some more grip on the context of the RR. The paper was published in PLOS One and can be found on their website or on my mendeley profile. PS This paper is one of the first papers with patient data for which we provided the data together with the manuscript. From the paper: “The data that support the findings of this study are available to all interested researchers at Harvard Dataverse (” Nice.

Teaching award from the German society for epidemiology

Teaching an interactive session on study design at the ESOC 2018 summer school

The German society for epidemiology has an annual teaching award, i.e. the “Preis für exzellente Lehre in der Epidemiologie”. From their website:

“The award is intended to honor outstanding achievements or above-average commitment in the teaching of epidemiology. (…) Eligible are innovative, original or sustainable offerings, as well as particularly high personal dedication to teaching.”

In short, anything goes in terms of format, innovation, personal commitment etc. However, there is a trick: only students can nominate you. So what happened? My students nominated me for my “overall teaching concept”. Naturally, the DGEpi wondered what that teaching concept actually was and asked me to provide some more information. So I took that opportunity and actually described what and why I teach, to see what the actual concept behind this all is. Here is the result.

The bottom line is simple: I think you learn best not just by reading a book, but by doing: helping to organize and to teach in various epi-related activities. You need to get exposed to several formats with different people. So I have helped set up a plethora of activities through which students can learn epidemiology in different ways and on different levels: reading classics, discussing in weekly journal clubs, using popular science books in book clubs, but also organizing platforms for discussion, interaction and inspiration (yes, I am talking about BEMC). The most important thing might be that students should learn the basics of epidemiology, even if they might not need them for their own research projects. This is especially true for medical students who want to learn about clinical research.

Last week I learned that the award was in the end given to me. Of course I am honored on a personal level, and this honor needs to be extended to my mentors. But I also take this award as an indication that the recent and increasing Berlin-based epi activities I helped to organize together with epi enthusiasts at the IPH, iBIKE and QUEST did not go unnoticed by the German epidemiological community.

I will pick up the prize in Bremen at the yearly conference of the DGEpi. See you there?

Cerebral microbleeds and interaction with antihypertensive treatment in patients with ICH; a tale of two rejected letters

ICH is not my topic, but as we were preparing for the ESO Summer School I explored areas of stroke research that were, for me, as yet untouched. That brought me to this paper by Shoamanesh et al in JAMA Neurology, which investigates a potential interaction between CMB and the treatment at hand in relation to outcome in patients with ICH. Their conclusion: no interaction. The paper is easy to read and at first glance has some convincing data, but then I realized some elements are just not right:

  • the outcome is not rare, yet a logistic model is used to estimate the relative risk
  • interaction is assessed on the multiplicative scale, even though adding variables to the model can change the interaction estimate due to the non-collapsibility of the OR
  • the underlying clinical question of interaction is arguably better answered with an analysis of additive interaction.
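To illustrate the third point, here is a minimal Python sketch of the relative excess risk due to interaction (RERI), a common measure of additive interaction. The odds ratios are hypothetical, not those from the paper.

```python
# Additive interaction via RERI from a 2x2 exposure layout.
# or10: factor A only, or01: factor B only, or11: both factors,
# each relative to the doubly unexposed reference group (OR00 = 1).
# ORs stand in for RRs here, which is only reasonable when the outcome is rare.

def reri(or10, or01, or11):
    """RERI = OR11 - OR10 - OR01 + 1; RERI > 0 suggests super-additivity."""
    return or11 - or10 - or01 + 1

# made-up numbers for illustration: CMB only, intensive treatment only, both
print(round(reri(or10=1.5, or01=0.8, or11=1.6), 2))  # -> 0.3
```

The point of the letter was simply that readers cannot compute something like this themselves unless the stratum-specific estimates are reported.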

I decided to write a letter to the editor. Why? Well, in addition to the methodological issues mentioned above, the power of the analyses was quite low, and a conclusion of “no effect” based on a p value >0.05 with low power is in itself a problem. Do I expect a massive shift in how I would interpret the data had they analysed the data differently? I don’t think so, especially as the precision of any quantification of additive interaction will be quite low. But that is not the main issue here: the way the data were presented does not allow the reader to assess additive interaction. So my letter focused on that: suggesting to present the data in a slightly different way, so that we can discuss whether the conclusions as drawn by the authors still hold. Then, and only then, do we get the full picture of the value of CMB in treatment decisions. The thing is that we will then realize that the full picture is actually not the full picture, as the data are quite limited and imprecise and more research is required before strong conclusions can be drawn.

But the letter was rejected by JAMA Neurology because of space limitations and priority. I didn’t appeal. The same happened when I submitted an edited version of the letter to Neuro-epidemiology. I didn’t appeal. In the meantime, I’ve contacted the corresponding author, but he did not get back to me. So now what? PubMed Commons died. PubPeer is, to my taste, too much focused on catching image fraud, even though they do welcome other types of contributions. I know my comments are only interesting for the methodologically inclined, and in the greater scheme of things their value is limited. I also understand space limitations when it comes to print, but how about online? Anyway, there are a lot of reasons why things happened the way they happened. But somebody told me that if it was important enough to write a letter, it is important enough to publish it somewhere. So here I am, posting my initial letter on my own website, which almost certainly means that no single reader of the original paper will find out about these comments.

Post publication peer review ideas anybody?

The original paper can be found here, on the website of JAMA Neurology.

My letter can be found here: CMB and intense blood pressure lowering in ICH: is there an additive effect

FVIII, Protein C and the Risk of Arterial Thrombosis: More than the Sum of Its Parts.


Peer review is not a pissing contest. Peer review is not about finding the smallest of errors and delaying publication because of them. Peer review is not about being right. Peer review is not about rewriting the paper under review. Peer review is not about asking for yet another experiment.


Peer review is about making sure that the conclusions presented in the paper are justified by the data presented and peer review is about helping the authors get the best report on what they did.

At least, that is what I try to remind myself of when I write my peer review reports. So what happened when I reviewed a paper presenting data on the two hemostatic factors protein C and FVIII in relation to arterial thrombosis? These two proteins are known to interact directly with each other. But does this also translate into a “have both, get extra risk for free” situation when the two risk factors are combined?

There are two approaches to test so-called interaction: statistical and biological. The authors presented one approach, while I thought the other was better suited to analyze and interpret the data. Did that result in an academic battle of arguments, or perhaps a peer review deadlock? No, the authors were civil enough to entertain my rambling thoughts and comments with additional analyses and results, and convinced me in the end that their approach had more merit in this particular situation. The editor of Thrombosis and Haemostasis saw all this going down and agreed with my suggestion of an accompanying editorial on this topic, to help the readers understand what actually happened during the peer review process. The nice thing is that the editor asked me to write that editorial, which can be found here; the paper by Zakai et al can be found here.

All this taught me a thing or two about peer review: cordial peer review is always better (duh!) than a peer review street brawl, and sharing aspects of the peer review process can help readers understand a paper in more detail. Open peer review, especially the variants where peer review is not anonymous and reports are open to readers after publication, is a way to foster both practices. In the meantime, this editorial will have to do.


New paper: External defibrillator use by bystanders and patient outcomes

Main analyses showing the effect of AED use on several endpoints

In this paper, together with researchers from Harvard and the Institute of Public Health at the Charité, we used data from the CARES dataset to answer some questions regarding the use of automated external defibrillators (AEDs) in the United States.

It is known from previous studies that AED use improves the clinical outcome of those who are treated with one. Less known is whether AEDs administered by untrained bystanders have a similarly beneficial effect, especially because

1) so-called neighborhood characteristics have not been taken into account in previous analyses and

2) it is difficult to find the right control group.

This paper focuses on these two aspects by taking neighborhood characteristics into account and using so called “negative controls” (i.e. patients who were treated with AED but did not have a shockable rhythm).

I had a lot of fun in this project: I like it when my skills are helpful outside of the fields I usually work in. Not only does it allow me to see how research methodology is applied in different fields, it also helps me understand my own field much better. After all, both AED and STEMO are methods that aim to deliver treatment to a patient as soon as possible, in fact “pre-hospitalisation”. If only a CT scanner could be that small… or can it…

The main heavy lifting on this publication has been done by LWA. Thanks for letting me join the adventure!

The paper can be found on pubmed, and on my mendeley profile

BEMC has a Journal Club now


After a year of successful BEMC talks and seeing the BEMC grow, it was time for something new. We are starting a new journal club within the BEMC community, purely focussed on methods. The text below describes what we are going to do, starting in February. (text comes from the BEMC website)

BEMC is trying something new: a journal club. In February, we will start a monthly journal club to accompany the BEMC talks, as an experiment. The format is subject to change, as we will adapt once we gain more experience in what works and what does not. For now, we are thinking along the following lines:

Why another journal club?

Aren’t we already drowning in journal clubs? Perhaps, but not in this kind of journal club. The BEMC JClub is focussed on the methods of clinical research. Many epidemiologically inclined researchers work at departments that are not focussed on methodology, but rather on a disease or field of medicine. This is reflected in the topics of the different journal clubs around town. We believe there is a need for a methods journal club in Berlin. Our hope is that the BEMC JClub fulfills that need through interdisciplinary and methodological discussions of the papers that we read.

Who is going to participate?

First of all, please remember that the BEMC community is focussed on researchers with a medium to advanced epidemiological knowledge and skill set. This is not only true for our BEMC talks, but also for our JClub.

Next to this, we hope that we will end up with a good group that reflects the BEMC community. This means that we are looking for a group with a nice mix of backgrounds and experience. So if you think you have a unique background and focus in your work, we highly encourage you to join us and make our group as diverse as possible. We strive for this diversity as we do not want the JClub sessions to become echo chambers or teaching sessions, but truly discussions that promote knowledge exchange between methodologists from different fields.

What will we read?

Anything that is relevant for those who attend. The BEMC team will ultimately determine which papers we will read, but we are nice people and listen carefully to the suggestions of regulars. Sometimes we will pick a paper on the same (or related) topic of the BEMC talk of that month.

Even though the BEMC team has the lead in the organisation, the content of the JClub should come from everybody attending. Everybody that attends the Jclub is asked to provide some points, remarks or questions to jumpstart the discussion.

What about students?

Difficult to say. The BEMC JClub is not designed to teach medical students the basics of epidemiology. Then again, everybody who is smart, can keep up and can contribute to the discussion is welcome.

Are you a student and in doubt whether the BEMC JClub is for you? Just send us an email.

Where? When?

Details like this can be found on the BEMC JClub website. Just click here.

New paper: pulmonary dysfunction and CVD outcome in the ELSA study

This is a special paper to me, as it is 100% the product of my team at the CSB. Well, 100%? Not really. This is the first paper from a series of projects in which we work with open data, i.e. data collected by others who subsequently shared it. A lot of people talk about open data, and how all data created should be made available to other researchers, but not a lot of people talk about using that kind of data. For that reason we have picked a couple of data resources to see how easy it is to work with data that was not initially collected by ourselves.

It is hard, as we have now learned. Even though the studies we focussed on (the ELSA study and UK Understanding Society) have a good description of their data and methods, understanding this takes time and effort. And even after putting in all that time and effort you might still not know all the little details and idiosyncrasies in the data.

A nice example lies in the exposure that we used in these analyses: pulmonary dysfunction. The data for this exposure were captured in several different datasets, in different variables. Reverse engineering a logical and interpretable concept out of these data points was not easy. This is perhaps also true for data that you collect yourself, but then at least this thinking is more or less done before data collection starts and no reverse engineering is needed.

So we learned a lot. Not only about the role of pulmonary dysfunction as a cause of CVD (hint: it is limited), or about the different sensitivity analyses that we used to check the influence of missing data on the conclusions of our main analyses (hint: limited again), or the need to update an exposure that progresses over time (hint: relevant), but also about what it is like to use data collected by others (hint: useful, but not easy).

The paper, with the title “Pulmonary dysfunction and development of different cardiovascular outcomes in the general population” and with IP as the first author, can be found here on pubmed or via my mendeley profile.

New Masterclass: “Papers and Books”

“Navigating numbers” is a series of Masterclasses initiated by a team of Charité researchers who think that our students should become more familiar with how numbers shape the field of medicine, i.e. both medical practice and medical research. And I get to organize the next in line.

I am very excited to organise the next Masterclass together with J.O., a bright researcher with a focus on health economics. As the full title of the Masterclass is “Papers and Books – series 1 – intended effect of treatments”, some health economics knowledge is a must in this journal club style series of meetings.

But what exactly will we do? This Masterclass will focus on reading some papers as well as a book (very surprising), all with a focus on study design and how to do proper research into the “intended effect of treatment”. I borrowed this term from one of my former epidemiology teachers, Jan Vandenbroucke, as it helps to denote a part of the field of medical research with its own idiosyncrasies, without being limited to one study design.

The Masterclass runs for only 8 meetings, which is not nearly enough to have the students understand all the ins and outs of proper study design. But that is also not the goal: we want to show the participants how one should go about it when the ultimate question in medicine is asked: “should we treat or not?”

If you want to participate, please check out our flyer

New paper: Contribution of Established Stroke Risk Factors to the Burden of Stroke in Young Adults


Just a relative risk is not enough to fully understand the implications of your findings. Sure, if you are an expert in a field, the context of that field will help you assess the RR. But if you are not, the context of the numerator and denominator is often lost. There are several ways to work towards that context. If you have a question that revolves around group discrimination (i.e. questions of diagnosis or prediction), the RR needs to be understood in relation to other predictors or diagnostic variables. That combination is best assessed through the added discriminatory value, such as the AUC improvement, or even fancier methods like reclassification tables and net benefit indices. But if you are interested in a single factor (e.g. in questions of causality or treatment), a number needed to treat (NNT) or the population attributable fraction (PAF) can be used.

The PAF has been the subject of my publications before, for example in these papers where we use the PAF to provide the context for the different ORs of markers of hypercoagulability in the RATIO study / in a systematic review. This paper is a more general text, as it is meant to provide insight for non-epidemiologists into what epidemiology can bring to the field of law. Here the PAF is an interesting measure, as it is related to the etiological fraction – a number that can be very interesting in tort law. Some of my slides from a law symposium that I attended address these questions and that particular Dutch case of tort law.

But the PAF is and remains an epidemiological measure and tells us what fraction of the cases in the population can be attributed to the exposure of interest. You can combine the PAFs of several risk factors into a single number (given some assumptions, which basically boil down to the idea that the combined factors work on an exactly multiplicative scale, both statistically and biologically). A 2016 Lancet paper that made a huge impact and increased interest in the concept of the PAF was the INTERSTROKE paper. It showed that up to 90% of all stroke cases can be attributed to only 10 factors, all of them modifiable.
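The combination step can be sketched in a few lines of Python. The PAF values are made up, purely for illustration; under the multiplicative assumption, the unexplained fractions multiply:

```python
# Combining single-factor PAFs into one overall PAF, INTERSTROKE-style.
# Assumes the factors act (roughly) multiplicatively and independently,
# so the fraction of cases left unexplained shrinks multiplicatively.

def combined_paf(pafs):
    """Overall PAF = 1 - product of (1 - PAF_i)."""
    unexplained = 1.0
    for paf in pafs:
        unexplained *= (1.0 - paf)
    return 1.0 - unexplained

# three hypothetical risk factors with PAFs of 40%, 30% and 20%
print(round(combined_paf([0.4, 0.3, 0.2]), 3))  # -> 0.664
```

Note that the combined PAF is less than the sum of the individual PAFs (0.9), which is why single-factor PAFs reported side by side can add up to well over 100%.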

We had the question whether this was the same for young stroke patients. After all, the longstanding idea is that young stroke is a different disease from old stroke, one in which traditional CVD risk factors play a less prominent role. The idea is that more exotic causal mechanisms (e.g. hypercoagulability) play a more prominent role in this age group. Boy, were we wrong. In a dataset which combines data from the SIFAP and GEDA studies, we noticed that the bulk of the cases can be attributed to modifiable risk factors (80% to 4 risk factors). There are some elements of the paper (an age effect even within the young study population, subtype effects, definition effects) that I won’t go into here. For that you need to read the paper (published in Stroke) here, or via my mendeley account. The main part of the work was done by AA and UG. Great job!

New paper in RPTH: Statins and the risk of DVT recurrence

I am very happy and honored to tell you that our paper “Statin use and risk of recurrent venous thrombosis: results from the MEGA follow-up study” is the very first paper in the new ISTH journal “Research and Practice in Thrombosis and Haemostasis”.

This new journal, for which I serve on the editorial board, is the sister journal of the JTH, but has a couple of focus points that are not present in the JTH. The biggest difference is the open access policy of RPTH. Next to that, there are a couple of article types and subjects that RPTH welcomes which are perhaps not so common in traditional journals (e.g. career development articles, educationals, nursing and patient perspectives, etc.).

Our paper is, however, a very standard paper, in the sense that it is an original research publication on the role of statins in the risk of thrombosis recurrence. We answer the question whether statin use is indeed linked with a lower risk of recurrence based on observational data, which opens the door to confounding by indication. To counteract this, we applied a propensity score and, most important of all, we only used so-called “incident users”. Incident vs prevalent users of statins is a theme that has been a topic on this blog before (for example here and here). The bottom line is this: people who are currently using statins are different from people who are newly prescribed statins – adherence issues, side effects, or low life expectancy could be reasons for discontinuation. You need to take the difference between these types of statin users into account, or the protective effect of statins, or of any other medication for that matter, might be biased. In the case of statins and DVT recurrence it can be argued that the risk-lowering effect of statins would be overestimated. In itself that is not a problem in an observational study. But if the results of this observational study are subsequently used in a sample size calculation for a proper trial, that trial will be underpowered and we might have lost our (expensive and potentially only) shot at really knowing whether or not DVT patients benefit from statins.

RPTH will be launched during ISTH 2017 which will be held in Berlin in a couple of weeks.

New paper: A Prothrombotic Score Based on Genetic Polymorphisms of the Hemostatic System Differs in Patients with IS, MI, or PAOD

My first paper in Frontiers in Cardiovascular Medicine, an open access platform focussed on cardiovascular medicine. This is not a regular case-control study, in which the prevalence of a risk factor is compared between an unselected patient group and a reference group from the general population. No, this paper takes patients with cardiovascular disease who were referred for thrombophilia testing. When the different diseases (ischemic stroke vs myocardial infarction / PAOD) are then compared in terms of their thrombophilic propensity, it is clear that these groups are different. The first explanation that comes to mind might be that thrombophilia indeed plays a different role in the etiology of these diseases, as we demonstrated in a RATIO publication as well as in this systematic review, but it might also be that there is just a different referral pattern. In any case, it indicates that the role of thrombophilia – whether it is causal or merely suspected by the physician – differs between the different forms of arterial thrombosis.

Advancing prehospital care of stroke patients in Berlin: a new study to see the impact of STEMO on functional outcome

There are strange ambulances driving around in Berlin. They are the so-called STEMO cars, or Stroke Einsatz Mobile: basically driving stroke units. They have the possibility to make a CT scan to rule out bleeds and to subsequently start thrombolysis before getting to the hospital. A previous study showed that this decreases time to treatment by ~25 minutes. The question now is whether the patients are indeed better off in terms of functional outcome. For that we are currently running the B_PROUD study, of which we recently published the design here.

Virchow’s triad and lessons on the causes of ischemic stroke

I wrote a blog post for BMC, the publisher of Thrombosis Journal in order to celebrate blood clot awareness month. I took my two favorite subjects, i.e. stroke and coagulation, and I added some history and voila!  The BMC version can be found here.

When I look out of my window from my office at the Charité hospital in the middle of Berlin, I see the old pathology building in which Rudolph Virchow used to work. The building is just as monumental as the legacy of this famous pathologist who gave us what is now known as Virchow’s triad for thrombotic diseases.

In ‘Thrombose und Embolie’, published in 1856, he postulated that the consequences of thrombotic disease can be attributed to one of three categories: phenomena of interrupted blood flow, phenomena associated with irritation of the vessel wall and its vicinity, and phenomena of blood coagulation. This concept has since been modified to describe the causes of thrombosis and has been a guiding principle for many thrombosis researchers.

The traditional split in interest between arterial thrombosis researchers, who focus primarily on the vessel wall, and venous thrombosis researchers, who focus more on hypercoagulation, might not be justified. Take ischemic stroke for example. Lesions of the vascular wall are definitely a cause of stroke, but perhaps only in the subset of patient who experience a so called large vessel ischemic stroke. It is also well established that a disturbance of blood flow in atrial fibrillation can cause cardioembolic stroke.

Less well studied, but perhaps no less relevant, is the role of hypercoagulation as a cause of ischemic stroke. It seems that an increased clotting propensity is associated with an increased risk of ischemic stroke, especially in the young, in whom a third of strokes remain without a determined cause. Perhaps hypercoagulability plays a much more prominent role than we traditionally assume?

But this ‘one case, one cause’ approach takes Virchow’s efforts to classify thrombosis a bit too strictly. Many diseases can be called multi-causal, meaning that no single risk factor in itself is sufficient and only a combination of risk factors working in concert causes the disease. This is certainly true for stroke, and translates to the idea that each stroke subtype might be the result of a different combination of risk factors.

If we combine Virchow’s work with the idea of multi-causality, and the heterogeneity of stroke subtypes, we can reimagine a new version of Virchow’s Triad (figure 1). In this version, the patient groups or even individuals are scored according to the relative contribution of the three classical categories.

From this figure, one can see that some subtypes of ischemic stroke might be more like some forms of venous thrombosis than other forms of stroke, a concept that could bring new ideas for research and perhaps has consequences for stroke treatment and care.

Figure 1. An example of a gradual classification of ischemic stroke and venous thrombosis according to the three elements of Virchow’s triad.

However, recent developments in the field of stroke treatment and care have focused on the acute treatment of ischemic stroke. Stroke ambulances that can discriminate between hemorrhagic and ischemic stroke - information needed to start thrombolysis in the ambulance - drive the streets of Cleveland, Gothenburg, Edmonton and Berlin. Other major developments are in the field of mechanical thrombectomy, with wonderful results from many studies such as the Dutch MR CLEAN study. Even though these two new approaches save lives and prevent disability in many, they are ‘too late’ in the sense that they are reactive and do not prevent clot formation.

Therefore, in this blood clot awareness month, I hope that stroke and thrombosis researchers join forces and further develop our understanding of the causes of ischemic stroke so that we can Stop The Clot!

Increasing efficiency of preclinical research by group sequential designs: a new paper in PLOS biology

We have another paper published in PLOS Biology. The theme is in the same area as the first paper I published in that journal, which had the wonderful title “where have all the rodents gone”, but this time we did not focus on threats to internal validity, but we explored whether sequential study designs can be useful in preclinical research.

Sequential designs, what are those? It is a family of study designs (perhaps you could call it the “adaptive study size design” family) in which one takes a quick peek at the results before the total number of subjects is enrolled. This peek comes at a cost: it has to be taken into account in the statistical analyses, as it has direct consequences for the interpretation of the final result of the experiment. But the bottom line is this: with the information you get halfway through, you can decide to continue with the experiment or to stop for efficacy or futility reasons. If this sounds familiar to those who know interim analyses in clinical trials, that is because it is the same concept. However, we explored its impact when applied to animal experiments.
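To make the idea concrete, here is a minimal stdlib-Python sketch of a two-look design (my own illustration, not the simulation code from our paper): the critical value 2.178 is the classic two-look Pocock boundary for an overall two-sided 5% level, and all other numbers are hypothetical.

```python
import random
import statistics

def z_stat(a, b):
    """Two-sample z statistic (equal n, known-variance approximation)."""
    n = len(a)
    se = ((statistics.pvariance(a) + statistics.pvariance(b)) / n) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

def two_stage_trial(effect, n_per_stage=10, z_crit=2.178):
    """Two-look Pocock design: peek after stage 1 and stop early if the
    adjusted boundary is crossed, otherwise run a second stage.
    Returns (significant, animals used per group)."""
    a = [random.gauss(effect, 1) for _ in range(n_per_stage)]
    b = [random.gauss(0, 1) for _ in range(n_per_stage)]
    if abs(z_stat(a, b)) > z_crit:          # the interim look
        return True, n_per_stage
    a += [random.gauss(effect, 1) for _ in range(n_per_stage)]
    b += [random.gauss(0, 1) for _ in range(n_per_stage)]
    return abs(z_stat(a, b)) > z_crit, 2 * n_per_stage

random.seed(42)
runs = [two_stage_trial(effect=1.0) for _ in range(2000)]
avg_n = statistics.mean(n for _, n in runs)
power = statistics.mean(1.0 if sig else 0.0 for sig, _ in runs)
print(f"average n per group: {avg_n:.1f} (a fixed design would always use 20)")
print(f"power: {power:.2f}")
```

With a real effect present, many experiments stop at the interim look, so the average number of animals per group ends up well below the fixed-design maximum while power stays high.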

Figure from our publication in PLOS Biology describing sequential study designs in our computer simulations

“Old wine in new bottles”, one might say, and some of the reviewers of this paper rightfully pointed out that it was not novel in showing that sequential designs are more efficient than non-sequential designs. But that is not where the novelty lies. Up until now, we have not seen this approach applied to preclinical research in a formal way. However, our experience is that a lot of preclinical studies are done with some kind of informal sequential aspect. No p<0.05? Just add another mouse/cell culture/synapse/MRI scan to the mix! The problem is that there is no formal framework in which this is done, leading to cherry picking, p-hacking and other nasty stuff that you cannot grasp from the methods and results sections.
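To see why the informal “just add another mouse” routine is so harmful, here is a small stdlib-Python simulation (my own illustration, not from the paper): under the null hypothesis, re-testing at an unadjusted nominal 5% level after every added observation inflates the false positive rate well beyond 5%.

```python
import random
import statistics

def abs_z(a, b):
    """Absolute two-sample z statistic (known-variance approximation)."""
    se = ((statistics.pvariance(a) + statistics.pvariance(b)) / len(a)) ** 0.5
    return abs(statistics.mean(a) - statistics.mean(b)) / se

def informal_peeking(n_start=10, n_max=30):
    """Under the null: test at the nominal 5% level, and while not
    'significant', add one observation per group and test again."""
    a = [random.gauss(0, 1) for _ in range(n_start)]
    b = [random.gauss(0, 1) for _ in range(n_start)]
    while True:
        if abs_z(a, b) > 1.96:      # nominal 5%, never adjusted
            return True             # a false positive: the null is true here
        if len(a) >= n_max:
            return False
        a.append(random.gauss(0, 1))
        b.append(random.gauss(0, 1))

random.seed(1)
false_pos = statistics.mean(
    1.0 if informal_peeking() else 0.0 for _ in range(2000))
print(f"false positive rate with informal peeking: {false_pos:.3f}")
```

The formal sequential framework exists precisely to buy this kind of flexibility while keeping the overall error rate at the advertised level.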

Should all preclinical studies from now on have sequential designs? My guess would be no, and there are two major reasons why. First of all, sequential data analyses have their idiosyncrasies and might not be for everyone. Second, the logistics of sequential study designs are complex, especially if you are afraid of introducing batch effects. We only wanted to show preclinical researchers that the sequential approach has its benefits: the same information at, on average, lower costs. If you translate “costs” into animals, the obvious conclusion is: apply sequential designs where you can, and the decrease in animals can be “re-invested” in more animals per study to obtain higher power in preclinical research. But I hope that the side effect of this paper (or perhaps its main effect!) will be that readers simply think about their current practices and whether these involve the ‘informal sequential designs’ that really hurt science.

The paper, this time with a less exotic title, “Increasing efficiency of preclinical research by group sequential designs”, can be found on the website of PLOS Biology.

Associate editor at BMC Thrombosis Journal


In the week just before Christmas, HtC approached me to ask whether I would like to join the editorial board of BMC Thrombosis Journal as an Associate Editor. The aims and scope of the journal, taken from their website:

“Thrombosis Journal is an open-access journal that publishes original articles on aspects of clinical and basic research, new methodology, case reports and reviews in the areas of thrombosis. Topics of particular interest include the diagnosis of arterial and venous thrombosis, new antithrombotic treatments, new developments in the understanding, diagnosis and treatments of atherosclerotic vessel disease, relations between haemostasis and vascular disease, hypertension, diabetes, immunology and obesity.”

I talked to HtC, someone at BMC, as well as some of my friends and colleagues about whether or not this would be a wise thing to do. Here is an overview of the points that came up:

Experience: Thrombosis is the field I grew up in as a researcher. I know the basics and have extensive knowledge of specific parts of the field. But with my move to Germany I started to focus on stroke, so one might wonder why I do not use my time to work with a stroke-related journal. My answer is that thrombosis is a stroke-related field, and that my position in both worlds is a good opportunity to learn from both. Sure, there will be topics of which I have less knowledge, but that is where an associate editor should rely on expert reviewers and fellow editors.

This new position will also provide me with a bunch of new experiences in itself: for example, sitting on the other side of the table in a peer review process might help me to better understand a rejection of one of my own papers. Bottom line is that I think that I both bring and gain relevant experiences in this new position.

Time: These things cost time. A lot. Especially when you still need to learn the skills required for the job, like me. But learning these skills as an associate editor is an integral part of the science apparatus, and I am sure that the time I invest will help me develop as a scientist. Also, the time I need to spend is not necessarily the type of time I currently lack, i.e. writing time. For writing and doing research myself I need decent blocks of time to dive in and focus (4+ hours if possible). The time needed for my associate editor tasks is more fragmented: finding peer reviewers, reading their comments and making a final judgement are relatively fragmented activities, and I am sure that as soon as I get the hang of it I can squeeze them into shorter slots of time. Perhaps a nice way to fill those otherwise lost 30 minutes between two meetings?

Open science: Thrombosis Journal is part of the BioMed Central family. As such, it is a 100% OA journal. It is not that I am an open science fanboy or sceptic, but I am very curious how OA is developing, and working with an OA journal will help me understand what OA can and cannot deliver.

Going over these points, I am convinced that I can contribute to the journal with my experience in the fields of coagulation, stroke and research methodology. Also, I think that the time it will take to learn the necessary skills is an investment that will in the end help me grow as a researcher. So, I gave HtC a positive answer. Expect emails requesting peer review reports soon!

New team member!

A couple of weeks ago I announced that my team was looking for a new post-doc. I received many applications, some even from as far as Italy and Spain. Out of this pile of candidates we were able to find one who fulfilled all the requirements we had in mind, and then some. It is great that she will join the team in December. JH has worked in the field of epidemiology for quite some time and is not only experienced in setting up new projects and providing physicians with methodological input on their clinical research projects, but she also has a great interest in the more methodological side of epidemiology. For example, she is co-author/developer of the program DAGitty, which can be used to draw causal diagrams. She is also speaker of the working group on methodology of the German Society of Epidemiology (DGEpi). Her background in psychology also means that she brings a lot of knowledge on methods that we as a team do not yet have. In short, a great addition to the team. Welcome JH!



Berlin Epidemiological Methods Colloquium kicks off with SER event

A small group of epi-nerds (JLR, TK and myself) decided to start a colloquium on epidemiological methods. The series kicks off with a webcast of an event organised by the Society for Epidemiologic Research (SER), but in general we will organise meetings focused on advanced topics in epidemiological methods. Anyone interested is welcome. Regular meetings will start in February 2017. All meetings will be held in English.
More information on the first event can be found below or via this link:

“Perspective of relative versus absolute effect measures” via SERdigital

Date: Wednesday, November 16th 2016 Time: 6:00pm – 9:00pm
Location: Seminar Room of the Neurology Clinic, first floor (Alte Nervenklinik)
Bonhoefferweg 3, Charite Universitätsmedizin Berlin- Campus Mitte, 10117 Berlin

Join us for a live, interactive viewing party of a debate between two leading epidemiologists, Dr. Charlie Poole and Dr. Donna Spiegelman, about the merits of relative versus absolute effect measures. Which measure of effect should epidemiologists prioritize? This digital event organized by the Society for Epidemiologic Research will also include three live oral presentations selected from submitted abstracts. There will be open discussion with other viewers from across the globe and opportunities to submit questions to the speakers. And since no movie night is complete without popcorn, we will provide that, too! For more information, see:

If you plan to attend, please register (space limited):


The paradox of the BMI paradox


I had the honor of being invited to the PHYSBE research group in Gothenburg, Sweden, to talk about the paradox of the BMI paradox. In the announcement abstract I wrote:

“The paradox of the BMI paradox”
Many fields have their own so-called “paradox”, where a risk factor in certain
instances suddenly seems to be protective. A good example is the BMI paradox,
where high BMI in some studies seems to be protective of mortality. I will
argue that these paradoxes can be explained by a form of selection bias. But I
will also discuss that these paradoxes have provided researchers with much
more than just an erroneous conclusion on the causal link between BMI and

I first addressed the problem of BMI as an exposure. Easy stuff. But then we come to index event bias, or collider stratification bias, and how selections matter, both in recurrence research paradoxes (like PFO & stroke) and in health status research (like BMI), and can introduce confounding into the equation.

I argued that this confounding might not be enough to explain all that is observed in observational research, so I continued looking for other reasons why there are such strong feelings about these paradoxes. Do they exist, or don’t they? I found that the two sides tend to “talk in two worlds”. One side talks about causal research and asks what we can learn from the biological systems that might play a role, whereas the other thinks from a clinical point of view and starts to talk about RCTs and the need for weight control programs in patients. But there are huge differences in study design, research question and interpretation of results between the studies that they cite and interpret. Perhaps part of the paradox can be explained by this misunderstanding.

But the cool thing about the paradox is that through complicated topics, new hypotheses, interesting findings and strong feelings about the existence of paradoxes, I think we can all agree: the field of obesity research has won in the end. And by winning I mean that the methods are now better described, better discussed and better applied. New hypotheses are being generated and confirmed or refuted. All in all, the field makes progress not despite, but because of the paradox. A paradox that doesn’t even exist. How is that for a paradox?

All in all an interesting day, and I think I made some friends in Gothenburg. Perhaps we can do some cool science together!

Slides can be found here.

Predicting DVT with D-dimer in stroke patients: a rebuttal to our letter

Some weeks ago, I reported on a letter to the editor of Thrombosis Research on the question whether D-dimer really improves DVT risk prediction in stroke patients.

I was going to write a whole story on how one should not use a personal blog to continue a scientific debate. As you can guess, I ended up writing a full paragraph in which I did exactly that. So I deleted that paragraph, and I am going to do something that requires some action from you: I will just leave you with the links to the letters and let you decide whether the issues we bring up, as well as the corresponding rebuttal by the authors, help to interpret the results of the original publication.

ECTH 2016


Last week saw the first edition of the European Congress on Thrombosis and Hemostasis in The Hague (NL). The idea of this conference is to provide a platform for European thrombosis researchers and doctors to meet in the dull years between ISTH meetings. There is a strong emphasis on enabling and training young researchers, as can be seen from the different activities and organisational aspects. One of these was the junior advisory board, of which I was part. We had the task of giving advice, both solicited and unsolicited, and of helping to organise and shape some of the innovative aspects. For example: we had the so-called fast-and-furious sessions, where authors of the best abstracts were asked to let go of the standard presentation format and share their research TED-talk style.

I learned a lot during these sessions, and even got in contact with some groups that have interesting methods and approaches that we might apply in our own studies and patient populations. My thoughts: targeting FXII and FXI, as well as DNase treatment, are the next big thing. We also had a great selection of speakers for the meet-the-expert and how-to sessions. These sessions demanded active participation from all participants, which is really a great way to build new collaborations and friendships.

The 5K fun run with 35+ participants was also a great success.

The Wednesday plenary sessions, including the talks on novel and innovative methods of scholarly communication as well as the very well received session by Malcolm Macleod on reducing research waste, were inspiring to all. Missed it? Do not worry: the speakers have shared their slides online!

All in all, the conference was a great success in both numbers (750+ participants) and scientific quality. I am looking forward to the next edition, which will be held in Marseille in two years’ time. Hope to see you all there!

How to set up a research group

A couple of weeks ago I wrote down some thoughts I had while writing a paper for the JTH series on Early Career Researchers. I was asked to write about how one sets up a research group, and the four points I described in my previous post can be recognised in the final paper.

But I also added some reading tips to the paper. Reading on a particular topic helps me not only to learn what is written in the books, but also to get into a certain mindset. So, when I knew that I was going to take over a research group in Berlin, I read a couple of books, both fiction and non-fiction. Some were about Berlin (e.g. Cees Nooteboom's Berlijn 1989/2009), some focused on academic life (e.g. Porterhouse Blue). They helped to get my mind in the right gear and prepare me for what was coming. In that sense, my bookcase says a lot about me.

Number one on the list of recommended reads are the standard management best-sellers, as I wrote in the text box:

// Management books There are many titles that I can mention here; whether it is the best-seller Seven Habits of Highly Effective People or any of the smaller booklets by Ken Blanchard, I am convinced that reading some of these texts can help you in your own development as a group leader. Perhaps you will like some of the techniques and approaches that are proposed and decide to adopt them. Or, like me, you may initially find yourself irritated because you cannot envision the approaches working in the academic setting. If this happens, I encourage you to keep reading because even in these cases, I learned something about how academia works and what my role as a group leader could be through this process of reflection. My absolute top recommendation in this category is Leadership and Self-Deception: a text that initially got on my nerves but in the end taught me a lot.

I really think that is true. You should not only read books that you agree with, or whose story you enjoy. Sometimes you can like a book not for its content but for the way it makes you question your own preexisting beliefs and habits. But it is true that this sometimes makes it difficult to actually finish such a book.

Next to books, I am quite into podcasts, so I also wrote:

// Start up. Not a book, but a podcast from Gimlet media about “what it’s really like to get a business off the ground.” It is mostly about tech start-ups, but the issues that arise when setting up a business are in many ways similar to those you encounter when you are starting up a research group. I especially enjoyed seasons 1 and 3.

I thought about including the sponsored podcast “Open for Business” from Gimlet Creative, as it touches upon some very relevant aspects of starting something new. But for me the jury is still out on the “sponsored podcast” concept (it is branded content from Amazon, and I am not sure to what extent I like that). For now, I do not like it enough to include it in my JTH paper.

The paper is not yet online due to the summer break, but I will provide a link asap.

– update 11.10.2016 – here is a link to the paper. 





Does d-dimer really improve DVT prediction in stroke?


Good question, and even though thromboprophylaxis is already given according to guidelines in some countries, I can see the added value of a well discriminating prediction rule. Especially finding those patients with a low DVT risk might be useful. But whether to use D-dimer is a whole other question. To answer it, a thorough prediction model needs to be set up both with and without the information from D-dimer, and only a direct comparison of these two models will provide the information we need.

In our view, that is not what the paper by Balogun et al. did. And after critical appraisal of the tables and text, we found some inconsistencies that prohibit the reader from understanding what exactly was done and which results were obtained. In the end, we decided to write a letter to the editor, especially to prevent other readers from uncritically adopting the conclusion of the authors, namely that “D-dimer concentration with in 48 h of acute stroke is independently associated with development of DVT. This observation would require confirmation in a large study.” Our opinion is that the data from this study need to be analysed properly to justify such a conclusion. One of the key elements in our letter is that the authors never compare the AUC of the model with and without D-dimer. This is needed, as it would provide the bulk of the answer to the question whether or not D-dimer should be measured. The only clue we have are the ORs of D-dimer, which range between 3 and 4, which is not really impressive when it comes to diagnosis and prediction. For more information on this, please check the paper on the misuse of the OR as a measure of interest for diagnosis/prediction by Pepe et al.
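The kind of comparison we asked for can be sketched in a few lines of stdlib Python (simulated, hypothetical data, not the data from the study; for simplicity the marker is added to the score directly instead of refitting a model): compare the empirical AUC of a baseline risk score with and without a candidate marker.

```python
import random

def auc(scores, labels):
    """Empirical AUC: the probability that a random case scores higher
    than a random non-case (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

random.seed(7)
n = 2000
baseline = [random.gauss(0, 1) for _ in range(n)]   # existing risk score
marker = [random.gauss(0, 1) for _ in range(n)]     # e.g. a D-dimer-like marker
# in this toy world the outcome truly depends on both (plus noise)
outcome = [1 if b + m + random.gauss(0, 1) > 1 else 0
           for b, m in zip(baseline, marker)]

auc_without = auc(baseline, outcome)
auc_with = auc([b + m for b, m in zip(baseline, marker)], outcome)
print(f"AUC without marker: {auc_without:.2f}")
print(f"AUC with marker:    {auc_with:.2f}")
```

It is the difference between these two AUCs, not the OR of the marker, that tells you whether measuring the marker adds discriminatory value.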

A final thing I want to mention is that our letter was the result of a mini-internship by one of the students of the CSB master programme, and was drafted in collaboration with our Virchow scholar HGdH from the Netherlands. Great teamwork!

The letter can be found on the website of Thrombosis Research as well as on my Mendeley profile.


Teaching a new module: Critical Thinking in Translational Medicine


I have the honor of designing and teaching a new master module in not one but two master programs at the Charité. The new module is titled “Critical Thinking in Translational Medicine” and will focus on the concept that science is an exercise in uncertainty. Somehow, scientists – especially young ones – do not seem to be trained in handling these uncertainties. Overselling of results, scientific fads and why “most research findings are false” will be on the schedule of this 15-week course starting this October.

But that’s not all. We will also cover new innovations and activities in the scientific enterprise: sharing of data and new ways to publish and share your results will be discussed by our students. The goal is that each week starts with an introductory perspective, followed by some exercises and group discussions. Each week, four students have the task of summarising the results of the meeting, as well as preparing a pro-contra debate, which will be held on two occasions. Perhaps these students should even write some blog entries?

The bottom line is this: science is more than a pipette, understanding confounding or knowing why a regression model does what it does. It is also about the scientific enterprise, which is what it is, and has many shortcomings. Some critical thinking on these topics, together with some good discussion, will help our students form their own thoughts on these issues and hopefully prepare them for a wonderful scientific career.



Starting a research group: some thoughts for a new paper


It has been 18 months since I started at the CSB in Berlin to take over the lead of the clinical epidemiology research group. Recently, the ISTH early career taskforce contacted me to ask whether I would be willing to write something about my experiences over the last 18 months as a rookie group leader. The idea is that these experiences, combined with a couple of other papers on similarly useful topics for early career researchers, will be published in JTH.

I was a bit reluctant at first, as I believe that how people handle the new situations that one encounters as a new group leader depends strongly on personality and individual circumstances. But then again, the new situations that I encountered might be generalizable to other people. So I decided to go ahead, focusing on a description of the new situations I found myself in while keeping the personal experiences limited and for illustration only.

While writing, I discerned that there are basically four new things about my new situation that I would have loved to have realised a bit earlier:

  1. A new research group is never without context; get to know the academic landscape around your research group, as this is where you will find people for new collaborations etc.
  2. You either start a new research group from scratch, or you inherit one; be aware that the two have very different consequences and require different approaches.
  3. Try to find training and mentoring to help you cope with the new roles that group leaders have; it is not only the role of group leader that you need to adjust to. HR manager, accountant, mentor, researcher, project initiator, project manager and consultant are just a couple of the roles that I also need to fulfill on a regular basis.
  4. New projects; it is tempting to put all your power, attention, time and money behind one project, but sometimes new projects fail. Perhaps start a couple of small side projects as a contingency?

As said, the things I describe in the paper might be very specific to my situation and as such not applicable to everyone. Nonetheless, I hope that reading the paper might help other young researchers prepare for the transition from post-doc to group leader. I will report back when the paper is finished and available online.


Thesis can now be downloaded without password

Cover of my thesis

For a long time a password was needed to access my thesis. There were two reasons: some elements were not yet published in a peer-reviewed journal, and some elements were not supposed to appear on my own website right away, because some journals only allow publication of your own work after some time has passed since publication.

A couple of things have changed that led me to remove the password lock: a lot of time has passed, and if that amount of time is not enough, I don’t care anymore. That perhaps sounds badass, but of course it isn’t – especially since most of the ideas have been published already. It is merely the result of some slight changes I went through over the last couple of months regarding publishing and the relationship between authors and journals. Do not be surprised if I make some remarks about these changes when I provide more updates on this blog.


New office



I just moved to a new office. Bigger, brighter and with a window that can actually be opened: the main reasons why I am quite happy with the move. But the best thing is the new view. Now let’s hope that I still get some work done!

Need directions to my new office? Please visit me!



Cardiovascular events after ischemic stroke in young adults (results from the HYSR study)


The collaboration with the group in Finland has turned into a nice new publication, with the title

“Cardiovascular events after ischemic stroke in young adults”

This work, with data from Finland, was primarily done by KA and JP. KA came to Berlin to learn some epidemiology with the aid of the Virchow scholarship, which is where we came in. It was great to have KA as part of the team, and even better to have been working with their great data.

Now on to the results of the paper: as in the RATIO follow-up study, the risk of recurrence after young stroke remained present for a long time after the stroke in this analysis of the Helsinki Young Stroke Registry. But unlike the RATIO paper, these data contained more information on the patients, for example the TOAST criteria. This means that we were able to identify that the group with large artery atherosclerosis (LAA) had a very high risk of recurrence.

The paper can be found on the website of Neurology, or via my Mendeley profile.

Pregnancy loss and risk of ischaemic stroke and myocardial infarction


Together with colleagues I worked on a paper on the relationship between pregnancy, its complications, and stroke and myocardial infarction in young women, which just appeared online on the BJH website.

The article, which analyses data from the RATIO study, concludes that only multiple pregnancy losses increase the risk of stroke (OR 2.4) compared to women who never experienced a pregnancy loss. The work was mainly done by AM, and is a good example of an international collaboration in which we benefitted from the expertise of all team members.

The article, with the full title “Pregnancy loss and risk of ischaemic stroke and myocardial infarction” can be found via PubMed, or via my personal Mendeley page.

Statins and risk of poststroke hemorrhagic complications

Easter brought another publication, this time with the title

“Statins and risk of poststroke hemorrhagic complications”

I am very pleased with this paper, as it demonstrates two important aspects of my job. First, I was able to share my thoughts on comparing current users vs never users. As has been argued before (e.g. by the group of Hernán), and also articulated in a letter to the editor I wrote with colleagues from Leiden, such a comparison brings an inherent survival bias: you are comparing never users (i.e. those without an indication) with current users (those who have the indication, can handle the side effects of the medication, and stay alive long enough to be enrolled into the study as users). This matters, of course, only if you want to test the effect of statins, not if you are interested in the mere predictive value of being a statin user.

The second thing about this paper is the way we were able to use data from the VISTA collaboration, a large pool of data from previous stroke studies (RCTs and observational). I believe such ways of sharing data move science forward. Should all data be shared online for all to use? I am not sure about that, but the easy-access model of the VISTA collaboration (which includes data maintenance, harmonization etc.) is certainly appealing.

The paper can be found here, and on my Mendeley profile.


– update 1.5.2016: this paper was the topic of a comment in the @greenjournal. See also their website.

– update 19.5.2016: this project also led to first author JS being awarded the young researcher award of the ESOC 2016.



Causal Inference in Law: An Epidemiological Perspective


Finally, it is here. The article I wrote together with WdH, MZ and RM was published in the European Journal of Risk and Regulation last week. And boy, did it take time! This whole project, an interdisciplinary effort in which epidemiological thinking was applied to questions of causal inference in tort law, took more than three years, with only a couple of months of writing… the rest was waiting, waiting, waiting and some peer review. But more on that later.

First some content. In the article we discuss the idea of proportional liability, which adheres to the epidemiological concept of multi-causality. But the article is more than that: as this is a journal for non-epidemiologists, we also provide a short and condensed overview of study design, bias and other epidemiological concepts such as counterfactual thinking. You might recognise the theme from my visits to the Leiden Law School for some workshops. The EJRR editorial describes it as: “(…) discuss the problem of causal inference in law, by providing an epidemiological viewpoint. More specifically, by scrutinizing the concept of the so-called “proportional liability”, which embraces the epidemiological notion of multi-causality, they demonstrate how the former can be made more proportional to a defendant’s relative contribution in the known causal mechanism underlying a particular damage.”

Getting this thing published was tough: the quality of the peer review was low (dare I say zero?), communication was difficult, the submission system was flawed, etc. But most of all, the editorial office was slow: the first submission was in June 2013! This could be a non-medical-journal thing, I do not know, but still: almost three years. And all this for an invited article that was planned to be part of a special edition on the link between epidemiology and law, which never came. Due to several delays (surprise!) of the other articles for this edition, it was decided that our article would not wait for the special edition anymore. Therefore, our cool little insight into epidemiology now seems lost between all those legal and risk regulation articles. A shame if you ask me, but I am glad that we are not waiting any longer!

Although I do love interdisciplinary projects, and I think the result is a nice one, I do not want to go through this process again. No more EJRR for me.

Oh, one more thing… the article is behind a paywall, I do not have access through my university, and the editorial office did not provide me with a link to a pdf of the final version. So, to be honest, I don’t have the final article myself! Feels weird. I hope EJRR will provide me with a pdf soon. In the meantime, anybody with access to this article, please feel free to send me a copy!

Where Have All the Rodents Gone? The Effects of Attrition in Experimental Research on Cancer and Stroke



We just published a new article in PLOS Biology today, with the title:

“Where Have All the Rodents Gone? The Effects of Attrition in Experimental Research on Cancer and Stroke”

This is a wonderful collaboration between three fields: statistics, epidemiology and laboratory research. Together we took a look at what is called attrition in the preclinical lab, that is, the loss of data in animal experiments. This could be because an animal died before the needed data could be obtained, or simply because a measurement failed. This loss of data translates to the concept of loss to follow-up in epidemiological cohort studies, and from that field we know that it can lead to a substantial loss of statistical power and perhaps even bias.

But it was unknown to what extent this is also a problem in preclinical research, so we did two things. We looked at how often papers indicated there was attrition (with an alarming number of papers not providing the data for us to establish whether there was attrition), and we simulated what happens under various attrition scenarios. The results paint a clear picture: both the loss of power and the bias are substantial. Their degree depends, of course, on the attrition scenario, but the message of the paper is clear: we should be aware of the problems that come with attrition, and reporting on attrition is the first step in minimising the problem.
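The basic mechanism can be illustrated with a toy stdlib-Python example (my own sketch, not the simulation code from the paper): when attrition is non-random, say the sickest treated animals drop out, the estimated treatment effect is biased even when the true effect is zero.

```python
import random
import statistics

def experiment(n=10, effect=0.0, drop_worst=0):
    """One treated-vs-control comparison in which the `drop_worst`
    lowest-scoring treated animals are lost (non-random attrition)."""
    treated = sorted(random.gauss(effect, 1) for _ in range(n))[drop_worst:]
    control = [random.gauss(0, 1) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

random.seed(3)
# the true effect is zero in both scenarios
complete = statistics.mean(experiment() for _ in range(5000))
attrition = statistics.mean(experiment(drop_worst=2) for _ in range(5000))
print(f"mean estimate, complete data:        {complete:+.2f}")
print(f"mean estimate, 2 worst treated lost: {attrition:+.2f}")
```

With complete data the estimate averages around zero; losing only the two worst-off treated animals per experiment produces an apparent "benefit" out of nothing.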

A nice thing about this paper is that it coincides with the start of a new research section in the PLOS galaxy: “meta-research”, a collection of papers that all focus on how science works, behaves, and can or even should be improved. I can only welcome this, as more projects on this topic are in our pipeline!

The article can be found on pubmed and my mendeley profile.

Update 6.1.16: WOW, what media attention for this one. Interviews with outlets from the UK, US, Germany, Switzerland, Argentina, France, Australia etc., German radio, the Dutch Volkskrant, and a video. More via the corresponding altmetrics page. Also interesting is the post by UD, the lead in this project and chief of the CSB, on his own blog “To infinity, and beyond!”


New article published – Ankle-Brachial Index and Recurrent Stroke Risk: Meta-Analysis

Another publication, this time on the role of the ABI as a predictor of stroke recurrence. This is a meta-analysis combining data from 11 studies, which allowed us to see that the ABI was moderately associated with recurrent stroke (RR 1.7) and vascular events (RR 2.2). Not that much, but it might be just enough to improve some of the risk prediction models available for stroke patients when the ABI is incorporated.
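The study-level numbers are in the paper itself, but mechanically, pooling relative risks comes down to inverse-variance weighting of the log-RRs. A minimal fixed-effect sketch (the standard errors are recovered from each study's confidence interval; this is an illustration of the general technique, not the paper's exact analysis):

```python
import math

def pool_rr(studies):
    """Fixed-effect inverse-variance pooling of relative risks.

    `studies` is a list of (RR, lower 95% CI, upper 95% CI) tuples; the
    standard error of each log-RR is recovered from the CI width."""
    weight_sum = weighted_log_sum = 0.0
    for rr, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        weight = 1.0 / se ** 2           # inverse-variance weight
        weight_sum += weight
        weighted_log_sum += weight * math.log(rr)
    pooled = weighted_log_sum / weight_sum
    se_pooled = math.sqrt(1.0 / weight_sum)
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))
```

Pooling narrows the confidence interval relative to any single study; a random-effects model would widen it again when the studies disagree.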

This work, the product of the great efforts of some of the bright students who work at the CSB (JBH and COL), is a good start in our search for a good stroke recurrence risk prediction model. This will be a major topic in our future research in the PROSCIS study, which is led by TGL. I am looking forward to the results of that study, as better prediction models are needed in the clinic, especially as more precise data and diagnoses might lead to better subgroup-specific risk prediction and treatment.

The article can be found on pubmed and my mendeley profile and should be cited as

Hong JB, Leonards CO, Endres M, Siegerink B, Liman TG. Ankle-Brachial Index and Recurrent Stroke Risk. Stroke 2015: STROKEAHA.115.011321.

The ECTH 2016 in The Hague

My first conference experience (ISTH 2008, Boston) got me hooked on science: all these people doing the same thing, speaking the same language, and looking to show and share their knowledge. This is even more true when you are involved in the organisation. Organising the international soccer match at the Olympic stadium in Amsterdam, linked to the ISTH 2013 to celebrate the 25th anniversary of the NVTH, was fun. And let’s not forget the exciting challenge of organising the WEON 2014.

And now, the birth of a new conference: the European Congress of Thrombosis and Hemostasis, which will be held in The Hague, the Netherlands (28-30 September 2016). I am very excited for several reasons. First of all, this conference fills the gap between the biennial ISTH conferences. Second, I have the honor to help out as the chair of the junior advisory board. Third, The Hague! My old home town!

So, we have 10 months to organise some interesting meetings and activities, primarily focused on young researchers. Time to get started!

First results from the RATIO follow up study

Another article got published today in JAMA Internal Medicine, this time the results from the first analyses of the RATIO follow-up data. For these data, we linked the RATIO study to the Dutch national bureau of statistics (CBS) to obtain 20 years of follow-up on cardiovascular morbidity and mortality. We first submitted a full paper, but later downsized it to a research letter of only 600 words. This means that only the main message (i.e. cardiovascular recurrence is high, persistent over time and disease specific) is left.

It is a “Leiden publication”, in which I worked together with AM and FP from Milano. Most of the credit of course goes to AM, who is the first author of this piece. The cool thing about this publication is that the team worked very hard on it for a long time (the data linkage and analyses were not an easy thing to do, nor was going from 3,000 words to 600 in just a week or so), and in the end all the hard work paid off. But next to the hard work, it is also nice to see the results being picked up by the media. JAMA Internal Medicine put out an international press release, and the LUMC is going to publish its own Dutch version. In the days before the ‘online first’ publication I already answered some emails from writers for medical news sites, some with up to 5,000K views per month. I do not know if you think that’s a lot, but for me it is. The websites that cover this story can be found via the Altmetric page of this article, and perhaps more are to come.

– edit 26.11.2015: a Dutch press release from the LUMC can be found here. – edit: oops, one outlet published a great report/interview but used a wrong title… “Repeat MI and Stroke Risks Defined in ‘Younger’ Women on Oral Contraceptives”. Not all women were on OC, of course.

Of course, @JAMAInternalMed tweeted about it


The article, with the full title Recurrence and Mortality in Young Women With Myocardial Infarction or Ischemic Stroke: Long-term Follow-up of the Risk of Arterial Thrombosis in Relation to Oral Contraceptives (RATIO) Study can be found via JAMA Internal Medicine or via my personal Mendeley page.

As I reported earlier, this project is supported by a grant from the LUF den Dulk-Moermans foundation, for which we are grateful.

A year in Berlin


So, it has been just over a year since I started here in Berlin. In this year I have had the opportunity to start some great projects, some of which have already resulted in some handsome upcoming publications.

For those who wonder, the picture gives a somewhat inflated impression of the size of the team, as we decided to include everybody who currently works with us. This includes two of our five students and two Virchow scholars who are visiting from Amsterdam and Hamburg. I included them all in the picture, as I enjoy my work here in Berlin because of all the team members. Now, let’s do some science!

Spectrum of cerebral spinal fluid findings in patients with posterior reversible encephalopathy syndrome


This is one of the first projects that I was involved with from start to finish since my move to Berlin to be published, so I’m quite content with it. A cool landmark after a year in Berlin.

Together with TL and LN I supervised a student from the Netherlands (JH). This publication is the result of all the work JH did, combined with the great medical knowledge of the rest of the team. About the research: posterior reversible encephalopathy syndrome, or PRES, is a syndrome that can present with stroke-like symptoms, but in fact has nothing to do with stroke. The syndrome was recognised as a separate entity only a couple of years ago, and the group of patients that we collected from the Charité is one of the largest collections in the world.

It is a syndrome characterised by edema (either vasogenic or cytotoxic), suggesting there is something wrong with the fluid balance in the brain. A good way to learn more about the fluids in the brain is to look at the different things you can measure in the cerebrospinal fluid (CSF). The aim of this paper was therefore to see to what extent the edema, but also other patient characteristics, was associated with CSF parameters.

Our main conclusion is that, indeed, the total amount of protein in the CSF is elevated in most PRES patients, and that a more severe edema grade was associated with higher CSF protein. Keep in mind that this is basically a case series (with some follow-up): CSF was measured during diagnosis and only in a selection of the patients. Selection bias is therefore likely to affect our results, as is the possibility of reverse causation. Next to that, research into “syndromes” is always complicated, as they are a man-made concept. We also encountered this problem in the RATIO analyses of the antiphospholipid syndrome (Urbanus, Lancet Neurol 2009): a true syndrome diagnosis could not be given, as that requires two blood draws three months apart, which is not possible in a case-control study. But still, there is a whole lot to learn about syndromes in our clinical research projects.

I think this is also true for the PRES study: our results show that it is justified to do prospective, rigorous and standardised analyses of patients with this dangerous syndrome. More knowledge on its causes and consequences is needed!

The paper can be cited as:

Neeb L, Hoekstra J, Endres M, Siegerink B, Siebert E, Liman TG. Spectrum of cerebral spinal fluid findings in patients with posterior reversible encephalopathy syndrome. J Neurol 2015 (e-pub), and can be found on pubmed or on my mendeley profile.

New article: Lipoprotein (a) as a risk factor for ischemic stroke: a meta-analysis


Together with several co-authors, and with first author AN in the lead, we did a meta-analysis of the role of Lp(a) as a risk factor for stroke. Bottom line: Lp(a) seems to be a risk factor for stroke, which was most prominently seen in the young.

The results are not the only reason why I am so enthusiastic about this article. It is also about the epidemiological problem that AN encountered and that we ended up discussing over coffee. The problem: the different studies use different categorisations (tertiles, quartiles, quintiles). How to use these data and pool them in a way that gives a valid and precise answer to the research question? In the end we used the technique of Danesh et al. (JAMA. 1998;279(18):1477-1482), which uses the normal distribution and the distances between group means expressed in SDs. A neat technique, even though it assumes a couple of things about the uniformity of the effect over the range of the exposure. An IPD meta-analysis would be better, as we would be free to investigate the dose-response relationship and could keep the adjustment for confounding uniform, but hey… this is cool in itself!
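For the curious, the trick can be sketched in a few lines. Under a standard-normal exposure, the means of the top and bottom of k equal groups sit a fixed number of SDs apart (≈2.18 for tertiles, ≈2.54 for quartiles, ≈2.80 for quintiles), so an extreme-groups log-RR can be rescaled to a per-1-SD log-RR. This is a sketch of the idea, not the paper's exact code:

```python
import math
from statistics import NormalDist

def per_sd_log_rr(rr_extreme, k):
    """Rescale a top-vs-bottom comparison of k equal exposure groups
    (tertiles k=3, quartiles k=4, quintiles k=5) to a log-RR per 1 SD,
    assuming a normally distributed exposure and a log-linear effect."""
    nd = NormalDist()
    cut = nd.inv_cdf(1 - 1 / k)       # cut-point of the top group
    top_mean = nd.pdf(cut) * k        # mean of the top 1/k of a standard normal
    distance = 2 * top_mean           # top mean minus bottom mean, in SDs
    return math.log(rr_extreme) / distance
```

Once every study's estimate is on this common per-SD scale, the pooling itself is ordinary inverse-variance weighting.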

The article can be found on pubmed and on my mendeley profile.

Fellow of the European Stroke Organisation

I just got word that I have been elected as a fellow of the European Stroke Organisation. Well, “elected” sounds cooler than it really is… I applied myself, by sending in an application letter, a resume, a form to show my experience in stroke research and letters of recommendation from two active fellows, and that’s that. So what does this mean? Basically, the fellows of the ESO are those who want to put some of their time to good use in the name of the ESO, for example by being active in one of its committees. I chose to get active in teaching epidemiology (teaching courses during the ESOC conferences, in the winter/summer schools, or perhaps in the to-be-founded ESO scientific journal), but exactly how is not completely clear yet. Nonetheless, I am glad that I can work with and through this organisation to improve the epidemiological knowledge in the field of stroke.