Joining the PLOS Biology editorial board

I am happy and honored to share that I am going to be part of the PLOS Biology editorial board. PLOS Biology has a special model for its editorial duties, with the core of the work being done by in-house staff editors – all scientists turned professional science communicators/publishers. They are supported by the academic editors – scientists who are active in their field and can help the in-house editors with insight and insider knowledge. I will join the team of academic editors.

When the staff editors asked me to join the editorial board, it quickly became clear that they invited me because I might be able to contribute to the Meta-research section of the journal. After all, next to the peer review reports I wrote for the journal, I published papers on missing mice, on the idea behind sequential designs in preclinical research, and more recently on the role of exact replication.

Next to the meta-research manuscripts that need evaluation, I am also looking forward to simply working with the professional and smart editorial office. The staff editors have already teased that a couple of innovations are coming up. So, next to helping meta-research forward, I am looking forward to helping shape and evaluate these experiments in scholarly publishing.

Kuopio Stroke Symposium

Kuopio in summer

Every year there is a neurology symposium organized in the quiet and beautiful town of Kuopio in Finland. Every three years, just like this year, the topic is stroke, and for that reason I was invited to be part of the faculty. A true honor, especially if you consider the other speakers on the program, who all delivered excellent talks!

But these symposia are about much more than just the cold hard science and prestige. They are also about making new friends and reconnecting with old ones. Leave that up to the Finns, whose decision to get us all on a boat and later into a sauna after a long day in the lecture hall proved to be a stroke of genius.

So, it was not for nothing that many of the talks boiled down to the idea that the best science is done with friends – in a team. This is true whether you are running a complex international stroke rehabilitation RCT, investigating the lower CVD morbidity and mortality amongst frequent sauna visitors, or, in my case, studying the role of hypercoagulability in young stroke – the PDF of my slides can be found here.

My talk in Augsburg – beyond the binary

@BobSiegerink & Jakob Linseisen discussing the p-values. Thank you for your visit and great talk pic.twitter.com/iBt5ZQxaMi— Sebastian Baumeister (@baumeister_se) 3 May 2019

I am writing this as I am sitting in the train on my way back to Berlin. I was in Augsburg today (2x 5.5 hours in the train!), a small university city near Munich in the south of Germany. SB, fellow epidemiologist and BEMC alumnus, invited me to give a talk in their lecture series.

I had a blast – in part because this talk posed a challenge for me, as they have a very mixed audience. I really had to think long and hard about how I could deliver a stimulating talk with a solid attention arc for everybody in the audience. Take a look at my slides to see if I succeeded: http://tiny.cc/beyondbinary

My talk at Kuopio stroke symposium

In 6 weeks or so I will be traveling to Finland to speak at the Kuopio stroke symposium. They asked me to talk about my favorite subject: hypercoagulability and ischemic stroke. Although I am still working on the last details of the slides, I can already provide you with the abstract.

The categories “vessel wall damage” and “disturbance of blood flow” from Virchow’s Triad can easily be used to categorize some well-known risk factors for ischemic stroke. This is different for the category “increased clotting propensity”, also known as hypercoagulability. A meta-analysis shows that markers of hypercoagulability are more strongly associated with the risk of first ischemic stroke than with myocardial infarction. This effect seems to be most pronounced in women and in the young, as the RATIO case-control study provides a large portion of the data in this meta-analysis. Although interesting from a causal point of view, understanding the role of hypercoagulability in the etiology of first ischemic stroke in the young does not directly lead to major actionable clinical insights. For this, we need to shift our focus to stroke recurrence. However, the literature on the role of hypercoagulability in stroke recurrence is limited. Some emerging treatment targets can, however, be identified. These include coagulation factors XI and XII, for which small-molecule and antisense oligonucleotide treatments are now being developed and tested. Their relatively small role in hemostasis, but critical role in pathophysiological thrombus formation, suggests that targeting these factors could reduce stroke risk without increasing the risk of bleeds. The role of neutrophil extracellular traps (NETs), long negatively charged DNA molecules that could act as a scaffold for coagulation proteins, is also not completely understood, although there are some indications that they could be targeted as a co-treatment for thrombolysis.

I am looking forward to this conference, not least to talk to some friends, get inspired by great speakers and science, and enjoy the beautiful surroundings of Kuopio.

postscript: here are my slides that I used in Kuopio

Should you drink one glass of alcohol to reduce your stroke risk?

The answer: no. For a long time there has been doubt about whether we should believe the observational data suggesting that limited alcohol use is in fact good for you. You know, the old “U-curve” association. Now, with some smart thinking from the Kadoorie Biobank teams in China and Oxford, as well as some other methods experts, the ultimate analysis has been done: a Mendelian randomization study published recently in the Lancet.

If you want to know what a Mendelian randomization study actually does, you can read a paper I co-wrote a couple of years ago for NDT, or the Dutch version in the NTVG. In short, the technique uses genetic variation as a proxy for the actual phenotype you are interested in. This can be a biomarker or, in this case, alcohol consumption. A large proportion of the Chinese population has genetic variations in the genes that code for the enzymes that break down alcohol in your blood. These genetic markers are therefore a good indicator of how much you can actually drink – at least at the group level. And since in most regions of China drinking alcohol is the norm, at least for men, how much you can drink is a good proxy for how much you actually do drink. Analyse the risk of stroke according to this genetically determined alcohol consumption instead of the traditional questionnaire-based alcohol consumption and voilà: no U-curve in sight, and thus no protective effect of drinking a little bit of alcohol.
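For readers who like to see the mechanics, here is a minimal simulation sketch of the core idea (invented numbers and variable names, not the Kadoorie data): a genetic variant that shifts alcohol intake but is unrelated to confounders can serve as an instrument, and the simple Wald ratio then recovers the causal effect where the naive regression does not.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50_000

# G: genetic variant (e.g. an alcohol-metabolism variant used as proxy for intake)
# U: unmeasured confounder, X: weekly alcohol intake, Y: continuous stroke risk score
G = rng.binomial(2, 0.3, n)                       # genotype is independent of U
U = rng.normal(size=n)                            # e.g. lifestyle
X = 5 + 2.0 * G + 1.5 * U + rng.normal(size=n)    # drinking depends on G and on U
Y = 0.1 * X + 1.0 * U + rng.normal(size=n)        # true causal effect of X on Y is 0.1

# Naive regression of Y on X is biased by the confounder U
naive = sm.OLS(Y, sm.add_constant(X)).fit().params[1]

# Wald ratio (simplest Mendelian randomization estimator): (G -> Y) / (G -> X)
g_on_y = sm.OLS(Y, sm.add_constant(G)).fit().params[1]
g_on_x = sm.OLS(X, sm.add_constant(G)).fit().params[1]
mr_estimate = g_on_y / g_on_x

print(f"naive: {naive:.2f}  MR (Wald ratio): {mr_estimate:.2f}  truth: 0.10")
```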

Why am I writing about that study on my own blog? I didn’t work on the research, that is for sure! No, it is because the Dutch newspaper NRC contacted me for some background information, which I was happy to provide. The science section of the NRC has always been one of the best in the Netherlands, which made it quite an honor as well as an adventure to get involved like that. The journalist, SV, did an excellent job of wrapping everything we discussed in that 30-40 minute video call into just under 600 words, which you can read here (Dutch). I really learned a lot helping out, and I am looking forward to doing this type of work again in the future.

Go beyond the binary outcome!

You were just diagnosed with a debilitating disease. You try to make sense of what the next steps are going to be. You ask your doctor: what do I need to do to get back to being a fully functioning adult, as well as humanly possible? The doctor starts to tell you what to do to reduce the risk of future events.

That sounds logical at first sight, but in reality, it is not. The question and the answer are disconnected on various levels: what is good for lowering your risk is not necessarily the same thing as what will bring functionality back into your life. They also play out on different time scales: getting back to a normal life is about weeks, perhaps months, while trying to keep recurrence risk as low as possible is a long-term game – lifelong, in fact.
A lot of research in various fields has mixed these two things up. The effects of acute treatment are evaluated in studies with 3-5 years of follow-up, or reducing recurrence risk is studied in large cohorts with only 6-12 months of follow-up. I am not arguing that this is always a bad idea, but I do think that a better distinction between these concepts could help some fields make progress.

We already do this in stroke research. For a while now, we have used the so-called modified Rankin scale (mRS) as the primary outcome in acute stroke trials. It is a 7-category ordinal scale, usually measured at 90 days after the stroke, that tells us whether the patient completely recovered (mRS 0) or died (mRS 6), and anything in between. This made so much sense for stroke that I started to wonder whether it would also make sense for other diseases.
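To make the contrast concrete, here is a small sketch with made-up mRS distributions (not trial data): dichotomizing at mRS 0-2 discards everything that happens between categories 3 and 6, while an ordinal shift-type comparison uses the full scale.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical 90-day mRS scores (0 = full recovery ... 6 = death) in two trial arms
control   = rng.choice(np.arange(7), size=300, p=[0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.10])
treatment = rng.choice(np.arange(7), size=300, p=[0.16, 0.19, 0.21, 0.17, 0.12, 0.08, 0.07])

# Binary view: "good outcome" defined as mRS 0-2
print(f"good outcome: {(treatment <= 2).mean():.2f} vs {(control <= 2).mean():.2f}")

# Ordinal view: a shift comparison over the whole scale keeps the information
# about moves between categories 3, 4, 5 and 6 that the binary outcome throws away
u_stat, p_value = stats.mannwhitneyu(treatment, control, alternative="two-sided")
print(f"Mann-Whitney (shift) test p = {p_value:.3f}")
```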

I think it does. In a paper published a couple of months ago in RPTH by JLR and me, we call upon the greater thrombosis community to consider looking beyond binary outcomes. I stand by this idea, and for that reason I brought it up again at the Maastricht Consensus Conference on Thrombosis. During that conference another speaker, EK, said that the field needed a new way to capture functionality after VTE. You guessed it: we got together over coffee, shared ideas, recruited SB as a third critical thinker, and came up with this: a call to action to improve the measurement of functional limitations after venous thromboembolism.

This is not just a call from us for others to take action; it is the start of new research activity by EK, SB and myself. First, we need input from other experts on the scale itself. Second, we need to standardize the way we actually score patients, then test this and get the patients’ perspective on the logistics and questions behind the scale. Third, we need to know the reliability of the scale and how the logistics work in a true RCT setting. Only when we have completed all these steps will we know whether looking beyond the binary outcome indeed brings more actionable information when you talk to your doctor and ask yourself “how do I increase my chances of getting back to being a fully functioning adult, as well as humanly possible?”

Replication: how exact do you want to be?

Doing exactly the same experiment a second time doesn’t really tell you much. In fact, if you quickly glance at the statistics, it might look like you might as well flip a coin. Wait... what? Yup, a coin flip. After all, doing the exact same experiment will give you a 50/50 chance of detecting the true effect (50% power).


The kernel of truth is of course that a coin flip never adds new useful information. But what does an exact replication experiment actually add? This is the question we try to answer in our latest paper in PLOS Biology, where we explore the added value of replication in biomedical research (see figure). The bottom line is that doing the exact same thing (including the same sample size) has only limited added value. To understand what the power implications for replication experiments actually are, we developed a Shiny app where readers can play around with different scenarios. Want to learn more? Take a look here: s-quest.bihealth.org/power_replication
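The 50% figure is easy to check for yourself. A minimal simulation sketch (my own, not the paper's code, and with an assumed per-group sample size): if the original result was just significant and that observed effect happens to be the true effect, an exact replication with the same sample size detects it only about half of the time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, alpha, sims = 20, 0.05, 20_000   # per-group n of the original experiment (assumed)

# Standardized effect that sits exactly at the significance boundary for this n
crit_t = stats.t.ppf(1 - alpha / 2, df=2 * n - 2)
d_boundary = crit_t * np.sqrt(2 / n)

# Assume that boundary effect is the true effect, then replicate with the same n
hits = 0
for _ in range(sims):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(d_boundary, 1.0, n)
    hits += stats.ttest_ind(b, a).pvalue < alpha

print(f"empirical power of the exact replication: {hits / sims:.2f}")  # roughly 0.5
```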


The project was carried out by SP and resulted in a paper published in PLOS Biology (find it here). The paper got some traction on news sites as well as on Twitter, as you can see from this Altmetric overview.

Reusing open data

I was thrilled when I learned that the QUEST Center at the BIH was going to reward open data reuse with awards. The details can be found on their website, but the bottom line is this: open science does not only mean opening up your data, but also actually using open data. If everybody opens up their data but nobody actually uses it, the added value is quite limited.

For that reason I started some projects back in 2015/2016 designed to see how easy it actually is to find data that could be used to answer a question you are actually interested in. The answer: not always that easy. The required variables might not be there, and even if they are, it is quite complex to start using a database that was not built by yourself. To understand the value of your results, you have to understand how the data were collected. One study proved to be so well documented that it was a contender: the English Longitudinal Study of Ageing. One of the subsequent analyses that we did was published in a paper – mentioned before on this blog – and that paper is the reason I am writing this post: we received the open data reuse award.

The award comes with 1000 euros attached, money the group can spend on travel and consumables. Now, do not get me wrong, 1000 euros is nothing to sneeze at. But it is not going to be the major driver in your decision whether or not to reuse open data. Still, the award is nice and, I hope, effective in stimulating open science, especially as it can stimulate the conversation and critical evaluation of the value of reusing open data.

Long journey, short(ish) story

This is a short story about a long journey – a project whose journey started in 2013, if I am not mistaken. In that year, we decided to link the RATIO case-control study to data from the Central Bureau of Statistics (CBS) in the Netherlands, allowing us to turn the case-control study into a follow-up study.

The first results of these analyses were published some time ago as “Recurrence and Mortality in Young Women With Myocardial Infarction or Ischemic Stroke”. To get these results into that journal, we were asked to reduce the paper to a letter. We did, and hope we were able to keep the core message clean and clear: the risk of arterial events after an arterial event remains high over a long period of time (15+ years) and remains true to type.

Just last week (!) we published another analysis of these data, in which we contrast the long-term risk for those with a presumably hypercoagulable blood profile with that of those who show no tendency towards clotting. The bottom line is that, if anything, there is a dose-response relationship between hypercoagulability and arterial thrombosis for ischemic stroke patients, but not for myocardial infarction patients. This is in line with the earlier conclusions on the role of hypercoagulability in stroke based on data from the same study. But I have to be honest: the evidence is not overwhelming. The precision is low, as shown by the broad confidence intervals, and with regard to the point estimates, no clinically relevant effects are seen. Then again, it is a piece of the puzzle that is needed to understand the role of hypercoagulability in young stroke.

Main figure from the paper: Q4 vs Q1 shows almost a doubling in risk

There is a lot to tell about this publication: how difficult it was to link the study data to the CBS to get to the 15-year follow-up, how AM did a fantastic job organizing the whole project, how quartile analyses are possibly not the best way to capture all the information that is in the data, how we had tremendous delays because of peer review – especially at the last journal – and how bad some of the peer review reports were, how one of the peer reviewers was a commercial enterprise which for some time paid people to do peer review, how the peer review reports are all open, and how it went getting the funding to keep the paper from being locked away behind a paywall.

But I want to keep this story short and not dwell too much on the past. The follow-up period was long, the time it took us to get this published was long, so let us keep the rest of the story as short as possible. I am just glad that it is published and can finally be shared with the world.

Pre-prints start to sound better and better…

Finding consensus in Maastricht

source https://twitter.com/hspronk

Last week, I attended and spoke at the Maastricht Consensus Conference on Thrombosis (MCCT). This is not your standard, run-of-the-mill conference where people share their most recent research. The MCCT is different and focuses on the larger picture, giving faculty the (plenary) stage to share their thoughts on opportunities and challenges in the field. Then, with the help of a team of PhD students, these thoughts are further discussed in break-out sessions. It was all wrapped up with a plenary discussion of what came out of the workshops. Interesting format, right?

It was my first MCCT, and beforehand I had difficulty envisioning how exactly this format would work out. Now that I have experienced it all, I can tell you that it really depends on the speaker and the people attending the workshops. When it comes to the 20-minute introductions by the faculty, I think that just an overview of the current state of the art is not enough. The best presentations were all about the bigger picture and had either an open question, a controversial statement or some form of “crystal ball” vision of the future. It really is difficult to “find consensus” when there is no controversy, as was the case for some plenary talks. Given the break-out nature of the workshops, my observations are limited in number. But from what I saw, some controversy (if need be, constructed only for the workshop) really did foster discussion amongst the workshop participants.

Two specific activities stand out for me. The first is the lecture and workshop on the post-PE syndrome and how we should be able to monitor the functional outcome of PE. Given my recent plea in RPTH for more ordinal analyses in the field of thrombosis and hemostasis – learning from stroke research with its mRS – we not only had a great academic discussion, but also immediately made plans for a couple of projects in which we could actually implement this. The second activity I really enjoyed was my own workshop, where I not only gave a general introduction to stroke (prehospital treatment and triage, clinical and etiological heterogeneity, etc.) but also focused on the role of FXI and NETs. We discussed the role of DNase as a potential co-treatment for tPA in the acute setting (talking about “crystal ball” type discussions!). Slides from my lecture can be found here (PDF). An honorable mention has to go out to the PhD students P and V, who did a great job supporting me during the preparation of the lecture and workshop. Their smart questions and shared insights really shaped my contribution.

Now, I said it was not always easy to find consensus, which means that it is not impossible. In fact, I am sure that the themes that were discussed all boil down to a couple of opportunities and challenges. A first step was made by HtC and HS from the MCCT leadership team in the closing session on Friday, which will prove to be a great springboard for the consensus paper that will help set the stage for future research in our field of arterial thrombosis.

Messy epidemiology: the tale of transient global amnesia and three control groups

Clinical epidemiology is sometimes messy. The methods and data that you might want to use might not be available or just too damn expensive. Does that mean that you should throw in the towel? I do not think so.

I am currently working in a more clinically oriented setting, as the only researcher trained as a clinical epidemiologist. I could tell you about being misunderstood and feeling lonely as the only one who has seen the light, but that would just be lying. The fact is that my position is one of privilege and opportunity, as I work together with many different groups on a wide variety of research questions that have the potential to influence clinical reality directly and bring small but meaningful progress to the field.

Sometimes that work is messy: not the right methods, a difference in interpretation, a p value in table 1… you get the idea. But sometimes something pretty comes out of that mess. That is what happened with this paper, which just got published online (e-pub) in the European Journal of Neurology. The general topic is the heart-brain interaction, and more specifically to what extent damage to the heart plays a role in transient global amnesia. The idea that there might be a link comes from some previous case series, as well as from the clinical experience of some of my colleagues. The next step would of course be a formal case-control study, and if you want to estimate true rate ratios, a lot of effort has to go into collecting data from a population-based control group. We had neither the time nor the money to do so, and upon closer inspection, we also did not really need that clean control group to answer some of the questions that would bring progress to the field.

So instead, we chose three different control groups, perhaps better referred to as reference groups, all three with some neurological disease. Yes, there are selections at play for each of these groups, but we can argue that those selections apply to all groups. If these selection processes are similar for all groups, strong differences in patient characteristics or biomarkers suggest that other biological systems are at play. The trick is not to hide these limitations but, like a practiced judoka, to leverage these weaknesses and turn them into strengths. Be open about what you did and show the results, so that others can build on that experience.

So that is what we did. Compared with patients with migraine with aura, vestibular neuritis or transient ischemic attack, patients with transient global amnesia are more likely to exhibit signs of myocardial stress. This study was not designed to – nor will it ever be able to – establish the cause of this link, nor do we pretend that our odds ratios are in fact estimates of rate ratios or anything fancy like that. Still, even though many aspects of this study are not “by the book”, it did provide some new insights that help further thinking about and investigation of this debilitating and impactful disease.

The effort was led by EH, and the final paper can be found here on PubMed.

Genetic determinants of activity and antigen levels of contact system factors

One of my slides with a cartoon of the intrinsic coagulation system. I know, the reality is way more complicated, but still, I like the picture!
The contact system, or intrinsic coagulation system, has for a long time been an undervalued part of the thrombosis and hemostasis field. Not by me: I love FXI and FXII. Not just now, since FXI has suddenly become the “new kid on the block” as a target for antithrombotic treatment through ASOs, but ever since I started my PhD in 2007/2008. As any of my colleagues from back then will confirm, I couldn’t shut up about FXI and FXII, as I thought my topic was the only relevant topic in the world. Although this is common amongst young researchers, I do apologize for it now that I have 20/20 hindsight.

Still, it is only natural that some of my work continues to focus on these slightly weird coagulation proteins. Are they relevant to hemostasis? Are they relevant to pathological thrombus formation? What is their role in other biological systems? These are questions the field is only slowly getting answers to. Our latest contribution is an analysis of genetic variation in the genes that code for these proteins, estimating whether activity, antigen and activation levels are in fact – in part – genetically determined.

This analysis was performed in the RATIO study, in which we primarily focused on the control group. That control group is relatively small for a genetic analysis, but given that it is a relatively young group, the hope is that the noise is not too bad to pick up some signals. Additionally, given the previous work in the RATIO study, I think this is the only dataset with comprehensive phenotyping of the intrinsic coagulation proteins, as it includes measures of protein activity, antigen and activation.

The results, which we published in the JTH, are threefold. First, we were able to confirm previously reported associations between known genetic variations and phenotype. Second, we were able to identify two new loci (i.e. KLKB1 rs4253243 for prekallikrein and KNG1 rs5029980 for HMWK levels). Third, we did not find evidence of strong associations between variation in the studied genes and the risk of ischemic stroke or myocardial infarction. Small effects can, however, not be ruled out, as the sample size of this study is not large enough to yield very precise estimates.

The work was spearheaded by JLR, with tons of help from HdH, and in collaboration with the thrombosis group at the LUMC.

The paper is published in the JTH, and as always, can also be found at my Mendeley profile.

Getting your life back on track after stroke: returning to work

https://goo.gl/CbNPSE

Stroke severity and incidence might be stabilizing, or even decreasing, over time in Western countries, but this sure is not true for other parts of the world. And here is something to think about: with increasing survival, people will suffer longer from the consequences of stroke. This is of course especially true if the stroke occurred at a young age.

To understand the true impact of stroke, we need to look beyond increased risk of secondary events. We need to understand how the disease affects day-to-day life, especially long term in young stroke patients. The team in Helsinki (HSYR) took a look at the pattern of young stroke patients returning to work. The results:

We included a total of 769 patients, of whom 289 (37.6%) were not working at 1 year, 323 (42.0%) at 2 years, and 361 (46.9%) at 5 years from IS.

That is quite shocking! But what about the pattern? For that we used lasagna plots, something like heatmaps for longitudinal epidemiological data. The results are above: the top panel shows the data as ordered in our database, while the lower panel has the rows sorted to help interpret the results a bit better.
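For those who have never made one: a lasagna plot is essentially an image with one row per patient and one column per time point, and the sorting step is what makes it readable. A toy sketch with simulated working-status data (made-up frequencies, not the HSYR data):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n_patients, visits = 100, 3                      # e.g. 1, 2 and 5 years after stroke
p_not_working = np.array([0.38, 0.42, 0.47])     # rough marginal frequencies, invented
working = (rng.random((n_patients, visits)) > p_not_working).astype(int)

fig, (ax_raw, ax_sorted) = plt.subplots(2, 1, figsize=(6, 6), sharex=True)
ax_raw.imshow(working, aspect="auto", cmap="Greys_r")
ax_raw.set_title("rows in database order")

# Sorting rows (here by total time spent working) turns noise into a readable pattern
order = np.argsort(working.sum(axis=1))
ax_sorted.imshow(working[order], aspect="auto", cmap="Greys_r")
ax_sorted.set_title("rows sorted by time spent working")
ax_sorted.set_xticks([0, 1, 2])
ax_sorted.set_xticklabels(["1 y", "2 y", "5 y"])
ax_sorted.set_xlabel("follow-up visit")
plt.tight_layout()
plt.show()
```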

The paper can be found here, and I am proud to say that it is open access, but you can as always just check my Mendeley profile.

Aarnio K, Rodríguez-Pardo J, Siegerink B, Hardt J, Broman J, Tulkki L, Haapaniemi E, Kaste M, Tatlisumak T, Putaala J. Return to work after ischemic stroke in young adults. Neurology 2018; 0: 1.

Cardiac troponin T and severity of cerebral white matter lesions: quantile regression to the rescue

quantile regression of high vs low troponin T and white matter lesion quantile

A new paper, this time venturing into the field of the so-called heart-brain interaction. We often see stroke patients with cardiac problems, and vice versa. And to make it even more complex, there is also a link to dementia! What to make of this? Is it a case of chicken and the egg, or just confounding by a third variable?  How do these diseases influence each other?

This paper tries to get a grip on this matter by zooming in on a marker of cardiac damage, i.e. cardiac troponin T. We looked at this marker in our stroke patients. Logically, stroke patients should not have increased levels of troponin T, and yet they do. More interestingly, the patients who exhibit high levels of this biomarker also show a high degree of structural changes in the brain, so-called cerebral white matter lesions.

But the problem is that patients with high levels of troponin T are different from those without any marker of cardiac damage. They are older and have more comorbidities, so a classic case for adjustment for confounding, right? But then we realize that both troponin and white matter lesions are heavily skewed data. You could log-transform the variables before running a linear regression, but then the interpretation of the results gets a bit complex if you want clear point estimates as answers to your research question.

So we decided to go with quantile regression, which models the quantile cut-offs with all the benefits of multivariable regression. The results remain interpretable, and we don’t force our data into a distribution where they don’t fit. From our paper:

In contrast to linear regression analysis, quantile regression can compare medians rather than means, which makes the results more robust to outliers [21]. This approach also allows to model different quantiles of the dependent variable, e.g. 80th percentile. That way, it is possible to investigate the association between hs-cTnT in relation to both the lower and upper parts of the WML distribution. For this study, we chose to perform a median quantile regression analysis, as well as quantile regression analysis for quintiles of WML (i.e. 20th, 40th, 60th and 80th percentile). Other than that, the regression coefficients indicate the effects of the covariate on the cut-offs of the respective quantiles of the dependent variable, adjusted for potential covariates, just like in any other regression model.
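In practice this is roughly a one-liner per quantile. A rough sketch with simulated skewed data (placeholder variable names, not the actual study dataset), using the quantile regression routine in statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "age": rng.normal(70, 10, n),
    "troponin": rng.lognormal(1.5, 0.6, n),       # skewed, like hs-cTnT
})
# skewed outcome that depends (weakly) on troponin and age
df["wml"] = np.exp(0.02 * df["age"] + 0.2 * np.log(df["troponin"]) + rng.normal(0, 0.8, n))

# one model per quantile of the WML distribution: the median plus the 20th-80th percentiles
for q in (0.2, 0.4, 0.5, 0.6, 0.8):
    fit = smf.quantreg("wml ~ troponin + age", df).fit(q=q)
    print(f"q = {q:.1f}   troponin coefficient = {fit.params['troponin']:.3f}")
```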

Interestingly, the results show that the association between high troponin T and white matter lesions is strongest in the higher quantiles. If you want to stretch this to a causal statement, it means that high troponin T has a more pronounced effect on white matter lesions in stroke patients who are already at the high end of the white matter lesion distribution.

But we shouldn’t stretch it that far. This is a relatively simple study, and the clinical relevance of our insights still needs to be established. For example, our unadjusted results might indicate that the association in itself is strong enough to help predict post-stroke cognitive decline. The adjusted numbers are less pronounced, but still, they might be enough to help prediction models.

The paper, led by RvR, is now published in J of Neurol, and can be found here, as well as on my mendeley profile.

 von Rennenberg R, Siegerink B, Ganeshan R, Villringer K, Doehner W, Audebert HJ, Endres M, Nolte CH, Scheitz JF. High-sensitivity cardiac troponin T and severity of cerebral white matter lesions in patients with acute ischemic stroke. J Neurol Springer Berlin Heidelberg; 2018; 0: 0.

Impact of your results: Beyond the relative risk

I wrote about this in an earlier post: JLR and I published a paper in which we explain that a single relative risk, irrespective of its form, is just not enough. Some crucial elements go missing in this dimensionless ratio. The RR lets us forget about the size of the denominator, the clinical context, and the crude binary nature of the outcome. So we have provided some methods and ways of thinking to go beyond the RR in a tutorial published in RPTH (now in early view). The content and message are nothing new for those trained in clinical research (one would hope). Even those without formal training will have heard most of the concepts discussed in a talk or poster. But with all these concepts in one place, with an explanation of why they provide a tad more insight than the RR alone, we hope to trigger young (and older) researchers to think about whether one of these measures would be useful. Not for them, but for the readers of their papers. The paper is open access (CC BY-NC-ND 4.0) and can be downloaded from the website of RPTH, or from my Mendeley profile.

How you quantify the impact of your results matters. A bit.

This is not about altmetrics. Nor is it about the emails you get from colleagues or patients. It is about the impact of a certain risk factor. A single relative risk is meaningless: as it is a ratio, it is dimensionless, and without the context hidden in the numerator and denominator, it can be tricky to interpret the results. Together with JLR I have a paper coming up in which we plead for using one of the many ways to convey the impact of your results, and to simply go beyond the simple relative risk. This will be published in RPTH, the relatively new journal of the ISTH, where I also happen to be on the editorial board.
Venn diagram illustrating the intersections of the independent predictors and poor outcome 12 months after stroke.  https://doi.org/10.1371/journal.pone.0204285.g003
One of those ways is to report the population attributable risk: the percentage of cases that can be attributed to the risk factor in question. It is often said that if we had a magic wand and used it to make the risk factor disappear, X% of the patients would not develop the disease. Some interpret this as the causal fraction, which is not completely correct if you dive really deep into epidemiological theory, but still, you get the idea. In a paper based on PROSCIS data, with first author CM at the helm, we tested several ways to calculate the PAR of five well-known and established risk factors for poor outcome after stroke. Understanding why one patient has a poor outcome and another does not is one of the things we really struggle with, as many patients with well-established risk factors just don’t develop a poor outcome. Quantifying the impact of risk factors and, arguably more importantly, ranking the risk factors is a good tool to help MDs, patients, researchers and public health officials know where to focus. However, when we compared the PARs calculated by different methods, we came to the conclusion that there is quite some variation. The details are in the table below, but the bottom line is this: it is not a good sign when your results depend on the method, as similar methods should give similar results. But upon closer inspection (and somewhat reassuringly), the order of magnitude as well as the rank of the 5 risk factors stays roughly the same.
https://doi.org/10.1371/journal.pone.0204285.g003
So, yes, it is possible to measure the impact of your results. These measures do depend on the method you use, which in itself is somewhat worrying, but given that we don’t have a magic wand that we expect to remove a fraction of the disease precise to 2 decimals, the PAR is a great tool to get some more grip on the context of the RR. The paper was published in PLOS ONE and can be found on their website or on my Mendeley profile. PS: this paper is one of the first papers with patient data for which we provided the data together with the manuscript. From the paper: “The data that support the findings of this study are available to all interested researchers at Harvard Dataverse (https://doi.org/10.7910/DVN/REBNRX).” Nice.
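To illustrate why the method matters, here are two classic textbook formulas side by side with toy numbers (not the PROSCIS estimates): Levin's formula uses the exposure prevalence in the whole population together with a crude RR, while Miettinen's formula uses the prevalence among cases and remains valid with an adjusted RR. When confounding inflates the crude RR, the two can diverge.

```python
def paf_levin(p_exposed_pop: float, rr: float) -> float:
    """Levin's formula: exposure prevalence in the whole population, crude RR."""
    return p_exposed_pop * (rr - 1) / (1 + p_exposed_pop * (rr - 1))

def paf_miettinen(p_exposed_cases: float, rr: float) -> float:
    """Miettinen's formula: exposure prevalence among cases, works with an adjusted RR."""
    return p_exposed_cases * (rr - 1) / rr

# toy inputs: the crude RR is inflated by confounding relative to the adjusted RR
p_pop, p_cases = 0.30, 0.45
rr_crude, rr_adjusted = 2.0, 1.6
print(f"Levin (crude RR):        {paf_levin(p_pop, rr_crude):.2f}")           # ~0.23
print(f"Miettinen (adjusted RR): {paf_miettinen(p_cases, rr_adjusted):.2f}")  # ~0.17
```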

Teaching award from the German society for epidemiology

teaching at ESOC 2018 summer school
teaching an interactive session on study design at ESOC 2018 summer school

The German society for epidemiology has an annual teaching award, i.e. the “Preis für exzellente Lehre in der Epidemiologie”. From their website:

“The award recognizes outstanding achievements or above-average commitment in the teaching of epidemiology. (…) Worthy of the prize are innovative, original or sustainable teaching offerings, as well as a particularly high level of personal commitment to teaching.”

In short, anything goes in terms of format, innovation, personal commitment etc. However, there is a trick: only students can nominate you. So what happened? My students nominated me for my “overall teaching concept”. Naturally, the DGEpi wondered what that teaching concept actually was and asked me to provide some more information. So I took that opportunity and actually described what and why I teach, to see what the actual concept behind this all is. Here is the result.

The bottom line is simple: I think you learn best not only by reading a book, but by doing, by helping with the organization, and by helping to teach in various epi-related activities. You need to be exposed to several formats with different people. So I have helped set up a plethora of activities for young students to learn epidemiology in different ways and at different levels: reading classics, discussing in weekly journal clubs, using popular science books in book clubs, but also organizing platforms for discussion, interaction and inspiration (yes, I am talking about BEMC). The most important thing might be that students should learn the basics of epidemiology, even though they might not need them for their own research projects. This is especially true for medical students who want to learn about clinical research.

Last week I learned that the award was in the end given to me. Of course I am honored on a personal level, and this honor needs to be extended to my mentors. But I also take this award as an indication that the recent and growing Berlin-based epi activities I helped organize together with epi enthusiasts at the IPH, iBIKE and QUEST did not go unnoticed by the German epidemiological community.

I will pick up the prize in Bremen at the yearly conference of the DGEpi. See you there?

Cerebral microbleeds and interaction with antihypertensive treatment in patients with ICH; a tale of two rejected letters

ICH is not my topic, but as we were preparing for the ESO summer school I explored areas of stroke research that were as yet untouched by me. That brought me to this paper by Shoamanesh et al. in JAMA Neurology, which investigates a potential interaction between CMB and the treatment at hand in relation to outcome in patients with ICH. Their conclusion: no interaction. The paper is easy to read and has, at first glance, convincing data, but then I realized some elements are just not right:

  • the outcome is not rare, yet a logistic model is used to estimate relative risks
  • interaction is assessed on the multiplicative scale, even though adding variables can change the estimates of interaction due to the non-collapsibility of the OR
  • the underlying clinical question of interaction is arguably better answered with an analysis of additive interaction.

I decided to write a letter to the editor. Why? Well, in addition to the methodological issues mentioned above, the power of the analyses was quite low, and a conclusion of “no effect” based on a p value >0.05 with low power is in itself a problem. Do I expect a massive shift in how I would interpret the data if they had been analysed differently? I don’t think so, especially as the precision of any quantification of additive interaction will be quite low. But that is not the main issue here: the way the data were presented does not allow the reader to assess additive interaction. So my letter focused on that: suggesting that the data be presented in a slightly different way, after which we can discuss whether the conclusions drawn by the authors still hold. Then, and only then, do we get the full picture of the value of CMB in treatment decisions. The thing is, we will then realize that the full picture is actually not the full picture, as the data are quite limited and imprecise, and more research is required before strong conclusions can be drawn.
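For readers wondering what “additive interaction” looks like in numbers: a common summary is the relative excess risk due to interaction (RERI), which only needs the three relative risks of a 2x2 exposure layout. The numbers below are invented for illustration and are not taken from the paper under discussion.

```python
def reri(rr_both: float, rr_a_only: float, rr_b_only: float) -> float:
    """Relative excess risk due to interaction: RERI = RR11 - RR10 - RR01 + 1.
    RERI = 0 means exactly additive joint effects; RERI > 0 suggests synergy."""
    return rr_both - rr_a_only - rr_b_only + 1

# invented relative risks for a 2x2 layout (e.g. CMB yes/no x intensive lowering yes/no)
print(f"{reri(rr_both=2.0, rr_a_only=1.5, rr_b_only=4 / 3):.2f}")
# 2.0 equals 1.5 * 4/3, so there is *no* multiplicative interaction,
# yet RERI = 2.0 - 1.5 - 1.33 + 1 = 0.17 > 0: the additive scale tells a different story.
```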

But the letter was rejected by JAMA Neurology because of space limitations and priority. I didn’t appeal. The same happened when I submitted an edited version of the letter to Neuroepidemiology. I didn’t appeal. In the meantime, I contacted the corresponding author, but he did not get back to me. So now what? PubMed Commons died. PubPeer is, to my taste, too focused on catching image fraud, even though they do welcome other types of contributions. I know my comments are only interesting for the methodologically inclined, and in the greater scheme of things their value is limited. I also understand space limitations when it comes to print, but what about online? Anyway, there are a lot of reasons why things happened the way they happened. But somebody told me that if it was important enough to write a letter, it is important enough to publish it somewhere. So here I am, posting my initial letter on my own website, which almost certainly means that not a single reader of the original paper will find out about these comments.

Post publication peer review ideas anybody?

The original paper can be found here, on the website of JAMA Neurology.

My letter can be found here: CMB and intense blood pressure lowering in ICH_ is there an additive effect

FVIII, Protein C and the Risk of Arterial Thrombosis: More than the Sum of Its Parts.

source: https://www.youtube.com/watch?v=jGMRLLySc4w 

Peer review is not a pissing contest. Peer review is not about finding the smallest of errors and delaying publication because of them. Peer review is not about being right. Peer review is not about rewriting the paper under review. Peer review is not about asking for yet another experiment.

 

Peer review is about making sure that the conclusions presented in the paper are justified by the data presented and peer review is about helping the authors get the best report on what they did.

At least, that is what I try to remind myself of when I write my peer review reports. So what happened when I reviewed a paper presenting data on the two hemostatic factors protein C and FVIII in relation to arterial thrombosis? These two proteins are known to interact directly with each other. But does this also translate into a “have both, get extra risk for free” situation when the two risk factors are combined?

There are two approaches to testing such interaction: statistical and biological. The authors presented one approach, while I thought the other was better suited to analyze and interpret the data. Did that result in an academic battle of arguments, or perhaps a peer review deadlock? No. The authors were civil enough to entertain my rambling thoughts and comments with additional analyses and results, but convinced me in the end that their approach had more merit in this particular situation. The editor of Thrombosis and Haemostasis saw all this going down and agreed with my suggestion that an accompanying editorial on this topic would help readers understand what actually happened during the peer review process. The nice thing is that the editor asked me to write that editorial, which can be found here; the paper by Zakai et al. can be found here.

All this taught me a thing or two about peer review: cordial peer review is always better (duh!) than a peer review street brawl, and sharing aspects of the peer review process can help readers understand the paper in more detail. Open peer review, especially the parts where peer review is not anonymous and reports are open to readers after publication, is a way to foster both practices. In the meantime, this editorial will have to do.

 

New paper: External defibrillator use by bystanders and patient outcomes

source: https://goo.gl/HkZkV5
Main analyses showing the effect of AED use on several endpoints

In this paper, together with researchers from Harvard and the Institute of Public Health at the Charité, we used data from the CARES dataset to answer some questions regarding the use of automated external defibrillators (AEDs) in the United States.

It is known from previous studies that AED use improves the clinical outcome of those who are treated with an AED. Less well known is whether AED use by untrained bystanders has a similarly beneficial effect, especially because

1) so-called neighborhood characteristics have not been taken into account in previous analyses, and

2) it is difficult to find the right control group.

This paper focuses on these two aspects by taking neighborhood characteristics into account and using so called “negative controls” (i.e. patients who were treated with AED but did not have a shockable rhythm).

I had a lot of fun in this project: I like it when my skills are helpful in fields I do not usually work in. Not only does it allow me to see how research methodology is applied in different fields, but it also helps me understand my own field much better. After all, both AED and STEMO are methods that aim to deliver treatment to a patient as soon as possible, in fact pre-hospital. If only a CT scanner could be that small… or can it…

The heavy lifting on this publication was done by LWA. Thanks for letting me join the adventure!

The paper can be found on pubmed, and on my mendeley profile

BEMC has a Journal Club now


After a year of successful BEMC talks and seeing BEMC grow, it was time for something new. We are starting a new journal club within the BEMC community, purely focussed on methods. The text below describes what we are going to do, starting in February. (The text comes from the BEMC website.)

BEMC is trying something new: a journal club. In February, we will start a monthly journal club to accompany the BEMC talks, as an experiment. The format is subject to change, as we will adapt after gaining more experience in what works and what does not. For now, we are thinking along the following lines:

Why another journal club?

Aren’t we already drowning in journal clubs? Perhaps, but not in this kind of journal club. The BEMC JClub is focussed on the methods of clinical research. Many epidemiologically inclined researchers work at departments that are not focussed on methodology, but rather on a disease or a field of medicine. This is reflected in the topics of the different journal clubs around town. We believe there is a need for a methods journal club in Berlin. Our hope is that the BEMC JClub will fulfill that need through interdisciplinary and methodological discussions of the papers that we read.

Who is going to participate?

First of all, please remember that the BEMC community is focussed on researchers with an intermediate to advanced epidemiological knowledge and skill set. This is not only true for our BEMC talks, but also for our JClub.

Next to this, we hope that we will end up with a group that reflects the BEMC community. This means that we are looking for a group with a nice mix of backgrounds and experience. So if you think you have a unique background and focus in your work, we highly encourage you to join us and make our group as diverse as possible. We strive for this diversity because we do not want the JClub sessions to become echo chambers or teaching sessions, but truly discussions that promote knowledge exchange between methodologists from different fields.

What will we read?

Anything that is relevant for those who attend. The BEMC team will ultimately determine which papers we read, but we are nice people and listen carefully to the suggestions of regulars. Sometimes we will pick a paper on the same (or a related) topic as the BEMC talk of that month.

Even though the BEMC team has the lead in the organisation, the content of the JClub should come from everybody attending. Everybody who attends the JClub is asked to provide some points, remarks or questions to jumpstart the discussion.

What about students?

Difficult to say. The BEMC JClub is not designed to teach medical students the basics of epidemiology. Then again, everybody who is smart, can keep up and can contribute to the discussion is welcome.

Are you a student and in doubt whether the BEMC JClub is for you? Just send us an email.

Where? When?

Details like these can be found on the BEMC JClub website. Just click here.

new paper: pulmonary dysfunction and CVD outcome in the ELSA study

This is a special paper for me, as it is 100% the product of my team at the CSB. Well, 100%? Not really. This is the first paper from a series of projects in which we work with open data, i.e. data collected by others who subsequently shared it. A lot of people talk about open data and how all the data created should be made available to other researchers, but not a lot of people talk about using that kind of data. For that reason we picked a couple of data resources to see how easy it is to work with data that was not initially collected by ourselves.

It is hard, as we have now learned. Even though the studies we focussed on (the ELSA study and the UK Understanding Society study) have a good description of their data and methods, understanding this takes time and effort. And even after putting in all the time and effort, you might still not know all the little details and idiosyncrasies in the data.

A nice example lies in the exposure that we used in this analysis, pulmonary dysfunction. The data for this exposure were captured in several different datasets, in different variables. Reverse-engineering a logical and interpretable concept out of these data points was not easy. This is perhaps also true for data that you collect yourself, but then at least this thinking is more or less done before data collection starts and no reverse engineering is needed.

So we learned a lot. Not only about the role of pulmonary dysfunction as a cause of CVD (hint: it is limited), about the different sensitivity analyses that we used to check the influence of missing data on the conclusions of our main analyses (hint: limited again), and about the need to update an exposure that progresses over time (hint: relevant), but also about what it is like to use data collected by others (hint: useful, but not easy).

The paper, titled “Pulmonary dysfunction and development of different cardiovascular outcomes in the general population”, with IP as the first author, can be found here on PubMed or via my Mendeley profile.

New Masterclass: “Papers and Books”

“Navigating numbers” is a series of Masterclasses initiated by a team of Charité researchers who think that our students should become more familiar with how numbers shape the field of medicine, i.e. both medical practice and medical research. And I get to organize the next one in line.

I am very excited to organise the next Masterclass together with J.O., a bright researcher with a focus on health economics. As the full title of the Masterclass is “Papers and Books – series 1 – intended effect of treatments”, some health economics knowledge is a must in this journal club style series of meetings.

But what exactly will we do? This Masterclass will focus on reading some papers as well as a book (very surprising), all with a focus on study design and how to do proper research into the “intended effect of treatment”. I borrowed this term from one of my former epidemiology teachers, Jan Vandenbroucke, as it helps to denote a part of the field of medical research with its own idiosyncrasies, without being limited to a single study design.

The Masterclass runs for only 8 meetings, which is not nearly enough for the students to understand all the ins and outs of proper study design. But that is also not the goal: we want to show the participants how to go about it when the ultimate question in medicine is asked: “should we treat or not?”

If you want to participate, please check out our flyer

New paper: Contribution of Established Stroke Risk Factors to the Burden of Stroke in Young Adults


Just a relative risk is not enough to fully understand the implications of your findings. Sure, if you are an expert in a field, the context of that field will help you assess the RR. But if you are not, the context of the numerator and denominator is often lost. There are several ways to work around this. If you have a question that revolves around group discrimination (i.e. questions of diagnosis or prediction), the RR needs to be understood in relation to other predictors or diagnostic variables. That combination is best assessed through added discriminatory value, such as AUC improvement, or even fancier methods like reclassification tables and net benefit indices. But if you are interested in a single factor (e.g. in questions of causality or treatment), a number needed to treat (NNT) or the population attributable fraction (PAF) can be used.

The PAF has been the subject of my publications before, for example in these papers where we use the PAF to provide context for the different ORs of markers of hypercoagulability in the RATIO study / in a systematic review. This paper is a more general text, as it is meant to provide insight for non-epidemiologists into what epidemiology can bring to the field of law. Here, the PAF is an interesting measure, as it is related to the etiological fraction – a number that can be very interesting in tort law. Some of my slides from a law symposium that I attended address these questions and that particular Dutch tort law case.

But the PAF is and remains an epidemiological measure that tells us what fraction of the cases in the population can be attributed to the exposure of interest. You can combine the PAFs of several factors into a single number (given some assumptions, which basically boil down to the idea that the combined factors work on an exactly multiplicative scale, both statistically and biologically). A 2016 Lancet paper that made a huge impact and increased interest in the concept of the PAF was the INTERSTROKE paper. It showed that up to 90% of all stroke cases can be attributed to only 10 factors, all of them modifiable.
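The combination rule itself is short. Under the assumption that the factors act independently and multiplicatively, single-factor PAFs are combined as one minus the product of their complements; the numbers below are made up and only show why PAFs do not simply add up beyond 100%.

```python
import numpy as np

def combined_paf(pafs):
    """Combine single-factor PAFs assuming independent, multiplicatively acting factors."""
    pafs = np.asarray(pafs, dtype=float)
    return 1 - np.prod(1 - pafs)

# made-up PAFs for four modifiable risk factors
print(f"{combined_paf([0.40, 0.35, 0.30, 0.25]):.2f}")   # ~0.80, while the naive sum is 1.30
```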

We wondered whether the same was true for young stroke patients. After all, the longstanding idea is that young stroke is a different disease from old stroke, one in which traditional CVD risk factors play a less prominent role. The idea is that more exotic causal mechanisms (e.g. hypercoagulability) play a more prominent role in this age group. Boy, were we wrong. In a dataset that combines data from the SIFAP and GEDA studies, we found that the bulk of the cases can be attributed to modifiable risk factors (80% to 4 risk factors). There are some elements of the paper (an age effect even within the young study population, subtype effects, definition effects) that I won’t go into here. For those you need to read the paper – published in Stroke – here, or via my Mendeley account. The main part of the work was done by AA and UG. Great job!

New paper in RPTH: Statins and the risk of DVT recurrence

I am very happy and honored that I can tell you that our paper “Statin use and risk of recurrent venous thrombosis: results from the MEGA follow-up study” is the very first paper in the new ISTH journal Research and Practice in Thrombosis and Haemostasis.

This new journal, for which I serve on the editorial board, is the sister journal of the JTH, but has a couple of focus points that are not present in the JTH. The biggest difference is the open access policy of RPTH. Next to that, there are a couple of article types and subjects that RPTH welcomes which are perhaps not so common in traditional journals (e.g. career development articles, educational pieces, nursing and patient perspectives, etc.).

Our paper is, however, a very standard paper, in the sense that it is an original research publication on the role of statins and the risk of thrombosis recurrence. We answer the question of whether statin use is indeed linked to a lower risk of recurrence based on observational data, which opens the door to confounding by indication. To counteract this, we applied a propensity score and, most importantly of all, we only used so-called “incident users”. Incident vs prevalent users of statins is a theme that has come up on this blog before (for example here and here). The bottom line is this: people who are currently using statins are different from people who are newly prescribed statins – adherence issues, side effects, or low life expectancy could be reasons for discontinuation. You need to take the difference between these types of statin users into account, or the protective effect of statins, or of any other medication for that matter, might be biased. In the case of statins and DVT recurrence, it can be argued that the risk-lowering effect of statins would otherwise be overestimated. In itself that is not a problem in an observational study. But if the results of this observational study are subsequently used in a sample size calculation for a proper trial, that trial will be underpowered, and we might have lost our (expensive and potentially only) shot at really knowing whether or not DVT patients benefit from statins.
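A rough sketch of what such an analysis can look like (simulated data and made-up variable names, not the MEGA follow-up data): keep only incident users, model the propensity to start a statin, and weight the outcome comparison by the inverse of that propensity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 4_000
# hypothetical cohort of first-DVT patients; prevalent statin users are assumed to have
# been excluded already, so "statin" marks incident (new) users only
df = pd.DataFrame({"age": rng.normal(60, 12, n), "cvd_history": rng.binomial(1, 0.25, n)})
ps_true = 1 / (1 + np.exp(-(-3 + 0.04 * df["age"] + 1.2 * df["cvd_history"])))
df["statin"] = rng.binomial(1, ps_true)                     # sicker patients start statins more often
risk_true = 1 / (1 + np.exp(-(-2 + 0.02 * df["age"] - 0.2 * df["statin"])))
df["recurrence"] = rng.binomial(1, risk_true)

# propensity score for starting a statin, then inverse-probability-of-treatment weights
ps = smf.logit("statin ~ age + cvd_history", data=df).fit(disp=0).predict()
weights = np.where(df["statin"] == 1, 1 / ps, 1 / (1 - ps))

# weighted risks: confounding by the measured indication variables is (largely) removed
treated = (df["statin"] == 1).to_numpy()
rr = (np.average(df.loc[treated, "recurrence"], weights=weights[treated])
      / np.average(df.loc[~treated, "recurrence"], weights=weights[~treated]))
print(f"IPT-weighted risk ratio for recurrence, statin vs none: {rr:.2f}")
```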

RPTH will be launched during ISTH 2017 which will be held in Berlin in a couple of weeks.

New paper: A Prothrombotic Score Based on Genetic Polymorphisms of the Hemostatic System Differs in Patients with IS, MI, or PAOD

My first paper in Frontiers in Cardiovascular Medicine, an open access platform focussed on cardiovascular medicine. This is not a regular case-control study, where the prevalence of a risk factor is compared between an unselected patient group and a reference group from the general population. No, this paper takes patients with cardiovascular disease who are referred for thrombophilia testing. When the different diseases (ischemic stroke vs myocardial infarction / PAOD) are then compared in terms of their thrombophilic propensity, it is clear that these two groups are different. The first explanation that comes to mind is that thrombophilia indeed plays a different role in the etiology of these diseases, as we demonstrated in a RATIO publication as well as in this systematic review, but it might also be that there is just a different referral pattern. In any case, it indicates that the role of thrombophilia – whether it is causal or physician suspected – is different between the different forms of arterial thrombosis.

Advancing prehospital care of stroke patients in Berlin: a new study to see the impact of STEMO on functional outcome

There are strange ambulances driving around in Berlin. They are the so-called STEMO cars, or Stroke Einsatz Mobile, basically mobile stroke units. They can perform a CT scan to rule out bleeds and subsequently start thrombolysis before the patient reaches the hospital. A previous study showed that this decreases time to treatment by ~25 minutes. The question now is whether the patients are indeed better off in terms of functional outcome. For that we are currently running the B_PROUD study, of which we recently published the design here.

Virchow’s triad and lessons on the causes of ischemic stroke

I wrote a blog post for BMC, the publisher of Thrombosis Journal in order to celebrate blood clot awareness month. I took my two favorite subjects, i.e. stroke and coagulation, and I added some history and voila!  The BMC version can be found here.

When I look out of my window from my office at the Charité hospital in the middle of Berlin, I see the old pathology building in which Rudolf Virchow used to work. The building is just as monumental as the legacy of this famous pathologist, who gave us what is now known as Virchow’s triad for thrombotic diseases.

In ‘Thrombose und Embolie’, published in 1865, he postulated that the consequences of thrombotic disease can be attributed to one of three categories: phenomena of interrupted blood flow, phenomena associated with irritation of the vessel wall and its vicinity, and phenomena of blood coagulation. This concept has since been modified to describe the causes of thrombosis and has been a guiding principle for many thrombosis researchers.

The traditional split in interest between arterial thrombosis researchers, who focus primarily on the vessel wall, and venous thrombosis researchers, who focus more on hypercoagulation, might not be justified. Take ischemic stroke for example. Lesions of the vascular wall are definitely a cause of stroke, but perhaps only in the subset of patients who experience a so-called large vessel ischemic stroke. It is also well established that a disturbance of blood flow in atrial fibrillation can cause cardioembolic stroke.

Less well studied, but perhaps no less relevant, is the role of hypercoagulation as a cause of ischemic stroke. It seems that an increased clotting propensity is associated with an increased risk of ischemic stroke, especially in the young, in whom the main cause of the stroke remains undetermined in about a third of cases. Perhaps hypercoagulability plays a much more prominent role than we traditionally assume?

But this ‘one case, one cause’ approach takes Virchow’s efforts to classify thrombosis a bit too strictly. Many diseases can be called multi-causal, which means that no single risk factor in itself is sufficient and only a combination of risk factors working in concert causes the disease. This is certainly true for stroke, and it translates to the idea that each stroke subtype might be the result of a different combination of risk factors.

If we combine Virchow’s work with the idea of multi-causality, and the heterogeneity of stroke subtypes, we can reimagine a new version of Virchow’s Triad (figure 1). In this version, the patient groups or even individuals are scored according to the relative contribution of the three classical categories.

From this figure, one can see that some subtypes of ischemic stroke might be more like some forms of venous thrombosis than other forms of stroke, a concept that could bring new ideas for research and perhaps has consequences for stroke treatment and care.

Figure 1. An example of a gradual classification of ischemic stroke and venous thrombosis according to the three elements of Virchow’s triad.
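To make the idea of such a gradual classification concrete, here is a toy sketch in Python; the subtypes and weights are entirely invented for illustration and are not the values shown in the actual figure.

```python
# Invented example of grading patient groups by the relative contribution of the
# three triad components; the weights are illustrative only and sum to 1 per group.
triad_profiles = {
    "large vessel stroke":      {"vessel wall": 0.6, "blood flow": 0.2, "coagulation": 0.2},
    "cardioembolic stroke":     {"vessel wall": 0.1, "blood flow": 0.7, "coagulation": 0.2},
    "cryptogenic young stroke": {"vessel wall": 0.2, "blood flow": 0.2, "coagulation": 0.6},
    "unprovoked DVT":           {"vessel wall": 0.1, "blood flow": 0.3, "coagulation": 0.6},
}

for group, weights in triad_profiles.items():
    assert abs(sum(weights.values()) - 1.0) < 1e-9   # relative contributions sum to 1
    print(f"{group:>25}: {weights}")
```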

However, recent developments in the field of stroke treatment and care have been focused on the acute treatment of ischemic stroke. Stroke ambulances that can discriminate between hemorrhagic and ischemic stroke (information needed to start thrombolysis in the ambulance) drive the streets of Cleveland, Gothenburg, Edmonton and Berlin. Other major developments are in the field of mechanical thrombectomy, with wonderful results from many studies such as the Dutch MR CLEAN study. Even though these two new approaches save lives and prevent disability in many, they are ‘too late’ in the sense that they are reactive and do not prevent clot formation.

Therefore, in this blood clot awareness month, I hope that stroke and thrombosis researchers join forces and further develop our understanding of the causes of ischemic stroke so that we can Stop The Clot!

Increasing efficiency of preclinical research by group sequential designs: a new paper in PLOS biology

We have another paper published in PLOS Biology. The theme is in the same area as the first paper I published in that journal, which had the wonderful title “Where have all the rodents gone?”, but this time we did not focus on threats to internal validity; instead, we explored whether sequential study designs can be useful in preclinical research.

Sequential designs, what are those? It is a family of study designs (perhaps you could call it the “adaptive study size design” family) where one takes a quick peek at the results before the total number of subjects is enrolled. But this peek comes at a cost: it should be taken into account in the statistical analyses, as it has direct consequences for the interpretation of the final result of the experiment. The bottom line is this: with the information you get halfway through, you can decide to continue with the experiment or to stop for efficacy or futility reasons. If this sounds familiar to those acquainted with interim analyses in clinical trials, that is because it is the same concept. However, we explored its impact when applied to animal experiments.
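As a minimal sketch of the idea (this is not the simulation code from the paper; the effect size, sample size and the Pocock-type boundary of roughly 0.0294 per look are assumptions picked for illustration), a two-stage design with one interim look could be simulated like this:

```python
# Toy two-stage group sequential design: one interim look at half the sample,
# stopping early for efficacy if the test crosses a Pocock-type boundary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
ALPHA_PER_LOOK = 0.0294        # Pocock-type nominal alpha for two looks (~0.05 overall)
N_PER_ARM, EFFECT = 20, 0.8    # assumed animals per arm and standardized effect size

def one_experiment():
    ctrl = rng.normal(0.0, 1.0, N_PER_ARM)
    trt = rng.normal(EFFECT, 1.0, N_PER_ARM)
    half = N_PER_ARM // 2
    # Interim look after half of the animals per arm have been tested.
    if stats.ttest_ind(trt[:half], ctrl[:half]).pvalue < ALPHA_PER_LOOK:
        return "stopped early for efficacy", 2 * half
    # Otherwise continue to the full sample and test again at the adjusted level.
    p_final = stats.ttest_ind(trt, ctrl).pvalue
    return ("rejected at final look" if p_final < ALPHA_PER_LOOK else "no effect claimed"), 2 * N_PER_ARM

results = [one_experiment() for _ in range(2000)]
avg_n = np.mean([n for _, n in results])
print(f"average animals per experiment: {avg_n:.1f} (fixed design: {2 * N_PER_ARM})")
```

The point of the sketch is simply that, with a real effect, many experiments stop at the interim look, so the average number of animals used is below that of the fixed design while the error rates remain controlled by the adjusted boundary.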

Figure from our publication in PLOS Biology describing sequential study designs in our computer simulations

“Old wine in new bottles,” one might say, and some of the reviewers of this paper rightfully pointed out that our paper was not novel in terms of showing that sequential designs are more efficient than non-sequential designs. But that is not where the novelty lies. Up until now, we have not seen people applying this approach to preclinical research in a formal way. However, our experience is that a lot of preclinical studies are done with some kind of informal sequential aspect. No p<0.05? Just add another mouse/cell culture/synapse/MRI scan to the mix! The problem here is that there is no formal framework in which this is done, leading to cherry picking, p-hacking and other nasty stuff that you can’t grasp from the methods and results section.

Should all preclinical studies from now on have sequential designs? My guess would be NO, and there are two major reasons why. First of all, sequential data analyses have their idiosyncrasies and might not be for everyone. Second, the logistics of sequential study designs are complex, especially if you are afraid of introducing batch effects. We only wanted to show preclinical researchers that the sequential approach has its benefits: the same information at, on average, lower cost. If you translate “cost” into animals, the obvious conclusion is: apply sequential designs where you can, and the reduction in animals can be “re-invested” in more animals per study to obtain higher power in preclinical research. But I hope that the side effect of this paper (or perhaps its main effect!) will be that readers simply think about their current practices and whether those involve the ‘informal sequential designs’ that really hurt science.

The paper, this time with a less exotic title, “Increasing efficiency of preclinical research by group sequential designs”, can be found on the website of PLOS Biology.

Associate editor at BMC Thrombosis Journal


In the week just before Christmas, HtC approached me asking whether or not I would like to join the editorial board of BMC Thrombosis Journal as an Associate Editor. The aims and scope of the journal, taken from their website:

“Thrombosis Journal is an open-access journal that publishes original articles on aspects of clinical and basic research, new methodology, case reports and reviews in the areas of thrombosis. Topics of particular interest include the diagnosis of arterial and venous thrombosis, new antithrombotic treatments, new developments in the understanding, diagnosis and treatments of atherosclerotic vessel disease, relations between haemostasis and vascular disease, hypertension, diabetes, immunology and obesity.”

I talked to HtC, someone at BMC, as well as some of my friends and colleagues about whether or not this would be a wise thing to do. Here is an overview of the points that came up:

Experience: Thrombosis is the field I grew up in as a researcher. I know the basics, and I have some extensive knowledge of specific parts of the field. But with my move to Germany, I started to focus on stroke, so one might wonder why I do not use my time to work with a stroke-related journal. My answer is that the field of thrombosis is a stroke-related field and that my position in both worlds is a good opportunity to learn from both fields. Sure, there will be topics that I have less knowledge of, but this is where an associate editor should rely on expert reviewers and fellow editors.

This new position will also provide me with a bunch of new experiences in itself: for example, sitting on the other side of the table in the peer review process might help me better understand a rejection of one of my own papers. The bottom line is that I think I both bring and gain relevant experiences in this new position.

Time: These things cost time. A lot. Especially when you still need to learn the skills needed for the job, like me. But learning these skills as an associate editor is an integral part of working in the scientific enterprise, and I am sure that the time I invest will help me develop as a scientist. Also, the time that I need to spend is not necessarily the type of time that I currently lack, i.e. writing time. For writing and doing research myself I need decent blocks of time to dive in and focus (4+ hours if possible). The time I need to perform my associate editor tasks is more fragmented: finding peer reviewers, reading their comments and making a final judgement are relatively fragmented activities, and I am sure that as soon as I get the hang of it I can squeeze those activities into shorter slots of time. Perhaps a nice way to fill those otherwise lost 30 minutes between two meetings?

Open science: Thrombosis Journal is part of the BioMed Central family. As such, it is a 100% OA journal. It is not that I am an open science fanboy or sceptic, but I am very curious how OA is developing, and working with an OA journal will help me understand what OA can and cannot deliver.

Going over these points, I am convinced that I can contribute to the journal with my experience in the fields of coagulation, stroke and research methodology. Also, I think that the time it will take to learn the necessary skills is an investment that in the end will help me grow as a researcher. So, I replied to HtC with a positive answer. Expect an email requesting a peer review report soon!

New team member!

A couple of weeks ago I announced that my team was looking for a new post-doc. I received many applications, some even from as far as Italy and Spain. Out of this pile of candidates we were able to find one who fulfilled all the requirements we had in mind and then some. It is great that she will join the team in December. JH has worked in the field of epidemiology for quite some time and is not only experienced in setting up new projects and providing physicians with methodological input on their clinical research projects, but she also has a great interest in the more methodological side of epidemiology. For example, she is co-author/developer of the program DAGitty, which can be used to draw causal diagrams. She is also speaker of the methodology working group of the German Society of Epidemiology (DGEpi). Her background in psychology also means that she brings a lot of knowledge on methods that we as a team do not yet have. In short, a great addition to the team. Welcome JH!


Berlin Epidemiological Methods Colloquium kicks off with SER event

A small group of epi-nerds (JLR, TK and myself) decided to start a colloquium on epidemiological methods. This colloquium series kicks off with a webcast of an event organised by the Society for Epidemiologic Research (SER), but in general we will organise meetings focussed on advanced topics in epidemiological methods. Anyone interested is welcome. Regular meetings will start in February 2017. All meetings will be held in English.
More information on the first event can be found below or via this link:

“Perspective of relative versus absolute effect measures” via SERdigital

Date: Wednesday, November 16th 2016 Time: 6:00pm – 9:00pm
Location: Seminar Room of the Neurology Clinic, first floor (Alte Nervenklinik)
Bonhoefferweg 3, Charite Universitätsmedizin Berlin- Campus Mitte, 10117 Berlin
(Map: https://www.charite.de/service/lageplan/plan/map/ccm_bonhoefferweg_3)

Description:
Join us for a live, interactive viewing party of a debate between two leading epidemiologists, Dr. Charlie Poole and Dr. Donna Spiegelman, about the merits of relative versus absolute effect measures. Which measure of effect should epidemiologists prioritize? This digital event organized by the Society for Epidemiologic Research will also include three live oral presentations selected from submitted abstracts. There will be open discussion with other viewers from across the globe and opportunities to submit questions to the speakers. And since no movie night is complete without popcorn, we will provide that, too! For more information, see: https://epiresearch.org/ser50/serdigital

If you plan to attend, please register (space limited): https://goo.gl/forms/3Q0OsOxufk4rL9Pu1


The paradox of the BMI paradox


I had the honor to be invited to the PHYSBE research group in Gothenburg, Sweden. I got to talk about the paradox of the BMI paradox. In the announcement abstract I wrote:

“The paradox of the BMI paradox”
Many fields have their own so-called "paradox", where a risk factor in certain instances suddenly seems to be protective. A good example is the BMI paradox, where high BMI in some studies seems to be protective of mortality. I will argue that these paradoxes can be explained by a form of selection bias. But I will also discuss that these paradoxes have provided researchers with much more than just an erroneous conclusion on the causal link between BMI and mortality.

I first address the problem of BMI as an exposure. Easy stuff. But then we come to index event bias, or collider stratification bias, and how selections do matter – in recurrence research (like PFO & stroke) or health status research (like BMI) – and can introduce confounding into the equation.
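As a small illustration of the mechanism (my own toy simulation, not data from the talk; all effect sizes are invented), conditioning on an index event that is caused by both BMI and an unmeasured factor can make BMI look protective even when it has no effect on mortality at all:

```python
# Toy collider stratification example: BMI and an unmeasured factor U both cause
# the index event (e.g. CVD); U also raises mortality; BMI has no effect on death.
# Selecting on the index event induces a negative BMI-U association, so BMI
# appears 'protective' for mortality among the cases.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100_000
bmi = rng.normal(0, 1, n)                      # standardized BMI
u = rng.normal(0, 1, n)                        # unmeasured risk factor
p_event = 1 / (1 + np.exp(-(-2 + 1.5 * bmi + 1.5 * u)))
event = rng.binomial(1, p_event)               # index event caused by BMI and U
p_death = 1 / (1 + np.exp(-(-2 + 1.5 * u)))
death = rng.binomial(1, p_death)               # mortality driven by U only

cases = event == 1
fit = sm.Logit(death[cases], sm.add_constant(bmi[cases])).fit(disp=0)
print(np.exp(fit.params[1]))                   # OR < 1: spurious 'protective' BMI effect
```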

I see that this confounding might not be enough to explain all that is observed in observational research, so I continued looking for other reasons why there are such strong feelings about these paradoxes. Do they exist, or don’t they? I found that the two sides tend to “talk in two worlds”. One side talks about causal research and asks what we can learn from the biological systems that might play a role, whereas others think from their clinical point of view and start to talk about RCTs and the need for weight control programs in patients. But there is a huge difference in study design, research question and interpretation of results between the studies that they cite and interpret. Perhaps part of the paradox can be explained by this misunderstanding.

But the cool thing about the paradox is that through complicated topics, new hypotheses, interesting findings and strong feelings about the existence of paradoxes, I think we can all agree: the field of obesity research has won in the end. And by winning I mean that the methods are now better described, better discussed and better applied. New hypotheses are being generated and confirmed or refuted. All in all, the field makes progress not despite, but because of the paradox. A paradox that doesn’t even exist. How is that for a paradox?

All in all it was an interesting day, and I think I made some friends in Gothenburg. Perhaps we can do some cool science together!

Slides can be found here.

Predicting DVT with D-dimer in stroke patients: a rebuttal to our letter

Some weeks ago, I reported on a letter to the editor of Thrombosis Research on the question whether D-Dimer indeed does improve DVT risk prediction in stroke patients.

I was going to write a whole story on how one should not use a personal blog to continue the scientific debate. As you can guess, I ended up writing a full paragraph in which I did this anyway. So I deleted that paragraph and I am going to do something that requires some action from you: I am just going to leave you with the links to the letters and let you decide whether the issues we bring up, as well as the corresponding rebuttal of the authors, help to interpret the results from the original publication.

ECTH 2016


Last week was the first edition of the European Congress on Thrombosis and Hemostasis in The Hague (NL). The idea of this conference is to provide a platform for European thrombosis researchers and doctors to meet in the dull years between ISTH meetings. There is a strong emphasis on enabling and training young researchers, as can be seen from the different activities and organisational aspects. One of these was the Junior Advisory Board, of which I was part. We had the task to give advice, both solicited and unsolicited, and to help organise and shape some of the innovative aspects. For example, we had the so-called fast-and-furious sessions, where the authors of the best abstracts were asked to let go of the standard presentation format and share their research TED-talk style.

I learned a lot during these sessions, and I even got in contact with some groups that have interesting methods and approaches that we might apply in our own studies and patient populations. My thoughts: targeting FXII and FXI, as well as DNAse treatment, are the next big thing. We also had a great selection of speakers for the meet-the-experts and how-to sessions. These sessions demanded active participation from all participants, which is really a great way to build new collaborations and friendships.

The 5K fun run with 35+ participants was also a great success.

The Wednesday plenary sessions, including the talks on novel and innovative methods of scholarly communication as well as the very well received sessions from Malcolm Macleod on reducing research waste, were inspiring to all. Missed it? Do not worry, the speakers have shared their slides online!

All in all, the conference was a great success in both numbers (750+ participants) and scientific quality. I am looking forward to the next edition, which will be held in Marseille in two years’ time. Hope to see you all there!

How to set up a research group

A couple of weeks ago I wrote down some thoughts I had while writing a paper for the JTH series on Early Career Researchers. I was asked to write about how one sets up a research group, and the four points I described in my previous post can be recognised in the final paper.

But I also added some reading tips in the paper. Reading on a particular topic helps me not only to learn what is written in the books, but also to get into a certain mindset. So, when I knew that I was going to take over a research group in Berlin, I read a couple of books, both fiction and non-fiction. Some were about Berlin (e.g. Cees Nooteboom’s Berlijn 1989/2009), some were focussed on academic life (e.g. Porterhouse Blue). They helped to get my mind in a certain gear and prepare me for what was coming. In that sense, my bookcase says a lot about me.

Number one on the list of recommended reads are the standard management best-sellers, as I wrote in the text box:

// Management books There are many titles that I can mention here; whether it is the best-seller Seven Habits of Highly Effective People or any of the smaller booklets by Ken Blanchard, I am convinced that reading some of these texts can help you in your own development as a group leader. Perhaps you will like some of the techniques and approaches that are proposed and decide to adopt them. Or, like me, you may initially find yourself irritated because you cannot envision the approaches working in the academic setting. If this happens, I encourage you to keep reading, because even in these cases I learned something about how academia works and what my role as a group leader could be through this process of reflection. My absolute top recommendation in this category is Leadership and Self-Deception: a text that initially got on my nerves but in the end taught me a lot.

I really think that is true. You should not only read books that you agree with, or whose story you enjoy. Sometimes you can like a book not for its content but for the way it makes you question your own preexisting beliefs and habits. But it is true that this sometimes makes it difficult to actually finish such a book.

Next to books, I am quite into podcasts, so I also wrote:

// Start up. Not a book, but a podcast from Gimlet media about “what it’s really like to get a business off the ground.” It is mostly about tech start-ups, but the issues that arise when setting up a business are in many ways similar to those you encounter when you are starting up a research group. I especially enjoyed seasons 1 and 3.

I thought about including the sponsored podcast “Open for Business” from Gimlet Creative, as it touches upon some very relevant aspects of starting something new. But for me the jury is still out on the “sponsored podcast” concept – it is branded content from Amazon, and I am not sure to what extent I like that. For now, I do not like it enough to include it in the list in my JTH paper.

The paper is not yet online due to the summer break, but I will provide a link asap.

– update 11.10.2016 – here is a link to the paper. 


Does d-dimer really improve DVT prediction in stroke?


Good question, and even though thromboprophylaxis is already given according to guidelines in some countries, I can see the added value of a well discriminating prediction rule. Especially finding those patients with a low DVT risk might be useful. But whether D-dimer should be used for this is a whole other question. To answer it, a thorough prediction model needs to be set up both with and without the information of D-dimer, and only a direct comparison of these two models will provide the information we need.

In our view, that is not what the paper by Balogun et al. did. And after critical appraisal of the tables and text, we found some inconsistencies that prohibit the reader from understanding what exactly was done and which results were obtained. In the end, we decided to write a letter to the editor, especially to prevent other readers from mistakenly taking over the conclusion of the authors, namely that “D-dimer concentration within 48 h of acute stroke is independently associated with development of DVT. This observation would require confirmation in a large study.” Our opinion is that the data from this study need to be analysed properly to justify such a conclusion. One of the key elements in our letter is that the authors never compare the AUC of the model with and without D-dimer. This is needed, as that comparison would provide the bulk of the answer to whether or not D-dimer should be measured. The only clue we have are the ORs of D-dimer, which range between 3 and 4, which is not really impressive when it comes to diagnosis and prediction. For more information on this, please check this paper on the misuse of the OR as a measure of interest for diagnosis/prediction by Pepe et al.
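For readers who wonder what such a comparison would look like in practice, here is a minimal sketch in Python on simulated data (not the Balogun et al. cohort; all variables and effect sizes are invented): fit the prediction model with and without D-dimer and compare the two AUCs.

```python
# Hypothetical comparison of discrimination with and without D-dimer on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 2000
age = rng.normal(70, 10, n)
immobility = rng.binomial(1, 0.4, n)
log_ddimer = rng.normal(0, 1, n)
p_dvt = 1 / (1 + np.exp(-(-6 + 0.05 * age + 0.8 * immobility + 0.5 * log_ddimer)))
dvt = rng.binomial(1, p_dvt)

X_base = np.column_stack([age, immobility])               # model without D-dimer
X_full = np.column_stack([age, immobility, log_ddimer])   # model with D-dimer

auc_base = roc_auc_score(dvt, LogisticRegression(max_iter=1000).fit(X_base, dvt).predict_proba(X_base)[:, 1])
auc_full = roc_auc_score(dvt, LogisticRegression(max_iter=1000).fit(X_full, dvt).predict_proba(X_full)[:, 1])
print(f"AUC without D-dimer: {auc_base:.3f}, with D-dimer: {auc_full:.3f}")
```

In a real analysis one would of course evaluate both models on data not used for fitting (cross-validation or a separate validation set); the point here is only that the added value of D-dimer is judged by the difference in discrimination, not by its odds ratio.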

A final thing I want to mention is that our letter was the result of a mini-internship of one of the students at the Master programme of the CSB and was drafted in collaboration with our Virchow scholar HGdH from the Netherlands. Great team work!

The letter can be found on the website of Thrombosis Research as well as on my Mendeley profile.


Teaching a new module: Critical Thinking in Translational Medicine


I have the honor to design and teach a new master module in not one, but two master programs at the Charité. This new module has the title “Critical Thinking in Translational Medicine” and will focus on the concept that science is an exercise in uncertainty. But somehow scientists, especially the young, do not seem to be trained in handling these uncertainties. Overselling of results, scientific fads and why “most research findings are false” will be on the schedule of this 15-week course starting this October.

But that’s not all. We will also cover some new innovations and activities in the scientific enterprise: data sharing and new ways to publish and share your results will be discussed by our students. The goal is that each week starts with a short introductory perspective. Of course there will be some exercises and group discussions. Each week, four students have the task to summarise the results of the meeting, as well as to prepare a pro-contra debate, which will be held on two occasions. Perhaps these students should even write some blog entries?

The bottom line is this: science is more than using a pipette, understanding confounding or knowing why a regression model does what it does. It is also about the scientific enterprise, which is what it is, and has many shortcomings. Some critical thinking on these topics, together with some good discussion, will help our students to form their own thoughts on these issues and hopefully prepare them for a wonderful scientific career.


Starting a research group: some thoughts for a new paper


It has been 18 months since I started at the CSB in Berlin to take over the lead of the clinical epidemiology research group. Recently, the ISTH early career taskforce contacted me to ask whether I would be willing to write something about my experiences over the last 18 months as a rookie group leader. The idea is that these experiences, combined with a couple of other papers on similarly useful topics for early career researchers, will be published in JTH.

I was a bit reluctant at first, as I believe that how people handle the new situations one encounters as a new group leader depends strongly on personality and individual circumstances. But then again, the new situations that I encountered might be more generalizable to other people. So I decided to go ahead and focus on describing the new situations I found myself in, while keeping the personal experiences limited and for illustration only.

While writing, I discerned that there are basically four things about my new situation that I would have loved to realise a bit earlier:

  1. A new research group is never without context; get to know the academic landscape around your research group, as this is where you will find people for new collaborations, etc.
  2. You either start a new research group from scratch or you inherit one; be aware that both have very different consequences and require different approaches.
  3. Try to find training and mentoring to help you cope with the new roles that group leaders have; it is not only the role of group leader that you need to get adjusted to. HR manager, accountant, mentor, researcher, project initiator, project manager and consultant are just a couple of the roles that I also need to fulfill on a regular basis.
  4. New projects: it is tempting to put all your power, attention, time and money behind one project, but sometimes new projects fail. Perhaps start a couple of small side projects as a contingency?

As said, the situations I describe in the paper might be very specific to my own circumstances and as such not applicable to everyone. Nonetheless, I hope that reading the paper helps other young researchers prepare for the transition from post-doc to group leader. I will report back when the paper is finished and available online.