My talk in Augsburg – beyond the binary

"@BobSiegerink & Jakob Linseisen discussing the p-values. Thank you for your visit and great talk." (tweet by Sebastian Baumeister, @baumeister_se, 3 May 2019; pic.twitter.com/iBt5ZQxaMi)

I am writing this as I sit on the train on my way back to Berlin. I was in Augsburg today (2x 5.5 hours on the train!), a small university city near Munich in the south of Germany. SB, fellow epidemiologist and BEMC alumnus, invited me to give a talk in their lecture series.

I had a blast – in part because this talk posed a challenge for me, as they have a very mixed audience. I really had to think long and hard about how to deliver a stimulating talk with a solid attention arc for everybody in the audience. Take a look at my slides to see if I succeeded: http://tiny.cc/beyondbinary


My talk at the Kuopio Stroke Symposium

In 6 weeks or so I will be traveling to Finland to speak at the Kuopio Stroke Symposium. They asked me to talk about my favorite subject, hypercoagulability and ischemic stroke. Although I am still working on the last details of the slides, I can already provide you with the abstract.

The categories “vessel wall damage” and “disturbance of blood flow” from Virchow’s triad can easily be used to categorize some well-known risk factors for ischemic stroke. This is different for the category “increased clotting propensity”, also known as hypercoagulability. A meta-analysis shows that markers of hypercoagulability are more strongly associated with the risk of first ischemic stroke than with myocardial infarction. This effect seems to be most pronounced in women and in the young, as the RATIO case-control study, which studied young women, provides a large portion of the data in this meta-analysis. Although interesting from a causal point of view, understanding the role of hypercoagulability in the etiology of first ischemic stroke in the young does not directly lead to major actionable clinical insights. For this, we need to shift our focus to stroke recurrence. However, the literature on the role of hypercoagulability in stroke recurrence is limited. Some emerging treatment targets can, however, be identified. These include coagulation factors XI and XII, for which small-molecule and antisense oligonucleotide treatments are now being developed and tested. Their relatively small role in hemostasis, but critical role in pathophysiological thrombus formation, suggests that targeting these factors could reduce stroke risk without increasing the risk of bleeding. The role of neutrophil extracellular traps, long negatively charged DNA molecules that could act as a scaffold for coagulation proteins, is also not completely understood, although there are some indications that they could be targeted as a co-treatment for thrombolysis.

I am looking forward to this conference, not least to catch up with some friends, get inspired by great speakers and science, and enjoy the beautiful surroundings of Kuopio.

Postscript: here are the slides I used in Kuopio.

Should you drink one glass of alcohol to reduce your stroke risk?

The answer: no. For a long time there has been doubt about whether we should believe the observational data suggesting that limited alcohol use is in fact good for you. You know, the old “U-curve” association. Now, with some smart thinking from the China Kadoorie Biobank guys from China/Oxford as well as some other methods experts, the ultimate analysis has been done: a Mendelian randomization study published recently in the Lancet.

If you want to know what that actually entails, you can read a paper I co-wrote a couple of years ago for NDT, or the version in Dutch for the NTVG. In short, the technique uses genetic variation as a proxy for the actual phenotype you are interested in. This can be a biomarker or, in this case, alcohol consumption. A large proportion of the Chinese population carries genetic variations in the genes that code for the enzymes that break down alcohol in your blood. These genetic markers are therefore a good indicator of how much you can actually drink – at least on a group level. And since in most regions of China drinking alcohol is the norm, at least for men, how much you can drink is a good proxy for how much you actually do drink. Analyse the risk of stroke according to this unbiased, genetically determined alcohol consumption instead of the traditional questionnaire-based alcohol consumption and voilà: no U-curve in sight –> no protective effect of drinking a little bit of alcohol.
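For those who want to see the mechanics rather than read the methods papers: in its simplest form a Mendelian randomization estimate is just the ratio of two regression coefficients, the so-called Wald ratio. Below is a minimal sketch of that idea; every number in it is invented for illustration and has nothing to do with the actual Lancet analysis.

```python
# Minimal Mendelian randomization sketch (Wald ratio estimator).
# All numbers are made up for illustration; they are NOT from the Lancet paper.

beta_gene_exposure = 0.80   # variant's effect on alcohol intake (drinks/week per allele)
beta_gene_outcome = 0.04    # same variant's effect on the log-odds of ischemic stroke
se_gene_outcome = 0.015     # standard error of the gene-outcome association

# If the variant influences stroke only through alcohol (the core MR assumption),
# the causal effect of alcohol on stroke is the ratio of the two associations.
wald_ratio = beta_gene_outcome / beta_gene_exposure
se_wald = se_gene_outcome / abs(beta_gene_exposure)  # first-order (delta method) SE

print(f"log-odds of stroke per extra drink/week: {wald_ratio:.3f} (SE ~{se_wald:.3f})")
```

With real data you would of course combine multiple variants and worry about pleiotropy, but this ratio is the heart of the method: the genotype stands in for the questionnaire.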

Why am I writing about that study on my own blog? I didn’t work on the research, that is for sure! No, it is because the Dutch newspaper NRC actually contacted me for some background information, which I was happy to provide. The science section of the NRC has always been one of the best in the Netherlands, which made it quite an honor, as well as an adventure, to get involved like that. The journalist, SV, did an excellent job of wrapping everything we discussed in that 30-40 minute video call into just under 600 words, which you can read here (Dutch). I really learned a lot helping out and I am looking forward to doing this type of work again sometime in the future.

Go beyond the binary outcome!

You were just diagnosed with a debilitating disease. You try to make sense of what the next steps are going to be. You ask your doctor: what do I need to do to get back to being a fully functioning adult, as well as humanly possible? The doctor starts to tell you what to do to reduce the risk of future events.

That sounds logical at first sight, but in reality it is not. The question and the answer are disconnected on various levels: what is good for lowering your risk is not necessarily the same as what will bring functionality back into your life. They are also about different time scales: getting back to a normal life is a matter of weeks, perhaps months, whereas keeping recurrence risk as low as possible is a long-term game – lifelong, in fact.
A lot of research in various fields has mixed these two things up. The effects of acute treatment are evaluated in studies with 3-5 years of follow-up, or reducing recurrence risk is studied in large cohorts with only 6-12 months of follow-up. I am not arguing that this is always a bad idea, but I do think that a better distinction between these concepts could help some fields make progress.

We do that in stroke. For a while now we have adopted the so-called modified Rankin scale (mRS) as the primary outcome in acute stroke trials. It is a 7-category ordinal scale, usually measured at 90 days after the stroke, that tells us whether the patient completely recovered (mRS 0), died (mRS 6), or ended up anywhere in between. This made so much sense for stroke that I started to wonder whether it would also make sense for other diseases.
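To make that a bit more concrete, here is a small simulation comparing the same hypothetical trial analysed in two ways: dichotomized at “good outcome” (mRS 0-2) versus analysed across the full ordinal scale. The outcome distributions and sample sizes are invented for illustration, not taken from any real trial; the point is simply that a shift across all seven categories is easier to detect when you do not throw the ordering away.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical 90-day mRS distributions (categories 0-6); the probabilities are
# invented, with the treated arm shifted slightly towards better outcomes.
p_control = np.array([0.10, 0.15, 0.15, 0.20, 0.20, 0.10, 0.10])
p_treated = np.array([0.14, 0.18, 0.16, 0.19, 0.17, 0.08, 0.08])

n_per_arm = 300
n_trials = 2000
hits_binary, hits_ordinal = 0, 0

for _ in range(n_trials):
    ctrl = rng.choice(7, size=n_per_arm, p=p_control)
    trt = rng.choice(7, size=n_per_arm, p=p_treated)

    # Binary analysis: dichotomize at "good outcome" (mRS 0-2), chi-square test
    table = [[(trt <= 2).sum(), (trt > 2).sum()],
             [(ctrl <= 2).sum(), (ctrl > 2).sum()]]
    p_bin = stats.chi2_contingency(table)[1]

    # Ordinal analysis: shift across the whole scale (Mann-Whitney U test)
    p_ord = stats.mannwhitneyu(trt, ctrl, alternative="two-sided")[1]

    hits_binary += p_bin < 0.05
    hits_ordinal += p_ord < 0.05

print(f"power, dichotomized mRS: {hits_binary / n_trials:.2f}")
print(f"power, full ordinal mRS: {hits_ordinal / n_trials:.2f}")
```

Dichotomizing still has its place – it is easy to communicate – but you pay for that simplicity with information and, usually, power.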

I think it does. In a recent paper published a couple of months ago in RPTH, JLR and I call upon the wider thrombosis community to consider looking beyond a binary outcome. I stand by this idea, and for that reason I brought it up again at the Maastricht Consensus Conference on Thrombosis. During that conference another speaker, EK, said that the field needed a new way to capture functionality after VTE. You guessed it: we got together over coffee, shared ideas, recruited SB as a third critical thinker, and came up with this: a call to action to improve measuring functional limitations after venous thromboembolism.

This is not just a call from us for others to take action; it is the start of new research activity by EK, SB and myself. First, we need input from other experts on the scale itself. Second, we need to standardize the way we actually score patients, then test this and get the patients’ perspective on the logistics and questions behind the scale. Third, we need to know the reliability of the scale and how the logistics work in a true RCT setting. Only when we complete all these steps will we know whether looking beyond the binary outcome indeed brings more actionable information when you talk to your doctor and ask yourself “how do I increase my chances of getting back to being a fully functioning adult, as well as humanly possible?”.

Replication: how exact do you want to be?

Doing exactly the same experiment a second time around doesn’t really tell you much. In fact, if you quickly glance at the statistics, it might look like you might as well flip a coin. Wait... what? Yup, a coin flip. After all, if the original result was only just significant, doing the exact same experiment gives you roughly a 50/50 chance of detecting the true effect (50% power).


The kernel of truth is of course that a coin flip never adds new useful information. But what does an exact replication experiment actually add? This is the question we try to answer in our latest paper in PLOS Biology, where we explore the added value of replication studies in biomedical research (see figure). The bottom line is that doing exactly the same thing (including the same sample size) has only limited added value. To understand what the power implications for replication experiments actually are, we developed a Shiny app where readers can play around with different scenarios. Want to learn more? Take a look here: s-quest.bihealth.org/power_replication
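If you don’t want to take the coin-flip claim on faith, here is a quick toy simulation of my own (not code from the paper or the app, and the sample size is an arbitrary assumption): take an original two-group experiment that was only just significant, assume the true effect is exactly the effect that was observed, and replicate it with the same sample size.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n = 50          # per-group sample size of the original experiment (assumption)
alpha = 0.05
z_crit = stats.norm.ppf(1 - alpha / 2)

# Suppose the original experiment was "just" significant: its observed
# standardized effect sat exactly at the critical value.
se_diff = np.sqrt(2 / n)          # SE of a mean difference with unit variance
true_effect = z_crit * se_diff    # assume the true effect equals that observed effect

# Exact replication: same design, same sample size, repeated many times.
n_sim = 20_000
significant = 0
for _ in range(n_sim):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_effect, 1.0, n)
    significant += stats.ttest_ind(b, a).pvalue < alpha

print(f"replication 'power': {significant / n_sim:.2f}")  # close to 0.50, a coin flip
```

Change the replication sample size and the picture shifts quickly – which is essentially the kind of scenario the Shiny app lets you explore.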


The project was carried out by SP and resulted in a paper published in PLOS Biology (find it here). The paper got some traction on news sites as well as on Twitter, as you can see from this Altmetric overview.

Reusing open data

I was thrilled when I learned that the QUEST Center at the BIH was going to reward open data reuse with awards. The details can be found on their website, but the bottom line is this: open science does not only mean opening up your data, it also means actually using open data. So if everybody opens up their data but nobody actually uses it, the added value is quite limited.

For that reason I started some projects back in 2015/2016 designed to see how easy it actually is to find data that could be used to answer a question you are actually interested in. The answer: not always that easy. The required variables might not be there, and even if they are, it is quite complex to start using a database that was not built by yourself. To understand the value of your results, you have to understand how the data were collected. One study proved to be so well documented that it was a contender: the English Longitudinal Study of Ageing. One of the subsequent analyses we did was published in a paper (mentioned before on this blog), and that paper is the reason why I am writing this post: we received the Open Data Reuse Award.

The award comes with 1000 euro attached, money the group can spend on travel and consumables. Now, do not get me wrong, 1000 euro is nothing to sneeze at. But 1000 euro is not going to be a major driver in your decision whether or not to reuse open data. Still, the award is nice and, I hope, effective in stimulating open science, especially as it can stimulate the conversation and critical evaluation of the value of reusing open data.

Long journey, short(ish) story

This is a short story about a long journey: a project whose journey started, if I am not mistaken, in 2013. In that year we decided to link the RATIO case-control study to data from Statistics Netherlands (Centraal Bureau voor de Statistiek, CBS), allowing us to turn the case-control study into a follow-up study.

The first results of these analyses were published some time ago as “Recurrence and Mortality in Young Women With Myocardial Infarction or Ischemic Stroke”. To get these results into that journal, we were asked to reduce the paper to a letter. We did, and I hope we were able to keep the core message clean and clear: the risk of arterial events after a first arterial event remains high over a long period of time (15+ years) and remains true to type.

Just last week (!) we published another analysis of the data, in which we contrast the long-term risk for those with a presumably hypercoagulable blood profile to the risk for those who do not show a tendency to clotting. The bottom line is that, if anything, there is a dose-response relationship between hypercoagulability and arterial thrombosis for ischemic stroke patients, but not for myocardial infarction patients. This is all in line with the conclusions on the role of hypercoagulability in stroke based on data from the same study. But I have to be honest: the evidence is not that overwhelming. The precision is low, as seen from the broad confidence intervals, and with regard to the point estimates, no clinically relevant effects were seen. Then again, it is a piece of the puzzle that is needed to understand the role of hypercoagulability in young stroke.

Main figure from the paper: Q4 vs Q1 shows almost a doubling in risk.

There is a lot to tell about this publication: how difficult it was to get the study data linked to the CBS to reach the 15-year follow-up, how AM did a fantastic job organizing the whole project, how quartile analyses are possibly not the best way to capture all the information in the data, how we had tremendous delays because of peer review – especially at the last journal, how bad some of the peer review reports were, how one of the peer reviewers was a commercial enterprise – which for some time paid people to do peer review, how the peer review reports are all open, and what it took to get the funding to keep the paper from being locked away behind a paywall.

But I want to keep this story short and not dwell too much on the past. The follow-up period was long, and the time it took us to get this published was long, so let us keep the rest of the story as short as possible. I am just glad that it is published and can finally be shared with the world.

Pre-prints are starting to sound better and better…