Should you drink one glass of alcohol to reduce your stroke risk?

The answer: no. For a long time there has been doubt about whether we should believe the observational data suggesting that limited alcohol use is in fact good for you. You know, the old “U-curve” association. Now, with some smart thinking from the Kadoorie group from China/Oxford as well as some other methods experts, the definitive analysis has been done: a Mendelian randomization study recently published in the Lancet.

If you want to know what that actually entails, you can read a paper I co-wrote a couple of years ago for NDT, or the Dutch version for the NTVG. In short, the technique uses genetic variation as a proxy for the actual phenotype you are interested in. This can be a biomarker, or in this case, alcohol consumption. A large proportion of the Chinese population carries genetic variants in the genes that code for the enzymes that break down alcohol in your blood. These genetic markers are therefore a good indicator of how much you can actually drink – at least on a group level. And since drinking alcohol is the norm in most regions of China, at least for men, how much you can drink is a good proxy for how much you actually do drink. Analyse the risk of stroke according to this unbiased, genetically determined alcohol consumption instead of the traditional questionnaire-based alcohol consumption and voilà: no U-curve in sight, and thus no protective effect of drinking a little bit of alcohol.
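If you want to see the mechanics of such an analysis, here is a minimal sketch in Python. To be clear: this is not the code or data from the Lancet paper; the effect sizes are made up and the variable names (genotype, alcohol, stroke) are purely illustrative. The idea is simply that the variant’s effect on stroke, divided by its effect on alcohol intake, gives an instrumental-variable (Wald ratio) estimate of the effect of alcohol itself.

```python
import numpy as np
import statsmodels.api as sm

# Minimal Mendelian randomization sketch (Wald ratio) on simulated data.
# genotype = number of low-tolerance alleles in an alcohol-metabolism gene,
# alcohol  = drinks per week, stroke = incident stroke (0/1). All made up.
rng = np.random.default_rng(42)
n = 10_000
genotype = rng.binomial(2, 0.25, n)                               # the instrument
alcohol = np.maximum(0, 10 - 4 * genotype + rng.normal(0, 3, n))  # the exposure
log_odds = -3 + 0.05 * alcohol                                    # true causal effect
stroke = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))             # the outcome

# Stage 1: how strongly does the variant predict alcohol intake?
beta_gx = sm.OLS(alcohol, sm.add_constant(genotype)).fit().params[1]

# Stage 2: how strongly does the variant predict stroke?
beta_gy = sm.Logit(stroke, sm.add_constant(genotype)).fit(disp=0).params[1]

# Wald ratio: causal log-odds of stroke per extra drink per week
print(f"IV estimate: {beta_gy / beta_gx:.3f} (simulated truth: 0.05)")
```

Because genotype is fixed at conception, this estimate is far less vulnerable to the confounding and reverse causation that plague questionnaire-based analyses.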

Why am I writing about that study on my own blog? I didn’t work on the research, that is for sure! No, it is because the Dutch newspaper NRC contacted me for some background information, which I was happy to provide. The science section of the NRC has always been one of the best in the Netherlands, which made it quite an honor as well as an adventure to get involved like that. The journalist, SV, did an excellent job of wrapping everything we discussed in that 30-40 minute video call into just under 600 words, which you can read here (Dutch). I really learned a lot helping out, and I am looking forward to doing this type of work again sometime in the future.

Go beyond the binary outcome!

You were just diagnosed with a debilitating disease. You try to make sense of what the next steps are going to be. You ask your doctor: what do I need to do to get back to being a fully functioning adult, as far as humanly possible? The doctor starts to tell you what to do in order to reduce the risk of future events.

That sounds logical at first sight, but in reality, it is not. The question and the answer are disconnected on various levels: what is good for lowering your risk is not necessarily what will bring functionality back into your life. They also operate on different time scales: getting back to a normal life is a matter of weeks, perhaps months, whereas keeping recurrence risk as low as possible is a long-term game – lifelong, in fact.
A lot of research in various fields has muddled these two things up. The effects of acute treatment are evaluated in studies with 3-5 years of follow-up, or reducing recurrence risk is studied in large cohorts with only 6-12 months of follow-up. I am not arguing that this is always a bad idea, but I do think that a sharper distinction between these concepts could help some fields make progress.

We make that distinction in stroke. For a while now we have used the so-called modified Rankin Scale as the primary outcome in acute stroke trials. It is a 7-category ordinal scale, usually measured at 90 days after the stroke, that tells us whether the patient completely recovered (mRS 0), died (mRS 6), or ended up anywhere in between. This made so much sense for stroke that I started to wonder whether it would also make sense for other diseases.
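To make concrete what an ordinal outcome buys you over a binary one, here is a hedged sketch in Python with simulated data – not any real trial. The treatment effect, the mRS 0-2 dichotomization and the use of statsmodels’ proportional-odds model are my own illustrative choices: the same data are analysed once as a dichotomized “good outcome” and once over the full 7-category scale.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Simulated trial: treated (0/1) and the modified Rankin Scale (0-6) at 90 days.
rng = np.random.default_rng(1)
n = 1_000
treated = rng.integers(0, 2, n)
latent = rng.normal(3, 1.5, n) - 0.5 * treated   # treatment shifts patients to lower mRS
mrs = np.clip(np.round(latent), 0, 6).astype(int)

# Binary view: dichotomize at mRS 0-2 ("good outcome") versus 3-6
good = (mrs <= 2).astype(int)
binary = sm.Logit(good, sm.add_constant(treated)).fit(disp=0)
print("OR for a good (mRS 0-2) outcome:", round(np.exp(binary.params[1]), 2))

# Ordinal view: proportional-odds model over all seven categories
ordinal = OrderedModel(mrs, pd.DataFrame({"treated": treated}), distr="logit")
fit = ordinal.fit(method="bfgs", disp=0)
print("Common OR for a shift to a better mRS:", round(np.exp(-fit.params["treated"]), 2))
```

The ordinal model uses all seven categories, so a shift from mRS 5 to mRS 3 counts as a gain even though both fall on the “bad” side of the usual dichotomy.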

I think it does make sense for other diseases as well. In a paper published a couple of months ago in RPTH, JLR and I call upon the wider thrombosis community to consider looking beyond a binary outcome. I stand by this idea, and for that reason I brought it up again at the Maastricht Consensus Conference on Thrombosis. During that conference another speaker, EK, argued that the field needed a new way to capture functionality after VTE. You guessed it: we got together over coffee, shared ideas, recruited SB as a third critical thinker, and came up with this: a call to action to improve the measurement of functional limitations after venous thromboembolism.

This is not just a call for others to take action; it is the start of new research activity by EK, SB and myself. First, we need input from other experts on the scale itself. Second, we need to standardize the way we actually score patients, then test this and get the patients’ perspective on the logistics and questions behind the scale. Third, we need to establish the reliability of the scale and how the logistics work in a true RCT setting. Only when we have completed all these steps will we know whether looking beyond the binary outcome indeed brings more actionable information when you talk to your doctor and ask yourself, “how do I increase my chances of getting back to being a fully functioning adult, as far as humanly possible?”

Finding consensus in Maastricht

[Image source: https://twitter.com/hspronk]

Last week, I attended and spoke at the Maastricht Consensus Conference on Thrombosis (MCCT). This is not your standard, run-of-the-mill conference where people share their most recent research. The MCCT is different: it focuses on the larger picture by giving faculty the (plenary) stage to share their thoughts on opportunities and challenges in the field. Then, with the help of a team of PhD students, these thoughts are further discussed in a breakout session. Everything is wrapped up by a plenary discussion of what came out of the workshops. Interesting format, right?

It was my first MCCT, and beforehand I had difficulty envisioning how exactly this format would work out. Now that I have experienced it all, I can tell you that it really depends on the speaker and the people attending the workshops. When it comes to the 20-minute introductions by the faculty, just an overview of the current state of the art is not enough. The best presentations were all about the bigger picture, and offered either an open question, a controversial statement or some form of “crystal ball” vision of the future. It really is difficult to “find consensus” when there is no controversy, as was the case in some plenary talks. Given the break-out nature of the workshops, my observations are limited in number. But from what I saw, some controversy (even if constructed only for the workshop) really did foster discussion amongst the participants.

Two specific activities stand out for me. The first is the lecture and workshop on the post-PE syndrome and how we should be able to monitor the functional outcome of PE. Given my recent plea in RPTH for more ordinal analyses in the field of thrombosis and hemostasis – learning from stroke research with its mRS – we not only had a great academic discussion, but immediately made plans for a couple of projects where we could actually implement this. The second activity I really enjoyed was my own workshop, where I not only gave a general introduction to stroke (prehospital treatment and triage, clinical and etiological heterogeneity, etc.) but also focused on the role of FXI and NETs. We discussed DNase as a potential co-treatment with tPA in the acute setting (talk about “crystal ball” type discussions!). Slides from my lecture can be found here (PDF). An honorable mention goes out to the PhD students P and V, who did a great job supporting me during the preparation of the lecture and workshop. Their smart questions and shared insights really shaped my contribution.

Now, I said it was not always easy to find consensus, but that does not mean it is impossible. In fact, I am sure that the themes that were discussed all boil down to a couple of opportunities and challenges. A first step was made by HtC and HS from the MCCT leadership team in the closing session on Friday, which will prove to be a great springboard for the consensus paper that will help set the stage for future research in our field of arterial thrombosis.

Messy epidemiology: the tale of transient global amnesia and three control groups

Clinical epidemiology is sometimes messy. The methods and data you would like to use might not be available, or might simply be too damn expensive. Does that mean you should throw in the towel? I do not think so.

I am currently working in a more clinically oriented setting, as the only researcher trained as a clinical epidemiologist. I could tell stories about being misunderstood and feeling lonely as the only one who has seen the light, but that would just be lying. The fact is that my position is one of privilege and opportunity, as I work together with many different groups on a wide variety of research questions that have the potential to influence clinical practice directly and bring small but meaningful progress to the field.

Sometimes that work is messy: not quite the right methods, a difference in interpretation, a p-value in table 1… you get the idea. But sometimes something pretty comes out of that mess. That is what happened with this paper, which was just published online (e-pub) in the European Journal of Neurology. The general topic is the heart-brain interaction, and more specifically to what extent damage to the heart plays a role in transient global amnesia. The idea that there might be a link comes from some previous case series, as well as the clinical experience of some of my colleagues. The next step would of course be a formal case-control study, and if you want to estimate true rate ratios, a lot of effort has to go into collecting data from a population-based control group. We had neither the time nor the money to do so, and upon closer inspection, we also did not really need that clean control group to answer some of the questions that would move the field forward.

So instead, we chose three different control groups, perhaps better referred to as reference groups, all three with some neurological disease. Yes, there are selection processes at play for each of these groups, but we could argue that those selections apply to all groups alike. If the selection processes are similar across groups, strong differences in patient characteristics or biomarkers suggest that other biological mechanisms are at play. The trick is not to hide these limitations but, like a practiced judoka, to leverage these weaknesses and turn them into strengths. Be open about what you did and show the results, so that others can build on that experience.

So that is what we did. Compared to patients with migraine with aura, vestibular neuritis or transient ischemic attack, patients with transient global amnesia are more likely to exhibit signs of myocardial stress. This study was not designed to – nor will it ever be able to – establish the cause of this link, nor do we pretend that our odds ratios are in fact estimates of rate ratios or anything fancy like that. Still, even though many aspects of this study are not “by the book”, it did provide some new insights that will help further thinking about, and investigation of, this debilitating and impactful disease.
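For the statistically inclined, the core of such an analysis can be as plain as a set of logistic regressions, one per reference group. The sketch below is purely illustrative and runs on simulated data; troponin as the marker of myocardial stress and age as the only covariate are my own assumptions, not necessarily what the paper used.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative sketch only: simulated data, not the study's actual dataset or model.
# 'group' = diagnosis, 'troponin_high' = elevated marker of myocardial stress (0/1).
rng = np.random.default_rng(7)
groups = ["TGA", "migraine_aura", "vestibular_neuritis", "TIA"]
df = pd.DataFrame({
    "group": rng.choice(groups, size=800),
    "age": rng.normal(62, 10, 800),
})
base_risk = {"TGA": 0.30, "migraine_aura": 0.10, "vestibular_neuritis": 0.12, "TIA": 0.20}
df["troponin_high"] = rng.binomial(1, df["group"].map(base_risk))

# One logistic model per reference group: odds of myocardial stress in TGA vs that group,
# adjusted for age. These are odds ratios, not rate ratios.
for ref in groups[1:]:
    sub = df[df["group"].isin(["TGA", ref])].copy()
    sub["is_tga"] = (sub["group"] == "TGA").astype(int)
    model = smf.logit("troponin_high ~ is_tga + age", data=sub).fit(disp=0)
    print(f"TGA vs {ref}: OR for elevated troponin = {np.exp(model.params['is_tga']):.2f}")
```

Reporting one odds ratio per reference group is exactly the judoka move described above: if the estimate points the same way against all three neurological reference groups, the signal is less likely to be an artifact of any single selection process.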

The effort was led by EH, and the final paper can be found here on PubMed.