New Masterclass: “Papers and Books”

“Navigating numbers” is a Masterclass series initiated by a team of Charité researchers who think that our students should become more familiar with how numbers shape the field of medicine, i.e. both medical practice and medical research. And I get to organise the next one in line.

I am very excited to organise the next Masterclass together with J.O., a bright researcher with a focus on health economics. As the full title of the Masterclass is “Papers and Books – series 1 – intended effect of treatments”, some health economics knowledge is a must in this journal-club-style series of meetings.

But what exactly will we do? This Masterclass will focus on reading some papers as well as a book (very surprising), all with a focus on study design and how to do proper research into the “intended effect of treatment”. I borrowed this term from one of my former epidemiology teachers, Jan Vandenbroucke, as it denotes a part of the field of medical research with its own idiosyncrasies, without being limited to a particular study design.

The Masterclass runs for only 8 meetings, which is not nearly enough for the students to understand all the ins and outs of proper study design. But that is also not the goal: we want to show the participants how one should proceed when the ultimate question in medicine is asked: “should we treat or not?”

If you want to participate, please check out our flyer.


New paper: Contribution of Established Stroke Risk Factors to the Burden of Stroke in Young Adults


A relative risk alone is not enough to fully understand the implications of your findings. Sure, if you are an expert in a field, the context of that field will help you to assess the RR. But if you are not, the context of the numerator and denominator is often lost. There are several ways to provide that context. If you have a question that revolves around group discrimination (i.e. questions of diagnosis or prediction), the RR needs to be understood in relation to other predictors or diagnostic variables. That combination is best assessed through added discriminatory value, such as the improvement in AUC, or even fancier methods like reclassification tables and net benefit indices. But if you are interested in a single factor (e.g. in questions of causality or treatment), a number needed to treat (NNT) or the population attributable fraction (PAF) can be used.
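As a quick illustration of those last two measures, here is a minimal sketch with made-up numbers (not estimates from any of the papers discussed in this post) of how an RR translates into an NNT and into a PAF via Levin's formula:

```python
# Illustrative only: made-up numbers, not estimates from the papers in this post.

def nnt(baseline_risk, rr):
    """Number needed to treat = 1 / absolute risk reduction."""
    risk_treated = baseline_risk * rr
    return 1 / (baseline_risk - risk_treated)

def paf_levin(exposure_prevalence, rr):
    """Levin's formula: fraction of all cases attributable to the exposure."""
    excess = exposure_prevalence * (rr - 1)
    return excess / (1 + excess)

# A treatment that halves a 10% baseline risk:
print(nnt(0.10, 0.5))                   # -> 20.0 patients treated to prevent one event

# A risk factor with RR = 2 carried by 30% of the population:
print(round(paf_levin(0.30, 2.0), 2))   # -> 0.23, i.e. ~23% of cases attributable
```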

The PAF has been the subject of my publications before, for example in these papers where we use the PAF to provide context for the different ORs of markers of hypercoagulability in the RATIO study / in a systematic review. This paper is a more general text, as it is meant to give non-epidemiologists insight into what epidemiology can bring to the field of law. Here, the PAF is an interesting measure, as it is related to the etiological fraction – a number that can be very relevant in tort law. Some of my slides from a law symposium that I attended address these questions and that particular Dutch tort law case.

But the PAF is and remains an epidemiological measure: it tells us what fraction of the cases in the population can be attributed to the exposure of interest. You can also combine the PAFs of several exposures into a single number (given some assumptions, which basically boil down to the idea that the combined factors work on an exactly multiplicative scale, both statistically and biologically). A 2016 Lancet paper that made a huge impact and increased interest in the concept of the PAF was the INTERSTROKE paper. It showed that up to 90% of all stroke cases can be attributed to only 10 factors, all of them modifiable.
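To sketch what that combination looks like under the multiplicative/independence assumption (my own toy illustration with invented numbers, not the INTERSTROKE calculation itself): the joint PAF is not the sum of the individual PAFs, but one minus the product of their complements.

```python
# Toy illustration with invented PAFs, not the INTERSTROKE estimates.
import numpy as np

def combined_paf(pafs):
    """Joint PAF assuming the factors act independently on a multiplicative scale:
    1 - product of (1 - PAF_i)."""
    return 1 - np.prod(1 - np.asarray(pafs))

individual_pafs = [0.30, 0.25, 0.20, 0.15]        # four hypothetical risk factors
print(round(combined_paf(individual_pafs), 2))    # -> 0.64, not 0.90 (the naive sum)
```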

We wondered whether the same holds for young stroke patients. After all, the longstanding idea is that young stroke is a different disease from old stroke, one in which traditional CVD risk factors play a less prominent role. The idea is that more exotic causal mechanisms (e.g. hypercoagulability) play a more prominent role in this age group. Boy, were we wrong. In a dataset combining data from the SIFAP and GEDA studies, we found that the bulk of the cases can be attributed to modifiable risk factors (80% to just 4 risk factors). There are some elements of the paper (an age effect even within the young study population, subtype effects, definition effects) that I won't go into here. For those you need to read the paper – published in Stroke – here, or via my Mendeley account. The main work was done by AA and UG. Great job!

New paper in RPTH: Statins and the risk of DVT recurrence

I am very happy and honored to tell you that our paper “Statin use and risk of recurrent venous thrombosis: results from the MEGA follow-up study” is the very first paper in the new ISTH journal “Research and Practice in Thrombosis and Haemostasis”.

This new journal, for which I serve on the editorial board, is the sister journal of the JTH, but it has a couple of focus points that the JTH does not have. The biggest difference is the open access policy of RPTH. Beyond that, there are several types of articles that RPTH welcomes which are perhaps not so common in traditional journals (e.g. career development articles, educational pieces, nursing and patient perspectives, etc.).

Our paper is, however, a very standard paper in the sense that it is an original research publication on the role of statins in the risk of thrombosis recurrence. We answer the question whether statins are indeed linked with a lower risk of recurrence based on observational data, which opens the door to confounding by indication. To counteract this, we applied a propensity score and, most important of all, we only used so-called “incident users”. Incident vs prevalent users of statins is a theme that has been a topic on this blog before (for example here and here). The bottom line is this: people who are currently using statins are different from people who have just been prescribed statins – adherence issues, side effects, or low life expectancy could all be reasons for discontinuation. You need to take the difference between these types of statin users into account, or the protective effect of statins, or any other medication for that matter, might be biased. In the case of statins and DVT recurrence, it can be argued that the risk-lowering effect of statins would be overestimated. In itself that is not a problem in an observational study. But if the results of that observational study are subsequently used in a sample size calculation for a proper trial, that trial will be underpowered and we might have lost our (expensive and potentially only) shot at really knowing whether or not DVT patients benefit from statins.
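For readers who want to see the propensity score idea in code, here is a minimal sketch on simulated data (a logistic-regression propensity score with inverse-probability weighting). It only illustrates the principle, not the actual MEGA analysis, which additionally restricts to incident statin users and analyses recurrence over follow-up time.

```python
# Minimal propensity-score sketch on simulated data; not the MEGA analysis itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# One confounder (think: cardiovascular risk profile) drives both statin
# prescription and the risk of recurrent thrombosis.
confounder = rng.normal(size=n)
statin = rng.binomial(1, 1 / (1 + np.exp(-(confounder - 0.5))))
p_recurrence = 1 / (1 + np.exp(-(-2 + 0.8 * confounder - 0.4 * statin)))  # true protective effect
recurrence = rng.binomial(1, p_recurrence)

# 1) Propensity score: probability of statin use given the confounder.
X = confounder.reshape(-1, 1)
ps = LogisticRegression().fit(X, statin).predict_proba(X)[:, 1]

# 2) Inverse-probability weights balance the confounder across exposure groups.
weights = np.where(statin == 1, 1 / ps, 1 / (1 - ps))

# 3) Weighted risks and their ratio: statin users vs non-users.
risk_statin = np.average(recurrence[statin == 1], weights=weights[statin == 1])
risk_no_statin = np.average(recurrence[statin == 0], weights=weights[statin == 0])
print(f"crude RR:    {recurrence[statin == 1].mean() / recurrence[statin == 0].mean():.2f}")
print(f"weighted RR: {risk_statin / risk_no_statin:.2f}")  # closer to the true protective effect
```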

RPTH will be launched during ISTH 2017, which will be held in Berlin in a couple of weeks.

New paper: A Prothrombotic Score Based on Genetic Polymorphisms of the Hemostatic System Differs in Patients with IS, MI, or PAOD

This is my first paper in Frontiers in Cardiovascular Medicine, an open access platform focussed on cardiovascular medicine. It is not a regular case-control study, in which the prevalence of a risk factor is compared between an unselected patient group and a reference group from the general population. No, this paper looks at patients with cardiovascular disease who were referred for thrombophilia testing. When the different diseases (ischemic stroke vs myocardial infarction / PAOD) are then compared in terms of their thrombophilic propensity, it is clear that these groups differ. The first explanation that comes to mind is that thrombophilia indeed plays a different role in the etiology of these diseases, as we demonstrated in a RATIO publication as well as in this systematic review, but it might also be that there is simply a different referral pattern. In any case, it indicates that the role of thrombophilia – whether causal or merely suspected by the physician – differs between the different forms of arterial thrombosis.

Advancing prehospital care of stroke patients in Berlin: a new study to see the impact of STEMO on functional outcome

There are strange ambulances driving around in Berlin. These are the so-called STEMO cars, or Stroke Einsatz Mobile: basically mobile stroke units. They can perform a CT scan on board to rule out bleeds and subsequently start thrombolysis before arriving at the hospital. A previous study showed that this decreases time to treatment by ~25 minutes. The question now is whether patients are indeed better off in terms of functional outcome. For that we are currently running the B_PROUD study, of which we recently published the design here.

Virchow’s triad and lessons on the causes of ischemic stroke

I wrote a blog post for BMC, the publisher of Thrombosis Journal, to celebrate blood clot awareness month. I took my two favorite subjects, i.e. stroke and coagulation, added some history, and voilà! The BMC version can be found here.

When I look out of my window from my office at the Charité hospital in the middle of Berlin, I see the old pathology building in which Rudolf Virchow used to work. The building is just as monumental as the legacy of this famous pathologist who gave us what is now known as Virchow’s triad for thrombotic diseases.

In ‘Thrombose und Embolie’, published in 1856, he postulated that the consequences of thrombotic disease can be attributed to one of three categories: phenomena of interrupted blood flow, phenomena associated with irritation of the vessel wall and its vicinity, and phenomena of blood coagulation. This concept has since been modified to describe the causes of thrombosis and has been a guiding principle for many thrombosis researchers.

The traditional split in interest between arterial thrombosis researchers, who focus primarily on the vessel wall, and venous thrombosis researchers, who focus more on hypercoagulation, might not be justified. Take ischemic stroke for example. Lesions of the vascular wall are definitely a cause of stroke, but perhaps only in the subset of patients who experience a so-called large vessel ischemic stroke. It is also well established that a disturbance of blood flow in atrial fibrillation can cause cardioembolic stroke.

Less well studied, but perhaps no less relevant, is the role of hypercoagulation as a cause of ischemic stroke. It seems that an increased clotting propensity is associated with an increased risk of ischemic stroke, especially in the young, in whom the main cause of the stroke remains undetermined in about a third of cases. Perhaps hypercoagulability plays a much more prominent role than we traditionally assume?

But this ‘one case, one cause’ approach takes Virchow’s efforts to classify thrombosis a bit too strictly. Many diseases can be called multi-causal, which means that no single risk factor is sufficient in itself and only a combination of risk factors working in concert causes the disease. This is certainly true for stroke, and translates to the idea that each stroke subtype might be the result of a different combination of risk factors.

If we combine Virchow’s work with the idea of multi-causality and the heterogeneity of stroke subtypes, we can reimagine a new version of Virchow’s triad (Figure 1). In this version, patient groups or even individual patients are scored according to the relative contribution of each of the three classical categories.

From this figure, one can see that some subtypes of ischemic stroke might be more like some forms of venous thrombosis than other forms of stroke, a concept that could bring new ideas for research and perhaps has consequences for stroke treatment and care.

Figure 1. An example of a gradual classification of ischemic stroke and venous thrombosis according to the three elements of Virchow’s triad.

However, recent developments in the field of stroke treatment and care have focused on the acute treatment of ischemic stroke. Stroke ambulances that can discriminate between hemorrhagic and ischemic stroke – information needed to start thrombolysis in the ambulance – drive the streets of Cleveland, Gothenburg, Edmonton and Berlin. Other major developments are in the field of mechanical thrombectomy, with wonderful results from many studies such as the Dutch MR CLEAN study. Even though these two new approaches save lives and prevent disability in many, they are ‘too late’ in the sense that they are reactive and do not prevent clot formation.

Therefore, in this blood clot awareness month, I hope that stroke and thrombosis researchers join forces and further develop our understanding of the causes of ischemic stroke so that we can Stop The Clot!

Increasing efficiency of preclinical research by group sequential designs: a new paper in PLOS Biology

We have another paper published in PLOS Biology. The theme is in the same area as the first paper I published in that journal, which had the wonderful title “Where have all the rodents gone?”, but this time we did not focus on threats to internal validity; instead, we explored whether sequential study designs can be useful in preclinical research.

Sequential designs, what are those? They are a family of study designs (perhaps you could call it the “adaptive study size” family) in which one takes a quick peek at the results before the total number of subjects is enrolled. But this peek comes at a cost: it must be taken into account in the statistical analyses, as it has direct consequences for the interpretation of the final result of the experiment. The bottom line is this: with the information you get halfway through, you can decide to continue the experiment or to stop for efficacy or futility. If this sounds familiar to those who know interim analyses in clinical trials, that is because it is the same concept. However, we explored its impact when applied to animal experiments.
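As a toy illustration of what “the peek comes at a cost” means (my own simulation sketch, not the code from our paper): with one interim look at half the sample under the null hypothesis, keeping the naive 1.96 cutoff at both looks inflates the false positive rate, while a Pocock-style adjusted cutoff of roughly 2.18 for two looks keeps it near the nominal 5%.

```python
# Toy simulation of a two-look sequential design under the null hypothesis.
# My own illustration, not the simulation code from the PLOS Biology paper.
import numpy as np

rng = np.random.default_rng(1)
n_per_look, n_sims = 50, 20_000

def false_positive_rate(critical_z):
    rejections = 0
    for _ in range(n_sims):
        data = rng.normal(0.0, 1.0, 2 * n_per_look)   # null: the true mean is 0
        z_interim = data[:n_per_look].mean() * np.sqrt(n_per_look)
        z_final = data.mean() * np.sqrt(2 * n_per_look)
        # Reject if either the interim look or the final analysis crosses the boundary.
        if abs(z_interim) > critical_z or abs(z_final) > critical_z:
            rejections += 1
    return rejections / n_sims

print(false_positive_rate(1.96))   # ~0.08: naive cutoff at both looks, inflated type I error
print(false_positive_rate(2.178))  # ~0.05: Pocock-adjusted cutoff for two looks
```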

Figure from our publication in PLOS Biology describing the sequential study designs in our computer simulations

“Old wine in new bottles,” one might say, and some of the reviewers of this paper rightfully pointed out that it is not novel to show that sequential designs are more efficient than non-sequential designs. But that is not where the novelty lies. Up until now, we have not seen this approach applied to preclinical research in a formal way. However, our experience is that a lot of preclinical studies are done with some kind of informal sequential aspect. No p<0.05? Just add another mouse/cell culture/synapse/MRI scan to the mix! The problem here is that there is no formal framework in which this is done, leading to cherry picking, p-hacking and other nasty stuff that you cannot grasp from the methods and results sections.
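To make the “just add another mouse” problem concrete, here is a small toy simulation of that informal practice (my own sketch, not from the paper): under the null hypothesis, testing after every added pair of observations and stopping as soon as p < 0.05 pushes the false positive rate far above the nominal 5%.

```python
# Toy simulation of informal optional stopping under the null hypothesis:
# keep testing after every added observation and stop as soon as p < 0.05.
# My own illustration, not taken from the PLOS Biology paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sims, n_start, n_max = 5000, 5, 50

false_positives = 0
for _ in range(n_sims):
    group_a = list(rng.normal(0, 1, n_start))   # no true difference between groups
    group_b = list(rng.normal(0, 1, n_start))
    while len(group_a) <= n_max:
        if stats.ttest_ind(group_a, group_b).pvalue < 0.05:
            false_positives += 1                # "significant": stop and report
            break
        group_a.append(rng.normal(0, 1))        # not significant? add another mouse
        group_b.append(rng.normal(0, 1))

print(false_positives / n_sims)  # well above the nominal 0.05 in this setup
```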

Should all preclinical studies from now on have sequential designs? My guess would be NO, and there are two major reasons why. First of all, sequential data analyses have their idiosyncrasies and might not be for everyone. Second, the logistics of sequential study designs are complex, especially if you are afraid of introducing batch effects. We only wanted to show preclinical researchers that the sequential approach has its benefits: the same information at, on average, lower cost. If you translate “costs” into animals, the obvious conclusion is: apply sequential designs where you can, and the reduction in animals can be “re-invested” in more animals per study to obtain higher power in preclinical research. But I hope that a side effect of this paper (or perhaps its main effect!) will be that readers think about their current practices and whether these involve those ‘informal sequential designs’ that really hurt science.

The paper, this time with a less exotic title, “Increasing efficiency of preclinical research by group sequential designs”, can be found on the website of PLOS Biology.