Three new papers – part III

As explained here and here, I am temporarily combining the announcements of published papers into single blog posts to save some time. This is part III, where I focus on ordinal outcomes. Of all the recent papers, these are the most exciting to me, as they really bring something new to the field of thrombosis and COVID-19 research.

Measuring functional limitations after venous thromboembolism: Optimization of the Post-VTE Functional Status (PVFS) Scale. I have written about our call to action, and this is the follow-up paper, with research primarily done in the LUMC. With input from patients as well as 50+ experts through a Delphi process, we were able to optimize our initial scale.

Confounding adjustment performance of ordinal analysis methods in stroke studies. In this simulation study, we show that ordinal data from observational studies can also be analyzed with a non-parametric approach. The benefit: it allows us to analyze the data without the need for the proportional odds assumption and still get an easy-to-understand point estimate of the effect.
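To give an idea of what such a non-parametric effect measure can look like, here is a generic sketch of the Mann-Whitney-type "probability of superiority" (the chance that a random treated patient has a better ordinal score than a random control). This is an illustration of the general idea, not necessarily the exact estimator evaluated in the paper, and the scores below are made up.

```python
# Probability of superiority for ordinal outcomes: no proportional odds
# assumption needed, and the point estimate is easy to interpret.
def prob_superiority(treated, control, lower_is_better=True):
    """P(random treated patient scores better than random control), ties count half."""
    wins = ties = 0
    for t in treated:
        for c in control:
            if t == c:
                ties += 1
            elif (t < c) == lower_is_better:
                wins += 1
    return (wins + 0.5 * ties) / (len(treated) * len(control))

# Hypothetical modified Rankin Scale scores (0 = no symptoms ... 6 = dead)
treated = [0, 1, 1, 2, 3, 4]
control = [1, 2, 3, 3, 4, 6]
print(prob_superiority(treated, control))  # > 0.5 favours the treated group
```

A value of 0.5 means no difference; the further from 0.5, the stronger the effect, in either direction.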

The Post-COVID-19 Functional Status (PCFS) Scale: a tool to measure functional status over time after COVID-19. In this letter to the European Respiratory Journal, colleagues from Leiden, Maastricht, Zurich, Mainz, Hasselt, Winterthur, and of course Berlin and I propose to use a scale that is basically the same as the PVFS scale to monitor and study the long-term consequences of COVID-19.

On the value of data – routinely vs purposefully

I listen to a bunch of podcasts, and "The Pitch" is one of them. In that podcast, entrepreneurs of start-up companies pitch their ideas to investors. Not only is it amusing to hear some of these crazy business ideas, but the podcast also helps me understand how professional life works outside of science. One thing I learned is that it is OK, if not expected, to oversell by about a factor of 142.

Another thing that I learned is the apparent value of data. The value of data seems to be undisputed in these pitches. In fact, the product or service the company is selling is often only a byproduct: collecting data about their users, which can subsequently be leveraged for targeted advertising, seems to be the big play in many start-up companies.

I think this type of "value of data" is what it is: whatever the investors want to pay for that type of data is what it is worth. But it got me thinking about the value of the data that we actually collect in medical research. Let us first take a look at routinely collected data, which can be very cheap to collect. But what is the value of that data? The problem is that routinely collected data is often incomplete, rife with error, and can lead to enormous biases, both information bias and selection bias. Still, some research questions can be answered with routinely collected data, as long as you make a real effort to think about your design and analyses. So there is value in routinely collected data, as it can provide a first glance into the matter at hand.

And what is the case for purposefully collected data? The idea is that this data is much more reliable: trained staff collect data in a standardised way, resulting in datasets without many errors or holes. The downside is the "purpose", which often limits the scope and thereby the amount of data collected per included individual. This is most obvious in randomised clinical trials, in which millions of euros are often spent to answer one single question. Trials often do not have the precision to provide answers to other questions. So it seems that the data can lose its value after answering that single question.

Luckily, many efforts have been made to let purposefully collected data keep some of its value even after it has served its purpose. Standardisation efforts between trials now make it possible to pool the data and thus obtain higher precision. A good example from the field of stroke research is the VISTA collaboration, i.e. the Virtual International Stroke Trials Archive. Here, many trials, and later some observational studies, are combined to answer research questions with a precision that would otherwise never be possible. This way, we can answer questions with high-quality, purposefully collected data in numbers otherwise unthinkable.

This brings me to a recent paper we published with data from the VISTA collaboration: "Early in-hospital exposure to statins and outcome after intracerebral haemorrhage". The underlying question, whether and when statins should be initiated or continued after ICH, is clinically relevant but also limited in scope and impact, so is it justified to start a trial? We took the easier and cheaper solution and analysed the data from VISTA. We conclude that

… early in-hospital exposure to statins after acute ICH was associated with better functional outcome compared with no statin exposure early after the event. Our data suggest that this association is particularly driven by continuation of pre-existing statin use within the first two days after the event. Thus, our findings provide clinical evidence to support current expert recommendations that prevalent statin use should be continued during the early in-hospital phase.

link

And this shows the limitations of even well-collected data from RCTs: as long as the exposure of interest is preferentially provided to a certain subgroup (i.e. confounding by indication), you can never really be certain about the treatment effects. To solve this, we would need to break the bond between exposure and any other clinical characteristic, i.e. randomize. That remains the gold standard for intended effects of treatments. Still, our paper provided a piece of the puzzle and gave more insight, from data that retained some of its value due to standardisation and pooling. But there is no dollar value that we can put on medical research data, routinely or purposefully collected alike, as it all depends on the question you are trying to answer.

Our paper, with JD in the lead, was published last year in the European Stroke Journal, and can be found here as well as on my Publons profile and Mendeley profile.

Intrinsic Coagulation Pathway, History of Headache, and Risk of Ischemic Stroke: a story about interacting risk factors

Yup, another paper from the long-standing collaboration with Leiden. This time, it was PhD candidate HvO who came up with the idea to take a look at the risk of stroke in relation to two risk factors that independently increase that risk. So what is the new part of this paper? It is about the interaction between the two.

Migraine is a known risk factor for ischemic stroke in young women. Previous work also indicated that increased levels of the intrinsic coagulation proteins are associated with an increase in ischemic stroke risk. Both roughly double the risk. So what does the combination do?

Let us take a look at the results of the analyses in the RATIO study. High antigen levels of coagulation factor XI (FXI) are associated with a relative risk of 1.7. A history of severe headache doubles the risk of ischemic stroke. So what can we expect if both risks just add up? Well, we need to take into account the baseline risk that everybody has, which is an RR of 1. Then we add the extra risk, in terms of RR, of the two risk factors. For FXI this is (1.7-1=) 0.7. For headache it is (2.0-1=) 1.0. So we would expect an RR of (1+0.7+1.0=) 2.7. However, we found that the women who had both risk factors had a 5-fold increase in risk, more than what can be expected.
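The arithmetic above can be written out in a few lines. The RERI (relative excess risk due to interaction) is a standard way to quantify how far the observed joint RR exceeds additivity; the numbers are the rounded estimates quoted in this post.

```python
# Additive-interaction arithmetic with the rounded RRs from the post.
def expected_rr_additive(rr_a, rr_b):
    """Expected joint RR if the two excess risks simply add up."""
    return 1 + (rr_a - 1) + (rr_b - 1)

def reri(rr_ab, rr_a, rr_b):
    """Relative excess risk due to interaction: how much the joint RR exceeds additivity."""
    return rr_ab - rr_a - rr_b + 1

rr_fxi = 1.7       # high FXI antigen levels
rr_headache = 2.0  # history of severe headache
rr_both = 5.0      # observed joint relative risk

print(expected_rr_additive(rr_fxi, rr_headache))  # expected under additivity: ~2.7
print(reri(rr_both, rr_fxi, rr_headache))         # positive: super-additive interaction
```

A RERI of 0 would mean the risks simply add up; here it is well above 0, which is the additive interaction discussed below.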

For those who are keeping track, I am of course talking about additive interaction, sometimes referred to as biological interaction. This concept is quite different from statistical interaction, which, for me, is a useless thing to look at when your underlying research question is of a causal nature.

What does this mean? You could interpret this as meaning that some women only develop the disease because they are exposed to both risk factors. In some way, that combination becomes a third ‘risk entity’ that increases the risk in the population. How that works on a biochemical level cannot be answered with this epidemiological study, but some hints from the literature do exist, as we discuss in our paper.

Of course, some caveats have to be taken into account. In addition to the standard limitations of case-control studies, two things stand out. First, because we study the combination of two risk factors, the precision of our study is relatively low. But then again, what other study is going to answer this question? The absolute risk of ischemic stroke in the general population is too low to perform prospective studies, even when enriched with loads of migraineurs. Second, the questionnaires used do not allow us to conclude that the women who report severe headache actually have migraine. Our assumption is that many, if not most, do. Even though mixing ‘normal’ headaches and migraines in one group would only lead to an underestimation of the true effect of migraine on stroke risk, we still have to be careful and therefore stick to the term ‘headache’.

HvO took the lead in this project, which included two short visits to Berlin supported by our Virchow scholarship. The paper has been published in Stroke and can be read ahead of print on their website.

medRxiv: the pre-print server for medicine

Pre-print servers are a place to share your academic work before actual peer review and subsequent publication. They are not completely new to academia, as many disciplines have adopted pre-print servers to quickly share ideas and keep the academic discussion going. Many have praised the informal peer review that you get when you post on pre-print servers, but I primarily like the speed.

But medicine is not one of those disciplines. Up until recently, the medical community had to use bioRxiv, a pre-print server for biology. Very unsatisfactory, as the fields are just too far apart, and the idiosyncrasies of the medical sciences bring some extra requirements (e.g. ethical approval, trial registration). So here comes medRxiv, from the makers of bioRxiv with support from the BMJ. Let’s take a moment to let the people behind medRxiv explain the concept themselves.

source: https://www.medrxiv.org/content/about-medrxiv

I love it. I am not sure whether it will be adopted by the community at the same pace as in some other disciplines, but doing nothing will never be part of the way forward. Critical participation is the only way.

So, that’s what I did. I wanted to be part of this new thing and convinced my co-authors to use the pre-print concept. I focussed my efforts on the paper in which we describe the BeLOVe study. This is a big cohort we are currently setting up, and it is therefore, in a way, well suited for pre-print servers: they allow us to describe what we want, without restrictions on word count, appendices, tables or graphs, to the level of detail of our choice. The speed is also welcome, as we want to inform the world of our efforts while we are still in the pilot phase and still able to tweak the design here or there. And that is actually what happened: after being online for a couple of days, our pre-print had already sparked some ideas in others.

Now we have to see how much effort it took us, and how much benefit we drew from this extra effort. It would be great if all journals would permit pre-prints (not all do…) and if submitting to a journal would just be a "one click" kind of effort after jumping through the hoops for medRxiv.

This is not my first pre-print. For example, the paper that I co-authored on the timely publication of trials from Germany was posted on bioRxiv. But being the guy who actually uploads the manuscript is a whole different feeling.

Messy epidemiology: the tale of transient global amnesia and three control groups

Clinical epidemiology is sometimes messy. The methods and data that you might want to use might not be available or just too damn expensive. Does that mean that you should throw in the towel? I do not think so.

I am currently working in a more clinically oriented setting, as the only researcher trained as a clinical epidemiologist. I could tell you about being misunderstood and feeling lonely as the only one who has seen the light, but that would just be lying. The fact is that my position is one of privilege and opportunity, as I work together with many different groups on a wide variety of research questions that have the potential to influence clinical reality directly and bring small but meaningful progress to the field.

Sometimes that work is messy: not the right methods, a difference in interpretation, a p-value in table 1… you get the idea. But sometimes something pretty comes out of that mess. That is what happened with this paper, which just got published online (e-pub) in the European Journal of Neurology. The general topic is the heart-brain interaction, and more specifically to what extent damage to the heart actually plays a role in transient global amnesia. The idea that there might be a link stems from some previous case series, as well as the clinical experience of some of my colleagues. The next step would of course be a formal case-control study, and if you want to estimate true rate ratios, a lot of effort has to go into collecting data from a population-based control group. We had neither the time nor the money to do so, and upon closer inspection, we also did not really need that clean control group to answer some of the questions that would bring progress to the field.

So instead, we chose three different control groups, perhaps better referred to as reference groups, all three with some neurological disease. Yes, there are selections at play for each of these groups, but we could argue that those selections might be similar across all groups. If these selection processes are similar for all groups, strong differences in patient characteristics or biomarkers suggest that other biological systems are at play. The trick is not to hide these limitations but, like a practiced judoka, to leverage these weaknesses and turn them into strengths. Be open about what you did and show the results, so that others can build on that experience.

So that is what we did. Compared with patients with migraine with aura, vestibular neuritis and transient ischemic attack, patients with transient global amnesia are more likely to exhibit signs of myocardial stress. This study was not designed to understand the cause of this link, nor will it ever be able to, and we do not pretend that our odds ratios are in fact estimates of rate ratios or something fancy like that. Still, even though many aspects of this study are not "by the book", it did provide some new insights that help further the thinking about, and investigation of, this debilitating and impactful disease.

The effort was led by EH, and the final paper can be found here on PubMed.

Impact of your results: Beyond the relative risk

I wrote about this in an earlier post: JLR and I published a paper in which we explain that a single relative risk, irrespective of its form, is just not enough. Some crucial elements go missing in this dimensionless ratio. The RR lets us forget about the size of the denominator, the clinical context, and the crude binary nature of the outcome. So we have provided some methods and ways of thinking to go beyond the RR in a tutorial published in RPTH (now in early view). The content and message are nothing new for those trained in clinical research (one would hope). Even those without formal training will have heard most of the concepts discussed in a talk or poster. But with all these concepts in one place, with an explanation of why they provide a tad more insight than the RR alone, we hope to trigger young (and older) researchers to think about whether one of these measures would be useful. Not for them, but for the readers of their papers. The paper is open access (CC BY-NC-ND 4.0) and can be downloaded from the website of RPTH, or from my Mendeley profile.
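As a toy illustration of the point (my own made-up numbers, not from the tutorial): the same RR of 2 translates into wildly different absolute effects depending on the baseline risk, which is exactly the information the RR alone hides.

```python
# Same relative risk, very different absolute impact.
def absolute_measures(baseline_risk, rr):
    """Risk difference and number needed to treat/harm for a given RR and baseline risk."""
    exposed_risk = baseline_risk * rr
    rd = exposed_risk - baseline_risk          # absolute risk difference
    nn = 1 / rd if rd != 0 else float("inf")   # number needed to treat/harm
    return rd, nn

for baseline in (0.001, 0.10):  # a rare vs a common outcome, both with RR = 2
    rd, nn = absolute_measures(baseline, rr=2.0)
    print(f"baseline risk {baseline:.3f}: risk difference {rd:.3f}, NNT/NNH ~ {nn:.0f}")
```

With a rare outcome the doubling of risk affects one person in a thousand; with a common outcome, one in ten. The RR is 2 in both cases.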

Advancing prehospital care of stroke patients in Berlin: a new study to see the impact of STEMO on functional outcome

There are strange ambulances driving around in Berlin. They are the so-called STEMO cars, or Stroke Einsatz Mobile, basically driving stroke units. They have the possibility to make a CT scan to rule out bleeds and subsequently start thrombolysis before getting to the hospital. A previous study showed that this decreases time to treatment by ~25 minutes. The question now is whether the patients are indeed better off in terms of functional outcome. For that, we are currently running the B_PROUD study, of which we recently published the design here.

Virchow’s triad and lessons on the causes of ischemic stroke

I wrote a blog post for BMC, the publisher of Thrombosis Journal, to celebrate blood clot awareness month. I took my two favorite subjects, i.e. stroke and coagulation, added some history, and voilà! The BMC version can be found here.

When I look out of my window from my office at the Charité hospital in the middle of Berlin, I see the old pathology building in which Rudolph Virchow used to work. The building is just as monumental as the legacy of this famous pathologist who gave us what is now known as Virchow’s triad for thrombotic diseases.

In ‘Thrombose und Embolie’, published in 1865, he postulated that the consequences of thrombotic disease can be attributed to one of three categories: phenomena of interrupted blood flow, phenomena associated with irritation of the vessel wall and its vicinity, and phenomena of blood coagulation. This concept has since been modified to describe the causes of thrombosis and has been a guiding principle for many thrombosis researchers.

The traditional split in interest between arterial thrombosis researchers, who focus primarily on the vessel wall, and venous thrombosis researchers, who focus more on hypercoagulation, might not be justified. Take ischemic stroke, for example. Lesions of the vascular wall are definitely a cause of stroke, but perhaps only in the subset of patients who experience a so-called large vessel ischemic stroke. It is also well established that a disturbance of blood flow in atrial fibrillation can cause cardioembolic stroke.

Less well studied, but perhaps no less relevant, is the role of hypercoagulation as a cause of ischemic stroke. It seems that an increased clotting propensity is associated with an increased risk of ischemic stroke, especially in the young, in whom a third of strokes remain of undetermined cause. Perhaps hypercoagulability plays a much more prominent role than we traditionally assume?

But this ‘one case, one cause’ approach takes Virchow’s classification of thrombosis a bit too strictly. Many diseases can be called multi-causal, meaning that no single risk factor in itself is sufficient and only a combination of risk factors working in concert causes the disease. This is certainly true for stroke, and it translates to the idea that each stroke subtype might be the result of a different combination of risk factors.

If we combine Virchow’s work with the idea of multi-causality, and the heterogeneity of stroke subtypes, we can reimagine a new version of Virchow’s Triad (figure 1). In this version, the patient groups or even individuals are scored according to the relative contribution of the three classical categories.

From this figure, one can see that some subtypes of ischemic stroke might be more like some forms of venous thrombosis than other forms of stroke, a concept that could bring new ideas for research and perhaps has consequences for stroke treatment and care.

Figure 1. An example of a gradual classification of ischemic stroke and venous thrombosis according to the three elements of Virchow’s triad.

However, recent developments in the field of stroke treatment and care have focused on the acute treatment of ischemic stroke. Stroke ambulances that can discriminate between hemorrhagic and ischemic stroke (information needed to start thrombolysis in the ambulance) drive the streets of Cleveland, Gothenburg, Edmonton and Berlin. Other major developments are in the field of mechanical thrombectomy, with wonderful results from many studies such as the Dutch MR CLEAN study. Even though these two new approaches save lives and prevent disability in many, they are ‘too late’ in the sense that they are reactive and do not prevent clot formation.

Therefore, in this blood clot awareness month, I hope that stroke and thrombosis researchers join forces and further develop our understanding of the causes of ischemic stroke so that we can Stop The Clot!

Increasing efficiency of preclinical research by group sequential designs: a new paper in PLOS biology

We have another paper published in PLOS Biology. The theme is in the same area as the first paper I published in that journal, which had the wonderful title "Where have all the rodents gone?", but this time we did not focus on threats to internal validity; instead, we explored whether sequential study designs can be useful in preclinical research.

Sequential designs, what are those? It is a family of study designs (perhaps you could call it the "adaptive study size design" family) in which you take a quick peek at the results before the total number of subjects is enrolled. This peek comes at a cost: it has to be taken into account in the statistical analyses, as it has direct consequences for the interpretation of the final result of the experiment. But the bottom line is this: with the information you get halfway through, you can decide to continue the experiment or to stop for efficacy or futility reasons. If this sounds like the interim analyses of clinical trials, that is because it is the same concept. However, we explored its impact when applied to animal experiments.
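A minimal simulation sketch of that idea: a two-stage design with one interim look at half the sample. The Pocock-style boundary (z ≈ 2.178 at both looks) and the effect size are illustrative assumptions of mine, not the parameters from our paper.

```python
# Two-stage group sequential design: peek at half the sample, stop early
# for efficacy if the boundary is crossed, otherwise enroll the rest.
import math
import random

def z_stat(a, b):
    """Two-sample z statistic, assuming known unit variance in both arms."""
    n = len(a)
    return (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)

def run_experiment(effect, n_per_arm=40, z_bound=2.178, rng=random):
    """Simulate one experiment; return (null_rejected, total_subjects_used)."""
    half = n_per_arm // 2
    treat = [rng.gauss(effect, 1) for _ in range(n_per_arm)]
    ctrl = [rng.gauss(0.0, 1) for _ in range(n_per_arm)]
    # Interim look after half the subjects per arm
    if abs(z_stat(treat[:half], ctrl[:half])) >= z_bound:
        return True, 2 * half          # stop early for efficacy
    # Otherwise continue to the full sample and test again
    return abs(z_stat(treat, ctrl)) >= z_bound, 2 * n_per_arm

random.seed(1)
sims = [run_experiment(effect=0.6) for _ in range(2000)]
power = sum(rejected for rejected, _ in sims) / len(sims)
avg_n = sum(n for _, n in sims) / len(sims)
print(f"power ~ {power:.2f}, average subjects used ~ {avg_n:.0f} (max 80)")
```

When a real effect exists, many experiments stop at the interim look, so the average number of animals used sits well below the fixed-design maximum, which is exactly the efficiency gain the paper is about.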

Figure from our publication in PLOS Biology describing sequential study designs in our computer simulations

"Old wine in new bottles," one might say, and some of the reviewers of this paper rightfully pointed out that it was not novel in showing that sequential designs are more efficient than non-sequential designs. But that is not where the novelty lies. Up until now, we have not seen this approach applied to preclinical research in a formal way. However, our experience is that a lot of preclinical studies are done with some kind of informal sequential aspect. No p<0.05? Just add another mouse/cell culture/synapse/MRI scan to the mix! The problem is that there is no formal framework in which this is done, leading to cherry picking, p-hacking and other nasty stuff that you cannot grasp from the methods and results sections.

Should all preclinical studies from now on have sequential designs? My guess would be NO, and there are two major reasons why. First of all, sequential data analyses have their idiosyncrasies and might not be for everyone. Second, the logistics of sequential study designs are complex, especially if you are afraid of introducing batch effects. We only wanted to show preclinical researchers that the sequential approach has its benefits: the same information at, on average, lower costs. If you translate "costs" into animals, the obvious conclusion is: apply sequential designs where you can, and the saved animals can be "re-invested" in more animals per study to obtain higher power in preclinical research. But I hope that the side effect of this paper (or perhaps its main effect!) will be that readers think about their current practices and whether these involve the ‘informal sequential designs’ that really hurt science.

The paper, this time with a less exotic title, "Increasing efficiency of preclinical research by group sequential designs", can be found on the website of PLOS Biology.

Associate editor at BMC Thrombosis Journal

source: https://goo.gl/CS2XtJ

In the week just before Christmas, HtC approached me to ask whether I would like to join the editorial board of BMC Thrombosis Journal as an Associate Editor. The aims and scope of the journal, taken from their website:

“Thrombosis Journal is an open-access journal that publishes original articles on aspects of clinical and basic research, new methodology, case reports and reviews in the areas of thrombosis. Topics of particular interest include the diagnosis of arterial and venous thrombosis, new antithrombotic treatments, new developments in the understanding, diagnosis and treatments of atherosclerotic vessel disease, relations between haemostasis and vascular disease, hypertension, diabetes, immunology and obesity.”

I talked to HtC, someone at BMC, as well as some of my friends and colleagues about whether or not this would be a wise thing to do. Here is an overview of the points that came up:

Experience: Thrombosis is the field where I grew up as a researcher. I know the basics and have extensive knowledge of specific parts of the field. But with my move to Germany, I started to focus on stroke, so one might wonder why I do not use my time to work with a stroke-related journal. My answer is that the field of thrombosis is a stroke-related field, and that my position in both worlds is a good opportunity to learn from both. Sure, there will be topics that I have less knowledge of, but that is where an associate editor should rely on expert reviewers and fellow editors.

This new position will also provide me with a bunch of new experiences in itself: for example, sitting on the other side of the table in a peer review process might help me to better understand a rejection of one of my own papers. Bottom line is that I think that I both bring and gain relevant experiences in this new position.

Time: These things cost time. A lot. Especially when you need to learn the skills needed for the job, like me. But learning these skills as an associate editor is an integral part of the science apparatus, and I am sure that the time I invest will help me develop as a scientist. Also, the time that I need to spend is not necessarily the type of time that I currently lack, i.e. writing time. For writing and doing research myself I need decent blocks of time to dive in and focus, 4+ hours if possible. The time I need for my associate editor tasks is more fragmented: finding peer reviewers, reading their comments and making a final judgement are relatively fragmented activities, and I am sure that as soon as I get the hang of it, I can squeeze them into shorter slots of time. Perhaps a nice way to fill those otherwise lost 30 minutes between two meetings?

Open science: Thrombosis Journal is part of the BioMed Central family and as such is a 100% OA journal. It is not that I am an open science fanboy or sceptic, but I am very curious how OA is developing, and working with an OA journal will help me understand what OA can and cannot deliver.

Going over these points, I am convinced that I can contribute to the journal with my experience in the fields of coagulation, stroke and research methodology. Also, I think that the time it will take to learn the necessary skills is an investment that in the end will help me grow as a researcher. So, I gave HtC a positive answer. Expect emails requesting peer review reports soon!

The paradox of the BMI paradox


I had the honor to be invited to the PHYSBE research group in Gothenburg, Sweden. I got to talk about the paradox of the BMI paradox. In the announcement abstract I wrote:

“The paradox of the BMI paradox”
Many fields have their own so-called “paradox”, where a risk factor in certain
instances suddenly seems to be protective. A good example is the BMI paradox,
where high BMI in some studies seems to be protective of mortality. I will
argue that these paradoxes can be explained by a form of selection bias. But I
will also discuss that these paradoxes have provided researchers with much
more than just an erroneous conclusion on the causal link between BMI and
mortality.

I first address the problem of BMI as an exposure. Easy stuff. But then we come to index event bias, or collider stratification bias, and how selections do matter in recurrence research paradoxes (like PFO and stroke) or health status research (like BMI) and can introduce confounding into the equation.

I see that this confounding might not be enough to explain all that is observed in observational research, so I continued looking for other reasons why there are such strong feelings about these paradoxes. Do they exist, or don't they? I found that the two sides tend to "talk in two worlds". One side talks about causal research and asks what we can learn from the biological systems that might play a role, whereas the other thinks from a clinical point of view and starts to talk about RCTs and the need for weight control programs in patients. But there are huge differences in study design, research question and interpretation of results between the studies that they cite and interpret. Perhaps part of the paradox can be explained by this misunderstanding.

But the cool thing about the paradox is that through complicated topics, new hypotheses, interesting findings and strong feelings about the existence of paradoxes, I think we can all agree: the field of obesity research has won in the end. And by winning I mean that the methods are now better described, better discussed and better applied. New hypotheses are being generated and confirmed or refuted. All in all, the field makes progress not despite, but because of the paradox. A paradox that does not even exist. How is that for a paradox?

All in all an interesting day, and I think I made some friends in Gothenburg. Perhaps we can do some cool science together!

Slides can be found here.

predicting DVT with D-dimer in stroke patients: a rebuttal to our letter

Some weeks ago, I reported on a letter to the editor of Thrombosis Research on the question of whether D-dimer indeed improves DVT risk prediction in stroke patients.

I was going to write a whole story on how one should not use a personal blog to continue the scientific debate. As you can guess, I ended up writing a full paragraph where I did this anyway. So I deleted that paragraph, and I am going to do something that requires some action from you: I will just leave you with the links to the letters and let you decide whether the issues we bring up, as well as the corresponding rebuttal by the authors, help to interpret the results of the original publication.

Cardiovascular events after ischemic stroke in young adults (results from the HYSR study)


The collaboration with the group in Finland has turned into a nice new publication, with the title

“Cardiovascular events after ischemic stroke in young adults”

This work, with data from Finland, was primarily done by KA and JP. KA came to Berlin to learn some epidemiology with the aid of the Virchow scholarship, so that is where we came in. It was great to have KA as part of the team, and even better to have been working on their great data.

Now onto the results of the paper: as in the RATIO follow-up study, the risk of recurrence after young stroke remained present long after the initial stroke in this analysis of the Helsinki Young Stroke Registry. But unlike the RATIO paper, this dataset had more information on the patients, for example the TOAST criteria. This means that we were able to identify that the group with large artery atherosclerosis (LAA) had a very high risk of recurrence.

The paper can be found on the website of Neurology, or via my mendeley profile.

Statins and risk of poststroke hemorrhagic complications

Easter brought another publication, this time with the title

“Statins and risk of poststroke hemorrhagic complications”

I am very pleased with this paper as it demonstrates two important aspects of my job. First, I was able to share my thoughts on comparing current users vs never users. As has been argued before (e.g. by the group of Hernán) and as articulated in a letter to the editor I wrote with colleagues from Leiden, such a comparison brings forth an inherent survival bias: you are comparing never users (i.e. those without an indication) with current users (those who have the indication, can handle the side effects of the medication, and stay alive long enough to be enrolled into the study as users). This matters, of course, only if you want to test the effect of statins, not if you are interested in the mere predictive value of being a statin user.
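To make that survival bias concrete, here is a minimal simulation sketch. All numbers and the outcome model are invented for illustration (they are not from the paper): statins have no true effect at all, yet frail patients tend to discontinue before baseline, so "current users" end up looking healthier than "never users".

```python
import random
import statistics

random.seed(7)

# Toy cohort: the statin has NO true effect on the outcome, but frail
# patients who start a statin tend to stop before enrollment, so the
# "current user" group is depleted of frail patients.
def person():
    frail = random.random() < 0.3          # frailty raises outcome risk
    user = random.random() < 0.5           # statin was ever prescribed
    # frail users often discontinue before baseline:
    current_user = user and not (frail and random.random() < 0.7)
    p_outcome = 0.30 if frail else 0.10    # the statin itself does nothing
    outcome = random.random() < p_outcome
    return current_user, user, outcome

cohort = [person() for _ in range(50_000)]
risk_current = statistics.mean(o for cu, u, o in cohort if cu)
risk_never = statistics.mean(o for cu, u, o in cohort if not u)
print(f"current users: {risk_current:.3f}, never users: {risk_never:.3f}")
```

Despite a null effect, the current users show a lower risk, purely through selection.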

The second thing about this paper is the way we were able to use data from the VISTA collaboration, a large pool of data from previous stroke studies (RCTs and observational). I believe such ways of sharing data move science forward. Should all data be shared online for everybody to use? I am not sure about that, but the easy-access model of the VISTA collaboration (which includes data maintenance, harmonization etc.) is certainly appealing.

The paper can be found here, and on my mendeley profile.

 

– update 1.5.2016: this paper was the topic of a comment in the @greenjournal. See also their website.

update 19.5.2016: this project also led to first author JS being awarded the young researcher award of the ESOC2016.

 

 

Causal Inference in Law: An Epidemiological Perspective

source:ejrr

Finally, it is here. The article I wrote together with WdH, MZ and RM was published in the European Journal of Risk and Regulation last week. And boy, did it take time! This whole project, an interdisciplinary effort in which epidemiological thinking was applied to questions of causal inference in tort law, took more than 3 years, with only a couple of months of writing; the rest was waiting and waiting and waiting and some peer review. But more on this later.

First some content. In the article we discuss the idea of proportional liability, which adheres to the epidemiological concept of multi-causality. But the article is more: as this is a journal for non-epidemiologists, we also provide a short and condensed overview of study design, bias and other epidemiological concepts such as counterfactual thinking. You might have recognised the theme from my visits to the Leiden Law School for some workshops. The EJRR editorial describes it as: “(…) discuss the problem of causal inference in law, by providing an epidemiological viewpoint. More specifically, by scrutinizing the concept of the so-called “proportional liability”, which embraces the epidemiological notion of multi-causality, they demonstrate how the former can be made more proportional to a defendant’s relative contribution in the known causal mechanism underlying a particular damage.”

Getting this thing published was tough: the quality of the peer review was low (dare I say zero?), communication was difficult, the submission system was flawed, etc. But most of all, the editorial office was slow: the first submission was in June 2013! This could be a non-medical-journal thing, I do not know, but still almost three years. And all this for an invited article that was planned to be part of a special edition on the link between epidemiology and law, which never came. Due to several delays (surprise!) of the other articles for this edition, it was decided that our article would not wait for the special edition any longer. Therefore, our cool little insight into epidemiology now seems to be lost between all those legal and risk-regulation articles. A shame if you ask me, but I am glad that we are not waiting any longer!

Although I do love interdisciplinary projects, and I think the result is a nice one, I do not want to go through this process again. No more EJRR for me.

Oh, one more thing… the article is behind a paywall and I do not have access through my university, nor did the editorial office provide me with a link to a PDF of the final version. So, to be honest, I don’t have the final article myself! Feels weird. I hope EJRR will provide me with a PDF quite soon. In the meantime, anybody with access to this article, please feel free to send me a copy!

Where Have All the Rodents Gone? The Effects of Attrition in Experimental Research on Cancer and Stroke

 

source: journals.plos.org/plosbiology

We published a new article in PLOS Biology today, with the title:

“Where Have All the Rodents Gone? The Effects of Attrition in Experimental Research on Cancer and Stroke”

This is a wonderful collaboration between three fields: statistics, epidemiology and laboratory research. Together we took a look at what is called attrition in preclinical labs, that is, the loss of data in animal experiments. This could be because an animal died before the needed data could be obtained, or just because a measurement failed. This loss of data translates to the concept of loss to follow-up in epidemiological cohort studies, and from that field we know that it can lead to a substantial loss of statistical power and perhaps even bias.

But it was unknown to what extent this is also a problem in preclinical research, so we did two things. We looked at how often papers indicated there was attrition (with an alarming number of papers not providing the data for us to establish whether there was attrition), and we ran simulations of what happens when there is attrition under various scenarios. The results paint a clear picture: both the loss of power and the bias are substantial. Their degree depends, of course, on the attrition scenario, but the message of the paper is clear: we should be aware of the problems that come with attrition, and reporting on attrition is the first step in minimising this problem.
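As a toy illustration of the bias half of that message (a hedged sketch, not the simulation code from the paper; the sample sizes, effect size and attrition rule are invented):

```python
import random
import statistics

random.seed(42)

def experiment(n_per_arm=10, true_diff=1.0, drop_worst=0):
    """One mock animal experiment: treatment shifts the outcome by true_diff.
    If drop_worst > 0, the worst-scoring treated animals are lost,
    i.e. attrition that depends on the outcome."""
    control = [random.gauss(0.0, 1.0) for _ in range(n_per_arm)]
    treated = [random.gauss(true_diff, 1.0) for _ in range(n_per_arm)]
    if drop_worst:
        treated = sorted(treated)[drop_worst:]  # drop the poorest responders
    return statistics.mean(treated) - statistics.mean(control)

runs = 5000
effect_full = statistics.mean(experiment() for _ in range(runs))
effect_attrition = statistics.mean(experiment(drop_worst=3) for _ in range(runs))
print(f"no attrition:             {effect_full:.2f}")
print(f"outcome-driven attrition: {effect_attrition:.2f}")
```

With no attrition the average estimate sits near the true difference of 1.0; losing the three worst-responding treated animals per experiment inflates it markedly.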

A nice thing about this paper is that it coincides with the start of a new research section in the PLOS galaxy, “meta-research”: a collection of papers that all focus on how science works, behaves, and can or even should be improved. I can only welcome this, as more projects on this topic are in our pipeline!

The article can be found on pubmed and my mendeley profile.

Update 6.1.16: WOW, what media attention for this one. Interviews with outlets from the UK, US, Germany, Switzerland, Argentina, France, Australia etc., German radio, the Dutch Volkskrant, and a video on focus.de. More via the corresponding altmetrics page. Also interesting is the post by UD, the lead on this project and chief of the CSB, on his own blog “To infinity, and beyond!”

 

New article published – Ankle-Brachial Index and Recurrent Stroke Risk: Meta-Analysis


Another publication, this time on the role of the ABI as a predictor of stroke recurrence. This is a meta-analysis combining data from 11 studies, which allowed us to see that the ABI was moderately associated with recurrent stroke (RR 1.7) and vascular events (RR 2.2). Not that much, but it might be just enough to improve some of the risk prediction models available for stroke patients when the ABI is incorporated.
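For readers unfamiliar with how such pooled risk ratios arise, here is a minimal fixed-effect, inverse-variance pooling sketch. The three studies and their numbers below are hypothetical, not the 11 studies in the meta-analysis:

```python
import math

# Hypothetical study-level risk ratios with 95% CIs: (RR, lower, upper).
studies = [(1.5, 1.0, 2.3), (2.0, 1.2, 3.4), (1.6, 0.9, 2.8)]

num = den = 0.0
for rr, lo, hi in studies:
    log_rr = math.log(rr)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from the CI width
    w = 1 / se**2                                    # inverse-variance weight
    num += w * log_rr
    den += w

pooled = math.exp(num / den)  # back-transform the weighted mean of log RRs
print(f"pooled RR = {pooled:.2f}")
```

Pooling happens on the log scale because the sampling distribution of a log risk ratio is approximately normal; the result is then exponentiated back to a ratio.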

This work, the product of the great work of some of the bright students at the CSB (JBH and COL), is a good start in our search for a good stroke recurrence risk prediction model. This will be a major topic in our future research in the PROSCIS study, which is led by TGL. I am looking forward to the results of that study, as better prediction models are needed in the clinic, especially since more precise data and diagnoses might lead to better subgroup-specific risk prediction and treatment.

The article can be found on pubmed and my mendeley profile and should be cited as

Hong J Bin, Leonards CO, Endres M, Siegerink B, Liman TG. Ankle-Brachial Index and Recurrent Stroke Risk. Stroke 2015; : STROKEAHA.115.011321.

New articles published: hypercoagulability and the risk of ischaemic stroke and myocardial infarction

Ischaemic stroke + myocardial infarction = arterial thrombosis. Are these two diseases just two sides of the same coin? Well, most of the research I did in the last couple of years tells a different story: most of the time, hypercoagulability has a stronger impact on the risk of ischaemic stroke than on that of myocardial infarction. And where this was not the case, it was at least clear that the impact was differential. But the papers I published were all single data points, so we needed to provide an overview of all these points to get the whole picture. We did so by publishing two papers, one in the JTH and one in PLOS ONE.

The first paper is a general discussion of the results from the RATIO study, basically an adaptation of the discussion chapter of my thesis (yes, it took some time to get to the point of publication, but that’s a whole different story), with a more in-depth discussion of the extent to which we can draw conclusions from these data. We tried to address the caveats of the first study (a limited number of markers, only young women, only case-control, basically a single study) with our second publication. Here we did the same trick, but in a systematic review. This way, our results gained external validity, while we ensured internal validity by only including studies that studied both diseases, thus ruling out large biases due to differences in study design. I love these two publications!

You can find these publications through their PMID 26178535 and 26178535, or via my mendeley account.

PS the JTH paper has PAFs in them. Cool!

 

New article published: the relationship between ADAMTS13 and MI


This article is a collaboration with a lot of people. Initiated by the Milan group, we ended up with a quite diverse group of researchers to answer this question, because of the method that we used: the individual patient data (IPD) meta-analysis. The best thing about this approach is that you can pool the data from different studies while adjusting for potential sources of confounding in the same manner across studies (given that the data are available, that is). On their own, these studies showed mixed results. But in the end, we were able to use the combined data to show that there was an increased MI risk, but only for those with very low levels of ADAMTS13. So, here you see the power of IPD meta-analysis!
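One way to see the appeal of pooling at the individual level is a stratified analysis over patient-level data. The sketch below uses a Mantel-Haenszel pooled odds ratio stratified by study; this is an illustration of the general idea, not the model used in the paper, and all data are simulated:

```python
import random

random.seed(1)

def make_study(n, base_risk, or_true=2.0):
    """Simulate one study: 20% of patients are exposed (hypothetically,
    'very low ADAMTS13'); exposure multiplies the odds of MI by or_true."""
    rows = []
    for _ in range(n):
        exposed = random.random() < 0.2
        odds = base_risk / (1 - base_risk) * (or_true if exposed else 1.0)
        rows.append((exposed, random.random() < odds / (1 + odds)))
    return rows

# Three hypothetical studies with very different baseline risks.
studies = [make_study(2000, r) for r in (0.05, 0.15, 0.30)]

# Mantel-Haenszel pooled OR, stratified by study: each study contributes
# its own 2x2 table, so between-study differences do not distort the pooling.
num = den = 0.0
for rows in studies:
    n = len(rows)
    a = sum(e and d for e, d in rows)            # exposed cases
    b = sum(e and not d for e, d in rows)        # exposed non-cases
    c = sum((not e) and d for e, d in rows)      # unexposed cases
    dd = sum((not e) and (not d) for e, d in rows)
    num += a * dd / n
    den += b * c / n

or_mh = num / den
print(f"Mantel-Haenszel pooled OR = {or_mh:.2f}")
```

Because each study forms its own stratum, the very different baseline risks never mix, and the pooled estimate recovers something close to the common odds ratio of 2 used in the simulation.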

The credits for this work go primarily to AM, who did a great job of getting all PIs on board, analysing the data and writing a good manuscript. The final version is not online yet, but you can find the pre-publication on pubmed.

 

 

New article published – Conducting your own research: a revised recipe for a clinical research training project

source: https://www.ntvg.nl/artikelen/zelf-onderzoek-doen

A quick update on a new article that was published on Friday in the NTVG. This article with the title

“Conducting your own research: a revised recipe for a clinical research training project”

– gives a couple of suggestions for young clinicians/researchers on how they could organise their epidemiological research projects. This paper was written to commemorate the retirement of prof JvdB, who wrote the original article back in 1989. I grew quite fond of this article, as it combines insights from 25 years back with quite recent ones (e.g. STROBE and the Schuyt committee) and resulted in an article that will help young researchers rethink how they plan and execute their own research projects.

There are 5 key suggestions that form the backbone of this article, i.e. limit the research question, conduct a pilot study, write the article before you collect the data, streamline the research process, and be accountable. As the article is in Dutch only at this moment, I will work on an English version. First drafts of that manuscript, each discussing one of the 5 recommendations, might appear on this website. And how about a German version?

Anyway, it has to be mentioned that if it were not for JvdB, this article would never have come to light. Not only because he wrote the original, but mostly because he is one of the most inspiring teachers of epidemiology.

New publication: LTTE in the American Journal of Epidemiology

At the department of Clinical Epidemiology of the LUMC we have a continuous course/journal club in which we read epi literature and books in a nice little group. The group, called Capita Selecta, has a nice website which can be found here. Some time ago we read an article that proposed to include dormant Mendelian randomisation studies in RCTs, to figure out the causal pathways of a treatment for chronic diseases. This could be most helpful when there is a discrepancy between the expected effect and the observed effect. During the discussion of this article we did not agree with the authors for several reasons. We (AGCB, IP and myself) decided to write a LTTE with these points. The journal was nice enough to publish our concerns, together with a response by the authors of the original article. The PDF can be found via the links below, which will take you to the website of the American Journal of Epidemiology. The PDF of our LTTE can also be found at my mendeley profile.

original article
letter to the editor
response by the author

Grant awarded to investigate the long term effects of cardiovascular disease at a young age

Today I got a letter from the Leiden University Fund (LUF) informing me that the grant we requested was awarded. This is great, because now we can investigate the long-term effects of young stroke, myocardial infarction and peripheral arterial disease. We will do this by linking our data from the RATIO study to several national databases (e.g. cause-of-death registries and hospital admissions) that are kept by the Central Bureau of Statistics (CBS). I will perform this research together with AM and other Italian colleagues from Milan.

The grant (11K) that was awarded comes from the Den Dulk-Moermans Fonds, which has existed since 2010, as we can read from the Dutch information on the LUF website:

The Den Dulk-Moermans Fonds was established in 2010 after receiving a bequest from Mr A.M. den Dulk. The aim of the Fund is to finance research into health in the broadest sense of the word.

The protective effects of statins on thrombosis recurrence: a letter to the editor of the European Heart Journal

Recently, Biere-Safi et al. published the results from their analyses of the PHARMO database, describing the relation between statin use and the recurrence of pulmonary embolism (pubmed). This article was the topic of a heated debate in our department: is it really possible that statin use halves the risk of recurrence in this patient group? During this discussion we found some issues that could lead to an overestimation of the underlying true protective effect. We described these issues in a letter to the editor, which has been accepted as an e-letter. Some journals use e-letters to facilitate a faster and more vivid debate after a publication, but unfortunately, these e-letters can only be found on the website of the publisher and not, for example, in Web of Science or Pubmed. This could mean that these critical parts of the scientific debate have a smaller reach, which is a pity.

Nonetheless, the text of our e-letter is to be found on the website of the Eur Heart J, or via my Mendeley account.

Grant awarded to investigate the role of coagulation FVIII in the aetiology of ischaemic stroke in RATIO study

I just received a letter from the KNAW stating that the grant proposal I sent to one of the funds of the KNAW, the Van Leersum Fund, was awarded. From their website, we can only learn a little about this fund:

“The Van Leersum Fund supports neuro(bio)logical, radiological and pharmaceutical research by awarding a series of research grants.

The Fund was established in 1922 and is named after P. van Leersum. The assets of the fund are made up of his estate and the estate of Ms I.G. Harbers-Kramer.”

With this grant we will be able to measure coagulation factor VIII in the ischaemic stroke substudy of the RATIO study. Coagulation factor VIII is one of the most potent risk factors for venous thrombosis in the coagulation system, and we are quite curious what effect it has on the risk of ischaemic stroke in young women.