New paper: pulmonary dysfunction and CVD outcome in the ELSA study

This is a special paper to me, as it is 100% the product of my team at the CSB. Well, 100%? Not really. This is the first paper from a series of projects in which we work with open data, i.e. data collected by others who subsequently shared it. A lot of people talk about open data and how all data created should be made available to other researchers, but not a lot of people talk about actually using that kind of data. For that reason we picked a couple of data resources to see how easy it is to work with data that was not initially collected by ourselves.

It is hard, as we have now learned. Even though the studies we focused on (the ELSA study and UK Understanding Society) provide a good description of their data and methods, understanding them takes time and effort. And even after putting in all that time and effort, you might still not know all the little details and idiosyncrasies in the data.

A nice example lies in the exposure that we used in this analysis: pulmonary dysfunction. The data for this exposure were captured in several different datasets, in different variables. Reverse engineering a logical and interpretable concept out of these data points was not easy. Similar questions arise with data you collect yourself, but then at least that thinking is more or less done before data collection starts and no reverse engineering is needed.

So we learned a lot. Not only about the role of pulmonary dysfunction as a cause of CVD (hint: it is limited), about the different sensitivity analyses that we used to check the influence of missing data on the conclusions of our main analyses (hint: limited again), and about the need to update an exposure that progresses over time (hint: relevant), but also about what it is like to use data collected by others (hint: useful, but not easy).

The paper, titled “Pulmonary dysfunction and development of different cardiovascular outcomes in the general population”, with IP as the first author, can be found on PubMed or via my Mendeley profile.


New paper: Contribution of Established Stroke Risk Factors to the Burden of Stroke in Young Adults


Just a relative risk is not enough to fully understand the implications of your findings. Sure, if you are an expert in a field, the context of that field will help you to assess the RR. But if you are not, the context of the numerator and denominator is often lost. There are several ways to work towards that. If you have a question that revolves around group discrimination (i.e. questions of diagnosis or prediction), the RR needs to be understood in relation to other predictors or diagnostic variables. That combination is best assessed through added discriminatory value, such as AUC improvement, or even fancier methods like reclassification tables and net benefit indices. But if you are interested in a single factor (e.g. in questions of causality or treatment), a number needed to treat (NNT) or the population attributable fraction (PAF) can be used.
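To make the RR-versus-NNT point concrete, here is a minimal sketch of how the same pair of event rates yields both numbers. The rates are invented for illustration and do not come from any study discussed here:

```python
# Illustrative sketch: the same two (made-up) event rates expressed as a
# relative risk and as a number needed to treat (NNT).
control_risk = 0.10   # 10% of untreated people have the event
treated_risk = 0.06   # 6% of treated people have the event

rr = treated_risk / control_risk           # relative risk, context-free
arr = control_risk - treated_risk          # absolute risk reduction
nnt = 1 / arr                              # treat this many to prevent one event

print(f"RR = {rr:.2f}, ARR = {arr:.2%}, NNT = {nnt:.0f}")
```

The same RR of 0.60 would give a very different NNT if the baseline risk were 1% instead of 10%, which is exactly the numerator/denominator context that the RR alone hides.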

The PAF has been the subject of my publications before, for example in these papers where we use the PAF to provide the context for the different ORs of markers of hypercoagulability in the RATIO study / in a systematic review. This paper is a more general text, as it is meant to give non-epidemiologists insight into what epidemiology can bring to the field of law. Here, the PAF is an interesting measure, as it is related to the etiological fraction – a number that can be very interesting in tort law. Some of my slides from a law symposium that I attended address these questions and that particular Dutch case of tort law.

But the PAF is and remains an epidemiological measure: it tells us what fraction of the cases in the population can be attributed to the exposure of interest. You can combine the PAFs of several factors into a single number (given some assumptions, which basically boil down to the idea that the combined factors work on an exactly multiplicative scale, both statistically and biologically). A 2016 Lancet paper that made a huge impact and increased interest in the concept of the PAF was the INTERSTROKE paper. It showed that up to 90% of all stroke cases can be attributed to only 10 factors, all of them modifiable.
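The way several PAFs collapse into one number can be sketched with Levin's formula and the standard multiplicative combination. The prevalences and relative risks below are invented placeholders, not values from INTERSTROKE or any other study:

```python
# Sketch: Levin's formula for a single-exposure PAF, and the usual way to
# combine several PAFs under the assumption that the factors act
# independently and multiplicatively. All inputs are hypothetical.

def paf(prevalence, rr):
    """Levin's formula: fraction of cases attributable to one exposure."""
    return prevalence * (rr - 1) / (1 + prevalence * (rr - 1))

# hypothetical (prevalence, relative risk) pairs for three risk factors
factors = [(0.30, 2.5), (0.40, 1.8), (0.20, 3.0)]
pafs = [paf(p, rr) for p, rr in factors]

# combined PAF = 1 - product of (1 - PAF_i); note it is NOT the plain sum,
# which could exceed 100% because cases can be attributed to several factors
combined = 1.0
for f in pafs:
    combined *= 1 - f
combined = 1 - combined

print([round(f, 2) for f in pafs], "combined:", round(combined, 2))
```

The non-additive combination is why figures like "90% of stroke attributable to 10 factors" are coherent even when the individual PAFs sum to more than 100%.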

We wondered whether the same holds for young stroke patients. After all, the longstanding idea is that young stroke is a different disease from old stroke, one in which traditional CVD risk factors play a less prominent role and more exotic causal mechanisms (e.g. hypercoagulability) play a more prominent one. Boy, were we wrong. In a dataset that combines data from the SIFAP and GEDA studies, we found that the bulk of the cases can be attributed to modifiable risk factors (80% to just 4 risk factors). There are some elements of the paper (an age effect even within the young study population, subtype effects, definition effects) that I won't go into here. For that you need to read the paper – published in Stroke – here, or via my Mendeley account. The bulk of the work was done by AA and UG. Great job!

Starting a research group: some thoughts for a new paper


It has been 18 months since I moved to Berlin to take over the lead of the clinical epidemiology research group at the CSB. Recently, the ISTH early career taskforce contacted me to ask whether I would be willing to write something about my experiences over the last 18 months as a rookie group leader. The idea is that these experiences, combined with a couple of other papers on similarly useful topics for early career researchers, will be published in JTH.

I was a bit reluctant at first, as I believe that how people handle the new situations one encounters as a new group leader depends heavily on personality and individual circumstances. But then again, the new situations that I encountered might well generalize to other people. So I decided to go ahead, focusing on the description of the new situations I found myself in while keeping the personal experiences limited and only for illustration.

While writing, I discerned basically 4 new things about my new situation that I would have loved to realise a bit earlier.

  1. A new research group is never without context; get to know the academic landscape around your research group, as this is where you will find people for new collaborations etc.
  2. You either start a new research group from scratch, or you inherit a research group; be aware that the two have very different consequences and require different approaches.
  3. Try to find training and mentoring to help you cope with your new roles; it is not only the role of group leader that you need to adjust to. HR manager, accountant, mentor, researcher, project initiator, project manager and consultant are just a couple of the roles that I also need to fulfill on a regular basis.
  4. New projects; it is tempting to put all your power, attention, time and money behind a single project, but sometimes new projects fail. Perhaps start a couple of small side projects as a contingency?

As said, the things I describe in the paper might be very specific to my situation and as such not applicable to everyone. Nonetheless, I hope that reading the paper might help other young researchers prepare for the transition from post-doc to group leader. I will report back when the paper is finished and available online.

 

New articles published: hypercoagulability and the risk of ischaemic stroke and myocardial infarction

Ischaemic stroke + myocardial infarction = arterial thrombosis. Are these two diseases just two sides of the same coin? Well, most of the research I did in the last couple of years tells a different story: in most cases, hypercoagulability has a stronger impact on the risk of ischaemic stroke than on that of myocardial infarction. And where this was not the case, it was at least clear that the impact was differential. But these papers were all single data points, so we needed an overview of all of them to see the whole picture. We provided one by publishing two papers, one in JTH and one in PLOS ONE.

The first paper is a general discussion of the results from the RATIO study, basically an adaptation of the discussion chapter of my thesis (yes, it took some time to get to the point of publication, but that's a whole different story), with a more in-depth discussion of the extent to which we can draw conclusions from these data. We tried to address the caveats of the first study (a limited number of markers, only young women, only case-control, basically a single study) with our second publication. Here we did the same trick, but in a systematic review. This way our results have more external validity, while we ensured internal validity by only including studies that studied both diseases, thus ruling out large biases due to differences in study design. I love these two publications!

You can find these publications through their PMID 26178535 and 26178535, or via my Mendeley account.

PS: the JTH paper has PAFs in it. Cool!

 

New publication in NTVG: Mendelian randomisation

Together with HdH and AvHV I wrote an article for the Dutch NTVG on Mendelian randomisation in the Methodology series, which was published online today. This is not the first time; I have written in the NTVG before for this up-to-date series (not 1 but 2 papers on the crossover design), and I have also written on Mendelian randomisation before. In fact, that was one of the first 'educationals' I ever wrote. The weird thing is that I have never formally applied Mendelian randomisation analyses in a paper. I did apply the underlying reasoning in a paper, but no two-stage least squares analyses or similar. Does this bother me? Only a bit; I think it just shows the limited value of formal Mendelian randomisation studies: you need a lot of power and untestable assumptions, which greatly reduces the applicability of the method in practice. However, the underlying reasoning gives a good insight into the origin and effects of confounding (and perhaps even other forms of bias) in epidemiological studies. That's why I love Mendelian randomisation; it is just another tool in the epidemiologist's toolbox.
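The two-stage least squares logic mentioned above can be sketched on simulated data. Everything below is hypothetical (a made-up variant, exposure, outcome and effect sizes), purely to show why regressing on the instrument strips out confounding that biases the naive analysis:

```python
import numpy as np

# Toy Mendelian randomisation via two-stage least squares (2SLS) on
# simulated data. All variables and effect sizes are invented.
rng = np.random.default_rng(42)
n = 100_000

g = rng.binomial(2, 0.3, n)                 # genetic variant: 0/1/2 risk alleles
u = rng.normal(size=n)                      # unmeasured confounder
exposure = 0.5 * g + u + rng.normal(size=n)
outcome = 0.3 * exposure + u + rng.normal(size=n)  # true causal effect: 0.3

# Naive regression of outcome on exposure is biased upwards by u
ols = np.cov(exposure, outcome)[0, 1] / np.var(exposure)

# Stage 1: regress exposure on the variant; Stage 2: regress outcome
# on the stage-1 fitted values (equivalent to the Wald ratio here)
stage1 = np.cov(g, exposure)[0, 1] / np.var(g)
fitted = stage1 * (g - g.mean()) + exposure.mean()
tsls = np.cov(fitted, outcome)[0, 1] / np.var(fitted)

print(f"naive OLS: {ols:.2f}   2SLS: {tsls:.2f}   (simulated truth: 0.30)")
```

Because the variant is (by construction) independent of the confounder, the 2SLS estimate lands near the simulated truth while the naive estimate does not; the large `n` also illustrates the power these analyses demand.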

The NTVG paper can be found here on their website (here in pdf) and also on my Mendeley account.

Paper published in Arthritis Care & Research now quoted in NTVG

The Arthritis Care & Research paper which I co-authored (PubMed) attracted attention from the people at the NTVG. This paper, originally a collaboration between the Rheumatology department and the department of Clinical Epidemiology, describes the relationship between BMI, as a proxy for obesity, and treatment response in patients with rheumatoid arthritis, as described in the news section of the NTVG website. The text of the news item from the NTVG website can also be read on this website if you ….

Continue reading “Paper published in Arthritis Care & Research now quoted in NTVG”

Paper published in Arthritis Care & Research

A paper which I co-authored has been indexed for PubMed. This paper is a collaboration between the Rheumatology department and the department of Clinical Epidemiology. LH and MvdB have done a great job describing the relationship between BMI, as a proxy for obesity, and treatment response in patients with rheumatoid arthritis.

Ref: Heimans L, van den Broek M, le Cessie S, Siegerink B, Riyazi N, Han KH, Kerstens PJSM, Huizinga TWJ, Lems WF, Allaart CF. High BMI is associated with decreased treatment response to combination therapy in recent onset RA patients – a subanalysis from the BeSt study. Arthritis Care & Research. 2013