What will happen, and when, after an ICH? A summary of the current state of prediction models

Figure 2 from the paper, showing the number of prognostic models that use a certain combination of outcome (rows) and the timing of outcome assessment (columns)

The question seems to be straightforward: “what bad stuff happens, and when, after somebody develops an intracerebral hemorrhage, and how will I know whether that will also happen to me now that I have one?” The answer is, as always, “it depends”. It depends on how you actually specify the question. What does “bad stuff” mean? Which “when” are you interested in? And what are your personal risk factors? We need all this information in order to get an answer from a clinical prediction model.

The thing is, we also need a good working clinical prediction model – that is, it should distinguish those who develop the bad stuff from those who don’t, but it should also make sure that the absolute risks are about right. This new paper (a project carried by JW) discusses the ins and outs of the current state of affairs when it comes to prediction. Written for neurologists, some of the comments and points that we raise will not be new to methodologists. But as it is not a given that a methodologist will be involved when somebody decides that a new prediction model needs to be developed, we wrote it all up in this review.

The paper, published in Neurological Research and Practice, has a couple of messages:

  • The number of existing prediction models for this disease is already quite big – and the complexity of the models seems to increase over time, without a clear indication that the performance of these models gets better. A lot of these models use different definitions for the type of outcome, as well as for the moment that the outcome is assessed – all leading to wildly different models, which are difficult to compare.
  • The statistical workup is limited: performance is often measured only as a simple AUC, while calibration and net benefit are not reported (see the sketch after this list for what these metrics capture). Even more worryingly, external validation is not always possible, as the original publications do not provide point estimates.
  • Given the severity of the disease, the so-called “withdrawal of care bias” is an important element when thinking and talking about prognostic scores. This bias, in which those with a bad predicted outcome do not receive treatment, can lead to a self-fulfilling prophecy type of situation in the clinic, which is then captured in the data.
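For readers less familiar with these performance measures, here is a minimal sketch in Python of what they look like on a validation dataset. The data are simulated and the decision threshold is an arbitrary choice; none of this comes from the paper itself.

```python
import numpy as np

# Minimal sketch with simulated data: p_hat stands in for the risks a published
# model would produce, y for the observed outcomes. All numbers are invented.
rng = np.random.default_rng(1)
n = 1000
p_hat = rng.uniform(0.02, 0.80, n)       # hypothetical predicted risks
y = rng.binomial(1, p_hat * 0.7)         # outcomes, deliberately miscalibrated

# Discrimination (AUC): probability that a random case is ranked above a random non-case
ranks = p_hat.argsort().argsort() + 1
n1, n0 = y.sum(), n - y.sum()
auc = (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

# Calibration-in-the-large: does the average predicted risk match the observed frequency?
cal_in_large = y.mean() - p_hat.mean()

# Net benefit at an (arbitrary) decision threshold of 0.3
pt = 0.3
treat = p_hat >= pt
tp, fp = np.sum(treat & (y == 1)), np.sum(treat & (y == 0))
net_benefit = tp / n - fp / n * pt / (1 - pt)

print(f"AUC = {auc:.2f}, calibration-in-the-large = {cal_in_large:+.2f}, net benefit = {net_benefit:.3f}")
```

A model can score well on any one of these while failing on the others, which is exactly why reporting only the AUC is not enough.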

In short – when you think you want to develop a new model, think again. Think long and hard. Identify why the current models are or are not working. Can you improve on them? Do you have the insights and skill set to do so? Really? If you think so, please go ahead, but just don’t add yet another not-so-useful prediction model to the already saturated literature.

Three new papers published – part II

In my last post, I explained why I am at the moment not writing one post per new paper. Instead, I group them. This time with a common denominator, namely the relationship between cardiac troponin and stroke:

High-Sensitivity Cardiac Troponin T and Cognitive Function in Patients With Ischemic Stroke. This paper finds its origins in the PROSCIS study, in which we studied other biomarkers as well. In fact, there is a whole lot more coming. The analyses of these longitudinal data showed a – let’s say ‘medium-sized’ – relationship between cardiac troponin and cognitive function. There are a whole lot of caveats – a presumptive learning curve, and not a big drop in cognitive function to work with anyway. After all, these are only mildly to moderately affected stroke patients.

Association Between High-Sensitivity Cardiac Troponin and Risk of Stroke in 96 702 Individuals: A Meta-Analysis. This paper investigates several patient populations – the general population, populations at increased risk, and stroke patients. The number of individuals in the title might, therefore, be a little bit deceiving – I think you should really only look at the results with those separate groups in mind. Not only might the biology be different, the methodological aspects (e.g. heterogeneity) and the interpretation (relative risks with high absolute risks) also differ.

Response by Siegerink et al to Letter Regarding Article, “Association Between High-Sensitivity Cardiac Troponin and Risk of Stroke in 96 702 Individuals: A Meta-Analysis”. We did the meta-analysis as much as possible “by the book”. We pre-registered our plan and published accordingly, all to discourage ourselves (and our peer reviewers) from going on a “hunt for specific results”. But then there was a letter to the editor with the following central point: in the subgroup of patients with atrial fibrillation, the cut-offs used for cardiac troponin are so different that pooling these studies together in one analysis does not make sense. At first glance, it looks like the authors have a point: it is difficult to get a very strict interpretation from the results that we obtained. This paper describes our response. Hint: upon closer inspection, we do not agree and make a good counterargument (at least, that’s what we think).

Three new papers published

Normally I publish a new post for each new paper that we publish. But with COVID-19, normal does not really work anymore. Still, I don’t want to completely throw my normal workflow overboard. Therefore, a quick update on a couple of publications, all in one blog post, yet without a common denominator:

Stachulski, F., Siegerink, B. and Bösel, J. (2020) ‘Dying in the Neurointensive Care Unit After Withdrawal of Life-Sustaining Therapy: Associations of Advance Directives and Health-Care Proxies With Timing and Treatment Intensity’, Journal of Intensive Care Medicine. A paper about the role of advance directives and treatment in the neurointensive care unit. Not the topic I normally publish about, as the severity of disease in these patients is luckily not what we usually see in stroke patients.

Impact of COPD and anemia on motor and cognitive performance in the general older population: results from the English longitudinal study of ageing. This paper makes use of the ELSA study – an open-access database – and hinges on the idea that sometimes two risk factors only lead to the progression of disease or symptoms if they act jointly. This idea behind interaction is often “tested” with a simple statistical interaction model. There are many reasons why this is not the best thing to do, so we also looked at biological (or additive) interaction; a small sketch of the difference follows below.
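To illustrate the difference between the two scales, here is a minimal sketch with invented relative risks (the numbers are not taken from the paper): on the multiplicative scale there appears to be no interaction at all, while on the additive scale the joint effect clearly exceeds the sum of the separate effects.

```python
# Invented relative risks, all relative to the doubly-unexposed group;
# purely for illustration, not results from the paper.
rr_copd = 2.0      # hypothetical RR for COPD only
rr_anemia = 3.0    # hypothetical RR for anemia only
rr_both = 6.0      # hypothetical RR for having both

# Multiplicative scale: ratio of the joint RR to the product of the separate RRs.
# A value of 1.0 means "no interaction" on this scale.
multiplicative = rr_both / (rr_copd * rr_anemia)

# Additive scale: relative excess risk due to interaction (RERI).
# A value above 0 means the joint effect exceeds the sum of the separate effects.
reri = rr_both - rr_copd - rr_anemia + 1

print(f"multiplicative interaction: {multiplicative:.2f}, RERI: {reri:.2f}")
# -> 1.00 (no multiplicative interaction), but RERI = 2.0 (clear additive interaction)
```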

Thrombo-Inflammation in Cardiovascular Disease: An Expert Consensus Document from the Third Maastricht Consensus Conference on Thrombosis. This is a hefty paper, with just as many authors as pages, it seems. But this is not a normal paper – it is the consensus statement of the thrombosis meeting last year in Maastricht. I really liked that meeting, not only because I got to see old friends, but also because a number of ideas and papers were the product of this meeting. This paper is, of course, one of them. More will follow, including some papers on the development of an ordinal outcome for functional status after venous thrombosis – but those will be part of a later blog post.

New paper – Improving the trustworthiness, usefulness, and ethics of biomedical research through an innovative and comprehensive institutional initiative

I report often on this blog about new papers that I have co-authored. Every time I highlight something that is special about that particular publication. This time I want to highlight a paper that I co-authored, but also didn’t. Let me explain.

https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3000576#sec014

The paper, with the title Improving the trustworthiness, usefulness, and ethics of biomedical research through an innovative and comprehensive institutional initiative, was published in PLOS Biology and describes the QUEST Center. The author list mentions three individual QUEST researchers, but it also has this interesting “on behalf of the QUEST group” author reference. What does that actually mean?

Since I have reshuffled my research, I am officially part of the QUEST team, and therefore I am part of that group. I gave some input on the paper, like many of my colleagues, but nowhere near enough to justify full authorship. That would, after all, require the following 4(!) elements, according to the ICMJE:

  • Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND
  • Drafting the work or revising it critically for important intellectual content; AND
  • Final approval of the version to be published; AND
  • Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

This is what the ICMJE says about large author groups: “Some large multi-author groups designate authorship by a group name, with or without the names of individuals. When submitting a manuscript authored by a group, the corresponding author should specify the group name if one exists, and clearly identify the group members who can take credit and responsibility for the work as authors. The byline of the article identifies who is directly responsible for the manuscript, and MEDLINE lists as authors whichever names appear on the byline. If the byline includes a group name, MEDLINE will list the names of individual group members who are authors or who are collaborators, sometimes called non-author contributors, if there is a note associated with the byline clearly stating that the individual names are elsewhere in the paper and whether those names are authors or collaborators.”

I think that this format should be used more, but that will only happen if people take the collaborator status seriously as well. Other “contribution solutions” can help to give some insight into what it means to be a collaborator, such as a detailed description like in movie credits or a standardized contribution table. We have to start appreciating all forms of contributions.

New paper: pulmonary dysfunction and CVD outcome in the ELSA study

This is a special paper to me, as it is 100% the product of my team at the CSB. Well, 100%? Not really. This is the first paper from a series of projects where we work with open data, i.e. data collected by others who subsequently shared it. A lot of people talk about open data, and how all data created should be made available to other researchers, but not a lot of people talk about using that kind of data. For that reason we have picked a couple of data resources to see how easy it is to work with data that was not initially collected by ourselves.

It is hard, as we have now learned. Even though the studies we focussed on (the ELSA study and UK Understanding Society) have a good description of their data and methods, understanding them takes time and effort. And even after putting in all that time and effort you might still not know all the little details and idiosyncrasies in the data.

A nice example lies in the exposure that we used in this analysis, pulmonary dysfunction. The data for this exposure were captured in several different datasets, in different variables. Reverse engineering a logical and interpretable concept out of these data points was not easy. This is perhaps also true for data that you collect yourself, but then at least this thinking is more or less done before data collection starts and no reverse engineering is needed.

So we learned a lot. Not only about the role of pulmonary dysfunction as a cause of CVD (hint: it is limited), or about the different sensitivity analyses that we used to check the influence of missing data on the conclusions of our main analyses (hint: limited again), or the need to update an exposure that progresses over time (hint: relevant), but also about what it is like to use data collected by others (hint: useful, but not easy).

The paper, with the title “Pulmonary dysfunction and development of different cardiovascular outcomes in the general population” and with IP as first author, can be found here on PubMed or via my Mendeley profile.

New paper: Contribution of Established Stroke Risk Factors to the Burden of Stroke in Young Adults


Just a relative risk is not enough to fully understand the implications of your findings. Sure, if you are an expert in a field, the context of that field will help you to assess the RR. But if you are not, the context of the numerator and denominator is often lost. There are several ways to work towards that. If you have a question that revolves around group discrimination (i.e. questions of diagnosis or prediction), the RR needs to be understood in relation to other predictors or diagnostic variables. That combination is best assessed through the added discriminatory value, such as the AUC improvement, or even fancier methods like reclassification tables and net benefit indices. But if you are interested in a single factor (e.g. in questions of causality or treatment), a number needed to treat (NNT) or the population attributable fraction (PAF) can be used.
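As a minimal sketch of those two absolute measures, here is how an RR translates into an NNT (or rather a number needed to harm, for a harmful exposure) and a PAF once you add a baseline risk and an exposure prevalence. All numbers are invented for illustration and are not taken from any of the papers mentioned here.

```python
# Hedged sketch with invented numbers: how the same RR becomes an absolute measure.
rr = 2.0               # hypothetical relative risk of an exposure
baseline_risk = 0.05   # risk in the unexposed
prevalence = 0.30      # prevalence of the exposure in the population

# Absolute risk difference and NNT/NNH (for a treatment with RR < 1 the logic is mirrored)
risk_exposed = baseline_risk * rr
risk_difference = risk_exposed - baseline_risk
nnh = 1 / risk_difference

# PAF: the fraction of all cases in the population attributable to the exposure
paf = prevalence * (rr - 1) / (1 + prevalence * (rr - 1))

print(f"risk difference = {risk_difference:.3f}, NNH = {nnh:.0f}, PAF = {paf:.1%}")
# -> risk difference = 0.050, NNH = 20, PAF = 23.1%
```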

The PAF has been the subject of my publications before, for example in these papers where we used the PAF to provide the context for the different ORs of markers of hypercoagulability in the RATIO study / in a systematic review. This paper is a more general text, as it is meant to provide non-epidemiologists with an insight into what epidemiology can bring to the field of law. Here, the PAF is an interesting measure, as it is related to the etiological fraction – a number that can be very interesting in tort law. Some of my slides from a law symposium that I attended address these questions and that particular Dutch case of tort law.

But the PAF is and remains an epidemiological measure and tells us what fraction of the cases in the population can be attributed to the exposure of interest. You can combine the PAFs of several factors into a single number (given some assumptions, which basically boil down to the idea that the combined factors work on an exact multiplicative scale, both statistically as well as biologically). A 2016 Lancet paper which made a huge impact and increased interest in the concept of the PAF was the INTERSTROKE paper. It showed that up to 90% of all stroke cases can be attributed to only 10 factors, all of them modifiable.
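That combination step is not a simple sum. Under the multiplicative assumption mentioned above it works roughly like the sketch below; the individual PAFs are invented for illustration and are not the INTERSTROKE estimates.

```python
# Combining several PAFs into one number, assuming the factors act
# independently on a multiplicative scale. Values are invented.
pafs = [0.30, 0.25, 0.20, 0.15]

not_attributable = 1.0
for paf in pafs:
    not_attributable *= (1 - paf)   # fraction of cases NOT attributable to each factor
combined_paf = 1 - not_attributable  # fraction attributable to at least one factor

print(f"combined PAF: {combined_paf:.1%}")
# -> about 64%, clearly not the simple sum of the individual PAFs (which would be 90%)
```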

We wondered whether this was the same for young stroke patients. After all, the longstanding idea is that young stroke is a different disease from old stroke, one in which traditional CVD risk factors play a less prominent role. The idea is that more exotic causal mechanisms (e.g. hypercoagulability) play a more prominent role in this age group. Boy, were we wrong. In a dataset which combines data from the SIFAP and GEDA studies, we noticed that the bulk of the cases can be attributed to modifiable risk factors (80% to 4 risk factors). There are some elements of the paper (an age effect even within the young study population, subtype effects, definition effects) that I won’t go into here. For that you need to read the paper – published in Stroke – here, or via my Mendeley account. The main body of the work was done by AA and UG. Great job!

Starting a research group: some thoughts for a new paper


It has been 18 months since I started at the CSB in Berlin to take over the lead of the clinical epidemiology research group. Recently, the ISTH early career taskforce contacted me to ask whether I would be willing to write something about my experiences over the last 18 months as a rookie group leader. The idea is that these experiences, combined with a couple of other papers on similarly useful topics for early career researchers, will be published in JTH.

I was a bit reluctant at first, as I believe that how people handle the new situations that one encounters as a new group leader is quite dependent on personality and individual circumstances. But then again, the new situations that I encountered might be more generalizable to other people. So I decided to go ahead and focus on describing the new situations I found myself in, while keeping the personal experiences limited and for illustration only.

While writing, I discerned that there are basically four things about my new situation that I would have loved to realise a bit earlier:

  1. A new research group is never without context; get to know the academic landscape around your research group, as this is where you will find people for new collaborations, etc.
  2. You either start a new research group from scratch, or you inherit a research group; be aware that the two have very different consequences and require different approaches.
  3. Try to find training and mentoring to help you cope with the new roles that group leaders have; it is not only the role of group leader that you need to get adjusted to. HR manager, accountant, mentor, researcher, project initiator, project manager and consultant are just a few of the roles that I also need to fulfil on a regular basis.
  4. New projects: it is tempting to put all your power, attention, time and money behind one project, but sometimes new projects fail. Perhaps start a couple of small side projects as a contingency?

As said, the things I describe in the paper might be very specific to my situation and as such not applicable to everyone. Nonetheless, I hope that reading the paper will help other young researchers prepare for the transition from post-doc to group leader. I will report back when the paper is finished and available online.

 

New articles published: hypercoagulability and the risk of ischaemic stroke and myocardial infarction

Ischaemic stroke + myocardial infarction = arterial thrombosis. Are these two diseases just two sides of the same coin? Well, most of the research I did in the last couple of years tells a different story: most of the time, hypercoagulability has a stronger impact on the risk of ischaemic stroke, at least when compared to myocardial infarction. And where in some cases this was not so, it was at least clear that the impact was differential. But the papers I published were all single data points, so we needed to provide an overview of all of them to get the whole picture. We did so by publishing two papers, one in JTH and one in PLOS ONE.

The first paper is a general discussion of the results from the RATIO study, basically an adaptation of the discussion chapter of my thesis (yes, it took some time to get to the point of publication, but that’s a whole different story), with a more in-depth discussion of the extent to which we can draw conclusions from these data. We tried to address the caveats of the first study (limited number of markers, only young women, only case-control, basically a single study) with our second publication. Here we did the same trick, but in a systematic review. This way, our results have more external validity, while we ensured internal validity by only including studies that studied both diseases, thus ruling out large biases due to differences in study design. I love these two publications!

You can find these publications through their PMIDs, 26178535 and 26178535, or via my Mendeley account.

PS: the JTH paper has PAFs in it. Cool!

 

New publication in NTVG: Mendelian randomisation

Together with HdH and AvHV I wrote an article for the Dutch NTVG on Mendelian randomisation in the methodology series, which was published online today. This is not the first time; I have written for this up-to-date series in the NTVG before (not 1 but 2 papers on the crossover design), and I have also written on Mendelian randomisation before. In fact, that was one of the first ‘educationals’ I ever wrote. The weird thing is that I have never formally applied Mendelian randomisation analyses in a paper. I did apply the underlying reasoning in a paper, but no two-stage least squares analyses or similar. Does this bother me? Only a bit, as I think this just shows the limited value of formal Mendelian randomisation studies: you need a lot of power and untestable assumptions, which greatly reduces the applicability of the method in practice. However, the underlying reasoning gives good insight into the origin and effects of confounding (and perhaps even other forms of bias) in epidemiological studies. That’s why I love Mendelian randomisation; it is just another tool in the epidemiologist’s toolbox.
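For those unfamiliar with the two-stage least squares logic mentioned above, here is a minimal simulated sketch (all variable names and effect sizes are invented): a naive regression is distorted by an unmeasured confounder, while the instrument-based estimate recovers the true effect – provided the usual, untestable, instrument assumptions hold.

```python
import numpy as np

# Hedged sketch of the two-stage least squares reasoning behind Mendelian
# randomisation, with simulated data. Effect sizes are invented for illustration.
rng = np.random.default_rng(7)
n = 100_000

g = rng.binomial(2, 0.3, n)              # genetic variant (the instrument)
u = rng.normal(size=n)                   # unmeasured confounder
exposure = 0.4 * g + 1.0 * u + rng.normal(size=n)
outcome = 0.5 * exposure + 1.0 * u + rng.normal(size=n)   # true causal effect = 0.5

# Naive regression of outcome on exposure is confounded by u
naive_slope = np.polyfit(exposure, outcome, 1)[0]

# Stage 1: predict the exposure from the instrument
exposure_hat = np.polyval(np.polyfit(g, exposure, 1), g)
# Stage 2: regress the outcome on the predicted exposure
tsls_slope = np.polyfit(exposure_hat, outcome, 1)[0]

print(f"naive estimate: {naive_slope:.2f}, 2SLS estimate: {tsls_slope:.2f} (truth: 0.5)")
```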

The NTVG paper can be found here on their website (here in pdf) and also on my Mendeley account.

Paper published in Arthritis Care & Research now quoted in NTVG

The Arthritis Care & Research paper which I co-authored (PubMed) attracted attention from the people at the NTVG. This paper, originally a collaboration between the Rheumatology department and the department of Clinical Epidemiology, described the relationship between BMI, as a proxy for obesity, and treatment response in patients with rheumatoid arthritis, as is described in the news section of the NTVG website. The text of the news item from the NTVG website can also be read on this website if you …


Paper published in Arthritis Care & Research

A paper which I co-authored has been indexed for PubMed. This paper is a collaboration between the Rheumatology department and the department of Clinical Epidemiology. LH and MvdB have done a great job describing the relationship between BMI, as a proxy for obesity, and treatment response in patients with rheumatoid arthritis.

Ref: Heimans L, van den Broek M, le Cessie S, Siegerink B, Riyazi N, Han KH, Kerstens PJSM, Huizinga TWJ, Lems WF, Allaart CF. High BMI is associated with decreased treatment response to combination therapy in recent onset RA patients – a subanalysis from the BeSt study. Arthritis Care & Research. 2013

Article published in NTVG on crossover study

Today, an educational article on crossover studies, written by TNB, JGvdB and myself, was published in the NTVG. The article appeared in the methodology series, which explains specific topics for the general physician: it explains the basic concepts of the crossover trial, but also advocates its statistical efficiency, as can be seen in the graph above. The article is published under open access and is therefore freely accessible. There is a catch… it’s published in Dutch.
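To give a feel for what that efficiency claim means, here is a minimal simulation (invented numbers, not the figures from the article): because every participant acts as their own control, the between-subject variation drops out of the treatment comparison, and the standard error shrinks accordingly. Period and carryover effects are ignored here for simplicity.

```python
import numpy as np

# Hedged sketch: crossover vs parallel-group precision with simulated data.
rng = np.random.default_rng(3)
n = 200
treatment_effect = 1.0
between_sd, within_sd = 3.0, 1.0     # between-subject variation dominates

subject = rng.normal(0, between_sd, n)                              # stable subject-level component
placebo = subject + rng.normal(0, within_sd, n)                     # period on placebo
active = subject + treatment_effect + rng.normal(0, within_sd, n)   # period on treatment

# Parallel-group analysis: compare two independent groups of n/2 each
parallel_se = np.sqrt(np.var(active[: n // 2], ddof=1) / (n // 2)
                      + np.var(placebo[n // 2:], ddof=1) / (n // 2))

# Crossover analysis: within-subject differences, the between-subject noise cancels
diff = active - placebo
crossover_se = np.std(diff, ddof=1) / np.sqrt(n)

print(f"SE parallel: {parallel_se:.2f}  vs  SE crossover: {crossover_se:.2f}")
```

With the same number of participants, the crossover comparison ends up several times more precise in this setup, which is exactly the efficiency argument the article makes.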

More information on my publications can be found on this website, and an up-to-date list of publications can be found on my Mendeley profile.