What will happen, and when, after an ICH? A summary of the current state of prediction models

Figure 2 from the paper, showing the number of prognostic models that use a certain combination of outcome (rows) and the timing of outcome assessment (columns)

The question seems to be straightforward: “what bad stuff happens, and when, after somebody develops an intracerebral hemorrhage, and how will I know whether that will also happen to me now that I have one?” The answer is, as always, “it depends”. It depends on how you actually specify the question. What does “bad stuff” mean? Which “when” are you interested in? And what are your personal risk factors? We need all this information in order to get an answer from a clinical prediction model.

The thing is, we also need a good working clinical prediction model – that is, it should distinguish those who develop the bad stuff from those who don’t, but it should also make sure that the absolute risks are about right. This new paper (a project carried out by JW) discusses the ins and outs of the current state of affairs in these predictions. Written for neurologists, some of the comments and points that we raise will not be new to methodologists. But as it is not a given that a methodologist will be involved when somebody decides that a new prediction model needs to be developed, we wrote it all up in this review.

The paper, published in Neurological Research and Practice, has a couple of messages:

  • The number of existing prediction models for this disease is already quite big – and the complexity of the models seems to increase over time, without a clear indication that the performance of these models gets better. Many of these models use different definitions of the outcome, as well as of the moment that the outcome is assessed – all leading to wildly different models, which are difficult to compare.
  • The statistical workup is limited: performance is often only measured with a simple AUC, while calibration and net benefit are not reported (a minimal sketch of these three measures follows after this list). Even more worryingly, external validation is not always possible, as the original publications do not provide point estimates.
  • Given the severity of the disease, the so-called “withdrawal of care bias” is an important element when thinking and talking about prognostic scores. This bias, in which those with a bad score do not receive treatment, can lead to a self-fulfilling prophecy in the clinic, which is then captured in the data.
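
To make the point about the statistical workup concrete, here is a minimal sketch (in Python, with simulated data and entirely hypothetical predicted risks – nothing from the reviewed models) of how discrimination, calibration and net benefit can be reported side by side:

```python
# Minimal sketch (simulated data, hypothetical predicted risks): reporting
# discrimination (AUC), calibration and net benefit side by side for a
# hypothetical ICH prognostic model.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 500
p_hat = rng.beta(2, 5, size=n)        # hypothetical predicted risks of a poor outcome
y = rng.binomial(1, p_hat)            # hypothetical observed outcomes

# Discrimination: area under the ROC curve
auc = roc_auc_score(y, p_hat)

# Calibration: slope and intercept of the linear predictor (logit of predicted risk)
logit_p = np.log(p_hat / (1 - p_hat))
slope = sm.GLM(y, sm.add_constant(logit_p), family=sm.families.Binomial()).fit().params[1]
intercept = sm.GLM(y, np.ones(n), family=sm.families.Binomial(),
                   offset=logit_p).fit().params[0]

# Net benefit at a chosen decision threshold (here 20%)
def net_benefit(y, p_hat, threshold):
    treat = p_hat >= threshold
    tp = np.sum(treat & (y == 1)) / len(y)    # true positives per patient
    fp = np.sum(treat & (y == 0)) / len(y)    # false positives per patient
    return tp - fp * threshold / (1 - threshold)

print(f"AUC {auc:.2f}, calibration slope {slope:.2f}, "
      f"intercept {intercept:.2f}, net benefit at 20% {net_benefit(y, p_hat, 0.20):.3f}")
```

None of these steps are computationally demanding, which makes it all the more striking that so many publications stop at the AUC.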

In short – when you think you want to develop a new model, think again. Think long and hard. Identify why the current models are or are not working. Can you improve on them? Do you have the insights and skill set to do so? Really? If you think so, please do, but just don’t add another not-so-useful prediction model to the already saturated literature.

New paper – Improving the trustworthiness, usefulness, and ethics of biomedical research through an innovative and comprehensive institutional initiative

I report often on this blog about new papers that I have co-authored. Every time I highlight something that is special about that particular publication. This time I want to highlight a paper that I co-authored, but also didn’t. Let me explain.

https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3000576#sec014

The paper, with the title Improving the trustworthiness, usefulness, and ethics of biomedical research through an innovative and comprehensive institutional initiative, was published in PLOS Biology and describes the QUEST center. The author list mentions three individual QUEST researchers, but it also has this interesting “on behalf of the QUEST group” author reference. What does that actually mean?

Since I have reshuffled my research, I am officially part of the QUEST team, and therefore I am part of that group. I gave some input on the paper, like many of my colleagues, but nowhere near enough to justify full authorship. That would, after all, require the following 4(!) elements, according to the ICMJE:

  • Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND
  • Drafting the work or revising it critically for important intellectual content; AND
  • Final approval of the version to be published; AND
  • Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

This is what the ICMJE says about large author groups: “Some large multi-author groups designate authorship by a group name, with or without the names of individuals. When submitting a manuscript authored by a group, the corresponding author should specify the group name if one exists, and clearly identify the group members who can take credit and responsibility for the work as authors. The byline of the article identifies who is directly responsible for the manuscript, and MEDLINE lists as authors whichever names appear on the byline. If the byline includes a group name, MEDLINE will list the names of individual group members who are authors or who are collaborators, sometimes called non-author contributors, if there is a note associated with the byline clearly stating that the individual names are elsewhere in the paper and whether those names are authors or collaborators.”

I think that this format should be used more, but that will only happen if people take the collaborator status seriously as well. Other “contribution solutions” can help to give some insight into what it means to be a collaborator, such as a detailed description like in movie credits or a standardized contribution table. We have to start appreciating all forms of contributions.

Cardiac troponin T and severity of cerebral white matter lesions: quantile regression to the rescue

quantile regression of high vs low troponin T and white matter lesion quantile

A new paper, this time venturing into the field of the so-called heart-brain interaction. We often see stroke patients with cardiac problems, and vice versa. And to make it even more complex, there is also a link to dementia! What to make of this? Is it a case of chicken and the egg, or just confounding by a third variable?  How do these diseases influence each other?

This paper tries to get a grip on the matter by zooming in on a marker of cardiac damage, i.e. cardiac troponin T. We looked at this marker in our stroke patients. Logically, stroke patients should not have increased levels of troponin T – and yet, they do. More interestingly, the patients who exhibit high levels of this biomarker also have a high level of structural changes in the brain, so-called cerebral white matter lesions.

But the problem is that patients with high levels of troponin T are different from those who have no marker of cardiac damage. They are older and have more comorbidities, so a classic case for adjustment for confounding, right? But then we realised that both troponin and white matter lesions are heavily skewed. You could log-transform the variables before running a linear regression, but then the interpretation of the results gets a bit complex if you want clear point estimates as answers to your research question.

So we decided to go with quantile regression, which models quantile cut-offs while keeping all the benefits of multivariable regression. The results remain interpretable and we don’t force our data into a distribution where it doesn’t fit. From our paper:

In contrast to linear regression analysis, quantile regression can compare medians rather than means, which makes the results more robust to outliers [21]. This approach also allows to model different quantiles of the dependent variable, e.g. 80th percentile. That way, it is possible to investigate the association between hs-cTnT in relation to both the lower and upper parts of the WML distribution. For this study, we chose to perform a median quantile regression analysis, as well as quantile regression analysis for quintiles of WML (i.e. 20th, 40th, 60th and 80th percentile). Other than that, the regression coefficients indicate the effects of the covariate on the cut-offs of the respective quantiles of the dependent variable, adjusted for potential covariates, just like in any other regression model.
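
For readers who want to try this at home, here is a minimal sketch of what such an analysis can look like in Python with statsmodels’ QuantReg (simulated, skewed data with placeholder variable names – not the actual study data):

```python
# Minimal sketch (simulated, skewed data; variable names are placeholders, not
# the study data): quantile regression of WML burden on log troponin, adjusted
# for age, at several quantiles of the outcome.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
age = rng.normal(70, 10, n)
troponin = np.exp(rng.normal(2.0, 0.8, n) + 0.01 * (age - 70))   # skewed biomarker
wml = np.exp(rng.normal(1.0, 1.0, n) + 0.05 * np.log(troponin))  # skewed outcome

df = pd.DataFrame({"wml": wml, "log_troponin": np.log(troponin), "age": age})

# One model per quantile of interest: median plus the 20th/40th/60th/80th percentiles
for q in (0.2, 0.4, 0.5, 0.6, 0.8):
    fit = smf.quantreg("wml ~ log_troponin + age", df).fit(q=q)
    print(f"q = {q:.1f}: coefficient for log_troponin = {fit.params['log_troponin']:.2f}")
```

Each coefficient can then be read as the change in the respective WML quantile per unit of log troponin, adjusted for age.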

Interestingly, the results show that the association between high troponin T and white matter lesions is strongest in the higher quantiles. If you want to stretch this into a causal statement, it would mean that high troponin T has a more pronounced effect on white matter lesions in stroke patients who are already at the high end of the white matter lesion distribution.

But we shouldn’t stretch it that far. This is a relatively simple study, and the clinical relevance of our insights still needs to be established. For example, our unadjusted results might indicate that the association in itself is strong enough to help predict post-stroke cognitive decline. The adjusted numbers are less pronounced, but still, they might be enough to help prediction models.

The paper, led by RvR, is now published in the Journal of Neurology and can be found here, as well as on my Mendeley profile.

von Rennenberg R, Siegerink B, Ganeshan R, Villringer K, Doehner W, Audebert HJ, Endres M, Nolte CH, Scheitz JF. High-sensitivity cardiac troponin T and severity of cerebral white matter lesions in patients with acute ischemic stroke. J Neurol. 2018.

Advancing prehospital care of stroke patients in Berlin: a new study to see the impact of STEMO on functional outcome

There are strange ambulances driving around in Berlin. They are the so-called STEMO cars, or Stroke Einsatz Mobile: basically driving stroke units. They can perform a CT scan to rule out bleeds and subsequently start thrombolysis before getting to the hospital. A previous study showed that this decreases time to treatment by ~25 minutes. The question now is whether the patients are indeed better off in terms of functional outcome. For that we are currently running the B_PROUD study, of which we recently published the design here.

How to set up a research group

A couple of weeks ago I wrote down some thoughts I had while writing a paper for the JTH series on Early Career Researchers. I was asked to write about how one sets up a research group, and the four points I described in my previous post can be recognised in the final paper.

But I also added some reading tips in the paper. Reading on a particular topic helps me not only to learn what is written in the books, but also to get into a certain mindset. So, when I knew that I was going to take over a research group in Berlin, I read a couple of books, both fiction and non-fiction. Some were about Berlin (e.g. Cees Nooteboom’s Berlijn 1989/2009), some were focussed on academic life (e.g. Porterhouse Blue). They helped to get my mind in a certain gear and to prepare for what was coming. In that sense, my bookcase says a lot about me.

The number one on the list of recommended reads is the standard management best-sellers, as I wrote in the text box:

// Management books There are many titles that I can mention here; whether it is the best-seller Seven Habits of Highly Effective People or any of the smaller booklets by Ken Blanchard, I am convinced that reading some of these texts can help you in your own development as a group leader. Perhaps you will like some of the techniques and approaches that are proposed and decide to adopt them. Or, like me, you may initially find yourself irritated because you cannot envision the approaches working in the academic setting. If this happens, I encourage you to keep reading because even in these cases, I learned something about how academia works and what my role as a group leader could be through this process of reflection. My absolute top recommendation in this category is Leadership and Self-Deception: a text that initially got on my nerves but in the end taught me a lot.

I really think that is true. You should not only read books that you agree with, or whose story you enjoy. Sometimes you can like a book not for its content but for the way it makes you question your own preexisting beliefs and habits. But it is true that this sometimes makes it difficult to actually finish such a book.

Next to books, I am quite into podcasts, so I also wrote:

// Start up. Not a book, but a podcast from Gimlet media about “what it’s really like to get a business off the ground.” It is mostly about tech start-ups, but the issues that arise when setting up a business are in many ways similar to those you encounter when you are starting up a research group. I especially enjoyed seasons 1 and 3.

I thought about including the sponsored podcast “Open for Business” from Gimlet Creative, as it touches upon some very relevant aspects of starting something new. But for me the jury is still out on the “sponsored podcast” concept – it is branded content from Amazon, and I am not sure to what extent I like that. For now, I do not like it enough to include it in the list in my JTH paper.

The paper is not online yet due to the summer break, but I will provide a link asap.

– update 11.10.2016 – here is a link to the paper. 

Pregnancy loss and risk of ischaemic stroke and myocardial infarction

Together with colleagues I worked on a paper on the relationship between pregnancy, its complications, and stroke and myocardial infarction in young women, which just appeared online on the BJH website.

The article, which analyses data from the RATIO study, concludes that only women who have had multiple pregnancy losses have an increased risk of stroke (OR 2.4) compared to those who never experienced a pregnancy loss. The work was mainly done by AM, and is a good example of an international collaboration in which we benefitted from the expertise of all team members.

The article, with the full title “Pregnancy loss and risk of ischaemic stroke and myocardial infarction” can be found via PubMed, or via my personal Mendeley page.

Causal Inference in Law: An Epidemiological Perspective

source:ejrr

Finally, it is here. The article I wrote together with WdH, MZ and RM was published in the European Journal of Risk and Regulation last week. And boy, did it take time! This whole project, an interdisciplinary effort in which epidemiological thinking was applied to questions of causal inference in tort law, took more than 3 years – with only a couple of months of writing… the rest was waiting and waiting and waiting and some peer review. But more on this later.

First some content. In the article we discuss the idea of proportional liability, which adheres to the epidemiological concept of multi-causality. But the article is more: as this is a journal for non-epidemiologists, we also provide a short and condensed overview of study design, bias and other epidemiological concepts such as counterfactual thinking. You might have recognised the theme from my visits to the Leiden Law School for some workshops. The EJRR editorial describes it as: “(…) discuss the problem of causal inference in law, by providing an epidemiological viewpoint. More specifically, by scrutinizing the concept of the so-called “proportional liability”, which embraces the epidemiological notion of multi-causality, they demonstrate how the former can be made more proportional to a defendant’s relative contribution in the known causal mechanism underlying a particular damage.”

Getting this thing published was tough: the quality of the peer review was low (dare I say zero?), communication was difficult, the submission system was flawed, etc. But most of all, the editorial office was slow – the first submission was in June 2013! This could be a non-medical-journal thing, I do not know, but still: almost three years. And all this for an invited article that was planned to be part of a special edition on the link between epi and law, which never came. Due to several delays (surprise!) of the other articles for this edition, it was decided that our article would not wait for the special edition any longer. Therefore, our cool little insight into epidemiology now seems to be lost between all those legal and risk regulation articles. A shame if you ask me, but I am glad that we are not waiting any longer!

Although I do love interdisciplinary projects, and I think the result is a nice one, I do not want to go through this process again. No more EJRR for me.

Oh, one more thing… the article is behind a paywall and I do not have access through my university, nor did the editorial office provide me with a link to a PDF of the final version. So, to be honest, I don’t have the final article myself! Feels weird. I hope EJRR will provide me with a PDF quite soon. In the meantime, anybody with access to this article, please feel free to send me a copy!

First results from the RATIO follow up study

Another article got published today in JAMA Internal Medicine, this time the results from the first analyses of the RATIO follow-up data. For these data, we linked the RATIO study to the Dutch national bureau of statistics (CBS) to obtain 20 years of follow-up on cardiovascular morbidity and mortality. We first submitted a full paper, but later we downsized to a research letter with only 600 words. This means that only the main message (i.e. cardiovascular recurrence is high, persistent over time and disease specific) is left.

It is a “Leiden publication”, in which I worked together with AM and FP from Milano. Most of the credit of course goes to AM, who is the first author of this piece. The cool thing about this publication is that the team worked very hard on it for a long time (the data linking and analyses were not an easy thing to do, nor was going from 3000 words to 600 in just a week or so), and in the end all the hard work paid off. But next to the hard work, it is also nice to see the results being picked up by the media. JAMA Internal Medicine put out an international press release, whereas the LUMC is going to publish its own Dutch version. In the days before the ‘online first’ publication I already answered some emails from writers for medical news sites, some with up to 5.000K views per month. I do not know if you think that’s a lot, but for me it is. The websites that cover this story can be found here (dagensmedisin.se, healio.com, medicaldaily.com, medpagetoday.com, medonline.at, drugs.com, healthday.com, webmd.com, usnews.com, doctorslounge.com, medicalxpress.com, medicalnewstoday.com, eurekalert.org, and perhaps more to come – why not just take a look at the Altmetric of this article?).

– edit 26.11.2015: a Dutch press release from the LUMC can be found here. – edit: oops, medpagetoday.com has published a great report/interview, but used a wrong title… “Repeat MI and Stroke Risks Defined in ‘Younger’ Women on Oral Contraceptives”. Not all women were on OC, of course.

Of course, @JAMAInternalMed tweeted about it

The article, with the full title Recurrence and Mortality in Young Women With Myocardial Infarction or Ischemic Stroke: Long-term Follow-up of the Risk of Arterial Thrombosis in Relation to Oral Contraceptives (RATIO) Study can be found via JAMA Internal Medicine or via my personal Mendeley page.

As I reported earlier, this project is supported by a grant from the LUF den Dulk-Moermans foundation, for which we are grateful.

New article: Lipoprotein (a) as a risk factor for ischemic stroke: a meta-analysis

source: atherosclerosis-journal.com

Together with several co-authors, with first author AN in the lead, we did a meta-analysis on the role of Lp(a) as a risk factor for stroke. Bottom line: Lp(a) seems to be a risk factor for stroke, most prominently in the young.

The results are not the only reason why I am so enthusiastic about this article. It is also about the epidemiological problem that AN encountered and we ended up discussing over coffee. The problem: the different studies use different categorisations (tertiles, quartiles, quintiles). How to use these data and pool them in a way that gives a valid and precise answer to the research question? In the end we used the technique proposed by Danesh et al. (JAMA. 1998;279(18):1477-1482), which uses the normal distribution and the distances between category means expressed in SD units. A neat technique, even though it assumes a couple of things about the uniformity of the effect over the range of the exposure. An IPD meta-analysis would be better, as we would be free to investigate the dose-response relationship and we would be able to keep the adjustment for confounding uniform, but hey… this is cool in itself!
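
For illustration, a small sketch of that rescaling step (my own reconstruction of the general idea under a normality assumption, not the code used in the paper):

```python
# Minimal sketch of the rescaling idea (my reconstruction, not the code used in
# the paper): turn an odds ratio comparing the top vs bottom category of an
# exposure into an odds ratio per 1 SD, assuming the exposure is normally
# distributed and the effect is log-linear.
import numpy as np
from scipy.stats import norm

def sd_distance(n_categories):
    """Distance in SD units between the means of the top and bottom category
    of a standard normal exposure cut into equal-sized categories."""
    p = 1.0 / n_categories
    top_mean = norm.pdf(norm.ppf(1 - p)) / p   # mean of the top category
    return 2 * top_mean                        # bottom category mean is -top_mean

def or_per_sd(or_extreme, n_categories):
    return float(np.exp(np.log(or_extreme) / sd_distance(n_categories)))

# Example: an OR of 2.0 for top vs bottom tertile, quartile and quintile
for k in (3, 4, 5):
    print(f"{k} categories: distance {sd_distance(k):.2f} SD, OR per SD {or_per_sd(2.0, k):.2f}")
```

For tertiles, quartiles and quintiles the distances work out to roughly 2.2, 2.5 and 2.8 SD, which is what makes the differently categorised studies comparable on one scale.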

The article can be found on PubMed and on my Mendeley profile.

New articles published: hypercoagulability and the risk of ischaemic stroke and myocardial infarction

Ischaemic stroke + myocardial infarction = arterial thrombosis. Are these two diseases just two sides of the same coin? Well, most of the research I did in the last couple of years tells a different story: most of the time, hypercoagulability has a stronger impact on the risk of ischaemic stroke than on that of myocardial infarction. And in the cases where this was not so, it was at least clear that the impact was differential. But the papers I published were all single data points, so we needed to provide an overview of all these data points to get the whole picture. And we did so by publishing two papers, one in the JTH and one in PLOS ONE.

The first paper is a general discussion of the results from the RATIO study, basically an adaptation of the discussion chapter of my thesis (yes, it took some time to get to the point of publication, but that’s a whole different story), with a more in-depth discussion of the extent to which we can draw conclusions from these data. We tried to address the caveats of the first study (limited number of markers, only young women, only case-control, basically a single study) with our second publication. Here we did the same trick, but in a systematic review. This way, our results have more external validity, while we ensured the internal validity by only including studies that studied both diseases, thus ruling out large biases due to differences in study design. I love these two publications!

You can find these publications through their PMIDs, 26178535 and 26178535, or via my Mendeley account.

PS: the JTH paper has PAFs in it. Cool!

New article published: the relationship between ADAMTS13 and MI

Screenshot of the paper: Plasma ADAMTS13 levels and the risk of myocardial infarction: an individual patient data meta-analysis

This article is a collaboration with a lot of colleagues. Initiated by the Milan group, we ended up with quite a diverse group of researchers to answer this question because of the method we used: the individual patient data (IPD) meta-analysis. The best thing about this approach: you can pool the data from different studies while adjusting for potential sources of confounding in the same manner in each study (given that the data are available, that is). On their own, these studies showed mixed results. But in the end, we were able to use the combined data to show that there was an increased MI risk, but only for those with very low levels of ADAMTS13. So, here you see the power of IPD meta-analysis!
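
To give a flavour of what a one-stage IPD analysis can look like, here is a minimal sketch with simulated data and placeholder variable names (my own illustration, not the actual ADAMTS13 analysis, which pooled the studies in its own way):

```python
# Minimal sketch of a one-stage IPD meta-analysis (simulated data, placeholder
# variable names; my own illustration, not the actual ADAMTS13 analysis): pool
# individual records from several studies and adjust for the same confounder in
# one logistic model, with a fixed effect for study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
frames = []
for study in range(4):                               # four hypothetical case-control studies
    n = 300
    adamts13_low = rng.binomial(1, 0.2, n)           # 1 = lowest ADAMTS13 level group
    age = rng.normal(60 + 2 * study, 8, n)
    lin_pred = -2 + 0.6 * adamts13_low + 0.03 * (age - 60) + 0.2 * study
    mi = rng.binomial(1, 1 / (1 + np.exp(-lin_pred)))
    frames.append(pd.DataFrame({"mi": mi, "adamts13_low": adamts13_low,
                                "age": age, "study": study}))

ipd = pd.concat(frames, ignore_index=True)
fit = smf.logit("mi ~ adamts13_low + age + C(study)", data=ipd).fit(disp=0)
print(f"Pooled, adjusted OR for low ADAMTS13: {np.exp(fit.params['adamts13_low']):.2f}")
```

The study indicator keeps between-study differences out of the pooled estimate, while the adjustment set stays identical across studies.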

The credit for this work goes primarily to AM, who did a great job of getting all PIs on board, analysing the data and writing a good manuscript. The final version is not yet online, but you can find the pre-publication on PubMed.

New article: the intrinsic coagulation proteins and the risk of arterial thrombosis

I got good news today! A manuscript on the role of the intrinsic coagulation factors in the causal mechanisms leading to myocardial infarction and ischaemic stroke has been accepted for publication by the JTH. It took some time, but in the end I’m very glad that this paper was published in the JTH, because its readership is both clinical and biomedical: just the place where I feel most at home.

The basic message? These factors do contribute to the risk of ischaemic stroke, but not to the risk of myocardial infarction. This is mostly the case for coagulation factor XI, which is a nice finding, because it could be a new target for antithrombotic therapies.

The article is now in press and will be made available soon. In the meantime, you can refer to my thesis, in which this research is also described.

New publication: LTTE in the American Journal of Epidemiology

At the department of Clinical Epidemiology of the LUMC we have a continuous course/journal club in which we read epi literature and books in a nice little group. The group, called Capita Selecta, has a nice website which can be found here. Some time ago we read an article that proposed to include dormant Mendelian randomisation studies in RCTs, to figure out the causal pathways of a treatment for chronic diseases. This could be most helpful when there is a discrepancy between the expected effect and the observed effect. During the discussion of this article we did not agree with the authors for several reasons. We – AGCB, IP and myself – decided to write an LTTE with these points. The journal was nice enough to publish our concerns, together with a response by the authors of the original article. The PDF can be found via the links below, which will take you to the website of the American Journal of Epidemiology. The PDF of our LTTE can also be found on my Mendeley profile.

original article
letter to the editor
response by the author

New article published: review on obesity and venous thrombosis

Together with colleagues I worked on a review of the role of obesity as a risk factor for venous thrombosis. I’m second author on the article, which came online last week; most of the work has been done by SKB from Norway, who is visiting our department for a full year.

The article is written from an epidemiological point of view and discusses several points that are worth mentioning here. First of all, obesity is an ill-defined concept: are we only talking BMI, or do other measures of obesity also need to be taken into account? Second, even when defined, the results are not always easy to interpret. In causal research there are a couple of conditions that need to be fulfilled before one can answer the question whether something is a risk factor for disease. For example, BMI can be reduced by means of exercise, diet or disease, all three of which have completely different effects on thrombosis risk. We discuss all these epidemiological problems, together with the existing body of evidence, in the new article in Seminars in Thrombosis and Hemostasis. These questions are not only important for our understanding of thrombotic disease, but also for grasping the causal role of obesity in (cardiovascular) disease. This research question has in the past couple of years been put on the research agenda of the NEO study, on which perhaps more in the future.

The article, with the full title “Role of Obesity in the Etiology of Deep Vein Thrombosis and Pulmonary Embolism: Current Epidemiological Insights” can be found via PubMed, or via my personal Mendeley page.

Ben Goldacre’s ‘Bad Pharma’ and research from the LUMC

Ben Goldacre, known from the bestseller Bad Science (book and blog), has a new book, Bad Pharma. Goldacre is always fun to read: science, both the method and the social phenomenon, explained for non-scientists while still interesting for scientists. The same goes for his new title Bad Pharma, in which he explains what is right and wrong in the field of clinical trials, which are needed to determine which treatment is best. Before I review the complete book, perhaps this TED talk will explain it all:

Basically, his point is that for good answers to questions about which treatment is best to save lives, it is pivotal that the results of all trials are published. This sounds like old news, since there are databases in which trials should be registered. However, only registering the existence of a trial is not enough: all data should become known to the public. This sounds familiar: this standpoint is of course the same as that of the AllTrials.net petition, which was initiated by, among others, Ben Goldacre. For more on AllTrials.net, please see a previous post.

While reading Goldacre’s book I came across research done in the Netherlands, where 250 students looked into adverts for medication: they checked the quality (was the science OK?) and correct use (does the trial support the claim?) of the cited trials in major journals and found that half were of good quality and only half supported the claim. And the nice thing about this research? It was executed at our department as part of one of our undergraduate courses! All students scored trials and a couple of students were also engaged in the analysis/writing/submission process. The paper from this research, cited by Goldacre, is available from the website of the Netherlands Journal of Medicine (pdf, open access). An earlier paper with the same concept but focussed on rheumatoid arthritis medication has also been published, also open access (pdf).