Migraine and venous thrombosis: Another important piece of the puzzle

Asking the right question is arguably the hardest thing to do in science, or at least in epidemiology. The question that you want to answer dictates the study design, the data that you collect, and the type of analyses you are going to use. Often, especially in causal research, this means scrutinizing how you frame your exposure/outcome relationship. After all, positivity and consistency need to hold, and you can only ensure that through “the right research question”. Of note, the third assumption for causal inference, i.e. exchangeability (conditional or not), is something you pursue through study design and analyses. But there is a third part of an epidemiological research question that makes all the difference: the domain of the study, as is so elegantly displayed by the cartoon of Today’s Random Medical News or the Twitter hashtag “#inmice“.

The domain is the type of individuals to whom the answer is relevant. Often, the domain has a one-to-one relationship with the study population, but this is not always the case: sometimes the domain is broader than the study population at hand. A strong example: you could use young male infants to get a good estimate of the distribution of genotypes in a case-control study of venous thrombosis in middle-aged women. I am not saying that such a case-control study has the best design, but there is a case to be made, especially if we can safely assume that the genotype distribution does not depend on the sex chromosomes and has not shifted across generations.

The domain of the study is not only important if you want to know to whom the results of your study are actually relevant, but also if you want to compare the results of different studies. (As a side note, keep in mind the absolute risks of the outcome that come with the different domains: they strongly affect how you should interpret the relative risks. A relative risk of 2 means something quite different when the baseline risk is 1 in 10,000 than when it is 1 in 10.)

Sometimes, studies look like they fully contradict each other. One study says yes, the other says no. What to conclude? Who knows! But are you sure both studies actually answer the same question? Comparing the way the exposure and the outcome are measured in the two studies is one thing – an important thing at that – but it is not the only thing. You should also take potential differences and similarities between the domains of the studies into account.

This brings us to the paper by KA and myself that was just published in the latest volume of RPTH. It is a commentary written after we reviewed a paper by Folsom et al., who did a very thorough job of analyzing the relationship between migraine and venous thrombosis in the elderly. They convincingly show that there is no relationship, in apparent contrast to previous papers. So we asked ourselves: “Why did the study by Folsom et al. report findings in apparent contrast to previous studies?”

There is, of course, the possibility of just chance. But beyond this, we should consider that the analyses by Folsom et al. look at the long-term risk in an older population. The other papers looked at a shorter term, and in a younger population in which migraine is most relevant, as migraine often goes away with increasing age. KA and I argue that both studies might just be right, even though they are in apparent contradiction. Why should it not be possible that there is a transient increase in thrombosis risk when migraines are most frequent and severe, and no long-term increase in risk in the elderly, an age at which most migraineurs report less frequent and less severe attacks?

The lesson of today: do not look only at the exposure or the outcome when you want to bring the evidence of two or more studies together into one coherent theory. Look at the domain as well, or you might just dismiss an important piece of the puzzle.


Finding consensus in Maastricht

(Image source: https://twitter.com/hspronk)

Last week, I attended and spoke at the Maastricht Consensus Conference on Thrombosis (MCCT). This is not your standard, run-of-the-mill conference where people share their most recent research. The MCCT is different and focuses on the larger picture by giving faculty the (plenary) stage to share their thoughts on opportunities and challenges in the field. Then, with the help of a team of PhD students, these thoughts are further discussed in a break-out session. Everything was wrapped up with a plenary session summarizing the workshop discussions. Interesting format, right?

It was my first MCCT, and beforehand I had difficulty envisioning how exactly this format would work out. Now that I have experienced it all, I can tell you that it really depends on the speaker and the people attending the workshops. When it comes to the 20-minute introductions by the faculty, I think that just an overview of the current state of the art is not enough. The best presentations were all about the bigger picture and had either an open question, a controversial statement, or some form of “crystal ball” vision of the future. It really is difficult to “find consensus” when there is no controversy, as was the case in some plenary talks. Given the break-out nature of the workshops, my observations are limited in number. But from what I saw, some controversy (even if constructed just for the workshop) really did foster discussion amongst the workshop participants.

Two specific activities stand out for me. The first is the lecture and workshop on the post-PE syndrome and how we should be able to monitor the functional outcome of PE. Given my recent plea in RPTH for more ordinal analyses in the field of thrombosis and hemostasis – learning from stroke research with its mRS – we not only had a great academic discussion, but also immediately made plans for a couple of projects where we could actually implement this. The second activity I really enjoyed was my own workshop, where I not only gave a general introduction to stroke (prehospital treatment and triage, clinical and etiological heterogeneity, etc.) but also focused on the role of FXI and NETs. We discussed the role of DNase as a potential co-treatment with tPA in the acute setting (talking about “crystal ball” type discussions!). Slides from my lecture can be found here (PDF). An honorable mention has to go out to the PhD students P and V, who did a great job supporting me during the preparation of the lecture and workshop. Their smart questions and shared insights really shaped my contribution.

Now, I said it was not always easy to find consensus – which also means it is not impossible. In fact, I am sure that the themes that were discussed all boil down to a couple of opportunities and challenges. A first step was made by HtC and HS from the MCCT leadership team in the closing session on Friday, which will prove to be a great springboard for the consensus paper that will help set the stage for future research in our field of arterial thrombosis.

Messy epidemiology: the tale of transient global amnesia and three control groups

Clinical epidemiology is sometimes messy. The methods and data that you want to use may not be available or may just be too damn expensive. Does that mean that you should throw in the towel? I do not think so.

I am currently working in a more clinically oriented setting, as the only researcher trained as a clinical epidemiologist. I could tell you about being misunderstood and feeling lonely as the only one who has seen the light, but that would just be lying. The fact is that my position is one of privilege and opportunity, as I work together with many different groups on a wide variety of research questions that have the potential to influence clinical reality directly and bring small but meaningful progress to the field.

Sometimes that work is messy: not the right methods, a difference in interpretation, a p-value in table 1… you get the idea. But sometimes something pretty comes out of that mess. That is what happened with this paper, which was just published online (e-pub) in the European Journal of Neurology. The general topic is the heart-brain interaction, and more specifically to what extent damage to the heart actually plays a role in transient global amnesia. The idea that there might be a link comes from some previous case series, as well as the clinical experience of some of my colleagues. The next step would of course be a formal case-control study, and if you want to estimate true rate ratios, a lot of effort has to go into the collection of data from a population-based control group. We had neither the time nor the money to do so, and upon closer inspection, we also did not really need that clean control group to answer some of the questions that would move the field forward.

So instead, we chose three different control groups, perhaps better referred to as reference groups, all three with some neurological disease. Yes, there are selections at play for each of these groups, but we could argue that those selections might hold for all groups. If these selection processes are similar for all groups, strong differences in patient characteristics or biomarkers suggest that other biological systems are at play. The trick is not to hide these limitations but, like a practiced judoka, to leverage these weaknesses and turn them into strengths. Be open about what you did and show the results, so that others can build on that experience.

So that is what we did. Compared with patients with migraine with aura, vestibular neuritis, or transient ischemic attack, patients with transient global amnesia are more likely to exhibit signs of myocardial stress. This study was not designed – nor will it ever be able – to uncover the cause of this link, nor do we pretend that our odds ratios are in fact estimates of rate ratios or something fancy like that. Still, even though many aspects of this study are not “by the book”, it did provide some new insights that help further thinking about, and investigation of, this debilitating and impactful disease.

The effort was led by EH, and the final paper can be found here on PubMed.

Cardiac troponin T and severity of cerebral white matter lesions: quantile regression to the rescue

Figure: quantile regression of high vs. low troponin T across white matter lesion quantiles.

A new paper, this time venturing into the field of the so-called heart-brain interaction. We often see stroke patients with cardiac problems, and vice versa. And to make it even more complex, there is also a link to dementia! What to make of this? Is it a case of the chicken and the egg, or just confounding by a third variable? How do these diseases influence each other?

This paper tries to get a grip on this matter by zooming in on a marker of cardiac damage, i.e. cardiac troponin T. We looked at this marker in our stroke patients. Logically, stroke patients should not have increased levels of troponin T; yet they do. More interestingly, the patients who exhibit high levels of this biomarker also have high levels of structural changes in the brain, so-called cerebral white matter lesions.

But the problem is that patients with high levels of troponin T are different from those without any marker of cardiac damage. They are older and have more comorbidities, so a classic case for adjustment for confounding, right? But then we realized that both troponin and white matter lesions are right-skewed data. You could log-transform the variables before running a linear regression, but then the interpretation of the results gets a bit complex if you want clear point estimates as answers to your research question.

So we decided to go with quantile regression, which models quantile cut-offs while keeping all the benefits of multivariable regression. The results remain interpretable, and we do not force our data into a distribution it does not fit. From our paper:

In contrast to linear regression analysis, quantile regression can compare medians rather than means, which makes the results more robust to outliers [21]. This approach also allows to model different quantiles of the dependent variable, e.g. 80th percentile. That way, it is possible to investigate the association between hs-cTnT in relation to both the lower and upper parts of the WML distribution. For this study, we chose to perform a median quantile regression analysis, as well as quantile regression analysis for quintiles of WML (i.e. 20th, 40th, 60th and 80th percentile). Other than that, the regression coefficients indicate the effects of the covariate on the cut-offs of the respective quantiles of the dependent variable, adjusted for potential covariates, just like in any other regression model.
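
For readers who want to try this themselves, below is a minimal sketch of how such an analysis could look in Python with statsmodels’ quantile regression, fitted on simulated data. The variable names (tnt, wml, age) and all data-generating numbers are invented for illustration and do not come from our paper.

```python
# Minimal sketch: quantile regression across several quantiles of the outcome.
# Simulated data; variable names and effect sizes are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
age = rng.normal(70, 8, n)                       # hypothetical patient age
tnt = rng.lognormal(mean=1.0, sigma=0.5, size=n)  # right-skewed cardiac marker
# Right-skewed outcome whose upper tail depends more strongly on the marker:
wml = rng.lognormal(mean=0.5 + 0.02 * age + 0.1 * np.log(tnt), sigma=0.6)

df = pd.DataFrame({"tnt": tnt, "wml": wml, "age": age})

# Fit one adjusted model per quantile of the outcome distribution.
for q in [0.2, 0.4, 0.5, 0.6, 0.8]:
    fit = smf.quantreg("wml ~ tnt + age", df).fit(q=q)
    print(f"q={q:.1f}  beta_tnt={fit.params['tnt']:.3f}")
```

Each fit yields a full set of adjusted coefficients for that quantile, so you can see directly whether the association grows toward the upper end of the outcome distribution.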

Interestingly, the results show that the association between high troponin T and white matter lesions is strongest in the higher quantiles. If you were to stretch this into a causal statement, it would mean that high troponin T has a more pronounced effect on white matter lesions in stroke patients who are already at the high end of the white matter lesion distribution.

But we shouldn’t stretch it that far. This is a relatively simple study, and the clinical relevance of our insights still needs to be established. For example, our unadjusted results indicate that the association in itself might be strong enough to help predict post-stroke cognitive decline. The adjusted numbers are less pronounced, but still, they might be enough to help prediction models.

The paper, led by RvR, is now published in the Journal of Neurology and can be found here, as well as on my Mendeley profile.

von Rennenberg R, Siegerink B, Ganeshan R, Villringer K, Doehner W, Audebert HJ, Endres M, Nolte CH, Scheitz JF. High-sensitivity cardiac troponin T and severity of cerebral white matter lesions in patients with acute ischemic stroke. J Neurol. 2018.

Impact of your results: Beyond the relative risk

I wrote about this in an earlier post: JLR and I published a paper in which we explain that a single relative risk, irrespective of its form, is just not enough. Some crucial elements go missing in this dimensionless ratio. The RR lets us forget about the size of the denominator, the clinical context, and the crude binary nature of the outcome. So we have provided some methods and ways of thinking to go beyond the RR in a tutorial published in RPTH (now in early view). The content and message are nothing new for those trained in clinical research (one would hope), and even those without formal training will have heard most of the concepts in a talk or poster. But with all these concepts in one place, with an explanation of why they provide a bit more insight than the RR alone, we hope to trigger young (and older) researchers to think about whether one of these measures would be useful – not for themselves, but for the readers of their papers. The paper is open access (CC BY-NC-ND 4.0) and can be downloaded from the website of RPTH, or from my Mendeley profile.
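
To make the point concrete, here is a small sketch (with made-up risks, not numbers from the tutorial) of how the same relative risk can hide very different absolute effects:

```python
# Made-up numbers, not from the tutorial: the same RR, very different impact.
def summarize(risk_exposed: float, risk_unexposed: float) -> None:
    rr = risk_exposed / risk_unexposed    # relative risk (dimensionless)
    rd = risk_exposed - risk_unexposed    # risk difference (absolute scale)
    nnh = 1 / rd                          # number needed to harm
    print(f"RR = {rr:.1f}, RD = {rd:.3f}, NNH = {nnh:.0f}")

summarize(0.002, 0.001)  # RR = 2.0, RD = 0.001 -> 1 extra case per 1000
summarize(0.200, 0.100)  # RR = 2.0, RD = 0.100 -> 100 extra cases per 1000
```

The relative risk is identical in both scenarios; only the absolute measures reveal the difference in impact.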

new paper: pulmonary dysfunction and CVD outcome in the ELSA study

This is a special paper to me, as it is 100% the product of my team at the CSB. Well, 100%? Not really. This is the first paper in a series of projects where we work with open data, i.e. data collected by others who subsequently shared it. A lot of people talk about open data and how all the data created should be made available to other researchers, but not a lot of people talk about using that kind of data. For that reason, we picked a couple of data resources to see how easy it is to work with data that was not initially collected by ourselves.

It is hard, as we have now learned. Even though the studies we focused on (the ELSA study and UK Understanding Society) have a good description of their data and methods, understanding this takes time and effort. And even after putting in all that time and effort, you might still not know all the little details and idiosyncrasies in the data.

A nice example lies in the exposure that we used in these analyses, pulmonary dysfunction. The data for this exposure were captured in several different datasets, in different variables. Reverse engineering a logical and interpretable concept out of these data points was not easy. This is perhaps also true for data that you collect yourself, but then at least this thinking is done, more or less, before data collection starts, and no reverse engineering is needed.
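
To give a flavour of what that reverse engineering looks like in practice, here is a hypothetical sketch in pandas. All variable names, values, and cut-offs are invented; the actual ELSA variables and our definition of pulmonary dysfunction are described in the paper.

```python
# Hypothetical sketch of harmonizing one exposure across waves/datasets.
# All variable names and cut-offs are invented; the real ELSA coding differs.
import pandas as pd

# Each wave stores lung function in its own variables and coding.
wave1 = pd.DataFrame({"id": [1, 2, 3], "fev1_pct": [95, 72, 88], "copd_dx": [0, 1, 0]})
wave2 = pd.DataFrame({"id": [2, 3, 4], "lungfunc": [70, 90, 60], "resp_diag": ["copd", "none", "asthma"]})

# Map each wave's idiosyncratic coding onto one shared definition.
wave1["pulm_dysfunction"] = (wave1["fev1_pct"] < 80) | (wave1["copd_dx"] == 1)
wave2["pulm_dysfunction"] = (wave2["lungfunc"] < 80) | (wave2["resp_diag"] == "copd")

# Stack waves; keep the most recent assessment per participant (exposure updating).
exposure = (
    pd.concat([wave1[["id", "pulm_dysfunction"]], wave2[["id", "pulm_dysfunction"]]])
    .drop_duplicates("id", keep="last")
)
print(exposure)
```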

So we learned a lot. Not only about the role of pulmonary dysfunction as a cause of CVD (hint: it is limited), about the different sensitivity analyses that we used to check the influence of missing data on the conclusions of our main analyses (hint: limited again), and about the need to update an exposure that progresses over time (hint: relevant), but also about what it is like to use data collected by others (hint: useful, but not easy).

The paper, titled “Pulmonary dysfunction and development of different cardiovascular outcomes in the general population”, with IP as the first author, can be found here on PubMed or via my Mendeley profile.

New Masterclass: “Papers and Books”

“Navigating numbers” is a series of Masterclasses initiated by a team of Charité researchers who think that our students should become more familiar with how numbers shape the field of medicine, i.e. both medical practice and medical research. And I get to organize the next in line.

I am very excited to organize the next Masterclass together with J.O., a bright researcher with a focus on health economics. As the full title of the Masterclass is “Papers and Books – series 1 – intended effect of treatments”, some health economics knowledge is a must in this journal-club-style series of meetings.

But what exactly will we do? This Masterclass will focus on reading some papers as well as a book (very surprising), all with a focus on study design and how to do proper research into the “intended effect of treatment”. I borrowed this term from one of my former epidemiology teachers, Jan Vandenbroucke, as it helps to denote a part of the field of medical research with its own idiosyncrasies, without being limited to one study design.

The Masterclass runs for only 8 meetings, not nearly enough for the students to understand all the ins and outs of proper study design. But that is also not the goal: we want to show the participants how one should go about it when the ultimate question in medicine is asked: “should we treat or not?”

If you want to participate, please check out our flyer.