Retracting our own paper

Over the last couple of weeks I wrote a series of emails I never thought I would need to write: I gave the final okay on the wording of a retraction notice for one of the papers I worked on during my time in Berlin. Let me provide some more insight than a regular retraction notice does.

Let’s start with the paper that we needed to retract. It is a paper in which we investigate the so-called smoking paradox – the idea that those who smoke might benefit more from thrombolysis treatment for stroke. Because of the presumed mechanisms, as well as the direct method of treatment delivery, intra-arterial (IA) thrombolysis is of particular interest here. The paper, “The smoking paradox in ischemic stroke patients treated with intra-arterial thrombolysis in combination with mechanical thrombectomy–VISTA-Endovascular”, looked at this presumed relation, but we were not able to find evidence in support of the hypothesis.

But why then the retraction? To study this phenomenon, we needed data rich with people who were treated with IA thrombolysis and solid data on smoking behavior. We found this combination in a dataset from the VISTA collaboration. VISTA was founded to collect useful data from several sources and combine them in a way that strengthens international stroke research where possible. But something went wrong: the variables we used did not actually represent what we thought they did. This was due to a combination of limited documentation, sub-optimal data management, and so on. In short, a mistake by the people who managed the data led us to analyze faulty data. The data managers identified the mistake and contacted us. Together we looked at whether we could actually fix the error (i.e. prepare a correction to the paper), but the number of people who had the treatment of interest in the corrected dataset was just too low to analyze the data and get a somewhat reliable answer to our research question.

So, a retraction was indicated. The co-authors, VISTA, as well as the people on the ethics team at PLOS were all quite professional and looked for the most suitable way to handle this situation. This is not a quick process, by the way – from the moment we first identified the mistake, it took ~10 weeks to get the retraction published. This is because we first wanted to make sure that retraction was the right step and get all the technical details regarding the issue; then we had to inform our co-authors and get their formal OK on the request for retraction; then we got in touch with the PLOS ethics team; then we had two rounds of formal OKs on the final retraction text; and only then did the retraction notice go into production. The final product is only the following couple of sentences:

After this article [1] was published, the authors became aware of a dataset error that renders the article’s conclusions invalid.

Specifically, due to data labelling and missing information issues, the ‘IAT’ data reflect intra-arterial (IA) treatment rather than the more restricted treatment type of IA-thrombolysis. Further investigation of the dataset revealed that only 24 individuals in the study population received IA-thrombolysis, instead of N = 216 as was reported in [1]. Hence, the article’s main conclusion is not valid or reliable as it is based on the wrong data.

Furthermore, due to the small size of the IA-thrombolysis-positive group, the dataset is not sufficiently powered to address the research question.

In light of the above concerns, the authors retract this article.

All authors agree with retraction.

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0279276

Do you know what is weird? You know you are doing the right thing, but still… it feels as if it is not the sciency thing to do. I now have to recognize that retracting a paper, even when it is to correct a mistake without any scientific fraud involved, triggers feelings of anxiety. What will people actually think of me when I have a retraction on my track record? Rationally, I can argue the issue and explain why it is a good thing to have a retraction on your record when it is required. But still, those feelings pop up in my brain from time to time. When that happens, I just try to remember the best thing that came out of this new experience: my lectures on scientific retractions will never be the same.

New paper – Improving the trustworthiness, usefulness, and ethics of biomedical research through an innovative and comprehensive institutional initiative

I report often on this blog about new papers that I have co-authored. Every time I highlight something that is special about that particular publication. This time I want to highlight a paper that I co-authored, but also didn’t. Let me explain.

https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3000576#sec014

The paper, with the title, Improving the trustworthiness, usefulness, and ethics of biomedical research through an innovative and comprehensive institutional initiative, was published in PLOS Biology and describes the QUEST center. The author list mentions three individual QUEST researchers, but it also has this interesting “on behalf of the QUEST group” author reference. What does that actually mean?

Since I have reshuffled my research, I am officially part of the QUEST team, and therefore I am part of that group. I gave some input on the paper, like many of my colleagues, but nowhere near enough to justify full authorship. That would, after all, require all of the following 4(!) elements, according to the ICMJE:

  • Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND
  • Drafting the work or revising it critically for important intellectual content; AND
  • Final approval of the version to be published; AND
  • Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

This is what the ICMJE says about large author groups: “Some large multi-author groups designate authorship by a group name, with or without the names of individuals. When submitting a manuscript authored by a group, the corresponding author should specify the group name if one exists, and clearly identify the group members who can take credit and responsibility for the work as authors. The byline of the article identifies who is directly responsible for the manuscript, and MEDLINE lists as authors whichever names appear on the byline. If the byline includes a group name, MEDLINE will list the names of individual group members who are authors or who are collaborators, sometimes called non-author contributors, if there is a note associated with the byline clearly stating that the individual names are elsewhere in the paper and whether those names are authors or collaborators.”

I think that this format should be used more, but that will only happen if people take the collaborator status seriously as well. Other “contribution solutions” can help to give some insight into what it means to be a collaborator, such as a detailed description like in movie credits or a standardized contribution table. We have to start appreciating all forms of contributions.

Migraine and venous thrombosis: Another important piece of the puzzle

Asking the right question is arguably the hardest thing to do in science, or at least in epidemiology. The question that you want to answer dictates the study design, the data that you collect and the type of analyses you are going to use. Often, especially in causal research, this means scrutinizing how you should frame your exposure/outcome relationship. After all, there needs to be positivity and consistency, which you can only ensure through “the right research question”. Of note, the third assumption for causal inference, i.e. exchangeability (conditional or not), is something you can pursue through study design and analyses. But there is a third part of an epidemiological research question that makes all the difference: the domain of the study, as is so elegantly displayed by the cartoon of Today’s Random Medical News or the twitter hashtag “#inmice”.

The domain is the type of individuals to which the answer has relevance. Often, the domain has a one-to-one relationship with the study population. But this is not always the case, as sometimes the domain is broader than the study population at hand. A strong example: you could use young male infants to get a good estimate of the distribution of genotypes for a case-control study of venous thrombosis in middle-aged women. I am not saying that that case-control study has the best design, but there is a case to be made, especially if we can safely assume that the genotype distribution is not sex chromosome dependent and has not shifted across generations.

The domain of the study is not only important if you want to know to whom the results of your study are actually relevant, but also if you want to compare the results of different studies. (As a side note, keep in mind the absolute risks of the outcome that come with the different domains: they highly affect how you should interpret the relative risks.)

Sometimes, studies look like they fully contradict each other. One study says yes, the other says no. What to conclude? Who knows! But are you sure both studies actually answer the same question? Comparing the way the exposure and the outcome are measured in the two studies is one thing – an important thing at that – but it is not the only thing. You should also take potential differences and similarities between the domains of the studies into account.

This brings us to the paper by KA and myself that just got published in the latest volume of RPTH. In fact, it is a commentary written after we reviewed a paper by Folsom et al. that did a very thorough job of analyzing the relation between migraine and venous thrombosis in the elderly. They convincingly show that there is no relationship, in apparent contrast to previous papers. So we asked ourselves: “Why did the study by Folsom et al report findings in apparent contrast to previous studies?”

There is, of course, the possibility of just chance. But next to this, we should consider that the analyses by Folsom look at the long-term risk in an older population. The other papers looked at a shorter term, and in a younger population in which migraine is most relevant, as migraine often goes away with increasing age. KA and I argue that both studies might just be right, even though they are in apparent contradiction. Why should it not be possible that there is a transient increase in thrombosis risk when migraines are most frequent and severe, and no long-term increase in risk in the elderly, an age at which most migraineurs report less frequent and severe attacks?

The lesson of today: do not look only at the exposure or the outcome when you want to bring the evidence of two or more studies into one coherent theory. Look at the domain as well, as you might otherwise dismiss an important piece of the puzzle.

Finding consensus in Maastricht

source https://twitter.com/hspronk

Last week, I attended and spoke at the Maastricht Consensus Conference on Thrombosis (MCCT). This is not your standard, run-of-the-mill conference where people share their most recent research. The MCCT is different, and focuses on the larger picture, by giving faculty the (plenary) stage to share their thoughts on opportunities and challenges in the field. Then, with the help of a team of PhD students, these thoughts are further discussed in a break-out session. It was all wrapped up by a plenary discussion of what was discussed in the workshops. Interesting format, right?

It was my first MCCT, and beforehand I had difficulty envisioning how exactly this format would work out. Now that I have experienced it all, I can tell you that it really depends on the speaker and the people attending the workshops. When it comes to the 20-minute introductions by the faculty, I think that just an overview of the current state of the art is not enough. The best presentations were all about the bigger picture, and had either an open question, a controversial statement or some form of “crystal ball” vision of the future. It really is difficult to “find consensus” when there is no controversy, as was the case in some plenary talks. Given the break-out nature of the workshops, my observations are limited in number. But from what I saw, some controversy (if need be, constructed only for the workshop) really did foster discussion amongst the workshop participants.

Two specific activities stand out for me. The first is the lecture and workshop on the post-PE syndrome and how we should be able to monitor the functional outcome of PE. Given my recent plea in RPTH for more ordinal analyses in the field of thrombosis and hemostasis – learning from stroke research with its mRS – we not only had a great academic discussion, but immediately made plans for a couple of projects where we could actually implement this. The second activity I really enjoyed was my own workshop, where I not only gave a general introduction to stroke (prehospital treatment and triage, clinical and etiological heterogeneity, etc.) but also focused on the role of FXI and NETs. We discussed the role of DNase as a potential co-treatment for tPA in the acute setting (talking about “crystal ball” type discussions!). Slides from my lecture can be found here (PDF). An honorable mention has to go out to the PhD students P and V, who did a great job of supporting me during the prep for the lecture and workshop. Their smart questions and shared insights really shaped my contribution.

Now, I said it was not always easy to find consensus, which means that it isn’t impossible. In fact, I am sure that the themes that were discussed all boil down to a couple of opportunities and challenges. A first step was made by HtC and HS from the MCCT leadership team in the closing session on Friday, which will prove to be a great springboard for the consensus paper that will help set the stage for future research in our field of arterial thrombosis.

Messy epidemiology: the tale of transient global amnesia and three control groups

Clinical epidemiology is sometimes messy. The methods and data that you might want to use might not be available or just too damn expensive. Does that mean that you should throw in the towel? I do not think so.

I am currently working in a more clinically oriented setting, as the only researcher trained as a clinical epidemiologist. I could tell you about being misunderstood and feeling lonely as the only one who has seen the light, but that would just be lying. The fact is that my position is one of privilege and opportunity, as I work together with many different groups on a wide variety of research questions that have the potential to influence clinical reality directly and bring small, but meaningful progress to the field.

Sometimes that work is messy: not the right methods, a difference in interpretation, a p value in table 1… you get the idea. But sometimes something pretty comes out of that mess. That is what happened with this paper, which just got published online (e-pub) in the European Journal of Neurology. The general topic is the heart-brain interaction, and more specifically to what extent damage to the heart actually has a role in transient global amnesia. Now, the idea that there might be a link comes from some previous case series, as well as the clinical experience of some of my colleagues. The next step would of course be a formal case-control study, and if you want to estimate true rate ratios, a lot of effort has to go into the collection of data from a population-based control group. We had neither the time nor the money to do so, and upon closer inspection, we also did not really need that clean control group to answer some of the questions that would help progress the field.

So instead, we chose three different control groups, perhaps better referred to as reference groups, all three with some neurological disease. Yes, there are selections at play for each of these groups, but we could argue that those selections might be similar across the groups. If these selection processes are similar for all groups, strong differences in patient characteristics or biomarkers suggest that other biological systems are at play. The trick is not to hide these limitations, but, like a practiced judoka, to leverage these weaknesses and turn them into strengths. Be open about what you did, show the results, so that others can build on that experience.

So that is what we did. Compared to patients with migraine with aura, vestibular neuritis and transient ischemic attack, patients with transient global amnesia are more likely to exhibit signs of myocardial stress. This study was not designed – nor will it ever be able – to explain the cause of this link, nor do we pretend that our odds ratios are in fact estimates of rate ratios or something fancy like that. Still, even though many aspects of this study are not “by the book”, it did provide some new insights that help further thinking about, and investigation of, this debilitating and impactful disease.

The effort was led by EH, and the final paper can be found here on pubmed.

Cardiac troponin T and severity of cerebral white matter lesions: quantile regression to the rescue

quantile regression of high vs low troponin T and white matter lesion quantile

A new paper, this time venturing into the field of the so-called heart-brain interaction. We often see stroke patients with cardiac problems, and vice versa. And to make it even more complex, there is also a link to dementia! What to make of this? Is it a case of the chicken and the egg, or just confounding by a third variable? How do these diseases influence each other?

This paper tries to get a grip on this matter by zooming in on a marker of cardiac damage, i.e. cardiac troponin T. We looked at this marker in our stroke patients. In theory, stroke patients should not have increased levels of troponin T – yet they do. More interestingly, the patients who exhibit high levels of this biomarker also have a high level of structural changes in the brain, so-called cerebral white matter lesions.

But the problem is that patients with high levels of troponin T are different from those without any marker of cardiac damage. They are older and have more comorbidities, so a classic case for adjustment for confounding, right? But then we realized that both troponin and white matter lesions are heavily skewed data. You could log-transform the variables before running linear regression, but then the interpretation of the results gets a bit complex if you want clear point estimates as answers to your research question.

So we decided to go with quantile regression, which models the quantile cut-offs with all the multivariable regression benefits. The results remain interpretable and we don’t force our data into a distribution where it doesn’t fit. From our paper:

In contrast to linear regression analysis, quantile regression can compare medians rather than means, which makes the results more robust to outliers [21]. This approach also allows to model different quantiles of the dependent variable, e.g. 80th percentile. That way, it is possible to investigate the association between hs-cTnT in relation to both the lower and upper parts of the WML distribution. For this study, we chose to perform a median quantile regression analysis, as well as quantile regression analysis for quintiles of WML (i.e. 20th, 40th, 60th and 80th percentile). Other than that, the regression coefficients indicate the effects of the covariate on the cut-offs of the respective quantiles of the dependent variable, adjusted for potential covariates, just like in any other regression model.
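To make the approach concrete, here is a minimal sketch of quantile regression using statsmodels, run on simulated data. The variable names, numbers and the data-generating process are invented for illustration; they are not from our study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a heteroskedastic outcome: the "effect" of troponin is larger
# in the upper tail of the white matter lesion (wml) distribution.
rng = np.random.default_rng(42)
n = 500
troponin = rng.uniform(0, 1, n)
wml = 1.0 + 2.0 * troponin + (0.5 + troponin) * rng.standard_normal(n)
df = pd.DataFrame({"troponin": troponin, "wml": wml})

# Fit one quantile regression per quantile of interest
# (20th, 40th, median, 60th, 80th percentile).
results = {}
for q in [0.2, 0.4, 0.5, 0.6, 0.8]:
    res = smf.quantreg("wml ~ troponin", df).fit(q=q)
    results[q] = res.params["troponin"]
    print(f"q={q:.1f}: slope = {results[q]:.2f}")
```

In this simulated setup the estimated slope grows across the quantiles, which is exactly the kind of pattern a single linear regression of the mean would hide.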

Interestingly, the results show that the association between high troponin T and white matter lesions is strongest in the higher quantiles. If you want to stretch this into a causal statement, it means that high troponin T has a more pronounced effect on white matter lesions in stroke patients who are already at the high end of the white matter lesion distribution.

But we shouldn’t stretch it that far. This is a relatively simple study, and the clinical relevance of our insights still needs to be established. For example, our unadjusted results might indicate that the association in itself is strong enough to help predict post-stroke cognitive decline. The adjusted numbers are less pronounced, but still, it might be enough to help prediction models.

The paper, led by RvR, is now published in J of Neurol, and can be found here, as well as on my mendeley profile.

von Rennenberg R, Siegerink B, Ganeshan R, Villringer K, Doehner W, Audebert HJ, Endres M, Nolte CH, Scheitz JF. High-sensitivity cardiac troponin T and severity of cerebral white matter lesions in patients with acute ischemic stroke. J Neurol. 2018.

Impact of your results: Beyond the relative risk

I wrote about this in an earlier post: JLR and I published a paper in which we explain that a single relative risk, irrespective of its form, is just not enough. Some crucial elements go missing in this dimensionless ratio. The RR lets us forget about the size of the denominator, the clinical context, and the crude binary nature of the outcome. So we have provided some methods and ways of thinking to go beyond the RR in a tutorial published in RPTH (now in early view). The content and message are nothing new for those trained in clinical research (one would hope). Even those without formal training will have heard most of the concepts discussed in a talk or poster. But with all these concepts in one place, with an explanation of why they provide a tad more insight than the RR alone, we hope to trigger young (and older) researchers to consider whether one of these measures would be useful. Not for them, but for the readers of their papers. The paper is open access (CC BY-NC-ND 4.0), and can be downloaded from the website of RPTH, or from my mendeley profile.
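A small illustration of why the RR alone can mislead: the same relative risk corresponds to very different absolute effects depending on the baseline risk. The numbers below are hypothetical, not from the tutorial.

```python
# Hypothetical numbers: the same relative risk in two different contexts.
def risk_measures(risk_exposed, risk_unexposed):
    """Return relative risk, absolute risk difference, and number needed to treat/harm."""
    rr = risk_exposed / risk_unexposed
    rd = risk_exposed - risk_unexposed  # absolute risk difference
    nnt = 1.0 / rd                      # number needed to treat (or harm)
    return rr, rd, nnt

# Rare outcome: RR = 2, but only ~1 extra case per 1000 exposed.
print(risk_measures(0.002, 0.001))
# Common outcome: the same RR = 2 now means ~100 extra cases per 1000 exposed.
print(risk_measures(0.2, 0.1))
```

The RR is identical in both scenarios, while the number needed to harm differs by a factor of 100 – which is exactly the kind of context the denominator carries.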

new paper: pulmonary dysfunction and CVD outcome in the ELSA study

This is a special paper to me, as it is 100% the product of my team at the CSB. Well, 100%? Not really. This is the first paper from a series of projects in which we work with open data, i.e. data collected by others who subsequently shared it. A lot of people talk about open data, and how all the data created should be made available to other researchers, but not a lot of people talk about using that kind of data. For that reason we picked a couple of data resources to see how easy it is to work with data that was not initially collected by ourselves.

It is hard, as we have now learned. Even though the studies we focussed on (the ELSA study and UK Understanding Society) have a good description of their data and methods, understanding this takes time and effort. And even after putting in all the time and effort you might still not know all the little details and idiosyncrasies in the data.

A nice example lies in the exposure that we used in these analyses, pulmonary dysfunction. The data for this exposure were captured in several different datasets, in different variables. Reverse engineering a logical and interpretable concept out of these data points was not easy. This is perhaps also true for data that you collect yourself, but then at least this thinking is more or less done before data collection starts and no reverse engineering is needed.

So we learned a lot. Not only about the role of pulmonary dysfunction as a cause of CVD (hint: it is limited), about the different sensitivity analyses that we used to check the influence of missing data on the conclusions of our main analyses (hint: limited again), and about the need to update an exposure that progresses over time (hint: relevant), but also about what it is like to use data collected by others (hint: useful, but not easy).

The paper, with the title “Pulmonary dysfunction and development of different cardiovascular outcomes in the general population”, with IP as the first author, can be found here on pubmed or via my mendeley profile.

New Masterclass: “Papers and Books”

“Navigating numbers” is a series of Masterclasses initiated by a team of Charité researchers who think that our students should become more familiar with how numbers shape the field of medicine, i.e. both medical practice and medical research. And I get to organize the next in line.

I am very excited to organise the next Masterclass together with J.O., a bright researcher with a focus on health economics. As the full title of the masterclass is “Papers and Books – series 1 – intended effect of treatments”, some health economics knowledge is a must in this journal club style series of meetings.

But what exactly will we do? This Masterclass will focus on reading some papers as well as a book (very surprising), all with a focus on study design and how to do proper research into the “intended effect of treatments”. I borrowed this term from one of my former epidemiology teachers, Jan Vandenbroucke, as it helps to denote a part of the field of medical research with its own idiosyncrasies, without being limited to one study design.

The Masterclass runs for only 8 meetings, and as such is not nearly enough to have the students understand all the ins and outs of proper study design. But that is also not the goal: we want to show the participants how one should go about it when the ultimate question in medicine is asked: “should we treat or not?”

If you want to participate, please check out our flyer.

New paper: Contribution of Established Stroke Risk Factors to the Burden of Stroke in Young Adults


Just a relative risk is not enough to fully understand the implications of your findings. Sure, if you are an expert in a field, the context of that field will help you to assess the RR. But if you are not, the context of the numerator and denominator is often lost. There are several ways to work towards that. If you have a question that revolves around group discrimination (i.e. questions of diagnosis or prediction), the RR needs to be understood in relation to other predictors or diagnostic variables. That combination is best assessed through the added discriminatory value, such as the AUC improvement, or even fancier methods like reclassification tables and net benefit indices. But if you are interested in a single factor (e.g. in questions of causality or treatment), a number needed to treat (NNT) or the population attributable fraction (PAF) can be used.

The PAF has been the subject of my publications before, for example in these papers where we use the PAF to provide the context for the different ORs of markers of hypercoagulability in the RATIO study / in a systematic review. This paper is a more general text, as it is meant to provide non-epidemiologists with an insight into what epidemiology can bring to the field of law. Here, the PAF is an interesting measure, as it is related to the etiological fraction – a number that can be very interesting in tort law. Some of my slides from a law symposium that I attended address these questions and that particular Dutch case of tort law.

But the PAF is and remains an epidemiological measure and tells us what fraction of the cases in the population can be attributed to the exposure of interest. You can combine the PAFs of several factors into a single number (given some assumptions, which basically boil down to the idea that the combined factors work on an exactly multiplicative scale, both statistically and biologically). A 2016 Lancet paper that made a huge impact and increased interest in the concept of the PAF was the INTERSTROKE paper. It showed that up to 90% of all stroke cases can be attributed to only 10 factors, all of them modifiable.
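For those who want to see the arithmetic, here is a small sketch of how a PAF (via Levin's formula) and a combined PAF under that multiplicativity assumption can be computed. The prevalences and relative risks below are invented for illustration; they are not taken from INTERSTROKE or our paper.

```python
def paf(prevalence, rr):
    """Levin's formula: fraction of cases attributable to one exposure."""
    return prevalence * (rr - 1) / (prevalence * (rr - 1) + 1)

def combined_paf(pafs):
    """Combine individual PAFs, assuming the factors act independently on
    an exactly multiplicative scale (the strong assumption noted above)."""
    remaining = 1.0
    for p in pafs:
        remaining *= (1 - p)  # fraction of cases left unexplained
    return 1 - remaining

# Invented example: two risk factors with their prevalence and relative risk.
pafs = [paf(0.30, 3.0), paf(0.25, 2.0)]  # approx. 0.375 and 0.2
print(round(combined_paf(pafs), 3))
```

Note that the combined PAF is larger than either individual PAF but smaller than their sum, because the factors "compete" for the same cases.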

We had the question whether this was the same for young stroke patients. After all, the longstanding idea is that young stroke is a different disease from old stroke, in which traditional CVD risk factors play a less prominent role. The idea is that more exotic causal mechanisms (e.g. hypercoagulability) play a more prominent role in this age group. Boy, were we wrong. In a dataset that combines data from the SIFAP and GEDA studies, we noticed that the bulk of the cases can be attributed to modifiable risk factors (80% to 4 risk factors). There are some elements of the paper (an age effect even within the young study population, subtype effects, definition effects) that I won’t go into here. For that you need to read the paper – published in Stroke – here, or via my mendeley account. The main work was done by AA and UG. Great job!

Virchow’s triad and lessons on the causes of ischemic stroke

I wrote a blog post for BMC, the publisher of Thrombosis Journal in order to celebrate blood clot awareness month. I took my two favorite subjects, i.e. stroke and coagulation, and I added some history and voila!  The BMC version can be found here.

When I look out of my window from my office at the Charité hospital in the middle of Berlin, I see the old pathology building in which Rudolph Virchow used to work. The building is just as monumental as the legacy of this famous pathologist who gave us what is now known as Virchow’s triad for thrombotic diseases.

In ‘Thrombose und Embolie’, published in 1856, he postulated that the consequences of thrombotic disease can be attributed to one of three categories: phenomena of interrupted blood flow, phenomena associated with irritation of the vessel wall and its vicinity, and phenomena of blood coagulation. This concept has since been modified to describe the causes of thrombosis and has been a guiding principle for many thrombosis researchers.

The traditional split in interest between arterial thrombosis researchers, who focus primarily on the vessel wall, and venous thrombosis researchers, who focus more on hypercoagulation, might not be justified. Take ischemic stroke, for example. Lesions of the vascular wall are definitely a cause of stroke, but perhaps only in the subset of patients who experience a so-called large vessel ischemic stroke. It is also well established that a disturbance of blood flow in atrial fibrillation can cause cardioembolic stroke.

Less well studied, but perhaps no less relevant, is the role of hypercoagulation as a cause of ischemic stroke. It seems that an increased clotting propensity is associated with an increased risk of ischemic stroke, especially in the young, in whom the main cause of the stroke remains undetermined in about a third of cases. Perhaps hypercoagulability plays a much more prominent role than we traditionally assume?

But this ‘one case, one cause’ approach takes Virchow’s efforts to classify thrombosis a bit too strictly. Many diseases can be called multi-causal, which means that no single risk factor in itself is sufficient and only a combination of risk factors working in concert causes the disease. This is certainly true for stroke, and translates to the idea that each stroke subtype might be the result of a different combination of risk factors.

If we combine Virchow’s work with the idea of multi-causality, and the heterogeneity of stroke subtypes, we can reimagine a new version of Virchow’s Triad (figure 1). In this version, the patient groups or even individuals are scored according to the relative contribution of the three classical categories.

From this figure, one can see that some subtypes of ischemic stroke might be more like some forms of venous thrombosis than other forms of stroke, a concept that could bring new ideas for research and perhaps has consequences for stroke treatment and care.

Figure 1. An example of a gradual classification of ischemic stroke and venous thrombosis according to the three elements of Virchow’s triad.

However, recent developments in the field of stroke treatment and care have focused on the acute treatment of ischemic stroke. Stroke ambulances that can discriminate between hemorrhagic and ischemic stroke (information needed to start thrombolysis in the ambulance) drive the streets of Cleveland, Gothenburg, Edmonton and Berlin. Other major developments are in the field of mechanical thrombectomy, with wonderful results from many studies such as the Dutch MR CLEAN study. Even though these two new approaches save lives and prevent disability in many, they are ‘too late’ in the sense that they are reactive and do not prevent clot formation.

Therefore, in this blood clot awareness month, I hope that stroke and thrombosis researchers join forces and further develop our understanding of the causes of ischemic stroke so that we can Stop The Clot!

New team member!

A couple of weeks ago I announced that my team was looking for a new post-doc. I received many applications, some even from as far as Italy and Spain. Out of this pile of candidates we were able to find a candidate who fulfilled all the requirements we had in mind, and then some. It is great that she will join the team in December. JH has worked in the field of epidemiology for quite some time and is not only experienced in setting up new projects and providing physicians with methodological input on their clinical research projects, but she also has a great interest in the more methodological side of epidemiology. For example, she is co-author/developer of the program DAGitty, which can be used to draw causal diagrams. She is also speaker of the working group methodology of the German Society of Epidemiology (dgEpi). Her background in psychology also means that she brings a lot of knowledge on methods that we as a team do not have so far. In short, a great addition to the team. Welcome JH!


Berlin Epidemiological Methods Colloquium kicks off with SER event

A small group of epi-nerds (JLR, TK and myself) decided to start a colloquium on epidemiological methods. This colloquium series kicks off with a webcast of an event organised by the Society for Epidemiologic Research (SER), but in general we will organize meetings focused on advanced topics in epidemiological methods. Anyone interested is welcome. Regular meetings will start in February 2017. All meetings will be held in English.
More information on the first event can be found below or via this link:

“Perspective of relative versus absolute effect measures” via SERdigital

Date: Wednesday, November 16th, 2016
Time: 6:00pm – 9:00pm
Location: Seminar Room of the Neurology Clinic, first floor (Alte Nervenklinik)
Bonhoefferweg 3, Charité Universitätsmedizin Berlin, Campus Mitte, 10117 Berlin
(Map: https://www.charite.de/service/lageplan/plan/map/ccm_bonhoefferweg_3)

Description:
Join us for a live, interactive viewing party of a debate between two leading epidemiologists, Dr. Charlie Poole and Dr. Donna Spiegelman, about the merits of relative versus absolute effect measures. Which measure of effect should epidemiologists prioritize? This digital event organized by the Society for Epidemiologic Research will also include three live oral presentations selected from submitted abstracts. There will be open discussion with other viewers from across the globe and opportunities to submit questions to the speakers. And since no movie night is complete without popcorn, we will provide that, too! For more information, see: https://epiresearch.org/ser50/serdigital

If you plan to attend, please register (space limited): https://goo.gl/forms/3Q0OsOxufk4rL9Pu1

 

predicting DVT with D-dimer in stroke patients: a rebuttal to our letter

Some weeks ago, I reported on a letter to the editor of Thrombosis Research on the question whether D-Dimer indeed does improve DVT risk prediction in stroke patients.

I was going to write a whole story on how one should not use a personal blog to continue the scientific debate. As you can guess, I ended up writing a full paragraph where I did this anyway. So I deleted that paragraph, and instead I am going to do something that requires some action from you: I am just going to leave you with the links to the letters and let you decide whether the issues we bring up, as well as the corresponding rebuttal of the authors, help to interpret the results of the original publication.

How to set up a research group

A couple of weeks ago I wrote down some thoughts I had while writing a paper for the JTH series on Early Career Researchers. I was asked to write how one sets up a research group, and the four points I described in my previous post can be recognised in the final paper.

But I also added some reading tips in the paper. Reading on a particular topic helps me not only to learn what is written in the books, but also to get into a certain mindset. So, when I knew that I was going to take over a research group in Berlin, I read a couple of books, both fiction and non-fiction. Some were about Berlin (e.g. Cees Nooteboom’s Berlijn 1989/2009), some focussed on academic life (e.g. Porterhouse Blue). They helped to get my mind in a certain gear and prepare for what was coming. In that sense, my bookcase says a lot about me.

Number one on the list of recommended reads are the standard management best sellers, as I wrote in the text box:

// Management books There are many titles that I can mention here; whether it is the best-seller Seven Habits of Highly Effective People or any of the smaller booklets by Ken Blanchard, I am convinced that reading some of these texts can help you in your own development as a group leader. Perhaps you will like some of the techniques and approaches that are proposed and decide to adopt them. Or, like me, you may initially find yourself irritated because you cannot envision the approaches working in the academic setting. If this happens, I encourage you to keep reading, because even in these cases I learned something about how academia works and what my role as a group leader could be through this process of reflection. My absolute top recommendation in this category is Leadership and Self-Deception: a text that initially got on my nerves but in the end taught me a lot.

I really think that is true. You should not only read books that you agree with, or whose story you enjoy. Sometimes you can like a book not for its content but for the way it makes you question your own preexisting beliefs and habits. But it is true that this sometimes makes it difficult to actually finish such a book.

Next to books, I am quite into podcasts, so I also wrote:

// Start up. Not a book, but a podcast from Gimlet media about “what it’s really like to get a business off the ground.” It is mostly about tech start-ups, but the issues that arise when setting up a business are in many ways similar to those you encounter when you are starting up a research group. I especially enjoyed seasons 1 and 3.

I thought about including the sponsored podcast “Open for Business” from Gimlet Creative, as it touches upon some very relevant aspects of starting something new. But for me the jury is still out on the “sponsored podcast” concept: it is branded content from Amazon, and I am not sure to what extent I like that. For now, I do not like it enough to include it in my JTH paper.

The paper is not online due to the summer break, but I will provide a link asap.

– update 11.10.2016 – here is a link to the paper. 


Does d-dimer really improve DVT prediction in stroke?


Good question, and even though thromboprophylaxis is already given according to guidelines in some countries, I can see the added value of a prediction rule that discriminates well. Especially finding those patients with a low DVT risk might be useful. But whether to use D-dimer is a whole other question. To answer it, a thorough prediction model needs to be set up both with and without the information of D-dimer, and only a direct comparison of these two models will provide the information we need.

In our view, that is not what the paper by Balogun et al. did. And after critical appraisal of the tables and text, we found some inconsistencies that prohibit the reader from understanding what exactly was done and which results were obtained. In the end, we decided to write a letter to the editor, especially to prevent other readers from mistakenly adopting the conclusion of the authors, namely that “D-dimer concentration with in 48 h of acute stroke is independently associated with development of DVT.This observation would require confirmation in a large study.” Our opinion is that the data from this study need to be analysed properly to justify such a conclusion. One of the key elements in our letter is that the authors never compare the AUC of the model with and without D-dimer. This is needed, as that would provide the bulk of the answer to the question whether or not D-dimer should be measured. The only clue we have are the ORs of D-dimer, which range between 3 and 4, which is not really impressive when it comes to diagnosis and prediction. For more information on this, please check the paper on the misuse of the OR as a measure of interest for diagnosis/prediction by Pepe et al.
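For readers who wonder what such a comparison looks like in practice, here is a minimal sketch in Python. All variable names and effect sizes are invented for illustration and have nothing to do with the actual Balogun et al. data; the point is simply that the discriminative value of D-dimer shows up as the difference between the AUC of a risk score with and the AUC of a risk score without it, computed on the same patients:

```python
import numpy as np

def auc(y, score):
    """Mann-Whitney AUC: probability that a random case scores higher than a random control."""
    case, ctrl = score[y == 1], score[y == 0]
    greater = (case[:, None] > ctrl[None, :]).mean()
    ties = (case[:, None] == ctrl[None, :]).mean()
    return greater + 0.5 * ties

rng = np.random.default_rng(42)
n = 2000

# Simulated predictors (hypothetical names and effect sizes)
age = rng.normal(70, 10, n)
immobile = rng.binomial(1, 0.3, n)
log_ddimer = rng.normal(0, 1, n)

# Simulated DVT outcome; in this toy world D-dimer carries real signal
lin = -5 + 0.03 * age + 0.8 * immobile + 0.6 * log_ddimer
y = rng.binomial(1, 1 / (1 + np.exp(-lin)))

score_base = 0.03 * age + 0.8 * immobile      # model without D-dimer
score_full = score_base + 0.6 * log_ddimer    # model with D-dimer

a_base, a_full = auc(y, score_base), auc(y, score_full)
print(f"AUC without D-dimer: {a_base:.3f}")
print(f"AUC with D-dimer:    {a_full:.3f}")
```

Only this head-to-head AUC difference, not the OR of D-dimer on its own, tells you how much discrimination the marker adds.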

A final thing I want to mention is that our letter was the result of a mini-internship of one of the students at the Master programme of the CSB and was drafted in collaboration with our Virchow scholar HGdH from the Netherlands. Great team work!

The letter can be found on the website of Thrombosis Research as well as on my Mendeley profile.

 

Cardiovascular events after ischemic stroke in young adults (results from the HYSR study)


The collaboration with the group in Finland has turned into a nice new publication, with the title

“Cardiovascular events after ischemic stroke in young adults”

This work, with data from Finland, was primarily done by KA and JP. KA came to Berlin to learn some epidemiology with the aid of the Virchow scholarship, so that is where we came in. It was great to have KA as part of the team, and even better to work with their great data.

Now onto the results of the paper: as in the RATIO follow-up study, the risk of recurrence after young stroke remained elevated long after the initial stroke in this analysis of the Helsinki Young Stroke Registry. But unlike the RATIO paper, this dataset had more information on the patients, for example the TOAST criteria. This means that we were able to identify that the group with LAA (large artery atherosclerosis) had a very high risk of recurrence.

The paper can be found on the website of Neurology, or via my mendeley profile.

Pregnancy loss and risk of ischaemic stroke and myocardial infarction


Together with colleagues I worked on a paper on the relationship between pregnancy, its complications, and stroke and myocardial infarction in young women, which just appeared online on the BJH website.

The article, which analyses data from the RATIO study, concludes that only women with multiple pregnancy losses have an increased risk of stroke (OR 2.4) compared to those who never experienced a pregnancy loss. The work was mainly done by AM, and is a good example of an international collaboration in which we benefitted from the expertise of all team members.

The article, with the full title “Pregnancy loss and risk of ischaemic stroke and myocardial infarction” can be found via PubMed, or via my personal Mendeley page.

Statins and risk of poststroke hemorrhagic complications

Easter brought another publication, this time with the title

“Statins and risk of poststroke hemorrhagic complications”

I am very pleased with this paper as it demonstrates two important aspects of my job. First, I was able to share my thoughts on comparing current users vs never users. As has been argued before (e.g. by the group of Hernán) and also articulated in a letter to the editor I wrote with colleagues from Leiden, such a comparison brings forth an inherent survival bias: you are comparing never users (i.e. those without the indication) with current users (those who have the indication, can handle the side effects of the medication, and stay alive long enough to be enrolled into the study as users). This matter is of course only relevant if you want to test the effect of statins, not if you are interested in the mere predictive value of being a statin user.
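To make this survival bias concrete, here is a toy simulation (all numbers are invented): the drug has no effect at all in this simulated world, yet because frail users tend to die between treatment start and study enrollment, the enrolled current users look healthier than the never users.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical frailty drives both early death and the outcome (numbers invented)
frailty = rng.normal(0, 1, n)
statin_user = rng.binomial(1, 0.5, n).astype(bool)

# The statin has NO effect here, but frail users tend to die
# between treatment start and study enrollment:
p_death = 1 / (1 + np.exp(-2 * frailty))
died_before_enrollment = statin_user & (rng.random(n) < p_death)
enrolled = ~died_before_enrollment

# Outcome risk depends on frailty only, not on statin use
p_outcome = 1 / (1 + np.exp(-(frailty - 2)))
outcome = rng.random(n) < p_outcome

risk_user = outcome[enrolled & statin_user].mean()
risk_never = outcome[enrolled & ~statin_user].mean()
print(f"risk in enrolled current users: {risk_user:.3f}")
print(f"risk in never users:            {risk_never:.3f}")
```

The current users come out with a lower risk even though the drug does nothing: the selection into the study, not the treatment, produces the apparent benefit.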

The second thing about this paper is the way we were able to use data from the VISTA collaboration, a large pool of data from previous stroke studies (RCTs and observational). I believe such ways of sharing data bring science forward. Should all data be shared online for all to use? I am not sure of that, but the easy-access model of the VISTA collaboration (which includes data maintenance, harmonization etc.) is certainly appealing.

The paper can be found here, and on my mendeley profile.

 

– update 1.5.2016: this paper was the topic of a comment in the @greenjournal. See also their website.

update 19.5.2016: this project also led to first author JS being awarded the young researcher award of the ESOC 2016.


Causal Inference in Law: An Epidemiological Perspective

source:ejrr

Finally, it is here. The article I wrote together with WdH, MZ and RM was published in the European Journal of Risk and Regulation last week. And boy, did it take time! This whole project, an interdisciplinary effort in which epidemiological thinking was applied to questions of causal inference in tort law, took more than 3 years, with only a couple of months of writing… the rest was waiting and waiting and waiting and some peer review. But more on this later.

First some content. In the article we discuss the idea of proportional liability, which adheres to the epidemiological concept of multi-causality. But the article is more: as this is a journal for non-epidemiologists, we also provide a short and condensed overview of study design, bias and other epidemiological concepts such as counterfactual thinking. You might have recognised the theme from my visits to the Leiden Law School for some workshops. The EJRR editorial describes it as: “(…) discuss the problem of causal inference in law, by providing an epidemiological viewpoint. More specifically, by scrutinizing the concept of the so-called “proportional liability”, which embraces the epidemiological notion of multi-causality, they demonstrate how the former can be made more proportional to a defendant’s relative contribution in the known causal mechanism underlying a particular damage.”

Getting this thing published was tough: the quality of the peer review was low (dare I say zero?), communication was difficult, the submission system was flawed, etc. But most of all, the editorial office was slow: the first submission was in June 2013! This could be a non-medical journal thing, I do not know, but still, almost three years. And all this for an invited article that was planned to be part of a special edition on the link between epidemiology and law, which never came. Due to several delays (surprise!) of the other articles for this edition, it was decided that our article would not wait for the special edition any longer. Therefore, our cool little insight into epidemiology now seems lost between all those legal and risk regulation articles. A shame if you ask me, but I am glad that we are not waiting any longer!

Although I do love interdisciplinary projects, and I think the result is a nice one, I do not want to go through this process again. No more EJRR for me.

Oh, one more thing… the article is behind a paywall and I do not have access through my university, nor did the editorial office provide me with a link to a pdf of the final version. So, to be honest, I don’t have the final article myself! Feels weird. I hope EJRR will provide me with a pdf quite soon. In the meantime, anybody with access to this article, please feel free to send me a copy!

Where Have All the Rodents Gone? The Effects of Attrition in Experimental Research on Cancer and Stroke

 

source: journals.plos.org/plosbiology

We published a new article in PLOS Biology today, with the title:

“Where Have All the Rodents Gone? The Effects of Attrition in Experimental Research on Cancer and Stroke”

This is a wonderful collaboration between three fields: statistics, epidemiology and laboratory research. Together we took a look at what is called attrition in the preclinical labs, that is, the loss of data in animal experiments. This could be because an animal died before the needed data could be obtained, or just because a measurement failed. This loss of data can be translated to the concept of loss to follow-up in epidemiological cohort studies, and from that field we know that it can lead to a substantial loss of statistical power and perhaps even bias.

But it was unknown to what extent this is also a problem in preclinical research, so we did two things. We looked at how often papers indicated there was attrition (with an alarming number of papers not providing the data needed to establish whether there was attrition), and we ran simulations of what happens when there is attrition in various scenarios. The results paint a clear picture: the loss of power, but also the bias, is substantial. The degree of each depends of course on the attrition scenario, but the message of the paper is clear: we should be aware of the problems that come with attrition, and reporting on attrition is the first step in minimising the problem.
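To give a flavour of what such a simulation looks like, here is a toy version (this is not the actual code or scenario from the paper, and all numbers are invented): animals are dropped preferentially from the treated group when their outcome looks bad, and the estimated treatment effect drifts away from the truth.

```python
import numpy as np

rng = np.random.default_rng(7)
true_effect, n_per_group, n_sims = 1.0, 10, 5000

estimates = []
for _ in range(n_sims):
    control = rng.normal(0.0, 2.0, n_per_group)
    treated = rng.normal(true_effect, 2.0, n_per_group)
    # Outcome-biased attrition: the two worst responders in the treated
    # group are lost (died, measurement failed) and silently dropped
    treated_kept = np.sort(treated)[2:]
    estimates.append(treated_kept.mean() - control.mean())

print(f"true effect:                  {true_effect}")
print(f"mean estimate with attrition: {np.mean(estimates):.2f}")
```

Even this crude scenario overstates the true effect considerably, and with only eight remaining animals per group the power problem comes on top of the bias.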

A nice thing about this paper is that it coincides with the start of a new research section in the PLOS galaxy, called “meta-research”: a collection of papers that all focus on how science works, behaves, and can or even should be improved. I can only welcome this, as more projects on this topic are in our pipeline!

The article can be found on pubmed and my mendeley profile.

Update 6.1.16: WOW, what media attention for this one. Interviews with outlets from the UK, US, Germany, Switzerland, Argentina, France, Australia etc., German radio, the Dutch Volkskrant, and a video on focus.de. More via the corresponding altmetrics page. Also interesting is the post by UD, the lead in this project and chief of the CSB, on his own blog “To infinity, and beyond!”

 

New article published – Ankle-Brachial Index and Recurrent Stroke Risk: Meta-Analysis


Another publication, this time on the role of the ABI as a predictor of stroke recurrence. This is a meta-analysis, which combines data from 11 studies, allowing us to see that the ABI was moderately associated with recurrent stroke (RR 1.7) and vascular events (RR 2.2). Not that much, but it might be just enough to improve some of the risk prediction models available for stroke patients when the ABI is incorporated.

This work, the product of the great work of some of the bright students at the CSB (JBH and COL), is a good start in our search for a good stroke recurrence risk prediction model. This will be a major topic in our future research in the PROSCIS study, which is led by TGL. I am looking forward to the results of that study, as better prediction models are needed in the clinic, especially as more precise data and diagnoses might lead to better subgroup-specific risk prediction and treatment.

The article can be found on pubmed and my mendeley profile and should be cited as

Hong JB, Leonards CO, Endres M, Siegerink B, Liman TG. Ankle-Brachial Index and Recurrent Stroke Risk. Stroke 2015: STROKEAHA.115.011321.

The ECTH 2016 in The Hague

My first conference experience (ISTH 2008, Boston) got me hooked on science: all these people doing the same thing, speaking the same language, and looking to show and share their knowledge. This is even more true when you are involved in the organisation. Organising the international soccer match at the Olympic stadium in Amsterdam linked to the ISTH 2013, to celebrate the 25th anniversary of the NVTH, was fun. And let’s not forget the exciting challenge of organising the WEON 2014.

And now, the birth of a new conference: the European Congress of Thrombosis and Hemostasis, which will be held in The Hague, the Netherlands (28–30 September 2016). I am very excited for several reasons. First of all, this conference will fill the gap between the biennial ISTH conferences. Second, I have the honor of helping out as the chair of the junior advisory board. Third, The Hague, my old home town!

So, we have 10 months to organise some interesting meetings and activities, primarily focused on young researchers. Time to get started!

First results from the RATIO follow up study

Another article got published today, in JAMA Internal Medicine: the results of the first analyses of the RATIO follow-up data. For these data, we linked the RATIO study to the Dutch national bureau of statistics (CBS) to obtain 20 years of follow-up on cardiovascular morbidity and mortality. We first submitted a full paper, but later we downsized to a research letter with only 600 words. This means that only the main message (i.e. cardiovascular recurrence is high, persistent over time and disease specific) is left.

It is a “Leiden publication”, in which I worked together with AM and FP from Milan. Most of the credit of course goes to AM, who is the first author of this piece. The cool thing about this publication is that the team worked very hard on it for a long time (data linking and analyses were not easy, nor was going from 3000 words to 600 in just a week or so), and in the end all the hard work paid off. Next to the hard work, it is also nice to see the results being picked up by the media. JAMA Internal Medicine put out an international press release, and the LUMC is going to publish its own Dutch version. In the days before the ‘online first’ publication I already answered some emails from writers for medical news sites, some with up to 5.000K views per month. I do not know if you think that’s a lot, but for me it is. The websites that cover this story include dagensmedisin.se, healio.com, medicaldaily.com, medpagetoday.com, medonline.at, drugs.com, healthday.com, webmd.com, usnews.com, doctorslounge.com, medicalxpress.com, medicalnewstoday.com and eurekalert.org, and perhaps more to come. Why not just take a look at the Altmetric of this article?

– edit 26.11.2015: a Dutch press release from the LUMC can be found here. – edit: oops, medpagetoday.com has published a great report/interview, but used a wrong title: “Repeat MI and Stroke Risks Defined in ‘Younger’ Women on Oral Contraceptives”. Not all women were on OC, of course.

Of course, @JAMAInternalMed tweeted about it

 

The article, with the full title Recurrence and Mortality in Young Women With Myocardial Infarction or Ischemic Stroke: Long-term Follow-up of the Risk of Arterial Thrombosis in Relation to Oral Contraceptives (RATIO) Study can be found via JAMA Internal Medicine or via my personal Mendeley page.

As I reported earlier, this project is supported by a grant from the LUF den Dulk-Moermans foundation, for which we are grateful.

A year in Berlin

teamfoto-ag-siegerink

So, it has been just over a year since I started here in Berlin. In this year I had the opportunity to start some great projects, some of which have already resulted in some handsome (upcoming) publications.

For those who wonder, the picture gives a somewhat inflated impression of the size of the team, as we decided to include all people who currently work with us. This includes two of our five students and two Virchow scholars who are visiting from Amsterdam and Hamburg. I included them all in the picture, as I enjoy my work here in Berlin because of all the team members. Now, let’s do some science!

Spectrum of cerebral spinal fluid findings in patients with posterior reversible encephalopathy syndrome

source: http://www.springer.com

This is one of the first projects that I was involved with from start to finish since my start in Berlin, so I’m quite content to see it published. A cool landmark after a year in Berlin.

Together with TL and LN I supervised a student from the Netherlands (JH). This publication is the result of all the work JH did, together with the great medical knowledge of the rest of the team. About the research: posterior reversible encephalopathy syndrome, or PRES, is a syndrome that can present with stroke-like symptoms but in fact has nothing to do with stroke. The syndrome was recognised as a separate entity only a couple of years ago, and this group of patients that we collected from the Charité is one of the largest collections in the world.

The syndrome is characterised by edema (either vasogenic or cytotoxic), suggesting there is something wrong with the fluid balance in the brain. A good way to learn more about the fluids in the brain is to look at the different things you can measure in the cerebrospinal fluid (CSF). The aim of this paper was therefore to see to what extent the edema, as well as other patient characteristics, was associated with CSF parameters.

Our main conclusion is that the total amount of protein in the CSF is indeed elevated in most PRES patients, and that a severe edema grade was associated with more CSF protein. Keep in mind that this is basically a case series (with some follow-up): CSF was measured during diagnosis and only in a selection of the patients. Selection bias is therefore likely to affect our results, as is the possibility of reverse causation. Next to that, research into “syndromes” is always complicated, as they are a man-made concept. We also encountered this problem in the RATIO analyses of the antiphospholipid syndrome (Urbanus, Lancet Neurol 2009): a true syndrome diagnosis could not be given, as that requires two blood draws three months apart, which is not possible in a case-control study. But still, there is a whole lot to learn about these syndromes in our clinical research projects.

I think this is also true for the PRES study: our results show that it is justified to do a prospective, rigorous and standardised analysis of patients with this dangerous syndrome. More knowledge on its causes and consequences is needed!

The paper can be cited as:

Neeb L, Hoekstra J, Endres M, Siegerink B, Siebert E, Liman TG. Spectrum of cerebral spinal fluid findings in patients with posterior reversible encephalopathy syndrome. J Neurol 2015 (e-pub), and can be found on pubmed or on my mendeley profile.

New article: Lipoprotein (a) as a risk factor for ischemic stroke: a meta-analysis

source: atherosclerosis-journal.com

Together with several co-authors, with first author AN in the lead, we did a meta-analysis on the role of Lp(a) as a risk factor for stroke. Bottom line: Lp(a) seems to be a risk factor for stroke, most prominently in the young.

The results are not the only reason why I am so enthusiastic about this article. It is also about the epidemiological problem that AN encountered and that we ended up discussing over coffee. The problem: the different studies use different categorisations (tertiles, quartiles, quintiles). How do you pool these data in a way that gives a valid and precise answer to the research question? In the end we used the technique of Danesh et al. (JAMA. 1998;279(18):1477-1482), which uses the normal distribution and distances expressed in SD. A neat technique, even though it assumes a couple of things about the uniformity of the effect over the range of the exposure. An IPD meta-analysis would be better, as we would be free to investigate the dose-response relationship and could keep the adjustment for confounding uniform, but hey… this is cool in itself!
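The core of the technique can be sketched in a few lines, under the assumption that the exposure is normally distributed: the means of the top and bottom of k equal-sized groups sit a fixed number of SDs apart, so a top-vs-bottom odds ratio can be rescaled to a per-SD odds ratio regardless of whether a study reported tertiles, quartiles or quintiles. The OR of 2.0 below is an invented example, not a result from our meta-analysis:

```python
from math import exp, log
from statistics import NormalDist

def sd_distance(k):
    """SD distance between the means of the top and bottom of k equal-sized
    groups of a standard-normal exposure (tertiles: k=3, quintiles: k=5)."""
    nd = NormalDist()
    p = 1.0 / k
    z = nd.inv_cdf(1 - p)       # lower cut-point of the top group
    top_mean = nd.pdf(z) / p    # mean of the top group (truncated normal)
    return 2 * top_mean         # by symmetry, the bottom-group mean is -top_mean

# Example: a study reports OR 2.0 for top vs bottom tertile of the exposure
or_top_vs_bottom = 2.0
or_per_sd = exp(log(or_top_vs_bottom) / sd_distance(3))
print(f"tertile means are {sd_distance(3):.2f} SD apart")  # ~2.18
print(f"equivalent OR per 1 SD: {or_per_sd:.2f}")
```

Once every study is expressed per SD, the estimates can be pooled on a common scale; the price is the assumption of normality and a log-linear effect across the whole exposure range.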

The article can be found on pubmed and on my mendeley profile.

Fellow of the European Stroke Organisation

 www.eso-sss-2012.med.unideb.hu

I just got word that I have been elected as a fellow of the European Stroke Organisation. Well, elected sounds more impressive than it really is… I applied myself by sending in an application letter, a resume, a form to show my experience in stroke research and two letters of recommendation from two active fellows, and that’s that. So what does this mean? Basically, the fellows of the ESO are those who want to put some of their time to good use in the name of the ESO, such as being active in one of the committees. I chose to get active in teaching epidemiology (teaching courses during the ESOC conferences, or in the winter/summer schools, perhaps in the to-be-founded ESO scientific journal), but how exactly is not completely clear yet. Nonetheless, I am glad that I can work with and through this organisation to improve epidemiological knowledge in the field of stroke.

New article published: the relationship between ADAMTS13 and MI


This article is a collaboration between many people, initiated by the Milan group. We ended up with a quite diverse group of researchers to answer this question because of the method we used: the individual patient data (IPD) meta-analysis. The best thing about this approach is that you can pool the data from different studies while adjusting for potential sources of confounding in a uniform manner (given that the data are available, that is). On their own, these studies showed mixed results, but in the end we were able to use the combined data to show that there was an increased MI risk, though only for those with very low levels of ADAMTS13. So, here you see the power of IPD meta-analysis!

The credits for this work go primarily to AM, who did a great job of getting all the PIs on board, analysing the data and writing a good manuscript. The final version is not online yet, but you can find the pre-publication on pubmed.


Changing stroke incidence and prevalence

changing stroke population

Declining disease incidence over time does not necessarily mean that the number of patients in care also goes down, as the prevalence of a disease is a function of its incidence and mortality: “death cures”. Combine this notion with the fact that both the incidence and the mortality rates of the different stroke subtypes change differently over time, and you will see that the future group of patients suffering from stroke will be quite different from the current one.
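The underlying arithmetic is the steady-state approximation prevalence ≈ incidence × mean disease duration. A toy calculation (all numbers invented) shows how prevalence can rise even while incidence falls, simply because patients survive longer with the disease:

```python
# Steady-state approximation: prevalence ≈ incidence rate × mean disease duration
def prevalence_per_100k(incidence_per_100k_year, mean_duration_years):
    return incidence_per_100k_year * mean_duration_years

# Hypothetical scenario: incidence drops by 25% while survival more than doubles
past = prevalence_per_100k(200, 5)      # 200 new cases/100k/year, ~5 years survival
future = prevalence_per_100k(150, 12)   # 150 new cases/100k/year, ~12 years survival
print(past, future)  # 1000 1800 -> fewer new cases, yet more patients in care
```

So a falling incidence and a growing, changing patient population are entirely compatible.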

I made this picture to accompany a short text on declining stroke incidence which I wrote for the newsletter of the Kompetenznetz Schlaganfall, which can be found in this pdf.

New article published – Conducting your own research: a revised recipe for a clinical research training project

source: https://www.ntvg.nl/artikelen/zelf-onderzoek-doen

A quick update on a new article that was published on Friday in the NTVG. This article with the title

“Conducting your own research: a revised recipe for a clinical research training project”

– gives a couple of suggestions for young clinicians/researchers on how to organise their epidemiological research projects. This paper was written to commemorate the retirement of prof JvdB, who wrote the original article back in 1989. I grew quite fond of this article, as it combines insights from 25 years ago with quite recent ones (e.g. STROBE and the Schuyt committee) and resulted in an article that will help young researchers rethink how they plan and execute their own research projects.

Five key suggestions form the backbone of this article: limit the research question, conduct a pilot study, write the article before you collect the data, streamline the research process, and be accountable. As the article is only available in Dutch at the moment, I will work on an English version. First drafts of this manuscript, each discussing one of the 5 recommendations, might appear on this website. And how about a German version?

Anyway, it has to be mentioned that if it were not for JvdB, this article would never have come to light. Not only because he wrote the original, but mostly because he is one of the most inspiring teachers of epidemiology.

The professor as an entrepreneur

picture: onderzoeksredactie.nl

Today I read a long read from the onderzoeksredactie, a Dutch initiative for high-quality research journalism. In this article they present the results of their research into the conflicts of interest of professors in the Netherlands. They were very thorough: they published a summary in article form, but also made sure that all methodological choices, the questionnaire they used, the results, etc. are available for further scrutiny by the reader. It is a shame, though, that the complete dataset is not available for further analyses (which characteristics make some professors not disclose their conflicts of interest?).

The results are, although unpleasant to realise, not new; at least not to me. I can imagine that for most people the idea of a professor with a conflict of interest is a rarity, but working in academia I have seen enough cases to know that this is not true. The article was thorough in its analysis: it is not just that professors want to get rich; the concept of the professor as an entrepreneur is even supported by the Dutch government. Recent changes in the funding structure of research promote 'valorisation', spin-offs, and collaboration with industry partners, all to further enlarge the 'societal impact' of science. These changes might indeed reinforce such behaviour, but I think that the academic freedom of researchers should never become the victim.

New article published – but did I deserve it?

One of these dots is me standing on a platform waiting for my train! Source: GNCnet.nl

This website is to keep track of all things that sound 'sciency', and so all the papers that I contributed to end up here with a short description. Normally this means that I am one of the authors and I know well ahead of time that an article will be published online or in print. Today, however, I got a little surprise: I received notice that I am a co-author on a paper (pdf) that I knew was coming, but on which I did not know I was a co-author. And my amazement grew even more when I discovered that I was listed as the last author, a position reserved for senior authorship in most medical journals.

However, there is a catch… I had to share my 'last authorship' position with 3186 others, an unprecedented number!

You might have guessed that this is not just a normal paper and that there is something weird going on here. Well, weird is not the right word; unusual is the word I would like to use, since this paper is an example of something that I hope will happen more often: citizen science. In citizen science, ordinary people without any scientific background or training help in a scientific experiment, for instance by collecting a small part of the data after some minimal instruction. This is wonderfully illustrated by the iSpex project, in which I contributed not as an epidemiologist, but as a citizen scientist. If you want to know more, just read what I previously wrote on this blog in the post 'measuring aerosols with your iPhone'.

So the researchers who initiated the iSpex project have now analysed their data and submitted the results to the journal Geophysical Research Letters, and as a bonus made all contributing citizen scientists co-authors. Cool!

Now let's get back to the question stated in the title… Did I deserve an authorship on this paper? Basically no: none of the 3187 citizen scientists fulfil the authorship criteria that I am used to (i.e. the ICMJE criteria), nor the criteria of the journal itself. I am no exception. However, I do believe that it is quite clear to any reader what the role of these citizen scientists was in this project. So this new form of authorship, i.e. 'gift authorship to a group of citizen scientists', is a cool way to keep the public engaged with science. A job well done!

New publication “Graphical presentation of confounding in directed acyclic graphs”

source: wikimedia.org

A new publication became available, again an 'educational', but this time on a new topic: the application of directed acyclic graphs (DAGs), a technique widely used in different areas of science. In fields ranging from computer science and mathematics to psychology, economics, and epidemiology, this specific type of graph has proven useful for describing the underlying causal structure of mechanisms of interest. This comes in very handy, since it can help to determine the sources of confounding for a specific epidemiological research question.

But isn't that what epidemiologists do all the time? What is new about these graphs, apart from fancy concepts such as colliders, edges, and backdoor paths? Well, the idea behind DAGs is not new; diagrams have been used in epidemiology for years, but each epidemiologist has his or her own way of drawing the relationships between the various factors. Did you ever get stuck in a discussion about whether something is a confounder or not? If you cannot resolve it by talking, you might want to draw your point of view in a diagram, only to find that your colleague is used to a different way of drawing epidemiological diagrams. DAGs resolve this: there is a clear set of rules that each DAG should comply with, and if it does, the DAG provides a clear overview of the sources of confounding and identifies a minimal set of variables to account for all confounding present.
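
Those rules are mechanical enough to put in code. Below is a small sketch on an invented three-variable DAG (age causes both smoking and stroke): it lists the backdoor paths from exposure to outcome and checks whether an adjustment set blocks them. It uses a simplified d-separation rule that ignores descendants of colliders, which is fine for toy graphs like this one.

```python
# Toy DAG (invented variables): age causes both smoking and stroke,
# so 'age' confounds the smoking -> stroke relation.
EDGES = {("age", "smoking"), ("age", "stroke"), ("smoking", "stroke")}

def neighbours(node):
    """All nodes connected to `node`, ignoring edge direction."""
    return {b for a, b in EDGES if a == node} | {a for a, b in EDGES if b == node}

def undirected_paths(x, y, path=None):
    """Yield every simple path from x to y in the undirected skeleton."""
    path = path or [x]
    if path[-1] == y:
        yield path
        return
    for nxt in neighbours(path[-1]):
        if nxt not in path:
            yield from undirected_paths(x, y, path + [nxt])

def blocked(path, z):
    """Simplified d-separation: blocked if some non-collider on the path
    is in Z, or some collider is outside Z (collider descendants ignored)."""
    for i in range(1, len(path) - 1):
        collider = ((path[i - 1], path[i]) in EDGES
                    and (path[i + 1], path[i]) in EDGES)
        if collider and path[i] not in z:
            return True
        if not collider and path[i] in z:
            return True
    return False

def open_backdoor_paths(x, y, z=frozenset()):
    """Backdoor paths: paths whose first edge points INTO the exposure x."""
    return [p for p in undirected_paths(x, y)
            if (p[1], p[0]) in EDGES and not blocked(p, z)]

print(open_backdoor_paths("smoking", "stroke"))           # open backdoor via age
print(open_backdoor_paths("smoking", "stroke", {"age"}))  # adjusting for age closes it
```

On graphs this small the result matches applying the backdoor criterion by hand; for real research questions, tools such as DAGitty do the same job on larger DAGs.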

So that's it… DAGs are a nifty method for speaking the same language while discussing the causal questions you want to resolve. The only thing that you and your colleague can now argue over is the validity of the assumptions encoded in the DAG you just drew. And that is called good science!

The paper, with first author MMS, appeared in the methodology series of the journal Nephrology Dialysis Transplantation; it can be found here in pdf, and also on my Mendeley account.

New publication in NTVG: Mendelian randomisation

Together with HdH and AvHV I wrote an article for the Dutch NTVG on Mendelian randomisation in their methodology series, which was published online today. This is not the first time I have written for this up-to-date series (not one but two papers on the crossover design), and I have also written on Mendelian randomisation before; in fact, that was one of the first 'educationals' I ever wrote. The strange thing is that I have never formally applied Mendelian randomisation in a paper. I did apply the underlying reasoning in a paper, but no two-stage least squares analyses or similar. Does this bother me? Only a bit; I think it just shows the limited value of formal Mendelian randomisation studies: you need a lot of power and untestable assumptions, which greatly reduces the applicability of the method in practice. However, the underlying reasoning gives good insight into the origin and effects of confounding (and perhaps even other forms of bias) in epidemiological studies. That is why I love Mendelian randomisation: it is just another tool in the epidemiologist's toolbox.
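
For readers who wonder what such a two-stage least squares analysis looks like, here is a minimal simulated sketch (all effect sizes invented): a genetic variant serves as the instrument, an unmeasured confounder biases the naive regression, and the two stages recover the causal effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Simulated Mendelian randomisation setting (all numbers invented):
# a genetic variant G influences the exposure, an unmeasured confounder U
# influences both exposure and outcome; the true causal effect is 0.3.
G = rng.binomial(2, 0.3, n)                  # genotype: 0/1/2 risk alleles
U = rng.normal(0, 1, n)                      # unmeasured confounder
exposure = 0.5 * G + U + rng.normal(0, 1, n)
outcome = 0.3 * exposure + U + rng.normal(0, 1, n)

# Naive regression of outcome on exposure is confounded by U.
naive = np.cov(exposure, outcome)[0, 1] / np.var(exposure)

# Two-stage least squares: stage 1 predicts the exposure from G,
# stage 2 regresses the outcome on the predicted exposure.
stage1 = np.polyfit(G, exposure, 1)
exposure_hat = np.polyval(stage1, G)
stage2 = np.polyfit(exposure_hat, outcome, 1)

print(f"naive estimate: {naive:.2f}")      # biased upwards by U
print(f"2SLS estimate:  {stage2[0]:.2f}")  # close to the true 0.3
```

The sketch also shows the power problem mentioned above: even with 20,000 simulated participants, the instrument explains only a small part of the exposure, so the 2SLS estimate is much noisier than the naive one.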

The NTVG paper can be found here on their website (here in pdf) and also on my Mendeley account.

Moving to Berlin!

After about 8 years of learning and working in Leiden at the LUMC, it is time for something new. I have a new job as the head of the 'Clinical Epidemiology and Health Services Research in Stroke' unit at the Center for Stroke Research Berlin (CSB, http://www.schlaganfallcentrum.de). This is a very exciting opportunity for me: working with new colleagues on new projects, learning more about stroke research, and strengthening the epidemiological studies that are carried out at the CSB. I am looking forward to working with these brilliant and creative minds, especially the people on the CEHRIS team.

Moving to Berlin means I will have to leave Leiden, which I do regret. Not only because of the great research, but also because of the students and co-workers. Fortunately, I think this new chapter in my academic life will provide ample opportunity to start new collaborations between Berlin and Leiden.

preconference workshop ‘crash course peer review’ cancelled

I worked together with some partners on a new workshop for young epidemiologists. The title says it all: WEON preconference workshop 'crash course peer review'.

Unfortunately, we had to cancel the workshop because the number of participants was too low to justify the effort of not only myself, but especially all the other teachers. It is a pity that we had to cancel, but by cancelling now we keep a fresh start for whenever we want to try again in a different format.

Whilst preparing this workshop I noticed that peer review (or, a better term, refereeing) is not popular. It is seen as a task that takes up too much time, with too many political consequences and little reward. New initiatives like PubMed Commons and other post-publication peer review systems are regarded by some as answers to some of these problems. But what is the future of refereeing when young epidemiologists are not intrinsically motivated to contribute time and effort to the publication process? Only time will tell.

For those who are still interested in this crash course, please contact me via email.
Research in the media

Research in the media! It is not my own research, but these two newspaper articles are related to my research. The first article (pdf) is on the role of helmets for scooter riders. This is linked to the publication on the risks related to motorised two-wheel vehicle crashes (click here for the PubMed entry).

The second article, from the same edition of the NRC, is related to the topic of my thesis. It is about the role of FXII in thrombosis, based on a publication by Thomas Renne et al. in Science Translational Medicine. Antibodies against FXII downregulate the pathological thrombogenesis during extracorporeal circulation. These antibodies might be used in the prevention of clots during heart-lung surgery, but might also be applied in the prevention of thrombosis, both arterial and venous. Click here (pdf) for the NRC newspaper article, and here for the original research by Renne et al.


New article: the intrinsic coagulation proteins and the risk of arterial thrombosis

I got good news today! A manuscript on the role of the intrinsic coagulation factors in the causal mechanisms leading to myocardial infarction and ischaemic stroke has been accepted for publication by the JTH. It took some time, but in the end I am very glad that this paper was published in the JTH, because its readership is both clinical and biomedical: just the place where I feel most at home.

The basic message? These factors do contribute to the risk of ischaemic stroke, but not to the risk of myocardial infarction. This is mostly the case for coagulation factor XI, which is a nice finding, because it could be a new target for antithrombotic therapies.

The article is now in print and will be made available soon. In the meantime, you can refer to my thesis, in which this research was also described.

Retraction Watch – blog on retractions of scientific articles

I have been a fond reader of Retraction Watch for over a year now. It is quite interesting to read the reports of how science corrects its own mistakes. Sometimes it is just plain old fraud, as in the case of Stapel and other Dutch researchers. But sometimes the stories behind the retractions show that 'legitimate mistakes' can also lead to a retraction, for example this retraction from Genes and Development in which "it's quite clear there isn't even a whiff of misconduct or fraud". Please check out the Retraction Watch blog, or read an interview with one of its founders that appeared in de Volkskrant.