Leaving Berlin, returning to Leiden

Minerva, patron of Leiden University, photographed by Erwin Olaf (Lakenhal collection)

It is time.

After almost six years in Berlin, it is time to move on. And when I say move on, I mean move back to Leiden to work at my old alma mater, Leiden University / the Leiden University Medical Center. The move is mainly driven by personal reasons – it will be great for my family to be closer to our friends and extended families.

But there is also an exciting job waiting for me, focused on the theme of the quality and integrity of science. For 50% of my time, I will be appointed as an assistant professor at the Department of Clinical Epidemiology, where I will set up a Q&I (meta)research line. The other 50% of my time, I will be working at the “directorate of research”, the team that supports LUMC researchers in general and the dean specifically. There, I will be responsible for the new program “Quality and Integrity of science”. The idea behind that program is that I will come up with, execute, and evaluate several interventions – big and small, some visible, some not – to improve how science is done at the LUMC.

I cannot provide any details, as they are simply not yet known. First, it is time to wrap up my different projects here, all whilst working under coronavirus pandemic circumstances. That makes these last weeks bittersweet – looking forward to a new chapter, whilst realizing what a great time I had in Berlin. I learned so much, was able to do so many things, and worked with so many interesting and smart people.

I will miss Berlin dearly.

Three new papers – part III

As explained here and here, I am temporarily combining the announcements of published papers into one blog post to save some time. This is part III, where I focus on ordinal outcomes. Of all the recent papers, these are the most exciting to me, as they really bring something new to the field of thrombosis and COVID-19 research.

Measuring functional limitations after venous thromboembolism: Optimization of the Post-VTE Functional Status (PVFS) Scale. I have written about our call to action before, and this is the follow-up paper, with research primarily done at the LUMC. With input from patients as well as 50+ experts through a Delphi process, we were able to optimize our initial scale.

Confounding adjustment performance of ordinal analysis methods in stroke studies. In this simulation study, we show that ordinal data from observational studies can also be analyzed with a non-parametric approach. The benefit: it allows us to analyze the data without relying on the proportional odds assumption and still get an easy-to-understand point estimate of the effect.
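To make the non-parametric idea concrete, here is a minimal sketch of one such easy-to-understand summary, the probabilistic index – the chance that a randomly drawn patient from one group has a better outcome than one from the other group. The mRS values and group labels are made up for illustration, and whether this is the exact estimand used in the paper is something to check in the paper itself.

```python
import numpy as np

# Hypothetical mRS scores (0 = full recovery ... 6 = death) in two groups.
treated = np.array([0, 1, 1, 2, 2, 3, 4, 6])
control = np.array([1, 2, 2, 3, 4, 4, 5, 6])

# Compare every treated patient with every control patient; the
# probabilistic index is the chance that a random treated patient has a
# better (lower) mRS than a random control patient, with ties split.
diff = treated[:, None] - control[None, :]
prob_index = (diff < 0).mean() + 0.5 * (diff == 0).mean()

print(f"P(treated outcome better than control): {prob_index:.2f}")
```

No proportional odds assumption is needed for this estimate; it is defined directly on the ordinal scale.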

The Post-COVID-19 Functional Status (PCFS) Scale: a tool to measure functional status over time after COVID-19. In this letter to the European Respiratory Journal, written with colleagues from Leiden, Maastricht, Zurich, Mainz, Hasselt, Winterthur, and of course Berlin, we propose to use a scale that is basically the same as the PVFS to monitor and study the long-term consequences of COVID-19.

Three new papers published – part II

In my last post, I explained why I am not writing one post per new paper at the moment. Instead, I group them. This time with a common denominator, namely the role of cardiac troponin in stroke:

High-Sensitivity Cardiac Troponin T and Cognitive Function in Patients With Ischemic Stroke. This paper finds its origins in the PROSCIS study, in which we studied other biomarkers as well. In fact, there is a whole lot more coming. The analyses of these longitudinal data showed a – let’s say ‘medium-sized’ – relationship between cardiac troponin and cognitive function. There are a whole lot of caveats – a presumptive learning curve, and not a big drop in cognitive function to work with anyway. After all, these are only mildly to moderately affected stroke patients.

Association Between High-Sensitivity Cardiac Troponin and Risk of Stroke in 96 702 Individuals: A Meta-Analysis. This paper investigates several populations – the general population, people at increased risk, and stroke patients. The number of individuals in the title might, therefore, be a little bit deceiving – I think you should really only look at the results with those separate groups in mind. Not only might the biology be different; the methodological aspects (e.g. heterogeneity) and interpretation (relative risks with high absolute risks) also differ.

Response by Siegerink et al to Letter Regarding Article, “Association Between High-Sensitivity Cardiac Troponin and Risk of Stroke in 96 702 Individuals: A Meta-Analysis”. We did the meta-analysis as much as possible “by the book”. We pre-registered our plan and published accordingly, all to discourage ourselves (and our peer reviewers) from hunting for specific results. But then there was a letter to the editor with the following central point: in the subgroup of patients with atrial fibrillation, the cut-offs used for cardiac troponin are so different that pooling these studies in one analysis does not make sense. At first glance, it looks like the authors have a point: it is difficult to get a very strict interpretation from the results that we obtained. This paper describes our response. Hint: upon closer inspection, we do not agree and make a good counterargument (at least, that’s what we think).

Three new papers published

Normally I publish a new post for each new paper that we publish. But with COVID-19, normal does not really work anymore. Still, I don’t want to completely throw my normal workflow overboard. Therefore, a quick update on a couple of publications, all in one blog post, yet without a common denominator:

Stachulski, F., Siegerink, B. and Bösel, J. (2020) ‘Dying in the Neurointensive Care Unit After Withdrawal of Life-Sustaining Therapy: Associations of Advance Directives and Health-Care Proxies With Timing and Treatment Intensity’, Journal of Intensive Care Medicine. A paper about the role of advance directives and treatment in the neurointensive care unit. Not a topic I normally publish about, as the severity of disease in these patients is luckily not what we usually see in stroke patients.

Impact of COPD and anemia on motor and cognitive performance in the general older population: results from the English longitudinal study of ageing. This paper makes use of the ELSA study – an open-access database – and hinges on the idea that sometimes two risk factors only lead to the progression of disease/symptoms if they work jointly. This idea behind interaction is often “tested” with a simple statistical interaction model. There are many reasons why this is not the best thing to do, so we also looked at biological (or additive) interaction.

Thrombo-Inflammation in Cardiovascular Disease: An Expert Consensus Document from the Third Maastricht Consensus Conference on Thrombosis. This is a hefty paper, with just as many authors as pages, it seems. But this is not a normal paper – it is the consensus statement of the thrombosis meeting in Maastricht last year. I really liked that meeting, not only because I got to see old friends, but also because a number of ideas and papers were the product of this meeting. This paper is, of course, one of them. After this one, some papers on the development of an ordinal outcome for functional status after venous thrombosis will follow – but they will be part of a later blog post.

New paper – Improving the trustworthiness, usefulness, and ethics of biomedical research through an innovative and comprehensive institutional initiative

I report often on this blog about new papers that I have co-authored. Every time I highlight something that is special about that particular publication. This time I want to highlight a paper that I co-authored, but also didn’t. Let me explain.

https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3000576#sec014

The paper, titled “Improving the trustworthiness, usefulness, and ethics of biomedical research through an innovative and comprehensive institutional initiative”, was published in PLOS Biology and describes the QUEST center. The author list mentions three individual QUEST researchers, but it also has this interesting “on behalf of the QUEST group” author reference. What does that actually mean?

Since I have reshuffled my research, I am officially part of the QUEST team, and therefore I am part of that group. I gave some input on the paper, like many of my colleagues, but nowhere near enough to justify full authorship. That would, after all, require the following 4(!) elements, according to the ICMJE:

  • Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND
  • Drafting the work or revising it critically for important intellectual content; AND
  • Final approval of the version to be published; AND
  • Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

This is what the ICMJE says about large author groups: “Some large multi-author groups designate authorship by a group name, with or without the names of individuals. When submitting a manuscript authored by a group, the corresponding author should specify the group name if one exists, and clearly identify the group members who can take credit and responsibility for the work as authors. The byline of the article identifies who is directly responsible for the manuscript, and MEDLINE lists as authors whichever names appear on the byline. If the byline includes a group name, MEDLINE will list the names of individual group members who are authors or who are collaborators, sometimes called non-author contributors, if there is a note associated with the byline clearly stating that the individual names are elsewhere in the paper and whether those names are authors or collaborators.”

I think that this format should be used more, but that will only happen if people take the collaborator status seriously as well. Other “contribution solutions” can help to give some insight into what it means to be a collaborator, such as a detailed description like in movie credits or a standardized contribution table. We have to start appreciating all forms of contributions.

On the value of data – routinely vs purposefully

I listen to a bunch of podcasts, and “The Pitch” is one of them. In that podcast, entrepreneurs of start-up companies pitch their ideas to investors. Not only is it amusing to hear some of these crazy business ideas, but the podcast also helps me understand how professional life works outside of science. One thing I learned is that it is OK, if not expected, to oversell by about a factor of 142.

Another thing that I learned is the apparent value of data. The value of data seems to be undisputed in these pitches. In fact, the product or service the company is selling or providing is often only a byproduct: collecting data about their users, which can subsequently be leveraged for targeted advertising, seems to be the big play in many start-up companies.

I think this type of “value of data” is what it is: whatever the investors want to pay for that type of data is what it is worth. But it got me thinking about the value of the data that we collect in medical research. Let us first take a look at routinely collected data, which can be very cheap to collect. But what is the value of that data? The problem is that routinely collected data is often incomplete, rife with error, and can lead to enormous biases – both information bias and selection bias. Still, some research questions can be answered with routinely collected data – as long as you make some real efforts to think about your design and analyses. So, there is value in routinely collected data, as it can provide a first glance into the matter at hand.

And what is the case for purposefully collected data? The idea is that this data is much more reliable: trained staff collect the data in a standardised way, resulting in datasets without many errors or holes. The downside is the “purpose”, which often limits the scope and thereby the amount of data collected per included individual. This is most obvious in randomised clinical trials, in which millions of euros are often spent to answer one single question. Trials often do not have the precision to provide answers to other questions. So it seems that the data can lose its value after answering that single question.

Luckily, many efforts have been made to let purposefully collected data keep some of its value even after it has served its purpose. Standardisation efforts between trials now make it possible to pool the data and thus obtain higher precision. A good example from the field of stroke research is the VISTA collaboration, i.e. the Virtual International Stroke Trials Archive. Here, many trials – and later some observational studies – are combined to answer research questions with a precision that would otherwise never be possible. This way, we can answer questions with high-quality purposefully collected data in numbers otherwise unthinkable.

This brings me to a recent paper we published with data from the VISTA collaboration: “Early in-hospital exposure to statins and outcome after intracerebral haemorrhage”. The underlying question – whether and when statins should be initiated or continued after ICH – is clinically relevant but also limited in scope and impact, so is it justified to start a trial? We took the easier and cheaper route and analysed the data from VISTA. We conclude that

… early in-hospital exposure to statins after acute ICH was associated with better functional outcome compared with no statin exposure early after the event. Our data suggest that this association is particularly driven by continuation of pre-existing statin use within the first two days after the event. Thus, our findings provide clinical evidence to support current expert recommendations that prevalent statin use should be continued during the early in-hospital phase.

link

And this shows the limitations of even well-collected RCT data: as long as the exposure of interest is preferentially provided to a certain subgroup (i.e. confounding by indication), you can never really be certain about the treatment effects. To solve this, we would need to break the bond between the exposure and any other clinical characteristic, i.e. randomize. That remains the gold standard for studying the intended effects of treatments. Still, our paper provided a piece of the puzzle and gave more insight, from data that retained some of its value due to standardisation and pooling. But there is no dollar value that we can put on medical research data – routinely or purposefully collected alike – as it all depends on the question you are trying to answer.

Our paper, with JD in the lead, was published last year in the European Stroke Journal, and can be found here as well as on my Publons profile and Mendeley profile.

The story of a paper on the relationship between cancer and stroke that is both new and not so new.

Science is not quick. In fact, it is slow most of the time. Therefore, most researchers work on multiple papers at the same time. This is not necessarily bad, as parallel activities can be leveraged to increase the quality of the different projects. But sometimes this approach leads to significant delays. Imagine a paper that is basically done, and then during the peer review process, all the lead figures in the author team get different positions. Perhaps a Ph.D. student moves institutes for a post-doc, or junior doctors finish their training and set up their own practices, or start their demanding clinical duties in an academic medical center. All these steps are understandable and good for science in general but can hurt the speediness of individual papers.

This happened, for example, with a recently published paper from the Dutch PSI study. I say recently published because the work started more than 5 years ago and has been more or less finished for the majority of that time. In this paper, we show that cancer prevalence is higher in stroke patients. But not all cancers are affected: it is primarily bladder cancer and head and neck cancers. This might be explained by the shared risk factor smoking (bladder cancer, respiratory tract) and perhaps cancer treatment (central nervous system / head and neck cancer). Not world-shocking results with direct clinical implications, but relevant if you want a clear understanding of the consequences of cancer.


Now don’t get me wrong, I am very glad that we, in the end, got all our ducks in a row and found a good place for the paper to be published. But the story is also a good warning: it was the willpower of some in the team that made this happen. Next time such a situation comes around, we might not have the right people with the right amount of willpower to keep going with a paper like this.

How to avoid this? Is “pre-print” the solution? I am not sure. On the surface, it indeed seems the answer, as it will at least give others the chance to see the work we did. But I am a firm believer that some form of peer review is necessary – just ‘dumping’ papers on a pre-print server is really a non-solution, and I am afraid that in such a culture the drive to get things formally published will only diminish once manuscripts are already in the public domain. Post-publication peer review then? I am also skeptical here, as the idea of pre-publication peer review is so deeply embedded within the current scientific enterprise that I do not see post-publication peer review playing a big role anytime soon. The lack of incentives for peer review – let alone post-publication peer review – is really not helping us make the needed changes any sooner.


Luckily, there is a thing called intrinsic motivation, and I am glad that JW and LS had enough of it to get this paper published. The paper, titled “Cancer prevalence higher in stroke patients than in the general population: the Dutch String-of-Pearls Institute (PSI) Stroke study”, is published in the European Journal of Neurology and can be found on PubMed, as well as via my Mendeley and Publons profiles.

Helping patients to navigate the fragmented healthcare landscape in Berlin: the NAVICARE stroke-atlas

The cover of the Berlin Stroke Atlas

Research on healthcare delivery can only do so much to improve the lives of patients. Identifying the weak spots is important to start off with, but it is not going to help patients one bit if they don’t get information that is actually useful, let alone in time.

It is for that reason that the NAVICARE project not only focusses on doing research, but also on providing information to patients, as well as on bringing healthcare providers together in the NAVICARE network. The premise of NAVICARE is that somehow we need to help patients navigate the fragmented healthcare landscape. We do so by using stroke and lung cancer as model diseases – prototypical diseases that help us focus our attention.

One deliverable is the stroke atlas: a document that lists different healthcare providers – and other people and organizations – who can help you in the broadest sense possible once you or a loved one is affected by a stroke. This stroke atlas, in conjunction with our personal approach at the stroke service point of the CSB/BSA, will help our patients. You can find the stroke atlas here (in German, of course).

But this is only a first step. The navigator model is currently being developed further, for which NAVICARE received additional funding this summer. I will not be part of those steps (see my post on my reshuffled research focus), but others at the CSB will.

Five years in Berlin and counting – reshuffling my research

I started working at the CSB about 5 years ago. I took over an existing research group, CEHRIS, which provided services to other research groups in our center. Data management, project management, and biostatisticians who worked on both clinical and preclinical research were all part of this team. My own research was a bit on the side, including old collaborations with Leiden and a new Ph.D. project with JR.

But then, in the early summer of 2018, things started to change. The generous funding under the IFB scheme ran out, and CSB 3.0 had to switch to a skeleton crew. Now, for most research activities this had no direct impact, as funding for many key projects did not come from the CSB 2.0 grant. However, a lot of the services that made our researchers perform at peak capability were hit. This included my team: CEHRIS, the service group ready to help other researchers, was no more.

But I stayed on, and I used the opportunity to focus my efforts on my own interests. I detached myself from projects that I had inherited but was not so engaged with, and engaged myself with projects that interested me. This was, of course, a process of many months, starting at the end of 2017. I feel now that it is time to share with you that I have a clear idea of what my new direction is. It boils down to this:

My stroke research focuses on three projects in which we collect(ed) data ourselves: PROSCIS, BSPATIAL, BELOVE. The data collection in each of these projects is in a different phase, and more papers will be coming out of these projects sooner rather than later. Next to this, I will also help to analyze and publish data from others – that is, after all, what epidemiologists do. My methods research remains a bit of a hodgepodge where I still need to find focus and momentum. The problem here is that funding for this type of research has been lacking so far and will always be difficult to find – especially in Germany. But many ideas that came from the stroke projects have ripened into methodology working papers and abstracts, hopefully resulting in fully published papers quite soon. The third pillar is formed by the meta-research activities that I undertake with QUEST. Until now, these activities were a bit of a hobby, always on the side. That has changed with the funding of SPOKES.

SPOKES is a new project that wants to improve the way we do biomedical research, especially translational research. Just pointing at the problem (meta-research) or new top-down policy (ivory tower alert) is not enough. There has to be time and money for early and mid-career researchers to chip in on the process as well. SPOKES is going to facilitate that by making both time and money available. This starts with dedicated time and money for myself: I now work one day a week with the QUEST team. I will provide more details on SPOKES in a later post, but for now, I will just say that I am looking forward to this project within the framework of the Wellcome Trust Translational Partnership.

So there you have it, the three new pillars of my research activities in a single blog post. I have decided to drop the name CEHRIS to show that the old service-focused research group is no more. I have been struggling with choosing a new name, but in the end I settled for the German standard “AG-Siegerink”. Part lack of imagination, part laziness, and part underlining that there are three related but distinct research lines within the group.

Up to the next 5 years!?

STEMO, our stroke ambulance, has had a bumpy ride…

STEMO in front of our clinic, source.

Phew, there has been quite some excitement when it comes to STEMO, the stroke ambulance in Berlin. The details are too specific – and way too German – for this blog, but the bottom line is this: during our evaluation of STEMO, we noticed that STEMO was not always used as it should be. And if you do not use a tool as you should, it is not half as effective. So we keep trying to improve how STEMO is used in Berlin, even while the evaluation is ongoing.

We need to take these changes into account, so we wrote a new plan to evaluate STEMO, which was published open access in the new BMC journal Neurological Research and Practice. The money to continue the evaluation was secured and we thought we were ready to go. But then reality set in: during budget negotiations, a lower committee of the Berlin Senate simply said “NO” to STEMO. A day later, however, the Mayor of Berlin used a “Machtwort”, an informal veto, to say that STEMO will be kept in the budget in order to finish the formal evaluation.

A true rollercoaster, which shows how directly our research has an impact on society. The numerous calls, tweets, and emails we have received in support of our now 3 STEMO ambulances over the last couple of weeks underline this even more (just the fact that a complete stranger started a petition with all nuances of the case taken into account is mind-boggling!). But the science has to speak, and we still need to definitively evaluate the effectiveness of STEMO when used as it should be – something we will do over the next months with renewed energy in the whole team.

Auto-immune antibodies and their relevance for stroke patients – a new paper in Stroke

Kaplan-Meier curves for CVD and mortality after stroke, stratified by serostatus for the anti-NMDAR autoantibody. Taken from doi: 10.1161/STROKEAHA.119.026100

We recently published one of our projects embedded within the PROSCIS study. This follow-up study, which includes 600+ men and women with acute stroke, forms the basis of many active projects in the team (1 published, many coming up).

For this paper, PhD candidate PS measured autoantibodies against the NMDA receptor. Previous studies suggested that having these antibodies might be a marker of, or even induce, a kind of neuroprotective effect. That is not what we found: we showed that seropositive patients, especially those with the highest titers, have a 3-3.5-fold increase in the risk of a worse outcome, as well as an almost 2-fold increased risk of CVD and death following the initial stroke.

Interesting findings, but some elements of our design do not allow us to draw very strong conclusions. One of them is the uncertainty of the seropositivity status of the patient over time. Are the antibodies actually induced over time? Are they transient? PS has come up with a solid plan to answer some of these questions, which includes measuring the antibodies at multiple time points just after stroke. Now, in PROSCIS we only have one blood sample, so we need to use biosamples from other studies that were designed with multiple blood draws. The team of AM was equally interested in the topic, so we teamed up. I am looking forward to following up on the questions that our own research brings up!

The effort was led by PS and most praise should go to her. The paper is published in Stroke and can be found online via PubMed or via my Mendeley profile (doi: 10.1161/STROKEAHA.119.026100).

Update January 2020: There was a letter to the editor regarding our paper. We wrote a response.

Now hiring!

The text below is the English version of the official and very formal German text.

The QUEST center is looking for a project manager for the SPOKES project. SPOKES is part of the Wellcome Trust translational partnership program and aims to “Create Traction and Stimulate Grass-Root Activities to Promote a Culture of Translation Focused on Value”. SPOKES will be looking for grassroots activities from early and mid-career scientists who want to sustainably increase the value of the research in their own field.

The position will be located within the QUEST Center for Transforming Biomedical Research at the Berlin Institute of Health (BIH). The goal of QUEST is to optimize biomedical research in terms of sound scientific methodology, bio-ethics and access to research.

SPOKES is a new program organized by the QUEST Team at the Berlin Institute of Health. SPOKES enables our own researchers at the Charité / BIH to improve the way we do science. Your task is to identify and support these scientists. More specifically, we expect you to:

  • Promote the program within the BIH research community (interviews, newsletters, social media, events, etc)
  • Find the right candidates for this program (recruiting and selection)
  • Organize the logistics and help prepare the content of all our meetings (workshops, progress meetings, symposia, etc)
  • Support the selected researchers in their projects where possible (design, schedule and execute)

Next to this, there is an opportunity to perform some meta-research yourself.

We are looking for somebody with

  • A degree in biomedical research (MD, MSc, PhD or equivalent)
  • Proficiency in both English and German (both minimally C1)
  • Enthusiasm for improving science – if possible demonstrated by previous courses or other activities

Although no formal training as a project manager is required, we are looking for people who have some experience in setting up and running projects of any kind that involve people with different (scientific) backgrounds.

Intrinsic Coagulation Pathway, History of Headache, and Risk of Ischemic Stroke: a story about interacting risk factors

Yup, another paper from the long-standing collaboration with Leiden. This time, it was PhD candidate HvO who came up with the idea to take a look at the risk of stroke in relation to two risk factors that independently increase that risk. So what is the new part of this paper? It is the interaction between the two.

Migraine is a known risk factor for ischemic stroke in young women. Previous work also indicated that increased levels of the intrinsic coagulation proteins are associated with an increase in ischemic stroke risk. Both roughly double the risk. So what does the combination do?

Let us take a look at the results of the analyses in the RATIO study. High antigen levels of coagulation factor XI are associated with a relative risk of 1.7. A history of severe headache doubles the risk of ischemic stroke. So what can we expect if both risks just add up? Well, we need to take into account the baseline risk that everybody has, which is an RR of 1. Then we add the excess risk, in terms of RR, of the two risk factors. For FXI this is (1.7 - 1 =) 0.7. For headache it is (2.0 - 1 =) 1.0. So we would expect an RR of (1 + 0.7 + 1.0 =) 2.7. However, we found that women who had both risk factors had a 5-fold increase in risk – more than what can be expected under additivity.
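For the record, this is just the additive-interaction arithmetic from the paragraph above written out as a tiny script (the RRs are the point estimates quoted above; RERI is the relative excess risk due to interaction):

```python
# Point estimates from the RATIO analyses quoted above.
rr_fxi = 1.7       # high factor XI antigen levels alone
rr_headache = 2.0  # history of severe headache alone
rr_observed = 5.0  # both risk factors present

# Under additivity, excess risks (RR - 1) add up on top of the baseline RR of 1.
rr_expected = 1 + (rr_fxi - 1) + (rr_headache - 1)   # = 2.7

# RERI > 0 indicates super-additive (biological) interaction.
reri = rr_observed - rr_fxi - rr_headache + 1        # = 2.3

print(f"expected under additivity: {rr_expected:.1f}, observed: {rr_observed:.1f}")
print(f"RERI: {reri:.1f}")
```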

For those who are keeping track, I am of course talking about additive interaction, sometimes referred to as biological interaction. This concept is quite different from statistical interaction, which – for me – is a useless thing to look at when your underlying research question is of a causal nature.

What does this mean? You could interpret this as meaning that some women only develop the disease because they are exposed to both risk factors. In some way, that combination becomes a third ‘risk entity’ that increases the risk in the population. How that works on a biochemical level cannot be answered with this epidemiological study, but some hints from the literature do exist, as we discuss in our paper.

Of course, some notes have to be taken into account. In addition to the standard limitations of case-control studies, two things stand out. Because we study the combination of two risk factors, the precision of our study is relatively low. But then again, what other study is going to answer this question? The absolute risk of ischemic stroke is too low in the general population to perform prospective studies, even when enriched with loads of migraineurs. Another suboptimal thing is that the questionnaires used do not allow us to conclude that the women who report severe headache actually have migraine. Our assumption is that many – if not most – do. Even though mixing ‘normal’ headaches with migraines in one group would only lead to an underestimation of the true effect of migraine on stroke risk, we still have to be careful and therefore stick to the term ‘headache’.

HvO took the lead in this project, which included two short visits to Berlin supported by our Virchow scholarship. The paper has been published in Stroke and can be seen ahead of print on their website.

Migraine and venous thrombosis: Another important piece of the puzzle

Asking the right question is arguably the hardest thing to do in science, or at least in epidemiology. The question that you want to answer dictates the study design, the data that you collect, and the type of analyses you are going to use. Often, especially in causal research, this means scrutinizing how you should frame your exposure/outcome relationship. After all, there needs to be positivity and consistency, which you can only ensure through “the right research question”. Of note, the third assumption for causal inference, i.e. exchangeability, conditional or not, is something you can pursue through study design and analyses. But there is a third part of an epidemiological research question that makes all the difference: the domain of the study, as is so elegantly displayed by the cartoon “Today’s Random Medical News” or the Twitter hashtag “#inmice“.

The domain is the type of individuals to which the answer has relevance. Often, the domain has a one-to-one relationship with the study population. This is not always the case, as sometimes the domain is broader than the study population at hand. A strong example: you could use young male infants to get a good estimate of the distribution of genotypes for a case-control study of venous thrombosis in middle-aged women. I am not saying that such a case-control study has the best design, but there is a case to be made, especially if we can safely assume that the genotype distribution is not sex-chromosome dependent and has not shifted through the different generations.

The domain of the study is not only important if you want to know to whom the results of your study are actually relevant, but also if you want to compare the results of different studies. (As a side note, keep in mind the absolute risks of the outcome that come with the different domains: they highly affect how you should interpret the relative risks.)

Sometimes, studies look like they fully contradict each other. One study says yes, the other says no. What to conclude? Who knows! But are you sure both studies actually answer the same question? Comparing the way the exposure and the outcome are measured in the two studies is one thing – an important thing at that – but it is not the only thing. You should also take potential differences and similarities between the domains of the studies into account.

This brings us to the paper by KA and myself that was just published in the latest volume of RPTH. In fact, it is a commentary written after we reviewed a paper by Folsom et al., who did a very thorough job of analyzing the relationship between migraine and venous thrombosis in the elderly. They convincingly show that there is no relationship, in apparent contrast to previous papers. So we asked ourselves: “Why did the study by Folsom et al report findings in apparent contrast to previous studies?”

There is, of course, the possibility of just chance. But next to this, we should consider that the analyses by Folsom et al. look at the long-term risk in an older population. The other papers looked at a shorter term, and in a younger population, in which migraine is most relevant, as migraine often goes away with increasing age. KA and I argue that both studies might just be right, even though they are in apparent contradiction. Why should it not be possible that there is a transient increase in thrombosis risk when migraines are most frequent and severe, and no long-term increase in risk in the elderly, an age at which most migraineurs report less frequent and less severe attacks?

The lesson of today: do not look only at the exposure or the outcome when you want to bring the evidence of two or more studies into one coherent theory. Look at the domain as well, or you might just dismiss an important piece of the puzzle.

medRxiv: the pre-print server for medicine

Pre-print servers are a place to share your academic work before actual peer review and subsequent publication. They are not completely new to academia, as many different disciplines have adopted pre-print servers to quickly share ideas and keep the academic discussion going. Many have praised the informal peer review that you get when you post on pre-print servers, but I primarily like the speed.

But medicine is not one of those disciplines. Up until recently, the medical community had to use bioRxiv, a pre-print server for biology. Very unsatisfactory, as the fields are just too far apart and the idiosyncrasies of the medical sciences bring some extra requirements (e.g. ethical approval, trial registration, etc.). So here comes medRxiv, from the makers of bioRxiv, with support from the BMJ. Let’s take a moment to listen to the people behind medRxiv explain the concept themselves.

source: https://www.medrxiv.org/content/about-medrxiv

I love it. I am not sure whether it will be adopted by the community at the same pace as in some other disciplines, but doing nothing will never be part of the way forward. Critical participation is the only way.

So, that’s what I did. I wanted to be part of this new thing and convinced my co-authors to use the pre-print concept. I focussed my efforts on the paper in which we describe the BeLOVe study. This is a big cohort we are currently setting up, and in a way it is therefore well suited for pre-print servers. The pre-print server allows us, without restrictions on word count, appendices, tables or graphs, to describe what we want at the level of detail of our choice. The speediness is also welcome, as we want to inform the world of our efforts while we are still in the pilot phase and still able to tweak the design here or there. And that is actually what happened: after being online for a couple of days, our pre-print had already sparked some ideas in others.

Now we have to see how much effort it took us, and how much benefit we drew from this extra effort. It would be great if all journals would permit pre-prints (not all do…) and if submitting to a journal would be just a “one click” kind of effort after jumping through the hoops for medRxiv.

This is not my first pre-print. For example, the paper that I co-authored on the timely publication of trials from Germany was posted on bioRxiv. But being the guy who actually uploads the manuscript is a whole different feeling.

REWARD | EQUATOR Conference 2020 in Berlin

https://www.reward-equator-conference-2020.com

Almost 5 years ago something interesting happened in Edinburgh. REWARD and EQUATOR teamed up and organized a joint conference on “Increasing value and reducing waste in biomedical research”. Over the last five years, that topic has dominated meta-research and research-improvement activities all over the world. Now, 5 years later, it is time again for another REWARD and EQUATOR conference, this time in Berlin. And I have the honor to serve on the local organizing committee.

My role is so small that the LOC is currently not even mentioned on the website. But the website does show some other names, promising a great event! It starts with the theme, which is “Challenges and opportunities for Improvement for Ethics Committees and Regulators, Publishers, Institutions and Researchers, Funders – and Methods for measuring and testing Interventions”. That is not as sexy a title as 5 years ago, but it shows that the field has outgrown the alarmist phase and is now looking for real and lasting changes for the better – a move I can only encourage. See you in Berlin?


Results dissemination from clinical trials conducted at German university medical centers was delayed and incomplete.

My interests are broader than stroke, as you can see from my tweets as well as my publications. I am interested in how the medical scientific enterprise works – and, more importantly, how it can be improved. The latest paper looks at both.

The paper, with the relatively boring title “Results dissemination from clinical trials conducted at German university medical centres was delayed and incomplete”, is a collaboration with QUEST, carried by DS and his team. The short form of the title might just as well have been “RCTs don’t get published, and even if they do, it is often too late.”

Now, this is not a new finding, in the sense that older publications also showed high rates of non-publication. Newer activities in this field, such as the trial trackers for the FDAAA and the EU, confirm this idea. The cool thing about these newer trackers is that they rely on continuous data collection through bots that crawl all over the interwebs to look for new trials. This upside has a couple of downsides though: being constantly updated, these trackers do not work that well as a benchmarking tool. Second, they might miss some obscure types of publication, which might lead to underestimation of reporting. Third, to keep the trackers simple, they tend to use only one definition of what counts as “timely publication”, even though neither the field nor the guidelines are conclusive.

So our project is something different. To get a good benchmark, we looked at whether trials executed by/at German university medical centers were published in a timely fashion. We collected the data automatically as far as we could, but also did a complete double check by hand to ensure we didn’t skip publications (hint: we did miss some – hand searching is important, potentially because of the language issue). Then we put all the data in a database and made a Shiny app, so that readers can decide for themselves which definitions and subsets they are interested in. The bottom line: on average, only ~50% of trials get published within two years after their formal end. That is too little and too slow.

shiny app

This is a cool publication because it provides a solid benchmark that truly captures the current state. Now it is up to us, and the community, to improve our reporting. We should track progress in the upcoming years with automated trackers, and in 5 years or so do the whole manual tracking once more. But that is not the only reason why it was so inspiring to work on this project; it was the diverse team of researchers from many different groups that made the work fun to do. The discussions we had on the right methodology were complex and even led to an ancillary paper by DS and his group. The way this publication was published in the most open way possible (open data, preprint, etc.) was also a good experience.

The paper is here on PubMed, the project page on OSF can be found here, and the preprint is on bioRxiv – and let us not forget the Shiny app where you can check out the results yourself. Kudos go out to DS and SW, who really took the lead in this project.

Joining the PLOS Biology editorial board

I am happy and honored that I can share that I am going to be part of the PLOS Biology editorial board. PLOS Biology has a special model for their editorial duties, with the core of the work being done by in-house staff editors – all scientists turned professional science communicators/publishers. They are supported by the academic editors – scientists who are active in their field and can help the in-house editors with insight/insider knowledge. I will join the team of academic editors.

When the staff editors asked me to join the editorial board, it quickly became clear that they invited me because I might be able to contribute to the meta-research section of the journal. After all, next to some of the peer review reports I wrote for the journal, I published a paper on missing mice, on the idea behind sequential designs in preclinical research, and more recently on the role of exact replication.

Next to the meta-research manuscripts that need evaluation, I am also looking forward to just working with the professional and smart editorial office. The staff editors have already teased that a couple of innovations are coming up. So, next to helping meta-research forward, I am looking forward to helping shape and evaluate these experiments in scholarly publishing.

Kuopio Stroke Symposium

Kuopio in summer

Every year a neurology symposium is organized in the quiet and beautiful town of Kuopio in Finland. Every three years, just like this year, the topic is stroke, and for that reason I was invited to be part of the faculty. A true honor, especially if you consider the other speakers on the program, who all delivered excellent talks!

But these symposia are much more than just the hard cold science and prestige. They are also about making new friends and reconnecting with old ones. Leave that up to the Finns, whose decision to get us all on a boat and later into a sauna after a long day in the lecture hall proved to be a stroke of genius.

So, it was not for nothing that many of the talks boiled down to the idea that the best science is done with friends – in a team. This is true whether you are running a complex international stroke rehabilitation RCT, or investigating the lower risk of CVD morbidity and mortality amongst frequent sauna visitors. Or, in my case, the role of hypercoagulability in young stroke – a pdf of my slides can be found here.

My talk in Augsburg – beyond the binary

@BobSiegerink & Jakob Linseisen discussing the p-values. Thank you for your visit and great talk. pic.twitter.com/iBt5ZQxaMi – Sebastian Baumeister (@baumeister_se), 3 May 2019

I am writing this as I sit in the train on my way back to Berlin. I was in Augsburg today (2x 5.5 hours in the train!), a small university city near Munich, in the south of Germany. SB, fellow epidemiologist and BEMC alumnus, invited me to give a talk in their lecture series (Vortragsreihe).

I had a blast – in part because this talk posed a challenge for me, as they have a very mixed audience. I really had to think long and hard about how I could provide a stimulating talk with a solid attention arc for everybody in the audience. Take a look at my slides to see if I succeeded: http://tiny.cc/beyondbinary

My talk at Kuopio stroke symposium

In 6 weeks or so I will be traveling to Finland to speak at the Kuopio stroke symposium. They asked me to talk about my favorite subject: hypercoagulability and ischemic stroke. Although I am still working on the last details of the slides, I can already provide you with the abstract.

The categories “vessel wall damage” and “disturbance of blood flow” from Virchow’s Triad can easily be used to categorize some well-known risk factors for ischemic stroke. This is different for the category “increased clotting propensity”, also known as hypercoagulability. A meta-analysis shows that markers of hypercoagulability are more strongly associated with the risk of first ischemic stroke than with myocardial infarction. This effect seems to be most pronounced in women and in the young, as the RATIO case-control study provides a large portion of the data in this meta-analysis. Although interesting from a causal point of view, understanding the role of hypercoagulability in the etiology of first ischemic stroke in the young does not directly lead to major actionable clinical insights. For this, we need to shift our focus to stroke recurrence. However, the literature on the role of hypercoagulability in stroke recurrence is limited. Some emerging treatment targets can, however, be identified. These include coagulation factors XI and XII, for which small molecule and antisense oligonucleotide treatments are now being developed and tested. Their relatively small role in hemostasis but critical role in pathophysiological thrombus formation suggests that targeting these factors could reduce stroke risk without increasing the risk of bleeds. The role of neutrophil extracellular traps, negatively charged long DNA molecules that could act as a scaffold for the coagulation proteins, is also not completely understood, although there are some indications that they could be targeted as a co-treatment for thrombolysis.

I am looking forward to this conference, not in the least to talk to some friends, get inspired by great speakers and science and enjoy the beautiful surroundings of Kuopio.

postscript: here are my slides that I used in Kuopio

Should you drink one glass of alcohol to reduce your stroke risk?

The answer: no. For a long time there has been doubt about whether we should believe the observational data suggesting that limited alcohol use is in fact good for you. You know, the old “U-curve” association. Now, with some smart thinking from the Kadoorie guys from China/Oxford, as well as some other methods experts, the ultimate analysis has been done: a Mendelian randomization study published recently in The Lancet.

If you wanna know what that actually does, you can read a paper I co-wrote a couple of years ago for NDT, or the version in Dutch for the NTVG. In short, the technique uses genetic variation as a proxy for the actual phenotype you are interested in. This can be a biomarker or, in this case, alcohol consumption. A large proportion of the Chinese population has genetic variations in the genes that code for the enzymes that break down alcohol in your blood. These genetic markers are therefore a good indicator of how much you can actually drink – at least at a group level. And as in most regions of China drinking alcohol is the standard – at least for men – how much you can drink is actually a good proxy for how much you actually do drink. Analyse the risk of stroke according to the unbiased genetically determined alcohol consumption instead of the traditional questionnaire-based alcohol consumption and voilà: no U-curve in sight, and thus no protective effect of drinking a little bit of alcohol.
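For intuition, here is a minimal simulation sketch of the Wald-ratio logic behind Mendelian randomization – not the analysis from the Lancet paper, just the principle, with all numbers and variable names invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Genotype: number of slow-metabolizer alleles, which lowers alcohol intake.
g = rng.binomial(2, 0.3, n)

# A confounder (e.g. lifestyle) affects both drinking and stroke risk.
u = rng.normal(0, 1, n)
alcohol = 10 - 3 * g + 2 * u + rng.normal(0, 2, n)

# Assume a true causal effect of alcohol on a linear stroke-risk score: 0.05.
stroke_risk = 0.05 * alcohol + 1.5 * u + rng.normal(0, 1, n)

# Naive regression of risk on alcohol is confounded by u; the Wald ratio
# (gene-outcome slope / gene-exposure slope) is not, because the genotype
# is independent of the confounder.
naive = np.polyfit(alcohol, stroke_risk, 1)[0]
wald = np.polyfit(g, stroke_risk, 1)[0] / np.polyfit(g, alcohol, 1)[0]

print(f"naive: {naive:.3f} (biased), Wald ratio: {wald:.3f} (close to 0.05)")
```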

Why am I writing about that study on my own blog? I didn’t work on the research, that is for sure! No, it is because the Dutch newspaper NRC contacted me for some background information, which I was happy to provide. The science section of the NRC has always been one of the best in the Netherlands, which made it quite an honor as well as an adventure to get involved like that. The journalist, SV, did an excellent job of wrapping all that we discussed in that 30-40 minute video call into just under 600 words, which you can read here (in Dutch). I really learned a lot helping out, and I am looking forward to doing this type of work again sometime in the future.

Go beyond the binary outcome!

You were just diagnosed with a debilitating disease. You try to make sense of what the next steps are going to be. You ask your doctor: what do I need to do to get back to being a fully functioning adult, as well as humanly possible? The doctor starts to tell you what to do in order to reduce the risk of future events.

That sounds logical at first sight, but in reality, it is not. The question and the answer are disconnected on various levels: what is good for lowering your risk is not necessarily the same as what will bring functionality back into your life. Also, they are about different time scales: getting back to a normal life is about weeks, perhaps months, while trying to keep recurrence risk as low as possible is a long-term game – lifelong, in fact.
A lot of research in various fields has mixed these two things up. The effects of acute treatment are evaluated in studies with 3-5 years of follow-up. Or reducing recurrence risk is studied in large cohorts with only 6-12 months of follow-up. I am not arguing that this is always a bad idea, but I do think that a better distinction between these concepts could help some fields make progress.

We do that in stroke. For a while now, we have adopted the so-called modified Rankin Scale as the primary outcome in acute stroke trials. It is a 7-category ordinal scale, often measured at 90 days after the stroke, that tells us whether the patient completely recovered (mRS 0) or died (mRS 6), and anything in between. This made so much sense for stroke that I started to wonder whether it would also make sense for other diseases.

I think it does. In a recent paper published a couple of months ago in RPTH by JLR and me, we call upon the greater thrombosis community to consider looking beyond a binary outcome. I stand by this idea, and for that reason I brought it up again at the Maastricht Consensus Conference on Thrombosis. During that conference another speaker, EK, said that the field needed a new way to capture functionality after VTE. You guessed it: we got together over coffee, shared ideas, recruited SB as a third critical thinker, and came up with this: a call to action to improve the measurement of functional limitations after venous thromboembolism.

This is not just a call from us for others to get some action; this is the start of some new research activity by EK, SB and myself. First, we need input from other experts on the scale itself. Second, we need to standardize the way we actually score patients, then test this and get the patients’ perspective on the logistics and questions behind the scale. Third, we need to know the reliability of the scale and how the logistics work in a true RCT setting. Only when we complete all these steps will we be certain whether looking beyond the binary outcome indeed brings more actionable information for when you talk to your doctor and ask yourself “how do I increase my chances of getting back to a fully functioning adult as well as humanly possible?”

Replication: how exact do you want to be?

Doing exactly the same experiment a second time around doesn’t really tell you much. In fact, if you quickly glance over the statistics, it might look like you might as well do a coin flip. Wait… what? Yup, a coin flip. After all, doing the exact same experiment will give you a 50/50 chance of detecting the true effect (50% power).
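As I understand the intuition, it goes like this: suppose the original experiment was ‘just significant’, i.e. its z-score sat right at the 1.96 threshold, and take that observed effect at face value as the true effect. Rerunning the identical experiment with the same sample size then gives a replication z-score distributed around 1.96, which lands above the threshold only about half the time. A quick simulation sketch of that claim (the numbers are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

# Original result just significant: observed z = 1.96. Assume the true
# effect equals this observed effect; an exact replication with the same
# sample size then yields z ~ Normal(1.96, 1).
replication_z = rng.normal(1.96, 1, 1_000_000)
power = (replication_z > 1.96).mean()

print(f"replication 'power': {power:.2f}")  # ~0.50 - the coin flip
```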


The kernel of truth is of course that a coin flip never adds new useful information. But what does an exact replication experiment actually add? This is the question we try to answer in our latest paper in PLOS Biology, where we explore the added value of replications in biomedical research (see figure). The bottom line is that doing the exact same thing (including the same sample size) really has only limited added value. To understand what the power implications for replication experiments actually are, we developed a Shiny app where readers can play around with different scenarios. Want to learn more? Take a look here: s-quest.bihealth.org/power_replication


The project was carried by SP and resulted in a paper published in PLOS Biology (find it here). The paper got some traction on news sites as well as on Twitter, as you can see from this Altmetric overview.

Reusing open data

I was thrilled when I learned that the QUEST center at the BIH was going to reward open data reuse with awards. The details can be found on their website, but the bottom line is this: open science does not only imply opening up your data, but also actually using open data. So if everybody opens up their data but nobody actually uses it, the added value is quite limited.

For that reason I started some projects back in 2015/2016 designed to see how easy it actually is to find data that could be used to answer a question that you are actually interested in. The answer: not always that easy. The required variables might not be there, and even if they are, it is quite complex to start using a database that was not built by yourself. To understand the value of your results, you have to understand how the data was collected. One study proved to be so well documented that it was a contender: the English Longitudinal Study of Ageing. One of the subsequent analyses we did was published in a paper – mentioned before on this blog – and that paper is the reason why I am writing this post: we received the open data reuse award.

The award has 1000 euros attached to it, money the group can spend on travel and consumables. Now, do not get me wrong, 1000 euros is nothing to sneeze at. But 1000 euros is not going to be a major driver in your decision whether to reuse open data or not. Still, the award is nice and, I hope, effective in stimulating open science, especially as it can stimulate the conversation and critical evaluation of the value of reusing open data.

Long journey, short(ish) story

This is a short story about a long journey – the journey of a paper that started in 2013, if I am not mistaken. In that year, we decided to link the RATIO case-control study to data from the Central Bureau of Statistics (CBS) in the Netherlands, allowing us to turn the case-control study into a follow-up study.

The first results of these analyses were already published some time ago under the title “Recurrence and Mortality in Young Women With Myocardial Infarction or Ischemic Stroke”. To get these results into that journal, we were asked to reduce the paper to a letter. We did, and hope we kept the core message clean and clear: the risk of arterial events, after arterial events, remains high over a long period of time (15+ years) and remains true to type.

Just last week (!) we published another analysis of these data, in which we contrast the long-term risk for those with a presumably hypercoagulable blood profile with those who do not show a tendency towards clotting. The bottom line is that, if anything, there is a dose-response relationship between hypercoagulability and arterial thrombosis for ischemic stroke patients, but not for myocardial infarction patients. This is all in line with earlier conclusions on the role of hypercoagulability and stroke based on data from the same study. But I have to be honest: the evidence is not overwhelming. The precision is low, as seen from the broad confidence intervals, and with regard to the point estimates, no clinically relevant effects were seen. Then again, it is a piece of the puzzle that is needed to understand the role of hypercoagulability in young stroke.

Main figure from the paper: Q4 vs Q1 shows an almost doubling in risk

There is a lot to tell about this publication: how difficult it was to get the study data linked to the CBS to reach the 15-year follow-up, how AM did a fantastic job organizing the whole project, how quartile analyses are possibly not the best way to capture all the information that is in the data, how we had tremendous delays because of peer review – especially at the last journal – and how bad some of the peer review reports were, how one of the peer reviewers was a commercial enterprise – which for some time paid people to do peer review – how the peer review reports are all open, and how we got the funding to keep the paper from being locked away behind a paywall.

But I want to keep this story short and not dwell too much on the past. The follow-up period was long, and the time it took us to get this published was long, so let us keep the rest of the story as short as possible. I am just glad that it is published and can finally be shared with the world.

Pre-prints start to sound better and better…

Finding consensus in Maastricht

source https://twitter.com/hspronk

Last week, I attended and spoke at the Maastricht Consensus Conference on Thrombosis (MCCT). This is not your standard, run-of-the-mill conference where people share their most recent research. The MCCT is different, and focuses on the larger picture, by giving faculty the (plenary) stage to share their thoughts on opportunities and challenges in the field. Then, with the help of a team of PhD students, these thoughts are further discussed in a break-out session. It was all wrapped up by a plenary discussion of what was discussed in the workshops. Interesting format, right?

It was my first MCCT, and beforehand I had difficulty envisioning how exactly this format would work out. Now that I have experienced it all, I can tell you that it really depends on the speaker and the people attending the workshops. When it comes to the 20-minute introductions by the faculty, I think that just an overview of the current state of the art is not enough. The best presentations were all about the bigger picture, and had either an open question, a controversial statement or some form of “crystal ball” vision of the future. It really is difficult to “find consensus” when there is no controversy, as was the case in some plenary talks. Given the break-out nature of the workshops, my observations are limited in number. But from what I saw, some controversy (if need be, constructed just for the workshop) really did foster discussion amongst the workshop participants.

Two specific activities stand out for me. The first is the lecture and workshop on the post-PE syndrome and how we should be able to monitor the functional outcome of PE. Given my recent plea in RPTH for more ordinal analyses in the field of thrombosis and hemostasis – learning from stroke research with its mRS – we not only had a great academic discussion, but also immediately made plans for a couple of projects in which we could actually implement this. The second activity I really enjoyed was my own workshop, where I not only gave a general introduction to stroke (prehospital treatment and triage, clinical and etiological heterogeneity etc.) but also focused on the role of FXI and NETs. We discussed the role of DNase as a potential co-treatment with tPA in the acute setting (talking about “crystal ball” discussions!). Slides from my lecture can be found here (PDF). An honorable mention has to go out to the PhD students P and V, who did a great job supporting me during the preparation of the lecture and workshop. Their smart questions and shared insights really shaped my contribution.

Now, I said it was not always easy to find consensus, which means that it isn’t impossible. In fact, I am sure that the themes that were discussed all boil down to a couple of opportunities and challenges. A first step was made by HtC and HS from the MCCT leadership team in the closing session on Friday, which will prove to be a great springboard for the consensus paper that will help set the stage for future research in our field of arterial thrombosis.

Messy epidemiology: the tale of transient global amnesia and three control groups

Clinical epidemiology is sometimes messy. The methods and data that you might want to use might not be available or just too damn expensive. Does that mean that you should throw in the towel? I do not think so.

I currently work in a more clinically oriented setting, as the only researcher trained as a clinical epidemiologist. I could tell you about being misunderstood and feeling lonely as the only one who has seen the light, but that would just be lying. The fact is that my position is one of privilege and opportunity, as I work together with many different groups on a wide variety of research questions that have the potential to influence clinical reality directly and bring small, but meaningful, progress to the field.

Sometimes that work is messy: not the right methods, a difference in interpretation, a p value in table 1… you get the idea. But sometimes something pretty comes out of that mess. That is what happened with this paper, which just got published online (e-pub) in the European Journal of Neurology. The general topic is the heart-brain interaction, and more specifically to what extent damage to the heart actually has a role in transient global amnesia. The idea that there might be a link stems from some previous case series, as well as from the clinical experience of some of my colleagues. The next step would of course be a formal case-control study, and if you want to estimate true rate ratios, a lot of effort has to go into the collection of data from a population-based control group. We had neither the time nor the money to do so, and upon closer inspection, we also did not really need that clean control group to answer some of the questions that would move the field forward.

So instead, we chose three different control groups, perhaps better referred to as reference groups, all three with some neurological disease. Yes, there are selections at play for each of these groups, but we could argue that those selections might be similar across all groups. If these selection processes are indeed similar for all groups, strong differences in patient characteristics or biomarkers suggest that other biological systems are at play. The trick is not to hide these limitations but, like a practiced judoka, to leverage these weaknesses and turn them into strengths. Be open about what you did and show the results, so that others can build on that experience.

So that is what we did. Compared with patients with migraine with aura, vestibular neuritis and transient ischemic attack, patients with transient global amnesia are more likely to exhibit signs of myocardial stress. This study was not designed to understand the cause of this link – nor will it even be able to – and we do not pretend that our odds ratios are in fact estimates of rate ratios or something fancy like that. Still, even though many aspects of this study are not “by the book”, it did provide some new insights that help further thinking about, and investigation of, this debilitating and impactful disease.

The effort was led by EH, and the final paper can be found here on pubmed.

Genetic determinants of activity and antigen levels of contact system factors

One of my slides with a cartoon of the intrinsic coagulation system. I know, the reality is way more complicated, but still, I like the picture!
The contact system, or intrinsic coagulation system, has for a long time been an undervalued part of the thrombosis and hemostasis field. Not by me: I love FXI & FXII. Not just now, since FXI has suddenly become the “new kid on the block” as the new target for antithrombotic treatment through ASOs, but ever since I started my PhD in 2007/2008. As any of my colleagues from back then will confirm, I couldn’t shut up about FXI and FXII, as I thought that my topic was the only relevant topic in the world. Although common amongst young researchers, I do apologize for this now that I have 20/20 hindsight.

Still, it is only natural that some of the work I do continues to be focused on those slightly weird coagulation proteins. Are they relevant to hemostasis? Are they relevant in pathological thrombus formation? What is their role in other biological systems? Questions that the field is only slowly getting answers to. Our latest contribution is an analysis of genetic variations in the genes that code for these proteins, estimating whether the levels of activation and antigen are in fact – in part – genetically determined.

This analysis was performed in the RATIO study, from which we primarily focused on the control group. That control group is relatively small for a genetic analysis, but given that we have a relatively young group, the hope is that the noise is not too great to pick up some signals. Additionally, given the previous work in the RATIO study, I think this is the only dataset with comprehensive phenotyping of the intrinsic coagulation proteins, as it includes measures of protein activity, antigen and activation.

The results, which we published in the JTH, are threefold. First, we were able to confirm previously reported associations between known genetic variations and phenotype. Second, we were able to identify two new loci (i.e. KLKB1 rs4253243 for prekallikrein and KNG1 rs5029980 for HMWK levels). Third, we did not find evidence of strong associations between variation in the studied genes and the risk of ischemic stroke or myocardial infarction. Small effects can, however, not be ruled out, as the sample size of this study is not large enough to yield very precise estimates.

The work was spearheaded by JLR, with tons of help from HdH, and in collaboration with the thrombosis group at the LUMC.

The paper is published in the JTH, and as always, can also be found at my Mendeley profile.

Getting your life back on track after stroke: returning to work

source: https://goo.gl/CbNPSE

Stroke severity and incidence might be stabilizing, or even decreasing, over time in western countries, but this is certainly not true for other parts of the world. And here is something to think about: with increasing survival, people will suffer longer from the consequences of stroke. This is especially true if the stroke occurred at a young age.

To understand the true impact of stroke, we need to look beyond increased risk of secondary events. We need to understand how the disease affects day-to-day life, especially long term in young stroke patients. The team in Helsinki (HSYR) took a look at the pattern of young stroke patients returning to work. The results:

We included a total of 769 patients, of whom 289 (37.6%) were not working at 1 year, 323 (42.0%) at 2 years, and 361 (46.9%) at 5 years from IS.

That is quite shocking! But what about the pattern? For that we used lasagna plots, something like heatmaps for longitudinal epidemiological data. The results are above: the top panel shows the data just as they are in our database, while the lower panel is sorted to help interpret the results a bit better.
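For those who have never seen one: a lasagna plot is essentially a heatmap with one row per patient and one column per time point. Here is a minimal sketch in Python with matplotlib, on simulated data (all numbers made up), showing the unsorted-versus-sorted idea:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_patients, n_visits = 50, 5

# Simulated work status per patient per visit: 0 = working, 1 = not working
data = (rng.random((n_patients, n_visits)) < np.linspace(0.3, 0.5, n_visits)).astype(int)

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(6, 6), sharex=True)

# Top panel: rows in database order (hard to see any pattern)
ax1.imshow(data, aspect="auto", cmap="Greys")
ax1.set_title("Unsorted (database order)")
ax1.set_ylabel("Patient")

# Bottom panel: rows sorted by total time not working (easier to read)
order = np.argsort(data.sum(axis=1))
ax2.imshow(data[order], aspect="auto", cmap="Greys")
ax2.set_title("Sorted by outcome burden")
ax2.set_ylabel("Patient")
ax2.set_xlabel("Follow-up visit")

plt.tight_layout()
plt.show()
```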

The paper can be found here, and I am proud to say that it is open access, but you can as always just check my Mendeley profile.

Aarnio K, Rodríguez-Pardo J, Siegerink B, Hardt J, Broman J, Tulkki L, Haapaniemi E, Kaste M, Tatlisumak T, Putaala J. Return to work after ischemic stroke in young adults. Neurology 2018.

Cardiac troponin T and severity of cerebral white matter lesions: quantile regression to the rescue

quantile regression of high vs low troponin T and white matter lesion quantile

A new paper, this time venturing into the field of the so-called heart-brain interaction. We often see stroke patients with cardiac problems, and vice versa. And to make it even more complex, there is also a link to dementia! What to make of this? Is it a case of the chicken and the egg, or just confounding by a third variable? How do these diseases influence each other?

This paper tries to get a grip on the matter by zooming in on a marker of cardiac damage, i.e. cardiac troponin T. We looked at this marker in our stroke patients. Logically, stroke patients should not have increased levels of troponin T, and yet they do. More interestingly, the patients who exhibit high levels of this biomarker also show a high level of structural changes in the brain, so-called cerebral white matter lesions.

But the problem is that patients with high levels of troponin T are different from those without any marker of cardiac damage. They are older and have more comorbidities, so a classic case for adjustment for confounding, right? But then we realize that both troponin and white matter lesions are heavily skewed data. You could log-transform the variables before running a linear regression, but then the interpretation of the results gets a bit complex if you want clear point estimates as answers to your research question.

So we decided to go with quantile regression, which models the quantile cut-offs with all the multivariable regression benefits. The results remain interpretable, and we don’t force our data into a distribution where they don’t fit. From our paper:

In contrast to linear regression analysis, quantile regression can compare medians rather than means, which makes the results more robust to outliers [21]. This approach also allows to model different quantiles of the dependent variable, e.g. 80th percentile. That way, it is possible to investigate the association between hs-cTnT in relation to both the lower and upper parts of the WML distribution. For this study, we chose to perform a median quantile regression analysis, as well as quantile regression analysis for quintiles of WML (i.e. 20th, 40th, 60th and 80th percentile). Other than that, the regression coefficients indicate the effects of the covariate on the cut-offs of the respective quantiles of the dependent variable, adjusted for potential covariates, just like in any other regression model.
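For the curious, this is roughly what such an analysis looks like in code: a toy sketch in Python with statsmodels, on simulated data with made-up variable names, not the paper’s actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data (hypothetical variables, not the study data)
rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "hs_ctnt": rng.lognormal(2.0, 0.5, n),   # skewed cardiac biomarker
    "age": rng.normal(70, 10, n),
})
# Skewed WML burden whose upper tail depends more strongly on the biomarker
df["wml"] = rng.lognormal(0.5 + 0.02 * df["hs_ctnt"] + 0.01 * df["age"], 0.8)

# Median regression (q=0.5) plus other quantiles, adjusted for age
for q in (0.2, 0.4, 0.5, 0.6, 0.8):
    fit = smf.quantreg("wml ~ hs_ctnt + age", df).fit(q=q)
    print(f"q={q}: beta(hs_ctnt) = {fit.params['hs_ctnt']:.3f}")
```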

Interestingly, the results show that the association between high troponin T and white matter lesions is strongest in the higher quantiles. If you want to stretch this into a causal statement, it means that high troponin T has a more pronounced effect on white matter lesions in stroke patients who are already at the high end of the white matter lesion distribution.

But we shouldn’t stretch it that far. This is a relatively simple study, and the clinical relevance of our insights still needs to be established. For example, our unadjusted results might indicate that the association in itself is strong enough to help predict post-stroke cognitive decline. The adjusted numbers are less pronounced, but still, they might be enough to help prediction models.

The paper, led by RvR, is now published in J of Neurol, and can be found here, as well as on my mendeley profile.

von Rennenberg R, Siegerink B, Ganeshan R, Villringer K, Doehner W, Audebert HJ, Endres M, Nolte CH, Scheitz JF. High-sensitivity cardiac troponin T and severity of cerebral white matter lesions in patients with acute ischemic stroke. J Neurol 2018.

Impact of your results: Beyond the relative risk

I wrote about this in an earlier post: JLR and I published a paper in which we explain that a single relative risk, irrespective of its form, is just not enough. Some crucial elements go missing in this dimensionless ratio: the RR lets us forget about the size of the denominator, the clinical context, and the crude binary nature of the outcome. So we have provided some methods and ways of thinking to go beyond the RR in a tutorial published in RPTH (now in early view). The content and message are nothing new for those trained in clinical research (one would hope), and even those without formal training will have heard most of the concepts discussed in a talk or poster. But with all these concepts in one place, with an explanation of why they provide a tad more insight than the RR alone, we hope to trigger young (and older) researchers to think about whether one of these measures would be useful. Not for them, but for the readers of their papers. The paper is open access (CC BY-NC-ND 4.0) and can be downloaded from the website of RPTH, or from my mendeley profile.
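As a flavor of what “beyond the RR” means in practice, here is a toy calculation (made-up counts, not from the tutorial) showing how the same 2×2 table yields a relative risk, a risk difference and a number needed to harm, each telling a slightly different story:

```python
# Toy 2x2 table (hypothetical counts): exposed/unexposed vs event/no event
a, b = 30, 70    # exposed: events, non-events
c, d = 10, 90    # unexposed: events, non-events

risk_exp = a / (a + b)       # risk among exposed: 0.30
risk_unexp = c / (c + d)     # risk among unexposed: 0.10

rr = risk_exp / risk_unexp   # relative risk (dimensionless)
rd = risk_exp - risk_unexp   # risk difference (keeps the absolute scale)
nnh = 1 / rd                 # number needed to harm (or treat, if protective)

print(f"RR = {rr:.2f}, RD = {rd:.2%}, NNH = {nnh:.1f}")
# RR = 3.00, RD = 20.00%, NNH = 5.0
```

The RR of 3 sounds dramatic on its own; the risk difference and NNH put it back into the absolute context that clinical decisions actually need.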

How you quantify the impact of your results matters. A bit.

This is not about altmetrics. Nor is it about the emails you get from colleagues or patients. It is about the impact of a certain risk factor. A single relative risk is meaningless: as a ratio it is dimensionless, and without the context hidden in the numerator and denominator it can be tricky to interpret. Together with JLR, I have a paper coming up in which we plead for going beyond the simple relative risk and using one of the many other ways to express the impact of your results. This will be published in RPTH, the relatively new journal of the ISTH, where I also happen to be on the editorial board.
Venn diagram illustrating the intersections of the independent predictors and poor outcome 12 months after stroke. https://doi.org/10.1371/journal.pone.0204285.g003
One of those ways is to report the population attributable risk (PAR): the percentage of cases that can be attributed to the risk factor in question. It is often said that if we had a magic wand and used it to make the risk factor disappear, X% of the patients would not develop the disease. Some interpret this as the causal fraction, which is not completely correct if you dive really deep into epidemiological theory, but still, you get the idea.

In a paper based on PROSCIS data, with first author CM at the helm, we tested several ways to calculate the PAR of five well-known and established risk factors for bad outcome after stroke. Understanding what lies behind which patient gets a bad outcome and which doesn’t is one of the things we really struggle with, as many patients with well-established risk factors just don’t develop a poor outcome. Quantifying the impact of risk factors and, arguably more importantly, ranking them is a good tool to help MDs, patients, researchers and public health officials know where to focus.

However, when we compared the PARs calculated by different methods, we came to the conclusion that there is quite some variation. The details are in the paper, but the bottom line is this: it is not a good sign when your results depend on the method, as similar methods should give similar results. Upon closer inspection (and somewhat reassuringly), the order of magnitude as well as the rank of the 5 risk factors stays almost the same.
So, yes, it is possible to measure the impact of your results. These measures do depend on the method you use, which in itself is somewhat worrying. But given that we don’t have a magic wand that we expect to remove a fraction of the disease precisely to two decimals, the PAR is a great tool to get some more grip on the context of the RR. The paper was published in PLOS One and can be found on their website or on my mendeley profile.

PS: This paper is one of the first papers with patient data for which we provided the data together with the manuscript. From the paper: “The data that support the findings of this study are available to all interested researchers at Harvard Dataverse (https://doi.org/10.7910/DVN/REBNRX).” Nice.
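To make the “different methods, different numbers” point tangible, here is a toy comparison of two textbook PAR formulas (Levin’s, based on exposure prevalence in the population, and Miettinen’s, based on exposure prevalence among cases); all numbers are made up, and the two only coincide exactly when the inputs are perfectly consistent:

```python
# Made-up inputs for illustration
p_pop = 0.30      # prevalence of the risk factor in the population
rr = 2.0          # (adjusted) relative risk of the factor
p_cases = 0.46    # prevalence of the factor among cases

# Levin's formula: PAR = p(RR - 1) / (1 + p(RR - 1))
par_levin = p_pop * (rr - 1) / (1 + p_pop * (rr - 1))

# Miettinen's formula: PAR = pc(RR - 1) / RR
par_miettinen = p_cases * (rr - 1) / rr

print(f"Levin: {par_levin:.1%} vs Miettinen: {par_miettinen:.1%}")
# Levin: 23.1% vs Miettinen: 23.0% -- close here, but with adjusted RRs
# and real data the two can diverge noticeably.
```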

Teaching award from the German society for epidemiology

teaching at ESOC 2018 summer school
teaching an interactive session on study design at ESOC 2018 summer school

The German society for epidemiology has an annual teaching award, i.e. the “Preis für exzellente Lehre in der Epidemiologie”. From their website:

“The award honors outstanding achievements or above-average commitment in the teaching of epidemiology. (…) Eligible are innovative, original or sustainable teaching formats, as well as a particularly high level of personal dedication to teaching.”

In short, anything goes in terms of format, innovation, personal commitment etc. However, there is a trick: only students can nominate you. So what happened? My students nominated me for my “overall teaching concept”. Naturally, the DGEpi wondered what that teaching concept actually was and asked me to provide some more information. So I took that opportunity and actually described what and why I teach, to see what the actual concept behind this all is. Here is the result.

The bottom line is simple: I think you learn best not just by reading a book, but by doing, by helping with the organization, and by helping to teach in various epi-related activities. You need to be exposed to several formats with different people. So I have helped set up a plethora of activities for the young student to learn epidemiology in different ways and on different levels: read the classics, discuss in weekly journal clubs, use popular scientific books in book clubs, but also organize platforms for discussion, interaction and inspiration (yes, I am talking about BEMC). The most important thing might be that students should learn the basics of epidemiology, even though they might not need them for their own research projects. This is especially true for medical students who want to learn about clinical research.

Last week I learned that the award was in the end given to me. Of course I am honored on a personal level, and this honor needs to be extended to my mentors. But I also take this award as an indication that the recent and growing Berlin-based epi activities I helped to organize, together with epi enthusiasts at the IPH, iBIKE and QUEST, did not go unnoticed by the German epidemiological community.

I will pick up the prize in Bremen at the yearly conference of the DGEpi. See you there?

Cerebral microbleeds and interaction with antihypertensive treatment in patients with ICH; a tale of two rejected letters

ICH is not my topic, but as we were preparing for the ESO Summer School, I explored areas of stroke research as yet untouched by me. That brought me to this paper by Shoamanesh et al. in JAMA Neurology, which investigates a potential interaction between CMBs and the treatment at hand in relation to outcome in patients with ICH. Their conclusion: no interaction. The paper is easy to read and has some, at first glance, convincing data, but then I realized some elements are just not right:

  • the outcome is not rare, yet a logistic model is used to estimate relative risks
  • interaction is assessed on the multiplicative scale only, even though adding variables to the model can change the interaction estimates due to the non-collapsibility of the OR
  • the underlying clinical question of interaction is arguably better answered with an analysis of additive interaction (see the sketch below).
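For readers unfamiliar with additive interaction: one common measure is the relative excess risk due to interaction (RERI), which only needs the three relative risks against the doubly-unexposed reference group. A minimal sketch with made-up numbers:

```python
# Hypothetical relative risks vs the group with neither exposure
rr_cmb     = 1.5   # CMBs present, standard treatment (made up)
rr_intense = 1.2   # no CMBs, intensive blood pressure lowering (made up)
rr_both    = 2.4   # CMBs present AND intensive treatment (made up)

# Relative excess risk due to interaction (additive scale):
# RERI > 0 suggests super-additive risk, RERI < 0 sub-additive risk
reri = rr_both - rr_cmb - rr_intense + 1
print(f"RERI = {reri:.2f}")   # RERI = 0.70
```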

I decided to write a letter to the editor. Why? Well, in addition to the methodological issues mentioned above, the power of the analyses was quite low, and a conclusion of “no effect” based on a p value >0.05 with low power is in itself a problem. Do I expect a massive shift in how I would interpret the data if they had been analysed differently? I don’t think so, especially as the precision of any quantification of additive interaction would be quite low. But that is not the main issue here: the way the data were presented does not allow the reader to assess additive interaction. So my letter focused on that: suggesting to present the data in a slightly different way, and then we can discuss whether the conclusions drawn by the authors still hold. Then, and only then, do we get the full picture of the value of CMBs in treatment decisions. The thing is that we will then realize that the full picture is actually not the full picture, as the data are quite limited and imprecise, and more research is required before strong conclusions can be drawn.

But the letter was rejected by JAMA Neurology because of space limitations and priority. I didn’t appeal. The same happened when I submitted an edited version of the letter to Neuro-epidemiology. I didn’t appeal. In the meantime, I contacted the corresponding author, but he did not get back to me.

So now what? Pubmed Commons died. Pubpeer is, to my taste, too much focused on catching image fraud, even though they do welcome other types of contributions. I know my comments are only interesting for the methodologically inclined, and in the greater scheme of things their value is limited. I also understand space limitations when it comes to print, but how about online? Anyway, there are a lot of reasons why things happened the way they happened. But somebody told me that if it was important enough to write a letter, it is important enough to publish it somewhere. So here I am, posting my initial letter on my own website, which almost certainly means that no single reader of the original paper will find out about these comments.

Post publication peer review ideas anybody?

The original paper can be found here, on the website of JAMA Neurology.

My letter can be found here: CMB and intense blood pressure lowering in ICH_ is there an additive effect

FVIII, Protein C and the Risk of Arterial Thrombosis: More than the Sum of Its Parts.

source: https://www.youtube.com/watch?v=jGMRLLySc4w 

Peer review is not a pissing contest. Peer review is not about finding the smallest of errors and delaying publication because of them. Peer review is not about being right. Peer review is not about rewriting the paper under review. Peer review is not about asking for yet another experiment.

 

Peer review is about making sure that the conclusions presented in the paper are justified by the data presented and peer review is about helping the authors get the best report on what they did.

At least, that is what I try to remind myself of when I write my peer review reports. So what happened when I reviewed a paper presenting data on the two hemostatic factors protein C and FVIII in relation to arterial thrombosis? These two proteins are known to interact directly with each other. But does this also translate into a situation where a combination of the two risk factors means “have both, get extra risk for free”?

There are two approaches to testing such so-called interaction: statistical and biological. The authors presented one approach, while I thought the other was better suited to analyze and interpret the data. Did that result in an academic battle of arguments, or perhaps a peer review deadlock? No: the authors were civil enough to entertain my rambling thoughts and comments with additional analyses and results, and in the end convinced me that their approach had more merit in this particular situation. The editor of Thrombosis and Haemostasis saw all this going down and agreed with my suggestion that an accompanying editorial on the topic would help readers understand what actually happened during the peer review process. The nice thing is that the editor asked me to write that editorial, which can be found here; the paper by Zakai et al. can be found here.

All this taught me a thing or two about peer review: cordial peer review is always better (duh!) than a peer review street brawl, and sharing aspects of the peer review process can help readers understand the paper in more detail. Open peer review, especially the kind where reviewers are not anonymous and reports are open to readers after publication, is a way to foster both practices. In the meantime, this editorial will have to do.

 

New paper: External defibrillator use by bystanders and patient outcomes

source: https://goo.gl/HkZkV5
Main analyses showing the effect of AED use on several endpoints

In this paper, together with researchers from Harvard and the Institute of Public Health at the Charité, we used data from the CARES dataset to answer some questions regarding the use of automated external defibrillators (AEDs) in the United States.

It is known from previous studies that AED use improves the clinical outcome of those who are treated with one. Less known is whether AEDs administered by untrained bystanders have a similarly beneficial effect, especially because

1) so-called neighborhood characteristics have not been taken into account in previous analyses, and

2) it is difficult to find the right control group.

This paper focuses on these two aspects by taking neighborhood characteristics into account and by using so-called “negative controls” (i.e. patients who were treated with an AED but did not have a shockable rhythm).

I had a lot of fun in this project: I like it when my skills are helpful in fields I do not usually work in. Not only does it allow me to see how research methodology is applied in different fields, but it also helps me understand my own field much better. After all, both AED and STEMO are methods that aim to deliver treatment to a patient as soon as possible, in fact “pre-hospitalization”. If only a CT scanner could be that small… or can it…

The heavy lifting on this publication was done by LWA. Thanks for letting me join the adventure!

The paper can be found on pubmed, and on my mendeley profile

BEMC has a Journal Club now

cropped-favicon_bemc1

After a year of successful BEMC talks and seeing the BEMC grow, it was time for something new. We are starting a new journal club within the BEMC community, purely focussed on methods. The text below describes what we are going to do, starting in February (the text comes from the BEMC website).

BEMC is trying something new: a journal club. In February, we will start a monthly journal club to accompany the BEMC talks, as an experiment. The format is subject to change, as we will adapt it after gaining more experience in what works and what doesn’t. For now, we are thinking along the following lines:

Why another journal club?

Aren’t we already drowning in journal clubs? Perhaps, but not in this kind of journal club. The BEMC JClub is focussed on the methods of clinical research. Many epidemiologically inclined researchers work at departments that are focussed not on methodology, but rather on a disease or a field of medicine. This is reflected in the topics of the different journal clubs around town. We believe there is a need for a methods journal club in Berlin. Our hope is that the BEMC JClub fulfills that need through interdisciplinary and methodological discussions of the papers that we read.

Who is going to participate?

First of all, please remember that the BEMC community is focussed on researchers with medium to advanced epidemiological knowledge and skill sets. This is true not only for our BEMC talks, but also for our JClub.

Next to this, we hope to end up with a group that reflects the BEMC community. This means that we are looking for a nice mix of backgrounds and experience. So if you think you have a unique background and focus in your work, we highly encourage you to join us and make our group as diverse as possible. We strive for this diversity because we do not want the JClub sessions to become echo chambers or teaching sessions, but truly discussions that promote knowledge exchange between methodologists from different fields.

What will we read?

Anything that is relevant for those who attend. The BEMC team will ultimately determine which papers we will read, but we are nice people and listen carefully to the suggestions of regulars. Sometimes we will pick a paper on the same (or related) topic of the BEMC talk of that month.

Even though the BEMC team has the lead in the organisation, the content of the JClub should come from everybody attending. Everybody who attends the JClub is asked to provide some points, remarks or questions to jumpstart the discussion.

What about students?

Difficult to say. The BEMC JClub is not designed to teach medical students the basics of epidemiology. Then again, everybody who is smart, can keep up, and can contribute to the discussion is welcome.

Are you a student and in doubt whether the BEMC JClub is for you? Just send us an email.

Where? When?

Details like these can be found on the BEMC JClub website. Just click here.

New paper: pulmonary dysfunction and CVD outcome in the ELSA study

This is a special paper to me, as it is 100% the product of my team at the CSB. Well, 100%? Not really. This is the first paper from a series of projects in which we work with open data, i.e. data collected by others who subsequently shared it. A lot of people talk about open data and how all the data created should be made available to other researchers, but not a lot of people talk about actually using that kind of data. For that reason we picked a couple of data resources to see how easy it is to work with data not initially collected by ourselves.

It is hard, as we have now learned. Even though the studies we focussed on (the ELSA study and UK Understanding Society) have a good description of their data and methods, understanding it all takes time and effort. And even after putting in all that time and effort, you might still not know all the little details and idiosyncrasies in the data.

A nice example lies in the exposure that we used in this analysis: pulmonary dysfunction. The data for this exposure were captured in several different datasets, in different variables. Reverse engineering a logical and interpretable concept out of these data points was not easy. This is perhaps also true for data that you collect yourself, but then at least this thinking is more or less done before data collection starts, and no reverse engineering is needed.

So we learned a lot. Not only about the role of pulmonary dysfunction as a cause of CVD (hint: it is limited), about the different sensitivity analyses that we used to check the influence of missing data on the conclusions of our main analyses (hint: limited again), and about the need to update an exposure that progresses over time (hint: relevant), but also about what it is like to use data collected by others (hint: useful, but not easy).

The paper, with the title “Pulmonary dysfunction and development of different cardiovascular outcomes in the general population.” with IP as the first author can be found here on pubmed or via my mendeley profile.

New Masterclass: “Papers and Books”

“Navigating numbers” is a series of Masterclasses initiated by a team of Charité researchers who think that our students should become more familiar with how numbers shape the field of medicine, i.e. both medical practice and medical research. And I get to organize the next in line.

I am very excited to organise the next Masterclass together with J.O., a bright researcher with a focus on health economics. As the full title of the Masterclass is “Papers and Books – series 1 – intended effect of treatments”, some health economics knowledge is a must in this journal-club-style series of meetings.

But what exactly will we do? This Masterclass will focus on reading some papers as well as a book (very surprising), all with a focus on study design and how to do proper research into the “intended effect of treatments”. I borrowed this term from one of my former epidemiology teachers, Jan Vandenbroucke, as it helps to denote a part of the field of medical research with its own idiosyncrasies, yet is not limited to one study design.

The Masterclass runs for 8 meetings only, which is not nearly enough to have the students understand all the ins and outs of proper study design. But that is also not the goal: we want to show the participants how one should go about it when the ultimate question in medicine is asked: “should we treat or not?”

If you want to participate, please check out our flyer