The Post-COVID-19 Functional Status (PCFS) scale has been adopted in 100+ clinics and research projects. I think this is a great testimony to the power that science and collaboration bring to this pandemic. But with a new tool also comes a way of thinking that is perhaps standard in stroke research but not so obvious outside that field. I may have underestimated that when we proposed the PCFS. To provide some guidance, just think about this: know, and be consistent and open about, when, what and who you count when assessing the PCFS in your clinic or cohort. And yes, do not forget the dead.
WHEN: when you assess the PCFS in your patients is perhaps one of the most important aspects if you want to make use of its full potential. The moment of assessment must be standardized. Ideally it would be standardized between studies or clinics (e.g. at discharge and 4 and 8 weeks thereafter, as we suggest in the original proposal), but this might not be feasible in all instances. If you cannot manage that, at least keep the moment of assessment constant, with a narrow time window, within one data collection. As COVID-19 patients are likely to improve over time, it matters when each patient is interviewed. Irrespective of whether you were able to keep the time window tight, make sure to report the details of the window in your papers. Better even: share the data.
WHO: if you only assess the PCFS in the survivors of COVID-19, you build in a selection. And when there is a selection, selection bias is around the corner. The clearest example comes from comparing patients admitted to the ICU with those who were not. If we do not count the dead in the ICU population, but we do in the other group, the PCFS distribution amongst those with an assessment may spuriously favor the ICU group. All patients who enter the cohort need a PCFS assessment, including the dead. Again, whatever you did, make sure you describe who was and was not assessed in the methods and results of your papers.
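To make this concrete, here is a minimal simulation sketch. All numbers are invented for illustration; I simply code death as the worst grade and show what happens if deaths are silently dropped from the ICU group only:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical PCFS-like grades 0 (no limitations) to 4 (severe limitations);
# code death as grade 5 so the dead can be counted. Probabilities are made up:
# the ICU group is sicker, with more deaths and worse grades among survivors.
icu     = rng.choice([0, 1, 2, 3, 4, 5], size=1000, p=[0.05, 0.10, 0.15, 0.20, 0.20, 0.30])
non_icu = rng.choice([0, 1, 2, 3, 4, 5], size=1000, p=[0.25, 0.25, 0.20, 0.15, 0.10, 0.05])

# Biased analysis: deaths are dropped from the ICU group only.
icu_survivors = icu[icu < 5]

print("mean grade, ICU survivors only  :", round(icu_survivors.mean(), 2))
print("mean grade, ICU incl. deaths    :", round(icu.mean(), 2))
print("mean grade, non-ICU incl. deaths:", round(non_icu.mean(), 2))
```

The survivors-only ICU mean looks much better than the honest ICU mean that counts the dead, which is exactly the distortion described above.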
HOW: in our proposal we give the option of an interview or a self-assessment questionnaire. We do not have enough evidence to support one over the other. We think interviews provide a little more depth, but the bottom line is that professionals should choose the one that best fits their needs. The two assessment methods will yield different scores, and that is fine, as long as it is clear to others what you actually did. But be aware: mixing the two types in one study or cohort can introduce bias – see above. Make sure you provide an adequate description of what you did, even when you follow the proposed methods in our manual – the PCFS is not completely standardized in the literature, so you need to bring your colleagues up to speed.
In a clinical setting it is easy to take these three variables into account when discussing a single patient. After all, you only need to put the PCFS in the context of one individual. At most, you need to consider the PCFS measured over multiple time points. But if you want to learn from your experiences, it is best to make the assessment as standardized as possible. It will help you interpret the data of an individual patient more quickly, see patterns within and perhaps between patients, and, as a final kicker, might make it possible to do some research with that valuable data.
The question seems straightforward: “what bad stuff happens after somebody develops an intracerebral hemorrhage, and how will I know whether that will also happen to me now that I have one?” The answer is, as always, “it depends”. It depends on how you actually specify the question. What does “bad stuff” mean? Which “when” are you interested in? And what are your personal risk factors? We need all this information in order to get an answer from a clinical prediction model.
The thing is, we also need a well-working clinical prediction model – that is, it should distinguish those who develop the bad stuff from those who don't, but it should also get the absolute risks about right. This new paper (project carried by JW) discusses the ins and outs of the current state of affairs in prediction. Written for neurologists, some of the comments and points we raise will not be new to methodologists. But as it is not a given that a methodologist will be involved when somebody decides that a new prediction model needs to be developed, we wrote it all up in this review.
The number of existing prediction models for this disease is already quite big – and the complexity of the models seems to increase over time, without a clear indication that their performance gets any better. Many of these models use different definitions of the outcome, as well as different moments at which the outcome is assessed – all leading to wildly different models that are difficult to compare.
The statistical workup is often limited: performance is frequently measured with a simple AUC only, while calibration and net benefit are not reported. Even more worryingly, external validation is not always possible, as the original publications do not provide point estimates.
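For readers less familiar with these performance measures, here is a minimal numpy-only sketch of the two most basic ones, discrimination (AUC) and calibration, on a toy simulated cohort (the data and the single-predictor logistic model are entirely made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cohort: one predictor, binary "poor outcome" generated from a logistic model.
x = rng.normal(size=2000)
p_true = 1 / (1 + np.exp(-(-1.0 + 1.5 * x)))  # true individual risks
y = rng.binomial(1, p_true)                   # observed outcomes
pred = p_true                                 # best case: model recovers the true risks

# Discrimination (AUC): probability that a random case outranks a random non-case,
# computed via the Mann-Whitney rank statistic.
ranks = np.argsort(np.argsort(pred)) + 1      # 1-based ranks of the predictions
n1, n0 = y.sum(), (1 - y).sum()
auc = (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

# Calibration: compare mean predicted vs. observed risk within risk deciles.
edges = np.quantile(pred, np.linspace(0, 1, 11))
bins = np.clip(np.digitize(pred, edges[1:-1]), 0, 9)
for b in range(10):
    m = bins == b
    print(f"decile {b}: predicted {pred[m].mean():.2f}, observed {y[m].mean():.2f}")
print(f"AUC: {auc:.2f}")
```

A model can have a respectable AUC while its predicted risks are systematically too high or too low, which is why reporting the AUC alone, as the review criticizes, is not enough.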
Given the severity of the disease, the so-called “withdrawal of care bias” is an important element when thinking and talking about prognostic scores. This bias, in which those with a bad score do not receive treatment, can lead to a self-fulfilling prophecy in the clinic, which is then captured in the data.
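A toy simulation can show the mechanism. All numbers below are invented; the point is only that withdrawing care from high-score patients inflates their observed mortality above what it would have been under full treatment, making the score look more prophetic than it is:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 10_000
score = rng.uniform(0, 1, n)            # hypothetical prognostic score (higher = worse)
p_death_treated = 0.2 + 0.3 * score     # true mortality risk if everyone were treated

# Withdrawal of care: half of the patients with a "bad" score receive no active
# treatment, which by itself sharply raises their risk of dying.
withdrawn = (score > 0.8) & (rng.uniform(size=n) < 0.5)
p_death = np.where(withdrawn, 0.9, p_death_treated)
died = rng.binomial(1, p_death)

# Observed mortality in the worst-score group now exceeds the treated-risk truth.
top = score > 0.8
print(f"true risk under full treatment, top group: {p_death_treated[top].mean():.2f}")
print(f"observed mortality, top group:             {died[top].mean():.2f}")
```

A model trained or validated on such data "confirms" that high scores are lethal, partly because clinicians acted on the score.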
In short – when you think you want to develop a new model, think again. Think long and hard. Identify why the current models are or are not working. Can you improve on them? Do you have the insights and skill set to do so? Really? If so, please go ahead, but just don't add another not-so-useful prediction model to the already saturated literature.
I report often on this blog about new papers that I have co-authored. Every time I highlight something that is special about that particular publication. This time I want to highlight a paper that I co-authored, but also didn’t. Let me explain.
Since I reshuffled my research, I am officially part of the QUEST team, and therefore part of that group. I gave some input on the paper, like many of my colleagues, but nowhere near enough to justify full authorship. That would, after all, require all four(!) of the following elements, according to the ICMJE:
Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND
Drafting the work or revising it critically for important intellectual content; AND
Final approval of the version to be published; AND
Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
This is what the ICMJE says about large author groups: “Some large multi-author groups designate authorship by a group name, with or without the names of individuals. When submitting a manuscript authored by a group, the corresponding author should specify the group name if one exists, and clearly identify the group members who can take credit and responsibility for the work as authors. The byline of the article identifies who is directly responsible for the manuscript, and MEDLINE lists as authors whichever names appear on the byline. If the byline includes a group name, MEDLINE will list the names of individual group members who are authors or who are collaborators, sometimes called non-author contributors, if there is a note associated with the byline clearly stating that the individual names are elsewhere in the paper and whether those names are authors or collaborators.”
I think that this format should be used more, but that will only happen if people take collaborator status seriously as well. Other “contribution solutions” can help give some insight into what it means to be a collaborator, such as a detailed description like in movie credits, or a standardized contribution table. We have to start appreciating all forms of contribution.
We recently published one of the projects embedded within the PROSCIS study. This follow-up study, which includes 600+ men and women with acute stroke, forms the basis of many active projects in the team (1 published, many coming up).
For this paper, PhD candidate PS measured auto-antibodies to the NMDA receptor (NMDAR). Previous studies suggested that having these antibodies might be a marker of, or even induce, a kind of neuroprotective effect. That is not what we found: we showed that seropositive patients, especially those with the highest titers, have a 3- to 3.5-fold increased risk of a worse outcome, as well as an almost 2-fold increased risk of CVD and death following the initial stroke.
Interesting findings, but some elements of our design do not allow us to draw very strong conclusions. One of them is the uncertainty about the seropositivity status of the patient over time. Are the antibodies actually induced over time? Are they transient? PS has come up with a solid plan to answer some of these questions, which includes measuring the antibodies at multiple time points just after stroke. In PROSCIS we only have one blood sample, so we need to use biosamples from other studies that were designed with multiple blood draws. The team of AM was equally interested in the topic, so we teamed up. I am looking forward to following up on the questions that our own research brings up!
I am happy and honored to share that I am going to be part of the PLOS Biology editorial board. PLOS Biology has a special model for its editorial duties, with the core of the work being done by in-house staff editors – all scientists turned professional science communicators/publishers. They are supported by academic editors – scientists who are active in their field and can provide the in-house editors with insider knowledge. I will join the team of academic editors.
Next to the meta-research manuscripts that need evaluation, I am also looking forward to simply working with the professional and smart editorial office. The staff editors have already teased that a couple of innovations are coming up. So, next to helping move meta-research forward, I look forward to helping shape and evaluate these experiments in scholarly publishing.
Every year there is a Neurology symposium organized in the quiet and beautiful town of Kuopio in Finland. Every three years, just like this year, the topic is stroke and for that reason, I was invited to be part of the faculty. A true honor, especially if you consider the other speakers on the program who all delivered excellent talks!
But these symposia are about much more than just the hard, cold science and prestige. They are also about making new friends and reconnecting with old ones. Leave that to the Finns, whose decision to get us all on a boat, and later into a sauna, after a long day in the lecture hall proved to be a stroke of genius.
So it was not for nothing that many of the talks boiled down to the idea that the best science is done with friends – in a team. This is true whether you are running a complex international stroke rehabilitation RCT, or investigating the lower risk of CVD morbidity and mortality amongst frequent sauna visitors. Or, in my case, studying the role of hypercoagulability in young stroke – a pdf of my slides can be found here.
Last week, I attended and spoke at the Maastricht Consensus Conference on Thrombosis (MCCT). This is not your standard, run-of-the-mill conference where people share their most recent research. The MCCT is different and focuses on the larger picture, giving faculty the (plenary) stage to share their thoughts on opportunities and challenges in the field. Then, with the help of a team of PhD students, these thoughts are further discussed in break-out sessions. Everything is wrapped up with a plenary discussion of what came out of the workshops. Interesting format, right?
It was my first MCCT, and beforehand I had difficulty envisioning how exactly this format would work out. Now that I have experienced it all, I can tell you that it really depends on the speaker and the people attending the workshops. When it comes to the 20-minute introductions by the faculty, I think that just an overview of the current state of the art is not enough. The best presentations were all about the bigger picture, and had either an open question, a controversial statement, or some form of “crystal ball” vision of the future. It really is difficult to “find consensus” when there is no controversy, as was the case in some plenary talks. Given the break-out nature of the workshops, my observations are limited in number. But from what I saw, some controversy (if need be, constructed only for the workshop) really did foster discussion amongst the workshop participants.
Two specific activities stand out for me. The first is the lecture and workshop on the post-PE syndrome and how we should be able to monitor the functional outcome of PE. Given my recent plea in RPTH for more ordinal analyses in the field of thrombosis and hemostasis – learning from stroke research with its mRS – we not only had a great academic discussion, but also immediately made plans for a couple of projects in which we could actually implement this. The second activity I really enjoyed was my own workshop, where I not only gave a general introduction to stroke (prehospital treatment and triage, clinical and etiological heterogeneity, etc.) but also focused on the role of FXI and NETs. We discussed DNase as a potential co-treatment with tPA in the acute setting (talking about “crystal ball” discussions!). Slides from my lecture can be found here (PDF). An honorable mention goes to the PhD students P and V, who did a great job supporting me in preparing the lecture and workshop. Their smart questions and shared insights really shaped my contribution.
Now, I said it was not always easy to find consensus, which does not mean it is impossible. In fact, I am sure that the themes that were discussed all boil down to a couple of opportunities and challenges. A first step was made by HtC and HS from the MCCT leadership team in the closing session on Friday, which will prove to be a great springboard for the consensus paper that will help set the stage for future research in our field of arterial thrombosis.
I wrote about this in an earlier post: JLR and I published a paper in which we explain that a single relative risk, irrespective of its form, is just not enough. Some crucial elements go missing in this dimensionless ratio. The RR lets us forget about the size of the denominator, the clinical context, and the crude binary nature of the outcome.
So we have provided some methods and ways of thinking to go beyond the RR in a tutorial published in RPTH (now in early view). The content and message are nothing new for those trained in clinical research (one would hope). Even those without formal training will have heard most of the concepts discussed in a talk or poster. But with all these concepts in one place, together with an explanation of why they provide a tad more insight than the RR alone, we hope to trigger young (and older) researchers to consider whether one of these measures would be useful. Not for them, but for the readers of their papers.
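As a quick flavor of why a ratio alone can mislead, here is a small sketch with hypothetical numbers (not taken from the tutorial): the same relative risk of 2 corresponds to wildly different absolute effects depending on the baseline risk.

```python
# Two hypothetical settings with identical relative risk but different baseline risks.
for baseline in (0.001, 0.20):        # risk in the unexposed group
    exposed = 2 * baseline            # relative risk of 2 in both settings
    rr = exposed / baseline           # relative risk (dimensionless)
    rd = exposed - baseline           # risk difference (absolute)
    nnh = 1 / rd                      # number needed to harm
    print(f"baseline {baseline:.3f}: RR={rr:.1f}, RD={rd:.3f}, NNH={nnh:.0f}")
```

With a baseline risk of 0.1% the doubling harms roughly one extra person per 1000; with a baseline of 20% it harms one extra person per 5 – an enormous clinical difference that the RR of 2 hides completely.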
The paper is open access (CC BY-NC-ND 4.0) and can be downloaded from the website of RPTH, or from my Mendeley profile.