Retracting our own paper

I wrote a series of emails in the last couple of weeks that I never thought I would need to write: I gave the final okay on the wording of a retraction notice for one of the papers I worked on during my time in Berlin. Let me provide some more insight than a regular retraction notice does.

Let’s start with the paper that we needed to retract. It is a paper in which we investigated the so-called smoking paradox – the idea that those who smoke might benefit more from thrombolysis treatment for stroke. Because of the presumed mechanisms, as well as the direct method of treatment delivery, IA thrombolysis is of particular interest here. The paper, “The smoking paradox in ischemic stroke patients treated with intra-arterial thrombolysis in combination with mechanical thrombectomy–VISTA-Endovascular”, looked at this presumed relation, but we were not able to find evidence in support of the hypothesis.

But why then the retraction? To study this phenomenon, we needed data rich in people who were treated with IA thrombolysis, combined with solid data on smoking behavior. We found this combination in a dataset from the VISTA collaboration. VISTA was founded to collect useful data from several sources and combine them in a way that strengthens international stroke research where possible. But something went wrong: the variables we used did not actually represent what we thought they did. This was a combination of limited documentation, sub-optimal data management, and so on. In short, a mistake by the people who managed the data led us to analyze faulty data. The data managers identified the mistake and contacted us. Together we looked at whether we could actually fix the error (i.e. prepare a correction to the paper), but the number of people who had the treatment of interest in the corrected dataset was just too low to analyze the data and get a somewhat reliable answer to our research question.

So, a retraction was indicated. The co-authors, VISTA, as well as the people on the ethics team at PLOS were all quite professional and looked for the most suitable way to handle this situation. This is not a quick process, by the way – from the moment that we first identified the mistake, it took ~10 weeks to get the retraction published. We first wanted to make sure that retraction was the right step and get all the technical details regarding the issue; then we had to inform our co-authors and get their formal OK on the request for retraction; then we got in touch with the PLOS ethics team; then we had two rounds of formal OKs on the final retraction text; and only then did the retraction notice go into production. The final product is only the following couple of sentences:

After this article [1] was published, the authors became aware of a dataset error that renders the article’s conclusions invalid.

Specifically, due to data labelling and missing information issues, the ‘IAT’ data reflect intra-arterial (IA) treatment rather than the more restricted treatment type of IA-thrombolysis. Further investigation of the dataset revealed that only 24 individuals in the study population received IA-thrombolysis, instead of N = 216 as was reported in [1]. Hence, the article’s main conclusion is not valid or reliable as it is based on the wrong data.

Furthermore, due to the small size of the IA-thrombolysis-positive group, the dataset is not sufficiently powered to address the research question.

In light of the above concerns, the authors retract this article.

All authors agree with retraction.

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0279276

Do you know what is weird? You know you are doing the right thing, but still… it feels as if it is not the sciency thing to do. I now have to recognize that retracting a paper, even when it is to correct a mistake without any scientific fraud involved, triggers feelings of anxiety. What will people actually think of me when I have a retraction on my track record? Rationally, I can argue the issue and explain why it is a good thing to have a retraction on your record when it is required. But still, those feelings pop up in my brain from time to time. When that happens, I just try to remember the best thing that came out of this new experience: my lectures on scientific retractions will never be the same.

Know, be consistent and open about when, who, and how to count when you use the PCFS – do not forget the dead!

The Post-COVID-19 Functional Status (PCFS) scale has been adopted in 100+ clinics or research projects. I think this is a great testimony to the power that science and collaboration bring to this pandemic. But with a new tool also comes a way of thinking that is perhaps standard in stroke research but not so obvious outside that field. I might have underestimated that when we proposed the PCFS. To provide some guidance, just think about this: know, be consistent and open about when, who, and how you count when assessing the PCFS in your clinic or cohort. And yeah, do not forget the dead.

  • WHEN: when you assess the PCFS in your patients is perhaps one of the most important aspects if you want to make use of its full potential. The moment of assessment must be standardized. Ideally it is standardized between studies or clinics (e.g. at discharge and 4 & 8 weeks thereafter, as we suggest in the original proposal), but this might not be suitable in all instances. If you cannot keep to this, at least try to keep the moment of assessing the PCFS constant, with a narrow time window, within one data collection. As COVID-19 patients are likely to improve over time, it matters when somebody is interviewed. Irrespective of whether you were able to keep the time window as tight as possible, make sure to report the details of the window in your papers. Better yet – share the data.
  • WHO: If you only assess the PCFS in the survivors of COVID-19, you build in a selection. And when there is a selection, selection bias is around the corner. The clearest example comes from the comparison of patients admitted to the ICU versus those who were not. If we do not count the dead in the ICU population, but we do in the other group, it might well be that the PCFS distribution among those who were assessed favors the ICU group (a small simulation sketch after this list illustrates the mechanism). All patients who enter the cohort need a PCFS assessment, including the dead. Again, whatever you did, make sure you describe who was and who was not assessed in the methods and results of your papers.
  • HOW: in our proposal we give the option of an interview or a self-assessment questionnaire. We do not have enough evidence to support one over the other. We think interviews provide a little more depth, but the bottom line is that professionals should choose the one that fits their needs best. The two assessment methods will yield different scores and that is just fine, as long as it is clear to others what you actually did. But be aware: mixing the two types in one study or cohort can introduce bias – see above. Make sure you provide an adequate description of what you did, even when you follow the methods proposed in our manual – the PCFS is not completely standardized in the literature, so you need to bring your colleagues up to speed.
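To make the WHO point above concrete, here is a minimal simulation sketch. It is not part of the PCFS proposal: the group sizes, mortality, grade distributions, and the convention of coding death as a worst category of 5 are all made-up assumptions, chosen only to show how leaving out the dead can flip a comparison.

```python
# Illustrative sketch only (not from the PCFS proposal): all numbers are made up
# to show how excluding the dead can flip a PCFS comparison between two groups.
import numpy as np

rng = np.random.default_rng(42)
n = 1000  # patients per group

def simulate_group(p_death, survivor_grade_probs):
    """Return PCFS grades 0-4 for survivors and np.nan for those who died."""
    died = rng.random(n) < p_death
    grades = rng.choice([0, 1, 2, 3, 4], size=n, p=survivor_grade_probs)
    return np.where(died, np.nan, grades)

# ICU group: high mortality, but the survivors happen to do fairly well.
icu = simulate_group(p_death=0.40, survivor_grade_probs=[0.30, 0.30, 0.20, 0.15, 0.05])
# Ward group: low mortality, survivors with somewhat worse grades.
ward = simulate_group(p_death=0.05, survivor_grade_probs=[0.15, 0.25, 0.25, 0.20, 0.15])

def summarize(label, grades):
    survivors = grades[~np.isnan(grades)]
    mean_survivors = survivors.mean()  # survivors-only analysis: the dead silently drop out
    # Analysis that keeps the dead, here coded as a worst category of 5
    # (one possible convention, assumed for this example).
    mean_with_dead = np.where(np.isnan(grades), 5, grades).mean()
    print(f"{label}: survivors only {mean_survivors:.2f}, dead included {mean_with_dead:.2f}")

summarize("ICU ", icu)
summarize("Ward", ward)
# Survivors-only means make the ICU group look better; including the dead
# reverses the picture. That reversal is the selection bias described above.
```

The exact numbers do not matter; the mechanism does. Whenever the groups you compare differ in mortality, a survivors-only PCFS comparison answers a different question than a comparison that includes the dead.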

In a clinical setting it is easy to take these three variables into account when discussing a single patient. After all, you only need to put the PCFS in the context of one individual. At most, you need to consider the PCFS measured over multiple time points. But if you want to learn from your experiences, it is best to make the assessment as standardized as possible. It will help you interpret the data of an individual patient more quickly, see patterns within and perhaps between patients, and, as a final kicker, might make it possible to do some research with that valuable data.

More information on the PCFS can be found on https://osf.io/qgpdv/.

Results dissemination from clinical trials conducted at German university medical centers was delayed and incomplete.

My interests are broader than stroke, as you can see from my tweets as well as my publications. I am interested in how the medical scientific enterprise works – and, more importantly, how it can be improved. My latest paper looks at both.

The paper, with the relatively boring title “Results dissemination from clinical trials conducted at German university medical centres was delayed and incomplete”, is a collaboration with QUEST, carried out by DS and his team. The short form of the title might just as well have been “RCTs don’t get published, and even if they do, it is often too late.”

Now, this is not a new finding, in the sense that older publications have also shown high rates of non-publication. Newer activities in this field, such as the trial trackers for the FDAAA and the EU, confirm this idea. The cool thing about these newer trackers is that they rely on continuous data collection through bots that crawl all over the interwebs to look for new trials. This upside has a couple of downsides, though: first, because they are constantly being updated, these trackers do not work that well as benchmarking tools. Second, they might miss some obscure types of publication, which can lead to underestimating how much is actually reported. Third, to keep the trackers simple, they tend to use only one definition of what counts as “timely publication”, even though neither the field nor the guidelines are conclusive on this.

So our project is something different. To get a good benchmark, we looked at whether trials executed by/at German university medical centers were published in a timely fashion. We collected the data automatically as far as we could, but also did a complete double check by hand to ensure we didn’t skip publications (hint: we did; hand searching is important, potentially because of the language thing). Then we put all the data in a database and made a Shiny app so that readers themselves can decide which definitions and subsets they are interested in. The bottom line: on average only ~50% of trials get published within two years after their formal end. That is too little and too slow.
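As an aside, here is a minimal sketch of what such a benchmark boils down to once a definition is fixed. This is purely illustrative and not the pipeline used in the paper: the trial records, column names, and the 24-month threshold are assumptions made for the example.

```python
# Illustrative sketch only, not the code used in the project: it shows how a
# "timely publication" benchmark is computed once a definition is fixed.
# The trial records, column names, and 24-month threshold are assumptions.
import pandas as pd

trials = pd.DataFrame({
    "trial_id":         ["T1", "T2", "T3", "T4"],
    "completion_date":  ["2014-03-01", "2014-06-15", "2015-01-10", "2015-05-20"],
    "publication_date": ["2015-02-01", None, "2018-03-01", "2016-11-30"],  # None = no results found
})
trials["completion_date"] = pd.to_datetime(trials["completion_date"])
trials["publication_date"] = pd.to_datetime(trials["publication_date"])

# Months from formal trial end to first results publication (NaT if unpublished).
months_to_publication = (trials["publication_date"] - trials["completion_date"]).dt.days / 30.44

# One possible definition of "timely": results published within 24 months.
# Unpublished trials (NaN) compare as False and therefore count as not timely.
timely = months_to_publication <= 24
print(f"Published within 24 months: {timely.mean():.0%} of trials")
```

In the actual project, readers can explore different definitions and subsets themselves via the Shiny app.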


This is a cool publication because it provides a solid benchmark that truly captures the current state. Now it is up to us, and the community, to improve our reporting. We should track progress in the upcoming years with automated trackers, and in 5 years or so do the whole manual tracking once more. But that is not the only reason why it was so inspiring to work on this project; it was the diverse team of researchers from many different groups that made the work fun to do. The discussions we had on the right methodology were complex and even led to an ancillary paper by DS and his group. The fact that this publication was published in the most open way possible (open data, preprint, etc.) was also a good experience.

The paper is here on PubMed, the project page on OSF can be found here, and the preprint is on bioRxiv – and let us not forget the Shiny app where you can check out the results yourself. Kudos go out to DS and SW, who really took the lead in this project.

BEMC has a Journal Club now


After a year of successful BEMC talks and seeing the BEMC grow, it was time for something new. We are starting a new journal club within the BEMC community, purely focussed on methods. The text below describes what we are going to do, starting in February. (The text comes from the BEMC website.)

BEMC is trying something new: a journal club. In February, we will start a monthly journal club to accompany the BEMC talks, as an experiment. The format is subject to change as we adapt after gaining more experience in what works and what does not. For now, we are thinking along the following lines:

Why another journal club?

Aren’t we already drowning in journal clubs? Perhaps, but not in this kind of journal club. The BEMC JClub is focussed on the methods of clinical research. Many epidemiologically inclined researchers work at departments that are not focussed on methodology, but rather on a disease or a field of medicine. This is reflected in the topics of the different journal clubs around town. We believe there is a need for a methods journal club in Berlin. Our hope is that the BEMC JClub fulfills that need through interdisciplinary and methodological discussions of the papers that we read.

Who is going to participate?

First of all, please remember that the BEMC community is focussed on researchers with a medium to advanced epidemiological knowledge and skill set. This is true not only for our BEMC talks, but also for our JClub.

Next to this, we hope that we will end up with a group that reflects the BEMC community. This means that we are looking for a group with a nice mix of backgrounds and experience. So if you think you have a unique background and focus in your work, we highly encourage you to join us and make our group as diverse as possible. We strive for this diversity because we do not want the JClub sessions to become echo chambers or teaching sessions, but truly discussions that promote knowledge exchange between methodologists from different fields.

What will we read?

Anything that is relevant for those who attend. The BEMC team will ultimately determine which papers we read, but we are nice people and listen carefully to the suggestions of regulars. Sometimes we will pick a paper on the same (or a related) topic as the BEMC talk of that month.

Even though the BEMC team has the lead in the organisation, the content of the JClub should come from everybody attending. Everybody who attends the JClub is asked to provide some points, remarks or questions to jumpstart the discussion.

What about students?

Difficult to say. The BEMC JClub is not designed to teach medical students the basics of epidemiology. Then again, everybody who is smart, can keep up, and can contribute to the discussion is welcome.

Are you a student and in doubt whether the BEMC JClub is for you? Just send us an email.

Where? When?

Details like these can be found on the BEMC JClub website. Just click here.

Berlin Epidemiological Methods Colloquium kicks off with SER event

A small group of epi-nerds (JLR, TK and myself) decided to start a colloquium on epidemiological methods. This colloquium series kicks off with a webcast of an event organised by the Society for Epidemiologic Research (SER), but in general we will organize meetings focussed on advanced topics in epidemiological methods. Anyone interested is welcome. Regular meetings will start in February 2017. All meetings will be held in English.
More information on the first event can be found below or via this link:

“Perspective of relative versus absolute effect measures” via SERdigital

Date: Wednesday, November 16th 2016 Time: 6:00pm – 9:00pm
Location: Seminar Room of the Neurology Clinic, first floor (Alte Nervenklinik)
Bonhoefferweg 3, Charité Universitätsmedizin Berlin – Campus Mitte, 10117 Berlin
(Map: https://www.charite.de/service/lageplan/plan/map/ccm_bonhoefferweg_3)

Description:
Join us for a live, interactive viewing party of a debate between two leading epidemiologists, Dr. Charlie Poole and Dr. Donna Spiegelman, about the merits of relative versus absolute effect measures. Which measure of effect should epidemiologists prioritize? This digital event organized by the Society for Epidemiologic Research will also include three live oral presentations selected from submitted abstracts. There will be open discussion with other viewers from across the globe and opportunities to submit questions to the speakers. And since no movie night is complete without popcorn, we will provide that, too! For more information, see: https://epiresearch.org/ser50/serdigital

If you plan to attend, please register (space limited): https://goo.gl/forms/3Q0OsOxufk4rL9Pu1

 

Diane 35 and thrombosis risk – Zembla broadcast

The oral contraceptive pill ‘Diane-35’ was in the news again. I wrote about the Diane-35 pill on this website before, even twice, when there was a broadcast of the radio show Argos.

The first time I wrote:

[…] this is a bit strange: there is nothing new about the information that third and fourth generation oral contraceptives have an increased risk of thrombosis compared to the risk conveyed by second generation oral contraceptives. Because the desired effects of the older and newer generation pills are similar (not getting pregnant, preventing or curing acne) there is limited, if any, reason to prescribe the newest and more expensive pills. See also the recent comment by Helmerhorst and Rosendaal in the BMJ. However, still 160.000+ (Diane-35) and 500.000 (third generation) women take these newer pills. […]

Those words also fit the broadcast of the TV show Zembla last week. Zembla has a reputation for activist reporting, and some of the broadcast is not to my taste. It is, however, good to see that Zembla tried to figure out how it is possible that Diane-35, which is not registered as a contraceptive, still gets prescribed as such. However, the broadcast leaves me unsatisfied, as it does not provide answers, and the reporters did not even get to talk to everybody they wanted to. (Why did the reporters not follow up on their WOB request? A missed chance!)

As in the previous two blog posts on this topic, I feel these stories are important, but they also need the proper amount of nuance. Therefore, this time too I conclude by saying that the absolute risk of thrombosis in young women (both venous and arterial) is very low, even when using oral contraceptives. But any unnecessary risk without benefit that can be avoided should be avoided. As always, consult your GP if you have any questions.

Diane 35 and thrombosis risk – Argos broadcast part II

Last week I wrote a post after hearing the radio broadcast of Argos. They concluded that broadcast with the promise to discuss how it is possible that a more expensive, equally effective medicine with more side effects can still be prescribed (in large numbers) in the Netherlands.

So I listened with great interest to the second part of the story, which can be heard on the Argos website. The journalists did a good job of covering all sides of the story, and they provide insight into the difference between ‘advertisement’ and ‘providing information’. What if the information that is provided is only one-sided? Does that count as advertisement? And if you want to play a nice game during the broadcast, ‘spot the logical fallacy’ is a good suggestion… Gems!

In case you are wondering: the absolute risk of thrombosis in young women is low, even when using oral contraceptives. But I still believe that any unnecessary added risk without benefit that can be avoided should be avoided, in dialogue with your GP!

Diane 35 and thrombosis risk – Argos broadcast

The oral contraceptive pill – especially Diane-35 – was in the news again. However, this is a bit strange: there is nothing new about the information that third and fourth generation oral contraceptives have an increased risk of thrombosis compared to the risk conveyed by second generation oral contraceptives. Because the desired effects of the older and newer generation pills are similar (not getting pregnant, preventing or curing acne) there is limited, if any, reason to prescribe the newest and more expensive pills. See also the recent comment by Helmerhorst and Rosendaal in the BMJ. However, still 160.000+ (Diane-35) and 500.000 (third generation) women take these newer pills. Since thrombosis risk might be highest in the first few months of use, it is unclear whether all these women should switch to the safer second generation oral contraceptives. But for women who get their first prescription, a second generation oral contraceptive is the best way to go (also according to the Dutch GP guidelines).

A lot of the research on this topic has been carried out by my colleagues from both the MEGA study and the RATIO study. If you want to learn more about the pill controversy, please listen to this episode of Argos, a Dutch radio programme.

In case you are wondering: the absolute risk of thrombosis in young women is low, even when using a newer generation oral contraceptive. But any added risk that can be avoided should be avoided, in dialogue with your GP!
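To put that reassurance into numbers, here is a back-of-the-envelope sketch. The baseline risk and relative risk below are round, purely illustrative figures, not taken from the broadcast or from any specific study; they only show how a clearly increased relative risk can still translate into a small absolute risk when the baseline risk is low.

```python
# Back-of-the-envelope illustration with round, made-up numbers (not figures
# from the broadcast or from any specific study): an increased *relative* risk
# can still mean a small *absolute* risk when the baseline risk is low.
baseline_risk = 2 / 10_000   # assumed yearly venous thrombosis risk without the pill
relative_risk = 4            # assumed relative risk on a newer-generation pill

risk_on_pill = baseline_risk * relative_risk
extra_cases = risk_on_pill - baseline_risk

print(f"Risk on the pill: {risk_on_pill * 10_000:.0f} per 10,000 women per year")
print(f"Extra cases:      {extra_cases * 10_000:.0f} per 10,000 women per year")
```

That arithmetic is exactly why I keep stressing the difference between relative and absolute risk in these posts.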

Masterclass “Noordwijk” covered in the LUMC magazine Cicero

The LUMC magazine “Cicero” covered our Masterclass in Noordwijk. It’s a nice description (in Dutch) of two weekends of undergrad die-hard epidemiology. One of the students is also interviewed and she concludes:

“The teachers manage to keep the students engaged the whole time, during two weekends from Thursday evening to Saturday afternoon. I was afraid I would not be able to keep that up. But it worked, and it even stayed fun.”

The text of the article can be found below and here in pdf (cicero 29 jan 2013). More articles etc. can be found on the media page.
