The collaboration with the group in Finland has turned into a nice new publication, with the title
“Cardiovascular events after ischemic stroke in young adults”
This work, based on data from Finland, was primarily done by KA and JP. KA came to Berlin to learn some epidemiology with the aid of the Virchow scholarship, which is where we came in. It was great to have KA as part of the team, and even better to have been able to work with their great data.
Now onto the results of the paper: as in the RATIO follow-up study, the risk of recurrence after young stroke remained elevated for a long time in this analysis of the Helsinki Young Stroke Registry. But unlike the RATIO paper, these data contained more information on the patients, for example the TOAST criteria. This meant that we were able to identify that the group with large artery atherosclerosis (LAA) had a very high risk of recurrence.
Easter brought another publication, this time with the title
“Statins and risk of poststroke hemorrhagic complications”
I am very pleased with this paper, as it demonstrates two important aspects of my job. First, I was able to share my thoughts on comparing current users vs never users. As has been argued before (e.g. by the group of Hernán) and as also articulated in a letter to the editor I wrote with colleagues from Leiden, such a comparison carries an inherent survival bias: you are comparing never users (i.e. those without an indication) with current users (those who have the indication, can handle the side effects of the medication, and stay alive long enough to be enrolled into the study as users). This matter is of course only relevant if you want to test the effect of statins, not if you are interested in the mere predictive value of being a statin user.
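To make this survival bias concrete, here is a toy simulation (not from the paper; the frailty mechanism and all numbers are invented for illustration). Statins are given a true effect of exactly zero, yet because frail users tend to stop the drug, or die, before enrolment, while frail never users simply stay never users, the current users end up looking healthier:

```python
import random

random.seed(42)

N = 100_000
BASELINE_RISK = 0.20       # hypothetical baseline risk of the outcome
FRAILTY_RISK_BOOST = 0.30  # invented extra risk for "frail" individuals

users, never_users = [], []
for _ in range(N):
    frail = random.random() < 0.3      # unmeasured frailty
    on_statin = random.random() < 0.5  # statin use, independent of the outcome
    risk = BASELINE_RISK + (FRAILTY_RISK_BOOST if frail else 0.0)
    # the asymmetry: half of the frail users stop the drug (or die) before
    # enrolment and so never appear in the "current user" group, while
    # frail never users remain in the never-user group
    if on_statin and frail and random.random() < 0.5:
        continue
    event = random.random() < risk
    (users if on_statin else never_users).append(event)

risk_users = sum(users) / len(users)
risk_never = sum(never_users) / len(never_users)
print(f"risk in current users: {risk_users:.3f}")  # lower, despite zero true effect
print(f"risk in never users:   {risk_never:.3f}")
```

The current users come out with a lower event risk even though the drug does nothing in this toy world: the group has simply been depleted of its frail members before the study starts.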
The second thing about this paper is the way we were able to use data from the VISTA collaboration, a large pool of data from previous stroke studies (RCTs and observational). I believe such ways of sharing data move science forward. Should all data be shared online for all to use? I am not sure of that, but the easy-access model of the VISTA collaboration (which includes data maintenance, harmonisation etc.) is certainly appealing.
We published a new article in PLOS Biology today, with the title:
“Where Have All the Rodents Gone? The Effects of Attrition in Experimental Research on Cancer and Stroke”
This is a wonderful collaboration between three fields: statistics, epidemiology and laboratory research. Together we took a look at what is called attrition in the preclinical lab, that is, the loss of data in animal experiments. This could be because an animal died before the needed data could be obtained, or simply because a measurement failed. This loss of data corresponds to the concept of loss to follow-up in epidemiological cohort studies, and from that field we know that it can lead to a substantial loss of statistical power and perhaps even bias.
But it was unknown to what extent this is also a problem in preclinical research, so we did two things. We looked at how often papers indicated there was attrition (with an alarming number of papers not providing the data for us to establish whether there was attrition), and we ran simulations of what happens when there is attrition under various scenarios. The results paint a clear picture: both the loss of power and the bias are substantial. Their magnitude of course depends on the attrition scenario, but the message of the paper is clear: we should be aware of the problems that come with attrition, and reporting on attrition is the first step in minimising them.
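As a rough illustration of the bias part (a toy sketch, not the simulation code from the paper; the effect size, group size and attrition mechanism are all made up), consider a small two-arm animal experiment in which the sickest treated animals die before they can be measured:

```python
import random
import statistics

random.seed(1)

N_PER_GROUP = 10     # a typically small animal experiment
TRUE_EFFECT = -1.0   # assumed: treatment lowers the outcome by 1 unit
N_SIM = 2000

def estimated_effect(attrition):
    control = [random.gauss(10.0, 2.0) for _ in range(N_PER_GROUP)]
    treated = [random.gauss(10.0 + TRUE_EFFECT, 2.0) for _ in range(N_PER_GROUP)]
    if attrition:
        # non-random attrition: the three treated animals with the worst
        # (highest) outcomes die before measurement and silently drop out
        treated = sorted(treated)[:-3]
    return statistics.mean(treated) - statistics.mean(control)

complete = statistics.mean(estimated_effect(False) for _ in range(N_SIM))
with_attrition = statistics.mean(estimated_effect(True) for _ in range(N_SIM))

print(f"true effect:                   {TRUE_EFFECT:.2f}")
print(f"mean estimate, complete data:  {complete:.2f}")
print(f"mean estimate, with attrition: {with_attrition:.2f}")  # exaggerated
```

The paper's scenarios are richer than this, but the mechanism is the same: once dropout depends on the outcome, the average estimate drifts away from the true effect, and the smaller groups hurt power on top of that.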
A nice thing about this paper is that it coincides with the start of a new research section in the PLOS galaxy, called “meta-research”: a collection of papers that all focus on how science works, behaves, and can or even should be improved. I can only welcome this, as more projects on this topic are in our pipeline!
A quick update on a new article that was published on Friday in the NTVG. This article, with the title
“Conducting your own research: a revised recipe for a clinical research training project”
– gives a couple of suggestions for young clinicians/researchers on how to organise their epidemiological research projects. The paper was written to commemorate the retirement of prof. JvdB, who wrote the original article back in 1989. I grew quite fond of this article, as it combines insights from 25 years back with quite recent insights (e.g. STROBE and cie. Schuyt) and resulted in an article that will help young researchers rethink how they plan and execute their own research projects.
There are five key suggestions that form the backbone of this article: limit the research question, conduct a pilot study, write the article before you collect the data, streamline the research process, and be accountable. As the article is in Dutch only at this moment, I will work on an English version. First drafts of this manuscript, each discussing one of the five recommendations, might appear on this website. And how about a German version?
Anyway, it has to be mentioned that if it were not for JvdB, this article would never have come to light. Not only because he wrote the original, but mostly because he is one of the most inspiring teachers of epidemiology.
This website is to keep track of all things that sound ‘sciency’, and so all the papers I contributed to end up here with a short description. Normally this means that I am one of the authors and I know well ahead of time that an article will be published online or in print. Today, however, I got a little surprise: I received notice that I am a co-author on a paper (pdf) which I knew was coming, but I didn’t know that I was a co-author. And my amazement grew even more the moment I discovered that I was listed as the last author, a position reserved for senior authorship in most medical journals.
However, there is a catch… I had to share my ‘last authorship’ position with 3186 others, an unprecedented number!
You might have guessed that this is not just a normal paper and that there is something weird going on here. Well, weird is not the right word; unusual is the word I would like to use, since this paper is an example of something that I hope will happen more often: citizen science. In citizen science, ordinary people without any scientific background or training help to obtain the data for a scientific experiment after some minimal instruction. This is wonderfully illustrated by the iSpex project, to which I contributed not as an epidemiologist but as a citizen scientist. If you want to know more, just read what I have written previously on this blog in the post ‘measuring aerosols with your iPhone’.
So the researchers who initiated the iSpex project have now analysed their data, submitted the results to the journal Geophysical Research Letters, and as a bonus made all contributing citizen scientists co-authors. Cool!
Now let’s get back to the question stated in the title… Did I deserve an authorship on this paper? Basically, no: none of the 3187 citizen scientists fulfil the criteria of authorship that I am used to (i.e. ICMJE), nor the criteria of the journal itself. I am no exception. However, I do believe that it is quite clear to any reader what the role of these citizen scientists was in this project. So this new form of authorship, i.e. ‘gift authorship to a group of citizen scientists’, is a cool way to keep the public engaged with science. A job well done!
Fraud, shoddy and sloppy science, conflicts of interest… Who said a science career is boring? When I write on these topics I sometimes have the feeling that I am doing science more harm than good; am I doing science a favour by showing its weaknesses and caveats? The answer remains yes, for I believe that problems need to be identified before you can act on them. That is also the theme of this post: what has been done on these topics in the last couple of days. A point-by-point list:
AllTrials: The AllTrials initiative, which I support, is going into its next round. Pharmaceutical companies are opening up (LEO, GSK), there are hearings in Brussels, and the debate in medical journals (especially the BMJ, as one of the founders of AllTrials) is ongoing. Great stuff!
PubMed Commons (a commenting system in PubMed, a new form of post-publication peer review) went online. It’s still a trial, but boy, this is cool. I love its punchline: “A forum for scientific discourse”.
We organised a try-out of our ‘on being a scientist’ workshop, which I wrote about earlier in this post. In that post I said it was going to be an LUMC workshop, but this has changed to a workshop for all starting PhD students of Leiden University, thus including all faculties. I am truly excited, and if our first run in November works out, this workshop might even become part of the official PhD education programme of Leiden University.
The Economist published a cover story on how science goes wrong. It describes how science, peer review, statistical false positives etc. work. It is a good read, especially if you are interested in science as a social process. Some remarks can be made: it’s not all that bad, because scientists tend to be familiar with how the system works… the system might not be perfect, but it is at the moment the best we can do… luckily there are ways to do better, ways that are also discussed in the article. It is good that The Economist and other media share these concerns, because this might build up to the critical mass needed to really change some of the weak points in the system. I thought about using the graph published next to the article, but once I discovered the animated version of the graph I fell in love. See for yourself below. (PS false positives: another reason not to rely on statistical testing alone!)
– edit: I changed the title of the post… the first title was a bit pretentious –
Today I participated in crowdsourced science: measuring aerosols with my smartphone. Thousands of measurements in one day, all about the air quality in the Netherlands. The nice thing about this project is that laypeople are the researchers: everybody who ordered a free gadget for their iPhone is a researcher on this great sunny day. How do the measurements work? See for yourself!
More on this project can be found on their website ispex.nl. This project was made possible with funds from the Dutch Lung Foundation and the Academische Jaarprijs. So is it time to think of a big epi project in which crowdsourced data can be used?
Together with my colleague TdC from the department of geriatrics, I am working on a workshop for starting PhD students on the topic of scientific integrity, under the working title “On being a scientist: a workshop in scientific integrity”.
The LUMC code of scientific integrity, the recent KNAW report of cie. Schuyt and the National Academy of Sciences publication “On being a scientist” will form the backbone of this workshop (see also the NAS video below, with the great quote “scientist should be people too!”). We are still developing the actual content, but the workshop will be primarily based on several cases to be discussed, ranging from clear scientific misconduct to conflicting demands from supervisors. How can you spot these problems in advance, solve them, or preferably prevent them? What additional measures should be put in place to sustain a critical but workable environment?
I am excited to be part of the team that is developing this workshop. As I said before, I do not believe that this workshop will prevent all possible scientific misconduct, but I do believe that educating PhD students helps to prevent them from making honest mistakes. Also, I hope that this course will help to create a critical but positive atmosphere in which science will thrive.
This workshop will be part of the PhD training that the LUMC offers free of charge. The first edition of this workshop will be held on September 18, 2013. Please contact me via email for more information.