medRxiv: the pre-print server for medicine

Pre-print servers are a place to share your academic work before actual peer review and subsequent publication. They are not completely new to academia, as many different disciplines have adopted pre-print servers to quickly share ideas and keep the academic discussion going. Many have praised the informal peer review that you get when you post on a pre-print server, but I primarily like the speed.

But medicine is not one of those disciplines. Up until recently, the medical community had to use bioRxiv, a pre-print server for biology. Very unsatisfactory, as the fields are just too far apart and the idiosyncrasies of the medical sciences bring some extra requirements (e.g. ethical approval, trial registration). So here comes medRxiv, from the makers of bioRxiv with support from the BMJ. Let’s take a moment to let the people behind medRxiv explain the concept themselves.

source: https://www.medrxiv.org/content/about-medrxiv

I love it. I am not sure whether it will be adopted by the community at the same pace as in some other disciplines, but doing nothing will never be part of the way forward. Critical participation is the only way.

So, that’s what I did. I wanted to be part of this new thing and convinced my co-authors to use the pre-print concept. I focussed my efforts on the paper in which we describe the BeLOVe study. This is a big cohort we are currently setting up and is therefore, in a way, well suited for pre-print servers. Pre-print servers allow us to describe what we want, to the level of detail of our choice, without restrictions on word count, appendices, tables or graphs. The speediness is also welcome, as we want to inform the world about our efforts while we are still in the pilot phase and still able to tweak the design here or there. And that is actually what happened: after being online for a couple of days, our pre-print already sparked some ideas by others.

Now we have to see how much effort it took us, and how much benefit we drew from this extra effort. It would be great if all journals would permit pre-prints (not all do…) and if submitting to a journal would be just a “one click” kind of effort after jumping through the hoops for medRxiv.

This is not my first pre-print. For example, the paper that I co-authored on the timely publication of trials from Germany was posted on bioRxiv. But being the guy who actually uploads the manuscript is a whole different feeling.

REWARD | EQUATOR Conference 2020 in Berlin

https://www.reward-equator-conference-2020.com

Almost 5 years ago something interesting happened in Edinburgh. REWARD and EQUATOR teamed up and organized a joint conference on “Increasing value and reducing waste in biomedical research”. Over the last five years, that topic has dominated meta-research and research improvement activities all over the world. Now it is time for another REWARD and EQUATOR conference, this time in Berlin. And I have the honor to serve on the local organizing committee.

My role is so small that the LOC is currently not even mentioned on the website. But the website does show some other names, promising a great event! It starts with the theme, which is “Challenges and opportunities for Improvement for Ethics Committees and Regulators, Publishers, Institutions and Researchers, Funders – and Methods for measuring and testing Interventions”. That is not a sexy title like 5 years ago, but it shows that the field has outgrown the alarmist phase and is now looking for real and lasting changes for the better – a move I can only encourage. See you in Berlin?


Results dissemination from clinical trials conducted at German university medical centers was delayed and incomplete.

My interests are broader than stroke, as you can see from my tweets as well as my publications. I am interested in how the medical scientific enterprise works – and, more importantly, how it can be improved. My latest paper looks at both.

The paper, with the relatively boring title “Results dissemination from clinical trials conducted at German university medical centres was delayed and incomplete”, is a collaboration with QUEST, carried by DS and his team. The short form of the title might just as well have been “RCTs don’t get published, and even if they do, it is often too late.”

Now, this is not a new finding, in the sense that older publications also showed high rates of non-publishing. Newer activities in this field, such as the trial trackers for the FDAAA and the EU, confirm this idea. The cool thing about these newer trackers is that they rely on continuous data collection through bots that crawl all over the interwebs to look for new trials. This upside has a couple of downsides though. First, because they are constantly being updated, these trackers do not work that well as a benchmarking tool. Second, they might miss some obscure types of publication, which might lead to underestimation of reporting. Third, to keep the trackers simple, they tend to use only one definition of what counts as “timely publication”, even though neither the field nor the guidelines are conclusive on this.

So our project is something different. To get a good benchmark, we looked at whether trials executed by/at German university medical centers were published in a timely fashion. We collected the data automatically as far as we could, but also did a complete double check by hand to ensure we didn’t skip publications (hint: we did miss some; hand search is important, potentially because of the language issue). Then we put all the data in a database and made a shiny app so that readers themselves can decide which definitions and subsets they are interested in. The bottom line: on average, only ~50% of trials get published within two years after their formal end. That is too little and too slow.
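To make the definition dependence concrete, here is a minimal sketch of the benchmark calculation. This is not our actual analysis code (that lives with the OSF project and the shiny app); the column names, the toy data and the cut-offs are made up purely for illustration:

```python
# Minimal sketch of a definition-dependent benchmark; not the project's real pipeline.
# Column names and dates are invented for illustration.
import pandas as pd

def proportion_timely(trials: pd.DataFrame, months: int = 24) -> float:
    """Share of trials with any result published within `months` of their formal end."""
    delay_months = (trials["publication_date"] - trials["completion_date"]).dt.days / 30.44
    return delay_months.le(months).mean()  # unpublished trials (NaT) count as not timely

toy = pd.DataFrame({
    "completion_date": pd.to_datetime(["2010-01-15", "2010-06-01", "2011-03-01"]),
    "publication_date": pd.to_datetime(["2011-07-01", None, "2014-05-01"]),
})

print(proportion_timely(toy))             # strict 24-month definition
print(proportion_timely(toy, months=60))  # a more lenient definition
```

Changing the cut-off (or the subset of trials) is exactly the kind of choice the shiny app leaves to the reader.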

shiny app

This is a cool publication because it provides a solid benchmark that truly captures the current state. Now it is up to us, and the community, to improve our reporting. We should track progress in the upcoming years with automated trackers, and in 5 years or so do the whole manual tracking once more. But that is not the only reason why it was so inspiring to work on this project; it was the diverse team of researchers from many different groups that made the work fun to do. The discussions we had on the right methodology were complex and even led to an ancillary paper by DS and his group. Publishing this work in the most open way possible (open data, preprint, etc.) was also a good experience.

The paper is here on PubMed, the project page on OSF can be found here, and the preprint is on bioRxiv; let us not forget the shiny app where you can check out the results yourself. Kudos go out to DS and SW, who really took the lead in this project.

Joining the PLOS Biology editorial board

I am happy and honored to share that I am going to be part of the PLOS Biology editorial board. PLOS Biology has a special model for its editorial duties, with the core of the work being done by in-house staff editors – all scientists turned professional science communicators/publishers. They are supported by the academic editors – scientists who are active in their field and can help the in-house editors with insight/insider knowledge. I will join the team of academic editors.

When the staff editors asked me to join the editorial board, it quickly became clear that they invited me because I might be able to contribute to the meta-research section of the journal. After all, next to the peer review reports I wrote for the journal, I published papers on missing mice, on the idea behind sequential designs in preclinical research, and more recently on the role of exact replication.

Next to the meta-research manuscripts that need evaluation, I am also looking forward to simply working with the professional and smart editorial office. The staff editors already teased that a couple of innovations are coming up. So, next to helping meta-research forward, I am looking forward to helping shape and evaluate these experiments in scholarly publishing.

FVIII, Protein C and the Risk of Arterial Thrombosis: More than the Sum of Its Parts.

source: https://www.youtube.com/watch?v=jGMRLLySc4w 

Peer review is not a pissing contest. Peer review is not about finding the smallest of errors and delaying publication because of them. Peer review is not about being right. Peer review is not about rewriting the paper under review. Peer review is not about asking for yet another experiment.

 

Peer review is about making sure that the conclusions presented in the paper are justified by the data, and about helping the authors produce the best possible report of what they did.

At least, that is what I try to remind myself of when I write a peer review report. So what happened when I reviewed a paper presenting data on two hemostatic factors, protein C and FVIII, in relation to arterial thrombosis? These two proteins are known to have a direct interaction with each other. But does this also translate into a “have both, get extra risk for free” situation when the two risk factors are combined?

There are two approaches to test such so-called interaction: statistical and biological. The authors presented one approach, while I thought the other was better suited to analyze and interpret the data. Did that result in an academic battle of arguments, or perhaps a peer review deadlock? No, the authors were civil enough to entertain my rambling thoughts and comments with additional analyses and results, but convinced me in the end that their approach had more merit in this particular situation. The editor of Thrombosis and Hemostasis saw all this going down and agreed with my suggestion to publish an accompanying editorial on this topic to help readers understand what actually happened during the peer review process. The nice thing about this is that the editor asked me to write that editorial, which can be found here; the paper by Zakai et al can be found here.
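For readers who wonder what the fuss is about: the two scales can genuinely disagree. Below is a small numeric sketch with made-up relative risks (not the numbers from the Zakai et al paper) showing a joint effect that is super-additive, which is often read as biological interaction, while showing no interaction on the multiplicative scale that regression models test by default:

```python
# Made-up relative risks, purely to illustrate the two interaction scales.
def reri(rr10: float, rr01: float, rr11: float) -> float:
    """Relative excess risk due to interaction (additive scale); > 0 means super-additive."""
    return rr11 - rr10 - rr01 + 1

def multiplicative_ratio(rr10: float, rr01: float, rr11: float) -> float:
    """Joint effect divided by the product of single effects; 1 means no multiplicative interaction."""
    return rr11 / (rr10 * rr01)

# Hypothetical: exposure A alone, exposure B alone, and both combined
rr_a, rr_b, rr_ab = 2.0, 1.5, 3.0
print(reri(rr_a, rr_b, rr_ab))                  # 0.5 -> interaction on the additive scale
print(multiplicative_ratio(rr_a, rr_b, rr_ab))  # 1.0 -> no interaction on the multiplicative scale
```

Which scale answers the question at hand is the kind of choice the peer review exchange, and the editorial, revolved around.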

All this taught me a thing or two about peer review: cordial peer review is always better (duh!) than a peer review street brawl, and sharing aspects of the peer review process can help readers understand the paper in more detail. Open peer review, especially the parts where peer review is not anonymous and reports are open to readers after publication, is a way to foster both practices. In the meantime, this editorial will have to do.

 

Associate editor at BMC Thrombosis Journal

source: https://goo.gl/CS2XtJ

In the week just before Christmas, HtC approached me to ask whether I would like to join the editorial board of BMC Thrombosis Journal as an Associate Editor. The aims and scope of the journal, taken from their website:

“Thrombosis Journal is an open-access journal that publishes original articles on aspects of clinical and basic research, new methodology, case reports and reviews in the areas of thrombosis. Topics of particular interest include the diagnosis of arterial and venous thrombosis, new antithrombotic treatments, new developments in the understanding, diagnosis and treatments of atherosclerotic vessel disease, relations between haemostasis and vascular disease, hypertension, diabetes, immunology and obesity.”

I talked to HtC, someone at BMC, as well as some of my friends and colleagues whether or not this would be a wise thing to do. Here is an overview of the points that came up:

Experience: Thrombosis is the field I grew up in as a researcher. I know the basics and have extensive knowledge of specific parts of the field. But with my move to Germany I started to focus on stroke, so one might wonder why I would not use my time to work with a stroke-related journal. My answer is that the field of thrombosis is a stroke-related field, and that my position in both worlds is a good opportunity to learn from both fields. Sure, there will be topics that I have less knowledge of, but this is where an associate editor should rely on expert reviewers and fellow editors.

This new position will also provide me with a bunch of new experiences in itself: for example, sitting on the other side of the table in a peer review process might help me to better understand a rejection of one of my own papers. Bottom line is that I think that I both bring and gain relevant experiences in this new position.

Time: These things cost time. A lot. Especially when you still need to learn the skills needed for the job, like me. But learning these skills as an associate editor is an integral part of the science apparatus, and I am sure that the time I invest will help me develop as a scientist. Also, the time that I need to spend is not necessarily the type of time that I currently lack, i.e. writing time. For writing and doing research myself I need decent blocks of time to dive in and focus – 4+ hours if possible. The time I need for my associate editor tasks is more fragmented: finding peer reviewers, reading their comments and making a final judgement are relatively fragmented activities, and I am sure that as soon as I get the hang of it I can squeeze those activities into shorter slots of time. Perhaps a nice way to fill those otherwise lost 30 minutes between two meetings?

Open science: Thrombosis Journal is part of the BioMed Central family. As such, it is a 100% OA journal. I am neither an open science fanboy nor a sceptic, but I am very curious how OA is developing, and working with an OA journal will help me understand what OA can and cannot deliver.

Going over these points, I am convinced that I can contribute to the journal with my experience in the fields of coagulation, stroke and research methodology. Also, I think that the time it will take to learn the necessary skills is an investment that in the end will help me grow as a researcher. So, I replied to HtC with a positive answer. Expect emails requesting peer review reports soon!

The paradox of the BMI paradox


I had the honor to be invited to the PHYSBE research group in Gothenburg, Sweden. I got to talk about the paradox of the BMI paradox. In the announcement abstract I wrote:

“The paradox of the BMI paradox”
Many fields have their own so-called “paradox”, where a risk factor in certain instances suddenly seems to be protective. A good example is the BMI paradox, where high BMI in some studies seems to be protective against mortality. I will argue that these paradoxes can be explained by a form of selection bias. But I will also discuss that these paradoxes have provided researchers with much more than just an erroneous conclusion on the causal link between BMI and mortality.

I first address the problem of BMI as an exposure. Easy stuff. But then we come to index event bias, or collider stratification bias, and how selections do matter, in recurrence research paradoxes (like PFO & stroke) or in health status research (like BMI), and can introduce confounding into the equation.
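A toy simulation makes the collider argument tangible. Everything below is invented (the numbers, the effect sizes, the “frailty” factor); it only illustrates the mechanism, not any real BMI or PFO data:

```python
# Toy simulation of collider stratification bias; all parameters are made up.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

high_bmi = rng.binomial(1, 0.3, n)   # exposure of interest
frailty = rng.binomial(1, 0.3, n)    # unmeasured cause of both selection and death

# Selection into the study population (the collider) depends on both factors.
selected = rng.binomial(1, 0.05 + 0.25 * high_bmi + 0.50 * frailty).astype(bool)

# Mortality depends only on frailty; by construction, BMI has no causal effect on it.
mortality = rng.binomial(1, 0.05 + 0.30 * frailty)

def risk_ratio(exposure, outcome):
    return outcome[exposure == 1].mean() / outcome[exposure == 0].mean()

print(risk_ratio(high_bmi, mortality))                      # ~1.0 in the whole population
print(risk_ratio(high_bmi[selected], mortality[selected]))  # <1.0 among the selected: BMI looks "protective"
```

Conditioning on selection makes high BMI and frailty negatively associated within the study, so a harmless exposure ends up looking protective; that is the whole “paradox”.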

I see that this confounding might not be enough to explain all that is observed in observational research, so I continued looking for other reasons why there are such strong feelings about these paradoxes. Do they exist, or don’t they? I found that the two sides tend to “talk in two worlds”. One side talks about causal research and asks what we can learn from the biological systems that might play a role, whereas the other thinks from a clinical point of view and starts to talk about RCTs and the need for weight control programs in patients. But there is a huge difference in study design, research question and interpretation of results between the studies that they cite and interpret. Perhaps part of the paradox can be explained by this misunderstanding.

But the cool thing about the paradox is that, through complicated topics, new hypotheses, interesting findings and strong feelings about the existence of paradoxes, I think we can all agree: the field of obesity research has won in the end. And with winning I mean that the methods are now better described, better discussed and better applied. New hypotheses are being generated and confirmed or refuted. All in all, the field makes progress not despite, but because of the paradox. A paradox that doesn’t even exist. How is that for a paradox?

All in all an interesting day, and I think I made some friends in Gothenburg. Perhaps we can do some cool science together!

Slides can be found here.

How to set up a research group

A couple of weeks ago I wrote down some thoughts I had while writing a paper for the JTH series on Early Career Researchers. I was asked to write how one sets up a research group, and the four points I described in my previous post can be recognised in the final paper.

But I also added some reading tips to the paper. Reading on a particular topic helps me not only to learn what is written in the books, but also to get into a certain mindset. So, when I knew that I was going to take over a research group in Berlin, I read a couple of books, both fiction and non-fiction. Some were about Berlin (e.g. Cees Nooteboom’s Berlijn 1989/2009), some focused on academic life (e.g. Porterhouse Blue). They helped to get my mind in a certain gear and prepare me for what was coming. In that sense, my bookcase says a lot about me.

Number one on the list of recommended reads is the category of standard management best sellers, as I wrote in the text box:

// Management books: There are many titles that I can mention here; whether it is the best-seller Seven Habits of Highly Effective People or any of the smaller booklets by Ken Blanchard, I am convinced that reading some of these texts can help you in your own development as a group leader. Perhaps you will like some of the techniques and approaches that are proposed and decide to adopt them. Or, like me, you may initially find yourself irritated because you cannot envision the approaches working in the academic setting. If this happens, I encourage you to keep reading, because even in these cases I learned something about how academia works and what my role as a group leader could be through this process of reflection. My absolute top recommendation in this category is Leadership and Self-Deception: a text that initially got on my nerves but in the end taught me a lot.

I really think that is true. You should not only read books that you agree with, or whose story you enjoy. Sometimes you can like a book not for its content but for the way it makes you question your own preexisting beliefs and habits. It is true, though, that this sometimes makes it difficult to actually finish such a book.

Next to books, I am quite into podcasts, so I also wrote:

// Start up. Not a book, but a podcast from Gimlet media about “what it’s really like to get a business off the ground.” It is mostly about tech start-ups, but the issues that arise when setting up a business are in many ways similar to those you encounter when you are starting up a research group. I especially enjoyed seasons 1 and 3.

I thought about including the sponsored podcast “Open for business” from Gimlet Creative, as it touches upon some very relevant aspects of starting something new. But for me the jury is still out on the “sponsored podcast” concept – it is branded content from Amazon, and I am not sure to what extent I like that. For now, I do not like it enough to include it in my JTH paper.

The paper is not online yet due to the summer break, but I will provide a link as soon as possible.

– update 11.10.2016 – here is a link to the paper. 


Does d-dimer really improve DVT prediction in stroke?

source: elsevier.com

Good question, and even though thromboprophylaxis is already given according to guidelines in some countries, I can see the added value of a well-discriminating prediction rule. Especially finding those patients with a low DVT risk might be useful. But whether to use d-dimer for this is a whole other question. To answer it, a thorough prediction model needs to be set up both with and without the d-dimer information, and only a direct comparison of these two models will provide the information we need.

In our view, that is not what the paper by Balogun et al did. And after critical appraisal of the tables and text, we found some inconsistencies that prohibit the reader from understanding what exactly was done and which results were obtained. In the end, we decided to write a letter to the editor, especially to prevent other readers from mistakenly taking over the authors’ conclusion. This conclusion being that “D-dimer concentration within 48 h of acute stroke is independently associated with development of DVT. This observation would require confirmation in a large study.” Our opinion is that the data from this study need to be analysed properly to justify such a conclusion. One of the key elements in our letter is that the authors never compare the AUC of the model with and without d-dimer. This is needed, as that would provide the bulk of the answer to whether or not d-dimer should be measured. The only clue we have are the ORs of d-dimer, which range between 3 and 4, which is not really impressive when it comes to diagnosis and prediction. For more information on this, please check the paper by Pepe et al on the misuse of the OR as a measure of interest for diagnosis/prediction.
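To illustrate the kind of comparison we asked for, here is a hedged sketch on simulated data (arbitrary coefficients and variable names, nothing to do with the actual Balogun et al dataset): fit the prediction model with and without d-dimer and compare the AUCs, rather than reporting the OR of d-dimer alone.

```python
# Simulated example of comparing model discrimination with and without d-dimer.
# Data and coefficients are invented; this is not the Balogun et al analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 2_000

age = rng.normal(70, 10, n)
immobile = rng.binomial(1, 0.4, n)
log_ddimer = rng.normal(0, 1, n)

# Simulated DVT outcome
logit = -4 + 0.03 * age + 0.8 * immobile + 0.4 * log_ddimer
dvt = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_base = np.column_stack([age, immobile])
X_full = np.column_stack([age, immobile, log_ddimer])

auc_base = roc_auc_score(dvt, LogisticRegression().fit(X_base, dvt).predict_proba(X_base)[:, 1])
auc_full = roc_auc_score(dvt, LogisticRegression().fit(X_full, dvt).predict_proba(X_full)[:, 1])

# The increment in AUC (ideally assessed on independent data) answers the question;
# the odds ratio of d-dimer by itself does not.
print(f"AUC without d-dimer: {auc_base:.3f}, with d-dimer: {auc_full:.3f}")
```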

A final thing I want to mention is that our letter was the result of a mini-internship by one of the students in the Master programme at the CSB and was drafted in collaboration with our Virchow scholar HGdH from the Netherlands. Great team work!

The letter can be found on the website of Thrombosis Research as well as on my Mendeley profile.

 

Starting a research group: some thoughts for a new paper


It has been 18 months since I started at the CSB in Berlin to take over the lead of the clinical epidemiology research group. Recently, the ISTH early career taskforce contacted me to ask whether I would be willing to write something about my experiences over these 18 months as a rookie group leader. The idea is that these experiences, combined with a couple of other papers on similarly useful topics for early career researchers, will be published in JTH.

I was a bit reluctant at first, as I believe that how people handle the new situations one encounters as a new group leader depends strongly on personality and individual circumstances. But then again, the new situations that I encountered might be more generalizable to other people. So I decided to go ahead and focus on describing the new situations I found myself in, while keeping the personal experiences limited and using them for illustration only.

While writing, I discerned that there are basically four things about my new situation that I would have loved to realise a bit earlier.

  1. A new research group is never without context; get to know the academic landscape around your research group, as this is where you find people for new collaborations, etc.
  2. You either start a new research group from scratch, or you inherit a research group; be aware that the two have very different consequences and require different approaches.
  3. Try to find training and mentoring to help you cope with the new roles that group leaders have; it is not only the role of group leader that you need to adjust to. HR manager, accountant, mentor, researcher, project initiator, project manager and consultant are just a couple of the roles that I also need to fulfill on a regular basis.
  4. New projects; it is tempting to put all your power, attention, time and money behind one project, but sometimes new projects fail. Perhaps start a couple of small side projects as a contingency?

As said, the stuff I describe in the paper might be very specific to my situation and as such not applicable to everyone. Nonetheless, I hope that reading the paper might help other young researchers prepare for the transition from post-doc to group leader. I will report back when the paper is finished and available online.

 

Causal Inference in Law: An Epidemiological Perspective

source: ejrr

Finally, it is here. The article I wrote together with WdH, MZ and RM was published in the European Journal of Risk and Regulation last week. And boy, did it take time! This whole project, an interdisciplinary project in which epidemiological thinking was applied to questions of causal inference in tort law, took more than 3 years – with only a couple of months of writing… the rest was waiting and waiting and waiting and some peer review. But more on this later.

First some content. In the article we discuss the idea of proportional liability, which adheres to the epidemiological concept of multi-causality. But the article is more: as this is a journal for non-epidemiologists, we also provide a short and condensed overview of study design, bias and other epidemiological concepts such as counterfactual thinking. You might recognise the theme from my visits to the Leiden Law School for some workshops. The EJRR editorial describes it as: “(…) discuss the problem of causal inference in law, by providing an epidemiological viewpoint. More specifically, by scrutinizing the concept of the so-called “proportional liability”, which embraces the epidemiological notion of multi-causality, they demonstrate how the former can be made more proportional to a defendant’s relative contribution in the known causal mechanism underlying a particular damage.”

Getting this thing published was tough: the quality of the peer review was low (dare I say zero?), communication was difficult, the submission system was flawed, etc. But most of all, the editorial office was slow – the first submission was in June 2013! This could be a non-medical journal thing, I do not know, but still: almost three years. And all this for an invited article that was planned to be part of a special edition on the link between epidemiology and law, which never came. Due to several delays (surprise!) of the other articles for this edition, it was decided that our article would not wait for this special edition anymore. Therefore, our cool little insight into epidemiology now seems to be lost between all those legal and risk regulation articles. A shame if you ask me, but I am glad that we are not waiting any longer!

Although I do love interdisciplinary projects, and I think the result is a nice one, I do not want to go through this process again. No more EJRR for me.

Oh, one more thing… the article is behind a paywall and I do not have access through my university, nor did the editorial office provide me with a link to a pdf of the final version. So, to be honest, I don’t have the final article myself! Feels weird. I hope EJRR will provide me with a pdf quite soon. In the meantime, anybody with access to this article, please feel free to send me a copy!

Changing stroke incidence and prevalence

changing stroke population

Declining incidences of disease over time do not necessarily mean that the number of patients in care also goes down, as the prevalence of a disease is a function of incidence and mortality – “death cures”. Combine this notion with the fact that both the incidence and mortality rates of the different stroke subtypes change differently over time, and you will see that the future group of patients who suffer from stroke will be quite different from the current one.
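For the arithmetic behind the “death cures” remark: under the usual steady-state approximation, prevalence is roughly the incidence rate times the mean disease duration, and duration lengthens when mortality falls.

```latex
% Steady-state approximation for a reasonably rare condition:
P \approx I \cdot \bar{D}
% P: prevalence, I: incidence rate, \bar{D}: mean duration of disease
```

With invented numbers purely for illustration: an incidence of 200 per 100,000 per year and a mean duration of 5 years gives a prevalence of roughly 1,000 per 100,000; if falling mortality doubles the mean duration, prevalence goes up even while incidence goes down.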

I made this picture to accompany a small text on declining stroke incidence that I wrote for the newsletter of the Kompetenznetz Schlaganfall; the text can be found in this pdf.

The professor as an entrepreneur

picture: onderzoeksredactie.nl

Today I read a long read from onderzoeksredactie.nl, a Dutch initiative for high-quality research journalism. In this article they present the results of their research into the conflicts of interest of professors in the Netherlands. They were very thorough: they published a summary in article form, but also made sure that all methodological choices, the questionnaire they used, the results, etc. are available for further scrutiny by the reader. It is a shame, though, that the complete dataset is not available for further analyses (what characteristics make some professors not disclose their COI?).

The results are, although unpleasant to realise, not new. At least not to me. I can imagine that for most people the idea of a professor with a COI is indeed a rarity, but working in academia I have seen enough cases to know that this is not true. The article was thorough in its analysis: it is not just that professors want to get rich; the concept of the professor as an entrepreneur is even supported by the Dutch government. Recent changes in the funding structure of research mean that ‘valorisation’, spin-offs and collaboration with industry partners are promoted, all to further enlarge the ‘societal impact’ of science. These changes might indeed reinforce such a thing, but I think that the academic freedom that researchers have should never be the victim.

How science goes wrong? We’re improving!


Fraud, shoddy and sloppy science, conflicts of interest… Who said a science career is boring? When I write on these topics I sometimes have the feeling that I am doing science more harm than good; am I doing science a favor by showing its weaknesses and caveats? The answer remains yes, for I believe that problems need to be identified before you can act on them. This is also the theme of this post: what has been done on these topics in the last couple of days? A point-by-point list:

  • AllTrials: The AllTrials initiative, which I support, is going into its next round. Pharmaceutical companies are opening up (LEO, GSK), there are hearings in Brussels, and the debate in medical journals (especially the BMJ, as one of the founders of AllTrials) is going on. Great stuff!
  • PubMed Commons (a commenting system in PubMed, as a new form of post-publication peer review) got online. It’s still a trial, but boy, this is cool. I love its punchline: “A forum for scientific discourse”.
  • We organised a try-out of our ‘on being a scientist’ workshop, on which I wrote an earlier post. In that post I say that it is going to be an LUMC workshop, but this changed to a workshop for all starting PhD students of Leiden University, thus including all faculties. I am truly excited, and if our first run in November works out, this workshop might even become part of the official PhD education programme of Leiden University.
  • The Economist published a cover story on How science goes wrong. It tells how science, peer review, statistical false positives, etc. work. It is a good read, especially if you are interested in science as a social process. Some remarks can be made: it is not all that bad, because scientists tend to be familiar with how the system works; the system might not be perfect, but it is at the moment the best we can do; and luckily there are ways to get better, ways that are also discussed in the article. It is good that The Economist and other media share these concerns, because this might now build up to the critical mass needed to really change some of the weak points in the system. I thought about using the graph published next to the article, but once I discovered the animated version of the graph I fell in love. See for yourself below. (PS false positives: another reason not to rely only on statistical testing!)
  • – edit: I changed the title of the post… the first title was a bit pretentious –

Continue reading “How science goes wrong? we’re improving!”

Mendeley bought by Elsevier – good or bad?

(image via litroost)

I use Mendeley (paid subscription) to keep track of my literature and as a reference manager, etc. I even use it as my main method to share my publications on the internet (see also my personal Mendeley profile). I like it. I like it a lot. Sure, there were some hiccups, it being a start-up and all, but I like the idea: share with colleagues all the papers, and your comments on them, that you need to write that paper. But now Mendeley has been bought by Elsevier, which is not really known for its friendly attitude towards the whole idea of sharing scientific articles. More about this Elsevier-Mendeley deal can be read in this column in the New Yorker. So what to do? Cancel my subscription as a sign to the folks at Elsevier? Well, the guys from Mendeley promised that all will be all right (it will even get better!) and that the great adventure of sharing your literature and results will not be harmed by the acquisition, so perhaps canceling my subscription might be too fast. Let’s wait and see…

News from AllTrials.net

So I got an email from the folks at alltrials.net on their progress. I explained the initiative in an earlier post, in which I also mentioned that I, as well as the Dutch Epidemiological Society (VVE), signed the petition. So did it help? Just read the following section from their email.

You, and 40,000 other people around the world, have signed the AllTrials petition. We are on the threshold of significant change, but we now urgently need help from all of you to make this a reality.

Your support has already persuaded hundreds of organisations to commit to the aim of getting all clinical trials registered and their results reported. These include regulators and faculties. GSK, one of the biggest drug companies in the world, has signed up and others are considering it. Some of these groups are now starting discussions about the practical ways to stop trial results being withheld.

So far we’ve created a ripple, and got some important commitments. We have empowered individuals in large organisations to speak up, and it has changed the mainstream opposition on this issue. In doing so, we have also challenged those who try to pretend that the problem doesn’t exist, or who falsely claim that it has already been fixed.

But this is only the start if you ask the alltrials.net folks: they want to push on with three goals:

  • One million signatures on the petition. 
  • More international organisations signed up.
  • £40,000 so we can keep going.

I can only agree: consider signing (if you haven’t done so already)!

LUMC workshop on scientific integrity

Together with my colleague TdC from the department of geriatrics, I am working on a workshop for starting PhD students on the topic of scientific integrity, under the working title “On being a scientist: a workshop in scientific integrity”.

The LUMC code of scientific integrity, the recent KNAW report of cie. Schuyt and the publication of the National Academy of Sciences “On being a scientist” will form the backbone of this workshop (see also the video below from the NAS, with the great quote “scientists should be people too!”). We are still developing the actual content, but the workshop will primarily be based on several cases that will be discussed, ranging from cases of clear scientific misconduct to cases of conflicting demands from supervisors. How can you spot these problems in advance, and solve or preferably prevent them? What additional measures should be put in place to sustain a critical but workable environment?

I am excited that I can be part of the team that develops this workshop. As I said before, I do not believe that this workshop will prevent all possible scientific misconduct, but I do believe that educating PhD students helps to prevent them from making honest mistakes. Also, I hope that this course will help to create a critical but positive atmosphere in which science will thrive.

This workshop will be part of the PhD training that the LUMC offers free of charge. The first edition of the this workshop will be held on September 18 2013. Please contact me via email for more information.

————————–

video “on being a scientist” from the NAS

Grant awarded to investigate the role of coagulation FVIII in the aetiology of ischaemic stroke in the RATIO study

I just received a letter from the KNAW stating that the grant proposal I sent to one of the funds of the KNAW, the Van Leersum Fund, was awarded. From their website, we can only learn a little about this fund:

“The Van Leersum Fund supports neuro(bio)logical, radiological and pharmaceutical research by awarding a series of research grants.

The Fund was established in 1922 and is named after P. van Leersum. The assets of the fund are made up of his estate and the estate of Ms I.G. Harbers-Kramer.”

With this grant we will be able to measure coagulation factor VIII in the ischaemic stroke substudy of the RATIO study. Coagulation factor VIII is one of the most potent risk factors for venous thrombosis in the coagulation system, and we are quite curious what effect it has on the risk of ischaemic stroke in young women.

Is science self-cleansing? An article in the “Academische Boekengids” discussing report cie. Schuyt

Earlier I wrote about the KNAW “Adviescommissie onderzoeksgegevens in de wetenschap” and their report “Zorgvuldig en integer omgaan met wetenschappelijke onderzoeksgegevens”. This report prompted a discussion in the March 2013 edition of the Academische Boekengids.

Three scientists give their vision: Miedema (dean of medicine at the UMCU), Vandenbroucke (KNAW professor and professor of epidemiology at the LUMC, member of the Schuyt committee) and Paul (professor of secularization studies in Groningen). It is important to note that the disagreement between the authors was known beforehand.

Miedema identifies a change in science, especially medical and social science, in which economic and social forces influence science and scientists. These forces have led to a ‘system failure’ of science, leading to shoddy science or, in his words, ‘post-academic science’. Miedema argues that these changes cannot be undone and that certain measures need to be taken to correct this system failure. What measures? Miedema points toward Quality Assurance and Quality Control (QA/QC), making a comparison with pharmaceutical research embedded within Good Clinical Practice (GCP). This should be done by governments, universities and funding bodies. Interestingly, he leaves scientists out of this list. And what does Miedema think of the report of the committee? He believes the vision of the report is based on the old idea of science in which all scientists are directly held accountable by peer pressure, a vision that according to Miedema is not valid in this day and age.

Vandenbroucke points out an error in the argumentation: Miedema targets post-academic science. Vandenbroucke agrees that this is a problem, but not the problem discussed by the committee. Their task was to see how data during and after research should be treated in order to keep science workable, without too many hiccups and problems. The committee provides some answers, but one of the main themes is that scientists should self-regulate, for they are the only experts in this area. This is in contrast to Miedema, who abhors the idea of self-regulation: science is not science anymore, so how can scientists self-regulate with all these strong forces that are impossible for a simple scientist to grasp? Vandenbroucke counters Miedema by explaining that his vision of science (science as the search for truth) is not at odds with the problems that arise with post-academic science (science as a complex social construct in which forces other than the truth have a big influence). Even more: these two notions can coexist, a concept first noted by Stephen Jay Gould.

Paul tries to reconcile the two previous writers: he agrees with Miedema that in earlier times the scientist was appreciated for his behavior as a person, whereas this view seems to be outdated in this day and age. But Paul also approaches the problem from the other side: solving the problems that come with post-academic science calls for strong personalities who can counter the unwanted forces that trouble science. Paul mentions the work of a science historian whom he – oddly enough in this context – announces as an ‘honored scientist’ (Dutch: gelauwerde wetenschapper) who published a ‘handsome study’ (Dutch: fraaie studie).

So what are the suggested solutions? Because the authors disagree on the origins of the problem, their solutions also differ. Vandenbroucke and Miedema in particular find themselves, at first glance, diametrically opposed to one another. Vandenbroucke wants to start a bottom-up discussion on what it is to be a good scientist, whereas Miedema wants top-down QA and QC. These ideas are not new. For example, Jacobus Lubsen also brought up this concept in an article in the NRC of December 2011: quality control and forensic statistics should increase the detection rate of wrongdoing and should therefore be instituted. I responded to this article with a small letter to the NRC in which I state that complete control is difficult and expensive and often only identifies shoddy and fraudulent science with hindsight. It will have a preventive effect on bad science, but will it have such an effect on fraud? After all, other fields that have huge governance structures, such as banking and accountancy, also have their fraud scandals. Even more, the frequency of sloppy science is hardly affected by these measures. A better way to prevent both sloppy and fraudulent science is, I believe, better training of young scientists. Introducing young scientists to the key concepts of scientific conduct and creating a critical but non-repressive atmosphere, perhaps even across several research groups to prevent tunnel vision of individuals, will lead to increased informal control and a decrease in sloppy and shoddy science. The committee also mentions this concept, calls it “increasing peer pressure”, and puts scientists at the helm of this operation.

It will not surprise you that I agree with Vandenbroucke for the most part. But I also see merit in Miedema’s argumentation. Perhaps I agree with both to some extent because they address two different concepts: science is the quest for knowledge, based on epistemic virtues. Self-regulation, through educating young aspiring scientists in a positive but critical atmosphere, will increase the quality of research over time. But science is also a social construct, and scientists need, besides guidance by peers, governance and regulations for certain scenarios: the Stapel and Poldermans cases as well as the previously discussed book ‘Bad Pharma’ by Ben Goldacre are examples of why this might be true. Besides informal peer review and guidance, an extended system of checks and balances, GCP or not, might help to keep colleagues accountable for their work. Science in itself is a system of checks and balances, but this system might be expanded with some form of regulation and standardization, with efficacy and efficiency kept in mind. But most of all, now is the time to train the young.

– update on 25/3/2013: an interview with both JvdB and FM was published in the NTVG. Together with the editor-in-chief they discuss performing research, obtaining a PhD and publishing your results. Click here for the pdf (NTVG website, in Dutch).

 

Ben Goldacre’s ‘Bad Pharma’ and research from the LUMC

Ben Goldacre, known from the bestseller Bad Science (book and blog), has a new book, Bad Pharma. Goldacre is always fun to read: science, both the method and the social phenomenon, explained for non-scientists while still interesting for scientists. The same goes for his new title Bad Pharma, in which he explains what is right and wrong in the field of clinical trials needed to determine which treatment is best. Before I review the complete book, perhaps this TED talk will explain it all:

Basically, his point is that for good answers to questions about which treatment is best to save lives, it is pivotal that the results of all trials are published. This may sound like old news, since there are databases in which trials should be registered. However, only registering the existence of a trial is not enough: all data should become known to the public. This sounds familiar: it is of course the same standpoint as that of the AllTrials.net petition, which was initiated by, among others, Ben Goldacre. For more on AllTrials.net, please see a previous post.

While reading Goldacre’s book, I came across research done in the Netherlands, where 250 students looked into adverts for medication: they checked the quality (was the science OK?) and correct use (does the trial support the claim?) of the trials referenced in medication adverts in major journals, and found that only half were of good quality and only half supported the claim. And the nice thing about this research? It was executed at our department as part of one of our undergraduate courses! All students scored trials, and a couple of students were also engaged in the analysis/writing/submission process. The paper from this research, cited by Goldacre, is available from the website of the Netherlands Journal of Medicine (pdf, open access). An earlier paper with the same concept but focussed on rheumatoid arthritis medication has also been published, also open access (pdf).

A small column in the Epistel: ‘Fraude en integriteit in de wetenschap’

The following column was written for publication in Epistel, the monthly publication of the VVE. It roughly summarises the findings of the KNAW committee Schuyt on how to handle scientific data and ensure the integrity of the data, scientists and science. It also provides a little personal view on the issue and a call for action in line with the findings of the report: each epidemiologist should read the full report and discuss it with colleagues.

The text can be downloaded here (pdf), or ‘continue reading’ below.

Continue reading “A small column in the Epistel: ‘Fraude en integriteit in de wetenschap’”