Increasing efficiency of preclinical research by group sequential designs: a new paper in PLOS Biology

We have another paper published in PLOS Biology. The theme is in the same area as the first paper I published in that journal, which had the wonderful title “Where have all the rodents gone”, but this time we did not focus on threats to internal validity; instead, we explored whether sequential study designs can be useful in preclinical research.

Sequential designs, what are those? It is a family of study designs (perhaps you could call it the “adaptive study size design” family) in which one takes a quick peek at the results before the total number of subjects is enrolled. But this peek comes at a cost: it has to be taken into account in the statistical analyses, as it has direct consequences for the interpretation of the final result of the experiment. The bottom line is this: with the information you get halfway through, you can decide to continue with the experiment or to stop, either for efficacy or for futility. If this sounds familiar to those who know interim analyses in clinical trials, that is because it is the same concept. However, we explored its impact when applied to animal experiments.
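To make this concrete, here is a minimal sketch in Python (my own illustration, not the simulation code from our paper) of a two-stage design with one interim look. The Pocock-style efficacy boundary of 0.0294 for two looks and the futility threshold of 0.5 are illustrative assumptions, as are the group sizes and effect size:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def two_stage_trial(effect, n_per_stage=10, eff_bound=0.0294, fut_bound=0.5):
    """One two-stage experiment with an interim look after the first stage.

    eff_bound: Pocock-style efficacy boundary for two looks (illustrative).
    fut_bound: stop for futility if the interim p-value exceeds this value.
    Returns (significant, animals_used_per_group).
    """
    ctrl = rng.normal(0.0, 1.0, n_per_stage)
    trt = rng.normal(effect, 1.0, n_per_stage)
    p_interim = stats.ttest_ind(trt, ctrl).pvalue
    if p_interim < eff_bound:          # stop early: efficacy shown
        return True, n_per_stage
    if p_interim > fut_bound:          # stop early: futility
        return False, n_per_stage
    # otherwise enroll the second stage and test the pooled data
    ctrl = np.concatenate([ctrl, rng.normal(0.0, 1.0, n_per_stage)])
    trt = np.concatenate([trt, rng.normal(effect, 1.0, n_per_stage)])
    return stats.ttest_ind(trt, ctrl).pvalue < eff_bound, 2 * n_per_stage

runs = [two_stage_trial(effect=1.0) for _ in range(10_000)]
print(f"average animals per group: {np.mean([n for _, n in runs]):.1f} (fixed design: 20)")
print(f"proportion significant:    {np.mean([sig for sig, _ in runs]):.2f}")
```

With a real effect, many runs stop at the interim look, and that is exactly where the average saving in animals comes from.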

Figure from our publication in PLOS Biology describing sequential study designs in our computer simulations

“Old wine in new bottles”, one might say, and some of the reviewers of this paper rightfully pointed out that it was not novel in showing that sequential designs are more efficient than non-sequential designs. But that is not where the novelty lies. Until now, we have not seen people applying this approach to preclinical research in a formal way. However, our experience is that a lot of preclinical studies are done with some kind of informal sequential aspect. No p<0.05? Just add another mouse/cell culture/synapse/MRI scan to the mix! The problem here is that there is no formal framework in which this is done, leading to cherry picking, p-hacking, and other nasty stuff that you cannot grasp from the methods and results sections.
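How bad is that informal habit? A minimal sketch of the mechanism (again my illustration, not an analysis from the paper): under the null hypothesis, testing after every added animal and stopping at the first p < 0.05 inflates the false-positive rate well beyond the nominal 5%. The starting and maximum group sizes are assumptions for the example:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def informal_sequential(n_start=5, n_max=20, alpha=0.05):
    """Add one animal per group at a time, testing after each addition.

    Both groups are drawn from the same distribution, so any "significant"
    result is a false positive.
    """
    ctrl = list(rng.normal(0.0, 1.0, n_start))
    trt = list(rng.normal(0.0, 1.0, n_start))
    while True:
        if stats.ttest_ind(trt, ctrl).pvalue < alpha:
            return True                  # "just one more mouse" paid off
        if len(trt) >= n_max:
            return False                 # give up at the maximum group size
        ctrl.append(rng.normal(0.0, 1.0))
        trt.append(rng.normal(0.0, 1.0))

rate = np.mean([informal_sequential() for _ in range(5_000)])
print(f"false-positive rate under the null: {rate:.2f} (nominal: 0.05)")
```

A formal group sequential design controls this by testing against adjusted boundaries at pre-planned looks; the informal version controls nothing.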

Should all preclinical studies from now on have sequential designs? My guess would be NO, and there are two major reasons why. First of all, sequential data analyses have their idiosyncrasies and might not be for everyone. Second, the logistics of sequential study designs are complex, especially if you are afraid to introduce batch effects. We only wanted to show preclinical researchers that the sequential approach has its benefits: the same information at, on average, lower cost. If you translate “costs” into animals, the obvious conclusion is: apply sequential designs where you can, and the decrease in animals can be “re-invested” in more animals per study to obtain higher power in preclinical research. But I hope that the side effect of this paper (or perhaps its main effect!) will be that readers just think about their current practices and whether these involve those ‘informal sequential designs’ that really hurt science.

The paper, this time with a less exotic title, “Increasing efficiency of preclinical research by group sequential designs”, can be found on the website of PLOS Biology.

Where Have All the Rodents Gone? The Effects of Attrition in Experimental Research on Cancer and Stroke


source: journals.plos.org/plosbiology

We published a new article in PLOS Biology today, with the title:

“Where Have All the Rodents Gone? The Effects of Attrition in Experimental Research on Cancer and Stroke”

This is a wonderful collaboration between three fields: statistics, epidemiology, and laboratory research. Together we took a look at what is called attrition in preclinical labs, that is, the loss of data in animal experiments. This could be because the animal died before the needed data could be obtained, or just because a measurement failed. This loss of data can be translated to the concept of loss to follow-up in epidemiological cohort studies, and from that field we know that it can lead to a substantial loss of statistical power and perhaps even bias.

But it was unknown to what extent this is also a problem in preclinical research, so we did two things. We looked at how often papers indicated there was attrition (with an alarming number of papers not providing the data needed to establish whether there was attrition), and we ran simulations of what happens when there is attrition under various scenarios. The results paint a clear picture: both the loss of power and the bias are substantial. Their magnitude of course depends on the attrition scenario, but the message of the paper is clear: we should be aware of the problems that come with attrition, and reporting on attrition is the first step in minimising this problem.
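To illustrate the mechanism (once more my own sketch, not the simulation code from the paper): random attrition mainly costs power, whereas outcome-dependent attrition, say the sickest treated animals dying before measurement, also biases the effect estimate. The group sizes, effect size, and loss of three animals are assumptions for the example:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
N, EFFECT, DROP = 10, 1.0, 3   # animals per group, true effect, animals lost

def one_experiment(attrition):
    ctrl = rng.normal(0.0, 1.0, N)
    trt = rng.normal(EFFECT, 1.0, N)
    if attrition == "random":        # e.g. measurements that simply failed
        trt = rng.choice(trt, N - DROP, replace=False)
    elif attrition == "worst-die":   # the sickest treated animals are lost
        trt = np.sort(trt)[DROP:]
    estimate = trt.mean() - ctrl.mean()
    significant = stats.ttest_ind(trt, ctrl).pvalue < 0.05
    return estimate, significant

for scenario in ("none", "random", "worst-die"):
    runs = [one_experiment(scenario) for _ in range(10_000)]
    est = np.mean([e for e, _ in runs])
    power = np.mean([s for _, s in runs])
    print(f"{scenario:9s}  mean estimate {est:+.2f} (truth {EFFECT:+.2f}), power {power:.2f}")
```

Random attrition leaves the estimate unbiased but lowers power; in the “worst-die” scenario the estimate is inflated even though the true effect never changed.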

A nice thing about this paper is that it coincides with the start of a new research section in the PLOS galaxy, called “meta-research”: a collection of papers that all focus on how science works, behaves, and can or even should be improved. I can only welcome this, as more projects on this topic are in our pipeline!

The article can be found on PubMed and my Mendeley profile.

Update 6.1.16: WOW, what media attention for this one. Interviews with outlets from the UK, US, Germany, Switzerland, Argentina, France, Australia, etc., German radio, the Dutch Volkskrant, and a video on focus.de. More via the corresponding altmetrics page. Also interesting is the post by UD, the lead on this project and chief of the CSB, on his own blog: “To infinity, and beyond!”