Today I read a long read from the Onderzoeksredactie, a Dutch initiative for high-quality investigative journalism. In this article they present the results of their research into the conflicts of interest (COI) of professors in the Netherlands. They were very thorough: they published a summary in article form, but also made sure that all methodological choices, the questionnaire they used, the results etc. are available for further scrutiny by the reader. It is a shame, though, that the complete dataset is not available for further analyses (which characteristics predict that some professors do not disclose their COI?).
The results are, although unpleasant to realise, not new. At least not to me. I can imagine that for most people the idea of a professor with a COI is indeed a rarity, but working in academia I have seen enough cases to know that this is not so. The article was thorough in its analysis: it is not just that professors want to get rich; the concept of the professor as an entrepreneur is even supported by the Dutch government. Recent changes in the funding structure of research mean that 'valorisation', spin-offs and collaboration with industry partners are promoted, all to further enlarge the 'societal impact' of science. These changes might indeed encourage such conflicts, but I think that the academic freedom researchers have should never be the victim.
A new publication became available, again an 'educational'. This time, however, the topic is new: the application of directed acyclic graphs (DAGs), a technique widely used in different areas of science. Ranging from computer science and mathematics to psychology, economics and epidemiology, this specific type of graph has proven useful for describing the underlying causal structure of mechanisms of interest. This comes in very handy, since it can help to determine the sources of confounding for a specific epidemiological research question.
But isn't that what epidemiologists do all the time? What is new about these graphs, apart from fancy concepts such as colliders, edges, and backdoor paths? Well, the idea behind DAGs is not new; diagrams have been used in epidemiology for years, but each epidemiologist has his or her own way of drawing the relationships between the various variables. Did you ever get stuck in a discussion about whether something is a confounder or not? If you cannot resolve it by talking, you might want to draw out your point of view in a diagram, only to discover that your colleague is used to a different way of drawing epidemiological diagrams. DAGs resolve this. There is a clear set of rules that each DAG should comply with, and if it does, the DAG provides a clear overview of the sources of confounding and identifies the minimal set of variables to adjust for to account for all confounding present.
So that's it: DAGs are a nifty method for speaking the same idiom while discussing the causal questions you want to resolve. The only thing you and your colleague can now fight over is the validity of the assumptions encoded in the DAG you just drew. And that is called good science!
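To make the backdoor idea concrete, here is a toy simulation (my own illustration, not from the educational): in the DAG C → X, C → Y, with no arrow from X to Y, the path X ← C → Y is a backdoor path. The crude X–Y contrast is then biased, while stratifying on C (the minimal adjustment set) blocks the path and recovers the true null effect. All variable names and effect sizes are invented for the example.

```python
import random

random.seed(42)

# Hypothetical DAG: C -> X, C -> Y, and NO effect of X on Y.
# C is a binary confounder; the backdoor path X <- C -> Y biases the crude estimate.
n = 50_000
data = []
for _ in range(n):
    c = random.random() < 0.5
    x = random.random() < (0.8 if c else 0.2)   # C raises the probability of X
    y = random.gauss(2.0 if c else 0.0, 1.0)    # C raises Y; X itself does nothing
    data.append((c, x, y))

def mean_y(rows):
    return sum(y for _, _, y in rows) / len(rows)

# Crude contrast: confounded via the open backdoor path
crude = (mean_y([r for r in data if r[1]])
         - mean_y([r for r in data if not r[1]]))

# Adjusted contrast: stratify on C (blocks the backdoor path), then average
strata = []
for c_val in (False, True):
    s = [r for r in data if r[0] == c_val]
    strata.append(mean_y([r for r in s if r[1]])
                  - mean_y([r for r in s if not r[1]]))
adjusted = sum(strata) / len(strata)

print(f"crude: {crude:.2f}, adjusted: {adjusted:.2f}")
```

The crude estimate comes out strongly positive even though X has no effect, while the C-stratified estimate is close to zero, which is exactly what reading the minimal adjustment set {C} off the DAG predicts.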
Together with HdH and AvHV I wrote an article for the Dutch NTVG on Mendelian randomisation in the Methodology series, which was published online today. This is not the first time: I have written for this up-to-date series before (not one but two papers on the crossover design), and I have also written on Mendelian randomisation before. In fact, that was one of the first 'educationals' I ever wrote. The strange thing is that I have never formally applied Mendelian randomisation in a paper. I did apply the underlying reasoning in a paper, but no two-stage least squares analyses or anything similar. Does this bother me? Only a bit, because I think it just shows the limited value of formal Mendelian randomisation studies: you need a lot of power, and the untestable assumptions greatly reduce the applicability of the method in practice. However, the underlying reasoning gives good insight into the origin and effects of confounding (and perhaps even other forms of bias) in epidemiological studies. That is why I love Mendelian randomisation: it is just another tool in the epidemiologist's toolbox.
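For readers unfamiliar with two-stage least squares, a toy simulation may help (again my own sketch, not from the NTVG article; genotype effect, confounding strength and the true effect of 0.3 are all invented). A genotype G serves as instrument: it affects the exposure X but, by assumption, affects the outcome Y only through X and is independent of the confounder U.

```python
import random

random.seed(1)

# Hypothetical Mendelian randomisation set-up:
# G: allele count (instrument), U: unmeasured confounder, X: exposure, Y: outcome
n = 50_000
G = [random.choice((0, 1, 2)) for _ in range(n)]
U = [random.gauss(0, 1) for _ in range(n)]
X = [0.5 * g + 1.0 * u + random.gauss(0, 1) for g, u in zip(G, U)]
true_beta = 0.3
Y = [true_beta * x + 1.0 * u + random.gauss(0, 1) for x, u in zip(X, U)]

def ols_slope(x, y):
    """Simple-regression slope: cov(x, y) / var(x)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# Naive regression of Y on X is biased upwards by the confounder U
naive = ols_slope(X, Y)

# Two-stage least squares: regress X on G, then Y on the fitted values.
# With a single instrument this equals the Wald ratio cov(G,Y)/cov(G,X).
stage1 = ols_slope(G, X)
x_hat = [stage1 * g for g in G]
tsls = ols_slope(x_hat, Y)

print(f"naive: {naive:.2f}, 2SLS: {tsls:.2f} (true effect {true_beta})")
```

The naive estimate is pulled well above the true effect by the shared confounder, while the 2SLS estimate lands near 0.3. The example also hints at why power is such an issue: the second stage only uses the small part of the variation in X that the genotype explains.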
I worked together with some partners on a new workshop for young epidemiologists. The title says it all: WEON preconference workshop 'crash course peer review'.
Unfortunately, we had to cancel the workshop because the number of participants was too low to justify the effort of not only myself, but especially all the other teachers. It is a pity that we had to cancel, but by cancelling we keep a fresh start for whenever we want to try again in a different format.
While preparing this workshop I noticed that peer review, or refereeing, which would be a better term, is not popular. It is seen as a task that takes up too much time, carries too many political consequences and offers little reward. New initiatives like PubMed Commons and other post-publication peer review systems are regarded by some as answers to some of these problems. But what is the future of refereeing when young epidemiologists are not intrinsically motivated to contribute time and effort to the publication process? Only time will tell.
For those who are still interested in this crash course, please contact me via email.
Research in the media. It is not my own research, but these two newspaper articles are related to my research. The first article (pdf) is on the role of helmets for scooter riders. It is linked to the publication on the risks related to motorised two-wheel vehicle crashes (click here for the PubMed entry).
The second article, from the same edition of the NRC, is related to the topic of my thesis. It is about the role of FXII in thrombosis, based on a publication by Thomas Renne et al. in Science Translational Medicine. Antibodies against FXII downregulate pathological thrombogenesis during extracorporeal circulation. These antibodies might be used in the prevention of clots during heart-lung surgery, but might also be applied in the prevention of thrombosis, both arterial and venous. Click here (pdf) for the NRC newspaper article, and here for the original research by Renne et al.
I have been a fond reader of Retraction Watch for over a year now. It is quite interesting to read the reports of how science corrects its own mistakes. Sometimes it is just plain old fraud, such as the case of Stapel, but also other Dutch researchers. Sometimes, however, the stories behind the retractions show that 'legitimate mistakes' can also lead to a retraction, for example this retraction from Genes and Development in which "it's quite clear there isn't even a whiff of misconduct or fraud". Please check out the Retraction Watch blog, or read an interview with one of its founders which appeared in de Volkskrant.