The answer: no. For a long time there has been doubt about whether we should believe the observational data suggesting that limited alcohol use is in fact good for you. You know, the old “U-curve” association. Now, with some smart thinking from the China Kadoorie guys from China/Oxford, as well as some other methods experts, the definitive analysis has been done: a Mendelian Randomization study published recently in the Lancet.
If you wanna know what that actually does, you can read a paper I co-wrote a couple of years ago for NDT, or the version in Dutch for the NTVG. In short, the technique uses genetic variation as a proxy for the actual phenotype you are interested in. This can be a biomarker or, in this case, alcohol consumption. A large proportion of the Chinese population carries genetic variants in the genes that code for the enzymes that break down alcohol in your blood. These genetic markers are therefore a good indicator of how much you can actually drink – at least on a group level. And as in most regions in China drinking alcohol is the standard, at least for men, how much you can drink is actually a good proxy for how much you actually do drink. Analyse the risk of stroke according to this unbiased, genetically determined alcohol consumption instead of the traditional questionnaire-based alcohol consumption and voilà: no U-curve in sight –> no protective effect of drinking a little bit of alcohol.
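To make the logic concrete, here is a minimal simulation sketch of why a genotype-based analysis escapes the confounding that can distort questionnaire-based estimates. This is not from the study itself; all variable names and effect sizes are made up for illustration. It uses the simplest MR estimator, the Wald ratio:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Unmeasured confounder (say, general health behaviour) that affects both
# alcohol intake and the outcome -- this is what biases naive regression.
u = rng.normal(size=n)

# Genotype: number of low-activity alcohol-metabolising alleles (0, 1 or 2),
# assigned at conception and therefore independent of the confounder.
g = rng.binomial(2, 0.25, size=n)

# Alcohol intake depends on genotype and the confounder;
# carriers of more low-activity alleles drink less.
alcohol = 10 - 3 * g + 2 * u + rng.normal(size=n)

# Outcome: alcohol has a true harmful effect of 0.5 per unit, but the
# confounder pushes the other way and masks the harm in a naive analysis.
outcome = 0.5 * alcohol - 2.0 * u + rng.normal(size=n)

# Naive estimate: regression slope of outcome on reported alcohol (biased).
naive = np.cov(alcohol, outcome)[0, 1] / np.var(alcohol)

# Wald ratio: effect of genotype on outcome divided by
# effect of genotype on alcohol.
gy = np.cov(g, outcome)[0, 1] / np.var(g)
gx = np.cov(g, alcohol)[0, 1] / np.var(g)
mr = gy / gx

print(f"true effect: 0.5, naive: {naive:.2f}, MR (Wald ratio): {mr:.2f}")
```

The naive slope lands near zero because the confounder cancels the true harm, while the Wald ratio recovers the causal effect – the same reason the genetically determined estimate shows no protective U-curve.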
Why am I writing about that study on my own blog? I didn’t work on the research, that is for sure! No, it is because the Dutch newspaper NRC actually contacted me for some background information, which I was happy to provide. The science section of the NRC has always been one of the best in the NL, which made it quite an honor as well as an adventure to get involved like that. The journalist, SV, did an excellent job of wrapping everything we discussed in that 30-40 minute video call into just under 600 words, which you can read here (Dutch). I really learned a lot helping out and I am looking forward to doing this type of work again sometime in the future.
Together with HdH and AvHV I wrote an article for the Dutch NTVG on Mendelian Randomisation in the Methodology series, which was published online today. This is not the first time; I have written for this up-to-date series in the NTVG before (not 1 but 2 papers on the crossover design), and I have also written on Mendelian Randomisation before. In fact, that was one of the first ‘educationals’ I ever wrote. The weird thing is that I have never formally applied Mendelian Randomisation analyses in a paper. I did apply the underlying reasoning in a paper, but no two-stage least squares analyses or similar. Does this bother me? Only a bit, as I think this just shows the limited value of formal Mendelian Randomisation studies: you need a lot of power and untestable assumptions, which greatly reduces the applicability of the method in practice. However, the underlying reasoning gives a good insight into the origin and effects of confounding (and perhaps even other forms of bias) in epidemiological studies. That is why I love Mendelian Randomisation; it is just another tool in the epidemiologist’s toolbox.
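For those who have never seen one, this is roughly what a two-stage least squares analysis boils down to. A hand-rolled sketch on simulated data (all variables and effect sizes invented for the example, not taken from any real analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulated data: confounder u, genetic instrument g, exposure x, outcome y.
u = rng.normal(size=n)
g = rng.binomial(2, 0.3, size=n)
x = 1.0 * g + 1.5 * u + rng.normal(size=n)   # exposure
y = 0.8 * x - 2.0 * u + rng.normal(size=n)   # true causal effect = 0.8

def ols(design, response):
    """Least-squares coefficients (intercept, slope) for one predictor."""
    X = np.column_stack([np.ones(len(design)), design])
    return np.linalg.lstsq(X, response, rcond=None)[0]

# Stage 1: regress the exposure on the instrument, keep the fitted values.
a0, a1 = ols(g, x)
x_hat = a0 + a1 * g

# Stage 2: regress the outcome on the genotype-predicted exposure.
b0, b1 = ols(x_hat, y)
print(f"2SLS estimate: {b1:.2f}")   # close to the true effect of 0.8
```

Because only the genotype-predicted part of the exposure is used in stage 2, the confounded variation is thrown away – which is also why these analyses burn through so much statistical power.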
At the department of Clinical Epidemiology of the LUMC we have a continuous course/journal club in which we read epi-literature and books in a nice little group. The group, called Capita Selecta, has a nice website which can be found here. Some time ago we read an article that proposed to include dormant Mendelian Randomisation studies in RCTs, to figure out the causal pathways of a treatment for chronic diseases. This could be most helpful when there is a discrepancy between the expected effect and the observed effect. During the discussion of this article we did not agree with the authors, for several reasons. We (AGCB, IP and myself) decided to write an LTTE with these points. The journal was nice enough to publish our concerns, together with a response by the authors of the original article. The PDF can be found via the links below, which will take you to the website of the American Journal of Epidemiology. The PDF of our LTTE can also be found at my Mendeley profile.
Tomorrow I will teach at the graduate course ‘Design and analysis of clinical research’. My part is to introduce the concept of confounding, which I demonstrate through the general idea of ‘confusing of effects’. Perhaps a bit ‘oldskool’, but it works as a nice introduction to the concept without a direct confrontation with DAGs etc., especially since it helps to think of ways to prevent or solve this problem in data analyses. Which ‘arrow’ in the classic confounding triangle can be removed?
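The ‘removing an arrow’ idea can be shown in a small simulation (purely illustrative numbers, not course material): randomising the exposure deletes the confounder-to-exposure arrow of the triangle, and the bias disappears:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Confounding triangle: u -> exposure, u -> outcome, exposure -> outcome.
u = rng.normal(size=n)

# Observational exposure: influenced by the confounder u.
x_obs = 1.0 * u + rng.normal(size=n)
y_obs = 0.3 * x_obs + 1.0 * u + rng.normal(size=n)

# Randomised exposure: the u -> exposure arrow is removed by design.
x_rnd = rng.normal(size=n)
y_rnd = 0.3 * x_rnd + 1.0 * u + rng.normal(size=n)

slope = lambda x, y: np.cov(x, y)[0, 1] / np.var(x)
print(f"observational slope: {slope(x_obs, y_obs):.2f}")  # biased (~0.8)
print(f"randomised slope:    {slope(x_rnd, y_rnd):.2f}")  # near true 0.3
```

The same triangle also suggests the other classic fixes: restriction or stratification on u, or adjustment in the analysis, each of which neutralises the confounder’s arrows in a different way.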