Saturday, July 11, 2020

Do Masks Help Part 4: Fluid Mechanics to the rescue

"Visualizing the effectiveness of face masks in obstructing respiratory jets" by Verna et al. reports on a simulation based on a manikin simulating coughs with a recreational smoke machine. Whether or not this should be considered an acceptable simulation of coughs is not something I can assess. The report looks at several scenarios: uncovered simulated coughs and simulated coughs through a variety of masks. While the results are quite striking, I believe they are answering the wrong question. Most of the arguments I've seen for wearing masks focus on reducing the spread from asymptomatic people,  especially in reducing the droplets spread through talking. I'd hope most people these days are considerate enough to cover their masks when the cough, so how well a mask stops the spread of droplets when the person coughs is not the question we should be seeking to answer.

Thursday, July 9, 2020

Do Masks help Part 3: A study from PNAS

In "Identifying airborne transmission as the dominant route for the spread of COVID-19," Zhang et al. claim to demonstrate a big impact mask-wearing has on the spread of COVID. I'm skeptical about their results because the results are too tidy (direct link to image):


As I understand the spread of COVID, there is a delay between exposure and a positive test; an intervention that either increases or decreases the spread of COVID should take about 5 days to show up in the reported numbers of infections. In this case, the response shows up immediately, which makes me think there is something else going on.
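
As a sanity check on that intuition, here is a minimal sketch (my own illustration, not the paper's model; every number is invented) of how a roughly 5-day lag between exposure and a positive test delays the visible effect of an intervention:

```python
import numpy as np

# Hypothetical daily new infections: exponential growth that slows the day an
# intervention takes effect (all numbers here are invented for illustration).
intervention_day = 30
days = np.arange(60)
growth = np.where(days < intervention_day, 1.10, 1.02)   # daily growth factor in exposures
infections = 100 * np.cumprod(growth)

# Positive tests lag exposure by roughly 5 days (incubation plus time to get
# tested and reported), so the count reported on day d reflects exposures on day d - 5.
lag = 5
reported = np.concatenate([np.full(lag, np.nan), infections[:-lag]])

for d in (29, 30, 31, 34, 35, 36):
    print(f"day {d}: exposures grew {growth[d] - 1:.0%}, "
          f"reported cases grew {reported[d] / reported[d - 1] - 1:.0%}")
```

With the lag, an intervention on day 30 only bends the reported curve around day 35; a bend that appears immediately suggests something other than the intervention is driving the change.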

Friday, July 3, 2020

The Henry Ford study on Hydroxychloroquine

The Henry Ford Health System published a study claiming Hydroxychloroquine (HCQ) saves lives in COVID patients. I don't find the study convincing. The study was well-analyzed based on how I was trained such studies should be analyzed. However, the study is an observational study of the patients of one health system, so they need to be careful about who they measure when drawing larger conclusions. Moreover, the patients getting HCQ met certain clinical guidelines, and thus were not like the patients not getting HCQ, so they needed to have been more careful about what they measured.


Thursday, July 2, 2020

A Portuguese study claiming HCQ prevents COVID

A recent preprint, "Chronic treatment with hydroxychloroquine and SARS-CoV-2 infection" by Ferreira et al. reports on a study of data from Portugal. In Portugal, there is anonymized data available on who gets prescriptions, who is tested for COVID, and who tested positive. Ferreira et al. analyzed this data to see if the people taking hydroxychloroquine (HCQ) tested positive for COVID at a higher or lower rate than the general population. They found that the relative risk of testing positive for COVID among patients taking HCQ was about half that those that weren't.  Based on these findings, the authors suggest using HCQ as a preventative measure for preventing infections of COVID (with caveats about monitoring usage, and praising it as an inexpensive drug).

Based on the issue I wrote about before, that we need to be careful who we measure, I completely disagree with this assessment. People who were prescribed HCQ in Portugal are not representative of the general population. For starters, they have rheumatoid arthritis, lupus, or another autoimmune disease; those taking it for malaria had too small a dose to be included in the study. My assumption is that people with arthritis severe enough to require medication go out and about less, and will mix less with other people. These less active people are less likely to be exposed to COVID, leading to the reduced infection rate. Am I right? I don't know. But my story is consistent with the data and with the assumption that HCQ does nothing for COVID. Hence the study doesn't show HCQ helps, and certainly doesn't justify any changes in policy or medical practice.

Wednesday, July 1, 2020

Checking the assumptions that led to headlines reporting 8.7 million infected with COVID

Media reports, based on a Penn State University press release, have discussed "a new study [that] estimates that the number of early COVID cases in the U.S. may have been more than 80 times greater and doubled nearly twice as fast as originally believed." This has led some to conclude that "By the time governors in the U.S. forced lockdowns, COVID-19 had already extended beyond a point in which lockdowns could be effective in slowing the spread." The key piece of this chain of reasoning is the "may have been," and I believe this is a great case study in the challenges of scientific communication.

The study attempts to answer the question of how many undiagnosed cases of COVID there are in the US. The answer is based on analyzing an existing Centers for Disease Control and Prevention (CDC) database of Influenza-Like Illness (ILI), estimating how much ILI increased over previous years, and attributing an appropriate amount of the excess ILI to COVID cases. In transforming excess ILI at participating hospitals to COVID cases in the general population, a number of assumptions are necessary; I think the answer is highly dependent on these assumptions. I am very pleased that the authors of the study properly caveat their conclusions and call for the appropriate follow-on studies. They are also careful to summarize their major assumptions and highlight most of them. However, I believe a number of their assumptions are flawed, and as a result, they run the risk of greatly overestimating the true number of total cases. The only way to tell the actual number of early COVID cases in the U.S. is through further testing, as suggested by the authors, because these numbers are, after all, extrapolated from multiple assumptions.
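
To see how strongly the answer depends on those assumptions, here is a stripped-down sketch of the kind of extrapolation involved (my own illustration with invented numbers, not the paper's actual model or data): observed excess ILI visits get scaled up by an assumed care-seeking rate and an assumed fraction of infections that produce symptoms, and modest changes in those assumptions swing the estimate by millions of cases.

```python
# A stripped-down version of an "excess ILI -> total infections" extrapolation.
# All numbers are invented to show sensitivity to assumptions; these are not the paper's values.
excess_ili_visits = 1_000_000   # excess influenza-like-illness visits attributed to COVID

def estimated_infections(care_seeking_rate, symptomatic_fraction):
    """Scale observed excess visits up to total infections in the population."""
    symptomatic_infections = excess_ili_visits / care_seeking_rate
    return symptomatic_infections / symptomatic_fraction

for care_seeking_rate in (0.15, 0.30, 0.45):
    for symptomatic_fraction in (0.5, 0.8):
        n = estimated_infections(care_seeking_rate, symptomatic_fraction)
        print(f"seek care {care_seeking_rate:.0%}, symptomatic {symptomatic_fraction:.0%}: "
              f"{n / 1e6:.1f} million infections")
```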

What this study is not is definitive proof that the shutdowns were a case of locking the proverbial barn door after the horse had bolted.

Wednesday, June 24, 2020

Covid risk and blood type O

In a recent blog post, 23andMe reported findings that people with type O blood had a lower rate of COVID-19 diagnoses than others. The study looked at 23andMe customers and volunteers who had been diagnosed with COVID-19, and compared them to customers without a COVID-19 diagnosis. The genetic data showed that the genotype for type O blood is associated with 88% of the chance of a COVID-19 diagnosis compared to people with other blood types. This finding echoes several previous studies.

Do masks help part II: A study claiming a natural experiment in Germany

In "Face Masks Considerably Reduce COVID-19 Cases in Germany: A Synthetic Control Method Approach," Mitze et al. claim that by using synthetic controls, the effect of mandatory face mask use can be measured. They estimate that masks reduce the rate of reported infections by around 40%.
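
For readers unfamiliar with the method: a synthetic control is a weighted combination of untreated regions chosen to track the treated region before the intervention; the post-intervention gap between the real region and its synthetic twin is then read as the treatment effect. A minimal sketch of the idea, using invented data rather than the authors' code or data:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Invented daily case counts for 5 "donor" regions over 30 days (not real data).
donors = rng.poisson(lam=50, size=(30, 5)).astype(float)
true_mix = np.array([0.4, 0.3, 0.1, 0.1, 0.1])
treated = donors @ true_mix
treated[20:] *= 0.6          # pretend a mandate on day 20 cut cases by 40%

pre = slice(0, 20)           # pre-intervention period used to fit the weights

# Find non-negative donor weights summing to 1 that best track the treated
# region before the intervention.
def pre_period_error(w):
    return np.sum((treated[pre] - donors[pre] @ w) ** 2)

res = minimize(pre_period_error, x0=np.full(5, 0.2), bounds=[(0, 1)] * 5,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})

synthetic = donors @ res.x
post = slice(20, 30)
effect = treated[post].mean() / synthetic[post].mean() - 1
print(f"estimated change in cases after the mandate: {effect:.0%}")   # roughly -40%
```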

I don't find this paper convincing.


Tuesday, June 23, 2020

Do masks help part I: Methodological Limitations

One of the big themes in the popular news on COVID research these days is studies that report on how much masks help reduce the spread of COVID. These studies generally have some serious methodological limitations; I have yet to see one that can distinguish the hypothesis that masks limit the spread of COVID by physically blocking droplets from the hypothesis that people wearing masks are more mindful of other restrictions, and thus reduce the spread of COVID through better adherence to social distancing and hand-washing guidelines.

This is the start of a series of posts discussing some of these studies. This post will give an overview of the different types of evidence typically used to argue this point; specific studies will be addressed in future posts.


Comparing reported rates of COVID across communities is hard

On June 10th, there were 2,000,000 COVID cases in the US, with 113,000 deaths. In California, there were about 130,000 cases, with 4,600 deaths. What can we conclude from this? California has slightly over 10% of the US population; does this mean it has a lower infection rate than the country overall (even after removing New York and New Jersey’s infections)? Is its 3.5% fatality rate evidence of better medical care than average for the US with a 5.6% fatality rate? Or the world’s 5.7% fatality rate?
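
The percentages quoted above are simply deaths divided by confirmed cases; using the figures in this post (plus rough 2020 population numbers I'm adding), the arithmetic is:

```python
# Case-fatality arithmetic using the figures quoted above (June 10 reports).
us_cases, us_deaths = 2_000_000, 113_000
ca_cases, ca_deaths = 130_000, 4_600
us_pop, ca_pop = 330e6, 39.5e6   # rough 2020 populations (my figures, not from the sources above)

print(f"US case-fatality rate: {us_deaths / us_cases:.2%}")          # 5.65%
print(f"CA case-fatality rate: {ca_deaths / ca_cases:.2%}")          # 3.54%
print(f"CA share of US population: {ca_pop / us_pop:.1%}")           # ~12%
print(f"CA share of US confirmed cases: {ca_cases / us_cases:.1%}")  # 6.5%
```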

We can conclude nothing.

Friday, June 19, 2020

Be careful who you measure

One of the challenges with the observational studies conducted on COVID-19 is that the studied population may differ from the larger population, limiting the validity of conclusions drawn for setting policy for the general population. Beyond that, many of the studies either take advantage of differences that may have unusual or undocumented causes, or attempt to match patients to other, similar people. However, differences that were not measured or used in the matching may still exist and can affect the conclusions.


Thursday, June 18, 2020

Be careful what you measure



When evaluating studies, a lot of subtle mistakes can undercut the conclusions authors claim. Most health studies fall into one of two categories. Randomized clinical trials occur when people meeting some criteria are randomized into treatment and control groups; this is widely considered the gold standard of clinical science. However, it is not always practical or ethical to perform randomized clinical trials, so instead a lot of science is done through observational studies. Because the groups being compared are not otherwise identical, measurements are taken, and scientists use models to correct for these differences. Unfortunately, what is measured can be a proxy for what is actually causing the effect of interest, so we need to be careful when we observe a difference between two groups based on an observable characteristic. What we base our group separation on may not be causing the differences observed!

Put another way, correlation does not imply causation, however much we wish it did. One example many people find amusing is that ice cream sales are correlated with murder rates: both go up in the summer! So even though we can estimate murder rates based on ice cream sales, reducing ice cream sales will, I believe, have no discernible impact on murder rates.
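
A quick simulation makes the same point: when a lurking variable (here, temperature) drives both quantities, they end up strongly correlated even though neither causes the other. All numbers below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# A lurking variable (daily temperature) drives both quantities; neither
# causes the other. All numbers are invented.
temperature = rng.uniform(0, 35, size=365)                         # degrees C over a year
ice_cream_sales = 200 + 30 * temperature + rng.normal(0, 50, 365)
murders = 2 + 0.1 * temperature + rng.normal(0, 1, 365)

r = np.corrcoef(ice_cream_sales, murders)[0, 1]
print(f"correlation between ice cream sales and murders: {r:.2f}")  # strongly positive
```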

Wednesday, June 17, 2020

Addendum to Cox Proportional Hazard Model results may not mean what you think they mean

A recent paper in the New England Journal of Medicine, "Remdesivir for the Treatment of Covid-19 — Preliminary Report," describes a double-blinded clinical trial that was ended early due to beneficial effects. It reports a median time to recovery of 11 days for those treated, as opposed to 15 days for those in the control group, and there is a VERY low chance of this difference being due to chance. The difference in 14-day mortality rates, 7% for the treatment group versus 12% for the control group, was not statistically significant. If we conclude that Remdesivir has no effect on survival rates based on this non-significant difference (not a conclusion I'd support), then this drug appears to behave like the hypothetical drug of the previous post on the issues with the Cox Proportional Hazards model. Thankfully, this was not how they analyzed the relevant data.


Disclosure: I know, and have written papers with, people involved with the study.

Caution: Cox Proportional Hazards Model results may not mean what you think they mean



The current statistical methods for identifying specific risk factors and possible beneficial treatments for Covid-19 are asking the wrong question. Many, if not most, of these studies use analysis of patient hospital records as their primary data source. The standard statistical analysis asks at what rate people die in the hospital under different conditions. Unfortunately, under a simple scenario, this analysis can conclude that a beneficial medicine, one that also reduces the length of hospital stays of patients who will recover, is harmful.
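
Here is a minimal simulation of that scenario (my own illustration with invented numbers, assuming the lifelines package is available): in both arms 10% of patients die, on exactly the same schedule, and the drug's only effect is to send survivors home sooner. Because survivors in the treated arm leave the risk set early, a Cox model of time to in-hospital death, with discharge treated as censoring, makes the drug look harmful:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 1000  # patients per arm

def simulate_arm(treated):
    # In both arms, 10% of patients die, on the same schedule: the drug has
    # no effect on mortality. Its only effect is earlier discharge of survivors.
    dies = rng.random(n) < 0.10
    death_time = rng.uniform(5, 15, n)
    discharge_time = rng.uniform(6, 10, n) if treated else rng.uniform(12, 18, n)
    time = np.where(dies, death_time, discharge_time)
    return pd.DataFrame({"time": time, "died": dies.astype(int), "treated": int(treated)})

df = pd.concat([simulate_arm(False), simulate_arm(True)], ignore_index=True)
print(df.groupby("treated")["died"].mean())   # ~0.10 in both arms

# Cox model of time to in-hospital death, treating discharge as censoring.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="died")
print(cph.hazard_ratios_)   # hazard ratio for "treated" comes out well above 1
```

The model isn't computing anything incorrectly; it is answering "who dies faster while still in the hospital," which is not the same question as "does the drug save lives."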