In 2010, the National Research Council of the National Academies published a report titled "The Prevention and Treatment of Missing Data in Clinical Trials." Acknowledging that there is a problem is a great first step, but the report fell somewhat short.
In any longitudinal study (a study that follows subjects over time), such as a randomized controlled trial measuring a cancer treatment's effect on survival, there is going to be attrition. Over time, people will leave the study for many different reasons. Some of those reasons have no effect on statistical inference; for example, a patient or a patient's spouse gets a work transfer to a location without access to a study center. Other reasons can have a large impact on statistical inference; for example, a patient may leave the study simply because they feel the treatment is not working. It is this second kind of dropout that is associated with "attrition bias."
The report makes a number of very good points. It offers some relatively simple, easy-to-implement suggestions for adjusting trial design to reduce or better account for attrition bias. It makes clear that if the attrition is "non-random," then any assumptions the researcher or statistician makes about how the data are missing cannot be tested or verified. I was also pleasantly surprised to see the report discuss a number of ideas developed in economics, including "Heckman selection" models, instrumental variables, and local average treatment effects.
Even so, there were two recommendations I didn't see, but would have liked to:
1. Present bounds. Econometrician Charles Manski and biostatistician James Robins (in his paper on the treatment effect of AZT on AIDS patients) introduced the idea of bounding the average treatment effect when outcomes are "missing not at random" in the late 1980s. It would have been nice to see this idea mentioned as a possible solution (a sketch of the idea follows this list).
2. Discuss the implications. If there is concern about bias, the researchers should raise that concern and discuss its implications for their results and their policy recommendations.
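To make the bounding idea concrete, here is a minimal sketch of Manski-style worst-case bounds on the average treatment effect. The numbers, sample sizes, and dropout rates are all hypothetical, and it assumes a binary outcome (say, one-year survival) bounded between 0 and 1, which is what makes the bounds finite; this is an illustration of the general technique, not a reproduction of Manski's or Robins's actual calculations.

```python
# Hypothetical illustration of worst-case (Manski-style) bounds on the
# average treatment effect (ATE) when outcomes are missing not at random.
# The outcome is assumed bounded in [0, 1], e.g., one-year survival.

import numpy as np

rng = np.random.default_rng(0)

def arm_bounds(y_observed, n_total, y_min=0.0, y_max=1.0):
    """Bounds on E[Y | arm] when only some outcomes are observed.

    Lower bound: every dropout is imputed at y_min (worst case).
    Upper bound: every dropout is imputed at y_max (best case).
    """
    p_obs = len(y_observed) / n_total   # share of the arm with observed outcomes
    mean_obs = y_observed.mean()
    lower = p_obs * mean_obs + (1 - p_obs) * y_min
    upper = p_obs * mean_obs + (1 - p_obs) * y_max
    return lower, upper

# Hypothetical trial: 200 patients per arm, with nonrandom dropout.
n = 200
y_treat = rng.binomial(1, 0.60, size=150)    # 50 treated patients dropped out
y_control = rng.binomial(1, 0.50, size=180)  # 20 controls dropped out

lo_t, hi_t = arm_bounds(y_treat, n)
lo_c, hi_c = arm_bounds(y_control, n)

# ATE bounds: pit the worst case for the treated arm against the best
# case for the control arm, and vice versa.
ate_lower = lo_t - hi_c
ate_upper = hi_t - lo_c
print(f"ATE bounds: [{ate_lower:.3f}, {ate_upper:.3f}]")
```

The width of the interval is driven entirely by the dropout rate in each arm, with no untestable assumptions about why patients left. In this hypothetical, the 25 percent attrition in the treated arm makes the interval wide enough to span zero, which is precisely why attrition deserves explicit discussion rather than a silent complete-case analysis.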