Friday, March 21, 2014

Selection-Into-Study (SIS) Bias

Two friends of mine are currently looking for a clinical trial to participate in.  Unfortunately, they are undertaking this search because the current standard of care in metastatic colon cancer is not working for them.  Ironically, if and when they find a randomized controlled trial, one of the treatment arms is likely to be that same standard of care.

Who exactly participates in randomized controlled trials for cancer?  Why do patients choose to seek out a trial?  Which trial do they choose?

[Figure: patient participation rates in AIDS clinical trials, by drug type, before and after the introduction of HAART]
In an unpublished working paper, Anup Malani and Tomas Philipson present compelling evidence that the introduction of the AIDS drug cocktail (HAART) in 1996 dramatically affected the willingness of AIDS patients to participate in clinical trials.  In the lead-up to the release of HAART onto the market, participation in clinical trials rose to an amazing 30%; in the ten years after the introduction of HAART, however, participation fell back to 5%, about half of what it had been in 1990.

If patients are spending time and energy selecting into a particular study, then that study may suffer from selection-into-study bias.  Following in the tradition of the great Art Goldberger, I believe that if readers are to take an issue seriously, then that issue needs an exotic polysyllabic name.  While it is nowhere near as clever or as funny as Goldberger's micronumerosity, it is hoped that with the help of the double-hyphenated (and acronym-enhanced) selection-into-study (SIS) bias, we may make progress on a serious issue in randomized cancer trials.

To highlight the concern with SIS bias, consider one of the most famous uses of randomized controlled trials in economics, the Lalonde study.  The study recruited people into a training program and then randomized those recruits into two trial arms.  One group received training and the other did not.  The researchers collected wages for each participant both before and after the training program.  Lalonde then compared the results from the two trial arms to a sample of similar people who did not participate in the program but who did participate in a large, nationally representative survey of wages over the same period.

If the group of people who decided to participate in the study differed substantially from the general population, then the results of the Lalonde study would be biased, making them difficult to interpret.  That said, the study does provide a possible test for SIS bias.  If those who participated in the study believed that their incomes were likely to be higher with training than without, then we should see that in the data.  In fact, those who received training had a greater probability of a larger income increase than those who were randomized into the arm that did not receive training.  Of course, this result is perfectly consistent with the fact that training increases income.  But what if training only increases income for those who believed it would?  Unfortunately, there is no easy way to determine whether we are observing an average effect for the population or a selection-into-study effect.
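To see how the two stories can be indistinguishable from inside the study, here is a small illustrative simulation (every number in it is invented, not taken from the Lalonde data): a world in which training raises income only for people who believe it will, and only believers bother to enroll.  The randomized comparison inside the study is internally valid, yet its estimate badly overstates the average effect in the population.

```python
import random

random.seed(0)

# Invented population: 20% believe training will help them, and training
# raises income only for those believers (both assumptions are illustrative).
N = 100_000
population = []
for _ in range(N):
    believes = random.random() < 0.2
    gain = 5_000 if believes else 0
    population.append((believes, gain))

# Selection-into-study: only believers seek out the trial.
enrolled = [p for p in population if p[0]]

# Randomize enrollees into treatment and control arms.
random.shuffle(enrolled)
half = len(enrolled) // 2
treated, control = enrolled[:half], enrolled[half:]

base = 20_000  # baseline income, identical across arms (assumed)
avg_treated = base + sum(g for _, g in treated) / len(treated)
avg_control = base  # the control arm receives no training
study_effect = avg_treated - avg_control

# True average effect if the whole population were trained:
population_effect = sum(g for _, g in population) / N

print(f"effect estimated in the study:  {study_effect:,.0f}")
print(f"average effect in population:   {population_effect:,.0f}")
```

The randomization is doing its job, so nothing inside the study reveals the problem: the bias comes entirely from who chose to show up.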

There may be another way to test for SIS bias: a researcher could use the distance travelled by study participants.  The researcher can split the study population between those who travelled a long distance and those who did not, and test whether the "treatment effect" is larger for those who travelled farther.  If it is, then the study may suffer from SIS bias.  We can think of distance as a "cost" of participating in the trial.  Patients who travel farther give up more in order to participate and may believe that they will have better outcomes than under the standard of care.  Note that this test is only valid if it is reasonable to assume that treatment outcomes are not otherwise associated with patient distance from the trial center.
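A rough sketch of what this distance test could look like, assuming the researcher has a record of distance travelled, arm assignment, and outcome for each participant (the records below and the median split rule are purely hypothetical):

```python
import statistics

# Hypothetical trial records: (distance_km, treated, outcome).
# All values are made up for illustration.
records = [
    (5, 1, 12.0), (8, 0, 10.0), (12, 1, 11.5), (15, 0, 10.2),
    (40, 1, 14.0), (55, 0, 10.1), (80, 1, 15.2), (120, 0, 9.8),
]

# Split the sample at the median distance travelled.
median_dist = statistics.median(d for d, _, _ in records)
near = [r for r in records if r[0] <= median_dist]
far = [r for r in records if r[0] > median_dist]

def arm_effect(rows):
    """Difference in mean outcome, treated arm minus control arm."""
    treated = [y for _, t, y in rows if t == 1]
    control = [y for _, t, y in rows if t == 0]
    return statistics.mean(treated) - statistics.mean(control)

print("effect among near patients:", round(arm_effect(near), 2))
print("effect among far patients: ", round(arm_effect(far), 2))
```

A markedly larger estimated effect among the far travellers would be the warning sign: it suggests the people who paid the most to enroll are also the ones driving the result.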
