In case-control studies of screening to prevent cancer mortality, exposure is ideally defined as screening that takes place within the period before diagnosis during which the cancer is potentially detectable by the screening modality under study.
This interval has been called the detectable preclinical period (DPP).
Misspecifying the duration of the DPP can bias the results of such studies.
This article quantifies the impact of incorrectly estimating the duration of the DPP or using the correct average DPP but failing to consider its variability.
The authors developed a computer simulation model of disease incidence and mortality with and without screening.
The authors then selected cases and controls from the generated population and compared their screening histories.
The results indicate that underestimating the duration of the DPP generally introduces greater bias than overestimating it, but in both cases the extent of the bias depends on the length of the DPP relative to the average interscreening interval.
In practice, the authors recommend that to prevent a falsely low estimate of the effectiveness of a screening test in reducing mortality, a high percentile of the DPP distribution be used when analyzing the results of case-control studies of screening.
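The simulation approach described above can be illustrated with a minimal sketch. This is not the authors' model: the function names, parameters, and the regular-screening assumption are hypothetical, and the sketch shows only the core misclassification mechanism, namely that assuming a DPP shorter than the true one reclassifies some truly screen-exposed subjects as unexposed.

```python
import random


def exposed(screens, diagnosis, dpp):
    """A subject counts as exposed if any screen falls within the
    assumed detectable preclinical period (DPP) before diagnosis."""
    return any(diagnosis - dpp <= s < diagnosis for s in screens)


def misclassification_rate(n=10000, interval=2.0, true_dpp=3.0,
                           assumed_dpp=1.0, seed=1):
    """Fraction of subjects whose exposure status flips when the
    analysis uses assumed_dpp instead of true_dpp.

    Hypothetical setup: each subject is screened at regular intervals
    with a random phase, and diagnosis occurs at a random time late
    enough to be preceded by several screens.
    """
    rng = random.Random(seed)
    flipped = 0
    for _ in range(n):
        diagnosis = 10.0 + rng.random()
        phase = rng.random() * interval
        screens = [phase + k * interval for k in range(10)]
        if exposed(screens, diagnosis, true_dpp) != \
           exposed(screens, diagnosis, assumed_dpp):
            flipped += 1
    return flipped / n


if __name__ == "__main__":
    # With a 2-year screening interval, a true DPP of 3 years always
    # contains at least one screen, but an assumed DPP of 1 year
    # misses the screen about half the time.
    print(misclassification_rate())
```

Under these assumptions, roughly half the subjects are misclassified as unexposed when the DPP is underestimated from 3 years to 1 year, which dilutes the apparent protective effect of screening; using the correct (or a generously high) DPP eliminates this source of misclassification, consistent with the authors' recommendation to use a high percentile of the DPP distribution.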
Pascal keywords: Medical screening, Malignant tumor, Early stage, Duration, Mortality, Incidence, Case-control study, Mathematical model, Statistical analysis, Methodology, Human, Epidemiology, Simulation model, United States, North America, America, Preclinical stage
Record produced by:
Inist-CNRS - Institut de l'Information Scientifique et Technique
Call number: 98-0424281
Inist code: 002B30A01A1. Created: 25/01/1999.