To test whether conventional data reliability assessment overestimates reliability, the reliability of complex quality indicators was assessed and compared with that of their simpler components.
Medical records of 1078 Medicare cases with principal diagnoses of initial episodes of acute myocardial infarction (AMI) were independently reabstracted at two national Clinical Data Abstraction Centers (CDACs).
Interrater agreement beyond chance (kappa) between reabstracted and original quality indicators and their key components was computed and compared.
Results showed excellent agreement (kappas ranging from 0.88 to 0.95) for simple determinations of whether standard medical therapies were provided.
Repeatability of eligibility status and of the more complex determinations of whether "ideal" candidates were not treated showed moderate to excellent kappa values ranging from 0.41 to 0.79.
A planned comparison of five similar quality indicators and their key components showed that the simpler treatment components, as a group, had significantly higher kappas than the more complexly derived eligibility components and composite indicators (Fisher's exact test, p<0.02).
Reliability assessment of quality indicators should be based upon the repeatability of the whole indicator, accounting for both data and logic, and not just one simple element.
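The agreement statistic reported throughout this abstract is kappa, which corrects raw interrater agreement for the agreement expected by chance. A minimal sketch of Cohen's kappa for two raters is shown below; this is not the study's code, and the rating vectors in the usage example are illustrative only.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters over the same set of cases.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement derived from each rater's marginal
    category frequencies.
    """
    if len(ratings_a) != len(ratings_b) or not ratings_a:
        raise ValueError("ratings must be non-empty and equal length")
    n = len(ratings_a)
    # Observed proportion of cases on which the raters agree.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from the product of marginal frequencies.
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(counts_a[k] * counts_b[k]
              for k in set(counts_a) | set(counts_b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Illustrative data: two abstractors coding treatment as given (1) or not (0).
rater1 = [1, 1, 1, 1, 0, 0, 0, 0, 1, 1]
rater2 = [1, 1, 1, 0, 0, 0, 0, 1, 1, 1]
kappa = cohens_kappa(rater1, rater2)  # 8/10 observed agreement, 0.52 by chance
```

Note that identical raw agreement can yield very different kappas when category prevalence differs, which is one reason the simple treatment components and the complex composite indicators in the study are compared on kappa rather than on percent agreement.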
Pascal keywords (French): Indicateur, Qualité, Evaluation, Fiabilité, Donnée, Statistique, Méthodologie
Pascal keywords (English): Indicator, Quality, Evaluation, Reliability, Data, Statistics, Methodology
Record produced by:
Inist-CNRS - Institut de l'Information Scientifique et Technique
Call number: 98-0404700
Inist code: 002B30A01A1. Created: 25/01/1999.