[Special Mathematics Lecture Series, No. 13] Statistical Methods for Analysis of the Accuracy of Diagnostic Tests with Imperfect Standard Bias
Speaker: Xiao-Hua Zhou, Professor, University of Washington
Event period: 2010-07-16 08:00 to 2010-07-30 18:00
Venue: Beijing International Center for Mathematical Research
Time and location: July 20, 21, 22, 27, 28, and 29, 9:30-11:30 a.m., Room 1328, Resource Building (资源大厦)
In estimating the diagnostic accuracy of medical tests, we usually assume that a gold standard exists for establishing the true disease status of a subject. However, for many disease conditions it is difficult or impossible to establish a definitive diagnosis: a perfect gold standard may not exist, or may be too expensive or impractical to administer. This is especially true for complex clinical conditions in the usual clinical practice setting. For example, the diagnosis of Alzheimer's disease cannot be definitive until a patient has died and a neuropathological examination has been performed. Even the "definitive" diagnosis of a well-defined condition, such as an infection by a known agent, requires culture of the organism or other detection methods, any of which may be subject to laboratory and other errors. Consequently, in many diagnostic accuracy studies an imperfect standard is used to evaluate the test. When an imperfect standard is treated as if it were a gold standard, the accuracy of the test is often either underestimated or overestimated. This type of bias is called imperfect reference standard bias.
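As a rough illustration of how this bias arises (a minimal sketch, not part of the lecture material): if the new test and the imperfect reference make errors independently given the true disease status, the "apparent" sensitivity and specificity computed against the reference have a simple closed form and are typically attenuated. All parameter values below are invented for illustration.

    # Apparent Se/Sp of a test T when an imperfect reference R is treated as
    # a gold standard. Assumes T and R are conditionally independent given
    # the true disease status D; all numbers are illustrative.
    def apparent_accuracy(prev, se_t, sp_t, se_r, sp_r):
        # P(R+) and P(R-) at prevalence prev
        p_r_pos = prev * se_r + (1 - prev) * (1 - sp_r)
        p_r_neg = 1 - p_r_pos
        # Apparent Se = P(T+ | R+), apparent Sp = P(T- | R-)
        app_se = (prev * se_t * se_r + (1 - prev) * (1 - sp_t) * (1 - sp_r)) / p_r_pos
        app_sp = (prev * (1 - se_t) * (1 - se_r) + (1 - prev) * sp_t * sp_r) / p_r_neg
        return app_se, app_sp

    # A test with true Se = 0.90, Sp = 0.95 scored against a reference with
    # Se = 0.80, Sp = 0.90 at prevalence 0.20:
    print(apparent_accuracy(0.20, 0.90, 0.95, 0.80, 0.90))
    # -> roughly (0.62, 0.91): both accuracy measures are underestimated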
In the absence of a gold standard, one may be interested only in assessing agreement between a new test and an imperfect reference standard, which is often reported as the kappa statistic. Agreement data on two tests can sometimes provide information about their accuracy: if the two tests disagree, one of them must be incorrect. However, agreement data alone cannot tell us the accuracy of the tests: if the two tests agree, they may both be correct or both be wrong.
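For reference, the kappa statistic mentioned above can be computed from a 2x2 agreement table as follows; the counts are hypothetical and only illustrate that high agreement says nothing about correctness.

    # Cohen's kappa for agreement between two binary tests.
    # Rows index test 1 (+/-), columns index test 2 (+/-); counts are illustrative.
    def cohen_kappa(a, b, c, d):
        n = a + b + c + d
        p_obs = (a + d) / n                                      # observed agreement
        p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
        return (p_obs - p_exp) / (1 - p_exp)

    # Two tests that agree on 180 of 200 subjects give kappa of about 0.61,
    # yet both tests could still be wrong on the 160 jointly "positive" subjects.
    print(cohen_kappa(160, 10, 10, 20))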
In this short course we present methods for correcting imperfect standard bias in the analysis of accuracy data. When we try to assess the accuracy of tests without a gold standard, we run into a non-identifiability problem: different sets of parameter values correspond to the same distribution of the observed data. Model identifiability is a necessary condition for many desirable asymptotic properties of model-based parameter estimators, such as root-n consistency.
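A standard way to see the non-identifiability is a degrees-of-freedom count (sketched below under the common conditional-independence assumption): with two binary tests applied to a single population, the observed 2x2 table supplies only three free cell probabilities, while the latent-class model has five parameters (prevalence plus the sensitivity and specificity of each test).

    # Order condition behind non-identifiability: for T conditionally
    # independent binary tests in P populations, the data supply
    # P * (2**T - 1) free cell probabilities, while the model has
    # P prevalences plus 2*T accuracy parameters. The check below is
    # necessary, not sufficient, for identifiability.
    def identifiability_check(num_tests, num_pops):
        df_data = num_pops * (2**num_tests - 1)
        n_params = num_pops + 2 * num_tests
        return df_data, n_params, df_data >= n_params

    print(identifiability_check(2, 1))  # (3, 5, False): not identified
    print(identifiability_check(2, 2))  # (6, 6, True): just identified (two-population design)
    print(identifiability_check(3, 1))  # (7, 7, True): just identified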
To overcome the estimation problem for a non-identified model, frequentist methods impose additional constraints on the model parameters, while Bayesian methods assume a proper prior distribution and base inference on the posterior distribution. To obtain useful parameter estimates, the Bayesian approach needs to elicit informative priors on at least as many parameters as would be constrained under the frequentist approach. We discuss both frequentist and Bayesian methods in this short course. We first describe the impact of imperfect reference standard bias on the estimated accuracy of diagnostic tests. Then we describe bias-correction methods for estimating the sensitivity and specificity of diagnostic tests evaluated against an imperfect gold standard. Finally, we present bias-correction methods for estimating the ROC curves of tests whose results are ordinal-scale or continuous.
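As a hedged illustration of the Bayesian route (not code from the course): with a single binary test, no gold standard, and informative Beta priors on the unknown sensitivity and specificity, the joint posterior can be evaluated on a simple grid. The counts and prior parameters below are invented for illustration.

    # Prevalence estimation from one binary test without a gold standard,
    # using informative Beta priors on Se and Sp and a flat prior on prevalence.
    import numpy as np
    from scipy import stats

    x, n = 45, 200                                   # observed positives out of n subjects
    grid = np.linspace(0.001, 0.999, 100)
    prev, se, sp = np.meshgrid(grid, grid, grid, indexing="ij")

    log_prior = (stats.beta(20, 4).logpdf(se)        # Se prior centred near 0.83
                 + stats.beta(30, 3).logpdf(sp))     # Sp prior centred near 0.91
    p_pos = prev * se + (1 - prev) * (1 - sp)        # probability of a positive result
    log_post = log_prior + stats.binom(n, p_pos).logpmf(x)

    post = np.exp(log_post - log_post.max())
    post_prev = post.sum(axis=(1, 2))                # marginal posterior of prevalence
    post_prev /= post_prev.sum()
    print("posterior mean prevalence:", float((grid * post_prev).sum()))

Without the informative priors (or an equivalent frequentist constraint), the posterior for prevalence in this sketch would remain driven largely by the prior, which is the practical face of the non-identifiability discussed above.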