TESTING A TEST - med.uottawa.ca


Interpreting Diagnostic Tests
Ian McDowell, Department of Epidemiology & Community Medicine, January 2012
Note to readers: you may find the additional notes & explanations in the ppt notes panel helpful.

The Challenge of Clinical Measurement
Diagnoses are based on information, from formal measurements and/or from clinical judgment. This information is seldom perfectly accurate:
- Random errors can occur (machine needs calibrating?)
- Biases in judgment or measurement can occur (this patient seems anxious: is he exaggerating?)
- Due to biological variability, this patient may not fit the general rule
- Diagnosis (e.g., hypertension) involves a categorical judgment; this often requires dividing a continuous score (blood pressure) into categories. How to choose a cutting point?

Therefore
You need to be aware:
- That we express these complexities in terms of probabilities
- That using a quantitative approach is better than just guessing!
- That you will gradually become familiar with the typical accuracy of measurements in your chosen clinical field
- That the principles apply to both diagnostic and screening tests
- Of how we describe the accuracy of a measurement.

Test characteristics
1. Reliability: consistency or reproducibility; this considers chance or random errors (which sometimes increase, sometimes decrease, scores). Is it measuring something?
2. Validity: Is it measuring what it is supposed to measure? By extension, what diagnostic conclusion can I draw from a particular score on this test? Validity may be affected by bias, which refers to systematic errors (these fall in a certain direction).
3. Safety, acceptability, cost, etc.

Reliability and Validity
[Figure: diagram contrasting combinations of low and high reliability with low and high validity. One combination gives a biased result; in another, the average of these inaccurate results is not bad, which is probably how screening questionnaires (e.g., for depression) work.]

Ways of Assessing Validity
- Content or face validity: does it make clinical or biological sense? Does it include the relevant symptoms?
- Criterion validity: comparison to a gold standard, a definitive measure (e.g., biopsy, autopsy). Expressed as sensitivity and specificity.
- Construct validity: used with abstract themes, such as quality of life, for which there is no definitive standard.

Criterion validation: Gold Standard
The criterion that your clinical observation or simple test is judged against: more definitive (but expensive or invasive) tests, such as a complete work-up, or the clinical outcome (for screening tests, when workup of well patients is unethical). Sensitivity and specificity are calculated from a research study that compares the test to a gold standard.

2 x 2 table for validating a test

                      Gold standard:
Test score:           Disease present    Disease absent
Test positive         a (TP)             b (FP)
Test negative         c (FN)             d (TN)

Sensitivity = a/(a+c) = TP/Diseased
Specificity = d/(b+d) = TN/Healthy
(TP = true positive; FP = false positive)
Golden Rule: always calculate based on the gold standard.

Sensitivity = the test's ability to detect disease when it is present: a/(a+c) = TP/(TP+FN) = TP/diseased. (A sensitive person is one who can perceive your feelings.)
(1 - seNsitivity) = false Negative rate: how many cases are missed by the test?
Specificity = the precision of the test: it identifies only that type of disease; nothing else looks like this. A specific test generates few false positives. So, if the result is positive, the patient has this diagnosis.
(1 - sPecificity) = false Positive rate: how many are falsely classified as having the disease?
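As a quick illustration of these formulas, here is a minimal Python sketch; the counts a, b, c, d are made-up numbers, not from any study:

```python
# Hypothetical 2 x 2 table counts (illustrative only)
a = 90   # true positives  (test +, disease present)
b = 40   # false positives (test +, disease absent)
c = 10   # false negatives (test -, disease present)
d = 860  # true negatives  (test -, disease absent)

sensitivity = a / (a + c)   # TP / all diseased
specificity = d / (b + d)   # TN / all healthy

print(f"Sensitivity = {sensitivity:.0%}")              # 90%
print(f"Specificity = {specificity:.0%}")              # 96%
print(f"False negative rate = {1 - sensitivity:.0%}")  # 10%
print(f"False positive rate = {1 - specificity:.0%}")  # 4%
```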

Test Errors
False Positives can arise due to other factors (diet, taking other medications, etc.). They entail the cost and danger of further investigations, labeling, and worry for the patient. This is similar to Type I or alpha error in a test of statistical significance (the possibility of falsely concluding that there is an effect of an intervention).
False Negatives imply missed cases, so potentially bad outcomes if untreated: an adverse event. Cf. Type II or beta error: the chance of missing a true difference.

Most Tests Provide a Continuous Score: Selecting a Cutting Point
[Figure: overlapping distributions of test scores for a healthy population and a sick population, with a possible cut-point separating healthy from pathological scores. Moving the cut-point one way increases sensitivity (includes more of the sick group); moving it the other way increases specificity (excludes healthy people).]
Crucial issue: changing the cut-point can improve sensitivity or specificity, but never both.

Improving the test
[Figure: an improved test reduces the overlap between the healthy and sick score distributions, increasing both sensitivity and specificity.]

Clinical applications
A specific test can be useful to rule in a disease. Why? Specific tests give few false positives. So, if the result is positive, you can be sure the patient has the condition (nothing else would give this result): SpPin.
A sensitive test can be useful for ruling a disease out: a negative result on a very sensitive test (which detects all true cases) reassures you that the patient does not have the disease: SnNout.

Your Patient's Question: "Doctor, how likely am I to have this disease?"
This introduces Predictive Values. Sensitivity and specificity don't answer this question, because they work from the gold standard. The clinician sees the test result, but does not know whether this person is a true positive or a false positive (or a true or false negative). Hmmm... How accurately does a positive (or negative) test result predict disease (or health)?

Start from Prevalence
Before you apply any test, the best guide you have to a diagnosis is based on prevalence: common conditions (in this population) are the more likely diagnosis. Prevalence indicates the pre-test probability of disease. You will then refine this informed guess in a series of stages: first, consider the patient's age and sex and use the prevalence for a similar person; then, based on the patient's history, you may modify the estimate.

2 x 2 table: Prevalence

                      Disease present    Disease absent    Total
Test positive         a                  b                 a+b
Test negative         c                  d                 c+d
Total                 a+c                b+d               N

Prevalence = (a+c) / N

Predictive Values
Based on rows, not columns:
- PPV = a/(a+b): interprets a positive test (its complement, b/(a+b), is the proportion of positive results that are false positives).
- NPV = d/(c+d): interprets a negative test (its complement, c/(c+d), is the proportion of negative results that are false negatives).
- Immediately useful to the clinician: they tell us about the test in this population, and thus this patient.
- They vary with the prevalence of disease, so must be determined for each clinical setting.
- As prevalence goes down, PPV goes down and NPV rises.
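A minimal sketch of this row-wise calculation, reusing the hypothetical counts from the earlier example (again, not real study data):

```python
# Hypothetical 2 x 2 table counts (same illustrative numbers as above)
a, b = 90, 40    # test positive row: true positives, false positives
c, d = 10, 860   # test negative row: false negatives, true negatives

ppv = a / (a + b)   # P(disease | positive test), calculated across the top row
npv = d / (c + d)   # P(no disease | negative test), calculated across the bottom row

print(f"PPV = {ppv:.0%}")   # 69%: roughly 1 in 3 positives is a false alarm
print(f"NPV = {npv:.0%}")   # 99%: a negative result is very reassuring
```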

Prevalence and Predictive Values

A. Specialist referral hospital

        D+     D-
T+      50     10
T-       5    100

Sensitivity = 50/55 = 91%; Specificity = 100/110 = 91%
Prevalence = 55/165 = 33%
PPV = 50/60 = 83%; NPV = 100/105 = 95%

B. Primary care

        D+     D-
T+      50    100
T-       5   1000

Sensitivity = 50/55 = 91%; Specificity = 1000/1100 = 91%
Prevalence = 55/1155 = 5%
PPV = 50/150 = 33%; NPV = 1000/1005 = 99.5%
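To see why the same test performs so differently in the two settings, here is a small Python sketch (a generic Bayes-style calculation, not part of the original slides) that derives PPV and NPV from sensitivity, specificity, and prevalence:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a test applied in a population with the given prevalence."""
    tp = sensitivity * prevalence              # true positives per person tested
    fp = (1 - specificity) * (1 - prevalence)  # false positives per person tested
    fn = (1 - sensitivity) * prevalence        # false negatives per person tested
    tn = specificity * (1 - prevalence)        # true negatives per person tested
    return tp / (tp + fp), tn / (tn + fn)

# Same test (sensitivity 91%, specificity 91%) in the two settings above
for setting, prev in [("Specialist referral hospital", 0.33), ("Primary care", 0.05)]:
    ppv, npv = predictive_values(0.91, 0.91, prev)
    print(f"{setting}: prevalence {prev:.0%} -> PPV {ppv:.0%}, NPV {npv:.0%}")
# Roughly reproduces the slide: PPV ~83% vs ~35%, NPV ~95% vs ~99%
```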

Exercise ECG (aka "treadmill test")
A 22-year-old male with chest pain has a pretest probability of obstructive CAD of roughly 1%. With a "positive" exercise ECG, his post-test probability is still less than 5%; in other words, there is a greater than 95% chance that he doesn't have important CAD, despite a "positive" test. The same applies in the opposite direction for a 72-year-old male with typical anginal chest pain: pretest probability is 95%, and if the exercise ECG is negative, the post-test probability is still probably greater than 80%. The overarching guideline is to treat the patient, not the test.
To display the effects of changing cut-points and prevalence on predictive values, click here (scroll down to the middle of the page).

From the literature you can get sensitivity & specificity. To work out PPV and NPV for your practice, you need to guess prevalence, then work backwards. Fill the cells in the following order:

                   Truth:
                   Disease present   Disease absent   Total   Predictive values
Test positive      4th               7th              8th     10th
Test negative      5th               6th              9th     11th
Total              2nd               3rd              1st

(2nd is obtained from the estimated prevalence, 4th from sensitivity, 6th from specificity.)
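A minimal Python sketch of this back-calculation; the function and variable names are my own, and the example figures (90% sensitivity, 95% specificity, 10% prevalence) are invented for illustration:

```python
def fill_2x2(sensitivity, specificity, prevalence, n=1000):
    """Work backwards from published sensitivity/specificity and an estimated
    prevalence to a 2 x 2 table for a notional n patients, then get PPV and NPV."""
    diseased = prevalence * n            # 2nd: from estimated prevalence
    healthy = n - diseased               # 3rd
    tp = sensitivity * diseased          # 4th: from sensitivity
    fn = diseased - tp                   # 5th
    tn = specificity * healthy           # 6th: from specificity
    fp = healthy - tn                    # 7th
    ppv = tp / (tp + fp)                 # 10th
    npv = tn / (fn + tn)                 # 11th
    return ppv, npv

ppv, npv = fill_2x2(0.90, 0.95, 0.10)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")   # PPV = 67%, NPV = 99%
```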

Predictive Values
High specificity means few false positives: Sp = TN/(TN+FP). False positives also drive PPV: PPV = TP/(TP+FP). So, with a high PPV the clinician is more certain that a patient with a positive test has the disease (it rules in the disease).
The higher the sensitivity, the higher the NPV: Sn = TP/(TP+FN); NPV = TN/(TN+FN). The clinician can be more confident that a patient with a negative score does not have the diagnosis (because there are few false negatives). So, a high NPV can rule out a disease.

Gasp! Isn't there an easier way to do all this?
Yes (good!) But first, you need a couple more concepts (less good).
We said that before you apply a test, prevalence gives your best guess about the chances that this patient has the disease. This is known as the Pretest Probability of Disease: (a+c)/N in the 2 x 2 table. It can also be expressed as the odds of disease: (a+c)/(b+d); when the disease is rare, the odds and the probability are nearly equal.
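As a reminder of the probability/odds relationship used below, a two-line Python sketch (generic arithmetic, not from the slides):

```python
def prob_to_odds(p):
    return p / (1 - p)

def odds_to_prob(odds):
    return odds / (1 + odds)

# For a rare disease, probability and odds are nearly equal:
print(prob_to_odds(0.02))   # about 0.0204
# For a common one they diverge:
print(prob_to_odds(0.50))   # 1.0, i.e. odds of 1:1
```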

This Leads to Likelihood Ratios
Defined as the odds that a given level of a diagnostic test result would be expected in a patient with the disease, as opposed to a patient without: true positive rate / false positive rate [TP / FP].
Advantages:
- Combines sensitivity and specificity into one number
- Can be calculated for many cut-points on the test
- Can be turned into predictive values
LR for a positive test = Sensitivity / (1 - Specificity)
LR for a negative test = (1 - Sensitivity) / Specificity
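A minimal Python sketch of these two formulas, using the illustrative sensitivity and specificity of 91% from the earlier example:

```python
def likelihood_ratios(sensitivity, specificity):
    """Return (LR+, LR-) for a dichotomous test."""
    lr_pos = sensitivity / (1 - specificity)   # how much a positive result raises the odds
    lr_neg = (1 - sensitivity) / specificity   # how much a negative result lowers the odds
    return lr_pos, lr_neg

lr_pos, lr_neg = likelihood_ratios(0.91, 0.91)
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")   # LR+ = 10.1, LR- = 0.10
```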

Practical application: a Nomogram
1) You need the LR for this test.
2) Plot the likelihood ratio on the center axis (e.g., LR+ = 20).
3) Select the pretest probability (prevalence) on the left axis (e.g., prevalence = 30%).
4) Draw a line through these points to the right axis to indicate the post-test probability of disease.
Example: post-test probability = 91%.
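The nomogram is a graphical shortcut for the odds arithmetic; a minimal Python sketch of the same calculation, reproducing the example above:

```python
def post_test_probability(pretest_prob, lr):
    """Convert pretest probability to post-test probability via odds x likelihood ratio
    (the calculation the nomogram performs graphically)."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# Example from the slide: pretest probability 30%, LR+ = 20
print(f"{post_test_probability(0.30, 20):.0%}")   # ~90%, matching the nomogram's 91%
```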

There is another way to combine sensitivity and specificity: meet Receiver Operating Characteristic (ROC) curves
Work out sensitivity and specificity for every possible cut-point, then plot these. The area under the curve indicates the information provided by the test.
[Figure: ROC curve plotting Sensitivity (vertical axis) against 1 - Specificity, i.e. false positives (horizontal axis).]
In an ideal test, the curve would reach the top left corner. For a useless test it would lie along the diagonal: no better than guessing.
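A minimal pure-Python sketch of the idea, using made-up test scores for the diseased and healthy groups (no real data):

```python
# Made-up test scores for illustration: higher score = more abnormal
diseased = [7.1, 6.4, 5.9, 8.2, 6.8, 7.5, 5.2, 6.0]
healthy  = [4.1, 5.0, 3.8, 4.6, 5.5, 4.9, 3.2, 4.4]

# Work out sensitivity and (1 - specificity) for every possible cut-point
points = []
for cut in sorted(set(diseased + healthy), reverse=True):
    sens = sum(s >= cut for s in diseased) / len(diseased)   # true positive rate
    fpr  = sum(s >= cut for s in healthy) / len(healthy)     # false positive rate
    points.append((fpr, sens))
    print(f"cut-point {cut}: sensitivity {sens:.2f}, 1-specificity {fpr:.2f}")

# Area under the curve by the trapezoidal rule; 1.0 = perfect test, 0.5 = useless
points = [(0.0, 0.0)] + points + [(1.0, 1.0)]
auc = sum((x2 - x1) * (y1 + y2) / 2 for (x1, y1), (x2, y2) in zip(points, points[1:]))
print(f"Area under the ROC curve = {auc:.2f}")
```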

Chaining LRs Together (1)
Example: a 45-year-old woman presents with chest pain. Based on her age, the pretest probability that a vague chest pain indicates CAD is about 1%. Take a fuller history. She reports a 1-month history of intermittent chest pain, suggesting angina (substernal pain; radiating down the arm; induced by effort; relieved by rest). The LR of this history for angina is about 100.
The previous example, step 1, from the history: she's young, so the pretest probability is about 1%; applying the LR of about 100, the probability rises to about 50% based on the history.

Chaining LRs Together (2)
The 45-year-old woman with a 1-month history of intermittent chest pain: after the history, the post-test probability is now about 50%. What will you do? A more precise (but also more costly) test: record an ECG. Result = 2.2 mm ST-segment depression. The LR for an ECG result of 2.2 mm ST depression = 10. This raises the post-test probability to > 90% for coronary artery disease (see next slide).

The previous example: ECG Results
Starting from the pretest probability of 50% (prior to the ECG, based on the history), the post-test probability now rises to about 90%.
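A minimal Python sketch chaining the two likelihood ratios from this example, reusing the odds arithmetic above (the LR values are those quoted on the slides):

```python
def update(pretest_prob, lr):
    """Apply one likelihood ratio: probability -> odds, multiply, back to probability."""
    odds = pretest_prob / (1 - pretest_prob) * lr
    return odds / (1 + odds)

p = 0.01                 # pretest probability from age alone: about 1%
p = update(p, 100)       # history strongly suggests angina (LR about 100)
print(f"After history: {p:.0%}")    # about 50%
p = update(p, 10)        # ECG shows 2.2 mm ST depression (LR about 10)
print(f"After ECG: {p:.0%}")        # about 91%
```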
