
Positive and negative predictive values

The positive and negative predictive values (PPV and NPV respectively) are the proportions of positive and negative results in statistics and diagnostic tests that are true positive and true negative results, respectively.[1] The PPV and NPV describe the performance of a diagnostic test or other statistical measure. A high result can be interpreted as indicating the accuracy of such a statistic. The PPV and NPV are not intrinsic to the test (as true positive rate and true negative rate are); they depend also on the prevalence.[2] Both PPV and NPV can be derived using Bayes' theorem.


Although sometimes used synonymously, a positive predictive value generally refers to what is established by control groups, while a post-test probability refers to a probability for an individual. Still, if the individual's pre-test probability of the target condition is the same as the prevalence in the control group used to establish the positive predictive value, the two are numerically equal.

In information retrieval, the PPV statistic is often called the precision.

Definition

Positive predictive value (PPV)

The positive predictive value (PPV), or precision, is defined as

$$\text{PPV} = \frac{\text{Number of true positives}}{\text{Number of true positives} + \text{Number of false positives}} = \frac{\text{Number of true positives}}{\text{Number of positive calls}}$$

where a "true positive" is the event that the test makes a positive prediction, and the subject has a positive result under the gold standard, and a "false positive" is the event that the test makes a positive prediction, and the subject has a negative result under the gold standard. The ideal value of the PPV, with a perfect test, is 1 (100%), and the worst possible value would be zero.

The PPV can also be computed from sensitivity, specificity, and the prevalence of the condition:

$$\text{PPV} = \frac{\text{sensitivity} \times \text{prevalence}}{\text{sensitivity} \times \text{prevalence} + (1 - \text{specificity}) \times (1 - \text{prevalence})}$$

cf. Bayes' theorem

The complement of the PPV is the false discovery rate (FDR):

$$\text{FDR} = 1 - \text{PPV} = \frac{\text{Number of false positives}}{\text{Number of true positives} + \text{Number of false positives}} = \frac{\text{Number of false positives}}{\text{Number of positive calls}}$$
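The arithmetic is easy to verify in code. Below is a minimal Python sketch (not part of the original article); the function names are illustrative, and the example values come from the fecal occult blood worked example later in this article.

```python
# Minimal sketch: PPV from raw counts and from sensitivity/specificity/prevalence.
# Function names are illustrative; example figures are from the FOB worked example below.

def ppv_from_counts(tp: int, fp: int) -> float:
    """Positive predictive value (precision) from true/false positive counts."""
    return tp / (tp + fp)

def ppv_from_rates(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value from sensitivity, specificity, and prevalence (Bayes' theorem)."""
    numerator = sensitivity * prevalence
    return numerator / (numerator + (1 - specificity) * (1 - prevalence))

print(ppv_from_counts(20, 180))                  # 0.10
print(ppv_from_rates(20 / 30, 0.91, 30 / 2030))  # ~0.10, the same test described via rates
print(1 - ppv_from_counts(20, 180))              # complement = false discovery rate, 0.90
```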

Negative predictive value (NPV)

The negative predictive value is defined as:

$$\text{NPV} = \frac{\text{Number of true negatives}}{\text{Number of true negatives} + \text{Number of false negatives}} = \frac{\text{Number of true negatives}}{\text{Number of negative calls}}$$

where a "true negative" is the event that the test makes a negative prediction, and the subject has a negative result under the gold standard, and a "false negative" is the event that the test makes a negative prediction, and the subject has a positive result under the gold standard. With a perfect test, one which returns no false negatives, the value of the NPV is 1 (100%), and with a test which returns no true negatives the NPV value is zero.

The NPV can also be computed from sensitivity, specificity, and prevalence:

$$\text{NPV} = \frac{\text{specificity} \times (1 - \text{prevalence})}{\text{specificity} \times (1 - \text{prevalence}) + (1 - \text{sensitivity}) \times \text{prevalence}}$$

$$\text{NPV} = \frac{TN}{TN + FN}$$

The complement of the NPV is the false omission rate (FOR):

$$\text{FOR} = 1 - \text{NPV} = \frac{\text{Number of false negatives}}{\text{Number of true negatives} + \text{Number of false negatives}} = \frac{\text{Number of false negatives}}{\text{Number of negative calls}}$$
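A parallel Python sketch (again illustrative, not from the article) for the NPV and its complement, using the same worked-example figures:

```python
# Minimal sketch: NPV from raw counts and from sensitivity/specificity/prevalence.

def npv_from_counts(tn: int, fn: int) -> float:
    """Negative predictive value from true/false negative counts."""
    return tn / (tn + fn)

def npv_from_rates(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Negative predictive value from sensitivity, specificity, and prevalence."""
    numerator = specificity * (1 - prevalence)
    return numerator / (numerator + (1 - sensitivity) * prevalence)

print(npv_from_counts(1820, 10))                 # ~0.9945
print(npv_from_rates(20 / 30, 0.91, 30 / 2030))  # same value computed from rates
print(1 - npv_from_counts(1820, 10))             # complement = false omission rate, ~0.0055
```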

Although sometimes used synonymously, a negative predictive value generally refers to what is established by control groups, while a negative post-test probability rather refers to a probability for an individual. Still, if the individual's pre-test probability of the target condition is the same as the prevalence in the control group used to establish the negative predictive value, then the two are numerically equal.

Relationship

The following diagram illustrates how the positive predictive value, negative predictive value, sensitivity, and specificity are related.

Predicted condition Sources: [3][4][5][6][7][8][9][10][11]
Total population
= P + N
Predicted Positive (PP) Predicted Negative (PN) Informedness, bookmaker informedness (BM)
= TPR + TNR − 1
Prevalence threshold (PT)
= (√(TPR × FPR) − FPR)/(TPR − FPR)
Actual condition
Positive (P)[a] True positive (TP),
hit
[b]
False negative (FN),
type II error, miss,
underestimation
[c]
True positive rate (TPR), recall, sensitivity (SEN), probability of detection, hit rate, power
= TP/P = 1 − FNR
False negative rate (FNR),
miss rate
= FN/P = 1 − TPR
Negative (N)[d] False positive (FP),
type I error, false alarm,
overestimation
[e]
True negative (TN),
correct rejection
[f]
False positive rate (FPR),
probability of false alarm, fall-out
= FP/N = 1 − TNR
True negative rate (TNR),
specificity (SPC), selectivity
= TN/N = 1 − FPR
Prevalence
= P/(P + N)
Positive predictive value (PPV), precision
= TP/PP = 1 − FDR
False omission rate (FOR)
= FN/PN = 1 − NPV
Positive likelihood ratio (LR+)
= TPR/FPR
Negative likelihood ratio (LR−)
= FNR/TNR
Accuracy (ACC)
= (TP + TN)/(P + N)
False discovery rate (FDR)
= FP/PP = 1 − PPV
Negative predictive value (NPV)
= TN/PN = 1 − FOR
Markedness (MK), deltaP (Δp)
= PPV + NPV − 1
Diagnostic odds ratio (DOR)
= LR+/LR−
Balanced accuracy (BA)
= (TPR + TNR)/2
F1 score
= 2 × PPV × TPR/(PPV + TPR) = 2 TP/(2 TP + FP + FN)
Fowlkes–Mallows index (FM)
= √(PPV × TPR)
Matthews correlation coefficient (MCC)
= √(TPR × TNR × PPV × NPV) − √(FNR × FPR × FOR × FDR)
Threat score (TS), critical success index (CSI), Jaccard index
= TP/(TP + FN + FP)
  a. ^ the number of real positive cases in the data
  b. ^ A test result that correctly indicates the presence of a condition or characteristic
  c. ^ Type II error: A test result which wrongly indicates that a particular condition or attribute is absent
  d. ^ the number of real negative cases in the data
  e. ^ Type I error: A test result which wrongly indicates that a particular condition or attribute is present
  f. ^ A test result that correctly indicates the absence of a condition or characteristic


Note that the positive and negative predictive values can only be estimated using data from a cross-sectional study or other population-based study in which valid prevalence estimates may be obtained. In contrast, the sensitivity and specificity can be estimated from case-control studies.
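To make the relationships in the table concrete, here is a small self-contained Python sketch (not from any cited source; the function name and example counts are hypothetical) that derives several of the listed metrics from a single 2×2 confusion matrix.

```python
# Hypothetical helper: derive table metrics from one 2x2 confusion matrix.
from math import sqrt

def confusion_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    p, n = tp + fn, fp + tn        # actual positives / negatives
    pp, pn = tp + fp, fn + tn      # predicted positives / negatives
    tpr, tnr = tp / p, tn / n      # sensitivity, specificity
    fpr = fp / n                   # false positive rate
    ppv, npv = tp / pp, tn / pn    # precision, negative predictive value
    return {
        "prevalence": p / (p + n),
        "accuracy": (tp + tn) / (p + n),
        "PPV": ppv,
        "NPV": npv,
        "informedness (BM)": tpr + tnr - 1,
        "markedness (MK)": ppv + npv - 1,
        "balanced accuracy": (tpr + tnr) / 2,
        "F1": 2 * tp / (2 * tp + fp + fn),
        "Fowlkes-Mallows": sqrt(ppv * tpr),
        "prevalence threshold": (sqrt(tpr * fpr) - fpr) / (tpr - fpr),
    }

for name, value in confusion_metrics(tp=90, fp=10, fn=20, tn=880).items():
    print(f"{name}: {value:.4f}")
```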

Worked example

Suppose the fecal occult blood (FOB) screen test is used in 2030 people to look for bowel cancer:

Fecal occult blood screen test outcome
Total population
(pop.) = 2030
Test outcome positive Test outcome negative Accuracy (ACC)
= (TP + TN) / pop.
= (20 + 1820) / 2030
90.64%
F1 score
= 2 × precision × recall/(precision + recall)
0.174
Patients with
bowel cancer
(as confirmed
on endoscopy)
Actual condition
positive (AP)
= 30
(2030 × 1.48%)
True positive (TP)
= 20
(2030 × 1.48% × 67%)
False negative (FN)
= 10
(2030 × 1.48% × (100% − 67%))
True positive rate (TPR), recall, sensitivity
= TP / AP
= 20 / 30
66.7%
False negative rate (FNR), miss rate
= FN / AP
= 10 / 30
33.3%
Actual condition
negative (AN)
= 2000
(2030 × (100% − 1.48%))
False positive (FP)
= 180
(2030 × (100% − 1.48%) × (100% − 91%))
True negative (TN)
= 1820
(2030 × (100% − 1.48%) × 91%)
False positive rate (FPR), fall-out, probability of false alarm
= FP / AN
= 180 / 2000
= 9.0%
Specificity, selectivity, true negative rate (TNR)
= TN / AN
= 1820 / 2000
= 91%
Prevalence
= AP / pop.
= 30 / 2030
1.48%
Positive predictive value (PPV), precision
= TP / (TP + FP)
= 20 / (20 + 180)
= 10%
False omission rate (FOR)
= FN / (FN + TN)
= 10 / (10 + 1820)
0.55%
Positive likelihood ratio (LR+)
= TPR/FPR
= (20 / 30) / (180 / 2000)
7.41
Negative likelihood ratio (LR−)
= FNR/TNR
= (10 / 30) / (1820 / 2000)
0.366
False discovery rate (FDR)
= FP / (TP + FP)
= 180 / (20 + 180)
= 90.0%
Negative predictive value (NPV)
= TN / (FN + TN)
= 1820 / (10 + 1820)
99.45%
Diagnostic odds ratio (DOR)
= LR+/LR−
20.2

The small positive predictive value (PPV = 10%) indicates that many of the positive results from this testing procedure are false positives. Thus it will be necessary to follow up any positive result with a more reliable test to obtain a more accurate assessment as to whether cancer is present. Nevertheless, such a test may be useful if it is inexpensive and convenient. The strength of the FOB screen test is instead in its negative predictive value: a negative result gives high confidence that the person tested does not have bowel cancer.
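The whole table can be reproduced from the stated population size, prevalence, sensitivity, and specificity. The following Python sketch (illustrative, not part of the article) rebuilds the counts and confirms the PPV and NPV figures.

```python
# Sketch: rebuild the FOB worked example from prevalence, sensitivity, and specificity.
population = 2030
prevalence = 30 / 2030      # ~1.48%
sensitivity = 20 / 30       # ~66.7%
specificity = 1820 / 2000   # 91%

actual_pos = population * prevalence
actual_neg = population - actual_pos

tp = actual_pos * sensitivity    # 20
fn = actual_pos - tp             # 10
tn = actual_neg * specificity    # 1820
fp = actual_neg - tn             # 180

ppv = tp / (tp + fp)             # 0.10    -> most positive results are false alarms
npv = tn / (tn + fn)             # ~0.9945 -> a negative result is very reassuring
print(f"TP={tp:.0f} FP={fp:.0f} FN={fn:.0f} TN={tn:.0f}")
print(f"PPV={ppv:.2%}  NPV={npv:.2%}")
```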

Problems

Other individual factors

Note that the PPV is not intrinsic to the test—it depends also on the prevalence.[2] Due to the large effect of prevalence upon predictive values, a standardized approach has been proposed, where the PPV is normalized to a prevalence of 50%.[12] PPV is directly proportional[dubious] to the prevalence of the disease or condition. In the above example, if the group of people tested had included a higher proportion of people with bowel cancer, then the PPV would probably come out higher and the NPV lower. If everybody in the group had bowel cancer, the PPV would be 100% and the NPV 0%.[citation needed]
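A quick illustration of this prevalence dependence (a hypothetical Python sketch, not from a cited source), holding the FOB example's sensitivity (about 66.7%) and specificity (91%) fixed while the prevalence varies:

```python
# Sketch: the same test at different prevalences yields very different PPVs.
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    numerator = sensitivity * prevalence
    return numerator / (numerator + (1 - specificity) * (1 - prevalence))

for prev in (0.0148, 0.10, 0.50, 1.00):
    print(f"prevalence {prev:6.1%} -> PPV {ppv(2 / 3, 0.91, prev):.1%}")
# roughly: 1.5% -> 10%, 10% -> 45%, 50% -> 88%, 100% -> 100%
```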

To overcome this problem, NPV and PPV should only be used if the ratio of the number of patients in the disease group to the number of patients in the healthy control group used to establish them is equivalent to the prevalence of the disease in the studied population, or, in case two disease groups are compared, if the ratio of the number of patients in disease group 1 to the number in disease group 2 is equivalent to the ratio of the prevalences of the two diseases studied. Otherwise, positive and negative likelihood ratios are more accurate than NPV and PPV, because likelihood ratios do not depend on prevalence.[citation needed]

When an individual being tested has a different pre-test probability of having a condition than the control groups used to establish the PPV and NPV, the PPV and NPV are generally distinguished from the positive and negative post-test probabilities, with the PPV and NPV referring to the ones established by the control groups, and the post-test probabilities referring to the ones for the tested individual (as estimated, for example, by likelihood ratios). Preferably, in such cases, a large group of equivalent individuals should be studied, in order to establish separate positive and negative predictive values for use of the test in such individuals.[citation needed]

Bayesian updating

Bayes' theorem confers inherent limitations on the accuracy of screening tests as a function of disease prevalence or pre-test probability. It has been shown that a testing system can tolerate significant drops in prevalence, up to a certain well-defined point known as the prevalence threshold, below which the reliability of a positive screening test drops precipitously. That said, Balayla et al.[13] showed that sequential testing overcomes the aforementioned Bayesian limitations and thus improves the reliability of screening tests. For a desired positive predictive value $\rho$ that approaches some constant $k$, the number of positive test iterations $n_i$ needed is:

$$n_i = \lim_{\rho \to k} \left\lceil \frac{\ln\left(\frac{\rho(\phi - 1)}{\phi(\rho - 1)}\right)}{\ln\left(\frac{a}{1 - b}\right)} \right\rceil$$

where

  • $\rho$ is the desired PPV
  • $n_i$ is the number of testing iterations necessary to achieve $\rho$
  • $a$ is the sensitivity
  • $b$ is the specificity
  • $\phi$ is the disease prevalence, and
  • $k$ is a constant.

Of note, the denominator of the above equation is the natural logarithm of the positive likelihood ratio (LR+).
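As a sketch of how this count might be computed (illustrative Python, not code from Balayla et al.; the variable names mirror the symbols above), rounding up to the nearest whole number of tests:

```python
# Sketch: number of consecutive positive tests needed to reach a desired PPV (rho).
from math import ceil, log

def positive_iterations(rho: float, sensitivity: float, specificity: float,
                        prevalence: float) -> int:
    """Smallest n_i such that n_i consecutive positives give a post-test PPV >= rho."""
    a, b, phi = sensitivity, specificity, prevalence
    numerator = log(rho * (phi - 1) / (phi * (rho - 1)))
    denominator = log(a / (1 - b))   # ln of the positive likelihood ratio LR+
    return ceil(numerator / denominator)

# e.g. sensitivity 90%, specificity 90%, prevalence 1%: four consecutive positives
# are needed before the post-test probability exceeds a desired PPV of 95%.
print(positive_iterations(rho=0.95, sensitivity=0.90, specificity=0.90, prevalence=0.01))
```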

Different target conditions

PPV is used to indicate the probability that, in case of a positive test, the patient really has the specified disease. However, there may be more than one cause for a disease, and any single potential cause may not always result in the overt disease seen in a patient. There is potential to mix up related target conditions of PPV and NPV, such as interpreting the PPV or NPV of a test as referring to the disease itself, when the value actually refers only to a predisposition to that disease.[citation needed]

An example is the microbiological throat swab used in patients with a sore throat. Usually publications stating PPV of a throat swab are reporting on the probability that this bacterium is present in the throat, rather than that the patient is ill from the bacteria found. If presence of this bacterium always resulted in a sore throat, then the PPV would be very useful. However the bacteria may colonise individuals in a harmless way and never result in infection or disease. Sore throats occurring in these individuals are caused by other agents such as a virus. In this situation the gold standard used in the evaluation study represents only the presence of bacteria (that might be harmless) but not a causal bacterial sore throat illness. It can be proven that this problem will affect positive predictive value far more than negative predictive value.[14] To evaluate diagnostic tests where the gold standard looks only at potential causes of disease, one may use an extension of the predictive value termed the Etiologic Predictive Value.[15][16]

See also

  • Binary classification
  • Sensitivity and specificity
  • False discovery rate
  • Relevance (information retrieval)
  • Receiver operating characteristic
  • Diagnostic odds ratio
  • Sensitivity index

References

  1. ^ Fletcher, Robert H.; Fletcher, Suzanne W. (2005). Clinical epidemiology: the essentials (4th ed.). Baltimore, Md.: Lippincott Williams & Wilkins. p. 45. ISBN 0-7817-5215-9.
  2. ^ a b Altman, DG; Bland, JM (1994). "Diagnostic tests 2: Predictive values". BMJ. 309 (6947): 102. doi:10.1136/bmj.309.6947.102. PMC 2540558. PMID 8038641.
  3. ^ Balayla, Jacques (2020). "Prevalence threshold (ϕe) and the geometry of screening curves". PLOS ONE. 15 (10): e0240215. doi:10.1371/journal.pone.0240215. PMID 33027310.
  4. ^ Fawcett, Tom (2006). "An Introduction to ROC Analysis" (PDF). Pattern Recognition Letters. 27 (8): 861–874. doi:10.1016/j.patrec.2005.10.010. S2CID 2027090.
  5. ^ Piryonesi S. Madeh; El-Diraby Tamer E. (2020-03-01). "Data Analytics in Asset Management: Cost-Effective Prediction of the Pavement Condition Index". Journal of Infrastructure Systems. 26 (1): 04019036. doi:10.1061/(ASCE)IS.1943-555X.0000512. S2CID 213782055.
  6. ^ Powers, David M. W. (2011). "Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness & Correlation". Journal of Machine Learning Technologies. 2 (1): 37–63.
  7. ^ Ting, Kai Ming (2011). Sammut, Claude; Webb, Geoffrey I. (eds.). Encyclopedia of machine learning. Springer. doi:10.1007/978-0-387-30164-8. ISBN 978-0-387-30164-8.
  8. ^ Brooks, Harold; Brown, Barb; Ebert, Beth; Ferro, Chris; Jolliffe, Ian; Koh, Tieh-Yong; Roebber, Paul; Stephenson, David (2015-01-26). "WWRP/WGNE Joint Working Group on Forecast Verification Research". Collaboration for Australian Weather and Climate Research. World Meteorological Organisation. Retrieved 2019-07-17.
  9. ^ Chicco D, Jurman G (January 2020). "The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation". BMC Genomics. 21 (1): 6-1–6-13. doi:10.1186/s12864-019-6413-7. PMC 6941312. PMID 31898477.
  10. ^ Chicco D, Toetsch N, Jurman G (February 2021). "The Matthews correlation coefficient (MCC) is more reliable than balanced accuracy, bookmaker informedness, and markedness in two-class confusion matrix evaluation". BioData Mining. 14 (13): 13. doi:10.1186/s13040-021-00244-z. PMC 7863449. PMID 33541410.
  11. ^ Tharwat A. (August 2018). "Classification assessment methods". Applied Computing and Informatics. 17: 168–192. doi:10.1016/j.aci.2018.08.003.
  12. ^ Heston, Thomas F. (2011). "Standardizing predictive values in diagnostic imaging research". Journal of Magnetic Resonance Imaging. 33 (2): 505, author reply 506–7. doi:10.1002/jmri.22466. PMID 21274995.
  13. ^ Balayla, Jacques (2020). "Bayesian Updating and Sequential Testing: Overcoming Inferential Limitations of Screening Tests". arXiv:2006.11641. https://arxiv.org/abs/2006.11641.
  14. ^ Orda, Ulrich; Gunnarsson, Ronny K; Orda, Sabine; Fitzgerald, Mark; Rofe, Geoffry; Dargan, Anna (2016). "Etiologic predictive value of a rapid immunoassay for the detection of group A Streptococcus antigen from throat swabs in patients presenting with a sore throat" (PDF). International Journal of Infectious Diseases. 45 (April): 32–5. doi:10.1016/j.ijid.2016.02.002. PMID 26873279.
  15. ^ Gunnarsson, Ronny K.; Lanke, Jan (2002). "The predictive value of microbiologic diagnostic tests if asymptomatic carriers are present". Statistics in Medicine. 21 (12): 1773–85. doi:10.1002/sim.1119. PMID 12111911. S2CID 26163122.
  16. ^ Gunnarsson, Ronny K. "EPV Calculator". Science Network TV.
