F-score

For the significance test, see F-test.

In statistical analysis of binary classification and information retrieval systems, the F-score or F-measure is a measure of predictive performance. It is calculated from the precision and recall of the test, where the precision is the number of true positive results divided by the number of all samples predicted to be positive, including those not identified correctly, and the recall is the number of true positive results divided by the number of all samples that should have been identified as positive. Precision is also known as positive predictive value, and recall is also known as sensitivity in diagnostic binary classification.
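For illustration, the following minimal Python sketch computes both quantities directly from counts of true positives, false positives, and false negatives; the counts are hypothetical and not from the article.

```python
def precision(tp: int, fp: int) -> float:
    """Positive predictive value: TP / (TP + FP)."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Sensitivity: TP / (TP + FN)."""
    return tp / (tp + fn)

# Hypothetical counts: 8 true positives, 2 false positives, 4 false negatives.
tp, fp, fn = 8, 2, 4
print(precision(tp, fp))  # 0.8       = 8 / (8 + 2)
print(recall(tp, fn))     # 0.666...  = 8 / (8 + 4)
```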

[Figure: Precision and recall]

The F1 score is the harmonic mean of the precision and recall. It thus symmetrically represents both precision and recall in one metric. The more generic $F_\beta$ score applies additional weights, valuing one of precision or recall more than the other.

The highest possible value of an F-score is 1.0, indicating perfect precision and recall, and the lowest possible value is 0, if either precision or recall is zero.

Etymology

The name F-measure is believed to derive from a different F function in Van Rijsbergen's book, when the measure was introduced at the Fourth Message Understanding Conference (MUC-4, 1992).[1]

Definition

The traditional F-measure or balanced F-score (F1 score) is the harmonic mean of precision and recall:[2]

$$F_1 = \frac{2}{\mathrm{recall}^{-1} + \mathrm{precision}^{-1}} = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} = \frac{2\,\mathrm{tp}}{2\,\mathrm{tp} + \mathrm{fp} + \mathrm{fn}}.$$
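A minimal sketch, with hypothetical counts, confirming that the harmonic-mean form and the count-based form above agree:

```python
tp, fp, fn = 8, 2, 4                      # hypothetical counts
prec = tp / (tp + fp)                     # 0.8
rec = tp / (tp + fn)                      # 0.666...

f1_harmonic = 2 / (1 / rec + 1 / prec)    # harmonic mean of precision and recall
f1_counts = 2 * tp / (2 * tp + fp + fn)   # count-based form
assert abs(f1_harmonic - f1_counts) < 1e-12
print(f1_counts)                          # 0.7272...
```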

Fβ score

A more general F score, $F_\beta$, uses a positive real factor $\beta$, where $\beta$ is chosen such that recall is considered $\beta$ times as important as precision:

$$F_\beta = (1 + \beta^2) \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\beta^2 \cdot \mathrm{precision} + \mathrm{recall}}.$$

In terms of type I and type II errors, this becomes:

$$F_\beta = \frac{(1 + \beta^2) \cdot \mathrm{tp}}{(1 + \beta^2) \cdot \mathrm{tp} + \beta^2 \cdot \mathrm{fn} + \mathrm{fp}}.$$

Two commonly used values for $\beta$ are 2, which weighs recall higher than precision, and 0.5, which weighs recall lower than precision.
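A short sketch, again with hypothetical counts, showing that the precision-recall form and the count-based form of $F_\beta$ agree at the commonly used values of $\beta$:

```python
def f_beta(tp, fp, fn, beta):
    """Precision/recall form of the F-beta score."""
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    return (1 + beta**2) * prec * rec / (beta**2 * prec + rec)

def f_beta_counts(tp, fp, fn, beta):
    """Equivalent count-based form (type I / type II errors)."""
    return (1 + beta**2) * tp / ((1 + beta**2) * tp + beta**2 * fn + fp)

tp, fp, fn = 8, 2, 4                      # hypothetical counts
for beta in (0.5, 1.0, 2.0):
    assert abs(f_beta(tp, fp, fn, beta) - f_beta_counts(tp, fp, fn, beta)) < 1e-12
    print(beta, round(f_beta(tp, fp, fn, beta), 4))
# 0.5 -> 0.7692 (favours precision), 1.0 -> 0.7273, 2.0 -> 0.6897 (favours recall)
```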

The F-measure was derived so that $F_\beta$ "measures the effectiveness of retrieval with respect to a user who attaches $\beta$ times as much importance to recall as precision".[3] It is based on Van Rijsbergen's effectiveness measure

$$E = 1 - \left(\frac{\alpha}{p} + \frac{1 - \alpha}{r}\right)^{-1}.$$

Their relationship is $F_\beta = 1 - E$, where $\alpha = \frac{1}{1 + \beta^2}$.
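A quick numerical check of this relationship, a sketch in which p and r are hypothetical precision and recall values:

```python
beta = 2.0
p, r = 0.8, 2 / 3                         # hypothetical precision and recall
alpha = 1 / (1 + beta**2)
E = 1 - 1 / (alpha / p + (1 - alpha) / r) # Van Rijsbergen's effectiveness measure
f_beta = (1 + beta**2) * p * r / (beta**2 * p + r)
assert abs(f_beta - (1 - E)) < 1e-12      # F_beta = 1 - E
```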

Diagnostic testing

This is related to the field of binary classification, where recall is often termed "sensitivity".

Sources: [4][5][6][7][8][9][10][11][12]

Confusion matrix (total population = P + N; rows are the actual condition, columns the predicted condition):

  • True positive (TP), hit: actual positive (P), predicted positive (PP)
  • False negative (FN), type II error, miss, underestimation: actual positive (P), predicted negative (PN)
  • False positive (FP), type I error, false alarm, overestimation: actual negative (N), predicted positive (PP)
  • True negative (TN), correct rejection: actual negative (N), predicted negative (PN)

Rates and derived measures:

  • True positive rate (TPR), recall, sensitivity (SEN), probability of detection, hit rate, power = TP/P = 1 − FNR
  • False negative rate (FNR), miss rate = FN/P = 1 − TPR
  • False positive rate (FPR), probability of false alarm, fall-out = FP/N = 1 − TNR
  • True negative rate (TNR), specificity (SPC), selectivity = TN/N = 1 − FPR
  • Prevalence = P/(P + N)
  • Positive predictive value (PPV), precision = TP/PP = 1 − FDR
  • False discovery rate (FDR) = FP/PP = 1 − PPV
  • False omission rate (FOR) = FN/PN = 1 − NPV
  • Negative predictive value (NPV) = TN/PN = 1 − FOR
  • Accuracy (ACC) = (TP + TN)/(P + N)
  • Balanced accuracy (BA) = (TPR + TNR)/2
  • Informedness, bookmaker informedness (BM) = TPR + TNR − 1
  • Markedness (MK), deltaP (Δp) = PPV + NPV − 1
  • Prevalence threshold (PT) = (√(TPR × FPR) − FPR)/(TPR − FPR)
  • Positive likelihood ratio (LR+) = TPR/FPR
  • Negative likelihood ratio (LR−) = FNR/TNR
  • Diagnostic odds ratio (DOR) = LR+/LR−
  • F1 score = 2 PPV × TPR/(PPV + TPR) = 2 TP/(2 TP + FP + FN)
  • Fowlkes–Mallows index (FM) = √(PPV × TPR)
  • Matthews correlation coefficient (MCC) = √(TPR × TNR × PPV × NPV) − √(FNR × FPR × FOR × FDR)
  • Threat score (TS), critical success index (CSI), Jaccard index = TP/(TP + FN + FP)
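As a compact sketch, a few of the listed quantities derived from raw counts in Python; the confusion-matrix counts are hypothetical, and the parenthesisation follows the formulas above.

```python
tp, fn, fp, tn = 8, 4, 2, 6               # hypothetical confusion-matrix counts
p, n = tp + fn, fp + tn                   # actual positives / actual negatives

tpr = tp / p                              # recall, sensitivity
tnr = tn / n                              # specificity
ppv = tp / (tp + fp)                      # precision
npv = tn / (tn + fn)
accuracy = (tp + tn) / (p + n)
balanced_accuracy = (tpr + tnr) / 2
informedness = tpr + tnr - 1              # BM
markedness = ppv + npv - 1                # MK
f1 = 2 * tp / (2 * tp + fp + fn)
print(accuracy, balanced_accuracy, f1)    # 0.7 0.708... 0.7272...
```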
 
[Figure: Normalised harmonic mean plot, where x is precision, y is recall, and the vertical axis is the F1 score, in percentage points]

[Figure: Precision-recall curve; points from different thresholds are colour-coded, and the point with the optimal F-score is highlighted in red]

Dependence of the F-score on class imbalance

The precision-recall curve, and thus the $F_\beta$ score, explicitly depends on the ratio $r$ of positive to negative test cases.[13] This means that comparing F-scores across problems with differing class ratios is problematic. One way to address this issue (see, e.g., Siblini et al., 2020[14]) is to use a standard class ratio $r_0$ when making such comparisons.
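A sketch of this dependence: holding a classifier's TPR and FPR fixed (hypothetical values, not from the article) and varying only the class ratio changes precision, and therefore F1.

```python
# With fixed TPR and FPR, F1 still shifts as the positive:negative ratio changes.
tpr, fpr = 0.9, 0.1                       # hypothetical, held constant

for ratio in (1.0, 0.1, 0.01):            # positives per negative
    pos, neg = 1000 * ratio, 1000.0
    tp, fp = tpr * pos, fpr * neg
    prec = tp / (tp + fp)
    f1 = 2 * prec * tpr / (prec + tpr)
    print(f"r={ratio:5.2f}  precision={prec:.3f}  F1={f1:.3f}")
# F1 falls from 0.900 to about 0.151 as positives become rarer,
# even though the classifier's TPR and FPR are unchanged.
```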

Applications

The F-score is often used in the field of information retrieval for measuring search, document classification, and query classification performance.[15] It is particularly relevant in applications which are primarily concerned with the positive class and where the positive class is rare relative to the negative class.

Earlier works focused primarily on the F1 score, but with the proliferation of large-scale search engines, performance goals changed to place more emphasis on either precision or recall,[16] and so $F_\beta$ is seen in wide application.

The F-score is also used in machine learning.[17] However, the F-measures do not take true negatives into account, hence measures such as the Matthews correlation coefficient, Informedness or Cohen's kappa may be preferred to assess the performance of a binary classifier.[18]

The F-score has been widely used in the natural language processing literature,[19] such as in the evaluation of named entity recognition and word segmentation.

Properties

The F1 score is the Dice coefficient of the set of retrieved items and the set of relevant items.[20]
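A sketch of this set view, with hypothetical item sets: treating the retrieved and relevant items as sets, the Dice coefficient of the two sets equals the F1 score.

```python
retrieved = {1, 2, 3, 4, 5}               # hypothetical retrieved item ids
relevant = {4, 5, 6, 7}                   # hypothetical relevant item ids

tp = len(retrieved & relevant)            # items both retrieved and relevant
dice = 2 * tp / (len(retrieved) + len(relevant))

prec = tp / len(retrieved)
rec = tp / len(relevant)
f1 = 2 * prec * rec / (prec + rec)
assert abs(dice - f1) < 1e-12             # both equal 4/9 here
```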

  • The F-score of a classifier that always predicts the positive class converges to 1 as the probability of the positive class increases.
  • The F1 score of a classifier that always predicts the positive class is 2 · precision/(precision + 1), since its recall is 1 (see the sketch after this list).
  • If the scoring model is uninformative (cannot distinguish between the positive and negative class), then the optimal threshold is 0, so that the positive class is always predicted.
  • The F1 score is concave in the true positive rate.[21]
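A numeric sketch of the first two properties; the prevalence values are hypothetical. For an always-positive classifier, precision equals the prevalence π and recall is 1, giving F1 = 2π/(π + 1).

```python
# An always-positive classifier has recall 1 and precision equal to the
# prevalence pi, so F1 = 2*pi / (pi + 1), which tends to 1 as pi tends to 1.
for pi in (0.1, 0.5, 0.9, 0.99):          # hypothetical positive-class probabilities
    print(f"prevalence={pi:.2f}  F1={2 * pi / (pi + 1):.3f}")
# prevalence=0.10 F1=0.182 ... prevalence=0.99 F1=0.995
```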

Criticism

David Hand and others criticize the widespread use of the F1 score because it gives equal importance to precision and recall. In practice, different types of misclassifications incur different costs; in other words, the relative importance of precision and recall is an aspect of the problem.[22]

According to Davide Chicco and Giuseppe Jurman, the F1 score is less truthful and informative than the Matthews correlation coefficient (MCC) in binary classification evaluation.[23]

David Powers has pointed out that F1 ignores the true negatives and is thus misleading for unbalanced classes, while kappa and correlation measures are symmetric and assess both directions of predictability: the classifier predicting the true class, and the true class predicting the classifier prediction. He proposes separate multiclass measures, Informedness and Markedness, for the two directions, noting that their geometric mean is correlation.[24]
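A numeric check of that last claim, a sketch with hypothetical counts: for a 2×2 table, the magnitude of the correlation (MCC) equals the geometric mean of informedness and markedness.

```python
import math

tp, fn, fp, tn = 8, 4, 2, 6               # hypothetical counts
informedness = tp / (tp + fn) + tn / (tn + fp) - 1
markedness = tp / (tp + fp) + tn / (tn + fn) - 1
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
# |MCC| equals the geometric mean of informedness and markedness.
assert abs(abs(mcc) - math.sqrt(informedness * markedness)) < 1e-12
```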

Another source of critique of F1 is its lack of symmetry: its value may change when the dataset labeling is flipped, i.e., when the "positive" samples are relabeled "negative" and vice versa. This criticism is addressed by the P4 metric, which is sometimes described as a symmetrical extension of F1.[25]

Difference from Fowlkes–Mallows index

While the F-measure is the harmonic mean of recall and precision, the Fowlkes–Mallows index is their geometric mean.[26]
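A one-line comparison makes the difference concrete; the precision and recall values are hypothetical, and by the AM-GM-HM inequality the geometric mean is always at least the harmonic mean.

```python
import math

p, r = 0.8, 0.5                           # hypothetical precision and recall
f1 = 2 * p * r / (p + r)                  # harmonic mean: 0.615...
fm = math.sqrt(p * r)                     # geometric mean: 0.632...
assert fm >= f1                           # geometric mean >= harmonic mean
```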

Extension to multi-class classification

The F-score is also used for evaluating classification problems with more than two classes (multiclass classification). In this setup, the final score is obtained by micro-averaging (biased by class frequency) or macro-averaging (taking all classes as equally important). For macro-averaging, two different formulas have been used: the F-score of the (arithmetic) class-wise precision and recall means, or the arithmetic mean of class-wise F-scores, where the latter exhibits more desirable properties.[27]
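A sketch in plain Python contrasting the averaging schemes on hypothetical per-class counts; note that the two macro variants can give different values.

```python
# Hypothetical per-class (tp, fp, fn) counts for three classes.
counts = [(50, 10, 5), (5, 2, 10), (2, 8, 1)]

# Micro-averaging: pool the counts first (weighted towards frequent classes).
TP = sum(tp for tp, _, _ in counts)
FP = sum(fp for _, fp, _ in counts)
FN = sum(fn for _, _, fn in counts)
micro_f1 = 2 * TP / (2 * TP + FP + FN)

# Macro variant 1: arithmetic mean of class-wise F1 scores.
macro_f1 = sum(2 * tp / (2 * tp + fp + fn) for tp, fp, fn in counts) / len(counts)

# Macro variant 2: F1 of the class-wise precision and recall means.
prec = [tp / (tp + fp) for tp, fp, _ in counts]
rec = [tp / (tp + fn) for tp, _, fn in counts]
p_bar, r_bar = sum(prec) / len(prec), sum(rec) / len(rec)
macro_f1_pr = 2 * p_bar * r_bar / (p_bar + r_bar)

print(micro_f1, macro_f1, macro_f1_pr)    # three generally different values
```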

See also

  • BLEU
  • Confusion matrix
  • Hypothesis tests for accuracy
  • METEOR
  • NIST (metric)
  • Receiver operating characteristic
  • ROUGE (metric)
  • Uncertainty coefficient, also called proficiency
  • Word error rate
  • LEPOR

References

  1. ^ Sasaki, Y. (2007). "The truth of the F-measure" (PDF).
  2. ^ Aziz Taha, Abdel (2015). "Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool". BMC Medical Imaging. 15 (29): 1–28. doi:10.1186/s12880-015-0068-x. PMC 4533825. PMID 26263899.
  3. ^ Van Rijsbergen, C. J. (1979). Information Retrieval (2nd ed.). Butterworth-Heinemann.
  4. ^ Balayla, Jacques (2020). "Prevalence threshold (ϕe) and the geometry of screening curves". PLOS ONE. 15 (10): e0240215. doi:10.1371/journal.pone.0240215. PMID 33027310.
  5. ^ Fawcett, Tom (2006). "An Introduction to ROC Analysis" (PDF). Pattern Recognition Letters. 27 (8): 861–874. doi:10.1016/j.patrec.2005.10.010. S2CID 2027090.
  6. ^ Piryonesi S. Madeh; El-Diraby Tamer E. (2020-03-01). "Data Analytics in Asset Management: Cost-Effective Prediction of the Pavement Condition Index". Journal of Infrastructure Systems. 26 (1): 04019036. doi:10.1061/(ASCE)IS.1943-555X.0000512. S2CID 213782055.
  7. ^ Powers, David M. W. (2011). "Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness & Correlation". Journal of Machine Learning Technologies. 2 (1): 37–63.
  8. ^ Ting, Kai Ming (2011). Sammut, Claude; Webb, Geoffrey I. (eds.). Encyclopedia of machine learning. Springer. doi:10.1007/978-0-387-30164-8. ISBN 978-0-387-30164-8.
  9. ^ Brooks, Harold; Brown, Barb; Ebert, Beth; Ferro, Chris; Jolliffe, Ian; Koh, Tieh-Yong; Roebber, Paul; Stephenson, David (2015-01-26). "WWRP/WGNE Joint Working Group on Forecast Verification Research". Collaboration for Australian Weather and Climate Research. World Meteorological Organisation. Retrieved 2019-07-17.
  10. ^ Chicco D, Jurman G (January 2020). "The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation". BMC Genomics. 21 (1): 6-1–6-13. doi:10.1186/s12864-019-6413-7. PMC 6941312. PMID 31898477.
  11. ^ Chicco D, Toetsch N, Jurman G (February 2021). "The Matthews correlation coefficient (MCC) is more reliable than balanced accuracy, bookmaker informedness, and markedness in two-class confusion matrix evaluation". BioData Mining. 14 (13): 13. doi:10.1186/s13040-021-00244-z. PMC 7863449. PMID 33541410.
  12. ^ Tharwat A. (August 2018). "Classification assessment methods". Applied Computing and Informatics. 17: 168–192. doi:10.1016/j.aci.2018.08.003.
  13. ^ Brabec, Jan; Komárek, Tomáš; Franc, Vojtěch; Machlica, Lukáš (2020). "On model evaluation under non-constant class imbalance". International Conference on Computational Science. Springer. pp. 74–87. arXiv:2001.05571. doi:10.1007/978-3-030-50423-6_6.
  14. ^ Siblini, W.; Fréry, J.; He-Guelton, L.; Oblé, F.; Wang, Y. Q. (2020). "Master your metrics with calibration". In M. Berthold; A. Feelders; G. Krempl (eds.). Advances in Intelligent Data Analysis XVIII. Springer. pp. 457–469. arXiv:1909.02827. doi:10.1007/978-3-030-44584-3_36.
  15. ^ Beitzel., Steven M. (2006). On Understanding and Classifying Web Queries (Ph.D. thesis). IIT. CiteSeerX 10.1.1.127.634.
  16. ^ X. Li; Y.-Y. Wang; A. Acero (July 2008). Learning query intent from regularized click graphs. Proceedings of the 31st SIGIR Conference. p. 339. doi:10.1145/1390334.1390393. ISBN 9781605581644. S2CID 8482989.
  17. ^ See, e.g., the evaluation of the [1].
  18. ^ Powers, David M. W (2015). "What the F-measure doesn't measure". arXiv:1503.06410 [cs.IR].
  19. ^ Derczynski, L. (2016). Complementarity, F-score, and NLP Evaluation. Proceedings of the International Conference on Language Resources and Evaluation.
  20. ^ Manning, Christopher (April 1, 2009). An Introduction to Information Retrieval (PDF). Cambridge University Press. p. 200, Exercise 8.7. Retrieved 18 July 2022.
  21. ^ Lipton, Z. C.; Elkan, C. P.; Narayanaswamy, B. (2014). "F1-Optimal Thresholding in the Multi-Label Setting". arXiv:1402.1892.
  22. ^ Hand, David. "A note on using the F-measure for evaluating record linkage algorithms". doi:10.1007/s11222-017-9746-6. hdl:10044/1/46235. S2CID 38782128. Retrieved 2018-12-08.
  23. ^ Chicco D, Jurman G (January 2020). "The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation". BMC Genomics. 21 (6): 6. doi:10.1186/s12864-019-6413-7. PMC 6941312. PMID 31898477.
  24. ^ Powers, David M W (2011). "Evaluation: From Precision, Recall and F-Score to ROC, Informedness, Markedness & Correlation". Journal of Machine Learning Technologies. 2 (1): 37–63. hdl:2328/27165.
  25. ^ Sitarz, Mikolaj (2022). "Extending F1 metric, probabilistic approach". arXiv:2210.11997 [cs.LG].
  26. ^ Tharwat A (August 2018). "Classification assessment methods". Applied Computing and Informatics. 17: 168–192. doi:10.1016/j.aci.2018.08.003.
  27. ^ J. Opitz; S. Burst (2019). "Macro F1 and Macro F1". arXiv:1911.03347 [stat.ML].
