One-class classification

In machine learning, one-class classification (OCC), also known as unary classification or class-modelling, tries to identify objects of a specific class amongst all objects, by primarily learning from a training set containing only the objects of that class,[1] although there exist variants of one-class classifiers where counter-examples are used to further refine the classification boundary. This is different from and more difficult than the traditional classification problem, which tries to distinguish between two or more classes with the training set containing objects from all the classes. Examples include the monitoring of helicopter gearboxes,[2][3][4] motor failure prediction,[5] or the operational status of a nuclear plant as 'normal':[6] In this scenario, there are few, if any, examples of catastrophic system states; only the statistics of normal operation are known.

While many of the above approaches focus on the case of removing a small number of outliers or anomalies, one can also learn the other extreme, where the single class covers a small coherent subset of the data, using an information bottleneck approach.[7]

Overview

The term one-class classification (OCC) was coined by Moya & Hush (1996),[8] and many applications can be found in the scientific literature, for example outlier detection, anomaly detection, and novelty detection. A feature of OCC is that it uses only sample points from the assigned class, so that a representative sampling is not strictly required for non-target classes.[9]

Introduction

Figure: the hypersphere containing the target data, with center c and radius r. Objects on the boundary are support vectors, and two objects lie outside the boundary with slack greater than 0.

SVM-based one-class classification (OCC) relies on identifying the smallest hypersphere (with radius r and center c) that contains all of the data points.[10] This method is called Support Vector Data Description (SVDD). Formally, the problem can be defined in the following constrained optimization form:

\[ \min_{r,c} \; r^2 \quad \text{subject to } \|\Phi(x_i) - c\|^2 \leq r^2 \quad \forall i = 1, 2, \ldots, n \]

However, the above formulation is highly restrictive and sensitive to the presence of outliers. Therefore, a flexible formulation that allows for the presence of outliers is given below:

\[ \min_{r,c,\zeta} \; r^2 + \frac{1}{\nu n} \sum_{i=1}^{n} \zeta_i \]

\[ \text{subject to } \|\Phi(x_i) - c\|^2 \leq r^2 + \zeta_i \quad \forall i = 1, 2, \ldots, n \]

From the Karush–Kuhn–Tucker (KKT) optimality conditions, we obtain

\[ c = \sum_{i=1}^{n} \alpha_i \Phi(x_i), \]

where the \( \alpha_i \)'s are the solution to the following optimization problem:

\[ \max_{\alpha} \; \sum_{i=1}^{n} \alpha_i \kappa(x_i, x_i) - \sum_{i,j=1}^{n} \alpha_i \alpha_j \kappa(x_i, x_j) \]

subject to \( \sum_{i=1}^{n} \alpha_i = 1 \) and \( 0 \leq \alpha_i \leq \frac{1}{\nu n} \) for all \( i = 1, 2, \ldots, n \).

The introduction of a kernel function provides additional flexibility to the One-class SVM (OSVM) algorithm.[11]
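
SVDD is closely related to the hyperplane-based one-class SVM of Schölkopf et al.; with a stationary kernel such as the RBF kernel, the two formulations are equivalent. The following is a minimal sketch using scikit-learn's OneClassSVM; the synthetic data and the gamma and nu values are illustrative assumptions, not part of the method's definition.

# One-class SVM sketch: train on target-class samples only, then
# label test points as inliers (+1) or outliers (-1).
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))            # target class only
X_test = np.array([[0.1, -0.2], [4.0, 4.0]])   # an inlier and an outlier

# nu upper-bounds the fraction of training points treated as outliers.
clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05).fit(X_train)
print(clf.predict(X_test))                     # e.g. [ 1 -1]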

PU (Positive Unlabeled) learning

A similar problem is PU learning, in which a binary classifier is learned in a semi-supervised way from only positive and unlabeled sample points.[12]

In PU learning, two sets of examples are assumed to be available for training: the positive set \( P \) and a mixed set \( U \), which is assumed to contain both positive and negative samples, but without these being labeled as such. This contrasts with other forms of semi-supervised learning, where it is assumed that a labeled set containing examples of both classes is available in addition to unlabeled samples. A variety of techniques exist to adapt supervised classifiers to the PU learning setting, including variants of the EM algorithm. PU learning has been successfully applied to text,[13][14][15] time series,[16] bioinformatics tasks,[17][18] and remote-sensing data.[19]
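
One widely used PU technique, due to Elkan and Noto (an assumed example; it is not among the works cited above), treats the labeled/unlabeled indicator as a noisy proxy for the true class. A hedged sketch follows; the use of logistic regression and the hold-out split are illustrative choices.

# PU learning sketch (Elkan-Noto style): train a probabilistic
# classifier g to separate labeled positives from unlabeled points;
# its mean score on held-out positives estimates the labeling
# frequency c, and g(x)/c approximates p(y = 1 | x).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def fit_pu(X_pos, X_unl):
    X = np.vstack([X_pos, X_unl])
    s = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_unl))])
    X_tr, X_ho, s_tr, s_ho = train_test_split(
        X, s, test_size=0.2, stratify=s, random_state=0)
    g = LogisticRegression(max_iter=1000).fit(X_tr, s_tr)
    c = g.predict_proba(X_ho[s_ho == 1])[:, 1].mean()  # labeling frequency
    return g, c

def predict_pu(g, c, X):
    return np.clip(g.predict_proba(X)[:, 1] / c, 0.0, 1.0)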

Approaches

Several approaches have been proposed to solve one-class classification (OCC). They can be divided into three main categories: density estimation, boundary methods, and reconstruction methods.[6]

Density estimation methods

Density estimation methods estimate the density of the training data and set a threshold on this density. They assume a parametric distribution, such as a Gaussian or a Poisson distribution, after which discordancy tests can be used to test new objects. These methods are robust to scale variance.

The Gaussian model[20] is one of the simplest methods for creating one-class classifiers. By the central limit theorem (CLT),[21] these methods work best when a large number of samples is present and each is perturbed by small independent error values. The probability density for a d-dimensional object \( x \) is given by:

\[ p_{\mathcal{N}}(x; \mu, \Sigma) = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\!\left( -\frac{1}{2} (x - \mu)^{T} \Sigma^{-1} (x - \mu) \right) \]

where \( \mu \) is the mean and \( \Sigma \) is the covariance matrix. Computing the inverse of the covariance matrix, \( \Sigma^{-1} \), is the costliest operation, and in cases where the data are not scaled properly or have singular directions, the pseudo-inverse \( \Sigma^{\dagger} = \Sigma^{T} (\Sigma \Sigma^{T})^{-1} \) is used to approximate the inverse.[22]
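
A minimal sketch of this Gaussian approach is given below, assuming only target-class training data; the quantile-based rejection threshold is an assumed convention, not part of the model itself.

# Gaussian one-class model: fit the mean and covariance on the target
# class, use the pseudo-inverse for stability, and reject objects whose
# squared Mahalanobis distance exceeds a quantile of the training set.
import numpy as np

def fit_gaussian(X_train, quantile=0.95):
    mu = X_train.mean(axis=0)
    sigma = np.cov(X_train, rowvar=False)
    sigma_pinv = np.linalg.pinv(sigma)   # handles singular directions
    diff = X_train - mu
    d2 = np.einsum("ij,jk,ik->i", diff, sigma_pinv, diff)
    return mu, sigma_pinv, np.quantile(d2, quantile)

def is_target(z, mu, sigma_pinv, threshold):
    d2 = (z - mu) @ sigma_pinv @ (z - mu)  # squared Mahalanobis distance
    return d2 <= threshold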

Boundary methods

Boundary methods focus on setting a boundary around a small set of points, called target points, and attempt to minimize the volume that the boundary encloses. Because boundary methods rely on distances, they are not robust to scale variance. The k-centers method, NN-d, and SVDD are some of the key examples.

K-centers

In the k-center algorithm,[23] \( k \) small balls with equal radius are placed so as to minimize, over all training objects, the maximum distance from an object to its nearest center. Formally, the following error is minimized:

\[ \varepsilon_{k\text{-center}} = \max_{i} \, \min_{k} \, \|x_i - \mu_k\|^2 \]

The algorithm uses a forward search with random initialization, where the radius is determined by the maximum distance between a ball's center and the objects it must capture. After the centers are determined, the distance for any given test object \( z \) can be calculated as

\[ d_{k\text{-center}}(z) = \min_{k} \, \|z - \mu_k\|^2 . \]
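
A short sketch of this boundary method follows; the greedy farthest-first placement stands in for the forward search described above, and is an assumption rather than the exact procedure of the cited paper.

# k-centers sketch: greedily place centers (farthest-first), record the
# radius needed to cover all training objects, then score a test object
# by its squared distance to the nearest center.
import numpy as np

def k_centers(X, k, seed=0):
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]   # random initialization
    while len(centers) < k:
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])   # farthest object becomes a center
    centers = np.array(centers)
    radius = np.max(np.min(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1))
    return centers, radius

def d_k_center(z, centers):
    return np.min(np.linalg.norm(centers - z, axis=1)) ** 2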

Reconstruction methods

Reconstruction methods use prior knowledge about the generating process to build a generative model that best fits the data. New objects can then be described in terms of a state of the generating model. Some examples of reconstruction methods for OCC are k-means clustering, learning vector quantization, and self-organizing maps.
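
As an illustration, the k-means variant scores a test object by its reconstruction error, i.e. the distance to the nearest learned prototype. The sketch below assumes scikit-learn and a quantile-based threshold; both are illustrative choices.

# Reconstruction-method sketch with k-means: objects far from every
# learned prototype reconstruct poorly and are flagged as outliers.
import numpy as np
from sklearn.cluster import KMeans

X_train = np.random.default_rng(1).normal(size=(300, 2))  # target class only
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_train)

train_err = km.transform(X_train).min(axis=1)  # distance to nearest prototype
threshold = np.quantile(train_err, 0.95)       # assumed rejection threshold

def is_target(z):
    return km.transform(z.reshape(1, -1)).min() <= threshold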

Applications

Document classification

The basic Support Vector Machine (SVM) paradigm is trained using both positive and negative examples; however, studies have shown that there are many valid reasons for using only positive examples. When the SVM algorithm is modified to use only positive examples, the process is considered one-class classification. One situation where this type of classification might prove useful to the SVM paradigm is in trying to identify a web browser's sites of interest based only on the user's browsing history.

Biomedical studies

One-class classification can be particularly useful in biomedical studies, where data from other classes are often difficult or impossible to obtain. In studying biomedical data, it can be difficult and/or expensive to obtain the labeled data from the second class that would be necessary to perform a two-class classification. A study in The Scientific World Journal found that the typicality approach is the most useful for analysing biomedical data because it can be applied to any type of dataset (continuous, discrete, or nominal).[24] The typicality approach is based on clustering data by examining each datum and placing it into a new or existing cluster.[25] To apply typicality to one-class classification for biomedical studies, each new observation, \( y_0 \), is compared to the target class, \( C \), and identified as an outlier or a member of the target class.[24]

Unsupervised Concept Drift Detection

One-class classification has similarities with unsupervised concept drift detection, where both aim to identify whether unseen data share characteristics with the initial data. A concept is the fixed probability distribution from which the data are drawn. In unsupervised concept drift detection, the goal is to detect whether the data distribution changes without using class labels. In one-class classification, the flow of data is not important: unseen data are classified as typical or outlier depending on their characteristics, i.e. whether they come from the initial concept or not. However, unsupervised drift detection monitors the flow of data and signals a drift if there is a significant amount of change or anomalies. Unsupervised concept drift detection can be viewed as the continuous form of one-class classification.[26] One-class classifiers are used for detecting concept drifts.[27]

See also

Multiclass classification
Anomaly detection
Supervised learning

References

  1. ^ Oliveri P (August 2017). "Class-modelling in food analytical chemistry: Development, sampling, optimisation and validation issues - A tutorial". Analytica Chimica Acta. 982: 9–19. doi:10.1016/j.aca.2017.05.013. hdl:11567/881059. PMID 28734370.
  2. ^ Japkowicz N, Myers C, Gluck M (1995). "A Novelty Detection Approach to Classification". pp. 518–523. CiteSeerX 10.1.1.40.3663.
  3. ^ Japkowicz N (1999). Concept-Learning in the Absence of Counter-Examples:An Autoassociation-Based Approach to Classification (Thesis). Rutgers University.
  4. ^ Japkowicz N (2001). "Supervised Versus Unsupervised Binary-Learning by Feedforward Neural Networks" (PDF). Machine Learning. 42: 97–122. doi:10.1023/A:1007660820062. S2CID 7298189.
  5. ^ Petsche T, Marcantonio A, Darken C, Hanson S, Kuhn G, Santoso I (1996). "A Neural Network Autoassociator for Induction Motor Failure Prediction" (PDF). NIPS.
  6. ^ a b Tax D (2001). One-class classification: Concept-learning in the absence of counter-examples (PDF) (Ph.D. thesis). The Netherlands: Delft University of Technology.
  7. ^ Crammer, Koby (2004). "A needle in a haystack". Twenty-first international conference on Machine learning - ICML '04. p. 26. doi:10.1145/1015330.1015399. ISBN 978-1-58113-838-2. S2CID 8736254.
  8. ^ Moya, M.; Hush, D. (1996). "Network constraints and multi-objective optimization for one-class classification". Neural Networks. 9 (3): 463–474. doi:10.1016/0893-6080(95)00120-4.
  9. ^ Rodionova OY, Oliveri P, Pomerantsev AL (2016-12-15). "Rigorous and compliant approaches to one-class classification". Chemometrics and Intelligent Laboratory Systems. 159: 89–96. doi:10.1016/j.chemolab.2016.10.002. hdl:11567/864539.
  10. ^ Noumir, Zineb; Honeine, Paul; Richard, Cédric (2012). "On simple one-class classification methods". IEEE International Symposium on Information Theory Proceedings. IEEE, 2012.
  11. ^ Khan, Shehroz S.; Madden, Michael G. (2010). Coyle, Lorcan; Freyne, Jill (eds.). "A Survey of Recent Trends in One Class Classification". Artificial Intelligence and Cognitive Science. Lecture Notes in Computer Science. 6206. Springer Berlin Heidelberg: 188–197. doi:10.1007/978-3-642-17080-5_21. hdl:10379/1472. ISBN 978-3-642-17080-5. S2CID 36784649.
  12. ^ Liu, Bing (2007). Web Data Mining. Springer. pp. 165–178.
  13. ^ Bing Liu; Wee Sun Lee; Philip S. Yu & Xiao-Li Li (2002). Partially supervised classification of text documents. ICML. pp. 8–12.
  14. ^ Hwanjo Yu; Jiawei Han; Kevin Chen-Chuan Chang (2002). PEBL: positive example based learning for web page classification using SVM. ACM SIGKDD.
  15. ^ Xiao-Li Li & Bing Liu (2003). Learning to classify text using positive and unlabeled data. IJCAI.
  16. ^ Minh Nhut Nguyen; Xiao-Li Li & See-Kiong Ng (2011). Positive Unlabeled Learning for Time Series Classification. IJCAI.
  17. ^ Peng Yang; Xiao-Li Li; Jian-Ping Mei; Chee-Keong Kwoh & See-Kiong Ng (2012). Positive-Unlabeled Learning for Disease Gene Identification. Bioinformatics, Vol 28(20).
  18. ^ Bugnon, L. A.; Yones, C.; Milone, D. H. & Stegmayer, G. (2020). "Genome-wide discovery of pre-miRNAs: comparison of recent approaches based on machine learning". Briefings in Bioinformatics. 22 (3). doi:10.1093/bib/bbaa184. PMID 32814347.
  19. ^ Li, W.; Guo, Q.; Elkan, C. (February 2011). "A Positive and Unlabeled Learning Algorithm for One-Class Classification of Remote-Sensing Data". IEEE Transactions on Geoscience and Remote Sensing. 49 (2): 717–725. Bibcode:2011ITGRS..49..717L. doi:10.1109/TGRS.2010.2058578. ISSN 0196-2892. S2CID 267120.
  20. ^ Bishop, Christopher M. (1995-11-23). Neural Networks for Pattern Recognition. Clarendon Press. ISBN 978-0-19-853864-6.
  21. ^ Ullman, Neil R (2017-01-01). Elementary statistics.[dead link]
  22. ^ "Introduction to Applied Mathematics". SIAM Bookstore. Retrieved 2019-04-29.
  23. ^ Ypma, Alexander; Duin, Robert P. W. (1998). Niklasson, Lars; Bodén, Mikael; Ziemke, Tom (eds.). "Support objects for domain approximation". Icann 98. Perspectives in Neural Computing. Springer London: 719–724. doi:10.1007/978-1-4471-1599-1_110. ISBN 978-1-4471-1599-1.
  24. ^ a b Irigoien I, Sierra B, Arenas C (2014). "Towards application of one-class classification methods to medical data". TheScientificWorldJournal. 2014: 730712. doi:10.1155/2014/730712. PMC 3980920. PMID 24778600.
  25. ^ Irigoien I, Arenas C (July 2008). "INCA: new statistic for estimating the number of clusters and identifying atypical units". Statistics in Medicine. 27 (15): 2948–73. doi:10.1002/sim.3143. PMID 18050154. S2CID 24791212.
  26. ^ Gözüaçık, Ömer; Can, Fazli (November 2020). "Concept learning using one-class classifiers for implicit drift detection in evolving data streams". Artificial Intelligence Review. 54 (5): 3725–3747. doi:10.1007/s10462-020-09939-x. hdl:11693/77042. S2CID 229506136.
  27. ^ Krawczyk, Bartosz; Woźniak, Michał (2015). "One-class classifiers with incremental learning and forgetting for data streams with concept drift". Soft Computing. 19 (12): 3387–3400. doi:10.1007/s00500-014-1492-5. S2CID 207011971.
