
Apriori algorithm

Apriori[1] is an algorithm for frequent item set mining and association rule learning over relational databases. It proceeds by identifying the frequent individual items in the database and extending them to larger and larger item sets as long as those item sets appear sufficiently often in the database. The frequent item sets determined by Apriori can be used to determine association rules which highlight general trends in the database: this has applications in domains such as market basket analysis.

Overview

The Apriori algorithm was proposed by Agrawal and Srikant in 1994. Apriori is designed to operate on databases containing transactions (for example, collections of items bought by customers, or details of website visits or IP addresses[2]). Other algorithms are designed for finding association rules in data having no transactions (Winepi and Minepi), or having no timestamps (DNA sequencing). Each transaction is seen as a set of items (an itemset). Given a threshold C, the Apriori algorithm identifies the item sets which are subsets of at least C transactions in the database.

Apriori uses a "bottom up" approach, where frequent subsets are extended one item at a time (a step known as candidate generation), and groups of candidates are tested against the data. The algorithm terminates when no further successful extensions are found.

Apriori uses breadth-first search and a hash tree structure to count candidate item sets efficiently. It generates candidate item sets of length k from item sets of length k − 1. Then it prunes the candidates which have an infrequent sub-pattern. According to the downward closure lemma, the candidate set contains all frequent k-length item sets. After that, it scans the transaction database to determine frequent item sets among the candidates.

The pseudocode for the algorithm is given below for a transaction database T and a support threshold of ε. Usual set-theoretic notation is employed, though note that T is a multiset. Ck is the candidate set for level k. At each step, the algorithm is assumed to generate the candidate sets from the large item sets of the preceding level, heeding the downward closure lemma. count[c] accesses a field of the data structure that represents candidate set c, which is initially assumed to be zero. Many details are omitted below; usually the most important part of the implementation is the data structure used for storing the candidate sets and counting their frequencies.

Apriori(T, ε)
    L1 ← {large 1-itemsets}
    k ← 2
    while Lk−1 is not empty
        Ck ← Apriori_gen(Lk−1, k)
        for transactions t in T
            Dt ← {c in Ck : c ⊆ t}
            for candidates c in Dt
                count[c] ← count[c] + 1
        Lk ← {c in Ck : count[c] ≥ ε}
        k ← k + 1
    return Union(Lk)

Apriori_gen(L, k)
    result ← list()
    for all p ∈ L, q ∈ L where p1 = q1, p2 = q2, ..., pk−2 = qk−2 and pk−1 < qk−1
        c = p ∪ {qk−1}
        if u ∈ L for all u ⊆ c where |u| = k − 1
            result.add(c)
    return result
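For concreteness, the following is a minimal Python sketch of the same procedure. The function names apriori and apriori_gen and the sorted-tuple representation of itemsets are choices made here for illustration; they are not part of the original paper.

from itertools import combinations
from collections import defaultdict

def apriori_gen(prev_level, k):
    # Join frequent (k-1)-itemsets that agree on their first k-2 items,
    # then prune candidates that have an infrequent (k-1)-subset.
    prev = sorted(prev_level)          # each itemset is a sorted tuple
    prev_set = set(prev)
    candidates = []
    for i in range(len(prev)):
        for j in range(i + 1, len(prev)):
            p, q = prev[i], prev[j]
            if p[:-1] == q[:-1]:       # first k-2 items agree, last items differ
                c = p + (q[-1],)
                # downward closure: every (k-1)-subset must itself be frequent
                if all(s in prev_set for s in combinations(c, k - 1)):
                    candidates.append(c)
    return candidates

def apriori(transactions, min_support):
    # Return all itemsets that appear in at least min_support transactions.
    transactions = [frozenset(t) for t in transactions]
    counts = defaultdict(int)
    for t in transactions:             # level 1: count individual items
        for item in t:
            counts[(item,)] += 1
    level = {c for c, n in counts.items() if n >= min_support}
    frequent = set(level)
    k = 2
    while level:
        counts = defaultdict(int)
        for c in apriori_gen(level, k):
            for t in transactions:
                if t.issuperset(c):
                    counts[c] += 1
        level = {c for c, n in counts.items() if n >= min_support}
        frequent |= level
        k += 1
    return frequent

For the database of Example 2 below, apriori([{1,2,3,4}, {1,2,4}, {1,2}, {2,3,4}, {2,3}, {3,4}, {2,4}], 3) returns (as tuples) the frequent itemsets {1}, {2}, {3}, {4}, {1,2}, {2,3}, {2,4} and {3,4}.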

Examples

Example 1

Consider the following database, where each row is a transaction and each cell is an individual item of the transaction:

alpha beta epsilon
alpha beta theta
alpha beta epsilon
alpha beta theta

The association rules that can be determined from this database are the following:

  1. 100% of sets with alpha also contain beta
  2. 50% of sets with alpha, beta also have epsilon
  3. 50% of sets with alpha, beta also have theta
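These percentages are rule confidences: the fraction of transactions containing the rule's left-hand side that also contain its right-hand side. As a minimal sketch (the confidence helper below is an illustrative name, not part of the article), they can be checked directly in Python:

transactions = [
    {"alpha", "beta", "epsilon"},
    {"alpha", "beta", "theta"},
    {"alpha", "beta", "epsilon"},
    {"alpha", "beta", "theta"},
]

def confidence(lhs, rhs):
    # Fraction of transactions containing lhs that also contain rhs.
    with_lhs = [t for t in transactions if lhs <= t]
    return sum(1 for t in with_lhs if rhs <= t) / len(with_lhs)

print(confidence({"alpha"}, {"beta"}))             # 1.0 -> rule 1 (100%)
print(confidence({"alpha", "beta"}, {"epsilon"}))  # 0.5 -> rule 2 (50%)
print(confidence({"alpha", "beta"}, {"theta"}))    # 0.5 -> rule 3 (50%)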

We can also illustrate this through a variety of examples.

Example 2

Assume that a large supermarket tracks sales data by stock-keeping unit (SKU) for each item: each item, such as "butter" or "bread", is identified by a numerical SKU. The supermarket has a database of transactions where each transaction is a set of SKUs that were bought together.

Let the database of transactions consist of the following itemsets:

Itemsets
{1,2,3,4}
{1,2,4}
{1,2}
{2,3,4}
{2,3}
{3,4}
{2,4}

We will use Apriori to determine the frequent item sets of this database. To do this, we will say that an item set is frequent if it appears in at least 3 transactions of the database: the value 3 is the support threshold.

The first step of Apriori is to count up the number of occurrences, called the support, of each member item separately. By scanning the database for the first time, we obtain the following result:

Item Support
{1} 3
{2} 6
{3} 4
{4} 5

All the itemsets of size 1 have a support of at least 3, so they are all frequent.

The next step is to generate a list of all pairs of the frequent items.

For example, regarding the pair {1,2}: the first table of Example 2 shows items 1 and 2 appearing together in three of the itemsets; therefore, we say the itemset {1,2} has a support of three.

Item Support
{1,2} 3
{1,3} 1
{1,4} 2
{2,3} 3
{2,4} 4
{3,4} 3

The pairs {1,2}, {2,3}, {2,4}, and {3,4} all meet or exceed the minimum support of 3, so they are frequent. The pairs {1,3} and {1,4} are not. Now, because {1,3} and {1,4} are not frequent, any larger set which contains {1,3} or {1,4} cannot be frequent. In this way, we can prune sets: we will now look for frequent triples in the database, but we can already exclude all the triples that contain one of these two pairs:

Item Support
{2,3,4} 2

In the example, there are no frequent triplets: {2,3,4} is below the minimal threshold, and the other triplets were excluded because they are supersets of pairs that were already below the threshold.

We have thus determined the frequent sets of items in the database, and illustrated how some items were not counted because one of their subsets was already known to be below the threshold.
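The level-by-level counts above can be reproduced with a short brute-force check in Python (a sketch assuming the seven transactions and the support threshold of 3 given above; unlike Apriori itself, it does not prune candidates, it only verifies the supports shown in the tables):

from itertools import combinations
from collections import Counter

transactions = [{1, 2, 3, 4}, {1, 2, 4}, {1, 2}, {2, 3, 4}, {2, 3}, {3, 4}, {2, 4}]
min_support = 3

# For each size k, count the support of every itemset occurring in some transaction.
for k in (1, 2, 3):
    counts = Counter(c for t in transactions for c in combinations(sorted(t), k))
    frequent = {c: n for c, n in counts.items() if n >= min_support}
    print(k, frequent)
# k = 1: {1}, {2}, {3}, {4} are frequent
# k = 2: {1,2}, {2,3}, {2,4}, {3,4} are frequent
# k = 3: no frequent triples ({2,3,4} has support 2)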

Limitations

Apriori, while historically significant, suffers from a number of inefficiencies and trade-offs, which have spawned other algorithms. Candidate generation produces large numbers of subsets (the algorithm attempts to load the candidate set with as many subsets as possible before each scan of the database). Bottom-up subset exploration (essentially a breadth-first traversal of the subset lattice) finds any maximal subset S only after all 2^|S| − 1 of its proper subsets.

The algorithm scans the database many times (once per candidate length), which reduces overall performance. For this reason, the algorithm assumes that the database is held permanently in memory.

Also, both the time and space complexity of this algorithm are very high: O(2^|D|), thus exponential, where |D| is the horizontal width (the total number of items) present in the database.

Later algorithms such as Max-Miner[3] try to identify the maximal frequent item sets without enumerating their subsets, and perform "jumps" in the search space rather than a purely bottom-up approach.

References

  1. ^ Rakesh Agrawal and Ramakrishnan Srikant. Fast algorithms for mining association rules. Proceedings of the 20th International Conference on Very Large Data Bases, VLDB, pages 487-499, Santiago, Chile, September 1994.
  2. ^ The data science behind IP address matching. Archived 2021-08-22 at the Wayback Machine. Published by deductive.com, September 6, 2018; retrieved September 7, 2018.
  3. ^ Bayardo Jr, Roberto J. (1998). "Efficiently mining long patterns from databases" (PDF). ACM SIGMOD Record. 27 (2).

External links

  • ARtool, GPL Java association rule mining application with GUI, offering implementations of multiple algorithms for discovery of frequent patterns and extraction of association rules (includes Apriori)
  • SPMF offers Java open-source implementations of Apriori and several variations such as AprioriClose, UApriori, AprioriInverse, AprioriRare, MSApriori, AprioriTID, and other more efficient algorithms such as FPGrowth and LCM.
  • Christian Borgelt provides C implementations for Apriori and many other frequent pattern mining algorithms (Eclat, FPGrowth, etc.). The code is distributed as free software under the MIT license.
  • The R package arules contains Apriori and Eclat and infrastructure for representing, manipulating and analyzing transaction data and patterns.
  • Efficient-Apriori is a Python package with an implementation of the algorithm as presented in the original paper.
