
Latent semantic analysis

Latent semantic analysis (LSA) is a technique in natural language processing, in particular distributional semantics, of analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms. LSA assumes that words that are close in meaning will occur in similar pieces of text (the distributional hypothesis). A matrix containing word counts per document (rows represent unique words and columns represent each document) is constructed from a large piece of text and a mathematical technique called singular value decomposition (SVD) is used to reduce the number of rows while preserving the similarity structure among columns. Documents are then compared by cosine similarity between any two columns. Values close to 1 represent very similar documents while values close to 0 represent very dissimilar documents.[1]
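A minimal sketch of this pipeline, assuming scikit-learn and NumPy, might look as follows. The four-document toy corpus and the choice of k = 2 retained dimensions are illustrative, not part of the article:

```python
# Minimal LSA sketch: count matrix -> truncated SVD -> cosine similarity.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the cat sat on the mat",
    "a cat and a dog played",
    "stock markets fell on monday",
    "investors sold stock as markets fell",
]

vec = CountVectorizer()
X = vec.fit_transform(docs).T.toarray()   # terms x documents, as described above

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                      # keep the k largest singular values
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T     # one k-dimensional vector per document

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(doc_vecs[0], doc_vecs[1]))    # topically related documents: relatively high
print(cosine(doc_vecs[0], doc_vecs[2]))    # unrelated documents: relatively low
```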

An information retrieval technique using latent semantic structure was patented in 1988 (US Patent 4,839,853, now expired) by Scott Deerwester, Susan Dumais, George Furnas, Richard Harshman, Thomas Landauer, Karen Lochbaum and Lynn Streeter. In the context of its application to information retrieval, it is sometimes called latent semantic indexing (LSI).[2]

Overview

Animation of the topic detection process in a document-word matrix. Every column corresponds to a document, every row to a word. A cell stores the weighting of a word in a document (e.g. by tf-idf), dark cells indicate high weights. LSA groups both documents that contain similar words, as well as words that occur in a similar set of documents. The resulting patterns are used to detect latent components.[3]

Occurrence matrix

LSA can use a document-term matrix which describes the occurrences of terms in documents; it is a sparse matrix whose rows correspond to terms and whose columns correspond to documents. A typical example of the weighting of the elements of the matrix is tf-idf (term frequency–inverse document frequency): the weight of an element of the matrix is proportional to the number of times the terms appear in each document, where rare terms are upweighted to reflect their relative importance.
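As an illustration of such a weighting (a sketch only; the toy counts and the exact smoothing conventions, which differ between implementations, are assumptions), tf-idf can be computed directly from a raw count matrix with NumPy:

```python
import numpy as np

# X: terms x documents raw count matrix (toy values, illustrative only)
X = np.array([[3, 0, 1],
              [0, 2, 0],
              [1, 1, 1]], dtype=float)

n_docs = X.shape[1]
tf = X / X.sum(axis=0, keepdims=True)   # term frequency within each document
df = (X > 0).sum(axis=1)                # number of documents containing each term
idf = np.log(n_docs / df)               # rare terms get larger idf weights
tfidf = tf * idf[:, None]               # weight of term i in document j
print(tfidf)                            # a term occurring in every document gets weight 0
```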

This matrix is also common to standard semantic models, though it is not necessarily explicitly expressed as a matrix, since the mathematical properties of matrices are not always used.

Rank lowering

After the construction of the occurrence matrix, LSA finds a low-rank approximation[4] to the term-document matrix. There could be various reasons for these approximations:

  • The original term-document matrix is presumed too large for the computing resources; in this case, the approximated low rank matrix is interpreted as an approximation (a "least and necessary evil").
  • The original term-document matrix is presumed noisy: for example, anecdotal instances of terms are to be eliminated. From this point of view, the approximated matrix is interpreted as a de-noisified matrix (a better matrix than the original).
  • The original term-document matrix is presumed overly sparse relative to the "true" term-document matrix. That is, the original matrix lists only the words actually in each document, whereas we might be interested in all words related to each document—generally a much larger set due to synonymy.

The consequence of the rank lowering is that some dimensions are combined and depend on more than one term:

{(car), (truck), (flower)} → {(1.3452 * car + 0.2828 * truck), (flower)}

This mitigates the problem of identifying synonymy, as the rank lowering is expected to merge the dimensions associated with terms that have similar meanings. It also partially mitigates the problem with polysemy, since components of polysemous words that point in the "right" direction are added to the components of words that share a similar meaning. Conversely, components that point in other directions tend to either simply cancel out, or, at worst, to be smaller than components in the directions corresponding to the intended sense.

Derivation

Let $X$ be a matrix where element $(i,j)$ describes the occurrence of term $i$ in document $j$ (this can be, for example, the frequency). $X$ will look like this:

$$
\begin{matrix}
 & \textbf{d}_j \\
 & \downarrow \\
\textbf{t}_i^T \rightarrow &
\begin{bmatrix}
x_{1,1} & \dots & x_{1,j} & \dots & x_{1,n} \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
x_{i,1} & \dots & x_{i,j} & \dots & x_{i,n} \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
x_{m,1} & \dots & x_{m,j} & \dots & x_{m,n}
\end{bmatrix}
\end{matrix}
$$

Now a row in this matrix will be a vector corresponding to a term, giving its relation to each document:

$$\textbf{t}_i^T = \begin{bmatrix} x_{i,1} & \dots & x_{i,j} & \dots & x_{i,n} \end{bmatrix}$$

Likewise, a column in this matrix will be a vector corresponding to a document, giving its relation to each term:

$$\textbf{d}_j = \begin{bmatrix} x_{1,j} \\ \vdots \\ x_{i,j} \\ \vdots \\ x_{m,j} \end{bmatrix}$$

Now the dot product $\textbf{t}_i^T \textbf{t}_p$ between two term vectors gives the correlation between the terms over the set of documents. The matrix product $X X^T$ contains all these dot products. Element $(i,p)$ (which is equal to element $(p,i)$) contains the dot product $\textbf{t}_i^T \textbf{t}_p$ ($= \textbf{t}_p^T \textbf{t}_i$). Likewise, the matrix $X^T X$ contains the dot products between all the document vectors, giving their correlation over the terms: $\textbf{d}_j^T \textbf{d}_q = \textbf{d}_q^T \textbf{d}_j$.

Now, from the theory of linear algebra, there exists a decomposition of $X$ such that $U$ and $V$ are orthogonal matrices and $\Sigma$ is a diagonal matrix. This is called a singular value decomposition (SVD):

$$X = U \Sigma V^T$$

The matrix products giving us the term and document correlations then become

$$
\begin{matrix}
X X^T &=& (U \Sigma V^T)(U \Sigma V^T)^T = (U \Sigma V^T)(V^{T^T} \Sigma^T U^T) = U \Sigma V^T V \Sigma^T U^T = U \Sigma \Sigma^T U^T \\
X^T X &=& (U \Sigma V^T)^T (U \Sigma V^T) = (V^{T^T} \Sigma^T U^T)(U \Sigma V^T) = V \Sigma^T U^T U \Sigma V^T = V \Sigma^T \Sigma V^T
\end{matrix}
$$

Since $\Sigma \Sigma^T$ and $\Sigma^T \Sigma$ are diagonal we see that $U$ must contain the eigenvectors of $X X^T$, while $V$ must be the eigenvectors of $X^T X$. Both products have the same non-zero eigenvalues, given by the non-zero entries of $\Sigma \Sigma^T$, or equally, by the non-zero entries of $\Sigma^T \Sigma$. Now the decomposition looks like this:

$$
\begin{matrix}
 & X & & & U & & \Sigma & & V^T \\
 & (\textbf{d}_j) & & & & & & & (\hat{\textbf{d}}_j) \\
 & \downarrow & & & & & & & \downarrow \\
(\textbf{t}_i^T) \rightarrow &
\begin{bmatrix}
x_{1,1} & \dots & x_{1,j} & \dots & x_{1,n} \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
x_{i,1} & \dots & x_{i,j} & \dots & x_{i,n} \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
x_{m,1} & \dots & x_{m,j} & \dots & x_{m,n}
\end{bmatrix}
& = &
(\hat{\textbf{t}}_i^T) \rightarrow &
\begin{bmatrix}
\textbf{u}_1 & \dots & \textbf{u}_l
\end{bmatrix}
& \cdot &
\begin{bmatrix}
\sigma_1 & \dots & 0 \\
\vdots & \ddots & \vdots \\
0 & \dots & \sigma_l
\end{bmatrix}
& \cdot &
\begin{bmatrix}
\textbf{v}_1 \\
\vdots \\
\textbf{v}_l
\end{bmatrix}
\end{matrix}
$$

The values $\sigma_1, \dots, \sigma_l$ are called the singular values, and $\textbf{u}_1, \dots, \textbf{u}_l$ and $\textbf{v}_1, \dots, \textbf{v}_l$ the left and right singular vectors. Notice the only part of $U$ that contributes to $\textbf{t}_i$ is the $i$'th row. Let this row vector be called $\hat{\textbf{t}}_i^T$. Likewise, the only part of $V^T$ that contributes to $\textbf{d}_j$ is the $j$'th column, $\hat{\textbf{d}}_j$. These are not the eigenvectors, but depend on all the eigenvectors.

It turns out that when you select the $k$ largest singular values, and their corresponding singular vectors from $U$ and $V$, you get the rank $k$ approximation to $X$ with the smallest error (Frobenius norm). This approximation has a minimal error. But more importantly we can now treat the term and document vectors as a "semantic space". The row "term" vector $\hat{\textbf{t}}_i^T$ then has $k$ entries mapping it to a lower-dimensional space. These new dimensions do not relate to any comprehensible concepts. They are a lower-dimensional approximation of the higher-dimensional space. Likewise, the "document" vector $\hat{\textbf{d}}_j$ is an approximation in this lower-dimensional space. We write this approximation as

$$X_k = U_k \Sigma_k V_k^T$$
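A small NumPy sketch of this truncation (toy counts and k = 2 are illustrative, not from the article) shows the rank-k factors, the Frobenius-optimal approximation, and how terms that co-occur, like the car/truck example from the Rank lowering section, end up pointing in the same low-dimensional direction:

```python
import numpy as np

# X: toy terms x documents matrix (illustrative values only)
X = np.array([[1, 1, 0, 0],   # "car"
              [1, 0, 0, 0],   # "truck"
              [0, 0, 1, 1]],  # "flower"
             dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
U_k, S_k, Vt_k = U[:, :k], np.diag(s[:k]), Vt[:k, :]

X_k = U_k @ S_k @ Vt_k              # rank-k approximation X_k = U_k S_k V_k^T
print(np.linalg.norm(X - X_k))      # Frobenius error of the best rank-k approximation

t_hat = U_k                         # row i is the k-dimensional "term" vector
d_hat = Vt_k.T                      # row j is the k-dimensional "document" vector
print(t_hat[0], t_hat[1])           # "car" and "truck" point along the same direction
```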

You can now do the following:

  • See how related documents $j$ and $q$ are in the low-dimensional space by comparing the vectors $\Sigma_k \cdot \hat{\textbf{d}}_j$ and $\Sigma_k \cdot \hat{\textbf{d}}_q$ (typically by cosine similarity).
  • Compare terms $i$ and $p$ by comparing the vectors $\Sigma_k \cdot \hat{\textbf{t}}_i$ and $\Sigma_k \cdot \hat{\textbf{t}}_p$. Note that $\hat{\textbf{t}}$ is now a column vector.
  • Documents and term vector representations can be clustered using traditional clustering algorithms like k-means using similarity measures like cosine.
  • Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.

To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:

$$\hat{\textbf{d}}_j = \Sigma_k^{-1} U_k^T \textbf{d}_j$$

Note here that the inverse of the diagonal matrix $\Sigma_k$ may be found by inverting each nonzero value within the matrix.

This means that if you have a query vector $\textbf{q}$, you must do the translation $\hat{\textbf{q}} = \Sigma_k^{-1} U_k^T \textbf{q}$ before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:

$$\textbf{t}_i^T = \hat{\textbf{t}}_i^T \Sigma_k V_k^T$$
$$\hat{\textbf{t}}_i^T = \textbf{t}_i^T V_k^{T^T} \Sigma_k^{-1} = \textbf{t}_i^T V_k \Sigma_k^{-1}$$
$$\hat{\textbf{t}}_i = \Sigma_k^{-1} V_k^T \textbf{t}_i$$
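Continuing the toy example above, a minimal sketch of this query fold-in (the query counts are illustrative assumptions):

```python
import numpy as np

# Same toy terms x documents matrix as in the previous sketch (illustrative only).
X = np.array([[1, 1, 0, 0],    # "car"
              [1, 0, 0, 0],    # "truck"
              [0, 0, 1, 1]],   # "flower"
             dtype=float)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]

# Fold a query (treated as a mini document of raw term counts) into the k-dim space:
# q_hat = Sigma_k^{-1} U_k^T q, inverting only the diagonal entries of Sigma_k.
q = np.array([1.0, 1.0, 0.0])             # query mentioning "car" and "truck"
q_hat = np.diag(1.0 / s_k) @ U_k.T @ q

d_hat = Vt_k.T                            # existing low-dimensional document vectors

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = [cosine(q_hat, d) for d in d_hat]
print(scores)                             # the first two documents (car/truck) score highest
```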

Applications

The new low-dimensional space typically can be used to:

  • Compare the documents in the low-dimensional space (data clustering, document classification).
  • Find similar documents across languages, after analyzing a base set of translated documents (cross-language information retrieval).
  • Find relations between terms (synonymy and polysemy).
  • Given a query of terms, translate it into the low-dimensional space, and find matching documents (information retrieval).
  • Find the best similarity between small groups of terms, in a semantic way (i.e. in the context of a knowledge corpus), as for example in a multiple-choice question (MCQ) answering model.[5]
  • Expand the feature space of machine learning / text mining systems.[6]
  • Analyze word association in a text corpus.[7]

Synonymy and polysemy are fundamental problems in natural language processing:

  • Synonymy is the phenomenon where different words describe the same idea. Thus, a query in a search engine may fail to retrieve a relevant document that does not contain the words which appeared in the query. For example, a search for "doctors" may not return a document containing the word "physicians", even though the words have the same meaning.
  • Polysemy is the phenomenon where the same word has multiple meanings. So a search may retrieve irrelevant documents containing the desired words in the wrong meaning. For example, a botanist and a computer scientist looking for the word "tree" probably desire different sets of documents.

Commercial applications

LSA has been used to assist in performing prior art searches for patents.[8]

Applications in human memory

The use of Latent Semantic Analysis has been prevalent in the study of human memory, especially in areas of free recall and memory search. There is a positive correlation between the semantic similarity of two words (as measured by LSA) and the probability that the words would be recalled one after another in free recall tasks using study lists of random common nouns. It was also noted that in these situations, the inter-response time between similar words was much quicker than between dissimilar words. These findings are referred to as the Semantic Proximity Effect.[9]

When participants made mistakes in recalling studied items, these mistakes tended to be items that were more semantically related to the desired item and found in a previously studied list. These prior-list intrusions, as they have come to be called, seem to compete with items on the current list for recall.[10]

Another model, termed Word Association Spaces (WAS), is also used in memory studies; it was built by collecting free association data from a series of experiments and includes measures of word relatedness for over 72,000 distinct word pairs.[11]

Implementation

The SVD is typically computed using large matrix methods (for example, Lanczos methods) but may also be computed incrementally and with greatly reduced resources via a neural network-like approach, which does not require the large, full-rank matrix to be held in memory.[12] A fast, incremental, low-memory, large-matrix SVD algorithm has recently been developed.[13] MATLAB and Python implementations of these fast algorithms are available. Unlike Gorrell and Webb's (2005) stochastic approximation, Brand's algorithm (2003) provides an exact solution. In recent years progress has been made to reduce the computational complexity of SVD; for instance, by using a parallel ARPACK algorithm to perform parallel eigenvalue decomposition it is possible to speed up the SVD computation cost while providing comparable prediction quality.[14]
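As a usage sketch, the gensim package (see the Challenges to LSI section below) exposes an online, incrementally updatable LSI implementation roughly as follows; the toy corpus and the num_topics value are illustrative assumptions:

```python
from gensim import corpora, models

# Toy corpus; in practice documents would be streamed from disk rather than held in memory.
texts = [["human", "computer", "interaction"],
         ["graph", "minors", "trees"],
         ["graph", "trees", "computer"]]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]       # sparse bag-of-words vectors

lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)
print(lsi[corpus[0]])                                 # a document in the LSI space

# The model can be updated incrementally with new documents (online training).
lsi.add_documents([dictionary.doc2bow(["human", "trees"])])
```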

Limitations

Some of LSA's drawbacks include:

  • The resulting dimensions might be difficult to interpret. For instance, in
{(car), (truck), (flower)} ↦ {(1.3452 * car + 0.2828 * truck), (flower)}
the (1.3452 * car + 0.2828 * truck) component could be interpreted as "vehicle". However, it is very likely that cases close to
{(car), (bottle), (flower)} ↦ {(1.3452 * car + 0.2828 * bottle), (flower)}
will occur. This leads to results which can be justified on the mathematical level, but have no immediately obvious meaning in natural language. However, the (1.3452 * car + 0.2828 * bottle) component could be justified because both bottles and cars have transparent and opaque parts, are man-made, and with high probability carry logos or words on their surface; in many ways these two concepts "share semantics." That is, within the language in question, there may not be a readily available word to assign, and explainability becomes an analysis task rather than a simple word/class/concept assignment task.
  • LSA can only partially capture polysemy (i.e., multiple meanings of a word) because each occurrence of a word is treated as having the same meaning due to the word being represented as a single point in space. For example, the occurrence of "chair" in a document containing "The Chair of the Board" and in a separate document containing "the chair maker" are considered the same. This behavior results in the vector representation being an average of all the word's different meanings in the corpus, which can make comparison difficult.[15] However, the effect is often lessened due to words having a predominant sense throughout a corpus (i.e. not all meanings are equally likely).
  • LSA inherits the limitations of the bag-of-words model (BOW), in which a text is represented as an unordered collection of words. To address some of these limitations, a multi-gram dictionary can be used to find direct and indirect associations as well as higher-order co-occurrences among terms.[16]
  • The probabilistic model of LSA does not match observed data: LSA assumes that words and documents form a joint Gaussian model (ergodic hypothesis), while a Poisson distribution has been observed. Thus, a newer alternative is probabilistic latent semantic analysis, based on a multinomial model, which is reported to give better results than standard LSA.[17]

Alternative methods

Semantic hashing

In semantic hashing,[18] documents are mapped to memory addresses by means of a neural network in such a way that semantically similar documents are located at nearby addresses. The deep neural network essentially builds a graphical model of the word-count vectors obtained from a large set of documents. Documents similar to a query document can then be found by simply accessing all the addresses that differ by only a few bits from the address of the query document. This way of extending the efficiency of hash-coding to approximate matching is much faster than locality sensitive hashing, which is the fastest current method.[clarification needed]

Latent semantic indexing

Latent semantic indexing (LSI) is an indexing and retrieval method that uses a mathematical technique called singular value decomposition (SVD) to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text. LSI is based on the principle that words that are used in the same contexts tend to have similar meanings. A key feature of LSI is its ability to extract the conceptual content of a body of text by establishing associations between those terms that occur in similar contexts.[19]

LSI is also an application of correspondence analysis, a multivariate statistical technique developed by Jean-Paul Benzécri[20] in the early 1970s, to a contingency table built from word counts in documents.

Called "latent semantic indexing" because of its ability to correlate semantically related terms that are latent in a collection of text, it was first applied to text at Bellcore in the late 1980s. The method, also called latent semantic analysis (LSA), uncovers the underlying latent semantic structure in the usage of words in a body of text and how it can be used to extract the meaning of the text in response to user queries, commonly referred to as concept searches. Queries, or concept searches, against a set of documents that have undergone LSI will return results that are conceptually similar in meaning to the search criteria even if the results don’t share a specific word or words with the search criteria.

Benefits of LSI

LSI helps overcome synonymy by increasing recall, one of the most problematic constraints of Boolean keyword queries and vector space models.[15] Synonymy is often the cause of mismatches in the vocabulary used by the authors of documents and the users of information retrieval systems.[21] As a result, Boolean or keyword queries often return irrelevant results and miss information that is relevant.

LSI is also used to perform automated document categorization. In fact, several experiments have demonstrated that there are a number of correlations between the way LSI and humans process and categorize text.[22] Document categorization is the assignment of documents to one or more predefined categories based on their similarity to the conceptual content of the categories.[23] LSI uses example documents to establish the conceptual basis for each category. During categorization processing, the concepts contained in the documents being categorized are compared to the concepts contained in the example items, and a category (or categories) is assigned to the documents based on the similarities between the concepts they contain and the concepts that are contained in the example documents.

Dynamic clustering based on the conceptual content of documents can also be accomplished using LSI. Clustering is a way to group documents based on their conceptual similarity to each other without using example documents to establish the conceptual basis for each cluster. This is very useful when dealing with an unknown collection of unstructured text.

Because it uses a strictly mathematical approach, LSI is inherently independent of language. This enables LSI to elicit the semantic content of information written in any language without requiring the use of auxiliary structures, such as dictionaries and thesauri. LSI can also perform cross-linguistic concept searching and example-based categorization. For example, queries can be made in one language, such as English, and conceptually similar results will be returned even if they are composed of an entirely different language or of multiple languages.[citation needed]

LSI is not restricted to working only with words. It can also process arbitrary character strings. Any object that can be expressed as text can be represented in an LSI vector space. For example, tests with MEDLINE abstracts have shown that LSI is able to effectively classify genes based on conceptual modeling of the biological information contained in the titles and abstracts of the MEDLINE citations.[24]

LSI automatically adapts to new and changing terminology, and has been shown to be very tolerant of noise (i.e., misspelled words, typographical errors, unreadable characters, etc.).[25] This is especially important for applications using text derived from Optical Character Recognition (OCR) and speech-to-text conversion. LSI also deals effectively with sparse, ambiguous, and contradictory data.

Text does not need to be in sentence form for LSI to be effective. It can work with lists, free-form notes, email, Web-based content, etc. As long as a collection of text contains multiple terms, LSI can be used to identify patterns in the relationships between the important terms and concepts contained in the text.

LSI has proven to be a useful solution to a number of conceptual matching problems.[26][27] The technique has been shown to capture key relationship information, including causal, goal-oriented, and taxonomic information.[28]

LSI timeline

  • Mid-1960s – Factor analysis technique first described and tested (H. Borko and M. Bernick)
  • 1988 – Seminal paper on LSI technique published [19]
  • 1989 – Original patent granted [19]
  • 1992 – First use of LSI to assign articles to reviewers[29]
  • 1994 – Patent granted for the cross-lingual application of LSI (Landauer et al.)
  • 1995 – First use of LSI for grading essays (Foltz, et al., Landauer et al.)
  • 1999 – First implementation of LSI technology for intelligence community for analyzing unstructured text (SAIC).
  • 2002 – LSI-based product offering to intelligence-based government agencies (SAIC)

Mathematics of LSI

LSI uses common linear algebra techniques to learn the conceptual correlations in a collection of text. In general, the process involves constructing a weighted term-document matrix, performing a Singular Value Decomposition on the matrix, and using the matrix to identify the concepts contained in the text.

Term-document matrix

LSI begins by constructing a term-document matrix, $A$, to identify the occurrences of the $m$ unique terms within a collection of $n$ documents. In a term-document matrix, each term is represented by a row, and each document is represented by a column, with each matrix cell, $a_{ij}$, initially representing the number of times the associated term appears in the indicated document, $\mathrm{tf}_{ij}$. This matrix is usually very large and very sparse.

Once a term-document matrix is constructed, local and global weighting functions can be applied to it to condition the data. The weighting functions transform each cell, $a_{ij}$ of $A$, to be the product of a local term weight, $l_{ij}$, which describes the relative frequency of a term in a document, and a global weight, $g_i$, which describes the relative frequency of the term within the entire collection of documents.

Some common local weighting functions[30] are defined in the following table.

Binary: $l_{ij} = 1$ if the term exists in the document, or else $l_{ij} = 0$
TermFrequency: $l_{ij} = \mathrm{tf}_{ij}$, the number of occurrences of term $i$ in document $j$
Log: $l_{ij} = \log(\mathrm{tf}_{ij} + 1)$
Augnorm: $l_{ij} = \frac{\big(\mathrm{tf}_{ij} / \max_i(\mathrm{tf}_{ij})\big) + 1}{2}$

Some common global weighting functions are defined in the following table.

Binary: $g_i = 1$
Normal: $g_i = \frac{1}{\sqrt{\sum_j \mathrm{tf}_{ij}^2}}$
GfIdf: $g_i = \mathrm{gf}_i / \mathrm{df}_i$, where $\mathrm{gf}_i$ is the total number of times term $i$ occurs in the whole collection, and $\mathrm{df}_i$ is the number of documents in which term $i$ occurs.
Idf (Inverse Document Frequency): $g_i = \log_2 \frac{n}{1 + \mathrm{df}_i}$
Entropy: $g_i = 1 + \sum_j \frac{p_{ij} \log p_{ij}}{\log n}$, where $p_{ij} = \frac{\mathrm{tf}_{ij}}{\mathrm{gf}_i}$

Empirical studies with LSI report that the Log and Entropy weighting functions work well, in practice, with many data sets.[31] In other words, each entry $a_{ij}$ of $A$ is computed as:

$$g_i = 1 + \sum_j \frac{p_{ij} \log p_{ij}}{\log n}$$
$$a_{ij} = g_i \ \log(\mathrm{tf}_{ij} + 1)$$
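A sketch of this log-entropy weighting with NumPy (toy counts only; the convention 0 · log 0 = 0 is assumed):

```python
import numpy as np

# tf: terms x documents raw count matrix (toy values, illustrative only)
tf = np.array([[3, 0, 1],
               [0, 2, 0],
               [1, 1, 1]], dtype=float)
n = tf.shape[1]                                    # number of documents

gf = tf.sum(axis=1, keepdims=True)                 # global frequency gf_i of each term
p = np.where(tf > 0, tf / gf, 0.0)                 # p_ij = tf_ij / gf_i
with np.errstate(divide="ignore", invalid="ignore"):
    plogp = np.where(p > 0, p * np.log(p), 0.0)    # treat 0 * log 0 as 0
g = 1.0 + plogp.sum(axis=1) / np.log(n)            # entropy global weight g_i
a = g[:, None] * np.log(tf + 1.0)                  # a_ij = g_i * log(tf_ij + 1)
print(a)                                           # uniformly spread terms get weight ~0
```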

Rank-reduced singular value decomposition

A rank-reduced, singular value decomposition is performed on the matrix to determine patterns in the relationships between the terms and concepts contained in the text. The SVD forms the foundation for LSI.[32] It computes the term and document vector spaces by approximating the single term-frequency matrix, $A$, into three other matrices— an m by r term-concept vector matrix $T$, an r by r singular values matrix $S$, and a n by r concept-document vector matrix, $D$, which satisfy the following relations:

$$A \approx T S D^T$$

$$T^T T = I_r$$

$$D^T D = I_r$$

In the formula, A is the supplied m by n weighted matrix of term frequencies in a collection of text where m is the number of unique terms, and n is the number of documents. T is a computed m by r matrix of term vectors where r is the rank of A—a measure of its unique dimensions ≤ min(m,n). S is a computed r by r diagonal matrix of decreasing singular values, and D is a computed n by r matrix of document vectors.

The SVD is then truncated to reduce the rank by keeping only the largest k ≪ r diagonal entries in the singular value matrix S, where k is typically on the order of 100 to 300 dimensions. This effectively reduces the term and document vector matrix sizes to m by k and n by k respectively. The SVD operation, along with this reduction, has the effect of preserving the most important semantic information in the text while reducing noise and other undesirable artifacts of the original space of A. This reduced set of matrices is often denoted with a modified formula such as:

$$A \approx A_k = T_k S_k D_k^T$$

Efficient LSI algorithms only compute the first k singular values and term and document vectors as opposed to computing a full SVD and then truncating it.
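For example, SciPy's svds routine computes only the k largest singular triplets of a sparse matrix; the sketch below (random toy data and illustrative sizes, not from the article) mirrors that usage:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# A: a large, sparse terms x documents matrix (random toy data, illustrative only).
A = sparse_random(5000, 2000, density=0.001, format="csr", random_state=0)

k = 100                                  # number of dimensions to keep
U_k, s_k, Vt_k = svds(A, k=k)            # computes only the k largest singular triplets
order = np.argsort(-s_k)                 # svds returns singular values in ascending order
U_k, s_k, Vt_k = U_k[:, order], s_k[order], Vt_k[order, :]

term_vectors = U_k                       # m x k term vector matrix
doc_vectors = Vt_k.T                     # n x k document vector matrix
```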

Note that this rank reduction is essentially the same as doing Principal Component Analysis (PCA) on the matrix A, except that PCA subtracts off the means. PCA loses the sparseness of the A matrix, which can make it infeasible for large lexicons.

Querying and augmenting LSI vector spaces

The computed $T_k$ and $D_k$ matrices define the term and document vector spaces, which with the computed singular values, $S_k$, embody the conceptual information derived from the document collection. The similarity of terms or documents within these spaces is a function of how close they are to each other in these spaces, typically computed as a function of the angle between the corresponding vectors.

The same steps are used to locate the vectors representing the text of queries and new documents within the document space of an existing LSI index. By a simple transformation of the $A = T S D^T$ equation into the equivalent $D = A^T T S^{-1}$ equation, a new vector, $d$, for a query or for a new document can be created by computing a new column in $A$ and then multiplying the new column by $T S^{-1}$. The new column in $A$ is computed using the originally derived global term weights and applying the same local weighting function to the terms in the query or in the new document.

A drawback to computing vectors in this way, when adding new searchable documents, is that terms that were not known during the SVD phase for the original index are ignored. These terms will have no impact on the global weights and learned correlations derived from the original collection of text. However, the computed vectors for the new text are still very relevant for similarity comparisons with all other document vectors.

The process of augmenting the document vector spaces for an LSI index with new documents in this manner is called folding in. Although the folding-in process does not account for the new semantic content of the new text, adding a substantial number of documents in this way will still provide good results for queries as long as the terms and concepts they contain are well represented within the LSI index to which they are being added. When the terms and concepts of a new set of documents need to be included in an LSI index, either the term-document matrix and the SVD must be recomputed or an incremental update method (such as the one described in [13]) must be used.
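A minimal sketch of folding in one new document vector, assuming the factor matrices of an existing index are available under the hypothetical names T_k, s_k, and D_k (the shapes and random values are illustrative only):

```python
import numpy as np

# Assume T_k (m x k), s_k (k,), and D_k (n x k) come from an existing LSI index,
# and new_col is the m-dimensional weighted term vector of a new document,
# built with the original global weights (terms unknown to the index are dropped).
def fold_in(new_col, T_k, s_k, D_k):
    d_new = new_col @ T_k @ np.diag(1.0 / s_k)   # d = a^T T S^{-1}
    return np.vstack([D_k, d_new])               # append to the document vector space

# Toy shapes, illustrative only.
m, n, k = 6, 4, 2
rng = np.random.default_rng(0)
T_k, s_k, D_k = rng.random((m, k)), rng.random(k) + 1.0, rng.random((n, k))
D_k = fold_in(rng.random(m), T_k, s_k, D_k)
print(D_k.shape)                                 # (5, 2): one folded-in document added
```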

Additional uses of LSI

It is generally acknowledged that the ability to work with text on a semantic basis is essential to modern information retrieval systems. As a result, the use of LSI has significantly expanded in recent years as earlier challenges in scalability and performance have been overcome.

LSI is being used in a variety of information retrieval and text processing applications, although its primary application has been for concept searching and automated document categorization.[33] Below are some other ways in which LSI is being used:

  • Information discovery[34] (eDiscovery, Government/Intelligence community, Publishing)
  • Automated document classification (eDiscovery, Government/Intelligence community, Publishing)[35]
  • Text summarization[36] (eDiscovery, Publishing)
  • Relationship discovery[37] (Government, Intelligence community, Social Networking)
  • Automatic generation of link charts of individuals and organizations[38] (Government, Intelligence community)
  • Matching technical papers and grants with reviewers[39] (Government)
  • Online customer support[40] (Customer Management)
  • Determining document authorship[41] (Education)
  • Automatic keyword annotation of images[42]
  • Understanding software source code[43] (Software Engineering)
  • Filtering spam[44] (System Administration)
  • Information visualization[45]
  • Essay scoring[46] (Education)
  • Literature-based discovery[47]
  • Stock returns prediction[6]
  • Dream Content Analysis (Psychology) [7]

LSI is increasingly being used for electronic document discovery (eDiscovery) to help enterprises prepare for litigation. In eDiscovery, the ability to cluster, categorize, and search large collections of unstructured text on a conceptual basis is essential. Concept-based searching using LSI has been applied to the eDiscovery process by leading providers as early as 2003.[48]

Challenges to LSI

Early challenges to LSI focused on scalability and performance. LSI requires relatively high computational performance and memory in comparison to other information retrieval techniques.[49] However, with the implementation of modern high-speed processors and the availability of inexpensive memory, these considerations have been largely overcome. Real-world applications involving more than 30 million documents that were fully processed through the matrix and SVD computations are common in some LSI applications. A fully scalable (unlimited number of documents, online training) implementation of LSI is contained in the open source gensim software package.[50]

Another challenge to LSI has been the alleged difficulty in determining the optimal number of dimensions to use for performing the SVD. As a general rule, fewer dimensions allow for broader comparisons of the concepts contained in a collection of text, while a higher number of dimensions enable more specific (or more relevant) comparisons of concepts. The actual number of dimensions that can be used is limited by the number of documents in the collection. Research has demonstrated that around 300 dimensions will usually provide the best results with moderate-sized document collections (hundreds of thousands of documents) and perhaps 400 dimensions for larger document collections (millions of documents).[51] However, recent studies indicate that 50-1000 dimensions are suitable depending on the size and nature of the document collection.[52] Checking the proportion of variance retained, similar to PCA or factor analysis, to determine the optimal dimensionality is not suitable for LSI. Using a synonym test or prediction of missing words are two possible methods to find the correct dimensionality.[53] When LSI topics are used as features in supervised learning methods, one can use prediction error measurements to find the ideal dimensionality.
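As a sketch of that last approach, the number of retained dimensions can be treated as a hyperparameter of a supervised pipeline and selected by cross-validated prediction error. The scikit-learn pipeline below is an assumption for illustration; the dataset (which is downloaded on first use), categories, and candidate dimensionalities are arbitrary:

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Pick the LSA dimensionality that minimizes supervised prediction error.
data = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])

pipe = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words="english")),
    ("lsa", TruncatedSVD(random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
search = GridSearchCV(pipe, {"lsa__n_components": [50, 100, 300]}, cv=3)
search.fit(data.data, data.target)
print(search.best_params_)      # dimensionality with the best cross-validated accuracy
```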

See also

References

  1. ^ Susan T. Dumais (2005). "Latent Semantic Analysis". Annual Review of Information Science and Technology. 38: 188–230. doi:10.1002/aris.1440380105.
  2. ^ "The Latent Semantic Indexing home page".
  3. ^ http://topicmodels.west.uni-koblenz.de/ckling/tmt/svd_ap.html
  4. ^ Markovsky I. (2012) Low-Rank Approximation: Algorithms, Implementation, Applications, Springer, 2012, ISBN 978-1-4471-2226-5 [page needed]
  5. ^ Alain Lifchitz; Sandra Jhean-Larose; Guy Denhière (2009). "Effect of tuned parameters on an LSA multiple choice questions answering model" (PDF). Behavior Research Methods. 41 (4): 1201–1209. arXiv:0811.0146. doi:10.3758/BRM.41.4.1201. PMID 19897829. S2CID 480826.
  6. ^ a b Ramiro H. Gálvez; Agustín Gravano (2017). "Assessing the usefulness of online message board mining in automatic stock prediction systems". Journal of Computational Science. 19: 1877–7503. doi:10.1016/j.jocs.2017.01.001.
  7. ^ a b Altszyler, E.; Ribeiro, S.; Sigman, M.; Fernández Slezak, D. (2017). "The interpretation of dream meaning: Resolving ambiguity using Latent Semantic Analysis in a small corpus of text". Consciousness and Cognition. 56: 178–187. arXiv:1610.01520. doi:10.1016/j.concog.2017.09.004. PMID 28943127. S2CID 195347873.
  8. ^ Gerry J. Elman (October 2007). "Automated Patent Examination Support - A proposal". Biotechnology Law Report. 26 (5): 435–436. doi:10.1089/blr.2007.9896.
  9. ^ Marc W. Howard; Michael J. Kahana (1999). "Contextual Variability and Serial Position Effects in Free Recall" (PDF).
  10. ^ Franklin M. Zaromb; et al. (2006). Temporal Associations and Prior-List Intrusions in Free Recall (PDF). Interspeech'2005.
  11. ^ Nelson, Douglas. "The University of South Florida Word Association, Rhyme and Word Fragment Norms". Retrieved May 8, 2011.
  12. ^ Geneviève Gorrell; Brandyn Webb (2005). (PDF). Interspeech'2005. Archived from the original (PDF) on 2008-12-21.
  13. ^ a b Matthew Brand (2006). "Fast Low-Rank Modifications of the Thin Singular Value Decomposition" (PDF). Linear Algebra and Its Applications. 415: 20–30. doi:10.1016/j.laa.2005.07.021.
  14. ^ Ding, Yaguang; Zhu, Guofeng; Cui, Chenyang; Zhou, Jian; Tao, Liang (2011). "A parallel implementation of Singular Value Decomposition based on Map-Reduce and PARPACK". Proceedings of 2011 International Conference on Computer Science and Network Technology. pp. 739–741. doi:10.1109/ICCSNT.2011.6182070. ISBN 978-1-4577-1587-7. S2CID 15281129.
  15. ^ a b Deerwester, Scott; Dumais, Susan T.; Furnas, George W.; Landauer, Thomas K.; Harshman, Richard (1990). "Indexing by latent semantic analysis". Journal of the American Society for Information Science. 41 (6): 391–407. CiteSeerX 10.1.1.108.8490. doi:10.1002/(SICI)1097-4571(199009)41:6<391::AID-ASI1>3.0.CO;2-9.
  16. ^ Abedi, Vida; Yeasin, Mohammed; Zand, Ramin (27 November 2014). "Empirical study using network of semantically related associations in bridging the knowledge gap". Journal of Translational Medicine. 12 (1): 324. doi:10.1186/s12967-014-0324-9. PMC 4252998. PMID 25428570.
  17. ^ Thomas Hofmann (1999). "Probabilistic Latent Semantic Analysis". Uncertainty in Artificial Intelligence. arXiv:1301.6705.
  18. ^ Salakhutdinov, Ruslan, and Geoffrey Hinton. "Semantic hashing." RBM 500.3 (2007): 500.
  19. ^ a b c Deerwester, S., et al, Improving Information Retrieval with Latent Semantic Indexing, Proceedings of the 51st Annual Meeting of the American Society for Information Science 25, 1988, pp. 36–40.
  20. ^ Benzécri, J.-P. (1973). L'Analyse des Données. Volume II. L'Analyse des Correspondences. Paris, France: Dunod.
  21. ^ Furnas, G. W.; Landauer, T. K.; Gomez, L. M.; Dumais, S. T. (1987). "The vocabulary problem in human-system communication". Communications of the ACM. 30 (11): 964–971. CiteSeerX 10.1.1.118.4768. doi:10.1145/32206.32212. S2CID 3002280.
  22. ^ Landauer, T., et al., Learning Human-like Knowledge by Singular Value Decomposition: A Progress Report, M. I. Jordan, M. J. Kearns & S. A. Solla (Eds.), Advances in Neural Information Processing Systems 10, Cambridge: MIT Press, 1998, pp. 45–51.
  23. ^ Dumais, S.; Platt, J.; Heckerman, D.; Sahami, M. (1998). "Inductive learning algorithms and representations for text categorization" (PDF). Proceedings of the seventh international conference on Information and knowledge management - CIKM '98. pp. 148. CiteSeerX 10.1.1.80.8909. doi:10.1145/288627.288651. ISBN 978-1581130614. S2CID 617436.
  24. ^ Homayouni, R.; Heinrich, K.; Wei, L.; Berry, M. W. (2004). "Gene clustering by Latent Semantic Indexing of MEDLINE abstracts". Bioinformatics. 21 (1): 104–115. doi:10.1093/bioinformatics/bth464. PMID 15308538.
  25. ^ Price, R. J.; Zukas, A. E. (2005). "Application of Latent Semantic Indexing to Processing of Noisy Text". Intelligence and Security Informatics. Lecture Notes in Computer Science. Vol. 3495. p. 602. doi:10.1007/11427995_68. ISBN 978-3-540-25999-2.
  26. ^ Ding, C., A Similarity-based Probability Model for Latent Semantic Indexing, Proceedings of the 22nd International ACM SIGIR Conference on Research and Development in Information Retrieval, 1999, pp. 59–65.
  27. ^ Bartell, B., Cottrell, G., and Belew, R., Latent Semantic Indexing is an Optimal Special Case of Multidimensional Scaling[dead link], Proceedings, ACM SIGIR Conference on Research and Development in Information Retrieval, 1992, pp. 161–167.
  28. ^ Graesser, A.; Karnavat, A. (2000). "Latent Semantic Analysis Captures Causal, Goal-oriented, and Taxonomic Structures". Proceedings of CogSci 2000: 184–189. CiteSeerX 10.1.1.23.5444.
  29. ^ Dumais, S.; Nielsen, J. (1992). "Automating the assignment of submitted manuscripts to reviewers". Proceedings of the 15th annual international ACM SIGIR conference on Research and development in information retrieval - SIGIR '92. pp. 233–244. CiteSeerX 10.1.1.16.9793. doi:10.1145/133160.133205. ISBN 978-0897915236. S2CID 15038631.
  30. ^ Berry, M. W., and Browne, M., Understanding Search Engines: Mathematical Modeling and Text Retrieval, Society for Industrial and Applied Mathematics, Philadelphia, (2005).
  31. ^ Landauer, T., et al., Handbook of Latent Semantic Analysis, Lawrence Erlbaum Associates, 2007.
  32. ^ Berry, Michael W., Dumais, Susan T., O'Brien, Gavin W., Using Linear Algebra for Intelligent Information Retrieval, December 1994, SIAM Review 37:4 (1995), pp. 573–595.
  33. ^ Dumais, S., Latent Semantic Analysis, ARIST Review of Information Science and Technology, vol. 38, 2004, Chapter 4.
  34. ^ Best Practices Commentary on the Use of Search and Information Retrieval Methods in E-Discovery, the Sedona Conference, 2007, pp. 189–223.
  35. ^ Foltz, P. W. and Dumais, S. T. Personalized Information Delivery: An analysis of information filtering methods, Communications of the ACM, 1992, 34(12), 51-60.
  36. ^ Gong, Y., and Liu, X., Creating Generic Text Summaries, Proceedings, Sixth International Conference on Document Analysis and Recognition, 2001, pp. 903–907.
  37. ^ Bradford, R., Efficient Discovery of New Information in Large Text Databases, Proceedings, IEEE International Conference on Intelligence and Security Informatics, Atlanta, Georgia, LNCS Vol. 3495, Springer, 2005, pp. 374–380.
  38. ^ Bradford, R. B. (2006). "Application of Latent Semantic Indexing in Generating Graphs of Terrorist Networks". Intelligence and Security Informatics. Lecture Notes in Computer Science. Vol. 3975. pp. 674–675. doi:10.1007/11760146_84. ISBN 978-3-540-34478-0.
  39. ^ Yarowsky, D., and Florian, R., Taking the Load off the Conference Chairs: Towards a Digital Paper-routing Assistant, Proceedings of the 1999 Joint SIGDAT Conference on Empirical Methods in NLP and Very-Large Corpora, 1999, pp. 220–230.
  40. ^ Caron, J., Applying LSA to Online Customer Support: A Trial Study, Unpublished Master's Thesis, May 2000.
  41. ^ Soboroff, I., et al, Visualizing Document Authorship Using N-grams and Latent Semantic Indexing, Workshop on New Paradigms in Information Visualization and Manipulation, 1997, pp. 43–48.
  42. ^ Monay, F., and Gatica-Perez, D., On Image Auto-annotation with Latent Space Models, Proceedings of the 11th ACM international conference on Multimedia, Berkeley, CA, 2003, pp. 275–278.
  43. ^ Maletic, J.; Marcus, A. (November 13–15, 2000). "Using latent semantic analysis to identify similarities in source code to support program understanding". Proceedings 12th IEEE Internationals Conference on Tools with Artificial Intelligence. ICTAI 2000. Vancouver, British Columbia. pp. 46–53. CiteSeerX 10.1.1.36.6652. doi:10.1109/TAI.2000.889845. ISBN 978-0-7695-0909-9. S2CID 10354564.{{cite book}}: CS1 maint: location missing publisher (link)
  44. ^ Gee, K., Using Latent Semantic Indexing to Filter Spam, in: Proceedings, 2003 ACM Symposium on Applied Computing, Melbourne, Florida, pp. 460–464.
  45. ^ Landauer, T., Laham, D., and Derr, M., From Paragraph to Graph: Latent Semantic Analysis for Information Visualization, Proceedings of the National Academy of Sciences, 101, 2004, pp. 5214–5219.
  46. ^ Foltz, Peter W., Laham, Darrell, and Landauer, Thomas K., Automated Essay Scoring: Applications to Educational Technology, Proceedings of EdMedia, 1999.
  47. ^ Gordon, M., and Dumais, S., Using Latent Semantic Indexing for Literature Based Discovery, Journal of the American Society for Information Science, 49(8), 1998, pp. 674–685.
  48. ^ There Has to be a Better Way to Search, 2008, White Paper, Fios, Inc.
  49. ^ Karypis, G., Han, E., Fast Supervised Dimensionality Reduction Algorithm with Applications to Document Categorization and Retrieval, Proceedings of CIKM-00, 9th ACM Conference on Information and Knowledge Management.
  50. ^ Radim Řehůřek (2011). "Subspace Tracking for Latent Semantic Analysis". Advances in Information Retrieval. Lecture Notes in Computer Science. Vol. 6611. pp. 289–300. doi:10.1007/978-3-642-20161-5_29. ISBN 978-3-642-20160-8.
  51. ^ Bradford, R., An Empirical Study of Required Dimensionality for Large-scale Latent Semantic Indexing Applications, Proceedings of the 17th ACM Conference on Information and Knowledge Management, Napa Valley, California, USA, 2008, pp. 153–162.
  52. ^ Landauer, Thomas K., and Dumais, Susan T., Latent Semantic Analysis, Scholarpedia, 3(11):4356, 2008.
  53. ^ Landauer, T. K., Foltz, P. W., & Laham, D. (1998). Introduction to Latent Semantic Analysis. Discourse Processes, 25, 259-284

Further reading

  • Landauer, Thomas; Foltz, Peter W.; Laham, Darrell (1998). "Introduction to Latent Semantic Analysis" (PDF). Discourse Processes. 25 (2–3): 259–284. CiteSeerX 10.1.1.125.109. doi:10.1080/01638539809545028. S2CID 16625196.
  • Deerwester, Scott; Dumais, Susan T.; Furnas, George W.; Landauer, Thomas K.; Harshman, Richard (1990). "Indexing by latent semantic analysis" (PDF). Journal of the American Society for Information Science. 41 (6): 391–407. CiteSeerX 10.1.1.33.2447. doi:10.1002/(SICI)1097-4571(199009)41:6<391::AID-ASI1>3.0.CO;2-9. Archived from the original (PDF) on 2012-07-17. Original article where the model was first exposed.
  • Berry, Michael; Dumais, Susan T.; O'Brien, Gavin W. (1995). "Using Linear Algebra for Intelligent Information Retrieval" (PDF). SIAM Review. 37 (4): 573–595. Archived 2018-11-23 at the Wayback Machine. Illustration of the application of LSA to document retrieval.
  • Chicco, D; Masseroli, M (2015). "Software suite for gene and protein annotation prediction and similarity search". IEEE/ACM Transactions on Computational Biology and Bioinformatics. 12 (4): 837–843. doi:10.1109/TCBB.2014.2382127. hdl:11311/959408. PMID 26357324. S2CID 14714823.
  • Fridolin Wild (November 23, 2005). "An Open Source LSA Package for R". CRAN. Retrieved November 20, 2006.
  • Thomas Landauer, Susan T. Dumais. "A Solution to Plato's Problem: The Latent Semantic Analysis Theory of Acquisition, Induction, and Representation of Knowledge". Retrieved 2007-07-02.

External links

Articles on LSA

  • Latent Semantic Analysis, a scholarpedia article on LSA written by Tom Landauer, one of the creators of LSA.

Talks and demonstrations

  • LSA Overview, talk by Prof. Thomas Hofmann describing LSA, its applications in Information Retrieval, and its connections to probabilistic latent semantic analysis.
  • Complete LSA sample code in C# for Windows. The demo code includes enumeration of text files, filtering stop words, stemming, making a document-term matrix and SVD.

Implementations

Due to its cross-domain applications in Information Retrieval, Natural Language Processing (NLP), Cognitive Science and Computational Linguistics, LSA has been implemented to support many different kinds of applications.

  • Sense Clusters, an Information Retrieval-oriented perl implementation of LSA
  • S-Space Package, a Computational Linguistics and Cognitive Science-oriented Java implementation of LSA
  • Semantic Vectors applies Random Projection, LSA, and Reflective Random Indexing to Lucene term-document matrices
  • Infomap Project, an NLP-oriented C implementation of LSA (superseded by semanticvectors project)
  • Text to Matrix Generator, A MATLAB Toolbox for generating term-document matrices from text collections, with support for LSA
  • Gensim contains a Python implementation of LSA for matrices larger than RAM.

latent, semantic, analysis, this, article, uses, bare, urls, which, uninformative, vulnerable, link, please, consider, converting, them, full, citations, ensure, article, remains, verifiable, maintains, consistent, citation, style, several, templates, tools, a. This article uses bare URLs which are uninformative and vulnerable to link rot Please consider converting them to full citations to ensure the article remains verifiable and maintains a consistent citation style Several templates and tools are available to assist in formatting such as reFill documentation and Citation bot documentation August 2022 Learn how and when to remove this template message Latent semantic analysis LSA is a technique in natural language processing in particular distributional semantics of analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms LSA assumes that words that are close in meaning will occur in similar pieces of text the distributional hypothesis A matrix containing word counts per document rows represent unique words and columns represent each document is constructed from a large piece of text and a mathematical technique called singular value decomposition SVD is used to reduce the number of rows while preserving the similarity structure among columns Documents are then compared by cosine similarity between any two columns Values close to 1 represent very similar documents while values close to 0 represent very dissimilar documents 1 An information retrieval technique using latent semantic structure was patented in 1988 US Patent 4 839 853 now expired by Scott Deerwester Susan Dumais George Furnas Richard Harshman Thomas Landauer Karen Lochbaum and Lynn Streeter In the context of its application to information retrieval it is sometimes called latent semantic indexing LSI 2 Contents 1 Overview 1 1 Occurrence matrix 1 2 Rank lowering 1 3 Derivation 2 Applications 2 1 Commercial applications 2 2 Applications in human memory 3 Implementation 4 Limitations 5 Alternative methods 5 1 Semantic hashing 5 2 Latent semantic indexing 6 Benefits of LSI 7 LSI timeline 8 Mathematics of LSI 8 1 Term document matrix 8 2 Rank reduced singular value decomposition 9 Querying and augmenting LSI vector spaces 10 Additional uses of LSI 11 Challenges to LSI 12 See also 13 References 14 Further reading 15 External links 15 1 Articles on LSA 15 2 Talks and demonstrations 15 3 ImplementationsOverview edit source source source source source source Animation of the topic detection process in a document word matrix Every column corresponds to a document every row to a word A cell stores the weighting of a word in a document e g by tf idf dark cells indicate high weights LSA groups both documents that contain similar words as well as words that occur in a similar set of documents The resulting patterns are used to detect latent components 3 Occurrence matrix edit LSA can use a document term matrix which describes the occurrences of terms in documents it is a sparse matrix whose rows correspond to terms and whose columns correspond to documents A typical example of the weighting of the elements of the matrix is tf idf term frequency inverse document frequency the weight of an element of the matrix is proportional to the number of times the terms appear in each document where rare terms are upweighted to reflect their relative importance This matrix is also common to standard semantic models though it is not necessarily explicitly expressed 
as a matrix since the mathematical properties of matrices are not always used Rank lowering edit After the construction of the occurrence matrix LSA finds a low rank approximation 4 to the term document matrix There could be various reasons for these approximations The original term document matrix is presumed too large for the computing resources in this case the approximated low rank matrix is interpreted as an approximation a least and necessary evil The original term document matrix is presumed noisy for example anecdotal instances of terms are to be eliminated From this point of view the approximated matrix is interpreted as a de noisified matrix a better matrix than the original The original term document matrix is presumed overly sparse relative to the true term document matrix That is the original matrix lists only the words actually in each document whereas we might be interested in all words related to each document generally a much larger set due to synonymy The consequence of the rank lowering is that some dimensions are combined and depend on more than one term car truck flower 1 3452 car 0 2828 truck flower dd This mitigates the problem of identifying synonymy as the rank lowering is expected to merge the dimensions associated with terms that have similar meanings It also partially mitigates the problem with polysemy since components of polysemous words that point in the right direction are added to the components of words that share a similar meaning Conversely components that point in other directions tend to either simply cancel out or at worst to be smaller than components in the directions corresponding to the intended sense Derivation edit Let X displaystyle X nbsp be a matrix where element i j displaystyle i j nbsp describes the occurrence of term i displaystyle i nbsp in document j displaystyle j nbsp this can be for example the frequency X displaystyle X nbsp will look like this dj tiT x1 1 x1 j x1 n xi 1 xi j xi n xm 1 xm j xm n displaystyle begin matrix amp textbf d j amp downarrow textbf t i T rightarrow amp begin bmatrix x 1 1 amp dots amp x 1 j amp dots amp x 1 n vdots amp ddots amp vdots amp ddots amp vdots x i 1 amp dots amp x i j amp dots amp x i n vdots amp ddots amp vdots amp ddots amp vdots x m 1 amp dots amp x m j amp dots amp x m n end bmatrix end matrix nbsp Now a row in this matrix will be a vector corresponding to a term giving its relation to each document tiT xi 1 xi j xi n displaystyle textbf t i T begin bmatrix x i 1 amp dots amp x i j amp dots amp x i n end bmatrix nbsp Likewise a column in this matrix will be a vector corresponding to a document giving its relation to each term dj x1 j xi j xm j displaystyle textbf d j begin bmatrix x 1 j vdots x i j vdots x m j end bmatrix nbsp Now the dot product tiTtp displaystyle textbf t i T textbf t p nbsp between two term vectors gives the correlation between the terms over the set of documents The matrix product XXT displaystyle XX T nbsp contains all these dot products Element i p displaystyle i p nbsp which is equal to element p i displaystyle p i nbsp contains the dot product tiTtp displaystyle textbf t i T textbf t p nbsp tpTti displaystyle textbf t p T textbf t i nbsp Likewise the matrix XTX displaystyle X T X nbsp contains the dot products between all the document vectors giving their correlation over the terms djTdq dqTdj displaystyle textbf d j T textbf d q textbf d q T textbf d j nbsp Now from the theory of linear algebra there exists a decomposition of X displaystyle X nbsp such that U 
displaystyle U nbsp and V displaystyle V nbsp are orthogonal matrices and S displaystyle Sigma nbsp is a diagonal matrix This is called a singular value decomposition SVD X USVT displaystyle begin matrix X U Sigma V T end matrix nbsp The matrix products giving us the term and document correlations then become XXT USVT USVT T USVT VTTSTUT USVTVSTUT USSTUTXTX USVT T USVT VTTSTUT USVT VSTUTUSVT VSTSVT displaystyle begin matrix XX T amp amp U Sigma V T U Sigma V T T U Sigma V T V T T Sigma T U T U Sigma V T V Sigma T U T U Sigma Sigma T U T X T X amp amp U Sigma V T T U Sigma V T V T T Sigma T U T U Sigma V T V Sigma T U T U Sigma V T V Sigma T Sigma V T end matrix nbsp Since SST displaystyle Sigma Sigma T nbsp and STS displaystyle Sigma T Sigma nbsp are diagonal we see that U displaystyle U nbsp must contain the eigenvectors of XXT displaystyle XX T nbsp while V displaystyle V nbsp must be the eigenvectors of XTX displaystyle X T X nbsp Both products have the same non zero eigenvalues given by the non zero entries of SST displaystyle Sigma Sigma T nbsp or equally by the non zero entries of STS displaystyle Sigma T Sigma nbsp Now the decomposition looks like this XUSVT dj d j tiT x1 1 x1 j x1 n xi 1 xi j xi n xm 1 xm j xm n t iT u1 ul s1 0 0 sl v1 vl displaystyle begin matrix amp X amp amp amp U amp amp Sigma amp amp V T amp textbf d j amp amp amp amp amp amp amp hat textbf d j amp downarrow amp amp amp amp amp amp amp downarrow textbf t i T rightarrow amp begin bmatrix x 1 1 amp dots amp x 1 j amp dots amp x 1 n vdots amp ddots amp vdots amp ddots amp vdots x i 1 amp dots amp x i j amp dots amp x i n vdots amp ddots amp vdots amp ddots amp vdots x m 1 amp dots amp x m j amp dots amp x m n end bmatrix amp amp hat textbf t i T rightarrow amp begin bmatrix begin bmatrix textbf u 1 end bmatrix dots begin bmatrix textbf u l end bmatrix end bmatrix amp cdot amp begin bmatrix sigma 1 amp dots amp 0 vdots amp ddots amp vdots 0 amp dots amp sigma l end bmatrix amp cdot amp begin bmatrix begin bmatrix amp amp textbf v 1 amp amp end bmatrix vdots begin bmatrix amp amp textbf v l amp amp end bmatrix end bmatrix end matrix nbsp The values s1 sl displaystyle sigma 1 dots sigma l nbsp are called the singular values and u1 ul displaystyle u 1 dots u l nbsp and v1 vl displaystyle v 1 dots v l nbsp the left and right singular vectors Notice the only part of U displaystyle U nbsp that contributes to ti displaystyle textbf t i nbsp is the i th displaystyle i textrm th nbsp row Let this row vector be called t iT displaystyle hat textrm t i T nbsp Likewise the only part of VT displaystyle V T nbsp that contributes to dj displaystyle textbf d j nbsp is the j th displaystyle j textrm th nbsp column d j displaystyle hat textrm d j nbsp These are not the eigenvectors but depend on all the eigenvectors It turns out that when you select the k displaystyle k nbsp largest singular values and their corresponding singular vectors from U displaystyle U nbsp and V displaystyle V nbsp you get the rank k displaystyle k nbsp approximation to X displaystyle X nbsp with the smallest error Frobenius norm This approximation has a minimal error But more importantly we can now treat the term and document vectors as a semantic space The row term vector t iT displaystyle hat textbf t i T nbsp then has k displaystyle k nbsp entries mapping it to a lower dimensional space These new dimensions do not relate to any comprehensible concepts They are a lower dimensional approximation of the higher dimensional space Likewise the document vector 
d j displaystyle hat textbf d j nbsp is an approximation in this lower dimensional space We write this approximation as Xk UkSkVkT displaystyle X k U k Sigma k V k T nbsp You can now do the following See how related documents j displaystyle j nbsp and q displaystyle q nbsp are in the low dimensional space by comparing the vectors Sk d j displaystyle Sigma k cdot hat textbf d j nbsp and Sk d q displaystyle Sigma k cdot hat textbf d q nbsp typically by cosine similarity Comparing terms i displaystyle i nbsp and p displaystyle p nbsp by comparing the vectors Sk t i displaystyle Sigma k cdot hat textbf t i nbsp and Sk t p displaystyle Sigma k cdot hat textbf t p nbsp Note that t displaystyle hat textbf t nbsp is now a column vector Documents and term vector representations can be clustered using traditional clustering algorithms like k means using similarity measures like cosine Given a query view this as a mini document and compare it to your documents in the low dimensional space To do the latter you must first translate your query into the low dimensional space It is then intuitive that you must use the same transformation that you use on your documents d j Sk 1UkTdj displaystyle hat textbf d j Sigma k 1 U k T textbf d j nbsp Note here that the inverse of the diagonal matrix Sk displaystyle Sigma k nbsp may be found by inverting each nonzero value within the matrix This means that if you have a query vector q displaystyle q nbsp you must do the translation q Sk 1UkTq displaystyle hat textbf q Sigma k 1 U k T textbf q nbsp before you compare it with the document vectors in the low dimensional space You can do the same for pseudo term vectors tiT t iTSkVkT displaystyle textbf t i T hat textbf t i T Sigma k V k T nbsp t iT tiTVk TSk 1 tiTVkSk 1 displaystyle hat textbf t i T textbf t i T V k T Sigma k 1 textbf t i T V k Sigma k 1 nbsp t i Sk 1VkTti displaystyle hat textbf t i Sigma k 1 V k T textbf t i nbsp Applications editThe new low dimensional space typically can be used to Compare the documents in the low dimensional space data clustering document classification Find similar documents across languages after analyzing a base set of translated documents cross language information retrieval Find relations between terms synonymy and polysemy Given a query of terms translate it into the low dimensional space and find matching documents information retrieval Find the best similarity between small groups of terms in a semantic way i e in a context of a knowledge corpus as for example in multi choice questions MCQ answering model 5 Expand the feature space of machine learning text mining systems 6 Analyze word association in text corpus 7 Synonymy and polysemy are fundamental problems in natural language processing Synonymy is the phenomenon where different words describe the same idea Thus a query in a search engine may fail to retrieve a relevant document that does not contain the words which appeared in the query For example a search for doctors may not return a document containing the word physicians even though the words have the same meaning Polysemy is the phenomenon where the same word has multiple meanings So a search may retrieve irrelevant documents containing the desired words in the wrong meaning For example a botanist and a computer scientist looking for the word tree probably desire different sets of documents Commercial applications edit LSA has been used to assist in performing prior art searches for patents 8 Applications in human memory edit The use of Latent Semantic Analysis 
Applications

The new low-dimensional space typically can be used to:

  * Compare the documents in the low-dimensional space (data clustering, document classification).
  * Find similar documents across languages, after analyzing a base set of translated documents (cross-language information retrieval).
  * Find relations between terms (synonymy and polysemy).
  * Given a query of terms, translate it into the low-dimensional space, and find matching documents (information retrieval).
  * Find the best similarity between small groups of terms, in a semantic way (i.e. in the context of a knowledge corpus), as for example in multiple-choice question (MCQ) answering models.[5]
  * Expand the feature space of machine learning / text mining systems.[6]
  * Analyze word association in a text corpus.[7]

Synonymy and polysemy are fundamental problems in natural language processing:

  * Synonymy is the phenomenon where different words describe the same idea. Thus, a query in a search engine may fail to retrieve a relevant document that does not contain the words which appeared in the query. For example, a search for "doctors" may not return a document containing the word "physicians", even though the words have the same meaning.
  * Polysemy is the phenomenon where the same word has multiple meanings. So a search may retrieve irrelevant documents containing the desired words in the wrong meaning. For example, a botanist and a computer scientist looking for the word "tree" probably desire different sets of documents.

Commercial applications

LSA has been used to assist in performing prior art searches for patents.[8]

Applications in human memory

The use of latent semantic analysis has been prevalent in the study of human memory, especially in areas of free recall and memory search. There is a positive correlation between the semantic similarity of two words (as measured by LSA) and the probability that the words would be recalled one after another in free recall tasks using study lists of random common nouns. In these situations, the inter-response time between similar words was also much quicker than between dissimilar words. These findings are referred to as the Semantic Proximity Effect.[9]

When participants made mistakes in recalling studied items, these mistakes tended to be items that were more semantically related to the desired item and found in a previously studied list. These prior-list intrusions, as they have come to be called, seem to compete with items on the current list for recall.[10]

Another model, termed Word Association Spaces (WAS), is also used in memory studies; it is built by collecting free-association data from a series of experiments and includes measures of word relatedness for over 72,000 distinct word pairs.[11]

Implementation

The SVD is typically computed using large matrix methods (for example, Lanczos methods), but may also be computed incrementally and with greatly reduced resources via a neural-network-like approach, which does not require the large, full-rank matrix to be held in memory.[12] A fast, incremental, low-memory, large-matrix SVD algorithm has also been developed.[13] MATLAB and Python implementations of these fast algorithms are available. Unlike Gorrell and Webb's (2005) stochastic approximation, Brand's algorithm (2003) provides an exact solution. In recent years progress has been made to reduce the computational complexity of SVD; for instance, by using a parallel ARPACK algorithm to perform parallel eigenvalue decomposition, it is possible to speed up the SVD computation cost while providing comparable prediction quality.[14]
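For the common case where the term-document matrix is large and sparse, a Lanczos-style partial SVD of the kind mentioned above can be obtained with SciPy. The following is a minimal sketch with toy random data and an illustrative k; it is one possible route, not the incremental algorithm of Brand.

    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.linalg import svds

    # Toy sparse "term-document" matrix: 10,000 terms x 2,000 documents, 0.1% non-zeros.
    A = sparse_random(10000, 2000, density=0.001, format="csr", random_state=42)

    k = 100                                   # number of singular triplets to compute
    U_k, s_k, Vt_k = svds(A, k=k)             # Lanczos-based partial SVD of a sparse matrix

    order = np.argsort(-s_k)                  # svds returns singular values in ascending order
    U_k, s_k, Vt_k = U_k[:, order], s_k[order], Vt_k[order, :]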
Limitations

Some of LSA's drawbacks include:

  * The resulting dimensions might be difficult to interpret. For instance, in

    {(car), (truck), (flower)} → {(1.3452 * car + 0.2828 * truck), (flower)}

    the (1.3452 * car + 0.2828 * truck) component could be interpreted as "vehicle". However, it is very likely that cases close to

    {(car), (bottle), (flower)} → {(1.3452 * car + 0.2828 * bottle), (flower)}

    will occur. This leads to results which can be justified on the mathematical level, but have no immediately obvious meaning in natural language. The (1.3452 * car + 0.2828 * bottle) component could still be justified, because both bottles and cars have transparent and opaque parts, are man-made, and with high probability contain logos or words on their surface; in many ways these two concepts "share semantics". That is, within the language in question there may not be a readily available word to assign, and explainability becomes an analysis task rather than a simple word/class/concept assignment task.
  * LSA can only partially capture polysemy (i.e., multiple meanings of a word), because each occurrence of a word is treated as having the same meaning: the word is represented as a single point in space. For example, the occurrence of "chair" in a document containing "The Chair of the Board" and in a separate document containing "the chair maker" is treated as the same. This results in the vector representation being an average of all the word's different meanings in the corpus, which can make comparisons difficult.[15] However, the effect is often lessened because words tend to have a predominant sense throughout a corpus (i.e., not all meanings are equally likely).
  * Limitations of the bag-of-words model (BOW), in which a text is represented as an unordered collection of words. To address some of the limitations of the bag-of-words model, a multi-gram dictionary can be used to find direct and indirect associations, as well as higher-order co-occurrences, among terms.[16]
  * The probabilistic model of LSA does not match observed data: LSA assumes that words and documents form a joint Gaussian model (ergodic hypothesis), while a Poisson distribution has been observed. Thus, a newer alternative is probabilistic latent semantic analysis, based on a multinomial model, which is reported to give better results than standard LSA.[17]

Alternative methods

Semantic hashing

In semantic hashing,[18] documents are mapped to memory addresses by means of a neural network in such a way that semantically similar documents are located at nearby addresses. The deep neural network essentially builds a graphical model of the word-count vectors obtained from a large set of documents. Documents similar to a query document can then be found by simply accessing all the addresses that differ by only a few bits from the address of the query document. This way of extending the efficiency of hash coding to approximate matching is much faster than locality-sensitive hashing, which is the fastest current method.

Latent semantic indexing

Latent semantic indexing (LSI) is an indexing and retrieval method that uses a mathematical technique called singular value decomposition (SVD) to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text. LSI is based on the principle that words that are used in the same contexts tend to have similar meanings. A key feature of LSI is its ability to extract the conceptual content of a body of text by establishing associations between those terms that occur in similar contexts.[19]

LSI is also an application of correspondence analysis, a multivariate statistical technique developed by Jean-Paul Benzécri[20] in the early 1970s, to a contingency table built from word counts in documents.

Called "latent semantic indexing" because of its ability to correlate semantically related terms that are latent in a collection of text, it was first applied to text at Bellcore in the late 1980s. The method, also called latent semantic analysis (LSA), uncovers the underlying latent semantic structure in the usage of words in a body of text and how it can be used to extract the meaning of the text in response to user queries, commonly referred to as concept searches. Queries, or concept searches, against a set of documents that have undergone LSI will return results that are conceptually similar in meaning to the search criteria, even if the results don't share a specific word or words with the search criteria.
Benefits of LSI

LSI helps overcome synonymy by increasing recall, one of the most problematic constraints of Boolean keyword queries and vector space models.[15] Synonymy is often the cause of mismatches in the vocabulary used by the authors of documents and the users of information retrieval systems.[21] As a result, Boolean or keyword queries often return irrelevant results and miss information that is relevant.

LSI is also used to perform automated document categorization. In fact, several experiments have demonstrated that there are a number of correlations between the way LSI and humans process and categorize text.[22] Document categorization is the assignment of documents to one or more predefined categories based on their similarity to the conceptual content of the categories.[23] LSI uses example documents to establish the conceptual basis for each category. During categorization processing, the concepts contained in the documents being categorized are compared to the concepts contained in the example items, and a category (or categories) is assigned to the documents based on the similarities between the concepts they contain and the concepts that are contained in the example documents.

Dynamic clustering based on the conceptual content of documents can also be accomplished using LSI. Clustering is a way to group documents based on their conceptual similarity to each other, without using example documents to establish the conceptual basis for each cluster. This is very useful when dealing with an unknown collection of unstructured text.

Because it uses a strictly mathematical approach, LSI is inherently independent of language. This enables LSI to elicit the semantic content of information written in any language without requiring the use of auxiliary structures, such as dictionaries and thesauri. LSI can also perform cross-linguistic concept searching and example-based categorization. For example, queries can be made in one language, such as English, and conceptually similar results will be returned even if they are composed of an entirely different language or of multiple languages.

LSI is not restricted to working only with words. It can also process arbitrary character strings. Any object that can be expressed as text can be represented in an LSI vector space. For example, tests with MEDLINE abstracts have shown that LSI is able to effectively classify genes based on conceptual modeling of the biological information contained in the titles and abstracts of the MEDLINE citations.[24]

LSI automatically adapts to new and changing terminology, and has been shown to be very tolerant of noise (i.e., misspelled words, typographical errors, unreadable characters, etc.).[25] This is especially important for applications using text derived from optical character recognition (OCR) and speech-to-text conversion. LSI also deals effectively with sparse, ambiguous, and contradictory data.

Text does not need to be in sentence form for LSI to be effective. It can work with lists, free-form notes, email, Web-based content, etc. As long as a collection of text contains multiple terms, LSI can be used to identify patterns in the relationships between the important terms and concepts contained in the text.

LSI has proven to be a useful solution to a number of conceptual matching problems.[26][27] The technique has been shown to capture key relationship information, including causal, goal-oriented, and taxonomic information.[28]

LSI timeline

  * Mid-1960s: Factor analysis technique first described and tested (H. Borko and M. Bernick)
  * 1988: Seminal paper on LSI technique published[19]
  * 1989: Original patent granted[19]
  * 1992: First use of LSI to assign articles to reviewers[29]
  * 1994: Patent granted for the cross-lingual application of LSI (Landauer et al.)
  * 1995: First use of LSI for grading essays (Foltz et al., Landauer et al.)
  * 1999: First implementation of LSI technology for the intelligence community for analyzing unstructured text (SAIC)
  * 2002: LSI-based product offering to intelligence-based government agencies (SAIC)

Mathematics of LSI

LSI uses common linear algebra techniques to learn the conceptual correlations in a collection of text. In general, the process involves constructing a weighted term-document matrix, performing a singular value decomposition on the matrix, and using the matrix to identify the concepts contained in the text.
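As an illustration of the first of these steps, the following minimal sketch (toy documents, not from the article) builds a raw term-document count matrix with one row per unique term and one column per document; the weighting step is illustrated after the weighting functions below.

    from collections import Counter
    import numpy as np

    # Toy corpus: three short "documents" (illustrative only).
    docs = ["human machine interface",
            "user interface system",
            "system and human system engineering"]
    tokenized = [d.split() for d in docs]

    terms = sorted({t for doc in tokenized for t in doc})   # the m unique terms
    A = np.zeros((len(terms), len(docs)))                   # m x n count matrix

    for j, doc in enumerate(tokenized):
        counts = Counter(doc)
        for i, term in enumerate(terms):
            A[i, j] = counts[term]                          # tf_ij: occurrences of term i in document j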
Term-document matrix

LSI begins by constructing a term-document matrix, A, to identify the occurrences of the m unique terms within a collection of n documents. In a term-document matrix, each term is represented by a row and each document is represented by a column, with each matrix cell, a_ij, initially representing the number of times the associated term appears in the indicated document, tf_ij. This matrix is usually very large and very sparse.

Once a term-document matrix is constructed, local and global weighting functions can be applied to it to condition the data. The weighting functions transform each cell, a_ij of A, to be the product of a local term weight, l_ij, which describes the relative frequency of a term in a document, and a global weight, g_i, which describes the relative frequency of the term within the entire collection of documents.

Some common local weighting functions[30] are:

  * Binary: l_ij = 1 if the term exists in the document, or else 0
  * Term frequency: l_ij = tf_ij, the number of occurrences of term i in document j
  * Log: l_ij = log(tf_ij + 1)
  * Augnorm: l_ij = ((tf_ij / max_i tf_ij) + 1) / 2

Some common global weighting functions are:

  * Binary: g_i = 1
  * Normal: g_i = 1 / sqrt(Σ_j tf_ij^2)
  * GfIdf: g_i = gf_i / df_i, where gf_i is the total number of times term i occurs in the whole collection, and df_i is the number of documents in which term i occurs
  * Idf (inverse document frequency): g_i = log_2(n / (1 + df_i))
  * Entropy: g_i = 1 + Σ_j (p_ij log p_ij) / log n, where p_ij = tf_ij / gf_i

Empirical studies with LSI report that the Log and Entropy weighting functions work well, in practice, with many data sets.[31] In other words, each entry a_ij of A is computed as:

g_i = 1 + Σ_j (p_ij log p_ij) / log n
a_ij = g_i log(tf_ij + 1)
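Since Log and Entropy are the pair reported to work well, the following minimal sketch (illustrative toy counts, not from the article) computes the log-entropy weighted matrix a_ij = g_i * log(tf_ij + 1) with NumPy:

    import numpy as np

    tf = np.array([[2., 0., 1.],
                   [1., 1., 0.],
                   [0., 3., 1.]])               # rows = terms, columns = documents (toy counts)
    n = tf.shape[1]                             # number of documents

    gf = tf.sum(axis=1, keepdims=True)          # gf_i: total occurrences of term i in the collection
    p = tf / gf                                 # p_ij = tf_ij / gf_i (every term occurs at least once here)

    safe_p = np.where(p > 0, p, 1.0)            # log(1) = 0, so zero counts contribute nothing
    g = 1.0 + (p * np.log(safe_p)).sum(axis=1) / np.log(n)   # global entropy weight per term

    A = g[:, None] * np.log(tf + 1.0)           # weighted term-document matrix
    print(A)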
Rank-reduced singular value decomposition

A rank-reduced singular value decomposition is performed on the matrix to determine patterns in the relationships between the terms and concepts contained in the text. The SVD forms the foundation for LSI.[32] It computes the term and document vector spaces by approximating the single term-frequency matrix, A, by the product of three other matrices: an m by r term-concept vector matrix T, an r by r singular values matrix S, and an n by r concept-document vector matrix D, which satisfy the following relations:

A ≈ T S D^T
T^T T = I_r    D^T D = I_r
S_1,1 ≥ S_2,2 ≥ … ≥ S_r,r > 0    S_i,j = 0 where i ≠ j

In the formula, A is the supplied m by n weighted matrix of term frequencies in a collection of text, where m is the number of unique terms and n is the number of documents. T is a computed m by r matrix of term vectors, where r is the rank of A (a measure of its unique dimensions, at most min(m, n)). S is a computed r by r diagonal matrix of decreasing singular values, and D is a computed n by r matrix of document vectors.

The SVD is then truncated to reduce the rank by keeping only the largest k ≪ r diagonal entries in the singular value matrix S, where k is typically on the order of 100 to 300 dimensions. This effectively reduces the term and document vector matrix sizes to m by k and n by k respectively. The SVD operation, along with this reduction, has the effect of preserving the most important semantic information in the text while reducing noise and other undesirable artifacts of the original space of A. This reduced set of matrices is often denoted with a modified formula such as:

A ≈ A_k = T_k S_k D_k^T

Efficient LSI algorithms only compute the first k singular values and term and document vectors, as opposed to computing a full SVD and then truncating it.

Note that this rank reduction is essentially the same as doing Principal Component Analysis (PCA) on the matrix A, except that PCA subtracts off the means. PCA loses the sparseness of the A matrix, which can make it infeasible for large lexicons.

Querying and augmenting LSI vector spaces

The computed T_k and D_k matrices define the term and document vector spaces, which with the computed singular values, S_k, embody the conceptual information derived from the document collection. The similarity of terms or documents within these spaces is a factor of how close they are to each other in these spaces, typically computed as a function of the angle between the corresponding vectors.

The same steps are used to locate the vectors representing the text of queries and new documents within the document space of an existing LSI index. By a simple transformation of the A = T S D^T equation into the equivalent D = A^T T S^(-1) equation, a new vector, d, for a query or for a new document can be created by computing a new column in A and then multiplying the new column by T S^(-1). The new column in A is computed using the originally derived global term weights and applying the same local weighting function to the terms in the query or in the new document.

A drawback to computing vectors in this way, when adding new searchable documents, is that terms that were not known during the SVD phase for the original index are ignored. These terms will have no impact on the global weights and learned correlations derived from the original collection of text. However, the computed vectors for the new text are still very relevant for similarity comparisons with all other document vectors.

The process of augmenting the document vector spaces for an LSI index with new documents in this manner is called folding in. Although the folding-in process does not account for the new semantic content of the new text, adding a substantial number of documents in this way will still provide good results for queries, as long as the terms and concepts they contain are well represented within the LSI index to which they are being added. When the terms and concepts of a new set of documents need to be included in an LSI index, either the term-document matrix and the SVD must be recomputed, or an incremental update method (such as the one described in [13]) is needed.
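The folding-in computation D = A^T T S^(-1) can be sketched as follows. This is a minimal illustration under assumed toy data: the existing index is simulated with a fresh SVD, the global weights g are stand-ins for the originally derived ones, and the weighting mirrors the log-entropy scheme shown earlier.

    import numpy as np

    # Pretend this is the already-weighted m x n matrix A of an existing LSI index.
    A = np.array([[1.2, 0.0, 0.7],
                  [0.0, 0.9, 0.4],
                  [0.8, 0.8, 0.0],
                  [0.0, 0.3, 1.1]])
    g = np.array([1.0, 0.8, 0.9, 0.7])            # stand-in for the original global term weights

    T, s, Dt = np.linalg.svd(A, full_matrices=False)
    k = 2
    T_k, S_k = T[:, :k], np.diag(s[:k])           # rank-k term space and singular values

    tf_new = np.array([1., 0., 2., 0.])           # raw term counts of a new document (known terms only)
    a_new = g * np.log(tf_new + 1.0)              # apply the same local/global weighting as the index
    d_new = a_new @ T_k @ np.linalg.inv(S_k)      # fold the new document into the k-dimensional space
    print(d_new)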
Additional uses of LSI

It is generally acknowledged that the ability to work with text on a semantic basis is essential to modern information retrieval systems. As a result, the use of LSI has significantly expanded in recent years as earlier challenges in scalability and performance have been overcome.

LSI is being used in a variety of information retrieval and text processing applications, although its primary application has been for concept searching and automated document categorization.[33] Below are some other ways in which LSI is being used:

  * Information discovery[34] (eDiscovery, government/intelligence community, publishing)
  * Automated document classification (eDiscovery, government/intelligence community, publishing)[35]
  * Text summarization[36] (eDiscovery, publishing)
  * Relationship discovery[37] (government, intelligence community, social networking)
  * Automatic generation of link charts of individuals and organizations[38] (government, intelligence community)
  * Matching technical papers and grants with reviewers[39] (government)
  * Online customer support[40] (customer management)
  * Determining document authorship[41] (education)
  * Automatic keyword annotation of images[42]
  * Understanding software source code[43] (software engineering)
  * Filtering spam[44] (system administration)
  * Information visualization[45]
  * Essay scoring[46] (education)
  * Literature-based discovery[47]
  * Stock returns prediction[6]
  * Dream content analysis (psychology)[7]

LSI is increasingly being used for electronic document discovery (eDiscovery) to help enterprises prepare for litigation. In eDiscovery, the ability to cluster, categorize, and search large collections of unstructured text on a conceptual basis is essential. Concept-based searching using LSI has been applied to the eDiscovery process by leading providers as early as 2003.[48]

Challenges to LSI

Early challenges to LSI focused on scalability and performance. LSI requires relatively high computational performance and memory in comparison to other information retrieval techniques.[49] However, with the implementation of modern high-speed processors and the availability of inexpensive memory, these considerations have been largely overcome. Real-world applications involving more than 30 million documents that were fully processed through the matrix and SVD computations are common in some LSI applications. A fully scalable (unlimited number of documents, online training) implementation of LSI is contained in the open source gensim software package.[50]

Another challenge to LSI has been the alleged difficulty in determining the optimal number of dimensions to use for performing the SVD. As a general rule, fewer dimensions allow for broader comparisons of the concepts contained in a collection of text, while a higher number of dimensions enables more specific (or more relevant) comparisons of concepts. The actual number of dimensions that can be used is limited by the number of documents in the collection. Research has demonstrated that around 300 dimensions will usually provide the best results with moderate-sized document collections (hundreds of thousands of documents), and perhaps 400 dimensions for larger document collections (millions of documents).[51] However, recent studies indicate that 50 to 1000 dimensions are suitable depending on the size and nature of the document collection.[52] Checking the proportion of variance retained, similar to PCA or factor analysis, to determine the optimal dimensionality is not suitable for LSI. Using a synonym test or prediction of missing words are two possible methods to find the correct dimensionality.[53] When LSI topics are used as features in supervised learning methods, one can use prediction error measurements to find the ideal dimensionality.
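As a pointer to the gensim implementation mentioned above (the package cited in [50]), the following minimal sketch shows how an LSI model is typically built and queried there. The tiny corpus and the num_topics value are illustrative assumptions, and the exact calls should be checked against the gensim documentation.

    from gensim import corpora, models

    texts = [["human", "machine", "interface"],
             ["user", "interface", "system"],
             ["system", "human", "system", "engineering"]]

    dictionary = corpora.Dictionary(texts)                 # term <-> id mapping
    corpus = [dictionary.doc2bow(t) for t in texts]        # sparse bag-of-words vectors

    lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)   # streamed, online LSI
    print(lsi[corpus[0]])          # the first document projected into the 2-dimensional LSI space
    # lsi.add_documents(...)       # the model can also be updated incrementally with new documents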
See also

  * Coh-Metrix
  * Compound term processing
  * Distributional semantics
  * Explicit semantic analysis
  * Latent semantic mapping
  * Latent semantic structure indexing
  * Principal components analysis
  * Probabilistic latent semantic analysis
  * Spamdexing
  * Word vector
  * Topic model
  * Latent Dirichlet allocation

References

1. Susan T. Dumais (2005). "Latent Semantic Analysis". Annual Review of Information Science and Technology. 38: 188–230. doi:10.1002/aris.1440380105.
2. "The Latent Semantic Indexing home page".
3. http://topicmodels.west.uni-koblenz.de/ckling/tmt/svd_ap.html
4. Markovsky, I. (2012). Low Rank Approximation: Algorithms, Implementation, Applications. Springer. ISBN 978-1-4471-2226-5.
5. Alain Lifchitz; Sandra Jhean-Larose; Guy Denhière (2009). "Effect of tuned parameters on an LSA multiple choice questions answering model". Behavior Research Methods. 41 (4): 1201–1209. arXiv:0811.0146. doi:10.3758/BRM.41.4.1201. PMID 19897829.
6. Ramiro H. Galvez; Agustín Gravano (2017). "Assessing the usefulness of online message board mining in automatic stock prediction systems". Journal of Computational Science. 19. ISSN 1877-7503. doi:10.1016/j.jocs.2017.01.001.
7. Altszyler, E.; Ribeiro, S.; Sigman, M.; Fernández Slezak, D. (2017). "The interpretation of dream meaning: Resolving ambiguity using Latent Semantic Analysis in a small corpus of text". Consciousness and Cognition. 56: 178–187. arXiv:1610.01520. doi:10.1016/j.concog.2017.09.004. PMID 28943127.
8. Gerry J. Elman (October 2007). "Automated Patent Examination Support: A Proposal". Biotechnology Law Report. 26 (5): 435–436. doi:10.1089/blr.2007.9896.
9. Marc W. Howard; Michael J. Kahana (1999). "Contextual Variability and Serial Position Effects in Free Recall".
10. Franklin M. Zaromb; et al. (2006). "Temporal Associations and Prior-List Intrusions in Free Recall".
11. Nelson, Douglas. "The University of South Florida Word Association, Rhyme and Word Fragment Norms". Retrieved May 8, 2011.
12. Geneviève Gorrell; Brandyn Webb (2005). "Generalized Hebbian Algorithm for Latent Semantic Analysis". Interspeech 2005.
13. Matthew Brand (2006). "Fast Low-Rank Modifications of the Thin Singular Value Decomposition". Linear Algebra and Its Applications. 415: 20–30. doi:10.1016/j.laa.2005.07.021.
14. Ding, Yaguang; Zhu, Guofeng; Cui, Chenyang; Zhou, Jian; Tao, Liang (2011). "A parallel implementation of Singular Value Decomposition based on Map-Reduce and PARPACK". Proceedings of 2011 International Conference on Computer Science and Network Technology. pp. 739–741. doi:10.1109/ICCSNT.2011.6182070. ISBN 978-1-4577-1587-7.
15. Deerwester, Scott; Dumais, Susan T.; Furnas, George W.; Landauer, Thomas K.; Harshman, Richard (1990). "Indexing by latent semantic analysis". Journal of the American Society for Information Science. 41 (6): 391–407. doi:10.1002/(SICI)1097-4571(199009)41:6<391::AID-ASI1>3.0.CO;2-9.
16. Abedi, Vida; Yeasin, Mohammed; Zand, Ramin (2014). "Empirical study using network of semantically related associations in bridging the knowledge gap". Journal of Translational Medicine. 12 (1): 324. doi:10.1186/s12967-014-0324-9. PMC 4252998. PMID 25428570.
17. Thomas Hofmann (1999). "Probabilistic Latent Semantic Analysis". Uncertainty in Artificial Intelligence. arXiv:1301.6705.
18. Salakhutdinov, Ruslan; Hinton, Geoffrey (2007). "Semantic hashing".
19. Deerwester, S.; et al. (1988). "Improving Information Retrieval with Latent Semantic Indexing". Proceedings of the 51st Annual Meeting of the American Society for Information Science. 25: 36–40.
20. Benzécri, J.-P. (1973). L'Analyse des Données. Volume II: L'Analyse des Correspondences. Paris, France: Dunod.
21. Furnas, G. W.; Landauer, T. K.; Gomez, L. M.; Dumais, S. T. (1987). "The vocabulary problem in human-system communication". Communications of the ACM. 30 (11): 964–971. doi:10.1145/32206.32212.
22. Landauer, T.; et al. (1998). "Learning Human-like Knowledge by Singular Value Decomposition: A Progress Report". In M. I. Jordan, M. J. Kearns & S. A. Solla (Eds.), Advances in Neural Information Processing Systems 10. Cambridge: MIT Press. pp. 45–51.
23. Dumais, S.; Platt, J.; Heckerman, D.; Sahami, M. (1998). "Inductive learning algorithms and representations for text categorization". Proceedings of the Seventh International Conference on Information and Knowledge Management (CIKM '98). p. 148. doi:10.1145/288627.288651. ISBN 978-1581130614.
24. Homayouni, R.; Heinrich, K.; Wei, L.; Berry, M. W. (2004). "Gene clustering by Latent Semantic Indexing of MEDLINE abstracts". Bioinformatics. 21 (1): 104–115. doi:10.1093/bioinformatics/bth464. PMID 15308538.
25. Price, R. J.; Zukas, A. E. (2005). "Application of Latent Semantic Indexing to Processing of Noisy Text". Intelligence and Security Informatics. Lecture Notes in Computer Science. Vol. 3495. p. 602. doi:10.1007/11427995_68. ISBN 978-3-540-25999-2.
26. Ding, C. (1999). "A Similarity-based Probability Model for Latent Semantic Indexing". Proceedings of the 22nd International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 59–65.
27. Bartell, B.; Cottrell, G.; Belew, R. (1992). "Latent Semantic Indexing is an Optimal Special Case of Multidimensional Scaling". Proceedings, ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 161–167.
28. Graesser, A.; Karnavat, A. (2000). "Latent Semantic Analysis Captures Causal, Goal-oriented, and Taxonomic Structures". Proceedings of CogSci 2000: 184–189.
29. Dumais, S.; Nielsen, J. (1992). "Automating the assignment of submitted manuscripts to reviewers". Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '92). pp. 233–244. doi:10.1145/133160.133205. ISBN 978-0897915236.
30. Berry, M. W.; Browne, M. (2005). Understanding Search Engines: Mathematical Modeling and Text Retrieval. Philadelphia: Society for Industrial and Applied Mathematics.
31. Landauer, T.; et al. (2007). Handbook of Latent Semantic Analysis. Lawrence Erlbaum Associates.
32. Berry, Michael W.; Dumais, Susan T.; O'Brien, Gavin W. (1995). "Using Linear Algebra for Intelligent Information Retrieval". SIAM Review. 37 (4): 573–595.
33. Dumais, S. (2004). "Latent Semantic Analysis". ARIST Review of Information Science and Technology. Vol. 38, Chapter 4.
34. "Best Practices Commentary on the Use of Search and Information Retrieval Methods in E-Discovery". The Sedona Conference, 2007. pp. 189–223.
35. Foltz, P. W.; Dumais, S. T. (1992). "Personalized Information Delivery: An analysis of information filtering methods". Communications of the ACM. 34 (12): 51–60.
36. Gong, Y.; Liu, X. (2001). "Creating Generic Text Summaries". Proceedings, Sixth International Conference on Document Analysis and Recognition. pp. 903–907.
37. Bradford, R. (2005). "Efficient Discovery of New Information in Large Text Databases". Proceedings, IEEE International Conference on Intelligence and Security Informatics, Atlanta, Georgia. LNCS Vol. 3495. Springer. pp. 374–380.
38. Bradford, R. B. (2006). "Application of Latent Semantic Indexing in Generating Graphs of Terrorist Networks". Intelligence and Security Informatics. Lecture Notes in Computer Science. Vol. 3975. pp. 674–675. doi:10.1007/11760146_84. ISBN 978-3-540-34478-0.
39. Yarowsky, D.; Florian, R. (1999). "Taking the Load off the Conference Chairs: Towards a Digital Paper-routing Assistant". Proceedings of the 1999 Joint SIGDAT Conference on Empirical Methods in NLP and Very Large Corpora. pp. 220–230.
40. Caron, J. (2000). "Applying LSA to Online Customer Support: A Trial Study". Unpublished master's thesis, May 2000.
41. Soboroff, I.; et al. (1997). "Visualizing Document Authorship Using N-grams and Latent Semantic Indexing". Workshop on New Paradigms in Information Visualization and Manipulation. pp. 43–48.
42. Monay, F.; Gatica-Perez, D. (2003). "On Image Auto-annotation with Latent Space Models". Proceedings of the 11th ACM International Conference on Multimedia, Berkeley, CA. pp. 275–278.
43. Maletic, J.; Marcus, A. (November 13–15, 2000). "Using latent semantic analysis to identify similarities in source code to support program understanding". Proceedings of the 12th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2000), Vancouver, British Columbia. pp. 46–53. doi:10.1109/TAI.2000.889845. ISBN 978-0-7695-0909-9.
44. Gee, K. (2003). "Using Latent Semantic Indexing to Filter Spam". Proceedings, 2003 ACM Symposium on Applied Computing, Melbourne, Florida. pp. 460–464.
45. Landauer, T.; Laham, D.; Derr, M. (2004). "From Paragraph to Graph: Latent Semantic Analysis for Information Visualization". Proceedings of the National Academy of Sciences. 101: 5214–5219.
46. Foltz, Peter W.; Laham, Darrell; Landauer, Thomas K. (1999). "Automated Essay Scoring: Applications to Educational Technology". Proceedings of EdMedia 1999.
47. Gordon, M.; Dumais, S. (1998). "Using Latent Semantic Indexing for Literature Based Discovery". Journal of the American Society for Information Science. 49 (8): 674–685.
48. "There Has to be a Better Way to Search" (2008). White paper, Fios, Inc.
49. Karypis, G.; Han, E. "Fast Supervised Dimensionality Reduction Algorithm with Applications to Document Categorization and Retrieval". Proceedings of CIKM-00, 9th ACM Conference on Information and Knowledge Management.
50. Radim Řehůřek (2011). "Subspace Tracking for Latent Semantic Analysis". Advances in Information Retrieval. Lecture Notes in Computer Science. Vol. 6611. pp. 289–300. doi:10.1007/978-3-642-20161-5_29. ISBN 978-3-642-20160-8.
51. Bradford, R. (2008). "An Empirical Study of Required Dimensionality for Large-scale Latent Semantic Indexing Applications". Proceedings of the 17th ACM Conference on Information and Knowledge Management, Napa Valley, California, USA. pp. 153–162.
52. Landauer, Thomas K.; Dumais, Susan T. (2008). "Latent Semantic Analysis". Scholarpedia. 3 (11): 4356.
53. Landauer, T. K.; Foltz, P. W.; Laham, D. (1998). "Introduction to Latent Semantic Analysis". Discourse Processes. 25: 259–284.

Further reading

  * Landauer, Thomas; Foltz, Peter W.; Laham, Darrell (1998). "Introduction to Latent Semantic Analysis". Discourse Processes. 25 (2–3): 259–284. doi:10.1080/01638539809545028.
  * Deerwester, Scott; Dumais, Susan T.; Furnas, George W.; Landauer, Thomas K.; Harshman, Richard (1990). "Indexing by Latent Semantic Analysis". Journal of the American Society for Information Science. 41 (6): 391–407. doi:10.1002/(SICI)1097-4571(199009)41:6<391::AID-ASI1>3.0.CO;2-9. Original article where the model was first exposed.
  * Berry, Michael; Dumais, Susan T.; O'Brien, Gavin W. (1995). "Using Linear Algebra for Intelligent Information Retrieval". Illustration of the application of LSA to document retrieval.
  * Chicco, D.; Masseroli, M. (2015). "Software suite for gene and protein annotation prediction and similarity search". IEEE/ACM Transactions on Computational Biology and Bioinformatics. 12 (4): 837–843. doi:10.1109/TCBB.2014.2382127. PMID 26357324.
  * "Latent Semantic Analysis". InfoVis.
  * Fridolin Wild (November 23, 2005). "An Open Source LSA Package for R". CRAN. Retrieved November 20, 2006.
  * Thomas Landauer; Susan T. Dumais. "A Solution to Plato's Problem: The Latent Semantic Analysis Theory of Acquisition, Induction, and Representation of Knowledge". Retrieved 2007-07-02.

External links

Articles on LSA

  * Latent Semantic Analysis, a Scholarpedia article on LSA written by Tom Landauer, one of the creators of LSA.

Talks and demonstrations

  * LSA Overview, talk by Prof. Thomas Hofmann describing LSA, its applications in information retrieval, and its connections to probabilistic latent semantic analysis.
  * Complete LSA sample code in C for Windows. The demo code includes enumeration of text files, filtering stop words, stemming, making a document-term matrix, and SVD.

Implementations

Due to its cross-domain applications in information retrieval, natural language processing (NLP), cognitive science, and computational linguistics, LSA has been implemented to support many different kinds of applications.

  * Sense Clusters, an information-retrieval-oriented Perl implementation of LSA
  * S-Space Package, a computational linguistics and cognitive science oriented Java implementation of LSA
  * Semantic Vectors, which applies random projection, LSA, and reflective random indexing to Lucene term-document matrices
  * Infomap Project, an NLP-oriented C implementation of LSA (superseded by the Semantic Vectors project)
  * Text to Matrix Generator, a MATLAB toolbox for generating term-document matrices from text collections, with support for LSA
  * Gensim, which contains a Python implementation of LSA for matrices larger than RAM