Markov logic network

A Markov logic network (MLN) is a probabilistic logic that applies the ideas of a Markov network to first-order logic, enabling uncertain inference. Markov logic networks generalize first-order logic in the sense that, in a certain limit (as all formula weights tend to infinity), all unsatisfiable statements have probability zero and all tautologies have probability one.

History

Work in this area began in 2003 with Pedro Domingos and Matt Richardson, who introduced the term "Markov logic network" to describe it.[1][2]

Description

Briefly, a Markov logic network is a collection of formulas from first-order logic, to each of which is assigned a real number, the weight. Taken as a Markov network, the vertices of the network graph are atomic formulas, and the edges are the logical connectives used to construct the formula. Each formula is considered to be a clique, and the Markov blanket is the set of formulas in which a given atom appears. A potential function is associated with each formula, taking the value one when the formula is true and zero when it is false. The potential function is combined with the weight to form the Gibbs measure and partition function for the Markov network.
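This weighted-formula construction yields the joint distribution given by Richardson and Domingos:[2] writing w_i for the weight of formula F_i and n_i(x) for the number of true groundings of F_i in a possible world x,

    P(X = x) = (1/Z) exp( Σ_i w_i n_i(x) ),    Z = Σ_{x′} exp( Σ_i w_i n_i(x′) ),

where the partition function Z sums over all possible worlds, that is, over all truth assignments to the ground atoms.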

The above definition glosses over a subtle point: atomic formulas do not have a truth value unless they are grounded and given an interpretation; that is, until they are ground atoms with a Herbrand interpretation. Thus, a Markov logic network becomes a Markov network only with respect to a specific grounding and interpretation; the resulting Markov network is called the ground Markov network. The vertices of the graph of the ground Markov network are the ground atoms. The size of the resulting Markov network thus depends strongly (exponentially) on the number of constants in the domain of discourse.
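This growth is easy to see with a small sketch. The following Python fragment is illustrative only; the (name, arity) encoding of predicates and the helper name ground_atoms are assumptions made for the example, not part of any particular MLN system. It enumerates the ground atoms obtained by substituting domain constants into each predicate:

    from itertools import product

    def ground_atoms(predicates, constants):
        """Enumerate every ground atom obtained by substituting
        domain constants for the variables of each predicate.
        `predicates` is a list of (name, arity) pairs."""
        atoms = []
        for name, arity in predicates:
            # One ground atom per way of filling the argument slots.
            for args in product(constants, repeat=arity):
                atoms.append((name, args))
        return atoms

    # Two predicates over a three-constant domain already yield
    # 3**1 + 3**2 = 12 ground atoms; the count grows as |C|**arity,
    # and the ground Markov network over them grows accordingly.
    predicates = [("Smokes", 1), ("Friends", 2)]
    constants = ["Anna", "Bob", "Chris"]
    print(len(ground_atoms(predicates, constants)))  # 12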

Inference

The goal of inference in a Markov logic network is to find the stationary distribution of the system, or one that is close to it; that this may be difficult or not always possible is illustrated by the richness of behaviour seen in the Ising model. As in a Markov network, the stationary distribution finds the most likely assignment of probabilities to the vertices of the graph; in this case, the vertices are the ground atoms of an interpretation. That is, the distribution indicates the probability of the truth or falsehood of each ground atom. Given the stationary distribution, one can then perform inference in the traditional statistical sense of conditional probability: obtain the probability P(A | B) that formula A holds, given that formula B is true.
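Written out against the distribution over possible worlds, this is the usual conditional-probability identity: P(A | B) = P(A ∧ B) / P(B), with each term obtained by summing the stationary probabilities of the worlds that satisfy the corresponding formula.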

Inference in MLNs can be performed using standard Markov network inference techniques over the minimal subset of the relevant Markov network required to answer the query. These techniques include Gibbs sampling, which is effective but may be excessively slow for large networks, belief propagation, and approximation via pseudolikelihood.
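As a concrete illustration of the first of these techniques, the sketch below implements a plain Gibbs sampler over binary ground atoms in Python. The representation is an assumption made for the example: the ground network is given as a list of (weight, feature) pairs, each feature returning 1.0 when its ground formula is satisfied. Real systems exploit the graph structure rather than rescoring the whole network at every step.

    import math
    import random

    def score(state, weighted_features):
        # Sum of w_i * n_i(x) in this toy representation, where each
        # feature corresponds to a single ground formula.
        return sum(w * f(state) for w, f in weighted_features)

    def gibbs_sample(atoms, weighted_features, sweeps=10000, seed=0):
        rng = random.Random(seed)
        state = {a: rng.random() < 0.5 for a in atoms}  # random initial world
        counts = {a: 0 for a in atoms}
        for _ in range(sweeps):
            for a in atoms:
                # Resample atom a from its conditional distribution,
                # computed from the scores of the two candidate worlds.
                state[a] = True
                s_true = score(state, weighted_features)
                state[a] = False
                s_false = score(state, weighted_features)
                p_true = 1.0 / (1.0 + math.exp(s_false - s_true))
                state[a] = rng.random() < p_true
                counts[a] += state[a]
        # Fraction of sweeps in which each atom was true: an estimate of
        # its marginal probability (burn-in and thinning omitted).
        return {a: counts[a] / sweeps for a in atoms}

    # Example: the ground formula "Smokes(Anna) => Cancer(Anna)" with weight 1.5.
    atoms = ["Smokes(Anna)", "Cancer(Anna)"]
    features = [(1.5, lambda s: 1.0 if not s["Smokes(Anna)"] or s["Cancer(Anna)"] else 0.0)]
    print(gibbs_sample(atoms, features))

Conditioning on evidence B amounts to clamping the corresponding atoms to their observed values and resampling only the remaining ones.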

See also

  • Markov random field
  • Statistical relational learning
  • Probabilistic logic network
  • Probabilistic soft logic

Resources

  1. ^ Domingos, Pedro (2015). The Master Algorithm: How machine learning is reshaping how we live. pp. 246–247.
  2. ^ Richardson, Matthew; Domingos, Pedro (2006). "Markov Logic Networks" (PDF). Machine Learning. 62 (1–2): 107–136. doi:10.1007/s10994-006-5833-1.

External links

  • University of Washington Statistical Relational Learning group
  • Alchemy 2.0: Markov logic networks in C++
  • pracmln: Markov logic networks in Python
  • ProbCog: Markov logic networks in Python and Java that can use its own inference engine or Alchemy's
  • markov thebeast: Markov logic networks in Java
  • RockIt: Markov logic networks in Java (with web interface/REST API)
  • Tuffy: A Learning and Inference Engine with strong RDBMS-based optimization for scalability
  • Felix: A successor to Tuffy, with prebuilt submodules to speed up common subtasks
  • Factorie: Scala-based probabilistic inference language, with prebuilt submodules for natural language processing, etc.
  • Figaro: Scala-based MLN language
  • LoMRF: Logical Markov Random Fields, an open-source implementation of Markov Logic Networks in Scala
