Wikipedia

Hungarian algorithm

The Hungarian method is a combinatorial optimization algorithm that solves the assignment problem in polynomial time and which anticipated later primal–dual methods. It was developed and published in 1955 by Harold Kuhn, who gave it the name "Hungarian method" because the algorithm was largely based on the earlier works of two Hungarian mathematicians, Dénes Kőnig and Jenő Egerváry.[1][2] However, in 2006 it was discovered that Carl Gustav Jacobi had solved the assignment problem in the 19th century, and the solution had been published posthumously in 1890 in Latin.[3]

James Munkres reviewed the algorithm in 1957 and observed that it is (strongly) polynomial.[4] Since then the algorithm has been known also as the Kuhn–Munkres algorithm or Munkres assignment algorithm. The time complexity of the original algorithm was O(n⁴); however, Edmonds and Karp, and independently Tomizawa, noticed that it can be modified to achieve an O(n³) running time.[5][6] One of the most popular[citation needed] O(n³) variants is the Jonker–Volgenant algorithm.[7] Ford and Fulkerson extended the method to general maximum flow problems in the form of the Ford–Fulkerson algorithm.

The problem

Example

In this simple example, there are three workers: Alice, Bob and Dora. One of them has to clean the bathroom, another to sweep the floors, and the third to wash the windows, but they each demand different pay for the various tasks. The problem is to find the lowest-cost way to assign the jobs. The problem can be represented in a matrix of the costs of the workers doing the jobs. For example:

Worker \ Task   Clean bathroom   Sweep floors   Wash windows
Alice           $8               $4             $7
Bob             $5               $2             $3
Dora            $9               $4             $8

The Hungarian method, when applied to the above table, would give the minimum cost: this is $15, achieved by having Alice clean the bathroom, Dora sweep the floors, and Bob wash the windows. This can be confirmed using brute force:

             Clean:
Sweep:       Alice   Bob     Dora
Alice        —       $17     $16
Bob          $18     —       $18
Dora         $15     $16     —
(the unassigned person washes the windows)

Matrix formulation

In the matrix formulation, we are given a nonnegative n×n matrix, where the element in the i-th row and j-th column represents the cost of assigning the j-th job to the i-th worker. We have to find an assignment of the jobs to the workers such that each job is assigned to one worker, each worker is assigned one job, and the total cost of the assignment is minimized.

This can be expressed as permuting the rows of a cost matrix C to minimize the trace of the permuted matrix,

min over P of Tr(PC)

where P is a permutation matrix. (Equivalently, the columns can be permuted using CP.)

If the goal is to find the assignment that yields the maximum cost, the problem can be solved by negating the cost matrix C.

Bipartite graph formulation

The algorithm can equivalently be described by formulating the problem using a bipartite graph. We have a complete bipartite graph G = (S, T; E) with n worker vertices (S) and n job vertices (T), and each edge (i, j) ∈ E has a nonnegative cost c(i, j). We want to find a perfect matching with a minimum total cost.

The algorithm in terms of bipartite graphs

Let us call a function y : S ∪ T → ℝ a potential if y(i) + y(j) ≤ c(i, j) for each i ∈ S, j ∈ T. The value of a potential y is the sum of the potential over all vertices: Σ_{v ∈ S ∪ T} y(v).

The cost of each perfect matching is at least the value of each potential: the total cost of the matching is the sum of costs of all edges; the cost of each edge is at least the sum of potentials of its endpoints; since the matching is perfect, each vertex is an endpoint of exactly one edge; hence the total cost is at least the total potential.
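The argument in the preceding paragraph is the weak-duality inequality; spelled out (for a perfect matching M and any potential y):

```latex
\operatorname{cost}(M) \;=\; \sum_{(i,j)\in M} c(i,j)
  \;\ge\; \sum_{(i,j)\in M} \bigl(y(i)+y(j)\bigr)
  \;=\; \sum_{v\in S\cup T} y(v),
```

where the last equality uses the fact that every vertex of S ∪ T is covered by exactly one edge of M.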

The Hungarian method finds a perfect matching and a potential such that the matching cost equals the potential value. This proves that both of them are optimal. In fact, the Hungarian method finds a perfect matching of tight edges: an edge (i, j) is called tight for a potential y if y(i) + y(j) = c(i, j). Let us denote the subgraph of tight edges by G_y. The cost of a perfect matching in G_y (if there is one) equals the value of y.

During the algorithm we maintain a potential y and an orientation of G_y (denoted by →G_y) which has the property that the edges oriented from T to S form a matching M. Initially, y is 0 everywhere, and all edges are oriented from S to T (so M is empty). In each step, either we modify y so that its value increases, or we modify the orientation to obtain a matching with more edges. We maintain the invariant that all the edges of M are tight. We are done if M is a perfect matching.

In a general step, let R_S ⊆ S and R_T ⊆ T be the vertices not covered by M (so R_S consists of the vertices in S with no incoming edge and R_T consists of the vertices in T with no outgoing edge). Let Z be the set of vertices reachable in →G_y from R_S by a directed path. This can be computed by breadth-first search.

If R_T ∩ Z is nonempty, then reverse the orientation of all edges along a directed path in →G_y from R_S to R_T ∩ Z. Thus the size of the corresponding matching increases by 1.

If R_T ∩ Z is empty, then let

Δ = min { c(i, j) − y(i) − y(j) : i ∈ Z ∩ S, j ∈ T ∖ Z }.

Δ is well defined because at least one such edge (i, j) must exist whenever the matching is not yet of maximum possible size (see the following section); it is positive because there are no tight edges between Z ∩ S and T ∖ Z. Increase y by Δ on the vertices of Z ∩ S and decrease y by Δ on the vertices of Z ∩ T. The resulting y is still a potential, and although the graph G_y changes, it still contains M (see the next subsections). We orient the new edges from S to T. By the definition of Δ, the set Z of vertices reachable from R_S increases (note that the number of tight edges does not necessarily increase).

We repeat these steps until M is a perfect matching, in which case it gives a minimum cost assignment. The running time of this version of the method is O(n⁴): M is augmented n times, and in a phase where M is unchanged, there are at most n potential changes (since Z increases every time). The time sufficient for a potential change is O(n²).

Proof that the algorithm makes progress

We must show that as long as the matching is not of maximum possible size, the algorithm is always able to make progress — that is, to either increase the number of matched edges, or tighten at least one edge. It suffices to show that at least one of the following holds at every step:

  • M is of maximum possible size.
  • G_y contains an augmenting path.
  • G contains a loose-tailed path: a path from some vertex in R_S to a vertex in T ∖ Z that consists of any number (possibly zero) of tight edges followed by a single loose edge. The trailing loose edge of a loose-tailed path is thus from Z ∩ S, guaranteeing that Δ is well defined.

If M is of maximum possible size, we are of course finished. Otherwise, by Berge's lemma, there must exist an augmenting path P with respect to M in the underlying graph G. However, this path may not exist in G_y: although every even-numbered edge in P is tight by the definition of M, odd-numbered edges may be loose and thus absent from G_y. One endpoint of P is in R_S, the other in R_T; w.l.o.g., suppose it begins in R_S. If every edge on P is tight, then it remains an augmenting path in G_y and we are done. Otherwise, let uv be the first loose edge on P. If v ∉ Z then we have found a loose-tailed path and we are done. Otherwise, v is reachable from some other path Q of tight edges from a vertex in R_S. Let P_v be the subpath of P beginning at v and continuing to the end, and let P′ be the path formed by traveling along Q until a vertex on P_v is reached, and then continuing to the end of P_v. Observe that P′ is an augmenting path in G with at least one fewer loose edge than P. P can be replaced with P′ and this reasoning process iterated (formally, using induction on the number of loose edges) until either an augmenting path in G_y or a loose-tailed path in G is found.

Proof that adjusting the potential y leaves M unchanged

To show that every edge in M remains after adjusting y, it suffices to show that for an arbitrary edge in M, either both of its endpoints, or neither of them, are in Z. To this end let vu be an edge in M, oriented from v ∈ T to u ∈ S. It is easy to see that if v is in Z then u must be too, since every edge in M is tight. Now suppose, toward contradiction, that u ∈ Z but v ∉ Z. u itself cannot be in R_S because it is the endpoint of a matched edge, so there must be some directed path of tight edges from a vertex in R_S to u. This path must avoid v, since that is by assumption not in Z, so the vertex immediately preceding u in this path is some other vertex v′ ∈ T. v′u is a tight edge from T to S and is thus in M. But then M contains two edges that share the vertex u, contradicting the fact that M is a matching. Thus every edge in M has either both endpoints or neither endpoint in Z.

Proof that y remains a potential

To show that y remains a potential after being adjusted, it suffices to show that no edge has its total potential increased beyond its cost. This is already established for edges in M by the preceding paragraph, so consider an arbitrary edge uv from S to T. If y(u) is increased by Δ, then either v ∈ Z ∩ T, in which case y(v) is decreased by Δ, leaving the total potential of the edge unchanged, or v ∈ T ∖ Z, in which case the definition of Δ guarantees that y(u) + y(v) + Δ ≤ c(u, v). Thus y remains a potential.

The algorithm in O(n³) time

Suppose there are J jobs and W workers (J ≤ W). We describe how to compute, for each prefix of jobs, the minimum total cost to assign each of these jobs to distinct workers. Specifically, we add the j-th job and update the total cost in time O(jW), yielding an overall time complexity of O(J²W). Note that this is better than O(W³) when the number of jobs is small relative to the number of workers.

Adding the j-th job in O(jW) time

We use the same notation as the previous section, though we modify its definitions as necessary. Let S_j denote the set of the first j jobs and T denote the set of all workers.

Before the j-th step of the algorithm, we assume that we have a matching on S_{j−1} ∪ T that matches all jobs in S_{j−1}, and potentials y satisfying the following condition: the matching is tight with respect to the potentials, the potentials of all unmatched workers are zero, and the potentials of all matched workers are non-positive. Note that such potentials certify the optimality of the matching.

During the j-th step, we add the j-th job to S_{j−1} to form S_j and initialize Z = {j}. At all times, every vertex in Z will be reachable from the j-th job in the subgraph of tight edges. While Z does not contain a worker that has not been assigned a job, let

Δ = min { c(j′, w) − y(j′) − y(w) : j′ ∈ Z ∩ S_j, w ∈ T ∖ Z }

and let w_min denote a worker w at which the minimum is attained. After adjusting the potentials in the way described in the previous section, there is now a tight edge from a job in Z to w_min.

  • If w_min is unmatched, then we have an augmenting path in the subgraph of tight edges from the j-th job to w_min. After toggling the matching along this path, we have now matched the first j jobs, and this procedure terminates.
  • Otherwise, we add w_min and the job matched with it to Z.

Adjusting the potentials takes O(W) time. Recomputing Δ and w_min after changing the potentials and Z can also be done in O(W) time. The second case can occur at most j − 1 times before the first case occurs and the procedure terminates, yielding the overall time complexity of O(jW).

Implementation in C++

For convenience of implementation, the code below adds an additional worker with index W, such that yt[W] stores the negation of the sum of all Δ computed so far. After the j-th job is added and the matching updated, the cost of the current matching equals the sum of all Δ computed so far, or −yt[W].

This code is adapted from e-maxx :: algo.[8]

/**
 * Solution to https://open.kattis.com/problems/cordonbleu using Hungarian
 * algorithm.
 */

#include <cassert>
#include <cstdlib>
#include <iostream>
#include <limits>
#include <vector>
using namespace std;

/**
 * Sets a = min(a, b)
 * @return true if b < a
 */
template <class T> bool ckmin(T &a, const T &b) { return b < a ? a = b, 1 : 0; }

/**
 * Given J jobs and W workers (J <= W), computes the minimum cost to assign each
 * prefix of jobs to distinct workers.
 *
 * @tparam T a type large enough to represent integers on the order of
 * J * max(|C|)
 * @param C a matrix of dimensions JxW such that C[j][w] = cost to assign j-th
 * job to w-th worker (possibly negative)
 *
 * @return a vector of length J, with the j-th entry equaling the minimum cost
 * to assign the first (j+1) jobs to distinct workers
 */
template <class T> vector<T> hungarian(const vector<vector<T>> &C) {
    const int J = (int)size(C), W = (int)size(C[0]);
    assert(J <= W);
    // job[w] = job assigned to w-th worker, or -1 if no job assigned
    // note: a W-th worker was added for convenience
    vector<int> job(W + 1, -1);
    vector<T> ys(J), yt(W + 1); // potentials
    // -yt[W] will equal the sum of all deltas
    vector<T> answers;
    const T inf = numeric_limits<T>::max();
    for (int j_cur = 0; j_cur < J; ++j_cur) { // assign j_cur-th job
        int w_cur = W;
        job[w_cur] = j_cur;
        // min reduced cost over edges from Z to worker w
        vector<T> min_to(W + 1, inf);
        vector<int> prv(W + 1, -1); // previous worker on alternating path
        vector<bool> in_Z(W + 1);   // whether worker is in Z
        while (job[w_cur] != -1) {  // runs at most j_cur + 1 times
            in_Z[w_cur] = true;
            const int j = job[w_cur];
            T delta = inf;
            int w_next;
            for (int w = 0; w < W; ++w) {
                if (!in_Z[w]) {
                    if (ckmin(min_to[w], C[j][w] - ys[j] - yt[w]))
                        prv[w] = w_cur;
                    if (ckmin(delta, min_to[w])) w_next = w;
                }
            }
            // delta will always be non-negative,
            // except possibly during the first time this loop runs
            // if any entries of C[j_cur] are negative
            for (int w = 0; w <= W; ++w) {
                if (in_Z[w]) ys[job[w]] += delta, yt[w] -= delta;
                else min_to[w] -= delta;
            }
            w_cur = w_next;
        }
        // update assignments along alternating path
        for (int w; w_cur != W; w_cur = w) job[w_cur] = job[w = prv[w_cur]];
        answers.push_back(-yt[W]);
    }
    return answers;
}

/**
 * Sanity check: https://en.wikipedia.org/wiki/Hungarian_algorithm#Example
 * First job (5):
 *   clean bathroom: Bob -> 5
 * First + second jobs (9):
 *   clean bathroom: Bob -> 5
 *   sweep floors: Alice -> 4
 * First + second + third jobs (15):
 *   clean bathroom: Alice -> 8
 *   sweep floors: Dora -> 4
 *   wash windows: Bob -> 3
 */
void sanity_check_hungarian() {
    vector<vector<int>> costs{{8, 5, 9}, {4, 2, 4}, {7, 3, 8}};
    assert((hungarian(costs) == vector<int>{5, 9, 15}));
    cerr << "Sanity check passed.\n";
}

// solves https://open.kattis.com/problems/cordonbleu
void cordon_bleu() {
    int N, M;
    cin >> N >> M;
    vector<pair<int, int>> bottles(N), couriers(M);
    for (auto &b : bottles) cin >> b.first >> b.second;
    for (auto &c : couriers) cin >> c.first >> c.second;
    pair<int, int> rest;
    cin >> rest.first >> rest.second;
    vector<vector<int>> costs(N, vector<int>(N + M - 1));
    auto dist = [&](pair<int, int> x, pair<int, int> y) {
        return abs(x.first - y.first) + abs(x.second - y.second);
    };
    for (int b = 0; b < N; ++b) {
        for (int c = 0; c < M; ++c) { // courier -> bottle -> restaurant
            costs[b][c] =
                dist(couriers[c], bottles[b]) + dist(bottles[b], rest);
        }
        for (int _ = 0; _ < N - 1; ++_) { // restaurant -> bottle -> restaurant
            costs[b][_ + M] = 2 * dist(bottles[b], rest);
        }
    }
    cout << hungarian(costs).back() << "\n";
}

int main() {
    sanity_check_hungarian();
    cordon_bleu();
}

Connection to successive shortest paths

The Hungarian algorithm can be seen to be equivalent to the successive shortest path algorithm for minimum-cost flow,[9][10] where the reweighting technique from Johnson's algorithm is used to find the shortest paths. The implementation from the previous section is rewritten below in such a way as to emphasize this connection; it can be checked that the potentials h for workers are equal to the potentials yt from the previous solution up to a constant offset. When the graph is sparse (there are only M allowed job–worker pairs), it is possible to optimize this algorithm to run in O(J(M + W log W)) time by using a Fibonacci heap to determine w_next instead of iterating over all W workers to find the one with minimum distance.

template <class T> vector<T> hungarian(const vector<vector<T>> &C) {
    const int J = (int)size(C), W = (int)size(C[0]);
    assert(J <= W);
    // job[w] = job assigned to w-th worker, or -1 if no job assigned
    // note: a W-th worker was added for convenience
    vector<int> job(W + 1, -1);
    vector<T> h(W); // Johnson potentials
    vector<T> answers;
    T ans_cur = 0;
    const T inf = numeric_limits<T>::max();
    // assign j_cur-th job using Dijkstra with potentials
    for (int j_cur = 0; j_cur < J; ++j_cur) {
        int w_cur = W; // unvisited worker with minimum distance
        job[w_cur] = j_cur;
        vector<T> dist(W + 1, inf); // Johnson-reduced distances
        dist[W] = 0;
        vector<bool> vis(W + 1);    // whether visited yet
        vector<int> prv(W + 1, -1); // previous worker on shortest path
        while (job[w_cur] != -1) {  // Dijkstra step: pop min worker from heap
            T min_dist = inf;
            vis[w_cur] = true;
            int w_next = -1; // next unvisited worker with minimum distance
            // consider extending shortest path by w_cur -> job[w_cur] -> w
            for (int w = 0; w < W; ++w) {
                if (!vis[w]) {
                    // sum of reduced edge weights w_cur -> job[w_cur] -> w
                    T edge = C[job[w_cur]][w] - h[w];
                    if (w_cur != W) {
                        edge -= C[job[w_cur]][w_cur] - h[w_cur];
                        assert(edge >= 0); // consequence of Johnson potentials
                    }
                    if (ckmin(dist[w], dist[w_cur] + edge)) prv[w] = w_cur;
                    if (ckmin(min_dist, dist[w])) w_next = w;
                }
            }
            w_cur = w_next;
        }
        for (int w = 0; w < W; ++w) { // update potentials
            ckmin(dist[w], dist[w_cur]);
            h[w] += dist[w];
        }
        ans_cur += h[w_cur];
        for (int w; w_cur != W; w_cur = w) job[w_cur] = job[w = prv[w_cur]];
        answers.push_back(ans_cur);
    }
    return answers;
}

Matrix interpretation

This variant of the algorithm follows the formulation given by Flood,[11] and later described more explicitly by Munkres, who proved that it runs in (strongly) polynomial time.[4] Instead of keeping track of the potentials of the vertices, the algorithm operates only on a matrix:

a(i, j) := c(i, j) − y(i) − y(j)

where c(i, j) is the original cost matrix and y(i), y(j) are the potentials from the graph interpretation. Changing the potentials corresponds to adding or subtracting from rows or columns of this matrix. The algorithm starts with y = 0 everywhere, i.e. with a(i, j) = c(i, j). As such, it can be viewed as taking the original cost matrix and modifying it.

Given n workers and tasks, the problem is written in the form of an n×n cost matrix

a1 a2 a3 a4
b1 b2 b3 b4
c1 c2 c3 c4
d1 d2 d3 d4

where a, b, c and d are workers who have to perform tasks 1, 2, 3 and 4. a1, a2, a3, and a4 denote the penalties incurred when worker "a" does task 1, 2, 3, and 4 respectively.

The problem is equivalent to assigning each worker a unique task such that the total penalty is minimized. Note that each task can only be worked on by one worker.

Step 1

For each row, its minimum element is subtracted from every element in that row. This causes all elements to have non-negative values. Therefore, an assignment with a total penalty of 0 is by definition a minimum assignment.

This also leads to at least one zero in each row. As such, a naive greedy algorithm can attempt to assign all workers a task with a penalty of zero. This is illustrated below.

0 a2 a3 a4
b1 b2 b3 0
c1 0 c3 c4
d1 d2 0 d4

The zeros above would be the assigned tasks.

In the worst case there are n! combinations to try, since multiple zeros can appear in a row if several elements tie for the minimum. So at some point this naive algorithm should be short-circuited.

Step 2

Sometimes it may turn out that the matrix at this stage cannot be used for assigning, as is the case for the matrix below.

0 a2 0 a4
b1 0 b3 0
0 c2 c3 c4
0 d2 d3 d4

To overcome this, we repeat the above procedure for all columns (i.e. the minimum element in each column is subtracted from all the elements in that column) and then check if an assignment with penalty 0 is possible.

In most situations this will give the result, but if it is still not possible then we need to keep going.

Step 3

All zeros in the matrix must be covered by marking as few rows and/or columns as possible. Steps 3 and 4 form one way to accomplish this.

For each row, try to assign an arbitrary zero. Assigned tasks are represented by starring a zero. Note that assignments can't be in the same row or column.

  • We assign the first zero of Row 1. The second zero of Row 1 can't be assigned.
  • We assign the first zero of Row 2. The second zero of Row 2 can't be assigned.
  • Zeros on Row 3 and Row 4 can't be assigned, because they are on the same column as the zero assigned on Row 1.

We could end with another assignment if we choose another ordering of the rows and columns.

0* a2 0 a4
b1 0* b3 0
0 c2 c3 c4
0 d2 d3 d4

Step 4

Cover all columns containing a (starred) zero.

× ×
0* a2 0 a4
b1 0* b3 0
0 c2 c3 c4
0 d2 d3 d4

Find a non-covered zero and prime it (mark it with a prime symbol). If no such zero can be found, meaning all zeroes are covered, skip to step 5.

  • If the zero is on the same row as a starred zero, cover the corresponding row, and uncover the column of the starred zero.
  • Then, GOTO "Find a non-covered zero and prime it."
    • Here, the second zero of Row 1 is uncovered. Because there is another zero starred on Row 1, we cover Row 1 and uncover Column 1.
    • Then, the second zero of Row 2 is uncovered. We cover Row 2 and uncover Column 2.
×
0* a2 0' a4 ×
b1 0* b3 0
0 c2 c3 c4
0 d2 d3 d4
0* a2 0' a4 ×
b1 0* b3 0' ×
0 c2 c3 c4
0 d2 d3 d4
  • Else the non-covered zero has no assigned zero on its row. We make a path starting from the zero by performing the following steps:
    1. Substep 1: Find a starred zero on the corresponding column. If there is one, go to Substep 2, else, stop.
    2. Substep 2: Find a primed zero on the corresponding row (there should always be one). Go to Substep 1.

The zero on Row 3 is uncovered. We add to the path the first zero of Row 1, then the second zero of Row 1, then we are done.

0* a2 0' a4 ×
b1 0* b3 0' ×
0' c2 c3 c4
0 d2 d3 d4
  • (Else branch continued) For all zeros encountered during the path, star primed zeros and unstar starred zeros.
    • As the path begins and ends by a primed zero when swapping starred zeros, we have assigned one more zero.
0 a2 0* a4
b1 0* b3 0
0* c2 c3 c4
0 d2 d3 d4
  • (Else branch continued) Unprime all primed zeroes and uncover all lines.
  • Repeat the previous steps (continue looping until the above "skip to step 5" is reached).
    • We cover columns 1, 2 and 3. The second zero on Row 2 is uncovered, so we cover Row 2 and uncover Column 2:
× ×
0 a2 0* a4
b1 0* b3 0' ×
0* c2 c3 c4
0 d2 d3 d4

All zeros are now covered with a minimal number of rows and columns.

The aforementioned detailed description is just one way to draw the minimum number of lines to cover all the 0s. Other methods work as well.

Step 5

If the number of starred zeros is n (or, in the general case, min(n, m), where n is the number of people and m is the number of jobs), the algorithm terminates. See the Result subsection below on how to interpret the results.

Otherwise, find the lowest uncovered value. Subtract this from every unmarked element and add it to every element covered by two lines. Go back to step 4.

This is equivalent to subtracting a number from all rows which are not covered and adding the same number to all columns which are covered. These operations do not change optimal assignments.

Result

If following this specific version of the algorithm, the starred zeros form the minimum assignment.

From Kőnig's theorem,[12] the minimum number of covering lines (a minimum vertex cover[13]) equals n exactly when the zeros admit a maximum matching[14] of size n. Thus, when n lines are required, a minimum cost assignment can be found by looking only at the zeros in the matrix.

Bibliography

  • R.E. Burkard, M. Dell'Amico, S. Martello: Assignment Problems (Revised reprint). SIAM, Philadelphia (PA.) 2012. ISBN 978-1-61197-222-1
  • M. Fischetti, "Lezioni di Ricerca Operativa", Edizioni Libreria Progetto Padova, Italia, 1995.
  • R. Ahuja, T. Magnanti, J. Orlin, "Network Flows", Prentice Hall, 1993.
  • S. Martello, "Jeno Egerváry: from the origins of the Hungarian algorithm to satellite communication". Central European Journal of Operational Research 18, 47–58, 2010

References

  1. ^ Harold W. Kuhn, "The Hungarian Method for the assignment problem", Naval Research Logistics Quarterly, 2: 83–97, 1955. Kuhn's original publication.
  2. ^ Harold W. Kuhn, "Variants of the Hungarian method for assignment problems", Naval Research Logistics Quarterly, 3: 253–258, 1956.
  3. ^ . Archived from the original on 16 October 2015.
  4. ^ a b J. Munkres, "Algorithms for the Assignment and Transportation Problems", Journal of the Society for Industrial and Applied Mathematics, 5(1):32–38, 1957 March.
  5. ^ Edmonds, Jack; Karp, Richard M. (1 April 1972). "Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems". Journal of the ACM. 19 (2): 248–264. doi:10.1145/321694.321699. S2CID 6375478.
  6. ^ Tomizawa, N. (1971). "On some techniques useful for solution of transportation network problems". Networks. 1 (2): 173–194. doi:10.1002/net.3230010206. ISSN 1097-0037.
  7. ^ Jonker, R.; Volgenant, A. (December 1987). "A shortest augmenting path algorithm for dense and sparse linear assignment problems". Computing. 38 (4): 325–340. doi:10.1007/BF02278710. S2CID 7806079.
  8. ^ "Hungarian Algorithm for Solving the Assignment Problem". e-maxx :: algo. 23 August 2012. Retrieved 13 May 2023.
  9. ^ Jacob Kogler (20 December 2022). "Minimum-cost flow - Successive shortest path algorithm". Algorithms for Competitive Programming. Retrieved 14 May 2023.
  10. ^ "Solving assignment problem using min-cost-flow". Algorithms for Competitive Programming. 17 July 2022. Retrieved 14 May 2023.
  11. ^ Flood, Merrill M. (1956). "The Traveling-Salesman Problem". Operations Research. 4 (1): 61–75. doi:10.1287/opre.4.1.61. ISSN 0030-364X.
  12. ^ Kőnig's theorem (graph theory).
  13. ^ Minimum vertex cover.
  14. ^ Matching (graph theory).

External links

  • Bruff, Derek, (matrix formalism).
  • Mordecai J. Golin, Bipartite Matching and the Hungarian Method (bigraph formalism), Course Notes, Hong Kong University of Science and Technology.
  • Hungarian maximum matching algorithm (both formalisms), on the Brilliant website.
  • R. A. Pilgrim, Munkres' Assignment Algorithm. Modified for Rectangular Matrices, Course notes, Murray State University.
  • Mike Dawes, , Course notes, University of Western Ontario.
  • On Kuhn's Hungarian Method – A tribute from Hungary, András Frank, Egervary Research Group, Pazmany P. setany 1/C, H1117, Budapest, Hungary.
  • Lecture: Fundamentals of Operations Research - Assignment Problem - Hungarian Algorithm, Prof. G. Srinivasan, Department of Management Studies, IIT Madras.
  • Extension: Assignment sensitivity analysis (with O(n^4) time complexity), Liu, Shell.
  • Solve any Assignment Problem online; provides a step-by-step explanation of the Hungarian Algorithm.

Implementations

Note that not all of these satisfy the O(n³) time complexity, even if they claim so. Some may contain errors, implement the slower O(n⁴) algorithm, or have other inefficiencies. In the worst case, a code example linked from Wikipedia could later be modified to include exploit code. Verification and benchmarking are necessary when using such code examples from unknown authors.

  • Lua and Python versions of R.A. Pilgrim's code (claiming O(n³) time complexity)
  • Julia implementation
  • C implementation claiming O(n³) time complexity
  • Java implementation claiming O(n³) time complexity
  • Python implementation
  • Ruby implementation with unit tests
  • C# implementation claiming O(n³) time complexity
  • D implementation with unit tests (port of a Java version claiming O(n³))
  • Online interactive implementation
  • Serial and parallel implementations
  • Matlab and C
  • Perl implementation
  • C++ implementation
  • C++ implementation claiming O(n³) time complexity (BSD style open source licensed)
  • MATLAB implementation
  • C implementation
  • JavaScript implementation with unit tests (port of a Java version claiming O(n³) time complexity)
  • Clue R package proposes an implementation, solve_LSAP
  • Node.js implementation on GitHub
  • Python implementation in scipy package


hungarian, algorithm, hungarian, method, combinatorial, optimization, algorithm, that, solves, assignment, problem, polynomial, time, which, anticipated, later, primal, dual, methods, developed, published, 1955, harold, kuhn, gave, name, hungarian, method, bec. The Hungarian method is a combinatorial optimization algorithm that solves the assignment problem in polynomial time and which anticipated later primal dual methods It was developed and published in 1955 by Harold Kuhn who gave it the name Hungarian method because the algorithm was largely based on the earlier works of two Hungarian mathematicians Denes Konig and Jeno Egervary 1 2 However in 2006 it was discovered that Carl Gustav Jacobi had solved the assignment problem in the 19th century and the solution had been published posthumously in 1890 in Latin 3 James Munkres reviewed the algorithm in 1957 and observed that it is strongly polynomial 4 Since then the algorithm has been known also as the Kuhn Munkres algorithm or Munkres assignment algorithm The time complexity of the original algorithm was O n 4 displaystyle O n 4 however Edmonds and Karp and independently Tomizawa noticed that it can be modified to achieve an O n 3 displaystyle O n 3 running time 5 6 One of the most popular citation needed O n 3 displaystyle O n 3 variants is the Jonker Volgenant algorithm 7 Ford and Fulkerson extended the method to general maximum flow problems in form of the Ford Fulkerson algorithm Contents 1 The problem 1 1 Example 1 2 Matrix formulation 1 3 Bipartite graph formulation 2 The algorithm in terms of bipartite graphs 2 1 Proof that the algorithm makes progress 2 2 Proof that adjusting the potential y leaves M unchanged 2 3 Proof that y remains a potential 3 The algorithm in O n3 time 3 1 Adding the j th job in O jW time 3 2 Implementation in C 4 Connection to successive shortest paths 5 Matrix interpretation 5 1 Step 1 5 2 Step 2 5 3 Step 3 5 4 Step 4 5 5 Step 5 5 6 Result 6 Bibliography 7 References 8 External 
links 8 1 ImplementationsThe problem editMain article Assignment problem Example edit In this simple example there are three workers Alice Bob and Dora One of them has to clean the bathroom another sweep the floors and the third washes the windows but they each demand different pay for the various tasks The problem is to find the lowest cost way to assign the jobs The problem can be represented in a matrix of the costs of the workers doing the jobs For example TaskWorker Cleanbathroom Sweepfloors Wash windows Alice 8 4 7 Bob 5 2 3 Dora 9 4 8 The Hungarian method when applied to the above table would give the minimum cost this is 15 achieved by having Alice clean the bathroom Dora sweep the floors and Bob wash the windows This can be confirmed using brute force CleanSweep Alice Bob Dora Alice 17 16 Bob 18 18 Dora 15 16 the unassigned person washes the windows Matrix formulation edit In the matrix formulation we are given a nonnegative n n matrix where the element in the i th row and j th column represents the cost of assigning the j th job to the i th worker We have to find an assignment of the jobs to the workers such that each job is assigned to one worker and each worker is assigned one job such that the total cost of assignment is minimum This can be expressed as permuting the rows of a cost matrix C to minimize the trace of a matrix min P Tr P C displaystyle min P operatorname Tr PC nbsp where P is a permutation matrix Equivalently the columns can be permuted using CP If the goal is to find the assignment that yields the maximum cost the problem can be solved by negating the cost matrix C Bipartite graph formulation edit The algorithm can equivalently be described by formulating the problem using a bipartite graph We have a complete bipartite graph G S T E displaystyle G S T E nbsp with n worker vertices S and n job vertices T and the edges E each have a nonnegative cost c i j displaystyle c i j nbsp We want to find a perfect matching with a minimum total cost 
The algorithm in terms of bipartite graphs

Let us call a function $y : S \cup T \to \mathbb{R}$ a potential if $y(i) + y(j) \le c(i,j)$ for each $i \in S, j \in T$. The value of a potential $y$ is the sum of the potential over all vertices: $\sum_{v \in S \cup T} y(v)$.

The cost of each perfect matching is at least the value of each potential: the total cost of the matching is the sum of the costs of all its edges; the cost of each edge is at least the sum of the potentials of its endpoints; since the matching is perfect, each vertex is an endpoint of exactly one edge; hence the total cost is at least the total potential.

The Hungarian method finds a perfect matching and a potential such that the matching cost equals the potential value. This proves that both of them are optimal. In fact, the Hungarian method finds a perfect matching of tight edges: an edge $(i,j)$ is called tight for a potential $y$ if $y(i) + y(j) = c(i,j)$. Let us denote the subgraph of tight edges by $G_y$. The cost of a perfect matching in $G_y$ (if there is one) equals the value of $y$.

During the algorithm we maintain a potential $y$ and an orientation of $G_y$ (denoted by $\overrightarrow{G_y}$) which has the property that the edges oriented from $T$ to $S$ form a matching $M$. Initially, $y$ is 0 everywhere, and all edges are oriented from $S$ to $T$ (so $M$ is empty). In each step, either we modify $y$ so that its value increases, or we modify the orientation to obtain a matching with more edges. We maintain the invariant that all the edges of $M$ are tight. We are done if $M$ is a perfect matching.

In a general step, let $R_S \subseteq S$ and $R_T \subseteq T$ be the vertices not covered by $M$ (so $R_S$ consists of the vertices in $S$ with no incoming edge and $R_T$ consists of the
vertices in $T$ with no outgoing edge). Let $Z$ be the set of vertices reachable in $\overrightarrow{G_y}$ from $R_S$ by a directed path. (This can be computed by breadth-first search.)

If $R_T \cap Z$ is nonempty, then reverse the orientation of all edges along a directed path in $\overrightarrow{G_y}$ from $R_S$ to $R_T$. Thus the size of the corresponding matching increases by 1.

If $R_T \cap Z$ is empty, then let

$$\Delta := \min \{\, c(i,j) - y(i) - y(j) : i \in Z \cap S,\ j \in T \setminus Z \,\}.$$

$\Delta$ is well defined because at least one such edge $(i,j)$ must exist whenever the matching is not yet of maximum possible size (see the following section); it is positive because there are no tight edges between $Z \cap S$ and $T \setminus Z$. Increase $y$ by $\Delta$ on the vertices of $Z \cap S$ and decrease $y$ by $\Delta$ on the vertices of $Z \cap T$. The resulting $y$ is still a potential, and although the graph $G_y$ changes, it still contains $M$ (see the next subsections). We orient the new edges from $S$ to $T$. By the definition of $\Delta$, the set $Z$ of vertices reachable from $R_S$ increases (note that the number of tight edges does not necessarily increase).

We repeat these steps until $M$ is a perfect matching, in which case it gives a minimum-cost assignment. The running time of this version of the method is $O(n^4)$: $M$ is augmented $n$ times, and in a phase where $M$ is unchanged there are at most $n$ potential changes (since $Z$ increases every time). The time sufficient for a potential change is $O(n^2)$.

Proof that the algorithm makes progress

We must show that as long as the matching is not of maximum possible size, the algorithm is always able to make progress; that is, to either increase the
number of matched edges or tighten at least one edge. It suffices to show that at least one of the following holds at every step:

- $M$ is of maximum possible size.
- $G_y$ contains an augmenting path.
- $G$ contains a loose-tailed path: a path from some vertex in $R_S$ to a vertex in $T \setminus Z$ that consists of any number (possibly zero) of tight edges followed by a single loose edge. The trailing loose edge of a loose-tailed path is thus from $Z \cap S$, guaranteeing that $\Delta$ is well defined.

If $M$ is of maximum possible size, we are of course finished. Otherwise, by Berge's lemma, there must exist an augmenting path $P$ with respect to $M$ in the underlying graph $G$. However, this path may not exist in $G_y$: although every even-numbered edge in $P$ is tight by the definition of $M$, odd-numbered edges may be loose and thus absent from $G_y$. One endpoint of $P$ is in $R_S$, the other in $R_T$; w.l.o.g., suppose it begins in $R_S$. If every edge on $P$ is tight, then it remains an augmenting path in $G_y$ and we are done. Otherwise, let $uv$ be the first loose edge on $P$. If $v \notin Z$, then we have found a loose-tailed path and we are done. Otherwise, $v$ is reachable from some other path $Q$ of tight edges from a vertex in $R_S$. Let $P_v$ be the subpath of $P$ beginning at $v$ and continuing to the end, and let $P'$ be the path formed by traveling along $Q$ until a vertex on $P_v$ is reached, and then continuing to the end of $P_v$. Observe that $P'$ is an augmenting path in $G$ with at least one fewer loose edge than $P$. $P$ can be replaced with $P'$, and this reasoning process iterated (formally, using induction on the number of loose edges) until either an augmenting
path in $G_y$ or a loose-tailed path in $G$ is found.

Proof that adjusting the potential y leaves M unchanged

To show that every edge in $M$ remains after adjusting $y$, it suffices to show that for an arbitrary edge in $M$, either both of its endpoints, or neither of them, are in $Z$. To this end, let $vu$ be an edge in $M$ from $T$ to $S$. It is easy to see that if $v$ is in $Z$ then $u$ must be too, since every edge in $M$ is tight. Now suppose, toward contradiction, that $u \in Z$ but $v \notin Z$. $u$ itself cannot be in $R_S$, because it is the endpoint of a matched edge, so there must be some directed path of tight edges from a vertex in $R_S$ to $u$. This path must avoid $v$, since that is by assumption not in $Z$, so the vertex immediately preceding $u$ in this path is some other vertex $v' \in T$. $v'u$ is a tight edge from $T$ to $S$ and is thus in $M$. But then $M$ contains two edges that share the vertex $u$, contradicting the fact that $M$ is a matching. Thus every edge in $M$ has either both endpoints or neither endpoint in $Z$.

Proof that y remains a potential

To show that $y$ remains a potential after being adjusted, it suffices to show that no edge has its total potential increased beyond its cost. This is already established for edges in $M$ by the preceding paragraph, so consider an arbitrary edge $uv$ from $S$ to $T$. If $y(u)$ is increased by $\Delta$, then either $v \in Z \cap T$, in which case $y(v)$ is decreased by $\Delta$, leaving the total potential of the edge unchanged, or $v \in T \setminus Z$, in which case the definition of $\Delta$ guarantees that $y(u) + y(v) + \Delta \le c(u,v)$. Thus $y$ remains a potential.

The algorithm in $O(n^3)$ time

Suppose there are $J$ jobs and $W$ workers ($J \le W$). We describe how to compute, for each
prefix of jobs, the minimum total cost to assign each of these jobs to distinct workers. Specifically, we add the $j$-th job and update the total cost in time $O(jW)$, yielding an overall time complexity of $O\left(\sum_{j=1}^{J} jW\right) = O(J^2 W)$. Note that this is better than $O(W^3)$ when the number of jobs is small relative to the number of workers.

Adding the j-th job in O(jW) time

We use the same notation as in the previous section, though we modify the definitions as necessary. Let $S_j$ denote the set of the first $j$ jobs and $T$ denote the set of all workers.

Before the $j$-th step of the algorithm, we assume that we have a matching on $S_{j-1} \cup T$ that matches all jobs in $S_{j-1}$, and potentials $y$ satisfying the following condition: the matching is tight with respect to the potentials, the potentials of all unmatched workers are zero, and the potentials of all matched workers are non-positive. Note that such potentials certify the optimality of the matching.

During the $j$-th step, we add the $j$-th job to $S_{j-1}$ to form $S_j$ and initialize $Z = \{j\}$. At all times, every vertex in $Z$ will be reachable from the $j$-th job in $G_y$. While $Z$ does not contain a worker that has not been assigned a job, let

$$\Delta := \min \{\, c(j,w) - y(j) - y(w) : j \in Z \cap S_j,\ w \in T \setminus Z \,\},$$

and let $w_{\text{next}}$ denote any $w$ at which the minimum is attained. After adjusting the potentials in the way described in the previous section, there is now a tight edge from $Z$ to $w_{\text{next}}$.

- If $w_{\text{next}}$ is unmatched, then we have an augmenting path in the subgraph of tight edges from $j$ to $w_{\text{next}}$. After toggling the matching along this path, we have now matched the first $j$ jobs, and this procedure terminates.
- Otherwise, we add $w_{\text{next}}$ and the job matched with it to $Z$.

Adjusting the potentials takes $O(W)$ time. Recomputing $\Delta$ and $w_{\text{next}}$ after changing the potentials and $Z$ can also be done in $O(W)$ time. The second case can occur at most $j-1$ times before the first case occurs and the procedure terminates, yielding the overall time complexity of $O(jW)$.

Implementation in C++

For convenience of implementation, the code below adds an additional worker $w_W$ such that $y(w_W)$ stores the negation of the sum of all $\Delta$ computed so far. After the $j$-th job is added and the matching updated, the cost of the current matching equals the sum of all $\Delta$ computed so far, i.e. $-y(w_W)$. This code is adapted from e-maxx::algo.[8]

```cpp
// Solution to https://open.kattis.com/problems/cordonbleu using Hungarian algorithm.
#include <cassert>
#include <iostream>
#include <limits>
#include <vector>
using namespace std;

// Sets a = min(a, b); returns true if b < a.
template <class T> bool ckmin(T &a, const T &b) { return b < a ? a = b, 1 : 0; }

/**
 * Given J jobs and W workers (J <= W), computes the minimum cost to assign
 * each prefix of jobs to distinct workers.
 *
 * @tparam T a type large enough to represent integers on the order of J * max(|C|)
 * @param C a matrix of dimensions JxW such that C[j][w] = cost to assign
 *          the j-th job to the w-th worker (possibly negative)
 * @return a vector of length J, with the j-th entry equaling the minimum cost
 *         to assign the first (j+1) jobs to distinct workers
 */
template <class T> vector<T> hungarian(const vector<vector<T>> &C) {
    const int J = (int)size(C), W = (int)size(C[0]);
    assert(J <= W);
    // job[w] = job assigned to w-th worker, or -1 if no job assigned
    // note: a W-th worker was added for convenience
    vector<int> job(W + 1, -1);
    vector<T> ys(J), yt(W + 1);  // potentials
    // -yt[W] will equal the sum of all deltas
    vector<T> answers;
    const T inf = numeric_limits<T>::max();
    for (int j_cur = 0; j_cur < J; ++j_cur) {  // assign j_cur-th job
        int w_cur = W;
        job[w_cur] = j_cur;
        vector<T> min_to(W + 1, inf);  // min reduced cost over edges from Z to worker w
        vector<int> prv(W + 1, -1);    // previous worker on alternating path
        vector<bool> in_Z(W + 1);      // whether worker is in Z
        while (job[w_cur] != -1) {     // runs at most j_cur + 1 times
            in_Z[w_cur] = true;
            const int j = job[w_cur];
            T delta = inf;
            int w_next;
            for (int w = 0; w < W; ++w)
                if (!in_Z[w]) {
                    if (ckmin(min_to[w], C[j][w] - ys[j] - yt[w])) prv[w] = w_cur;
                    if (ckmin(delta, min_to[w])) w_next = w;
                }
            // delta will always be non-negative, except possibly during the first
            // time this loop runs, if any entries of C[j_cur] are negative
            for (int w = 0; w <= W; ++w)
                if (in_Z[w]) ys[job[w]] += delta, yt[w] -= delta;
                else min_to[w] -= delta;
            w_cur = w_next;
        }
        // update assignments along alternating path
        for (int w; w_cur != W; w_cur = w) job[w_cur] = job[w = prv[w_cur]];
        answers.push_back(-yt[W]);
    }
    return answers;
}

// Sanity check: https://en.wikipedia.org/wiki/Hungarian_algorithm#Example
// First job (5):
//   clean bathroom: Bob -> 5
// First + second jobs (9):
//   clean bathroom: Bob -> 5, sweep floors: Alice -> 4
// First + second + third jobs (15):
//   clean bathroom: Alice -> 8, sweep floors: Dora -> 4, wash windows: Bob -> 3
void sanity_check_hungarian() {
    vector<vector<int>> costs{{8, 5, 9}, {4, 2, 4}, {7, 3, 8}};
    assert((hungarian(costs) == vector<int>{5, 9, 15}));
    cerr << "Sanity check passed.\n";
}

// solves https://open.kattis.com/problems/cordonbleu
void cordon_bleu() {
    int N, M;
    cin >> N >> M;
    vector<pair<int, int>> bottles(N), couriers(M);
    for (auto &b : bottles) cin >> b.first >> b.second;
    for (auto &c : couriers) cin >> c.first >> c.second;
    pair<int, int> rest;
    cin >> rest.first >> rest.second;
    vector<vector<int>> costs(N, vector<int>(N + M - 1));
    auto dist = [&](pair<int, int> x, pair<int, int> y) {
        return abs(x.first - y.first) + abs(x.second - y.second);
    };
    for (int b = 0; b < N; ++b) {
        for (int c = 0; c < M; ++c)  // courier -> bottle -> restaurant
            costs[b][c] = dist(couriers[c], bottles[b]) + dist(bottles[b], rest);
        for (int i = 0; i < N - 1; ++i)  // restaurant -> bottle -> restaurant
            costs[b][M + i] = 2 * dist(bottles[b], rest);
    }
    cout << hungarian(costs).back() << "\n";
}

int main() {
    sanity_check_hungarian();
    cordon_bleu();
}
```

Connection to successive shortest paths

The Hungarian algorithm can be seen to be equivalent to the successive shortest path algorithm for minimum-cost flow,[9][10] where the reweighting technique from Johnson's algorithm is used to find the shortest paths. The implementation from the previous section is rewritten below in such a way as to emphasize this connection; it can be checked that the potentials $h$ for workers $0, \dots, W-1$ are equal to the potentials $y$ from the previous solution, up to a constant offset.

When the graph is sparse (there are only $M$ allowed job, worker pairs), it is possible to optimize this algorithm to run in $O(JM + J^2 \log W)$ time by using a Fibonacci heap to determine $w_{\text{next}}$ instead of iterating over all $W$ workers to find the one with minimum distance.

```cpp
template <class T> vector<T> hungarian(const vector<vector<T>> &C) {
    const int J = (int)size(C), W = (int)size(C[0]);
    assert(J <= W);
    // job[w] = job assigned to w-th worker, or -1 if no job assigned
    // note: a W-th worker was added for convenience
    vector<int> job(W + 1, -1);
    vector<T> h(W);  // Johnson potentials
    vector<T> answers;
    T ans_cur = 0;
    const T inf = numeric_limits<T>::max();
    // assign j_cur-th job using Dijkstra with potentials
    for (int j_cur = 0; j_cur < J; ++j_cur) {
        int w_cur = W;  // unvisited worker with minimum distance
        job[w_cur] = j_cur;
        vector<T> dist(W + 1, inf);  // Johnson-reduced distances
        dist[W] = 0;
        vector<bool> vis(W + 1);     // whether visited yet
        vector<int> prv(W + 1, -1);  // previous worker on shortest path
        while (job[w_cur] != -1) {   // Dijkstra step: pop min worker from heap
            T min_dist = inf;
            vis[w_cur] = true;
            int w_next = -1;  // next unvisited worker with minimum distance
            // consider extending the shortest path by w_cur -> job[w_cur] -> w
            for (int w = 0; w < W; ++w)
                if (!vis[w]) {
                    // sum of reduced edge weights w_cur -> job[w_cur] -> w
                    T edge = C[job[w_cur]][w] - h[w];
                    if (w_cur != W) edge -= C[job[w_cur]][w_cur] - h[w_cur];
                    assert(edge >= 0);  // consequence of Johnson potentials
                    if (ckmin(dist[w], dist[w_cur] + edge)) prv[w] = w_cur;
                    if (ckmin(min_dist, dist[w])) w_next = w;
                }
            w_cur = w_next;
        }
        for (int w = 0; w < W; ++w) {  // update potentials
            ckmin(dist[w], dist[w_cur]);
            h[w] += dist[w];
        }
        ans_cur += h[w_cur];
        for (int w; w_cur != W; w_cur = w) job[w_cur] = job[w = prv[w_cur]];
        answers.push_back(ans_cur);
    }
    return answers;
}
```

Matrix interpretation

This variant of the algorithm follows the formulation given by Flood,[11] and was later described more explicitly by Munkres, who proved it runs in $O(n^4)$ time.[4] Instead of keeping track of the potentials of the vertices, the algorithm operates only on a matrix

$$a_{ij} := c(i,j) - y(i) - y(j),$$

where $c(i,j)$ is the original cost matrix and $y(i), y(j)$ are the potentials from the graph interpretation. Changing the potentials corresponds to adding or subtracting from rows or columns of this matrix. The algorithm starts with $a_{ij} = c(i,j)$. As such, it can be viewed as taking the original cost matrix and modifying it.

Given $n$ workers and tasks, the problem is written in the form of an $n \times n$ cost matrix

    a1  a2  a3  a4
    b1  b2  b3  b4
    c1  c2  c3  c4
    d1  d2  d3  d4

where a, b, c and d are workers who have to perform tasks 1, 2, 3 and 4, and a1, a2, a3, a4 denote the penalties incurred when worker a does task 1, 2, 3, 4 respectively. The problem is equivalent to assigning each worker a
unique task such that the total penalty is minimized. Note that each task can only be worked on by one worker.

Step 1

For each row, its minimum element is subtracted from every element in that row. This causes all elements to have non-negative values. Therefore, an assignment with a total penalty of 0 is by definition a minimum assignment. This also leads to at least one zero in each row. As such, a naive greedy algorithm can attempt to assign all workers a task with a penalty of zero. This is illustrated below:

    0   a2  a3  a4
    b1  b2  b3  0
    c1  0   c3  c4
    d1  d2  0   d4

The zeros above would be the assigned tasks. In the worst case there are $n!$ combinations to try, since multiple zeroes can appear in a row if multiple elements are the minimum, so at some point this naive algorithm should be short-circuited.

Step 2

Sometimes it may turn out that the matrix at this stage cannot be used for assigning, as is the case for the matrix below:

    0   a2  0   a4
    b1  0   b3  0
    0   c2  c3  c4
    0   d2  d3  d4

To overcome this, we repeat the above procedure for all columns (i.e., the minimum element in each column is subtracted from all the elements in that column) and then check if an assignment with penalty 0 is possible. In most situations this will give the result, but if it is still not possible, we need to keep going.

Step 3

All zeros in the matrix must be covered by marking as few rows and/or columns as possible. Steps 3 and 4 form one way to accomplish this.

For each row, try to assign an arbitrary zero. Assigned tasks are represented by starring a zero (written 0* below). Note that assignments can't be in the same row or column.

- We assign the first zero of Row 1. The second zero of Row 1 can't be assigned.
- We assign the first zero of Row 2. The second zero of Row 2 can't be assigned.
- Zeros on Row 3 and Row 4 can't be assigned, because they are on the same column as the zero assigned on Row 1.

We could end with another assignment if we chose another ordering of the rows and columns.

    0*  a2  0   a4
    b1  0*  b3  0
    0   c2  c3  c4
    0   d2  d3  d4

Step 4

Cover all columns containing a starred zero.

    0*  a2  0   a4
    b1  0*  b3  0
    0   c2  c3  c4
    0   d2  d3  d4
    (Columns 1 and 2 are covered.)

Find a non-covered zero and prime it (mark it with a prime symbol, written 0' below). If no such zero can be found, meaning all zeroes are covered, skip to step 5.

- If the zero is on the same row as a starred zero, cover the corresponding row and uncover the column of the starred zero. Then go back to "Find a non-covered zero and prime it."

  Here, the second zero of Row 1 is uncovered, so we prime it. Because there is another zero starred on Row 1, we cover Row 1 and uncover Column 1. Then the second zero of Row 2 is uncovered, so we prime it, cover Row 2, and uncover Column 2.

    0*  a2  0'  a4
    b1  0*  b3  0
    0   c2  c3  c4
    0   d2  d3  d4
    (Row 1 and Column 2 are covered.)

    0*  a2  0'  a4
    b1  0*  b3  0'
    0   c2  c3  c4
    0   d2  d3  d4
    (Rows 1 and 2 are covered.)

- Else, the non-covered zero has no assigned zero on its row. We make a path starting from the zero by performing the following steps:

  - Substep 1: Find a starred zero on the corresponding column. If there is one, go to Substep 2, else stop.
  - Substep 2: Find a primed zero on the corresponding row (there should always be one). Go to Substep 1.

  Here, the zero on Row 3 is uncovered, so we prime it. We add to the path the first zero of Row 1, then the second zero of Row 1, and then we are done.

    0*  a2  0'  a4
    b1  0*  b3  0'
    0'  c2  c3  c4
    0   d2  d3  d4
    (Rows 1 and 2 are covered.)

  For all zeros encountered during the path, star primed zeros and unstar starred zeros. As the path begins and ends with a primed zero, when swapping starred zeros we have assigned one more zero.

    0   a2  0*  a4
    b1  0*  b3  0'
    0*  c2  c3  c4
    0   d2  d3  d4

  Unprime all primed zeroes and uncover all lines, then repeat the previous steps, continuing to loop until the "skip to step 5" condition above is reached.

  Here, we cover columns 1, 2 and 3. The second zero on Row 2 is uncovered, so we cover Row 2 and uncover Column 2:

    0   a2  0*  a4
    b1  0*  b3  0'
    0*  c2  c3  c4
    0   d2  d3  d4
    (Columns 1 and 3 and Row 2 are covered.)

All zeros are now covered with a minimal number of rows and columns. The detailed description above is just one way to draw the minimum number of lines to cover all the 0s; other methods work as well.

Step 5

If the number of starred zeros is $n$ (or, in the general case, $\min(n, m)$, where $n$ is the
number of people and $m$ is the number of jobs), the algorithm terminates. See the Result subsection below on how to interpret the results.

Otherwise, find the lowest uncovered value. Subtract it from every unmarked element and add it to every element covered by two lines. Go back to step 4.

This is equivalent to subtracting a number from all rows which are not covered and adding the same number to all columns which are covered. These operations do not change optimal assignments.

Result

If this specific version of the algorithm has been followed, the starred zeros form the minimum assignment. By Kőnig's theorem,[12] the minimum number of lines (a minimum vertex cover[13]) will be $n$ (the size of a maximum matching[14]). Thus, when $n$ lines are required, a minimum-cost assignment can be found by looking at only the zeroes in the matrix.

Bibliography

- R. E. Burkard, M. Dell'Amico, S. Martello: Assignment Problems (revised reprint). SIAM, Philadelphia (PA), 2012. ISBN 978-1-61197-222-1
- M. Fischetti, Lezioni di Ricerca Operativa, Edizioni Libreria Progetto Padova, Italia, 1995.
- R. Ahuja, T. Magnanti, J. Orlin, Network Flows, Prentice Hall, 1993.
- S. Martello, "Jenő Egerváry: from the origins of the Hungarian algorithm to satellite communication", Central European Journal of Operational Research 18, 47–58, 2010.

References

1. Harold W. Kuhn, "The Hungarian Method for the assignment problem", Naval Research Logistics Quarterly, 2: 83–97, 1955. (Kuhn's original publication.)
2. Harold W. Kuhn, "Variants of the Hungarian method for assignment problems", Naval Research Logistics Quarterly, 3: 253–258, 1956.
3. "Presentation". Archived from the original on 16 October 2015.
4. J. Munkres, "Algorithms for the Assignment and Transportation Problems", Journal of the Society for Industrial and Applied Mathematics, 5 (1): 32–38, March 1957.
5. Edmonds, Jack; Karp, Richard M. (1 April 1972). "Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems". Journal of the ACM. 19 (2): 248–264. doi:10.1145/321694.321699. S2CID 6375478.
6. Tomizawa, N. (1971). "On some techniques useful for solution of
transportation network problems". Networks. 1 (2): 173–194. doi:10.1002/net.3230010206. ISSN 1097-0037.
7. Jonker, R.; Volgenant, A. (December 1987). "A shortest augmenting path algorithm for dense and sparse linear assignment problems". Computing. 38 (4): 325–340. doi:10.1007/BF02278710. S2CID 7806079.
8. "Hungarian Algorithm for Solving the Assignment Problem". e-maxx::algo. 23 August 2012. Retrieved 13 May 2023.
9. Jacob Kogler (20 December 2022). "Minimum-cost flow – Successive shortest path algorithm". Algorithms for Competitive Programming. Retrieved 14 May 2023.
10. "Solving assignment problem using min-cost-flow". Algorithms for Competitive Programming. 17 July 2022. Retrieved 14 May 2023.
11. Flood, Merrill M. (1956). "The Traveling-Salesman Problem". Operations Research. 4 (1): 61–75. doi:10.1287/opre.4.1.61. ISSN 0030-364X.
12. Kőnig's theorem (graph theory)
13. Vertex cover (minimum vertex cover)
14. Matching (graph theory)

External links

- Bruff, Derek, "The Assignment Problem and the Hungarian Method" (matrix formalism)
- Mordecai J. Golin, "Bipartite Matching and the Hungarian Method" (bigraph formalism), Course Notes, Hong Kong University of Science and Technology
- "Hungarian maximum matching algorithm" (both formalisms), on the Brilliant website
- R. A. Pilgrim, "Munkres' Assignment Algorithm (Modified for Rectangular Matrices)", Course notes, Murray State University
- Mike Dawes, "The Optimal Assignment Problem", Course notes, University of Western Ontario
- "On Kuhn's Hungarian Method – A tribute from Hungary", András Frank, Egerváry Research Group, Pázmány P. sétány 1/C, H-1117, Budapest, Hungary
- Lecture: "Fundamentals of Operations Research – Assignment Problem – Hungarian Algorithm", Prof. G. Srinivasan, Department of Management Studies, IIT Madras
- Extension: Assignment sensitivity analysis (with O(n^4) time complexity), Liu, Shell
- "Solve any Assignment Problem online" – provides a step-by-step explanation of the Hungarian algorithm

Implementations

Note that not all of these satisfy the $O(n^3)$ time complexity, even if they claim so. Some may contain errors, implement the
slower $O(n^4)$ algorithm, or have other inefficiencies. In the worst case, a code example linked from Wikipedia could later be modified to include exploit code. Verification and benchmarking is necessary when using such code examples from unknown authors.

- Lua and Python versions of R. A. Pilgrim's code (claiming $O(n^3)$ time complexity)
- Julia implementation
- C implementation claiming $O(n^3)$ time complexity
- Java implementation claiming $O(n^3)$ time complexity
- Python implementation
- Ruby implementation with unit tests
- C implementation claiming $O(n^3)$ time complexity
- D implementation with unit tests (port of a Java version claiming $O(n^3)$)
- Online interactive implementation
- Serial and parallel implementations (Matlab and C)
- Perl implementation
- C implementation
- C implementation claiming $O(n^3)$ time complexity (BSD-style open-source licensed)
- MATLAB implementation
- C implementation
- JavaScript implementation with unit tests (port of a Java version claiming $O(n^3)$ time complexity)
- Clue R package proposes an implementation, solve_LSAP
- Node.js implementation on GitHub
- Python implementation in the scipy package