
Consistent hashing

In computer science, consistent hashing[1][2] is a special kind of hashing technique such that when a hash table is resized, only n/m keys need to be remapped on average, where n is the number of keys and m is the number of slots. In contrast, in most traditional hash tables, a change in the number of array slots causes nearly all keys to be remapped because the mapping between the keys and the slots is defined by a modular operation.

History

The term "consistent hashing" was introduced by David Karger et al. at MIT for use in distributed caching, particularly for the web.[3] This academic paper from 1997 in Symposium on Theory of Computing introduced the term "consistent hashing" as a way of distributing requests among a changing population of web servers.[4] Each slot is then represented by a server in a distributed system or cluster. The addition of a server and the removal of a server (during scalability or outage) requires only   items to be re-shuffled when the number of slots (i.e. servers) change. The authors mention linear hashing and its ability to handle sequential server addition and removal, while consistent hashing allows servers to be added and removed in an arbitrary order. [1] The paper was later re-purposed to address technical challenge of keeping track of a file in peer-to-peer networks such as a distributed hash table.[5][6]

Teradata used this technique in their distributed database, released in 1986, although they did not use this term. Teradata still uses the concept of a hash table to fulfill exactly this purpose. Akamai Technologies was founded in 1998 by the scientists Daniel Lewin and F. Thomson Leighton (co-authors of the article coining "consistent hashing"). In Akamai's content delivery network,[7] consistent hashing is used to balance the load within a cluster of servers, while a stable marriage algorithm is used to balance load across clusters.[2]

Consistent hashing has also been used to reduce the impact of partial system failures in large web applications to provide robust caching without incurring the system-wide fallout of a failure.[8] Consistent hashing is also the cornerstone of distributed hash tables (DHTs), which employ hash values to partition a keyspace across a distributed set of nodes, then construct an overlay network of connected nodes that provide efficient node retrieval by key.

Rendezvous hashing, designed in 1996, is a simpler and more general technique.[citation needed] It achieves the goals of consistent hashing using the very different highest random weight (HRW) algorithm.

Basic technique

 
[Figure] In this case, using consistent hashing would result in the "BLOB" getting stored on server 139. A BLOB is mapped to the next server that appears on the circle in clockwise order until it reaches a server for which ζ ≤ server ID.

In the problem of load balancing, for example, when a BLOB has to be assigned to one of n servers on a cluster, a standard hash function could be used in such a way that we calculate the hash value for that BLOB. Assuming the resultant value of the hash is β, we perform a modular operation with the number of servers (n in this case) to determine the server in which we can place the BLOB: ζ = β % n; hence the BLOB will be placed in the server whose server ID is the successor of ζ. However, when a server is added or removed during an outage or scaling (when n changes), all the BLOBs in every server should be reassigned and moved due to rehashing, and this operation is expensive.
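To make the rehashing problem concrete, here is a minimal Python sketch (not from any cited source; the hash function, server counts, and names are illustrative assumptions) showing how modulo-based placement relocates most BLOBs when a single server is added:

```python
# Illustrative sketch: modulo-based placement; hash function and counts are
# arbitrary choices, not prescribed by the article.
import hashlib

def modulo_server(blob_id: str, num_servers: int) -> int:
    """Assign a BLOB to a server index via hash(blob) mod n."""
    beta = int(hashlib.sha256(blob_id.encode()).hexdigest(), 16)
    return beta % num_servers

blobs = [f"blob-{i}" for i in range(10_000)]
before = {b: modulo_server(b, 5) for b in blobs}   # cluster of 5 servers
after = {b: modulo_server(b, 6) for b in blobs}    # one server added

moved = sum(1 for b in blobs if before[b] != after[b])
print(f"{moved / len(blobs):.0%} of BLOBs changed servers")  # typically ~83%
```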

Consistent hashing was designed to avoid the problem of having to reassign every BLOB when a server is added or removed throughout the cluster. The central idea is to use a hash function that maps both the BLOBs and the servers to a unit circle, usually 2π radians. For example, ζ = Φ % 360 (where Φ is the hash of a BLOB's or server's identifier, like an IP address or UUID). Each BLOB is then assigned to the next server that appears on the circle in clockwise order. Usually, a binary search or linear search is used to find the "spot" or server to place that particular BLOB, in O(log N) or O(N) time respectively; and in every iteration, which happens in a clockwise manner, the comparison ζ ≤ Ψ (where Ψ is the hash value of a server within the cluster) is performed to find the server to place the BLOB. This provides an even distribution of BLOBs to servers. But, more importantly, if a server fails and is removed from the circle, only the BLOBs that were mapped to the failed server need to be reassigned to the next server in clockwise order. Likewise, if a new server is added, it is added to the unit circle, and only the BLOBs mapped to that server need to be reassigned.

Importantly, when a server is added or removed, the vast majority of the BLOBs maintain their prior server assignments, and the addition of the nth server only causes a 1/n fraction of the BLOBs to relocate. Although the process of moving BLOBs across cache servers in the cluster depends on the context, commonly the newly added cache server identifies its "successor" and moves from it all the BLOBs whose mapping now belongs to the new server (i.e. whose hash value is less than that of the new server). However, in the case of web page caches, most implementations involve no moving or copying, assuming the cached BLOB is small enough. When a request hits a newly added cache server, a cache miss happens, a request to the actual web server is made, and the BLOB is cached locally for future requests. The redundant BLOBs on the previously used cache servers are removed as per the cache eviction policies.[9]
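The following is a minimal hash-ring sketch in Python (the HashRing class and helper names are assumptions for illustration, not code from the cited papers). Servers and BLOBs share one circular hash space, lookup finds the first server clockwise with a binary search, and adding a fourth server moves only a fraction of the BLOBs:

```python
# Illustrative hash-ring sketch; class and helper names are assumptions.
import bisect
import hashlib

def ring_position(key: str) -> int:
    """Place a key on the circle as a large integer hash value."""
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, servers):
        pairs = sorted((ring_position(s), s) for s in servers)
        self._positions = [p for p, _ in pairs]   # sorted points on the circle
        self._servers = [s for _, s in pairs]

    def lookup(self, blob_id: str) -> str:
        """Return the first server at or after the BLOB's position, clockwise."""
        i = bisect.bisect_left(self._positions, ring_position(blob_id))
        return self._servers[i % len(self._servers)]   # wrap around the circle

ring = HashRing(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
blobs = [f"blob-{i}" for i in range(10_000)]
before = {b: ring.lookup(b) for b in blobs}
ring = HashRing(["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"])  # add a server
moved = sum(1 for b in blobs if ring.lookup(b) != before[b])
print(f"{moved / len(blobs):.0%} of BLOBs moved")  # roughly 1/4 on average
```

With only one point per server the split between servers can be quite uneven; the virtual-node technique described under "Variance reduction" below smooths this out.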

Implementation

Let h_b(x) and h_s(x) be the hash functions used for the BLOB and the server's unique identifier respectively. In practice, a binary search tree (BST) is used to dynamically maintain the server IDs within a cluster or hashring, and tree traversal is used to find the successor or minimum within the BST.

Inserting x into the cluster
Let β be the hash value of a BLOB such that h_b(x) = β % 360, where x ∈ BLOB and h_b(x) = ζ. To insert x, find the successor of ζ in the BST of server IDs. If ζ is larger than all of the server IDs, the BLOB is placed in the server with the smallest server ID value.
Deleting x from the cluster
Find the successor of ζ in the BST and remove the BLOB from the returned server ID. If ζ has no successor, remove the BLOB from the smallest of the server IDs.[10]
Insert a server into the cluster
Let Φ be the hash value of a server's identifier such that h_s(x) = Φ % 360, where x ∈ {IP address, UUID} and h_s(x) = θ. Move all the BLOBs whose hash value is smaller than θ from the server whose server ID is the successor of θ. If θ is the largest of all the server IDs, move the relevant BLOBs from the smallest of the server IDs into θ.[11]
Delete a server from the cluster
Find the successor of θ in the BST and move the BLOBs from θ into its successor server. If θ doesn't have a successor, move the BLOBs into the smallest of the server IDs.[12]
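A rough Python sketch of this bookkeeping is shown below, using a sorted list of server angles in place of the BST (the Cluster class, _angle helper, and method names are assumptions, not code from the cited lecture notes):

```python
# Sketch of the operations above with a sorted list standing in for the BST;
# names (Cluster, _angle, insert_server, ...) are illustrative assumptions.
import bisect
import hashlib

def _angle(key: str) -> float:
    """Hash a key onto the circle as an angle in [0, 360)."""
    return int(hashlib.sha256(key.encode()).hexdigest(), 16) % 360_000 / 1000.0

class Cluster:
    def __init__(self):
        self._angles = []    # sorted server angles (plays the role of the BST)
        self._store = {}     # server angle -> set of BLOB ids held there

    def _successor(self, angle: float) -> float:
        """First server angle at or after `angle`, wrapping to the smallest."""
        i = bisect.bisect_left(self._angles, angle) % len(self._angles)
        return self._angles[i]

    def insert_blob(self, blob_id: str) -> None:
        # Assumes the cluster already has at least one server.
        self._store[self._successor(_angle(blob_id))].add(blob_id)

    def delete_blob(self, blob_id: str) -> None:
        self._store[self._successor(_angle(blob_id))].discard(blob_id)

    def insert_server(self, server_id: str) -> None:
        theta = _angle(server_id)
        self._store[theta] = set()
        if self._angles:
            succ = self._successor(theta)        # server that owned theta's arc
            bisect.insort(self._angles, theta)
            moved = {b for b in self._store[succ]
                     if self._successor(_angle(b)) == theta}
            self._store[succ] -= moved           # hand over only the affected BLOBs
            self._store[theta] |= moved
        else:
            self._angles.append(theta)

    def delete_server(self, server_id: str) -> None:
        theta = _angle(server_id)
        orphans = self._store.pop(theta)
        self._angles.remove(theta)
        if self._angles:                         # if it was the last server, BLOBs are dropped
            self._store[self._successor(theta)] |= orphans
```

Production implementations typically use a balanced BST, skip list, or sorted array so that successor queries stay logarithmic as servers come and go.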

Variance reduction

To avoid skewness of multiple nodes within the radian, which happens due to the lack of a uniform distribution of the servers within the cluster, multiple labels are used. Those duplicate labels are called "virtual nodes", i.e. multiple labels which point to a single "real" label or server within the cluster. The number of virtual nodes or duplicate labels used for a particular server within a cluster is called the "weight" of that particular server.[13]
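A short Python sketch of virtual nodes follows (the WeightedRing class and the "#replica" labelling scheme are assumptions for illustration): each physical server is hashed onto the circle once per unit of weight, so a server with twice the weight owns roughly twice the keys.

```python
# Virtual-node sketch; class name and labelling scheme are assumptions.
import bisect
import hashlib

def ring_position(key: str) -> int:
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

class WeightedRing:
    def __init__(self, weights):                 # e.g. {"10.0.0.1": 100, ...}
        points = []
        for server, weight in weights.items():
            for replica in range(weight):        # one label per unit of weight
                points.append((ring_position(f"{server}#{replica}"), server))
        points.sort()
        self._positions = [p for p, _ in points]
        self._owners = [s for _, s in points]

    def lookup(self, blob_id: str) -> str:
        i = bisect.bisect_left(self._positions, ring_position(blob_id))
        return self._owners[i % len(self._owners)]

# A server with weight 200 receives roughly twice as many BLOBs as one with 100.
ring = WeightedRing({"10.0.0.1": 100, "10.0.0.2": 100, "10.0.0.3": 200})
```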

Practical extensions

A number of extensions to the basic technique are needed to use consistent hashing effectively for load balancing in practice. In the basic scheme above, if a server fails, all its BLOBs are reassigned to the next server in clockwise order, potentially doubling the load of that server. This may not be desirable. To ensure a more even redistribution of BLOBs on server failure, each server can be hashed to multiple locations on the unit circle. When a server fails, the BLOBs assigned to each of its replicas on the unit circle will get reassigned to a different server in clockwise order, thus redistributing the BLOBs more evenly. Another extension concerns a situation where a single BLOB gets "hot", is accessed a large number of times, and has to be hosted on multiple servers. In this situation, the BLOB may be assigned to multiple contiguous servers by traversing the unit circle in clockwise order. A more complex practical consideration arises when two BLOBs are hashed near each other in the unit circle and both get "hot" at the same time. In this case, both BLOBs will use the same set of contiguous servers in the unit circle. This situation can be ameliorated by each BLOB choosing a different hash function for mapping servers to the unit circle.[2]
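For the "hot" BLOB case, here is a sketch of assigning a BLOB to several contiguous servers, reusing the WeightedRing and ring_position helpers from the virtual-node sketch above (the replica_servers function name and signature are assumptions):

```python
# Sketch only; depends on WeightedRing / ring_position defined earlier.
import bisect

def replica_servers(ring, blob_id: str, k: int) -> list:
    """Return up to k distinct servers, walking clockwise from the BLOB."""
    i = bisect.bisect_left(ring._positions, ring_position(blob_id))
    distinct = len(set(ring._owners))
    owners, n = [], len(ring._owners)
    while len(owners) < min(k, distinct):
        server = ring._owners[i % n]
        if server not in owners:      # skip extra virtual nodes of servers already chosen
            owners.append(server)
        i += 1
    return owners
```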

Comparison with rendezvous hashing and other alternatives

Rendezvous hashing, designed in 1996, is a simpler and more general technique, and permits fully distributed agreement on a set of k options out of a possible set of n options. It can in fact be shown that consistent hashing is a special case of rendezvous hashing. Because of its simplicity and generality, rendezvous hashing is now being used in place of consistent hashing in many applications.
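For comparison, a minimal highest-random-weight (HRW) sketch in Python (function names are assumptions): each BLOB scores every server independently and picks the highest-scoring one, so no ring or sorted structure is needed, and taking the k highest scores gives agreement on k options.

```python
# Minimal rendezvous (HRW) hashing sketch; function names are assumptions.
import hashlib

def hrw_score(server: str, blob_id: str) -> int:
    """Pseudo-random weight of a (server, BLOB) pair."""
    return int(hashlib.sha256(f"{server}|{blob_id}".encode()).hexdigest(), 16)

def hrw_owners(blob_id: str, servers: list, k: int = 1) -> list:
    """The k servers with the highest scores for this BLOB."""
    return sorted(servers, key=lambda s: hrw_score(s, blob_id), reverse=True)[:k]

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
print(hrw_owners("blob-42", servers))
# Removing a server reassigns only the BLOBs whose top choice was that server;
# each lookup costs O(n) versus O(log n) for a ring with binary search.
```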

If key values will always increase monotonically, an alternative approach using a hash table with monotonic keys may be more suitable than consistent hashing.[citation needed]

Complexity

Asymptotic time complexities for N nodes (or slots) and K keys:

                   Classic hash table    Consistent hashing
  add a node       O(K)                  O(K/N + log N)
  remove a node    O(K)                  O(K/N + log N)
  lookup a key     O(1)                  O(log N)
  add a key        O(1)                  O(log N)
  remove a key     O(1)                  O(log N)

The O(K/N) term is the average cost of redistributing keys, and the O(log N) complexity for consistent hashing comes from the fact that a binary search among the node angles is required to find the next node on the ring.[citation needed]

Examples

Known examples of consistent hashing use include:

  • Couchbase automated data partitioning[14]
  • OpenStack's Object Storage Service Swift[15]
  • Partitioning component of Amazon's storage system Dynamo[16]
  • Data partitioning in Apache Cassandra[17]
  • Data partitioning in ScyllaDB[18]
  • Data partitioning in Voldemort[19]
  • Akka's consistent hashing router[20]
  • Riak, a distributed key-value database[21]
  • Gluster, a network-attached storage file system[22]
  • Akamai content delivery network[23]
  • Discord chat application[24]
  • Load balancing gRPC requests to a distributed cache in SpiceDB[25]
  • Chord algorithm[26]
  • MinIO object storage system[27]

References

  1. ^ a b Karger, D.; Lehman, E.; Leighton, T.; Panigrahy, R.; Levine, M.; Lewin, D. (1997). Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web. Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing. ACM Press New York, NY, USA. pp. 654–663. doi:10.1145/258533.258660.
  2. ^ a b c Bruce Maggs and Ramesh Sitaraman (2015). "Algorithmic nuggets in content delivery" (PDF). ACM SIGCOMM Computer Communication Review. 45 (3).
  3. ^ Roughgarden & Valiant 2021, p. 2.
  4. ^ Roughgarden & Valiant 2021, p. 7.
  5. ^ Roughgarden & Valiant 2021, p. 8.
  6. ^ I. Stoica et al., "Chord: a scalable peer-to-peer lookup protocol for Internet applications," in IEEE/ACM Transactions on Networking, vol. 11, no. 1, pp. 17–32, Feb. 2003, doi: 10.1109/TNET.2002.808407.
  7. ^ Nygren, E.; Sitaraman, R. K.; Sun, J. (2010). "The Akamai Network: A Platform for High-Performance Internet Applications" (PDF). ACM SIGOPS Operating Systems Review. 44 (3): 2–19. doi:10.1145/1842733.1842736. S2CID 207181702. Archived (PDF) from the original on November 30, 2022. Retrieved August 29, 2023.
  8. ^ Karger, D.; Sherman, A.; Berkheimer, A.; Bogstad, B.; Dhanidina, R.; Iwamoto, K.; Kim, B.; Matkins, L.; Yerushalmi, Y. (1999). "Web Caching with Consistent Hashing". Computer Networks. 31 (11): 1203–1213. doi:10.1016/S1389-1286(99)00055-9. Archived from the original on 2008-07-21. Retrieved 2008-02-05.
  9. ^ Roughgarden & Valiant 2021, p. 6.
  10. ^ Moitra 2016, p. 2.
  11. ^ Moitra 2016, p. 2–3.
  12. ^ Moitra 2016, p. 3.
  13. ^ Roughgarden & Valiant 2021, p. 6–7.
  14. ^ "What Exactly Is Membase?". 16 December 2014. Retrieved 2020-10-29.
  15. ^ Holt, Greg (February 2011). "Building a Consistent Hashing Ring". openstack.org. Retrieved 2019-11-17.
  16. ^ DeCandia, G.; Hastorun, D.; Jampani, M.; Kakulapati, G.; Lakshman, A.; Pilchin, A.; Sivasubramanian, S.; Vosshall, P.; Vogels, Werner (2007). "Dynamo" (PDF). ACM SIGOPS Operating Systems Review. 41 (6): 205–220. doi:10.1145/1323293.1294281. Retrieved 2018-06-07.
  17. ^ Lakshman, Avinash; Malik, Prashant (2010). "Cassandra: a decentralized structured storage system". ACM SIGOPS Operating Systems Review. 44 (2): 35–40. doi:10.1145/1773912.1773922. S2CID 916681.
  18. ^ "NoSQL Comparison: MongoDB vs ScyllaDB". benchant.com. Retrieved 21 March 2024.
  19. ^ "Design". www.project-voldemort.com. Archived from the original on 9 February 2015. Retrieved 9 February 2015. Consistent hashing is a technique that avoids these problems, and we use it to compute the location of each key on the cluster.
  20. ^ "Akka Routing". akka.io. Retrieved 2019-11-16.
  21. ^ "Riak Concepts". Archived from the original on 2015-09-19. Retrieved 2016-12-06.
  22. ^ "GlusterFS Algorithms: Distribution". gluster.org. 2012-03-01. Retrieved 2019-11-16.
  23. ^ Roughgarden, Tim; Valiant, Gregory (2016-03-28). "Modern Algorithmic Toolbox" (PDF). stanford.edu. Retrieved 2019-11-17.
  24. ^ Vishnevskiy, Stanislav (2017-07-06). "How Discord Scaled Elixir to 5,000,000 Concurrent Users". Retrieved 2022-08-16.
  25. ^ "Consistent Hash Load Balancing for gRPC". 24 November 2021. Retrieved 2023-09-04.
  26. ^ Stoica, I.; Morris, R.; Liben-Nowell, D.; Karger, D.; Kaashoek, M. F.; Dabek, F.; Balakrishnan, H. (25 Feb 2003). "Chord: a scalable peer-to-peer lookup protocol for Internet applications". IEEE/ACM Transactions on Networking. 11 (1): 17–32. doi:10.1109/TNET.2002.808407. S2CID 221276912.
  27. ^ "MinIO Versioning, Metadata and Storage Deep Dive". Retrieved 2023-10-24.

Works cited

  • Moitra, Ankur (10 February 2016). "Advanced Algorithms, 6.854" (PDF). Massachusetts Institute of Technology. Archived (PDF) from the original on 13 April 2021. Retrieved 8 October 2021.
  • Roughgarden, Tim; Valiant, Gregory (28 March 2021). "The Modern Algorithmic Toolbox, Introduction to Consistent Hashing" (PDF). Stanford University. Archived (PDF) from the original on 25 July 2021. Retrieved 7 October 2021.

External links

  • Understanding Consistent hashing
  • Consistent hashing by Michael Nielsen, June 3, 2009
  • Consistent Hashing: Danny Lewin and the Creation of Akamai
  • Jump Consistent Hashing: A Fast, Minimal Memory, Consistent Hash Algorithm
  • Rendezvous Hashing: an alternative to Consistent Hashing
  • Implementations in various languages (C, C++, C#, Erlang, Go, Java, PHP, Ruby, Python, Perl)
