
Lock (computer science)

In computer science, a lock or mutex (from mutual exclusion) is a synchronization primitive: a mechanism that enforces limits on access to a resource when there are many threads of execution. A lock is designed to enforce a mutual exclusion concurrency control policy, and with a variety of possible methods there exist multiple unique implementations for different applications.

Types

Generally, locks are advisory locks, where each thread cooperates by acquiring the lock before accessing the corresponding data. Some systems also implement mandatory locks, where attempting unauthorized access to a locked resource will force an exception in the entity attempting to make the access.

The simplest type of lock is a binary semaphore. It provides exclusive access to the locked data. Other schemes also provide shared access for reading data. Other widely implemented access modes are exclusive, intend-to-exclude and intend-to-upgrade.

Another way to classify locks is by what happens when the lock strategy prevents the progress of a thread. Most locking designs block the execution of the thread requesting the lock until it is allowed to access the locked resource. With a spinlock, the thread simply waits ("spins") until the lock becomes available. This is efficient if threads are blocked for a short time, because it avoids the overhead of operating system process re-scheduling. It is inefficient if the lock is held for a long time, or if the progress of the thread that is holding the lock depends on preemption of the locked thread.

Locks typically require hardware support for efficient implementation. This support usually takes the form of one or more atomic instructions such as "test-and-set", "fetch-and-add" or "compare-and-swap". These instructions allow a single process to test if the lock is free, and if free, acquire the lock in a single atomic operation.
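As an illustration, a minimal test-and-set spinlock can be sketched with the C11 <stdatomic.h> facilities. This is a sketch only, not a production lock: real spinlocks add backoff, fairness, and a blocking fallback.

#include <stdatomic.h>

typedef struct {
    atomic_flag held;
} spinlock;

void spinlock_init(spinlock *l) {
    atomic_flag_clear(&l->held);
}

void spinlock_acquire(spinlock *l) {
    // atomic_flag_test_and_set atomically sets the flag and returns its
    // previous value: spin while another thread already held the lock.
    while (atomic_flag_test_and_set_explicit(&l->held, memory_order_acquire)) {
        // spin
    }
}

void spinlock_release(spinlock *l) {
    atomic_flag_clear_explicit(&l->held, memory_order_release);
}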

Uniprocessor architectures have the option of using uninterruptible sequences of instructions—using special instructions or instruction prefixes to disable interrupts temporarily—but this technique does not work for multiprocessor shared-memory machines. Proper support for locks in a multiprocessor environment can require quite complex hardware or software support, with substantial synchronization issues.

An atomic operation is required because of concurrency, where more than one task may execute the same logic at the same time. For example, consider the following C code:

if (lock == 0) {
    // lock is free, set it
    lock = myPID;
}

The above example does not guarantee that the task has the lock, since more than one task can be testing the lock at the same time. Since both tasks will detect that the lock is free, both tasks will attempt to set the lock, not knowing that the other task is also setting the lock. Dekker's or Peterson's algorithm are possible substitutes if atomic locking operations are not available.
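Where an atomic compare-and-swap is available, the race above can be closed by making the test and the assignment one indivisible step. A minimal C11 sketch, reusing the lock and myPID names from the fragment above:

#include <stdatomic.h>
#include <stdbool.h>

atomic_int lock = 0;  // 0 means the lock is free

bool try_acquire(int myPID) {
    int expected = 0;
    // Succeeds only if lock still contains 0; the test and the store happen
    // as a single atomic operation, so two tasks cannot both acquire it.
    return atomic_compare_exchange_strong(&lock, &expected, myPID);
}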

Careless use of locks can result in deadlock or livelock. A number of strategies can be used to avoid or recover from deadlocks or livelocks, both at design-time and at run-time. (The most common strategy is to standardize the lock acquisition sequences so that combinations of inter-dependent locks are always acquired in a specifically defined "cascade" order.)

Some languages support locks syntactically. An example in C# follows:

public class Account // This is a monitor of an account
{
    private decimal _balance = 0;
    private object _balanceLock = new object();

    public void Deposit(decimal amount)
    {
        // Only one thread at a time may execute this statement.
        lock (_balanceLock)
        {
            _balance += amount;
        }
    }

    public void Withdraw(decimal amount)
    {
        // Only one thread at a time may execute this statement.
        lock (_balanceLock)
        {
            _balance -= amount;
        }
    }
}

The code lock(this) can lead to problems if the instance can be accessed publicly.[1]

Similar to Java, C# can also synchronize entire methods by using the MethodImplOptions.Synchronized attribute.[2][3]

[MethodImpl(MethodImplOptions.Synchronized)]
public void SomeMethod()
{
    // do stuff
}

Granularity

Before discussing lock granularity, one needs to understand three concepts about locks:

  • lock overhead: the extra resources for using locks, like the memory space allocated for locks, the CPU time to initialize and destroy locks, and the time for acquiring or releasing locks. The more locks a program uses, the more overhead associated with the usage;
  • lock contention: this occurs whenever one process or thread attempts to acquire a lock held by another process or thread. The more fine-grained the available locks, the less likely one process/thread will request a lock held by the other. (For example, locking a row rather than the entire table, or locking a cell rather than the entire row);
  • deadlock: the situation when each of at least two tasks is waiting for a lock that the other task holds. Unless something is done, the two tasks will wait forever; a minimal sketch of this situation follows the list.
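A minimal sketch of the deadlock case, assuming POSIX threads: one task takes lock A then B while the other takes B then A, so each can end up waiting forever for the lock the other holds.

#include <pthread.h>

pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

void *task1(void *arg) {
    pthread_mutex_lock(&A);
    pthread_mutex_lock(&B);   // blocks forever if task2 already holds B
    // ... critical section ...
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

void *task2(void *arg) {
    pthread_mutex_lock(&B);
    pthread_mutex_lock(&A);   // blocks forever if task1 already holds A
    // ... critical section ...
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}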

There is a tradeoff between decreasing lock overhead and decreasing lock contention when choosing the number of locks in synchronization.

An important property of a lock is its granularity. The granularity is a measure of the amount of data the lock is protecting. In general, choosing a coarse granularity (a small number of locks, each protecting a large segment of data) results in less lock overhead when a single process is accessing the protected data, but worse performance when multiple processes are running concurrently. This is because of increased lock contention. The more coarse the lock, the higher the likelihood that the lock will stop an unrelated process from proceeding. Conversely, using a fine granularity (a larger number of locks, each protecting a fairly small amount of data) increases the overhead of the locks themselves but reduces lock contention. Granular locking where each process must hold multiple locks from a common set of locks can create subtle lock dependencies. This subtlety can increase the chance that a programmer will unknowingly introduce a deadlock.[citation needed]
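A sketch of this tradeoff, assuming POSIX threads and an illustrative fixed-size table: a single coarse lock serializes every access, while one lock per row costs more memory and initialization but lets updates to different rows proceed in parallel.

#include <pthread.h>

#define ROWS 1024

struct coarse_table {
    pthread_mutex_t lock;         // one lock: low overhead, high contention
    int rows[ROWS];
};

struct fine_table {
    pthread_mutex_t locks[ROWS];  // one lock per row: more overhead, less contention
    int rows[ROWS];
};

void fine_update(struct fine_table *t, int row, int value) {
    pthread_mutex_lock(&t->locks[row]);   // only this row is unavailable
    t->rows[row] = value;
    pthread_mutex_unlock(&t->locks[row]);
}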

In a database management system, for example, a lock could protect, in order of decreasing granularity, part of a field, a field, a record, a data page, or an entire table. Coarse granularity, such as using table locks, tends to give the best performance for a single user, whereas fine granularity, such as record locks, tends to give the best performance for multiple users.

Database locks

Main article: Lock (database)

Database locks can be used as a means of ensuring transaction synchronicity, i.e. when making transaction processing concurrent (interleaving transactions), using two-phase locking ensures that the concurrent execution of the transaction turns out to be equivalent to some serial ordering of the transaction. However, deadlocks become an unfortunate side-effect of locking in databases. Deadlocks are either prevented by pre-determining the locking order between transactions or are detected using waits-for graphs. An alternative to locking for database synchronicity while avoiding deadlocks involves the use of totally ordered global timestamps.
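A minimal sketch of the two-phase discipline, assuming POSIX threads and two hypothetical record locks: the transaction acquires every lock it needs before releasing any (the growing phase) and releases them only after its work is complete (the shrinking phase).

#include <pthread.h>

pthread_mutex_t record1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t record2 = PTHREAD_MUTEX_INITIALIZER;

void transaction(void) {
    // Growing phase: take all needed locks, in a fixed global order
    // so that two transactions cannot deadlock on each other.
    pthread_mutex_lock(&record1);
    pthread_mutex_lock(&record2);

    // ... read and write both records ...

    // Shrinking phase: release only after all work is done, so the
    // interleaved execution is equivalent to some serial ordering.
    pthread_mutex_unlock(&record2);
    pthread_mutex_unlock(&record1);
}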

There are mechanisms employed to manage the actions of multiple concurrent users on a database—the purpose is to prevent lost updates and dirty reads. The two types of locking are pessimistic locking and optimistic locking:

  • Pessimistic locking: a user who reads a record with the intention of updating it places an exclusive lock on the record to prevent other users from manipulating it. This means no one else can manipulate that record until the user releases the lock. The downside is that users can be locked out for a very long time, thereby slowing the overall system response and causing frustration.
Where to use pessimistic locking: this is mainly used in environments where data contention (the degree to which users request data from the database system at any one time) is heavy; where the cost of protecting data through locks is less than the cost of rolling back transactions if concurrency conflicts occur. Pessimistic concurrency is best implemented when lock times will be short, as in programmatic processing of records. Pessimistic concurrency requires a persistent connection to the database and is not a scalable option when users are interacting with data, because records might be locked for relatively long periods of time. It is not appropriate for use in Web application development.
  • Optimistic locking: this allows multiple concurrent users access to the database whilst the system keeps a copy of the initial read made by each user. When a user wants to update a record, the application determines whether another user has changed the record since it was last read. The application does this by comparing the initial read held in memory to the database record to verify any changes made to the record. Any discrepancies between the initial read and the database record violate concurrency rules and hence cause the system to disregard any update request; an error message is generated and the user is asked to start the update process again. Optimistic locking improves database performance by reducing the amount of locking required, thereby reducing the load on the database server. It works efficiently with tables that require limited updates since no users are locked out. However, some updates may fail; constant update failures due to high volumes of update requests from multiple concurrent users can be frustrating for users. A minimal sketch of the check-then-update pattern follows this list.
Where to use optimistic locking: this is appropriate in environments where there is low contention for data, or where read-only access to data is required. Optimistic concurrency is used extensively in .NET to address the needs of mobile and disconnected applications,[4] where locking data rows for prolonged periods of time would be infeasible. Also, maintaining record locks requires a persistent connection to the database server, which is not possible in disconnected applications.
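A minimal sketch of the optimistic check-then-update pattern, reduced to a single-field record guarded by a C11 compare-and-swap; a real system would compare a version column or timestamp held with the initial read rather than the value itself.

#include <stdatomic.h>
#include <stdbool.h>

atomic_int record = 0;   // hypothetical single-field record

// Install new_value only if the record still holds the value seen at the
// initial read; otherwise report failure so the caller can re-read and
// start the update process again, as described above.
bool optimistic_update(int initially_read, int new_value) {
    int expected = initially_read;
    return atomic_compare_exchange_strong(&record, &expected, new_value);
}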

Disadvantages

Lock-based resource protection and thread/process synchronization have many disadvantages:

  • Contention: some threads/processes have to wait until a lock (or a whole set of locks) is released. If one of the threads holding a lock dies, stalls, blocks, or enters an infinite loop, other threads waiting for the lock may wait indefinitely until the computer is power cycled.
  • Overhead: the use of locks adds overhead for each access to a resource, even when the chances for collision are very rare. (However, any chance for such collisions is a race condition.)
  • Debugging: bugs associated with locks are time dependent and can be very subtle and extremely hard to replicate, such as deadlocks.
  • Instability: the optimal balance between lock overhead and lock contention can be unique to the problem domain (application) and sensitive to design, implementation, and even low-level system architectural changes. These balances may change over the life cycle of an application and may entail tremendous changes to update (re-balance).
  • Composability: locks are only composable (e.g., managing multiple concurrent locks in order to atomically delete item X from table A and insert X into table B) with relatively elaborate (overhead) software support and perfect adherence by applications programming to rigorous conventions.
  • Priority inversion: a low-priority thread/process holding a common lock can prevent high-priority threads/processes from proceeding. Priority inheritance can be used to reduce priority-inversion duration. The priority ceiling protocol can be used on uniprocessor systems to minimize the worst-case priority-inversion duration, as well as prevent deadlock.
  • Convoying: all other threads have to wait if a thread holding a lock is descheduled due to a time-slice interrupt or page fault.

Some concurrency control strategies avoid some or all of these problems. For example, a funnel or serializing tokens can avoid the biggest problem: deadlocks. Alternatives to locking include non-blocking synchronization methods, like lock-free programming techniques and transactional memory. However, such alternative methods often require that the actual lock mechanisms be implemented at a more fundamental level of the operating software. Therefore, they may only relieve the application level from the details of implementing locks, with the problems listed above still needing to be dealt with beneath the application.
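For instance, a shared counter can be kept lock-free with a single atomic fetch-and-add, so no thread ever blocks on a lock (a minimal C11 sketch):

#include <stdatomic.h>

atomic_long counter = 0;

void hit(void) {
    // One atomic instruction replaces the lock/increment/unlock sequence.
    atomic_fetch_add(&counter, 1);
}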

In most cases, proper locking depends on the CPU providing a method of atomic instruction stream synchronization (for example, the addition or deletion of an item into a pipeline requires that all contemporaneous operations needing to add or delete other items in the pipe be suspended during the manipulation of the memory content required to add or delete the specific item). Therefore, an application can often be more robust when it recognizes the burdens it places upon an operating system and can gracefully recognize the reporting of impossible demands.[citation needed]

Lack of composability

One of lock-based programming's biggest problems is that "locks don't compose": it is hard to combine small, correct lock-based modules into equally correct larger programs without modifying the modules or at least knowing about their internals. Simon Peyton Jones (an advocate of software transactional memory) gives the following example of a banking application:[5] design a class Account that allows multiple concurrent clients to deposit money to or withdraw money from an account; and give an algorithm to transfer money from one account to another. The lock-based solution to the first part of the problem is:

class Account:
    member balance: Integer
    member mutex: Lock

    method deposit(n: Integer)
        mutex.lock()
        balance ← balance + n
        mutex.unlock()

    method withdraw(n: Integer)
        deposit(−n)

The second part of the problem is much more complicated. A transfer routine that is correct for sequential programs would be

function transfer(from: Account, to: Account, amount: Integer)
    from.withdraw(amount)
    to.deposit(amount)

In a concurrent program, this algorithm is incorrect because when one thread is halfway through transfer, another might observe a state where amount has been withdrawn from the first account, but not yet deposited into the other account: money has gone missing from the system. This problem can only be fixed completely by taking locks on both accounts prior to changing either of the two accounts, but then the locks have to be taken according to some arbitrary, global ordering to prevent deadlock:

function transfer(from: Account, to: Account, amount: Integer)
    if from < to    // arbitrary ordering on the locks
        from.lock()
        to.lock()
    else
        to.lock()
        from.lock()
    from.withdraw(amount)
    to.deposit(amount)
    from.unlock()
    to.unlock()

This solution gets more complicated when more locks are involved, and the transfer function needs to know about all of the locks, so they cannot be hidden.

Language support

See also: Barrier (computer science)

Programming languages vary in their support for synchronization:

  • Ada provides protected objects that have visible protected subprograms or entries[6] as well as rendezvous.[7]
  • The ISO/IEC C standard has provided a standard mutual exclusion (locks) API since C11. The current ISO/IEC C++ standard has supported threading facilities since C++11. The OpenMP standard is supported by some compilers, and allows critical sections to be specified using pragmas. The POSIX pthread API provides lock support.[8] Visual C++ provides the synchronize attribute for methods to be synchronized, but this is specific to COM objects in the Windows architecture and the Visual C++ compiler.[9] C and C++ can easily access any native operating system locking features. A minimal C11 example appears after this list.
  • C# provides the lock keyword on a thread to ensure its exclusive access to a resource.
  • VB.NET provides a SyncLock keyword like C#'s lock keyword.
  • Java provides the keyword synchronized to lock code blocks, methods or objects[10] and libraries featuring concurrency-safe data structures.
  • Objective-C provides the keyword @synchronized[11] to put locks on blocks of code and also provides the classes NSLock,[12] NSRecursiveLock,[13] and NSConditionLock[14] along with the NSLocking protocol[15] for locking as well.
  • PHP provides file-based locking[16] as well as a Mutex class in the pthreads extension.[17]
  • Python provides a low-level mutex mechanism with a Lock class from the threading module.[18]
  • The ISO/IEC Fortran standard (ISO/IEC 1539-1:2010) provides the lock_type derived type in the intrinsic module iso_fortran_env and the lock/unlock statements since Fortran 2008.[19]
  • Ruby provides a low-level mutex object and no keyword.[20]
  • Rust provides the Mutex<T>[21] struct.[22]
  • x86 assembly provides the LOCK prefix on certain operations to guarantee their atomicity.
  • Haskell implements locking via a mutable data structure called an MVar, which can either be empty or contain a value, typically a reference to a resource. A thread that wants to use the resource ‘takes’ the value of the MVar, leaving it empty, and puts it back when it is finished. Attempting to take a resource from an empty MVar results in the thread blocking until the resource is available.[23] As an alternative to locking, an implementation of software transactional memory also exists.[24]
  • Go provides a low-level Mutex object in the standard library's sync package.[25] It can be used for locking code blocks, methods or objects.
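A minimal example of the C11 mutual-exclusion API mentioned above; note that <threads.h> is an optional part of the standard and is not shipped by every C implementation.

#include <threads.h>

mtx_t m;
long balance = 0;

int deposit(void *arg) {
    mtx_lock(&m);                 // only one thread at a time past this point
    balance += *(long *)arg;
    mtx_unlock(&m);
    return 0;
}

int main(void) {
    long amount = 10;
    mtx_init(&m, mtx_plain);
    thrd_t t;
    thrd_create(&t, deposit, &amount);
    thrd_join(t, NULL);
    mtx_destroy(&m);
    return 0;
}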

Mutexes vs. semaphores

A mutex is a locking mechanism that sometimes uses the same basic implementation as the binary semaphore. The differences between them are in how they are used. While a binary semaphore may be colloquially referred to as a mutex, a true mutex has a more specific use-case and definition, in that only the task that locked the mutex is supposed to unlock it. This constraint aims to handle some potential problems of using semaphores:

  1. Priority inversion: If the mutex knows who locked it and is supposed to unlock it, it is possible to promote the priority of that task whenever a higher-priority task starts waiting on the mutex.
  2. Premature task termination: Mutexes may also provide deletion safety, where the task holding the mutex cannot be accidentally deleted.[citation needed]
  3. Termination deadlock: If a mutex-holding task terminates for any reason, the OS can release the mutex and signal waiting tasks of this condition.
  4. Recursion deadlock: a task is allowed to lock a reentrant mutex multiple times, provided it unlocks it an equal number of times.
  5. Accidental release: An error is raised on the release of the mutex if the releasing task is not its owner; a sketch of these ownership-related behaviors follows the list.
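A sketch of these ownership-related behaviors using POSIX mutex attributes: a recursive mutex tolerates nested locking by its owner, and an error-checking mutex rejects release by a task that is not the owner.

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

int main(void) {
    pthread_mutexattr_t attr;
    pthread_mutex_t m;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&m, &attr);

    pthread_mutex_lock(&m);
    pthread_mutex_lock(&m);     // same owner: nested lock succeeds
    pthread_mutex_unlock(&m);
    pthread_mutex_unlock(&m);   // released after an equal number of unlocks
    pthread_mutex_destroy(&m);

    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&m, &attr);
    if (pthread_mutex_unlock(&m) == EPERM)   // accidental release is refused
        puts("unlock by non-owner rejected");
    pthread_mutex_destroy(&m);
    pthread_mutexattr_destroy(&attr);
    return 0;
}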

See also

  • Critical section
  • Double-checked locking
  • File locking
  • Lock-free and wait-free algorithms
  • Monitor (synchronization)
  • Mutual exclusion
  • Read/write lock pattern

References

  1. ^ "lock Statement (C# Reference)".
  2. ^ "ThreadPoolPriority, and MethodImplAttribute". MSDN. p. ??. Retrieved 2011-11-22.
  3. ^ "C# From a Java Developer's Perspective". Archived from the original on 2013-01-02. Retrieved 2011-11-22.
  4. ^ "Designing Data Tier Components and Passing Data Through Tiers". Microsoft. August 2002. Archived from the original on 2008-05-08. Retrieved 2008-05-30.
  5. ^ Peyton Jones, Simon (2007). "Beautiful concurrency" (PDF). In Wilson, Greg; Oram, Andy (eds.). Beautiful Code: Leading Programmers Explain How They Think. O'Reilly.
  6. ^ ISO/IEC 8652:2007. "Protected Units and Protected Objects". Ada 2005 Reference Manual. Retrieved 2010-02-27. A protected object provides coordinated access to shared data, through calls on its visible protected operations, which can be protected subprograms or protected entries.
  7. ^ ISO/IEC 8652:2007. "Example of Tasking and Synchronization". Ada 2005 Reference Manual. Retrieved 2010-02-27.
  8. ^ Marshall, Dave (March 1999). "Mutual Exclusion Locks". Retrieved 2008-05-30.
  9. ^ "Synchronize". msdn.microsoft.com. Retrieved 2008-05-30.
  10. ^ "Synchronization". Sun Microsystems. Retrieved 2008-05-30.
  11. ^ "Apple Threading Reference". Apple, inc. Retrieved 2009-10-17.
  12. ^ "NSLock Reference". Apple, inc. Retrieved 2009-10-17.
  13. ^ "NSRecursiveLock Reference". Apple, inc. Retrieved 2009-10-17.
  14. ^ "NSConditionLock Reference". Apple, inc. Retrieved 2009-10-17.
  15. ^ "NSLocking Protocol Reference". Apple, inc. Retrieved 2009-10-17.
  16. ^ "flock".
  17. ^ "The Mutex class". Archived from the original on 2017-07-04. Retrieved 2016-12-29.
  18. ^ Lundh, Fredrik (July 2007). "Thread Synchronization Mechanisms in Python". Archived from the original on 2020-11-01. Retrieved 2008-05-30.
  19. ^ John Reid (2010). "Coarrays in the next Fortran Standard" (PDF). Retrieved 2020-02-17.
  20. ^ "Programming Ruby: Threads and Processes". 2001. Retrieved 2008-05-30.
  21. ^ "std::sync::Mutex - Rust". doc.rust-lang.org. Retrieved 3 November 2020.
  22. ^ "Shared-State Concurrency - The Rust Programming Language". doc.rust-lang.org. Retrieved 3 November 2020.
  23. ^ Marlow, Simon (August 2013). "Basic concurrency: threads and MVars". Parallel and Concurrent Programming in Haskell. O’Reilly Media. ISBN 9781449335946.
  24. ^ Marlow, Simon (August 2013). "Software transactional memory". Parallel and Concurrent Programming in Haskell. O’Reilly Media. ISBN 9781449335946.
  25. ^ "sync package - sync - pkg.go.dev". pkg.go.dev. Retrieved 2021-11-23.

External links

  • Tutorial on Locks and Critical Sections
