Synchronization (computer science)

In computer science, synchronization refers to one of two distinct but related concepts: synchronization of processes, and synchronization of data. Process synchronization refers to the idea that multiple processes are to join up or handshake at a certain point, in order to reach an agreement or commit to a certain sequence of action. Data synchronization refers to the idea of keeping multiple copies of a dataset in coherence with one another, or to maintain data integrity. Process synchronization primitives are commonly used to implement data synchronization.

The need for synchronization

The need for synchronization does not arise merely in multi-processor systems but for any kind of concurrent processes, even in single-processor systems. Mentioned below are some of the main needs for synchronization:

Forks and Joins: When a job arrives at a fork point, it is split into N sub-jobs, which are then serviced by N tasks. After being serviced, each sub-job waits until all other sub-jobs are done processing. Then they are joined again and leave the system. Thus, parallel programming requires synchronization, as all the parallel processes wait for several other processes to occur.

Producer-Consumer: In a producer-consumer relationship, the consumer process is dependent on the producer process until the necessary data has been produced.

Exclusive use resources: When multiple processes are dependent on a resource and they need to access it at the same time, the operating system needs to ensure that only one process accesses it at a given point in time. This reduces concurrency.
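As a concrete illustration of the producer-consumer relationship described above, the following minimal Java sketch uses a bounded BlockingQueue as the shared buffer; the class name, queue capacity, and item counts are illustrative only.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) {
        // Bounded buffer shared between the producer and the consumer.
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(4);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    buffer.put(i);              // blocks while the buffer is full
                    System.out.println("produced " + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    int item = buffer.take();   // blocks until data has been produced
                    System.out.println("consumed " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}
```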

Thread or process synchronization

 
Figure 1: Three processes accessing a shared resource (critical section) simultaneously.

Thread synchronization is defined as a mechanism which ensures that two or more concurrent processes or threads do not simultaneously execute some particular program segment known as a critical section. Processes' access to a critical section is controlled by using synchronization techniques. When one thread starts executing the critical section (the serialized segment of the program), the other thread should wait until the first thread finishes. If proper synchronization techniques[1] are not applied, it may cause a race condition, where the values of variables may be unpredictable and vary depending on the timings of context switches of the processes or threads.
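A minimal Java sketch of such a race condition: two threads increment a shared counter, once without synchronization and once inside a critical section guarded by a lock object. The names and iteration counts are illustrative.

```java
public class RaceConditionDemo {
    static int unsafeCounter = 0;                  // updated without synchronization
    static int safeCounter = 0;                    // updated inside a critical section
    static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCounter++;                   // read-modify-write: a race condition
                synchronized (lock) {              // critical section: one thread at a time
                    safeCounter++;
                }
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // unsafeCounter is often less than 200000; safeCounter is always 200000.
        System.out.println("unsafe: " + unsafeCounter + ", safe: " + safeCounter);
    }
}
```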

For example, suppose that there are three processes, namely 1, 2, and 3. All three of them are concurrently executing, and they need to share a common resource (critical section) as shown in Figure 1. Synchronization should be used here to avoid any conflicts for accessing this shared resource. Hence, when Process 1 and 2 both try to access that resource, it should be assigned to only one process at a time. If it is assigned to Process 1, the other process (Process 2) needs to wait until Process 1 frees that resource (as shown in Figure 2).

 
Figure 2: A process accessing a shared resource if available, based on some synchronization technique.

Another synchronization requirement which needs to be considered is the order in which particular processes or threads should be executed. For example, one cannot board a plane before buying a ticket. Similarly, one cannot check e-mails before validating the appropriate credentials (for example, user name and password). In the same way, an ATM will not provide any service until it receives a correct PIN.
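As an illustrative sketch of such ordering constraints (the step names are only placeholders), the following Java code uses a CountDownLatch so that the e-mail-checking step cannot run before the credential-validation step has completed.

```java
import java.util.concurrent.CountDownLatch;

public class OrderingDemo {
    public static void main(String[] args) {
        // The latch is released only after the credentials have been validated.
        CountDownLatch credentialsValidated = new CountDownLatch(1);

        Thread mailReader = new Thread(() -> {
            try {
                credentialsValidated.await();      // wait for the required earlier step
                System.out.println("checking e-mail");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        mailReader.start();
        System.out.println("validating user name and password");
        credentialsValidated.countDown();          // allow the dependent step to proceed
    }
}
```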

Other than mutual exclusion, synchronization also deals with the following:

  • deadlock, which occurs when many processes are waiting for a shared resource (critical section) which is being held by some other process. In this case, the processes just keep waiting and execute no further (a minimal sketch of this situation is given after this list);
  • starvation, which occurs when a process is waiting to enter the critical section, but other processes monopolize the critical section, and the first process is forced to wait indefinitely;
  • priority inversion, which occurs when a high-priority process is in the critical section, and it is interrupted by a medium-priority process. This violation of priority rules can happen under certain circumstances and may lead to serious consequences in real-time systems;
  • busy waiting, which occurs when a process frequently polls to determine if it has access to a critical section. This frequent polling robs processing time from other processes.
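A minimal Java sketch of the deadlock case from the list above: two threads acquire two locks in opposite orders, so each ends up waiting for a resource held by the other. The resource names and timings are illustrative.

```java
public class DeadlockDemo {
    static final Object resourceA = new Object();
    static final Object resourceB = new Object();

    public static void main(String[] args) {
        Thread t1 = new Thread(() -> {
            synchronized (resourceA) {
                sleep(100);                         // give the other thread time to lock B
                synchronized (resourceB) {          // typically waits forever: B is held by t2
                    System.out.println("t1 acquired both");
                }
            }
        });
        Thread t2 = new Thread(() -> {
            synchronized (resourceB) {
                sleep(100);
                synchronized (resourceA) {          // typically waits forever: A is held by t1
                    System.out.println("t2 acquired both");
                }
            }
        });
        t1.start();
        t2.start();                                 // the program usually hangs: a deadlock
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```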

Minimizing synchronization

One of the challenges for exascale algorithm design is to minimize or reduce synchronization. Synchronization takes more time than computation, especially in distributed computing. Reducing synchronization has drawn the attention of computer scientists for decades, and it has recently become an increasingly significant problem as the gap between the improvement of computing power and communication latency widens. Experiments have shown that (global) communications due to synchronization on distributed computers take a dominant share of the time in a sparse iterative solver.[2] This problem is receiving increasing attention after the emergence of a new benchmark metric, the High Performance Conjugate Gradient (HPCG),[3] for ranking the top 500 supercomputers.

Classic problems of synchronization

The following are some classic problems of synchronization:

  • the producer–consumer problem (also called the bounded-buffer problem);
  • the readers–writers problem;
  • the dining philosophers problem.

These problems are used to test nearly every newly proposed synchronization scheme or primitive.

Hardware synchronization

Many systems provide hardware support for critical section code.

A single-processor or uniprocessor system could disable interrupts to execute the currently running code without preemption, but this approach is very inefficient on multiprocessor systems.[4] "The key ability we require to implement synchronization in a multiprocessor is a set of hardware primitives with the ability to atomically read and modify a memory location. Without such a capability, the cost of building basic synchronization primitives will be too high and will increase as the processor count increases. There are a number of alternative formulations of the basic hardware primitives, all of which provide the ability to atomically read and modify a location, together with some way to tell if the read and write were performed atomically. These hardware primitives are the basic building blocks that are used to build a wide variety of user-level synchronization operations, including things such as locks and barriers. In general, architects do not expect users to employ the basic hardware primitives, but instead expect that the primitives will be used by system programmers to build a synchronization library, a process that is often complex and tricky."[5] Much modern hardware provides special atomic instructions, such as test-and-set of a memory word or compare-and-swap of the contents of two memory words.
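As an illustrative sketch (not the hardware instructions themselves), the following Java code uses AtomicInteger.compareAndSet, which is typically implemented on top of such compare-and-swap primitives, in a user-level retry loop.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CompareAndSwapDemo {
    static final AtomicInteger counter = new AtomicInteger(0);

    // Atomically add one using a compare-and-swap retry loop.
    static void increment() {
        while (true) {
            int current = counter.get();
            int next = current + 1;
            // compareAndSet succeeds only if no other thread changed the value in between.
            if (counter.compareAndSet(current, next)) {
                return;
            }
            // Otherwise another thread won the race; retry with the fresh value.
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> { for (int i = 0; i < 100_000; i++) increment(); };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.get());   // always 200000
    }
}
```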

Synchronization strategies in programming languages

In Java, to prevent thread interference and memory consistency errors, blocks of code are wrapped into synchronized (lock_object) sections. This forces any thread to acquire the said lock object before it can execute the block. The lock is automatically released when the thread which acquired the lock, and is then executing the block, leaves the block or enters the waiting state within the block. Any variable updates, made by a thread in a synchronized block, become visible to other threads when they similarly acquire the lock and execute the block.

Java synchronized blocks, in addition to enabling mutual exclusion and memory consistency, enable signaling, i.e., sending events from threads which have acquired the lock and are executing the code block to those which are waiting for the lock within the block. This means that Java synchronized sections combine the functionality of mutexes and events. Such a primitive is known as a synchronization monitor.

Any object may be used as a lock/monitor in Java. The declaring object is a lock object when the whole method is marked with synchronized.
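A minimal sketch of a Java synchronized block used as a monitor, combining mutual exclusion with signaling via wait and notifyAll; the class and field names are illustrative.

```java
public class MonitorSignalingDemo {
    private final Object lock = new Object();
    private boolean dataReady = false;

    // Waits inside the synchronized block until another thread signals.
    public void awaitData() throws InterruptedException {
        synchronized (lock) {
            while (!dataReady) {
                lock.wait();        // releases the lock and enters the waiting state
            }
            System.out.println("data is ready");
        }
    }

    // Signals any threads that are waiting on the same monitor.
    public void publishData() {
        synchronized (lock) {
            dataReady = true;
            lock.notifyAll();       // waiting threads wake up and re-acquire the lock
        }
    }

    public static void main(String[] args) {
        MonitorSignalingDemo demo = new MonitorSignalingDemo();
        new Thread(() -> {
            try { demo.awaitData(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }).start();
        demo.publishData();
    }
}
```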

The .NET Framework has synchronization primitives. "Synchronization is designed to be cooperative, demanding that every thread or process follow the synchronization mechanism before accessing protected resources (critical section) for consistent results." In .NET, locking, signaling, lightweight synchronization types, spin wait, and interlocked operations are some of the mechanisms related to synchronization.[6]

Implementation of synchronization

Spinlock

Another effective way of implementing synchronization is by using spinlocks. Before accessing any shared resource or piece of code, every processor checks a flag. If the flag is reset, the processor sets the flag and continues executing the thread. But if the flag is set (locked), the thread keeps spinning in a loop, repeatedly checking whether the flag has been reset. Spinlocks are effective only if the flag is reset after relatively few cycles; otherwise they can lead to performance issues, as many processor cycles are wasted waiting.[7]
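A minimal, illustrative spinlock sketch built on Java's AtomicBoolean; a production spinlock would typically add back-off, and the names here are not from any particular library.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    // false = unlocked (flag reset), true = locked (flag set).
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Keep spinning until we succeed in atomically changing false -> true.
        while (!locked.compareAndSet(false, true)) {
            // Busy wait: burns processor cycles while the flag stays set.
        }
    }

    public void unlock() {
        locked.set(false);          // reset the flag so another thread can enter
    }

    public static void main(String[] args) throws InterruptedException {
        SpinLock spin = new SpinLock();
        int[] counter = {0};
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                spin.lock();
                counter[0]++;       // protected by the spinlock
                spin.unlock();
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter[0]);   // always 200000
    }
}
```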

Barriers

Barriers are simple to implement and provide good responsiveness. They are based on the concept of implementing wait cycles to provide synchronization. Consider three threads running simultaneously, starting from barrier 1. After time t, thread 1 reaches barrier 2, but it still has to wait for threads 2 and 3 to reach barrier 2, as it does not have the correct data. Once all the threads reach barrier 2, they all start again. After time t, thread 1 reaches barrier 3, but it will have to wait for threads 2 and 3 and the correct data again.

Thus, in barrier synchronization of multiple threads there will always be a few threads that end up waiting for other threads, as in the above example, where thread 1 keeps waiting for threads 2 and 3. This results in severe degradation of the process performance.[8]

The barrier synchronization wait function for the i-th thread can be represented as:

(W_barrier)_i = f((T_barrier)_i, (R_thread)_i)

where W_barrier is the wait time for a thread, T_barrier is the number of threads that have arrived, and R_thread is the arrival rate of the threads.[9]

Experiments show that 34% of the total execution time is spent in waiting for other slower threads.[8]
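A minimal sketch of the three-thread scenario above using Java's CyclicBarrier; the thread and barrier numbering is illustrative, and which thread arrives first varies between runs.

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class BarrierDemo {
    public static void main(String[] args) {
        // All three threads must arrive before any of them may continue.
        CyclicBarrier barrier =
                new CyclicBarrier(3, () -> System.out.println("-- all threads arrived --"));

        for (int id = 1; id <= 3; id++) {
            final int threadId = id;
            new Thread(() -> {
                try {
                    for (int phase = 2; phase <= 3; phase++) {
                        System.out.println("thread " + threadId + " reached barrier " + phase);
                        barrier.await();   // faster threads wait here for the slower ones
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```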

Semaphores

Semaphores are signalling mechanisms which can allow one or more threads/processors to access a section. A semaphore has a flag which has a certain fixed value associated with it; each time a thread wishes to access the section, it decrements the flag, and when the thread leaves the section, the flag is incremented. If the flag is zero, the thread cannot access the section and gets blocked if it chooses to wait.

Some semaphores allow only one thread or process into the code section at a time. Such semaphores are called binary semaphores and are very similar to a mutex. Here, if the value of the semaphore is 1, the thread is allowed to access, and if the value is 0, access is denied.[10]
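A minimal sketch using Java's counting Semaphore with two permits, so at most two threads are in the section at once; a single permit would give the binary-semaphore behaviour described above. The names and timings are illustrative.

```java
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    public static void main(String[] args) {
        // At most two threads may hold a permit (be in the section) at the same time.
        Semaphore permits = new Semaphore(2);

        for (int id = 1; id <= 4; id++) {
            final int threadId = id;
            new Thread(() -> {
                try {
                    permits.acquire();              // decrements the count, blocks at zero
                    System.out.println("thread " + threadId + " entered the section");
                    Thread.sleep(200);              // simulate work inside the section
                    System.out.println("thread " + threadId + " leaving the section");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    permits.release();              // increments the count on the way out
                }
            }).start();
        }
    }
}
```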

Mathematical foundations

Synchronization was originally a process-based concept whereby a lock could be obtained on an object. Its primary usage was in databases. There are two types of (file) lock: read-only and read–write. Read-only locks may be obtained by many processes or threads. Read–write locks are exclusive, as they may only be used by a single process/thread at a time.

Although locks were derived for file databases, data is also shared in memory between processes and threads. Sometimes more than one object (or file) is locked at a time. If they are not locked simultaneously they can overlap, causing a deadlock exception.

Java and Ada only have exclusive locks because they are thread based and rely on the compare-and-swap processor instruction.

An abstract mathematical foundation for synchronization primitives is given by the history monoid. There are also many higher-level theoretical devices, such as process calculi and Petri nets, which can be built on top of the history monoid.

Synchronization examples

Following are some synchronization examples with respect to different platforms.[11]

Synchronization in Windows

Windows provides:

  • interrupt masks, which protect access to global resources (critical section) on uniprocessor systems;
  • spinlocks, which prevent, in multiprocessor systems, the spinlocking thread from being preempted;
  • dynamic dispatchers, which act like mutexes, semaphores, events, and timers.

Synchronization in Linux

Linux provides:

  • semaphores;
  • spinlocks;
  • barriers;
  • mutexes;
  • readers–writer locks, for the longer sections of code which are accessed very frequently but do not change very often;
  • read-copy-update (RCU).[12]

Enabling and disabling of kernel preemption replaced spinlocks on uniprocessor systems. Prior to kernel version 2.6, Linux disabled interrupts to implement short critical sections. Since version 2.6, Linux has been fully preemptive.

Synchronization in Solaris

Solaris provides:

  • semaphores;
  • condition variables;
  • adaptive mutexes, binary semaphores that are implemented differently depending upon the conditions;
  • readers–writer locks;
  • turnstiles, queues of threads which are waiting on an acquired lock.[13]

Pthreads synchronization

Pthreads is a platform-independent API that provides:

  • mutexes;
  • condition variables;
  • readers–writer locks;
  • spinlocks;
  • barriers.

Data synchronization

 
Figure 3: Changes from both server and client(s) are synchronized.

A distinctly different (but related) concept is that of data synchronization. This refers to the need to update and keep multiple copies of a set of data coherent with one another, or to maintain data integrity (see Figure 3).[14] For example, database replication is used to keep multiple copies of data synchronized with database servers that store data in different locations.

Examples include:

  • File synchronization, such as syncing a hand-held MP3 player to a desktop computer;
  • Cluster file systems, which are file systems that maintain data or indexes in a coherent fashion across a whole computing cluster;
  • Cache coherency, maintaining multiple copies of data in sync across multiple caches;
  • RAID, where data is written in a redundant fashion across multiple disks, so that the loss of any one disk does not lead to a loss of data;
  • Database replication, where copies of data on a database are kept in sync, despite possible large geographical separation;
  • Journaling, a technique used by many modern file systems to make sure that file metadata are updated on a disk in a coherent, consistent manner.

Challenges in data synchronization

Some of the challenges which a user may face in data synchronization include:

  • data formats complexity;
  • real-timeliness;
  • data security;
  • data quality;
  • performance.

Data formats complexity

Data formats tend to grow more complex with time as the organization grows and evolves. This results not only in the need to build simple interfaces between the two applications (source and target), but also in a need to transform the data while passing them to the target application. ETL (extract, transform, load) tools can be helpful at this stage for managing data format complexities.

Real-timeliness

In real-time systems, customers want to see the current status of their order in an e-shop, the current status of a parcel delivery (real-time parcel tracking), the current balance on their account, and so on. This shows the need for a real-time system that is itself kept up to date, for example to enable a smooth manufacturing process in real time: ordering material when the enterprise is running out of stock, synchronizing customer orders with the manufacturing process, and so on. In real life there are many examples where real-time processing provides a successful and competitive advantage.

Data security

There are no fixed rules and policies to enforce data security; they may vary depending on the system which you are using. Even though security is maintained correctly in the source system which captures the data, the security and information access privileges must be enforced on the target systems as well to prevent any potential misuse of the information. This is a serious issue, particularly when it comes to handling secret, confidential, and personal information; because of that sensitivity and confidentiality, the data transfer and all in-between information must be encrypted.

Data quality

Data quality is another serious constraint. For better management and to maintain good quality of data, the common practice is to store the data at one location and share it with different people and different systems and/or applications at different locations. This helps prevent inconsistencies in the data.

Performance

There are five different phases involved in the data synchronization process:

  • data extraction from the source (or master, or main) system;
  • data transfer;
  • data transformation;
  • data load to the target system;
  • data update.

Each of these steps is critical. In case of large amounts of data, the synchronization process needs to be carefully planned and executed to avoid any negative impact on performance.
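As a purely schematic sketch (the method names and data model are invented for illustration), the five phases listed above could be laid out as a simple pipeline.

```java
import java.util.List;
import java.util.stream.Collectors;

public class SyncPipeline {
    // Schematic outline of the five phases named above; the data model is illustrative.
    public static void main(String[] args) {
        List<String> extracted = extract();                  // 1. data extraction from the source system
        List<String> transferred = transfer(extracted);      // 2. data transfer
        List<String> transformed = transform(transferred);   // 3. data transformation
        load(transformed);                                    // 4. data load to the target system
        update();                                             // 5. data update
    }

    static List<String> extract() { return List.of("record-1", "record-2"); }
    static List<String> transfer(List<String> rows) { return rows; }   // e.g. send over the network
    static List<String> transform(List<String> rows) {
        return rows.stream().map(String::toUpperCase).collect(Collectors.toList());
    }
    static void load(List<String> rows) { rows.forEach(r -> System.out.println("loaded " + r)); }
    static void update() { System.out.println("target marked as up to date"); }
}
```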

See also

  • Futures and promises, synchronization mechanisms in pure functional paradigms

References

  1. ^ Gramoli, V. (2015). More than you ever wanted to know about synchronization: Synchrobench, measuring the impact of the synchronization on concurrent algorithms (PDF). Proceedings of the 20th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. ACM. pp. 1–10.
  2. ^ Zhu, Shengxin; Gu, Tongxiang; Liu, Xingping (2014). "Minimizing synchronizations in sparse iterative solvers for distributed supercomputers". Computers & Mathematics with Applications. 67 (1): 199–209. doi:10.1016/j.camwa.2013.11.008.
  3. ^ "HPCG Benchmark".
  4. ^ Silberschatz, Abraham; Gagne, Greg; Galvin, Peter Baer (July 11, 2008). "Chapter 6: Process Synchronization". Operating System Concepts (Eighth ed.). John Wiley & Sons. ISBN 978-0-470-12872-5.
  5. ^ Hennessy, John L.; Patterson, David A. (September 30, 2011). "Chapter 5: Thread-Level Parallelism". Computer Architecture: A Quantitative Approach (Fifth ed.). Morgan Kaufmann. ISBN 978-0-123-83872-8.
  6. ^ "Synchronization Primitives in .NET framework". MSDN, The Microsoft Developer Network. Microsoft. Retrieved 23 November 2014.
  7. ^ Massa, Anthony (2003). Embedded Software Development with ECos. Pearson Education Inc. ISBN 0-13-035473-2.
  8. ^ a b Meng, Jinglei; Chen, Tianzhou; Pan, Ping; Yao, Jun; Wu, Minghui (2014). "A speculative mechanism for barrier synchronization". 2014 IEEE International Conference on High Performance Computing and Communications (HPCC), 2014 IEEE 6th International Symposium on Cyberspace Safety and Security (CSS) and 2014 IEEE 11th International Conference on Embedded Software and Systems (ICESS).
  9. ^ Rahman, Mohammed Mahmudur (2012). "Process synchronization in multiprocessor and multi-core processor". 2012 International Conference on Informatics, Electronics & Vision (ICIEV). pp. 554–559. doi:10.1109/ICIEV.2012.6317471. ISBN 978-1-4673-1154-0. S2CID 8134329.
  10. ^ Li, Qing; Yao, Carolyn (2003). Real-Time Concepts for Embedded Systems. CMP Books. ISBN 978-1578201242.
  11. ^ Silberschatz, Abraham; Gagne, Greg; Galvin, Peter Baer (December 7, 2012). "Chapter 5: Process Synchronization". Operating System Concepts (Ninth ed.). John Wiley & Sons. ISBN 978-1-118-06333-0.
  12. ^ "What is RCU, Fundamentally? [LWN.net]". lwn.net.
  13. ^ Mauro, Jim. "Turnstiles and priority inheritance - SunWorld - August 1999". sunsite.uakom.sk.
  14. ^ Nakatani, Kazuo; Chuang, Ta-Tao; Zhou, Duanning (2006). "Data Synchronization Technology: Standards, Business Values and Implications". Communications of the Association for Information Systems. 17. doi:10.17705/1cais.01744. ISSN 1529-3181.
  • Schneider, Fred B. (1997). On concurrent programming. Springer-Verlag New York, Inc. ISBN 978-0-387-94942-0.

External links

  • Anatomy of Linux synchronization methods, at IBM developerWorks
  • The Little Book of Semaphores, by Allen B. Downey
  • Need of Process Synchronization
