
Page replacement algorithm

In a computer operating system that uses paging for virtual memory management, page replacement algorithms decide which memory pages to page out, sometimes called swap out, or write to disk, when a page of memory needs to be allocated. Page replacement happens when a requested page is not in memory (page fault) and a free page cannot be used to satisfy the allocation, either because there are none, or because the number of free pages is lower than some threshold.

When the page that was selected for replacement and paged out is referenced again it has to be paged in (read in from disk), and this involves waiting for I/O completion. This determines the quality of the page replacement algorithm: the less time waiting for page-ins, the better the algorithm. A page replacement algorithm looks at the limited information about accesses to the pages provided by hardware, and tries to guess which pages should be replaced to minimize the total number of page misses, while balancing this with the costs (primary storage and processor time) of the algorithm itself.

The paging problem is a typical online problem from the competitive analysis perspective, in the sense that the optimal deterministic algorithm is known.

History

Page replacement algorithms were a hot topic of research and debate in the 1960s and 1970s. That mostly ended with the development of sophisticated LRU (least recently used) approximations and working set algorithms. Since then, some basic assumptions made by the traditional page replacement algorithms were invalidated, resulting in a revival of research. In particular, the following trends in the behavior of underlying hardware and user-level software have affected the performance of page replacement algorithms:

  • Size of primary storage has increased by multiple orders of magnitude. With several gigabytes of primary memory, algorithms that require a periodic check of each and every memory frame are becoming less and less practical.
  • Memory hierarchies have grown taller. The cost of a CPU cache miss is far higher. This exacerbates the previous problem.
  • Locality of reference of user software has weakened. This is mostly attributed to the spread of object-oriented programming techniques that favor large numbers of small functions, use of sophisticated data structures like trees and hash tables that tend to result in chaotic memory reference patterns, and the advent of garbage collection that drastically changed memory access behavior of applications.

Requirements for page replacement algorithms have changed due to differences in operating system kernel architectures. In particular, most modern OS kernels have unified virtual memory and file system caches, requiring the page replacement algorithm to select a page from among the pages of both user program virtual address spaces and cached files. The latter pages have specific properties. For example, they can be locked, or can have write ordering requirements imposed by journaling. Moreover, as the goal of page replacement is to minimize total time waiting for memory, it has to take into account memory requirements imposed by other kernel sub-systems that allocate memory. As a result, page replacement in modern kernels (Linux, FreeBSD, and Solaris) tends to work at the level of a general purpose kernel memory allocator, rather than at the higher level of a virtual memory subsystem.

Local vs. global replacement

Replacement algorithms can be local or global.

When a process incurs a page fault, a local page replacement algorithm selects for replacement some page that belongs to that same process (or a group of processes sharing a memory partition). A global replacement algorithm is free to select any page in memory.

Local page replacement assumes some form of memory partitioning that determines how many pages are to be assigned to a given process or a group of processes. The most popular forms of partitioning are fixed partitioning and balanced set algorithms based on the working set model. The advantage of local page replacement is its scalability: each process can handle its page faults independently, leading to more consistent performance for that process. However, global page replacement is more efficient on an overall system basis.[1]

Detecting which pages are referenced and modified

Modern general purpose computers and some embedded processors have support for virtual memory. Each process has its own virtual address space. A page table maps a subset of the process virtual addresses to physical addresses. In addition, in most architectures the page table holds an "access" bit and a "dirty" bit for each page in the page table. The CPU sets the access bit when the process reads or writes memory in that page. The CPU sets the dirty bit when the process writes memory in that page. The operating system can modify the access and dirty bits. The operating system can detect accesses to memory and files through the following means:

  • By clearing the access bit in pages present in the process' page table. After some time, the OS scans the page table looking for pages that had the access bit set by the CPU. This is fast because the access bit is set automatically by the CPU and inaccurate because the OS does not immediately receive notice of the access nor does it have information about the order in which the process accessed these pages.
  • By removing pages from the process' page table without necessarily removing them from physical memory. The next access to that page is detected immediately because it causes a page fault. This is slow because a page fault involves a context switch to the OS, software lookup for the corresponding physical address, modification of the page table and a context switch back to the process and accurate because the access is detected immediately after it occurs.
  • Directly when the process makes system calls that potentially access the page cache like read and write in POSIX.
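
As an illustration of the first technique (clear the bits, wait, rescan), the following sketch treats the page table as a plain dictionary. The `sample_accesses` helper and its layout are hypothetical simplifications for illustration, not any real kernel interface:

```python
def sample_accesses(page_table):
    """One round of the clear-and-scan technique.

    `page_table` maps page -> {'accessed': bool}, where the 'accessed'
    flag plays the role of the hardware-set access bit. Returns the set
    of pages touched since the previous scan, then clears all bits so
    the next interval can be observed.
    """
    touched = {page for page, entry in page_table.items() if entry['accessed']}
    for entry in page_table.values():
        entry['accessed'] = False   # the OS clears; the CPU will re-set on access
    return touched
```

As the text notes, this only reveals *which* pages were touched during the interval, not when or in what order.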

Precleaning

Most replacement algorithms simply return the target page as their result. This means that if the target page is dirty (that is, contains data that have to be written to the stable storage before the page can be reclaimed), I/O has to be initiated to send that page to the stable storage (to clean the page). In the early days of virtual memory, time spent on cleaning was not of much concern, because virtual memory was first implemented on systems with full duplex channels to the stable storage, and cleaning was customarily overlapped with paging. Contemporary commodity hardware, on the other hand, does not support full duplex transfers, and cleaning of target pages becomes an issue.

To deal with this situation, various precleaning policies are implemented. Precleaning is the mechanism that starts I/O on dirty pages that are (likely) to be replaced soon. The idea is that by the time the precleaned page is actually selected for the replacement, the I/O will complete and the page will be clean. Precleaning assumes that it is possible to identify pages that will be replaced next. Precleaning that is too eager can waste I/O bandwidth by writing pages that manage to get re-dirtied before being selected for replacement.

The (h,k)-paging problem

The (h,k)-paging problem is a generalization of the model of the paging problem: let h, k be positive integers such that h ≤ k. We measure the performance of an algorithm with a cache of size k relative to the theoretically optimal page replacement algorithm with a cache of size h. If h < k, we provide the optimal page replacement algorithm with strictly fewer resources.

The (h,k)-paging problem is a way to measure how an online algorithm performs by comparing it with the performance of the optimal algorithm, specifically, separately parameterizing the cache size of the online algorithm and optimal algorithm.

Marking algorithms

Marking algorithms is a general class of paging algorithms. For each page, we associate it with a bit called its mark. Initially, we set all pages as unmarked. During a stage (a period of operation or a sequence of requests) of page requests, we mark a page when it is first requested in this stage. A marking algorithm is such an algorithm that never pages out a marked page.

If ALG is a marking algorithm with a cache of size k, and OPT is the optimal algorithm with a cache of size h, where h ≤ k, then ALG is k/(k-h+1)-competitive. So every marking algorithm attains the k/(k-h+1)-competitive ratio.

LRU is a marking algorithm while FIFO is not a marking algorithm.

Conservative algorithms

An algorithm is conservative, if on any consecutive request sequence containing k or fewer distinct page references, the algorithm will incur k or fewer page faults.

If ALG is a conservative algorithm with a cache of size k, and OPT is the optimal algorithm with a cache of size h ≤ k, then ALG is k/(k-h+1)-competitive. So every conservative algorithm attains the k/(k-h+1)-competitive ratio.

LRU, FIFO and CLOCK are conservative algorithms.

Page replacement algorithms

There are a variety of page replacement algorithms:[2]

The theoretically optimal page replacement algorithm

The theoretically optimal page replacement algorithm (also known as OPT, clairvoyant replacement algorithm, or Bélády's optimal page replacement policy)[3][4][2] is an algorithm that works as follows: when a page needs to be swapped in, the operating system swaps out the page whose next use will occur farthest in the future. For example, a page that is not going to be used for the next 6 seconds will be swapped out over a page that is going to be used within the next 0.4 seconds.

This algorithm cannot be implemented in a general purpose operating system because it is impossible to compute reliably how long it will be before a page is going to be used, except when all software that will run on a system is either known beforehand and amenable to static analysis of its memory reference patterns, or restricted to a class of applications allowing run-time analysis. Despite this limitation, algorithms exist[citation needed] that can offer near-optimal performance: the operating system keeps track of all pages referenced by the program, and uses those data to decide which pages to swap in and out on subsequent runs. This approach can offer near-optimal performance, but not on the first run of a program, and only if the program's memory reference pattern is relatively consistent each time it runs.
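
Because it needs the complete future reference string, Bélády's policy can only be computed offline. A minimal simulation sketch (the `opt_faults` helper is illustrative, not from any OS):

```python
def opt_faults(references, frames):
    """Count page faults under Belady's optimal (clairvoyant) policy."""
    memory = []
    faults = 0
    for i, page in enumerate(references):
        if page in memory:
            continue
        faults += 1
        if len(memory) < frames:
            memory.append(page)
            continue
        # Evict the page whose next use lies farthest in the future;
        # pages never referenced again are the first choice.
        future = references[i + 1:]
        victim = max(memory,
                     key=lambda p: future.index(p) if p in future else len(future) + 1)
        memory[memory.index(victim)] = page
    return faults
```

For the classic trace 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 with three frames, this yields 7 faults, the minimum achievable.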

Analysis of the paging problem has also been done in the field of online algorithms. Efficiency of randomized online algorithms for the paging problem is measured using amortized analysis.

Not recently used

The not recently used (NRU) page replacement algorithm is an algorithm that favours keeping pages in memory that have been recently used. This algorithm works on the following principle: when a page is referenced, a referenced bit is set for that page, marking it as referenced. Similarly, when a page is modified (written to), a modified bit is set. The setting of the bits is usually done by the hardware, although it is possible to do so on the software level as well.

At a certain fixed time interval, a timer interrupt triggers and clears the referenced bit of all the pages, so only pages referenced within the current timer interval are marked with a referenced bit. When a page needs to be replaced, the operating system divides the pages into four classes:

3. referenced, modified
2. referenced, not modified
1. not referenced, modified
0. not referenced, not modified

Although it does not seem possible for a page to be modified yet not referenced, this happens when a class 3 page has its referenced bit cleared by the timer interrupt. The NRU algorithm picks a random page from the lowest category for removal. So out of the above four page categories, the NRU algorithm will replace a not-referenced, not-modified page if such a page exists. Note that this algorithm implies that a modified but not-referenced (within the last timer interval) page is less important than a not-modified page that is intensely referenced.
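
The four-class selection can be sketched as follows; `nru_select` is a hypothetical helper that encodes each page's class as 2 * referenced + modified and picks a random victim from the lowest non-empty class:

```python
import random

def nru_select(pages):
    """Pick an NRU replacement victim.

    `pages` maps page id -> (referenced, modified) bits. The class
    number is 2 * referenced + modified, matching the four classes in
    the text; the victim is a random page from the lowest non-empty class.
    """
    classes = {0: [], 1: [], 2: [], 3: []}
    for page, (referenced, modified) in pages.items():
        classes[2 * referenced + modified].append(page)
    for c in (0, 1, 2, 3):
        if classes[c]:
            return random.choice(classes[c])
```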

NRU is a marking algorithm, so it is k/(k-h+1)-competitive.

First-in, first-out

The simplest page-replacement algorithm is a FIFO algorithm. The first-in, first-out (FIFO) page replacement algorithm is a low-overhead algorithm that requires little bookkeeping on the part of the operating system. The idea is obvious from the name – the operating system keeps track of all the pages in memory in a queue, with the most recent arrival at the back, and the oldest arrival in front. When a page needs to be replaced, the page at the front of the queue (the oldest page) is selected. While FIFO is cheap and intuitive, it performs poorly in practical application. Thus, it is rarely used in its unmodified form. This algorithm experiences Bélády's anomaly. In simple words, on a page fault, the frame that has been in memory the longest is replaced.
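
A short simulation sketch (the `fifo_faults` helper is illustrative) shows both the queue mechanics and Bélády's anomaly, where adding a frame increases the fault count on the classic 12-reference trace:

```python
def fifo_faults(references, frames):
    """Count page faults under FIFO replacement."""
    queue = []  # front = oldest arrival, back = most recent
    faults = 0
    for page in references:
        if page in queue:
            continue
        faults += 1
        if len(queue) == frames:
            queue.pop(0)   # evict the page that has been in memory longest
        queue.append(page)
    return faults

trace = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
# Belady's anomaly: 3 frames give 9 faults, but 4 frames give 10.
```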

FIFO page replacement algorithm is used by the OpenVMS operating system, with some modifications.[5] Partial second chance is provided by skipping a limited number of entries with valid translation table references,[6] and additionally, pages are displaced from process working set to a systemwide pool from which they can be recovered if not already re-used.

FIFO is a conservative algorithm, so it is k/(k-h+1)-competitive.

Second-chance

A modified form of the FIFO page replacement algorithm, known as the Second-chance page replacement algorithm, fares relatively better than FIFO at little cost for the improvement. It works by looking at the front of the queue as FIFO does, but instead of immediately paging out that page, it checks to see if its referenced bit is set. If it is not set, the page is swapped out. Otherwise, the referenced bit is cleared, the page is inserted at the back of the queue (as if it were a new page) and this process is repeated. This can also be thought of as a circular queue. If all the pages have their referenced bit set, on the second encounter of the first page in the list, that page will be swapped out, as it now has its referenced bit cleared. If all the pages have their reference bit cleared, then second chance algorithm degenerates into pure FIFO.
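
A sketch of one eviction step, assuming a deque of [page, referenced-bit] entries with the oldest arrival at the left; `second_chance_evict` is a hypothetical name:

```python
from collections import deque

def second_chance_evict(queue):
    """Evict one page per the second-chance policy.

    `queue` is a deque of [page, referenced_bit] entries, oldest at the
    left. Returns the evicted page; the queue is mutated in place.
    """
    while True:
        page, referenced = queue.popleft()
        if referenced:
            # Give the page a second chance: clear R and requeue it
            # at the back, as if it were a new arrival.
            queue.append([page, 0])
        else:
            return page
```

If every page has its R bit set, the loop clears them all and evicts the original front page on its second encounter, degenerating into FIFO as the text describes.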

As its name suggests, Second-chance gives every page a "second-chance" – an old page that has been referenced is probably in use, and should not be swapped out over a new page that has not been referenced.

Clock

Clock is a more efficient version of FIFO than Second-chance because pages don't have to be constantly pushed to the back of the list, but it performs the same general function as Second-Chance. The clock algorithm keeps a circular list of pages in memory, with the "hand" (iterator) pointing to the last examined page frame in the list. When a page fault occurs and no empty frames exist, then the R (referenced) bit is inspected at the hand's location. If R is 0, the new page is put in place of the page the "hand" points to, and the hand is advanced one position. Otherwise, the R bit is cleared, then the clock hand is incremented and the process is repeated until a page is replaced.[7] This algorithm was first described in 1969 by Fernando J. Corbató.[8]
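
One eviction step can be sketched as follows, representing the circular list as a Python list plus a hand index; `clock_evict` is an illustrative helper, not kernel code:

```python
def clock_evict(frames, hand):
    """One eviction with the clock algorithm.

    `frames` is a circular list of [page, r_bit] entries and `hand` is
    the index the clock hand points to. Returns (victim_index, new_hand);
    the caller installs the new page at victim_index.
    """
    while True:
        page, r_bit = frames[hand]
        if r_bit == 0:
            # Replace here and advance the hand one position.
            return hand, (hand + 1) % len(frames)
        frames[hand][1] = 0                  # clear R: this page gets a pass
        hand = (hand + 1) % len(frames)
```

Unlike Second-chance, no entry is ever moved; only the hand advances, which is why Clock avoids the constant requeueing cost.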

Variants of clock

  • GCLOCK: Generalized clock page replacement algorithm.[9]
  • Clock-Pro keeps a circular list of information about recently referenced pages, including all M pages in memory as well as the most recent M pages that have been paged out. This extra information on paged-out pages, like the similar information maintained by ARC, helps it work better than LRU on large loops and one-time scans.[10]
  • WSclock.[11] By combining the Clock algorithm with the concept of a working set (i.e., the set of pages expected to be used by that process during some time interval), the performance of the algorithm can be improved. In practice, the "aging" algorithm and the "WSClock" algorithm are probably the most important page replacement algorithms.[12][13]
  • Clock with Adaptive Replacement (CAR) is a page replacement algorithm that has performance comparable to ARC, and substantially outperforms both LRU and CLOCK.[14] The algorithm CAR is self-tuning and requires no user-specified magic parameters.

CLOCK is a conservative algorithm, so it is k/(k-h+1)-competitive.

Least recently used

The least recently used (LRU) page replacement algorithm, though similar in name to NRU, differs in the fact that LRU keeps track of page usage over a short period of time, while NRU just looks at the usage in the last clock interval. LRU works on the idea that pages that have been most heavily used in the past few instructions are most likely to be used heavily in the next few instructions too. While LRU can provide near-optimal performance in theory (almost as good as adaptive replacement cache), it is rather expensive to implement in practice. There are a few implementation methods for this algorithm that try to reduce the cost yet keep as much of the performance as possible.

The most expensive method is the linked list method, which uses a linked list containing all the pages in memory. At the back of this list is the least recently used page, and at the front is the most recently used page. The cost of this implementation lies in the fact that items in the list have to be moved about on every memory reference, which is a very time-consuming process.

Another method that requires hardware support is as follows: suppose the hardware has a 64-bit counter that is incremented at every instruction. Whenever a page is accessed, it acquires the value equal to the counter at the time of page access. Whenever a page needs to be replaced, the operating system selects the page with the lowest counter and swaps it out.
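
For illustration, an ordered dictionary can stand in for the linked list, keeping the least recently used page at the front; `LRUPagePool` is a toy sketch, not a practical kernel implementation:

```python
from collections import OrderedDict

class LRUPagePool:
    """Toy LRU pool: an OrderedDict plays the role of the linked list,
    with the least recently used page at the front (first key)."""

    def __init__(self, frames):
        self.frames = frames
        self.pages = OrderedDict()
        self.faults = 0

    def access(self, page):
        if page in self.pages:
            self.pages.move_to_end(page)      # hit: now most recently used
            return
        self.faults += 1
        if len(self.pages) == self.frames:
            self.pages.popitem(last=False)    # evict least recently used
        self.pages[page] = True
```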

Because of implementation costs, one may consider algorithms (like those that follow) that are similar to LRU, but which offer cheaper implementations.

One important advantage of the LRU algorithm is that it is amenable to full statistical analysis. It has been proven, for example, that LRU can never result in more than N times more page faults than the OPT algorithm, where N is proportional to the number of pages in the managed pool.

On the other hand, LRU's weakness is that its performance tends to degenerate under many quite common reference patterns. For example, if there are N pages in the LRU pool, an application executing a loop over array of N + 1 pages will cause a page fault on each and every access. As loops over large arrays are common, much effort has been put into modifying LRU to work better in such situations. Many of the proposed LRU modifications try to detect looping reference patterns and to switch into suitable replacement algorithm, like Most Recently Used (MRU).

Variants on LRU

  1. LRU-K[15] evicts the page whose K-th most recent access is furthest in the past. For example, LRU-1 is simply LRU whereas LRU-2 evicts pages according to the time of their penultimate access. LRU-K improves greatly on LRU with regards to locality in time.
  2. The ARC[16] algorithm extends LRU by maintaining a history of recently evicted pages and uses this to change preference to recent or frequent access. It is particularly resistant to sequential scans.
  3. The 2Q[17] algorithm improves upon the LRU and LRU/2 algorithms. With two queues, one for hot-path items and one for slow-path items, items are first placed in the slow-path queue and are moved to the hot-path queue only after a second access. Because items are held longer before promotion than in LRU and LRU/2, the hot-path queue stays better filtered, which improves the hit rate of the cache.

A comparison of ARC with other algorithms (LRU, MQ, 2Q, LRU-2, LRFU, LIRS) can be found in Megiddo & Modha 2004.[18]

LRU is a marking algorithm, so it is k/(k-h+1)-competitive.

Random

Random replacement algorithm replaces a random page in memory. This eliminates the overhead cost of tracking page references. Usually it fares better than FIFO, and for looping memory references it is better than LRU, although generally LRU performs better in practice. OS/390 uses global LRU approximation and falls back to random replacement when LRU performance degenerates, and the Intel i860 processor used a random replacement policy (Rhodehamel 1989[19]).

Not frequently used (NFU)

The not frequently used (NFU) page replacement algorithm requires a counter, and every page has one counter of its own which is initially set to 0. At each clock interval, all pages that have been referenced within that interval will have their counter incremented by 1. In effect, the counters keep track of how frequently a page has been used. Thus, the page with the lowest counter can be swapped out when necessary.
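
The per-interval bookkeeping and victim choice can be sketched as follows; `nfu_victim` is an illustrative helper:

```python
def nfu_victim(counters, referenced):
    """One NFU clock-interval step plus victim choice.

    `counters` maps page -> use counter; `referenced` is the set of pages
    whose reference bit was set during the last clock interval. Each
    referenced page's counter is incremented, then the page with the
    lowest counter is returned as the eviction candidate.
    """
    for page in referenced:
        counters[page] += 1
    return min(counters, key=counters.get)
```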

The main problem with NFU is that it keeps track of the frequency of use without regard to the time span of use. Thus, in a multi-pass compiler, pages which were heavily used during the first pass, but are not needed in the second pass will be favoured over pages which are comparably lightly used in the second pass, as they have higher frequency counters. This results in poor performance. Other common scenarios exist where NFU will perform similarly, such as an OS boot-up. Thankfully, a similar and better algorithm exists, and its description follows.


Aging

The aging algorithm is a descendant of the NFU algorithm, with modifications to make it aware of the time span of use. Instead of just incrementing the counters of pages referenced, putting equal emphasis on page references regardless of the time, the reference counter on a page is first shifted right (divided by 2), before adding the referenced bit to the left of that binary number. For instance, if a page has referenced bits 1,0,0,1,1,0 in the past 6 clock ticks, its referenced counter will look like this in chronological order: 10000000, 01000000, 00100000, 10010000, 11001000, 01100100. Page references closer to the present time have more impact than page references long ago. This ensures that pages referenced more recently, though less frequently referenced, will have higher priority over pages more frequently referenced in the past. Thus, when a page needs to be swapped out, the page with the lowest counter will be chosen.

The following Python code simulates the aging algorithm. Counters V_i are initialized with 0 and updated as described above via V_i <- (R_i << (k - 1)) | (V_i >> 1), using arithmetic shift operators.

from collections.abc import Sequence


def simulate_aging(Rs: Sequence, k: int) -> None:
    """Simulate the aging algorithm on reference-bit history Rs with k-bit counters."""
    pages = len(Rs[0]) - 1
    print(" t | R-bits (0-{0}) | Counters for pages 0-{0}".format(pages))
    Vs = [0] * len(Rs[0])
    for t, R in enumerate(Rs):
        # Shift each counter right and add the new R bit on the left.
        Vs[:] = [R[i] << (k - 1) | V >> 1 for i, V in enumerate(Vs)]
        print("{:02d} | {} | [{}]".format(
            t, R, ", ".join("{:0{}b}".format(V, k) for V in Vs)))

In the given example of R-bits for 6 pages over 5 clock ticks, the function prints the following output, which lists the R-bits for each clock tick t and the individual counter values V_i for each page in binary representation.[20]

>>> Rs = [[1,0,1,0,1,1], [1,1,0,0,1,0], [1,1,0,1,0,1], [1,0,0,0,1,0], [0,1,1,0,0,0]]
>>> k = 8
>>> simulate_aging(Rs, k)
 t | R-bits (0-5) | Counters for pages 0-5
00 | [1, 0, 1, 0, 1, 1] | [10000000, 00000000, 10000000, 00000000, 10000000, 10000000]
01 | [1, 1, 0, 0, 1, 0] | [11000000, 10000000, 01000000, 00000000, 11000000, 01000000]
02 | [1, 1, 0, 1, 0, 1] | [11100000, 11000000, 00100000, 10000000, 01100000, 10100000]
03 | [1, 0, 0, 0, 1, 0] | [11110000, 01100000, 00010000, 01000000, 10110000, 01010000]
04 | [0, 1, 1, 0, 0, 0] | [01111000, 10110000, 10001000, 00100000, 01011000, 00101000]

Note that aging differs from LRU in the sense that aging can only keep track of the references in the latest 16/32 (depending on the bit size of the processor's integers) time intervals. Consequently, two pages may have referenced counters of 00000000, even though one page was referenced 9 intervals ago and the other 1000 intervals ago. Generally speaking, knowing the usage within the past 16 intervals is sufficient for making a good decision as to which page to swap out. Thus, aging can offer near-optimal performance for a moderate price.

Longest distance first (LDF) page replacement algorithm

The basic idea behind this algorithm is locality of reference, as in LRU, but LDF bases locality on distance rather than on recency of use. LDF replaces the page that is at the longest distance from the current page. If two pages are at the same distance, the page that follows the current page in anti-clockwise rotation is replaced.[citation needed]

Implementation details

Techniques for hardware with no reference bit

Many of the techniques discussed above assume the presence of a reference bit associated with each page. Some hardware has no such bit, so its efficient use requires techniques that operate well without one.

One notable example is VAX hardware running OpenVMS. This system knows if a page has been modified, but not necessarily if a page has been read. Its approach is known as Secondary Page Caching. Pages removed from working sets (process-private memory, generally) are placed on special-purpose lists while remaining in physical memory for some time. Removing a page from a working set is not technically a page-replacement operation, but effectively identifies that page as a candidate. A page whose backing store is still valid (whose contents are not dirty, or otherwise do not need to be preserved) is placed on the tail of the Free Page List. A page that requires writing to backing store will be placed on the Modified Page List. These actions are typically triggered when the size of the Free Page List falls below an adjustable threshold.

Pages may be selected for working set removal in an essentially random fashion, with the expectation that if a poor choice is made, a future reference may retrieve that page from the Free or Modified list before it is removed from physical memory. A page referenced this way will be removed from the Free or Modified list and placed back into a process working set. The Modified Page List additionally provides an opportunity to write pages out to backing store in groups of more than one page, increasing efficiency. These pages can then be placed on the Free Page List. The sequence of pages that works its way to the head of the Free Page List resembles the results of a LRU or NRU mechanism and the overall effect has similarities to the Second-Chance algorithm described earlier.

Another example is used by the Linux kernel on ARM. The lack of hardware functionality is made up for by providing two page tables – the processor-native page tables, with neither referenced bits nor dirty bits, and software-maintained page tables with the required bits present. The emulated bits in the software-maintained table are set by page faults. In order to get the page faults, clearing emulated bits in the second table revokes some of the access rights to the corresponding page, which is implemented by altering the native table.

Page cache in Linux

Linux uses a unified page cache for

  • brk and anonymous mmaped-regions. This includes the heap and stack of user-space programs. It is written to swap when paged out.
  • Non-anonymous (file-backed) mmaped regions. If present in memory and not privately modified the physical page is shared with file cache or buffer.
  • Shared memory acquired through shm_open.
  • The tmpfs in-memory filesystem; written to swap when paged out.
  • The file cache; written to the underlying block storage (possibly going through the buffer, see below) when paged out.
  • The cache of block devices, called the "buffer" by Linux (not to be confused with other structures also called buffers, like those used for pipes and buffers used internally in Linux); written to the underlying storage when paged out.

The unified page cache operates on units of the smallest page size supported by the CPU (4 KiB on ARMv8, x86 and x86-64), with some pages of the next larger size (2 MiB on x86-64) called "huge pages" by Linux. The pages in the page cache are divided into an "active" set and an "inactive" set. Both sets keep an LRU list of pages. In the basic case, when a page is accessed by a user-space program it is put at the head of the inactive set. When it is accessed repeatedly, it is moved to the active list. Linux moves pages from the active set to the inactive set as needed so that the active set is smaller than the inactive set. When a page is moved to the inactive set it is removed from the page table of any process address space, without being paged out of physical memory.[21][22] When a page is removed from the inactive set, it is paged out of physical memory. The sizes of the "active" and "inactive" lists can be queried from /proc/meminfo in the fields "Active", "Inactive", "Active(anon)", "Inactive(anon)", "Active(file)" and "Inactive(file)".

Working set

The working set of a process is the set of pages expected to be used by that process during some time interval.

The "working set model" isn't a page replacement algorithm in the strict sense (it's actually a kind of medium-term scheduler).[clarification needed]

References

  1. ^ Bell, John. "Operating Systems Course Notes: Virtual Memory". University of Illinois at Chicago College of Engineering. Archived from the original on 23 September 2018. Retrieved 21 July 2017.
  2. ^ a b Jones, Douglas W. "22C:116 Lecture Notes". University of Iowa Department of Computer Science. Archived from the original on 30 June 2012. Retrieved 18 March 2008.
  3. ^ Torrez, Paul; et al. UCLA Computer Science Department. Archived from the original on 9 January 2009.
  4. ^ Bahn, Hyokyung; Noh, Sam H. (12–14 February 2003). Characterization of Web reference behavior revisited: Evidence for Dichotomized Cache management. International Conference on Information Networking 2003. Jeju, South Korea: Springer-Verlag. pp. 1018–1027. doi:10.1007/978-3-540-45235-5_100. ISBN 978-3-540-40827-7.
  5. ^ Silberschatz, Abraham; Galvin, Peter Baer; Gagne, Greg (14 December 2004). Operating system concepts (7th ed.). Hoboken, NJ, USA: John Wiley & Sons. p. 339. ISBN 0-47169-466-5. OCLC 56913661.
  6. ^ VMS Help — Sys Parameters, TBSKIPWSL
  7. ^ Tanenbaum, Andrew S. (2001). Modern Operating Systems (2nd ed.). Upper Saddle River, NJ, USA: Prentice-Hall. p. 218 (4.4.5). ISBN 978-0-13-031358-4. LCCN 00051666. OCLC 45284637. OL 24214243M.
  8. ^ Corbató, Fernando J. (1969). "A Paging Experiment with the Multics System" (PDF). Festschrift: In Honor of P. M. Morse. MIT Press. pp. 217–228.
  9. ^ Smith, Alan Jay (September 1978). "Sequentiality and prefetching in database systems". ACM Transactions on Database Systems. New York, NY, USA: ACM. 3 (3): 223–247. doi:10.1145/320263.320276. S2CID 11611563.
  10. ^ Jiang, Song; Chen, Feng; Zhang, Xiaodong (10–15 April 2005). CLOCK-Pro: an effective improvement of the CLOCK replacement (PDF). 2005 USENIX Annual Technical Conference. Anaheim, CA, USA: USENIX Association. p. 35. Archived (PDF) from the original on 12 June 2019. Retrieved 24 March 2009.
  11. ^ Carr, Richard W.; Hennessy, John L. (14–16 December 1981). WSCLOCK—a simple and effective algorithm for virtual memory management (gzipped PDF). Eighth ACM symposium on Operating systems principles. Pacific Grove, CA, USA: ACM. pp. 87–95. doi:10.1145/800216.806596. ISBN 0-89791-062-1. Archived from the original on 10 June 2007.
  12. ^ Gottlieb, Allan. "WSClock". New York University Computer Science Department. Archived from the original on 30 July 2012. Retrieved 12 June 2019.
  13. ^ Tanenbaum, Andrew S. "Page Replacement Algorithms". InformIT. Archived from the original on 10 September 2012. Retrieved 12 June 2019.
  14. ^ Bansal, Sorav; Modha, Dharmendra S. (31 March – 2 April 2004). CAR: Clock with Adaptive Replacement (PDF). 3rd USENIX Conference on File and Storage Technologies (FAST '04). San Francisco, CA, USA: USENIX Association. pp. 187–200. CiteSeerX 10.1.1.105.6057. Archived (PDF) from the original on 31 July 2004.
  15. ^ O'Neil, Elizabeth J.; et al. (25–28 May 1993). The LRU-K page replacement algorithm for database disk buffering (PDF). 1993 ACM SIGMOD international conference on Management of data. Washington, D.C., USA: ACM. pp. 297–306. CiteSeerX 10.1.1.18.1434. doi:10.1145/170035.170081. ISBN 0-89791-592-5. Archived (PDF) from the original on 6 September 2019.
  16. ^ Megiddo, Nimrod; Modha, Dharmendra S. (31 March – 2 April 2003). ARC: A Self-Tuning, Low Overhead Replacement Cache (PDF). 2nd USENIX Conference on File and Storage Technologies (FAST '03). San Francisco, CA, USA: USENIX Association. pp. 115–130. Archived (PDF) from the original on 8 February 2010.
  17. ^ Johnson, Theodore; Shasha, Dennis (12–15 September 1994). 2Q: A Low Overhead High Performance Buffer Management Replacement Algorithm (PDF). 20th International Conference on Very Large Data Bases. Santiago de Chile, Chile: Morgan Kaufmann. pp. 439–450. ISBN 1-55860-153-8. Archived (PDF) from the original on 17 March 2020. Retrieved 31 July 2005.
  18. ^ Megiddo, Nimrod; Modha, Dharmendra S. (2004). "Outperforming LRU with an Adaptive Replacement Cache Algorithm" (PDF). Computer. IEEE Computer Society. 37 (4): 58. CiteSeerX 10.1.1.231.498. doi:10.1109/MC.2004.1297303. S2CID 5507282. Archived (PDF) from the original on 21 October 2012. Retrieved 20 September 2013.
  19. ^ Rhodehamel, Michael W. (2–4 October 1989). The Bus Interface and Paging Units of the i860 Microprocessor. 1989 IEEE International Conference on Computer Design: VLSI in Computers and Processors. Cambridge, MA, USA: IEEE. pp. 380–384. doi:10.1109/ICCD.1989.63392. ISBN 0-8186-1971-6. INSPEC Accession Number 3719504.
  20. ^ Tanenbaum, Andrew S.; Bos, Herbert (2015). Modern Operating Systems (4th ed.). Boston, MA, USA: Pearson. p. 215. ISBN 978-0-13-359162-0. OL 25620855M.
  21. ^ See explanation at the start of /mm/workingset.c in the Linux source
  22. ^ Corbet, Jonathan (2 May 2012). "Better active/inactive list balancing". LWN.net.

Further reading

  • Wong, Kin-Yeung (23 January 2006). "Web cache replacement policies: a pragmatic approach". IEEE Network. IEEE. 20 (1): 28–34. doi:10.1109/MNET.2006.1580916. ISSN 0890-8044. S2CID 17969287. INSPEC Accession Number 8964134.
  • Aho, Alfred V.; Denning, Peter J.; Ullman, Jeffrey D. (January 1971). "Principles of Optimal Page Replacement". Journal of the ACM. New York, NY, USA: ACM. 18 (1): 80–93. doi:10.1145/321623.321632. S2CID 3154537.
  • Tanenbaum, Andrew S. (1997). Operating Systems: Design and Implementation (2nd ed.). Upper Saddle River, NJ, USA: Prentice-Hall. ISBN 0-13-638677-6. LCCN 96037153. OL 998396M.
  • Tanenbaum, Andrew S. (2001). Modern Operating Systems (2nd ed.). Upper Saddle River, NJ, USA: Prentice-Hall. ISBN 978-0-13-031358-4. LCCN 00051666. OCLC 45284637. OL 24214243M. Online excerpt on page replacement algorithms: Page Replacement Algorithms.
  • Glass, Gideon; Cao, Pei (15–18 June 1997). Adaptive page replacement based on memory reference behavior. 1997 ACM SIGMETRICS international conference on Measurement and modeling of computer systems. Seattle, WA, USA: ACM. pp. 115–126. doi:10.1145/258612.258681. ISBN 0-89791-909-2. Also available in extended form as Glass, Gideon; Cao, Pei (1997). "Technical Report 1338". Department of Computer Sciences, University of Wisconsin-Madison.
  • Kim, Jong Min; et al. (17–21 October 2000). A Low-Overhead High-Performance Unified Buffer Management Scheme that Exploits Sequential and Looping References (PDF). 4th Usenix Symposium on Operating System Design and Implementation (OSDI'2000). Vol. 4. San Diego, CA, USA: USENIX Association. Archived (PDF) from the original on 18 September 2004.
  • Smaragdakis, Yannis; Kaplan, Scott; Wilson, Paul (1–4 May 1999). EELRU: simple and effective adaptive page replacement (PDF). 1999 ACM SIGMETRICS international conference on Measurement and modeling of computer systems. Atlanta, GA, USA: ACM. pp. 122–133. doi:10.1145/301453.301486. ISBN 1-58113-083-X. Archived (PDF) from the original on 4 March 2016.
  • Jiang, Song; Zhang, Xiaodong (15–19 June 2002). LIRS: a Low Inter Reference recency Set replacement (PDF). 2002 ACM SIGMETRICS international conference on Measurement and modeling of computer systems. Marina Del Rey, CA, USA: ACM. pp. 31–42. doi:10.1145/511334.511340. ISBN 1-58113-531-9. Archived (PDF) from the original on 12 June 2019.
  • Lee, Donghee; et al. (1–4 September 1997). Implementation and Performance Evaluation of the LRFU Replacement Policy. 23rd Euromicro Conference New Frontiers of Information Technology. Budapest, Hungary: IEEE Computer Society. pp. 106–111. doi:10.1109/EMSCNT.1997.658446. ISBN 0-8186-8215-9. INSPEC Accession Number 5856800.
  • Zhou, Yuanyuan; Philbin, James; Li, Kai (25–30 June 2001). The Multi-Queue Replacement Algorithm for Second-Level Buffer Caches (PDF). 2001 USENIX Annual Technical Conference. Boston, MA, USA: USENIX Association. pp. 91–104. ISBN 1-880446-09-X. Archived (PDF) from the original on 24 November 2005.

page, replacement, algorithm, this, article, about, algorithms, specific, paging, outline, general, cache, algorithms, processor, disk, database, cache, algorithms, computer, operating, system, that, uses, paging, virtual, memory, management, page, replacement. This article is about algorithms specific to paging For an outline of general cache algorithms e g processor disk database web see Cache algorithms In a computer operating system that uses paging for virtual memory management page replacement algorithms decide which memory pages to page out sometimes called swap out or write to disk when a page of memory needs to be allocated Page replacement happens when a requested page is not in memory page fault and a free page cannot be used to satisfy the allocation either because there are none or because the number of free pages is lower than some threshold When the page that was selected for replacement and paged out is referenced again it has to be paged in read in from disk and this involves waiting for I O completion This determines the quality of the page replacement algorithm the less time waiting for page ins the better the algorithm A page replacement algorithm looks at the limited information about accesses to the pages provided by hardware and tries to guess which pages should be replaced to minimize the total number of page misses while balancing this with the costs primary storage and processor time of the algorithm itself The page replacing problem is a typical online problem from the competitive analysis perspective in the sense that the optimal deterministic algorithm is known Contents 1 History 2 Local vs global replacement 3 Detecting which pages are referenced and modified 4 Precleaning 5 The h k paging problem 6 Marking algorithms 7 Conservative algorithms 8 Page replacement algorithms 8 1 The theoretically optimal page replacement algorithm 8 2 Not recently used 8 3 First in first out 8 4 Second chance 8 5 Clock 8 5 1 Variants of clock 8 6 Least 
recently used 8 6 1 Variants on LRU 8 7 Random 8 8 Not frequently used NFU 8 9 Aging 8 10 Longest distance first LDF page replacement algorithm 9 Implementation details 9 1 Techniques for hardware with no reference bit 9 2 Page cache in Linux 10 Working set 11 References 12 Further readingHistory editPage replacement algorithms were a hot topic of research and debate in the 1960s and 1970s That mostly ended with the development of sophisticated LRU least recently used approximations and working set algorithms Since then some basic assumptions made by the traditional page replacement algorithms were invalidated resulting in a revival of research In particular the following trends in the behavior of underlying hardware and user level software have affected the performance of page replacement algorithms Size of primary storage has increased by multiple orders of magnitude With several gigabytes of primary memory algorithms that require a periodic check of each and every memory frame are becoming less and less practical Memory hierarchies have grown taller The cost of a CPU cache miss is far more expensive This exacerbates the previous problem Locality of reference of user software has weakened This is mostly attributed to the spread of object oriented programming techniques that favor large numbers of small functions use of sophisticated data structures like trees and hash tables that tend to result in chaotic memory reference patterns and the advent of garbage collection that drastically changed memory access behavior of applications Requirements for page replacement algorithms have changed due to differences in operating system kernel architectures In particular most modern OS kernels have unified virtual memory and file system caches requiring the page replacement algorithm to select a page from among the pages of both user program virtual address spaces and cached files The latter pages have specific properties For example they can be locked or can have write 
ordering requirements imposed by journaling Moreover as the goal of page replacement is to minimize total time waiting for memory it has to take into account memory requirements imposed by other kernel sub systems that allocate memory As a result page replacement in modern kernels Linux FreeBSD and Solaris tends to work at the level of a general purpose kernel memory allocator rather than at the higher level of a virtual memory subsystem Local vs global replacement editReplacement algorithms can be local or global When a process incurs a page fault a local page replacement algorithm selects for replacement some page that belongs to that same process or a group of processes sharing a memory partition A global replacement algorithm is free to select any page in memory Local page replacement assumes some form of memory partitioning that determines how many pages are to be assigned to a given process or a group of processes Most popular forms of partitioning are fixed partitioning and balanced set algorithms based on the working set model The advantage of local page replacement is its scalability each process can handle its page faults independently leading to more consistent performance for that process However global page replacement is more efficient on an overall system basis 1 Detecting which pages are referenced and modified editSee also Virtual memory and Page table Modern general purpose computers and some embedded processors have support for virtual memory Each process has its own virtual address space A page table maps a subset of the process virtual addresses to physical addresses In addition in most architectures the page table holds an access bit and a dirty bit for each page in the page table The CPU sets the access bit when the process reads or writes memory in that page The CPU sets the dirty bit when the process writes memory in that page The operating system can modify the access and dirty bits The operating system can detect accesses to memory and 
files through the following means By clearing the access bit in pages present in the process page table After some time the OS scans the page table looking for pages that had the access bit set by the CPU This is fast because the access bit is set automatically by the CPU and inaccurate because the OS does not immediately receive notice of the access nor does it have information about the order in which the process accessed these pages By removing pages from the process page table without necessarily removing them from physical memory The next access to that page is detected immediately because it causes a page fault This is slow because a page fault involves a context switch to the OS software lookup for the corresponding physical address modification of the page table and a context switch back to the process and accurate because the access is detected immediately after it occurs Directly when the process makes system calls that potentially access the page cache like read and write in POSIX Precleaning editMost replacement algorithms simply return the target page as their result This means that if target page is dirty that is contains data that have to be written to the stable storage before page can be reclaimed I O has to be initiated to send that page to the stable storage to clean the page In the early days of virtual memory time spent on cleaning was not of much concern because virtual memory was first implemented on systems with full duplex channels to the stable storage and cleaning was customarily overlapped with paging Contemporary commodity hardware on the other hand does not support full duplex transfers and cleaning of target pages becomes an issue To deal with this situation various precleaning policies are implemented Precleaning is the mechanism that starts I O on dirty pages that are likely to be replaced soon The idea is that by the time the precleaned page is actually selected for the replacement the I O will complete and the page will be clean 
Precleaning assumes that it is possible to identify pages that will be replaced next Precleaning that is too eager can waste I O bandwidth by writing pages that manage to get re dirtied before being selected for replacement The h k paging problem editThe h k paging problem is a generalization of the model of paging problem Let h k be positive integers such that h k displaystyle h leq k nbsp We measure the performance of an algorithm with cache of size h k displaystyle h leq k nbsp relative to the theoretically optimal page replacement algorithm If h lt k displaystyle h lt k nbsp we provide the optimal page replacement algorithm with strictly less resource The h k paging problem is a way to measure how an online algorithm performs by comparing it with the performance of the optimal algorithm specifically separately parameterizing the cache size of the online algorithm and optimal algorithm Marking algorithms editThis section may be confusing or unclear to readers Please help clarify the section There might be a discussion about this on the talk page December 2015 Learn how and when to remove this template message Marking algorithms is a general class of paging algorithms For each page we associate it with a bit called its mark Initially we set all pages as unmarked During a stage a period of operation or a sequence of requests of page requests we mark a page when it is first requested in this stage A marking algorithm is such an algorithm that never pages out a marked page If ALG is a marking algorithm with a cache of size k and OPT is the optimal algorithm with a cache of size h where h k displaystyle h leq k nbsp then ALG is k k h 1 displaystyle tfrac k k h 1 nbsp competitive So every marking algorithm attains the k k h 1 displaystyle tfrac k k h 1 nbsp competitive ratio LRU is a marking algorithm while FIFO is not a marking algorithm Conservative algorithms editAn algorithm is conservative if on any consecutive request sequence containing k or fewer distinct page 
references the algorithm will incur k or fewer page faults If ALG is a conservative algorithm with a cache of size k and OPT is the optimal algorithm with a cache of h k displaystyle h leq k nbsp then ALG is k k h 1 displaystyle tfrac k k h 1 nbsp competitive So every conservative algorithm attains the k k h 1 displaystyle tfrac k k h 1 nbsp competitive ratio LRU FIFO and CLOCK are conservative algorithms Page replacement algorithms editThere are a variety of page replacement algorithms 2 The theoretically optimal page replacement algorithm edit The theoretically optimal page replacement algorithm also known as OPT clairvoyant replacement algorithm or Belady s optimal page replacement policy 3 4 2 is an algorithm that works as follows when a page needs to be swapped in the operating system swaps out the page whose next use will occur farthest in the future For example a page that is not going to be used for the next 6 seconds will be swapped out over a page that is going to be used within the next 0 4 seconds This algorithm cannot be implemented in a general purpose operating system because it is impossible to compute reliably how long it will be before a page is going to be used except when all software that will run on a system is either known beforehand and is amenable to static analysis of its memory reference patterns or only a class of applications allowing run time analysis Despite this limitation algorithms exist citation needed that can offer near optimal performance the operating system keeps track of all pages referenced by the program and it uses those data to decide which pages to swap in and out on subsequent runs This algorithm can offer near optimal performance but not on the first run of a program and only if the program s memory reference pattern is relatively consistent each time it runs Analysis of the paging problem has also been done in the field of online algorithms Efficiency of randomized online algorithms for the paging problem is measured 
using amortized analysis Not recently used edit The not recently used NRU page replacement algorithm is an algorithm that favours keeping pages in memory that have been recently used This algorithm works on the following principle when a page is referenced a referenced bit is set for that page marking it as referenced Similarly when a page is modified written to a modified bit is set The setting of the bits is usually done by the hardware although it is possible to do so on the software level as well At a certain fixed time interval a timer interrupt triggers and clears the referenced bit of all the pages so only pages referenced within the current timer interval are marked with a referenced bit When a page needs to be replaced the operating system divides the pages into four classes 3 referenced modified 2 referenced not modified 1 not referenced modified 0 not referenced not modifiedAlthough it does not seem possible for a page to be modified yet not referenced this happens when a class 3 page has its referenced bit cleared by the timer interrupt The NRU algorithm picks a random page from the lowest category for removal So out of the above four page categories the NRU algorithm will replace a not referenced not modified page if such a page exists Note that this algorithm implies that a modified but not referenced within the last timer interval page is less important than a not modified page that is intensely referenced NRU is a marking algorithm so it is k k h 1 displaystyle tfrac k k h 1 nbsp competitive First in first out edit The simplest page replacement algorithm is a FIFO algorithm The first in first out FIFO page replacement algorithm is a low overhead algorithm that requires little bookkeeping on the part of the operating system The idea is obvious from the name the operating system keeps track of all the pages in memory in a queue with the most recent arrival at the back and the oldest arrival in front When a page needs to be replaced the page at the 
front of the queue the oldest page is selected While FIFO is cheap and intuitive it performs poorly in practical application Thus it is rarely used in its unmodified form This algorithm experiences Belady s anomaly In simple words on a page fault the frame that has been in memory the longest is replaced FIFO page replacement algorithm is used by the OpenVMS operating system with some modifications 5 Partial second chance is provided by skipping a limited number of entries with valid translation table references 6 and additionally pages are displaced from process working set to a systemwide pool from which they can be recovered if not already re used FIFO is a conservative algorithm so it is k k h 1 displaystyle tfrac k k h 1 nbsp competitive Second chance edit A modified form of the FIFO page replacement algorithm known as the Second chance page replacement algorithm fares relatively better than FIFO at little cost for the improvement It works by looking at the front of the queue as FIFO does but instead of immediately paging out that page it checks to see if its referenced bit is set If it is not set the page is swapped out Otherwise the referenced bit is cleared the page is inserted at the back of the queue as if it were a new page and this process is repeated This can also be thought of as a circular queue If all the pages have their referenced bit set on the second encounter of the first page in the list that page will be swapped out as it now has its referenced bit cleared If all the pages have their reference bit cleared then second chance algorithm degenerates into pure FIFO As its name suggests Second chance gives every page a second chance an old page that has been referenced is probably in use and should not be swapped out over a new page that has not been referenced Clock edit Clock is a more efficient version of FIFO than Second chance because pages don t have to be constantly pushed to the back of the list but it performs the same general function as 
Second Chance The clock algorithm keeps a circular list of pages in memory with the hand iterator pointing to the last examined page frame in the list When a page fault occurs and no empty frames exist then the R referenced bit is inspected at the hand s location If R is 0 the new page is put in place of the page the hand points to and the hand is advanced one position Otherwise the R bit is cleared then the clock hand is incremented and the process is repeated until a page is replaced 7 This algorithm was first described in 1969 by Fernando J Corbato 8 Variants of clock edit GCLOCK Generalized clock page replacement algorithm 9 Clock Pro keeps a circular list of information about recently referenced pages including all M pages in memory as well as the most recent M pages that have been paged out This extra information on paged out pages like the similar information maintained by ARC helps it work better than LRU on large loops and one time scans 10 WSclock 11 By combining the Clock algorithm with the concept of a working set i e the set of pages expected to be used by that process during some time interval the performance of the algorithm can be improved In practice the aging algorithm and the WSClock algorithm are probably the most important page replacement algorithms 12 13 Clock with Adaptive Replacement CAR is a page replacement algorithm that has performance comparable to ARC and substantially outperforms both LRU and CLOCK 14 The algorithm CAR is self tuning and requires no user specified magic parameters CLOCK is a conservative algorithm so it is k k h 1 displaystyle tfrac k k h 1 nbsp competitive Least recently used edit The least recently used LRU page replacement algorithm though similar in name to NRU differs in the fact that LRU keeps track of page usage over a short period of time while NRU just looks at the usage in the last clock interval LRU works on the idea that pages that have been most heavily used in the past few instructions are most likely 
to be used heavily in the next few instructions too While LRU can provide near optimal performance in theory almost as good as adaptive replacement cache it is rather expensive to implement in practice There are a few implementation methods for this algorithm that try to reduce the cost yet keep as much of the performance as possible The most expensive method is the linked list method which uses a linked list containing all the pages in memory At the back of this list is the least recently used page and at the front is the most recently used page The cost of this implementation lies in the fact that items in the list will have to be moved about every memory reference which is a very time consuming process Another method that requires hardware support is as follows suppose the hardware has a 64 bit counter that is incremented at every instruction Whenever a page is accessed it acquires the value equal to the counter at the time of page access Whenever a page needs to be replaced the operating system selects the page with the lowest counter and swaps it out Because of implementation costs one may consider algorithms like those that follow that are similar to LRU but which offer cheaper implementations One important advantage of the LRU algorithm is that it is amenable to full statistical analysis It has been proven for example that LRU can never result in more than N times more page faults than OPT algorithm where N is proportional to the number of pages in the managed pool On the other hand LRU s weakness is that its performance tends to degenerate under many quite common reference patterns For example if there are N pages in the LRU pool an application executing a loop over array of N 1 pages will cause a page fault on each and every access As loops over large arrays are common much effort has been put into modifying LRU to work better in such situations Many of the proposed LRU modifications try to detect looping reference patterns and to switch into suitable 
replacement algorithm like Most Recently Used MRU Variants on LRU edit LRU K 15 evicts the page whose K th most recent access is furthest in the past For example LRU 1 is simply LRU whereas LRU 2 evicts pages according to the time of their penultimate access LRU K improves greatly on LRU with regards to locality in time The ARC 16 algorithm extends LRU by maintaining a history of recently evicted pages and uses this to change preference to recent or frequent access It is particularly resistant to sequential scans The 2Q 17 algorithm improves upon the LRU and LRU 2 algorithm By having two queues one for hot path items and the other for slow path items items are first placed in the slow path queue and after a second access of the items placed in the hot path items Because references to added items are longer hold than in the LRU and LRU 2 algorithm it has a better hot path queue which improves the hit rate of the cache A comparison of ARC with other algorithms LRU MQ 2Q LRU 2 LRFU LIRS can be found in Megiddo amp Modha 2004 18 LRU is a marking algorithm so it is k k h 1 displaystyle tfrac k k h 1 nbsp competitive Random edit Random replacement algorithm replaces a random page in memory This eliminates the overhead cost of tracking page references Usually it fares better than FIFO and for looping memory references it is better than LRU although generally LRU performs better in practice OS 390 uses global LRU approximation and falls back to random replacement when LRU performance degenerates and the Intel i860 processor used a random replacement policy Rhodehamel 1989 19 Not frequently used NFU edit The not frequently used NFU page replacement algorithm requires a counter and every page has one counter of its own which is initially set to 0 At each clock interval all pages that have been referenced within that interval will have their counter incremented by 1 In effect the counters keep track of how frequently a page has been used Thus the page with the lowest counter 
can be swapped out when necessary The main problem with NFU is that it keeps track of the frequency of use without regard to the time span of use Thus in a multi pass compiler pages which were heavily used during the first pass but are not needed in the second pass will be favoured over pages which are comparably lightly used in the second pass as they have higher frequency counters This results in poor performance Other common scenarios exist where NFU will perform similarly such as an OS boot up Thankfully a similar and better algorithm exists and its description follows The not frequently used page replacement algorithm generates fewer page faults than the least recently used page replacement algorithm when the page table contains null pointer values Aging edit The aging algorithm is a descendant of the NFU algorithm with modifications to make it aware of the time span of use Instead of just incrementing the counters of pages referenced putting equal emphasis on page references regardless of the time the reference counter on a page is first shifted right divided by 2 before adding the referenced bit to the left of that binary number For instance if a page has referenced bits 1 0 0 1 1 0 in the past 6 clock ticks its referenced counter will look like this in chronological order 10000000 01000000 00100000 10010000 11001000 01100100 Page references closer to the present time have more impact than page references long ago This ensures that pages referenced more recently though less frequently referenced will have higher priority over pages more frequently referenced in the past Thus when a page needs to be swapped out the page with the lowest counter will be chosen The following Python code simulates the aging algorithm Counters V i displaystyle V i nbsp are initialized with 0 and updated as described above via V i R i k 1 V i 1 displaystyle V i leftarrow R i ll k 1 V i gg 1 nbsp using arithmetic shift operators from collections abc import Sequence def simulate 
aging Rs Sequence k int gt None Simulate aging print t R bits 0 length Counters for pages 0 length format length len Rs Vs 0 len Rs 0 for t R in enumerate Rs Vs R i lt lt k 1 V gt gt 1 for i V in enumerate Vs print 02d format t R join 0 b format V k for V in Vs In the given example of R bits for 6 pages over 5 clock ticks the function prints the following output which lists the R bits for each clock tick t and the individual counter values V i displaystyle V i nbsp for each page in binary representation 20 gt gt gt Rs 1 0 1 0 1 1 1 1 0 0 1 0 1 1 0 1 0 1 1 0 0 0 1 0 0 1 1 0 0 0 gt gt gt k 8 gt gt gt simulate aging Rs k t R bits 0 5 Counters for pages 0 5 00 1 0 1 0 1 1 10000000 00000000 10000000 00000000 10000000 10000000 01 1 1 0 0 1 0 11000000 10000000 01000000 00000000 11000000 01000000 02 1 1 0 1 0 1 11100000 11000000 00100000 10000000 01100000 10100000 03 1 0 0 0 1 0 11110000 01100000 00010000 01000000 10110000 01010000 04 0 1 1 0 0 0 01111000 10110000 10001000 00100000 01011000 00101000 Note that aging differs from LRU in the sense that aging can only keep track of the references in the latest 16 32 depending on the bit size of the processor s integers time intervals Consequently two pages may have referenced counters of 00000000 even though one page was referenced 9 intervals ago and the other 1000 intervals ago Generally speaking knowing the usage within the past 16 intervals is sufficient for making a good decision as to which page to swap out Thus aging can offer near optimal performance for a moderate price Longest distance first LDF page replacement algorithm edit The basic idea behind this algorithm is Locality of Reference as used in LRU but the difference is that in LDF locality is based on distance not on the used references In the LDF replace the page which is on longest distance from the current page If two pages are on same distance then the page which is next to current page in anti clock rotation will get replaced citation needed Implementation 
Implementation details

Techniques for hardware with no reference bit

Many of the techniques discussed above assume the presence of a reference bit associated with each page. Some hardware has no such bit, so its efficient use requires techniques that operate well without one.

One notable example is VAX hardware running OpenVMS. This system knows if a page has been modified, but not necessarily if a page has been read. Its approach is known as Secondary Page Caching. Pages removed from working sets (process-private memory, generally) are placed on special-purpose lists while remaining in physical memory for some time. Removing a page from a working set is not technically a page-replacement operation, but effectively identifies that page as a candidate. A page whose backing store is still valid (whose contents are not dirty, or otherwise do not need to be preserved) is placed on the tail of the Free Page List. A page that requires writing to backing store is placed on the Modified Page List. These actions are typically triggered when the size of the Free Page List falls below an adjustable threshold.

Pages may be selected for working set removal in an essentially random fashion, with the expectation that if a poor choice is made, a future reference may retrieve that page from the Free or Modified list before it is removed from physical memory. A page referenced this way will be removed from the Free or Modified list and placed back into a process working set. The Modified Page List additionally provides an opportunity to write pages out to backing store in groups of more than one page, increasing efficiency. These pages can then be placed on the Free Page List. The sequence of pages that works its way to the head of the Free Page List resembles the result of an LRU or NRU mechanism, and the overall effect has similarities to the Second-Chance algorithm described earlier.

Another example is used by the Linux kernel on ARM. The lack of hardware functionality is made up for by providing two page tables: the processor-native page tables, with neither referenced bits nor dirty bits, and software-maintained page tables with the required bits present. The emulated bits in the software-maintained table are set by page faults. To generate those page faults, clearing emulated bits in the second table revokes some of the access rights to the corresponding page, which is implemented by altering the native table.

Page cache in Linux

Linux uses a unified page cache for:

  * brk and anonymous mmaped regions. This includes the heap and stack of user-space programs. It is written to swap when paged out.
  * Non-anonymous (file-backed) mmaped regions. If present in memory and not privately modified, the physical page is shared with the file cache or buffer.
  * Shared memory acquired through shm_open.
  * The tmpfs in-memory filesystem; written to swap when paged out.
  * The file cache; written to the underlying block storage (possibly going through the buffer, see below) when paged out.
  * The cache of block devices, called the "buffer" by Linux (not to be confused with other structures also called buffers, like those used for pipes and buffers used internally in Linux); written to the underlying storage when paged out.

The unified page cache operates on units of the smallest page size supported by the CPU (4 KiB in ARMv8, x86 and x86-64), with some pages of the next larger size (2 MiB in x86-64) called "huge pages" by Linux. The pages in the page cache are divided into an "active" set and an "inactive" set. Both sets keep an LRU list of pages. In the basic case, when a page is accessed by a user-space program, it is put at the head of the inactive set. When it is accessed repeatedly, it is moved to the active list. Linux moves pages from the active set to the inactive set as needed, so that the active set stays smaller than the inactive set. When a page is moved to the inactive set, it is removed from the page table of any process address space, without being paged out of physical memory.[21][22]
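The two-list scheme described above can be sketched as a pair of LRU lists with promotion on repeat access. This is a toy model, not kernel code: the class `TwoListCache` and its exact promotion, demotion and eviction rules are simplifications of the kernel's actual heuristics.

```python
from collections import OrderedDict


class TwoListCache:
    """Toy model of Linux's active/inactive page lists.

    A first access puts a page at the head of the inactive list; a
    repeat access promotes it to the active list.  Eviction reclaims
    the oldest inactive page, and the active list is kept no larger
    than the inactive list by demoting its oldest pages.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.active = OrderedDict()    # newest entries at the end
        self.inactive = OrderedDict()

    def access(self, page):
        if page in self.active:
            self.active.move_to_end(page)      # refresh LRU position
        elif page in self.inactive:
            del self.inactive[page]
            self.active[page] = True           # repeat access: promote
        else:
            if len(self.active) + len(self.inactive) >= self.capacity:
                self.evict()
            self.inactive[page] = True         # first access: inactive head
        self.rebalance()

    def rebalance(self):
        # Keep the active list no larger than the inactive list.
        while len(self.active) > len(self.inactive):
            page, _ = self.active.popitem(last=False)   # oldest active page
            self.inactive[page] = True

    def evict(self):
        if self.inactive:
            self.inactive.popitem(last=False)  # reclaim oldest inactive page
        elif self.active:
            self.active.popitem(last=False)
```

With a capacity of 4 and the access string 1, 2, 3, 1, 4, 5, page 1 ends up on the active list (accessed twice), while page 2, the oldest single-access page, is the one evicted.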
When a page is removed from the inactive set, it is paged out of physical memory. The sizes of the active and inactive lists can be queried from /proc/meminfo in the fields "Active", "Inactive", "Active(anon)", "Inactive(anon)", "Active(file)" and "Inactive(file)".

Working set

Main article: Working set

The working set of a process is the set of pages expected to be used by that process during some time interval. The "working set model" isn't a page replacement algorithm in the strict sense (it's actually a kind of medium-term scheduler).[clarification needed]

References

Bell, John. "Operating Systems Course Notes: Virtual Memory". University of Illinois at Chicago College of Engineering. Archived from the original on 23 September 2018. Retrieved 21 July 2017.
Jones, Douglas W. "22C:116 Lecture Notes". University of Iowa Department of Computer Science. Archived from the original on 30 June 2012. Retrieved 18 March 2008.
Torrez, Paul; et al. "CS111 Lecture 11 notes". UCLA Computer Science Department. Archived from the original on 9 January 2009.
Bahn, Hyokyung; Noh, Sam H. (12–14 February 2003). "Characterization of Web reference behavior revisited: Evidence for Dichotomized Cache management". International Conference on Information Networking 2003. Jeju, South Korea: Springer-Verlag. pp. 1018–1027. doi:10.1007/978-3-540-45235-5_100. ISBN 978-3-540-40827-7.
Silberschatz, Abraham; Galvin, Peter Baer; Gagne, Greg (14 December 2004). Operating System Concepts (7th ed.). Hoboken, NJ, USA: John Wiley & Sons. p. 339. ISBN 0-47169-466-5. OCLC 56913661.
"VMS Help: Sys Parameters, TBSKIPWSL".
Tanenbaum, Andrew S. (2001). Modern Operating Systems (2nd ed.). Upper Saddle River, NJ, USA: Prentice-Hall. p. 218 (4.4.5). ISBN 978-0-13-031358-4. LCCN 00051666. OCLC 45284637. OL 24214243M.
Corbató, Fernando J. (1969). "A Paging Experiment with the Multics System" (PDF). Festschrift: In Honor of P. M. Morse. MIT Press. pp. 217–228.
Smith, Alan Jay (September 1978). "Sequentiality and prefetching in database systems". ACM Transactions on Database Systems. New York, NY, USA: ACM. 3 (3): 223–247. doi:10.1145/320263.320276. S2CID 11611563.
Jiang, Song; Chen, Feng; Zhang,
Xiaodong (10–15 April 2005). "CLOCK-Pro: an effective improvement of the CLOCK replacement" (PDF). 2005 USENIX Annual Technical Conference. Anaheim, CA, USA: USENIX Association. p. 35. Archived (PDF) from the original on 12 June 2019. Retrieved 24 March 2009.
Carr, Richard W.; Hennessy, John L. (14–16 December 1981). "WSCLOCK - a simple and effective algorithm for virtual memory management" (gzipped PDF). Eighth ACM Symposium on Operating Systems Principles. Pacific Grove, CA, USA: ACM. pp. 87–95. doi:10.1145/800216.806596. ISBN 0-89791-062-1. Archived from the original on 10 June 2007.
Gottlieb, Allan. "WSClock". New York University Computer Science Department. Archived from the original on 30 July 2012. Retrieved 12 June 2019.
Tanenbaum, Andrew S. "Page Replacement Algorithms". InformIT. Archived from the original on 10 September 2012. Retrieved 12 June 2019.
Bansal, Sorav & Modha, Dharmendra S. (31 March – 2 April 2004). "CAR: Clock with Adaptive Replacement" (PDF). 3rd USENIX Conference on File and Storage Technologies (FAST '04). San Francisco, CA, USA: USENIX Association. pp. 187–200. CiteSeerX 10.1.1.105.6057. Archived (PDF) from the original on 31 July 2004.
O'Neil, Elizabeth J.; et al. (25–28 May 1993). "The LRU-K page replacement algorithm for database disk buffering" (PDF). 1993 ACM SIGMOD International Conference on Management of Data. Washington, D.C., USA: ACM. pp. 297–306. CiteSeerX 10.1.1.18.1434. doi:10.1145/170035.170081. ISBN 0-89791-592-5. Archived (PDF) from the original on 6 September 2019.
Megiddo, Nimrod & Modha, Dharmendra S. (31 March – 2 April 2003). "ARC: A Self-Tuning, Low Overhead Replacement Cache" (PDF). 2nd USENIX Conference on File and Storage Technologies (FAST '03). San Francisco, CA, USA: USENIX Association. pp. 115–130. Archived (PDF) from the original on 8 February 2010.
Johnson, Theodore; Shasha, Dennis (12–15 September 1994). "2Q: A Low Overhead High Performance Buffer Management Replacement Algorithm" (PDF). 20th International Conference on Very Large Data Bases. Santiago de Chile, Chile: Morgan Kaufmann. pp. 439–450. ISBN 1-55860-153-8. Archived (PDF) from the original on 17
March 2020. Retrieved 31 July 2005.
Megiddo, Nimrod & Modha, Dharmendra S. (2004). "Outperforming LRU with an Adaptive Replacement Cache Algorithm" (PDF). Computer. IEEE Computer Society. 37 (4): 58. CiteSeerX 10.1.1.231.498. doi:10.1109/MC.2004.1297303. S2CID 5507282. Archived (PDF) from the original on 21 October 2012. Retrieved 20 September 2013.
Rhodehamel, Michael W. (2–4 October 1989). "The Bus Interface and Paging Units of the i860 Microprocessor". 1989 IEEE International Conference on Computer Design: VLSI in Computers and Processors. Cambridge, MA, USA: IEEE. pp. 380–384. doi:10.1109/ICCD.1989.63392. ISBN 0-8186-1971-6. INSPEC Accession Number 3719504.
Tanenbaum, Andrew S.; Bos, Herbert (2015). Modern Operating Systems (4th ed.). Boston, MA, USA: Pearson. p. 215. ISBN 978-0-13-359162-0. OL 25620855M.
See the explanation at the start of mm/workingset.c in the Linux source.
Corbet, Jonathan (2 May 2012). "Better active/inactive list balancing". LWN.net.

Further reading

Wong, Kin-Yeung (23 January 2006). "Web cache replacement policies: a pragmatic approach". IEEE Network. IEEE. 20 (1): 28–34. doi:10.1109/MNET.2006.1580916. ISSN 0890-8044. S2CID 17969287. INSPEC Accession Number 8964134.
Aho, Alfred V.; Denning, Peter J.; Ullman, Jeffrey D. (January 1971). "Principles of Optimal Page Replacement". Journal of the ACM. New York, NY, USA: ACM. 18 (1): 80–93. doi:10.1145/321623.321632. S2CID 3154537.
Tanenbaum, Andrew S. (1997). Operating Systems: Design and Implementation (2nd ed.). Upper Saddle River, NJ, USA: Prentice-Hall. ISBN 0-13-638677-6. LCCN 96037153. OL 998396M.
Tanenbaum, Andrew S. (2001). Modern Operating Systems (2nd ed.). Upper Saddle River, NJ, USA: Prentice-Hall. ISBN 978-0-13-031358-4. LCCN 00051666. OCLC 45284637. OL 24214243M. Online excerpt on page replacement algorithms: "Page Replacement Algorithms".
Glass, Gideon; Cao, Pei (15–18 June 1997). "Adaptive page replacement based on memory reference behavior". 1997 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems. Seattle, WA, USA: ACM. pp. 115–126. doi:10.1145/258612.258681. ISBN 0-89791-909-2. Also available in
extended form as: Glass, Gideon; Cao, Pei (1997). "Technical Report 1338". Department of Computer Sciences, University of Wisconsin-Madison.
Kim, Jong Min; et al. (17–21 October 2000). "A Low-Overhead High-Performance Unified Buffer Management Scheme that Exploits Sequential and Looping References" (PDF). 4th USENIX Symposium on Operating System Design and Implementation (OSDI 2000). Vol. 4. San Diego, CA, USA: USENIX Association. Archived (PDF) from the original on 18 September 2004.
Smaragdakis, Yannis; Kaplan, Scott; Wilson, Paul (1–4 May 1999). "EELRU: simple and effective adaptive page replacement" (PDF). 1999 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems. Atlanta, GA, USA: ACM. pp. 122–133. doi:10.1145/301453.301486. ISBN 1-58113-083-X. Archived (PDF) from the original on 4 March 2016.
Jiang, Song; Zhang, Xiaodong (15–19 June 2002). "LIRS: a Low Inter-reference Recency Set replacement" (PDF). 2002 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems. Marina Del Rey, CA, USA: ACM. pp. 31–42. doi:10.1145/511334.511340. ISBN 1-58113-531-9. Archived (PDF) from the original on 12 June 2019.
Lee, Donghee; et al. (1–4 September 1997). "Implementation and Performance Evaluation of the LRFU Replacement Policy". 23rd Euromicro Conference: New Frontiers of Information Technology. Budapest, Hungary: IEEE Computer Society. pp. 106–111. doi:10.1109/EMSCNT.1997.658446. ISBN 0-8186-8215-9. INSPEC Accession Number 5856800.
Zhou, Yuanyuan; Philbin, James; Li, Kai (25–30 June 2001). "The Multi-Queue Replacement Algorithm for Second Level Buffer Caches" (PDF). 2001 USENIX Annual Technical Conference. Boston, MA, USA: USENIX Association. pp. 91–104. ISBN 1-880446-09-X. Archived (PDF) from the original on 24 November 2005.