Chapter 10: Virtual Memory
Virtual memory can be implemented via:
- Demand paging (simpler; no external fragmentation to worry about)
- Demand segmentation
Demand Paging
Bring a page into memory only when it is needed
invalid reference --> abort
not-in-memory --> bring to memory
- Less I/O needed
- Less memory needed
- Faster response
- More users
Pure demand paging – never bring a page into memory unless that page will be needed
Valid-Invalid Bit
With each page table entry a valid–invalid bit is associated
(1 --> in-memory, 0 --> not-in-memory)
Page Fault
trap to OS
OS looks at another table to decide:
- Invalid reference --> abort
- Just not in memory --> bring the page into memory
Performance of Demand Paging
Effective Access Time (EAT) = (1 – p) x memory access + p x (page fault overhead + [swap page out] + swap page in + restart overhead), where p is the page fault rate (0 <= p <= 1)
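As a numeric sketch of the formula above (the 200 ns memory access and 8 ms fault-service figures are assumed textbook-style values, not from this outline; the function name is mine):

```python
def eat_ns(p, mem_ns, fault_ns):
    """Effective access time: (1 - p) * memory access + p * fault service."""
    return (1 - p) * mem_ns + p * fault_ns

# Even one fault per 1000 accesses slows memory down by a factor of ~40.
print(eat_ns(0.001, 200, 8_000_000))  # 8199.8 ns
```

This is why keeping p very small matters: the fault-service time is millions of times larger than a memory access.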
Copy-on-Write
Copy-on-Write (COW) allows both parent and child processes to initially share the same pages in memory.
If either process modifies a shared page, only then is the page copied
Page Replacement
- Find the location of the desired page on disk.
- Find a free frame:
  - If there is a free frame, use it.
  - If there is no free frame, use a page replacement algorithm to select a victim frame.
- Read the desired page into the (newly) free frame. Update the page and frame tables.
- Restart the instruction.
First-In-First-Out (FIFO) Algorithm
FIFO Replacement – Belady's Anomaly:
more frames --> more page faults (for some reference strings)
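The anomaly can be reproduced with a short FIFO simulation; a sketch, where `fifo_faults` is my own name and the reference string is the classic one that exhibits the anomaly:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement with nframes frames."""
    frames = deque()              # oldest resident page sits at the left
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()  # evict the page that entered first
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults: more frames, more faults
```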
Optimal Page Replacement 最优算法
Replace the page that will not be used for the longest period of time.
**Used for measuring how well your algorithm performs** – the optimal algorithm mainly serves as a yardstick, since it requires future knowledge of the reference string.
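Because the whole reference string is known offline, OPT can be scored after the fact; a sketch (the function name is mine):

```python
def opt_faults(refs, nframes):
    """Page faults under the optimal (farthest-future-use) policy."""
    frames = set()
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            def next_use(p):
                # Index of the next reference; never used again -> infinity.
                try:
                    return refs.index(p, i + 1)
                except ValueError:
                    return float('inf')
            # Evict the page whose next use lies farthest in the future.
            frames.remove(max(frames, key=next_use))
        frames.add(page)
    return faults

print(opt_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 7 faults
```

On the same string, FIFO with 3 frames takes 9 faults, so OPT sets the lower bound against which the other policies are measured.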
Least Recently Used (LRU) Algorithm
Counter implementation :
Every page entry has a counter; every time the page is referenced through this entry, copy the clock into the counter.
When a page needs to be replaced, look at the counters to find the page with the smallest (oldest) time value.
Drawback: the bookkeeping overhead is too high.
Stack implementation – keep a stack of page numbers in a doubly linked form; the most recently used page is on top
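The stack idea maps naturally onto an ordered dictionary; a sketch (names are mine), with Python's `OrderedDict` standing in for the doubly linked list:

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Page faults under LRU; most recently used page is at the right end."""
    stack = OrderedDict()
    faults = 0
    for page in refs:
        if page in stack:
            stack.move_to_end(page)        # move to top of the "stack"
        else:
            faults += 1
            if len(stack) == nframes:
                stack.popitem(last=False)  # evict least recently used
            stack[page] = None
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))  # 10 faults
print(lru_faults(refs, 4))  # 8 faults: LRU is a stack algorithm,
                            # so it never shows Belady's anomaly
```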
LRU Approximation Algorithms LRU近似算法
Additional-Reference-Bits Algorithm
Second chance
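A minimal sketch of the second-chance (clock) policy: a circular scan over the frames, where a set reference bit buys the page one more trip around (function and variable names are mine):

```python
def second_chance_faults(refs, nframes):
    """Page faults under second-chance (clock) replacement."""
    frames = []  # list of [page, ref_bit] pairs, scanned circularly
    hand = 0     # clock hand position
    faults = 0
    for page in refs:
        entry = next((e for e in frames if e[0] == page), None)
        if entry is not None:
            entry[1] = 1                  # set reference bit on a hit
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append([page, 1])
            continue
        # Pages with the bit set get a second chance: clear and move on.
        while frames[hand][1] == 1:
            frames[hand][1] = 0
            hand = (hand + 1) % nframes
        frames[hand] = [page, 1]          # replace the victim
        hand = (hand + 1) % nframes
    return faults
```

When every resident page has its bit set, the sweep clears them all and the policy degenerates to plain FIFO.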
Counting Algorithms
Keep a counter of the number of references that have been made to each page
LFU (least frequently used) Algorithm: replaces the page with the smallest count.
MFU (most frequently used) Algorithm: based on the argument that the page with the smallest count was probably just brought in and has yet to be used.
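A sketch of LFU (the function name is mine; as a design choice this version keeps counts even after eviction, whereas real implementations may age or reset them):

```python
from collections import Counter

def lfu_faults(refs, nframes):
    """Page faults under LFU: evict the resident page with the
    smallest reference count."""
    frames = set()
    counts = Counter()  # reference counts, retained across evictions here
    faults = 0
    for page in refs:
        counts[page] += 1
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            frames.remove(min(frames, key=lambda p: counts[p]))
        frames.add(page)
    return faults

print(lfu_faults([1, 1, 1, 2, 2, 3, 1, 4], 3))  # 4 faults: page 3,
                                                # least referenced, is evicted
```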
Allocation of Frames
Two major allocation schemes.
- fixed allocation
- priority allocation
Global replacement –process selects a replacement frame from the set of all frames; one process can take a frame from another.
Local replacement –each process selects from only its own set of allocated frames.
Both replacement schemes have pros and cons; consider which scenarios suit each.
Thrashing
If a process does not have “enough” pages, the page-fault rate is very high. This leads to:
- low CPU utilization.
- operating system thinks that it needs to increase the degree of multiprogramming
- another process added to the system.
Thrashing = a process is busy swapping pages in and out
Locality model
Process migrates from one locality to another.
Localities may overlap
Why does thrashing occur? Σ (size of locality) > total memory size
Working-Set Model
Keeping Track of the Working Set
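One way to keep track of the working set is to recompute it over a sliding window Δ of the most recent references; a sketch (names and the example reference string are mine):

```python
def working_set(refs, t, delta):
    """Pages referenced in the window of the last `delta` references
    ending at time t (t is a 0-based index into the reference string)."""
    start = max(0, t + 1 - delta)
    return set(refs[start:t + 1])

refs = [1, 2, 1, 3, 2, 4, 4, 4]
print(sorted(working_set(refs, 4, 4)))  # [1, 2, 3]
print(len(working_set(refs, 7, 3)))     # 1: the locality shrank to page 4
```

The working-set size WSS is the size of this set; the sum of all processes' WSS values exceeding total memory is the thrashing condition above.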
Allocating Kernel Memory
Buddy System
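A sketch of the two core buddy-system calculations: rounding each request up to a power-of-two block, and locating a block's buddy by XOR-ing its address with its size (function names are mine):

```python
def round_up_pow2(request):
    """Buddy allocators satisfy each request from a power-of-two block."""
    size = 1
    while size < request:
        size *= 2
    return size

def buddy_of(addr, size):
    """Address of the buddy block; addr must be size-aligned."""
    return addr ^ size

print(round_up_pow2(33))  # 64: a 33-byte request wastes 31 bytes internally
print(buddy_of(0, 64))    # 64: the two halves of a 128-byte block
print(buddy_of(64, 64))   # 0: buddies can coalesce back into the larger block
```

The XOR trick is what makes coalescing fast: when a block is freed, its buddy's address is computable in one operation, and if the buddy is also free the pair merges.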
Slab Allocator
Other Considerations
Prepaging
- To reduce the large number of page faults that occur at process startup
- Prepage all or some of the pages a process will need, before they are referenced
- But if prepaged pages are unused, I/O and memory were wasted
- Assume s pages are prepaged and α of the pages are used
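The trade-off in the last bullet can be made concrete: prepaging pays off when the cost of the s·α avoided faults exceeds the cost of the s·(1 − α) unnecessary prepages (the function name and cost figures below are mine, for illustration only):

```python
def prepaging_wins(s, alpha, fault_cost, prepage_cost):
    """True when prepaging s pages, of which a fraction alpha is used,
    is cheaper than taking the page faults on demand."""
    saved = s * alpha * fault_cost           # faults avoided
    wasted = s * (1 - alpha) * prepage_cost  # unused prepaged pages
    return saved > wasted

# Illustrative costs: 8 ms to service a fault vs 200 ns per prepaged page.
print(prepaging_wins(100, 0.9, 8_000_000, 200))       # True
print(prepaging_wins(100, 0.000001, 8_000_000, 200))  # False
```

With such lopsided costs, even a small α favors prepaging; the decision flips only when almost no prepaged page is ever used.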
Page Size
- fragmentation: smaller pages are better; internal fragmentation wastes, on average, 1/2 page size per process
- table size: larger pages are better (fewer page table entries)
- I/O overhead: larger pages are better (transfer time amortizes seek and latency)
- locality: smaller pages match locality more precisely
TLB Reach
Program Structure
I/O interlock
Pages must sometimes be locked into memory