OS.MEMORY

Presentation Transcript

Slide 1:

GRACE REPORT SARAH REPORT MHAJO REPORT JANE REPORT MICHIKO REPORT

Slide 2:

Concepts of MEMORY MANAGEMENT: ADDRESS BINDING (COMPILE TIME, LOAD TIME, RUN TIME), DYNAMIC LOADING, DYNAMIC LINKING, OVERLAYS, SWAPPING. GRACE DIANNE ♥

Slide 3:

MEMORY MANAGEMENT CONCEPT

Slide 4:

MEMORY MANAGEMENT CONCEPT Memory management is the operating system component concerned with managing the computer's memory and with ensuring that jobs are protected from interfering with one another.

Slide 5:

It is important to remember that the simplest form of memory management divides memory into two regions, one for the OS and one for user programs in the user space. This scheme is known as the single absolute partition.

Slide 6:

ADDRESS BINDING CONCEPT

Slide 7:

When a program, commonly stored on a secondary storage device, is loaded into a memory partition, its address is mapped from one address space to another.

Slide 8:

When the program is brought in from secondary storage, its logical addresses are converted into physical addresses in memory.

Slide 9:

This process is called address binding, which means that the program is bound to an address location in memory.

Slide 10:

COMPILE TIME BINDING

Slide 11:

Binding is done while the program code is being converted into machine-readable form, that is, while compilation takes place.

Slide 12:

This is possible only if the location at which the program will be stored in memory is already known.

Slide 13:

This process generates absolute code.

Slide 14:

LOAD TIME BINDING

Slide 15:

Binding is done during the loading process of the program. When the location in memory where the program will be stored is not yet known at compile time, binding is delayed until the program is loaded into memory.

Slide 16:

This binding method generates what is known as relocatable code.
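
A minimal sketch (not the deck's example, addresses and sizes are made up) of what load-time binding of relocatable code amounts to: the image carries addresses relative to zero, and the loader adds the chosen partition's base address once, at load time.

```c
/* Sketch: load-time binding of relocatable code. Each "address" in the
 * image is relative to 0; the loader adds the partition's base address
 * exactly once, when the program is loaded. */
#include <stdio.h>

#define N 3

int main(void) {
    unsigned relative[N] = {0x0000, 0x0040, 0x0100}; /* addresses in the relocatable image */
    unsigned load_base = 0x4000;                     /* partition chosen at load time */
    unsigned bound[N];

    for (int i = 0; i < N; i++)
        bound[i] = load_base + relative[i];          /* binding happens here, at load time */

    for (int i = 0; i < N; i++)
        printf("relative 0x%04X -> physical 0x%04X\n", relative[i], bound[i]);
    return 0;
}
```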

Slide 17:

RUN TIME BINDING

Slide 18:

This method is done during the run-time execution of the program. At this point, remember the cardinal rule that in order for a program to be executed by the CPU it must first be loaded into memory.

Slide 19:

In this method, if the program can be moved from one memory location to another during execution, then binding is postponed until run time.
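
One hedged way to picture run-time binding is a relocation (base) register added to every logical address at access time; the register value here is made up and the translation is simulated in software, whereas real systems do it in the MMU.

```c
/* Sketch: run-time binding with a relocation (base) register. Translation
 * happens on every access, so the process can be moved at run time simply
 * by copying it and updating the register. */
#include <stdio.h>

static unsigned relocation_register = 0x4000;   /* set by the OS when dispatching */

static unsigned translate(unsigned logical) {
    return relocation_register + logical;       /* performed at every memory access */
}

int main(void) {
    printf("logical 0x0100 -> physical 0x%04X\n", translate(0x0100));
    relocation_register = 0x9000;               /* process moved during execution */
    printf("logical 0x0100 -> physical 0x%04X\n", translate(0x0100));
    return 0;
}
```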

Slide 20:

DYNAMIC LOADING

Slide 21:

This concept promotes efficient memory utilization by loading only the main program and keeping unneeded routines from being loaded along with it.

Slide 22:

When such a routine is required, the OS locates it and loads it together with the main program.
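
As an illustration only (not the deck's example), one place this load-on-demand idea shows up on a POSIX/Linux system is loading a shared library routine only when it is first needed; the library name libm.so.6 is an assumption, and the program must be linked with -ldl.

```c
/* Illustration (POSIX/Linux assumed): load a routine on demand rather
 * than bringing it in with the main program up front. */
#include <stdio.h>
#include <dlfcn.h>

int main(void) {
    /* The math library is loaded only now, when the routine is needed. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!cosine) { fprintf(stderr, "%s\n", dlerror()); dlclose(handle); return 1; }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);
    return 0;
}
```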

Slide 23:

DYNAMIC LINKING

Slide 24:

This concept rests on the idea that instead of delaying the loading process, it is the linking process that is postponed. To implement this scheme, a stub is used to control the situation.

Slide 25:

Stub – a piece of code tasked with locating the needed routine.
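
A minimal sketch of the stub idea, with hypothetical names: the first call goes through a small resolver that locates the real routine, replaces itself (here, by patching a function pointer), and completes the call; later calls go straight to the routine.

```c
/* Sketch of a stub: the first call resolves and binds the real routine,
 * then patches the pointer so subsequent calls bypass the stub. */
#include <stdio.h>

static void real_routine(void) { puts("real routine running"); }

static void resolver_stub(void);                 /* forward declaration */
static void (*routine_ptr)(void) = resolver_stub;

static void resolver_stub(void) {
    puts("stub: locating and binding the real routine");
    routine_ptr = real_routine;                  /* replace stub with the real address */
    routine_ptr();                               /* complete the original call */
}

int main(void) {
    routine_ptr();   /* first call: goes through the stub */
    routine_ptr();   /* second call: goes directly to the routine */
    return 0;
}
```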

Slide 26:

OVERLAYS

Slide 27:

In the early days, when computer memory capacity was still very small, programs were split into small modules so that each module could be fitted into memory one at a time. These modules are called overlays.

Slide 28:

SWAPPING

Slide 29:

Another important concept in memory management is process swapping. Unlike overlays, where only modules are placed into and taken out of memory, swapping moves an entire process.

Slide 30:

Swapped out processes are placed on a backing store which is normally a Direct Access Storage Device (DASD).

Slide 31:

PARTITIONS: MULTIPROGRAMMING, FIXED PARTITION ALLOCATION, PARTITION SELECTION, FRAGMENTATION. SARAH JANE ♥

Slide 32:

PARTITIONS ♥ the act of dividing a storage space (here, main memory) into multiple logical units.

Slide 33:

The simplest way to achieve multiple programs in memory is to divide the user space into cells or partitions where different programs can be placed.

Slide 34:

♣MULTIPROGRAMMING WITH FIXED PARTITIONS♣

Slide 35:

MULTIPROGRAMMING WITH FIXED PARTITIONS ♥ accommodates a job in memory only when the process can fit into one of the fixed-size partitions.

Slide 36:

MULTIPROGRAMMING WITH VARIABLE PARTITIONS ♥ capable of providing an exact-size portion of the user space to an incoming job.

Slide 37:

MULTIPROGRAMMING WITH FIXED PARTITION ALLOCATION (MFP) ♥ implemented with similar- or dissimilar-sized partitions.

Slide 38:

♣ FRAGMENTATION ♣

Slide 39:

INTERNAL FRAGMENTATION ♥ the amount of unused memory space within a partition, produced when a small process is allocated to a big partition.

Slide 40:

EXTERNAL FRAGMENTATION ♥ the total amount of memory regions that are available but are individually too small to accommodate any incoming process.
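
A small arithmetic sketch, with made-up sizes, to make the two definitions concrete: internal fragmentation is the space left over inside an allocated partition, external fragmentation is free space that exists only in pieces too small to use.

```c
/* Worked example (made-up sizes, in KB): internal vs. external fragmentation. */
#include <stdio.h>

int main(void) {
    int partition = 100, job = 64;               /* fixed partition, smaller job */
    printf("internal fragmentation: %d KB\n", partition - job);   /* 36 KB wasted inside */

    int holes[] = {8, 5, 12};                    /* free regions scattered between partitions */
    int incoming = 20, external = 0;
    for (int i = 0; i < 3; i++)
        if (holes[i] < incoming) external += holes[i];   /* free but individually too small */
    printf("external fragmentation: %d KB\n", external);          /* 25 KB unusable */
    return 0;
}
```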

Slide 41:

♣PARTITION SELECTION ALGORITHMS♣

Slide 42:

FIRST FIT ♥ an algorithm which always starts at the beginning of memory. Once it locates a hole that can accommodate the incoming job, it immediately places the job into that partition.

Slide 43:

When a new job arrives and requests memory, the first fit algorithm again goes back to the start, usually the low addresses, and evaluates each unused memory partition until it finds one that can accommodate the new process.
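
A minimal sketch of first fit as just described, over a made-up list of hole sizes kept in address order: the scan always starts from the beginning and stops at the first hole large enough.

```c
/* First fit sketch (hole sizes are made up, in KB). */
#include <stdio.h>

static int first_fit(const int hole[], int n, int job) {
    for (int i = 0; i < n; i++)
        if (hole[i] >= job) return i;   /* the first hole that fits wins */
    return -1;                          /* no hole can accommodate the job */
}

int main(void) {
    int hole[] = {30, 10, 60, 25};      /* free partitions, in address order */
    int idx = first_fit(hole, 4, 20);
    printf("job of 20K placed in hole %d (%dK)\n", idx, hole[idx]);  /* hole 0 */
    return 0;
}
```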

Slide 44:

BEST FIT ♥ an algorithm which allocates an incoming job to the smallest available partition that can hold it. In this strategy, the possible internal fragmentation is minimized.

Slide 45:

In order to guarantee that the job is allocated to the smallest available hole, it searches the entire list of available memory regions.
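
A matching sketch of best fit, using the same made-up hole list: the entire list is scanned and the smallest hole that still fits is chosen, keeping the leftover space as small as possible.

```c
/* Best fit sketch (hole sizes are made up, in KB). */
#include <stdio.h>

static int best_fit(const int hole[], int n, int job) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (hole[i] >= job && (best == -1 || hole[i] < hole[best]))
            best = i;                   /* smaller feasible hole found */
    return best;
}

int main(void) {
    int hole[] = {30, 10, 60, 25};      /* same made-up hole list */
    int idx = best_fit(hole, 4, 20);
    printf("job of 20K placed in hole %d (%dK)\n", idx, hole[idx]);  /* hole 3, 25K */
    return 0;
}
```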

Slide 46:

NEXT FIT ♥ a modified version of the first fit algorithm. Next fit starts from the beginning of the memory regions and allocates the job to the first hole it finds capable of accepting the job.

Slide 47:

The difference from first fit is that when a second job arrives and requests memory, the next fit pointer continues from the next available partition instead of going back to the start.
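
A sketch of next fit with the same made-up hole list: a remembered pointer makes the second request resume the scan where the first one stopped rather than returning to the low addresses.

```c
/* Next fit sketch (hole sizes are made up, in KB). */
#include <stdio.h>

static int last = 0;                    /* remembered position of the roving pointer */

static int next_fit(const int hole[], int n, int job) {
    for (int k = 0; k < n; k++) {
        int i = (last + k) % n;         /* wrap around the hole list */
        if (hole[i] >= job) { last = (i + 1) % n; return i; }
    }
    return -1;
}

int main(void) {
    int hole[] = {30, 10, 60, 25};      /* same made-up hole list */
    printf("first request (20K): hole %d\n", next_fit(hole, 4, 20));   /* hole 0 */
    printf("second request (20K): hole %d\n", next_fit(hole, 4, 20));  /* hole 2, not 0 */
    return 0;
}
```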

Slide 48:

WORST FIT ♥ works in the opposite way to best fit: it allocates the largest available hole to a particular program/process. It produces a large amount of internal fragmentation.
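
For completeness, a sketch of worst fit on the same made-up hole list: the whole list is scanned and the largest feasible hole is chosen, leaving the biggest leftover fragment.

```c
/* Worst fit sketch (hole sizes are made up, in KB). */
#include <stdio.h>

static int worst_fit(const int hole[], int n, int job) {
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (hole[i] >= job && (worst == -1 || hole[i] > hole[worst]))
            worst = i;                  /* larger feasible hole found */
    return worst;
}

int main(void) {
    int hole[] = {30, 10, 60, 25};      /* same made-up hole list */
    int idx = worst_fit(hole, 4, 20);
    printf("job of 20K placed in hole %d (%dK)\n", idx, hole[idx]);  /* hole 2, 60K */
    return 0;
}
```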

Slide 49:

CONTENTS: MULTIPROGRAMMING WITH VARIABLE PARTITIONS, CHECKERBOARDING, BUDDY SYSTEM, VIRTUAL MEMORY, TABLES OF PROCESS. MARY MARIEJO ♥

Slide 50:

Multi-Programming with Variable Partitions

Slide 51:

Multi-Programming with Variable Partitions (MVP) is another major memory management scheme. The approach of MVP is that at the start of memory allocation the user space is treated as one big hole.

Slide 52:

Checkerboarding

Slide 53:

Checkerboarding is the pattern of used and unused memory locations that results from processes entering and exiting memory.

Slide 54:

MVP Figure: memory snapshots over time, showing how MVP allows multiple programs to reside in memory with higher memory utilization.
t=1:  OS | P1=25K | P2=50K | Free 80K
t=5:  OS | P1=25K | P2=50K | P3=60K | Free 20K
t=11: OS | P1=25K | Free 50K | P3=60K | Free 20K
t=n:  OS | P6=15K | Free 25K | P9=20K | Free 25K | P7=30K | Free 40K

Slide 55:

Buddy System

Slide 56:

Buddy System: the MVP and MFP schemes each have their own shortcomings. MFP is prone to memory wastage arising from wasteful job-partition matches.

Slide 57:

Although MVP is free from internal fragmentation, its drawback arises from the difficulty of managing dynamic partitions and the overhead cost brought about by compaction.

Slide 58:

A memory management technique which settles this dilemma is known as the buddy system. The memory allocated to a job is the smallest power-of-two block that is large enough to hold it.
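
A minimal sketch of that sizing rule: round the request up to the smallest power of two that can hold it. The sample requests are the ones used in the figure on slide 64.

```c
/* Buddy system sizing sketch: the block handed to a job is the smallest
 * power of two at least as large as the request. */
#include <stdio.h>

static unsigned buddy_block(unsigned request) {
    unsigned block = 1;
    while (block < request)
        block <<= 1;                    /* double until the job fits */
    return block;
}

int main(void) {
    printf("150K request -> %uK block\n", buddy_block(150));  /* 256K */
    printf("100K request -> %uK block\n", buddy_block(100));  /* 128K */
    printf(" 50K request -> %uK block\n", buddy_block(50));   /*  64K */
    return 0;
}
```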

Slide 59:

Virtual Memory

Slide 60:

Virtual Memory: bringing alternating parts of processes into a limited memory space facilitated the sharing-of-memory concept, which became the origin of virtual memory.

Slide 61:

Virtual memory’s main idea is to allow a combination of program, stack, and data to run in a smaller memory by carefully choosing which part of the process is placed in physical memory.

Slide 62:

Job table (AT = arrival time, MR = memory requirement, BT = burst time):
Job 1: AT=0, MR=7K, BT=5
Job 2: AT=1, MR=3K, BT=10
Job 3: AT=2, MR=6K, BT=3
Job 4: AT=3, MR=6K, BT=3
Memory snapshots (22K user space): initially OS | 22K free. T=0: OS | J1=7K | 15K free. T=1: OS | J1=7K | J2=3K | 12K free.

Slide 63:

Same job table; memory snapshots continued. T=2: OS | J1=7K | J2=3K | J3=6K | 6K free. T=3: OS | J1=7K | J2=3K | J3=6K | J4=6K (user space full).

Slide 64:

Buddy system example (1 MB memory):
Start: 1 MB unused
Request A, 150K: A=256K | 256K | 512K
Request B, 100K: A=256K | B=128K | 128K | 512K
Request C, 50K:  A=256K | B=128K | C=64K | 64K | 512K
Release B:       A=256K | 128K | C=64K | 64K | 512K
Request D, 200K: A=256K | 128K | C=64K | 64K | D=256K | 256K
Release C:       A=256K | 256K | D=256K | 256K

Slide 65:

CONCEPT OF LOCALITY, PAGING, SEGMENTATION, PAGE REPLACEMENT ALGORITHM, PAGING WITH SEGMENTATION. RANISSA JANE ♥

Slide 66:

CONCEPT OF LOCALITY Closely linked to virtual memory. The principle of locality states that program and data references within a process tend to cluster.

Slide 67:

CONCEPT OF LOCALITY Clustering provides the system a good chance of hitting or accessing the needed page accurately and immediately.

Slide 68:

CONCEPT OF LOCALITY This principle is important since only some parts of the process are kept in memory while the rest remain on the disk.

Slide 69:

CONCEPT OF LOCALITY There are two types of locality, namely temporal locality and spatial locality.

Slide 70:

CONCEPT OF LOCALITY Temporal locality simply suggests that if a particular part was accessed before then it would most likely be accessed again.

Slide 71:

CONCEPT OF LOCALITY Spatial locality suggests that locations near a region that was accessed are candidates for access in the near future.

Slide 72:

PAGING… Another memory management technique commonly associated with virtual memory.

Slide 73:

PAGING… The idea is to divide the memory into frames where program pages can reside.

Slide 74:

PAGING… The programs, on the other hand, are divided into equal-sized pages.

Slide 75:

PAGING… In this technique, the logical address is divided into a page number and a page offset.

Slide 76:

PAGING… The former identifies the page where a particular piece of data is found, while the latter gives the displacement of the data within the corresponding frame in physical memory.

Slide 77:

PAGING… The mapping of a page to a frame is done through the use of a page table. It contains both the page number and the frame number.

Slide 78:

PAGING…

Slide 79:

PAGING FORMULA… Let: P = page size, U = logical address, V = physical address, p = page number, f = frame number, d = displacement (offset). Then: p = U div P; d = U mod P; U = p * P + d; V = f * P + d.
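
A minimal sketch of the formula above: split a logical address into page number and displacement, look up the frame in a page table (the table contents and page size here are made up), and form the physical address.

```c
/* Paging translation sketch using the formulas above (page table is made up). */
#include <stdio.h>

#define PAGE_SIZE 1024                          /* P */

static const int page_table[] = {5, 2, 7, 0};   /* page number -> frame number */

static unsigned translate(unsigned U) {
    unsigned p = U / PAGE_SIZE;                 /* p = U div P */
    unsigned d = U % PAGE_SIZE;                 /* d = U mod P */
    unsigned f = page_table[p];                 /* frame holding page p */
    return f * PAGE_SIZE + d;                   /* V = f * P + d */
}

int main(void) {
    unsigned U = 2 * PAGE_SIZE + 100;           /* page 2, offset 100 */
    printf("logical %u -> physical %u\n", U, translate(U));  /* frame 7 -> 7268 */
    return 0;
}
```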

Slide 80:

SEGMENTATION.. a technique similar to paging.

Slide 81:

SEGMENTATION.. this memory management scheme looks at programs as segments and not pages.

Slide 82:

SEGMENTATION.. Each segment has a known length and name.

Slide 83:

SEGMENTATION.. There is a great difference, since pages, like the pages of a book, are of equal size, whereas segments, like the line segments of geometry, vary in size.

Slide 84:

SEGMENTATION.. Due to the varying sizes of the segments, limit and base registers are used to determine the boundaries of the programs.

Slide 85:

SEGMENTATION.. The segment table, similar to the page table, is used to hold the base and limit values essential in mapping a logical address to physical memory.
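
A hedged sketch of that lookup, with a made-up segment table: the offset is checked against the segment's limit before being added to its base; anything beyond the limit is an addressing error.

```c
/* Segmentation translation sketch (segment table values are made up). */
#include <stdio.h>

struct segment { unsigned base, limit; };

static const struct segment seg_table[] = {
    {0x4000, 0x0800},                   /* segment 0: base address, length */
    {0x9000, 0x0400},                   /* segment 1 */
};

static int translate(unsigned s, unsigned offset, unsigned *physical) {
    if (offset >= seg_table[s].limit)
        return -1;                      /* addressing error: beyond segment bounds */
    *physical = seg_table[s].base + offset;
    return 0;
}

int main(void) {
    unsigned phys;
    if (translate(1, 0x0100, &phys) == 0)
        printf("segment 1, offset 0x100 -> physical 0x%04X\n", phys);
    if (translate(1, 0x0500, &phys) != 0)
        printf("segment 1, offset 0x500 -> trap: limit exceeded\n");
    return 0;
}
```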

Slide 86:

PAGING WITH SEGMENTATION.. This hybrid of paging and segmentation combines the efficiency of paging with the security and sharing capabilities of segmentation.

Slide 87:

PAGE REPLACEMENT ALGORITHM When an executing process references a page which is currently not in memory, a page fault occurs.

Slide 88:

PAGE REPLACEMENT ALGORITHM In order for the process to continue with its execution, the requested page must be fetched from the disk.

Slide 89:

PAGE REPLACEMENT ALGORITHM The process of fetching a page from secondary storage has two categories, namely pre-paging and demand paging.

Slide 90:

Replacement Algorithms: OPT REPLACEMENT, FIRST IN FIRST OUT, LEAST RECENTLY USED, NOT RECENTLY USED. MICHIKO ♥

Slide 91:

OPTIMAL PAGE (OPT) REPLACEMENT ALGORITHM Similar to the Shortest Job First CPU scheduling algorithm; in order for this algorithm to work, it requires knowledge of the future need for each page.

Slide 92:

OPTIMAL PAGE (OPT) REPLACEMENT ALGORITHM In practice, when a page fault occurs the OS has no basis for determining when each page will next be required.

Slide 93:

OPTIMAL PAGE (OPT) REPLACEMENT ALGORITHM Initially, all the frames are empty.

Slide 94:

OPTIMAL PAGE (OPT) REPLACEMENT ALGORITHM When the program requests a page while the frames are empty, the first page fault occurs.

Slide 95:

OPTIMAL PAGE (OPT) REPLACEMENT ALGORITHM Figure 7.7 OPT page replacement, N = 3 frames (initially empty), reference string: 4 3 2 1 4 3 5 4 3 2 1 5. Let #pf be the number of page faults and n the total number of page references: PF% = #pf / n = 7/12 = 58%.

Slide 96:

FIRST-IN-FIRST-OUT (FIFO) PAGE REPLACEMENT ALGORITHM Similar to the First Come First Serve CPU scheduling algorithm: requested pages are loaded in order of arrival, and the page that has been in memory the longest is replaced first.

Slide 97:

FIRST-IN-FIRST-OUT (FIFO) PAGE REPLACEMENT ALGORITHM Figure 7.8 FIFO page replacement, N = 3 frames, reference string: 4 3 2 1 4 3 5 4 3 2 1 5; PF% = 9/12 = 75%.

Slide 98:

FIRST-IN-FIRST-OUT (FIFO) PAGE REPLACEMENT ALGORITHM With FIFO, adding more frames can actually cause more page faults to occur. This strange phenomenon is popularly known as Belady's Anomaly.

Slide 99:

FIRST-IN-FIRST-OUT (FIFO) PAGE REPLACEMENT ALGORITHM Figure 7.9 FIFO page replacement, N = 4 frames, same reference string; PF% = 10/12 = 83%.
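
A hedged simulation of FIFO on the reference string from Figures 7.8 and 7.9; running it with 3 and then 4 frames yields 9 and 10 page faults, reproducing Belady's Anomaly.

```c
/* FIFO page replacement simulation for the reference string used in the
 * figures. The oldest resident page (tracked by a circular index) is evicted. */
#include <stdio.h>

static int simulate_fifo(const int ref[], int n, int nframes) {
    int frame[8], next = 0, used = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frame[j] == ref[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < nframes) frame[used++] = ref[i];                  /* free frame available */
        else { frame[next] = ref[i]; next = (next + 1) % nframes; }  /* evict the oldest page */
    }
    return faults;
}

int main(void) {
    int ref[] = {4, 3, 2, 1, 4, 3, 5, 4, 3, 2, 1, 5};
    printf("FIFO, 3 frames: %d faults of 12\n", simulate_fifo(ref, 12, 3));  /* 9 */
    printf("FIFO, 4 frames: %d faults of 12\n", simulate_fifo(ref, 12, 4));  /* 10 */
    return 0;
}
```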

Slide 100:

LEAST RECENTLY USED (LRU) PAGE REPLACEMENT ALGORITHM Rests on the concept of temporal locality: if a page has not been referenced for quite some time, then it is not likely to be referenced in the near future and is therefore the one replaced.

Slide 101:

LEAST RECENTLY USED (LRU) PAGE REPLACEMENT ALGORITHM One nice feature of LRU is that it never exhibits Belady's Anomaly.

Slide 102:

LEAST RECENTLY USED (LRU) PAGE REPLACEMENT ALGORITHM Figure 7.10 LRU page replacement, N = 3 frames, reference string: 4 3 2 1 4 3 5 4 3 2 1 5; PF% = 10/12 = 83%.

Slide 103:

LEAST RECENTLY USED (LRU) PAGE REPLACEMENT ALGORITHM Notice that in Figure 7.11 the increase in frames has decreased the occurrence of page faults.

Slide 104:

LEAST RECENTLY USED (LRU) PAGE REPLACEMENT ALGORITHM Figure 7.11 LRU page replacement, N = 4 frames, same reference string; PF% = 8/12 = 67%.
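
A matching LRU simulation on the same reference string; with 3 frames it gives 10 faults and with 4 frames 8 faults, in line with Figures 7.10 and 7.11: more frames, fewer faults.

```c
/* LRU page replacement simulation: each resident page carries the time of
 * its last use, and the least recently used one is evicted on a fault. */
#include <stdio.h>

static int simulate_lru(const int ref[], int n, int nframes) {
    int page[8], last_use[8], used = 0, faults = 0;
    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int j = 0; j < used; j++)
            if (page[j] == ref[t]) { hit = j; break; }
        if (hit >= 0) { last_use[hit] = t; continue; }    /* refresh recency on a hit */
        faults++;
        if (used < nframes) { page[used] = ref[t]; last_use[used++] = t; }
        else {
            int lru = 0;
            for (int j = 1; j < nframes; j++)
                if (last_use[j] < last_use[lru]) lru = j; /* least recently used frame */
            page[lru] = ref[t];
            last_use[lru] = t;
        }
    }
    return faults;
}

int main(void) {
    int ref[] = {4, 3, 2, 1, 4, 3, 5, 4, 3, 2, 1, 5};
    printf("LRU, 3 frames: %d faults of 12\n", simulate_lru(ref, 12, 3));  /* 10 */
    printf("LRU, 4 frames: %d faults of 12\n", simulate_lru(ref, 12, 4));  /* 8 */
    return 0;
}
```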

Slide 105:

NOT RECENTLY USED (NRU) PAGE REPLACEMENT ALGORITHM Implemented using the status bits R and M. R stands for referenced and indicates whether a page has been referenced while in memory.

Slide 106:

NOT RECENTLY USED (NRU) PAGE REPLACEMENT ALGORITHM M stands for modified and indicates whether a page in memory has been updated or not. These bits serve as the basis for the NRU implementation.

Slide 107:

NOT RECENTLY USED (NRU) PAGE REPLACEMENT ALGORITHM When a page fault occurs, all of the pages in memory are inspected in terms of their R and M bits, and a victim page is selected at random from the lowest-numbered non-empty class of the four classifications derived from the bit inspection.

Slide 108:

NOT RECENTLY USED (NRU) PAGE REPLACEMENT ALGORITHM
Category 1: not referenced, not modified (0, 0)
Category 2: not referenced, modified (0, 1)
Category 3: referenced, not modified (1, 0)
Category 4: referenced, modified (1, 1)
Class 4 means that the page was accessed by a program and was updated in the process. Class 3 was similarly accessed but was not changed. Class 2, modified but not referenced, may seem contradictory; it actually originates from a Class 4 page whose R bit was cleared by a clock interrupt during a cycle. Class 1 simply means that the page was neither modified nor referenced recently. Candidates for swap-out are ranked according to their class, with Class 1 as the first priority for eviction.
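
A small sketch of the victim selection just described, with a made-up page list and bits: each page is classified from its (R, M) bits and a page from the lowest occupied class is chosen (here the first one found rather than a random one, for brevity).

```c
/* NRU victim selection sketch (page list and R/M bits are made up). */
#include <stdio.h>

struct page { int id; int r; int m; };            /* referenced and modified bits */

static int nru_class(const struct page *p) {
    return 1 + p->r * 2 + p->m;                   /* (0,0)->1 (0,1)->2 (1,0)->3 (1,1)->4 */
}

int main(void) {
    struct page pages[] = { {10, 1, 1}, {11, 1, 0}, {12, 0, 1}, {13, 1, 1} };
    int victim = 0;
    for (int i = 1; i < 4; i++)
        if (nru_class(&pages[i]) < nru_class(&pages[victim]))
            victim = i;                           /* prefer the lowest class */
    printf("evict page %d (class %d)\n", pages[victim].id, nru_class(&pages[victim]));
    return 0;
}
```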
