Lecture 8 CSE325-OS-Memory 16-18



Presentation Transcript

slide 1:

Operating Systems: Memory Management. Dr. Shamim Akhter

slide 2:

CPU Utilization
• CPU utilization = 1 - p^n
– n = number of processes in memory
– p = fraction of time a process spends blocked waiting for I/O
– p^n = probability that all n processes are waiting for I/O at the same time
• If processes wait for I/O 80% of the time (p = 0.8):
– CPU utilization = 1 - 0.8^1 = 20% with 1 process in memory
– CPU utilization = 1 - 0.8^3 ≈ 49% with 3 processes in memory
– CPU utilization = 1 - 0.8^10 ≈ 89% with 10 processes in memory
Note: in reality, processes do not all spend the same fraction of time waiting for I/O.
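The slide's multiprogramming model can be checked with a minimal sketch (the function name is ours, not from the slides):

```python
# A minimal sketch of the CPU-utilization model from the slide:
# utilization = 1 - p**n, where p is the fraction of time a process
# waits for I/O and n is the number of processes in memory.

def cpu_utilization(p: float, n: int) -> float:
    """Probability that at least one of n processes is not blocked on I/O."""
    return 1 - p ** n

# Reproducing the slide's numbers for p = 0.8:
for n in (1, 3, 10):
    print(f"{n:2d} processes: {cpu_utilization(0.8, n):.0%}")
```

With p = 0.8 this prints 20%, 49%, and 89%, matching the slide.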

slide 3:

Multiprogramming
• Multiple programs (> 1) loaded at the same time
– into different areas of memory
• The processor needs to context switch
• Programs use memory addresses / references
• Proper addressing is required
– no matter where the program is located in memory

slide 4:

Memory References in a Program
• C code: A[12] = h + A[12]
• Assembly code: lw $t0, 48($t1); add $t0, $s2, $t0; sw $t0, 48($t1)
• The memory unit sees only a stream of memory addresses; it does not know how they are generated. Addresses may come from:
– the instruction counter (PC)
– indexing (a memory index)
– indirection (accessing a variable through the use of a pointer)

slide 5:

Memory Management
• Subdividing memory
– to accommodate multiple processes
• Memory needs to be allocated
– to ensure a reasonable supply of ready processes to consume available processor time

slide 6:

Memory Management Requirements
• Relocation
• Protection
• Logical organization
• Physical organization

slide 7:

Memory Management Requirements
• Relocation
– The programmer does not know where the program will be placed in memory when it is executed
– While the program is executing, it may be swapped to disk and returned to main memory at a different location (relocated)
– Memory references in the code must be translated to actual physical memory addresses

slide 8:

Addressing Requirement: the processor hardware and operating system software must be able to translate the memory references found in the code of the program into actual physical memory addresses, reflecting the program's current memory location.

slide 9:

• User programs go through several steps before being executed

slide 10:

Steps for Loading a Process in Memory
• System library: static linking or dynamic linking
• Linking: combine all the object modules of a program into a binary program image
• Linking loader

slide 11:

The Linking Function
• Object modules: Module A (length L) contains CALL B; Module B (length M) contains CALL C; Module C (length N)
• In the load module, the modules are placed contiguously: A at addresses 0..L-1, B at L..L+M-1, C at L+M..L+M+N-1
• The symbolic calls are resolved into addresses: CALL B becomes JSR "L", and CALL C becomes JSR "L+M"

slide 12:

Static Linking and Loading
• Printf.c is compiled by gcc into Printf.o, which ar places into a static library
• HelloWorld.c is compiled by gcc into HelloWorld.o
• The linker combines HelloWorld.o with the static library into a.out (or the name of your command)
• The loader brings the executable into memory

slide 13:

Dynamic Linking
• The linking of some external modules is done after the creation of the load module (executable file)
– Windows: external modules are .DLL files
– Unix: external modules are .SO files (shared libraries)
• The load module contains references to external modules, which are resolved either at:
– loading time (load-time dynamic linking)
– run time, when a call is made to a procedure defined in the external module (run-time dynamic linking)
• The OS finds the external module and links it to the load module
– It first checks whether the external module has already been loaded into memory

slide 14:

Advantages of Dynamic Linking
• The external module is often an OS utility. Executable files can use another version of the external module without needing to be modified
• Code sharing: the same external module needs to be loaded into main memory only once; each process is linked to the same external module
– Saves memory and disk space

slide 15:

Address Binding
• Mapping from one address space to another
• Address representation:
– Source program: symbolic addresses, such as the variable count
– After compiling: relocatable addresses, e.g. "14 bytes from the beginning of this module"
– After linkage editing/loading, or run-time binding: absolute addresses, e.g. physical memory address 2014

slide 16:

Address Binding (Cont.)
• Address binding of instructions and data to physical memory addresses can happen at three different stages
– Compile time: if the memory location is known a priori, absolute code can be generated
• Must recompile the code if the starting location changes
• Example: MS-DOS .COM-format programs

slide 17:

Binding at Compile Time
• The compiler maps symbolic addresses in the source (JUMP x, LOAD y) to relocatable addresses within the code and data segments, each starting at base address 0
• Unresolved references are recorded in an unsolved-address table listing the source location, the target, and which segment (code or data) the target belongs to
• Once the bases are fixed, the CPU generates the absolute addresses

slide 18:

Binding at Compile Time (Cont.): Address Generation with Static Linking
• Static linking assigns the code segment a base address of 1000 and the data segment a base address of 500
• Every entry in the unsolved-address table is adjusted by its segment's base: the code entries i, j, k become i+1000, j+1000, k+1000, and the data entry y becomes y+500

slide 19:

Address Binding (Cont.)
• Address binding of instructions and data to physical memory addresses can happen at three different stages
– Load time: must generate relocatable code if the memory location is not known at compile time
• The physical memory address is fixed at load time
• Must reload if the starting location changes

slide 20:

Binding at Load Time
• Code and data are loaded into memory at addresses 10000 and 20000, respectively
• Every unresolved address must be adjusted by its segment's load address: a code reference at offset 1000+j becomes 10000+1000+j = 11000+j, and a data reference at offset 500+y becomes 20000+500+y = 20500+y
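The loader's adjustment step can be sketched as follows (the table layout and function name are our own illustration, not from the slides):

```python
# Hypothetical sketch of load-time address binding: the loader walks the
# unresolved-address table and adds each segment's load address, using the
# slide's numbers (code loaded at 10000, data at 20000).

SEGMENT_BASE = {"code": 10000, "data": 20000}

def resolve(offset: int, segment: str) -> int:
    """Turn a segment-relative offset into an absolute memory address."""
    return SEGMENT_BASE[segment] + offset

# From the slide: a code reference at offset 1000 lands at 11000,
# and a data reference at offset 500 lands at 20500.
print(resolve(1000, "code"))  # 11000
print(resolve(500, "data"))   # 20500
```

If the starting locations change, the whole table must be recomputed and the process reloaded, which is exactly why execution-time binding (next slides) is preferred.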

slide 21:

Address Binding (Cont.)
• Execution time: binding is delayed until run time
– The process can be moved during its execution from one memory segment to another
– The CPU generates relative (virtual) addresses against relocatable addresses (e.g. JUMP 400, LOAD 1200 in an address space of 0..MAX = 2000)
– Needs hardware support for address maps, e.g. base and limit registers
– Most general-purpose OSs use this method (swapping, paging, segmentation)

slide 22:

Need: a dynamic relocation scheme with an automated hardware system.

slide 23:

MMU (Memory-Management Unit)
• CPU → MMU → Memory (R/W); the MMU is a hardware device
• Logical address: generated by the CPU, also referred to as a virtual address
– an integer value relative to the starting point of the program
• Physical address: the address seen by the memory unit
– obtained by adding the logical address to the program's starting address in main memory
• With compile-time and load-time address binding, logical and physical addresses are identical; with execution-time binding they differ. Why? Answer: virtual memory.

slide 24:

Dynamic Relocation Using a Relocation Register
• The CPU generates logical addresses in 0..MAX; the memory unit sees physical addresses in 14000..14000+MAX
• The MMU translates a logical address (LA) to a physical address (PA) by adding the relocation register (here 14000)
• Binding happens at execution time, when the reference is made

slide 25:

Memory Protection
o A test against a limit can be added.
o The limit is the total amount of memory allocated to the process.
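Putting the two slides together, relocation plus protection can be sketched like this (the limit value of 3000 is a hypothetical example, not from the slides):

```python
# A sketch of MMU translation with protection, assuming a base (relocation)
# register and a limit register as described on the slides. The limit is
# the total amount of memory allocated to the process.

class ProtectionFault(Exception):
    """Raised when a logical address falls outside the process's allocation."""

def translate(logical: int, base: int, limit: int) -> int:
    """Map a CPU-generated logical address to a physical address."""
    if not 0 <= logical < limit:
        raise ProtectionFault(f"address {logical} outside 0..{limit - 1}")
    return base + logical  # relocation: add the base register

# Slide 24's relocation register value of 14000, with an assumed limit:
print(translate(346, base=14000, limit=3000))  # 14346
```

Any out-of-range reference raises the fault instead of silently touching another process's memory, which is the protection test the slide describes.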

slide 26:

Relocation
• Base register
• Bounds register
These values are set when the process is loaded or when the process is swapped in.

slide 27:

• So far: during execution, a process must be in physical memory, and process size is limited to memory size.
• What if the process is larger than the allocated memory size?

slide 28:

Solution 1: Overlays and Dynamic Loading
• Keep in memory only those instructions and data that are needed at any given time
• The resident code must remain in memory during execution; overlays (Overlay 1, Overlay 2, Overlay 3), each with its own code and data, are brought into the same region as needed
• The program is written using overlays to overcome memory limitations
• Disadvantages:
– programmers must design their programs to maintain the overlay structure
– this requires complete knowledge of the program's structure, its code, and its data structures

slide 29:

Overlays (Cont.)
• Example: an assembler requires 200K (Pass 1: 70K, Pass 2: 80K, symbol table: 20K, common routines: 30K), but total memory available is 150K
• With overlays, Pass 1 and Pass 2 need not be in memory at the same time: one overlay holds Pass 1 + symbol table + common routines (120K), the other holds Pass 2 + symbol table + common routines (130K)
• Overlays do not require any special support from the OS, but special linking and relocation systems are needed

slide 30:

Solution 2: Swapping
• A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution
• Example: in RR CPU scheduling, a process whose quantum expires can be swapped out (cf. LRU page replacement); in priority-based scheduling this is called roll out, roll in
• Other solutions: paging, segmentation, virtual memory (explained later)
• Normally a swapped-out process is swapped back into the same memory region:
– if address binding is done at assembly/load time, the process cannot be moved to a different location
– if address binding is done at execution time, the process can be swapped into a different location

slide 31:

Main memory holds the OS and various user processes. We need an efficient way to allocate processes into memory. ALLOCATION

slide 32:

Contiguous Memory Allocation
• A process requests to execute: if there is contiguous free space in memory, it is allocated and executes; otherwise it is added to a queue and waits
• Layout: the OS (with the interrupt vector table) resides in lower memory; user space occupies higher memory
• Schemes: a single partition, or multiple partitions (fixed or dynamic)

slide 33:

Multiple Partitions Fixed Partitioning

slide 34:

Fixed Partitioning

slide 35:

Fixed Partitioning

slide 36:

Example: Multiple Partitions (Dynamic Partitions)
• Memory: 2560K total; the OS occupies 0..400K, leaving one hole of 2160K
• Job queue (FCFS scheduling algorithm):
Process  Memory  Burst Time
P1       600K    10
P2       1000K   5
P3       300K    20
P4       700K    8
P5       500K    15

slide 37:

Memory snapshots (FCFS long-term scheduling, RR with quantum 1 for short-term scheduling):
(a) P1 (400K..1000K), P2 (1000K..2000K), and P3 (2000K..2300K) are allocated; P4 and P5 wait on disk
(b) After 14 min, P2 terminates, leaving a hole at 1000K..2000K
(c) P4 (700K) is allocated at 1000K..1700K
(d) After 28 min, P1 terminates, leaving a hole at 400K..1000K
(e) P5 (500K) is allocated at 400K..900K, leaving holes at 900K..1000K, 1700K..2000K, and 2300K..2560K

slide 38:

Effects of Dynamic Partitioning
• Must use compaction to shift processes so they are contiguous and all free memory is in one block
• During compaction, neighboring holes are merged

slide 39:

When more than one hole can satisfy a request, which one should be chosen?

slide 40:

Dynamic Partitioning
• The OS must decide which free block to allocate
• Best-fit algorithm
– Chooses the block that is closest in size to the request
– Worst performer overall
– Since the smallest adequate block is found for the process, the smallest-sized fragment is left over
– Memory compaction must be done more often

slide 41:

Dynamic Partitioning
• First-fit algorithm
– Scans memory from the beginning (or from the location of the previous first fit) and chooses the first available block that is large enough
– Fastest
– May leave many processes loaded at the front end of memory that must be scanned over when trying to find a free block

slide 42:

Dynamic Partitioning
• Worst-fit
– Scans memory for the largest hole
– Produces the largest leftover hole
• Simulations show: first fit and best fit are better than worst fit in both time and storage utilization
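The three placement algorithms above can be sketched side by side. This is a minimal illustration, assuming free memory is kept as a list of (start, size) holes; the data and function names are ours:

```python
# Placement algorithms for dynamic partitioning. Each returns the start
# address of the chosen hole, or None if no hole is large enough.

def first_fit(holes, size):
    """Choose the first hole that is large enough."""
    for start, hole in holes:
        if hole >= size:
            return start
    return None

def best_fit(holes, size):
    """Choose the smallest hole that still fits (closest in size)."""
    fits = [(hole, start) for start, hole in holes if hole >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Choose the largest hole, producing the largest leftover hole."""
    fits = [(hole, start) for start, hole in holes if hole >= size]
    return max(fits)[1] if fits else None

holes = [(0, 300), (500, 600), (1400, 350)]  # hypothetical free list
print(first_fit(holes, 320))  # 500: first hole big enough
print(best_fit(holes, 320))   # 1400: smallest hole that fits
print(worst_fit(holes, 320))  # 500: the largest hole
```

The example shows why best fit leaves the smallest fragment (30 here) and worst fit the largest (280), which is the trade-off the slides describe.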

slide 43:

Allocation Schemes Worst fit

slide 44:

As processes are loaded into and removed from memory, the free space is broken into little pieces. FRAGMENTATION

slide 45:

Fragmentation
• Internal fragmentation: wasted space inside a partition
– occurs with fixed-size partitions
– inefficient memory utilization: a small program occupies an entire partition
• External fragmentation: wasted space between two processes
– occurs with variable-size partitions
– in the worst case, there is a block of free memory between every two processes
– Solution: compaction
• If relocation is static and done at assembly/load time, compaction cannot be done; if relocation is dynamic and done at execution time, compaction is possible

slide 46:

External Fragmentation: compaction is costly, and selecting an optimal compaction strategy is difficult. Solution: the PAGING SCHEME.

slide 48:

Solution 3: Paging
• A solution for the external fragmentation problem: it reduces wasted memory space
• Permits the logical address space of a process to be non-contiguous

slide 49:

Paging Schemes
• Physical memory is broken into fixed-sized blocks called frames
• Logical memory is also broken into blocks of the same size, called pages
– The page size is defined by the hardware, typically 512 to 8192 bytes (a power of 2)
– getconf PAGE_SIZE is the UNIX command to get the page size
• The operating system maintains a page table for each process

slide 50:

Paging as dynamic relocation: the page table acts like a set of base (relocation) registers, one for each frame, mapping logical memory onto physical frames.

slide 51:

Process and Frames

slide 52:

Process and Frames

slide 53:

Shared Pages
• 40 users execute a text editor with 150 KB of code (reentrant) and 50 KB of data space each
• Without sharing: 40 x (150 + 50) = 8000 KB
• With shared pages, the reentrant code is kept in memory only once: 150 + 40 x 50 = 2150 KB
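The savings from sharing can be verified with a quick calculation using the slide's numbers:

```python
# Checking the slide's shared-pages arithmetic: 40 users, each needing
# 150 KB of (reentrant) editor code and 50 KB of private data.

USERS, CODE_KB, DATA_KB = 40, 150, 50

without_sharing = USERS * (CODE_KB + DATA_KB)  # every user has a full copy
with_sharing = CODE_KB + USERS * DATA_KB       # one shared copy of the code

print(without_sharing)  # 8000 KB
print(with_sharing)     # 2150 KB
```

Sharing works precisely because the code is reentrant (never modified at run time), so a single set of code frames can appear in every user's page table.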

slide 54:

Logical Addresses

slide 55:

Logical Address to Physical Address: Direct Mapping
• Page size = frame size = 4 bytes; a logical address splits into a page number p and an offset d (the displacement within the page)
• Example: bytes A, B, C, D occupy logical addresses 4..7 (page 1, offsets 0..3); the page table maps page 1 to frame 3, so they appear at physical addresses 12..15
• For logical address 7: p = 7 / 4 = 1, d = 7 - p x 4 = 3; the frame is f = 3, so the physical address is 3 x 4 + 3 = 15
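The direct-mapped translation above can be sketched in a few lines. Only the page-1 to frame-3 entry is taken from the slide; the function name is ours:

```python
# Direct-mapped paging translation for the slide's example: page size 4,
# so logical address la splits into p = la // 4 and d = la % 4; the page
# table maps p to a frame f, and the physical address is f * 4 + d.

PAGE_SIZE = 4
page_table = {1: 3}  # the slide's page-1 -> frame-3 entry

def page_translate(la: int) -> int:
    p, d = divmod(la, PAGE_SIZE)          # page number, page offset
    return page_table[p] * PAGE_SIZE + d  # frame base + offset

# Bytes A..D at logical addresses 4..7 land at physical 12..15:
print([page_translate(la) for la in (4, 5, 6, 7)])  # [12, 13, 14, 15]
```

Because the page size is a power of 2, the split into p and d is just a bit shift and mask in hardware, which is what makes this translation cheap.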

slide 56:

Logical Address Structure
• A logical address has m bits: the high m-n bits are the page number p, the low n bits are the page offset d
• The logical address space holds 2^m addresses, divided into 2^(m-n) pages of 2^n bytes each; page k spans addresses k x 2^n .. (k+1) x 2^n - 1

slide 57:


slide 58:

In-Class Exercise

slide 59:

Paging with a TLB (Translation Lookaside Buffer)
• Direct mapping uses the PTBR (page-table base register) and requires two memory accesses per reference: one for the page-table entry (page to frame) and one for the actual memory address, so memory access is slower by a factor of 2
• The TLB is a faster cache memory representing only a subset of the page-table contents; its tags are searched simultaneously (associative mapping), adding only about 10% to the access time on a hit
• On a TLB miss, the page table itself must be consulted
• ASIDs (address-space identifiers / process identifiers) let the TLB hold entries for more than one process

slide 60:

Effective Memory Access Time
• Effective access time = TLB hit ratio x (TLB access time + memory access time) + TLB miss ratio x (TLB access time + page-table access time + memory access time)
• With memory access time = 100 ns and TLB access time = 20 ns:
– 80% hit ratio: 0.8 x (20+100) + 0.2 x (20+100+100) = 140 ns (40% slower)
– 98% hit ratio: 0.98 x (20+100) + 0.02 x (20+100+100) = 122 ns (22% slower)
– 100% hit ratio: 1.0 x (20+100) = 120 ns (20% slower)
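The formula is easy to tabulate; a minimal sketch using the slide's 20 ns TLB and 100 ns memory times:

```python
# Effective access time (EAT) with a TLB, per the slide's formula.
# A hit costs one TLB lookup plus one memory access; a miss adds a
# page-table access (a second memory access).

TLB, MEM = 20, 100  # nanoseconds

def effective_access_time(hit_ratio: float) -> float:
    hit = TLB + MEM         # TLB hit: translation found in the TLB
    miss = TLB + MEM + MEM  # TLB miss: page-table walk + data access
    return hit_ratio * hit + (1 - hit_ratio) * miss

for h in (0.80, 0.98, 1.00):
    print(f"hit ratio {h:.0%}: {effective_access_time(h):.0f} ns")
```

This reproduces the slide's 140, 122, and 120 ns figures, and makes clear that even a perfect TLB still pays its own 20 ns lookup on every access.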

slide 61:

Internal Fragmentation. Solution: the SEGMENTATION SCHEME.

slide 62:

Solution 4: Segmentation
• The segments of a program do not all have to be the same length
• There is a maximum segment length
• An address consists of two parts: a segment number and an offset
• Since segments are not of equal size, segmentation is similar to dynamic partitioning

slide 63:

Logical Addresses

slide 64:

Example of Segmentation. Example references: byte 53 of segment 2; byte 1222 of segment 0.

slide 65:

Segmentation Hardware

slide 66:


slide 67:

Segmentation with Two-Level Paging: LDT (Local Descriptor Table) / GDT (Global Descriptor Table)
• A segment selector has a 13-bit segment number, 1 bit selecting G/L (GDT or LDT), and 2 bits for protection (privilege rings)
• Maximum 2^13 = 8K segments each: 8K for the LDT and 8K for the GDT; each segment can be up to 4 GB
• Segmentation is combined with a two-level paging scheme: the address is split 10 + 10 + 12 bits, giving 4 KB pages
• A page-table entry carries an invalid bit; when the page is not in memory, the remaining 31 bits are used for disk addressing

slide 68:

Reading Materials
• Chapter 9: sections 9.1, 9.2, 9.3, 9.4.1-9.6
