UNIT - III Cache Memory




UNIT-III MEMORY ORGANIZATION: Memory device characteristics, Random access memories: semiconductor RAMs, Serial-access memories and their organization, Magnetic disk memories, Magnetic tape memories, Optical memories, Virtual memory, Main memory allocation, Interleaved memory, Cache memory, Associative memory.

Outline :

- Types of memory
- Characteristics of memory devices
- Memory organization

Objective :

- Basic memory circuits
- Organization of main memory
- Cache memory concept
- Virtual memory

Types of Memory devices:

- Semiconductor RAM memories
- Read-only memories

Semiconductor RAM Memories :

Random-access memory: each storage location can be accessed independently, with a fixed access time that is independent of the location accessed.
Two categories:
- Static RAM (SRAM)
- Dynamic RAM (DRAM)
SRAM stores data indefinitely as long as DC power is supplied; DRAM uses a capacitor, which needs periodic recharging (refreshing). Early semiconductor memories were expensive.






ROMs: Read-only memory

Mask ROM or ROM:

Mask ROM or ROM




EPROM: an isolated-gate (floating-gate) NMOSFET cell.
- UV-EPROM: erased with ultraviolet light
- EEPROM: floating gate with Si3N4 as the gate dielectric

Flash Memory:

Flash memory uses a stacked-gate MOS transistor.

Q & A:

Q: Which memories are nonvolatile? A: ROMs and Flash.
Q: Which memories use a one-transistor cell? A: Flash, DRAM, ROM, and EPROM.



Memory structure (chip):

(Figure: memory chip structures with capacities of 64 bits, 8 bytes, and 64 bytes.)

2D – Memory :

2D – Memory

3D - Memory:

3D - Memory

Memory device characteristics:

Cost and performance:
- Cost: c = C (total cost of the memory system) / S (storage capacity)
- Read access time (tA) and write access time: measured from the time the memory receives a read request to the time the requested information becomes available on the output lines
Other characteristics:
- DRO: destructive readout, requiring memory restoration after each read
- Data transfer rate (bandwidth, bM): the maximum amount of information transferred to or from the memory per unit time, in bits per second
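The cost and bandwidth definitions above can be sketched in a few lines of Python; the dollar and timing figures below are illustrative assumptions, not values from the slides.

```python
# Sketch of the cost and bandwidth definitions above.
# The concrete figures used below are illustrative, not from the slides.

def cost_per_bit(total_cost, capacity_bits):
    """c = C / S: total cost of the memory system over storage capacity."""
    return total_cost / capacity_bits

def bandwidth(bits_transferred, seconds):
    """bM: bits moved to or from memory per unit time (bits/sec)."""
    return bits_transferred / seconds

c = cost_per_bit(512.0, 2**30)     # a hypothetical $512 system holding 1 Gbit
bm = bandwidth(8 * 2**20, 0.01)    # 1 MiB transferred in 10 ms
```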

Speed, Size , Cost:

The fastest memory is SRAM, but it is expensive. The less costly alternative is DRAM, but for large volumes of data it too becomes costly. Huge-capacity, low-cost alternatives are:
- Magnetic disk
- Magnetic tape
- Optical disk

The Goal: Large, Fast, Cheap Memory !!!:

Fact: large memories are slow, and fast memories are small. How do we create a memory that is large, cheap, and fast (most of the time)?
- Hierarchy
- Parallelism

Memory hierarchy (speed and size):

Processor registers (in the datapath): speed about 1 ns, size 100s of bytes
On-chip cache (SRAM): about 10 ns, KBs
Second-level cache (SRAM): about 100 ns, KBs to MBs
Main memory (DRAM): about 100 ns to 100s of ns, MBs
Secondary storage (disk): 10,000,000 ns (10 ms), GBs
Tertiary storage (tape): 10,000,000,000 ns (10 s), TBs

Memory Hierarchy:

The total memory of a computer can be visualized as a hierarchy: all types of memory arranged from high capacity and slow speed to low capacity and high speed.


Example (redrawn)

Cache :

If the active portions of the program and data are placed in a fast, small memory, the average memory access time can be reduced, thus reducing the total execution time of the program. Such a fast, small memory is referred to as cache memory. The cache is the fastest component in the memory hierarchy and approaches the speed of the CPU.


Cache memory sits between the CPU and main memory, and the CPU accesses data through the cache.
- L1 cache is fabricated on the CPU chip (on-chip)
- L2 cache sits between main memory and the CPU (off-chip)


Principle of locality: the references to memory in any given time interval tend to be confined to a few local memory locations and are therefore predictable.
- Temporal locality: a recently referenced memory location is likely to be referenced again
- Spatial locality: a neighbor of a recently referenced memory location is likely to be referenced

Operation of cache:

Operation of cache When CPU needs to access a memory, cache is examined If memory location found than ok! Read (hit) If not then memory location is searched in main memory and block of Mem. Loc. Just contain referred mem. Loc. transferred into cache and then read by CPU (miss)

Measuring performance of cache:

- Cache access time (c): the time between the requested address arriving and the requested data being placed on the bus
- Main memory access time (m): the time it takes to transfer data from main memory
- Hit ratio (hr), where 0 ≤ hr ≤ 1: hr = cache hits / (cache hits + cache misses), i.e., hits divided by the total requests made by the CPU

Miss ratio and mean access time:


Miss ratio: mr = 1 - hr
Mean (average) access time: m.a.t. = c + (1 - hr)m
- If hr → 1, then m.a.t. = c
- If hr → 0, then m.a.t. = c + m
Efficiency of cache (%) = cache access time × 100 / mean access time


Example: for a CPU, the cache access time is 160 ns, the main memory access time is 960 ns, and the hit ratio is 0.90. Calculate the mean access time and the cache efficiency.
Solution:
m.a.t. = c + (1 - hr)m = 160 + (1 - 0.90) × 960 = 256 ns
Efficiency = c / m.a.t. = 160/256 = 62.5%
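The formulas above can be checked with a short Python sketch, using the same figures as the worked example (c = 160 ns, m = 960 ns, hr = 0.90):

```python
# Mean access time and cache efficiency, per the formulas above.

def mean_access_time(c, m, hr):
    """m.a.t. = c + (1 - hr) * m"""
    return c + (1 - hr) * m

def cache_efficiency(c, mat):
    """Efficiency (%) = cache access time * 100 / mean access time."""
    return c * 100 / mat

mat = mean_access_time(160, 960, 0.90)   # about 256 ns
eff = cache_efficiency(160, mat)         # about 62.5 %
```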

Cache design:

Size (1 MB, 2 MB, ...) and cost:
- More cache is more expensive
- More cache is faster, but only up to a point: checking the cache for data takes time
Mapping function: the transformation of data from main memory to cache memory is referred to as the mapping process.
Running example: main memory of 32K × 12-bit words, requiring a 15-bit address (2^15 = 32K), and a cache of 512 × 12-bit words, i.e., 512 (2^9) lines of one word each.

Mapping, replacement, and write policies:


Mapping methods:
- Associative
- Direct
- Set-associative
Replacement policy/algorithm:
- LRU (Least Recently Used)
- LFU (Least Frequently Used)
- FIFO (First In, First Out)
Write policy:
- Write-through
- Write-back

Cache mapping :

Associative mapping is the easiest and fastest mapping method. It uses associative memory, which can store both the address and the data.

Associative mapping operation:


The CPU places the 15-bit address in the argument register, and the cache is searched for a matching address. If a match is found, the stored data is read. If no match is found, main memory is accessed, and the address-data pair is transferred into the cache, replacing a previously stored address-data pair. Which pair should be replaced depends on the replacement policy adopted, for example FIFO.
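The lookup just described can be sketched in Python. This is a minimal model, not hardware: the 4-line capacity and the dictionary standing in for main memory are illustrative assumptions.

```python
from collections import deque

CACHE_LINES = 4  # illustrative capacity, not from the slides

def make_cache():
    # address -> data pairs, plus arrival order for FIFO replacement
    return {"store": {}, "order": deque()}

def read(cache, main_memory, addr):
    if addr in cache["store"]:                 # match found: hit
        return cache["store"][addr], True
    data = main_memory[addr]                   # no match: access main memory
    if len(cache["store"]) >= CACHE_LINES:     # cache full: evict oldest pair
        oldest = cache["order"].popleft()
        del cache["store"][oldest]
    cache["store"][addr] = data                # transfer the address-data pair
    cache["order"].append(addr)
    return data, False
```

Each `read` returns the data and a hit/miss flag, which makes it easy to count hits when experimenting with the hit-ratio formulas above.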

Disadvantage :

Associative memories are expensive.

Direct mapping :

With direct mapping, simple RAM memories can be used. In this scheme, the 15-bit main memory address is divided into two fields: an index field and a tag field. Here the index is 9 bits (2^9 = 512), used to access the cache, and the remaining 15 - 9 = 6 bits form the tag. In general, for an n-bit memory address, k bits access the cache and the remaining n - k bits form the tag.
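The 9/6 split can be sketched with a shift and a mask (Python, using the 15-bit example above):

```python
INDEX_BITS = 9    # 2**9 = 512 cache lines
TAG_BITS = 6      # the remaining 15 - 9 bits

def split_address(addr):
    """Split a 15-bit main-memory address into (tag, index)."""
    index = addr & ((1 << INDEX_BITS) - 1)   # low 9 bits select the cache line
    tag = addr >> INDEX_BITS                 # high 6 bits are stored as the tag
    return tag, index

tag, index = split_address(0b101010_111000111)
print(tag, index)   # prints "42 455"
```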

Direct mapping:

Direct mapping

Direct mapping operation:


When a new word is brought into the cache, the tag bits are stored along with the data. When the CPU generates an address, the index field is used to access the cache, and the tag field is then matched against the tag stored there. If both match, it is a hit; if not, it is a miss, and the word is read from main memory and stored in the cache with the new tag.
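A minimal Python sketch of this direct-mapped read path; keeping the lines in a dict keyed by index, and the backing `main_memory` dict, are illustrative assumptions.

```python
INDEX_BITS = 9   # 9-bit index, 6-bit tag, as in the running example

def dm_read(cache_lines, main_memory, addr):
    index = addr & ((1 << INDEX_BITS) - 1)
    tag = addr >> INDEX_BITS
    line = cache_lines.get(index)            # only one candidate line
    if line is not None and line[0] == tag:  # stored tag matches: hit
        return line[1], True
    data = main_memory[addr]                 # miss: read from main memory
    cache_lines[index] = (tag, data)         # store with the new tag
    return data, False
```

Note that two addresses differing only in the tag (e.g. `a` and `a + 512`) collide on the same line, which is exactly the disadvantage described on the next slide.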

Disadvantage :

If two or more words with the same index but different tags are accessed again and again, the hit ratio drops.

Example :

Here the block size is one word (one memory location); in general, a block may contain more than one word.

Cache block:

The index field is divided in two: 6 bits identify the block, and 3 bits identify the word within the block.
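This block/word split of the 9-bit index can be sketched the same way as the tag/index split (Python):

```python
WORD_BITS = 3    # 2**3 = 8 words per block
BLOCK_BITS = 6   # 2**6 = 64 blocks

def split_index(index):
    """Split the 9-bit index into (block, word-within-block)."""
    word = index & ((1 << WORD_BITS) - 1)   # low 3 bits pick the word
    block = index >> WORD_BITS              # high 6 bits pick the block
    return block, word

print(split_index(0b110011_101))   # prints "(51, 5)"
```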

Set associative mapping:

In set-associative mapping, each cache line can store more than one word. A two-way set-associative cache stores two words with the same index but different tags. Each line is 2 × (6-bit tag + 12-bit data) = 36 bits, so the cache is 512 × 36 bits, with a 9-bit index (2^9 = 512).
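A minimal Python sketch of a two-way set-associative read; using FIFO replacement within the set is an assumption here (the slides leave the in-set policy to the replacement algorithms that follow).

```python
INDEX_BITS = 9
WAYS = 2   # two-way: up to two (tag, data) pairs per index

def sa_read(sets, main_memory, addr):
    index = addr & ((1 << INDEX_BITS) - 1)
    tag = addr >> INDEX_BITS
    ways = sets.setdefault(index, [])        # the set for this index
    for stored_tag, data in ways:
        if stored_tag == tag:                # hit in either way
            return data, True
    data = main_memory[addr]                 # miss
    if len(ways) >= WAYS:
        ways.pop(0)                          # FIFO within the set (assumed)
    ways.append((tag, data))
    return data, False
```

Unlike direct mapping, two words with the same index but different tags can now reside in the cache at the same time.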

Replacement algorithms:


There must be a method for selecting which line in the cache is replaced when there is no room for a new line. The algorithm is implemented in hardware for speed. With direct mapping there is no need for a replacement algorithm: each block maps to only one line, so that line is replaced.

Replacement algorithms for associative and set-associative caches:


- Least Recently Used (LRU): replace the block that has not been touched for the longest period of time
- First In, First Out (FIFO): replace the block that has been in the cache longest
- Least Frequently Used (LFU): replace the block that has had the fewest hits
- Random: only slightly lower performance than the use-based algorithms LRU, FIFO, and LFU
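The LRU policy can be sketched in a few lines with Python's `OrderedDict`, which keeps entries in recency order; the 3-line capacity is an illustrative assumption.

```python
from collections import OrderedDict

CAPACITY = 3   # illustrative cache size, not from the slides

def lru_read(cache, main_memory, addr):
    if addr in cache:
        cache.move_to_end(addr)          # touched: now most recently used
        return cache[addr], True
    data = main_memory[addr]
    if len(cache) >= CAPACITY:
        cache.popitem(last=False)        # evict the least recently used line
    cache[addr] = data
    return data, False
```

Real caches implement this recency tracking in hardware, as the slide notes; the dictionary here only models the behavior.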

Cache write policies:

If the CPU has to write a word to a memory location, the write policy determines when main memory is updated: with write-through, every write updates main memory along with the cache; with write-back, main memory is updated only when the modified cache line is replaced.

Cache initialization:

When the system is first turned on, program portions are loaded into main memory from auxiliary memory. What is stored in the cache at that time? The cache is not empty: it may contain words that are valid or not valid. To indicate this, a special valid bit is stored with each word: if it is 1 the word is valid; if it is 0 the word is not valid. The advantage is that a line is treated as replaceable data only when its valid bit is 0.
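A tiny sketch of the valid-bit check: a line only produces a hit when its valid bit is 1, so the garbage present at power-up is never returned to the CPU. The `(valid, tag, data)` tuple layout is an illustrative assumption.

```python
def line_hit(line, tag):
    """A cache line is (valid, tag, data); hit only if valid == 1 and tags match."""
    valid, stored_tag, _data = line
    return valid == 1 and stored_tag == tag

empty_line = (0, 0, 0)           # power-up state: contents are not valid
filled_line = (1, 42, 0xBEEF)    # a valid word brought in from main memory

print(line_hit(empty_line, 0))     # prints "False"
print(line_hit(filled_line, 42))   # prints "True"
```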

Next topic Main memory:

Next topic: Main memory
