
CO101
Principle of Computer Organization
Lecture 18: Memory 4

Liang Yanyan
澳門科技大學 (Macau University of Science and Technology)

Problems
• Not enough main memory? A program requests too much memory space.
• Multiple programs? Different programs try to access the same address → conflict.
• How to protect a program’s data from being modified by other programs?

(Figure: the processor accesses main memory directly, sending a physical address and receiving instructions and data.)

Solution for size limitation
• Create a big virtual memory and present it to the processor.
• The processor simply believes it has one big memory.
• In reality, some data is stored in main memory and some on the hard disk; the processor does not know where the data actually resides.
• The processor reads/writes data in virtual memory through a virtual address.

(Figure: the processor issues virtual addresses to a large virtual memory; an address translator converts them to physical addresses, and the instructions and data actually reside in main memory or on disk.)

Solution for conflict
• Each program can use its virtual addresses arbitrarily, and can use as much space (virtual memory) as it wants.
• E.g. two distinct programs can use the same virtual address, which the operating system maps to two different physical memory locations → the address conflict is resolved.

(Figure: Program A and Program B each have their own virtual address space; the same virtual addresses map to different locations in physical memory or on disk.)

Review: The Memory Hierarchy
• Take advantage of the principle of locality to present the user with as much memory as is available in the cheapest technology, at the speed offered by the fastest technology.

(Figure: memory hierarchy pyramid. Processor → L1$ → L2$ → Main Memory → Secondary Memory, with increasing distance from the processor in access time and increasing relative size at each level. The hierarchy is inclusive: what is in L1$ is a subset of what is in L2$, which is a subset of what is in main memory, which is a subset of what is in secondary memory. Typical transfer units between levels: 4-8 bytes (word), 8-32 bytes (block), 1 to 4 blocks, 1,024+ bytes (disk sector = page).)

Virtual Memory
• Use main memory as a “cache” for secondary memory.
• Allows efficient and safe sharing of memory among multiple programs.
• Provides the ability to easily run programs larger than the size of physical memory.
• Simplifies loading a program for execution by providing for code relocation (i.e., the code can be loaded anywhere in main memory).
• Managed jointly by CPU hardware and the operating system (OS).
• What makes it work? – again, the Principle of Locality: a program is likely to access a relatively small portion of its address space during any period of time.
• Each program is compiled into its own address space – a “virtual” address space. During run-time each virtual address must be translated to a physical address (an address in main memory).
• A VM “block” is called a page.
• A VM translation “miss” is called a page fault.


The big picture (32-bit address)

(Figure: the CPU runs Programs A, B, and C, each believing it has its own 4 GB memory. Their virtual addresses go through an address translator (the operating system) to become physical addresses; the data actually lives either in the 1 GB main memory or on the 32 GB disk.)

• Use the main memory as a cache to reduce the read/write delay of accessing the disk.
• Fully associative placement, with a least-recently-used (LRU) replacement policy.

Virtual memory
• Virtual memory is divided into pages.
• A virtual address is divided into two fields: the virtual page number and the page offset.

(Figure: Program A’s 4 GB virtual memory is divided into pages: page 0, page 1, page 2, page 3, …, page N; a virtual address selects a page and a location within it.)
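As a concrete illustration of the two fields, here is a minimal sketch in C, assuming 32-bit virtual addresses and 4 KiB pages (a 12-bit page offset, matching the 2^12-byte page on a later slide); the macro and function names are illustrative:

    #include <stdint.h>

    #define PAGE_OFFSET_BITS 12                       /* assumed: 4 KiB pages (12-bit offset) */
    #define PAGE_SIZE        (1u << PAGE_OFFSET_BITS)

    /* Split a 32-bit virtual address into its two fields. */
    static inline uint32_t vpn_of(uint32_t va)    { return va >> PAGE_OFFSET_BITS; } /* virtual page number */
    static inline uint32_t offset_of(uint32_t va) { return va & (PAGE_SIZE - 1);   } /* page offset         */

With these definitions, vpn_of(va) selects the page and offset_of(va) locates the byte within that page.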

Convert virtual address to physical address
• The conversion is done using a page table.
• Each program (process) has a separate page table stored in main memory. A “page table register” points to the page table of the current program (process).
• The page table is indexed with the virtual page number (VPN).
• Each entry contains a valid bit and a physical page number (PPN).
• The PPN is concatenated with the page offset to form the physical address.
• No tag is needed because the index is the full VPN.
• Virtual address → physical address by a combination of HW/SW.
• Each memory request first needs an address translation.
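A minimal sketch of this lookup, reusing the PAGE_OFFSET_BITS and PAGE_SIZE definitions above (the PageTableEntry layout is illustrative, not the exact entry format used in class):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        bool     valid;  /* 1 if the page is present in main memory */
        uint32_t ppn;    /* physical page number                    */
    } PageTableEntry;

    /* Translate a virtual address using the current process's page table
     * (indexed by the full VPN, so no tag is needed). Returns true and
     * writes *pa on success; returns false if the valid bit is off. */
    bool translate(const PageTableEntry *page_table, uint32_t va, uint32_t *pa)
    {
        uint32_t vpn    = va >> PAGE_OFFSET_BITS;
        uint32_t offset = va & (PAGE_SIZE - 1);

        if (!page_table[vpn].valid)
            return false;                                          /* page fault: data is on disk */

        *pa = (page_table[vpn].ppn << PAGE_OFFSET_BITS) | offset;  /* concatenate PPN and offset  */
        return true;
    }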


Convert virtual address to physical address

(Figure: page table translation diagram. How big is a page? The page offset is 12 bits, so a page is 2^12 = 4,096 bytes.)

Loading data using virtual address

• If Valid == 1, get the physical page number, concatenate it with the page offset to form the physical address, and load the data from physical memory.
• If Valid == 0, locate the data on disk and load it from disk into physical memory. Record the physical page number, use it to update the page table, and change Valid to 1.
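A sketch of this flow, building on the translate() sketch above; alloc_physical_page, disk_read_page, and mem are illustrative placeholders for what the operating system and hardware actually do:

    /* Illustrative placeholders, not real APIs: */
    extern uint32_t alloc_physical_page(void);                   /* choose (or evict) a physical page   */
    extern void     disk_read_page(uint32_t vpn, uint32_t ppn);  /* copy the page from disk into memory */
    extern uint8_t  mem[];                                       /* byte-addressable physical memory    */

    uint8_t load_byte(PageTableEntry *page_table, uint32_t va)
    {
        uint32_t pa;
        if (!translate(page_table, va, &pa)) {       /* Valid == 0: data is on disk            */
            uint32_t vpn = va >> PAGE_OFFSET_BITS;
            uint32_t ppn = alloc_physical_page();    /* make room in main memory               */
            disk_read_page(vpn, ppn);                /* load the page from disk into memory    */
            page_table[vpn].ppn   = ppn;             /* record the physical page number        */
            page_table[vpn].valid = true;            /* change Valid to 1                      */
            translate(page_table, va, &pa);          /* translation now succeeds               */
        }
        return mem[pa];                              /* Valid == 1: load from physical memory  */
    }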

Virtual Addressing with a Cache
• Thus it takes an extra memory access to translate a VA to a PA.

(Figure: CPU → translation → cache → main memory. The virtual address (VA) is translated to a physical address (PA) before the cache is accessed; on a hit the cache returns the data, on a miss the data comes from main memory.)

Problems
• How big is the page table?
• In the previous example, the virtual page number is 20 bits long → the table contains 2^20 entries.
• Where can we store this table?
• It can be stored in main memory, but then an address translation takes much longer: we must first access main memory just to perform the translation.
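To get a rough sense of scale (assuming, for illustration, 4 bytes per page table entry, a figure not given on the slide): 2^20 entries × 4 bytes ≈ 4 MB of page table per process, far too large to keep on chip, which is why it sits in main memory.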


To read data using a virtual address:

Step 1: read the page table stored in main memory, load the physical page number, and calculate the physical address.

Step 2: load the data using the physical address.

→ Involves two memory accesses!!

Solution
• Add a cache to store a portion of the page table.
• Instead of reading table entries from main memory, load them from this cache.
• It works just like the data cache introduced before; the only difference is that this cache stores page table entries.

(Figure: the processor sends a virtual address to a table cache that stores a portion of the page table; main memory stores the entire page table; the result is the physical address.)

Translation Lookaside Buffers (TLBs)
• Just like any other cache, the TLB can be organized as fully associative, set associative, or direct mapped.
• TLB access time is typically smaller than cache access time (because TLBs are much smaller than caches).
• TLBs are typically not more than 512 entries even on high-end machines.


Table cache: Translation Lookaside Buffer (TLB)

1. The TLB is a fast cache that stores a portion of the page table.

2. As a result, instead of two memory accesses for each read, we now need one cache access and one memory access.

3. Since it is a cache, we need a tag field.
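A minimal sketch of a direct-mapped TLB lookup under the same assumptions as the earlier sketches (the entry count, field widths, and names are illustrative):

    #define TLB_ENTRIES 64                  /* illustrative size; real TLBs are typically <= 512 entries */

    typedef struct {
        bool     valid;
        uint32_t tag;   /* upper bits of the VPN                        */
        uint32_t ppn;   /* the cached translation: physical page number */
    } TLBEntry;

    static TLBEntry tlb[TLB_ENTRIES];

    /* Direct-mapped lookup: returns true and sets *ppn on a TLB hit, false on a TLB miss. */
    bool tlb_lookup(uint32_t vpn, uint32_t *ppn)
    {
        uint32_t index = vpn % TLB_ENTRIES;   /* which TLB entry to check          */
        uint32_t tag   = vpn / TLB_ENTRIES;   /* remaining VPN bits become the tag */

        if (tlb[index].valid && tlb[index].tag == tag) {
            *ppn = tlb[index].ppn;            /* hit: no page table access needed  */
            return true;
        }
        return false;                         /* miss: must read the page table in memory */
    }

The tag field is needed because, unlike the full page table, the TLB holds only a small subset of the translations.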

TLB miss and page fault
• Address cannot be found in the TLB → TLB miss.
• Need to search the page table (e.g. in main memory), load the table entry into the TLB, and resume the program.
• If a table entry is found but its valid bit is off, the data is not in main memory → page fault.
• Handled by the operating system:
• Copy the data from disk to main memory; the data may previously have been swapped out of main memory to make room for other processes.
• Record the physical page number of the data in main memory.
• Update the page table entry: store the physical page number in the entry indexed by the virtual page number, and update the valid bit.
• Resume the program; now the data can be found in main memory.
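A sketch of the TLB-miss path just described, reusing the structures from the earlier sketches (direct-mapped refill; the names are illustrative):

    /* On a TLB miss, walk the page table in main memory and refill the TLB.
     * Returns false if the valid bit is off, i.e. a true page fault that the
     * OS must handle (bring the page in from disk, as in load_byte above). */
    bool tlb_refill(const PageTableEntry *page_table, uint32_t vpn)
    {
        if (!page_table[vpn].valid)
            return false;                         /* page fault: handled by the operating system */

        uint32_t index = vpn % TLB_ENTRIES;       /* same direct-mapped indexing as tlb_lookup   */
        tlb[index].valid = true;
        tlb[index].tag   = vpn / TLB_ENTRIES;
        tlb[index].ppn   = page_table[vpn].ppn;   /* load the table entry into the TLB           */
        return true;
    }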


TLB miss and page fault
• A TLB miss – is it a page fault or merely a TLB miss?
• If the page is loaded into main memory, then the TLB miss can be handled (in hardware or software) by loading the translation information from the page table into the TLB.
• Takes 10’s of cycles to find and load the translation info into the TLB.
• If the page is not in main memory, then it’s a true page fault.
• Takes 1,000,000’s of cycles to service a page fault.
• TLB misses are much more frequent than true page faults.
• On a page fault, the page must be fetched from disk.
• Takes millions of clock cycles.
• Handled by OS code.
• Try to minimize the page fault rate:
• Fully associative placement.
• Smart replacement algorithms.


Cooperation of TLB & Cache

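A rough sketch of how the TLB, page table, and cache cooperate on a single memory access, combining the earlier sketches; cache_lookup, mem_read, and handle_page_fault are illustrative placeholders rather than the exact hardware/OS interfaces:

    extern bool    cache_lookup(uint32_t pa, uint8_t *data);            /* physically addressed data cache (illustrative) */
    extern uint8_t mem_read(uint32_t pa);                               /* main memory access (illustrative)              */
    extern void    handle_page_fault(PageTableEntry *pt, uint32_t vpn); /* OS page-fault handler (illustrative)           */

    uint8_t access_memory(PageTableEntry *page_table, uint32_t va)
    {
        uint32_t vpn    = va >> PAGE_OFFSET_BITS;
        uint32_t offset = va & (PAGE_SIZE - 1);
        uint32_t ppn;
        uint8_t  data;

        if (!tlb_lookup(vpn, &ppn)) {                /* 1. TLB miss: consult the page table    */
            if (!page_table[vpn].valid)              /* 2. valid bit off → page fault          */
                handle_page_fault(page_table, vpn);  /*    OS brings the page in from disk     */
            tlb_refill(page_table, vpn);             /*    load the translation into the TLB   */
            ppn = page_table[vpn].ppn;
        }

        uint32_t pa = (ppn << PAGE_OFFSET_BITS) | offset;  /* 3. form the physical address     */

        if (cache_lookup(pa, &data))                 /* 4. cache hit: data from the cache      */
            return data;
        return mem_read(pa);                         /* 5. cache miss: data from main memory   */
    }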

TLB Event Combinations

TLB    Page Table    Cache     Possible? Under what circumstances?
Hit    Hit           Hit       Yes – what we want!
Hit    Hit           Miss      Yes – although the page table is not checked if the TLB hits
Miss   Hit           Hit       Yes – TLB miss, PA in page table
Miss   Hit           Miss      Yes – TLB miss, PA in page table, but data not in cache
Miss   Miss          Miss      Yes – page fault
Hit    Miss          Miss/Hit  Impossible – TLB translation not possible if page is not present in memory
Miss   Miss          Hit       Impossible – data not allowed in cache if page is not in memory

4 Questions for the Memory Hierarchy
• Q1: Where can an entry be placed in the upper level? (Entry placement)
• Q2: How is an entry found if it is in the upper level? (Entry identification)
• Q3: Which entry should be replaced on a miss? (Entry replacement)
• Q4: What happens on a write? (Write strategy)


