Memory Hierarchy and Locality Concepts

The document discusses the memory hierarchy in computer systems, focusing on the concepts of locality of reference and caching. It explains how locality helps bridge the CPU-memory speed gap and outlines the structure of memory hierarchies, including various cache levels and their operations. Additionally, it covers types of cache misses and the organization of cache memory, emphasizing the importance of locality in optimizing performance.

Computer Systems Organization (CS2.201)
MEMORY HIERARCHY (SECTIONS 6.1 - 6.3.1)

Deepak Gangadharan
Computer Systems Group (CSG), IIIT Hyderabad

Slide Contents: Adapted from slides by Randal Bryant


Topics
▪ Locality of reference
▪ Caching in the memory hierarchy

COMPUTER SYSTEMS ORGANIZATION (SPRING 2024) 2


The CPU-Memory Gap
The gap widens between DRAM, disk, and CPU speeds.
[Figure: log-scale plot of access time in ns versus year (1980-2010), with curves for disk seek time, flash SSD access time, DRAM access time, and CPU cycle time. Disk times sit near the top of the scale and CPU cycle times near the bottom, so the gap spans several orders of magnitude and has widened over time.]


Locality to the Rescue!
The key to bridging this CPU-memory gap is a fundamental property of computer programs known as locality.


Topics
▪ Storage technologies and trends
▪ Locality of reference
▪ Caching in the memory hierarchy


Locality
Principle of Locality: Programs tend to use data and instructions with addresses near or equal to those they have used recently.

Temporal locality:
◦ Recently referenced items are likely to be referenced again in the near future.

Spatial locality:
◦ Items with nearby addresses tend to be referenced close together in time.


Locality Example
sum = 0;
for (i = 0; i < n; i++)
sum += a[i];
return sum;

Data references
◦ Reference array elements in succession (stride-1 reference pattern). → Spatial locality
◦ Reference the variable sum on each iteration. → Temporal locality

Instruction references
◦ Reference instructions in sequence. → Spatial locality
◦ Cycle through the loop repeatedly. → Temporal locality


Qualitative Estimates of Locality
Claim: Being able to look at code and get a qualitative sense of its locality is a key skill for a professional programmer.

Question: Does this function have good locality with respect to array a?

int sum_array_rows(int a[M][N])
{
    int i, j, sum = 0;

    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}


Locality Example
Question: Does this function have good locality with respect to array a?

int sum_array_cols(int a[M][N])
{
    int i, j, sum = 0;

    for (j = 0; j < N; j++)
        for (i = 0; i < M; i++)
            sum += a[i][j];
    return sum;
}


Locality Example
Question: Can you permute the loops so that the function scans the 3-d array a with a stride-1 reference pattern (and thus has good spatial locality)?

int sum_array_3d(int a[M][N][N])
{
    int i, j, k, sum = 0;

    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            for (k = 0; k < N; k++)
                sum += a[k][i][j];
    return sum;
}



Memory Hierarchies
Some fundamental and enduring properties of hardware and software:
◦ Fast storage technologies cost more per byte, have less capacity, and require more power (heat!).
◦ The gap between CPU and main memory speed is widening.
◦ Well-written programs tend to exhibit good locality.

These fundamental properties complement each other beautifully.

They suggest an approach for organizing memory and storage systems known as a memory hierarchy.


Today
▪ Storage technologies and trends
▪ Locality of reference
▪ Caching in the memory hierarchy


An Example Memory Hierarchy
Smaller, faster, and costlier per byte toward the top; larger, slower, and cheaper per byte toward the bottom:

L0: Registers - CPU registers hold words retrieved from the L1 cache
L1: L1 cache (SRAM) - holds cache lines retrieved from the L2 cache
L2: L2 cache (SRAM) - holds cache lines retrieved from main memory
L3: Main memory (DRAM) - holds disk blocks retrieved from local disks
L4: Local secondary storage (local disks) - holds files retrieved from disks on remote network servers
L5: Remote secondary storage (tapes, distributed file systems, Web servers)


Caches
Cache: A smaller, faster storage device that acts as a staging area for a subset of the data in a larger, slower device.

Fundamental idea of a memory hierarchy:
◦ For each k, the faster, smaller device at level k serves as a cache for the larger, slower device at level k+1.

Why do memory hierarchies work?
◦ Because of locality, programs tend to access the data at level k more often than they access the data at level k+1.
◦ Thus, the storage at level k+1 can be slower, and thus larger and cheaper per bit.

Big Idea: The memory hierarchy creates a large pool of storage that costs as much as the cheap storage near the bottom, but that serves data to programs at the rate of the fast storage near the top.


General Cache Concepts
Cache: a smaller, faster, more expensive memory that holds a subset of the blocks (here blocks 4, 9, 10, and 14).
Data is copied between the levels in block-sized transfer units.
Memory: larger, slower, cheaper; viewed as partitioned into fixed-size "blocks" numbered 0-15.


General Cache Concepts: Hit
◦ Data in block b is needed (Request: 14)
◦ Block b is in the cache: Hit!


General Cache Concepts: Miss
◦ Data in block b is needed (Request: 12)
◦ Block b is not in the cache: Miss!
◦ Block b is fetched from memory
◦ Block b is stored in the cache
  • Placement policy: determines where b goes
  • Replacement policy: determines which block gets evicted (the victim)


General Caching Concepts: Types of Cache Misses
Cold (compulsory) miss
◦ Cold misses occur because the cache is empty.

Conflict miss
◦ Most caches limit blocks at level k+1 to a small subset (sometimes a singleton) of the block positions at level k.
◦ E.g., block i at level k+1 must be placed in block (i mod 4) at level k.
◦ Conflict misses occur when the level k cache is large enough, but multiple data objects all map to the same level k block.
◦ E.g., referencing blocks 0, 8, 0, 8, 0, 8, ... would miss every time.

Capacity miss
◦ Occurs when the set of active cache blocks (working set) is larger than the cache.



Topics
▪ Cache memory organization and operation
▪ Performance impact of caches
◦ Rearranging loops to improve spatial locality
◦ Using blocking to improve temporal locality


Cache Memories
Cache memories are small, fast SRAM-based memories managed automatically in hardware.
◦ Hold frequently accessed blocks of main memory

CPU looks first for data in caches (e.g., L1, L2, and L3), then in main memory.
Typical system structure: the CPU chip contains the register file, the ALU, and the cache memories; the bus interface connects the CPU chip via the system bus to an I/O bridge, which connects via the memory bus to main memory.


General Cache Organization (S, E, B)
A cache is an array of S = 2^s sets; each set contains E = 2^e lines; each line holds a block of B = 2^b data bytes plus a valid bit and a tag.

Cache size: C = S x E x B data bytes


Cache Read
An address is split into three fields: tag (t bits), set index (s bits), and block offset (b bits). To read a word:
• Locate the set using the s set-index bits.
• Check whether any line in the set has a tag matching the t tag bits.
• If a line matches and its valid bit is set: hit. The data begins at the block offset within the line's B = 2^b-byte block.



Example: Direct Mapped Cache (E = 1)
Direct mapped: one line per set
Assume: cache block size 8 bytes

Address of int: t tag bits | set index 0...01 | block offset 100
Step 1: use the set-index bits to select one of the S = 2^s sets.


Example: Direct Mapped Cache (E = 1)
Direct mapped: one line per set
Assume: cache block size 8 bytes

Step 2: in the selected set, check the single line: valid bit set + tag match (assume yes) = hit.


Example: Direct Mapped Cache (E = 1)
Direct mapped: one line per set
Assume: cache block size 8 bytes

Step 3: on a hit, the block offset (100 = byte 4) locates the int (4 bytes) within the 8-byte block.

If there is no match: the old line is evicted and replaced.


Direct-Mapped Cache Simulation
t=1 s=2 b=1; M = 16 byte addresses, B = 2 bytes/block, S = 4 sets, E = 1 line/set
Address layout: x | xx | x (tag | set index | block offset)

Address trace (reads, one byte per read, addresses in binary):
0 [0000], miss
1 [0001], hit
7 [0111], miss
8 [1000], miss
0 [0000], miss

Final cache contents:
Set 0: v=1, tag=0, block M[0-1] (the final read of 0 evicted M[8-9], tag 1)
Set 1: empty
Set 2: empty
Set 3: v=1, tag=0, block M[6-7]


E-way Set Associative Cache (Here: E = 2)
E = 2: two lines per set
Assume: cache block size 8 bytes

Address of short int: t tag bits | set index 0...01 | block offset 100
Step 1: use the set-index bits to find the set; each set now holds two lines.


E-way Set Associative Cache (Here: E = 2)
E = 2: two lines per set
Assume: cache block size 8 bytes

Step 2: compare both lines in the set; valid bit set + tag match = hit.


E-way Set Associative Cache (Here: E = 2)
E = 2: two lines per set
Assume: cache block size 8 bytes

Step 3: on a hit, the block offset (100 = byte 4) locates the short int (2 bytes) within the block.

If neither line matches:
• One line in the set is selected for eviction and replacement
• Replacement policies: random, least recently used (LRU), ...


2-Way Set Associative Cache Simulation
t=2 s=1 b=1; M = 16 byte addresses, B = 2 bytes/block, S = 2 sets, E = 2 lines/set
Address layout: xx | x | x (tag | set index | block offset)

Address trace (reads, one byte per read, addresses in binary):
0 [0000], miss
1 [0001], hit
7 [0111], miss
8 [1000], miss
0 [0000], hit

Final cache contents:
Set 0: line 0: v=1, tag=00, block M[0-1]; line 1: v=1, tag=10, block M[8-9]
Set 1: line 0: v=1, tag=01, block M[6-7]; line 1: v=0 (empty)

With two lines per set, blocks 0 and 8 no longer evict each other, so the final read of 0 hits.


What about writes?
Multiple copies of data exist:
◦ L1, L2, main memory, disk

What to do on a write-hit?
◦ Write-through (write immediately to memory)
◦ Write-back (defer the write to memory until the line is replaced)
  ◦ Needs a dirty bit (is the line different from memory or not?)

What to do on a write-miss?
◦ Write-allocate (load the block into the cache, update the line in the cache)
  ◦ Good if more writes to the location follow
◦ No-write-allocate (write immediately to memory, bypassing the cache)

Typical combinations:
◦ Write-through + No-write-allocate
◦ Write-back + Write-allocate


Intel Core i7 Cache Hierarchy
Processor package: each of the four cores (Core 0 ... Core 3) has its own registers, L1 d-cache, L1 i-cache, and unified L2 cache; a unified L3 cache, shared by all cores, sits in front of main memory.

L1 i-cache and d-cache: 32 KB, 8-way, access: 4 cycles
L2 unified cache: 256 KB, 8-way, access: 10 cycles
L3 unified cache: 8 MB, 16-way, access: 40-75 cycles (shared by all cores)
Block size: 64 bytes for all caches.


Cache Performance Metrics
Miss Rate
◦ Fraction of memory references not found in cache (misses / accesses) = 1 - hit rate
◦ Typical numbers (in percentages):
  ◦ 3-10% for L1
  ◦ can be quite small (e.g., < 1%) for L2, depending on size, etc.

Hit Time
◦ Time to deliver a line in the cache to the processor
  ◦ includes the time to determine whether the line is in the cache
◦ Typical numbers:
  ◦ 1-2 clock cycles for L1
  ◦ 5-20 clock cycles for L2

Miss Penalty
◦ Additional time required because of a miss
  ◦ typically 50-200 cycles for main memory (trend: increasing!)


Let's think about those numbers
Huge difference between a hit and a miss
◦ Could be 100x, if just L1 and main memory

Would you believe 99% hits is twice as good as 97%?
◦ Consider: cache hit time of 1 cycle, miss penalty of 100 cycles
◦ Average access time:
  ◦ 97% hits: 1 cycle + 0.03 * 100 cycles = 4 cycles
  ◦ 99% hits: 1 cycle + 0.01 * 100 cycles = 2 cycles

This is why "miss rate" is used instead of "hit rate".



Writing Cache Friendly Code
Make the common case go fast
◦ Focus on the inner loops of the core functions

Minimize the misses in the inner loops
◦ Repeated references to variables are good (temporal locality)
◦ Stride-1 reference patterns are good (spatial locality)

Key idea: our qualitative notion of locality is quantified through our understanding of cache memories.


Topics
▪ Cache organization and operation
▪ Performance impact of caches
◦ Rearranging loops to improve spatial locality
◦ Using blocking to improve temporal locality


Miss Rate Analysis for Matrix Multiply
Assume:
◦ Line size = 32B (big enough for four 64-bit words)
◦ Matrix dimension (N) is very large
◦ Approximate 1/N as 0.0
◦ Cache is not even big enough to hold multiple rows

Analysis Method:
◦ Look at access pattern of inner loop

[Diagram: in the inner loop, row i of A is scanned along k, column j of B is scanned along k, and element (i, j) of C is updated.]


Matrix Multiplication Example
Description:
◦ Multiply N x N matrices
◦ O(N^3) total operations
◦ N reads per source element
◦ N values summed per destination element (but the sum may be held in a register)

/* ijk */
for (i=0; i<n; i++) {
    for (j=0; j<n; j++) {
        sum = 0.0;              /* variable sum held in register */
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}


Layout of C Arrays in Memory (review)
C arrays allocated in row-major order
◦ each row in contiguous memory locations
Stepping through columns in one row:
for (i = 0; i < N; i++)
    sum += a[0][i];
◦ accesses successive elements
◦ if block size (B) > 4 bytes, exploit spatial locality
◦ compulsory miss rate = 4 bytes / B

Stepping through rows in one column:
for (i = 0; i < n; i++)
    sum += a[i][0];
◦ accesses distant elements
◦ no spatial locality!
◦ compulsory miss rate = 1 (i.e., 100%)



Matrix Multiplication (ijk)
/* ijk */
for (i=0; i<n; i++) {
    for (j=0; j<n; j++) {
        sum = 0.0;
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Inner loop accesses: A row-wise (i,*), B column-wise (*,j), C fixed (i,j)

Misses per inner loop iteration: A = 0.25, B = 1.0, C = 0.0
Matrix Multiplication (jik)
/* jik */
for (j=0; j<n; j++) {
    for (i=0; i<n; i++) {
        sum = 0.0;
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Inner loop accesses: A row-wise (i,*), B column-wise (*,j), C fixed (i,j)

Misses per inner loop iteration: A = 0.25, B = 1.0, C = 0.0
Matrix Multiplication (kij)
/* kij */
for (k=0; k<n; k++) {
    for (i=0; i<n; i++) {
        r = a[i][k];
        for (j=0; j<n; j++)
            c[i][j] += r * b[k][j];
    }
}

Inner loop accesses: A fixed (i,k), B row-wise (k,*), C row-wise (i,*)

Misses per inner loop iteration: A = 0.0, B = 0.25, C = 0.25
Matrix Multiplication (ikj)
/* ikj */
for (i=0; i<n; i++) {
    for (k=0; k<n; k++) {
        r = a[i][k];
        for (j=0; j<n; j++)
            c[i][j] += r * b[k][j];
    }
}

Inner loop accesses: A fixed (i,k), B row-wise (k,*), C row-wise (i,*)

Misses per inner loop iteration: A = 0.0, B = 0.25, C = 0.25
Matrix Multiplication (jki)
/* jki */
for (j=0; j<n; j++) {
    for (k=0; k<n; k++) {
        r = b[k][j];
        for (i=0; i<n; i++)
            c[i][j] += a[i][k] * r;
    }
}

Inner loop accesses: A column-wise (*,k), B fixed (k,j), C column-wise (*,j)

Misses per inner loop iteration: A = 1.0, B = 0.0, C = 1.0


Matrix Multiplication (kji)
/* kji */
for (k=0; k<n; k++) {
    for (j=0; j<n; j++) {
        r = b[k][j];
        for (i=0; i<n; i++)
            c[i][j] += a[i][k] * r;
    }
}

Inner loop accesses: A column-wise (*,k), B fixed (k,j), C column-wise (*,j)

Misses per inner loop iteration: A = 1.0, B = 0.0, C = 1.0
Summary of Matrix Multiplication
ijk (& jik): 2 loads, 0 stores; misses/iter = 1.25

for (i=0; i<n; i++) {
    for (j=0; j<n; j++) {
        sum = 0.0;
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

kij (& ikj): 2 loads, 1 store; misses/iter = 0.5

for (k=0; k<n; k++) {
    for (i=0; i<n; i++) {
        r = a[i][k];
        for (j=0; j<n; j++)
            c[i][j] += r * b[k][j];
    }
}

jki (& kji): 2 loads, 1 store; misses/iter = 2.0

for (j=0; j<n; j++) {
    for (k=0; k<n; k++) {
        r = b[k][j];
        for (i=0; i<n; i++)
            c[i][j] += a[i][k] * r;
    }
}


Core i7 Matrix Multiply Performance
[Figure: cycles per inner loop iteration versus array size n (50 to 750) on a Core i7. The jki/kji pair is slowest (roughly 40-50 cycles per iteration at large n), ijk/jik sit in the middle (around 20), and kij/ikj are fastest (under about 10), matching the miss-rate analysis.]
Topics
▪ Cache organization and operation
▪ Performance impact of caches
◦ Rearranging loops to improve spatial locality
◦ Using blocking to improve temporal locality


Example: Matrix Multiplication
c = (double *) calloc(sizeof(double), n*n);

/* Multiply n x n matrices a and b */
void mmm(double *a, double *b, double *c, int n) {
    int i, j, k;
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            for (k = 0; k < n; k++)
                c[i*n+j] += a[i*n + k]*b[k*n + j];
}

[Diagram: row i of c = row i of a times matrix b.]


Cache Miss Analysis
Assume:
◦ Matrix elements are doubles
◦ Cache block = 8 doubles
◦ Cache size C << n (much smaller than n)

First iteration:
◦ n/8 + n = 9n/8 misses (n/8 for the row of a, scanned with stride 1; n for the column of b, one miss per element)
◦ Afterwards in cache (schematic): the first row of a and the first eight columns of b


Cache Miss Analysis
Assume:
◦ Matrix elements are doubles
◦ Cache block = 8 doubles
◦ Cache size C << n (much smaller than n)

Second iteration:
◦ Again: n/8 + n = 9n/8 misses

Total misses:
◦ 9n/8 * n^2 = (9/8) * n^3


Blocked Matrix Multiplication
c = (double *) calloc(sizeof(double), n*n);

/* Multiply n x n matrices a and b */
void mmm(double *a, double *b, double *c, int n) {
    int i, j, k, i1, j1, k1;
    for (i = 0; i < n; i += B)
        for (j = 0; j < n; j += B)
            for (k = 0; k < n; k += B)
                /* B x B mini matrix multiplications */
                for (i1 = i; i1 < i+B; i1++)
                    for (j1 = j; j1 < j+B; j1++)
                        for (k1 = k; k1 < k+B; k1++)
                            c[i1*n+j1] += a[i1*n + k1]*b[k1*n + j1];
}

[Diagram: block (i1, j1) of c accumulates block-row i1 of a times block-column j1 of b; block size B x B.]
Cache Miss Analysis
Assume:
◦ Cache block = 8 doubles
◦ Cache size C << n (much smaller than n)
◦ Three blocks fit into the cache: 3B^2 < C

First (block) iteration:
◦ B^2/8 misses for each block
◦ 2n/B blocks * B^2/8 misses/block = nB/4 misses (omitting matrix c)
◦ Afterwards in cache (schematic): one block-row of a and one block-column of b, n/B blocks of size B x B each


Cache Miss Analysis
Assume:
◦ Cache block = 8 doubles
◦ Cache size C << n (much smaller than n)
◦ Three blocks fit into the cache: 3B^2 < C

Second (block) iteration:
◦ Same as the first iteration
◦ 2n/B * B^2/8 = nB/4

Total misses:
◦ nB/4 * (n/B)^2 = n^3/(4B)


Summary
No blocking: (9/8) * n^3 misses
Blocking: 1/(4B) * n^3 misses

Use the largest possible block size B, subject to the limit 3B^2 < C!

Reason for the dramatic difference:
◦ Matrix multiplication has inherent temporal locality:
  ◦ Input data: 3n^2, computation: 2n^3
  ◦ Every array element is used O(n) times!
◦ But the program has to be written properly



Concluding Observations
Programmer can optimize for cache performance
◦ How data structures are organized
◦ How data are accessed
◦ Nested loop structure
◦ Blocking is a general technique

All systems favor "cache friendly code"
◦ Getting absolute optimum performance is very platform specific
  ◦ cache sizes, line sizes, associativities, etc.
◦ Can get most of the advantage with generic code
  ◦ keep the working set reasonably small (temporal locality)
  ◦ use small strides (spatial locality)


Thank You!
