Memory Hierarchy and Management in Computing
Computer memory can be divided into five major levels based on use as well as speed. A processor can move from any one level to another on the basis of its requirements. The five levels in a system's memory hierarchy are registers, cache memory, main memory, magnetic disk, and magnetic tape.
1. Capacity
It refers to the total volume of data that a system's memory can store. Capacity increases as we move from the top to the bottom of the memory hierarchy.
2. Access Time
It refers to the time interval between a request for read/write and the availability of the data. Access time increases as we move from the top to the bottom of the memory hierarchy.
3. Performance
Early computer systems were designed without a memory hierarchy, and the speed gap between the CPU registers and the main memory grew because of the large difference in their access times. This ultimately lowered system performance, so an enhancement was required. That enhancement was introduced in the form of the memory hierarchy design, and because of it the system's performance increased. One of the primary ways to increase the performance of a system is to minimise how far down the memory hierarchy the processor has to go to manipulate data.
1. Registers
Registers are small, fast storage elements inside the processor that hold the data word currently being operated on, typically 32 or 64 bits wide. Most processors make use of a status word register and an accumulator: the accumulator is primarily used to hold the operands and results of arithmetic operations, while the status word register is primarily used for decision making.
2. Cache Memory
The cache holds chunks of frequently used information from the main memory. Cache memory is found in or very close to the processor. A processor with a single core rarely has multiple cache levels. Present multi-core processors typically have two levels of cache for every individual core, plus one further level that is shared among the cores.
3. Main Memory
The main memory is the memory unit that communicates directly with the CPU. It is the primary storage unit of a computer system: a fast, large memory used to store programs and data throughout the computer's operation. This type of memory is made up of both RAM and ROM.
4. Magnetic Disks
Magnetic disks are circular plates fabricated from metal or plastic and coated with a magnetisable material. Both surfaces of a disk are usually used, and several disks can be stacked on a single spindle, with read/write heads available for each surface. All the disks rotate together at high speed.
5. Magnetic Tape
Magnetic tape is a conventional magnetic recording medium consisting of a thin magnetisable coating on a long, narrow strip of plastic film. It is used mainly to back up large volumes of data. When a computer needs to access a tape, it first mounts it; once the information has been accessed, the tape is unmounted. The access time of magnetic tape is much slower, and it can take minutes to reach the required data.
The paged memory management method divides the computer's main memory into fixed-size units known as page frames.
Segmented memory is the only memory management method that does not provide the user's
program with a linear and contiguous address space.
Swapping is a method in which a process is temporarily swapped out of the main memory to a backing store. It is later brought back into memory to continue execution.
Memory allocation is a process by which computer programs are assigned memory or space.
Paging is a storage mechanism that allows the OS to retrieve processes from secondary storage into the main memory in the form of pages.
Fragmentation refers to the condition of a disk in which files are divided into pieces scattered
around the disk.
The segmentation method works almost like paging. The only difference between the two is that segments are of variable length, whereas in the paging method pages are always of fixed size.
Dynamic loading is a technique in which a routine of a program is not loaded until the program calls it.
Linking is a method that helps the OS collect and merge various modules of code and data into a single executable file.
Cache Memories: Mapping and Replacement Techniques
Cache mapping refers to the technique by which content present in the main memory is brought into the cache memory. Three distinct types of mapping are used for cache memory. The process is as follows:
• The required word is not present in the cache memory.
• The block containing the required word is mapped from the main memory into the cache.
• This mapping can be performed using any one of the cache mapping techniques.
Let us discuss different techniques of cache mapping.
Before we proceed, it is crucial to note these points:
Important Note:
• The main memory is divided into multiple partitions of equal size, known as frames or blocks.
• The cache memory is divided into partitions of the same size as the blocks, known as lines.
• During cache mapping, the main memory block is simply copied into a cache line; the block is not removed from the main memory.
1. Direct Mapping
In direct mapping, a particular block of the main memory can map only to one particular line of the cache. The cache line to which a given block can map is given by:
Cache line number = (Main Memory Block Address) Modulo (Total number of lines in the cache)
For example:
• Consider a cache memory divided into 'n' lines.
• Then block 'j' of the main memory can map only to line (j mod n) of the cache.
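As a quick check of the formula, here is a minimal Python sketch; the cache size and block numbers are assumptions chosen purely for illustration:

```python
# Direct mapping: block j of main memory always maps to line (j mod n),
# where n is the total number of cache lines.

def direct_mapped_line(block_number: int, num_lines: int) -> int:
    """Return the only cache line a given main memory block can occupy."""
    return block_number % num_lines

n = 8  # assume a cache with 8 lines
for j in [0, 5, 8, 13]:  # assumed main memory block numbers
    # blocks 0 and 8 collide on line 0; blocks 5 and 13 collide on line 5
    print(f"block {j} -> cache line {direct_mapped_line(j, n)}")
```

Note how two blocks that are n apart always compete for the same line, which is why direct mapping can evict a block even when other lines are free.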
The Need for a Replacement Algorithm
In direct mapping, a main memory block can map only to one particular line of the cache; if that line is already occupied, the incoming block simply overwrites it. Direct mapping therefore needs no replacement algorithm.
3. K-way Set Associative Mapping
In the case of k-way set associative mapping,
• The cache lines are grouped into sets, where each set consists of k lines.
• Any given main memory block can map only to a particular cache set.
• However, within that set, the block can map to any cache line that is freely available.
• The cache set to which a certain main memory block can map is basically given as
follows:
Cache set number = ( Block Address of the Main Memory ) Modulo (Total Number of sets
present in the Cache)
For example, in a two-way set-associative mapping, each set contains two cache lines, and an incoming main memory block may occupy either free line of the set it maps to.
Note that:
• The k-way set associative mapping refers to a combination of the direct mapping as
well as the fully associative mapping.
• It makes use of the fully associative mapping that exists within each set.
• Therefore, the k-way set associative mapping needs a certain type of replacement
algorithm.
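The set-number calculation above can be sketched in a few lines of Python; the cache geometry (8 lines organized two-way) is an assumption for illustration:

```python
# k-way set associative mapping: a block maps to exactly one set,
# but may occupy any of the k lines within that set.

def cache_set_number(block_address: int, num_sets: int) -> int:
    """Return the set a given main memory block must be placed in."""
    return block_address % num_sets

num_lines, k = 8, 2
num_sets = num_lines // k  # 8 lines grouped into 4 sets of 2 lines each
for block in [0, 3, 4, 11]:
    print(f"block {block} -> set {cache_set_number(block, num_sets)}")
```

Blocks 0 and 4 map to the same set but can coexist in its two lines, which is exactly the flexibility direct mapping lacks.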
Division of Physical Address
In k-way set associative mapping, the physical address is divided into three fields: the tag, the set number, and the word offset within the block.
Special Cases
• When k = 1, k-way set associative mapping becomes direct mapping. Thus, direct mapping = one-way set associative mapping.
• When k equals the total number of lines present in the cache, k-way set associative mapping becomes fully associative mapping.
• The L1 cache, also known as the primary cache, is very fast but relatively small. It is usually embedded in the processor chip as the CPU cache.
• The L2 cache, also known as the secondary cache, usually has a larger capacity than the L1 cache.
• The Level 3 (L3) cache is a specialized memory developed to improve the performance of the L1 and L2 caches.
Virtual Memory
Virtual memory is a storage scheme that gives the user the illusion of having a very large main memory. This is done by treating a part of secondary memory as if it were main memory.
In this scheme, the user can load processes larger than the available main memory, under the illusion that enough memory is available to hold them.
Instead of loading one big process into the main memory, the operating system loads parts of several processes into the main memory.
By doing this, the degree of multiprogramming is increased, and therefore the CPU utilization also increases.
Virtual memory has become quite common in modern systems. Whenever pages need to be loaded into the main memory for execution and there is not enough memory for all of them, then instead of refusing to load the pages, the OS searches for areas of RAM that have been least used in recent times, or not referenced at all, and copies them to secondary memory to make space for the new pages in the main memory.
Since this procedure happens automatically, the computer appears to have unlimited RAM.
Demand Paging
Demand paging is a popular method of virtual memory management. In demand paging, the least-used pages of a process are kept in secondary memory.
A page is copied into the main memory only when it is demanded, i.e. when a page fault occurs. Various page replacement algorithms are used to determine which pages will be replaced. We will discuss each of them later in detail.
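As a taste of these algorithms, here is a minimal sketch of least-recently-used (LRU) replacement, which matches the earlier idea of evicting the pages least used in recent times; the reference string and frame count are assumptions for illustration:

```python
from collections import OrderedDict

def lru_page_faults(reference_string, num_frames):
    """Count page faults when frames are managed with LRU replacement."""
    frames = OrderedDict()  # pages in memory, least recently used first
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1                     # page fault: page must be brought in
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = True
    return faults

# Assumed reference string with 3 frames, purely for illustration:
print(lru_page_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))  # -> 6 faults
```

Tracing by hand: the first three references fault while filling the frames, and each later miss evicts whichever page has gone longest without use.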
Snapshot of a virtual memory management system
Let us assume two processes, P1 and P2, each containing 4 pages, with a page size of 1 KB. The main memory contains 8 frames of 1 KB each. The OS resides in the first two frames. The first page of P1 is stored in the third frame, and the other frames are shown filled with different pages of the processes in the main memory.
The page table of each process is 1 KB in size and therefore fits in one frame. The page tables of both processes contain various information, as shown in the snapshot.
The CPU contains a register that holds the base address of the page table, which is 5 in the case of P1 and 7 in the case of P2. This page table base address is added to the page number from the logical address to access the corresponding page table entry.
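The lookup just described can be sketched as follows; the 1 KB page size and the placement of P1's first page in frame 2 follow the snapshot, while the remaining page table contents and the sample logical address are assumptions for illustration:

```python
def translate(logical_address, page_table, page_size=1024):
    """Split a logical address into (page number, offset) and look up the frame."""
    page_number, offset = divmod(logical_address, page_size)
    frame = page_table[page_number]   # entry found via the page-table base + page number
    return frame * page_size + offset  # physical address

# Assumed page table for P1: page i resides in frame page_table[i].
# Frame 2 for page 0 matches the snapshot (OS occupies frames 0 and 1).
page_table_p1 = {0: 2, 1: 5, 2: 3, 3: 7}
print(translate(1500, page_table_p1))  # page 1, offset 476 -> frame 5 -> 5596
```

The CPU's page-table base register plays the role of locating `page_table_p1` in memory; here a dictionary stands in for that indexed lookup.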
A DMA controller is a hardware device that allows I/O devices to access memory directly, with minimal participation of the processor. The DMA controller needs the usual interface circuits to communicate with the CPU and the input/output devices.
The DMA controller communicates with the CPU through the data bus and control lines. The CPU selects a register within the DMA controller through the address bus by enabling the DS (DMA select) and RS (register select) inputs. RD (read) and WR (write) are bidirectional. When the BG (bus grant) input is 0, the CPU can communicate with the DMA registers. When the BG input is 1, the CPU has relinquished the buses and the DMA controller can communicate directly with the memory.
DMA controller registers :
The DMA controller has three registers as follows.
Address register – contains the address that specifies the desired location in memory.
Word count register – contains the number of words to be transferred.
Control register – specifies the transfer mode.
Note – All registers in the DMA appear to the CPU as I/O interface registers. Therefore, the
CPU can both read and write into the DMA registers under program control via the data bus.
Block Diagram
Explanation: The CPU initializes the DMA controller by sending the following information through the data bus:
• The starting address of the memory block where the data is available (for a read) or where the data is to be stored (for a write).
• The word count, which is the number of words in the memory block to be read or written.
• A control word to define the mode of transfer, such as read or write.
• A control signal to begin the DMA transfer.
Suppose a device connected to an input-output port wants to transfer data to memory. First it sends the input-output port address and a control signal (input-output read) to the port; then it sends the memory address and a memory write signal to the memory location where the data has to be transferred. In the normal input-output technique, the processor is kept busy checking whether each input-output operation has completed before the next one can begin, so the technique is slow. This problem of slow data transfer between an input-output port and memory, or between two memories, is avoided by the Direct Memory Access (DMA) technique. DMA is faster because the microprocessor is bypassed and control of the address bus and data bus is given to the DMA controller.
• HOLD – the hold request signal, sent by the DMAC to ask the CPU for control of the buses.
Suppose a floppy drive that is connected at input-output port wants to transfer data to
memory, the following steps are performed:
• Step-1: First, the floppy drive sends a DMA request (DREQ) to the DMAC, meaning the floppy drive wants DMA service.
• Step-2: The DMAC then sends a HOLD signal to the CPU.
• Step-3: After accepting the DMA service request, the CPU sends a hold acknowledgement (HLDA) to the DMAC. The microprocessor has now released control of the address bus and the data bus to the DMAC and is bypassed during the DMA service.
• Step-4: The DMAC then sends an acknowledgement (DACK) to the floppy drive connected at the input-output port, telling it to be ready for its DMA service.
• Step-5: Now with the help of input-output read and memory write signal the
data is transferred from the floppy drive to the memory.
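The five steps can be written out as a toy simulation; the signal names follow the text, while the data values and memory layout are assumptions for illustration:

```python
# Toy model of the DMA handshake and transfer. The prints trace the
# control signals; the transfer itself is modelled as a list copy.

def dma_transfer(device_data, memory, start_address):
    print("DREQ: device requests DMA service")       # step 1
    print("HOLD: DMAC asks the CPU for the buses")   # step 2
    print("HLDA: CPU releases address and data bus") # step 3
    print("DACK: DMAC tells the device to begin")    # step 4
    for i, word in enumerate(device_data):           # step 5: I/O read + memory write
        memory[start_address + i] = word
    print("Transfer complete; buses returned to the CPU")

memory = [0] * 16
dma_transfer([0xDE, 0xAD, 0xBE, 0xEF], memory, start_address=4)
# The four words now sit at memory addresses 4-7; the CPU never touched them.
```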
Modes of DMAC:
1. Single mode – Only one channel is used, meaning a single DMAC is connected to the bus system.
2. Cascade mode – Multiple channels are used; additional DMACs can be cascaded to provide more channels.
I/O Interface
The method used to transfer information between internal storage and external I/O devices is known as the I/O interface. Peripherals connected to a computer system are interfaced with the CPU using special communication links, which resolve the differences between the CPU and the peripherals. Special hardware components, called interface units, exist between the CPU and the peripherals to supervise and synchronize all input and output transfers.
Mode of Transfer:
Binary information received from an external device is usually stored in the memory unit, and information transferred from the CPU to an external device originates from the memory unit. The CPU merely processes the information; the source and destination are always the memory unit. Data transfer between the CPU and the I/O devices may be done in different modes.
Data transfer to and from the peripherals may be done in any of three possible ways:
1. Programmed I/O
2. Interrupt-initiated I/O
3. Direct memory access (DMA)
Now let’s discuss each mode one by one.
1. Programmed I/O: Programmed I/O results from I/O instructions written in the computer program. Each data item transfer is initiated by an instruction in the program, and the transfer is usually between a CPU register and memory. This requires constant monitoring of the peripheral devices by the CPU.
Example of programmed I/O: In this case, the I/O device does not have direct access to the memory unit. A transfer from an I/O device to memory requires the CPU to execute several instructions, including an input instruction to transfer the data from the device to the CPU and a store instruction to transfer the data from the CPU to memory. In programmed I/O, the CPU stays in a program loop until the I/O unit indicates that it is ready for data transfer. This is a time-consuming process since it needlessly keeps the CPU busy. The situation can be avoided by using an interrupt facility, discussed below.
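The busy-wait loop that programmed I/O implies can be sketched as follows; the device model is an assumption for illustration:

```python
# Sketch of programmed I/O: the CPU repeatedly tests a status flag and
# then moves each word itself, first into a register, then into memory.

class Device:
    """Toy I/O device that hands out words one at a time."""
    def __init__(self, data):
        self._data, self._i = data, 0
    def ready(self):
        return self._i < len(self._data)
    def read(self):
        item = self._data[self._i]
        self._i += 1
        return item

def programmed_io(device, memory):
    while device.ready():    # CPU stays in this loop, doing nothing else
        word = device.read() # input instruction: device -> CPU register
        memory.append(word)  # store instruction: CPU register -> memory

memory = []
programmed_io(Device([1, 2, 3]), memory)
print(memory)  # -> [1, 2, 3]
```

Every word crosses the CPU, which is exactly the overhead interrupts and DMA are designed to remove.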
2. Interrupt-initiated I/O: In the case above, the CPU is kept busy unnecessarily. This can be avoided by using an interrupt-driven method for data transfer: special commands instruct the interface to issue an interrupt request signal whenever data is available from the device. In the meantime, the CPU can proceed with executing another program, while the interface keeps monitoring the device. Whenever the interface determines that the device is ready for data transfer, it sends an interrupt request signal to the computer. Upon detecting the external interrupt signal, the CPU momentarily stops the task it was performing, branches to a service program to process the I/O transfer, and then returns to the task it was originally performing.
Note: Both programmed I/O and interrupt-driven I/O require the active intervention of the processor to transfer data between memory and the I/O module, and any data transfer must traverse a path through the processor. Thus both forms of I/O suffer from two inherent drawbacks.
• The I/O transfer rate is limited by the speed with which the processor can
test and service a device.
• The processor is tied up in managing an I/O transfer; a number of
instructions must be executed for each I/O transfer.
3. Direct Memory Access: The data transfer between a fast storage medium such as a magnetic disk and the memory unit is limited by the speed of the CPU. We can instead allow the peripherals to communicate directly with the memory using the memory buses, removing the intervention of the CPU. This data transfer technique is known as DMA, or direct memory access. During DMA the CPU is idle and has no control over the memory buses; the DMA controller takes over the buses to manage the transfer directly between the I/O devices and the memory unit.
Bus Request: Used by the DMA controller to request that the CPU relinquish control of the buses.
Bus Grant: Activated by the CPU to inform the external DMA controller that the buses are in the high-impedance state and the requesting DMA controller may take control of them. Once the DMA controller has taken control of the buses, it transfers the data. This transfer can take place in several ways.
Types of DMA transfer using DMA controller:
Burst Transfer:
In burst mode, the DMAC returns the bus only after the complete data transfer. A register is used as a byte count and is decremented for each byte transferred; when the byte count reaches zero, the DMAC releases the bus. While the DMAC operates in burst mode, the CPU is halted for the duration of the data transfer.
The steps involved are:
1. Bus grant request time.
2. Transfer of the entire block of data at the transfer rate of the device, because the device is usually slower than the rate at which data can be transferred to the CPU.
3. Release of control of the bus back to the CPU.
So, the total time taken to transfer N bytes = bus grant request time + N × (memory transfer rate) + bus release control time.
Where,
X µsec = data transfer time or preparation time (words/block)
Y µsec = memory cycle time or transfer time (words/block)
% CPU idle (blocked) = (Y / (X + Y)) × 100
% CPU busy = (X / (X + Y)) × 100
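Plugging assumed numbers into the burst-mode formulas above:

```python
# X = data preparation time, Y = memory cycle time, in microseconds.
# Both values are assumptions chosen purely for illustration.
X, Y = 40.0, 10.0

cpu_idle = (Y / (X + Y)) * 100  # CPU is halted while the DMAC holds the bus
cpu_busy = (X / (X + Y)) * 100
print(f"CPU idle: {cpu_idle:.0f}%, CPU busy: {cpu_busy:.0f}%")
```

With these values the CPU is blocked 20% of the time and busy 80% of the time; the two percentages always sum to 100.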
Cycle Stealing:
An alternative method in which the DMA controller transfers one word at a time, after which it must return control of the buses to the CPU. The CPU delays its operation for only one memory cycle, allowing the direct memory I/O transfer to "steal" one memory cycle.
Steps involved are:
1. Buffer the byte into the buffer.
2. Inform the CPU that the device has 1 byte to transfer (i.e. bus grant request).
3. Transfer the byte (at system bus speed).
4. Release control of the bus back to the CPU.
Before moving on to transfer the next byte, the device performs step 1 again, so the bus isn't tied up and the transfer doesn't depend on the transfer rate of the device. So, the time taken to transfer 1 byte in cycle stealing mode is T = time required for bus grant + 1 bus cycle to transfer the data + time required to release the bus, and for N bytes the total time is N × T.
In cycle stealing mode a pipelining concept is followed: while one byte is being transferred, the device is preparing the next byte in parallel. If "the fraction of CPU time to the data transfer time" is asked for, cycle stealing mode is being used.
Where,
X µsec = data transfer time or preparation time (words/block)
Y µsec = memory cycle time or transfer time (words/block)
% CPU idle (blocked) = (Y / X) × 100
% CPU busy = (X / Y) × 100
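The per-byte time T and the N × T total described above can be computed directly; all timing values here are assumptions for illustration:

```python
# Cycle stealing: each byte pays the full grant/transfer/release cost.
grant, bus_cycle, release = 0.2, 1.0, 0.2  # microseconds (assumed values)

T = grant + bus_cycle + release  # time per byte
N = 100                          # bytes to transfer (assumed)
total = N * T
print(f"T = {T:.1f} us per byte, total = {total:.0f} us for {N} bytes")
```

Compare this with burst mode, where the grant and release costs are paid once for the whole block rather than once per byte.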
Interleaved mode: In this technique, the DMA controller takes over the system bus when the microprocessor is not using it, alternating half cycles: half a cycle for DMA and half a cycle for the processor.
In a microcomputer-based system, peripheral devices require special communication links for interfacing with the CPU. These communication links are needed to resolve the differences between the peripheral devices and the CPU.
The major differences are as follows:
1. Peripheral devices are electromagnetic and electromechanical in nature, whereas the CPU is electronic. There is a large difference in the mode of operation of peripheral devices and the CPU.
2. A synchronization mechanism is needed because the data transfer rate of peripheral devices is slower than that of the CPU.
3. Data codes and formats in peripheral devices differ from those in the CPU and memory.
4. The operating modes of peripheral devices differ from each other, and each must be controlled so as not to disturb the operation of the other peripheral devices connected to the CPU.
Additional hardware is therefore needed to resolve the differences between the CPU and the peripheral devices, and to supervise and synchronize all input and output transfers.
Universal Serial Bus (USB)
The universal serial bus (USB) is a standard interface for connecting a wide range of devices to a computer, such as keyboards, mice, smartphones, speakers, and cameras. USB was introduced for commercial use in 1995; at that time it had a data transfer speed of 12 megabits/s.
With some improvements, the modified USB 2 was introduced, also called Hi-Speed USB, which transfers data at 480 megabits/s. The evolution of I/O devices requiring high-speed data transfer led to the development of USB 3, also referred to as SuperSpeed USB, which transfers data at 5 gigabits/s. Recent versions of USB can transfer data at up to 20 gigabits/s.
The design objectives of USB were as follows:
• USB must be a simple, low-cost interconnection system that is easy to use.
• The developed USB must be compatible with all new I/O devices, their bit rates,
internet connections and audio, video application.
• The USB must support a plug-and-play mode of operation.
• The USB must support low power implementation.
• The USB must also provide support for legacy hardware and software.
USB Architecture
When multiple I/O devices are connected to the computer through USB, they are organized in a tree structure. Each I/O device makes a point-to-point connection and transfers data using the serial transmission format (serial transmission was discussed in the previous content, 'interface circuits').
A tree structure has a root, nodes, and leaves. In the tree structure connecting I/O devices to the computer using USB, the nodes are referred to as hubs. A hub is the intermediary connecting point between the I/O devices and the computer. The root of the tree, referred to as the root hub, connects the entire tree to the host computer. The leaves of the tree are the I/O devices themselves, such as a mouse, keyboard, camera, or speaker.
USB works on the principle of polling. In polling, the processor keeps checking whether an I/O device is ready for data transfer or not, so the devices do not have to inform the processor of their status; it is the processor's responsibility to check. This keeps USB simple and low cost.
Whenever a new device is connected to a hub, it is initially addressed as 0. At regular intervals, the host computer polls all the hubs to get their status, which lets the host know of I/O devices that have been attached to or detached from the system.
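The polling scheme can be modelled as a toy sketch; the hub representation and status format are assumptions for illustration:

```python
# Toy model of host-driven polling: the host asks each hub for its status
# instead of devices interrupting the host.

def poll_hubs(hubs):
    """Return (attached, detached) device lists gathered in one polling pass."""
    attached, detached = [], []
    for hub in hubs:
        status = hub["status"]()  # host-initiated status request
        attached.extend(status["new"])
        detached.extend(status["gone"])
    return attached, detached

root_hub = {"status": lambda: {"new": ["keyboard"], "gone": []}}
leaf_hub = {"status": lambda: {"new": [], "gone": ["camera"]}}
attached, detached = poll_hubs([root_hub, leaf_hub])
print(attached, detached)  # -> ['keyboard'] ['camera']
```

All the initiative sits with the host, which is what keeps the device-side USB hardware simple and cheap.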
When the host becomes aware of a new device, it learns the capabilities of the device by reading the information present in a special memory in the device's USB interface, so that it can use the appropriate device driver to communicate with the device.
The host then assigns an address to the new device, which is written to the device's interface register. With this mechanism, USB provides plug-and-play capability.
The plug-and-play feature lets the host recognize the existence of a new I/O device automatically when the device is plugged in. The host software determines the capabilities of the I/O device and whether it has any special requirements.
USB also supports isochronous traffic, where data is transferred at fixed time intervals that are regular and very short, so delivery is guaranteed to be timely.
To accommodate isochronous traffic, the root hub sends a sequence of bits over the USB tree indicating the start of isochronous data, and after this sequence the actual data is transmitted.
Because USB supports isochronous transmission, audio and video signals can be transferred in a precisely timed manner.
USB has different types of ports and connectors. Usually, the upstream port and connector are USB Type A, while the downstream port and connector differ depending on the type of device connected. We will discuss the types of USB connector below.
USB Type A: This is the standard connector that can be found at one end of the USB cable
and is also known as upstream. It has a flat structure and has four connecting lines as you can
see in the image below.
USB Type B: This is an older standard connector, used to connect peripheral devices, and is also referred to as downstream. It is approximately square, as you can see in the image below, and has now been replaced by newer versions.
Mini USB: This type of USB connector was designed for mobile devices. It has now been superseded by micro USB, but you will still find it on some devices.
Micro USB: This type of USB is found on newer mobile devices. It has a compact 5 pin
design.
USB Type C: This type of USB is used for transferring both data and power to the attached peripheral or I/O device. USB C does not have a fixed orientation; it is reversible, so you can plug it in either way up.
USB 3.0 Micro B: This is a SuperSpeed USB connector, used for devices that require high-speed data transfer. You can find this kind of USB connector on portable hard drives.
The standard USB cable has four connection lines, two of which carry power (one carries +5 V and one is ground); the other two are for data transfer. USB can thus supply power to a connected I/O device that requires very little power.
Data transfer over USB falls into two categories: low-speed transfer and high-speed transfer.
Low-speed transmission uses single-ended signalling, where a varying voltage transmitted over one of the two data lines represents the signal bit 0 or 1, while the other data line is connected to the reference voltage (ground). Single-ended signalling is prone to noise.
High-speed data transmission uses differential signalling. Here the signal is transmitted over the two data lines, which are twisted together; both data lines carry the signal, so no ground wire is required. Differential signalling is much less prone to noise and uses lower voltages than single-ended transmission.
SATA
SATA is an interface that connects storage devices such as hard disks, optical drives, and SSDs to the motherboard. SATA was introduced in the year 2000 to replace the long-standing PATA (Parallel ATA) interface. In serial mode, data is transferred bit by bit, while in parallel mode several streams carry the data. Despite this, PATA has a drawback: it is highly susceptible to outside interference, which is one reason SATA can operate at higher speeds than PATA. SATA cables are also thinner, more flexible, and more compact than PATA cables.
Anna University Previous year question bank
PART- A (2 MARKS)
1. What is Memory system?
Memory is a device or system that is used to store information for immediate
use in a computer or related computer hardware and digital electronic devices.
2. Give classification of memory.
There are four types of primary storage:
• read-only memory (ROM)
• random access memory (RAM)
• flash memory
• cache memory
3. Define cache.
In computing, a cache is a high-speed data storage layer which stores a subset
of data, typically transient in nature, so that future requests for that data are
served up faster than is possible by accessing the data's primary storage
location.
4. What is Read Access Time?
With computer memory, access time is the amount of time it takes the
computer processor to read data from the memory. When looking at the access
time of memory it may be represented in either ns (nanoseconds), MHz, or
GHz.
5. Define Random Access Memory.
RAM (random access memory) is a computer's short-term memory, where the
data that the processor is currently using is stored. Your computer can access
RAM memory much faster than data on a hard disk, SSD, or other long-term
storage device, which is why RAM capacity is critical for system performance
6. What are PROMS?
A programmable read-only memory (PROM) is a form of digital memory
where the contents can be changed once after manufacture of the device. The
data is then permanent and cannot be changed. It is one type of read-only
memory (ROM).
7. Define Memory refreshing.
It is a background maintenance process necessary for the operation of DRAMs: when in operation, each of the memory cells must be refreshed repeatedly. The maximum interval between two refreshes is defined by the manufacturer of the memory and lies in the millisecond range.
8. What is SRAM and DRAM?
SRAM stands for Static Random Access Memory ,
DRAM stands for Dynamic Random Access Memory.
SRAM:
SRAM is made up of flip-flops; it stores the data in a latch.
It is more expensive than DRAM.
It is faster than DRAM.
It does not need refreshing.
It is used as cache memory.
Data is retained as long as power is supplied.
DRAM:
DRAM is made up of capacitors; it stores the data in separate capacitors.
It is cheaper than SRAM.
It is slower than SRAM.
It needs periodic refreshing to maintain the data.
It is used as main memory.
Data is lost after shutdown.
9. What is volatile memory?
Volatile memory is a type of memory that maintains its data only while the
device is powered. If the power is interrupted for any reason, the data is lost.
10. Define data transfer or band width.
Bandwidth is the amount of data that can be transferred at one time. Data
transfer is the actual amount of data transferred.
11. What is flash memory?
Flash memory is a low-cost, high-density, non-volatile computer storage chip
that can be electrically erased and reprogrammed. Flash memory can be either
NOR-Flash or NAND-Flash. NOR-Flash allows a single word to be written or
read independently. It can be used for storing boot code.
12. What are multilevel memories?
A multilevel memory is a memory system containing at least two memory
subsystems with different capacity and access-time attributes.
13. What is associative memory?
Associative memory is also known as content addressable memory (CAM) or
associative storage or associative array.
14. Define Seek time and latency time.
Seek time defines the amount of time it takes a hard drive's read/write head to find the physical location of a piece of data on the disk. Latency is the average time for the sector being accessed to rotate into position under a head after a completed seek.
15. Define memory cycle time.
Cycle time is the time, usually measured in nanoseconds, between the start of one random access memory (RAM) access and the time when the next access can be started.
16. What is static memory?
When the allocation of memory performs at the compile time, then it is known
as static memory. In this, the memory is allocated for variables by the
compiler.
17. What is locality of reference?
locality of reference, also known as the principle of locality, is the tendency of
a processor to access the same set of memory locations repetitively over a
short period of time.
18. Define set associative cache.
Set associative caches are a compromise between fully associative caches and direct-mapped caches. In a set associative cache, there is a fixed number of locations (called a set) in which a given address may be stored. The number of locations in each set is the associativity of the cache.
19. Give formula to calculate average memory access time.
Tavg = h × Tc + (1 − h) × M
where h = hit ratio, Tc = cache access time, and M = miss penalty (time to access main memory).
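With assumed values (h = 0.95, Tc = 2 ns, M = 100 ns), the formula gives:

```python
# Average memory access time with a cache. All numbers are assumed
# for illustration.
h = 0.95   # hit ratio
Tc = 2     # cache access time in ns
M = 100    # miss penalty in ns

Tavg = h * Tc + (1 - h) * M
print(f"Tavg = {Tavg:.1f} ns")  # -> Tavg = 6.9 ns
```

Even a 5% miss rate more than triples the average access time over the 2 ns cache hit time, which is why hit ratio dominates cache performance.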
20. What is memory interleaving?
Memory interleaving is an abstraction technique which divides memory into a number of modules such that successive words in the address space are placed in different modules.
21. Give the features of ROM cell.
Features of ROM (Read-Only Memory):
ROM is a non-volatile memory.
Information stored in ROM is permanent.
Information and programs stored on it can only be read.
Information and programs are stored on ROM in binary format.
It is used in the start-up process of the computer.
22. List the difference between static RAM and dynamic RAM.
SRAM DRAM
Transistors are used to store information in Capacitors are used to store data in
SRAM. DRAM.
These are used in cache memories. These are used in main memories.
Consumes less power and generates less Uses more power and generates more
heat. heat.
3. Discuss the concept of Memory interleaving and give its advantages.
4. Discuss the different mapping techniques used in cache memories and their relative merits
and demerits.
5. Compare paging and segmentation mechanisms for implementing virtual memory.
6. What do you mean by virtual memory? Discuss how paging helps in implementing virtual
memory.
7. Discuss any six ways of improving the cache performance.
8. Explain the virtual memory translation and TLB with necessary diagram.
9. Explain the organization of magnetic disk and magnetic tape in detail.
10. Explain with the block diagram the DMA transfer in a computer system.
11. Describe in detail about IOP Organization.
12. Describe the data transfer method using DMA.
13. Discuss the design of a typical input or output interface.
14. What are interrupts? How are they handled?
15. Give comparison between memory mapped I/O and I/O mapped I/O.
16. Explain the action carried out by the processor after occurrence of an interrupt.
17. What is DMA? Describe how DMA is used to transfer data from peripherals.
18. Explain various data transfer modes used in DMA.
19. Explain SCSI bus standards.
20. Describe the working principle of USB.