Module 4
Chapter 1
Memory Management
Background
• Program must be brought (from disk) into memory and placed
within a process for it to be run
• To increase system performance, several processes must be kept in
memory—they must share memory
• Various ways to manage memory – Paging,
Segmentation, Paged segmentation
• Memory consists of a large array of bytes, each with its own address.
• The CPU fetches instructions from the memory based on the value of the
program counter.
Background
• In an instruction-execution cycle, an instruction
is first fetched from memory. The instruction is decoded,
and operands may need to be fetched.
• After execution of instructions, the results may be
stored back in the memory.
• Input queue – collection of processes on the disk that are waiting to be
brought into memory to run the program (main memory is not large enough to hold all of them)
Memory management is the functionality of an
operating system that manages primary
memory and moves processes back and forth between
main memory and disk during execution.
Memory management keeps track of each and every
memory location, regardless of whether it is allocated to
some process or is free. It determines how much memory is
to be allocated to each process.
It decides which process will get memory at what time. It
tracks whenever some memory gets freed or unallocated
and updates the status accordingly.
Basic hardware
• Each process has a separate memory space
• Separate per-process memory space protects the
processes from each other
• To separate memory spaces, we need the ability to determine
the range of legal addresses that the process may access and to
ensure that the process can access only these legal addresses.
• We can provide this protection by using two registers,
usually a base and a limit.
Basic hardware
• When a process is running, the base register is loaded with the physical
location where the process begins in memory. The limit register is loaded
with the length of the process. The base and limit registers can be loaded
only by the operating system, which uses a special privileged instruction.
• Base register = 16384, limit register = 4096, indicating that the program
can legally access all addresses from 16384 through 20479.
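The protection check described above can be sketched as follows (a minimal illustration, not real MMU hardware; the register values are taken from the example on this slide):

```python
# Base/limit protection check performed on every memory access.
# Values from the example above: base = 16384, limit = 4096.
BASE = 16384
LIMIT = 4096

def check_access(address: int) -> bool:
    """Return True if the address is legal for this process."""
    return BASE <= address < BASE + LIMIT

print(check_access(16384))  # True  (first legal address)
print(check_access(20479))  # True  (last legal address)
print(check_access(20480))  # False (one past the end: trap to the OS)
```

Any access outside the legal range would cause the hardware to trap to the operating system, which typically terminates the offending process.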
Binding of Instructions and Data to Memory
• Written - this is done using an editor. The
result is a text file.
• Compiled - translate the text file into a
sequence of codes that the machine (or virtual
machine) can understand and execute.
• Linked - the program might make use of
other programs, or libraries of programs.
These must be brought together with the
program and the interconnecting references
resolved.
• Loaded - the combined program must be
moved into memory
Binding of Instructions and Data to
Memory
• Address binding is the process of mapping from one address space to another
address space.
• A logical address is the address generated by the CPU during execution, whereas a physical
address refers to a location in the memory unit (the address that is loaded into the memory-address register).
• Note that the user deals only with logical addresses (virtual addresses).
• The logical address undergoes translation by the MMU (the address translation unit
in particular). The output of this process is the corresponding physical address, i.e., the
location of the code/data in RAM.
Binding of Instructions and Data to
Memory
⮚ Compile time: If you know at compile time where the process will
reside in memory, then an absolute address is generated, i.e., the physical address is
embedded into the executable of the program during compilation.
Advantage: Loading the executable as a process in memory is very fast.
Disadvantage: If the generated address space is preoccupied by another
process, the program crashes and it becomes necessary to recompile the
program to change the address space.
⮚ Load time: If it is not known at compile time where the process will reside,
then relocatable addresses are generated. The loader translates the relocatable
addresses to absolute addresses.
The base address of the process in main memory is added to all logical
addresses by the loader to generate the absolute addresses. If the base address
of the process changes, the process must be reloaded.
Binding of Instructions and Data to
Memory
⮚ Execution time: The instructions are in memory and are being processed
by the CPU. This binding is used if the process can be moved from one memory
segment to another during execution.
Logical Address is generated by CPU while a program
is running.
The logical address is virtual address as it does not exist
physically, therefore, it is also known as Virtual Address.
This address is used as a reference to access the physical
memory location by CPU.
The term Logical Address Space is used for the set of all
logical addresses generated from a program’s perspective.
The hardware device called Memory-Management Unit
is used for mapping logical address to its corresponding
physical address.
Physical Address identifies a physical location of required data in
a memory.
The user never directly deals with the physical address but can
access it through the corresponding logical address.
The user program generates logical addresses and believes that it
is running in this logical address space, but the program needs
physical memory for its execution; therefore, the logical addresses
must be mapped to physical addresses by the MMU before they are
used.
The term Physical Address Space is used for all physical
addresses corresponding to the logical addresses in a Logical
address space.
Such a mapping scheme leads to a distinction between two different address spaces:
1. Logical address space: the range of logical addresses that can be generated by a process
running on the CPU.
• On many systems, the logical address space is dictated by the architecture of the CPU,
specifically by the number of bits in the address portion of an instruction. For example,
on a machine that uses 16-bit addresses in its instructions, the logical address space
would be 0 .. 65535.
• On machines whose architecture supports relatively long addresses, the logical space may be
dictated by the operating system to be some subset of the potential logical space allowed
by the architecture. For example, the VAX architecture uses 32-bit addresses, which
would conceivably allow logical addresses to range from 0 .. 4 GB; but the actual logical
space allowed to any one user is dictated by system management. For example, the size
of a process's page table is dictated by a management-set parameter. If this size is set at
(say) 10K entries, then the process's logical space is restricted to a total size of 5 MB, since
each page is 512 bytes long.
2. Physical address space: the range of physical addresses that can be recognized by the
memory. For example, if 1 MB of physical memory is installed on a given machine, then the
physical address space would probably be 0 .. 1,048,575. (Probably, because it could just
as well be installed as 1,048,576 .. 2,097,151 or 0 .. 524,287 + 1,048,576 .. 1,572,863.)
Logical Versus Physical Address
Memory-Management Unit
(MMU)
• Hardware device that maps virtual addresses to physical addresses
• Many schemes available- Paging, Segmentation, Paged
segmentation
• In MMU scheme, the value in the relocation register is added to every
address generated by a user process at the time it is sent to memory
• The user program deals with logical addresses; it
never sees the real physical addresses
Dynamic relocation using a relocation
register
Relocation register – (14000): the initial starting address where this process is
placed in main memory.
The relocation register contains the value of the smallest physical address of the process.
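Dynamic relocation can be sketched as follows: every CPU-generated logical address is checked against the limit, then the relocation register is added to form the physical address. The relocation value 14000 is from the slide; the limit value and the sample logical address 346 are assumed for illustration:

```python
RELOCATION = 14000   # smallest physical address of the process (from the slide)
LIMIT = 4096         # size of the process; an assumed value for illustration

def translate(logical: int) -> int:
    """MMU-style dynamic relocation with a limit check."""
    if not (0 <= logical < LIMIT):
        # out-of-range access: the hardware traps to the operating system
        raise MemoryError("addressing error: trap to operating system")
    return RELOCATION + logical

print(translate(346))   # logical 346 maps to physical 14346
```

Because the mapping happens at every access, the OS can move the process in memory at run time and simply reload the relocation register.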
Dynamic Loading
• A routine (a function inside the program) is not loaded until it is called.
• All routines are kept on disk in a relocatable load format
• Only main program is loaded into the main memory
• When a routine is called, the calling routine checks if the called routine is
already loaded in to main memory
• If not, it is brought in from secondary storage to main
memory; the relocatable linking loader is called to load the routine.
Advantages:
Better memory-space utilization; unused routine is never
loaded.
• Ex: Error routines are called infrequently.
• OS may provide library routines to implement dynamic loading
Dynamic Linking
• Dynamically linked libraries are system libraries that are linked to user programs
when the programs are run.
• Linking of library routines is postponed until execution time.
• For each library-routine reference, a stub is included in the executable image
(actual code of the library routine is not included during compilation or when you
get the executable image)
• Stub is a small piece of code (tell the address of library routine where it is present
in memory) which is used to locate the appropriate memory-resident library
routine.
• When the stub is executed, it replaces itself with the address of the routine, and
executes the routine.
• Dynamic linking is particularly useful for libraries (we don't have to include the code
of the libraries when the executable is formed).
Swapping
Swapping is a mechanism in which a process can be swapped
temporarily out of main memory (or move) to secondary storage
(disk) and make that memory available to other processes. At
some later time, the system swaps back the process from the
secondary storage to main memory.
Though performance is usually affected by the swapping process,
swapping helps in running multiple, big processes in parallel, and for that
reason it is also known as a technique for memory compaction.
The total time taken by swapping process includes the time it takes to move the
entire process to a secondary disk and then to copy the process back to
memory, as well as the time the process takes to regain main memory.
Swapping
• Major part of swap time is transfer time; total transfer time is directly
proportional to the amount of memory swapped.
Memory Allocation
• Contiguous
  • Fixed partition
  • Variable partition
• Non-contiguous
  • Paging
  • Segmentation
  • Paged segmentation
Fragmentation
• Memory in the system constantly goes through the loading and
releasing of processes and their resources. As a result, the total
memory space gets broken into many small pieces, creating
small, unutilized fragments that are too small for normal
processes to fit into. These memory spaces therefore go
unused at all; this is called memory fragmentation in an
operating system.
External fragmentation
• Total memory space is enough to satisfy a request or to hold a process,
but it is not contiguous, so it cannot be used.
Internal fragmentation
• The memory block assigned to a process is bigger than requested. Some portion of
memory is left unused, as it cannot be used by another process.
Internal Fragmentation
Internal fragmentation happens when memory is split into fixed-sized blocks.
Whenever a process requests memory, a fixed-sized block is allotted to the
process. If the memory allotted to the process is somewhat larger than the
memory requested, then the difference between the allotted and requested memory is
the internal fragmentation.
External Fragmentation
External fragmentation happens when there is a sufficient quantity of space
in memory to satisfy a process's memory request, but the
request cannot be fulfilled because the available memory is
non-contiguous. Whether you apply the first-fit or best-fit memory
allocation strategy, it will cause external fragmentation.
Differences
Contiguous memory allocation
Contiguous memory allocation is a classical memory allocation model. Here, a
system assigns consecutive memory blocks (that is, memory blocks having
consecutive addresses) to a process.
Here’s how it works:
• When a process needs to execute, memory is requested by the process.
• The size of the process is compared with the amount of contiguous main
memory available to execute the process.
• If sufficient contiguous memory is found, the memory is allocated and the
process starts its execution.
• Otherwise, the process is added to a queue of waiting processes until
sufficient free contiguous memory becomes available.
Contiguous memory allocation
▪ Contiguous memory allocation can be implemented in
operating systems with the help of two registers,
known as the base and limit registers.
• When a process is executing in main memory, its base
register contains the starting address of the memory
location where the process is executing,
• while the number of bytes consumed by the process
is stored in the limit register.
Contiguous memory allocation
• A process does not directly refer to the actual address for a
corresponding memory location.
• Instead, it uses a relative address with respect to its base
register.
• All addresses referred by a program are considered as virtual
addresses.
• The CPU generates the logical or virtual address, which is
converted into an actual address with the help of the memory
management unit (MMU).
• The base address register is used for address translation by the
MMU. Thus, a physical address is calculated as follows:
• Physical Address = Base register address + Logical
address/Virtual address
Contiguous memory allocation
Fixed/Static Partitioning
• Static partitioning is a fixed size partitioning scheme.
• In this technique, main memory is pre-divided into
fixed size partitions.
• The size of each partition is fixed and can not be
changed.
• Each partition is allowed to store only one process.
Fixed/Static Partitioning
Fixed/Static Partitioning
Suppose process P5 of size 7 MB arrives. This process cannot be
accommodated in spite of the available free space, because allocation must be contiguous.
Hence, the 7 MB becomes part of external fragmentation.
Advantages of Fixed Partitioning –
1. Easy to implement:
Algorithms needed to implement Fixed Partitioning are easy to implement.
It simply requires putting a process into a certain partition, without focusing on the
emergence of internal and external fragmentation.
2. Little OS overhead:
Processing of Fixed Partitioning requires lesser computational power.
Fixed/Static Partitioning
Disadvantages of Fixed Partitioning –
1. Internal Fragmentation:
Main memory use is inefficient. Any program, no matter how small, occupies an entire
partition. This causes internal fragmentation.
2. External Fragmentation:
The total unused space (as stated above) of the various partitions cannot be used to load
processes, even though space is available, because it is not in contiguous form (as
spanning is not allowed).
3. Limitation on process size:
A process of size greater than the size of the largest partition in main memory cannot be
accommodated. Partition size cannot be varied according to the size of the incoming process.
4. Limitation on Degree of Multiprogramming:
Partitions in main memory are made before execution, at system configuration time. Main
memory is divided into a fixed number of partitions. Having more processes than
partitions in RAM is invalid in Fixed Partitioning.
Variable Partitioning –
In contrast with fixed partitioning, partitions are not made before execution
or at system configuration time.
Various features associated with variable Partitioning-
• Initially RAM is empty and partitions are made during run-time
according to each process's need, instead of partitioning at system configuration time.
• The size of partition will be equal to incoming process.
• The partition size varies according to the need of the process so that the
internal fragmentation can be avoided to ensure efficient utilisation of
RAM.
• Number of partitions in RAM is not fixed and depends on the number of
incoming process and Main Memory’s size.
Variable Partitioning –
Variable Partitioning –
Advantages of Variable Partitioning –
• No Internal Fragmentation:
In variable Partitioning, space in main memory is allocated strictly
according to the need of process, hence there is no case of internal
fragmentation. There will be no unused space left in the partition.
• No restriction on Degree of Multiprogramming:
More processes can be accommodated due to the absence of internal
fragmentation. Processes can be loaded until main memory is full.
• No Limitation on the size of the process:
In fixed partitioning, a process with a size greater than the size of the
largest partition could not be loaded, and a process cannot be divided, as that is
invalid in a contiguous allocation technique. In variable partitioning,
the process size is not restricted, since the partition size is decided
according to the process size.
Variable Partitioning –
Disadvantages of Variable Partitioning –
Difficult Implementation:
Implementing variable Partitioning is difficult as compared to Fixed
Partitioning as it involves allocation of memory during run-time rather than
during system configure.
External Fragmentation:
There will be external fragmentation in spite of the absence of internal
fragmentation.
For example, suppose in the example above processes P1 (2 MB) and
P3 (1 MB) complete their execution. Two holes are left: 2 MB and
1 MB. Now suppose process P5 of size 3 MB arrives. The empty space in
memory cannot be allocated to it, as no spanning is allowed in contiguous
allocation. The rule says that a process must be contiguously present in main
memory to get executed. Hence this results in external fragmentation.
Contiguous memory allocation techniques.
For both fixed and dynamic memory allocation schemes,
the operating system must keep a list of each memory
location, noting which are free and which are busy.
Then, as new jobs come into the system, the free
partitions must be allocated.
These partitions may be allocated in 3 ways:
1. First-Fit Memory Allocation
2. Best-Fit Memory Allocation
3. Worst-Fit Memory Allocation
First-Fit Memory Allocation:
• This method keeps the free/busy list of jobs organized
by memory location, low-ordered to high-ordered
memory.
• In this method, a job claims the first available
memory block with space greater than or equal to its size.
• The operating system doesn't search for the most appropriate
partition; it just allocates the job to the nearest
memory partition available with sufficient size.
First-Fit Memory Allocation:
As illustrated above, the system assigns J1 the nearest partition in
memory. As a result, no partition with sufficient space is available
for J3, and it is placed on the waiting list.
First-Fit Memory Allocation:
Advantages of First-Fit Memory Allocation:
It is fast in processing.
As the processor allocates the nearest available memory partition to the job, it
is very fast in execution.
Disadvantages of First-Fit Memory Allocation :
It wastes a lot of memory.
The processor does not check whether the partition allocated to the job is very large
compared to the size of the job; it just allocates the memory.
As a result, a lot of memory is wasted and many jobs may not get space in the
memory, and would have to wait for another job to complete.
Worst-Fit Memory Allocation :
• In this allocation technique, the allocator traverses the
whole memory and always searches for the largest
hole/partition; the process is then placed in that
hole/partition.
• It is a slow process because it has to traverse the
entire memory to search for the largest hole.
Advantages of Worst-Fit Allocation :
Since this technique chooses the largest hole/partition, the
leftover internal fragment is large.
This leftover space is big enough that other
small processes can also be placed in the left-over partition.
Disadvantages of Worst-Fit Allocation :
It is a slow process because it traverses all the partitions in
memory and then selects the largest partition among them,
which is time-consuming.
Best-Fit Memory Allocation:
• This method keeps the free/busy list in order by size –
smallest to largest.
• In this method, the operating system first searches the
whole of the memory according to the size of the
given job and allocates it to the closest-fitting free
partition in the memory, making it able to use memory
efficiently.
• Here the jobs are in order from the smallest job to the
largest job.
Best Fit
As illustrated in the figure above, the operating system first searches throughout the
memory and allocates the job to the smallest possible memory partition that fits,
making the memory allocation efficient.
Advantages of Best-Fit Allocation :
Memory efficient. The operating system allocates the
job the minimum possible space in memory, making
memory management very efficient. To save memory
from getting wasted, it is the best method.
Disadvantages of Best-Fit Allocation :
It is a Slow Process. Checking the whole memory for
each job makes the working of the operating system
very slow. It takes a lot of time to complete the work.
Problem:
Given five memory partitions of 100 KB,
500 KB, 200 KB, 300 KB, 600 KB (in order), how
would the first-fit, best-fit, and worst-fit
algorithms place processes of 212 KB, 417 KB,
112 KB, and 426 KB (in order)? Which
algorithm makes the most efficient use of
memory?
Sol:
Partitions (in order): 100 KB, 500 KB, 200 KB, 300 KB, 600 KB
First-fit:
212 KB is put in the 500 KB partition (new hole: 500 - 212 = 288 KB)
→ 100 KB, 288 KB, 200 KB, 300 KB, 600 KB
417 KB is put in the 600 KB partition
→ 100 KB, 288 KB, 200 KB, 300 KB, 183 KB
112 KB is put in the 288 KB partition
→ 100 KB, 176 KB, 200 KB, 300 KB, 183 KB
426 KB must wait.
Best-fit:
212 KB is put in the 300 KB partition
→ 100 KB, 500 KB, 200 KB, 88 KB, 600 KB
417 KB is put in the 500 KB partition
→ 100 KB, 83 KB, 200 KB, 88 KB, 600 KB
112 KB is put in the 200 KB partition
→ 100 KB, 83 KB, 88 KB, 88 KB, 600 KB
426 KB is put in the 600 KB partition
→ 100 KB, 83 KB, 88 KB, 88 KB, 174 KB
Worst-fit:
212 KB is put in the 600 KB partition
→ 100 KB, 500 KB, 200 KB, 300 KB, 388 KB
417 KB is put in the 500 KB partition
→ 100 KB, 83 KB, 200 KB, 300 KB, 388 KB
112 KB is put in the 388 KB partition
→ 100 KB, 83 KB, 200 KB, 300 KB, 276 KB
426 KB must wait.
In this example, best-fit turns out to be the best.
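The worked example above can be checked with a short simulation of the three strategies. This is a sketch: partitions simply shrink in place when a process is allocated, and a request that fits nowhere waits (is recorded as None):

```python
def allocate(partitions, requests, strategy):
    """Simulate first/best/worst-fit.

    Returns (placements, remaining): placements[i] is the index of the
    partition chosen for requests[i] (None if it must wait), and
    remaining is the free space left in each partition.
    """
    parts = partitions[:]                      # free space left per partition
    placements = []
    for req in requests:
        candidates = [i for i, p in enumerate(parts) if p >= req]
        if not candidates:
            placements.append(None)            # request must wait
            continue
        if strategy == "first":
            i = candidates[0]                  # lowest-addressed fit
        elif strategy == "best":
            i = min(candidates, key=lambda i: parts[i])  # tightest fit
        else:                                  # "worst": largest hole
            i = max(candidates, key=lambda i: parts[i])
        parts[i] -= req
        placements.append(i)
    return placements, parts

parts = [100, 500, 200, 300, 600]   # KB, from the problem statement
reqs = [212, 417, 112, 426]
for s in ("first", "best", "worst"):
    print(s, allocate(parts, reqs, s))
```

Running this reproduces the solution: under first-fit and worst-fit the 426 KB request waits, while best-fit places all four processes.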
Paging
• Each process is divided into parts where the size of each part is the same as the page size.
• The size of the last part may be less than the page size.
• The pages of process are stored in the frames of main memory depending upon
their availability.
Advantages:
• It allows storing parts of a single process in a non-contiguous fashion.
• It solves the problem of external fragmentation.
Disadvantages:
• It suffers from internal fragmentation.
• There is an overhead of maintaining a page table for each process.
• The time taken to fetch an instruction increases, since two memory
accesses are required (one for the page table, one for the instruction itself).
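Paging translation splits the logical address into a page number and an offset; the page table maps page numbers to frame numbers. The page size and table contents below are assumed values for illustration:

```python
PAGE_SIZE = 4096                     # 4 KB pages (assumed)
page_table = {0: 5, 1: 2, 2: 7}      # page number -> frame number (hypothetical)

def translate(logical: int) -> int:
    page = logical // PAGE_SIZE      # first memory access: page-table lookup
    offset = logical % PAGE_SIZE     # offset is unchanged by translation
    frame = page_table[page]
    return frame * PAGE_SIZE + offset  # second access fetches the actual byte

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 2*4096 + 4 = 8196
```

With a power-of-two page size, real hardware extracts the page number and offset with shifts and masks rather than division.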
Caching Enabled/Disabled:
This bit enables or disables the caching of the page.
If caching is disabled, the bit is set to 1; otherwise it is set
to 0.
Modified Bit (Dirty Bit):
It specifies whether the page has been modified or not.
If it has been modified, the bit is set to 1; otherwise it is set
to 0.
Problem-03:
Consider a system with byte-addressable memory, 32 bit logical addresses, 4
kilobyte page size and page table entries of 4 bytes each. The size of the page
table in the system in megabytes is _____.
A. 2
B. 4
C. 8
D. 16
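The arithmetic for Problem-03: with 32-bit logical addresses and a 4 KB page size, there are 2^32 / 2^12 = 2^20 pages, so the page table has 2^20 entries of 4 bytes each, giving 2^22 bytes = 4 MB. The answer is B. A one-line check:

```python
num_pages = 2**32 // 2**12           # 32-bit address space / 4 KB page size
table_bytes = num_pages * 4          # 4-byte page-table entries
print(table_bytes // 2**20)          # size of the page table in MB: 4
```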
In accordance with the above segment table, the segments are stored in
the main memory as-
Translating Logical Address into Physical Address-
1. CPU always generates a logical address.
2. A physical address is needed to access the main memory.
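Segmented translation can be sketched as follows: the logical address is a (segment number, offset) pair; the offset is checked against the segment's limit, then added to its base. The segment table values below are hypothetical, chosen only to illustrate the mechanism:

```python
# segment number -> (base, limit); a hypothetical table for illustration
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400)}

def translate(segment: int, offset: int) -> int:
    """Map a (segment, offset) logical address to a physical address."""
    base, limit = segment_table[segment]
    if offset >= limit:
        # illegal offset: the hardware traps to the operating system
        raise MemoryError("addressing error: offset exceeds segment limit")
    return base + offset

print(translate(2, 53))   # 4300 + 53 = 4353
```

Unlike paging, the offset must be compared against a per-segment limit, because segments are of unequal sizes.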
SEGMENTATION
Advantages-
1. It allows dividing the program into modules, which provides better
visualization.
2. The segment table consumes less space compared to the page table in paging.
3. It solves the problem of internal fragmentation.
Disadvantages:
1. There is an overhead of maintaining a segment table for each process.
2. The time taken to fetch an instruction increases, since two memory
accesses are now required.
3. Segments of unequal size are not suited for swapping.
4. It suffers from external fragmentation, as the free space gets broken down into
smaller pieces with processes being loaded and removed from main
memory.