INTERNAL ASSIGNMENT
NAME
ROLL NUMBER
SESSION APR 2025
PROGRAM BACHELOR OF COMPUTER
APPLICATIONS (BCA)
SEMESTER IV
COURSE NAME SYSTEM SOFTWARE
COURSE CODE DCA2203
SET - I
Q1. Describe the function and significance of the segment and pointer registers in the
8086 microprocessor. How do these registers support memory segmentation and data
handling?
ANS :-
➤ Function and Significance of Segment and Pointer Registers in 8086 Microprocessor
The 8086 microprocessor is a 16-bit processor with a 20-bit address bus, which allows it to
access up to 1 MB of memory. To efficiently manage this large memory space, it uses a
concept called memory segmentation, supported by segment registers and pointer/index
registers. These registers play a critical role in organizing memory, executing instructions,
and accessing data efficiently.
I. Segment Registers in 8086
The segment registers hold the base addresses of different segments in memory. There are
four main segment registers:
1. Code Segment Register (CS)
• Holds the starting address of the code segment, which contains the executable instructions
of a program.
• The value in CS is used along with the Instruction Pointer (IP) to fetch instructions for
execution.
2. Data Segment Register (DS)
• Stores the base address of the data segment, where variables and data used by the program
are stored.
• Used with index or offset registers to access data in memory.
3. Stack Segment Register (SS)
• Contains the starting address of the stack segment, which is used for storing return
addresses, procedure call information, and local variables.
• Works with the Stack Pointer (SP) and Base Pointer (BP).
4. Extra Segment Register (ES)
• Provides additional memory space for data storage.
• Commonly used for string operations and memory transfers.
II. Pointer and Index Registers in 8086
These registers hold offset addresses within the memory segments and are used for
addressing and manipulating data.
1. Instruction Pointer (IP)
• Contains the offset of the next instruction to be executed within the code segment.
• Combined with CS to form the complete address of the instruction:
Physical Address = (CS × 10h) + IP
2. Stack Pointer (SP)
• Points to the top of the stack in the stack segment.
• Used for push/pop operations during procedure calls and interrupts.
3. Base Pointer (BP)
• Often used to access parameters and data within the stack.
• Can be used for accessing memory in the stack segment using addressing modes.
4. Source Index (SI) and Destination Index (DI)
• Used for string and memory transfer operations.
• SI is usually used to point to the source of data (in DS), and DI points to the destination (in
ES).
III. Memory Segmentation and Address Formation
• The 8086 uses segment:offset addressing. Each physical address is formed by combining a
16-bit segment register with a 16-bit offset register.
Formula:
Physical Address = (Segment Register × 16) + Offset
• This method allows overlapping segments and efficient memory utilization.
• It also enables modular programming by separating code, data, and stack segments.
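The segment:offset calculation above can be sketched in a few lines of Python (an illustrative model, not actual 8086 code):

```python
def physical_address(segment: int, offset: int) -> int:
    """Combine a 16-bit segment value and a 16-bit offset into a 20-bit address."""
    return ((segment << 4) + offset) & 0xFFFFF   # shift left 4 bits = multiply by 16

# Two different segment:offset pairs can refer to the same physical byte,
# which is how overlapping segments arise.
a = physical_address(0x1234, 0x0010)   # 0x12340 + 0x0010 = 0x12350
b = physical_address(0x1235, 0x0000)   # 0x12350 + 0x0000 = 0x12350
```

Because only four extra bits come from shifting the segment value, many segment:offset combinations alias the same physical location.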
IV. Data Handling and Execution Support
• Segment and pointer registers provide a flexible and structured approach to memory access.
• They support modular program organization and enable advanced operations like stack
handling and string manipulation.
• Registers like SP and BP are essential for function calls, recursion, and local variable
management.
Conclusion:
Segment and pointer registers are crucial in the 8086 microprocessor architecture. They
enable effective memory segmentation, support structured data handling, and provide
mechanisms for efficient instruction execution. By dividing memory into logical segments
and using offset addressing, the 8086 achieves both flexibility and performance in program
execution.
Q2. Differentiate between the roles of the Symbol Table and the Literal Table in an
assembler. Describe how each is constructed during Pass 1 and how they are used
during Pass 2.
ANS :-
➤ Difference Between Symbol Table and Literal Table in an Assembler
In the process of converting assembly language into machine code, an assembler uses internal
data structures to keep track of program elements. Two such essential structures are the
Symbol Table and the Literal Table. While both serve the purpose of memory reference
tracking, they function differently and are used at different stages.
I. Symbol Table
• The Symbol Table is a data structure used to store labels or identifiers that appear in the
source program along with their assigned memory addresses.
• It contains information such as the symbol name, its address in memory, and possibly the
length or type.
• The main role of the Symbol Table is to help the assembler resolve memory addresses of
named locations or variables during the second pass.
Construction During Pass 1
• In Pass 1, the assembler scans each line of the assembly program.
• When it encounters a label, it adds the symbol to the Symbol Table along with the current
location counter value.
• If the same label is defined again, the assembler reports a duplicate-symbol error.
• The location counter is updated as the program is read, and this value is associated with the
label.
Usage During Pass 2
• In Pass 2, the assembler uses the Symbol Table to insert the correct memory addresses into
the machine code where labels are referenced as operands.
• This ensures accurate linking of instructions to their target addresses.
II. Literal Table
• The Literal Table holds constant values that appear in the program using literal notation,
typically identified with an equal sign such as ='5'.
• It includes entries like the literal value, its assigned address, and its length.
Construction During Pass 1
• When the assembler finds a literal during Pass 1, it stores the literal in the Literal Table
with no address initially.
• At the occurrence of an LTORG directive or at the end of the program, memory addresses
are assigned to all unallocated literals using the current location counter.
• This process ensures that each literal gets a unique memory location.
Usage During Pass 2
• In Pass 2, the assembler refers to the Literal Table to find the address assigned to each
literal.
• These addresses are then used in generating the object code for the instructions that refer to
the literals.
III. Comparison Between Symbol Table and Literal Table
• The Symbol Table is used to manage programmer-defined labels while the Literal Table
manages constant values defined using literal notation.
• Symbols are assigned addresses at the time of definition, whereas literals are assigned
addresses when an LTORG is encountered or at the end of the program.
• Both tables are constructed during Pass 1 and used during Pass 2 for generating accurate
machine-level code.
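The two-pass behaviour described above can be illustrated with a toy sketch in Python; the three-column source format and the one-word statement size are assumptions made only for this example:

```python
# Hypothetical three-column source format: (label, opcode, operand),
# with every statement assumed to occupy one word of memory.
program = [
    ("START", "LOAD",  "=5"),   # uses a literal
    ("",      "ADD",   "X"),
    ("LOOP",  "STORE", "X"),
    ("X",     "DATA",  "0"),
]

def pass1(lines):
    symtab, littab, lc = {}, {}, 0
    for label, _op, operand in lines:
        if label:
            symtab[label] = lc                # record label at the current location counter
        if operand.startswith("="):
            littab.setdefault(operand, None)  # literal seen, address still pending
        lc += 1
    for lit in littab:                        # end of program acts like an LTORG:
        littab[lit] = lc                      # assign each pending literal a slot
        lc += 1
    return symtab, littab

def pass2(lines, symtab, littab):
    code = []
    for _label, op, operand in lines:
        # resolve the operand through the tables; plain constants pass through
        addr = symtab.get(operand, littab.get(operand, operand))
        code.append((op, addr))
    return code

symtab, littab = pass1(program)
code = pass2(program, symtab, littab)
```

Pass 1 never emits code: it only advances the location counter and fills both tables, so that Pass 2 can substitute every label and literal with a concrete address.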
Conclusion
The Symbol Table and Literal Table serve different yet essential roles in the functioning of
an assembler. The Symbol Table manages labels and their memory references, while the
Literal Table manages literal constants used in the program. Both are built during the first
pass and are critical in the second pass for final code generation and address resolution.
Q3. Compare and contrast Absolute Loaders and Relocating Loaders. Why are
relocating loaders preferred for advanced systems? Support your answer with examples
and scenarios.
ANS :-
➤ Comparison Between Absolute Loaders and Relocating Loaders
• Loaders are system software components responsible for placing machine-level instructions
into memory so that the program can be executed.
• Two commonly used types of loaders are Absolute Loaders and Relocating Loaders,
each designed for different levels of system complexity and flexibility.
➤ Absolute Loaders
• An Absolute Loader loads the object code directly into main memory at a fixed address
specified at assembly or compile time.
• It does not allow the program to be moved in memory, as all addresses are hard-coded in
the object code.
- Features of Absolute Loaders
I. Fixed Memory Assignment
• The starting address for loading the program must be predefined.
• All memory references are absolute and determined beforehand.
II. No Relocation Capability
• During the loading process, no address adjustments or updates are made.
• The loader simply reads and copies the object code to its fixed memory location.
III. Minimal Complexity
• Due to the absence of relocation logic, the loader’s design is simple and easy to implement.
- Limitations of Absolute Loaders
I. Rigid Memory Usage
• The inability to move programs in memory results in poor flexibility.
• Programs must occupy their exact memory positions or risk malfunction.
II. Memory Conflicts
• If two programs are assigned overlapping memory locations, execution becomes
impossible.
• This requires the programmer to carefully manage and avoid address clashes.
III. Not Scalable
• Absolute Loaders are not practical in multitasking or dynamic environments where
programs are frequently loaded and unloaded.
➤ Relocating Loaders
• Relocating Loaders support the ability to load a program into different memory locations,
adapting it dynamically during the loading phase.
• They adjust all memory-related references using relocation information contained in the
object file.
- Features of Relocating Loaders
I. Dynamic Memory Allocation
• The loader calculates a relocation factor and modifies memory addresses so the program
can be loaded into any available space.
II. Supports Dynamic Systems
• Multiple programs can be loaded into memory simultaneously, each in a separate region.
• It is well-suited for modern operating systems with multitasking capabilities.
III. Code Reusability and Modularity
• Shared code modules and libraries can be easily linked and reused across programs.
- Advantages of Relocating Loaders
I. Greater Flexibility
• Programs are not bound to specific memory addresses and can adapt to the system’s
memory availability.
II. Better Resource Management
• Efficient memory usage is possible, even in fragmented memory spaces.
• Helps the operating system in managing and reallocating memory dynamically.
III. Modern Programming Support
• Enables the use of shared libraries, dynamic linking, and real-time process swapping.
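The relocation idea can be sketched in Python, assuming a simplified object format in which one bit per word marks the address fields (real object-file layouts differ):

```python
# Assumed minimal object format: a list of code words plus parallel
# relocation bits, where 1 means "this word holds an address".
object_code = [0x100, 0x0005, 0x102]   # words 0 and 2 are addresses
reloc_bits  = [1, 0, 1]

def relocate(code, bits, load_address):
    """Add the relocation factor (the load address) to every flagged word."""
    return [w + load_address if b else w for w, b in zip(code, bits)]

image = relocate(object_code, reloc_bits, 0x4000)
# The same object code could be loaded at 0x7000 instead, with no reassembly.
```

An absolute loader has no equivalent of `reloc_bits`: every address is already final, which is exactly why the program cannot be moved.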
➤ Preference for Relocating Loaders in Advanced Systems
- Reasons for Preference
I. Essential for Multitasking Environments
• In systems like Linux, Windows, or macOS, multiple applications and services run
concurrently.
• Relocating loaders ensure that each process receives a unique, non-overlapping memory
block without the need for reassembly.
II. Real-Time Memory Management
• Programs can be loaded, moved, and unloaded as needed without restarting the system.
• This dynamic behavior is crucial for modern software systems.
III. Modular and Reusable Software Architecture
• Shared code modules can be loaded once and referenced by multiple applications, reducing
memory consumption.
- Examples and Scenarios
• In a UNIX-based system, several programs and background services run in memory
simultaneously. Relocating loaders make it possible to load and run each program
independently and without address conflicts.
• In an embedded system with a dedicated single function, an absolute loader may be
enough, but in advanced smart devices that run multiple apps, relocating loaders are essential.
• Web servers, operating systems, and cloud environments rely on relocating loaders to
efficiently manage memory among numerous processes.
➤ Conclusion
• While Absolute Loaders are simple and effective in environments with fixed program
structure and limited functionality, they fall short in flexibility and scalability.
• Relocating Loaders, with their ability to adjust programs during loading, are essential for
modern and dynamic computing systems.
• Their support for memory optimization, multitasking, and modular programming makes
them the preferred solution in today’s complex software environments.
SET - II
Q4. Compare and contrast device driver management in UNIX/Linux, MS-DOS, and
Windows operating systems. Discuss the key differences in installation, configuration,
communication, and architecture.
ANS :-
➤ Comparison of Device Driver Management in UNIX/Linux, MS-DOS, and Windows
Operating Systems
Device drivers act as a bridge between hardware and the operating system, enabling smooth
interaction between software applications and physical devices. Each operating system uses a
distinct strategy for managing drivers, based on its architecture and design philosophy.
➤ Device Driver Management in UNIX/Linux
• UNIX and Linux systems follow a modular approach where most device drivers are built as
kernel modules
• These modules can be loaded or unloaded into the running kernel without rebooting the
system
Installation and Configuration
I. Drivers can be compiled into the kernel or loaded using commands like insmod and
modprobe
II. Configuration is handled through files such as /etc/modules or dynamically using the udev
subsystem
III. Devices are represented as special files in the /dev directory and are named
systematically such as /dev/sda or /dev/ttyS0
Communication
I. Applications communicate with drivers using system calls such as open, read, write, and ioctl
II. These system calls operate on the device files representing the hardware components
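This file-based model means a driver is exercised with the same system calls as an ordinary file. The Python sketch below uses the null device, which exists on every UNIX system; a serial port such as /dev/ttyS0 would be opened in exactly the same way:

```python
import os

# Device files in /dev are accessed through ordinary file system calls;
# the kernel routes each call to the corresponding driver.
fd = os.open("/dev/null", os.O_RDWR)
written = os.write(fd, b"discarded")   # write() is handled by the null driver
data = os.read(fd, 64)                 # the null driver always returns EOF
os.close(fd)
```

Swapping `/dev/null` for another device file changes which driver services the calls, not the calls themselves, which is the essence of the UNIX device model.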
Architecture
I. UNIX and Linux follow a monolithic kernel structure where drivers run in kernel space
II. Linux supports loadable kernel modules which allow the system to remain flexible and
scalable
➤ Device Driver Management in MS-DOS
• MS-DOS uses a simple and static method for device driver management
• Drivers are loaded during system startup and remain in memory for the entire session
Installation and Configuration
I. Drivers are installed by editing the CONFIG.SYS file with entries such as
DEVICE=C:\DOS\ANSI.SYS
II. There is no dynamic driver loading support in MS-DOS
III. Hardware settings like IRQ and memory addresses are manually configured by the user
Communication
I. Communication with devices is direct and often relies on software interrupts
II. MS-DOS uses BIOS or system-level routines to access basic hardware functions
Architecture
I. Drivers operate in real mode with direct access to hardware
II. There is no concept of kernel mode or user mode separation
➤ Device Driver Management in Windows
• Windows operating systems implement a layered and event-driven driver model
• Drivers are categorized based on device type and interaction level, such as kernel mode or
user mode drivers
Installation and Configuration
I. Drivers are installed using setup programs or automatically via plug and play detection
II. Windows maintains driver information in the registry and uses digital signatures for
security
III. Device Manager provides a graphical interface to view and configure hardware drivers
Communication
I. Applications communicate with drivers through high-level APIs and system calls
II. Windows uses the I/O Manager and IRP (I/O Request Packets) for communication
between components
Architecture
I. Windows uses a layered driver architecture that includes function drivers, filter drivers, and
bus drivers
II. Drivers operate in kernel mode for performance and stability while user mode drivers
handle less critical tasks
➤ Key Differences Across Operating Systems
• UNIX/Linux supports dynamic loading and unloading of drivers, while MS-DOS loads
drivers statically and Windows uses automated plug and play
• Configuration in UNIX/Linux involves command-line tools and config files, MS-DOS
relies on manual setup, and Windows offers GUI-based configuration
• Communication in UNIX/Linux is file-based, in MS-DOS it is interrupt-based, and in
Windows it is handled through structured I/O systems
• Architecture in UNIX/Linux is monolithic, MS-DOS is real-mode based, and Windows
follows a layered modular architecture
➤ Conclusion
Device driver management varies significantly across UNIX/Linux, MS-DOS, and Windows.
UNIX/Linux offers flexibility and modularity with dynamic kernel modules. MS-DOS
provides simplicity with direct hardware access but lacks scalability. Windows balances
security, automation, and performance through its layered driver architecture. In modern
computing environments, Windows and Linux are preferred due to their robust driver models
and support for complex hardware interactions.
Q5. Explain the process of IP Address allocation in UPnP devices using both DHCP and
Auto-IP mechanisms. Discuss the importance of address management in UPnP and how
it ensures seamless device communication on the network.
ANS :-
➤ IP Address Allocation in UPnP Devices
UPnP or Universal Plug and Play is a networking protocol that enables devices to discover
each other and establish functional network services with minimal user intervention. A
critical aspect of UPnP functionality is automatic IP address assignment, which allows devices
to communicate seamlessly. UPnP devices use two main methods to obtain an IP address:
DHCP and Auto-IP, also known as link-local addressing.
➤ Using DHCP in UPnP
• DHCP or Dynamic Host Configuration Protocol is the preferred method for IP address
allocation in UPnP networks
• When a UPnP-enabled device is connected to the network it first tries to obtain an IP
address from a DHCP server
Steps in DHCP-based IP Assignment
I. The device sends a DHCPDISCOVER broadcast message on the network
II. The DHCP server responds with a DHCPOFFER containing an available IP address and
configuration
III. The device sends a DHCPREQUEST to accept the offer
IV. The DHCP server replies with a DHCPACK to confirm the assignment
• This process assigns a unique IP address, subnet mask, default gateway, and DNS server to
the device
• Devices configured through DHCP can be centrally managed and their addresses can be
easily reused or reassigned
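The four-step exchange above (often called DORA) can be simulated in Python; the address pool and names are hypothetical and no real network traffic is involved:

```python
def dhcp_handshake(server_pool):
    """Simulate the four DHCP messages; server_pool is a hypothetical
    list of free addresses held by the DHCP server."""
    transcript = ["DHCPDISCOVER"]                # 1. client broadcasts a discovery
    offered = server_pool[0]                     # 2. server picks a free address
    transcript.append(f"DHCPOFFER {offered}")
    transcript.append(f"DHCPREQUEST {offered}")  # 3. client accepts the offer
    transcript.append(f"DHCPACK {offered}")      # 4. server confirms the lease
    return offered, transcript

ip, log = dhcp_handshake(["192.168.1.50", "192.168.1.51"])
```

The simulation makes the ordering explicit: the client never uses the address until the final DHCPACK confirms the lease.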
➤ Using Auto-IP in UPnP
• If there is no DHCP server on the network or the DHCP request fails, the UPnP device uses
Auto-IP to assign itself an IP address from a reserved link-local range
• Auto-IP assigns addresses in the range 169.254.1.0 to 169.254.254.255
Steps in Auto-IP Assignment
I. The device selects a random IP address within the link-local range
II. It sends an ARP probe to check if the address is already in use
III. If there is no response the device uses the address and begins communication
IV. If a conflict is detected the device selects a new address and repeats the process
• Auto-IP ensures that even in the absence of a DHCP server the device can still
communicate with other devices on the local network
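A minimal Python sketch of this selection-and-retry loop, with an in-memory set standing in for ARP probe responses (a real device would send actual ARP packets on the link):

```python
import random

def pick_link_local(in_use, rng=random.Random(0)):
    """Pick a 169.254.x.y address, retrying until no conflict is detected.
    The in_use set stands in for hosts that would answer an ARP probe."""
    while True:
        addr = f"169.254.{rng.randint(1, 254)}.{rng.randint(0, 255)}"
        if addr not in in_use:     # no ARP reply: the address is free
            return addr
        # conflict detected: another device answered the probe, choose again

taken = {"169.254.1.5"}            # hypothetical address already on the link
addr = pick_link_local(taken)
```

Because the loop always re-probes after a conflict, every device eventually settles on an address that no other host on the link is using.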
➤ Importance of Address Management in UPnP
• Proper address management is vital for seamless discovery, configuration, and
communication of devices on a UPnP network
• Without a valid IP address devices cannot participate in the UPnP discovery and
advertisement process
Key Reasons for Address Management
I. Ensures every device has a unique IP address to avoid conflicts
II. Supports dynamic network environments where devices join and leave frequently
III. Enables zero-configuration networking which is a core goal of UPnP
IV. Facilitates automatic service discovery which relies on consistent network addressing
V. Promotes user-friendly network setup especially in home and office environments
➤ How It Ensures Seamless Communication
• By first attempting to use DHCP and falling back to Auto-IP if necessary, UPnP devices
ensure maximum compatibility and reliability
• This dual approach allows devices to start functioning with minimal configuration effort
from the user
• Devices with valid IP addresses can immediately broadcast SSDP messages to announce
their presence and services
• Control points can discover these services using standard UPnP protocols such as HTTP
and SOAP
• Whether or not DHCP is available, UPnP ensures that devices can interconnect and operate
within the network environment
➤ Conclusion
UPnP devices rely on robust address management to maintain a smooth and user-friendly
networking experience. By supporting both DHCP and Auto-IP mechanisms, UPnP ensures
that devices can automatically obtain an IP address and begin communication without manual
setup. This flexibility is essential for plug-and-play functionality, especially in modern
networks where ease of use and interoperability are critical.
Q6. Explain the hierarchy process in Android memory management. How does Android
decide which process to terminate when memory is low?
ANS :-
➤ Hierarchy Process in Android Memory Management
Android is a mobile operating system designed to run efficiently on devices with limited
memory. It uses a process hierarchy to manage running applications and background services
while ensuring smooth performance. Android’s memory management is based on the Linux
kernel’s process handling and integrates additional logic to prioritize applications based on
their importance to the user.
I. Foreground Process
• These are the processes that the user is currently interacting with
• It includes activities currently on screen or services bound to those activities
• Foreground processes have the highest priority and are the last to be terminated
II. Visible Process
• These are not in the foreground but are still visible to the user such as activities partially
covered by another transparent activity
• These processes are important but can be terminated if the system is under extreme
memory pressure
III. Service Process
• These run services started using startService and do not involve direct user interaction
• Android keeps them alive as long as possible since they often perform background tasks
like downloading data or playing music
IV. Background Process
• These host activities that are no longer visible to the user but have not been destroyed
• Android may terminate them when memory is required for foreground or visible processes
V. Empty Process
• These processes do not hold any active components and are retained only for caching
purposes
• They have the lowest priority and are the first to be terminated when memory is low
➤ How Android Decides Which Process to Terminate
When the system runs low on memory, Android begins to terminate processes to free up
resources. The decision is based on the process importance level described in the hierarchy.
The lower the priority, the more likely the process is to be killed.
Process Termination Logic
I. Android maintains an internal list of all running processes and assigns an importance level
to each one
II. The system starts killing the lowest priority processes such as empty processes
III. If memory is still insufficient it terminates background processes followed by service
processes and then visible processes
IV. Foreground processes are only terminated as a last resort to prevent the system from
becoming unresponsive
Additional Factors Considered
• Memory usage of the process
• The time since it was last used
• Whether the process is associated with a system component or user application
• Importance of the app for device functionality such as system apps or alarm services
➤ Conclusion
Android uses a well-defined process hierarchy to manage memory efficiently. By prioritizing
processes based on user interaction and functionality, it ensures a balance between
performance and resource availability. When memory is low, Android begins terminating
processes from the lowest priority upward, ensuring that essential and active applications
remain unaffected for as long as possible.