OPERATING SYSTEM
FOCUSED SHORT QUESTION:
1. Process management involves planning, monitoring, and optimizing
organizational processes to ensure efficiency, quality, and
consistency. Key activities include:
(a). Process Design: Mapping workflows and defining tasks, inputs,
and outputs.
(b). Process Implementation: Deploying processes with clear roles
and resources.
(c). Process Monitoring: Tracking performance using KPIs to find
inefficiencies.
(d). Process Evaluation: Analyzing data to check effectiveness and
goal alignment.
(e). Process Improvement: Enhancing processes with methods like Six
Sigma or Lean.
(f). Process Documentation: Recording processes for consistency,
training, and audits.
2. System calls allow programs to safely access computer resources;
in other words, they help applications use the computer's resources safely.
System calls are used for:
a. Managing resources like CPU and memory.
b. Keeping the system safe and stable.
c. Helping developers work with hardware without complicated details.
Example: A C program uses the "read()" system call to safely
access and read a file with the help of the computer's kernel.
Here's how a system call works when reading a file:
a. The program asks the kernel to read a file.
b. The program uses a system call (like "read") to request the file.
c. The kernel checks permissions, finds the file, reads it, and gives the
data back to the program.
d. The program gets the data and uses it.
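The steps above can be sketched in Python, whose os module exposes thin wrappers over the open, read, and close system calls (the file name and contents here are made up purely for the demonstration):

```python
import os
import tempfile

# Set up a small file to demonstrate on (hypothetical content).
tmp = tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=False)
tmp.write("hello")
tmp.close()

# os.open/os.read/os.close wrap the open, read, and close system
# calls: the kernel checks permissions, locates the file, reads the
# bytes, and hands them back to the program.
fd = os.open(tmp.name, os.O_RDONLY)  # system call: open
data = os.read(fd, 100)              # system call: read
os.close(fd)                         # system call: close
os.unlink(tmp.name)

print(data.decode())  # hello
```

The program never touches the disk hardware itself; it only asks the kernel, which is the point of the system-call interface.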
3. Monolithic Kernel Approach
- Controls everything from one place (centralized)
- All parts are connected (monolithic)
- Fast and efficient
- Hard to change or fix
Layered Approach
- Divided into separate parts (modular)
- Each part builds on the previous one (hierarchical)
- Easier to change or fix
- A bit slower
4. Functions of the Command Interpreter:
The Command Interpreter (Shell) helps users interact with the
operating system. Its main jobs are:
a. Running commands and programs
b. Executing script files
c. Managing processes (start, stop, monitor)
d. Navigating and managing files and folders
e. Helping users interact with the system
Why is it Separate from the Kernel?
The command interpreter is separate from the kernel for several
reasons:
a. Flexibility: Different shells can be used without changing the
kernel.
b. Safety: Errors or malicious commands can't harm the whole
system.
c. Customization: Users can choose their favorite shell without
affecting the system.
d. Simplicity: The kernel is simpler and easier to manage.
Yes, it is possible for a user to develop a new command interpreter
using the system-call interface provided by the operating system. This
is because:
a. System calls help interact with the kernel.
b. Programming interfaces make it easy to create custom shells.
c. Existing shells show it's possible.
5. Process management involves a series of activities aimed at
optimizing and controlling business processes to improve efficiency
and quality. Key activities include:
a. Designing workflows and tasks
b. Creating visual models (like flowcharts)
c. Implementing processes manually or automatically
d. Monitoring performance
e. Making improvements
f. Standardizing best practices
6. Time-Sharing Systems
- Allow multiple users to share resources like the CPU
- Each user gets a small time slot to use the CPU
- Properties:
- Multi-user access
- Fair allocation of resources
- Preemptive scheduling
- Interactive performance
- Isolation between users
Multi-Processor Systems
- Have two or more CPUs sharing resources
- Properties:
- Parallel processing
- Shared resources
- Scalability
- Fault tolerance
- Load balancing
Distributed Systems
- A collection of independent computers working together
- Properties:
- Resource sharing
- Decentralization
- Fault tolerance
- Concurrency
- Scalability
- Transparency
7. In an operating system, the primary process states describe the
different stages a process can be in during its lifecycle. These states
are:
Process States:
a. New: A process is created but not ready to run.
b. Ready: A process is loaded and waiting for the CPU.
c. Running: A process is executing on the CPU.
d. Blocked: A process is waiting for something (like I/O) to happen.
e. Terminated: A process is finished or stopped.
State Transitions:
Processes can transition between these states as follows:
a. New → Ready: The OS prepares the process to run.
b. Ready → Running: The scheduler assigns the CPU to the process.
c. Running → Ready: The process is interrupted to let another
process run.
d. Running → Blocked: The process waits for something to happen.
e. Blocked → Ready: The wait is over, and the process is ready again.
f. Running → Terminated: The process finishes or is stopped.
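As a rough illustration (not tied to any particular OS), the legal transitions listed above can be encoded as a small table and queried:

```python
# Five-state process model: each state maps to the set of states
# it may legally transition to, per the list above.
TRANSITIONS = {
    "New": {"Ready"},
    "Ready": {"Running"},
    "Running": {"Ready", "Blocked", "Terminated"},
    "Blocked": {"Ready"},
    "Terminated": set(),
}

def can_move(state, target):
    """Return True if the transition state -> target is legal."""
    return target in TRANSITIONS[state]

print(can_move("Running", "Blocked"))   # True: process waits for I/O
print(can_move("Blocked", "Running"))   # False: must pass through Ready
```

Note that a blocked process never goes straight back to Running; when its wait ends it rejoins the ready queue and competes for the CPU again.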
8. A Process Control Block (PCB) is a data structure that stores
information about each process in a computer system. Each process
has its own PCB, which is created when the process starts and
deleted when it ends.
A PCB typically contains:
a. Unique Process ID (PID)
b. Current process state (e.g., running, waiting)
c. Program Counter (next instruction to execute)
d. CPU register information
e. Memory management details
f. Scheduling information (e.g., priority)
g. Resource usage statistics (e.g., CPU time)
h. I/O device allocation information
9. Scheduling queues are essential structures in an operating system's
process scheduling. They manage processes at different stages of
execution. The primary types are:
a. Job Queue: Holds all processes entering the system, waiting to be admitted.
b. Ready Queue: Holds processes ready to run, waiting for CPU time.
c. Device Queue: Holds processes waiting for input/output
operations.
How it Works:
Processes move between these queues as they change state (ready,
waiting, or running). This helps the operating system manage
resources and multitask efficiently.
11. CPU scheduling criteria are used to evaluate and compare
different scheduling algorithms to ensure efficient process
management in an operating system. The main criteria include:
a. CPU Utilization: Keep the CPU busy as much as possible.
b. Throughput: Complete more processes in less time.
c. Turnaround Time: Finish processes quickly.
d. Waiting Time: Minimize time spent waiting in line.
e. Response Time: Respond quickly to user input.
f. Fairness: Treat all processes equally and prevent starvation.
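As a small worked example of the turnaround and waiting criteria, here is a first-come-first-served (FCFS) calculation; the burst times are made-up values, and all processes are assumed to arrive at time 0:

```python
# FCFS: processes run in arrival order; each one's waiting time is
# the sum of the bursts that ran before it.
bursts = [24, 3, 3]          # hypothetical CPU bursts for P1, P2, P3

waiting = []
t = 0
for b in bursts:
    waiting.append(t)        # time spent in the ready queue
    t += b                   # clock advances by this burst

# turnaround = waiting + own burst (completion time since arrival)
turnaround = [w + b for w, b in zip(waiting, bursts)]

print(waiting)                       # [0, 24, 27]
print(turnaround)                    # [24, 27, 30]
print(sum(waiting) / len(waiting))   # average waiting time: 17.0
```

Reordering the same bursts shortest-first would cut the average waiting time sharply, which is exactly what these criteria are used to compare.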
17. Three Requirements for Critical Section Problem:
a. Mutual Exclusion: Only one process can be in the critical section at
a time.
b. Progress: If no process is in the critical section, the processes that
want to enter must be able to decide among themselves which enters next,
without indefinite postponement.
c. Bounded Waiting: Each process should get a chance to enter the
critical section within a reasonable time, preventing starvation.
18. Peterson's Solution:
A software-based algorithm that solves the critical section
problem for two processes in concurrent programming.
Key Components:
a. Flag: Indicates interest in entering the critical section.
b. Turn: Decides which process enters the critical section.
How it Works:
a. Each process sets its flag to show interest.
b. The process sets the turn variable to the other process, yielding priority.
c. The process waits while the other process's flag is set and it is the
other process's turn.
Guarantees:
- Mutual exclusion
- Progress
- Bounded waiting
Limitations:
- Limited to two processes
- Not efficient in systems with more processes
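A sketch of the algorithm in Python follows. Note the caveat in the comments: this works under CPython's GIL, but on real hardware compilers and CPUs may reorder the flag/turn writes, so production code uses hardware primitives or library locks instead.

```python
import threading

# Peterson's algorithm for two threads (indices 0 and 1).
# Relies on CPython's GIL for memory ordering; treat as illustrative.
flag = [False, False]   # flag[i]: thread i wants to enter
turn = 0                # which thread must yield when both want in

def enter(i):
    global turn
    j = 1 - i
    flag[i] = True              # declare interest
    turn = j                    # give priority to the other thread
    while flag[j] and turn == j:
        pass                    # busy-wait while the other has priority

def leave(i):
    flag[i] = False             # withdraw interest

counter = 0

def worker(i):
    global counter
    for _ in range(1000):
        enter(i)
        counter += 1            # critical section
        leave(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 2000: no increments were lost
```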
19. Synchronization Hardware:
Refers to hardware mechanisms that help coordinate multiple processes or
threads in a system.
- Helps coordinate processes in a multi-core or multi-processor environment.
- Prevents race conditions and ensures mutual exclusion.
- Examples: Test-and-Set, Compare-and-Swap, Fetch-and-Add.
20. Semaphore:
A tool to manage shared resources in a multi-threaded environment.
Types of Semaphores:
a. Binary Semaphore (Mutex): Takes values 0 or 1, used for mutual exclusion.
b. Counting Semaphore: Takes non-negative integer values, used to control
access to a resource pool.
Operations on Semaphores:
a. Wait (P): Decreases semaphore value, blocks process if value is 0.
b. Signal (V): Increases semaphore value, unblocks waiting process if value
becomes positive.
Purpose:
Helps control access to shared resources and prevents race conditions.
Differentiate between binary and counting semaphores:
Binary Semaphore (Mutex)
a. Values: 0 or 1
b. Use: Mutual Exclusion (only one process can access a resource)
c. Example: Locking a door (only one person can enter)
Counting Semaphore
a. Values: Non-negative integers (0, 1, 2, ...)
b. Use: Managing a pool of resources (e.g., printers, buffers)
c. Example: Managing a parking lot (multiple cars can enter, up to a limit)
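The parking-lot analogy can be sketched with Python's threading.Semaphore; the capacity of 3 and the 10 "cars" are arbitrary values for the demonstration:

```python
import threading

# Counting semaphore: a parking lot with 3 spaces.
spaces = threading.Semaphore(3)   # up to 3 threads inside at once
# Binary semaphore (mutex): protects the shared counter.
mutex = threading.Semaphore(1)

parked_total = 0

def car():
    global parked_total
    spaces.acquire()        # wait (P): take a space or block
    mutex.acquire()         # wait (P): enter critical section
    parked_total += 1
    mutex.release()         # signal (V): leave critical section
    spaces.release()        # signal (V): free the space

threads = [threading.Thread(target=car) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(parked_total)  # 10: every car eventually parked
```

The counting semaphore bounds concurrency at 3; the binary semaphore guarantees the increments don't interleave.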
21. Dining Philosophers Problem
Five philosophers sit at a table with one fork between each pair. To eat, a
philosopher needs both forks.
Key Issues
a. Mutual Exclusion: A fork can be held by only one philosopher at a time.
b. Deadlock: Philosophers might wait forever for forks.
c. Starvation: A philosopher might never get to eat.
d. Concurrency: Multiple philosophers share resources (forks).
Semaphore-Based Solution
a. Initialize semaphores for each fork (available) and a mutex for critical
sections.
b. Each philosopher:
- Thinks
- Picks up left fork
- Picks up right fork (if available)
- Eats
- Puts down right fork
- Puts down left fork
c. Avoid deadlock by ensuring philosophers pick up forks in a specific order.
Code Example
Uses semaphores to manage fork access and prevent deadlock.
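A minimal Python sketch of the semaphore-based solution: deadlock is avoided by always acquiring the lower-numbered fork first, which breaks the circular wait (each philosopher eats once here, to keep the run short):

```python
import threading

N = 5
forks = [threading.Semaphore(1) for _ in range(N)]  # one fork per gap
meals = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    # Resource ordering: always pick up the lower-numbered fork
    # first, so no cycle of waiting philosophers can form.
    first, second = (left, right) if left < right else (right, left)
    forks[first].acquire()
    forks[second].acquire()
    meals[i] += 1               # eat
    forks[second].release()
    forks[first].release()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # [1, 1, 1, 1, 1]: everyone ate, no deadlock
```

Without the ordering (everyone grabbing the left fork first), all five could hold one fork and wait forever for the other.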
22. Deadlock: When processes are stuck, waiting for resources held by others.
Necessary Conditions for Deadlock:
a. Mutual Exclusion: One resource is held by one process.
b. Hold and Wait: A process holds one resource and waits for another.
c. No Preemption: Resources can't be taken away from processes.
d. Circular Wait: Processes wait for resources in a cycle.
Sufficient Conditions for Deadlock:
When all four necessary conditions are present, deadlock is guaranteed.
In short:
- Necessary conditions make deadlock possible.
- Sufficient conditions guarantee deadlock will happen.
23. Deadlock Prevention:
Two methods to prevent deadlock:
(i) Prevention of Circular Wait
- Assign a unique order to resources.
- Processes must request resources in that order.
- Alternatively, preempt resources to break the cycle.
(ii) Prevention of Hold and Wait
- Request all resources at once before starting.
- Release all held resources before requesting new ones.
These methods prevent deadlock by:
- Avoiding circular dependencies
- Preventing processes from holding resources while waiting for others
24. Banker's Algorithm Data Structures
1. Available: Tracks available resources.
2. Max: Stores maximum resources each process may need.
3. Allocation: Represents resources currently allocated to each process.
4. Need: Calculates remaining resources needed by each process.
5. Work: Tracks available resources during algorithm execution.
6. Finish: Indicates whether each process has finished execution.
Banker's Algorithm Execution
1. Temporary Allocation Check: Temporarily allocates resources and updates
matrices.
2. Safety Check: Verifies if the system remains safe by checking if all processes
can finish without deadlock.
Purpose
The Banker's Algorithm prevents deadlock by ensuring resource requests don't
lead to an unsafe state, maximizing resource utilization.
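A sketch of the safety check, run on a common textbook example (the specific matrices are illustrative values, not required ones):

```python
def is_safe(available, max_need, allocation):
    """Banker's safety check: True if all processes can finish."""
    n = len(max_need)
    m = len(available)
    # Need = Max - Allocation
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)      # resources currently free
    finish = [False] * n
    while True:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Process i can finish; it releases its allocation.
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                progressed = True
        if not progressed:
            return all(finish)

available = [3, 3, 2]
max_need = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
safe = is_safe(available, max_need, allocation)
print(safe)  # True: a safe sequence exists
```

A request is granted only if the state after the tentative allocation still passes this check.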
30. Belady's Anomaly:
A situation where increasing the number of page frames in a system actually
increases the number of page faults.
What Happens:
Normally, adding more frames should reduce page faults. But with certain
algorithms, like First-In, First-Out (FIFO), this doesn't always happen.
Example:
With the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, FIFO causes 9
page faults using 3 frames but 10 page faults using 4 frames: adding a frame
increases, rather than decreases, the number of faults.
Why it Happens:
FIFO replaces the oldest page, which might not be the best choice. This can
lead to inefficient replacements and more page faults, even with more frames.
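A small FIFO simulation, assuming the classic reference string, shows the anomaly directly:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement with nframes frames."""
    frames = deque()            # oldest page at the left
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()    # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10: more frames, more faults
```

Stack algorithms like LRU cannot exhibit this, because the pages held in k frames are always a subset of those held in k+1 frames.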
32. Disk scheduling refers to the method used by an operating system to decide
the order in which disk I/O requests are processed. The key criteria for disk
scheduling are:
a. Fairness: Process all requests without delay.
b. Efficiency: Minimize seek time (time to move between tracks).
c. Throughput: Process many requests quickly.
d. Minimize Latency: Reduce wait time for data.
e. Satisfaction of Requests: Prioritize important requests.
Different algorithms (like FCFS, SSTF, SCAN, and LOOK) aim to balance these
goals for optimal performance.
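The trade-off between FCFS fairness and SSTF seek efficiency can be seen in a small simulation; the head position (53) and request queue used here are the usual textbook illustration, not fixed values:

```python
def fcfs_seek(start, requests):
    """Total head movement serving requests in arrival order."""
    total, pos = 0, start
    for r in requests:
        total += abs(pos - r)
        pos = r
    return total

def sstf_seek(start, requests):
    """Total head movement always serving the nearest pending request."""
    pending, pos, total = list(requests), start, 0
    while pending:
        nearest = min(pending, key=lambda r: abs(pos - r))
        total += abs(pos - nearest)
        pos = nearest
        pending.remove(nearest)
    return total

reqs = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_seek(53, reqs))  # 640 cylinders of movement
print(sstf_seek(53, reqs))  # 236 cylinders of movement
```

SSTF moves the head far less, but requests far from the head can starve, which is why SCAN/LOOK variants exist.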
36. Swap space management refers to the process of using a portion of the
hard drive as virtual memory when the physical RAM is full.
Key points about swap space management include:
a. Virtual Memory: Swap space adds extra memory to your computer.
b. Page Swapping: When RAM is full, data is moved to swap space to free up
RAM.
c. Performance: Accessing swap space is slower than RAM, so excessive
swapping can slow down your computer.
d. Configuration: Swap space can be set up as a file or partition, and its size
depends on your computer's RAM.
38. Paging in Memory Management:
Paging is a way to divide memory into small blocks called pages.
How Paging Works:
a. Program is divided into pages (small blocks).
b. Physical memory is divided into frames (same size as pages).
c. Pages are mapped to frames in physical memory.
d. Pages can be stored in different frames, and can be swapped in and out from
disk as needed.
Example:
A 16 KB program with 4 KB pages is divided into 4 pages; each page can be
placed in any free 4 KB frame of physical memory.
39. Differentiate between logical and physical address
Logical Address:
a. Also known as virtual address.
b. Generated by the CPU.
c. Used by the program or process.
d. Doesn't exist physically in memory.
Physical Address:
a. Also known as real address.
b. Actual location of data in memory.
c. Used by the memory management unit (MMU).
d. Exists physically in memory.
Logical to Physical Address Mapping in Paging
a. Logical Address: Page Number + Page Offset
b. Physical Address: Frame Number + Frame Offset
c. Page Table: Maps Page Numbers to Frame Numbers
d. Translation: Use Page Table + Offset to get Physical Address
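The translation steps above can be sketched with a toy page table (the 4 KB page size and the frame numbers are made-up values):

```python
PAGE_SIZE = 4096  # hypothetical 4 KB pages

# Page table: page number -> frame number (illustrative mapping).
page_table = {0: 5, 1: 2, 2: 7, 3: 0}

def translate(logical):
    """Map a logical address to a physical address via the page table."""
    page = logical // PAGE_SIZE      # page number
    offset = logical % PAGE_SIZE     # offset within the page
    frame = page_table[page]         # page table lookup
    return frame * PAGE_SIZE + offset

# Logical address 4100 lies in page 1 at offset 4; page 1 -> frame 2.
phys = translate(4100)
print(phys)  # 2 * 4096 + 4 = 8196
```

The offset is copied unchanged; only the page number is replaced by the frame number.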
40. Logical to Physical Address Mapping in Segmentation:
In segmentation, the logical address is mapped to the physical address through
a process that involves dividing the memory into segments, where each
segment corresponds to a different type of data or program structure (e.g.,
code, data, stack).
Here's how the mapping works:
a. Logical Address Structure:
- 2 parts: Segment Number (S) and Offset (O)
- S: Which segment it belongs to
- O: Location within the segment
- Represented as (S, O)
b. Segment Table:
- A table maintained by the operating system
- Holds base addresses of segments in physical memory
- Each entry has:
- Base address (where segment starts)
- Segment size (and other info)
c. Translation Process:
- Use Segment Number (S) to find the segment in the table.
- Get the base address from the table.
- Add the Offset (O) to the base address.
- Result: Physical Address.
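The lookup steps above can be sketched with a toy segment table (the base and limit values are hypothetical):

```python
# Segment table: segment number -> (base address, limit/size).
segment_table = {
    0: (1000, 400),    # e.g., code
    1: (6000, 1500),   # e.g., data
    2: (9000, 300),    # e.g., stack
}

def translate(seg, offset):
    """Map (segment, offset) to a physical address, checking the limit."""
    base, limit = segment_table[seg]
    if offset >= limit:
        raise ValueError("segmentation fault: offset out of range")
    return base + offset

phys = translate(1, 100)   # segment 1 starts at 6000
print(phys)  # 6100
```

Unlike paging, the limit check matters here: an offset beyond the segment's size is an addressing error, not a valid mapping.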