
Operating Systems Course Plan 2022

This document outlines the lecture plan for an Operating Systems course being offered from January to April 2022. The course will cover key topics like process management, memory management, file systems, and I/O management through 36 lectures. Students are expected to read textbook chapters, complete homework assignments and programming projects, and take tests and quizzes. The goal is for students to understand how operating systems work and be able to apply concepts like scheduling algorithms, virtual memory, and system calls. Recommended textbooks and additional references are provided.


Dept. of Comp. Engg.

Lecture Plan for CO24508 Operating Systems (Sem: JAN’22-April’22)


Course Instructor : Dr. Vandan Tewari

Prerequisites for this course:

1. Computer Architecture
2. Data Structures & good programming skills (C/C++/Java)

COURSE OBJECTIVE:

To give students a high-level understanding of how operating systems work with respect to process
management, memory management, file systems, and I/O management.

COURSE OUTCOMES:

After completing the course, a student should be able to:

1. Describe and explain the importance and objectives of an operating system and the various
services it provides.
2. Interpret the important functions of the different modules of an operating system (process
management, memory management, device management, file system, etc.) and apply these
concepts to given test cases.
3. Compare and contrast different policies for CPU scheduling, inter-process communication, page
replacement, and disk scheduling.
4. Design and develop small modules, shells, and utility programs using the system calls of Linux
or an educational operating system.

Expected Work
1. Substantial reading (textbooks & references)
2. Homework assignments (may require programming)
3. Tests (at least two) + small quizzes (any number) + class activities

TEXT BOOKS RECOMMENDED:


1. Silberschatz, Galvin, Gagne, "Operating System Concepts", Wiley, 10/E, 2019. ISBN: 978-1-
119-45586-8
2. William Stallings, "Operating Systems", Pearson Education, 9/E, 2019.

REFERENCE BOOKS & WEB RESOURCES:


1. Andrew S. Tanenbaum, "Modern Operating Systems", 4/E, Prentice Hall, 2014.
2. Maurice J. Bach, "The Design of the UNIX Operating System", First Edition, Pearson, 2015.
3. Bovet & Cesati, "Understanding the Linux Kernel", O'Reilly, 3/E, 2005.
4. [Link]
Lecture Plan

Unit I INTRODUCTION TO OS (No. of Lectures :03)

Lect. 1: Introduction to the subject and the value of studying it; functions of an O.S., desirable
features of an O.S., structure of an O.S., evolution of O.S., different types of O.S.,
overview of a computer system.
Lect. 2: Booting process; different types of operating-system services & methods of providing these
services; layered structure, monolithic & microkernel architectures; system calls and system
programs; importance of system calls.
Lect. 3: Types of system calls, examples, and how a system call is made; system calls in
UNIX/Linux.

UNIT II OPERATING SYSTEM AS RESOURCE MANAGER (Total no. of Lectures :26)

II A <PROCESS MANAGEMENT , IPC & DEADLOCKS> (No. of Lectures=15)

Lect. 4: Concept of a process; representation of a process (PCB); process states; process creation.
Scheduling of a process, context switching, different types of schedulers in an OS,
five-state process diagram. Introduction to CPU scheduling.
Lect. 5-6: System-based and user-based scheduling criteria; scheduling algorithms.
Lect. 7: Numericals on CPU scheduling: calculating turnaround time and waiting time.
Lect. 8-9: Algorithm evaluation; system calls for process management; suitability of algorithms for
particular CPU-scheduling scenarios. Threads, processes vs. threads, models for
implementing threads.
Lect. 10: User-level and kernel-level threads; process synchronization; real & virtual
concurrency.
Lect. 11: Mutual exclusion, the critical-section problem, Dekker's & Peterson's solutions.
Lect. 12: Peterson's solution continued; inter-process communication & message passing;
classical IPC problems.
Lect. 13-14: Solutions to critical-section and synchronization problems using semaphores
(wait/signal) and their implementation.
Lect. 15: Monitors; hardware lock/unlock instructions.
Lect. 16: What is deadlock? Characterization of deadlock; deadlock vs. livelock; handling
deadlock.
Lect. 17: Deadlock prevention; deadlock avoidance & recovery.
Lect. 18-19: Deadlock-avoidance mechanisms and a comparison of prevention, avoidance,
and detection techniques.

II B <MEMORY MANAGEMENT> (No of Lectures:06)

Lect. 20: Address binding (self-study); memory hierarchy; logical vs. physical address space;
overlays; dynamic linking & loading; different memory-management techniques.
Lect. 21: Contiguous memory allocation, swapping, memory allocation with multiple partitions,
fragmentation & compaction.
Lect. 22: Paging, hardware support and protection, shared pages, page-table structures.
Lect. 23: Segmentation, paged segmentation. Case study: memory management in the Intel Pentium
architecture.
Lect. 24: Performance and comparison of memory-management techniques; virtual memory: concept &
implementation; demand paging, copy-on-write, page replacement.
Lect. 25: Page-replacement policies & thrashing; frame allocation to processes; kernel memory
allocation; paging issues (prepaging, page size, etc.). Case study: virtual memory in
Windows XP.

II C <FILE SYSTEMS> (No of Lectures:05)

Lect. 26-27: Concept of a file; user's and programmer's views of file systems; ordered & unordered
files, sequential files; different file types, directory structures, partitioning of a hard disk.
Lect. 28: High-level & low-level formatting; disk-space allocation methods; file-system creation.
Lect. 29: File-system implementation; the different layers of a file system; in-memory data
structures for file-system implementation.
Lect. 30: Packing/unpacking of blocks; file sharing; volume maps of UFS and NTFS; system
calls for file management.

UNIT III THE I/O SYSTEM (No of Lectures: 04)

Lect. 31: Classification of I/O devices: block & character devices; different I/O techniques; I/O
software goals; structure of a hard disk and its access time.
Lect. 32: Layers of the I/O system; interrupt handlers & device drivers.
Lect. 33: Device-independent I/O software; user-space I/O software; device drivers & controllers.
Lect. 34: Disk-scheduling algorithms & their comparison.

UNIT IV CASE STUDIES (No of Lectures:02)

Lect. 35: Introduction to network OS & distributed OS; introduction to real-time OS and
multiprocessor OS.
Lect. 36: Case Study 1: UNIX/Linux; Case Study 2: Windows 7/8/10, etc.

Instructions to students

1. The language for technical communication is strictly ENGLISH, so you are advised to start
speaking, writing & reading in English.
2. You are advised to attend classes sincerely & maintain proper online etiquette.
3. You will be given class activities for this course, which will be evaluated.
4. There will be 02 tests and any number of quizzes, which will be taken for evaluation.
5. You can contact me with questions about the subject matter on the WhatsApp group.
6. For the OS lab, you are advised to install Linux on your machine & start learning the
basics.
7. Read the assigned chapter from the textbook before the lecture to understand the basic
ideas & terminology used.
8. This is a fundamental course and, like other subjects, will benefit you in campus
placements, so you are advised to be diligent in lectures.

BEST OF LUCK

Common questions


Virtual memory allows an operating system to use hardware and software to compensate for physical-memory shortages, offering the illusion of a larger memory space. It is implemented through techniques like paging and segmentation, where a virtual address space is mapped onto physical memory. When a page is not in physical memory, a page fault occurs, causing the system to fetch the data from disk storage, thus optimizing memory utilization and enabling large applications to run on systems with limited RAM. This allows for more efficient handling of processes and better overall system performance.
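The page-fault behaviour described above is easy to simulate. The sketch below (illustrative only, not tied to any real kernel) counts page faults under a simple FIFO replacement policy; the reference string is the classic one used to demonstrate Belady's anomaly, where adding frames can increase faults:

```python
from collections import deque

def count_page_faults(reference_string, num_frames):
    """Count page faults under a FIFO page-replacement policy.

    A fault occurs whenever the referenced page is not resident;
    if all frames are in use, the oldest resident page is evicted.
    """
    frames = deque()   # resident pages, oldest first
    resident = set()
    faults = 0
    for page in reference_string:
        if page not in resident:
            faults += 1
            if len(frames) == num_frames:
                resident.discard(frames.popleft())  # evict oldest
            frames.append(page)
            resident.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(count_page_faults(refs, 3))  # 9 faults with 3 frames
print(count_page_faults(refs, 4))  # 10 faults with 4 frames (Belady's anomaly)
```

Swapping the FIFO queue for a recency-ordered structure turns this into an LRU simulator, which is a useful exercise alongside Lectures 24-25.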

Process management handles multiple processes using scheduling algorithms and process-synchronization techniques. Scheduling algorithms like First-Come-First-Served (FCFS), Shortest Job Next (SJN), and Round-Robin ensure the efficient allocation of CPU time to various processes, minimizing waiting and turnaround times. Process-synchronization problems, such as the critical-section problem, are resolved using solutions like Peterson's Solution and semaphores, allowing processes to safely access shared resources without conflicts. These techniques work together to ensure that resources are utilized efficiently while maintaining process execution integrity.
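As a minimal sketch of the critical-section idea, the following uses Python's `threading.Lock` in place of Peterson's entry/exit protocol (the lock plays the same role: only one thread at a time executes the guarded read-modify-write):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    """Increment a shared counter; the lock turns the
    read-modify-write into a proper critical section."""
    global counter
    for _ in range(iterations):
        with lock:        # entry section: acquire mutual exclusion
            counter += 1  # critical section
        # exit section: the lock is released automatically

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 -- no updates are lost
```

Removing the `with lock:` line makes lost updates possible, which is the race condition Peterson's solution and semaphores are designed to prevent.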

CPU scheduling algorithms, such as First-Come-First-Served, Shortest Job Next, and Round Robin, affect system performance by influencing process wait and response times. FCFS is simple but can lead to convoy effects where shorter processes wait for longer ones. SJN minimizes average waiting time by prioritizing short tasks but requires accurate prediction of execution time, which is not always feasible. Round Robin uses time slicing for fair CPU-time allocation, beneficial in time-sharing environments, but can increase context-switching overhead, affecting throughput. The choice of algorithm impacts how efficiently a system can handle process loads and user responsiveness.
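The convoy effect is easy to show numerically, and this is exactly the kind of calculation Lecture 7 asks for. The sketch below computes average waiting time for FCFS and for SJN (treated here as non-preemptive shortest-job-first), assuming all processes arrive at time 0; the burst times are illustrative:

```python
def fcfs_avg_wait(burst_times):
    """Average waiting time under First-Come-First-Served:
    each process waits for the total burst time of everything ahead of it."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)  # time spent waiting before this burst starts
        elapsed += burst
    return sum(waits) / len(waits)

def sjn_avg_wait(burst_times):
    """Non-preemptive SJN is just FCFS over bursts sorted ascending."""
    return fcfs_avg_wait(sorted(burst_times))

bursts = [24, 3, 3]  # a long job arriving first causes the convoy effect
print(fcfs_avg_wait(bursts))  # 17.0
print(sjn_avg_wait(bursts))   # 3.0
```

The same 24/3/3 workload drops from an average wait of 17 time units to 3 simply by reordering, which is why SJN is provably optimal for average waiting time when burst lengths are known.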

System calls act as the interface between the operating system and user-level applications, allowing programs to request services from the OS kernel. Examples include 'fork' for process creation, 'exec' for executing a file, and 'wait' for process waiting in UNIX/Linux systems. These calls allow user programs to perform operations like file manipulation ('open', 'read', 'write') and process control by abstracting low-level hardware interactions into higher-level operations understood by the OS. This communication is crucial for process management, file operations, and providing a standard way for applications to interact with the OS.
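Python's `os` module exposes thin wrappers over the file-manipulation calls named above, so the open/write/read sequence can be tried directly; `fork`, `exec`, and `wait` follow the same pattern via `os.fork`, `os.execv`, and `os.wait` on UNIX/Linux. The file path below is a throwaway temporary file:

```python
import os
import tempfile

# os.open/os.write/os.read map almost directly onto the
# open(2)/write(2)/read(2) system calls.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)  # open(2)
os.write(fd, b"hello, kernel\n")                      # write(2)
os.close(fd)                                          # close(2)

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 64)                                # read(2)
os.close(fd)
print(data)  # b'hello, kernel\n'
```

Note that `fd` here is a raw file descriptor (a small integer index into the kernel's open-file table), not a buffered Python file object, which is the distinction Lecture 3 draws between system calls and library-level I/O.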

Semaphores are synchronization primitives used to control access to a common resource by multiple processes in a concurrent system, using wait and signal operations to manage process access and prevent race conditions. Monitors provide a higher-level abstraction, encapsulating shared resources, procedures, and the synchronization required, ensuring that only one thread can access a monitor's procedures at any time. These tools help prevent synchronization problems like race conditions and ensure mutual exclusion in critical sections.

Deadlock prevention involves designing a system in such a way that deadlocks are structurally impossible, typically by negating one of the conditions for deadlock (e.g., through resource-allocation policies). This can lead to inefficient resource usage. Deadlock avoidance, on the other hand, dynamically evaluates whether a system will remain safe before granting resource requests, allowing more flexibility but requiring more computational resources. Deadlock recovery deals with resolving deadlocks once they occur, often through process termination or resource preemption, which can degrade system performance. Each method balances system efficiency against complexity of implementation.
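Deadlock avoidance is usually taught via the Banker's algorithm (Lectures 17-19). The following sketches its safety check on a textbook-style state; the allocation and maximum matrices are illustrative, not from any real system:

```python
def is_safe(available, allocation, need):
    """Banker's algorithm safety check: return True if some ordering
    lets every process obtain its remaining need and finish."""
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion, then releases
                # everything it currently holds back into work.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# 5 processes, 3 resource types; Need = Max - Allocation.
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]

print(is_safe([3, 3, 2], allocation, need))  # True: a safe sequence exists
```

A resource request is granted only if the state that would result still passes this check; otherwise the process waits, which is what distinguishes avoidance from prevention.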

Block I/O devices, such as hard drives and CD-ROMs, store data in fixed-size blocks, allowing the OS to manage data in larger units, which optimizes throughput in sequential read/write operations. Character I/O devices, like keyboards and mice, handle data as a stream of characters, suited for interactive or low-latency inputs. I/O management must accommodate these differences, optimizing batching and buffering for block devices while ensuring responsiveness and low latency for character devices. This classification shapes the design of device drivers and the overall I/O scheduling strategy within the operating system.
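The throughput difference can be illustrated by counting how many read operations it takes to move the same data in fixed-size blocks rather than single bytes. This sketch uses an ordinary 1 MiB file as a stand-in for a device, opened unbuffered so each `read()` reaches the OS:

```python
import os
import tempfile

# Create a 1 MiB scratch file to stand in for a block device.
path = os.path.join(tempfile.mkdtemp(), "blob.bin")
with open(path, "wb") as f:
    f.write(b"x" * (1024 * 1024))

block_reads = 0
with open(path, "rb", buffering=0) as f:  # unbuffered: each read() is a syscall
    while f.read(4096):                   # transfer in 4 KiB blocks
        block_reads += 1

print(block_reads)  # 256 block reads, versus ~1,000,000 single-byte reads
```

Reading the same file one byte at a time would need over a million system calls for the same data, which is why block devices are driven through buffered, batched transfers while character devices trade throughput for latency.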

Implementing a real-time operating system (RTOS) involves challenges such as guaranteeing deterministic task processing and meeting strict timing constraints. These challenges are addressed through priority-based scheduling, allowing higher-priority tasks to preempt lower-priority ones to meet deadlines. Other techniques include deadline-monotonic and rate-monotonic scheduling, which prioritize tasks based on deadline proximity or frequency. Additionally, an RTOS must have minimal latency and jitter, achieved through lightweight kernel design and deterministic interrupt handling, ensuring the consistent and predictable operation necessary in time-sensitive environments such as embedded systems and industrial automation.
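One concrete schedulability check in this area is the Liu & Layland utilization bound for rate-monotonic scheduling: a set of n periodic tasks is guaranteed schedulable if total CPU utilization stays at or below n(2^(1/n) - 1). The task set below is illustrative:

```python
def rm_schedulable(tasks):
    """Liu & Layland sufficient test for rate-monotonic scheduling.

    tasks: list of (computation_time, period) pairs for periodic tasks.
    Returns (guaranteed, utilization, bound); the test is sufficient
    but not necessary, so failing it does not prove unschedulability.
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound, utilization, bound

# Three hypothetical periodic tasks: (C, T) in the same time unit.
ok, u, bound = rm_schedulable([(1, 4), (1, 5), (2, 20)])
print(ok)  # True: utilization 0.55 is under the 3-task bound of ~0.7798
```

Since the bound tends toward ln 2 (about 0.693) as n grows, a rate-monotonic system that keeps total utilization under roughly 69% meets all deadlines regardless of task count.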

File system structures, such as hierarchical directory structures versus flat systems, influence organizational efficiency and usability, with hierarchical structures enhancing file-retrieval speed through organized paths. Disk-space allocation methods, like contiguous, linked, and indexed allocation, affect performance by influencing the speed of file access and modification. Contiguous allocation provides fast access but suffers from fragmentation, whereas linked allocation avoids fragmentation at the cost of slower access times. Indexed allocation balances speed and flexibility but requires more overhead for pointer management, impacting reliability and overall performance depending on the implementation.
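The access-time trade-off of linked allocation can be sketched as pointer chasing. The disk-block numbers below are hypothetical; the point is that reaching the i-th block of a linked file costs i pointer hops, while contiguous allocation reaches any block in one step as start + index:

```python
def linked_access(start_block, target_index, next_pointer):
    """Follow the per-block next pointers of a linked-allocation file
    to reach its target_index-th block; returns (block, hops)."""
    block, hops = start_block, 0
    for _ in range(target_index):
        block = next_pointer[block]  # one extra disk access per hop
        hops += 1
    return block, hops

# Hypothetical file scattered over disk blocks 9 -> 16 -> 1 -> 10 -> 25.
next_pointer = {9: 16, 16: 1, 1: 10, 10: 25}
block, hops = linked_access(9, 4, next_pointer)
print(block, hops)  # block 25 reached after 4 pointer hops

# Under contiguous allocation the same request is start + index, no hops:
print(9 + 4)  # block 13, if the file began at block 9 and ran contiguously
```

Indexed allocation removes the chain by storing all block pointers in one index block, trading the per-hop cost for the overhead of maintaining that index.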

User-level threads are managed by a user-level library rather than the operating system, allowing for quick and efficient context switching since no mode change into the kernel is required. However, in a many-to-one mapping they cannot exploit multi-core systems, since the kernel schedules the process as a single unit and only one of its threads can execute at a time. Kernel-level threads are managed by the OS, incurring more overhead due to kernel involvement but enabling true parallel execution on multi-core processors. The choice impacts performance: user-level threads offer lower overhead for non-parallel tasks, while kernel-level threads scale better across multiple processors under highly parallel workloads.
