1.1 What is an Operating System?
An Operating System (OS) is system software that acts as an intermediary between the user and the computer hardware. It manages all the software and hardware on the computer and ensures that different programs and users running on the system do not interfere with each other.
✦ Main roles of an OS:
- Controls hardware (CPU, memory, I/O devices).
- Runs applications and programs.
- Provides a user interface.
- Manages files and folders.
✦ Simple analogy:
Think of the OS like a manager in a company. It assigns tasks, ensures that work is done efficiently, and handles communication between departments (programs and hardware).
1.2 History and Evolution of Operating Systems
The development of OS has gone through several generations. Here’s a timeline:
➤ 1st Generation (1940s – Early 1950s): No OS
- Computers had no operating systems.
- Programmers interacted directly with the hardware.
- Programs were entered using punch cards or switches.
- Very slow and error-prone.
➤ 2nd Generation (1950s – 1960s): Batch Operating Systems
- Jobs were grouped (batched) and processed without user interaction.
- Input via punch cards.
- OS handled job sequencing and loading.
➤ 3rd Generation (1960s – 1980s): Multiprogramming and Time-sharing OS
- Multiple programs loaded in memory and managed by OS.
- Time-sharing allowed multiple users to interact with the computer at once.
- UNIX OS emerged during this time.
➤ 4th Generation (1980s – 1990s): GUI and Personal Computers
- Introduction of Graphical User Interface (GUI).
- Popular OS: MS-DOS, Windows, macOS.
- More user-friendly.
➤ 5th Generation (1990s – Present): Networked and Mobile Systems
- Internet support, cloud computing.
- Mobile OS like Android and iOS.
- Virtualization and multi-core processing.
1.3 Functions of an Operating System
An OS performs several key functions to keep the system running smoothly:
🔹 1. Process Management
- Manages processes (programs in execution).
- Handles creation, scheduling, and termination of processes.
🔹 2. Memory Management
- Allocates memory to processes.
- Ensures efficient memory use and reclaims memory when processes terminate.
🔹 3. File System Management
- Manages file creation, deletion, storage, and access.
- Organizes data into directories and subdirectories.
🔹 4. Device Management
- Coordinates communication between hardware devices and software.
- Uses device drivers to control I/O devices.
🔹 5. Security and Access Control
- Protects data and system from unauthorized access.
- Handles user authentication (like passwords or biometrics).
🔹 6. User Interface
- Provides CLI (Command-Line Interface) or GUI (Graphical User Interface) for user interaction.
🔹 7. Resource Allocation
- Manages system resources (CPU, RAM, etc.) fairly among users and applications.
1.4 Types of Operating Systems
Operating Systems are categorized based on their features and usage.
A. Batch Operating System
✦ Description:
- Executes jobs in batches without user interaction.
- Suitable for repetitive tasks.
✦ Example:
- IBM OS/360
✦ Pros:
- Good for large volumes of similar tasks.
- Reduces idle time of CPU.
✦ Cons:
- No interaction with the system once a job is submitted.
- Debugging is difficult.
B. Time-Sharing Operating System
✦ Description:
- Allows multiple users to share system resources at the same time.
- CPU switches rapidly between users (multitasking).
✦ Example:
- UNIX
✦ Pros:
- Interactive user experience.
- Efficient resource utilization.
✦ Cons:
- Complex to implement.
- Higher maintenance cost.
C. Distributed Operating System
✦ Description:
- Controls multiple computers connected via a network.
- Appears to the user as a single system.
✦ Example:
- Amoeba, Plan 9
✦ Pros:
- Increased performance.
- High reliability and fault tolerance.
✦ Cons:
- Security issues over network.
- Complex management.
D. Real-Time Operating System (RTOS)
✦ Description:
- Processes data as it comes in, with very short response time.
- Used in systems requiring immediate response.
✦ Types:
- Hard Real-Time: Deadline must be met (e.g., medical systems).
- Soft Real-Time: Delays acceptable to a degree (e.g., video streaming).
✦ Example:
- VxWorks, RTLinux
✦ Pros:
- Very reliable and timely.
- Consistent and predictable.
✦ Cons:
- Expensive to design.
- Limited multitasking.
E. Embedded Operating System
✦ Description:
- Designed for specific devices (not general-purpose).
- Minimal resources, highly efficient.
✦ Example:
- Embedded Linux, FreeRTOS
✦ Used In:
- Smart TVs, ATMs, Microwave ovens, Traffic lights
✦ Pros:
- Low resource consumption.
- Fast booting.
✦ Cons:
- Limited functionalities.
- Hard to update or upgrade.
F. Mobile Operating System
✦ Description:
- Designed specifically for smartphones and tablets.
- Supports wireless communication, touchscreen, sensors.
✦ Examples:
- Android, iOS, HarmonyOS
✦ Pros:
- User-friendly.
- Optimized for mobile hardware.
✦ Cons:
- Limited file access compared to desktop OS.
- App compatibility issues between platforms.
2. System Architecture
System architecture defines how the hardware and software components of a computer system are organized and interact with each other. Understanding this helps students see how the OS fits into the bigger picture.
2.1 Computer System Structure
A computer system consists of four main components:
🔹 1. Hardware
- Includes physical devices like CPU, memory, disk drives, keyboard, and display.
- Provides the basic computing resources.
🔹 2. Operating System
- Acts as a manager and controller of the hardware.
- Coordinates the execution of programs and manages hardware resources.
🔹 3. Application Programs
- Software like browsers, media players, games, or text editors.
- Perform specific tasks for users.
🔹 4. Users
- The people using the computer system.
🧠 Working Flow:
When you open an app (like Chrome):
- The app sends a request to the OS.
- The OS uses hardware to perform the task (like display output or access the internet).
- The user sees the result.
2.2 Kernel vs User Mode
The CPU has two different modes of operation to protect system resources:
🔸 User Mode:
- Where regular applications run.
- Limited access to hardware.
- Cannot directly interact with devices like disk or memory.
- Helps protect the system from buggy or malicious code.
🔸 Kernel Mode (Supervisor Mode):
- Where the operating system runs.
- Full access to all hardware and system resources.
- Can execute any instruction.
🧠 Why two modes?
To prevent applications from damaging or interfering with system operations. If every program could control hardware directly, it would be easy to crash the system.
2.3 System Calls
System Calls are how programs interact with the Operating System. When a program needs to do something like read a file or allocate memory, it makes a system call.
🔹 Examples of System Calls:
- read() – Read from a file.
- write() – Write to a file.
- fork() – Create a new process.
- exec() – Execute a new program.
- open() – Open a file.
🔹 Types of System Calls:
- Process Control – Start, stop, or manage processes.
- File Management – Create, read, write, or delete files.
- Device Management – Access devices like printers or USBs.
- Information Maintenance – Get system time, OS version.
- Communication – Allow two processes to exchange information.
🧠 How it works:
- Program requests service (e.g., read a file).
- It makes a system call.
- OS takes over (in kernel mode), performs the task.
- Control returns to the program (in user mode).
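To make this flow concrete, here is a minimal C sketch using the POSIX calls listed above (open, read, write). The filename example.txt is only a placeholder; each call traps into the kernel, which does the work in kernel mode and then returns control to the program.

```c
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    /* open() asks the kernel for a file descriptor; -1 means failure */
    int fd = open("example.txt", O_RDONLY);
    if (fd < 0)
        return 1;

    char buf[128];
    /* read() asks the kernel to copy up to 128 bytes into buf */
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n > 0)
        write(1, buf, (size_t)n);   /* write() sends the bytes to standard output */

    close(fd);                      /* release the descriptor */
    return 0;
}
```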
2.4 Monolithic, Microkernel, and Hybrid Architectures
These are different designs of how the operating system is built internally.
A. Monolithic Architecture
✦ Description:
- Entire OS runs in a single large program in kernel mode.
- All services (file system, memory management, device drivers) are part of the kernel.
✦ Example:
- Traditional UNIX, MS-DOS
✦ Pros:
- Fast and efficient (since everything runs together).
- Simple to build initially.
✦ Cons:
- Hard to maintain or debug.
- One bug can crash the entire system.
B. Microkernel Architecture
✦ Description:
- Only essential services (like memory management, process scheduling) run in kernel mode.
- Other services (file system, device drivers) run in user mode.
✦ Example:
- MINIX, QNX
✦ Pros:
- More secure and stable.
- Easier to maintain or update services.
✦ Cons:
- Slightly slower due to frequent mode switching between user and kernel mode.
C. Hybrid Architecture
✦ Description:
- Combines features of both monolithic and microkernel architectures.
- Some parts of the OS (like drivers) may run in kernel mode for performance, while others stay in user mode for safety.
✦ Example:
- Windows NT, macOS, modern Linux
✦ Pros:
- Balances performance and security.
- Modular design allows for easier updates and better protection.
✦ Cons:
- More complex than monolithic systems.
✅ Summary Table:
Architecture | Kernel Size | Speed | Stability | Example OS |
---|---|---|---|---|
Monolithic | Large | Fast | Less Safe | UNIX, MS-DOS |
Microkernel | Small | Slower | Safer | MINIX, QNX |
Hybrid | Medium | Good | Good | Windows, macOS, Linux |
3. Process Management
In a modern OS, many tasks are happening at once. The OS is responsible for managing all of them efficiently. These tasks are called processes.
3.1 Process Concept
A process is an active instance of a program in execution. While a program is just a file (passive), a process is what you get when the program is run.
🧠 Example:
If you open Google Chrome, each open tab might be a different process or thread, depending on how it’s managed.
🔹 A process contains:
- Program code (text section)
- Program counter (current instruction address)
- Stack (function calls, parameters, local variables)
- Data section (global variables)
- Heap (dynamically allocated memory)
✦ Types of processes:
- Foreground Process – Interacts with the user (e.g., browser)
- Background Process – Runs silently (e.g., antivirus scan)
3.2 Process Lifecycle
A process doesn’t just start and finish—it goes through several stages:
🔸 1. New – Process is being created.
🔸 2. Ready – Waiting to be assigned to the CPU.
🔸 3. Running – Instructions are being executed by CPU.
🔸 4. Waiting (Blocked) – Waiting for an event (e.g., input/output).
🔸 5. Terminated – Process has finished or was killed.
🔁 The OS moves processes between these states using a scheduler.
3.3 Process Control Block (PCB)
The PCB is like an ID card or record sheet for every process. It stores important information about each running process.
📘 Contains:
- Process ID (PID)
- Process State (Ready, Running, etc.)
- Program Counter (where it left off)
- CPU Registers
- Memory Limits (where it’s stored in RAM)
- I/O status (devices in use)
- Accounting Information (CPU usage, process time)
🧠 Why is PCB important?
Without the PCB, the OS wouldn’t know how to pause and resume processes.
3.4 Threads vs Processes
🔸 Process:
- Independent, has its own memory and resources.
- Switching between processes is heavier (more overhead).
🔸 Thread:
- A lightweight unit within a process.
- Threads in the same process share memory and resources.
✦ Analogy:
- A process is a house.
- Threads are people living in that house. They share the same kitchen, bathroom (memory), etc.
📋 Key Differences:
Feature | Process | Thread |
---|---|---|
Memory | Separate | Shared |
Overhead | High | Low |
Communication | Slower (via IPC) | Faster (shared) |
Creation Time | Slower | Faster |
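A small illustration of the "shared memory" row above, assuming POSIX threads (pthreads) are available; the worker function and counter names are hypothetical. Both threads update the same global variable, which two separate processes could not do without an explicit IPC mechanism. A real program would also guard the counter with a mutex (see Section 5).

```c
#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;              /* lives in the process's data section */

void *worker(void *arg) {
    (void)arg;
    shared_counter++;                /* both threads see the same variable */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);
    return 0;
}
```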
3.5 Inter-Process Communication (IPC)
Sometimes processes need to talk to each other – to exchange data or coordinate actions. This is done via IPC mechanisms.
🔹 Methods of IPC:
1. Shared Memory
- Multiple processes access the same memory space.
- Fast, but needs synchronization (like semaphores).
2. Message Passing
- Processes send and receive messages.
- Slower, but safer and easier to manage.
✦ Examples:
- Pipes
- Message Queues
- Sockets
- Semaphores
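As a sketch of the message-passing style, the following C program (assuming a POSIX system) lets a parent and its child exchange a short message through a pipe; the message text is illustrative.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    pipe(fd);                          /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                 /* child acts as the sender */
        close(fd[0]);
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }

    /* parent acts as the receiver */
    close(fd[1]);
    char buf[64];
    read(fd[0], buf, sizeof(buf));
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}
```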
3.6 Context Switching
When the OS stops one process and starts/resumes another, it performs a context switch.
🔁 What is switched:
- CPU registers
- Program counter
- Stack pointers
- PCB of current and next process
✦ Steps:
- Save the state (context) of the current process to its PCB.
- Load the state of the new process from its PCB.
- Transfer control to the new process.
🧠 Why is it important?
Without context switching, multitasking would not be possible. It allows the CPU to serve multiple processes quickly.
⚠️ Downside:
Context switching takes time (CPU overhead), which is why fewer switches = better performance.
✅ Summary of Key Terms:
Term | Meaning |
---|---|
Process | A running program |
Thread | A smaller, faster unit of a process |
PCB | Data structure storing process information |
IPC | Communication between processes |
Context Switch | Switching the CPU from one process to another |
4. CPU Scheduling
When multiple processes are waiting to use the CPU, the CPU scheduler decides which one gets to run next. This improves efficiency and ensures that all tasks are handled fairly and quickly.
4.1 Scheduling Concepts
🔹 What is CPU Scheduling?
CPU Scheduling is the process of selecting a process from the ready queue and assigning it to the CPU for execution.
🧠 Why is it needed?
- In a multitasking system, there are often many processes ready to run.
- The CPU can only execute one process at a time (in single-core systems).
- CPU scheduling optimizes CPU usage and keeps the system responsive.
🔸 Key Terms:
- Scheduler: The part of the OS that performs scheduling.
- Ready Queue: List of processes that are waiting to run.
- Dispatcher: Gives control of the CPU to the selected process.
4.2 Scheduling Criteria
When evaluating a scheduling algorithm, we consider several criteria:
Criterion | Description |
---|---|
CPU Utilization | Keep the CPU as busy as possible (ideal: 100%). |
Throughput | Number of processes completed per unit time. |
Turnaround Time | Time from submission to completion of a process. |
Waiting Time | Total time a process spends waiting in the ready queue. |
Response Time | Time from submission to the first output/response. |
Fairness | Every process should get a chance to use the CPU. |
4.3 Scheduling Algorithms
Let’s explore the five major CPU scheduling algorithms, each with different strategies:
A. FCFS (First-Come, First-Served)
- How it works: The process that arrives first is executed first.
- Non-preemptive (cannot be interrupted).
🧠 Example:
Processes:
P1 – Arrival: 0, Burst: 5
P2 – Arrival: 1, Burst: 3
P3 – Arrival: 2, Burst: 1
➡ Execution Order: P1 → P2 → P3
✅ Pros:
- Simple to implement.
- Fair in arrival order.
❌ Cons:
- Long waiting time for short jobs (Convoy Effect).
- Not ideal for interactive systems.
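A rough sketch of how the FCFS waiting times for the example above could be computed; the array names are illustrative and the processes are assumed to be sorted by arrival time. It prints waits of 0, 4, and 6 units, an average of about 3.33.

```c
#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2};
    int burst[]   = {5, 3, 1};
    int n = 3;

    int time = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        if (time < arrival[i])
            time = arrival[i];             /* CPU idles until the job arrives */
        int wait = time - arrival[i];      /* time spent in the ready queue */
        total_wait += wait;
        printf("P%d waits %d units\n", i + 1, wait);
        time += burst[i];                  /* run the job to completion */
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}
```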
B. SJF (Shortest Job First)
- How it works: Process with the shortest burst time runs first.
- Can be preemptive (Shortest Remaining Time First) or non-preemptive.
✅ Pros:
- Minimizes average waiting time.
❌ Cons:
- Difficult to predict burst time.
- May cause starvation for long processes.
C. Round Robin (RR)
- How it works: Each process gets a fixed time slice (quantum) in rotation.
- Preemptive – if time runs out, the process goes back to the ready queue.
🧠 Example:
Time quantum = 2 units
Processes: P1 (5), P2 (3), P3 (4)
➡ Order: P1(2) → P2(2) → P3(2) → P1(2) → P2(1) → P3(2) → P1(1)
✅ Pros:
- Good for interactive systems.
- Fair to all processes.
❌ Cons:
- Performance depends on the time quantum.
- Too short → overhead; Too long → behaves like FCFS.
D. Priority Scheduling
- How it works: Each process is assigned a priority number. Higher priority gets the CPU first.
- Can be preemptive or non-preemptive.
✅ Pros:
- Important tasks get executed faster.
❌ Cons:
- Low-priority processes may starve.
- Solution: Aging – increase priority of waiting processes over time.
E. Multilevel Queue Scheduling
- How it works: Processes are divided into multiple queues based on priority/type (e.g., system, interactive, batch).
- Each queue may use a different algorithm (e.g., Round Robin in one queue, FCFS in another).
✅ Pros:
- Good for systems with different process types.
- Tailored scheduling per group.
❌ Cons:
- Rigid – once a process is assigned to a queue, it cannot move to another.
- May lead to poor CPU utilization if some queues are empty.
4.4 Multithreading and Scheduling
Modern OS supports multithreading, where multiple threads run within a single process.
🔹 How Scheduling Works with Threads:
- The OS may schedule individual threads instead of whole processes.
- Each thread can be in different states (ready, running, etc.).
🔸 Thread Scheduling Types:
- User-Level Threads:
- Managed by a user-level library.
- Faster but cannot use multiple cores efficiently.
- Kernel-Level Threads:
- Managed by the OS.
- True parallelism on multicore systems.
🧠 Benefit:
Thread scheduling allows better performance in multi-core CPUs and supports concurrent execution of lightweight tasks.
✅ Summary Table:
Algorithm | Preemptive | Ideal For | Major Drawback |
---|---|---|---|
FCFS | No | Simple tasks | High waiting time |
SJF | Optional | Short processes | Starvation |
Round Robin | Yes | Time-sharing systems | Depends on time quantum |
Priority | Optional | Important tasks | Starvation |
Multilevel Queue | Optional | Mixed workload systems | Inflexibility |
5. Synchronization and Concurrency
When multiple processes or threads run at the same time (concurrently), they may share data or resources (like a file or printer). Without proper control, they can cause data corruption or system errors.
5.1 The Critical Section Problem
A critical section is a part of a program where shared resources (like variables or files) are accessed.
🔹 Problem:
If two or more processes enter the critical section at the same time, they may change the data in unexpected ways.
🧠 Example:
- Process A and B both try to update a shared bank balance.
- If not synchronized, one update may overwrite the other.
🔸 Goal:
Ensure that only one process enters the critical section at a time.
🔒 Solution Requirements (3 Conditions):
- Mutual Exclusion – Only one process in the critical section.
- Progress – No unnecessary delay if no process is in the section.
- Bounded Waiting – A process should not wait forever.
5.2 Mutexes, Semaphores, and Monitors
These are synchronization tools to solve the critical section problem.
A. Mutex (Mutual Exclusion Object)
- Binary Lock (either locked or unlocked).
- Before entering the critical section, the process locks the mutex.
- After exiting, it unlocks it.
🔑 Example (Pseudocode):
mutex.lock();
// critical section
mutex.unlock();
✅ Pros:
- Simple and fast.
❌ Cons:
- Can cause deadlocks if not used carefully.
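For comparison with the pseudocode above, here is a minimal sketch using POSIX threads, where a pthread_mutex_t guards a shared balance; the deposit function and loop counts are made up. Without the lock, the two threads could interleave their updates and lose increments.

```c
#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
long balance = 0;                      /* shared data */

void *deposit(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* enter the critical section */
        balance++;                     /* only one thread updates at a time */
        pthread_mutex_unlock(&lock);   /* leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, deposit, NULL);
    pthread_create(&b, NULL, deposit, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("balance = %ld\n", balance); /* always 200000 with the mutex held */
    return 0;
}
```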
B. Semaphore
- A counter used to control access to shared resources.
- Two types:
- Binary Semaphore (like a mutex: 0 or 1)
- Counting Semaphore (value >1, used for multiple identical resources)
🔑 Operations:
- wait() (also called P()): Decreases the count. If the count becomes negative, the process waits.
- signal() (also called V()): Increases the count and wakes a waiting process.
🧠 Example Use:
To allow 3 users to access a shared printer:
Semaphore printer = 3;
wait(printer); // try to access
// use printer
signal(printer); // release access
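A possible concrete version of the printer example using POSIX counting semaphores (sem_init, sem_wait, sem_post); the thread function and the sleep standing in for "use printer" are only illustrative. At most three of the five threads can be inside the printing section at once.

```c
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

sem_t printer;                         /* counting semaphore: 3 printer slots */

void *print_job(void *arg) {
    long id = (long)arg;
    sem_wait(&printer);                /* wait(): blocks if all 3 slots are taken */
    printf("user %ld is printing\n", id);
    sleep(1);                          /* pretend to use the printer */
    sem_post(&printer);                /* signal(): release the slot */
    return NULL;
}

int main(void) {
    sem_init(&printer, 0, 3);          /* initial count = 3 */
    pthread_t t[5];
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, print_job, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&printer);
    return 0;
}
```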
C. Monitors
- A high-level abstraction that wraps shared data with procedures to access them.
- Allows only one process inside at a time, automatically handles synchronization.
🔑 Key Feature:
- Easier to implement than semaphores.
- Common in object-oriented languages like Java.
5.3 Deadlocks
A deadlock occurs when a group of processes are waiting for each other forever, and none can proceed.
🧠 Real-life Analogy:
Two people each hold one pen and need the other's pen to finish writing. Neither lets go of their own pen, so both wait forever.
🔹 Necessary Conditions for Deadlock (Coffman’s Conditions):
- Mutual Exclusion – Resources can’t be shared.
- Hold and Wait – A process holds one resource and waits for another.
- No Preemption – Resources can’t be forcibly taken.
- Circular Wait – A closed loop of waiting processes exists.
🧩 Dealing with Deadlocks:
🔸 1. Deadlock Prevention
- Design the system so that at least one of the four conditions never holds.
- Example: Don’t allow hold-and-wait.
🔸 2. Deadlock Avoidance
- Use Banker’s Algorithm (like a loan officer checking if a loan is safe).
- Allocate resources only if it leads to a safe state.
🔸 3. Deadlock Detection
- Allow deadlock to occur and then detect it using a graph or matrix.
- Recover afterward.
🔸 4. Deadlock Recovery
- Kill one or more processes to break the cycle.
- Preempt resources or roll back to a safe state.
5.4 Classical Synchronization Problems
These are standard problems used to practice synchronization techniques.
A. Dining Philosophers Problem
🧠 Scenario:
- 5 philosophers sitting around a table, 1 fork between each pair.
- Each needs 2 forks to eat, but only one fork can be picked at a time.
🔸 Problem:
Risk of deadlock if all pick up one fork and wait for the second.
🔒 Solution:
- Use semaphores or ensure only 4 philosophers can sit at a time.
B. Producer-Consumer Problem (Bounded Buffer)
🧠 Scenario:
- One process (Producer) creates data and puts it in a buffer.
- Another process (Consumer) removes data and uses it.
- The buffer has limited size.
🔸 Problem:
- Producer should wait if buffer is full.
- Consumer should wait if buffer is empty.
🔒 Solution:
- Use semaphores to track buffer space and synchronize access.
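A minimal sketch of the bounded-buffer solution, assuming POSIX semaphores and threads: two semaphores count the empty and full slots while a mutex protects the buffer indices. Buffer size and item counts are arbitrary illustration values.

```c
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define N 5                            /* buffer capacity */
int buffer[N], in = 0, out = 0;

sem_t empty_slots;                     /* counts free positions, starts at N */
sem_t full_slots;                      /* counts filled positions, starts at 0 */
pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    (void)arg;
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty_slots);        /* wait if the buffer is full */
        pthread_mutex_lock(&mtx);
        buffer[in] = item;
        in = (in + 1) % N;
        pthread_mutex_unlock(&mtx);
        sem_post(&full_slots);         /* announce a new item */
    }
    return NULL;
}

void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 10; i++) {
        sem_wait(&full_slots);         /* wait if the buffer is empty */
        pthread_mutex_lock(&mtx);
        int item = buffer[out];
        out = (out + 1) % N;
        pthread_mutex_unlock(&mtx);
        sem_post(&empty_slots);        /* free a slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```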
C. Readers-Writers Problem
🧠 Scenario:
- Multiple readers can read from a database at the same time.
- But if a writer is updating, no one else should read/write.
🔒 Goal:
- Maximize reader concurrency.
- Ensure mutual exclusion for writers.
🔒 Solution:
- Use reader and writer locks or semaphores.
✅ Summary of Key Concepts:
Concept | Description |
---|---|
Critical Section | Code accessing shared data |
Mutex | Lock that ensures mutual exclusion |
Semaphore | Counter for managing access |
Monitor | Object that automatically manages access |
Deadlock | Processes stuck waiting on each other |
Dining Philosophers | Classic deadlock problem |
Producer-Consumer | Buffer management problem |
Readers-Writers | Synchronizing read/write access |
6. Memory Management
Memory Management is the process by which the Operating System controls and coordinates computer memory, assigning portions called blocks to various running programs to optimize overall system performance.
6.1 Memory Allocation Techniques
When a program runs, it needs memory. The OS decides how to allocate and manage this memory.
🔹 Types of Allocation:
A. Single Contiguous Allocation
- All memory (except OS memory) is allocated to one single process.
- Simple but wastes memory; used in very old systems.
B. Fixed Partitioning
- Memory is divided into fixed-sized partitions.
- Each partition holds one process.
❌ Drawback:
- Internal Fragmentation – unused space inside partition if process is smaller.
C. Dynamic Partitioning
- Partitions are created dynamically as needed.
- No fixed sizes.
❌ Drawback:
- External Fragmentation – free space scattered in small pieces.
D. Best Fit, First Fit, Worst Fit Algorithms
- First Fit: Allocate the first hole large enough.
- Best Fit: Allocate the smallest hole that fits.
- Worst Fit: Allocate the largest hole.
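The placement strategies above can be sketched as simple searches over a list of free-hole sizes; the hole sizes below are made-up values for illustration. For a 212 KB request, first fit picks the 500 KB hole, while best fit picks the 300 KB hole.

```c
#include <stdio.h>

/* Returns the index of the first hole large enough for the request, or -1. */
int first_fit(const int holes[], int n, int request) {
    for (int i = 0; i < n; i++)
        if (holes[i] >= request)
            return i;
    return -1;
}

/* Best fit: the smallest hole that still fits. */
int best_fit(const int holes[], int n, int request) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= request && (best == -1 || holes[i] < holes[best]))
            best = i;
    return best;
}

int main(void) {
    int holes[] = {100, 500, 200, 300, 600};   /* free block sizes in KB */
    printf("first fit for 212 KB -> hole %d\n", first_fit(holes, 5, 212)); /* 1 */
    printf("best  fit for 212 KB -> hole %d\n", best_fit(holes, 5, 212));  /* 3 */
    return 0;
}
```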
6.2 Contiguous and Non-Contiguous Allocation
A. Contiguous Allocation
- Process is stored in a single block of memory.
- Easy to implement, fast access.
❌ Problem:
- Leads to fragmentation (wasted space).
B. Non-Contiguous Allocation
- Process is divided and stored in multiple blocks scattered throughout memory.
✅ Benefit:
- More flexible and uses memory efficiently.
- Enabled by Paging and Segmentation.
6.3 Paging and Segmentation
These are two non-contiguous memory allocation techniques used to manage memory efficiently.
A. Paging
🔹 Concept:
- Memory is divided into fixed-size blocks:
- Pages (process side)
- Frames (physical memory side)
- Each page maps to a frame.
🧠 Example:
If a process has 4 pages and memory has 4 available frames, the pages are loaded into the frames.
✅ Pros:
- No external fragmentation.
- Easy to implement.
❌ Cons:
- Can cause internal fragmentation if the process doesn’t fully use a page.
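A tiny sketch of the address translation that paging implies: the page number indexes a page table mapping pages to frames, and the offset within the page is carried over unchanged. The page size and page-table contents below are assumed values, not from any real system.

```c
#include <stdio.h>

int main(void) {
    int page_size = 1024;                 /* 1 KB pages (assumed) */
    int page_table[] = {5, 2, 7, 0};      /* page i is stored in frame page_table[i] */

    int logical  = 3100;                  /* a logical address inside page 3 */
    int page     = logical / page_size;   /* 3100 / 1024 = 3 */
    int offset   = logical % page_size;   /* 3100 % 1024 = 28 */
    int frame    = page_table[page];      /* page 3 -> frame 0 */
    int physical = frame * page_size + offset;

    printf("logical %d -> page %d, offset %d -> physical %d\n",
           logical, page, offset, physical);
    return 0;
}
```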
B. Segmentation
🔹 Concept:
- Divides memory into logical segments (like code, data, stack).
- Each segment has different size.
✅ Pros:
- More logical and modular than paging.
- Good for protection and sharing.
❌ Cons:
- Can cause external fragmentation.
6.4 Virtual Memory
🔹 Concept:
Virtual Memory is a technique that allows the system to use more memory than physically available by using a portion of the hard disk (called swap space or page file) as temporary memory.
🧠 Analogy:
Like using a notebook when your whiteboard (RAM) is full – you swap data in and out.
✅ Benefits:
- Enables execution of large programs.
- Multitasking becomes easier.
- Each process gets its own address space.
6.5 Demand Paging, Page Replacement Algorithms
A. Demand Paging
🔹 Concept:
- Only loads a page into memory when it is needed (on demand).
- Pages not in memory cause a page fault.
✅ Advantage:
- Saves memory.
- Fast startup time for programs.
B. Page Replacement Algorithms
When memory is full, and a new page is needed, the OS replaces an existing page using these strategies:
1. FIFO (First-In, First-Out)
- Replaces the oldest page.
🧠 Example:
Pages in memory: [A, B, C]
New page: D → remove A (oldest).
✅ Simple, but not always optimal.
2. LRU (Least Recently Used)
- Replaces the page that hasn’t been used for the longest time.
🧠 Tracks recent usage.
✅ Better than FIFO, but harder to implement.
3. Optimal Replacement
- Replaces the page that won’t be used for the longest future time.
✅ Best performance.
❌ Not possible in real time (requires future knowledge).
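A minimal FIFO sketch that counts page faults for an assumed reference string with 3 frames; LRU would differ only in which frame is chosen for eviction (least recently used instead of oldest loaded).

```c
#include <stdio.h>

int main(void) {
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};  /* reference string */
    int n = sizeof(refs) / sizeof(refs[0]);
    int frames[3] = {-1, -1, -1};          /* 3 empty frames */
    int next = 0, faults = 0;              /* next = oldest frame to replace */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < 3; f++)
            if (frames[f] == refs[i]) { hit = 1; break; }
        if (!hit) {
            frames[next] = refs[i];        /* evict the oldest page (FIFO) */
            next = (next + 1) % 3;
            faults++;
        }
    }
    printf("page faults = %d\n", faults);  /* 9 for this reference string */
    return 0;
}
```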
6.6 Thrashing
🔹 What is Thrashing?
Thrashing occurs when a system spends more time swapping pages in and out of memory than executing processes.
🧠 Why it happens:
- Too many processes.
- Not enough memory.
- High page fault rate.
🔒 Solution:
- Reduce number of processes.
- Increase RAM.
- Use smart page replacement algorithms.
- Use Working Set Model – only keep active pages in memory.
✅ Summary Table:
Concept | Description |
---|---|
Contiguous Allocation | Memory allocated in a single block |
Paging | Memory divided into fixed-size pages/frames |
Segmentation | Logical division into code/data/stack |
Virtual Memory | Uses disk as extra RAM |
Demand Paging | Loads pages only when needed |
Thrashing | Too much paging; system slows down |
7. Storage Management
Storage management in Operating Systems refers to how data is stored, organized, and accessed on storage devices like hard drives, SSDs, or USB drives. The OS manages files, directories, allocation, and free space to ensure data is stored efficiently and reliably.
7.1 File System Concepts
🔹 What is a File?
- A file is a named collection of related information stored on disk.
- It can be a document, program, image, or any other type of data.
🔹 What is a File System?
- A file system is the method the OS uses to store and organize files on storage devices.
- It includes:
- File names
- Locations
- Sizes
- Permissions (who can access)
✅ Common File Systems:
- FAT32, NTFS (Windows)
- ext3, ext4 (Linux)
- APFS (macOS)
7.2 File Access Methods
How a user or program accesses the contents of a file.
🔸 1. Sequential Access
- Data is read one record after another.
- Like a cassette tape – you must go through in order.
✅ Simple and fast for reading whole files
❌ Slow if you need data in the middle
🔸 2. Direct Access (Random Access)
- Access any part of the file instantly, like jumping to a chapter in a book.
✅ Fast, ideal for databases
❌ More complex to implement
🔸 3. Indexed Access
- An index is created for quick lookup (like an index at the back of a book).
- The index tells where each block of data is stored.
✅ Fast for large, structured data
❌ Needs extra space for the index
7.3 Directory Structure
A directory contains information about files — names, sizes, types, and locations. Like folders on your computer.
🔹 Types of Directory Structures:
1. Single-Level Directory
- All files are in one folder.
- Simple, but confusing with many users or files.
2. Two-Level Directory
- Each user has their own folder (user directory).
- Reduces name conflicts.
3. Tree-Structured Directory
- Like a tree: folders can have subfolders.
- Most common today (e.g., Windows Explorer).
4. Acyclic Graph Directory
- Allows sharing of files and folders.
- Prevents cycles (looping).
5. General Graph Directory
- Allows complete flexibility, including cycles.
- Needs cycle detection to avoid infinite loops.
7.4 File System Implementation
How the OS actually stores and manages files on a disk.
🔹 Key Concepts:
- Boot Control Block – contains info to start OS
- Volume Control Block – info about file system size, blocks, etc.
- Directory Structure – tracks file names and paths
- File Control Block (FCB) – stores file metadata (permissions, location)
7.5 Allocation Methods
How disk space is assigned to files.
🔸 1. Contiguous Allocation
- All blocks of a file are stored together.
✅ Fast access
❌ File size must be known in advance; causes fragmentation
🔸 2. Linked Allocation
- Each block has a pointer to the next block.
✅ No external fragmentation
❌ Slower access; direct access is difficult
🔸 3. Indexed Allocation
- Each file has an index block containing the addresses of all its blocks.
✅ Direct access is easy
❌ The index takes up space, especially for large files
🧠 Example:
If a file needs 4 blocks:
- Contiguous: Block 5 → 6 → 7 → 8
- Linked: Block 5 → Block 13 → Block 9 → Block 27
- Indexed: Index block points to blocks 5, 13, 9, 27
7.6 Free Space Management
How the OS keeps track of unused disk space so new files can be stored.
🔹 Methods:
1. Bit Vector (Bitmap)
- Each bit represents a block: 1 = used, 0 = free
- Easy to find free blocks
2. Linked List of Free Blocks
- All free blocks are linked together
- Simple, but slow for finding large contiguous space
3. Grouping
- Store addresses of several free blocks in a block
- Faster allocation than linked list
4. Counting
- Store starting address and number of free blocks together
- Efficient for large free block groups
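As a sketch of the bitmap approach (method 1 above), each bit marks one block as used (1) or free (0); the bitmap contents below are invented for illustration, covering a 32-block "disk".

```c
#include <stdio.h>
#include <stdint.h>

/* One byte tracks 8 disk blocks: bit = 1 means the block is in use. */
uint8_t bitmap[4] = {0xFF, 0xDF, 0xFF, 0x0F};   /* example 32-block disk */

int find_free_block(int total_blocks) {
    for (int b = 0; b < total_blocks; b++)
        if (!(bitmap[b / 8] & (1 << (b % 8))))   /* bit not set -> block free */
            return b;
    return -1;                                   /* disk full */
}

int main(void) {
    int block = find_free_block(32);
    printf("first free block: %d\n", block);     /* 13 for this bitmap */
    if (block >= 0)
        bitmap[block / 8] |= (1 << (block % 8)); /* mark it as used */
    return 0;
}
```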
✅ Summary Table
Concept | Description |
---|---|
File System | Method OS uses to store/manage files |
Access Methods | Sequential, Direct, Indexed |
Directory Structures | Single, Two-Level, Tree, Graph |
Allocation Methods | Contiguous, Linked, Indexed |
Free Space Management | Bitmaps, Linked list, Counting |
8. I/O Systems
Input/Output (I/O) systems manage how the computer communicates with external devices like keyboards, mice, printers, disks, and network cards. The Operating System (OS) acts as a bridge between hardware and applications.
8.1 I/O Hardware and Controllers
🔹 I/O Devices
- Devices are classified as:
- Input (e.g., keyboard, mouse, scanner)
- Output (e.g., monitor, printer)
- Both (e.g., hard disk, touchscreen)
🔹 I/O Controller
- A small device or chip that controls I/O devices.
- Acts as a middleman between the device and CPU.
🔹 Main Components of I/O Controller:
- Control Registers – give commands to the device
- Status Registers – tell the device’s status (ready, busy)
- Data Buffer – stores data being transferred
- Device Logic – the actual hardware mechanism
8.2 Polling, Interrupts, and DMA
These are techniques for handling I/O communication.
🔸 Polling
- CPU repeatedly checks the device status to see if it needs attention.
✅ Simple
❌ Wastes CPU time
🧠 Example: Constantly checking if the printer is ready.
🔸 Interrupts
- The device notifies the CPU when it’s ready.
- More efficient than polling.
✅ Saves CPU time
❌ Needs extra hardware and software (interrupt handlers)
🧠 Example: Keyboard sends an interrupt when a key is pressed.
🔸 DMA (Direct Memory Access)
- Devices transfer data directly to memory, without CPU involvement.
✅ Fastest method
❌ More complex hardware required
🧠 Used for high-speed devices like disks or network cards.
8.3 Device Drivers
- A device driver is software that allows the OS to talk to hardware.
- Each device (printer, mouse, etc.) has its own driver.
- Drivers are device-specific and often provided by the hardware manufacturer.
🧠 When you install a printer, you’re actually installing a driver that lets the OS communicate with the printer hardware.
8.4 Disk Scheduling Algorithms
Disk scheduling determines the order in which disk I/O requests are processed. It improves efficiency by reducing movement of the disk’s read/write head.
🔸 1. FCFS (First Come First Serve)
- Processes requests in the order they arrive.
✅ Fair
❌ Slow for large queues
🔸 2. SSTF (Shortest Seek Time First)
- Serves the request closest to the current head position.
✅ Faster
❌ May cause starvation for far requests
🔸 3. SCAN (Elevator Algorithm)
- Head moves in one direction, servicing requests, then reverses direction.
✅ Reduces waiting time
❌ Can still favor middle tracks
🔸 4. C-SCAN (Circular SCAN)
- Like SCAN, but after reaching one end, the head jumps back to the other end without servicing requests on the return trip.
✅ More uniform waiting time
🔸 5. LOOK
- Similar to SCAN, but reverses only when there are no more requests in the current direction.
✅ Visual Comparison:
If the disk head is at track 50 (moving toward lower tracks) and requests are at 10, 22, 36, 75, 85:
- FCFS → 10 → 22 → 36 → 75 → 85
- SSTF → 36 → 22 → 10 → 75 → 85
- SCAN → 36 → 22 → 10 → (reverse) → 75 → 85
- C-SCAN → 36 → 22 → 10 → (jump to the highest track) → 85 → 75
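A small simulation of SSTF on the request list above, assuming the head starts at track 50. It also totals the head movement, which is the quantity these algorithms try to minimize (115 tracks here).

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int requests[] = {10, 22, 36, 75, 85};
    int n = 5, head = 50, total_movement = 0;
    int served[5] = {0};

    /* SSTF: repeatedly pick the unserved request closest to the current head. */
    for (int count = 0; count < n; count++) {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (!served[i] && (best == -1 ||
                abs(requests[i] - head) < abs(requests[best] - head)))
                best = i;
        total_movement += abs(requests[best] - head);
        head = requests[best];
        served[best] = 1;
        printf("-> %d ", head);
    }
    printf("\ntotal head movement = %d tracks\n", total_movement);
    return 0;
}
```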
8.5 RAID Structure
RAID (Redundant Array of Independent Disks) is a system that uses multiple disks to improve performance or provide fault tolerance.
🔹 Why RAID?
- Faster read/write
- Prevent data loss
- Increase system uptime
🔸 Common RAID Levels:
RAID Level | Description | Pros | Cons |
---|---|---|---|
RAID 0 | Striping (splits data across disks) | Fast | No fault tolerance |
RAID 1 | Mirroring (copies data on two disks) | Reliable | Expensive |
RAID 5 | Striping + parity (can recover lost data) | Balanced | Slower write |
RAID 10 | Mirroring + Striping | Fast + safe | Requires many disks |
✅ Summary
Component | Purpose |
---|---|
I/O Controller | Manages hardware communication |
Polling | CPU asks repeatedly |
Interrupts | Device notifies CPU |
DMA | Direct memory transfer |
Device Drivers | Software to control hardware |
Disk Scheduling | Efficient disk access |
RAID | Storage performance and safety |
9. Security and Protection
Security and protection are critical aspects of an operating system. They ensure that users, programs, and data are safe from unauthorized access, damage, or misuse.
9.1 Goals of Protection
Protection in OS refers to controlling access to system resources like memory, files, CPU, and I/O devices.
🎯 Main Goals:
- Prevent accidental/deliberate misuse of system resources.
- Ensure one process does not interfere with another.
- Limit user access to only authorized resources.
- Isolate faulty or malicious programs.
9.2 Domains of Protection
A domain defines a set of access rights a process or user has to resources.
- Each process runs in a domain, which specifies what operations it can perform.
- A domain can be:
- User-level (limited access)
- Kernel-level (full access)
🧠 Example:
- A text editor may not access disk drivers directly—it runs in user domain.
- The OS itself runs in the kernel domain with full access.
9.3 Access Control
Access control is how the OS grants or denies permission to use resources.
🔸 Access Control Matrix
A table showing which subjects (users/processes) can access which objects (files/devices) and how.
User | File1 | File2 | Printer |
---|---|---|---|
Alice | R/W | R | No access |
Bob | R | R/W | Print |
- R/W = Read and Write
- Print = Can use the printer
🔸 Access Control Lists (ACLs)
- For each object, list all users and their permissions.
🧠 Example for File1:
- Alice: read, write
- Bob: read only
9.4 Security Threats
These are potential dangers to the system.
🔹 Common Threats:
- Unauthorized access
- Data leakage
- Malware (viruses, worms, trojans)
- Denial of Service (DoS)
- Phishing or Social Engineering
9.5 Authentication and Authorization
🔸 Authentication
Verifying a user’s identity (e.g., passwords, biometrics, OTP).
🔸 Authorization
Granting or denying access after authentication.
🧠 Example:
- You log in with a password (authentication).
- The system checks what files you can open (authorization).
9.6 Encryption and Security Mechanisms
Encryption protects data by converting it into unreadable form unless decrypted with a key.
🔸 Types:
- Symmetric encryption – one key for both encryption/decryption (e.g., AES).
- Asymmetric encryption – public/private key pair (e.g., RSA).
🔸 Other Security Mechanisms:
- Firewalls – block unauthorized access.
- Antivirus software
- Intrusion Detection Systems (IDS)
- Two-factor authentication (2FA)
✅ Summary Table:
Concept | Purpose |
---|---|
Protection | Controls how processes access resources |
Domain | Defines what rights a process has |
Access Control | Manages who can access what |
Authentication | Confirms identity |
Authorization | Grants access to resources |
Encryption | Secures data in transit or storage |
Security Mechanisms | Prevent threats and attacks |
10. Virtualization and Cloud OS
10.1 Concept of Virtualization
- Virtualization means creating a virtual version of something, such as hardware, an OS, storage, or network.
- It allows multiple virtual machines (VMs) to run on a single physical machine, sharing resources.
Benefits:
- Better hardware utilization
- Isolation between VMs (fault tolerance, security)
- Easier testing and development environments
10.2 Hypervisors (Type 1 and Type 2)
A hypervisor is software that creates and manages VMs.
- Type 1 (Bare-metal hypervisor):
- Runs directly on hardware
- Examples: VMware ESXi, Microsoft Hyper-V
- Efficient and used in servers and data centers.
- Type 2 (Hosted hypervisor):
- Runs on top of a host OS
- Examples: VMware Workstation, Oracle VirtualBox
- Easier for desktops, less efficient than Type 1.
10.3 OS Support for Virtual Machines
- Modern OSs provide features to support virtualization:
- CPU extensions for virtualization (Intel VT-x, AMD-V)
- Memory management for VMs
- Device emulation and paravirtualization to improve performance
10.4 Containers vs Virtual Machines
Feature | Virtual Machines | Containers |
---|---|---|
Virtualization Type | Full OS virtualization | OS-level virtualization |
Size | Large (GBs) | Small (MBs) |
Startup Time | Minutes | Seconds |
Isolation | Strong (separate OS) | Lightweight (shares host OS) |
Use Cases | Running multiple OS types | Microservices, scalable apps |
10.5 Cloud-based Operating Systems
- Cloud OS runs in the cloud, managing resources across many servers.
- Examples: Google Chrome OS, Microsoft Azure, AWS EC2.
- Provides users with on-demand computing resources without managing physical hardware.
11. Distributed Operating Systems
11.1 Characteristics of Distributed OS
- Manages a group of independent computers and makes them appear as a single system.
- Features:
- Transparency (location, access, concurrency)
- Scalability
- Fault tolerance
- Concurrency
11.2 Network Communication and Protocols
- Uses communication protocols like TCP/IP, RPC (Remote Procedure Call), and message passing.
- Enables computers to communicate, share resources, and coordinate tasks.
11.3 Synchronization in Distributed Systems
- Synchronizing clocks, processes, and data is harder due to network delays.
- Algorithms like Lamport’s logical clocks help maintain order of events.
11.4 Distributed File Systems
- Allow files to be stored across multiple machines but accessed as if local.
- Examples: NFS (Network File System), AFS (Andrew File System).
- Provide reliability and improved access speed.
12. Case Studies of Operating Systems
12.1 UNIX/Linux
Overview:
- UNIX is one of the oldest and most influential OSes, developed in the 1970s.
- Linux is a free, open-source UNIX-like OS created by Linus Torvalds in 1991.
Key Features:
- Multiuser and multitasking capability.
- Modular design with a kernel and user space utilities.
- Uses file system hierarchy (everything is a file).
- Powerful shell and scripting for automation.
- Strong security model and permission system.
Usage:
- Servers, supercomputers, embedded systems, smartphones (Android is Linux-based).
- Preferred by developers for its flexibility and control.
12.2 Windows
Overview:
- Developed by Microsoft, Windows is the most widely used desktop OS.
- First version in 1985; modern versions include Windows 10, 11.
Key Features:
- Graphical User Interface (GUI) focused.
- Strong backward compatibility.
- Integrated with Microsoft ecosystem (Office, Azure).
- Supports multitasking, multiuser (limited compared to UNIX).
- Uses NT kernel architecture for stability.
Usage:
- Personal computers, business environments, gaming, tablets.
12.3 Android
Overview:
- Based on Linux kernel, developed by Google for mobile devices.
- Released in 2008, dominant mobile OS worldwide.
Key Features:
- Designed for touch interfaces.
- Uses Dalvik/ART runtime for apps.
- Strong integration with Google services.
- Open-source with a huge developer community.
- Supports multitasking and multiuser profiles.
Usage:
- Smartphones, tablets, smart TVs, wearables.
12.4 macOS
Overview:
- Developed by Apple Inc. for Mac computers.
- Based on BSD Unix and the XNU kernel.
- Known for sleek GUI and integration with Apple ecosystem.
Key Features:
- Unix-based stability and security.
- Smooth graphical interface and multimedia capabilities.
- Supports multitasking and multiple users.
- Strong support for creative professionals (graphics, video).
Usage:
- Personal computers, creative industries, software development.
✅ Summary Table:
OS | Kernel Type | Main Use | Key Strengths |
---|---|---|---|
UNIX/Linux | Monolithic (Linux) | Servers, desktops, embedded | Open source, stability, flexibility |
Windows | Hybrid (NT) | Desktops, business | User-friendly, wide software support |
Android | Linux-based | Mobile devices | Mobile-focused, open ecosystem |
macOS | Unix-based (XNU) | Macs, creative work | Stability, design, ecosystem |
13. Advanced Topics
13.1 Real-Time Operating Systems (RTOS)
What is RTOS?
- An RTOS is designed to process data as it comes in, within strict timing constraints.
- It guarantees responses within a fixed time, which is critical for applications where delays cannot be tolerated.
Key Features:
- Deterministic behavior (predictable response time)
- Supports priority-based scheduling
- Minimal latency and fast context switching
- Examples: VxWorks, FreeRTOS, QNX
Use Cases:
- Industrial control systems
- Medical devices
- Automotive systems (ABS, airbag controllers)
- Robotics
13.2 Mobile Operating Systems
Overview:
- OS designed specifically for mobile devices such as smartphones and tablets.
- Optimized for power efficiency, touch input, and wireless connectivity.
Popular Mobile OS:
- Android (Google)
- iOS (Apple)
Features:
- App-centric with sandboxing for security
- Power management to extend battery life
- Gesture and touchscreen support
- Integrated app stores and communication services
13.3 Embedded Systems
What is an Embedded OS?
- Embedded OS is designed to run on small devices that perform dedicated functions.
- Usually has limited resources like CPU, memory, and storage.
Features:
- Lightweight and efficient
- Real-time capabilities are common
- Often built into devices (called firmware)
Examples:
- OS in microwaves, washing machines
- Automotive controllers
- IoT devices
13.4 OS for IoT (Internet of Things)
Overview:
- IoT devices are connected objects that collect and exchange data.
- IoT OS manage limited hardware while ensuring connectivity and security.
Key Features:
- Low power consumption
- Support for wireless communication protocols (Bluetooth, Zigbee, Wi-Fi)
- Lightweight and modular design
Examples of IoT OS:
- Contiki
- TinyOS
- RIOT OS
- FreeRTOS
✅ Summary Table:
Topic | Description | Use Cases |
---|---|---|
RTOS | OS with guaranteed timing for critical tasks | Robotics, medical devices |
Mobile OS | OS optimized for mobile hardware and apps | Smartphones, tablets |
Embedded Systems OS | Lightweight OS for dedicated hardware | Appliances, automotive controls |
IoT OS | OS for small, connected devices | Smart sensors, wearables |
Conclusion: Understanding Operating Systems
Operating Systems form the core software that manages computer hardware and software resources, providing a platform for applications and users to interact seamlessly with the machine. This comprehensive guide covered all essential aspects of Operating Systems, organized in a logical flow from basic concepts to advanced topics.
Summary of Key Sections
- Introduction to Operating Systems – We started with the foundation: what an OS is, its history, types, and core functions. Different types like batch, time-sharing, distributed, real-time, embedded, and mobile OS set the stage for deeper learning.
- System Architecture – We explored how the OS fits within the computer hardware, differentiating kernel and user modes, and discussed system calls and architectural designs like monolithic and microkernels.
- Process Management – The OS’s role in managing processes was detailed, including the lifecycle, threads, inter-process communication, and context switching, which together enable multitasking and resource sharing.
- CPU Scheduling – We examined how the OS schedules CPU time among processes, looking at criteria and algorithms such as FCFS, SJF, Round Robin, and priority scheduling to optimize performance.
- Synchronization and Concurrency – Managing concurrent access to shared resources is critical. We studied the critical section problem, synchronization tools like mutexes and semaphores, deadlocks, and classical synchronization challenges.
- Memory Management – Techniques for efficient memory allocation, paging, segmentation, virtual memory, and page replacement algorithms were explored, along with thrashing, a common performance issue.
- Storage Management – The OS’s role in managing files, directories, allocation methods (contiguous, linked, indexed), and free space was explained, showing how data is organized on disks.
- I/O Systems – How the OS handles communication with external devices through hardware controllers, interrupts, DMA, device drivers, and disk scheduling algorithms was covered. RAID structures were introduced to improve reliability and performance.
- Security and Protection – We discussed mechanisms to protect system resources and users from threats via access control, authentication, authorization, encryption, and awareness of common threats.
- Virtualization and Cloud OS – Virtualization concepts, hypervisors, OS support for VMs, differences between containers and VMs, and cloud-based OS were introduced, highlighting modern computing trends.
- Distributed Operating Systems – Characteristics, communication protocols, synchronization, and distributed file systems were discussed to understand how an OS manages multiple interconnected computers.
- Case Studies of Operating Systems – Real-world examples (UNIX/Linux, Windows, Android, and macOS) were examined to understand different design philosophies and usage scenarios.
- Advanced Topics – Specialized OS types such as Real-Time OS, Mobile OS, Embedded Systems, and IoT OS showed the breadth and depth of OS applications in modern technology.
Overview
Operating Systems are complex but fascinating software layers that empower all computing devices from massive servers to tiny IoT gadgets. Mastery of OS concepts enables understanding of how software and hardware interact, improves programming skills, and prepares you for advanced topics in computer science and technology development.
This structured exploration from basics to advanced topics equips students, developers, and enthusiasts with a solid foundation to appreciate, use, and innovate within the OS domain.