Introduction
An Operating System (OS) is system software that manages computer hardware resources and provides services for application programs. It acts as an intermediary between the user and the hardware of a system, ensuring that programs can run efficiently while sharing system resources safely.
It also lets users interact with the computer without knowing its low-level machine language. Without an OS, a computer is of little practical use.
Modern computers run multiple programs simultaneously, handle input/output devices, manage memory, and coordinate thousands of operations per second. The operating system is responsible for managing all these activities.
Characteristics
- 🔹 Resource Management: OS manages hardware resources (CPU, memory, I/O devices) to allocate them efficiently among processes.
- 🔹 Process Management: It controls the creation, scheduling, and termination of processes, ensuring smooth multitasking.
- 🔹 Memory Management: OS handles memory allocation and deallocation, ensuring safe access and efficient use of RAM.
- 🔹 Security: OS ensures data integrity, user authentication, and protection from unauthorized access.
- 🔹 File System Management: It manages how data is stored, retrieved, and organized in files and directory structures, and it enforces access permissions.
2. Program, Process, and Thread
Understanding the difference between programs, processes, and threads is one of the most fundamental operating system concepts.
A. Program
A program is a collection of instructions written in a programming language and stored on a storage device such as a hard disk or SSD. A program is a passive entity: it performs no action and is allocated no memory until it is executed.
For example, a compiled C++ file or a software application stored on disk is a program. It only becomes active when the operating system loads it into memory and begins execution.
B. Process
A process is a program under execution, i.e., a runtime instance of a program. When the OS executes a program, it creates a process consisting of several components:
- 📌 Program code
- 📌 Stack memory (stores temporary data such as function calls)
- 📌 Heap memory (stores dynamically allocated memory)
- 📌 Data section (global variables)
- 📌 Program counter (tracks current instruction)
- 📌 CPU registers
Each process is assigned a Process Control Block (PCB) by the operating system. The PCB stores essential information such as process state, process ID, CPU registers, scheduling info, and memory information.
Each process is allocated a separate memory space, and this isolation improves safety and stability.
C. Thread
A thread is a lightweight process and forms the basic unit of CPU utilization. A process can perform more than one operation at a time by including multiple threads.
A thread shares resources with the other threads within the same process, such as:
- 🔹 Code section
- 🔹 Data section
- 🔹 Heap memory
However, each thread has its own:
- 🔸 Stack
- 🔸 Program counter
- 🔸 CPU registers
Because threads share memory, they are more efficient than processes in terms of communication and context switching. Many modern applications use multithreading to improve performance and responsiveness. Example: a web browser.
Process Life Cycle
The process life cycle describes the different states a process goes through from its creation to its termination. It helps in understanding how the operating system manages processes efficiently.
Process States
- 🔹 New: The process is being created.
- 🔹 Ready: The process is waiting to be assigned to the CPU.
- 🔹 Running: The process is currently being executed by the CPU.
- 🔹 Waiting: The process is waiting for some event (like I/O completion).
- 🔹 Terminated: The process has finished execution.
State Transitions
- 🔸 Admitted: New → Ready
- 🔸 Scheduler Dispatch: Ready → Running
- 🔸 Interrupt: Running → Ready
- 🔸 I/O or Event Wait: Running → Waiting
- 🔸 I/O Completion: Waiting → Ready
- 🔸 Exit: Running → Terminated
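The transitions above can be sketched as a small lookup table. The encoding below is purely illustrative (no real OS stores its state machine as a Python dict); the names match the states and transitions listed above.

```python
# Legal process state transitions, keyed by (from_state, to_state).
TRANSITIONS = {
    ("New", "Ready"): "Admitted",
    ("Ready", "Running"): "Scheduler Dispatch",
    ("Running", "Ready"): "Interrupt",
    ("Running", "Waiting"): "I/O or Event Wait",
    ("Waiting", "Ready"): "I/O Completion",
    ("Running", "Terminated"): "Exit",
}

def transition(state, new_state):
    """Return the transition name, or raise if the move is illegal."""
    key = (state, new_state)
    if key not in TRANSITIONS:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return TRANSITIONS[key]
```

Note that a Waiting process can never move straight to Running; it must first re-enter the Ready queue and be dispatched by the scheduler.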
Process Life Cycle Diagram
3. Types of Operating Systems
Operating systems can be classified based on how they manage processes, resources, and memory.
- Batch OS:
A batch operating system processes groups of jobs together without requiring interaction from the user during execution. Jobs are collected and processed sequentially.
These systems were commonly used in early computing environments where users submitted jobs using punch card and waited for results later.
Example: Early IBM systems.
- Multiprogramming OS:
Allows multiple programs to reside in memory while waiting for CPU time. The OS selects one of the processes and assigns it to the CPU. Whenever the executing process needs to wait for another operation (such as I/O), the OS selects another process from the job queue and assigns it to the CPU. This way the CPU is never left idle, and the system appears to execute multiple processes at once.
Example: UNIX
- Multitasking OS:
Allows multiple tasks (processes) to run on a single CPU by switching rapidly between them; the switching is so fast that it creates the illusion of the processes running in parallel.
Example: Windows, macOS
- Multiprocessing OS:
This type of operating system uses two or more CPUs to process tasks in parallel, improving performance and reliability.
Example: Linux (on multi-core systems)
- Time-Sharing OS:
Allows multiple users to use the system simultaneously by giving each user a small time slice. It typically uses round-robin scheduling to distribute CPU time fairly among users, so that each user feels they have full control of the system.
Example: Mainframe systems, UNIX
- Real-Time OS (RTOS):
An OS designed to process data and respond within a guaranteed time. This type of OS is used in dedicated systems such as defence, medical devices, robotics, and space operations.
Example: VxWorks, RTLinux
4. CPU Scheduling
CPU scheduling refers to the method used by the operating system to determine which process should be executed by the CPU at any given time.
Since multiple processes compete for CPU time, scheduling algorithms help ensure fair and efficient allocation of processor resources.
Important metrics used in scheduling include:
- 🔹 Arrival Time – Time when a process enters the ready queue
- 🔹 Burst Time – Time required by a process for CPU execution
- 🔹 Waiting Time – Time spent waiting in the ready queue
- 🔹 Turnaround Time – Total time from process submission to completion
- 🔹 Response Time – Time taken to start responding to a process request
- First Come First Serve (FCFS)
FCFS is the simplest CPU scheduling algorithm. Processes are executed in the order they arrive in the ready queue.
Although simple to implement, FCFS may lead to poor performance when long processes delay shorter ones, a situation known as the convoy effect.
- 🔸 Type: Non-Preemptive
- 🔸 Advantage: Simple
- 🔸 Disadvantage: Causes Convoy Effect (long job delays short jobs)
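FCFS is simple enough to sketch in a few lines. The function below is a minimal illustration, assuming each process is given as a hypothetical (name, arrival, burst) tuple; real schedulers are far more involved.

```python
def fcfs(processes):
    """Non-preemptive FCFS. processes: list of (name, arrival, burst).
    Returns {name: (waiting_time, turnaround_time)}."""
    result = {}
    time = 0
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)      # CPU may sit idle until arrival
        waiting = time - arrival
        time += burst                  # run the job to completion
        result[name] = (waiting, time - arrival)
    return result
```

Running it on the classic convoy-effect example (a 24-unit job arriving just before two 3-unit jobs) shows the short jobs waiting 24 and 27 units behind the long one.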
- Shortest Job First (SJF)
The Shortest Job First algorithm selects the process with the smallest CPU burst time for execution.
SJF is theoretically optimal in minimizing average waiting time. However, it requires accurate knowledge of the future burst time of processes, which is often difficult to estimate.
SJF can be implemented in two forms:
- Non-preemptive SJF
- Preemptive SJF, also called Shortest Remaining Time First (SRTF), which schedules jobs by shortest remaining execution time and generally outperforms the non-preemptive version
- 🔸 Type: Non-Preemptive (SRTF is the preemptive variant)
- 🔸 Advantage: Minimum average waiting time
- 🔸 Disadvantage: Starvation possible
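A non-preemptive SJF sketch, using the same hypothetical (name, arrival, burst) tuple format as above; it assumes burst times are known in advance, which, as noted, real systems can only estimate.

```python
def sjf(processes):
    """Non-preemptive SJF. processes: list of (name, arrival, burst).
    Returns (completion_order, {name: waiting_time})."""
    pending = sorted(processes, key=lambda p: p[1])
    time, order, waiting = 0, [], {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                  # CPU idle: jump to next arrival
            time = pending[0][1]
            ready = [p for p in pending if p[1] <= time]
        job = min(ready, key=lambda p: p[2])   # shortest burst first
        pending.remove(job)
        name, arrival, burst = job
        waiting[name] = time - arrival
        time += burst
        order.append(name)
    return order, waiting
```

For processes P1(arrive 0, burst 7), P2(2, 4), P3(4, 1), P4(5, 4), the completion order is P1, P3, P2, P4: once P1 finishes, the 1-unit job jumps ahead of the two 4-unit jobs.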
- Round Robin (RR)
Round Robin (RR) is a preemptive CPU scheduling algorithm designed for time-sharing systems, where each process is assigned a fixed time slot, called a time quantum (k), in a circular queue.
It ensures fairness by giving each process equal CPU time to execute.
- 🔸 Advantage: Eliminates starvation and reduces response time
- 🔸 Disadvantage: Choosing a good time quantum is difficult
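The circular-queue behaviour can be simulated with a deque. This sketch assumes all processes arrive at time 0 and uses a hypothetical (name, burst) tuple format.

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, burst), all arriving at time 0.
    Returns {name: completion_time} under a fixed time quantum."""
    queue = deque(processes)
    time, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)   # run one quantum (or less)
        time += run
        if remaining > run:
            queue.append((name, remaining - run))  # back of the queue
        else:
            completion[name] = time
    return completion
```

With bursts of 24, 3, and 3 and a quantum of 4, the two short jobs finish at times 7 and 10 instead of waiting behind the long one as in FCFS.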
- Priority Based Scheduling
In this scheduling, processes are scheduled according to their priorities: the highest-priority process is scheduled first. If two processes have equal priority, they are scheduled by arrival time or round robin, depending on the application.
- 🔸 Type: Preemptive (non-preemptive variants also exist)
- 🔸 Advantage: Important processes are served first
- 🔸 Disadvantage: Low-priority processes may starve (mitigated by aging)
- 🔸 Used In: Time-sharing systems
- Multilevel Queue Scheduling (MLQ)
According to priority, processes are placed in different queues; high-priority processes are placed in the top-level queue. Processes in a lower-level queue are scheduled only after the higher-level queues are empty.
- Multilevel Feedback Queue (MLFQ)
It allows the process to move in between queues. The idea is to separate processes according to the characteristics and behavior of their CPU bursts. If a process uses too much CPU time, it is moved to a lower-priority queue.
- 🔸 Advantage: Prevents starvation
- 🔸 Used In: Modern OS
5. Deadlock and Starvation
A deadlock is a situation in which each process waits for a resource that is assigned to some other process. None of the processes can proceed, because the resource each one needs is held by another process that is itself waiting for a resource to be released.
Deadlock can occur only when all four necessary conditions hold simultaneously:
- Mutual Exclusion: At least one resource is non-sharable (only one process can use it at a time).
- Hold and Wait: A process holds at least one resource while waiting to acquire additional resources held by others.
- No Preemption: A resource cannot be taken from a process unless the process releases the resource.
- Circular Wait: A set of processes are waiting for each other in circular form.
Methods for handling deadlock:
There are three ways to handle deadlock
- Deadlock prevention or avoidance: The idea is to never let the system enter a deadlock state; the Banker's Algorithm is a classic avoidance technique.
- Deadlock detection and recovery: Let deadlock occur, then do preemption to handle it once occurred.
- Ignore the problem altogether: If deadlock is very rare, let it happen and reboot the system. This is the approach that both Windows and Linux take.
Banker Algorithm
The Banker's Algorithm is a deadlock-avoidance algorithm that allocates resources safely. Before granting a resource request, it checks whether the resulting state is safe, i.e., whether there exists some order in which every process can still obtain its maximum needs and finish. A request is granted only if the system stays in a safe state.
The algorithm is so named because it mirrors how a banker decides whether to sanction a loan: money is lent only if the bank can still satisfy the needs of all its customers.
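The safety check at the heart of the algorithm can be sketched as follows. This is the textbook version; the matrix layout (one row per process, one column per resource type) is the standard convention, not a real OS data structure.

```python
def is_safe(available, allocation, need):
    """Banker's safety check. available: free units per resource type;
    allocation/need: one row per process. Returns (safe?, safe_sequence)."""
    work = list(available)
    finish = [False] * len(allocation)
    safe_sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can finish; it then releases everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                safe_sequence.append(i)
                progressed = True
    return all(finish), safe_sequence
```

On the classic five-process example (available [3, 3, 2]), the check finds the safe sequence P1, P3, P4, P0, P2.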
Starvation
Starvation occurs when a process waits indefinitely for CPU time or other resources because higher priority processes continue to receive service.
The most common technique to prevent starvation is aging, where waiting processes gradually gain higher priority.
6. Synchronization Tool
In an operating system (OS), synchronization tools are mechanisms used to coordinate the execution of multiple processes or threads to ensure they access shared resources in a controlled and predictable manner. This helps prevent issues like race conditions, and ensures data consistency and integrity.
What is Race Condition
A race condition occurs when multiple processes access shared data concurrently and the final result depends on the order of execution.
Race conditions can cause inconsistent or unpredictable results.
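The classic "lost update" can be shown deterministically by writing out one bad interleaving by hand. The sketch below simulates two threads' read-modify-write steps explicitly, so the wrong outcome is reproducible (real races are timing-dependent).

```python
# Deterministic illustration of a lost update on a shared counter.
counter = 0

# Both "threads" read the counter before either one writes back.
a_read = counter          # thread A reads 0
b_read = counter          # thread B reads 0, before A's write lands
counter = a_read + 1      # thread A writes 1
counter = b_read + 1      # thread B writes 1, overwriting A's update

# Two increments happened, but only one is visible.
print(counter)            # 1, not the expected 2
```

The fix is to make the read-modify-write sequence atomic, e.g. by guarding it with a lock or semaphore, as described next.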
Semaphore
A semaphore is a protected variable (an abstract data type) used to control access to a shared resource. The value of the semaphore indicates the status of that resource.
There are two types of semaphores:
- Binary semaphores:
Binary semaphores take only 0 and 1 as value and are used to implement mutual exclusion and synchronize concurrent processes.
- Counting semaphores:
A counting semaphore allows multiple processes to access a resource simultaneously, up to a predefined limit.
Semaphores operate using two fundamental operations:
- 🔹 Wait (P) operation – decrements the semaphore value; blocks if the value is zero
- 🔹 Signal (V) operation – increments the semaphore value, potentially waking a waiting process
Semaphores are commonly used in classic synchronization problems such as the Producer–Consumer problem and the Readers–Writers problem.
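A minimal Producer-Consumer sketch using Python's `threading.Semaphore`: `empty` counts free buffer slots, `full` counts filled ones, and a lock protects the buffer itself. The buffer size and item count are arbitrary illustrative choices.

```python
import threading
from collections import deque

BUFFER_SIZE = 3
buffer = deque()
empty = threading.Semaphore(BUFFER_SIZE)  # free slots to produce into
full = threading.Semaphore(0)             # items available to consume
mutex = threading.Lock()                  # protects the buffer itself
consumed = []

def producer(items):
    for item in items:
        empty.acquire()            # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()             # signal: one more item available

def consumer(count):
    for _ in range(count):
        full.acquire()             # wait for an item
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()            # signal: one more free slot

p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start()
p.join(); c.join()
print(consumed)                    # all ten items, in production order
```

With a single producer and a single consumer, the FIFO buffer guarantees items are consumed in the order they were produced, and the semaphores guarantee the buffer never overflows or underflows.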
7. Memory Management
Memory management is responsible for allocating and deallocating memory to processes while ensuring efficient utilization.
The operating system must ensure that multiple processes can run simultaneously without interfering with each other's memory.
- 🔹 A. Overlays: Keep in memory only the instructions and data that are currently being executed or required; other parts are loaded on demand, replacing the previous overlay.
- 🔹 B. Swapping: Swapping is another memory management technique in which a process is moved out of main memory (for example, when its time slice expires) and later swapped back in to continue execution.
Partition Techniques
- 🔸 A. Single Partition Allocation Schemes: The memory is divided into two parts. One part is kept to be used by the OS and the other is kept to be used by the users.
- 🔸 B. Multiple Partition Schemes:
- Fixed Partition: The memory is divided into fixed size partitions.
- Variable Partition: The memory is divided into variable sized partitions.
Variable partition allocation schemes:
- First Fit: The arriving process is allotted the first hole of memory in which it fits completely.
- Best Fit: The arriving process is allotted the hole of memory in which it fits the best by leaving the minimum memory empty.
- Worst Fit: The arriving process is allotted the hole of memory in which it leaves the maximum gap.
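The three placement strategies can be compared with one small sketch. The free-hole list and request size below are made-up illustrative values.

```python
def allocate(holes, size, strategy):
    """Return the index of the chosen free hole, or None if nothing fits.
    holes: list of free-block sizes; strategy: 'first', 'best', or 'worst'."""
    candidates = [(i, h) for i, h in enumerate(holes) if h >= size]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0][0]                        # first that fits
    if strategy == "best":
        return min(candidates, key=lambda c: c[1])[0]  # tightest fit
    if strategy == "worst":
        return max(candidates, key=lambda c: c[1])[0]  # loosest fit
    raise ValueError(strategy)
```

For holes of [100, 500, 200, 300, 600] and a 212-unit request, first fit picks the 500 hole, best fit the 300 hole, and worst fit the 600 hole.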
i. Paging
Paging is a memory management technique that divides logical memory into fixed-size blocks called pages, and physical memory into blocks called frames.
When a process is executed, its pages are loaded into available frames in main memory.
Paging eliminates the problem of external fragmentation, although internal fragmentation may still occur.
Address translation between logical and physical addresses is performed using a page table.
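The translation itself is simple arithmetic: split the logical address into a page number and an offset, look the page up, and recombine. The page size and table contents below are illustrative, and page-fault handling is omitted.

```python
def translate(logical_addr, page_table, page_size):
    """Translate a logical address to a physical address using a
    page table mapping page number -> frame number."""
    page = logical_addr // page_size    # which page
    offset = logical_addr % page_size   # position within the page
    frame = page_table[page]            # KeyError here = page fault
    return frame * page_size + offset
```

With a 1024-byte page size and page 1 mapped to frame 2, logical address 1034 (page 1, offset 10) translates to physical address 2 * 1024 + 10 = 2058.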
ii. Segmentation
Segmentation divides memory into logical segments based on program structure, such as code, data, and stack.
Each segment has its own base address and length, which allows programs to be structured in a more logical manner.
However, segmentation can suffer from external fragmentation.
iii. Virtual Memory
Virtual memory allows processes to execute even if they are larger than the available physical memory.
Only the required portions of a program are loaded into memory, while the remaining parts are stored on disk.
This technique improves system performance and enables efficient multitasking.
iv. Critical Section
A critical section is a part of a program where shared resources are accessed. Only one process or thread should execute the critical section at a time.
8. Page Scheduling (Page Replacement Algorithms)
When memory becomes full and a new page needs to be loaded, the operating system must decide which page to remove. This decision is made using page replacement algorithms.
i. FIFO Page Replacement
The First-In First-Out algorithm replaces the oldest page in memory.
Although simple to implement, FIFO may lead to inefficient results due to Belady's anomaly, where increasing the number of page frames unexpectedly increases the number of page faults.
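FIFO replacement, and Belady's anomaly with it, can be demonstrated in a few lines. The reference string below is the standard textbook example of the anomaly.

```python
from collections import deque

def fifo_faults(reference_string, frames):
    """Count page faults under FIFO page replacement."""
    memory = deque()
    faults = 0
    for page in reference_string:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()    # evict the oldest page
            memory.append(page)
    return faults
```

For the reference string 1 2 3 4 1 2 5 1 2 3 4 5, three frames give 9 faults but four frames give 10: adding memory makes FIFO worse, which is exactly Belady's anomaly.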
ii. Least Recently Used (LRU)
LRU replaces the page that has not been used for the longest period of time.
This algorithm is based on the principle that recently used pages are likely to be used again soon.
LRU provides better performance compared to FIFO but requires additional overhead to track page usage.
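A sketch of that overhead: the list below keeps pages ordered by recency (most recent at the end), which must be updated on every reference, hit or miss.

```python
def lru_faults(reference_string, frames):
    """Count page faults under LRU replacement. `memory` is kept in
    recency order: least recently used first, most recent last."""
    memory = []
    faults = 0
    for page in reference_string:
        if page in memory:
            memory.remove(page)     # hit: refresh this page's recency
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)       # evict the least recently used
        memory.append(page)
    return faults
```

For the reference string 1 2 3 1 4 5 with three frames, LRU evicts page 2 (not page 1, which was just reused), giving 5 faults.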
iii. Optimal Page Replacement
The optimal algorithm replaces the page that will not be used for the longest time in the future.
Although it provides the best theoretical performance, it cannot be implemented in real systems because future memory references are unknown. It is often used to set a benchmark to evaluate the performance of other page scheduling algorithms.
9. Disk Scheduling
Disk scheduling is done by operating systems to schedule the I/O requests arriving for the disk. It is also known as I/O scheduling.
- 🔹 i. Seek Time: Seek time is the time taken to locate the disk arm to a specified track where the data is to be read or written.
- 🔹 ii. Rotational Latency: The time taken by the desired sector of the disk to rotate into position under the read/write head.
- 🔹 iii. Transfer Time: Transfer time is the time to transfer the data. It depends on the rotating speed of the disk and number of bytes to be transferred.
- 🔹 iv. Disk Access Time: Disk Access Time = Seek Time + Rotational Latency + Transfer Time
- 🔹 v. Disk Response Time: The time a request spends waiting to perform its I/O operation. Average response time is the mean of the response times of all requests.
Disk Scheduling Algorithms
- FCFS Disk Scheduling:
Requests are processed in the order they arrive in the disk queue. While simple, this method may lead to inefficient disk movement.
- SSTF (Shortest Seek Time First):
In SSTF (Shortest Seek Time First), requests having the shortest seek time are executed first. So, the seek time of every request is calculated in advance in a queue and then they are scheduled according to their calculated seek time. As a result, the request near the disk arm will get executed first.
Although SSTF improves performance compared to FCFS, it may lead to starvation for requests that are far from the current head position.
- SCAN Algorithm:
In SCAN algorithm the disk arm moves into a particular direction and services the requests coming in its path and after reaching the end of the disk, it reverses its direction and again services the request arriving in its path. So, this algorithm works like an elevator and hence is also known as elevator algorithm.
This reduces excessive disk movement and improves overall efficiency.
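The elevator order can be sketched by splitting requests into those ahead of the head and those behind it. The cylinder numbers below are the usual textbook example; the function returns only the service order, not the seek distance.

```python
def scan_order(requests, head, direction="up"):
    """Service order under SCAN (elevator): sweep in one direction to
    the disk edge, then reverse. requests: cylinder numbers."""
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    if direction == "up":
        return up + down     # toward the high end, then back
    return down + up         # toward the low end, then back
```

Starting at cylinder 53 and sweeping upward through requests [98, 183, 37, 122, 14, 124, 65, 67], the arm services 65, 67, 98, 122, 124, 183, then reverses for 37 and 14.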
- CSCAN:
In the SCAN algorithm, the disk arm re-scans the path it has just serviced after reversing direction, even though few new requests may be waiting there while many accumulate at the other end. C-SCAN (Circular SCAN) fixes this: after reaching one end, the arm jumps straight back to the other end without servicing requests on the return trip, then sweeps in the same direction again, giving more uniform wait times.
- LOOK:
It is similar to the SCAN disk scheduling algorithm except for the difference that the disk arm instead of going to the end of the disk goes only to the last request to be serviced in front of the head and then reverses its direction from there only.
Thus it prevents the extra delay which occurred due to unnecessary traversal to the end of the disk.
- CLOOK:
As LOOK is similar to SCAN algorithm, CLOOK is similar to CSCAN disk scheduling algorithm. In CLOOK, the disk arm instead of going to the end goes only to the last request to be serviced in front of the head and then from there goes to the other end’s last request.
Thus, it also prevents the extra delay which occurred due to unnecessary traversal to the end of the disk.
Conclusion
At the end, we can say that the Operating System is the backbone of a modern computer system, which helps manage and coordinate all system-related operations. Having a strong foundation in Operating Systems is essential for understanding how software interacts with hardware.
In the next article, we will discuss some of the most important questions that are often asked in technical interviews.
Note: This article is specifically focused on the interview perspective and is not sufficient for GATE preparation, but it will definitely help in quick revision.

