Preemptive Scheduling is a common method where the operating system forcibly interrupts a running process to give CPU time to another. This ensures that no single process can 'hang' the entire system by entering an infinite loop or performing heavy calculations without yielding.
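The interleaving that preemption produces can be sketched with a toy round-robin scheduler. This is a minimal simulation, not a real OS scheduler: the names `process` and `round_robin` are illustrative, and generators stand in for processes, with each `yield` marking one unit of CPU work at which the scheduler may preempt.

```python
from collections import deque

def process(name, steps):
    """A simulated process: each yield marks one unit of CPU work."""
    for i in range(steps):
        yield f"{name} step {i + 1}"

def round_robin(processes, quantum=1):
    """Give each process `quantum` steps of CPU time, then preempt it
    and move to the next process in the ready queue."""
    ready = deque(processes)
    trace = []
    while ready:
        proc = ready.popleft()
        for _ in range(quantum):
            try:
                trace.append(next(proc))
            except StopIteration:
                break          # process finished: do not requeue
        else:
            ready.append(proc)  # quantum expired: preempt and requeue
    return trace

trace = round_robin([process("A", 3), process("B", 2)])
# trace interleaves A and B: neither can monopolize the CPU
```

Even if process A had an infinite loop, the scheduler would still preempt it after each quantum, so B keeps making progress.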
Pipelining is a technique where multiple instructions or tasks overlap in execution, like an assembly line: while one task's result is being output, the next is being processed and a third is being fetched, significantly increasing the rate of task completion.
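The assembly-line idea can be sketched with one thread per stage connected by queues. This is a minimal illustration, assuming Python's standard `threading` and `queue` modules; the two transformation stages (doubling, then adding one) are arbitrary stand-ins for real pipeline stages such as fetch, process, and output.

```python
import queue
import threading

def stage(worker, inq, outq):
    """One pipeline stage: pull items from inq, transform, push to outq."""
    while True:
        item = inq.get()
        if item is None:      # sentinel: shut down and tell the next stage
            outq.put(None)
            break
        outq.put(worker(item))

fetch_q, proc_q, out_q = queue.Queue(), queue.Queue(), queue.Queue()

# Two stages run concurrently: while stage 2 handles item N,
# stage 1 is already working on item N+1.
t1 = threading.Thread(target=stage, args=(lambda x: x * 2, fetch_q, proc_q))
t2 = threading.Thread(target=stage, args=(lambda x: x + 1, proc_q, out_q))
t1.start(); t2.start()

for item in [1, 2, 3]:
    fetch_q.put(item)
fetch_q.put(None)             # signal end of input
t1.join(); t2.join()

results = []
while (item := out_q.get()) is not None:
    results.append(item)
# results == [3, 5, 7]
```

The throughput gain comes from overlap: once the pipeline is full, every stage is busy on a different item at the same time.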
Interrupt Handling allows the system to react to external events, such as a mouse click or the arrival of a network packet. When an interrupt occurs, the CPU pauses the currently running task, executes a handler for the event, and then resumes the interleaved schedule.
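The pause-handle-resume cycle can be sketched with a Unix timer signal standing in for a hardware interrupt. This is a rough analogy, not how the CPU itself works, and it assumes a Unix-like platform (the `SIGALRM` signal is not available on Windows):

```python
import signal
import time

events = []

def handler(signum, frame):
    # The "interrupt handler": the running code is paused while this
    # executes, then control returns to wherever the code was interrupted.
    events.append("interrupt handled")

signal.signal(signal.SIGALRM, handler)      # register the handler
signal.setitimer(signal.ITIMER_REAL, 0.1)   # fire once, after 100 ms

while not events:     # the "current task", running until interrupted
    time.sleep(0.01)
events.append("task resumed")
```

The key observation is the ordering: the handler runs first, and only then does the interrupted task continue from where it left off.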
It is vital to distinguish between concurrency (managing many tasks) and parallelism (doing many tasks at once).
| Feature | Concurrent Processing | Parallel Processing |
|---|---|---|
| Hardware | Can run on a single-core CPU | Requires multi-core or multiple CPUs |
| Execution | Tasks are interleaved (one at a time) | Tasks run simultaneously (at the same time) |
| Primary Goal | Responsiveness and Throughput | Computational Speed and Performance |
| Analogy | One cook juggling three different pans | Three cooks each handling one pan |
Parallel processing is a subset of concurrency that requires specific hardware, whereas concurrent processing is a software design pattern that can be implemented on any hardware to improve task management.
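The table's distinction maps directly onto Python's standard `concurrent.futures` module: a thread pool gives concurrency (in CPython the GIL means the threads interleave rather than truly run at once for CPU-bound work), while a process pool gives parallelism across cores. A minimal sketch, assuming a Unix-like platform since it forces the `fork` start method to keep the example self-contained:

```python
import multiprocessing
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def square(n):
    return n * n

nums = [1, 2, 3, 4]

# Concurrency: threads share one interpreter and interleave on the CPU
with ThreadPoolExecutor(max_workers=4) as pool:
    threaded = list(pool.map(square, nums))

# Parallelism: separate processes can run simultaneously on separate cores
fork_ctx = multiprocessing.get_context("fork")
with ProcessPoolExecutor(max_workers=4, mp_context=fork_ctx) as pool:
    parallel = list(pool.map(square, nums))
# both produce [1, 4, 9, 16]; only the process pool can use several cores
```

Both pools expose the same `map` interface, which underlines the point: concurrency is a design pattern, and parallelism is what you get when the hardware and runtime allow the tasks to truly overlap.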
When asked to explain concurrency, always mention time-slicing and interleaving. Examiners look for the specific technical detail that tasks are given 'fractions of time' to make progress.
If a question asks for the benefits of concurrency in a specific scenario (like a web server or a game), focus on responsiveness. Explain that it prevents the system from appearing 'frozen' while waiting for background tasks like file saving or network requests.
Always check for dependencies in a problem set. If Task A must finish for Task B to start, they are 'sequential' and cannot be optimized through concurrency; identifying this shows a high level of understanding.
A common misconception is that concurrency always makes a single task run faster. In reality, a single computation-heavy task may actually take longer to finish in a concurrent environment due to the overhead of context switching and sharing CPU time with other tasks.
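This can be seen directly in CPython: splitting one CPU-bound computation across threads produces the same answer but no speedup, because the GIL serializes the threads and context switching is pure overhead. A minimal sketch (the split into four equal ranges is arbitrary):

```python
import threading

N = 200_000

# Sequential: one task, no context switching
seq_total = sum(range(N))

# "Concurrent": the same work split across 4 threads. In CPython the GIL
# lets only one thread compute at a time, so the switching between them
# adds cost without adding speed for CPU-bound work.
results = []
lock = threading.Lock()

def worker(start, stop):
    subtotal = sum(range(start, stop))
    with lock:
        results.append(subtotal)

threads = [threading.Thread(target=worker,
                            args=(i * N // 4, (i + 1) * N // 4))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
thr_total = sum(results)
# thr_total == seq_total: same answer, no single-task speedup
```

Concurrency pays off when tasks spend time waiting (I/O, network), not when one task needs the CPU flat out.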
Starvation occurs when a low-priority task is perpetually denied CPU time because higher-priority tasks are constantly being scheduled. This is a failure of the scheduling algorithm to ensure 'fairness' in a concurrent system.
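Starvation is easy to reproduce with a deliberately unfair scheduler. This toy simulation assumes strict priority scheduling where high-priority work keeps arriving every tick; the task names and priority numbers are illustrative:

```python
def run(tasks, ticks):
    """Strict-priority scheduler: every tick, run the highest-priority
    ready task (lower number = higher priority). Because high-priority
    work re-arrives each tick, it wins every time."""
    ran = []
    for _ in range(ticks):
        ready = sorted(tasks)      # highest priority first
        ran.append(ready[0][1])    # always the same winner
    return ran

history = run([(0, "high"), (5, "low")], ticks=10)
# "low" never appears in history: it is starved
```

Real schedulers avoid this with fairness mechanisms such as aging, where a waiting task's effective priority rises the longer it is denied CPU time.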
Race Conditions happen when two concurrent tasks attempt to modify the same piece of data at the same time. Without proper synchronization, the final state of the data depends on the unpredictable order of the time-slices, leading to logic errors.
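The standard fix is to make the read-modify-write step atomic with a lock. A minimal sketch using Python's `threading` module (the counter and thread counts are arbitrary):

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    global counter
    for _ in range(n):
        # `counter += 1` is really read-modify-write: without the lock,
        # two threads can read the same old value and one update is lost.
        with lock:
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# with the lock, counter is exactly 40_000 on every run; without it,
# the final value can differ from run to run
```

The lock forces the interleaved time-slices to respect the critical section: only one thread at a time may perform the read-modify-write, so no update is lost regardless of scheduling order.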