1. What is a thread?
A thread, sometimes called a lightweight process (LWP), is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack. It shares with other threads belonging to the same process its code section, data section, and operating system resources such as open files and signals.
2. What are the benefits of multithreaded programming?
The benefits of multithreaded programming can be broken down into four major categories:
- Responsiveness
- Resource sharing
- Economy
- Utilization of multiprocessor architectures
3. Compare user threads and kernel threads.
User threads are supported above the kernel and are implemented by a thread library at the user level. Thread creation and scheduling are done in user space, without kernel intervention, so user threads are fast to create and manage. However, if a user thread performs a blocking system call, the entire process will block.
Kernel threads are supported directly by the operating system. Thread creation, scheduling, and management are done by the operating system, so kernel threads are slower to create and manage than user threads. However, if a kernel thread performs a blocking system call, the kernel can schedule another thread in the application for execution.
4. What is the use of the fork and exec system calls?
Fork is a system call by which a new process is created. Exec is also a system call; it is used after a fork by one of the two processes to replace the process's memory space with a new program.
5. Define thread cancellation and target thread.
Thread cancellation is the task of terminating a thread before it has completed. A thread that is to be cancelled is often referred to as the target thread. For example, if multiple threads are concurrently searching through a database and one thread returns the result, the remaining threads might be cancelled.
6. What are the different ways in which a thread can be cancelled?
Cancellation of a target thread may occur in two different scenarios:
- Asynchronous cancellation: one thread immediately terminates the target thread.
- Deferred cancellation: the target thread periodically checks whether it should terminate, giving it an opportunity to terminate itself in an orderly fashion.
7. Define CPU scheduling.
CPU scheduling is the process of switching the CPU among various processes. CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among processes, the operating system can make the computer more productive.
8. What are preemptive and nonpreemptive scheduling?
Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state. Under preemptive scheduling, a process that is using the CPU can be preempted partway through its execution and the CPU given to another process.
9. What is a dispatcher?
The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves:
- Switching context
- Switching to user mode
- Jumping to the proper location in the user program to restart that program.
10. What is dispatch latency?
The time taken by the dispatcher to stop one process and start another running is known as dispatch latency.
11. What are the various scheduling criteria for CPU scheduling?
The various scheduling criteria are
- CPU utilization
- Throughput
- Turnaround time
- Waiting time
- Response time
12. What is throughput?
Throughput in CPU scheduling is the number of processes completed per unit time. For long processes, this rate may be one process per hour; for short transactions, throughput might be ten processes per second.
13. What is turnaround time?
Turnaround time is the interval from the time of submission to the time of completion of a process. It is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
14. Define race condition.
A race condition occurs when several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place. To avoid a race condition, only one process at a time may manipulate the shared variable.
15. What is the critical section problem?
Consider a system consisting of n processes. Each process has a segment of code called a critical section, in which the process may be changing common variables, updating a table, or writing a file. When one process is executing in its critical section, no other process is allowed to execute in its critical section.
16. What are the requirements that a solution to the critical section problem must satisfy?
The three requirements are
- Mutual exclusion
- Progress
- Bounded waiting
17. Define entry section and exit section.
The critical section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section. The section of the code implementing this request is the entry section. The critical section is followed by an exit section. The remaining code is the remainder section.
19. What is a semaphore?
A semaphore S is a synchronization tool: an integer variable that, apart from initialization, is accessed only through two standard atomic operations, wait and signal. Semaphores can be used to deal with the n-process critical section problem and to solve various other synchronization problems.
20. Define busy waiting and spinlock.
When a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the entry code. This is called busy waiting, and this type of semaphore is also called a spinlock, because the process "spins" while waiting for the lock.