An example of a Unix system call for process management is fork(), which creates a new process. This gives rise to the notions of parent and child process; each process has a unique integer process ID. The new (child) process is a copy of the address space of the original (parent) process, and both processes continue execution at the instruction after the fork() call. fork() returns zero to the child process, while the nonzero process ID of the child is returned to the parent.
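This return-value convention can be sketched with Python's os.fork(), which wraps the same Unix system call (the exit code 7 below is an arbitrary value, chosen only so the parent can observe its child; this is a Unix-only illustration):

```python
import os

def fork_demo():
    """Fork once; return True if the parent saw the child's pid and exit code."""
    pid = os.fork()
    if pid == 0:
        # Child: fork() returned zero.
        os._exit(7)              # arbitrary exit code for the parent to check
    # Parent: fork() returned the child's nonzero process ID.
    _, status = os.waitpid(pid, 0)
    return pid > 0 and os.WEXITSTATUS(status) == 7
```

Both the parent and the child execute the line after fork(); only the returned value tells them apart.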
Even simple shell commands such as "echo" ultimately rely on system calls (for example, write()) to produce their output. 2. Multiprogramming is a technique in which the operating system organizes jobs so that the CPU is always busy, that is, the CPU always has a process to execute. Time sharing, in turn, is "a logical extension of multiprogramming": the CPU executes multiple jobs by switching among them, and the switches happen so frequently that users can interact with each program while it is running. Both multiprogramming and time sharing aim to use a system's resources (such as CPU and memory) effectively.
However, multiprogramming does not provide an interactive environment for the user, whereas time sharing allows the user to communicate with the system directly because response times are short. Time sharing also allows multiple users to share a computer and use it simultaneously. 3. There are three general approaches to designing an operating system: the simple structure, the layered approach, and the microkernel approach. Early operating systems were small and simple; their primary goal was to provide all the necessary functionality in the least space, with little attention to the internal structure of the OS components.
MS-DOS is an example of an OS with a simple structure. In the layered approach, by contrast, the operating system is broken into a number of levels or layers, each providing a different set of functions for the system. Consider a specific layer, say layer N: its data structures and routines can be invoked by layer N+1, and layer N in turn invokes services provided by the layer below it (layer N-1).
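As a toy sketch of this idea (the layer names below are hypothetical, not taken from any real operating system), each function stands for one layer and calls only the layer directly beneath it:

```python
# Hypothetical three-layer stack: each layer invokes only the layer below it.
def layer0_hardware(op):
    return "hw:" + op                            # lowest layer: the bare hardware

def layer1_driver(op):
    return "drv(" + layer0_hardware(op) + ")"    # layer 1 uses layer 0's services

def layer2_syscall(op):
    return "sys(" + layer1_driver(op) + ")"      # layer N invokes layer N-1
```

A request entering the top layer is progressively wrapped by each layer's own processing, which mirrors how each layer hides the details of the layers below it.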
Lastly, the microkernel approach structures the operating system by removing all nonessential components from the kernel and "implementing them as system and user-level programs." A microkernel provides only minimal process and memory management. Virtual machines take the layered approach "to its logical conclusion." A virtual machine provides an interface identical to the underlying hardware, creating the illusion that there are multiple processes, each executing on its own processor with its own memory.
The resources of the physical machine are partitioned to create the virtual machines. An apparent advantage of the virtual machine concept is that it allows complete protection of a computer's resources, since each virtual machine is isolated from the others and direct sharing of resources is not allowed. However, virtual machines are difficult to implement, since providing an exact duplicate of the underlying hardware is required for them to function properly. 4. During its execution, a process changes its "state".
A process in the "ready" state is waiting to be assigned to a processor for execution, while the "running" state indicates that the process's instructions are currently being executed. Note that on a single processor, only one process can be in the "running" state at any time. 5. A process is defined as a unit of work in a time-sharing system; simply put, it is a program in execution. A thread, on the other hand, is the basic unit of CPU utilization. It shares with other threads of the same process the code and data sections and other OS resources. A conventional process has a single thread of control.
If a process has multiple threads, it can perform more than one task at a time; hence the distinction between single-threaded and multithreaded processes. It is important for an operating system to support both processes and threads. Multithreading increases responsiveness and improves user interaction. Threads also provide an effective resource-sharing mechanism, since threads share the resources of the process to which they belong. Threads and processes are likewise important in multiprocessor architectures, where threads can run in parallel. 6.
One noticeable advantage of using semaphores over Dekker's algorithm for process synchronization is that processes do not busy-wait while waiting for a needed resource; Dekker's algorithm, in contrast, relies on busy waiting. While processes are blocked on a semaphore, the CPU is free to perform other work. In addition, semaphores work on multiprocessor systems, whereas synchronization using Dekker's algorithm is limited to two processes. Moreover, modern CPUs may reorder memory operations, so Dekker's algorithm will not work correctly on such systems without memory barriers.
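A minimal sketch with Python's threading.Semaphore illustrates the blocking (non-busy-waiting) behavior: the waiting thread sleeps inside acquire() until it is signalled, leaving the CPU free for other work in the meantime.

```python
import threading

def semaphore_demo():
    sem = threading.Semaphore(0)   # starts unavailable (count = 0)
    events = []

    def waiter():
        sem.acquire()              # blocks (sleeps) rather than busy-waiting
        events.append("resource acquired")

    t = threading.Thread(target=waiter)
    t.start()
    events.append("signalling")    # runs while the waiter is still blocked
    sem.release()                  # signal: wake the waiter
    t.join()
    return events
```

The waiter cannot proceed until release() is called, so "signalling" is always recorded before "resource acquired".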
7. In the round-robin (RR) CPU scheduling algorithm, each process gets a small unit of CPU time, called the time quantum, usually 10-100 milliseconds. Once the quantum has elapsed, the process is preempted and added to the end of the ready queue. If the time quantum is too small, the overhead from context switching becomes too high; if it is too large, the scheduler's behavior degenerates into that of the first-in, first-out (FIFO) scheduling algorithm.
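The quantum trade-off can be seen in a small simulation (a sketch only; the process names and burst times below are made up):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return the order in which processes finish under round-robin scheduling.

    bursts is a list of (pid, cpu_burst) pairs in arrival order.
    """
    queue = deque(bursts)                    # the ready queue
    finished = []
    while queue:
        pid, remaining = queue.popleft()
        if remaining <= quantum:
            finished.append(pid)             # completes within its quantum
        else:
            # Quantum expired: preempt and move to the end of the ready queue.
            queue.append((pid, remaining - quantum))
    return finished
```

With a quantum larger than every burst, each process runs to completion on its first turn, so the finish order is exactly FIFO.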
8. A data transfer using DMA (direct memory access) starts when the device driver is asked to transfer disk data to a buffer at a certain address. The device driver then tells the disk controller to transfer X bytes from the disk to that buffer, and the disk controller initiates the DMA transfer. The disk controller sends each byte to the DMA controller, which in turn writes the bytes into the buffer. Finally, when the transfer is finished, the DMA controller sends an interrupt to the CPU to indicate completion. 9.
Some of the goals of device-independent I/O software are uniform interfacing for device drivers, a uniform device-naming mechanism, and error reporting. Device independence envisions a computing environment in which programs can access any I/O device without specifying the device to be used in advance; uniform interfacing for all device drivers is essential to achieving this. The device-naming mechanism refers to the uniform naming of I/O devices. Lastly, error handling should be done as close to the hardware as possible in order to manage errors more effectively.
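The uniform-interface goal can be sketched as follows (NullDevice, RamDisk, and copy_bytes are hypothetical names invented for this illustration, not a real driver API): any driver exposing the same read/write operations is usable by device-independent code without that code knowing which device it is talking to.

```python
class NullDevice:
    """Toy driver: reads return zero bytes, writes are discarded."""
    def read(self, n):
        return b"\x00" * n
    def write(self, data):
        return len(data)

class RamDisk:
    """Toy driver backed by an in-memory buffer."""
    def __init__(self):
        self.buf = bytearray()
    def read(self, n):
        return bytes(self.buf[:n])
    def write(self, data):
        self.buf.extend(data)
        return len(data)

def copy_bytes(src, dst, n):
    """Device-independent: works with any driver exposing read/write."""
    return dst.write(src.read(n))
```

Because copy_bytes depends only on the uniform interface, swapping one driver for another requires no change to the calling code.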