06/06/2017

Operating System Introduction (Unit 1)

An operating system is software that works as an interface between the user and the computer hardware. Its primary objectives are to make the computer system convenient to use and to utilize the computer hardware in an efficient manner. The operating system performs basic tasks such as receiving input from the keyboard, processing instructions, and sending output to the screen.

The operating system manages the overall activities of a computer and the input/output devices attached to it. It is the first software you see when you turn on the computer and the last software you see when the computer is turned off.

When you turn on the computer, the core operating system program, called the kernel, is loaded into main memory. Once initialized, it is ready to run user programs and allows them to use the hardware efficiently. The kernel is the central component of the operating system and, in informal terms, is often called its heart.


OS Layered Architecture


Types Of Operating System


Mainframe Operating Systems

At the high end are the operating systems for mainframes, those room-sized computers. These computers differ from personal computers in terms of their I/O capacity. Mainframe operating systems are heavily oriented toward processing many jobs at once and typically offer three kinds of services: batch, transaction processing, and time sharing.

A batch system is one that processes routine jobs without any interactive user present, for example, claims processing in an insurance company.

Transaction processing systems handle large numbers of small requests, for example, check processing at a bank.

Time sharing systems allow multiple remote users to run jobs on the computer at once, such as querying a big database.

Mainframe operating systems often perform all three functions; an example is OS/390, a descendant of OS/360. However, mainframe operating systems are gradually being replaced by UNIX variants such as Linux.


Server Operating Systems

Server operating systems run on servers, which are either very large personal computers, workstations, or even mainframes. They serve multiple users at once over a network and allow the users to share hardware and software resources. Servers can provide print service, file service, or Web service. Internet providers run many server machines to support their customers, and websites use servers to store Web pages and handle incoming requests. Typical server operating systems are Solaris, FreeBSD, Linux, and Windows Server 200x.


Multiprocessor Operating Systems

Connecting multiple CPUs into a single system gives what are called parallel computers, multicomputers, or multiprocessors; multiprocessor operating systems manage such machines. Many popular operating systems, including Windows and Linux, run on multiprocessors.



Personal Computer Operating Systems

Personal computer operating systems support multiprogramming, but their job is to provide good support to a single user. They are widely used for word processing, spreadsheets, and Internet access. Common examples are Linux, FreeBSD, Windows Vista, and the Macintosh operating system.


Handheld Computer Operating Systems

A handheld computer or PDA (Personal Digital Assistant) is a small computer that fits in a shirt pocket and performs a small number of functions, such as an electronic address book. The operating systems that run on these handhelds are sophisticated, with the ability to handle telephony, digital photography, and other functions, and they also run third-party applications. Examples are Symbian OS and Palm OS.

One difference between handhelds and PCs is that the former do not have multi-gigabyte hard disks.


Embedded Operating Systems

Embedded systems run on the computers that control devices that are not generally thought of as computers and which do not accept user-installed software. Typical examples are microwave ovens, TV sets, and cars. Since no untrusted software will ever run on them (this is what distinguishes them from handhelds), there is no need for protection between applications, which leads to some simplification. Examples are QNX and VxWorks.


Sensor Node Operating Systems

Networks of tiny sensor nodes are being deployed for numerous purposes.

- They work wirelessly.
- They communicate with a base station.

Such networks are used to guard national borders, detect fires in forests, and measure temperature and precipitation for weather forecasting. The sensors are small, battery-powered computers with built-in radios. They have limited power and must work for long periods of time. All programs are loaded in advance, which makes the design simpler. TinyOS is a well-known operating system for a sensor node.


Real-Time Operating Systems

These systems are characterized by having time as a key parameter. If an action absolutely must occur at a certain moment (or within a certain range), we have a hard real-time system. Many of these are found in industrial process control; such systems must provide absolute guarantees that a certain action will occur by a certain time. In a soft real-time system, missing an occasional deadline, while not desirable, is acceptable and does not cause any damage. Real-time systems are used mostly in industrial settings.
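
As a rough illustration of "time as a key parameter", the C sketch below shows a periodic task that wakes up on absolute deadlines using the POSIX clock_nanosleep() call. The 10 ms period and the empty work step are invented for illustration; a hard real-time system would additionally need a scheduler that guarantees the work always finishes before the next deadline.

    #include <stdio.h>
    #include <time.h>

    /* Illustrative sketch (POSIX assumed): a periodic task driven by absolute deadlines. */
    int main(void) {
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);          /* current time = base of first deadline */
        for (int i = 0; i < 5; i++) {
            printf("periodic work, iteration %d\n", i); /* real work would go here */

            /* advance the deadline by 10 ms (hypothetical period) */
            next.tv_nsec += 10 * 1000 * 1000;
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec += 1;
            }
            /* sleep until the absolute deadline, not for a relative interval */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
        return 0;
    }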


Smart Card Operating Systems

The smallest operating systems run on smart cards, credit-card-sized devices containing a CPU chip that have very severe processing power and memory constraints. Some are powered by contacts in the reader into which they are inserted, but contactless smart cards are inductively powered, which greatly limits what they can do. Some of them can handle only a single function, such as electronic payments, but others can handle multiple functions on the same card. Some smart cards are Java oriented.


Views of Operating system


User View

The user's view of the computer varies according to the interface being used. Most computer users sit in front of a PC, consisting of a monitor, keyboard, mouse, and system unit. Such a system is designed for one user to monopolize its resources. The goal is to maximize the work (or play) that the user is performing. In this case, the operating system is designed mostly for ease of use, with some attention paid to performance and none paid to resource utilization—how various hardware and software resources are shared. Performance is, of course, important to the user; but rather than resource utilization, such systems are optimized for the single-user experience.

In other cases, a user sits at a terminal connected to a mainframe or minicomputer. Other users are accessing the same computer through other terminals. These users share resources and may exchange information. The operating system in such cases is designed to maximize resource utilization—to assure that all available CPU time, memory, and I/O are used efficiently and that no individual user takes more than her fair share.

In still other cases, users sit at workstations connected to networks of other workstations and servers. These users have dedicated resources at their disposal, but they also share resources such as networking and servers—file, compute, and print servers. Therefore, their operating system is designed to compromise between individual usability and resource utilization.

Recently, many varieties of handheld computers have come into fashion. Most of these devices are standalone units for individual users. Some are connected to networks, either directly by wire or (more often) through wireless modems and networking.

System View


From the computer's point of view, the operating system is the program most intimately involved with the hardware. In this context, we can view an operating system as a resource allocator. A computer system has many resources that may be required to solve a problem: CPU time, memory space, file-storage space, I/O devices, and so on. The operating system acts as the manager of these resources. Facing numerous and possibly conflicting requests for resources, the operating system must decide how to allocate them to specific programs and users so that it can operate the computer system efficiently and fairly. As we have seen, resource allocation is especially important where many users access the same mainframe or minicomputer.

A slightly different view of an operating system emphasizes the need to control the various I/O devices and user programs. An operating system is a control program. A control program manages the execution of user programs to prevent errors and improper use of the computer. It is especially concerned with the operation and control of I/O devices.

System Call

To perform any operation, a user program must request a service from the operating system. Such a request is made through a special call known as a system call.

A system call is a request to the operating system to run a program or perform some operation on the system. A program normally runs in user mode; when it requests an operating system service, the processor switches to kernel mode, where the operating system receives the request, processes it, and returns the results. Everyday actions such as opening a folder or moving the mouse ultimately cause system calls to be made on the user's behalf. Generally, system calls are made by user-level programs in the following situations (a short example follows the list):
  • Creating, opening, closing and deleting files in the file system. 
  • Creating and managing new processes.
  • Creating a connection in the network, sending and receiving packets.
  • Requesting access to a hardware device, like a mouse or a printer.
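
For instance, the first situation above can be seen in a few lines of C: open(), read(), and close() are thin library wrappers around kernel system calls. The file name /etc/hostname is only an example; any readable file would do.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char buf[128];
        int fd = open("/etc/hostname", O_RDONLY);   /* system call: open a file */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        ssize_t n = read(fd, buf, sizeof(buf) - 1); /* system call: read from it */
        if (n > 0) {
            buf[n] = '\0';
            printf("read %zd bytes: %s", n, buf);
        }
        close(fd);                                  /* system call: release the descriptor */
        return 0;
    }
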
In a typical UNIX system, there are around 300 system calls. Some of the important ones in this context are described below.


The fork() system call is used to create processes. When a process (a program in execution) makes a fork() call, an exact copy of the process is created. Now there are two processes, one being the parent process and the other being the child process.
The process that made the fork() call is the parent process, and the newly created process is the child process. The child process is initially an exact copy of the parent; fork() returns the child's process ID in the parent and 0 in the child, which is how the two copies tell themselves apart.
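
A minimal C sketch of fork() is shown below; the return value is how each copy finds out whether it is the parent or the child.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();             /* system call: duplicate this process */
        if (pid < 0) {
            perror("fork");             /* fork failed */
            return 1;
        } else if (pid == 0) {
            printf("child:  pid=%d\n", getpid());
        } else {
            printf("parent: pid=%d, child pid=%d\n", getpid(), pid);
            wait(NULL);                 /* wait for the child to terminate */
        }
        return 0;
    }
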
The exec() family of system calls is used to run a new program. There is one big difference between fork() and exec(): the fork() call creates a new process while preserving the parent process, whereas an exec() call replaces the address space, text segment, data segment, etc. of the current process with those of a new program.
This means that after a successful exec() call, the old program no longer exists inside the process; only the new program runs in it. There are many more system calls in the system; the diagram below shows the system calls used for particular tasks.
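
The usual UNIX pattern combines the two calls: the parent fork()s a child, and the child immediately exec()s the new program. The program /bin/ls used below is only an example.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {
            /* child: replace its program image with /bin/ls */
            execl("/bin/ls", "ls", "-l", (char *)NULL);
            perror("execl");            /* reached only if exec failed */
            return 1;
        }
        wait(NULL);                     /* parent: wait for the child (now running ls) */
        printf("child has finished\n");
        return 0;
    }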



Operating System Structure

For efficient performance and implementation an OS should be partitioned into separate subsystems, each with carefully defined tasks, inputs, outputs, and performance characteristics. These subsystems can then be arranged in various architectural configurations:

Simple Structure

When DOS was originally written its developers had no idea how big and important it would eventually become. It was written by a few programmers in a relatively short amount of time, without the benefit of modern software engineering techniques, and then gradually grew over time to exceed its original expectations.

MS-DOS does not break the system into subsystems and has no distinction between user and kernel modes, allowing all programs direct access to the underlying hardware. (Note that user versus kernel mode was not supported by the 8088 chip anyway, so that really wasn't an option back then.) Below is the MS-DOS layer structure.


The original UNIX OS used a simple layered approach, but almost all of the OS was in one big layer, without really breaking it down into layered subsystems:


Layered Approach


Another approach is to break the OS into a number of smaller layers, each of which rests on the layer below it and relies solely on the services provided by the next lower layer. This approach allows each layer to be developed and debugged independently, with the assumption that all lower layers have already been debugged and are trusted to deliver proper services.


Layered approaches can also be less efficient, as a request for service from a higher layer has to filter through all the lower layers before it reaches the hardware, possibly with significant processing at each step.
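
The toy C sketch below mimics this idea: each function stands for one layer and calls only the layer directly below it, so a request from the top passes through every intermediate layer before reaching the "hardware". The layer names are invented purely for illustration.

    #include <stdio.h>

    /* Toy model of a layered OS: each function represents one layer
       and calls only the layer immediately below it. */
    static void hardware_write(const char *s)      { printf("hardware: %s\n", s); }  /* layer 0 */
    static void device_driver_write(const char *s) { hardware_write(s); }            /* layer 1 */
    static void file_system_write(const char *s)   { device_driver_write(s); }       /* layer 2 */
    static void system_call_write(const char *s)   { file_system_write(s); }         /* layer 3 */

    int main(void) {
        /* a request made at the top layer filters through every lower layer */
        system_call_write("hello");
        return 0;
    }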



Microkernel

  • The basic idea behind a microkernel is to remove all non-essential services from the kernel and implement them as system applications instead, thereby making the kernel as small and efficient as possible.
  • Most microkernels provide basic process and memory management, plus message passing between the other services, and not much more (a sketch of the message-passing idea appears after this list).
  • Security and protection can be enhanced, as most services are performed in user mode, not kernel mode.
  • Mach was the first and most widely known microkernel, and now forms a major component of Mac OS X.
  • Windows NT was originally a microkernel, but it suffered from performance problems; performance was improved by moving more services into the kernel, and Windows XP is back to being more monolithic.
  • Another microkernel example is QNX, a real-time OS for embedded systems.
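
To give a flavour of this message-passing style (not the API of any particular microkernel), the sketch below uses a POSIX message queue to carry a request from a "client" to a user-mode "server" within a single program. The queue name /fs_service and the message text are invented for illustration; on Linux the program is linked with -lrt.

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 8, .mq_msgsize = 64 };
        mqd_t q = mq_open("/fs_service", O_CREAT | O_RDWR, 0600, &attr);
        if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

        /* "client" side: send a request message instead of calling into a large kernel */
        const char *request = "READ file.txt";
        mq_send(q, request, strlen(request) + 1, 0);

        /* "server" side: a user-mode service would receive and handle the request */
        char buf[64];
        if (mq_receive(q, buf, sizeof(buf), NULL) > 0)
            printf("server received: %s\n", buf);

        mq_close(q);
        mq_unlink("/fs_service");
        return 0;
    }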



Monolithic Kernel

In a monolithic kernel, all the parts of the kernel, such as the scheduler, file system, memory management, networking stacks, and device drivers, are maintained in a single unit.

Advantages
  • Faster processing

Disadvantages
  • Crash insecurity (a fault in any one component can crash the whole kernel)
  • Porting inflexibility
  • Kernel size explosion

Examples
  • MS-DOS, UNIX, Linux

Concept of Virtual Machines

The concept of a virtual machine is to provide an interface that looks like independent hardware to multiple different OSes running simultaneously on the same physical hardware. Each OS believes that it has access to and control over its own CPU, RAM, I/O devices, hard drives, etc. One obvious use for this approach is the development and testing of software that must run on multiple platforms and/or OSes. One obvious difficulty involves the sharing of hard drives, which are generally partitioned into separate, smaller virtual disks, one for each guest OS.

Benefits

  • Each OS runs independently of all the others, offering protection and security benefits.
  • Sharing of physical resources is not commonly implemented, but may be done as if the virtual machines were networked together.
  • Virtual machines are a very useful tool for OS development, as they allow a user full access to and control over a virtual machine without affecting other users of the real machine.
  • As mentioned before, this approach can also be useful for product development and testing of software that must run on multiple OS/hardware platforms. The diagram below illustrates a virtual machine.



Below is a diagram comparing the different types of kernels: