A History of Operating Systems

This section reviews the history and underlying concepts of operating system development, and is organized as follows:

1. Early Operating Systems (1940s)
2. Growth of Operating System Concepts (1950s)
3. Concurrency and Parallelism
4. Impact of Personal Computers on OS Technology

Discussion is based on information compiled from texts by Tanenbaum, Peterson, and Crowley, several of which are listed in the Optional Class Materials section of the main Web page for this course.

We begin our historical discussion of operating systems with a perspective on generations of computer technology and early efforts at OS development in Section 2. Computer hardware technology has been classified by Tanenbaum [1] into the following generational categories:

Generation 1: Vacuum Tubes and Plugboards (1945-1955)
Generation 2: Transistors and Batch Systems (1955-1965)
Generation 3: Integrated Circuits and Multiprogramming (1965-1980)
Generation 4: Personal Computers (1980-1990)

To this taxonomy we would also add:

Generation 5: Networks and Distributed OS (1985-present)

Instead of exactly following Tanenbaum's classification scheme, we present a simpler organization of OS development into early, mid-level, and more sophisticated efforts, as follows.

Early Operating Systems (1940s)

Recall from the in-class discussion of the history of computers that Charles Babbage and others who developed purely mechanical computers did not have to worry about an operating system. The computational hardware had fixed functionality, and only one computation could run at any given time.


Furthermore, programming as we know it did not exist. Thus, there was no operating system to interface between the user and the bare machine. However, the Princeton and University of Pennsylvania machines, which were based on vacuum tubes and relays, were complete working machines during the WWII period.

In contrast, Atanasoff's machine was used prior to WWII to solve systems of linear equations, and Zuse's machine worked only in part. These machines were programmed with plugboards that encoded a form of machine language. High-level programming languages and operating systems had not yet been invented.

On these early vacuum-tube machines, a programmer would run a program by inserting his or her plugboards into a rack, then turning on the machine and hoping that none of the vacuum tubes would die during the computation. Most of the computations were numerical, for example, tables of sine, cosine, or logarithm functions. In WWII, these machines were used to calculate artillery trajectories. It is interesting to note that the Generation 1 computers produced results at a slower rate than today's hand-held calculators.

Growth of Operating System Concepts (1950s)

By the early 1950s, punch-card input had been developed, and there was a wide variety of machinery that supported the punching (both manual and automatic) and reading of card decks.

The programmer would take his or her deck to the operator, who would insert it into the reader, and the program would be loaded into the machine. Computation would ensue, without other tasks being performed.

This was more flexible than using plugboards, but most programming was done in machine language (ones and zeroes). Otherwise, the procedure was identical to that of the plugboard machines of the 1940s.


In 1947, the transistor was invented at Bell Laboratories. Within several years, transistorized electronic equipment was being produced in commercial quantity. This led computer designers to speculate on the use of transistor-based circuitry for computation. By 1955, computers were using transistor circuits, which were more reliable and smaller than vacuum tubes, and consumed less power.

As a result, a small number of computers were produced and sold commercially, but at enormous cost per machine.

A job is a program or collection of programs to be run on a computing machine. A job queue is a list of waiting jobs. The early transistor machines required that a job be submitted well in advance of being run.

The operator would run the card deck through the reader, load the job on tape, load the tape into the computer, run the computer, get the printout from the printer, and put the printout in the user's or programmer's mailbox. All this human effort took time that was large in relation to the time spent actually running each job on the computer. If ancillary software (e.g., a compiler or loader) was required, the operator had to locate and load that as well. At this time, a few businesses were using computers to track inventory and payroll, as well as to conduct research in process optimization.

The business leaders who had paid great sums of money for computers were chagrined that so much human effort was required to initiate and complete jobs while the expensive machine sat idle. The first solution to this largely economic problem was batch processing. A batch processing system is a system that processes collections of multiple jobs, one collection at a time.

This processing does not occur in real time; rather, jobs are collected for a period of time before they are processed. Off-line processing consists of tasks performed by an ancillary machine not connected to the main computer. Batch processing used a small, inexpensive machine to input and collect jobs.

The jobs were read onto magnetic tape, which implemented a primitive type of job queue. When the reel of tape was full of input, an operator would take the reel to the main computer and load it onto the main computer's tape drive. The main computer would compute the jobs, and write the results to another tape drive.

After the batch was computed, the output tape would be taken to another small, inexpensive computer that was connected to the printer or to a card punch. The output would be produced by this third computer. The first and third computers performed off-line processing in input and output mode, respectively.
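To make this flow concrete, the following Python sketch simulates the three-machine pipeline described above; the job names and the deque-based "tapes" are illustrative stand-ins, not historical detail.

    from collections import deque

    def read_cards_to_tape(card_decks):
        """Input computer: copy submitted card decks onto an input tape."""
        input_tape = deque()
        for deck in card_decks:
            input_tape.append(deck)
        return input_tape

    def run_batch(input_tape):
        """Main computer: run every job on the input tape and write the
        results to an output tape, never waiting on the slow card reader."""
        output_tape = deque()
        while input_tape:
            job = input_tape.popleft()
            output_tape.append(f"results of {job}")
        return output_tape

    def print_output(output_tape):
        """Output computer: drive the printer from the output tape."""
        while output_tape:
            print(output_tape.popleft())

    print_output(run_batch(read_cards_to_tape(["payroll", "inventory", "trajectory"])))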

This approach greatly decreased the time operators dedicated to each job. However, the main computer needed to know what jobs were being input to its tape drive, and what those jobs required. So, a system of punched-card identifiers was developed that served as input to a primitive operating system, which in turn replaced the operator's actions in loading programs, compilers, and data into the computer.

For example, a COBOL program with some input data would be submitted as a deck partitioned by such identifier cards: cards identifying the job and the compiler to be loaded, the program itself, and the input data. If present, data were encoded on one or more punched cards and were read in on an as-needed basis.

The primitive operating system also controlled the tape drive that read in the data. A final identifier card specified the end of a job; this was required to keep the computer from running the scratch tape(s) and output tape(s) when no computation was being performed.
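As an illustration, here is a minimal Python sketch of a control-card interpreter; the card names ($JOB, $RUN, $END) are hypothetical, chosen in the spirit of the identifiers described above, and real systems used vendor-specific conventions.

    def process_deck(cards):
        job_name, program, data = None, [], []
        section = None
        for card in cards:
            if card.startswith("$JOB"):
                job_name = card.split(maxsplit=1)[1]   # identifier card starts a job
            elif card == "$RUN":
                section = "data"                       # subsequent cards are input data
            elif card == "$END":
                print(f"{job_name}: program of {len(program)} cards, "
                      f"{len(data)} data card(s)")     # end of job: stop the drives
                job_name, program, data, section = None, [], [], None
            elif section == "data":
                data.append(card)                      # data read in as needed
            else:
                program.append(card)                   # program (source) cards

    process_deck(["$JOB payroll", "MOVE A TO B.", "ADD 1 TO X.", "$RUN", "42", "$END"])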


These sequences of job control cards were forerunners of Job Control Language (JCL) and command-based operating systems. Unfortunately, computer manufacturers had developed a divided product line, with one set of computers for business applications and another for scientific computation. The scientific machines had high-performance arithmetic logic units (ALUs), whereas business machines were better suited to processing large amounts of data using integer arithmetic.

Maintenance of two separate but incompatible product lines was prohibitively expensive. Also, manufacturers did not usually support customers who initially bought a small computer and then wanted to move their programs to a bigger machine.


Instead, the users had to rewrite all their programs to work on the larger machine. Both of these practices (called application incompatibility and version incompatibility) were terribly inefficient and contributed to the relatively slow adoption of computers by medium-sized businesses. A multi-purpose computer can perform many different types of computations, for example, business and scientific applications.

An upward-compatible software and hardware paradigm ensures that a program written for a smaller or earlier machine in a product line will run on a larger, subsequently developed computer in the same product line. In the early 1960s, IBM proposed to solve such problems with one family of computers that were to be multi-purpose and upward compatible. The computing hardware varied only in price, features, and performance, and was amenable to either business or scientific computing needs.

The 360 was the first major computer line to use small-scale integrated circuits (transistors packaged in small groups), a technology pioneered by Fairchild Semiconductor and Texas Instruments in the late 1950s.

The idea of a family of software-compatible machines was quickly adopted by all the major computer manufacturers. Not only did the OS have to run well on all the different types of machines and for different programs, but it had to be efficient for each of these applications. The resulting operating system was enormously large and complex, and new releases were regularly issued to correct its bugs. These new releases introduced further bugs.


In response to customer complaints arising from the effects of these bugs, IBM produced a steady stream of software patches designed to correct a given bug, often without much regard for other effects of the patch. Since the patches were hastily written, new bugs were often introduced in addition to the undesirable side effects of each new system release or patch. The IBM System 370 family of computers capitalized on the new technology of large-scale integration, and took IBM's large-scale operating system strategies well into the late 1970s, when several new lines of machines (the 4300 and 3090 series) were developed, the latter of which persists to the present day.

We next discuss the key innovation of multiprogramming. Multiprogramming involves having more than one job in memory at a given time. Recall our discussion of early operating systems in Section 2: when the main computer was loading a job from its input tape or loading a compiled program from scratch tape, the CPU and arithmetic unit sat idle. Multiprogramming provided a solution to the idle-CPU problem by casting computation in terms of a process cycle, which has the following process states:

1. A given process P is loaded into memory from secondary storage such as tape or disk.
2. All or part of P executes, producing intermediate or final results.
3. The execution of P is suspended, and P is placed on a wait queue, which is a job queue in memory where waiting jobs are temporarily stored (similar to being put on hold during a telephone conversation).
4. P is taken off the wait queue and resumes execution on the CPU exactly where it left off when it entered the wait queue (also called the wait state).
5. Execution of P terminates, normally or abnormally, and the results of P are written to an output device. The memory occupied by P is cleared for another process.

Implementationally, memory was partitioned into several pieces, each of which held a different job. This virtually ensured that the CPU was utilized nearly all the time.
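This cycle can be summarized as a small state machine. The following Python sketch encodes the states and legal transitions described above; the state names and transition table are illustrative, not drawn from any particular historical system.

    from enum import Enum, auto

    class State(Enum):
        LOADED = auto()      # P loaded into memory from secondary storage
        RUNNING = auto()     # all or part of P executes on the CPU
        WAITING = auto()     # P suspended on the wait queue
        TERMINATED = auto()  # results written out, memory reclaimed

    # Legal transitions in the cycle: load -> run, run <-> wait, run -> terminate.
    TRANSITIONS = {
        State.LOADED: {State.RUNNING},
        State.RUNNING: {State.WAITING, State.TERMINATED},
        State.WAITING: {State.RUNNING},   # resumes exactly where it left off
        State.TERMINATED: set(),
    }

    def move(current, new):
        if new not in TRANSITIONS[current]:
            raise ValueError(f"illegal transition {current.name} -> {new.name}")
        return new

    s = State.LOADED
    for nxt in (State.RUNNING, State.WAITING, State.RUNNING, State.TERMINATED):
        s = move(s, nxt)
        print(s.name)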

Spooling (an acronym for Simultaneous Peripheral Operation On-Line) involves the concurrent loading of a process from input into memory. Another technological development that made computers more efficient was the introduction of magnetic disks as storage media. The availability of disk storage made tape-based batch queues obsolete by allowing a program to be transferred directly to the system disk while it was being read in from the card reader.

This application of spooling at the input, and the simultaneous introduction of spooling for printers, further increased computing efficiency by reducing the need for operators to mount tapes. Similarly, the need for separate computers to handle the assembly of job queues from cards to tape also disappeared, further reducing the cost of computing.
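The essence of spooling, input proceeding concurrently with computation, can be sketched with two threads sharing a queue that stands in for the spool area on disk. This is a minimal illustration; the job names and structure are invented.

    import queue
    import threading

    spool = queue.Queue()   # stands in for the spool area on the system disk
    DONE = object()         # sentinel marking the end of the input stream

    def card_reader():
        """Reader: copy incoming 'cards' directly onto the spool."""
        for job in ("job-A", "job-B", "job-C"):
            spool.put(job)
        spool.put(DONE)

    def cpu():
        """CPU: run spooled jobs while the reader is still spooling."""
        while True:
            job = spool.get()
            if job is DONE:
                break
            print(f"running {job} while input continues to spool")

    reader = threading.Thread(target=card_reader)
    worker = threading.Thread(target=cpu)
    reader.start(); worker.start()
    reader.join(); worker.join()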