What Is Linux?

The following material is excerpted from

Graham Glass and King Ables, Linux for Programmers and Users,
Pearson Prentice-Hall, 2006. ISBN 0-13-185748-7, pp. 4-15.

Operating System

The components of a computer system can't function together without an operating system. Many different operating systems are available for PCs, minicomputers, and mainframes; the most common ones are Linux, OpenVMS, MacOS, various versions of UNIX, and Windows. Linux and UNIX are available for many different hardware platforms, whereas most other operating systems are tied to a specific hardware family. This is one of the first good things about Linux: it's available for just about any machine.

Some operating systems are very large and include the command interpreter, windowing capability, and tools built into the operating system code. Linux is different. The part of Linux that can be considered the running "system" is known as the Linux kernel and provides only the "core" capabilities and interfaces for moving data between devices and managing running processes. The commands, editors, programs, windowing systems, and most of the other parts of the system with which people interface run separately from the kernel code.

Of the operating systems listed above, only Linux, OpenVMS, and UNIX allow more than one user to use the computer system at a time, providing a multi-user environment. Some businesses still buy a powerful minicomputer with twenty or more terminals and then use UNIX as the operating system that shares the CPUs, memory, and disks among the users. Now that workstation hardware is relatively inexpensive, every user can run a UNIX or Linux system on his or her desk.

Software

One way to describe the hardware of a computer system is that it provides a framework for executing programs and storing files. The kinds of programs that run on Linux platforms vary widely in size and complexity, but tend to share certain common characteristics. Here is a list of useful facts concerning Linux programs and files:

Figure 1 is an illustration of a tiny Linux directory hierarchy that contains four files and a process running the "sort" utility.


Figure 1. Directory Hierarchy

Sharing Resources

Another operating system function that Linux provides is the sharing of limited resources among competing processes. Limited resources in a typical computer system include CPUs, memory, disk space, and peripherals such as printers. Here is a brief outline of how these resources are shared:

Chapter 13, "Linux Internals," contains more details on how these sharing mechanisms are implemented. We've now looked at every major role that Linux plays as an operating system except one: as a medium for communication.

Communication

The components of a computer system cannot achieve very much when they work in isolation:

Linux provides several different ways for processes and peripherals to talk to each other, depending on the type and speed of the communication. For example, one way that a process can talk to another process is via an interprocess communication mechanism called a "pipe." A pipe is a one-way medium-speed data channel that allows two processes on the same machine to talk. If the processes are on different machines connected by a network, then a mechanism called a "socket" may be used instead. A socket is a two-way high-speed data channel.
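
To make the pipe mechanism concrete, here is a minimal C sketch (our illustration, not code from the book) in which a parent process sends a short message to a child process through a pipe created with the pipe() system call:

    /* pipe_demo.c: a parent process sends a message to a child
       process through a one-way pipe, illustrating the IPC
       mechanism described above. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];              /* fd[0] is the read end, fd[1] the write end */
        char buffer[64];

        if (pipe(fd) == -1) {   /* ask the kernel for a pipe */
            perror("pipe");
            exit(EXIT_FAILURE);
        }

        if (fork() == 0) {      /* child: read from the pipe */
            close(fd[1]);
            ssize_t n = read(fd[0], buffer, sizeof(buffer) - 1);
            if (n > 0) {
                buffer[n] = '\0';
                printf("child received: %s\n", buffer);
            }
            close(fd[0]);
            exit(EXIT_SUCCESS);
        }

        close(fd[0]);           /* parent: write into the pipe */
        const char *msg = "hello through the pipe";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);           /* closing the write end gives the reader EOF */
        wait(NULL);             /* wait for the child to finish */
        return 0;
    }

Compiled with cc pipe_demo.c -o pipe_demo, the child prints the message the parent wrote into its end of the pipe.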

It is becoming quite common nowadays for different pieces of a problem to be tackled by different processes on different machines. For example, there is a graphics system called the X Window System that works by using something termed a "client-server" model. One computer (the X "server") is used to control a graphics terminal and to draw the various lines, circles, and windows, while another computer (the X "client") generates the data that is to be displayed. Arrangements like this are examples of distributed processing, where the burden of computation is spread among many computers. In fact, a single X server may service many X clients. Figure 2 is an illustration of an X-based system.


Figure 2. An X server with X clients.

We will discuss the X Window System further in Chapter 10, "The Linux Desktop."

Utilities

Even the most powerful operating system isn't worth much to the average user unless there is useful software available for it. Linux distributions come complete with at least two hundred small utility programs, including a couple of text editors, a C/C++ compiler, a sorting utility, a graphical user interface, several command shells, and text-processing tools. Through the open source movement and commercial resources, many other popular packages like spreadsheets, compilers, and desktop publishing tools are also available.

Programmer Support

Any good operating system must also provide an environment in which programmers can develop new and innovative software to address the changing needs of the user community. Linux caters very well to programmers. It is an example of an "open" system, which means that the internal software architecture is well documented and available in source code form, either free of charge or for a relatively small fee. The features of Linux-such as parallel processing, interprocess communication, and file handling-are all easily accessible from a programming language such as C via a set of functions known as "system calls." Many facilities that were difficult to use on older operating systems are now within the reach of every systems programmer.
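
As a small illustration of the system-call interface (a sketch of ours, not an example from the book), the following C program copies a file to standard output using only the open(), read(), write(), and close() system calls, with no standard-library buffering in between:

    /* syscall_copy.c: copy a file to standard output using only the
       open(), read(), write(), and close() system calls. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        char buffer[4096];
        ssize_t n;

        if (argc != 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            exit(EXIT_FAILURE);
        }

        int fd = open(argv[1], O_RDONLY);   /* ask the kernel to open the file */
        if (fd == -1) {
            perror(argv[1]);
            exit(EXIT_FAILURE);
        }

        /* each read() and write() call crosses into the kernel and back */
        while ((n = read(fd, buffer, sizeof(buffer))) > 0)
            write(STDOUT_FILENO, buffer, (size_t)n);

        close(fd);
        return 0;
    }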

Standards

In "the old days," a computer ran an operating system that was designed and developed to run only on that specific computer. Another computer built by another company ran a different operating system, so not only was it difficult to move application software to another system, it was difficult to even use the other system if you didn't already know the operating system.

When UNIX, the predecessor and inspiration for Linux, was created, the authors wrote a great deal of the code in the C programming language. This made it relatively easy to port UNIX to different hardware platforms. This is an important benefit and has contributed a great deal to the proliferation and success of UNIX.

Over time, this portability led to the definition of a standard for the interfaces and behavior of a "portable operating system." Today, POSIX 1003.1 is the standard for UNIX and UNIX-like operating systems and is maintained by IEEE and The Open Group. Because Linux implements this POSIX standard, it "looks and feels" like a UNIX system even though no code [1] from any UNIX implementation is used in Linux. For more information about UNIX standards, visit the following web sites:

http://www.ieee.org

http://www.opengroup.org

http://www.unix.org

Linux Lineage

Linux has emerged as the most successful operating system adhering to the POSIX standard for a portable operating system. This is largely due to the existing popularity of UNIX, the availability of Linux for many different platforms, and the freedom of use and low cost that come from its distribution as open source software. In order to understand what Linux is, you have to know a little something about its roots.

UNIX

A computer scientist named Ken Thompson was interested in building a system for a game called "Space Travel," which required a fairly fast response time. The operating system that he was using, MULTICS [2], didn't give him the performance that he needed, so he decided to build his own operating system on a spare PDP-7 system. He called it UNICS because the "UNI" part of the name implied that it would do one thing well, as opposed to the "MULTI" part of the "MULTICS" name, which he felt tried to do many things without much success. He wrote his operating system in assembly language, and the first version was very primitive: it was only a single-user system, it had no network capability, and it had a poor memory management system for sharing memory between processes. However, it was efficient, compact, and fast, which was exactly what he wanted.

A few years later, a colleague of Ken's, Dennis Ritchie, suggested that they rewrite the operating system in the C language, which Dennis had recently developed from a language called B. The idea that an operating system could be written in a high-level language was an unusual approach at that time. Most people felt that compiled code would not run fast enough [3] and that only direct use of machine language was sufficient for such an important component of a computer system. Fortunately, C was slick enough that the conversion was successful, and the new operating system suddenly had a huge advantage over other operating systems: its source code was understandable. Only a small percentage of the original source code remained in assembly language, which meant that porting the operating system to a different machine was possible. As long as the target machine had a C compiler, most of the operating system would work with no changes; only the assembly-language sections had to be rewritten.

Bell Laboratories started using this prototype version of what was by then called UNIX in its patent department, primarily for text processing, and a number of UNIX utilities that are found in modern UNIX systems were originally designed during this time period. Examples of these utilities are nroff and troff. But because AT&T was prohibited from selling software due to antitrust regulations in the early 1970s, Bell Laboratories licensed UNIX source code to universities free of charge, hoping that enterprising students would enhance the system and further its progress into the marketplace.

Indeed, graduate students at the University of California at Berkeley (Bill Joy, co-founder of Sun Microsystems, among them) took the task to heart and made some huge improvements over the years, including the first good memory management system and the first real networking capability. In the late 1970s, the university began to distribute its own version of UNIX, called the Berkeley Software Distribution (BSD UNIX), to the general public. The differences between these versions of UNIX can still be seen in some versions of UNIX to this day.

With the breakup of the Bell System and release from many antitrust restrictions, AT&T was free to start selling UNIX licenses in the mid 1980s. AT&T UNIX had proceeded through releases known as System III and System V. By the end of the 1980s, workstation hardware was becoming economical and UNIX was infiltrating businesses and engineering environments, because companies like Sun (who commercialized BSD UNIX) and AT&T were selling and supporting its use.

Both System V and BSD UNIX have their own strengths and weaknesses, as well as a lot of commonality. Two consortia of leading computer manufacturers gathered behind these two versions of UNIX, each believing its own version to be the best. UNIX International, headed by AT&T and Sun, backed the latest version of System V UNIX, called System V Release 4. The Open Software Foundation (OSF), headed by IBM, Digital Equipment Corporation, and Hewlett-Packard, attempted to create the successor to BSD UNIX, called OSF/1. Both groups complied with a set of standards created by the Portable Operating System Interface (POSIX) committee of the Institute of Electrical and Electronics Engineers (IEEE). The OSF project has fallen by the wayside in recent years, leaving System V as the apparent "winner" of the "UNIX wars," although most of the best features of BSD UNIX have been rolled into most System V-based versions of UNIX. Hence, Solaris (from Sun Microsystems), HP-UX (from Hewlett-Packard), AIX (from IBM), and IRIX (from Silicon Graphics, Inc.), while all System V-based, also include many features of BSD UNIX at varying levels of completeness.

While the so-called "UNIX wars" (BSD vs. System V) were playing out, however, the watershed event that would lead to the evolution of Linux was brewing.

Open Source Software and the Free Software Foundation

The UNIX community has a long tradition of software being available in source code form, either free of charge or for a reasonably small fee, enabling people to learn from or improve the code in order to evolve the state of the art. UNIX itself started out this way, and many individual components related to (and in many cases now a part of) UNIX share this tradition. So it's no surprise that in a world of otherwise proprietary, shrink-wrapped software, where you buy what's available and bend your requirements to fit what the software provides, those who support the idea of freely available source code have banded together.

One of the first proponents of the idea of freely available software was Richard Stallman, one of the founders of the Free Software Foundation in the mid-1980s. Stallman had already written a version of the popular Emacs text editor and made it publicly available. He believed that everyone should have the right to obtain, use, view, and modify software. He started the GNU [4] Project, whose goal was to reproduce popular UNIX tools, and ultimately an entire UNIX-like operating system, in new code that could be freely distributed because it did not contain any licensed code as UNIX did. Early products included a version of the popular text editor Emacs and the GNU C Compiler. Today, GNU applications are numerous and popular, but the kernel itself proved to be more challenging. Work continues on GNU Hurd, a Mach-based UNIX-like kernel that will complete the FSF's goal of providing a free and standard operating system and tools.

The "free" in FSFs philosophy of free software does not mean the software is available at no cost, but rather that it comes with the freedom to use, view, and modify it. Up to this point, when someone wanted to give away their software, they simply stated that it belonged to the public domain. However, this allowed people to change it and include it in proprietary software, thus removing the freedom for others that had allowed them to use it. In order to retain their ownership and rights to GNU software but still provide for its use by the widest possible audience, the FSF developed the GNU General Public License (GPL) under which GNU software is licensed to the world. The GNU GPL provides for the copying, use, modification, and redistribution of GNU software provided that the same freedom to use, modify, and distribute is passed on to anyone who uses your version of the software. Where a copyright is used to protect the rights of the owner, the goal here is to protect the rights of the recipient of a distribution of the software as well. Thus, the FSF coined the term copy left to describe this somewhat inverted meaning. [Fink, 2003] is an excellent examination of the phenomenon of open source software, why it came about, where it works, and where it does not. For more information on the Free Software Foundation and the GNU Project, visit their web site:

http://www.fsf.org

Linus

In 1991, Linus Torvalds, a student at the University of Helsinki in Finland, posted a message to an Internet newsgroup, asking if anyone was interested in helping him develop a UNIX-like kernel. He had been playing with Minix, a small UNIX-like kernel developed by Andrew Tanenbaum for teaching operating system concepts, but Minix's role was to be small and demonstrate concepts, not to be a "real" operating system. Linus and like-minded programmers found each other and began to develop their own kernel code.

When he started his work, Linus had no intention of its becoming anything more than a hobby. Because he wanted others to be able to use it freely, Linus released Linux 1.0 (the name stands for "Linus' Minix") under the GNU GPL in 1994.

At first, Linus and a few friends maintained and modified the source code, but today thousands of volunteer developers around the world contribute new code and fixes. The combination of the Linux kernel and GNU utilities allows one to create a complete UNIX-like operating system, running on many different hardware platforms, and available in source form so you can make your own bug fixes and enhancements to it.

Linux shares no common code with any version of UNIX but adheres to the POSIX operating system standard, so it is indistinguishable from UNIX to the casual user. And because it has been written with the benefit of years of operating systems knowledge, in many places it is actually a significant improvement over UNIX.

With the release of Linux 2.0 in 1996, Linux became a major competitor to other popular operating systems, including commercial versions of UNIX.

Linux Packaging

Linus and his group of volunteer programmers developed a kernel, which is the core part of the operating system. But if you installed a kernel on a machine without the hundreds of tools, utilities, and applications that users require, it would not be of much use to most people. To complement the Linux kernel, the UNIX-like tools developed by the Free Software Foundation can be added to the kernel code and packaged as a distribution of open source software.

The distinction between the Linux kernel and the GNU utilities is an important one. While most people refer to a complete system as Linux, this is not strictly correct. Linux is technically only the kernel itself; most of the command utilities and applications come from the GNU Project.

Many companies and organizations have created their own distributions of Linux and the GNU utilities, as we will see in Chapter 2, "Installing Your Linux System." When you receive a Linux distribution from a vendor, it was packaged, and perhaps modified or added to, by that vendor, but it contains code from Linus and his kernel team and from the FSF's GNU Project. Because it is all covered by the GNU GPL, you are free to use and modify all of the code in any way you wish; if you redistribute it, however, you must do so under the terms of the GNU GPL, thus allowing anyone else to use and modify any code you might have added.

More information on Linux, Linux distributions, download locations, and documentation can be found on the following web site:

http://www.linux.org

The Linux and UNIX Philosophy

So what is Linux? Let's be clear: Linux is not UNIX. It shares no code with UNIX. But because both operating systems adhere to the same POSIX standard, they look and act almost alike, so for most people, the fact that they are not the same thing is only a technicality. But it is an extremely important technicality!

Linux is a complete reimplementation. Because it shares no common code with any version of UNIX, it does not connect into the UNIX "family tree." Even so, Linux has strong philosophical connections and design influences derived from virtually all versions of UNIX. To talk about the philosophy of Linux and the GNU utilities is really to talk about the UNIX philosophy.

The original UNIX system was lean and mean. It had a very small number of utilities and virtually no network or security functionality. The original designers of UNIX had some pretty strong notions about how utilities should be written: a program should do one thing and do it well, and complex tasks should be performed by using these utilities together. To this end, they built a special mechanism called a "pipe" into the heart of UNIX to support their vision. A pipe allows a user to specify that the output of one process is to be used as the input to another process. Two or more processes may be connected in this fashion, resulting in a "pipeline" of data flowing from the first process through to the last (Figure 3).


Figure 3. A pipeline.

The nice thing about pipelines is that many problems can be solved by such an arrangement of processes. Each process in the pipeline performs a set of operations upon the data and then passes the results on to the next process for further processing. For example, imagine that you wish to obtain a sorted list of all the users on the system. There is a utility called who that outputs an unsorted list of the users, and another utility called sort that outputs a sorted version of its input. These two utilities may be connected together with a pipe so that the output from who passes directly into sort, resulting in a sorted list of users (Figure 4).


Figure 4. A pipeline that sorts.

This is a more powerful approach to solving problems than writing a fresh program from scratch every time, or running two programs in sequence and storing the intermediate data in a temporary file so that the second program can read it.
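
Under the hood, a shell builds the who | sort pipeline of Figure 4 out of the same system calls discussed earlier. The following C sketch (ours, not the book's) does by hand roughly what the shell does: it creates a pipe, runs who with its standard output connected to the pipe's write end, and runs sort with its standard input connected to the read end:

    /* pipeline.c: build the "who | sort" pipeline of Figure 4 by hand,
       doing roughly what a shell does with pipe(), fork(), dup2(),
       and exec. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];

        if (pipe(fd) == -1) {
            perror("pipe");
            exit(EXIT_FAILURE);
        }

        if (fork() == 0) {                  /* first child runs "who" */
            dup2(fd[1], STDOUT_FILENO);     /* its stdout goes into the pipe */
            close(fd[0]);
            close(fd[1]);
            execlp("who", "who", (char *)NULL);
            perror("who");                  /* reached only if exec fails */
            exit(EXIT_FAILURE);
        }

        if (fork() == 0) {                  /* second child runs "sort" */
            dup2(fd[0], STDIN_FILENO);      /* its stdin comes from the pipe */
            close(fd[0]);
            close(fd[1]);
            execlp("sort", "sort", (char *)NULL);
            perror("sort");
            exit(EXIT_FAILURE);
        }

        close(fd[0]);                       /* parent keeps no pipe ends open */
        close(fd[1]);
        wait(NULL);                         /* wait for both children */
        wait(NULL);
        return 0;
    }

Note that when who exits, the pipe's write end is closed everywhere, so sort sees end-of-file, sorts what it has read, and prints the result.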

The UNIX (and therefore Linux) philosophy for solving problems can thus be stated:

Inside Linux is hidden another, more subtle philosophy that is slowly eroding. The original system was designed by programmers who liked to have the power to access data or code anywhere in the system, regardless of who owned it. To support this capability, they built the concept of a "super-user" into UNIX, which means that certain privileged individuals may have special access rights. For example, the system administrator of a UNIX system always has the capability of becoming a super-user so that he or she may perform cleanup tasks such as terminating rogue processes or removing unwanted users from the system. The concept of a super-user has security implications that are a little frightening: anyone with the right password could potentially wipe out an entire system or extract top-security data with relative ease.
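
On UNIX and Linux systems the super-user is simply the account with user ID 0 (conventionally named root). As a small illustration (our sketch, not the book's), a program can ask the kernel whether it is running with super-user privileges via the geteuid() system call:

    /* whoami_uid.c: the super-user is simply user ID 0; a process can
       ask the kernel for its user IDs with getuid() and geteuid(). */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        uid_t uid  = getuid();    /* real user ID: who ran the program */
        uid_t euid = geteuid();   /* effective user ID: used for access checks */

        printf("real uid %d, effective uid %d\n", (int)uid, (int)euid);
        if (euid == 0)
            printf("running with super-user privileges\n");
        else
            printf("running as an ordinary user\n");
        return 0;
    }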

Linux Features

Here is a recap of the features that Linux provides:

Because it provides all the features expected of a modern operating system, doing it in a way that is well documented, accessible, and adheres to a defined standard, and because its implementation is open source and freely available, Linux has made, and will continue to make, its mark on modern operating system design.

Throughout this book, references to web sites are listed for specific topics as they arise. In addition to those specific references, there are a number of useful web sites containing a great deal of valuable information about Linux (Figure 5).

http://www.kernel.org/       Linux Kernel Archives
http://www.li.org/           Linux International
http://www.linux.org/        Linux homepage at Linux Online
http://www.linuxhq.com/      Linux Headquarters
http://www.linuxjournal.com/ Linux Journal
http://www.tldp.org/         The Linux Documentation Project

Figure 5. Useful Linux web sites.

Linux Users

Linux users tend to fall into one of several categories:

The Rest of This Book

As you can probably tell by now, Linux is a fairly substantial topic, and can only be properly digested in small portions. In order to aid this process, and to allow individual readers to focus on the subjects that they find most applicable, I decided to write this book's chapters based on the different kinds of Linux user.

To begin with, read the chapters that interest you the most. Then go back and fill in the gaps when you have the time. If you're unsure of which chapters are most appropriate for your skill level, read the introductory section "About This Book" for some hints.

Chapter Review

In this chapter, I mentioned:

Checklist

Quiz

  1. What are the two main versions of UNIX that influenced Linux, and how did each begin?
  2. Write down five main functions of an operating system.
  3. What is the difference between a process and a program?
  4. What is the UNIX/Linux philosophy?
  5. What is the difference between an "open system" and an open source system?

Source

Graham Glass and King Ables, Linux for Programmers and Users,
Pearson Prentice-Hall, 2006. ISBN 0-13-185748-7, pp. 4-15.

Footnotes

  1. A pending lawsuit by SCO against IBM disputes this. SCO alleges that IBM has used some original UNIX code in its distribution(s) of Linux. Should this turn out to be true, the code will most certainly be removed, so even if this is an issue, it is only a temporary one.

  2. The Multiplexed Information and Computing Service, originally developed by Bell Labs, MIT, and General Electric.

  3. Compiler technology has also improved greatly since then, so the code most compilers produce is much more efficient.

  4. GNU is a recursive acronym standing for "GNU's Not UNIX" and pronounced "guh-NEW."


Maintained by John Loomis, last updated 4 March 2006