Chapter 1. An Introduction to OpenAFS

Table of Contents

AFS Concepts
Client/Server Computing
Distributed File Systems
AFS Filespace and Local Filespace
Cells and Sites
Volumes and Mount Points
Volume Quotas
Using Files in AFS
The Cache Manager
Updating Copies of Cached Files
Multiple Users Modifying Files
AFS Security
Passwords and Mutual Authentication
Access Control Lists
Differences Between UNIX and AFS
File Sharing
Login and Authentication
File and Directory Protection
Machine Outages
Remote Commands
Differences in the Semantics of Standard UNIX Commands
Using OpenAFS with NFS

This chapter introduces basic AFS concepts and terms. It assumes that you are already familiar with standard UNIX commands, file protection, and pathname conventions.

AFS Concepts

AFS makes it easy for people to work together on the same files, no matter where the files are located. AFS users do not have to know which machine is storing a file, and administrators can move files from machine to machine without interrupting user access. Users always identify a file by the same pathname and AFS finds the correct file automatically, just as happens in the local file system on a single machine. While AFS makes file sharing easy, it does not compromise the security of the shared files. It provides a sophisticated protection scheme.

Client/Server Computing

AFS uses a client/server computing model. In client/server computing, there are two types of machines. Server machines store data and perform services for client machines. Client machines perform computations for users and access data and services provided by server machines. Some machines act as both clients and servers. In most cases, you work on a client machine, accessing files stored on a file server machine.

Distributed File Systems

AFS is a distributed file system which joins together the file systems of multiple file server machines, making it as easy to access files stored on a remote file server machine as files stored on the local disk. A distributed file system has two main advantages over a conventional centralized file system:

  • Increased availability: A copy of a popular file, such as the binary for an application program, can be stored on many file server machines. An outage on a single machine or even multiple machines does not necessarily make the file unavailable. Instead, user requests for the program are routed to accessible machines. With a centralized file system, the loss of the central file storage machine effectively shuts down the entire system.

  • Increased efficiency: In a distributed file system, the workload is distributed across many smaller file server machines that tend to be more fully utilized than the larger (and usually more expensive) central machine of a centralized file system.

AFS hides its distributed nature, so working with AFS files looks and feels like working with files stored on your local machine, except that you can access many more files. And because AFS relies on the power of users' client machines for computation, increasing the number of AFS users does not slow AFS performance appreciably, making it a very efficient computing environment.

AFS Filespace and Local Filespace

AFS acts as an extension of your machine's local UNIX file system. Your system administrator creates a directory on the local disk of each AFS client machine to act as a gateway to AFS. By convention, this directory is called /afs, and it functions as the root of the AFS filespace.

Just like the UNIX file system, AFS uses a hierarchical file structure (a tree). Under the /afs root directory are subdirectories created by your system administrator, including your home directory. Other directories that are at the same level of the local file system as /afs, such as /usr, /etc, or /bin, can either be located on your local disk or be links to AFS directories. Files relevant only to the local machine are usually stored on the local machine. All other files can be stored in AFS, enabling many users to share them and freeing the local machine's disk space for other uses.


You can use AFS commands only on files in the AFS filespace or the local directories that are links to the AFS filespace.
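For example, running an AFS command such as fs listacl on a directory outside the AFS filespace fails. The following transcript is only a sketch; the exact wording of the error message can vary by OpenAFS version:

   % fs listacl /tmp
   fs: Invalid argument; it is possible that /tmp is not in AFS.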

Cells and Sites

The cell is the administrative domain in AFS. Each cell's administrators determine how client machines are configured and how much storage space is available to each user. The organization corresponding to a cell can be a company, a university department, or any defined group of users. From a hardware perspective, a cell is a grouping of client and server machines administered as a single unit. An AFS site is a grouping of one or more related cells. For example, the cells at the Example Corporation form a single site.

By convention, the subdirectories of the /afs directory are cellular filespaces, each of which contains subdirectories and files that belong to a single cell. For example, directories and files relevant to the Example Corporation cell are stored in the subdirectory /afs/

While each cell organizes and maintains its own filespace, it can also connect with the filespace of other AFS cells. The result is a huge filespace that enables file sharing within and across cells.

The cell to which your client machine belongs is called your local cell. All other cells in the AFS filespace are termed foreign cells.

Volumes and Mount Points

The storage disks in a computer are divided into sections called partitions. AFS further divides partitions into units called volumes, each of which houses a subtree of related files and directories. The volume provides a convenient container for storing related files and directories. Your system administrators can move volumes from one file server machine to another without your noticing, because AFS automatically tracks a volume's location.

You access the contents of a volume by accessing its mount point in the AFS filespace. A mount point is a special file system element that looks and acts like a regular UNIX directory, but tells AFS the volume's name. When you change to a different directory (by using the cd command, for example) you sometimes cross a mount point and start accessing the contents of a different volume than before. You normally do not notice the crossing, however, because AFS automatically interprets mount points and retrieves the contents of the new directory from the appropriate volume. You do not need to track which volume, partition, or file server machine is housing a directory's contents. If you are interested, though, you can learn a volume's location; for instructions, see Locating Files and Directories.
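If you do want to see which volume and file server machine lie behind a directory, the fs lsmount and fs whereis commands report them. The transcript below is a sketch; the cell name example.com, the path, and the host name fs1.example.com are hypothetical:

   % fs lsmount /afs/example.com/usr/smith
   '/afs/example.com/usr/smith' is a mount point for volume '#user.smith'

   % fs whereis /afs/example.com/usr/smith
   File /afs/example.com/usr/smith is on host fs1.example.com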

If your system administrator has followed the conventional practice, your home directory corresponds to one volume, which keeps its contents together on one partition of a file server machine. User volumes are typically named user.username. For example, the volume for a user named smith in the cell is called user.smith and is mounted at the directory /afs/

Because AFS volumes are stored on different file server machines, when a machine becomes unavailable only the volumes on that machine are inaccessible. Volumes stored on other machines are still accessible. However, if a volume's mount point resides in a volume that is stored on an unavailable machine, the former volume is also inaccessible. For that reason, volumes containing frequently used directories (for example, /afs and /afs/cellname) are often copied and distributed to many file server machines.

Volume Quotas

Each volume has a size limit, or quota, assigned by the system administrator. A volume's quota determines the maximum amount of disk space the volume can consume. If you attempt to exceed a volume's quota, you receive an error message. For instructions on checking volume quota, see Displaying Volume Quota.

Volumes have completely independent quotas. For example, say that the current working directory is /afs/, which is the mount point for the user.smith volume with 1000 free blocks. You try to copy a 500-block file from the current working directory to the /afs/ directory, the mount point for the user.pat volume. However, you get an error message saying there is not enough space. You check the volume quota for user.pat and find that the volume has only 50 free blocks.
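In a situation like this, the fs listquota command shows why the copy failed. The transcript below is a sketch of the scenario above; the path and cell name example.com are hypothetical, and the figures are chosen so that a quota of 1000 blocks with 950 used leaves the 50 free blocks described:

   % fs listquota /afs/example.com/usr/pat
   Volume Name                    Quota       Used  %Used   Partition
   user.pat                        1000        950    95%         46%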