Section 6.1: Overview
- There are several interfaces offered by the Cache Manager, allowing clients to access the files stored by the community of AFS File Servers, to configure the Cache Manager's behavior and resources, to store and retrieve authentication information, to specify the location of community Authentication Server and Volume Location Server services, and to observe and debug the Cache Manager's state and actions. This chapter will cover the following five interfaces to the Cache Manager:
- ioctl(): The standard unix ioctl() system call has been extended to include more operations, namely waiting until data stores to a File Server complete before returning to the caller (VIOCCLOSEWAIT) and getting the name of the cell in which an open file resides (VIOCIGETCELL).
- pioctl(): An additional system call is provided through which applications can access operations specific to AFS, which are often tied to a particular pathname. These operations include Access Control List (ACL) and mount point management, Kerberos ticket management, cache configuration, cell configuration, and status of File Servers.
- RPC: Interface by which outside servers and investigators can manipulate the Cache Manager. There are two main categories of routines: callback management, typically called by the File Server, and debugging/statistics, called by programs such as cmdebug and via the xstat user-level library for collection of extended statistics.
- Files: Much of the Cache Manager's configuration information, as well as its view of the AFS services available from the outside world, is obtained from parsing various files. One set of these files is typically located in /usr/vice/etc, and includes CellServDB, ThisCell, and cacheinfo. Another set is usually found in /usr/vice/cache, namely CacheItems, VolumeItems, and AFSLog.
- Mariner: This is the interface by which file transfer activity between the Cache Manager and File Servers may be monitored. Specifically, it is used to monitor the names of the files and directories being fetched and/or stored over the network.
- Another important component not described in this document is the afsd program. It is afsd's job to initialize the Cache Manager on a given machine and to start up its related daemon threads. It accepts a host of configuration decisions via its command-line interface. In addition, it parses some of the information kept in the configuration files mentioned above and passes that information to the Cache Manager. The reader may find a full description of afsd in the AFS 3.0 Command Reference Manual[2].
Section 6.2: Definitions
- This section defines data structures that are used by the pioctl() calls.
Section 6.2.1: struct VenusFid
- The Cache Manager is the sole active AFS agent aware of the cellular architecture of the system. Since AFS file identifiers are not guaranteed to be unique across cell boundaries, it must further qualify them for its own internal bookkeeping. The struct VenusFid provides just such additional qualification, attaching the Cache Manager's internal cell identifier to the standard AFS fid.
Fields
- long Cell - The internal identifier for the cell in which the file resides.
- struct ViceFid Fid - The AFS file identifier within the above cell.
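- In C, the structure can be sketched as below. The struct ViceFid layout (Volume, Vnode, Unique) follows the File Server interface chapter, and the comparison helper SameVenusFid() is a hypothetical illustration of why the extra Cell qualifier matters:

```c
#include <assert.h>

/* AFS file identifier as used by the File Server interface (assumed layout). */
struct ViceFid {
    unsigned long Volume;   /* volume ID hosting the object */
    unsigned long Vnode;    /* vnode index within the volume */
    unsigned long Unique;   /* uniquifier for the vnode slot */
};

/* The Cache Manager's cell-qualified fid, as described above. */
struct VenusFid {
    long Cell;              /* internal identifier of the hosting cell */
    struct ViceFid Fid;     /* AFS file identifier within that cell */
};

/* Two VenusFids name the same object only if both cell and fid match;
 * the fid alone is not unique across cell boundaries. */
static int SameVenusFid(const struct VenusFid *a, const struct VenusFid *b)
{
    return a->Cell == b->Cell &&
           a->Fid.Volume == b->Fid.Volume &&
           a->Fid.Vnode  == b->Fid.Vnode &&
           a->Fid.Unique == b->Fid.Unique;
}
```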
Section 6.2.2: struct ClearToken
- This is the clear-text version of an AFS token of identity. Its fields are encrypted into the secret token format, and are made easily available to the Cache Manager in this structure.
Fields
- long AuthHandle - Key version number.
- char HandShakeKey[8] - Session key.
- long ViceId - Identifier for the AFS principal represented by this token.
- long BeginTimestamp - Timestamp of when this token was minted, and hence came into effect.
- long EndTimestamp - Timestamp of when this token is considered to be expired, and thus disregarded.
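- A C rendering of the structure, with a hypothetical lifetime check illustrating how the two timestamps are intended to be used:

```c
#include <assert.h>

/* Clear-text AFS token of identity, per the field list above. */
struct ClearToken {
    long AuthHandle;        /* key version number */
    char HandShakeKey[8];   /* session key */
    long ViceId;            /* AFS principal represented by this token */
    long BeginTimestamp;    /* time at which the token came into effect */
    long EndTimestamp;      /* time after which the token is disregarded */
};

/* A token is honored only within its [Begin, End] lifetime. */
static int TokenIsCurrent(const struct ClearToken *t, long now)
{
    return now >= t->BeginTimestamp && now <= t->EndTimestamp;
}
```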
Section 6.3: ioctl() Interface
- The standard unix ioctl() system call performs operations on file system objects referenced with an open file descriptor. AFS has augmented this system call with two additional operations, one to perform "safe stores", and one to get the name of the cell in which a file resides. A third ioctl() extension is now obsolete, namely aborting a store operation currently in progress.
Section 6.3.1: VIOCCLOSEWAIT
- [Opcode 1] Normally, a client performing a unix close() call on an AFS file resumes once the store operation on the given file data to the host File Server has commenced but before it has completed. Thus, it is possible that the store could actually fail (say, because of network partition or server crashes) without the client's knowledge. This new ioctl opcode specifies to the Cache Manager that all future close() operations will wait until the associated store operation to the File Server has completed fully before returning.
Section 6.3.2: VIOCABORT
- [Opcode 2] This ioctl() extension is now obsolete. This call results in a noop. The original intention of this call was to allow a store operation currently in progress to a File Server on the named fid to be aborted.
Section 6.3.3: VIOCIGETCELL
- [Opcode 3] Get the name of the cell in which the given fid resides. If the file is not an AFS file, then ENOTTY is returned. The output buffer specified in the data area must be large enough to hold the null-terminated string representing the file's cell, otherwise EFAULT is returned. However, an out_size value of zero specifies that the cell name is not to be copied into the output buffer. In this case, the caller is simply interested in whether the file is in AFS, and not its exact cell of residence.
Section 6.4: pioctl() Interface
Section 6.4.1: Introduction
- There is a new unix system call, pioctl(), which has been defined especially to support the AFS Cache Manager. Its functional definition is as follows:
int afs_syscall_pioctl(IN char *a_pathP,
                       IN int a_opcode,
                       IN struct ViceIoctl *a_paramsP,
                       IN int a_followSymLinks)
- This new call is much like the standard ioctl() call, but differs in that the affected file (when applicable) is specified by its path, not by a file descriptor. Another difference is the fourth parameter, a_followSymLinks, which determines which file is used should a_pathP be a symbolic link. If a_followSymLinks is set to 1, then the symbolic link is followed to its target, and the pioctl() is applied to that resulting file. If a_followSymLinks is set to 0, then the pioctl() applies to the symbolic link itself.
- Not all pioctl() calls affect files. In those cases, the a_pathP parameter should be set to a null pointer. The second parameter to pioctl(), a_opcode, specifies which operation is to be performed. The opcode for each of these operations is included in the text of its description. Note that not all pioctl() opcodes are in use; the unused values correspond to obsolete operations.
- The descriptions that follow identify some of the possible error codes for each pioctl() opcode, but do not offer a comprehensive list. All pioctl() calls return 0 upon success.
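- The following sketch shows the shape of the parameter block and a hypothetical helper for the common case of a call that takes no input. Only the field names (in, in_size, out, out_size) come from this section; the field types of struct ViceIoctl and the helper name NoInputBlob() are assumptions for illustration:

```c
#include <assert.h>
#include <stddef.h>

/* Parameter block handed to pioctl(); field names follow the text,
 * the field types are assumed. */
struct ViceIoctl {
    char *in;        /* input buffer for the operation */
    char *out;       /* output buffer for results */
    short in_size;   /* bytes of valid input data */
    short out_size;  /* capacity of the output buffer */
};

/* Fill a ViceIoctl for an operation taking no input at all. */
static void NoInputBlob(struct ViceIoctl *v, char *out, short out_size)
{
    v->in = NULL;
    v->in_size = 0;
    v->out = out;
    v->out_size = out_size;
}
```

- A call such as fetching a file's hosting cell would then be issued as pioctl(path, opcode, &blob, 1), with a return value of 0 indicating success.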
- The rest of this section proceeds to describe the individual opcodes available. First, though, one asymmetry in this opcode set is pointed out, namely that while various operations are defined on AFS mount points, there is no direct way to create a mount point.
- This documentation partitions the pioctl() calls into several groups:
- Volume operations
- File Server operations
- Cell Operations
- Authentication Operations
- ACL Operations
- Cache operations
- Miscellaneous operations
- For all pioctl()s, the fields within the a_paramsP parameter will be referred to directly. Thus, the values of in, in_size, out, and out_size are discussed, rather than the settings for a_paramsP->in, a_paramsP->in_size, a_paramsP->out, and a_paramsP->out_size.
- For convenience of reference, a list of the actively-supported pioctl()s, their opcodes, and brief description appears (in opcode order) below.
- [1] VIOCSETAL : Set the ACL on a directory
- [2] VIOCGETAL : Get the ACL for a directory
- [3] VIOCSETTOK : Set the caller's token for a cell
- [4] VIOCGETVOLSTAT : Get volume status
- [5] VIOCSETVOLSTAT : Set volume status
- [6] VIOCFLUSH : Flush an object from the cache
- [8] VIOCGETTOK : Get the caller's token for a cell
- [9] VIOCUNLOG : Discard authentication information
- [10] VIOCCKSERV : Check the status of one or more File Servers
- [11] VIOCCKBACK : Mark cached volume info as stale
- [12] VIOCCKCONN : Check caller's tokens/connections
- [14] VIOCWHEREIS : Find host(s) for a volume
- [20] VIOCACCESS : Check caller's access on object
- [21] VIOCUNPAG : See [9] VIOCUNLOG
- [22] VIOCGETFID : Get fid for named object
- [24] VIOCSETCACHESIZE : Set maximum cache size in blocks
- [25] VIOCFLUSHCB : Unilaterally drop a callback
- [26] VIOCNEWCELL : Set cell service information
- [27] VIOCGETCELL : Get cell configuration entry
- [28] VIOC_AFS_DELETE_MT_PT : Delete a mount point
- [29] VIOC_AFS_STAT_MT_PT : Get the contents of a mount point
- [30] VIOC_FILE_CELL_NAME : Get cell hosting a given object
- [31] VIOC_GET_WS_CELL : Get caller's home cell name
- [32] VIOC_AFS_MARINER_HOST : Get/set file transfer monitoring output
- [33] VIOC_GET_PRIMARY_CELL : Get the caller's primary cell
- [34] VIOC_VENUSLOG : Enable/disable Cache Manager logging
- [35] VIOC_GETCELLSTATUS : Get status info for a cell entry
- [36] VIOC_SETCELLSTATUS : Set status info for a cell entry
- [37] VIOC_FLUSHVOLUME : Flush cached data from a volume
- [38] VIOC_AFS_SYSNAME : Get/set the sysname mapping
- [39] VIOC_EXPORTAFS : Enable/disable NFS/AFS translation
- [40] VIOCGETCACHEPARAMS : Get current cache parameter values
Section 6.4.2: Mount Point Asymmetry
- There is an irregularity which deserves to be mentioned regarding the pioctl() interface. There are pioctl() operations for getting information about a mount point (VIOC_AFS_STAT_MT_PT) and for deleting a mount point (VIOC_AFS_DELETE_MT_PT), but no operation for creating mount points. To create a mount point, a symbolic link obeying a particular format must be created. The first character must be either a "%" or a "#", depending on the type of mount point being created (see the discussion in Section 6.4.4.4). If the mount point carries the name of the cell explicitly, the full cell name will appear next, followed by a colon. In all cases, the next portion of the mount point is the volume name. By convention, the last character of a mount point must always be a period ("."). This trailing period is not visible in the output from fs lsmount.
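- As an illustration, a hypothetical helper that composes such a symbolic-link value from its parts (the function name and buffer handling are not part of AFS; only the string format comes from the rules above):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Compose the symbolic-link contents for a mount point: 'type' is '#'
 * (regular) or '%' (force read-write), 'cell' may be NULL when the cell
 * name is to be omitted, and a trailing period is always appended.
 * Returns the number of characters written, or -1 on overflow. */
static int MakeMountString(char *buf, size_t len, char type,
                           const char *cell, const char *volume)
{
    int n;
    if (cell != NULL)
        n = snprintf(buf, len, "%c%s:%s.", type, cell, volume);
    else
        n = snprintf(buf, len, "%c%s.", type, volume);
    return (n < 0 || (size_t)n >= len) ? -1 : n;
}
```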
Section 6.4.3: Volume Operations
- There are several pioctl() opcodes dealing with AFS volumes. It is possible to get and set volume information (VIOCGETVOLSTAT, VIOCSETVOLSTAT), discover which volume hosts a particular file system object (VIOCWHEREIS), remove all objects cached from a given volume (VIOC_FLUSHVOLUME), and revalidate cached volume information (VIOCCKBACK).
Section 6.4.3.1: VIOCGETVOLSTAT: Get volume status for pathname
- [Opcode 4] Fetch information concerning the volume that contains the file system object named by a_pathP. There is no other input for this call, so in_size should be set to zero. The status information is placed into the buffer named by out, if out_size is set to a value of sizeof(struct VolumeStatus) or larger. Included in the volume information are the volume's ID, quota, and number of blocks used in the volume as well as the disk partition on which it resides. Internally, the Cache Manager calls the RXAFS_GetVolumeInfo() RPC (See Section 5.1.3.14) to fetch the volume status.
- Among the possible error returns, EINVAL indicates that the object named by a_pathP could not be found.
Section 6.4.3.2: VIOCSETVOLSTAT: Set volume status for pathname
- [Opcode 5] Set the status fields for the volume hosting the file system object named by a_pathP. The first object placed into the input buffer in is the new status image. Only those fields that may change, namely the MinQuota and MaxQuota fields, are interpreted upon receipt by the File Server, and are set to the desired values. Immediately after the struct VolumeStatus image, the caller must place the null-terminated string name of the volume involved in the input buffer. New settings for the offline message and MOTD (Message of the Day) strings may appear after the volume name. If there are no changes in the offline and/or MOTD messages, a null string must appear for that item. The in_size parameter must be set to the total number of bytes so inserted, including the nulls after each string. Internally, the Cache Manager calls the RXAFS_SetVolumeStatus() RPC (See Section 5.1.3.16) to store the new volume status.
- Among the possible error returns, EINVAL indicates that the object named by a_pathP could not be found.
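- A sketch of how that input buffer might be assembled; PackSetVolStat() is a hypothetical helper that treats the struct VolumeStatus image as an opaque run of bytes:

```c
#include <assert.h>
#include <string.h>

/* Pack the VIOCSETVOLSTAT input buffer: a struct VolumeStatus image,
 * then three null-terminated strings (volume name, offline message,
 * MOTD; the latter two may be empty but must be present).  Returns
 * the total byte count to use as in_size, or -1 if 'buf' is too
 * small.  'status' is treated as an opaque image of 'statlen' bytes. */
static int PackSetVolStat(char *buf, size_t len,
                          const void *status, size_t statlen,
                          const char *volname,
                          const char *offline, const char *motd)
{
    size_t need = statlen + strlen(volname) + 1
                + strlen(offline) + 1 + strlen(motd) + 1;
    char *p = buf;
    if (need > len)
        return -1;
    memcpy(p, status, statlen);   p += statlen;
    strcpy(p, volname);           p += strlen(volname) + 1;
    strcpy(p, offline);           p += strlen(offline) + 1;
    strcpy(p, motd);              p += strlen(motd) + 1;
    return (int)(p - buf);
}
```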
Section 6.4.3.3: VIOCWHEREIS: Find the server(s) hosting the pathname's volume
- [Opcode 14] Find the set of machines that host the volume in which the file system object named by a_pathP resides. The input buffer in is not used by this call, so in_size should be set to zero. The output buffer indicated by out is filled with up to 8 IP addresses, one for each File Server hosting the indicated volume. Thus, out_size should be set to at least (8*sizeof(long)). This group of hosts is terminated by the first zeroed IP address that appears in the list, but under no circumstances are more than 8 host IP addresses returned.
- Among the possible error returns is EINVAL, indicating that the pathname is not in AFS, hence is not contained within a volume. If ENODEV is returned, the associated volume information could not be obtained.
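- Walking the returned address list can be sketched as follows; CountVolumeHosts() is a hypothetical helper that applies the termination rule above:

```c
#include <assert.h>

/* Count the File Server addresses in a VIOCWHEREIS output buffer:
 * the list holds up to 8 longword IP addresses and is terminated by
 * the first zero entry. */
static int CountVolumeHosts(const long addrs[8])
{
    int n;
    for (n = 0; n < 8; n++)
        if (addrs[n] == 0)
            break;
    return n;
}
```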
Section 6.4.3.4: VIOC_FLUSHVOLUME: Flush all data cached from the pathname's volume
- [Opcode 37] Determine the volume in which the file system object named by a_pathP resides, and then throw away all currently cached copies of files that the Cache Manager has obtained from that volume. This call is typically used should a user suspect there is some cache corruption associated with the files from a given volume.
Section 6.4.3.5: VIOCCKBACK: Check validity of all cached volume information
- [Opcode 11] Ask the Cache Manager to check the validity of all cached volume information. None of the call's parameters are referenced in this call, so a_pathP and in should be set to the null pointer, and in_size and out_size should be set to zero.
- This operation is performed in two steps:
- 1 The Cache Manager first refreshes its knowledge of the root volume, usually named root.afs. On success, it wakes up any of its own threads waiting on the arrival of this information, should it have been previously unreachable. This typically happens should the Cache Manager discover in its startup sequence that information on the root volume is unavailable. Lacking this knowledge at startup time, the Cache Manager settles into a semi-quiescent state, checking every so often to see if volume service is available and thus may complete its own initialization.
- 2 Each cached volume record is flagged as being stale. Any future attempt to access information from these volumes will result in the volume record's data first being refreshed from the Volume Location Server.
Section 6.4.4: File Server Operations
- One group of pioctl() opcodes is aimed at performing operations against one or more File Servers directly. Specifically, a caller may translate a pathname into the corresponding AFS fid (VIOCGETFID), unilaterally discard a set of callback promises (VIOCFLUSHCB), get status on mount points (VIOC_AFS_STAT_MT_PT), delete unwanted mount points (VIOC_AFS_DELETE_MT_PT), and check the health of a group of File Servers (VIOCCKSERV).
Section 6.4.4.1: VIOCGETFID: Get augmented fid for named file system object
- [Opcode 22] Return the augmented file identifier for the file system object named by a_pathP. The desired struct VenusFid is placed in the output buffer specified by out. The output buffer size, as indicated by the out_size parameter, must be set to the value of sizeof(struct VenusFid) or greater. The input buffer is not referenced in this call, so in should be set to the null pointer and in_size set to zero.
- Among the possible error returns, EINVAL indicates that the object named by a_pathP was not found.
Section 6.4.4.2: VIOCFLUSHCB: Unilaterally drop a callback
- [Opcode 25] Remove any callback information kept by the Cache Manager on the file system object named by a_pathP. Internally, the Cache Manager executes a call to the RXAFS_GiveUpCallBacks() RPC (See Section 5.1.3.13) to inform the appropriate File Server that it is being released from its particular callback promise. Note that if the named file resides on a read-only volume, then the above call is not made, and success is returned immediately. This optimization is possible because AFS File Servers do not grant callbacks on files from read-only volumes.
- Among the possible error returns is EINVAL, which indicates that the object named by a_pathP was not found.
Section 6.4.4.3: VIOC_AFS_DELETE_MT_PT: Delete a mount point
- [Opcode 28] Remove an AFS mount point. The name of the directory in which the mount point exists is specified by a_pathP, and the string name of the mount point within this directory is provided through the in parameter. The input buffer length, in_size, is set to the length of the mount point name itself, including the trailing null. The output buffer is not accessed by this call, so out should be set to the null pointer and out_size to zero.
- One important note is that the a_followSymLinks argument must be set to zero for correct operation. This is counter-intuitive, since at first glance it seems that a symbolic link that resolves to a directory should be a valid pathname parameter. However, recall that mount points are implemented as symbolic links that do not actually point to another file system object, but rather simply contain cell and volume information (see the description in Section 6.4.2). This "special" symbolic link must not be resolved by the pioctl(), but rather presented as-is to the Cache Manager, which then properly interprets it and generates a reference to the given volume's root directory. As an unfortunate side-effect, a perfectly valid symbolic link referring to a directory will be rejected out of hand by this operation as a value for the a_pathP parameter.
- Among the possible error returns, EINVAL reports that the named directory was not found, and ENOTDIR indicates that the pathname contained within a_pathP is not a directory.
Section 6.4.4.4: VIOC_AFS_STAT_MT_PT: Get the contents of a mount point
- [Opcode 29] Return the contents of the given mount point. The directory in which the mount point in question resides is provided via the a_pathP argument, and the in buffer contains the name of the mount point object within this directory. As usual, in_size is set to the length of the input buffer, including the trailing null. If the given object is truly a mount point and the out buffer is large enough (its length appears in out_size), the mount point's contents are stored into out.
- The mount point string returned obeys a stylized format, as fully described in Section 5.6.2 of the AFS 3.0 System Administrator's Guide[1]. Briefly, a leading pound sign ("#") indicates a standard mount point, inheriting the read-only or read-write preferences of the mount point's containing volume. On the other hand, a leading percent sign ("%") advises the Cache Manager to cross into the read-write version of the volume, regardless of the existence of read-only clones. If a colon (":") separator occurs, the portion up to the colon itself denotes the fully-qualified cell name hosting the volume. The rest of the string is the volume name itself.
- Among the possible error codes is EINVAL, indicating that the named object is not an AFS mount point. Should the name passed in a_pathP be something other than a directory, then ENOTDIR is returned.
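- Decomposing such a string can be sketched as below; ParseMountString() is a hypothetical helper, and it makes no attempt to strip any stored trailing period:

```c
#include <assert.h>
#include <string.h>

/* Split a mount point string into its type character, optional cell,
 * and volume name.  Returns 0 on success, or -1 if the string does
 * not start with '#' or '%' or a field does not fit. */
static int ParseMountString(const char *mp, char *type,
                            char *cell, size_t clen,
                            char *volume, size_t vlen)
{
    const char *colon;
    if (mp[0] != '#' && mp[0] != '%')
        return -1;
    *type = mp[0];
    mp++;
    colon = strchr(mp, ':');
    if (colon != NULL) {                 /* explicit cell name present */
        size_t n = (size_t)(colon - mp);
        if (n >= clen)
            return -1;
        memcpy(cell, mp, n);
        cell[n] = '\0';
        mp = colon + 1;
    } else {
        cell[0] = '\0';                  /* no explicit cell */
    }
    if (strlen(mp) >= vlen)
        return -1;
    strcpy(volume, mp);
    return 0;
}
```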
Section 6.4.4.5: VIOCCKSERV: Check the status of one or more File Servers
- [Opcode 10] Check the status of the File Servers that have been contacted over the lifetime of the Cache Manager. The a_pathP parameter is ignored by this call, so it should be set to the null pointer. The input parameters as specified by in are completely optional. If something is placed in the input buffer, namely in_size is not zero, then the first item stored there is a longword used as a bit array of flags. These flags carry instructions as to the domain and the "thoroughness" of this check.
- Only the settings of the least-significant two bits are recognized. Enabling the lowest bit tells the Cache Manager not to ping its list of servers, but simply report their status as contained in the internal server records. Enabling the next-higher bit limits the search to only those File Servers in a given cell. If in_size is greater than sizeof(long), a null-terminated cell name string follows the initial flag array, specifying the cell to check. If this search bit is set but no cell name string follows the longword of flags, then the search is restricted to those servers contacted from the same cell as the caller.
- This call returns at least one longword into the output buffer out, specifying the number of hosts it discovered to be down. If this number is not zero, then the longword IP address for each dead (or unreachable) host follows in the output buffer. At most 16 server addresses will be returned, as this is the maximum number of servers for which the Cache Manager keeps information.
- Among the possible error returns is ENOENT, indicating that the optional cell name string input value is not known to the Cache Manager.
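- Assembling the optional input buffer might look like the following sketch; the flag macro names and the helper PackCkServ() are invented for illustration, with only the layout (flag longword, then optional cell name) taken from the text above:

```c
#include <assert.h>
#include <string.h>

#define CKSERV_DONTPING 0x1  /* report cached status, do not probe */
#define CKSERV_ONECELL  0x2  /* restrict the check to one cell */

/* Pack the optional VIOCCKSERV input: a longword of flags, optionally
 * followed by a null-terminated cell name when CKSERV_ONECELL is set.
 * Returns the byte count to use as in_size, or -1 on overflow. */
static int PackCkServ(char *buf, size_t len, long flags, const char *cell)
{
    size_t need = sizeof(long);
    if (cell != NULL)
        need += strlen(cell) + 1;
    if (need > len)
        return -1;
    memcpy(buf, &flags, sizeof(long));
    if (cell != NULL)
        strcpy(buf + sizeof(long), cell);
    return (int)need;
}
```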
Section 6.4.5: Cell Operations
- The Cache Manager is the only active AFS agent that understands the system's cellular architecture. Thus, it keeps important information concerning the identities of the cells in the community, which cell is in direct administrative control of the machine upon which it is running, status and configuration of its own cell, and what cell-specific operations may be legally executed. The following pioctl()s allow client processes to access and update this cellular information. Supported operations include adding or updating knowledge of a cell, including the cell overseeing the caller's machine (VIOCNEWCELL), fetching the contents of a cell configuration entry (VIOCGETCELL), finding out which cell hosts a given file system object (VIOC_FILE_CELL_NAME), discovering the cell to which the machine belongs (VIOC_GET_WS_CELL), finding out the caller's "primary" cell (VIOC_GET_PRIMARY_CELL), and getting/setting certain other per-cell system parameters (VIOC_GETCELLSTATUS, VIOC_SETCELLSTATUS).
Section 6.4.5.1: VIOCNEWCELL: Set cell service information
- [Opcode 26] Give the Cache Manager all the information it needs to access an AFS cell. Exactly eight longwords are placed at the beginning of the in input buffer. These specify the IP addresses of the machines providing the cell's AFS authentication and volume location services. The first such longword set to zero signals the end of the list of server IP addresses. After these addresses, the input buffer hosts the null-terminated name of the cell to which the above servers belong. The a_pathP parameter is not used, and so should be set to the null pointer.
- Among the possible error returns is EACCES, indicating that the caller does not have the necessary rights to perform the operation. Only root is allowed to set cell server information. If either the IP address array or the server name is unacceptable, EINVAL will be returned.
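- A hypothetical helper for laying out this input buffer (the name PackNewCell() and its argument handling are illustrative only):

```c
#include <assert.h>
#include <string.h>

/* Pack the VIOCNEWCELL input: exactly eight longword server addresses
 * (zero-padded; the first zero terminates the list) followed by the
 * null-terminated cell name.  Returns the byte count for in_size, or
 * -1 if more than 8 servers are given or the buffer is too small. */
static int PackNewCell(char *buf, size_t len,
                       const long *servers, int nservers,
                       const char *cellname)
{
    long addrs[8] = {0};
    size_t need = sizeof(addrs) + strlen(cellname) + 1;
    int i;
    if (nservers > 8 || need > len)
        return -1;
    for (i = 0; i < nservers; i++)
        addrs[i] = servers[i];
    memcpy(buf, addrs, sizeof(addrs));
    strcpy(buf + sizeof(addrs), cellname);
    return (int)need;
}
```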
Section 6.4.5.2: VIOCGETCELL: Get cell configuration entry
- [Opcode 27] Get the i'th cell configuration entry known to the Cache Manager. The index of the desired entry is placed into the in input buffer as a longword, with the first legal value being zero. If there is a cell associated with the given index, the output buffer will be filled with an array of 8 longwords, followed by a null-terminated string.
- The longwords correspond to the list of IP addresses of the machines providing AFS authentication and volume location services. The string reflects the name of the cell for which the given machines are operating. There is no explicit count returned of the number of valid IP addresses in the longword array. Rather, the list is terminated by the first zero value encountered, or when the eighth slot is filled.
- This routine is intended to be called repeatedly, with the index starting at zero and increasing each time. The array of cell information records is kept compactly, without holes. A return value of EDOM indicates that the given index does not map to a valid entry, and thus may be used as the terminating condition for the iteration.
Section 6.4.5.3: VIOC_FILE_CELL_NAME: Get cell hosting a given object
- [Opcode 30] Ask the Cache Manager to return the name of the cell in which the file system object named by a_pathP resides. The input arguments are not used, so in should be set to the null pointer and in_size should be set to zero. The null-terminated cell name string is returned in the out output buffer.
- Among the possible error values, EINVAL indicates that the pathname provided in a_pathP is illegal. If there is no cell information associated with the given object, ESRCH is returned.
Section 6.4.5.4: VIOC_GET_WS_CELL: Get caller's home cell name
- [Opcode 31] Return the name of the cell to which the caller's machine belongs. This cell name is returned as a null-terminated string in the output buffer. The input arguments are not used, so in should be set to the null pointer and in_size should be set to zero.
- Among the possible error returns is ESRCH, stating that the caller's home cell information was not available.
Section 6.4.5.5: VIOC_GET_PRIMARY_CELL: Get the caller's primary cell
- [Opcode 33] Ask the Cache Manager to return the name of the caller's primary cell. Internally, the Cache Manager scans its user records, and the cell information referenced by that record is used to extract the cell's string name. The input arguments are not used, so in should be set to the null pointer and in_size should be set to zero. The a_pathP pathname argument is not used either, and should similarly be set to the null pointer. The null-terminated cell name string is placed into the output buffer pointed to by out if it has sufficient room.
- Among the possible error returns is ESRCH, stating that the caller's primary cell information was not available.
Section 6.4.5.6: VIOC_GETCELLSTATUS: Get status info for a cell entry
- [Opcode 35] Given a cell name, return a single longword of status flags from the Cache Manager's entry for that cell. The null-terminated cell name string is expected to be in the in parameter, with in_size set to its length plus one for the trailing null. The status flags are returned in the out buffer, which must have out_size set to sizeof(long) or larger.
- The Cache Manager defines the following output flag values for this operation:
- 0x1 This entry is considered the caller's primary cell.
- 0x2 The unix setuid() operation is not honored.
- 0x4 An obsolete version of the Volume Location Server's database is being used. While defined, this flag should no longer be set in modern systems.
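- Interpreting the returned longword can be sketched as below; the macro and helper names are invented for illustration, with only the bit values (0x1, 0x2, 0x4) taken from the list above:

```c
#include <assert.h>

#define CELL_PRIMARY  0x1  /* caller's primary cell */
#define CELL_NOSETUID 0x2  /* setuid() not honored for this cell */
#define CELL_OLDVLDB  0x4  /* obsolete VLDB version (historical flag) */

/* Decode the status longword returned by VIOC_GETCELLSTATUS. */
static int CellIsPrimary(long status) { return (status & CELL_PRIMARY) != 0; }
static int CellSetuidOk(long status)  { return (status & CELL_NOSETUID) == 0; }
```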
- Among the possible error returns is ENOENT, informing the caller that the Cache Manager has no knowledge of the given cell name.
Section 6.4.5.7: VIOC_SETCELLSTATUS: Set status info for a cell entry
- [Opcode 36] Given a cell name and an image of the cell status bits that should be set, record the association in the Cache Manager. The input buffer in must be set up as follows. The first entry is the longword containing the cell status bits to be set (see the VIOC_GETCELLSTATUS description above for valid flag definitions). The next entry is another longword, ignored by the Cache Manager. The third and final entry in the input buffer is a null-terminated string containing the name of the cell for which the status flags are to be applied.
- Among the possible error returns is ENOENT, reflecting the Cache Manager's inability to locate its record for the given cell. Only root is allowed to execute this operation, and an EACCES return indicates the caller was not effectively root when the call took place.
Section 6.4.6: Authentication Operations
- The Cache Manager serves as the repository for authentication information for AFS clients. Each client process belongs to a single Process Authentication Group (PAG). Each process in a given PAG shares authentication information with the other members, and thus has the identical rights with respect to AFS Access Control Lists (ACLs) as all other processes in the PAG. As the Cache Manager interacts with File Servers as a client process' agent, it automatically and transparently presents the appropriate authentication information as required in order to gain the access to which the caller is entitled. Each PAG can host exactly one token per cell. These tokens are objects that unequivocally codify the principal's identity, and are encrypted for security. Token operations between a Cache Manager and File Server are also encrypted, as are the interchanges between clients and the Authentication Servers that generate these tokens.
- There are actually two different flavors of tokens, namely clear and secret. The data structure representing clear tokens is described in Section 6.2.2, and the secret token appears as an undifferentiated byte stream.
- This section describes the operations involving these tokens, namely getting and setting the caller's token for a particular cell (VIOCGETTOK, VIOCSETTOK), checking a caller's access on a specified file system object (VIOCACCESS), checking the status of caller's tokens associated with the set of File Server connections maintained on its behalf (VIOCCKCONN), and discarding tokens entirely (VIOCUNLOG, VIOCUNPAG). These abilities are used by such programs as login, klog, unlog, and tokens, which must generate, manipulate, and/or destroy AFS tokens.
Section 6.4.6.1: VIOCSETTOK: Set the caller's token for a cell
- [Opcode 3] Store the caller's secret and clear tokens within the Cache Manager. The input buffer is used to hold the following quantities, laid out end to end. The first item placed in the buffer is a longword, specifying the length in bytes of the secret token, followed by the body of the secret token itself. The next field is another longword, this time describing the length in bytes of the struct ClearToken, followed by the structure. These are all required fields. The caller may optionally include two additional fields, following directly after the required ones. The first optional field is a longword which is set to a non-zero value if the cell in which these tokens were generated is to be marked as the caller's primary cell. The second optional argument is a null-terminated string specifying the cell in which these tokens apply. If these two optional arguments do not appear, the Cache Manager will default to using its home cell and marking the entry as non-primary. The a_pathP pathname parameter is not used, and thus should be set to the null pointer.
- If the caller does not have any tokens registered for the cell, the Cache Manager will store them. If the caller already has tokens for the cell, the new values will overwrite their old values. Because these are stored per PAG, the new tokens will thus determine the access rights of all other processes belonging to the PAG.
- Among the possible error returns is ESRCH, indicating the named cell is not recognized, and EIO, if information on the local cell is not available.
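- The layout described above might be assembled as in this sketch; PackSetTok() is a hypothetical helper that treats both token bodies as opaque bytes:

```c
#include <assert.h>
#include <string.h>

/* Lay out the VIOCSETTOK input buffer: secret-token length and body,
 * clear-token length and body, then (optionally) a primary-cell flag
 * longword and the cell name.  Returns the byte count for in_size,
 * or -1 if the buffer is too small. */
static int PackSetTok(char *buf, size_t len,
                      const void *secret, long secretlen,
                      const void *clear, long clearlen,
                      long primary, const char *cell)
{
    size_t need = sizeof(long) + (size_t)secretlen
                + sizeof(long) + (size_t)clearlen;
    char *p = buf;
    if (cell != NULL)                     /* optional trailing fields */
        need += sizeof(long) + strlen(cell) + 1;
    if (need > len)
        return -1;
    memcpy(p, &secretlen, sizeof(long));  p += sizeof(long);
    memcpy(p, secret, (size_t)secretlen); p += secretlen;
    memcpy(p, &clearlen, sizeof(long));   p += sizeof(long);
    memcpy(p, clear, (size_t)clearlen);   p += clearlen;
    if (cell != NULL) {
        memcpy(p, &primary, sizeof(long)); p += sizeof(long);
        strcpy(p, cell);                   p += strlen(cell) + 1;
    }
    return (int)(p - buf);
}
```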
Section 6.4.6.2: VIOCGETTOK: Get the caller's token for a cell
- [Opcode 8] Get the specified authentication tokens associated with the caller. The a_pathP parameter is not used, so it should be set to the null pointer. Should the input parameter in be set to a null pointer, then this call will place the user's tokens for the machine's home cell in the out output buffer, if such tokens exist. In this case, the following objects are placed in the output buffer. First, a longword specifying the number of bytes in the body of the secret token is delivered, followed immediately by the secret token itself. Next is a longword indicating the length in bytes of the clear token, followed by the clear token. The input parameter may also consist of a single longword, indicating the index of the token desired. Since the Cache Manager is capable of storing multiple tokens per principal, this allows the caller to iteratively extract the full set of tokens stored for the PAG. The first valid index value is zero. The list of tokens is kept compactly, without holes. A return value of EDOM indicates that the given index does not map to a valid token entry, and thus may be used as the terminating condition for the iteration.
- Other than EDOM, another possible error return is ENOTCONN, specifying that the caller does not have any AFS tokens whatsoever.
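The EDOM-terminated iteration might look like the following sketch, where get_token() is a hypothetical stand-in for the actual VIOCGETTOK pioctl() call and the cell names are sample data:

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

#define NTOK 3   /* pretend the caller's PAG holds three tokens */

/* Stand-in for VIOCGETTOK: the index arrives in the input longword,
 * and EDOM is returned once the index runs past the compact,
 * hole-free token list. */
static int get_token(long index, char *cellOut)
{
    static const char *cells[NTOK] = { "cellA", "cellB", "cellC" };
    if (index < 0 || index >= NTOK)
        return EDOM;            /* terminating condition for the loop */
    strcpy(cellOut, cells[index]);
    return 0;
}

/* Iterate from index zero upward until EDOM, as a real caller would
 * when extracting the full set of tokens stored for the PAG. */
static int count_tokens(void)
{
    char buf[64];
    long i = 0;
    while (get_token(i, buf) == 0)
        i++;
    return (int)i;
}
```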
Section 6.4.6.3: VIOCACCESS: Check caller's
access on object
- [Opcode 20] This operation is used to determine whether the caller has specific access rights on a particular file system object. A single longword is placed into the input buffer, in, representing the set of rights in question. The acceptable values for these access rights are listed in Section 6.4.5. The object to check is named by the a pathP parameter. The output parameters are not accessed, so out should be set to the null pointer, and out size set to zero. If the call returns successfully, the caller has at least the set of rights denoted by the bits set in the input buffer. Otherwise, EACCES is returned.
Section 6.4.6.4: VIOCCKCONN: Check status of
caller's tokens/connections
- [Opcode 12] Check whether the suite of File Server connections maintained on behalf of the caller by the Cache Manager has valid authentication tokens. This function always returns successfully, communicating the health of said connections by writing a single longword value to the specified output buffer in out. If zero is returned to the output buffer, then two things are true. First, the caller has tokens for at least one cell. Second, all tokens encountered upon a review of the caller's connections have been properly minted (i.e., have not been generated fraudulently), and, in addition, have not yet expired. If these conditions do not currently hold for the caller, then the output buffer value will be set to EACCES. Neither the a pathP nor input parameters are used by this call.
Section 6.4.6.5: VIOCUNLOG: Discard
authentication information
- [Opcode 9] Discard all authentication information held in trust for the caller. The Cache Manager sweeps through its user records, destroying all of the caller's associated token information. This results in reducing the rights of all processes within the caller's PAG to the level of file system access granted to the special system:anyuser group.
- This operation always returns successfully. None of the parameters are referenced, so they should all be set to null pointers and zeroes as appropriate.
Section 6.4.6.6: VIOCUNPAG: Discard
authentication information
- [Opcode 21] This call is essentially identical to the VIOCUNLOG operation, and is in fact implemented internally by the same code as VIOCUNLOG.
Section 6.4.7: ACL Operations
- This set of opcodes allows manipulation of AFS Access Control Lists (ACLs). Callers are allowed to fetch the ACL on a given directory, or to set the ACL on a directory. In AFS-3, ACLs are only maintained on directories, not on individual files. Thus, a directory ACL determines the allowable accesses on all objects within that directory in conjunction with their normal unix mode (owner) bits. Should the a pathP parameter specify a file instead of a directory, the ACL operation will be performed on the directory in which the given file resides.
- These pioctl() opcodes deal only in external formats for ACLs, namely the actual text stored in an AFS ACL container. This external format is a character string, composed of a descriptive header followed by some number of individual principal-rights pairs. AFS ACLs actually specify two sublists, namely the positive and negative rights lists. The positive list catalogues the set of rights that certain principals (individual users or groups of users) have, while the negative list contains the set of rights specifically denied to the named parties.
- These external ACL representations differ from the internal format generated by the Cache Manager after a parsing pass. The external format may be easily generated from the internal format as follows. The header format is expressed with the following printf() statement:
printf("%d\n%d\n", NumPositiveEntries, NumNegativeEntries);
- The header first specifies the number of entries on the positive rights list, which appear first in the ACL body. The number of entries on the negative list is the second item in the header. The negative entries appear after the last positive entry.
- Each entry in the ACL proper obeys the format imposed by the following printf() statement:
printf("%s\t%d\n", UserOrGroupName, RightsMask);
- Note that the string name for the user or group is stored in an externalized ACL entry. The Protection Server stores the mappings between the numerical identifiers for AFS principals and their character string representations. There are cases where there is no mapping from the numerical identifier to a string name. For example, a user or group may have been deleted sometime after they were added to the ACL and before the Cache Manager externalized the ACL for storage. In this case, the Cache Manager sets UserOrGroupName to the string version of the principal's integer identifier. Thus, should the erz principal be deleted from the Protection Server's database sometime after being placed on the ACL, then the string '1019' will be stored, since it corresponded to erz's former numerical identifier.
- The RightsMask parameter to the above call represents the set of rights the named principal may exercise on the objects covered by the ACL. The following flags may be OR'ed together to construct the desired access rights placed in RightsMask:
#define PRSFS_READ 1
#define PRSFS_WRITE 2
#define PRSFS_INSERT 4
#define PRSFS_LOOKUP 8
#define PRSFS_DELETE 16
#define PRSFS_LOCK 32
#define PRSFS_ADMINISTER 64
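Putting the header and entry formats together, the following sketch builds a small externalized ACL string using the printf() formats quoted above; the principal names and rights chosen are sample data:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define PRSFS_READ        1
#define PRSFS_WRITE       2
#define PRSFS_INSERT      4
#define PRSFS_LOOKUP      8
#define PRSFS_DELETE     16
#define PRSFS_LOCK       32
#define PRSFS_ADMINISTER 64

/* Compose a minimal externalized ACL: a header giving the positive and
 * negative entry counts, followed by one tab-separated principal-rights
 * pair per line.  Two positive entries, no negative entries. */
static void make_acl(char *buf, size_t len)
{
    int n = snprintf(buf, len, "%d\n%d\n", 2, 0);          /* header */
    n += snprintf(buf + n, len - n, "%s\t%d\n",
                  "system:anyuser", PRSFS_READ | PRSFS_LOOKUP);
    snprintf(buf + n, len - n, "%s\t%d\n",
             "pat", PRSFS_READ | PRSFS_WRITE | PRSFS_INSERT |
                    PRSFS_LOOKUP | PRSFS_DELETE | PRSFS_LOCK);
}
```

Here system:anyuser receives the common read/lookup pair (mask 9), while the hypothetical user pat receives all rights except ADMINISTER (mask 63).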
Section 6.4.7.1: VIOCSETAL: Set the ACL on a
directory
- [Opcode 1] Set the contents of the ACL associated with the file system object named by a pathP. Should this pathname indicate a file and not a directory, the Cache Manager will apply this operation to the file's parent directory. The new ACL contents, expressed in their externalized form, are made available in in, with in size set to its length in characters, including the trailing null. There is no output from this call, so out size should be set to zero. Internally, the Cache Manager will call the RXAFS StoreACL() RPC (see Section 5.1.3.3) to store the new ACL on the proper File Server.
- Possible error codes include EINVAL, indicating that one of three things may be true: the named path is not in AFS, there are too many entries in the specified ACL, or a non-existent user or group appears on the ACL.
Section 6.4.7.2: VIOCGETAL: Get the ACL for a
directory
- [Opcode 2] Get the contents of the ACL associated with the file system object named by a pathP. Should this pathname indicate a file and not a directory, the Cache Manager will apply this operation to the file's parent directory. The ACL contents, expressed in their externalized form, are delivered into the out buffer if out size has been set to a value which indicates that there is enough room for the specified ACL. This ACL string will be null-terminated. There is no input to this call, so in size should be set to zero. Internally, the Cache Manager will call the RXAFS FetchACL() RPC (see Section 5.1.3.1) to fetch the ACL from the proper File Server.
- Possible error codes include EINVAL, indicating that the named path is not in AFS.
Section 6.4.8: Cache Operations
- It is possible to inquire about and affect various aspects of the cache maintained locally by the Cache Manager through the group of pioctl()s described below. Specifically, one may force certain file system objects to be removed from the cache (VIOCFLUSH), set the maximum number of blocks usable by the cache (VIOCSETCACHESIZE), and ask for information about the cache's current state (VIOCGETCACHEPARAMS).
Section 6.4.8.1: VIOCFLUSH: Flush an object
from the cache
- [Opcode 6] Flush the file system object specified by a pathP out of the local cache. The other parameters are not referenced, so they should be set to the proper combination of null pointers and zeroes.
- Among the possible error returns is EINVAL, indicating that the value supplied in the a pathP parameter is not acceptable.
Section 6.4.8.2: VIOCSETCACHESIZE: Set
maximum cache size in blocks
- [Opcode 24] Instructs the Cache Manager to set a new maximum size (in 1 Kbyte blocks) for its local cache. The input buffer located at in contains the new maximum block count. If zero is supplied for this value, the Cache Manager will revert its cache limit to its value at startup time. Neither the a pathP nor output buffer parameters is referenced by this operation. The Cache Manager recomputes its other cache parameters based on this new value, including the number of cache files allowed to be dirty at once and the total amount of space filled with dirty chunks. Should the new setting be smaller than the number of blocks currently being used, the Cache Manager will throw things out of the cache until it obeys the new limit.
- The caller is required to be effectively running as root, or this call will fail, returning EACCES. If the Cache Manager is configured to run with a memory cache instead of a disk cache, this operation will also fail, returning EROFS.
Section 6.4.8.3: VIOCGETCACHEPARAMS: Get
current cache parameter values
- [Opcode 40] Fetch the current values being used for the cache parameters. The output buffer is filled with MAXGCSTATS (16) longwords, describing these parameters. Only the first two longwords in this array are currently set. The first contains the value of afs cacheBlocks, or the maximum number of 1 Kbyte blocks which may be used in the cache (see Section 6.4.8.2 for how this value may be set). The second longword contains the value of the Cache Manager's internal afs blocksUsed variable, or the number of these cache blocks currently in use. All other longwords in the array are set to zero. Neither the a pathP nor input buffer arguments are referenced by this call.
- This routine always returns successfully.
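A caller might interpret the returned array as in this sketch; the helper name is illustrative, not part of the interface:

```c
#include <assert.h>
#include <string.h>

#define MAXGCSTATS 16   /* size of the VIOCGETCACHEPARAMS output array */

/* Interpret the output buffer: longword 0 holds afs_cacheBlocks (the
 * maximum number of 1 Kbyte cache blocks), longword 1 holds
 * afs_blocksUsed (blocks currently in use); the rest are zero. */
static long cache_free_blocks(const long stats[MAXGCSTATS])
{
    return stats[0] - stats[1];
}
```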
Section 6.4.9: Miscellaneous Operations
- There are several other AFS-specific operations accessible via the pioctl() interface that don't fit cleanly into the above categories. They are described in this section, and include manipulation of the socket-based Mariner file trace interface (VIOC AFS MARINER HOST), enabling and disabling of the file-based AFSLog output interface for debugging (VIOC VENUSLOG), getting and setting the value of the special @sys pathname component mapping (VIOC AFS SYSNAME), and turning the NFS-AFS translator service on and off (VIOC EXPORTAFS).
Section 6.4.9.1: VIOC AFS MARINER HOST:
Get/set file transfer monitoring output
- [Opcode 32] This operation is used to get or set the IP address of the host destined to receive Mariner output. A detailed description of the Cache Manager Mariner interface may be found in Section 6.7.
- The input buffer located at in is used to pass a single longword containing the IP address of the machine to receive output regarding file transfers between the Cache Manager and any File Server. If the chosen host IP address is 0xffffffff, the Cache Manager is prompted to turn off generation of Mariner output entirely. If the chosen host IP address is zero, then the Cache Manager will not set the Mariner host, but rather return the current Mariner host as a single longword written to the out output buffer. Any other value chosen for the host IP address enables Mariner output (if it was not already enabled) and causes all further traffic to be directed to the given machine.
- This function always returns successfully.
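The three cases above can be summarized in a small classification helper; this is an illustrative sketch, not part of the actual interface:

```c
#include <assert.h>

/* Classify the longword passed to VIOC_AFS_MARINER_HOST:
 * 0xffffffff turns Mariner output off entirely, zero queries the
 * current Mariner host, and any other value selects a new host
 * (enabling Mariner output if it was disabled). */
enum mariner_action { MARINER_DISABLE, MARINER_QUERY, MARINER_SET };

static enum mariner_action mariner_classify(unsigned long addr)
{
    if (addr == 0xffffffffUL)
        return MARINER_DISABLE;
    if (addr == 0)
        return MARINER_QUERY;
    return MARINER_SET;
}
```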
Section 6.4.9.2: VIOC VENUSLOG:
Enable/disable Cache Manager logging
- [Opcode 34] Tell the Cache Manager whether to generate debugging information, and what kind of debugging output to enable. The input buffer located at in is used to transmit a single longword to the Cache Manager, expressing the caller's wishes. Of the four bytes making up the longword, the highest byte indicates the desired value for the internal afsDebug variable, enabling or disabling general trace output. The next highest byte indicates the desired value for the internal netDebug variable, enabling or disabling network-level debugging traces. The third byte is unused, and the low-order byte represents an overall on/off value for the functionality. There is a special value for the low-order byte, 99, which instructs the Cache Manager to return the current debugging setting as a single longword placed into the output buffer pointed to by out. The a pathP parameter is not referenced by this routine.
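The byte layout described above might be assembled as follows; the helper is an illustrative sketch, not part of the AFS code:

```c
#include <assert.h>

/* Pack the VIOC_VENUSLOG control longword: afsDebug in the high byte,
 * netDebug in the next byte, the third byte unused, and the overall
 * on/off value (or the special query value 99) in the low byte. */
static unsigned long venuslog_word(unsigned afsDebug, unsigned netDebug,
                                   unsigned onOff)
{
    return ((unsigned long)(afsDebug & 0xff) << 24)
         | ((unsigned long)(netDebug & 0xff) << 16)
         | (unsigned long)(onOff & 0xff);
}
```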
- Trace output is delivered to the AFSLog file, typically located in the /usr/vice/etc directory. When this form of debugging output is enabled, the existing AFSLog file is truncated, and its file descriptor is stored for future use. When this debugging is disabled, a close() is done on the file, forcing all its data to disk. For additional information on the AFSLog file for collecting Cache Manager traces, please see the description in Section 6.6.2.1.
- This call will only succeed if the caller is effectively running as root. If this is not the case, an error code of EACCES is returned.
Section 6.4.9.3: VIOC AFS SYSNAME: Get/set
the mapping
- [Opcode 38] Get or set the value of the special @sys pathname component understood by the Cache Manager. The input buffer pointed to by in is used to house a longword whose value determines whether the @sys value is being set (1) or whether the current value is being fetched (0). If it is being set, then a null-terminated string is expected to follow in the input buffer, specifying the new value of @sys. Otherwise, if we are asking the Cache Manager for the current setting, a null-terminated string bearing that value will be placed in the out output buffer. The a pathP parameter is not used by this call, and thus should be set to a null pointer.
- There are no special privileges required of the caller to fetch the value of the current @sys mapping. However, a native caller must be running effectively as root in order to successfully alter the mapping. An unauthorized attempt to change the @sys setting will be ignored, and will cause this routine to return EACCES. This requirement is relaxed for VIOC AFS SYSNAME pioctl() calls emanating from foreign file systems such as NFS and accessing AFS files through the NFS-AFS translator. Each such remote caller may set its own notion of what the @sys mapping is without affecting native AFS clients. Since the uid values received in calls from NFS machines are inherently insecure, it is impossible to enforce the fact that the caller is truly root on the NFS machine. Thus, while any principal running on an NFS machine may change that foreign machine's perception of @sys, it does not impact native AFS users in any way.
Section 6.4.9.4: VIOC EXPORTAFS:
Enable/disable NFS/AFS translation
- [Opcode 39] Enable or disable the ability of an AFS-capable machine to export AFS access to NFS clients. Actually, this is a general facility allowing exportation of AFS service to any number of other file systems, but the only support currently in place is for NFS client machines. A single longword is expected in the input buffer in. This input longword is partitioned into individual bytes, organized as follows. The high-order byte communicates the type of foreign client to receive AFS file services. There are currently two legal values for this field, namely 0 for the null foreign file system and 1 for NFS. The next byte determines whether the Cache Manager is being asked to get or set this information. A non-zero value here is interpreted as a command to set the export information according to what's in the input longword, and a zero-valued byte in this position instructs the Cache Manager to place a longword in the output buffer out, which contains the current export settings for the foreign system type specified in the high-order byte. The third input byte is not used, and the lowest-order input buffer byte determines whether export services for the specified system are being enabled or disabled. A non-zero value will turn on the services, and a zero value will shut them down. The a pathP pathname parameter is not used by this call, and the routine generates output only if the export information is being requested instead of being set.
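The input longword's byte partitioning might be assembled as in this illustrative sketch (not part of the AFS code):

```c
#include <assert.h>

/* Pack the VIOC_EXPORTAFS input longword: foreign-system type in the
 * high byte (0 = null file system, 1 = NFS), get/set selector in the
 * next byte (non-zero = set), third byte unused, and the
 * enable/disable flag in the low byte. */
static unsigned long exportafs_word(unsigned type, unsigned doSet,
                                    unsigned enable)
{
    return ((unsigned long)(type & 0xff) << 24)
         | ((unsigned long)(doSet & 0xff) << 16)
         | (unsigned long)(enable & 0xff);
}
```

For example, enabling NFS export service would pass type 1, a non-zero set selector, and a non-zero enable flag; passing a zero set selector instead asks the Cache Manager to report the current settings for that system type.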
- The caller must be effectively running as root in order for this operation to succeed. The call returns EACCES if the caller is not so authorized. If the caller specifies an illegal foreign system type in the high-order byte of the input longword, then ENODEV is returned. Again, NFS is the only foreign file system currently supported.
- Practically speaking, the machine providing NFS-AFS translation services must enable this service with this pioctl() before any NFS client machines may begin accessing AFS files. Conversely, if an administrator turns off this export facility, the export code on the translator machine will immediately stop responding to traffic from its active NFS clients.
Section 6.5: RPC Interface
Section 6.5.1: Introduction
- This section covers the structure and workings of the Cache Manager's RPC interface. Typically, these calls are made by File Server processes. However, some of the calls are designed specifically for debugging programs (e.g., the cmdebug facility) and for collection of statistical and performance information from the Cache Manager. Any client application that makes direct calls on the File Server RPC interface must be prepared to export a subset of the Cache Manager RPC interface, as discussed in Section 5.1.6.
- This section will first examine the Cache Manager's use of locks, whose settings may be observed via one of the RPC interface calls. Next, it will present some definitions and data structures used in the RPC interface, and finally document the individual calls available through this interface.
Section 6.5.2: Locks
- The Cache Manager makes use of locking to insure its internal integrity in the face of its multi-threaded design. A total of 11 locks are maintained for this purpose, one of which is now obsolete and no longer used (see below). These locks are strictly internal, and the Cache Manager itself is the only one able to manipulate them. The current settings for these system locks are externally accessible for debugging purposes via the RXAFSCB GetLock() RPC interface call, as described in Section 6.5.5.4. For each lock, its index in the locking table is given in the following text.
- afs xvcache [Index 0]: This lock controls access to the status cache entries maintained by the Cache Manager. This stat cache keeps stat()-related information for AFS files it has dealt with. The stat information is kept separate from actual data contents of the related file, since this information may change independently (say, as a result of a unix chown() call).
- afs xdcache [Index 1]: This lock moderates access to the Cache Manager's data cache, namely the contents of the file system objects it has cached locally. As stated above, this data cache is separate from the associated stat() information.
- afs xserver [Index 2]: This lock controls access to the File Server machine description table, which keeps tabs on all File Servers contacted in recent history. This lock thus indirectly controls access to the set of per-server RPC connection descriptors the File Server table makes visible.
- afs xvcb [Index 3]: This lock supervises access to the volume callback information kept by the Cache Manager. This table is referenced, for example, when a client decides to remove one or more callbacks on files from a given volume (see the RXAFS GiveUpCallBacks() description on Section 5.1.3.13).
- afs xbrs [Index 4]: This lock serializes the actions of the Cache Manager's background daemons, which perform prefetching and background file storage duties.
- afs xcell [Index 5]: This lock controls the addition, deletion, and update of items on the linked list housing information on cells known to the Cache Manager.
- afs xconn [Index 6]: This lock supervises operations concerning the set of RPC connection structures kept by the system. This lock is used in combination with the afs xserver lock described above. In some internal Cache Manager code paths, the File Server description records are first locked, and then the afs xconn lock is used to access the associated Rx connection records.
- afs xuser [Index 7]: This lock serializes access to the per-user structures maintained by the Cache Manager.
- afs xvolume [Index 8]: This lock is used to control access to the Cache Manager's volume information cache, namely the set of entries currently in memory, a subset of those stably housed in the VolumeItems disk file (see Section 6.6.2.3).
- afs puttofileLock [Index 9]: This lock is obsolete, and while still defined by the system is no longer used. It formerly serialized writes to a debugging output interface buffer, but the internal mechanism has since been updated and improved.
- afs ftf [Index 10]: This lock is used when flushing cache text pages from the machine's virtual memory tables. For each specific machine architecture on which the Cache Manager runs, there is a set of virtual memory operations which must be invoked to perform this operation. The result of such activities is to make sure that the latest contents of new incarnations of binaries are used, instead of outdated copies of previous versions still resident in the virtual memory system.
Section 6.5.3: Definitions and Typedefs
- This section documents some macro definitions and typedefs referenced by the Cache Manager's RPC interface. Specifically, these definitions and typedefs are used in the RXAFSCB GetXStats() and RXAFSCB XStatsVersion calls as described in Sections 6.5.5.6 and 6.5.5.7.
const AFSCB_XSTAT_VERSION = 1;
const AFSCB_MAX_XSTAT_LONGS = 2048;
typedef long AFSCB_CollData<AFSCB_MAX_XSTAT_LONGS>;
const AFSCB_XSTATSCOLL_CALL_INFO = 0;
const AFSCB_XSTATSCOLL_PERF_INFO = 1;
Section 6.5.4: Structures
- This section documents some structures used in the Cache Manager RPC interface. As with the constants and typedefs in the previous section, these items are used in the RXAFSCB GetXStats() and RXAFSCB XStatsVersion calls as described in Sections 6.5.5.6 and 6.5.5.7.
Section 6.5.4.1: struct afs MeanStats
- This structure may be used to collect a running average figure. It is included in some of the statistics structures described below.
Fields
- long average - The computed average.
- long elements - The number of elements sampled for the above average.
Section 6.5.4.2: struct afs CMCallStats
- This structure maintains profiling information, communicating the number of times internal Cache Manager functions are invoked. Each field name has a "C " prefix, followed by the name of the function being watched. As this structure has entries for over 500 functions, it will not be described further here. Those readers who wish to see the full layout of this structure are referred to Appendix A.
- The AFSCB XSTATSCOLL CALL INFO data collection includes the information in this structure.
Section 6.5.4.3: struct afs CMMeanStats
- This is the other part of the information (along with the struct afs CMCallStats construct described above) returned by the AFSCB XSTATSCOLL CALL INFO data collection defined by the Cache Manager (see Section 6.5.3). It is accessible via the RXAFSCB GetXStats() interface routine, as defined in Section 6.5.5.7.
- This structure represents the beginning of work to compute average values for some of the extended statistics collected by the Cache Manager.
Fields
- struct afs MeanStats something - Intended to collect averages for some of the Cache Manager extended statistics; not yet implemented.
Section 6.5.4.4: struct afs CMStats
- This structure defines the information returned by the AFSCB XSTATSCOLL CALL INFO data collection defined by the Cache Manager (see Section 6.5.3). It is accessible via the RXAFSCB GetXStats() interface routine, as defined in Section 6.5.5.7.
Fields
- struct afs CMCallStats callInfo - Contains the counts on the number of times each internal Cache Manager function has been called.
- struct afs MeanStats something - Intended to collect averages for some of the Cache Manager extended statistics; not yet implemented.
Section 6.5.4.5: struct afs CMPerfStats
- This is the information returned by the AFSCB XSTATSCOLL PERF INFO data collection defined by the Cache Manager (see Section 6.5.3). It is accessible via the RXAFSCB GetXStats() interface routine, as defined in Section 6.5.5.7.
Fields
- long numPerfCalls - Number of performance calls received.
- long epoch - Cache Manager epoch time.
- long numCellsContacted - Number of cells contacted.
- long dlocalAccesses - Number of data accesses to files within the local cell.
- long vlocalAccesses - Number of stat accesses to files within the local cell.
- long dremoteAccesses - Number of data accesses to files outside of the local cell.
- long vremoteAccesses - Number of stat accesses to files outside of the local cell.
- long cacheNumEntries - Number of cache entries.
- long cacheBlocksTotal - Number of (1K) blocks configured for the AFS cache.
- long cacheBlocksInUse - Number of cache blocks actively in use.
- long cacheBlocksOrig - Number of cache blocks configured at bootup.
- long cacheMaxDirtyChunks - Maximum number of dirty cache chunks tolerated.
- long cacheCurrDirtyChunks - Current count of dirty cache chunks.
- long dcacheHits - Number of data file requests satisfied by the local cache.
- long vcacheHits - Number of stat entry requests satisfied by the local cache.
- long dcacheMisses - Number of data file requests not satisfied by the local cache.
- long vcacheMisses - Number of stat entry requests not satisfied by the local cache.
- long cacheFlushes - Number of files flushed from the cache.
- long cacheFilesReused - Number of cache files reused.
- long numServerRecords - Number of records used for storing information concerning File Servers.
- long ProtServerAddr - IP address of the Protection Server used (not implemented).
- long spare[32] - A set of longword spares reserved for future use.
Section 6.5.5: Function Calls
- This section discusses the Cache Manager interface calls. No special permissions are required of the caller for any of these operations. A summary of the calls making up the interface appears below:
- RXAFSCB Probe() "Are-you-alive" call.
- RXAFSCB CallBack() Report callbacks dropped by a File Server.
- RXAFSCB InitCallBackState() Purge callback state from a File Server.
- RXAFSCB GetLock() Get contents of Cache Manager lock table.
- RXAFSCB GetCE() Get cache file description.
- RXAFSCB XStatsVersion() Get version of extended statistics package.
- RXAFSCB GetXStats() Get contents of extended statistics data collection.
Section 6.5.5.1: RXAFSCB Probe - Acknowledge
that the underlying callback service is still operational
int RXAFSCB Probe(IN struct rx call *a rxCallP)
- Description
- [Opcode 206] This call simply implements an "are-you-alive" operation, used to determine if the given Cache Manager is still running. Any File Server will probe each of the Cache Managers with which it has interacted on a regular basis, keeping track of their health. This information serves an important purpose for a File Server. In particular, it is used to trigger purging of deceased Cache Managers from the File Server's callback records, and also to instruct a new or "resurrected" Cache Manager to purge its own callback state for the invoking File Server.
- Rx call information for the related Cache Manager is contained in a rxCallP.
- Error Codes
- ---No error codes are generated.
Section 6.5.5.2: RXAFSCB CallBack - Report
callbacks dropped by a File Server
int RXAFSCB CallBack(IN struct rx call *a rxCallP,
IN AFSCBFids *a fidArrayP,
IN AFSCBs *a callBackArrayP)
- Description
- [Opcode 204] Provide information on dropped callbacks to the Cache Manager for the calling File Server. The number of fids involved appears in a fidArrayP->AFSCBFids len, with the fids themselves located at a fidArrayP->AFSCBFids val. Similarly, the number of associated callbacks is placed in a callBackArrayP->AFSCBs len, with the callbacks themselves located at a callBackArrayP->AFSCBs val.
- Rx call information for the related Cache Manager is contained in a rxCallP.
- Error Codes
- ---No error codes are generated.
Section 6.5.5.3: RXAFSCB InitCallBackState -
Purge callback state from a File Server
int RXAFSCB InitCallBackState(IN struct rx call *a rxCallP)
- Description
- [Opcode 205] This routine instructs the Cache Manager to purge its callback state for all files and directories that live on the calling host. This function is typically called by a File Server when it gets a request from a Cache Manager that does not appear in its internal records. This handles situations where Cache Managers survive a File Server, or get separated from it via a temporary network partition. This also happens upon bootup, or whenever the File Server must throw away its record of a Cache Manager because its tables have been filled.
- Rx call information for the related Cache Manager is contained in a rxCallP.
- Error Codes
- ---No error codes are generated.
Section 6.5.5.4: RXAFSCB GetLock - Get
contents of Cache Manager lock table
int RXAFSCB GetLock(IN struct rx call *a rxCall,
IN long a index,
OUT AFSDBLock *a lockP)
- Description
- [Opcode 207] Fetch the contents of entry a index in the Cache Manager lock table. There are 11 locks in the table, as described in Section 6.5.2. The contents of the desired lock, including a string name representing the lock, are returned in a lockP.
- This call is not used by File Servers, but rather by debugging tools such as cmdebug.
- Rx call information for the related Cache Manager is contained in a rxCallP.
- Error Codes
- EINVAL - The index value supplied in a index is out of range; it must be between 0 and 10.
Section 6.5.5.5: RXAFSCB GetCE - Get cache
file description
int RXAFSCB GetCE(IN struct rx call *a rxCall,
IN long a index,
OUT AFSDBCacheEntry *a ceP)
- Description
- [Opcode 208] Fetch the description for entry a index in the Cache Manager file cache, storing it into the buffer to which a ceP points. The structure returned into this pointer variable is described in Section 4.3.2.
- This call is not used by File Servers, but rather by debugging tools such as cmdebug.
- Rx call information for the related Cache Manager is contained in a rxCallP.
- Error Codes
- EINVAL - The index value supplied in a index is out of range.
Section 6.5.5.6: RXAFSCB XStatsVersion - Get
version of extended statistics package
int RXAFSCB XStatsVersion(IN struct rx call *a rxCall,
OUT long *a versionNumberP)
- Description
- [Opcode 209] This call asks the Cache Manager for the current version number of the extended statistics structures it exports (see RXAFSCB GetXStats(), Section 6.5.5.7). The version number is placed in a versionNumberP.
- Rx call information for the related Cache Manager is contained in a rxCallP.
- Error Codes
- ---No error codes are generated.
Section 6.5.5.7: RXAFSCB GetXStats - Get
contents of extended statistics data collection
int RXAFSCB GetXStats(IN struct rx call *a rxCall,
IN long a clientVersionNumber,
IN long a collectionNumber,
OUT long *a srvVersionNumberP,
OUT long *a timeP,
OUT AFSCB CollData *a dataP)
- Description
- [Opcode 210] This function fetches the contents of the specified Cache Manager extended statistics structure. The caller provides the version number of the data it expects to receive in a_clientVersionNumber. Also provided in a_collectionNumber is the numerical identifier for the desired data collection. There are currently two of these data collections defined: AFSCB_XSTATSCOLL_CALL_INFO, which is the list of tallies of the number of invocations of internal Cache Manager procedure calls, and AFSCB_XSTATSCOLL_PERF_INFO, which is a list of performance-related numbers. The precise contents of these collections are described in Section 6.5.4. The current version number of the Cache Manager collections is returned in a_srvVersionNumberP, and is always set upon return, even if the caller has asked for a different version. If the correct version number has been specified, and a supported collection number given, then the collection data is returned in a_dataP. The time of collection is also returned, being placed in a_timeP.
- Rx call information for the related Cache Manager is contained in a_rxCallP.
- Error Codes
- The collection number supplied in a_collectionNumber is out of range.
Section 6.6: Files
- The Cache Manager gets some of its start-up configuration information from files located on the client machine's hard disk. Each client is required to supply a /usr/vice/etc directory in which this configuration data is kept. Section 6.6.1 describes the format and purpose of the three files contributing this setup information: ThisCell, CellServDB, and cacheinfo.
Section 6.6.1: Configuration Files
Section 6.6.1.1: ThisCell
- The Cache Manager, along with various applications, needs to be able to determine the cell to which its client machine belongs. This information is provided by the ThisCell file. It contains a single line stating the machine's fully-qualified cell name.
- As with the CellServDB configuration file, the Cache Manager reads the contents of ThisCell exactly once, at start-up time. Thus, an incarnation of the Cache Manager will maintain precisely one notion of its home cell for its entire lifetime, and any changes to the text of the ThisCell file will be invisible to the running Cache Manager. However, these changes will affect such application programs as klog, which allows a user to generate new authentication tickets. klog reads ThisCell every time it is invoked, and then interacts with the set of Authentication Servers running in the given home cell, unless the caller specifies the desired cell on the command line.
- The ThisCell file is not expected to be changed on a regular basis. Client machines are not imagined to be frequently traded between different administrative organizations. The Unix mode bits are set to specify that while everyone is allowed to read the file, only root is allowed to modify it.
Section 6.6.1.2: CellServDB
- To conduct business with a given AFS cell, a Cache Manager must be informed of the cell's name and the set of machines running AFS database servers within that cell. Such servers include the Volume Location Server, Authentication Server, and Protection Server. This particular cell information is obtained upon startup by reading the CellServDB file. Thus, when the Cache Manager initialization is complete, it will be able to communicate with the cells covered by CellServDB.
- The following is an excerpt from a valid CellServDB file, demonstrating the format used.
...
>transarc.com #Transarc Corporation
192.55.207.7 #henson.transarc.com
192.55.207.13 #bigbird.transarc.com
192.55.207.22 #ernie.transarc.com
>andrew.cmu.edu #Carnegie Mellon University
128.2.10.2 #vice2.fs.andrew.cmu.edu
128.2.10.7 #vice7.fs.andrew.cmu.edu
128.2.10.10 #vice10.fs.andrew.cmu.edu
...
- There are four rules describing the legal CellServDB file format:
- 1. Each cell has a separate entry. The entries may appear in any order. It may be convenient, however, to have the workstation's local cell be the first to appear.
- 2. No blank lines should appear in the file, even at the end of the last entry.
- 3. The first line of each cell's entry begins with the '>' character, and specifies the cell's human-readable, Internet Domain-style name. Optionally, some white space and a comment (preceded by a '#') may follow, briefly describing the specified cell.
- 4. Each subsequent line in a cell's entry names one of the cell's database server machines. The following must appear on the line, in the order given:
- The Internet address of the server, in the standard 4-component dot notation.
- Some amount of whitespace.
- A '#', followed by the machine's complete Internet host name. In this instance, the '#' sign and the text beyond it specifying the machine name are NOT treated as a comment. This is required information.
- The Cache Manager will use the given host name to determine its current address via an Internet Domain lookup. If and only if this lookup fails does the Cache Manager fall back to using the dotted Internet address on the first part of the line. This dotted address thus appears simply as a hint in case of Domain database downtime.
- The CellServDB file is only parsed once, when the Cache Manager first starts. It is possible, however, to amend existing cell information records or add completely new ones at any time after Cache Manager initialization completes. This is accomplished via the VIOCNEWCELL pioctl() (see Section 6.4.5.1).
Section 6.6.1.3: cacheinfo
- This one-line file contains three fields separated by colons:
- AFS Root Directory: This is the directory where the Cache Manager mounts the AFS root volume. Typically, this is specified to be /afs.
- Cache Directory: This field names the directory where the Cache Manager is to create its local cache files. This is typically set to /usr/vice/cache.
- Cache Blocks: The final field states the upper limit on the number of 1,024-byte blocks that the Cache Manager is allowed to use in the partition hosting the named cache directory.
- Thus, the following cacheinfo file would instruct the Cache Manager to mount the AFS filespace at /afs, and inform it that it may expect to be able to use up to 25,000 blocks for the files in its cache directory, /usr/vice/cache.
/afs:/usr/vice/cache:25000
Section 6.6.2: Cache Information Files
Section 6.6.2.1: AFSLog
- This is the AFS log file used to hold Cache Manager debugging output. The file is set up when the Cache Manager first starts. If it already exists, it is truncated; if it doesn't, it is created. Output to this file is enabled and disabled via the VIOC_VENUSLOG pioctl() (see Section 6.4.9.2). Normal text messages are written to this file by the Cache Manager when output is enabled. Each time logging to this file is enabled, the AFSLog file is truncated. Only root can read and write this file.
Section 6.6.2.2: CacheItems
- The Cache Manager only keeps a subset of its data cache entry descriptors in memory at once. The number of these in-memory descriptors is determined by afsd. All of the data cache entry descriptors are kept on disk, in the CacheItems file. The file begins with a header region, taking up four longwords:
struct fheader {
long magic;        /* AFS_FHMAGIC (0x7635fab8) */
long firstCSize;   /* First chunk size */
long otherCSize;   /* Next chunk sizes */
long spare;
};
- The header is followed by one entry for each cache file. Each is:
struct fcache {
short hvNextp;          /* hash/list links used to rebuild the */
short hcNextp;          /*   in-memory cache structures at startup */
short chunkNextp;
struct VenusFid fid;    /* identity of the cached file */
long modTime;           /* last time this entry was modified */
long versionNo;         /* data version number for the cached data */
long chunk;             /* relative chunk number within the file */
long inode;             /* inode of the local file holding this chunk */
long chunkBytes;        /* number of valid bytes in the chunk */
char states;            /* status flags */
};
Section 6.6.2.3: VolumeItems
- The Cache Manager only keeps at most MAXVOLS (50) in-memory volume descriptions. However, it records all volume information it has obtained in the VolumeItems file in the chosen AFS cache directory. This file is truncated when the Cache Manager starts. Each volume record placed into this file has the following struct fvolume layout:
struct fvolume {
long cell;              /* cell in which the volume resides */
long volume;            /* volume ID */
long next;              /* hash chain link */
struct VenusFid dotdot; /* fid to use for ".." at the volume root */
struct VenusFid mtpoint;/* fid of this volume's mount point */
};
Section 6.7: Mariner Interface
- The Cache Manager Mariner interface allows interested parties to be advised in real time as to which files and/or directories are being actively transferred between the client machine and one or more File Servers. If enabled, this service delivers messages of two different types, as exemplified below:
Fetching myDataDirectory
Fetching myDataFile.c
Storing myDataObj.o
- In the first message, the myDataDirectory directory is shown to have just been fetched from a File Server. Similarly, the second message indicates that the C program myDataFile.c has just been fetched from its File Server of residence. Finally, the third message reveals that the myDataObj.o object file has just been written out over the network to its respective server.
- In actuality, the text of each message carries a string prefix indicating whether a fetch or store operation was performed. So, the full contents of the above messages are as follows:
fetch$Fetching myDataDirectory
fetch$Fetching myDataFile.c
store$Storing myDataObj.o
- The Mariner service may be enabled or disabled for a particular machine by using the VIOC_AFS_MARINER_HOST pioctl() (see Section 6.4.9.1). This operation allows any host to be specified as the recipient of these messages. A potential recipient must have its host declared the target of such messages, then listen on a socket bound to port 2106.
- Internally, the Cache Manager maintains a cache of NMAR (10) vnode structure pointers and the string name (up to 19 characters) of the associated file or directory. This cache is implemented as an array serving as a circular buffer. Each time a file is involved in a create or lookup operation on a File Server, the current slot in this circular buffer is filled with the relevant vnode and string name information, and the current position is advanced. If Mariner output is enabled, then an actual network fetch or store operation will trigger messages of the kind shown above. Since a fetch or store operation normally occurs shortly after the create or lookup, the mapping of vnode to name is likely to still be in the Mariner cache when it comes time to generate the appropriate message. However, since an unbounded number of other lookups or creates could have been performed in the interim, there is no guarantee that the mapping entry will not have been overrun. In these instances, the Mariner message will be a bit vaguer. Going back to our original example,
Fetching myDataDirectory
Fetching a file
Storing myDataObj.o
- In this case, the cached association between the vnode containing myDataFile.c and its string name was thrown out of the Mariner cache before the network fetch operation could be performed. Unable to find the mapping, the Cache Manager used the generic phrase "a file" to identify the object involved.
- Mariner messages only get generated when RPC traffic for fetching or storing a file system object occurs between the Cache Manager and a File Server. Thus, file accesses that are handled by the Cache Manager's on-board data cache do not trigger such announcements.
Section 6.1: Overview
- This chapter provides a sample program showing the use of Rx. Specifically, the rxdemo application, with all its support files, is documented and examined. The goal is to provide the reader with a fully-developed and operational program illustrating the use of both regular Rx remote procedure calls and streamed RPCs. The full text of the rxdemo application is reproduced in the sections below, along with additional commentary.
- Readers wishing to directly experiment with this example Rx application are encouraged to examine the on-line version of rxdemo. Since it is a program of general interest, it has been installed in the usr/contrib tree in the grand.central.org cell. This area contains user-contributed software for the entire AFS community. At the top of this tree is the /afs/grand.central.org/darpa/usr/contrib directory. Both the server-side and client-side rxdemo binaries (rxdemo_server and rxdemo_client, respectively) may be found in the bin subdirectory. The actual sources reside in the .site/grand.central.org/rxdemo/src subdirectory.
- The rxdemo code is composed of two classes of files, namely those written by a human programmer and those generated from the human-written code by the Rxgen tool. Included in the first group of files are:
- rxdemo.xg: This is the RPC interface definition file, providing high-level definitions of the supported calls.
- rxdemo_client.c: This is the rxdemo client program, calling upon the associated server to perform operations defined by rxdemo.xg.
- rxdemo_server.c: This is the rxdemo server program, implementing the operations promised in rxdemo.xg.
- Makefile: This is the file that directs the compilation and installation of the rxdemo code.
- The class of automatically-generated files includes the following items:
- rxdemo.h: This header file contains the set of constant definitions present in rxdemo.xg, along with information on the RPC opcodes defined for this Rx service.
- rxdemo.cs.c: This client-side stub file performs all the marshalling and unmarshalling of the arguments for the RPC routines defined in rxdemo.xg.
- rxdemo.ss.c: This stub file similarly defines all the marshalling and unmarshalling of arguments for the server side of the RPCs, invokes the routines defined within rxdemo server.c to implement the calls, and also provides the dispatcher function.
- rxdemo.xdr.c: This module defines the routines required to convert complex user-defined data structures appearing as arguments to the Rx RPC calls exported by rxdemo.xg into network byte order, so that correct communication is guaranteed between clients and server with different memory organizations.
- The chapter concludes with a section containing sample output from running the rxdemo server and client programs.
Section 6.2: Definitions
- The rxdemo application is based on the four human-authored files described in this section. They provide the basis for the construction of the full set of modules needed to implement the specified Rx service.
Section 6.2.1: Interface File: rxdemo.xg
- This file serves as the RPC interface definition file for this application. It defines various constants, including the Rx service port to use and the index of the null security object (no encryption is used by rxdemo). It defines the RXDEMO_MAX and RXDEMO_MIN constants, which will be used by the server as the upper and lower bounds on the number of Rx listener threads to run. It also defines the set of error codes exported by this facility. Finally, it provides the RPC function declarations, namely Add() and Getfile(). Note that when building the actual function definitions, Rxgen will prepend the value of the package line in this file, namely "RXDEMO_", to the function declarations. Thus, the generated functions become RXDEMO_Add() and RXDEMO_Getfile(), respectively. Note the use of the split keyword in the RXDEMO_Getfile() declaration, which specifies that this is a streamed call, and actually generates two client-side stub routines (see Section 6.3.1).
package RXDEMO_
%#include <rx/rx.h>
%#include <rx/rx_null.h>
%#define RXDEMO_SERVER_PORT 8000
%#define RXDEMO_SERVICE_PORT 0
%#define RXDEMO_SERVICE_ID 4
%#define RXDEMO_NULL_SECOBJ_IDX 0
%#define RXDEMO_MAX 3
%#define RXDEMO_MIN 2
%#define RXDEMO_NULL 0
%#define RXDEMO_NAME_MAX_CHARS 64
%#define RXDEMO_BUFF_BYTES 512
%#define RXDEMO_CODE_SUCCESS 0
%#define RXDEMO_CODE_CANT_OPEN 1
%#define RXDEMO_CODE_CANT_STAT 2
%#define RXDEMO_CODE_CANT_READ 3
%#define RXDEMO_CODE_WRITE_ERROR 4
Add(IN int a, int b, OUT int *result) = 1;
Getfile(IN string a_nameToRead<RXDEMO_NAME_MAX_CHARS>, OUT int *a_result) split = 2;
Section 6.2.2: Client Program: rxdemo_client.c
- The rxdemo client program, rxdemo_client, calls upon the associated server to perform operations defined by rxdemo.xg. After its header, it defines a private GetIpAddress() utility routine which, given a character-string host name, returns the host's IP address.
#include <sys/types.h>
#include <netdb.h>
#include <stdio.h>
#include "rxdemo.h"
static char pn[] = "rxdemo";
static u_long GetIpAddress(a_hostName) char *a_hostName;
{
static char rn[] = "GetIPAddress";
struct hostent *hostEntP;
u_long hostIPAddr;
hostEntP = gethostbyname(a_hostName);
if (hostEntP == (struct hostent *)0) {
printf("[%s:%s] Host '%s' not found\n",
pn, rn, a_hostName);
exit(1);
}
if (hostEntP->h_length != sizeof(u_long)) {
printf("[%s:%s] Wrong host address length (%d bytes instead of %d)",
pn, rn, hostEntP->h_length, sizeof(u_long));
exit(1);
}
bcopy(hostEntP->h_addr, (char *)&hostIPAddr, sizeof(hostIPAddr));
return(hostIPAddr);
}
- The main program section of the client code, after handling its command line arguments, starts off by initializing the Rx facility.
main(argc, argv)
int argc;
char **argv;
{
struct rx_connection *rxConnP;
struct rx_call *rxCallP;
u_long hostIPAddr;
int demoUDPPort;
struct rx_securityClass *nullSecObjP;
int operand1, operand2, sum;
int code;
char fileName[64];
long fileDataBytes;
char buff[RXDEMO_BUFF_BYTES+1];
int currBytesToRead;
int maxBytesToRead;
int bytesReallyRead;
int getResults;
printf("\n%s: Example Rx client process\n\n", pn);
if ((argc < 2) || (argc > 3)) {
printf("Usage: rxdemo <HostName> [PortToUse]");
exit(1);
}
hostIPAddr = GetIpAddress(argv[1]);
if (argc > 2)
demoUDPPort = atoi(argv[2]);
else
demoUDPPort = RXDEMO_SERVER_PORT;
code = rx_Init(htons(demoUDPPort));
if (code) {
printf("** Error calling rx_Init(); code is %d\n", code);
exit(1);
}
nullSecObjP = rxnull_NewClientSecurityObject();
if (nullSecObjP == (struct rx_securityClass *)0) {
printf("%s: Can't create a null client-side security object!\n", pn);
exit(1);
}
printf("Connecting to Rx server on '%s', IP address 0x%x, UDP port %d\n",
argv[1], hostIPAddr, demoUDPPort);
rxConnP = rx_NewConnection(hostIPAddr, RXDEMO_SERVER_PORT,
RXDEMO_SERVICE_ID, nullSecObjP, RXDEMO_NULL_SECOBJ_IDX);
if (rxConnP == (struct rx_connection *)0) {
printf("rxdemo: Can't create connection to server!\n");
exit(1);
} else
printf(" ---> Connected.\n");
- The rx_Init() invocation initializes the Rx library and defines the desired service UDP port (in network byte order). The rxnull_NewClientSecurityObject() call creates a client-side Rx security object that does not perform any authentication on Rx calls. Once a client authentication object is in hand, the program calls rx_NewConnection(), specifying the host, UDP port, Rx service ID, and security information needed to establish contact with the rxdemo server entity that will be providing the service.
- With the Rx connection in place, the program may perform RPCs. The first one to be invoked is RXDEMO_Add():
operand1 = 1;
operand2 = 2;
printf("Asking server to add %d and %d: ", operand1, operand2);
code = RXDEMO_Add(rxConnP, operand1, operand2, &sum);
if (code) {
printf("** Error in the RXDEMO_Add RPC: code is %d\n", code);
exit(1);
}
printf("Reported sum is %d\n", sum);
- The first argument to RXDEMO_Add() is a pointer to the Rx connection established above. The client-side body of the RXDEMO_Add() function was generated from the rxdemo.xg interface file, and resides in the rxdemo.cs.c file (see Section 6.3.1). It gives the appearance of being a normal C procedure call.
- The second RPC invocation involves the more complex, streamed RXDEMO_Getfile() function. More of the internal Rx workings are exposed in this type of call. The first additional detail to consider is that we must manually create a new Rx call on the connection.
printf("Name of file to read from server: ");
scanf("%s", fileName);
maxBytesToRead = RXDEMO_BUFF_BYTES;
printf("Setting up an Rx call for RXDEMO_Getfile...");
rxCallP = rx_NewCall(rxConnP);
if (rxCallP == (struct rx_call *)0) {
printf("** Can't create call\n");
exit(1);
}
printf("done\n");
- Once the Rx call structure has been created, we may begin executing the call itself. Having been declared to be split in the interface file, Rxgen creates two function bodies for RXDEMO_Getfile() and places them in rxdemo.cs.c. The first, StartRXDEMO_Getfile(), is responsible for marshalling the outgoing arguments and issuing the RPC. The second, EndRXDEMO_Getfile(), takes care of unmarshalling the non-streamed OUT function parameters. The following code fragment illustrates how the RPC is started, using the StartRXDEMO_Getfile() routine to pass the call parameters to the server.
code = StartRXDEMO_Getfile(rxCallP, fileName);
if (code) {
printf("** Error calling StartRXDEMO_Getfile(); code is %d\n", code);
exit(1);
}
- Once the call parameters have been shipped, the server will commence delivering the "stream" data bytes back to the client on the given Rx call structure. The first longword to come back on the stream specifies the number of bytes to follow. It appears in network byte order, so the client must convert it to host byte order before referring to it.
bytesReallyRead = rx_Read(rxCallP, &fileDataBytes, sizeof(long));
if (bytesReallyRead != sizeof(long)) {
printf("** Only %d bytes read for file length; should have been %d\n",
bytesReallyRead, sizeof(long));
exit(1);
}
fileDataBytes = ntohl(fileDataBytes);
- Once the client knows how many bytes will be sent, it runs a loop in which it reads a buffer at a time from the Rx call stream, using rx_Read() to accomplish this. In this application, all that is done with each newly-acquired buffer of information is printing it out.
printf("[file contents (%d bytes) fetched over the Rx call appear below]\n\n",
fileDataBytes);
while (fileDataBytes > 0)
{
currBytesToRead = (fileDataBytes > maxBytesToRead ? maxBytesToRead : fileDataBytes);
bytesReallyRead = rx_Read(rxCallP, buff, currBytesToRead);
if (bytesReallyRead != currBytesToRead)
{
printf("\nExpecting %d bytes on this read, got %d instead\n",
currBytesToRead, bytesReallyRead);
exit(1);
}
buff[currBytesToRead] = 0;
printf("%s", buff);
fileDataBytes -= currBytesToRead;
}
- After this loop terminates, the Rx stream has been drained of all data. The Rx call is concluded by invoking the second of the two automatically-generated functions, EndRXDEMO_Getfile(), which retrieves the call's OUT parameter from the server.
printf("\n\n[End of file data]\n");
code = EndRXDEMO_Getfile(rxCallP, &getResults);
if (code)
{
printf("** Error getting file transfer results; code is %d\n", code);
exit(1);
}
- With both normal and streamed Rx calls accomplished, the client demo code concludes by terminating the Rx call it set up earlier. With that done, the client exits.
code = rx_EndCall(rxCallP, code);
if (code)
printf("Error in calling rx_EndCall(); code is %d\n", code);
printf("\n\nrxdemo complete.\n");
Section 6.2.3: Server Program: rxdemo_server.c
- The rxdemo server program, rxdemo_server, implements the operations promised in the rxdemo.xg interface file.
- After the initial header, the external function RXDEMO_ExecuteRequest() is declared. The RXDEMO_ExecuteRequest() function is generated automatically by Rxgen from the interface file and deposited in rxdemo.ss.c. The main program listed below will associate this RXDEMO_ExecuteRequest() routine with the Rx service to be instantiated.
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/file.h>
#include <netdb.h>
#include <stdio.h>
#include "rxdemo.h"
#define N_SECURITY_OBJECTS 1
extern RXDEMO_ExecuteRequest();
- After choosing either the default or user-specified UDP port on which the Rx service will be established, rx_Init() is called to set up the library.
main(argc, argv)
int argc;
char **argv;
{
static char pn[] = "rxdemo_server";
struct rx_securityClass *(securityObjects[N_SECURITY_OBJECTS]);
struct rx_service *rxServiceP;
struct rx_call *rxCallP;
int demoUDPPort;
int fd;
int code;
printf("\n%s: Example Rx server process\n\n", pn);
if (argc > 2) {
printf("Usage: rxdemo [PortToUse]");
exit(1);
}
if (argc > 1)
demoUDPPort = atoi(argv[1]);
else
demoUDPPort = RXDEMO_SERVER_PORT;
printf("Listening on UDP port %d\n", demoUDPPort);
code = rx_Init(demoUDPPort);
if (code) {
printf("** Error calling rx_Init(); code is %d\n", code);
exit(1);
}
- A security object specific to the server side of an Rx conversation is created in the next code fragment. As with the client side of the code, a "null" server security object, namely one that does not perform any authentication at all, is constructed with the rxnull_NewServerSecurityObject() function.
securityObjects[RXDEMO_NULL_SECOBJ_IDX] =
rxnull_NewServerSecurityObject();
if (securityObjects[RXDEMO_NULL_SECOBJ_IDX] == (struct rx_securityClass *) 0) {
printf("** Can't create server-side security object\n");
exit(1);
}
- The rxdemo server program is now in a position to create the desired Rx service, primed to recognize exactly those interface calls defined in rxdemo.xg. This is accomplished by calling the rx_NewService() library routine, passing it the security object created above and the generated Rx dispatcher routine.
rxServiceP = rx_NewService( 0,
RXDEMO_SERVICE_ID,
"rxdemo",
securityObjects,
1,
RXDEMO_ExecuteRequest
);
if (rxServiceP == (struct rx_service *) 0) {
printf("** Can't create Rx service\n");
exit(1);
}
- The final step in this main routine is to activate servicing of calls to the exported Rx interface. Specifically, the proper number of threads are created to handle incoming interface calls. Since we are passing a non-zero argument to the rx_StartServer() call, the main program will itself begin executing the server thread loop, never returning from the rx_StartServer() call. The print statement afterwards should never be executed, and its presence represents some level of paranoia, useful for debugging malfunctioning thread packages.
rx_StartServer(1);
printf("** rx_StartServer() returned!!\n");
exit(1);
}
- Following the main procedure are the functions called by the automatically-generated routines in the rxdemo.ss.c module to implement the specific routines defined in the Rx interface.
- The first to be defined is the RXDEMO_Add() function. The arguments for this routine are exactly as they appear in the interface definition, with the exception of the very first. The a_rxCallP parameter is a pointer to the Rx structure describing the call on which this function was activated. All user-supplied routines implementing an interface function are required to have a pointer to this structure as their first parameter. Other than printing out the fact that it has been called and which operands it received, all that RXDEMO_Add() does is compute the sum and place it in the output parameter.
- Since RXDEMO_Add() is a non-streamed function, with all data travelling through the set of parameters, this is all that needs to be done. To mark a successful completion, RXDEMO_Add() returns zero, which is passed all the way through to the RPC's client.
int RXDEMO_Add(a_rxCallP, a_operand1, a_operand2, a_resultP)
struct rx_call *a_rxCallP;
int a_operand1, a_operand2;
int *a_resultP;
{
printf("\t[Handling call to RXDEMO_Add(%d, %d)]\n",
a_operand1, a_operand2);
*a_resultP = a_operand1 + a_operand2;
return(0);
}
- The next and final interface routine defined in this file is RXDEMO_Getfile(). Declared as a split function in the interface file, RXDEMO_Getfile() is an example of a streamed Rx call. As with RXDEMO_Add(), the initial parameter is required to be a pointer to the Rx call structure with which this routine is associated. Similarly, the other parameters appear exactly as in the interface definition, and are handled identically.
- The difference between RXDEMO_Add() and RXDEMO_Getfile() is in the use of the rx_Write() library routine by RXDEMO_Getfile() to feed the desired file's data directly into the Rx call stream. This is an example of the use of the a_rxCallP argument, providing all the information necessary to support the rx_Write() activity.
- The RXDEMO_Getfile() function begins by printing out the fact that it's been called and the name of the requested file. It will then attempt to open the requested file and stat it to determine its size.
int RXDEMO_Getfile(a_rxCallP, a_nameToRead, a_resultP)
struct rx_call *a_rxCallP;
char *a_nameToRead;
int *a_resultP;
{
struct stat fileStat;
long fileBytes;
long nbofileBytes;
int code;
int bytesReallyWritten;
int bytesToSend;
int maxBytesToSend;
int bytesRead;
char buff[RXDEMO_BUFF_BYTES+1];
int fd;
maxBytesToSend = RXDEMO_BUFF_BYTES;
printf("\t[Handling call to RXDEMO_Getfile(%s)]\n", a_nameToRead);
fd = open(a_nameToRead, O_RDONLY, 0444);
if (fd < 0) {
printf("\t\t[**Can't open file '%s']\n", a_nameToRead);
*a_resultP = RXDEMO_CODE_CANT_OPEN;
return(1);
} else
printf("\t\t[file opened]\n");
code = fstat(fd, &fileStat);
if (code) {
*a_resultP = RXDEMO_CODE_CANT_STAT;
printf("\t\t[file closed]\n");
close(fd);
return(1);
}
fileBytes = fileStat.st_size;
printf("\t\t[file has %d bytes]\n", fileBytes);
- Only standard unix operations have been used so far. Now that the file is open, we must first feed the size of the file, in bytes, to the Rx call stream. With this information, the client code can then determine how many bytes will follow on the stream. As with all data that flows through an Rx stream, the longword containing the file size, in bytes, must be converted to network byte order before being sent. This ensures that the recipient may properly interpret the streamed information, regardless of its memory architecture.
nbofileBytes = htonl(fileBytes);
bytesReallyWritten = rx_Write(a_rxCallP, &nbofileBytes, sizeof(long));
if (bytesReallyWritten != sizeof(long)) {
printf("** %d bytes written instead of %d for file length\n",
bytesReallyWritten, sizeof(long));
*a_resultP = RXDEMO_CODE_WRITE_ERROR;
printf("\t\t[file closed]\n");
close(fd);
return(1);
}
- Once the number of file bytes has been placed in the stream, the RXDEMO_Getfile() routine runs a loop, reading a buffer's worth of the file and then inserting that buffer of file data into the Rx stream at each iteration. This loop executes until all of the file's bytes have been shipped. Notice there is no special end-of-file character or marker inserted into the stream.
- The body of the loop checks for both unix read() and rx_Write() errors. If there is a problem reading from the unix file into the transfer buffer, it is reflected back to the client by setting the error return parameter appropriately. Specifically, an individual unix read() operation could fail to return the desired number of bytes. Problems with rx_Write() are handled similarly. All errors discovered in the loop result in the file being closed, and RXDEMO_Getfile() exiting with a non-zero return value.
while (fileBytes > 0) {
bytesToSend = (fileBytes > maxBytesToSend ?
maxBytesToSend : fileBytes);
bytesRead = read(fd, buff, bytesToSend);
if (bytesRead != bytesToSend) {
printf("Read %d instead of %d bytes from the file\n",
bytesRead, bytesToSend);
*a_resultP = RXDEMO_CODE_CANT_READ;
printf("\t\t[file closed]\n");
close(fd);
return(1);
}
bytesReallyWritten = rx_Write(a_rxCallP, buff, bytesToSend);
if (bytesReallyWritten != bytesToSend) {
printf("%d file bytes written instead of %d\n",
bytesReallyWritten, bytesToSend);
*a_resultP = RXDEMO_CODE_WRITE_ERROR;
printf("\t\t[file closed]\n");
close(fd);
return(1);
}
fileBytes -= bytesToSend;
}
- Once all of the file's bytes have been shipped to the remote client, all that remains to be done is to close the file and return successfully.
*a_resultP = RXDEMO_CODE_SUCCESS;
printf("\t\t[file closed]\n");
close(fd);
return(0);
}
Section 6.2.4: Makefile
- This file directs the compilation and installation of the rxdemo code. It specifies the locations of libraries, include files, sources, and such tools as Rxgen and install, which strips symbol tables from executables and places them in their target directories. This Makefile demonstrates cross-cell software development, with the rxdemo sources residing in the grand.central.org cell and the AFS include files and libraries accessed from their locations in the transarc.com cell.
- In order to produce and install the rxdemo_server and rxdemo_client binaries, the system target should be specified on the command line when invoking make:
make system
- A note of caution is in order concerning generation of the rxdemo binaries. While tools exist that deposit the results of all compilations into other (architecture-specific) directories, and thus facilitate multiple simultaneous builds across a variety of machine architectures (e.g., Transarc's washtool), the assumption is made here that compilations will take place directly in the directory containing all the rxdemo sources. Thus, a user will have to execute a make clean command to remove all machine-specific object, library, and executable files before compiling for a different architecture. Note, though, that the binaries are installed into a directory specifically reserved for the current machine type. Specifically, the final pathname component of the ${PROJ_DIR}bin installation target is really a symbolic link to ${PROJ_DIR}.bin/.
- Two libraries are needed to support the rxdemo code. The first is obvious, namely the Rx librx.a library. The second is the lightweight thread package library, liblwp.a, which implements all the threading operations that must be performed. The include files are taken from the unix /usr/include directory, along with various AFS-specific directories. Note that for portability reasons, this Makefile only contains fully-qualified AFS pathnames and "standard" unix pathnames (such as /usr/include).
SHELL = /bin/sh
TOOL_CELL = grand.central.org
AFS_INCLIB_CELL = transarc.com
USR_CONTRIB = /afs/${TOOL_CELL}/darpa/usr/contrib/
PROJ_DIR = ${USR_CONTRIB}.site/grand.central.org/rxdemo/
AFS_INCLIB_DIR = /afs/${AFS_INCLIB_CELL}/afs/dest/
RXGEN = ${AFS_INCLIB_DIR}bin/rxgen
INSTALL = ${AFS_INCLIB_DIR}bin/install
LIBS = ${AFS_INCLIB_DIR}lib/librx.a \
${AFS_INCLIB_DIR}lib/liblwp.a
CFLAGS = -g \
-I. \
-I${AFS_INCLIB_DIR}include \
-I${AFS_INCLIB_DIR}include/afs \
-I${AFS_INCLIB_DIR} \
-I/usr/include
system: install
install: all
${INSTALL} rxdemo_client ${PROJ_DIR}bin
${INSTALL} rxdemo_server ${PROJ_DIR}bin
all: rxdemo_client rxdemo_server
rxdemo_client: rxdemo_client.o ${LIBS} rxdemo.cs.o
${CC} ${CFLAGS} -o rxdemo_client rxdemo_client.o rxdemo.cs.o ${LIBS}
rxdemo_server: rxdemo_server.o rxdemo.ss.o ${LIBS}
${CC} ${CFLAGS} -o rxdemo_server rxdemo_server.o rxdemo.ss.o ${LIBS}
rxdemo_client.o: rxdemo.h
rxdemo_server.o: rxdemo.h
rxdemo.cs.c rxdemo.ss.c rxdemo.xdr.c rxdemo.h: rxdemo.xg
${RXGEN} rxdemo.xg
clean:
rm -f *.o rxdemo.cs.c rxdemo.ss.c rxdemo.xdr.c rxdemo.h \
rxdemo_client rxdemo_server core
Section 6.3: Computer-Generated Files
- The four human-generated files described above provide all the information necessary to construct the full set of modules to support the rxdemo example application. This section describes those routines that are generated from the base set by Rxgen, filling out the code required to implement an Rx service.
Section 6.3.1: rxdemo.cs.c
- The rxdemo_client.c program, described in Section 6.2.2, calls the client-side stub routines contained in this module in order to make rxdemo RPCs. Basically, these client-side stubs are responsible for creating new Rx calls on the given connection parameter and then marshalling and unmarshalling the rest of the interface call parameters. The IN and INOUT arguments, namely those that are to be delivered to the server-side code implementing the call, must be packaged in network byte order and shipped along the given Rx call. The return parameters, namely those objects declared as INOUT and OUT, must be fetched from the server side of the associated Rx call, put back in host byte order, and inserted into the appropriate parameter variables.
- The first part of rxdemo.cs.c echoes the definitions appearing in the rxdemo.xg interface file, and also #includes another Rxgen-generated file, rxdemo.h.
#include "rxdemo.h"
#include <rx/rx.h>
#include <rx/rx_null.h>
#define RXDEMO_SERVER_PORT 8000
#define RXDEMO_SERVICE_PORT 0
#define RXDEMO_SERVICE_ID 4
#define RXDEMO_NULL_SECOBJ_IDX 0
#define RXDEMO_MAX 3
#define RXDEMO_MIN 2
#define RXDEMO_NULL 0
#define RXDEMO_NAME_MAX_CHARS 64
#define RXDEMO_BUFF_BYTES 512
#define RXDEMO_CODE_SUCCESS 0
#define RXDEMO_CODE_CANT_OPEN 1
#define RXDEMO_CODE_CANT_STAT 2
#define RXDEMO_CODE_CANT_READ 3
#define RXDEMO_CODE_WRITE_ERROR 4
- The next code fragment defines the client-side stub for the RXDEMO_Add() routine, called by the rxdemo_client program to execute the associated RPC.
int RXDEMO_Add(z_conn, a, b, result)
register struct rx_connection *z_conn;
int a, b;
int * result;
{
struct rx_call *z_call = rx_NewCall(z_conn);
static int z_op = 1;
int z_result;
XDR z_xdrs;
xdrrx_create(&z_xdrs, z_call, XDR_ENCODE);
if ((!xdr_int(&z_xdrs, &z_op))
|| (!xdr_int(&z_xdrs, &a))
|| (!xdr_int(&z_xdrs, &b))) {
z_result = RXGEN_CC_MARSHAL;
goto fail;
}
z_xdrs.x_op = XDR_DECODE;
if ((!xdr_int(&z_xdrs, result))) {
z_result = RXGEN_CC_UNMARSHAL;
goto fail;
}
z_result = RXGEN_SUCCESS;
fail: return rx_EndCall(z_call, z_result);
}
- The very first operation performed by RXDEMO_Add() occurs in the local variable declarations, where z_call is set to point to the structure describing a newly-created Rx call on the given connection. An XDR structure, z_xdrs, is then created for the given Rx call with xdrrx_create(). This XDR object is used to deliver the proper arguments, in network byte order, to the matching server stub code. Three calls to xdr_int() follow, which insert the appropriate Rx opcode and the two operands into the Rx call. With the IN arguments thus transmitted, RXDEMO_Add() prepares to pull the value of the single OUT parameter. The z_xdrs XDR structure, originally set to XDR_ENCODE objects, is now reset to XDR_DECODE to convert further items received into host byte order. Once the return parameter promised by the function is retrieved, RXDEMO_Add() returns successfully.
- Should any failure occur in passing the parameters to and from the server side of the call, the branch to fail will invoke rx_EndCall(), which advises the server that the call has come to a premature end (see Section 5.6.6 for full details on rx_EndCall() and the meaning of its return value).
- The next client-side stub appearing in this generated file handles the delivery of the IN parameters for StartRXDEMO_Getfile(). It operates identically to the RXDEMO_Add() stub routine in this respect, except that it does not attempt to retrieve the OUT parameter. Since this is a streamed call, the number of bytes that will be placed on the Rx stream cannot be determined at compile time, and must be handled explicitly by rxdemo_client.c.
int StartRXDEMO_Getfile(z_call, a_nameToRead)
register struct rx_call *z_call;
char * a_nameToRead;
{
static int z_op = 2;
int z_result;
XDR z_xdrs;
xdrrx_create(&z_xdrs, z_call, XDR_ENCODE);
if ((!xdr_int(&z_xdrs, &z_op)) || (!xdr_string(&z_xdrs, &a_nameToRead,
RXDEMO_NAME_MAX_CHARS))) {
z_result = RXGEN_CC_MARSHAL;
goto fail;
}
z_result = RXGEN_SUCCESS;
fail: return z_result;
}
- The final stub routine appearing in this generated file, EndRXDEMO_Getfile(), handles the case where rxdemo_client.c has already successfully recovered the unbounded streamed data appearing on the call, and then simply has to fetch the OUT parameter. This routine behaves identically to the latter portion of RXDEMO_Add().
int EndRXDEMO_Getfile(z_call, a_result)
register struct rx_call *z_call;
int * a_result;
{
int z_result;
XDR z_xdrs;
xdrrx_create(&z_xdrs, z_call, XDR_DECODE);
if ((!xdr_int(&z_xdrs, a_result))) {
z_result = RXGEN_CC_UNMARSHAL;
goto fail;
}
z_result = RXGEN_SUCCESS;
fail:
return z_result;
}
Section 6.3.2: rxdemo.ss.c
- This generated file provides the core components required to implement the server side of the rxdemo RPC service. Included in this file is the generated dispatcher routine, RXDEMO_ExecuteRequest(), which the rx_NewService() invocation in rxdemo_server.c uses to construct the body of each listener thread's loop. Also included are the server-side stubs to handle marshalling and unmarshalling of parameters for each defined RPC call (i.e., RXDEMO_Add() and RXDEMO_Getfile()). These stubs are called by RXDEMO_ExecuteRequest(). The routine to be called by RXDEMO_ExecuteRequest() depends on the opcode received, which appears as the very first longword in the call data.
- As usual, the first fragment is copyright information followed by the body of the definitions from the interface file.
#include "rxdemo.h"
#include <rx/rx.h>
#include <rx/rx_null.h>
#define RXDEMO_SERVER_PORT 8000
#define RXDEMO_SERVICE_PORT 0
#define RXDEMO_SERVICE_ID 4
#define RXDEMO_NULL_SECOBJ_IDX 0
#define RXDEMO_MAX 3
#define RXDEMO_MIN 2
#define RXDEMO_NULL 0
#define RXDEMO_NAME_MAX_CHARS 64
#define RXDEMO_BUFF_BYTES 512
#define RXDEMO_CODE_SUCCESS 0
#define RXDEMO_CODE_CANT_OPEN 1
#define RXDEMO_CODE_CANT_STAT 2
#define RXDEMO_CODE_CANT_READ 3
#define RXDEMO_CODE_WRITE_ERROR 4
- After this preamble, the first server-side stub appears. This RXDEMO_Add() routine is basically the inverse of the RXDEMO_Add() client-side stub defined in rxdemo.cs.c. Its job is to unmarshall the IN parameters for the call, invoke the "true" server-side RXDEMO_Add() routine (defined in rxdemo_server.c), and then package and ship the OUT parameter. Because it is so similar to the client-side RXDEMO_Add(), no further discussion is offered here.
long _RXDEMO_Add(z_call, z_xdrs)
struct rx_call *z_call;
XDR *z_xdrs;
{
long z_result;
int a, b;
int result;
if ((!xdr_int(z_xdrs, &a)) || (!xdr_int(z_xdrs, &b)))
{
z_result = RXGEN_SS_UNMARSHAL;
goto fail;
}
z_result = RXDEMO_Add(z_call, a, b, &result);
z_xdrs->x_op = XDR_ENCODE;
if ((!xdr_int(z_xdrs, &result)))
z_result = RXGEN_SS_MARSHAL;
fail: return z_result;
}
- The second server-side stub, RXDEMO_Getfile(), appears next. It operates identically to RXDEMO_Add(), first unmarshalling the IN arguments, then invoking the routine that actually performs the server-side work for the call, then finishing up by returning the OUT parameters.
long _RXDEMO_Getfile(z_call, z_xdrs)
struct rx_call *z_call;
XDR *z_xdrs;
{
long z_result;
char * a_nameToRead=(char *)0;
int a_result;
if ((!xdr_string(z_xdrs, &a_nameToRead, RXDEMO_NAME_MAX_CHARS))) {
z_result = RXGEN_SS_UNMARSHAL;
goto fail;
}
z_result = RXDEMO_Getfile(z_call, a_nameToRead, &a_result);
z_xdrs->x_op = XDR_ENCODE;
if ((!xdr_int(z_xdrs, &a_result)))
z_result = RXGEN_SS_MARSHAL;
fail: z_xdrs->x_op = XDR_FREE;
if (!xdr_string(z_xdrs, &a_nameToRead, RXDEMO_NAME_MAX_CHARS))
goto fail1;
return z_result;
fail1: return RXGEN_SS_XDRFREE;
}
- The next portion of the automatically generated server-side module sets up the dispatcher routine for incoming Rx calls. The above stub routines are placed into an array in opcode order.
long _RXDEMO_Add();
long _RXDEMO_Getfile();
static long (*StubProcsArray0[])() = {_RXDEMO_Add, _RXDEMO_Getfile};
- The dispatcher routine itself, RXDEMO_ExecuteRequest(), appears next. This is the function provided to the rx_NewService() call in rxdemo_server.c, and it is used as the body of each listener thread's service loop. When activated, it decodes the first longword in the given Rx call, which contains the opcode. It then dispatches the call based on this opcode, invoking the appropriate server-side stub as organized in the StubProcsArray.
RXDEMO_ExecuteRequest(z_call)
register struct rx_call *z_call;
{
int op;
XDR z_xdrs;
long z_result;
xdrrx_create(&z_xdrs, z_call, XDR_DECODE);
if (!xdr_int(&z_xdrs, &op))
z_result = RXGEN_DECODE;
else if (op < RXDEMO_LOWEST_OPCODE || op > RXDEMO_HIGHEST_OPCODE)
z_result = RXGEN_OPCODE;
else
z_result = (*StubProcsArray0[op - RXDEMO_LOWEST_OPCODE])(z_call, &z_xdrs);
return z_result;
}
Section 6.3.3: rxdemo.xdr.c
- This file is created to provide the special routines needed to map any user-defined structures appearing as Rx arguments into and out of network byte order. Again, all on-the-wire data appears in network byte order, ensuring proper communication between servers and clients with different memory organizations.
- Since the rxdemo example application does not define any special structures to pass as arguments in its calls, this generated file contains only the set of definitions appearing in the interface file. In general, though, should the user define a struct xyz and use it as a parameter to an RPC function, this file would contain a routine named xdr_xyz(), which converts the structure field-by-field to and from network byte order.
#include "rxdemo.h"
#include <rx/rx.h>
#include <rx/rx_null.h>
#define RXDEMO_SERVER_PORT 8000
#define RXDEMO_SERVICE_PORT 0
#define RXDEMO_SERVICE_ID 4
#define RXDEMO_NULL_SECOBJ_IDX 0
#define RXDEMO_MAX 3
#define RXDEMO_MIN 2
#define RXDEMO_NULL 0
#define RXDEMO_NAME_MAX_CHARS 64
#define RXDEMO_BUFF_BYTES 512
#define RXDEMO_CODE_SUCCESS 0
#define RXDEMO_CODE_CANT_OPEN 1
#define RXDEMO_CODE_CANT_STAT 2
#define RXDEMO_CODE_CANT_READ 3
#define RXDEMO_CODE_WRITE_ERROR 4
Section 6.4: Sample Output
- This section contains the output generated by running the example rxdemo server and rxdemo client programs described above. The server end was run on a machine named Apollo, and the client program was run on a machine named Bigtime.
- The server program on Apollo was started as follows:
- apollo: rxdemo_server
- rxdemo_server: Example Rx server process
- Listening on UDP port 8000
- At this point, rxdemo_server has initialized its Rx module and started up its listener LWPs, which are sleeping on the arrival of an RPC from any rxdemo_client.
- The client portion was then started on Bigtime:
bigtime: rxdemo_client apollo
rxdemo: Example Rx client process
Connecting to Rx server on 'apollo', IP address 0x1acf37c0, UDP port 8000
---> Connected. Asking server to add 1 and 2: Reported sum is 3
- The command line instructs rxdemo_client to connect to the rxdemo_server on host apollo and to use the standard port defined for this service. It reports on the successful Rx connection establishment, and immediately executes an RXDEMO_Add(1, 2) RPC. It reports that the sum was successfully received. When the RPC request arrived at the server and was dispatched by the rxdemo_server code, it printed out the following line:
[Handling call to RXDEMO_Add(1, 2)]
- Next, rxdemo_client prompts for the name of the file to read from the rxdemo_server. It is told to fetch the Makefile for the Rx demo directory. The server is executing in the same directory in which it was compiled, so an absolute name for the Makefile is not required. The client echoes the following:
Name of file to read from server: Makefile
Setting up an Rx call for RXDEMO_Getfile...done
- As with the RXDEMO_Add() call, rxdemo_server receives this RPC, and prints out the following information:
- [Handling call to RXDEMO_Getfile(Makefile)]
- [file opened]
- [file has 2450 bytes]
- [file closed]
- It successfully opens the named file, and reports on its size in bytes. The rxdemo_server program then executes the streamed portion of the RXDEMO_Getfile call, and when complete, indicates that the file has been closed. Meanwhile, rxdemo_client prints out the reported size of the file, follows it with the file's contents, then advises that the test run has completed:
[file contents (2450 bytes) fetched over the Rx call appear below]
SHELL = /bin/sh
TOOL_CELL = grand.central.org
AFS_INCLIB_CELL = transarc.com
USR_CONTRIB = /afs/${TOOL_CELL}/darpa/usr/contrib/
PROJ_DIR = ${USR_CONTRIB}.site/grand.central.org/rxdemo/
AFS_INCLIB_DIR = /afs/${AFS_INCLIB_CELL}/afs/dest/
RXGEN = ${AFS_INCLIB_DIR}bin/rxgen
INSTALL = ${AFS_INCLIB_DIR}bin/install
LIBS = ${AFS_INCLIB_DIR}lib/librx.a \
${AFS_INCLIB_DIR}lib/liblwp.a
CFLAGS = -g \
-I. \
-I${AFS_INCLIB_DIR}include \
-I${AFS_INCLIB_DIR}include/afs \
-I${AFS_INCLIB_DIR} \
-I/usr/include
system: install
install: all
${INSTALL} rxdemo_client ${PROJ_DIR}bin
${INSTALL} rxdemo_server ${PROJ_DIR}bin
all: rxdemo_client rxdemo_server
rxdemo_client: rxdemo_client.o ${LIBS} rxdemo.cs.o
${CC} ${CFLAGS} -o rxdemo_client rxdemo_client.o rxdemo.cs.o ${LIBS}
rxdemo_server: rxdemo_server.o rxdemo.ss.o ${LIBS}
${CC} ${CFLAGS} -o rxdemo_server rxdemo_server.o rxdemo.ss.o ${LIBS}
rxdemo_client.o: rxdemo.h
rxdemo_server.o: rxdemo.h
rxdemo.cs.c rxdemo.ss.c rxdemo.xdr.c rxdemo.h: rxdemo.xg
${RXGEN} rxdemo.xg
clean:
rm -f *.o rxdemo.cs.c rxdemo.ss.c rxdemo.xdr.c rxdemo.h \
rxdemo_client rxdemo_server core
[End of file data]
rxdemo complete.
- The rxdemo_server program continues to run after handling these calls, offering its services to any other callers. It can be killed by sending it an interrupt signal using Control-C (or whatever mapping has been set up for the shell's interrupt character).