Note: The following recommendations are based on performance experiments using AIX Version 3.2.5. At the time this book was written, the degree to which these recommendations would apply to AIX Version 4 was not known.
From the performance standpoint, the most important difference between DCE Distributed File Service (DFS) and NFS is the client data-caching capability of DFS, so it is not surprising that the most important performance-tuning techniques for DFS involve choosing the attributes of the client cache. The client and server parameter choices discussed in this section are:
To assess the disk versus memory trade-off in your environment, consider the following points:
Use the cm getcachesize command to determine how many 1KB blocks of the cache are being used. Divide that number by .9 to determine the memory cache size needed to accommodate the same amount of data. (About 10% of the blocks in the cache are used for DFS record keeping.)
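For example (the usage figure here is purely illustrative), if cm getcachesize reports that 9000 1KB blocks are in use, a memory cache of roughly 9000 / .9 = 10000KB would be needed to hold the same amount of data.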
Determining the appropriate DFS cache size for a particular system will take some experimentation. You might begin by estimating the sum of:
If the users' home directories are in DFS, you will want to make an allowance for the frequency with which the home directory is accessed, and the effect on perceived responsiveness of the system.
The size of the client cache is specified in the CacheInfo file and can be overridden with the dfsd -blocks n option, where n is the number of KB in the cache. This parameter applies to both memory and disk caches.
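For example, the following dfsd invocation (normally placed in the DFS client start-up procedure; the value 10000 is illustrative only) would request a cache of about 10MB:

# dfsd -blocks 10000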
The DFS cache chunk size can range from 8KB to 256KB. For large files (several MB), sequential read and write performance increases as chunk size increases, up to about 64KB. For very large files (100MB or more) a chunk size of 256KB yields the best read performance.
The chunk size is specified with the dfsd -chunksize n option, where n is an integer from 13 to 18, inclusive. The chunk size is 2**n bytes, and so ranges from 8KB (2**13) to 256KB (2**18). This parameter applies to both memory and disk caches. The default chunk size is 8KB for memory caches and 64KB for disk caches.
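For example, to request 64KB chunks (2**16 bytes), you could add the following option to the dfsd command line (the value is illustrative; choose it to suit the file sizes in your workload):

# dfsd -chunksize 16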
This parameter only applies to disk caches. For memory caches, the number of chunks is already specified by the combination of cache size and chunk size. For disk caches, the default number of chunks is computed as the number of cache blocks divided by 8. If a du of the cache directory indicates that the space is less than 90% full, increase the number of cache chunks with the dfsd -files n option, where n is the number of chunks to be accommodated. This allows better utilization of the available cache space in applications that use many small files. Since multiple files cannot share a chunk, the number of chunks determines the maximum number of files the cache can accommodate.
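For example, assuming the disk cache directory named in the CacheInfo file is /var/dce/adm/dfs/cache (your path may differ), you could check cache-space usage and then allow more, smaller chunks with something like:

# du -s /var/dce/adm/dfs/cache
# dfsd -files 2000

The value 2000 is illustrative only; it should reflect the number of small files you expect the cache to hold at one time.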
The disk cache should be in a logical volume that is:
The status-buffer size limits the maximum number of files that can be in the cache at one time. One entry is required for each file. If the status buffer is full, new files will displace old files in the cache, even though there is enough disk space to hold them. If your workload consists mostly of files that are equal to or smaller than the chunk size, the status buffer should have as many entries as there are chunks in the cache.
The status-buffer size is specified with the dfsd -stat n option, where n is the number of entries in the status buffer. The default value of n is 300.
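For example, if the disk cache is configured with 2000 chunks and the workload consists mostly of small files, a matching status buffer could be requested with the following (the value is illustrative):

# dfsd -stat 2000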
Sequential read and write performance is affected by the size of the records being read or written by the application. In general, read throughput increases with record size up to 4KB, above which it levels off. Write throughput increases with record size up to 2KB, above which it levels off or decreases slightly.
DFS uses UDP as its communications protocol. The recommendations for tuning DFS communications for servers and multiuser client systems parallel those for tuning communications in general (see "UDP, TCP/IP, and mbuf Tuning Parameters Summary"):
You can also use chdev to set these parameters, if you take the adapter offline first. For example, for a Token-Ring adapter, the sequence of commands would be:
# ifconfig tr0 detach
# chdev -l tok0 -a xmt_que_size=150 -a rec_que_size=150
# ifconfig tr0 hostname up

You can observe the effect of the change with:
$ lsattr -E -l tok0
On high-speed servers, it may be desirable to increase the values of the fxd command's -mainprocs and -tokenprocs options, to ensure that all of the available CPU capacity can be used effectively.
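As an illustration only (the values shown are assumptions, not measured recommendations, and any other options your configuration requires on the fxd command line are omitted), the fxd invocation in the DFS server start-up procedure might be changed along these lines:

# fxd -mainprocs 8 -tokenprocs 4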
The following should be considered when setting up a DCE LFS aggregate (using the newaggr command) on a DFS server:
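As a sketch only, a DCE LFS aggregate might be created with a command of the following form, where /dev/lfslv is a hypothetical logical volume and the block and fragment sizes are illustrative assumptions (check the newaggr reference page for the values that are valid on your system):

# newaggr -aggregate /dev/lfslv -blocksize 8192 -fragsize 1024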