This topic provides recommendations for actions you should take (or not take) before and during the installation process.
The topic includes the following major sections:
Before you begin the installation process, be sure that you have made decisions about the size and location of disk file systems and paging spaces, and that you understand how to communicate those decisions to AIX.
If you are upgrading to a new level of AIX, you should:
We do not recommend making any a priori changes to the default CPU scheduling parameters, such as the time-slice duration. Unless you have extensive monitoring and tuning experience with the same workload on a nearly identical configuration, you should leave these parameters unchanged at installation time.
See "Monitoring and Tuning CPU Use" for post-installation recommendations.
If the system you are installing has more than 32MB of real memory and is expected to support more than five active users at one time, you may want to consider raising the minimum level of multiprogramming of the VMM memory-load-control mechanism. For example, if your conservative estimate is that four of your most memory-intensive applications should be able to run simultaneously, leaving at least 16MB for the operating system and 25% of real memory for file pages, you could increase the minimum multiprogramming level from its default of 2 to 4 with the command:
# schedtune -m 4
All other memory threshold changes should wait until you have had experience with the response of the system to the real workload.
See "Monitoring and Tuning Memory Use" for post-installation recommendations.
Although the mechanisms for defining and expanding logical volumes attempt to make the best possible default choices, satisfactory disk-I/O performance is much more likely if the installer of the system tailors the size and placement of the logical volumes to the expected data storage and workload requirements. Our recommendations are:
Drive      SCSI                    Random Pages   Sequential Pages
Capacity   Adapter                 per Second     per Second
--------   ---------------------   ------------   ----------------
200MB      Model 250 Integrated    approx. 40     approx. 250
400MB      SCSI II                 approx. 50     approx. 375
857MB      SCSI II                 approx. 60     approx. 550
2.4GB      SCSI II                 approx. 65*    approx. 525
1.37GB     SCSI II                 approx. 70     approx. 800
540MB      SCSI II                 approx. 85     approx. 975
1.0GB**    SCSI II                 approx. 85     approx. 1075
2.0GB      SCSI II                 approx. 85     approx. 950

 * per accessor (there are two)
** This 1.0GB drive (part number 45G9464) replaced an earlier 1.0GB
   drive (part number 55F5206) in late 1993.
Note: These numbers are derived from the results of laboratory measurements under ideal conditions. They represent a synthesis of a number of different measurements, not the results of a single benchmark. They are provided to give you a general sense of the relative speeds of the disk drives. They will change with time due to improvements in the drives, adapters, and software.
                                      Disk Drives
Disk Adapter                          per Adapter
-----------------------------------   -----------
Original RS/6000 SCSI adapter              1
SCSI-2 High Performance Controller         2
SCSI-2 Fast Adapter (8-bit)                2
SCSI-2 Fast/Wide Adapter (16-bit)          3
hdisk0 Available 00-01-00-00  400 MB SCSI Disk Drive
hdisk1 Available 00-01-00-10  320 MB SCSI Disk Drive
rmt0   Defined   00-01-00-50  2.3 GB 8mm Tape Drive
See "Monitoring and Tuning Disk I/O" for post-installation recommendations.
The general recommendation is that the sum of the sizes of the paging spaces should be equal to at least twice the size of the real memory of the machine, up to a memory size of 256MB (512MB of paging space). For memories larger than 256MB, we recommend:
total paging space = 512MB + (memory size - 256MB) * 1.25
Ideally, there should be several paging spaces of roughly equal size, each on a different physical disk drive. If you decide to create additional paging spaces, create them on physical volumes that are more lightly loaded than the physical volume in rootvg. When allocating paging space blocks, the VMM allocates four blocks, in round-robin fashion, from each of the active paging spaces that has space available. While the system is booting, only the primary paging space (hd6) is active. Consequently, all paging-space blocks allocated during boot are on the primary paging space. This means that the primary paging space should be somewhat larger than the secondary paging spaces. The secondary paging spaces should all be of the same size to ensure that the round-robin algorithm can work effectively.
The lsps -a command gives a snapshot of the current utilization level of all the paging spaces on a system. The psdanger() subroutine can also be used to determine how closely paging-space utilization is approaching dangerous levels. As an example, the following program uses psdanger() to provide a warning message when a threshold is exceeded:
/* psmonitor.c
   Monitors the system for paging-space-low conditions. When the
   condition is detected, writes a message to stderr.
     Usage:   psmonitor [Interval [Count]]
     Default: psmonitor 1 1000000
*/
#include <stdio.h>
#include <stdlib.h>     /* atoi(), exit() */
#include <unistd.h>     /* sleep() */
#include <signal.h>     /* SIGKILL, SIGDANGER */

int main(int argc, char **argv)
{
  int interval = 1;       /* seconds between checks */
  int count = 1000000;    /* number of intervals */
  int current;            /* current interval */
  int last;               /* last interval */
  int kill_offset;        /* returned by psdanger() */
  int danger_offset;      /* returned by psdanger() */

  /* are there any parameters at all? */
  if (argc > 1) {
    if ((interval = atoi(argv[1])) < 1) {
      fprintf(stderr, "Usage: psmonitor [ interval [ count ] ]\n");
      exit(1);
    }
    if (argc > 2) {
      if ((count = atoi(argv[2])) < 1) {
        fprintf(stderr, "Usage: psmonitor [ interval [ count ] ]\n");
        exit(1);
      }
    }
  }
  last = count - 1;
  for (current = 0; current < count; current++) {
    kill_offset = psdanger(SIGKILL);  /* check for out of paging space */
    if (kill_offset < 0)
      fprintf(stderr,
        "OUT OF PAGING SPACE! %d blocks beyond SIGKILL threshold.\n",
        kill_offset * (-1));
    else {
      danger_offset = psdanger(SIGDANGER);  /* check for paging space low */
      if (danger_offset < 0) {
        fprintf(stderr,
          "WARNING: paging space low. %d blocks beyond SIGDANGER threshold.\n",
          danger_offset * (-1));
        fprintf(stderr,
          "         %d blocks below SIGKILL threshold.\n",
          kill_offset);
      }
    }
    if (current < last)
      sleep(interval);
  }
  return 0;
}
If mirroring is being used and Mirror Write Consistency is on (as it is by default), you may want to locate the copies in the outer region of the disk, since the Mirror Write Consistency information is always written in Cylinder 0. From a performance standpoint, mirroring is costly, mirroring with Write Verify is costlier still (extra disk rotation per write), and mirroring with both Write Verify and Mirror Write Consistency is costliest of all (disk rotation plus a seek to Cylinder 0). To avoid confusion, we should point out that although an lslv command will usually show Mirror Write Consistency to be on for non-mirrored logical volumes, no actual processing is incurred unless the COPIES value is greater than one. Write Verify, on the other hand, defaults to off, since it does have meaning (and cost) for nonmirrored logical volumes.
See the summary of communications tuning recommendations in "UDP, TCP/IP, and mbuf Tuning Parameters Summary".