The VMM memory-load-control facility protects an overloaded system from thrashing--a self-perpetuating paralysis in which the processes in the system are spending all their time stealing memory frames from one another and reading/writing pages on the paging device.
Memory load control is intended to smooth out infrequent peaks in load that might otherwise cause the system to thrash. It is not intended to act continuously in a configuration that has too little RAM to handle its normal workload. It is a safety net, not a trampoline. The correct solution to a fundamental, persistent RAM shortage is to add RAM, not to experiment with memory load control in an attempt to trade off response time for memory. The situations in which the memory-load-control facility may really need to be tuned are those in which there is more RAM than the defaults were chosen for, not less--configurations in which the defaults are too conservative.
You should not change the memory-load-control parameter settings unless your workload is consistent and you believe the default parameters are ill-suited to your workload.
The default parameter settings shipped with the system are always in force unless changed, and changed parameters last only until the next system boot. All memory-load-control tuning activities must be done by root. The system administrator can change the parameters to "tune" the algorithm to a particular workload, or disable it entirely, by running the schedtune command. The source and object code of schedtune are in /usr/samples/kernel.
Attention: schedtune is in the samples directory because it is very VMM-implementation dependent. The schedtune code that accompanies each release of AIX was tailored specifically to the VMM in that release. Running the schedtune executable from one release on a different release might well result in an operating-system failure. It is also possible that the functions of schedtune may change from release to release. You should not propagate shell scripts or inittab entries that include schedtune to a new release without checking the schedtune documentation for the new release to make sure that the scripts will still have the desired effect. schedtune is not supported under SMIT, nor has it been tested with all possible combinations of parameters.
Running schedtune -? displays a terse description of the flags and options. Running schedtune with no flags displays the current parameter settings, as follows:
   THRASH         SUSP          FORK  SCHED
  -h    -p    -m    -w    -e    -f    -t
  SYS   PROC  MULTI WAIT  GRACE TICKS TIME_SLICE
  6     4     2     1     2     10    0
(The -f and -t flags are not part of the memory-load-control mechanism. They are documented in the full syntax description of schedtune. The -t flag is also discussed in "Modifying the Scheduler Time Slice".) After a tuning experiment, memory load control can be reset to its default characteristics by executing schedtune -D.
Memory load control is disabled by setting a parameter value such that processes are never suspended. schedtune -h 0 effectively disables memory load control by setting to an impossibly high value the threshold that the algorithm uses to recognize thrashing.
In some specialized situations, it may be appropriate to disable memory load control from the outset. For example, if you are using a terminal emulator with a time-out feature to simulate a multiuser workload, memory-load-control intervention may result in some responses being delayed long enough for the process to be killed by the time-out feature. If you are using rmss to investigate the effects of reduced memory sizes, you will want to disable memory load control to avoid interference with your measurement.
If disabling memory load control results in more, rather than fewer, thrashing situations (with correspondingly poorer responsiveness), then memory load control is playing an active and supportive role in your system. Tuning the memory-load-control parameters then may result in improved performance--or you may need to add RAM.
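For a measurement experiment such as an rmss run, the disable-and-restore cycle described above might be wrapped in a small script. The wrapper below is an illustrative sketch, not part of AIX; it uses only the schedtune invocations discussed here (-h 0 to disable, -D to restore defaults) and falls back gracefully when the schedtune executable is absent, for instance when the sketch is exercised on a non-AIX machine.

```shell
#!/bin/sh
# Sketch (hypothetical wrapper): disable memory load control around a
# measurement, then restore the default settings afterward.
SCHEDTUNE=/usr/samples/kernel/schedtune   # AIX Version 4 sample path

run_without_mlc() {
    if [ -x "$SCHEDTUNE" ]; then
        "$SCHEDTUNE" -h 0   # threshold impossibly high: processes are never suspended
        "$@"                # the measurement itself, e.g. an rmss run
        "$SCHEDTUNE" -D     # reset memory load control to its default characteristics
    else
        echo "schedtune not found; running measurement untuned" >&2
        "$@"
    fi
}

run_without_mlc echo "measurement placeholder"
```

On a real system the wrapper must be run by root, like all schedtune tuning activity.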
Setting the minimum multiprogramming level, m, ensures that at least m processes remain active and exempt from suspension. Suppose a system administrator knew that at least ten processes must always be resident and active in RAM for successful performance, and suspected that memory load control was too vigorously suspending processes. If schedtune -m 10 were issued, the system would never suspend so many processes that fewer than ten were competing for memory. The parameter m does not count the kernel, processes that have been pinned in RAM with the plock system call, fixed-priority processes with priority values less than 60, or processes awaiting events. The system default of m=2 ensures that the kernel, all pinned processes, and two user processes will always be in the set of processes competing for RAM.
While m=2 is appropriate for a desktop, single-user configuration, it is frequently too small for larger, multiuser or server configurations with large amounts of RAM. On those systems, setting m to 4 or 6 may result in the best performance.
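The arithmetic behind m can be sketched as follows. The process counts are hypothetical; the point is that with N processes counting toward the multiprogramming level (the kernel, plock-pinned processes, fixed-priority processes below priority 60, and processes awaiting events are all excluded from N), at most N - m processes can be suspended at any one time.

```shell
#!/bin/sh
# Illustrative arithmetic only; the counts below are hypothetical.
m=10          # minimum multiprogramming level, as set by: schedtune -m 10
countable=12  # processes currently competing for RAM that count toward m

# At most countable - m processes may be suspended (never fewer than m left).
if [ "$countable" -gt "$m" ]; then
    suspendable=$(( countable - m ))
else
    suspendable=0
fi
echo "at most $suspendable of $countable processes may be suspended"
```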
When you have determined the number of processes that ought to be able to run in your system during periods of peak activity, you can add a schedtune entry at the end of the /etc/inittab file, which ensures that it is run each time the system is booted, overriding the defaults that would otherwise take effect with a reboot. For example, an appropriate /etc/inittab line for raising the minimum multiprogramming level to 4 on an AIX Version 4 system would be:
schedtune:2:wait:/usr/samples/kernel/schedtune -m 4
An equivalent /etc/inittab line for an AIX version 3.2.5 system would be:
schedtune:2:wait:/usr/lpp/bos/samples/schedtune -m 4
Remember, this line should not be propagated to a new release of AIX without a check of the documentation.
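Adding the entry can be guarded so that a repeated run does not create a duplicate line. The sketch below is illustrative: it operates on a local copy of the file so it is safe to exercise anywhere, whereas on a real system the file is /etc/inittab and the change must be made as root (the guarded-append pattern, and the file name inittab.copy, are assumptions of this sketch, not an AIX-documented procedure).

```shell
#!/bin/sh
# Sketch: append the schedtune entry to an inittab file only if no entry
# with the "schedtune" identifier is already present.
INITTAB=./inittab.copy   # local copy for illustration; real file: /etc/inittab
ENTRY='schedtune:2:wait:/usr/samples/kernel/schedtune -m 4'

touch "$INITTAB"
if grep -q '^schedtune:' "$INITTAB"; then
    echo "schedtune entry already present"
else
    printf '%s\n' "$ENTRY" >> "$INITTAB"
fi
```

Running the guarded append a second time leaves the file unchanged, so the entry appears exactly once.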
While it is possible to vary other parameters that control the suspension rate of processes and the criteria by which individual processes are selected for suspension, it is impossible to predict with any confidence the effect of such changes on a particular configuration and workload. Deciding on the default parameters was a difficult task, requiring sophisticated measurement tools and patient observation of repeating workloads. Great caution should be exercised if memory-load-control parameter adjustments other than those just discussed are considered.