All component readout and triggering are controlled by the ZEUS run control. The run control consists of a programme running on a central VAX 8700, which communicates with the control programmes of the individual sub-detectors and global components through a local area network. Together these programmes are responsible for distributing commands given by the operator, monitoring the system and transferring error messages.
Different approaches have been taken to controlling and down-loading the transputer networks of the individual components. Each component has at least one equipment computer for control handling and stand-alone operation, e.g. a VAX (VMS), SGI (IRIX) or DECstation (ULTRIX). All components started with a microVAX equipped with a Q-bus-to-transputer link interface module from the CAPLIN Cybernetics Corporation. Other interface boards are now available for connecting a VAX to a transputer. Some systems use a 2TP-VME module interfaced to the single back-plane extension VME slot of a Silicon Graphics 4D/35S workstation, while other component groups use a SCSI-disk interface to down-load code and data to their transputer arrays.
Almost all the components with transputers in ZEUS use the occam programming language for their applications; the event-builder uses 3L parallel C. occam has a simple formal specification and is a low-level language close to the transputer assembler language. The compiler generates very efficient machine code, and hence there is little need to programme directly in assembler.
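To illustrate the style of the language (a minimal sketch rather than code from any ZEUS component; the names producer, consumer and pipeline are invented for the example), the following occam fragment runs two processes concurrently, connected by a synchronous channel. The PAR construct and the channel operations map directly onto the transputer's hardware scheduler and communication instructions, which is why the generated code is so efficient:

  PROC producer (CHAN OF INT out)
    SEQ i = 0 FOR 10
      out ! i              -- send i; blocks until the consumer is ready
  :

  PROC consumer (CHAN OF INT in)
    INT x:
    SEQ i = 0 FOR 10
      in ? x               -- receive a value into x (discarded in this sketch)
  :

  PROC pipeline ()
    CHAN OF INT c:
    PAR                    -- run both processes concurrently
      producer (c)
      consumer (c)
  :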
The transputer development toolset (ITOOLS) is the most commonly used programming environment within ZEUS, but some component groups have written their own software packages for controlling transputer networks, providing I/O, multiplexing data on channels, managing state and generating code automatically [15]. Some groups have also used SASD techniques and CASE tools.
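As an illustration of what the state-management part of such a package might look like (a hypothetical sketch, not ITOOLS or any component group's actual code; the names component.control, cmd.start, cmd.stop, ok and err are invented), an occam process can accept commands on one channel and report a status word on another:

  VAL INT cmd.start IS 1:
  VAL INT cmd.stop  IS 2:
  VAL INT ok  IS 0:
  VAL INT err IS 1:

  PROC component.control (CHAN OF INT command, CHAN OF INT status)
    WHILE TRUE
      INT cmd:
      SEQ
        command ? cmd        -- wait for the next command
        IF
          cmd = cmd.start
            status ! ok      -- the component would start its readout here
          cmd = cmd.stop
            status ! ok      -- the component would stop its readout here
          TRUE
            status ! err     -- unknown command
  :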
Testing and debugging software distributed over several transputers (or any parallel system) can be a problem. It is a difficult and very time-consuming task, since no good tools are available that allow one to analyze a transputer network without changing its real-time behavior. In debugging asynchronous systems we have sometimes found it necessary to force synchronization; this can be difficult or even impossible in some situations, and it requires a good global knowledge of the system. We believe the availability of a post-mortem debugger to be essential: besides locating the usual run-time errors common to other languages, it often makes possible the more difficult diagnosis of deadlock. Some groups have found automatic code generation tools invaluable in reducing compile, link and processor errors. However, a great deal of experience with such tools is usually required to maintain them.
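As a constructed example of how such a deadlock can arise (not code from any ZEUS component; the names worker and deadlocked.pair are invented), two occam processes that each output before inputting on a pair of crossed channels block forever, because channel communication is synchronous:

  PROC worker (CHAN OF INT out, CHAN OF INT in)
    INT x:
    SEQ
      out ! 1              -- blocks until the partner is ready to input
      in ? x
  :

  PROC deadlocked.pair ()
    CHAN OF INT a, b:
    PAR
      worker (a, b)        -- outputs on a, then inputs on b
      worker (b, a)        -- outputs on b, then inputs on a
  :

Neither instance ever reaches its input, so both remain blocked on their initial output; a post-mortem inspection of the network would show every process waiting on a channel.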
The trend in experimental high energy physics is to estimate the performance of a system from its underlying hardware. For parallel systems this is a particularly poor approach; designing such systems should involve a new discipline of programming, one which emphasizes software development systems and education.