
Motivation

As computers become more tightly coupled by networks and users become more dependent on networking, bandwidth requirements have increased. Performance, however, is ultimately limited by software, so an increase in bandwidth does not necessarily translate into an increase in performance. As software becomes more efficient, communication improves with it.

The old concept of input/output is no longer appropriate: almost everything is communication of information between processors, networks, memories, disk drives and displays. As the amount of information has grown, it has become increasingly shared in order to avoid time-consuming copying.

What has not changed during this evolution, however, is the speed of signals (i.e. the speed of light): the greater the distance over which data is transmitted, the more time it takes. There are two ways of approaching this problem, and both make use of local caches.

When speaking of data residing in a "remote" location, the usual measure of time is the number of processor cycles that are wasted while awaiting the arrival of the data. Thus, as processor speeds increase, a given physical distance becomes more remote. Currently, a distance of a few centimeters is often remote enough that designers will include fast cache memories on the processor chip.

However, caches can create serious logical problems because they hold duplicate copies of data. If the data changes, all of those copies are suddenly wrong, and if the stale values continue to be supplied to the local processors, serious inconsistencies result.

So, as caching is used more and more to compensate for the increasing remoteness of shared data, keeping these caches consistent becomes correspondingly important. That is what SCI's "coherence" provides.

SCI specifies how to send signals ("Interface") in a way that is independent of application and technology ("Scalable"). It also specifies how to keep track of duplicate cached copies of data, so that all the stale copies can be refreshed when the data changes, and how to do this in a system of unspecified size and shape, with an unspecified number of processors and I/O controllers running unspecified applications, yet simply enough that the hardware can take care of everything.
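To make the stale-copy problem concrete, the sketch below is a minimal illustration of write-invalidate coherence, the general idea behind keeping cached copies consistent: a single memory line whose copies are tracked by a centralized bookkeeping structure, where a write invalidates every other copy so no stale value can be used afterwards. The structure and names are assumptions made purely for illustration; this is not SCI's actual distributed hardware protocol, which the following sections describe.

/* Toy write-invalidate coherence sketch (illustration only, not SCI's protocol).
 * One memory line; a table records which caches hold a copy. A write
 * invalidates every other copy so no stale value survives. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_CACHES 4

struct line {
    int  value;                   /* current value of the shared datum   */
    bool cached[NUM_CACHES];      /* which caches hold a valid copy      */
    int  copy[NUM_CACHES];        /* the value each cache currently sees */
};

/* A cache reads the line: fetch the up-to-date value and register the copy. */
static int cache_read(struct line *l, int cache_id)
{
    l->cached[cache_id] = true;
    l->copy[cache_id]   = l->value;
    return l->copy[cache_id];
}

/* A cache writes the line: update the value, then invalidate every other
 * copy so that no processor can keep using a stale value. */
static void cache_write(struct line *l, int cache_id, int new_value)
{
    l->value = new_value;
    for (int i = 0; i < NUM_CACHES; i++) {
        if (i != cache_id)
            l->cached[i] = false;          /* stale copies are discarded */
    }
    l->cached[cache_id] = true;
    l->copy[cache_id]   = new_value;
}

int main(void)
{
    struct line l = { .value = 42 };

    printf("cache 0 reads %d\n", cache_read(&l, 0));
    printf("cache 1 reads %d\n", cache_read(&l, 1));

    cache_write(&l, 0, 99);               /* cache 0 writes; cache 1's copy dies */

    printf("cache 1 copy valid? %s\n", l.cached[1] ? "yes" : "no");
    printf("cache 1 re-reads %d\n", cache_read(&l, 1));
    return 0;
}

Real systems do all of this in hardware, per cache line, with no software involvement; the point of the sketch is only that every copy must be found and either refreshed or discarded whenever the data changes.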

