Low-Energy Asynchronous Memory Design
José A. Tierno    Alain J. Martin
California Institute of Technology
Pasadena, CA 91125
We introduce the concept of energy per operation as a measure of performance of an asynchronous circuit. We show how to model energy consumption based on the high-level language specification. This model is independent of voltage and timing considerations. We apply this model to memory design. We show first how to dimension a memory array, and how to break up this memory array into smaller arrays to minimize the energy per access. We then show how to use cache memory and pre-fetch mechanisms to further reduce energy per access.
Keywords: Low-energy, low-power, asynchronous design, memory design.
Present-day portable computers run the most common interactive applications (word processors, spreadsheets, windows, etc.) with no noticeable computation delay; weight and battery life have become more important than processing speed. These two factors are related by battery size: to operate the computer for a longer time without recharging, we need a larger, heavier battery.
The limitation is therefore the total amount of electric energy stored in the battery and available for operation. To extend battery life, we have to make the computer more efficient in the way it uses this energy.
Electrical power dissipation has been used as a figure of merit for this type of application. It is convenient for synchronous circuits with no power management, where power dissipation is largely independent of the circuit's level of activity. Asynchronous operation is better described in terms of reactive programs: energy is dissipated only when the circuit is active. As a consequence, asynchronous circuits can have remarkable energy performance [6, 7]. For asynchronous systems, a proper measure of performance is the energy per operation. This metric measures the energy required to execute an instruction, fetch a piece of data from memory, service an interrupt, etc. To maximize battery life, we minimize the average energy per operation; that is, we maximize the number of instructions we can execute on one battery charge.
Energy per operation is an additive quantity: given a computation described in terms of more elementary steps, we can calculate the energy required to execute that computation by adding the energy requirements of each step. In this way we can compare the energy efficiency of different algorithms that execute the same computation, independently of timing considerations. Comparing power consumptions, in contrast, would require some knowledge about timing (e.g., "so much power at so much throughput").
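The additivity property can be illustrated with a small sketch. The operation names and per-step energy figures below are hypothetical, chosen only for illustration; they are not measurements from this paper:

```python
# Illustrative only: hypothetical per-operation energy costs (in picojoules).
# Because energies add, two algorithms computing the same result can be
# compared without any timing information.

COST_PJ = {"alu_op": 5.0, "mem_read": 50.0, "mem_write": 60.0}

def total_energy(op_counts):
    """Sum the energy of each elementary step of a computation."""
    return sum(COST_PJ[op] * n for op, n in op_counts.items())

# Hypothetical algorithm A recomputes values; algorithm B caches them in memory.
algo_a = {"alu_op": 400, "mem_read": 20, "mem_write": 0}
algo_b = {"alu_op": 100, "mem_read": 60, "mem_write": 20}

print(total_energy(algo_a))  # 3000.0 pJ
print(total_energy(algo_b))  # 4700.0 pJ
```

Under these (assumed) costs, the recomputing algorithm is the more energy-efficient one, a conclusion reached without knowing how long either algorithm takes.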
In this article we propose an energy model for asynchronous circuits based on the energy cost of data communications. This model is justified in terms of the physical implementation of a communication action, and the actual energy dissipation associated with that implementation.
As an example of the use of the energy model, we analyze the design of asynchronous memories. Memory subsystems are usually designed for speed and density, with secondary consideration given to energy. Memory is slow compared to processors, and high throughput is achieved through parallelism (wide data-words) and prediction (memory caching). These same design techniques can be used to improve energy performance; the design criteria are, however, different, and are explained in detail in this paper.
First we show how to partition a memory array to minimize access energy under the assumption that all addresses are equally probable. Second, we show how to use the statistics of long sequences of addresses to further reduce the average energy per access. These techniques result in a trade-off between area and energy per access. This analysis shows that conventional commercial architectures are not optimal from the point of view of energy efficiency.
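The flavor of the partitioning trade-off can be sketched with a toy model. This is an assumption for illustration, not the model derived in this paper: suppose that splitting an N-word array into k equal banks, only one of which is activated per access, gives an energy per access of E(k) = a·k + b·N/k, where a·k stands for the cost of routing the address and data past k banks and b·N/k for the bitline energy inside the selected bank:

```python
# Toy model (an illustrative assumption, not this paper's energy model):
# E(k) = a*k + b*N/k for an N-word array split into k equal banks,
# with only the selected bank activated on each access.

def energy_per_access(k, n_words, a=1.0, b=0.5):
    return a * k + b * n_words / k

def best_bank_count(n_words, a=1.0, b=0.5):
    """Pick the power-of-two bank count minimizing the modeled energy."""
    candidates = [2 ** i for i in range(n_words.bit_length()) if 2 ** i <= n_words]
    return min(candidates, key=lambda k: energy_per_access(k, n_words, a, b))

n = 4096
k = best_bank_count(n)
print(k, energy_per_access(k, n))  # the continuous optimum is k = sqrt(b*n/a)
```

In this toy model the optimum balances the two terms (k ≈ sqrt(b·N/a)), so neither a single monolithic array nor maximal partitioning minimizes energy, which is the qualitative point developed in the paper.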