
INTERNATIONAL COMPUTER SCIENCE INSTITUTE

1947 Center St. • Suite 600 • Berkeley, California 94704-1198 • (510) 643-9153 • FAX (510) 643-7684

MBP on T0: mixing floating- and fixed-point formats in BP learning

Davide Anguita*† and Benedict A. Gomes*‡

TR-94-038

August 1994

Abstract
We examine the efficient implementation of backprop-type algorithms on T0 [4], a vector processor with a fixed-point engine designed for neural-network simulation. A matrix formulation of backprop, Matrix Back Prop [1], has been shown to be very efficient on some RISCs [2]. Using Matrix Back Prop, we achieve asymptotically optimal performance on T0 (about 0.8 GOPS) for both the forward and backward phases, which is not possible with the standard on-line method. Since high efficiency is futile if convergence is poor (as can happen with fixed-point arithmetic), we use a mixture of fixed- and floating-point operations. The key observation is that fixed-point precision is sufficient for good convergence if the range is chosen appropriately. Although the most expensive computations are implemented in fixed point, we achieve a rate of convergence comparable to that of the floating-point version. The time taken for conversion between fixed and floating point is also shown to be reasonable.
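The abstract's key observation, that fixed-point precision suffices for convergence when the range is chosen appropriately, amounts to quantizing values against a chosen scale before the fixed-point computation and converting back afterwards. The sketch below, which is not taken from the report, illustrates one way such a float/fixed conversion might look in C; the 16-bit word width, the symmetric range [-R, R], and the saturating behaviour are assumptions for illustration, not T0's actual format.

    /* Illustrative sketch (not the report's implementation): converting between
       floating point and a 16-bit fixed-point format once a range [-R, R] has
       been chosen. Word width and saturation policy are assumed for illustration. */
    #include <stdint.h>
    #include <math.h>

    #define FIX_BITS 16                               /* assumed word width        */
    #define FIX_MAX  ((1 << (FIX_BITS - 1)) - 1)      /* largest positive code     */

    /* Quantize a floating-point value into fixed point, saturating at the range. */
    static int16_t float_to_fixed(float x, float range)
    {
        float scaled = x * (float)FIX_MAX / range;
        if (scaled >  (float)FIX_MAX) scaled =  (float)FIX_MAX;
        if (scaled < -(float)FIX_MAX) scaled = -(float)FIX_MAX;
        return (int16_t)lrintf(scaled);
    }

    /* Recover an approximate floating-point value from the fixed-point code. */
    static float fixed_to_float(int16_t q, float range)
    {
        return (float)q * range / (float)FIX_MAX;
    }

With such a scheme, the quantization step is range / FIX_MAX, so the cost of a poorly chosen range is either saturation (range too small) or wasted precision (range too large); the per-element conversion cost is the "time taken for conversion" referred to above.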

*International Computer Science Institute (ICSI), Berkeley, USA
†Department of Biophysical and Electronic Engineering, University of Genova, Italy
‡Department of Electrical Engineering and Computer Science, University of California, Berkeley, USA