
INTERNATIONAL COMPUTER SCIENCE INSTITUTE
1947 Center St. • Suite 600 • Berkeley, California 94704-1198 • (510) 643-9153 • FAX (510) 643-7684
MBP on T0: mixing floating and
fixed-point formats in BP learning
Davide Anguita*† and Benedict A. Gomes*‡
TR-94-038
August 1994
Abstract
We examine the efficient implementation of backprop-type algorithms on T0 [4], a
vector processor with a fixed-point engine, designed for neural network simulation. A
matrix formulation of backprop, Matrix Back Propagation (MBP) [1], has been shown to be
very efficient on some RISCs [2]. Using MBP, we achieve asymptotically
optimal performance on T0 (about 0.8 GOPS) for both the forward and backward phases,
which is not possible with the standard on-line method. Since high efficiency is futile
if convergence is poor (due to the use of fixed-point arithmetic), we use a mixture of
fixed- and floating-point operations. The key observation is that the precision of fixed
point is sufficient for good convergence if the range is chosen appropriately. Even though
the most expensive computations are implemented in fixed point, we achieve a rate
of convergence comparable to that of the floating-point version. The time taken for
conversion between fixed and floating point is also shown to be reasonable.
*International Computer Science Institute (ICSI), Berkeley, USA
†Department of Biophysical and Electronic Engineering, University of Genova, Italy
‡Department of Electrical Engineering and Computer Science, University of California, Berkeley, USA
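The abstract's key observation, that fixed-point precision suffices for convergence when the range is chosen appropriately, can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a 16-bit signed fixed-point word (the helper names `to_fixed`, `to_float`, and `choose_frac_bits` are hypothetical), and it picks the largest number of fractional bits that still covers the magnitude of the data, i.e. it trades unused range for precision.

```python
# Illustrative sketch (not the paper's code): quantizing floats to 16-bit
# signed fixed point, with the fixed-point "range" chosen per array so
# that precision is not wasted on unreachable magnitudes.
import numpy as np

def to_fixed(x, frac_bits):
    """Round floats to 16-bit signed fixed point with frac_bits fractional bits."""
    scaled = np.round(x * (1 << frac_bits))
    # Clipping guards the boundary case where max |x| lands exactly on a power of 2.
    return np.clip(scaled, -32768, 32767).astype(np.int16)

def to_float(q, frac_bits):
    """Convert 16-bit signed fixed point back to floating point."""
    return q.astype(np.float64) / (1 << frac_bits)

def choose_frac_bits(x, word_bits=16):
    """Pick the most fractional bits that still cover max |x| (range selection)."""
    m = float(np.max(np.abs(x)))
    int_bits = max(0, int(np.ceil(np.log2(m)))) if m > 0 else 0
    return word_bits - 1 - int_bits  # one bit reserved for the sign

weights = np.array([0.5, -0.25, 1.5])
fb = choose_frac_bits(weights)          # 14 fractional bits for a range of [-2, 2)
roundtrip = to_float(to_fixed(weights, fb), fb)
```

In this sketch the round trip is exact because the sample values are representable with 14 fractional bits; in general the conversion introduces quantization error bounded by half of one least-significant step, 2^-(frac_bits+1).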