A Fast Selection Algorithm for Meshes with Multiple Broadcasting

D. Bhagavathi, P. J. Looges, S. Olariu, J. Zhang
One of the fundamental problems in computer science involves selecting the kth smallest element in a set of n elements. In this paper we present a fast selection algorithm running in O(n^(1/8)) time on the mesh-connected computer of size n^(3/8) × n^(5/8) with multiple broadcasting. Our result shows that, just like semigroup computations, selection can be done much faster on a suitably chosen rectangular mesh than on square meshes.
Keywords: meshes, broadcasting, multiple buses, selection, parallel algorithms
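For concreteness, the selection problem addressed above asks for the kth smallest of n elements. Sequentially it is solvable in expected linear time by randomized quickselect; the sketch below is illustrative only and is not the parallel algorithm developed in this paper:

```python
import random

def select(items, k):
    """Return the kth smallest element (1-indexed) of items.

    Randomized quickselect: partition around a random pivot and
    recurse into the side that must contain the answer.
    Expected running time is O(n).
    """
    assert 1 <= k <= len(items)
    pivot = random.choice(items)
    lows = [x for x in items if x < pivot]      # strictly smaller than pivot
    pivots = [x for x in items if x == pivot]   # equal to pivot
    highs = [x for x in items if x > pivot]     # strictly larger than pivot
    if k <= len(lows):
        return select(lows, k)
    if k <= len(lows) + len(pivots):
        return pivot
    return select(highs, k - len(lows) - len(pivots))
```

For example, `select([5, 1, 4, 2, 3], 3)` returns the median, 3. The parallel algorithm of the paper attacks the same problem but distributes the n elements over the processors of the mesh.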
Recent advances in VLSI have made it possible to build parallel machines featuring tens of thousands of processors. Yet, practice indicates that this increase in raw computational power does not, as a rule, translate into a performance increase of the same order of magnitude. The reason seems to be twofold: first, not all problems are known to admit efficient parallel solutions; second, parallel computation requires interprocessor communication and simultaneous memory accesses, which often act as bottlenecks in present-day parallel machines.
The mesh-connected computer architecture has emerged as one of the most natural choices for solving a large number of computational tasks in