Anirban Nag, Rajeev Balasubramonian, **Vivek Srikumar**, Ross Walker, Ali Shafiee, John Paul Strachan and Naveen Muralimanohar

IEEE Micro special issue on Memristor-Based Computing, 2018.

### Abstract

Many recent works take advantage of highly parallel analog
in-situ computation in memristor crossbars to accelerate the
many vector-matrix multiplication operations in deep neural
networks (DNNs). However, these in-situ accelerators have two
significant shortcomings: the analog-to-digital converters (ADCs)
account for a large fraction of chip power and area, and these
accelerators adopt a homogeneous design in which every resource is
provisioned for the worst case. By addressing both problems, the new
architecture, called Newton, moves closer to achieving optimal
energy per neuron for crossbar accelerators. We introduce new
techniques that apply at different levels of the tile hierarchy,
some leveraging heterogeneity and others relying on
divide-and-conquer numeric algorithms to reduce computations and
ADC pressure. Finally, we place constraints on how a workload is
mapped to tiles, thus helping reduce resource-provisioning in
tiles. For many convolutional-neural-network (CNN) dataflows and
structures, Newton achieves a 77-percent decrease in power,
51-percent improvement in energy-efficiency, and 2.1× higher
throughput/area, relative to the state-of-the-art In-Situ Analog
Arithmetic in Crossbars (ISAAC) accelerator.
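As background for the abstract above, the sketch below (not from the paper; all names and values are illustrative) shows what "analog in-situ vector-matrix multiplication" means in a memristor crossbar: input voltages drive the rows, each column's output current is the dot product of the voltages with that column's conductances, and an ADC then digitizes each column current. The quantization step hints at why ADC cost dominates such designs.

```python
def crossbar_vmm(voltages, conductances):
    """Analog VMM in a crossbar: column current I_j = sum_i V_i * G[i][j].

    Every column computes its dot product in parallel, in the analog
    domain, which is the source of the accelerators' efficiency.
    """
    rows = len(conductances)
    cols = len(conductances[0])
    return [sum(voltages[i] * conductances[i][j] for i in range(rows))
            for j in range(cols)]

def adc(current, full_scale, bits):
    """Digitize one analog column current to a bits-wide code.

    Each column output must pass through such a converter, which is
    why ADCs account for much of the chip's power and area.
    """
    levels = (1 << bits) - 1
    clamped = min(max(current, 0.0), full_scale)
    return round(clamped / full_scale * levels)

# Toy example: a 2x3 conductance matrix and a 2-element input vector.
V = [1.0, 0.5]
G = [[0.2, 0.4, 0.6],
     [0.8, 0.2, 0.4]]
currents = crossbar_vmm(V, G)
codes = [adc(i, full_scale=1.0, bits=8) for i in currents]
```

Here the three column currents are 0.6, 0.5, and 0.8 (in arbitrary units), each needing its own ADC conversion per cycle.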

### Links

- Link to paper
- See on Google Scholar

### Bib Entry

```bibtex
@article{nag2018newton,
  title={Newton: Gravitating Towards the Physical Limits of Crossbar Acceleration},
  author={Nag, Anirban and Balasubramonian, Rajeev and Srikumar, Vivek and Walker, Ross and Shafiee, Ali and Strachan, John Paul and Muralimanohar, Naveen},
  journal={IEEE Micro},
  volume={38},
  number={5},
  pages={41--49},
  year={2018},
  publisher={IEEE}
}
```