Title: Enhancing Utilization of SIMD-Like Accelerator for Sparse Convolutional Neural Networks
Authors: Lai, Bo-Cheng
Pan, Jyun-Wei
Lin, Chien-Yu
Department: Department of Electronics Engineering and Institute of Electronics
Keywords: Load balance; machine learning; single-instruction-multiple-data (SIMD) architecture; sparse convolutional neural networks (CNNs)
Issue Date: 1-May-2019
Abstract: Although existing single-instruction-multiple-data (SIMD)-like accelerators can handle the compressed formats of sparse convolutional neural networks (CNNs), the sparse and irregular distribution of nonzero elements causes low multiplier utilization within a processing engine (PE) and imbalanced computation across PEs. This brief addresses these issues by proposing a data screening and task mapping (DSTM) accelerator, which integrates a series of techniques spanning software refinement and hardware modules. An efficient indexing module identifies the effectual computation pairs and skips unnecessary computation in a fine-grained manner. Intra-PE load imbalance is alleviated by rearranging the weight data, and a task sharing mechanism further balances the computation between PEs. Compared with the state-of-the-art SIMD-like accelerator, the proposed DSTM improves average PE utilization by 3.5x and raises overall processing throughput by 59.7%.
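To make the abstract's two key ideas concrete, here is a minimal Python sketch of (1) fine-grained screening that keeps only effectual (nonzero activation, nonzero weight) pairs, and (2) greedy task sharing that hands work to the least-loaded PE. This is an illustration of the concepts only, not the authors' hardware design; all function names and the num_pes parameter are hypothetical.

```python
# Hypothetical sketch of DSTM-style screening and task sharing; not the
# paper's RTL. Operands are (index, value) tuples with zeros omitted,
# mimicking a compressed sparse format.
from collections import defaultdict

def screen_effectual_pairs(sparse_act, sparse_wgt):
    """Keep only the (activation, weight) pairs whose indices align.

    Index matches are the effectual multiplications; everything else is
    the unnecessary computation that the indexing module skips.
    """
    wgt_by_idx = defaultdict(list)
    for idx, w in sparse_wgt:
        wgt_by_idx[idx].append(w)
    return [(a, w) for idx, a in sparse_act for w in wgt_by_idx[idx]]

def map_to_pes(pairs, num_pes=4):
    """Greedy task sharing: give each pair to the currently least-loaded PE."""
    loads = [[] for _ in range(num_pes)]
    for pair in pairs:
        min(loads, key=len).append(pair)
    return loads

acts = [(0, 1.5), (3, -2.0), (7, 0.5)]    # nonzero activations
wgts = [(0, 0.25), (2, 1.0), (3, 4.0)]    # nonzero weights
pairs = screen_effectual_pairs(acts, wgts)  # only indices 0 and 3 match
partial_products = [a * w for pe in map_to_pes(pairs) for a, w in pe]
```

In this toy run only two of nine possible products are effectual, which mirrors why zero-skipping raises multiplier utilization; the least-loaded assignment stands in for the inter-PE balancing that the brief achieves in hardware.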
URI: http://dx.doi.org/10.1109/TVLSI.2019.2897052
http://hdl.handle.net/11536/152414
ISSN: 1063-8210
DOI: 10.1109/TVLSI.2019.2897052
Journal: IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS
Volume: 27
Issue: 5
Begin Page: 1218
End Page: 1222
Appears in Collections: Articles