SC17 Denver, CO

Flexible Batched Sparse Matrix-Vector Product on GPUs


Workshop: 8th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems
Author: Hartwig Anzt (University of Tennessee, Karlsruhe Institute of Technology)

Abstract: We propose a variety of batched routines for concurrently processing a large collection of small, independent sparse matrix-vector products (SpMV) on graphics processing units (GPUs). These batched SpMV kernels are designed to be flexible in order to handle a batch of matrices that differ in size, nonzero count, and nonzero distribution. Furthermore, they support the three most commonly used sparse storage formats: CSR, COO, and ELL. Our experimental results on a state-of-the-art GPU reveal performance improvements of up to 25× compared to non-batched SpMV routines.



