Selected Publications

We present a novel mechanism to accelerate state-of-the-art Convolutional Neural Networks (CNNs) on a CPU-FPGA platform with coherent shared memory. First, we exploit the Fast Fourier Transform (FFT) and Overlap-and-Add (OaA) to reduce the computational requirements of the convolutional layers. We map the frequency-domain algorithms onto a highly parallel OaA-based 2D convolver design on the FPGA. Then, we propose a novel data layout in shared memory for efficient data communication between the CPU and the FPGA. Our approach can be applied to any kernel size smaller than the chosen FFT size with appropriate zero-padding, enabling acceleration of a wide range of CNN models. We exploit the data parallelism of the OaA-based 2D convolver and task parallelism to scale the overall system performance.
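The OaA decomposition above can be illustrated with a minimal 1D sketch (the paper's design is a 2D FFT-based convolver on an FPGA; this is only the underlying idea, and the function names and block size are illustrative assumptions): the input is split into blocks, each block is convolved with the kernel, and the partial results are summed with an overlap of kernel length minus one samples. In the FFT-based variant, each per-block convolution is replaced by pointwise multiplication in the frequency domain, with the kernel zero-padded to the FFT size.

```python
def conv_full(x, h):
    """Direct full linear convolution of sequences x and h."""
    out = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            out[i + j] += xi * hj
    return out

def overlap_add(x, h, block=4):
    """Overlap-and-Add: split x into blocks of length `block`, convolve
    each block with h, and accumulate the partial outputs, which overlap
    their neighbors by len(h) - 1 samples.  (In an FFT-based OaA, each
    per-block convolution becomes an FFT, a pointwise product with the
    zero-padded kernel spectrum, and an inverse FFT.)"""
    out = [0.0] * (len(x) + len(h) - 1)
    for start in range(0, len(x), block):
        part = conv_full(x[start:start + block], h)
        for k, v in enumerate(part):
            out[start + k] += v
    return out

# Sanity check: OaA reproduces the direct linear convolution exactly.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
h = [1.0, -1.0, 0.5]
assert overlap_add(x, h) == conv_full(x, h)
```

Because the per-block work is independent, the blocks can be processed by parallel convolver instances, which is the data parallelism the design exploits.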

Recent Blogs

Authors: Chi Zhang, Corey Chen, Limian Zhang

This is a course project completed in Fall 2017 for CSCI 599: Deep Learning and Its Applications at USC.

I am a teaching assistant at the University of Southern California:

  • CSCI 350: Introduction to Operating Systems, Fall 2017


  • Last name minus 'g' plus '527' AT usc DOT edu
  • EEB 246, University of Southern California, CA, 90089, USA