Please use this identifier to cite or link to this item: https://open.uns.ac.rs/handle/123456789/1824
Title: Reducing off-chip memory traffic in deep CNNs using stick buffer cache
Authors: Rakanović, Damjan
Erdeljan, Andrea
Vranjković, Vuk 
Vukobratovic, B.
Teodorović, Predrag 
Struharik, Rastislav 
Date of issue: 5-Jan-2018
Journal: 2017 25th Telecommunications Forum, TELFOR 2017 - Proceedings
Abstract: © 2017 IEEE. Recent studies show that traffic between Convolutional Neural Network (CNN) accelerators and off-chip memory becomes critical with respect to energy consumption as networks grow deeper to improve performance. This is especially important for low-power embedded applications. Since on-chip data transfer is much less expensive in terms of power consumption, a significant improvement can be obtained by caching and reusing previously transferred off-chip data. However, due to the unique access pattern of convolution computations within CNNs, standard cache memories would not be efficient for this purpose. In this paper, we propose an intelligent on-chip memory architecture that allows caching and a significant reduction of feature-map transfers from off-chip memory during the computation of convolutional layers in CNNs. Experimental results show that the proposed scheme can reduce off-chip feature-map traffic by up to 98.5% per convolutional layer for AlexNet and by 89% for each convolutional layer of VGG-16.
URI: https://open.uns.ac.rs/handle/123456789/1824
ISBN: 9781538630723
DOI: 10.1109/TELFOR.2017.8249398
Appears in collections: FTN Publikacije/Publications
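
The abstract describes the core idea: feature-map data fetched from off-chip memory is kept on-chip so that overlapping convolution windows reuse it instead of re-reading it from DRAM. The sketch below is only a first-order traffic model of that reuse, not the stick buffer architecture from the paper; the function name, the simple row-buffer model, and the example layer shape are illustrative assumptions.

```python
# Illustrative sketch only: a rough, first-order model of why caching input
# feature-map rows on-chip cuts off-chip traffic for a convolutional layer.
# The buffer model and function name are hypothetical, NOT the paper's design.

def offchip_reads(height, width, in_ch, kernel, stride, cached_rows):
    """Estimate off-chip input feature-map reads for one convolutional layer.

    cached_rows: number of input rows held in an on-chip buffer.
    With no cached rows, every kernel window re-reads its inputs from DRAM;
    with at least `kernel` rows cached, each input element is read only once.
    """
    out_h = (height - kernel) // stride + 1
    out_w = (width - kernel) // stride + 1
    if cached_rows >= kernel:
        # Full row reuse: each input element crosses the off-chip interface once.
        return height * width * in_ch
    # No reuse: every output position re-fetches its entire receptive field.
    return out_h * out_w * kernel * kernel * in_ch

# Example: an early VGG-16-style layer, 224x224x64 input, 3x3 kernel, stride 1.
naive = offchip_reads(224, 224, 64, kernel=3, stride=1, cached_rows=0)
cached = offchip_reads(224, 224, 64, kernel=3, stride=1, cached_rows=3)
print(f"naive reads:  {naive:,}")
print(f"cached reads: {cached:,}")
print(f"reduction:    {1 - cached / naive:.1%}")
```

For a 3x3 kernel with stride 1, each input element falls inside roughly nine windows, so full row reuse cuts input traffic by about 8/9, close to 89%. This back-of-the-envelope estimate is in the same ballpark as the per-layer VGG-16 figure quoted in the abstract, although the paper's reported numbers come from its actual stick buffer cache architecture, not from this simplified model.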

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.