TY - GEN
T1 - Optimization and deployment of CNNs at the Edge
T2 - 16th ACM International Conference on Computing Frontiers, CF 2019
AU - Meloni, Paolo
AU - Loi, Daniela
AU - Busia, Paola
AU - Deriu, Gianfranco
AU - Pimentel, Andy D.
AU - Sapra, Dolly
AU - Stefanov, Todor
AU - Minakova, Svetlana
AU - Conti, Francesco
AU - Benini, Luca
AU - Pintor, Maura
AU - Biggio, Battista
AU - Moser, Bernhard
AU - Shepelev, Natalia
AU - Fragoulis, Nikos
AU - Theodorakopoulos, Ilias
AU - Masin, Michael
AU - Palumbo, Francesca
PY - 2019/4/30
Y1 - 2019/4/30
N2 - Deep learning (DL) algorithms have already proved their effectiveness on a wide variety of application domains, including speech recognition, natural language processing, and image classification. To foster their pervasive adoption in applications where low latency, privacy, and data bandwidth are paramount, the current trend is to perform inference tasks at the edge. This requires deploying DL algorithms on low-energy, resource-constrained computing nodes, often heterogeneous and parallel, that are usually more complex to program and manage without adequate support and experience. In this paper, we present ALOHA, an integrated tool flow that facilitates the design of DL applications and their porting to embedded heterogeneous architectures. The proposed tool flow aims to automate different design steps and reduce development costs. ALOHA considers hardware-related variables as well as security, power-efficiency, and adaptivity aspects throughout the whole development process, from pre-training hyperparameter optimization and algorithm configuration to deployment.
AB - Deep learning (DL) algorithms have already proved their effectiveness on a wide variety of application domains, including speech recognition, natural language processing, and image classification. To foster their pervasive adoption in applications where low latency, privacy, and data bandwidth are paramount, the current trend is to perform inference tasks at the edge. This requires deploying DL algorithms on low-energy, resource-constrained computing nodes, often heterogeneous and parallel, that are usually more complex to program and manage without adequate support and experience. In this paper, we present ALOHA, an integrated tool flow that facilitates the design of DL applications and their porting to embedded heterogeneous architectures. The proposed tool flow aims to automate different design steps and reduce development costs. ALOHA considers hardware-related variables as well as security, power-efficiency, and adaptivity aspects throughout the whole development process, from pre-training hyperparameter optimization and algorithm configuration to deployment.
KW - Convolutional Neural Networks
KW - FPGAs
KW - Hardware accelerators
UR - http://www.scopus.com/inward/record.url?scp=85066029325&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85066029325&partnerID=8YFLogxK
U2 - 10.1145/3310273.3323435
DO - 10.1145/3310273.3323435
M3 - Conference contribution
AN - SCOPUS:85066029325
T3 - ACM International Conference on Computing Frontiers 2019, CF 2019 - Proceedings
SP - 326
EP - 332
BT - ACM International Conference on Computing Frontiers 2019, CF 2019 - Proceedings
PB - Association for Computing Machinery, Inc
Y2 - 30 April 2019 through 2 May 2019
ER -