Abstract
The expressive power of deep neural networks has enabled us to successfully tackle several modeling problems in computer vision, natural language processing, and financial forecasting in the last few years. Nowadays, neural networks achieving state-of-the-art (SoTA) performance in any field can comprise hundreds of layers with millions of parameters. While such networks achieve impressive performance, optimizing a single SoTA neural network often requires several days on high-end hardware. More importantly, it took the community several years of experimentation to gradually discover increasingly efficient neural network architectures, progressing from VGGNet to ResNet and then DenseNet. In addition to this expensive and time-consuming experimentation process, SoTA neural networks require powerful processors to run and thus cannot be easily deployed to mobile or embedded devices. For these reasons, improving the training and deployment efficiency of deep neural networks has become an important area of research in the deep learning community. In this chapter, we cover two topics, namely progressive neural network learning and compressive learning, which have recently been extensively developed to enhance the training and deployment of deep models.
| Original language | English |
| --- | --- |
| Title of host publication | Deep Learning for Robot Perception and Cognition |
| Editors | Alexandros Iosifidis, Anastasios Tefas |
| Publisher | Academic Press |
| Pages | 187-220 |
| Number of pages | 34 |
| ISBN (Electronic) | 9780323857871 |
| ISBN (Print) | 9780323885720 |
| DOIs | |
| Publication status | Published - 2022 |
| Publication type | A3 Book chapter |
Keywords
- Compressive learning
- Compressive sensing
- Multilinear compressive learning
- Neural architecture search
- Progressive neural network learning
Publication forum classification
- Publication forum level 2
ASJC Scopus subject areas
- General Computer Science