Progressive and compressive learning

Dat Thanh Tran, Moncef Gabbouj, Alexandros Iosifidis

Research output: Chapter › Scientific › peer-reviewed


The expressive power of deep neural networks has enabled us to successfully tackle several modeling problems in computer vision, natural language processing, and financial forecasting in recent years. Nowadays, a neural network achieving state-of-the-art (SoTA) performance in a given field may consist of hundreds of layers with millions of parameters. While such models achieve impressive performance, optimizing a single SoTA neural network often requires several days on high-end hardware. More importantly, it took the community several years of experimentation to gradually discover increasingly efficient network architectures, progressing from VGGNet to ResNet and then DenseNet. In addition to this expensive and time-consuming experimentation process, SoTA neural networks require powerful processors to run and cannot be easily deployed to mobile or embedded devices. For these reasons, improving the training and deployment efficiency of deep neural networks has become an important area of research in the deep learning community. In this chapter, we cover two topics, namely progressive neural network learning and compressive learning, which have been extensively developed in recent years to enhance the training and deployment of deep models.
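Progressive neural network learning grows a model's topology incrementally during training instead of fixing it in advance. As a rough, hypothetical illustration of the idea (not the specific algorithms presented in the chapter), the sketch below grows a random-feature network block by block: each new block is appended to the frozen representation, only the linear output layer is re-solved by least squares, and growth stops once the error improvement becomes negligible. All names and parameters here (`grow_network`, block width, tolerance) are illustrative choices, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: 200 samples, 5 input features.
X = rng.normal(size=(200, 5))
y = np.sin(X.sum(axis=1, keepdims=True))


def grow_network(X, y, max_blocks=5, width=16, tol=1e-6):
    """Grow a network one hidden block at a time.

    Each iteration appends a new random tanh block to the current
    (frozen) feature representation, then re-solves only the linear
    output layer by least squares. Because the feature set only ever
    expands, the training error is non-increasing; growth stops when
    the improvement drops below `tol`.
    """
    feats = X
    errors = []
    for _ in range(max_blocks):
        # New block: random projection of the frozen representation.
        W = rng.normal(size=(feats.shape[1], width)) / np.sqrt(feats.shape[1])
        feats = np.hstack([feats, np.tanh(feats @ W)])
        # Re-fit only the output layer on the expanded features.
        beta, *_ = np.linalg.lstsq(feats, y, rcond=None)
        errors.append(float(np.mean((feats @ beta - y) ** 2)))
        if len(errors) > 1 and errors[-2] - errors[-1] < tol:
            break
    return errors


errors = grow_network(X, y)
```

Each entry of `errors` is the training error after one growth step; because every step only enlarges the feature set available to the least-squares output layer, the sequence cannot increase, which is the basic appeal of this style of incremental topology construction.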

Title: Deep Learning for Robot Perception and Cognition
Editors: Alexandros Iosifidis, Anastasios Tefas
Publisher: Academic Press
ISBN (electronic): 9780323857871
ISBN (print): 9780323885720
DOI - permanent links
Status: Published - 2022
Ministry of Education publication type: A3 Book chapter or part of another compilation


  • JUFO level 2

ASJC Scopus subject areas

  • Computer Science (all)

