Variance-preserving Deep Metric Learning for Content-based Image Retrieval

Nikolaos Passalis, Alexandros Iosifidis, Moncef Gabbouj, Anastasios Tefas

Research output: Contribution to journal › Article › Scientific › peer-review

3 Citations (Scopus)


Supervised deep metric learning has led to spectacular results for several Content-based Information Retrieval (CBIR) applications. The success of these approaches slowly led to the belief that image retrieval and classification are just slightly different variations of the same problem. However, recent evidence suggests that learning a highly discriminative representation for a (limited) set of training classes removes valuable information from the representation, potentially harming both in-domain and out-of-domain retrieval precision. In this paper, we propose a regularized discriminative deep metric learning method that aims not only to learn a representation that allows for discriminating between different classes, but also to encode the latent generative factors separately for each class, overcoming this limitation. This allows for modeling the in-class variance and, as a result, maintaining the ability to represent both sub-classes of the in-domain data and objects that belong to classes outside the training domain. The effectiveness of the proposed method over existing supervised and unsupervised representation/metric learning approaches is demonstrated under different in-domain and out-of-domain setups on three challenging image datasets.
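The core idea described above, combining a discriminative metric learning objective with a regularizer that prevents the within-class embedding variance from collapsing, can be illustrated with a small sketch. This is a hedged approximation, not the paper's exact formulation: the discriminative term here is a simple center-pull loss, and `var_target` and `lam` are hypothetical hyperparameters introduced for illustration.

```python
import numpy as np

def variance_preserving_loss(embeddings, labels, var_target=1.0, lam=0.5):
    """Illustrative loss: a discriminative center-pull term plus a
    regularizer that keeps per-class embedding variance from collapsing.
    A sketch of the general idea, not the authors' exact method."""
    classes = np.unique(labels)
    total_disc, total_reg = 0.0, 0.0
    for c in classes:
        x = embeddings[labels == c]           # samples of class c
        center = x.mean(axis=0)
        # Discriminative term: pull samples toward their class center.
        total_disc += np.mean(np.sum((x - center) ** 2, axis=1))
        # Variance-preserving term: penalize within-class variance
        # falling below a target, so sub-class structure is not erased.
        var = np.mean(np.var(x, axis=0))
        total_reg += max(0.0, var_target - var) ** 2
    return (total_disc + lam * total_reg) / len(classes)
```

With the discriminative term alone, the optimum collapses every class to a single point; the hinge on the variance keeps a minimum spread per class, which is what lets the representation still separate sub-classes and out-of-domain objects.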
Original language: English
Pages (from-to): 8-14
Number of pages: 7
Journal: Pattern Recognition Letters
Early online date: Dec 2019
Publication status: Published - 2020
Publication type: A1 Journal article-refereed


Keywords

  • Metric Learning
  • Content-based Information Retrieval
  • Deep Learning

Publication forum classification

  • Publication forum level 2


