Supervised learning for Content-based Information Retrieval makes it possible to obtain discriminative representations that often excel within the training domain. However, recent evidence suggests that these representations can actually reduce retrieval precision for queries outside the training domain, compared with less discriminative representations. To avoid this behavior, we propose to learn discriminative representations that also encode the latent generative factors of each class. In this way, the proposed method maintains (part of) the in-class variance and, by better learning the structure of the input space, can represent data from classes not seen during training. The proposed method is evaluated under different in-domain and out-of-domain setups, significantly outperforming existing supervised and unsupervised representation learning approaches.
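The abstract does not spell out the training objective, but the core idea of pairing a discriminative loss with a term that preserves generative structure can be sketched as follows. Everything here is an illustrative assumption, not the paper's actual method: a linear encoder, decoder, and classifier, with a cross-entropy term for discrimination and a reconstruction term that forces the representation to retain in-class variance, mixed by a hypothetical weight `lam`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8 samples, 4 features, 2 classes (purely illustrative).
X = rng.normal(size=(8, 4))
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Hypothetical linear maps (not from the paper):
W_enc = rng.normal(size=(4, 3)) * 0.1  # input -> 3-d representation
W_dec = rng.normal(size=(3, 4)) * 0.1  # representation -> reconstruction
W_cls = rng.normal(size=(3, 2)) * 0.1  # representation -> class logits

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def combined_loss(X, y, lam=0.5):
    """Discriminative cross-entropy plus a reconstruction term that
    keeps (part of) the in-class variance in the representation."""
    Z = X @ W_enc                          # latent representation
    probs = softmax(Z @ W_cls)             # class posteriors
    ce = -np.log(probs[np.arange(len(y)), y]).mean()
    rec = ((Z @ W_dec - X) ** 2).mean()    # generative/reconstruction term
    return ce + lam * rec

loss = combined_loss(X, y)
print(loss)
```

Minimizing only the cross-entropy term would collapse in-class variation; the reconstruction term penalizes representations that discard the generative factors needed to rebuild the input, which is the trade-off the abstract describes.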
|Name||IEEE International Conference on Image Processing|
|Period||1/01/00 → …|