Abstract
In this paper, we describe a method for determining a subspace of the feature space in kernel methods that is suited to large-scale learning problems. Learning a linear model in the obtained space corresponds to learning a nonlinear model in the input space. Since the obtained feature space is determined only by exploiting properties of the training data, the approach can be used for generic nonlinear pattern recognition: nonlinear data mapping can be regarded as a pre-processing step that exploits nonlinear relationships among the training data. Linear techniques can subsequently be applied in the new feature space and can thus model nonlinear properties of the problem at hand. To address the time and memory complexities inherent to kernel learning methods, we follow an approximate learning approach. We show that the method can lead to considerable gains in operation speed while achieving very good performance. Experimental results verify our analysis.
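The abstract's core idea, that an approximate, explicit feature subspace can be computed from the training data so that linear models in it behave like kernel models, can be illustrated with a standard Nyström feature map. This is a generic sketch, not the authors' implementation: the RBF kernel choice, the landmark selection, and all function names here are assumptions made for illustration.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel between rows of A and rows of B.
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def nystrom_features(X, landmarks, gamma=1.0, eps=1e-10):
    # Explicit low-dimensional features whose inner products
    # approximate the full kernel matrix (Nystrom approximation).
    C = rbf_kernel(X, landmarks, gamma)          # n x m block
    W = rbf_kernel(landmarks, landmarks, gamma)  # m x m block
    # Eigendecompose only the small m x m block.
    vals, vecs = np.linalg.eigh(W)
    keep = vals > eps                            # drop near-null directions
    M = vecs[:, keep] / np.sqrt(vals[keep])      # m x r mapping matrix
    return C @ M                                 # n x r explicit features

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
# Hypothetical landmark choice: the first 10 training points.
Z = nystrom_features(X, X[:10])
# A linear model trained on Z approximates a kernel model on X,
# since Z @ Z.T approximates the full 100 x 100 kernel matrix.
```

Because only the small landmark block is eigendecomposed, the cost scales with the number of landmarks rather than the number of training samples, which is the kind of time and memory saving the abstract refers to.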
| Original language | English |
|---|---|
| Pages (from-to) | 190-197 |
| Number of pages | 8 |
| Journal | Pattern Recognition |
| DOIs | |
| Publication status | Published - Sept 2016 |
| Publication type | A1 Journal article-refereed |
Keywords
- Kernel methods
- Nonlinear pattern recognition
- Nonlinear projection trick
- Nyström approximation
Publication forum classification
- Publication forum level 3
ASJC Scopus subject areas
- Software
- Artificial Intelligence
- Computer Vision and Pattern Recognition
- Signal Processing