Regularized Extreme Learning Machine for large-scale media content analysis

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

    6 Citations (Scopus)

    Abstract

    In this paper, we propose a new regularization approach for Extreme Learning Machine-based training of Single-hidden Layer Feedforward Neural networks. We show that the proposed regularizer weights the dimensions of the ELM space according to the importance of the network's hidden-layer weights, without imposing additional computational or memory costs on the network learning process. This enhances the network's performance and makes the proposed approach suitable for learning nonlinear decision surfaces in large-scale classification problems. We test our approach on medium- and large-scale face recognition problems, where it outperforms the existing regularized Extreme Learning Machine classifier in both constrained and unconstrained settings, making it applicable to demanding media analysis applications such as those appearing in digital cinema production.
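The abstract describes training a Single-hidden Layer Feedforward Network in the ELM fashion, with a regularizer that weights the dimensions of the ELM (hidden-layer output) space. As a minimal sketch of the general idea, the snippet below implements standard regularized ELM training with an optional diagonal per-dimension weighting matrix; the specific weighting scheme (`dim_weights`) is a hypothetical stand-in, since the paper's exact regularizer is not given here.

```python
import numpy as np

def elm_train(X, T, n_hidden=200, lam=1e-2, dim_weights=None, seed=0):
    """Train a single-hidden-layer feedforward network ELM-style.

    Hidden-layer weights are drawn at random and kept fixed; only the
    output weights are learned, via regularized least squares.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # mapping to the ELM space

    # Standard regularized ELM uses lam * I; a diagonal matrix D weighting
    # each ELM-space dimension is a hypothetical stand-in for a
    # dimension-importance regularizer.
    D = np.eye(n_hidden) if dim_weights is None else np.diag(dim_weights)

    # Output weights: solve (H^T H + lam * D) beta = H^T T
    beta = np.linalg.solve(H.T @ H + lam * D, H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Map inputs to the ELM space and apply the learned output weights."""
    return np.tanh(X @ W + b) @ beta
```

Note that the regularized solve adds essentially no cost over the unregularized case, which matches the abstract's claim that the regularizer imposes no additional computational or memory burden.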

    Original language: English
    Title of host publication: Procedia Computer Science
    Publisher: Elsevier
    Pages: 420-427
    Number of pages: 8
    Volume: 53
    Edition: 1
    DOIs
    Publication status: Published - 2015
    Publication type: A4 Article in conference proceedings
    Event: INNS Conference on Big Data 2015 - San Francisco, United States
    Duration: 8 Aug 2015 – 10 Aug 2015

    Conference

    Conference: INNS Conference on Big Data 2015
    Country/Territory: United States
    City: San Francisco
    Period: 8/08/15 – 10/08/15

    Keywords

    • Extreme Learning Machine
    • Face recognition
    • Large-scale learning
    • Regularization

    ASJC Scopus subject areas

    • General Computer Science
