Sparse hidden units activation in Restricted Boltzmann Machine

Jakub M. Tomczak, Adam Gonczarek

Research output: Chapter in Book / Report / Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Sparsity has been a concept of interest in machine learning for many years. In deep learning, sparse solutions play a crucial role in obtaining robust and discriminative features. In this paper, we study a new regularization term for sparse hidden units activation in the context of the Restricted Boltzmann Machine (RBM). Our proposal is based on the symmetric Kullback-Leibler divergence, applied to compare the actual and the desired distributions over the active hidden units. We compare our method against two other sparsity-enforcing regularization terms by evaluating the empirical classification error on two datasets: (i) image classification (MNIST) and (ii) document classification (20 Newsgroups).
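The paper itself specifies the exact form of the regularizer. As a rough illustration only, the sketch below shows one plausible per-unit Bernoulli instantiation of a symmetric Kullback-Leibler sparsity penalty in the spirit of the abstract; the names `hidden_probs`, `target`, and the default values are assumptions, not the authors' notation, and the paper's term over the active hidden units may take a different form.

```python
import numpy as np

def symmetric_kl_sparsity_penalty(hidden_probs, target=0.1, eps=1e-8):
    """Symmetric KL penalty pushing mean hidden activations toward `target`.

    hidden_probs: (batch, n_hidden) array of P(h_j = 1 | v) from the RBM.
    target: desired mean activation level; a hypothetical default, not from the paper.
    """
    # Mean activation of each hidden unit over the mini-batch,
    # clipped away from {0, 1} to keep the logarithms finite.
    q = np.clip(hidden_probs.mean(axis=0), eps, 1.0 - eps)
    p = target
    # KL(p || q) + KL(q || p) between Bernoulli distributions, per hidden unit.
    kl_pq = p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))
    kl_qp = q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))
    return np.sum(kl_pq + kl_qp)

# Usage sketch: the penalty would be added, with a small weight, to the
# RBM training objective alongside the (approximate) likelihood gradient.
rng = np.random.default_rng(0)
hidden_probs = rng.uniform(size=(64, 100))  # stand-in for sigmoid(v @ W + c)
loss_term = 0.01 * symmetric_kl_sparsity_penalty(hidden_probs, target=0.1)
```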

Original language: English
Title of host publication: Progress in Systems Engineering - Proceedings of the 23rd International Conference on Systems Engineering
Publisher: Springer Verlag
Pages: 181-185
Number of pages: 5
ISBN (Print): 9783319084213
DOIs
Publication status: Published - 1 Jan 2015
Externally published: Yes
Event: 23rd International Conference on Systems Engineering, ICSEng 2014 - Las Vegas, NV, United States
Duration: 19 Aug 2014 - 21 Aug 2014

Publication series

Name: Advances in Intelligent Systems and Computing
Volume: 1089
ISSN (Print): 2194-5357

Conference

Conference: 23rd International Conference on Systems Engineering, ICSEng 2014
Country: United States
City: Las Vegas, NV
Period: 19/08/14 - 21/08/14

Keywords

  • Deep learning
  • sparse solution
  • symmetric Kullback-Leibler divergence
