Learning Through Non-linearly Supervised Dimensionality Reduction

Authors: Josif Grabocka and Lars Schmidt-Thieme

Volume 17 (2015)

Abstract

Dimensionality reduction is a crucial ingredient of machine learning and data mining, boosting classification accuracy through the isolation of patterns via omission of noise. Nevertheless, recent studies have shown that dimensionality reduction can benefit from label information, via a joint estimation of predictors and target variables from a low-rank representation. In the light of such inspiration, we propose a novel dimensionality reduction method which simultaneously reconstructs the predictors using matrix factorization and estimates the target variable via a dual-form maximum-margin classifier from the latent space. Compared to existing studies, which conduct the decomposition via linear supervision of the targets, our method reconstructs the labels using nonlinear functions. If the hyper-plane separating the class regions in the original data space is non-linear, then a nonlinear dimensionality reduction helps improve generalization over the test instances. The joint optimization function is learned through a coordinate descent algorithm via stochastic updates. Empirical results demonstrate the superiority of the proposed method over classification in the original space (no reduction), classification after unsupervised reduction, and classification using linearly supervised projection.
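The joint estimation described above can be sketched as a single optimization problem. The formulation below is our own reconstruction under assumed notation (the paper's exact objective may differ): X is factorized as UVᵀ, while the labels are fit by a dual-form kernel machine acting on the latent rows u_i, so that supervision of the targets is nonlinear in the latent space.

```latex
% Hypothetical sketch of the joint objective (notation assumed, not the paper's):
% X \in R^{n \times m}: predictors, y_i \in \{-1,+1\}: labels,
% U \in R^{n \times d}, V \in R^{m \times d}: low-rank factors with d \ll m.
\min_{U,\, V,\, \alpha} \;
  \underbrace{\lVert X - U V^{\top} \rVert_F^2}_{\text{reconstruction}}
  \;+\; \lambda \left( \lVert U \rVert_F^2 + \lVert V \rVert_F^2 \right)
  \;+\; C \sum_{i=1}^{n} \max\!\left( 0,\; 1 - y_i \, f(u_i) \right),
\qquad
f(u_i) = \sum_{j=1}^{n} \alpha_j \, y_j \, K(u_j, u_i),
```

where K is a nonlinear kernel, e.g. the RBF kernel K(u, u') = exp(−γ‖u − u'‖²). Under this reading, the coordinate descent algorithm mentioned in the abstract would cycle through stochastic gradient updates of U, V, and the dual weights α until the reconstruction and classification losses jointly converge.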