Maxime Vono

Welcome

I have been a Ph.D. student since October 2017 in the beautiful city of Toulouse, France. I work under the supervision of Pierre Chainais and Nicolas Dobigeon, within the SC group of the IRIT laboratory.

Prior to that, I graduated from Ecole Centrale de Lille, majoring in data science. I also hold an M.Sc. degree in applied mathematics from the University of Lille.

My research interests lie in new methods and algorithms for solving challenging Bayesian inference problems encountered in machine learning & signal processing. In particular, I am interested in designing more efficient Markov chain Monte Carlo (MCMC) algorithms that scale to high-dimensional and/or big-data settings by exploiting the connections between optimization and simulation-based methods.

To this end, I recently proposed a novel Bayesian scheme inspired by variable splitting, an efficient tool used to solve challenging optimization problems. In a nutshell, the derived Bayesian hierarchical model is an arbitrarily close approximation of the initial one, leading to an inference that can be performed iteratively, efficiently and possibly in distributed settings; a schematic illustration is given below. For more details, see our paper.
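Schematically (in generic notation that I use here purely for illustration, with potentials f_1 and f_2 standing, for instance, for the negative log-likelihood and the negative log-prior), variable splitting replaces the initial target distribution by an augmented one whose marginal in x becomes exact as a tolerance parameter rho goes to zero:

    % Initial target distribution over the parameter x:
    \pi(x) \propto \exp\big(-f_1(x) - f_2(x)\big)
    % Split-and-augmented surrogate, with splitting variable z and tolerance \rho > 0:
    \pi_\rho(x, z) \propto \exp\Big(-f_1(x) - f_2(z) - \tfrac{1}{2\rho^2}\,\|x - z\|^2\Big)

A Gibbs sampler then alternates between the conditional distributions of x and z, each of which involves a single potential; this is what makes the individual steps simpler and amenable to distributed implementations.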

News/Events:
Starting Sept. 2018: Data science/analytics consultancy missions for a large retailer.
Sept. 17-20, 2018: I will present our paper in the sparse learning session of the MLSP'18 conference.
April 2018: Submitted a paper on the split-and-augmented Gibbs sampler (a.k.a. ADMM-inspired sampling).

Research

Preprints

  1. M. Vono, N. Dobigeon, and P. Chainais, “Split-and-augmented Gibbs sampler - Application to large-scale inference problems,” submitted, 2018.

    Recently, a new class of Markov chain Monte Carlo (MCMC) algorithms has taken advantage of convex optimization to build efficient and fast sampling schemes for high-dimensional distributions. Variable splitting methods have become classical in optimization to divide difficult problems into simpler ones and have proven their efficiency in solving high-dimensional inference problems encountered in machine learning and signal processing. This paper derives two new optimization-driven sampling schemes inspired by variable splitting and data augmentation. In particular, the formulation of one of the proposed approaches is closely related to the main steps of the alternating direction method of multipliers (ADMM). The proposed framework makes it possible to derive sampling schemes that are faster and more efficient than current state-of-the-art methods, and that can embed the latter. By efficiently sampling the parameter to be inferred as well as the hyperparameters of the problem, the generated samples can be used to approximate maximum a posteriori (MAP) and minimum mean square error (MMSE) estimators. Additionally, the proposed approach provides credibility intervals at a low cost, contrary to optimization methods. Simulations on two often-studied signal processing problems illustrate the performance of the two proposed samplers. All results are compared to those obtained by recent state-of-the-art optimization and MCMC algorithms used to solve these problems.

            @preprint{Vono2018_sub,
              author  = {Vono, Maxime and Dobigeon, Nicolas and Chainais, Pierre},
              title   = {Split-and-augmented {G}ibbs sampler - {A}pplication to large-scale inference problems},
              year    = {2018},
              journal = {submitted},
              eprint  = {arXiv:1804.05809},
              arxiv   = {https://arxiv.org/abs/1804.05809}
            }
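
    To make this concrete, here is a minimal numerical sketch of the split Gibbs idea on a toy ridge-regression model, in which both conditionals are Gaussian and can be sampled exactly. The model, dimensions and hyperparameters below are illustrative assumptions of mine, not the experiments of the paper:

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy linear-Gaussian model (an illustrative assumption, not the paper's
        # experiments): y = A x + n, n ~ N(0, sigma^2 I), Gaussian prior on x.
        d, n = 20, 50
        A = rng.standard_normal((n, d))
        x_true = rng.standard_normal(d)
        sigma, lam, rho = 0.5, 1.0, 0.1
        y = A @ x_true + sigma * rng.standard_normal(n)

        def sample_gaussian(Q, b, rng):
            """Draw one sample from N(Q^{-1} b, Q^{-1}) via Cholesky of the precision Q."""
            L = np.linalg.cholesky(Q)
            mean = np.linalg.solve(Q, b)
            return mean + np.linalg.solve(L.T, rng.standard_normal(len(b)))

        # Split-and-augmented Gibbs: alternate between the two Gaussian conditionals
        # of pi_rho(x, z) ~ exp(-||y - A x||^2/(2 sigma^2) - lam/2 ||z||^2
        #                       - ||x - z||^2/(2 rho^2)).
        Q_x = A.T @ A / sigma**2 + np.eye(d) / rho**2   # precision of x | z, y
        prec_z = lam + 1.0 / rho**2                     # scalar precision of z | x
        z = np.zeros(d)
        xs = []
        for _ in range(2000):
            x = sample_gaussian(Q_x, A.T @ y / sigma**2 + z / rho**2, rng)
            z = (x / rho**2) / prec_z + rng.standard_normal(d) / np.sqrt(prec_z)
            xs.append(x)

        x_mmse = np.mean(xs[500:], axis=0)              # MMSE estimate after burn-in

    Note how each conditional involves a single potential: the likelihood term only enters the update of x and the prior term only enters the update of z, which is what keeps each step simple (here, Gaussian) and potentially distributable.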
          

Conference Articles

  1. M. Vono, N. Dobigeon, and P. Chainais, “Sparse Bayesian binary logistic regression using the split-and-augmented Gibbs sampler,” in IEEE International Workshop on Machine Learning for Signal Processing (MLSP), Aalborg, Denmark, 2018.
    Finalist for the Best Student Paper Award.

    Logistic regression has been extensively used to perform classification in machine learning and signal/image processing. Bayesian formulations of this model with sparsity-inducing priors are particularly relevant when one is interested in drawing credibility intervals with few active coefficients. Along these lines, the derivation of efficient simulation-based methods is still an active research area because of the analytically challenging form of the binomial likelihood. This paper tackles the sparse Bayesian binary logistic regression problem by relying on the recent split-and-augmented Gibbs sampler (SPA). Contrary to usual data augmentation strategies, this Markov chain Monte Carlo (MCMC) algorithm scales to high dimensions and divides the initial sampling problem into simpler ones. These sampling steps are then addressed with efficient state-of-the-art methods, namely proximal MCMC algorithms that can benefit from the recent closed-form expression of the proximal operator of the logistic cost function. SPA appears to be faster than efficient proximal MCMC algorithms and presents a reasonable computational cost compared to optimization-based methods, with the advantage of producing credibility intervals. Experiments on handwritten digit classification problems illustrate the performance of the proposed approach.

            @inproceedings{Vono_MLSP18,
              author    = {Vono, Maxime and Dobigeon, Nicolas and Chainais, Pierre},
              title     = {Sparse {B}ayesian binary logistic regression using the split-and-augmented {G}ibbs sampler},
              year      = {2018},
              booktitle = {IEEE International Workshop on Machine Learning for Signal Processing (MLSP)},
              address   = {Aalborg, Denmark}
            }
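
    To illustrate the proximal MCMC building blocks mentioned in the abstract, the sketch below implements MYULA (the Moreau-Yosida regularized unadjusted Langevin algorithm), a standard proximal Langevin scheme, on a toy sparse logistic regression problem. This is not the SPA sampler itself, and it handles the nonsmooth l1 prior through soft thresholding rather than through the proximal operator of the logistic cost; data sizes and hyperparameters are assumptions for illustration:

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy sparse logistic regression data, labels in {-1, +1}; sizes, prior
        # weight and step sizes below are assumptions for illustration.
        n, d = 200, 30
        A = rng.standard_normal((n, d))
        x_true = np.zeros(d)
        x_true[:3] = [2.0, -1.5, 1.0]
        y = np.where(rng.random(n) < 1.0 / (1.0 + np.exp(-(A @ x_true))), 1.0, -1.0)

        reg = 1.0      # weight of the l1 (Laplace) prior
        lam = 0.1      # Moreau-Yosida smoothing parameter
        gamma = 1e-3   # Langevin step size

        def grad_f(x):
            """Gradient of the logistic negative log-likelihood sum_i log(1 + e^{-y_i a_i.x})."""
            c = y / (1.0 + np.exp(y * (A @ x)))
            return -(A * c[:, None]).sum(axis=0)

        def prox_l1(x, t):
            """Proximal operator of t * ||.||_1 (soft thresholding)."""
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        # MYULA: Langevin steps on the smooth likelihood plus the Moreau-Yosida
        # envelope of the nonsmooth l1 prior.
        x = np.zeros(d)
        chain = []
        for _ in range(5000):
            drift = grad_f(x) + (x - prox_l1(x, lam * reg)) / lam
            x = x - gamma * drift + np.sqrt(2.0 * gamma) * rng.standard_normal(d)
            chain.append(x)

        x_mmse = np.mean(chain[1000:], axis=0)   # posterior mean estimate after burn-in

    The drift combines the exact gradient of the smooth logistic likelihood with the gradient of the Moreau-Yosida envelope of the l1 term, (x - prox(x)) / lam, so no subgradient of the nonsmooth prior is ever needed.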
          

Talks

  1. May 2018 - Invited talk organized by the SigMA team (CRIStAL laboratory, Lille, France)
    Split-and-augmented Gibbs sampler - A divide & conquer approach to solve large-scale inference problems


Presentations

  1. Sept. 2018 - Poster presentation at the Optimization and Learning workshop (Toulouse, France).
    Split-and-augmented Gibbs sampler - A divide & conquer approach to solve large-scale inference problems
  2. July 2018 - Poster presentation at the LMS/CRiSM summer school on computational statistics (Univ. Warwick, UK).
    Split-and-augmented Gibbs sampler - A divide & conquer approach to solve large-scale inference problems
  3. July 2018 - Poster presentation at the BNPSI 2018 workshop (Bordeaux, France).
    Split-and-augmented Gibbs sampler - A divide & conquer approach to solve large-scale inference problems

Consulting


Starting from September 2018, I will work as a data science/analytics consultant for the Marketing & Strategy department of an international supermarket chain, Intermarché Alimentaire International (ITM AI). These consultancy missions will be carried out in parallel with my Ph.D. studies and will be an opportunity to apply my previous experience, knowledge and work to concrete and important issues for retailers, namely sales forecasting, pricing strategy and promotional events.

Some of these issues were tackled in my previous internships and in my Master's thesis (in applied mathematics), where I was particularly interested in optimal pricing policies for clearance and/or promotional events. To this end, I did my Master's thesis in partnership with the Pricing department of a French home-improvement and gardening retailer, Leroy Merlin France (LMF), working on pricing policy optimization for clearance events. The optimal pricing strategy I derived during this work met the requirements and constraints of the Pricing department of LMF (limited number of price changes, modeling of the uncertainty in the buying process, etc.), was tested on their past transactional data and was later proposed to some French brick-and-mortar stores. A toy sketch of this kind of problem is given below.
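
For readers curious about what such a clearance-pricing problem can look like, here is a toy dynamic-programming sketch, assumed purely for illustration (it is not the model developed for LMF): a finite horizon, a descending price ladder, a cap on the number of price changes, and Poisson demand whose rate decreases with price:

    import math
    from functools import lru_cache

    # Toy clearance-pricing model (assumed for illustration; not the LMF engine):
    # T periods, a descending price ladder, at most K price changes, and Poisson
    # demand whose rate decreases with price.
    T, K, S0 = 8, 2, 30
    prices = [50.0, 40.0, 30.0, 20.0]

    def rate(p):
        """Assumed demand model: mean demand per period at price p."""
        return 6.0 * math.exp(-0.04 * p)

    def expected_sales(lam, s):
        """E[min(D, s)] for D ~ Poisson(lam): expected units sold with stock s."""
        e, tail, pk = 0.0, 1.0, math.exp(-lam)
        for k in range(s):
            e += k * pk
            tail -= pk
            pk *= lam / (k + 1)
        return e + s * tail

    @lru_cache(maxsize=None)
    def V(t, s, i, k):
        """Max expected revenue from period t on, with stock s, current price
        index i and k price changes left."""
        if t == T or s == 0:
            return 0.0
        best = 0.0
        for j in range(i, len(prices)):          # prices may only go down
            if j != i and k == 0:
                continue                         # no price changes left
            sold = expected_sales(rate(prices[j]), s)
            # Certainty-equivalent simplification: follow the expected stock path
            # instead of the full stochastic transition, to keep the sketch short.
            s_next = max(s - int(round(sold)), 0)
            best = max(best, prices[j] * sold + V(t + 1, s_next, j, k - (j != i)))
        return best

    print(V(0, S0, 0, K))   # optimal expected clearance revenue from full stock

A production model would keep the full stochastic stock transition (collapsed here to its expectation to keep the sketch short) and calibrate the demand model on past transactional data.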

CV


2017 - present Ph.D. - Machine Learning & Signal Processing
University of Toulouse, France

Optimization-driven Markov chain Monte Carlo algorithms

2016 - 2017 M.Sc. - Applied Mathematics (Probability & Statistics)
University of Lille, France

Obtained with honors

2013 - 2017 M.Sc. - Engineering (Data Science)
Ecole Centrale de Lille, France

Including a gap year
Head of the class (rank: 1st)

Find me


Mailing address

INP - ENSEEIHT Toulouse
2, rue Charles Camichel
B.P. 7122
31071 Toulouse Cedex 7
France




© 2017. All rights reserved.

Powered by a customized version of Hydejack v7.5.1