Category: random tensors

Our chair on large random tensors was selected for ANITI 2.0!

I’m excited to announce that our chair, titled “LArge Tensors for daTa analysIs and maChine lEarning” (LATTICE), was selected for the second edition of ANITI, the AI institute of Toulouse. It will thus be part of ANITI’s proposal to set up an AI cluster in Toulouse, in response to the currently open ANR call.

This chair, held by myself (as PI) and Xiaoyi Mai of IMT (as co-chair), is devoted to the study of large random tensor models, their estimation, and their application to (unsupervised) machine learning. An abstract is available on ANITI’s website, along with a list of all the other selected chairs.

More info on that will be available later this year.

Internship on the analysis of tensor-based methods for machine learning

We are looking for strongly motivated M2 students with a solid background in probability, statistics, and optimization for a (5- or) 6-month internship in 2023, focused on the analysis of tensor decomposition methods for machine learning in the large-data regime. The student will be co-supervised by myself and Xiaoyi Mai, Assistant Professor at the University of Toulouse and member of the Institut de Mathématiques de Toulouse (IMT).

A detailed description can be found here.

Paper “A Random Matrix Perspective on Random Tensors” accepted by JMLR

We are pleased to announce that our paper proposing the use of random matrix theory (RMT) tools to study random tensor models has been accepted by JMLR!

The central idea in our approach is to study the spectra of matrices arising from contractions of a random tensor model with well-chosen directions. In particular, in the JMLR paper we leverage this idea to derive a novel characterization of the performance of maximum likelihood estimation for a symmetric rank-one Gaussian tensor model. Although previous studies using tools from spin glass theory had already established a precise characterization of the MLE for this model, our approach is more elementary and accessible, and we believe it will more easily lend itself to the study of other, more structured and more general tensor models.

As a matter of fact, Mohamed Seddik, Romain Couillet and Maxime Guillaud have already extended it to rank-one asymmetric Gaussian models, obtaining novel results on MLE for these models and on the spectrum of contractions of the associated ensemble (see https://arxiv.org/abs/2112.12348).

Check out our preprint on arXiv for more details: https://arxiv.org/pdf/2108.00774.pdf

New preprint on random tensors

We have just posted a new preprint on arXiv, called “A Random Matrix Perspective on Random Tensors,” which concerns large-dimensional random tensor models. Our main contribution is methodological: we show that such models can be studied by borrowing tools from random matrix theory. The key idea is to consider contractions of a given random tensor, which effectively give rise to random matrices whose spectra can be related back to spectral properties of the tensor itself. In particular, in the case of a symmetric rank-one Gaussian model, this allows us to derive precise predictions regarding the performance of the maximum likelihood estimator of the “planted” vector (also known as the spike), which had hitherto been achieved only via statistical physics methods. This viewpoint also brings interesting insights into the landscape of the maximum likelihood problem.

Our approach is quite versatile and can be extended to asymmetric, non-Gaussian and higher-order models. This is a collaboration with Romain Couillet and Pierre Comon from Gipsa-lab. Check out the paper for more details!
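To give a concrete feel for the contraction idea, here is a minimal numerical sketch (not taken from the paper; the dimension, signal-to-noise ratio, and scaling conventions below are illustrative assumptions): we draw a symmetric rank-one spiked Gaussian tensor, contract it with the planted direction, and observe that the resulting symmetric matrix has a top eigenvalue detached from the noise bulk.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 300, 3.0  # dimension and signal-to-noise ratio (illustrative choices)

# Planted unit-norm vector (the "spike")
x = rng.standard_normal(n)
x /= np.linalg.norm(x)

# Symmetric Gaussian noise tensor: average a Gaussian tensor over
# all 6 axis permutations, then apply a 1/sqrt(n) scaling
W = rng.standard_normal((n, n, n))
W = (W + W.transpose(1, 0, 2) + W.transpose(2, 1, 0)
       + W.transpose(0, 2, 1) + W.transpose(1, 2, 0)
       + W.transpose(2, 0, 1)) / 6
T = beta * np.einsum('i,j,k->ijk', x, x, x) + W / np.sqrt(n)

# Contract the order-3 tensor with the direction u = x: M = T(x, ., .)
# The signal part contributes beta * x x^T, a rank-one perturbation
M = np.einsum('ijk,i->jk', T, x)

# M is a symmetric random matrix; its largest eigenvalue separates
# from the semicircle-like bulk generated by the noise
eigvals = np.linalg.eigvalsh(M)
print(eigvals[-1])
```

Contracting with the planted direction is of course not available in practice (the spike is unknown); the point of the sketch is only that contractions turn questions about the tensor into spectral questions about random matrices.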