
Yann Ollivier

Professional Web Page

The reasonable man adapts himself to the world; the unreasonable man persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man.

G. B. Shaw

Photo credit: J.-Ph. Jover.
I am a CNRS “Chargé de recherche” (research scientist) in the TAO team (machine learning, optimization, evolutionary computing) of the computer science department at Paris-Sud University (Orsay, France). If this page is up-to-date, I'm 35. Curriculum vitæ.

I am currently working on applications of probability theory and information theory to artificial intelligence and machine learning. I have also been working in other areas of mathematics including Markov chains, (discrete) Ricci curvature, concentration of measure, random groups, hyperbolic groups, and general relativity.

You may also visit my personal Web page. In particular, it contains non-professional, wide-audience mathematical texts and mathematical programs.

Enseignants-chercheurs : vrai ou faux ? A personal text (in French) that does not represent the views of my employers.

To reach me: contact (domain:) yann-ollivier.org

Scientific publications by topic

Years given in parentheses indicate when a text was written. For published texts, the year without parentheses is the official publication year (i.e. the year of actual printing on paper).

Artificial intelligence, learning, optimization

(2014)
pdf (399K)
Preprint.
Auto-encoders aim to build a more compact representation of a dataset by constructing maps $X\stackrel{f}{\to} Y\stackrel{g}{\to} X$ from data space to a smaller "feature space" and back, with small reconstruction error. We discuss the similarities and differences between training an auto-encoder to minimize the reconstruction error, and training the same auto-encoder to actually compress the data.
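The reconstruction-error objective can be made concrete in a few lines. The following is a hedged toy sketch, not the paper's setting: a plain linear auto-encoder on synthetic Gaussian data, trained by gradient descent on the mean squared reconstruction error.

```python
import numpy as np

# Linear auto-encoder: encoder f(x) = x @ W_enc maps R^10 to a 3-dim
# feature space, decoder g(y) = y @ W_dec maps back; both trained by
# gradient descent on the mean squared reconstruction error.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))           # synthetic dataset in R^10
W_enc = 0.1 * rng.normal(size=(10, 3))   # encoder weights (f)
W_dec = 0.1 * rng.normal(size=(3, 10))   # decoder weights (g)

def reconstruction_error():
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

initial = reconstruction_error()
lr = 0.01
for _ in range(2000):
    Y = X @ W_enc                        # encode
    err = Y @ W_dec - X                  # reconstruction residual
    grad_dec = Y.T @ err / len(X)        # gradient w.r.t. decoder
    grad_enc = X.T @ (err @ W_dec.T) / len(X)  # gradient w.r.t. encoder
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final = reconstruction_error()           # well below the initial error
```

For this linear toy model, minimizing the reconstruction error recovers (a basis of) the top principal subspace of the data; the paper's point is that this objective differs from actually compressing the data.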

(2013)
pdf (1101K)
Preprint, submitted.
Recurrent neural networks, a powerful probabilistic model for sequential data, are notoriously hard to train. We propose a training method based on a metric gradient ascent inspired by Riemannian geometry. The metric is built to achieve invariance with respect to changes in parametrization, at a low algorithmic cost. We also introduce persistent contextual neural networks (PCNNs), a variation on the model architecture. On synthetic data, this model is able to learn difficult structures, such as block nesting or long-term dependencies, from only a few training examples.
Code (tar.gz) used for the experiments.

(2013)
pdf (757K)
Preprint, submitted.
We describe four algorithms for neural network training, each adapted to different scalability constraints. These algorithms are mathematically principled and invariant under a number of transformations of the data and network representation, so that performance is independent of such transformations. They are derived from the setting of differential geometry, and are based either on the natural gradient using the Fisher information matrix, or on Hessian methods, scaled down in a specific way to allow for scalability while keeping some of their key mathematical properties.
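The natural-gradient idea underlying these algorithms can be illustrated on a one-parameter toy model. This is a hedged sketch, not one of the four algorithms: a Bernoulli(p) model, whose Fisher information is the scalar F(p) = 1/(p(1-p)).

```python
import numpy as np

# Bernoulli(p) model fitted by natural-gradient ascent on the mean
# log-likelihood. Dividing the ordinary gradient by the Fisher
# information F(p) = 1 / (p (1 - p)) turns the update into the
# parametrization-invariant step p += lr * (mean(data) - p).
rng = np.random.default_rng(2)
data = (rng.random(1000) < 0.8).astype(float)  # samples from Bernoulli(0.8)

p, lr = 0.5, 0.1
for _ in range(100):
    grad = np.mean(data / p - (1 - data) / (1 - p))  # d/dp of mean log-lik
    fisher = 1.0 / (p * (1 - p))
    p += lr * grad / fisher                          # natural-gradient step
```

The resulting update is the same whatever smooth reparametrization of the Bernoulli family one starts from, which is the invariance property mentioned above; ordinary gradient ascent does not have this property.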

(2012)
pdf (2597K)
Preprint, submitted.
When using deep, multi-layered architectures to build generative models of data, it is difficult to train all layers at once. We propose a layer-wise training procedure admitting a performance guarantee compared to the global optimum. We interpret auto-encoders as generative models in this setting. Both theory and experiments highlight the importance, for deep architectures, of using an inference model (from data to hidden variables) richer than the generative model (from hidden variables to data).

2013 (2012)
pdf (311K)
FOGA (Foundations of Genetic Algorithms) XII, 2013.
Improvement guarantees for optimization algorithms often require assuming infinitesimal step sizes. We prove that for IGO algorithms, a class of optimization algorithms coming from information geometry, such improvement occurs even with non-infinitesimal steps, with a maximal step size independent of the function to be optimized.

(2011)
pdf (1183K)
Preprint, submitted. Updated 2013/06/29.
The information-geometric optimization (IGO) method is a canonical way to turn any smooth parametric family of probability distributions on an arbitrary, discrete or continuous search space X into a continuous-time black-box optimization method on X. It is defined using the Fisher metric from information geometry, to achieve maximal invariance properties under various reparametrizations. When applied to specific families of distributions, it naturally recovers some known algorithms (such as CMA-ES from Gaussians). Theoretical considerations suggest that IGO achieves minimal diversity loss through optimization. First experiments using restricted Boltzmann machines show that IGO may be able to spontaneously perform multimodal optimization.
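The flavor of an IGO update can be sketched with the simplest Gaussian family (fixed covariance), where the natural-gradient step on the mean reduces to a weighted average of sampled points, with rank-based weights giving invariance under monotone transformations of the objective. This is a hedged toy sketch under these simplifying assumptions, not the paper's full algorithm; the population size, step size and weights are ad hoc choices.

```python
import numpy as np

# IGO-flavored minimization with a fixed-covariance Gaussian family.
# Raw objective values are replaced by rank-based weights (invariance
# under monotone transformations of f), and the natural-gradient
# update of the Gaussian mean reduces to a weighted average of samples.
rng = np.random.default_rng(1)

def f(x):                    # objective to minimize (sphere function)
    return np.sum(x ** 2)

dim, pop, sigma, lr = 5, 20, 0.3, 0.7
mean = 3.0 * rng.normal(size=dim)

for _ in range(100):
    samples = mean + sigma * rng.normal(size=(pop, dim))
    values = [f(x) for x in samples]
    ranks = np.argsort(np.argsort(values))     # rank 0 = smallest f
    w = np.where(ranks < pop // 2, 1.0 / (pop // 2), 0.0)  # keep best half
    mean = (1 - lr) * mean + lr * np.sum(w[:, None] * samples, axis=0)

# mean is now close to the minimizer 0
```

Because only the ranks of the f-values enter the update, running this on f or on any increasing transformation of f (say exp(f)) produces exactly the same iterates.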

2007 (2006)
pdf (242K)
ps.gz
Proc. AAAI-07, Vancouver, Canada, July 2007, 1427--1433.
The goal is to find "related nodes" to a given node in a graph/Markov chain (e.g. a graph of Web pages). We propose the use of discrete Green functions, a standard tool from Markov chain theory. We test this method against more classical ones on the graph of Wikipedia. Accompanying Web site.
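A minimal sketch of the Green-function idea, on a toy graph rather than Wikipedia: the discrete Green function of the random walk is obtained in closed form from the standard fundamental matrix, and its row at a node ranks the other nodes by how much more the walk visits them, compared to the stationary measure, when started there. The 6-node graph and the ranking read-off are illustrative choices, not from the paper.

```python
import numpy as np

# Toy undirected graph: two triangles (0,1,2) and (3,4,5) joined by
# the edge 2-3. P is the simple-random-walk transition matrix.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], float)
P = A / A.sum(axis=1, keepdims=True)

# Stationary distribution (proportional to degree for this walk).
pi = A.sum(axis=1) / A.sum()

# Discrete Green function G = sum_t (P^t - 1 pi), computed in closed
# form via the fundamental matrix Z = (I - P + 1 pi)^{-1}: G = Z - 1 pi.
n = len(A)
Pi = np.outer(np.ones(n), pi)
G = np.linalg.inv(np.eye(n) - P + Pi) - Pi

# Nodes most related to node 0, by decreasing Green measure.
ranking = np.argsort(-G[0])
```

Each row of G sums to zero: positive entries mark nodes visited more than at stationarity when starting from the source (its "related nodes"), negative entries nodes visited less.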

2003 (2000)
pdf (211K)
ps.gz
Random Struct. Algor. 23 (2003), n° 1, 58--72.
We study the genetic dynamics of mating (crossover) operators in finite populations. We prove that the convergence to equilibrium is exponential, but that there is a bias, depending on population size, that does not eventually vanish.

Concentration, Ricci curvature, Markov chains

2013 (2011)
pdf (333K)
In Analysis and Geometry of Metric Measure Spaces: Lecture Notes of the 50th Séminaire de Mathématiques Supérieures (SMS), Montréal, 2011, Galia Dafni, Robert McCann, Alina Stancu, eds, AMS (2013).
We try to provide a visual introduction to some objects from Riemannian geometry: parallel transport, sectional curvature, Ricci curvature, Bianchi identities... We also present some of the existing generalizations of these notions to non-smooth or discrete spaces, insisting on Ricci curvature.

2012 (2010)
pdf (258K)
SIAM J. Discr. Math. 26 (2012), n°3, 983--996.
We compare two approaches to Ricci curvature on non-smooth spaces, in the case of the discrete hypercube $\{0,1\}^N$. Along the way we get new results of a combinatorial and probabilistic nature, including a curved Brunn--Minkowski inequality on the discrete hypercube.

2010 (2009)
pdf (550K)
ps.gz (470K)
Ann. Probab. 38 (2010), n°6, 2418--2442.
Under a discrete positive curvature assumption, we obtain explicit finite-time bounds for the convergence of empirical means in the Markov chain Monte Carlo method. This allows us to improve known bounds on several examples, such as the Ornstein-Uhlenbeck process, waiting queues, spin systems at high temperature, or Brownian motion on positively curved manifolds.

2010 (2008)
pdf (379K)
ps.gz (230K)
in Probabilistic approach to geometry, Adv. Stud. Pure Math. 57, Math. Soc. Japan (2010), 343--381.
This is a gentle introduction to the context and results of my article Ricci curvature of Markov chains on metric spaces. It begins with a description of classical Ricci curvature in Riemannian geometry, as well as a reminder for discrete Markov chains. A number of open problems are mentioned.

2009 (2007)
pdf (560K)
ps.gz (291K)
J. Funct. Anal. 256 (2009), n°3, 810--864.
Some of the results are announced in the note Ricci curvature of metric spaces, C.R. Math. Acad. Sci. Paris 345 (2007), n°11, 643--646.
We define the Ricci curvature of metric measure spaces as the amount by which small balls are closer (in transportation distance) than their centers are. This definition naturally extends to any Markov chain on a metric space. For a Riemannian manifold this gives back the value of Ricci curvature of a tangent vector. For example, the discrete cube is positively curved, as are processes with positive Ricci curvature in the Bakry-Émery sense. Positive Ricci curvature is shown to imply a spectral gap, a Lévy-Gromov Gaussian concentration theorem and a kind of logarithmic Sobolev inequality.
(Erratum: In theorem 49 (and only there), we need to assume that X is locally compact. This is omitted in the published version.)
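The definition can be checked by hand on tiny examples. The following hedged sketch computes $\kappa(x,y) = 1 - W_1(m_x, m_y)/d(x,y)$ for the simple random walk on two toy graphs, solving the transportation distance $W_1$ as a linear program; the graphs and walks are illustrative choices, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def w1(mu, nu, D):
    """L1 transportation distance between distributions mu and nu on a
    finite space with distance matrix D, solved as a linear program."""
    n = len(mu)
    A_eq = []
    for i in range(n):                     # row marginals: plan.sum(1) = mu
        row = np.zeros((n, n)); row[i, :] = 1
        A_eq.append(row.ravel())
    for j in range(n):                     # column marginals: plan.sum(0) = nu
        col = np.zeros((n, n)); col[:, j] = 1
        A_eq.append(col.ravel())
    res = linprog(D.ravel(), A_eq=np.array(A_eq),
                  b_eq=np.concatenate([mu, nu]), bounds=(0, None))
    return res.fun

def kappa(m_x, m_y, D, x, y):
    """Coarse Ricci curvature kappa(x,y) = 1 - W1(m_x, m_y) / d(x,y)."""
    return 1.0 - w1(m_x, m_y, D) / D[x, y]

# Complete graph K3, simple random walk: positively curved.
D3 = np.ones((3, 3)) - np.eye(3)
k_complete = kappa(np.array([0.0, 0.5, 0.5]),   # walk measure at node 0
                   np.array([0.5, 0.0, 0.5]),   # walk measure at node 1
                   D3, 0, 1)                    # kappa = 1/2

# Path graph (a piece of Z), simple random walk: flat in the middle.
D5 = np.abs(np.subtract.outer(np.arange(5), np.arange(5))).astype(float)
k_line = kappa(np.array([0.0, 0.5, 0.0, 0.5, 0.0]),  # walk at node 2
               np.array([0.0, 0.0, 0.5, 0.0, 0.5]),  # walk at node 3
               D5, 2, 3)                             # kappa = 0
```

On the complete graph the one-step measures overlap, so they are closer than their centers (positive curvature); on the line the measure at node 3 is just the measure at node 2 shifted by one, so $W_1 = d(2,3)$ and the curvature vanishes.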

(2004)
php
Notes (in French) from a working seminar I organized in 2004 on the various approaches to concentration of measure (geometric, analytic, probabilistic).

(1999)
pdf (268K)
ps.gz
Master's dissertation, advised by P. Pansu.
I explain some of the results of Gromov's "Chapter 3 1/2", about the observable diameter and concentration of measure on submanifolds of the sphere and on some projective complex algebraic varieties.

Random groups, geometric group theory

2005
pdf (1658K)
ps.gz (1130K)
Ensaios Matemáticos [Mathematical Surveys], 10. Sociedade Brasileira de Matemática, Rio de Janeiro, 2005.
A small book reviewing currently known facts and numerous open problems about random groups. It is aimed at readers with some basic knowledge of geometric group theory who want to discover the precise meaning of "random groups", and hopefully provides a roadmap for working on the subject.
January 2010 random groups updates

How to order a printed copy.

2011 (2005)
pdf (421K)
ps.gz (569K)
Trans. Amer. Math. Soc. 363 (2011), n°9, 4701--4733.
Random groups at density $d<1/5$ have a codimension-1 subgroup and thus do not have property $(T)$. Moreover at density $d<1/6$ they act freely and cocompactly on a CAT(0) cube complex and thus have the Haagerup property.

2007 (2004)
pdf
ps.gz
Internat. J. Algebra Comput. 17 (2007), n°1, 37--51.
We prove that random groups at density d satisfy an isoperimetric inequality with sharp constant $1-2d$. Also when $d<1/5$ the random presentation satisfies the Dehn algorithm, whereas it does not for $d>1/5$. We use a somewhat improved local-global criterion.

2007 (2004)
pdf (227K)
ps.gz (250K)
Trans. Amer. Math. Soc. 359 (2007), n°5, 1959--1976.
We prove that any countable group embeds in $Out(G)$ for some group G with property $(T)$ (this answers a question of Paulin). We also get Kazhdan groups which are not Hopfian, or not coHopfian. For this we use the graphical small cancellation technique of Gromov.

2005 (2004)
pdf
ps.gz
C.R. Math. Acad. Sci. Paris 341 (2005), n°3, 137--140.
Random quotients of hyperbolic groups with "harmful" torsion collapse at densities smaller than expected.

(2003)
pdf
ps.gz
Expository manuscript.
A simple proof of the equivalence between property $(T)$ and a uniform spectral gap for the random walk operator with values in unitary representations, and of the $\lambda_1>1/2$ criterion.

(2003)
pdf
ps.gz
Expository manuscript.
A description of the steps in Gromov's construction of a group whose Cayley graph contains (more or less isometrically) a family of expanders, with details on some technical points.

2006 (2003)
pdf (280K)
ps.gz
Comment. Math. Helv. 81 (2006), n° 3, 569--593.
The growth exponent of a generic group (in the density model) is arbitrarily close to that of the free group. This answers a question of Grigorchuk and de la Harpe.

2005 (2003)
pdf (309K)
ps.gz
Ann. Inst. Fourier (Grenoble) 55 (2005), n° 1, 289--317.
We show that the spectral gap of the Laplacian (or random walk operator) on a generic group is very probably almost as large as in a free group. Moreover this spectral gap is robust under random quotients of hyperbolic groups (in the density model).

2006 (2003)
pdf
ps.gz
Bull. Belg. Math. Soc. 13 (2006), n° 1, 75--89.
If a group presentation has for relators the words read on the cycles of a labelled graph, and if the labelling satisfies a generalized small cancellation condition, then the group is hyperbolic.

2004 (2002)
pdf (838K)
GAFA, Geom. Funct. Anal. 14 (2004), n° 3, 595--679.
A generalisation of Gromov's result that a random group is infinite and hyperbolic if the number of relators is below some critical value, and trivial above it: this remains true when taking a random quotient of a hyperbolic group. The critical value can be computed and depends on the properties of the random walk on the group.

2003 (2002)
Critical densities for random quotients of hyperbolic groups
C.R. Math. Acad. Sci. Paris 336 (2003), n° 5, 391--394.
Short paper announcing the results of Sharp phase transition theorems for hyperbolicity of random groups.

General relativity and statistical physics

(2008)
On the kinetic Fokker-Planck equation in Riemannian geometry
(Currently on hold)

2009 (2008)
pdf (327K)
Nonlinear Analysis: Theory, Methods & Applications 71 (2009), n°12, e199--e202.
This is a survey of our results on the large-scale effects of fluctuations in cosmology. It contains a synthesis of results from the texts below.

2009 (2008)
pdf
ps.gz
Physica A 388 (2009), 5029--5035.
Since general relativity is non-linear, fluctuations (e.g. gravitational waves or irregularities in matter density) around a given mean produce non-zero average effects. For example, we show that gravitational waves of currently undetectable amplitude and frequency could influence expansion of the universe roughly as much as the total matter content of the universe. This should be taken into account when considering dark matter/dark energy problems.

2005 (2004)
pdf
ps.gz
Astron. Astrophys. 433 (2005), n°2, 397--404.
Observing a fluctuating black hole yields the impression that it is surrounded with "apparent matter" of negative energy.

Miscellaneous

(2009)
pdf (515K)
The introduction (in French) of my Habilitation manuscript, which contains a presentation of my research for a general mathematical audience.

(2003)
pdf (1210K)
ps.gz (574K)
Advisors: M. Gromov and P. Pansu
My PhD dissertation. In addition to an overall presentation, it mainly contains some of the texts above.

Various texts on my personal Web page, e.g.: the various meanings of entropy in mathematics; introduction to concentration of measure; presentation of different cohomology theories in various settings; introductions to geometric group theory; and more. (Mostly in French.)


Last modified: July 12, 2014.