The principle of the holography of information states that in a theory of quantum gravity a copy of all the information available on a Cauchy slice is also available near the boundary of the Cauchy slice. This redundancy in the theory is already present at low energy. In the context of the AdS/CFT correspondence, this principle can be translated into a statement about the dual conformal field theory. We carry out this translation and demonstrate that the principle of the holography of information holds in bilocal holography.

Complete Calabi-Yau metrics provide singularity models for limits of Kähler-Einstein metrics. We study complete Calabi-Yau metrics with Euclidean volume growth and quadratic curvature decay. It is known that under these assumptions the metric is always asymptotic to a unique cone at infinity. Previous work of Donaldson-Sun gives a 2-step degeneration to the cone in the algebro-geometric sense, via a possible intermediate object (a K-semistable cone). We will show that such an intermediate K-semistable cone does not occur. This is in sharp contrast to the case of local singularities. Together with the work of Conlon-Hein, this result gives a complete algebro-geometric classification of these metrics, which in particular confirms Yau's compactification conjecture in this setting. I will explain the proof in this talk, and if time permits I will describe a conjectural picture for the general case in which the curvature decay condition is removed. Based on joint work with Junsheng Zhang (UC Berkeley).

We study the Inclusion Process with vanishing diffusion coefficient, which is known to exhibit condensation and metastable dynamics for cluster locations. Here we focus on the dynamics of the mass distribution rather than the locations, and consider the process on the complete graph in the thermodynamic limit with fixed particle density. We describe the mass distribution for a given configuration by a measure on a suitably scaled mass space and derive a limiting measure-valued process. When the diffusion coefficient scales like the inverse of the system size, the scaling limit is equivalent to the well-known Poisson-Dirichlet diffusion, offering an alternative point of view on this well-established dynamics. Testing configurations with size-biased functions, our approach can be generalized to other scaling regimes. This leads to an interesting characterization of the limiting dynamics via duality and provides a natural extension of the Poisson-Dirichlet diffusion to infinite mutation rate. This is joint work with Simon Gabriel and Paul Chleboun (both Warwick).

Error-correcting codes are known to define chiral 2d lattice CFTs in which all the $U(1)$ symmetries are enhanced to $SU(2)$. In this paper, we extend this construction to a broader class of length-$n$ codes which define full (non-chiral) CFTs with $SU(2)^n$ symmetry, where $n=c+ \bar c$. We show that codes give a natural discrete ensemble of 2d theories in which one can compute averaged observables. The partition function obtained by averaging over all codes with equal weight is found to be given by the sum over modular images of the vacuum character of the full extended symmetry group, and in this case the number of modular images is finite. This averaged partition function has a large gap, scaling linearly with $n$, in primaries of the full $SU(2)^n$ symmetry group. Using the sum over modular images, we conjecture the form of the genus-2 partition function. This exhibits the connected contributions to disconnected boundaries characteristic of wormhole solutions in a bulk dual.

I’ll describe two approaches to constructing a universal state sum. The first approach (arXiv:2104.02101) is more elementary and assumes semisimplicity. Special cases of this state sum include Turaev–Viro, Crane–Yetter, Douglas–Reutter, the Reshetikhin–Turaev Dehn surgery formula (thought of as a state sum), Brown–Arf for $\mathrm{Pin}_-$ 2-manifolds, and Dijkgraaf–Witten. The second approach (joint work with David Reutter) is more general and does not assume semisimplicity. If there’s time I’ll sketch a program to use the non-semisimple state sum to reproduce a cluster of non-semisimple 3-manifold invariants due to many different authors (Lyubashenko, Kuperberg, Hennings, ... Geer, Gainutdinov, Patureau-Mirand, ... ).

It is well known that Frobenius algebras are in correspondence with 2-dimensional TQFTs. In this talk, we introduce Frobenius objects in any monoidal category, and in particular in the category where objects are sets and morphisms are spans of sets. We prove the existence of a simplicial set that encodes the data of the Frobenius structure in this category. This serves as a (simplicial) toy model of the Wehrheim–Woodward construction for the symplectic category. This is part of a program that intends to describe, in terms of category theory, the relationship between symplectic groupoids and topological field theory via the Poisson sigma model. Based on joint work with Rajan Mehta and Molly Keller (Rev. in Math. Phys (34) 10 (2022)), with Rajan Mehta, Adele Long and Sophia Marx (https://arxiv.org/abs/2208.14716), and ongoing work with Rajan Mehta and Walker Stern.

After an introduction on the way large deviation functions appear in non-equilibrium systems, I will try to explain how they can be calculated for general Markov processes. Using this theory, it is easy to establish general properties of non-equilibrium systems such as the fluctuation theorem. The main part of these lectures will then review the theoretical approaches, such as matrix products, the Bethe ansatz, or the macroscopic fluctuation theory, which allow one to obtain a series of exact expressions for large deviation functions of lattice gas models.
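For a concrete sense of how such calculations work, here is a minimal numerical sketch (my own illustration, not part of the lectures) of the standard tilted-generator recipe for a time-additive observable of a two-state Markov jump process: the scaled cumulant generating function is the largest eigenvalue of the tilted generator, and the large deviation rate function follows by Legendre transform. The rates `a`, `b` and the observable (fraction of time spent in state 1) are arbitrary choices for illustration.

```python
import numpy as np

# Two-state Markov jump process with rates a (1 -> 2) and b (2 -> 1).
a, b = 1.0, 2.0
L = np.array([[-a, a],
              [b, -b]])   # generator; rows sum to zero

# Time-additive observable: fraction of time spent in state 1.
f = np.array([1.0, 0.0])

def scgf(k):
    """Scaled cumulant generating function lambda(k): largest
    eigenvalue of the tilted generator L + k * diag(f)."""
    Lk = L + k * np.diag(f)
    return np.max(np.linalg.eigvals(Lk).real)

# Rate function I(x) = sup_k [k x - lambda(k)] (Legendre transform),
# approximated by a sup over a finite grid of tilting parameters.
ks = np.linspace(-10, 10, 2001)
lam = np.array([scgf(k) for k in ks])

def rate(x):
    return np.max(ks * x - lam)

# The stationary occupation of state 1 is b / (a + b); the rate
# function vanishes there, and atypical occupations cost I(x) > 0.
x_star = b / (a + b)
print(rate(x_star))   # ~ 0
print(rate(0.9))      # positive: exponentially suppressed fluctuation
```

The same structure, with the transfer matrix replaced by a matrix-product or Bethe-ansatz computation of the dominant eigenvalue, underlies the exact lattice-gas results the lectures discuss.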

Toric symplectic manifolds contain an interesting and well-studied family of Lagrangian tori, called toric fibres. In this talk, we address the natural question of which toric fibres are equivalent under Hamiltonian diffeomorphisms of the ambient space. On one hand, we use a symmetric version of McDuff's probes to construct such equivalences and on the other hand, we give certain obstructions coming from Chekanov's classification of product tori in symplectic vector spaces combined with a lifting trick from toric geometry. We will discuss many four-dimensional examples in which a full classification can be achieved.

Deep artificial neural networks have achieved great success on many problems in science and engineering. In this talk, I will discuss our recent efforts to develop DNNs capable of learning non-trivial geometric information hidden in data. In the first part, I will discuss our work advocating the use of a multi-chart latent space for better data representation. Inspired by differential geometry, we propose a Chart Auto-Encoder (CAE) and prove a universal approximation theorem on its representation capability. CAE admits desirable manifold properties that conventional auto-encoders with a flat latent space fail to obey. We further establish statistical guarantees on the generalization error for trained CAE models and show their robustness to noise. Our numerical experiments also demonstrate satisfactory performance on data with complicated geometry and topology. If time permits, I will discuss our work on defining convolution on manifolds via parallel transport. This geometric way of defining parallel transport convolution (PTC) provides a natural combination of modeling and learning on manifolds. PTC allows for the construction of compactly supported filters and is also robust to manifold deformations. I will demonstrate its applications to shape analysis and point cloud processing using PTC-nets. This talk is based on a series of joint works with my students and collaborators.
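The motivation for a multi-chart latent space can be seen in a toy example (my own illustration, with hand-built charts rather than the trained CAE of the talk): the circle S¹ admits no single continuous global coordinate, since `atan2` jumps at the angle π, but two overlapping charts, each used away from its own branch cut, cover it exactly.

```python
import numpy as np

# Two hand-built charts on the circle S^1. Chart 0 measures the angle
# about (1, 0) and is used away from theta = pi; chart 1 measures the
# angle about (-1, 0) and is used away from theta = 0. Together they
# give an exact, locally continuous encoding that no single flat
# latent coordinate can provide.

def encode(p):
    """Return (chart index, local coordinate) for a point p on S^1."""
    x, y = p
    if x > -0.5:                       # chart 0: coordinate in (-2pi/3, 2pi/3)
        return 0, np.arctan2(y, x)
    return 1, np.arctan2(-y, -x)       # chart 1: angle of the antipode

def decode(chart, t):
    """Invert the chart map back to a point on S^1."""
    if chart == 0:
        return np.array([np.cos(t), np.sin(t)])
    return np.array([-np.cos(t), -np.sin(t)])

# Round trip is exact everywhere, including near theta = pi where a
# single-chart atan2 encoding is discontinuous.
thetas = np.linspace(-np.pi, np.pi, 1000, endpoint=False)
pts = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
err = max(np.linalg.norm(decode(*encode(p)) - p) for p in pts)
print(err)   # ~ 0
```

A CAE replaces these hand-built charts with learned encoder/decoder pairs plus a chart-selection mechanism, which is what lets it respect the topology of the data manifold.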

In this talk, we will explore the importance of mathematical foundations for AI and data science and the design of an academic curriculum for graduate students. While traditional mathematics for AI and data science has focused on core techniques like linear algebra, basic probability, and optimization methods (e.g., gradient and stochastic gradient descent), several advanced mathematical techniques are now essential to understanding modern data science. These include ideas from the calculus of variations in spaces of random variables, functional analytic methods, ergodic theory, control theory methods in reinforcement learning, and metrics on spaces of probability measures. We will discuss the author's experience designing an applied mathematics curriculum on data science and the lessons learned in teaching an advanced course on the mathematical foundations of data science. This talk aims to promote discussion and exchange of ideas on how mathematicians can play an important role in AI and data science and better equip our students to excel in this field.

Symbolic regression aims to find the optimal functional representation of a dataset, with broad applications across science. This is traditionally done using a "genetic algorithm", which stochastically searches function space with an evolution-inspired method for generating new trial functions. Motivated by the uncertainties inherent in this approach -- and its failure on seemingly simple test cases -- I will describe a new method which exhaustively searches and evaluates function space. Coupled to a model selection principle based on minimum description length, Exhaustive Symbolic Regression is guaranteed to find the simple equations that optimally balance simplicity with accuracy on any dataset. I will describe how the method works and showcase it on Hubble rate measurements and dynamical galaxy data.
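The core idea can be sketched in a few lines (my own simplified illustration, not the actual ESR pipeline, which enumerates all expression trees up to a given complexity and fits free constants): score every candidate expression by a two-part description length, bits to state the tree plus bits to state the residuals under a Gaussian code, and keep the minimizer.

```python
import numpy as np

# Minimal description-length model selection over a small exhaustive
# candidate set. The data, candidate list, and alphabet size n_ops
# are illustrative choices, not those of the ESR paper.
rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 200)
y = x**2 + 0.01 * rng.standard_normal(x.size)   # "data" from a known law

# Each candidate: (expression, node count as a complexity proxy, function).
candidates = [
    ("x",        1, lambda x: x),
    ("x + 1",    3, lambda x: x + 1),
    ("2*x",      3, lambda x: 2 * x),
    ("x**2",     3, lambda x: x**2),
    ("x**2 + x", 5, lambda x: x**2 + x),
    ("x**3",     3, lambda x: x**3),
    ("sin(x)",   2, lambda x: np.sin(x)),
]

def description_length(nodes, residuals, n_ops=7):
    """Codelength = bits to state the tree (nodes drawn from an
    alphabet of n_ops symbols) + bits to state the residuals
    (Gaussian code: n/2 * log of the mean squared error)."""
    n = residuals.size
    mse = np.mean(residuals**2)
    return nodes * np.log(n_ops) + 0.5 * n * np.log(mse)

scores = {expr: description_length(k, f(x) - y)
          for expr, k, f in candidates}
best = min(scores, key=scores.get)
print(best)
```

Here the true law minimizes the codelength: the more complex candidate `x**2 + x` pays both a larger tree cost and a larger residual cost, which is exactly the simplicity/accuracy trade-off MDL formalizes.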