By Alexander's theorem, every link in the 3-sphere can be represented as the closure of a braid. Lorenz links and twisted torus links are two families that have been extensively studied and are well-described in terms of braids. In this talk, we will present a natural generalization of Lorenz links and twisted torus links that produces all links in the 3-sphere. This provides a simpler braid description for all links in the 3-sphere.
How do competing pathogen strains evolve within and across a population of hosts? We propose a simple stochastic model in which the type composition within each host evolves according to a family of Markov kernels. When hosts evolve independently, the model reveals a moment duality with genealogies related to the Ancestral Selection Graph and, under suitable scaling, converges to a Wright–Fisher diffusion with drift. When hosts interact through the population distribution, the system becomes weakly interacting. We prove propagation of chaos and show that the dynamics of a typical host converge to a McKean–Vlasov diffusion. As an illustration, we consider mutation rates depending on the current population state and study ergodicity of the resulting mean-field dynamics. This talk is based on joint work with Leonardo Videla (Universidad de Santiago) and Héctor Olivero (Universidad de Valparaíso).
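For orientation, the flavor of moment duality meant here can be illustrated by the neutral case (a standard fact; the precise scaling limit and dual in the talk may differ): the neutral Wright–Fisher diffusion is dual to the block-counting process $N_t$ of the coalescent,

```latex
\[
  dX_t = \sqrt{X_t(1 - X_t)}\,dB_t,
  \qquad
  \mathbb{E}_x\!\left[X_t^{\,n}\right] = \mathbb{E}_n\!\left[x^{\,N_t}\right].
\]
```

With selection, a drift term of the form $s\,X_t(1-X_t)$ appears and the dual genealogy is enriched to the Ancestral Selection Graph.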
Given a weight $w$ on the unit circle, consider the orthogonal polynomials on the unit circle generated by $w$. Steklov famously conjectured that if $w$ is bounded below, then the polynomials ought to be uniformly bounded above. Although false, this conjecture raises a natural follow-up question: under what regularity conditions on $w$ are the polynomials uniformly bounded in $L^p(w)$ for some $p > 2$? Building upon a preliminary answer given by Nazarov for the case when $w$ is bounded above and below, we provide a positive answer when $w$ is an $A_2$ weight. This is joint work with Alexander Aptekarev and Sergey Denisov.
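For reference, $w$ is an $A_2$ weight on the circle $\mathbb{T}$ when its Muckenhoupt characteristic is finite:

```latex
\[
  [w]_{A_2} := \sup_{I \subset \mathbb{T}}
  \left(\frac{1}{|I|}\int_I w\right)
  \left(\frac{1}{|I|}\int_I w^{-1}\right) < \infty,
\]
```

the supremum running over all arcs $I$. A weight bounded above and below is $A_2$, but not conversely, so this hypothesis strictly weakens Nazarov's.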
In this talk, we address the localization of general nonlocal functionals of double-integral type with fractional dependence on the state variable, inspired by peridynamics. Localization is carried out as the interaction horizon among particles tends to zero. As a main result, we obtain an explicit formulation of the local $\Gamma$-limit, also covering the vectorial case. Applications of this result to nonlinear elasticity and the $p$-Laplacian eigenvalue problem will be discussed.
We propose a new concept of lifts of reversible diffusion processes and show that various well-known non-reversible Markov processes arising in applications are lifts in this sense of simple reversible diffusions. Furthermore, we introduce a concept of non-asymptotic relaxation times and show that these can at most be reduced by a square root through lifting, generalising a related result in discrete time.
For reversible diffusions on domains in Euclidean space, or, more generally, on a Riemannian manifold with boundary, non-reversible lifts are in particular given by the Hamiltonian flow on the tangent bundle, interspersed with random velocity refreshments, or perturbed by Ornstein-Uhlenbeck noise, and reflected at the boundary. In order to prove that for certain choices of parameters, these lifts achieve the optimal square-root reduction up to a constant factor, precise upper bounds on relaxation times are required. We demonstrate how the recently developed approach to quantitative hypocoercivity based on space-time Poincaré inequalities can be rephrased and simplified in the language of lifts and how it can be applied to find optimal lifts.
The Hopfield Neural Network has played, ever since its introduction in 1982 by John Hopfield, a fundamental role in the interdisciplinary study of storage and retrieval capabilities of neural networks, further highlighted by the recent 2024 Physics Nobel Prize.
From its strong link with biological pattern retrieval mechanisms to its high-capacity Dense Associative Memory variants and connections to generative models, the Hopfield Neural Network has found relevance both in neuroscience and in the most modern AI systems.
Much of our theoretical knowledge of these systems, however, comes from a surprising and powerful link with Statistical Mechanics, first established and explored in the seminal works of Amit, Gutfreund and Sompolinsky in the second half of the 1980s: the interpretation of associative memories as spin-glass systems.
In this talk, we will present this duality, as well as the mathematical techniques from spin-glass systems that allow us to accurately and rigorously predict the behavior of different types of associative memories, capable of performing a variety of tasks.
A rigorous understanding of the dynamical nature of spacelike singularities remains an open problem in mathematical cosmology. Since the heuristic work of Belinski–Khalatnikov–Lifshitz and Misner's Mixmaster construction, vacuum spatially homogeneous cosmological models are expected to play a key role for generic singularities. We therefore focus on this class of models. The most general cases are the Bianchi type VIII, type IX, and type VI_{-1/9}, each with a four-dimensional Hubble-normalized state space.
On one hand, we embed the types VIII and IX models into modified gravity theories and show that general relativity (GR) arises as a bifurcation point where chaotic dynamics become generic, suggesting a new approximation scheme for GR. On the other hand, we analyze the type VI_{-1/9} oscillatory regime and show that only a subset of its structure is dynamically relevant.
Here we introduce basic concepts, various models (SIP, SEP, independent random walkers) and how they are linked to each other via the Lie algebraic formalism.
From the Lie algebraic formalism we infer that interacting particle systems with dualities come in "families" characterized by an underlying Lie algebra.
These are SU(2) for SEP, SU(1,1) for SIP, and the Heisenberg algebra for independent particles.
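As a concrete instance of duality in this family (a standard fact, stated here for orientation), the symmetric exclusion process is self-dual with respect to the duality function

```latex
\[
  D(\xi,\eta) = \prod_{x \,:\, \xi_x = 1} \eta_x,
  \qquad
  \mathbb{E}_\eta\!\left[D(\xi,\eta_t)\right]
  = \mathbb{E}_\xi\!\left[D(\xi_t,\eta)\right],
\]
```

where $\xi$ is a dual configuration with finitely many particles and $\eta$ an arbitrary SEP configuration; the underlying SU(2) symmetry of the generator is what produces this self-duality.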
References
Giardina, C., & Redig, F. (2026). Duality for Markov processes: a Lie algebraic approach. Springer Nature.
Van Ginkel, B., & Redig, F. (2020). Hydrodynamic limit of the symmetric exclusion process on a compact Riemannian manifold. Journal of Statistical Physics, 178(1), 75-116.
Junné, J., Redig, F., & Versendaal, R. (2024). Hydrodynamic limit of the symmetric exclusion process on complete Riemannian manifolds and principal bundles. arXiv:2410.20167.
Giardinà, C., Redig, F., & van Tol, B. (2024). Intertwining and propagation of mixtures for generalized KMP models and harmonic models. arXiv:2406.01160.
Schütz, G., & Sandow, S. (1994). Non-Abelian symmetries of stochastic processes: Derivation of correlation functions for random-vertex models and disordered-interacting-particle systems. Physical Review E, 49(4), 2726.
Giardina, C., Kurchan, J., Redig, F., & Vafayi, K. (2009). Duality and hidden symmetries in interacting particle systems. Journal of Statistical Physics, 135(1), 25-55.
Frassek, R., & Giardinà, C. (2022). Exact solution of an integrable non-equilibrium particle system. Journal of Mathematical Physics, 63(10).
Here we use duality to characterize the ergodic invariant measures, and also to study the stationary state of systems driven by reservoirs at the boundary.
Special attention is given to the harmonic model and propagation of mixed product states.
Existing machine learning frameworks operate over the field of real numbers ($\mathbb{R}$) and learn representations in real (Euclidean or Hilbert) vector spaces (e.g., $\mathbb{R}^d$). Their underlying geometric properties align well with intuitive concepts such as linear separability, minimum enclosing balls, and subspace projection; and basic calculus provides a toolbox for learning through gradient-based optimization.
But is this the only possible choice? In this seminar, we study the suitability of a radically different field as an alternative to $\mathbb{R}$ — the ultrametric and non-Archimedean space of $p$-adic numbers, $\mathbb{Q}_p$. The hierarchical structure of the $p$-adics and their interpretation as infinite strings make them an appealing tool for coding theory and hierarchical representation learning. Our exploratory theoretical work establishes the building blocks for classification, regression, and representation learning with the $p$-adics, providing learning models and algorithms. We illustrate how simple Quillian semantic networks can be represented as a compact $p$-adic linear network, a construction which is not possible with the field of reals. We finish by discussing open problems and opportunities for future research enabled by this new framework.
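The ultrametric structure mentioned above is easy to see concretely. The sketch below (function names are our own, for illustration only) computes the $p$-adic absolute value $|x|_p = p^{-v_p(x)}$ on the integers and exhibits the strong triangle inequality $|x+y|_p \le \max(|x|_p, |y|_p)$ that makes $\mathbb{Q}_p$ ultrametric:

```python
def p_adic_valuation(x, p):
    """v_p(x): the largest k such that p**k divides the nonzero integer x."""
    if x == 0:
        raise ValueError("the valuation of 0 is +infinity")
    k = 0
    while x % p == 0:
        x //= p
        k += 1
    return k

def p_adic_abs(x, p):
    """p-adic absolute value |x|_p = p**(-v_p(x)), with |0|_p = 0."""
    if x == 0:
        return 0.0
    return p ** (-p_adic_valuation(x, p))

# Example: 12 = 2^2 * 3, so |12|_2 = 1/4 and |12|_3 = 1/3.
print(p_adic_abs(12, 2))  # 0.25
```

Numbers divisible by high powers of $p$ are $p$-adically small, which is exactly the hierarchical, tree-like geometry the abstract exploits.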
Many complex biological and physical networks are naturally subject to both random influences, i.e., extrinsic randomness, from their surrounding environment, and uncertainties, i.e., intrinsic noise, from their individuals. Among many interesting network dynamics, of particular importance is the synchronization property, which is closely related to network reliability, especially in cellular bio-networks. It has been speculated that whereas extrinsic randomness may cause noise-induced synchronization, intrinsic noise can drive synchronized individuals apart. This talk presents an appropriate framework of (discrete-state and discrete-time) Markov random networks to incorporate both extrinsic randomness and intrinsic noise into the rigorous study of such synchronization and desynchronization scenarios. In particular, alternating patterns between synchronization and desynchronization behaviors are given by studying the asymptotics of the Markov perturbed stationary distributions. This talk is based on joint works with Arno Berger, Wen Huang, Hong Qian, Felix X.-F. Ye, and Yingfei Yi.
Diffusion probabilistic models have become the state-of-the-art tool in generative modeling, used to generate high-resolution samples from very high-dimensional distributions (e.g., images). Although very effective, they suffer from some drawbacks: unlike variational autoencoders, the dimension of the problem remains high throughout the generation process, and they can be prone to memorization of the training dataset.
In this talk, we first provide an introduction to generative modeling, with a focus on diffusion models from the point of view of stochastic PDEs. Then, we introduce a kernel-smoothed empirical score and study the bias-variance tradeoff of this estimator. We find improved bounds on the KL-divergence between the true measure and the approximate measure generated using the smoothed empirical score. This score estimator leads to less memorization and better generalization. We demonstrate these findings on synthetic and real datasets, combining diffusion models with variational autoencoders to reduce the dimensionality of the problem.
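The exact smoothing used in the talk is not specified here, but one natural construction of this kind (a minimal sketch, with our own function name) is the score of the empirical measure convolved with a Gaussian kernel, which admits a closed form as a softmax-weighted average:

```python
import numpy as np

def smoothed_empirical_score(x, data, sigma):
    """
    Score (gradient of log-density) at point x of the empirical measure of
    the rows of `data`, convolved with an isotropic Gaussian of bandwidth
    sigma. For p_hat(x) = (1/n) * sum_i N(x; x_i, sigma^2 I), one has
        grad log p_hat(x) = sum_i w_i(x) * (x_i - x) / sigma^2,
    where w_i(x) are softmax weights of -||x - x_i||^2 / (2 sigma^2).
    """
    diffs = data - x                                   # shape (n, d)
    logits = -np.sum(diffs ** 2, axis=1) / (2 * sigma ** 2)
    logits -= logits.max()                             # numerical stability
    w = np.exp(logits)
    w /= w.sum()
    return (w[:, None] * diffs).sum(axis=0) / sigma ** 2
```

As sigma grows, the score is pulled toward the data mean (more bias, less variance); as sigma shrinks, it reverts to the raw empirical score that drives memorization, which is the tradeoff the talk quantifies.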