We will review recent developments concerning the use of level set techniques associated with solutions to elliptic equations, in particular spacetime harmonic functions, and their application to positive mass theorems and comparison geometry.

Diffeomorphism symmetry is an intrinsic difficulty of gravitational theory, arising in almost every question in gravity. As is well known, the diffeomorphism symmetries of gravity should be interpreted as gauge symmetries, so only diffeomorphism-invariant operators are physically interesting. However, because of the non-linearity of gravitational theory, results for diffeomorphism-invariant operators are very limited.

In this work, we focus on Jackiw-Teitelboim gravity in the classical limit, and use the Peierls bracket (a linear-response-like computation of the bracket of observables) to compute the algebra of a large class of diffeomorphism-invariant observables. With this algebra, we can reproduce some recent results in Jackiw-Teitelboim gravity, including the traversable wormhole, the scrambling effect, and the $SL(2)$ charges. We can also use it to clarify when the creation of an excitation deep in the bulk increases or decreases the boundary energy, a question of crucial importance for the “typical state” version of the firewall paradox.

In the talk, I will first give a brief introduction to the Peierls bracket, and then use it to study the brackets between diffeomorphism-invariant observables in Jackiw-Teitelboim gravity. I will then give two applications of this algebra: reproducing the scrambling effect, and studying the change in boundary energy after creating an excitation in the bulk.
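For reference, in one common convention (signs and labels vary between authors) the Peierls bracket of two observables $A$ and $B$ is defined as $$\{A,B\} = D^-_A B - D^+_A B,$$ where $D^\pm_A B$ denotes the first-order change of $B$ under the retarded ($+$), respectively advanced ($-$), solution obtained by perturbing the action $S \to S + \varepsilon A$; the bracket is thus computed entirely from linear response, which is what makes it well suited to diffeomorphism-invariant observables.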

In this presentation, new types of multivariate EWMA control charts are introduced. They are based on the Euclidean distance and on the distance defined via the inverse of the diagonal matrix of the variances. The design of the proposed control schemes does not involve the computation of the inverse covariance matrix and thus can be used in the high-dimensional setting. The distributional properties of the control statistics are derived and used in the design of the new control procedures. In an extensive simulation study, the new approaches are compared with multivariate EWMA control charts based on the Mahalanobis distance.
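As an illustration of why no covariance inverse is needed, here is a minimal sketch of EWMA statistics based on the plain and the variance-scaled Euclidean distance. This is not the authors' exact design: the function name, the smoothing parameter `lam`, and the in-control target `mu0` are placeholders.

```python
import numpy as np

def ewma_distance_chart(X, mu0, lam=0.1, var=None):
    """EWMA statistics based on a (possibly variance-scaled) Euclidean distance.

    var=None gives the plain Euclidean chart; passing the vector of marginal
    variances gives the diagonally scaled chart. No inverse covariance matrix
    is required, so the statistic remains computable when the dimension
    exceeds the number of observations.
    """
    w = np.ones(X.shape[1]) if var is None else 1.0 / np.asarray(var)
    z = np.zeros(X.shape[1])
    stats = []
    for x in X:
        z = lam * (x - mu0) + (1 - lam) * z   # EWMA recursion
        stats.append(np.sum(w * z**2))        # squared distance from target
    return np.array(stats)
```

An out-of-control signal would then be raised when the statistic exceeds a control limit calibrated from the distributional properties mentioned above.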

The presented results are based on joint work with Rostyslav Bodnar and Taras Bodnar.

I will discuss some of the unusual properties, in geometry and physics, of a family of Calabi-Yau threefolds fibered by elliptic curves. I will compare it to a construction by Elkies and a classical result of Burkhardt. This leads to some open questions.

Statistical methods play an important role in infectious disease epidemiology. They provide the main set of tools to compute estimates of key epidemiological parameters and to shed light on the transmission dynamics of a pathogen. Markov chain Monte Carlo (MCMC) methods are powerful simulation techniques used to explore the posterior parameter space and carry out inference under the Bayesian paradigm. As MCMC samplers are iterative by design, drawing samples from the target posterior distribution often requires huge computational resources. This computational bottleneck is particularly unwelcome when analysis of epidemic data and estimation of model parameters are required in (near) real-time, as is often the case during epidemic outbreaks, where massive datasets are updated on a daily basis. We explore the synergy between the Laplace approximation and Bayesian P-splines in epidemic models to deliver a flexible inference methodology with fast and nimble algorithms that outperform MCMC-based approaches from a computational perspective. The so-called “Laplacian-P-splines” method is illustrated in the context of nowcasting (i.e. the real-time assessment of the current epidemic situation, corrected for imperfect data caused by reporting delays) and in the recently proposed EpiLPS framework for estimating the time-varying reproduction number, with applications to SARS-CoV-2 data.
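The Laplace step at the core of such methods replaces MCMC sampling with a Gaussian approximation centred at the posterior mode, with covariance given by the inverse Hessian of the negative log posterior. A minimal generic sketch (not the EpiLPS/Laplacian-P-splines implementation, which combines this with P-spline priors; the toy binomial example is ours):

```python
import numpy as np
from scipy.optimize import minimize

def laplace_approx(neg_log_post, x0, eps=1e-5):
    """Laplace approximation: Gaussian with mean at the posterior mode and
    covariance equal to the inverse Hessian of the negative log posterior."""
    mode = minimize(neg_log_post, x0).x
    d = len(mode)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei = np.zeros(d); ei[i] = eps
            ej = np.zeros(d); ej[j] = eps
            # forward-difference estimate of the Hessian entry
            H[i, j] = (neg_log_post(mode + ei + ej) - neg_log_post(mode + ei)
                       - neg_log_post(mode + ej) + neg_log_post(mode)) / eps**2
    return mode, np.linalg.inv(H)

# toy example: binomial likelihood, flat prior, logit parametrization
k, n = 30, 100
neg_log_post = lambda eta: -(k * eta[0] - n * np.log1p(np.exp(eta[0])))
mode, cov = laplace_approx(neg_log_post, np.array([0.0]))
# mode ~ log(0.3/0.7), cov ~ 1/(n * 0.3 * 0.7)
```

A single optimization plus one Hessian evaluation replaces thousands of MCMC iterations, which is the source of the computational gain in the real-time setting.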

This course begins with a brief introduction to classical calculus of variations and its applications to classical problems such as geodesic trajectories and the brachistochrone problem. Then, we examine Hamilton-Jacobi equations, the role of convexity and the classical verification theorem. Next, we illustrate the lack of classical solutions and motivate the definition of viscosity solutions. The course ends with a brief description of the reinforcement learning problem and its connection with Hamilton-Jacobi equations.
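For reference, the Hamilton-Jacobi equations in question take (in one common convention) the form $$u_t + H(x, \nabla u) = 0,$$ where, for a variational problem with running cost (Lagrangian) $L$, the Hamiltonian is the convex conjugate $H(x,p) = \sup_a \{\, p \cdot a - L(x,a) \,\}$; convexity of $L$ in $a$ is what makes this Legendre transform, and with it the classical verification argument, work.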

Motivated by a recent application in Semi-Supervised Learning (SSL), the minicourse is a brief introduction to the analysis of infinity-harmonic functions. We will discuss the Lipschitz extension problem, its solution via McShane-Whitney extensions and their several drawbacks, leading to the notion of AMLE (Absolutely Minimising Lipschitz Extension). We then explore the equivalence between being absolutely minimising Lipschitz, enjoying comparison with cones, and solving the infinity-Laplace equation in the viscosity sense.
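For concreteness, the two McShane-Whitney extensions of data $f$ given on a set $E$ with Lipschitz constant $L$ are $u^-(x) = \max_{y \in E}\,(f(y) - L|x-y|)$ and $u^+(x) = \min_{y \in E}\,(f(y) + L|x-y|)$. A minimal numerical sketch (function and variable names are illustrative):

```python
import numpy as np

def mcshane_whitney(x, pts, vals, L):
    """Lower (McShane) and upper (Whitney) L-Lipschitz extensions at x
    of the data vals given on the finite set pts."""
    d = np.linalg.norm(pts - np.asarray(x, float), axis=1)
    lower = np.max(vals - L * d)   # largest L-Lipschitz minorant matching the data
    upper = np.min(vals + L * d)   # smallest L-Lipschitz majorant matching the data
    return lower, upper

pts = np.array([[0.0], [1.0]])
vals = np.array([0.0, 1.0])
```

Any $L$-Lipschitz extension of the data lies between the two; the gap between them away from $E$ is one of the drawbacks (non-uniqueness) that motivates the notion of AMLE.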

Classical propositional logic (and other propositional logics) is generally presented with atomic formulas, due to a philosophical idea of Wittgenstein (1921) which Bertrand Russell baptized “Logical Atomism” (1924). This philosophical view is controversial, and from a mathematical point of view it is possible to construct propositional logic without atoms. This is what we will show here in the case of classical propositional logic. Surprisingly enough, this has not yet been studied in detail. On the one hand, Gödel very succinctly discussed it in an informal way (1929/1930). On the other hand, Suszko developed what he called “abstract logic”, a general theory of propositional logics without the atomic assumption, but did not study particular cases in detail.

We will here present a precise mathematical definition of classical propositional logic without atoms, give a semantics and a sequent calculus for it, and prove the completeness theorem using a very general abstract version of this theorem (Béziau 2001).

References

J.-Y. Béziau, “Sequents and bivaluations”, Logique et Analyse, 44 (2001), pp. 373–394.

K. Gödel, “Eine Eigenschaft der Realisierungen des Aussagenkalküls”, Ergebnisse eines mathematischen Kolloquiums, 2 (1929/30), pp. 20–21.

B. Russell, “Logical Atomism”, in J. H. Muirhead (ed.), Contemporary British Philosophers, London: Allen and Unwin, 1924, pp. 356–383.

R. Suszko (with S. L. Bloom and D. J. Brown), “A note on abstract logics”, Bulletin de l’Académie Polonaise des Sciences, 18 (1970), pp. 109–110.

L. Wittgenstein, “Logisch-Philosophische Abhandlung”, Annalen der Naturphilosophie, 14 (1921).

Virasoro constraints for Gromov-Witten invariants have a rich history tied to the very beginning of the subject, but recently there have been many developments on the sheaf side. In this talk I will survey those developments and talk about joint work with A. Bojko and W. Lim where we propose a general conjecture of Virasoro constraints for moduli spaces of sheaves and formulate it using the vertex algebra that D. Joyce recently introduced to study wall-crossing. Using Joyce's framework we can show compatibility between wall-crossing and the constraints, which we then use to prove that they hold for moduli of stable sheaves on curves and surfaces with $h^{0,1}=h^{0,2}=0$. In the talk I will give a rough overview of the vertex algebra story and focus on the ideas behind the proof in the case of curves.

In breeding programmes, the observed genetic change is a sum of the contributions of different groups of individuals. Quantifying these sources of genetic change is essential for identifying the key breeding actions and for optimizing breeding programmes. However, it is difficult to disentangle the contributions of individual groups due to the inherent complexity of breeding programmes. Here we extend the previously developed method for partitioning the genetic mean by paths of selection to work with both the mean and the variance of breeding values. We first extended the partitioning method to quantify the contribution of different groups to genetic variance, assuming breeding values are known. Second, we combined the partitioning method with a Markov chain Monte Carlo approach to draw samples from the posterior distribution of breeding values and used these samples to compute point and interval estimates of the partitions of the genetic mean and variance. We implemented the method in the R package AlphaPart and demonstrated it with a simulated cattle breeding programme. We showed how to quantify the contribution of different groups of individuals to the genetic mean and variance, and that the contributions of different selection paths to genetic variance are not necessarily independent. Finally, we observed some limitations of the partitioning method under a misspecified model, suggesting the need for a genomic partitioning method. The presented method can help breeders and researchers understand the dynamics of the genetic mean and variance in a breeding programme, how different paths of selection interact within it, and how they can be optimised.
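As a toy illustration of the bookkeeping involved (this is not the AlphaPart algorithm, which traces contributions through the pedigree; here the per-group contributions are simply assumed given), one can partition the mean and the variance of total breeding values so that the variance parts absorb cross-group covariances, which is why the contributions of different paths need not be independent:

```python
import numpy as np

def partition_mean_var(contribs):
    """contribs: dict mapping group name -> array of per-individual
    contributions, with total breeding value = sum over groups.
    Returns additive partitions of the mean and of the variance; the
    variance part Cov(c_g, total) includes cross-group covariances."""
    total = sum(contribs.values())
    mean_parts = {g: c.mean() for g, c in contribs.items()}
    var_parts = {g: np.cov(c, total)[0, 1] for g, c in contribs.items()}
    return mean_parts, var_parts
```

By construction the mean parts sum to the overall genetic mean and the variance parts sum to the overall genetic variance, while a single part can exceed the total when group contributions are negatively correlated.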

Recent developments involving replica wormholes, the generalized entropy, quantum extremal surfaces, the holographic map, etc., have shown what is missing in Hawking’s original calculation. We can now see how to perform semi-classical calculations that are entirely consistent with unitarity: information is not lost. The state of the Hawking radiation has subtle correlations that build up as a black hole evaporates and ensure that the final state is pure. What these results imply for the picture of a black hole with a smooth interior geometry before the singularity is reached is less clear. I will review these developments and present a simple microscopic model which can be used to illustrate the issues involved. The recent observation that the holographic map, the map between semi-classical and microscopic states, is non-isometric plays a key role. Contrary to some suggestions, manipulation of the radiation far from the black hole cannot affect its interior in a non-local way. The picture seems entirely consistent with microscopic constructions like fuzzballs in string theory.

I will describe the universal aspect of unitary conformal field theories at high temperature with a global symmetry group, which can be a discrete group or a compact Lie group. I will describe the geometric setup needed to apply the spurion analysis and explain how we can capture this universal aspect up to a constant factor that depends on the choice of theory. As a by-product, this analysis demonstrates that the RNAdS black hole with non-abelian hair is thermodynamically more stable than the one without hair.

Every smooth fiber bundle admits a complete Ehresmann connection. I will talk about the story of this theorem and its relation to Riemannian submersions. Then, after discussing some foundations of the Riemannian geometry of Lie groupoids and stacks, I will present a generalization of the theorem to this framework, which in some sense answers an open problem on linearization. This talk is based on collaborations with my former student M. de Melo.

Despite the non-convex optimization landscape, over-parametrized shallow networks are able to achieve global convergence under gradient descent. The picture can be radically different for narrow networks, which tend to get stuck in badly-generalizing local minima. Here we investigate the crossover between these two regimes in the high-dimensional setting, and in particular the connection between the so-called mean-field/hydrodynamic regime and the seminal approach of Saad & Solla. Focusing on the case of Gaussian data, we study the interplay between the learning rate, the time scale, and the number of hidden units in the high-dimensional dynamics of stochastic gradient descent (SGD). Our work builds on a deterministic description of SGD in high dimensions from statistical physics, which we extend and for which we provide rigorous convergence rates.
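As a minimal concrete instance of the setting, here is a toy teacher-student run with a single tanh unit and fresh Gaussian inputs at each step; the dimension, learning rate, and step count are illustrative choices, not those of the work described.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50
w_star = rng.normal(size=d) / np.sqrt(d)     # teacher weights, norm ~ 1
w = 0.1 * rng.normal(size=d) / np.sqrt(d)    # small student initialization

lr = 0.5 / d   # learning rate scaled with dimension, as in high-dimensional analyses
losses = []
for t in range(20000):
    x = rng.normal(size=d)                   # fresh Gaussian sample: online SGD
    pre = w @ x
    err = np.tanh(pre) - np.tanh(w_star @ x)
    w -= lr * err * (1.0 - np.tanh(pre) ** 2) * x   # one SGD step on squared loss
    losses.append(0.5 * err ** 2)
```

In the mean-field/hydrodynamic descriptions one instead tracks the deterministic evolution of low-dimensional order parameters such as the teacher-student overlap $w \cdot w_*$; the run above is the finite-$d$ stochastic process that those descriptions approximate.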