

, Tuesday

Lisbon WADE — Webinar in Analysis and Differential Equations


, University of Pavia.

Abstract

In the last few years the Gamow problem, namely

\[ \min\Big\{P(\Omega)+\varepsilon \int_{\Omega}\int_\Omega \frac{1}{|x-y|}\,dx\,dy : \Omega\subset \mathbb{R}^3, |\Omega|=1\Big\}, \]

for $\varepsilon>0$, has attracted a lot of attention from mathematicians. Nowadays it is well understood that for small $\varepsilon$ a minimizer exists and is a ball, while for very large $\varepsilon$ there is no minimizer.

Although it is very easy to formulate, several problems about it remain open (mostly concerning the nonexistence of minimizers for large $\varepsilon$ in a generalized $N$-dimensional setting).
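
As a back-of-the-envelope illustration (not part of the talk), the competition between the two terms can be seen numerically on balls, using the classical closed forms $P(B_r)=4\pi r^2$ and $\int_{B_r}\int_{B_r}|x-y|^{-1}\,dx\,dy = \tfrac{6}{5}|B_r|^2/r$ for a uniform ball: splitting the unit mass into two distant half-balls lowers the Riesz energy but raises the perimeter, so the single ball wins for small $\varepsilon$ and loses for large $\varepsilon$. A minimal Python sketch:

```python
import math

def ball_radius(volume):
    # radius of a ball in R^3 with the given volume
    return (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)

def perimeter(volume):
    # surface area of the ball of the given volume
    r = ball_radius(volume)
    return 4.0 * math.pi * r * r

def riesz_energy(volume):
    # classical closed form: int int |x-y|^{-1} over B_r x B_r = (6/5) * volume^2 / r
    r = ball_radius(volume)
    return 1.2 * volume * volume / r

def gamow_single_ball(eps):
    # Gamow energy of the unit-volume ball
    return perimeter(1.0) + eps * riesz_energy(1.0)

def gamow_two_half_balls(eps):
    # two half-volume balls sent far apart: their interaction energy tends to 0
    return 2.0 * (perimeter(0.5) + eps * riesz_energy(0.5))

for eps in [0.1, 1.0, 5.0, 10.0]:
    print(eps, gamow_single_ball(eps), gamow_two_half_balls(eps))
```

The crossover between the two printed columns mirrors the existence/nonexistence dichotomy stated above (the sketch only compares two competitors, of course, not all admissible sets).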

A variation of this model, which could be called the ``spectral Gamow problem'', consists in using the first eigenvalue of the Dirichlet Laplacian instead of the perimeter, namely to consider

\[ \min\Big\{\lambda_1(\Omega)+\varepsilon \int_{\Omega}\int_\Omega \frac{1}{|x-y|}\,dx\,dy : \Omega\subset \mathbb{R}^3, |\Omega|=1\Big\}, \]

and in this talk we will provide some new results on this case.

Moreover, we will consider a different problem but with a similar structure, which can be seen as the minimization of a Hartree functional settled in a box, namely

\[ \min\Big\{\min_{u\in H^1_0(\Omega),\;\int u^2=1}\Big\{\int_\Omega|\nabla u|^2+q \int_\Omega\int_\Omega\frac{u^2(x)u^2(y)}{|x-y|}\,dxdy\Big\} : \Omega\subset \mathbb{R}^3,\;|\Omega|=1\Big\}, \]

for $q>0$.

The study of this functional arises when describing the ground state of a superconducting charge qubit.

We show that there is a threshold $q_1>0$ such that for all $q\leq q_1$ minimizers exist and are nearly spherical sets of class $C^{2,\gamma}$.

We will also give some ideas (although inconclusive) on how to treat the nonexistence issue for this functional.

The techniques and tools needed in the proofs are broad: we employ quantitative spectral inequalities, the regularity of free boundaries, spectral surgery arguments, and shape variations.

This is a joint project with Cyrill Muratov (Pisa) and Berardo Ruffini (Bologna).

, Tuesday

Geometria em Lisboa


, University of Aberdeen.

Abstract

I will report on work in progress with S. Anjos and M. Pinsonnault concerning configuration spaces of symplectic balls in the standard complex projective plane. A few weeks ago Martin showed that when the balls are small, their configuration space is homotopy equivalent to the configuration space of points. I will discuss what happens when the balls are bigger. I will also try to put this into the more general context of configurations of rigid balls in domains of a Euclidean space.



, Thursday

Lisbon WADE — Webinar in Analysis and Differential Equations

Room 6.2.52, Faculty of Sciences, University of Lisbon &


Delia Schiera, Instituto Superior Técnico, Universidade de Lisboa.

Abstract

We will consider a Lane–Emden system on a bounded regular domain with Neumann boundary conditions and (sub)critical nonlinearities. In the critical regime, we show that, under suitable conditions on the exponents in the nonlinearities, least-energy (sign-changing) solutions exist. Moreover, through a suitable nonlinear eigenvalue problem, we prove convergence of solutions as the exponents of the nonlinearities vary in the (sub)critical range. Finally, I will briefly discuss related results on multiplicity, symmetry breaking, and regularity.

Based on joint works with A. Pistoia, A. Saldaña and H. Tavares.

, Thursday

Probability in Mathematical Physics


, Universidad de Los Andes.

Abstract

Let $L = (L(t))_{t \geq 0}$ be a multivariate Lévy process with Lévy measure $\nu(dy) = \exp(-f(|y|))\,dy$ for a smoothly regularly varying function $f$ of index $\alpha > 1$. The process $L$ is renormalized as $X^\varepsilon(t) = \varepsilon L(\tau_\varepsilon t)$, $t \in [0,T]$, for a scaling parameter $\tau_\varepsilon = o(\varepsilon^{-1})$, as $\varepsilon \to 0$. We study the behavior of the bridge $Y^{\varepsilon,x}$ of the renormalized process $X^\varepsilon$ conditioned on the event $X^\varepsilon(T) = x$, for a given end point $x \neq 0$ and end time $T > 0$, in the regime of small $\varepsilon$. Our main result is a sample path large deviations principle (LDP) for $Y^{\varepsilon,x}$ with a specific speed function $S(\varepsilon)$ and an entropy-type rate function $I_x$ on the Skorokhod space in the limit $\varepsilon \to 0+$. We show that the asymptotic energy-minimizing path of $Y^{\varepsilon,x}$ is the linear parametrization of the straight line between $0$ and $x$, while all paths leaving this set are exponentially negligible. We also infer an LDP for the asymptotic number of jumps and establish asymptotic normality of the jump increments of $Y^{\varepsilon,x}$. Since on these short time scales ($\tau_\varepsilon = o(\varepsilon^{-1})$) direct LDP methods cannot be adapted, we use an alternative, direct approach based on convolution density estimates of the marginals $X^\varepsilon(t)$, $t \in [0,T]$, for which we solve a specific nonlinear functional equation.
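
As a hedged illustration of the renormalization only (not the bridge conditioning or the LDP), here is a minimal simulation sketch. With the illustrative choice $f(y)=y^\alpha$ the Lévy measure is finite, so $L$ can be taken compound Poisson; the values $\alpha = 1.5$, $\tau_\varepsilon = \varepsilon^{-1/2}$, $T = 1$ and the unit jump rate are my assumptions, not taken from the abstract:

```python
import math, random

ALPHA = 1.5   # illustrative index alpha > 1; we take f(y) = y**ALPHA

def sample_jump():
    # rejection sampling from the normalized jump density ~ exp(-|y|**ALPHA),
    # using a symmetrized Exponential(1) (Laplace) proposal; valid since ALPHA > 1
    ystar = (1.0 / ALPHA) ** (1.0 / (ALPHA - 1.0))
    log_m = ystar - ystar ** ALPHA            # max over y >= 0 of y - y**ALPHA
    while True:
        y = random.expovariate(1.0)
        if random.random() < 0.5:
            y = -y
        if math.log(random.random()) < -abs(y) ** ALPHA + abs(y) - log_m:
            return y

def compound_poisson(t_end, rate):
    # the Lévy measure here is finite, so L is compound Poisson:
    # Poisson jump times plus iid jump sizes
    t, x, path = 0.0, 0.0, [(0.0, 0.0)]
    while True:
        t += random.expovariate(rate)
        if t > t_end:
            return path
        x += sample_jump()
        path.append((t, x))

# renormalization X_eps(t) = eps * L(tau_eps * t), with tau_eps = o(1/eps)
eps, T = 0.01, 1.0
tau = eps ** -0.5                              # illustrative choice of tau_eps
path = [(t / tau, eps * x) for (t, x) in compound_poisson(tau * T, rate=1.0)]
print("X_eps(T) =", path[-1][1], "after", len(path) - 1, "jumps")
```

Conditioning such paths on the endpoint $X^\varepsilon(T)=x$ is exactly what the talk's convolution-density approach handles; the sketch stops at the unconditioned marginal.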



, Monday

Algebra and Topology


, Khalifa University of Science and Technology, Abu Dhabi.

Abstract

We provide a brief introduction to the theory of clones, minions, and clonoids, which are sets of functions of several arguments satisfying certain closure conditions defined in terms of function class composition. These notions arise naturally in universal algebra, and they have proved useful in the analysis of the computational complexity of constraint satisfaction problems. Our primary focus is on clonoids of Boolean functions, and we present classifications of clonoids in the spirit of Post's classification of clones. Moreover, we propose refinements and strengthenings of Sparks's theorem on the cardinalities of clonoid lattices.
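
To make the closure condition concrete, here is a small Python sketch (illustrative, not from the talk) that computes the binary part of the clone generated by AND and OR: functions are stored as truth tables, projections are included, and the set is closed under the composition $h(x,y) = f(g_1(x,y), g_2(x,y))$. Clones contain functions of all arities; the sketch closes only the binary slice:

```python
# represent a binary Boolean function by its truth table on inputs
# (x, y) in {(0,0), (0,1), (1,0), (1,1)}
INPUTS = [(0, 0), (0, 1), (1, 0), (1, 1)]

def table(fn):
    return tuple(fn(x, y) for (x, y) in INPUTS)

AND = table(lambda x, y: x & y)
OR  = table(lambda x, y: x | y)
P1  = table(lambda x, y: x)   # projections belong to every clone
P2  = table(lambda x, y: y)

def compose(f, g1, g2):
    # h(x, y) = f(g1(x, y), g2(x, y)), all given as truth tables;
    # the input pair (a, b) corresponds to index 2*a + b
    return tuple(f[2 * g1[i] + g2[i]] for i in range(4))

def binary_clone(generators):
    # close {projections + generators} under composition
    clone = {P1, P2} | set(generators)
    while True:
        new = {compose(f, g1, g2)
               for f in clone for g1 in clone for g2 in clone} - clone
        if not new:
            return clone
        clone |= new

print(sorted(binary_clone({AND, OR})))
```

For these generators the closure stabilizes at just four functions, $\{x,\, y,\, x\wedge y,\, x\vee y\}$: composition of constant-preserving monotone functions cannot leave that set.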


, Friday

Mathematics for Artificial Intelligence


, ISR & Instituto Superior Técnico.

Abstract

Distributed machine learning addresses the problem of training a model when the dataset is scattered across spatially distributed agents. The goal is to design algorithms that allow each agent to arrive at the model trained on the whole dataset, but without agents ever disclosing their local data.

This tutorial covers the two main settings in DML, namely, Federated Learning, in which agents communicate with a common server, and Decentralized Learning, in which agents communicate only with a few neighbor agents. For each setting, we illustrate synchronous and asynchronous algorithms.

We start by discussing convex models. Although distributed algorithms can be derived from many perspectives, we show that convex models allow us to generate many interesting synchronous algorithms based on the framework of contractive operators. Furthermore, by stochastically activating such operators by blocks, we directly obtain their asynchronous versions. In both kinds of algorithms, agents interact with their local loss functions via the convex proximity operator.
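
As a toy illustration of this operator viewpoint (a sketch under simplifying assumptions, not one of the tutorial's actual algorithms), the following compares a synchronous fixed-point iteration of a contractive affine map with a block-activated, asynchronous-style variant; contraction in the max-norm is assumed so that both converge to the same fixed point:

```python
import numpy as np

rng = np.random.default_rng(0)

# a contractive affine operator T(x) = Ax + c; its fixed point stands in
# for the consensus/optimality point targeted by operator-based methods
d = 8
M = rng.normal(size=(d, d))
A = 0.9 * M / np.abs(M).sum(axis=1).max()    # ||A||_inf < 1: max-norm contraction
c = rng.normal(size=d)
x_star = np.linalg.solve(np.eye(d) - A, c)   # unique fixed point of T

def T(x):
    return A @ x + c

# synchronous iteration: every coordinate is updated at each step
x = np.zeros(d)
for _ in range(300):
    x = T(x)
print("sync  error:", np.linalg.norm(x - x_star))

# "asynchronous" variant: stochastically activate one block per step
x = np.zeros(d)
for _ in range(300 * d):
    i = rng.integers(d)                      # randomly activated coordinate
    x[i] = T(x)[i]
print("async error:", np.linalg.norm(x - x_star))
```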

We then discuss nonconvex models. Here, agents interact with their local loss functions via the gradient. We discuss the standard mini-batch stochastic gradient (SG) and an improved version, the loopless stochastic variance-reduced gradient (L-SVRG).
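
For concreteness, here is a minimal single-machine sketch of L-SVRG on a toy least-squares problem (a sketch only: the distributed versions discussed in the tutorial layer communication on top of this, and the step size and refresh probability below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic least-squares problem: f(w) = (1/n) sum_i 0.5 * (a_i @ w - b_i)^2
n, d = 200, 5
A = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
b = A @ w_true + 0.01 * rng.normal(size=n)

def grad_i(w, i):
    return (A[i] @ w - b[i]) * A[i]

def full_grad(w):
    return A.T @ (A @ w - b) / n

def l_svrg(steps=20000, eta=0.01, p=1.0 / n):
    # loopless SVRG: a single loop; the reference point is refreshed
    # at random times (with probability p) instead of in outer epochs
    w = np.zeros(d)
    w_ref = w.copy()
    mu = full_grad(w_ref)                        # full gradient at the reference point
    for _ in range(steps):
        i = rng.integers(n)
        g = grad_i(w, i) - grad_i(w_ref, i) + mu # variance-reduced stochastic gradient
        w -= eta * g
        if rng.random() < p:                     # loopless refresh
            w_ref = w.copy()
            mu = full_grad(w_ref)
    return w

print("distance to w_true:", np.linalg.norm(l_svrg() - w_true))
```

The "loopless" design choice replaces SVRG's deterministic inner/outer loop structure with a randomized refresh, which simplifies both the implementation and the analysis.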

We end the tutorial by briefly mentioning our recent research on the vertical federated learning setting where the dataset is scattered, not by examples, but by features.

, Friday

Mathematics for Artificial Intelligence


, ISR & Instituto Superior Técnico.

Abstract

Distributed machine learning addresses the problem of training a model when the dataset is scattered across spatially distributed agents. The goal is to design algorithms that allow each agent to arrive at the model trained on the whole dataset, but without agents ever disclosing their local data.

This tutorial covers the two main settings in DML, namely, Federated Learning, in which agents communicate with a common server, and Decentralized Learning, in which agents communicate only with a few neighbor agents. For each setting, we illustrate synchronous and asynchronous algorithms.

We start by discussing convex models. Although distributed algorithms can be derived from many perspectives, we show that convex models allow us to generate many interesting synchronous algorithms based on the framework of contractive operators. Furthermore, by stochastically activating such operators by blocks, we directly obtain their asynchronous versions. In both kinds of algorithms, agents interact with their local loss functions via the convex proximity operator.

We then discuss nonconvex models. Here, agents interact with their local loss functions via the gradient. We discuss the standard mini-batch stochastic gradient (SG) and an improved version, the loopless stochastic variance-reduced gradient (L-SVRG).

We end the tutorial by briefly mentioning our recent research on the vertical federated learning setting where the dataset is scattered, not by examples, but by features.


, Thursday

Lisbon WADE — Webinar in Analysis and Differential Equations


Diogo Arsénio, NYU Abu Dhabi.

Abstract

The phenomenon of dispersion in a physical system occurs whenever the elementary building blocks of the system, whether they are particles or waves, overall move away from each other, because each evolves according to a distinct momentum. This physical process limits the superposition of particles or waves, and leads to remarkable mathematical properties of the densities or amplitudes, including local and global decay, Strichartz estimates, and smoothing.

In kinetic theory, the effects of dispersion in the whole space were notably well captured by the estimates developed by Castella and Perthame in 1996, which, for instance, are particularly useful in the analysis of the Boltzmann equation to construct global solutions. However, these estimates are based on the transfer of integrability of particle densities in mixed Lebesgue spaces, which fails to apply to general settings of kinetic dynamics.
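
As background for such estimates (an illustrative toy computation, not the talk's entropic results), the basic dispersive mechanism is already visible for free transport $f(t,x,v)=f_0(x-vt,v)$: the macroscopic density $\rho(t,x)=\int f(t,x,v)\,dv$ decays like $t^{-d}$. A short numerical check in dimension $d=1$ with a Gaussian datum, where $\sup_x \rho(t,\cdot) = (2\pi(1+t^2))^{-1/2}$, so $t\,\sup_x\rho \to (2\pi)^{-1/2}\approx 0.399$:

```python
import numpy as np

# free transport: f(t, x, v) = f0(x - v t, v); the macroscopic density
# rho(t, x) = int f(t, x, v) dv decays like t^{-d} (here d = 1)
def f0(x, v):
    return np.exp(-0.5 * x**2) * np.exp(-0.5 * v**2) / (2.0 * np.pi)

v = np.linspace(-10, 10, 2001)
dv = v[1] - v[0]

def rho_sup(t):
    # quadrature in v for every x on a grid that widens with t
    x = np.linspace(-10 * (1 + t), 10 * (1 + t), 2001)
    dens = f0(x[:, None] - v[None, :] * t, v[None, :]).sum(axis=1) * dv
    return dens.max()

for t in [1.0, 2.0, 4.0, 8.0, 16.0]:
    print(t, rho_sup(t), "t * sup rho =", t * rho_sup(t))
```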

Therefore, we are now interested in characterizing the kinetic dispersive effects in the whole space in cases where only natural principles of conservation of mass, momentum and energy, and decay of entropy seem to hold. Such general settings correspond to degenerate endpoint cases of the Castella–Perthame estimates where no dispersion is effectively measured. However, by introducing a suitable kinetic uncertainty principle, we will see how it is possible to extract some amount of entropic dispersion and, in essence, measure how particles tend to move away from each other, at least when they are not restricted by a spatial boundary.

A simple application of entropic dispersion will then show us how kinetic dynamics in the whole space inevitably leads, in infinite time, to an asymptotic thermodynamic equilibrium state with no particle interaction and no available heat to sustain thermodynamic processes, thereby providing a provocative interpretation of the heat death of the universe.


, Tuesday

Lisbon WADE — Webinar in Analysis and Differential Equations


Enrico Valdinoci & Serena Dipierro, University of Western Australia.

Abstract

We present the theory of local and nonlocal minimal surfaces in relation to models of phase coexistence, with special attention to regularity and geometric properties.


, Thursday

Lisbon WADE — Webinar in Analysis and Differential Equations


Enrico Valdinoci & Serena Dipierro, University of Western Australia.

Abstract

We present the theory of local and nonlocal minimal surfaces in relation to models of phase coexistence, with special attention to regularity and geometric properties.


, Friday

Mathematics for Artificial Intelligence


, IT & Instituto Superior Técnico.

Abstract

This lecture first provides an introduction to classical variational inference (VI), a key technique for approximating complex posterior distributions in Bayesian methods, typically by minimizing the Kullback-Leibler (KL) divergence. We'll discuss its principles and common uses.

Building on this, the lecture introduces Fenchel-Young variational inference (FYVI), a novel generalization that enhances flexibility. FYVI replaces the KL divergence with broader Fenchel-Young (FY) regularizers, with a special focus on those derived from Tsallis entropies. This approach enables learning posterior distributions with significantly smaller, or sparser, support than the prior, offering advantages in model interpretability and performance.
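
To make the sparsity point concrete: the Fenchel-Young regularizer built from the Tsallis entropy with $\alpha = 2$ recovers the sparsemax map, i.e. the Euclidean projection onto the probability simplex, which can return exact zeros, whereas the KL/softmax posterior is strictly positive everywhere. A minimal sketch (illustrative; the lecture applies this machinery to posterior distributions rather than to a single score vector):

```python
import numpy as np

def softmax(z):
    # classical KL regularization yields dense, strictly positive weights
    e = np.exp(z - z.max())
    return e / e.sum()

def sparsemax(z):
    # Fenchel-Young regularizer from the Tsallis entropy with alpha = 2:
    # Euclidean projection of the scores onto the probability simplex
    z_sorted = np.sort(z)[::-1]
    cum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1.0 + k * z_sorted > cum       # coordinates kept in the support
    k_max = k[support].max()
    tau = (cum[k_max - 1] - 1.0) / k_max     # threshold so the output sums to 1
    return np.maximum(z - tau, 0.0)

z = np.array([2.0, 1.2, 0.4, -0.5])
print("softmax  :", softmax(z))    # positive everywhere
print("sparsemax:", sparsemax(z))  # exact zeros: sparser support
```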

, Friday

Mathematics for Artificial Intelligence


, IT & Instituto Superior Técnico.

Abstract

This lecture first provides an introduction to classical variational inference (VI), a key technique for approximating complex posterior distributions in Bayesian methods, typically by minimizing the Kullback-Leibler (KL) divergence. We'll discuss its principles and common uses.

Building on this, the lecture introduces Fenchel-Young variational inference (FYVI), a novel generalization that enhances flexibility. FYVI replaces the KL divergence with broader Fenchel-Young (FY) regularizers, with a special focus on those derived from Tsallis entropies. This approach enables learning posterior distributions with significantly smaller, or sparser, support than the prior, offering advantages in model interpretability and performance.


, Thursday

Mathematics for Artificial Intelligence


, CAMGSD & Instituto Superior Técnico.

Abstract

We define a nonlinear Fourier transform which maps sequences of contractive $n \times n$ matrices to $SU(2n)$-valued functions on the circle $\mathbb T$. We characterize the image of compactly supported sequences and square-summable sequences on the half-line, and prove that the inverse map is well-defined on $SU(2n)$-valued functions whose diagonal $n \times n$ blocks are outer matrix functions. As an application, we prove infinite generalized quantum signal processing in the fully coherent regime.
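
For orientation, here is a numerical sketch of the scalar case $n = 1$ under one common convention (an assumption on my part, not taken from the abstract): the NLFT of a finite sequence $(F_k)$ is the ordered product of the elementary matrices $(1+|F_k|^2)^{-1/2}\begin{pmatrix} 1 & F_k z^k \\ -\overline{F_k} z^{-k} & 1 \end{pmatrix}$, which is $SU(2)$-valued on $|z| = 1$:

```python
import numpy as np

def nlft(F, z):
    # scalar (n = 1) nonlinear Fourier transform of a finite sequence F,
    # as an ordered product of elementary matrices, one per coefficient F_k
    G = np.eye(2, dtype=complex)
    for k, f in enumerate(F):
        M = np.array([[1.0, f * z**k],
                      [-np.conj(f) * z**(-k), 1.0]], dtype=complex)
        G = G @ (M / np.sqrt(1.0 + abs(f)**2))   # each factor has determinant 1
    return G

F = [0.5, -0.3 + 0.2j, 0.1j]          # a compactly supported sequence
z = np.exp(2j * np.pi * 0.3)          # a point on the unit circle T
G = nlft(F, z)
a, b = G[0, 0], G[0, 1]
print("|a|^2 + |b|^2 =", abs(a)**2 + abs(b)**2)   # = 1, so G(z) is in SU(2)
print("det G =", np.linalg.det(G))                # = 1
```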

, Thursday

Mathematics for Artificial Intelligence


, CAMGSD & Instituto Superior Técnico.

Abstract

We define a nonlinear Fourier transform which maps sequences of contractive $n \times n$ matrices to $SU(2n)$-valued functions on the circle $\mathbb T$. We characterize the image of compactly supported sequences and square-summable sequences on the half-line, and prove that the inverse map is well-defined on $SU(2n)$-valued functions whose diagonal $n \times n$ blocks are outer matrix functions. As an application, we prove infinite generalized quantum signal processing in the fully coherent regime.




, Thursday

Probability in Mathematical Physics

Room P3.31, Mathematics Building, Instituto Superior Técnico &


, Grupo de Física Matemática.

Abstract

We first present a brief review of the history of Brownian motion, up to the modern experiments where isolated Brownian particles are observed. Later, we introduce a one-space-dimensional wavefunction model of a heavy particle and a collection of light particles that might generate "Brownian-motion-like" trajectories as well as diffusive motion (displacement proportional to the square root of time). This model satisfies two conditions that grant, for the temporal motion of the heavy particle:

  1. An oscillating series with properties similar to those of the Ornstein-Uhlenbeck process;
  2. A best quadratic fit with an "average" non-positive curvature in a proper time interval.

We note that Planck's constant and the molecular mass enter into the diffusion coefficient, while they also recently appeared in experimental estimates; to our knowledge, this is the first microscopic derivation in which they contribute directly to the diffusion coefficient. Finally, we discuss whether cat states are present in the thermodynamic ensembles.
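
For reference (an illustrative classical benchmark, not the speaker's wavefunction model), the Ornstein-Uhlenbeck picture mentioned in point 1 above can be sketched numerically: an OU velocity process integrated in time yields a position whose mean-square displacement grows like $2Dt$ at long times, i.e. displacement proportional to the square root of time:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ornstein-Uhlenbeck velocity dv = -gamma v dt + sigma dW, integrated to a
# position x; at times t >> 1/gamma the mean-square displacement ~ 2 D t
gamma, sigma, dt, steps, walkers = 1.0, 1.0, 0.01, 20000, 500
v = np.zeros(walkers)
x = np.zeros(walkers)
msd = []
for n in range(steps):
    v += -gamma * v * dt + sigma * np.sqrt(dt) * rng.normal(size=walkers)
    x += v * dt
    if (n + 1) % 4000 == 0:
        msd.append(((n + 1) * dt, (x**2).mean()))

D = sigma**2 / (2.0 * gamma**2)   # long-time diffusion coefficient of the OU model
for t, m in msd:
    print(f"t = {t:6.1f}   <x^2> = {m:8.3f}   2 D t = {2 * D * t:8.3f}")
```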

(Joint work with W. D. Wick)

File available at https://hal.science/hal-04838011


