
, Thursday

Probability and Statistics


, Faculty of Business Administration and Economics, European University Viadrina, Frankfurt (Oder), Germany.

Abstract

In recent years, matrix-valued data have received an increasing amount of attention, owing to their frequent appearance in fields such as signal processing, finance, medicine, and engineering. Here we consider matrix-valued time series processes, and our aim is to detect changes in their mean behavior.

An obvious way to handle the problem is to make use of vectorization, i.e., the columns of the matrix are stacked into a single vector. The problem is then reduced to the detection of a change in a vector time series. Such problems have been discussed by, e.g., Kramer and Schmid (1997), Bodnar et al. (2023), and Bodnar et al. (2024). The disadvantage of vectorization is that the resulting time series process may be high-dimensional, which makes process identification quite difficult.
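As a concrete illustration of this step, a minimal sketch in Python (toy data only; the vec operator stacks the columns of each matrix, so a $p \times q$ matrix series becomes a $pq$-dimensional vector series):

```python
import numpy as np

# Toy matrix-valued time series: T observations of a p x q matrix,
# with a shift in the mean introduced halfway through.
rng = np.random.default_rng(0)
T, p, q = 100, 3, 2
X = rng.normal(size=(T, p, q))
X[50:] += 0.5  # change in the mean after t = 50

# Vectorization: vec(A) stacks the columns of A into one long vector,
# so the problem becomes change detection in a (p*q)-dimensional series.
vec_X = X.transpose(0, 2, 1).reshape(T, p * q)
print(vec_X.shape)  # (100, 6)
```

Even this small example hints at the blow-up: a modest 10 x 10 matrix series already becomes a 100-dimensional vector series, which is what makes process identification after vectorization difficult.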

In the last five years, other types of matrix-valued time series processes have been proposed (e.g., Chen et al. (2021), Wu and Bi (2023)). These approaches are characterized by fewer parameters and, for that reason, are of great interest in practice.

Using these new types of time series models, EWMA control charts for matrix-valued time series are derived. The control design is calculated, and some explicit results are given for matrix-valued autoregressive processes. The performance of the charts is compared within an extensive simulation study.
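The EWMA recursion underlying such charts can be sketched as follows (an illustrative multivariate EWMA on a vectorized series, with hypothetical names; the actual matrix-valued charts and their control design are the subject of the talk):

```python
import numpy as np

def ewma_statistics(X, lam=0.1):
    """Multivariate EWMA statistic for a vectorized matrix time series.

    Generic sketch: Z_t = (1 - lam) Z_{t-1} + lam (X_t - mu0), monitored
    via the Mahalanobis-type statistic T_t^2 = Z_t' Sigma^{-1} Z_t,
    rescaled by the asymptotic EWMA variance factor lam / (2 - lam).
    """
    X = np.asarray(X, dtype=float)
    mu0 = X.mean(axis=0)              # in practice: known in-control mean
    sigma = np.cov(X, rowvar=False)   # in practice: known in-control covariance
    sigma_inv = np.linalg.inv(sigma)
    factor = lam / (2.0 - lam)        # asymptotic variance factor of the EWMA
    z = np.zeros(X.shape[1])
    stats = np.empty(len(X))
    for t, x in enumerate(X):
        z = (1.0 - lam) * z + lam * (x - mu0)
        stats[t] = z @ sigma_inv @ z / factor
    return stats  # a change is signaled when stats[t] exceeds a calibrated limit
```

In a real chart, `mu0` and `sigma` would come from an in-control phase and the control limit would be calibrated to a target in-control average run length; here they are estimated from the data purely for illustration.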

Joint work with:

  • S. Knoth, Department of Economics and Social Sciences, Institute of Mathematics and Statistics, Helmut Schmidt University, Hamburg, Germany
  • Y. Okhrin, Department of Statistics and Data Science, Faculty of Business and Economics, University of Augsburg, Germany
  • V. Petruk, Department of Statistics, Faculty of Business Administration and Economics, European University Viadrina, Frankfurt (Oder), Germany
, Thursday

Mathematical Relativity


Arthur Suvorov, University of Tuebingen.

Abstract

The set of (smooth) metrics that can be placed on a Riemannian manifold defines an infinite-dimensional "superspace" that, remarkably, can itself be imbued with the structure of a (Fréchet) manifold. The subspace pertaining to (spatially sliced) Einstein metrics was explored in detail by Wheeler and collaborators in the late 1950s, as it provides a means to describe a collection of spacetimes purely in terms of geometry, through the famous words "mass without mass; charge without charge". At least in some restricted contexts, a natural basis relates to multipole moments, which provide a tool to decompose a spacetime into a set of numbers. I will describe the construction of such superspaces, how to define inner products and (weak) Riemannian metrics there, and how they may be useful in providing astrophysical intuition. For instance, geodesics can be computed on Met(M), which allows one to define a single number that tells you how "distant" two spacetimes (e.g., two Kerr black holes) are from one another.
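One standard example of such a (weak) Riemannian structure, stated here only for orientation (the talk may use a different inner product), is the $L^2$ (Ebin) metric, which pairs two symmetric tensor perturbations $h$, $k$ of a metric $g \in \mathrm{Met}(M)$ as

$$ G_g(h, k) = \int_M \operatorname{tr}\left( g^{-1} h \, g^{-1} k \right) \mathrm{dvol}_g, $$

and the geodesic distance induced by such a metric is one natural candidate for the single number measuring how "distant" two geometries are.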




, Wednesday

Mathematics for Artificial Intelligence


, IT & Instituto Superior Técnico.

Abstract

Existing machine learning frameworks operate over the field of real numbers ($\mathbb{R}$) and learn representations in real (Euclidean or Hilbert) vector spaces (e.g., $\mathbb{R}^d$). Their underlying geometric properties align well with intuitive concepts such as linear separability, minimum enclosing balls, and subspace projection; and basic calculus provides a toolbox for learning through gradient-based optimization.

But is this the only possible choice? In this seminar, we study the suitability of a radically different field as an alternative to $\mathbb{R}$: the ultrametric and non-archimedean field of $p$-adic numbers, $\mathbb{Q}_p$. The hierarchical structure of the $p$-adics and their interpretation as infinite strings make them an appealing tool for coding theory and hierarchical representation learning. Our exploratory theoretical work establishes the building blocks for classification, regression, and representation learning with the $p$-adics, providing learning models and algorithms. We illustrate how simple Quillian semantic networks can be represented as a compact $p$-adic linear network, a construction which is not possible with the field of reals. We finish by discussing open problems and opportunities for future research enabled by this new framework.

Based on:
André F. T. Martins, Learning with the $p$-adics
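As a small illustration of the ultrametric structure mentioned above (a generic sketch of the $p$-adic valuation and norm, not code from the paper):

```python
from fractions import Fraction

def p_adic_valuation(x, p):
    """v_p(x): the exponent of p in x; v_p(0) = +infinity by convention."""
    if x == 0:
        return float("inf")
    x = Fraction(x)
    v = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:   # factors of p in the numerator raise the valuation
        num //= p
        v += 1
    while den % p == 0:   # factors of p in the denominator lower it
        den //= p
        v -= 1
    return v

def p_adic_norm(x, p):
    """|x|_p = p^(-v_p(x)): numbers divisible by high powers of p are 'small'."""
    v = p_adic_valuation(x, p)
    return 0.0 if v == float("inf") else float(p) ** (-v)

assert p_adic_norm(12, 2) == 0.25            # 12 = 2^2 * 3, so |12|_2 = 2^-2
assert p_adic_norm(Fraction(1, 8), 2) == 8.0  # 1/8 = 2^-3, so |1/8|_2 = 2^3
```

The norm is non-archimedean, satisfying the ultrametric inequality $|x+y|_p \le \max(|x|_p, |y|_p)$, which is what induces the hierarchical, tree-like geometry the abstract refers to.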


, Thursday

Lisbon WADE — Webinar in Analysis and Differential Equations


Gevorg Mnatsakanyan, Yerevan State University.

Abstract

The Malmquist-Takenaka (MT) system is a complete orthonormal system in $H^2(\mathbb{T})$ generated by an arbitrary sequence of points in the unit disk that do not approach the boundary too fast. The $n$th point of the sequence contributes, to the $n$th and all subsequent terms of the system, a factor given by a Möbius transform taking that point to $0$. One can recover the classical trigonometric system, its perturbations, or conformal transformations as particular examples of the MT system. However, for many interesting choices of the generating sequence, the MT system is less well understood. We prove almost everywhere convergence of the MT series for three different classes of generating sequences.
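For orientation, the MT system admits the standard closed form (notation here is generic and may differ from the talk): given points $(a_n)_{n \ge 0}$ in the unit disk with $\sum_n (1 - |a_n|) = \infty$,

$$ \phi_n(z) = \frac{\sqrt{1 - |a_n|^2}}{1 - \overline{a_n} z} \prod_{k=0}^{n-1} \frac{z - a_k}{1 - \overline{a_k} z}, $$

and the constant choice $a_n \equiv 0$ gives $\phi_n(z) = z^n$, recovering the trigonometric system.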



, Friday

Mathematics for Artificial Intelligence


, Sapienza University of Rome.

Abstract

The Hopfield Neural Network has played, ever since its introduction in 1982 by John Hopfield, a fundamental role in the interdisciplinary study of the storage and retrieval capabilities of neural networks, a role further highlighted by the 2024 Physics Nobel Prize.

From its strong link with biological pattern retrieval mechanisms to its high-capacity Dense Associative Memory variants and connections to generative models, the Hopfield Neural Network has found relevance both in neuroscience and in the most modern AI systems.

Much of our theoretical knowledge of these systems, however, comes from a surprising and powerful link with statistical mechanics, first established and explored in the seminal works of Amit, Gutfreund, and Sompolinsky in the second half of the 1980s: the interpretation of associative memories as spin-glass systems.

In this talk, we will present this duality, as well as the mathematical techniques from spin-glass systems that allow us to accurately and rigorously predict the behavior of different types of associative memories capable of undertaking a variety of tasks.
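A minimal sketch of the classical Hopfield retrieval dynamics discussed above (Hebbian couplings and sign updates; illustrative only, not the Dense Associative Memory variants or the spin-glass calculations from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 200, 5                          # N spins, K stored patterns (K << N)
patterns = rng.choice([-1, 1], size=(K, N))

# Hebbian couplings: W_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, no self-coupling.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

# Retrieval: start from a corrupted pattern and iterate sign dynamics.
state = patterns[0].copy()
flip = rng.choice(N, size=30, replace=False)
state[flip] *= -1                      # flip 15% of the spins
for _ in range(10):
    state = np.where(W @ state >= 0, 1, -1)

overlap = (state @ patterns[0]) / N    # close to 1 when retrieval succeeds
```

At this low load ($K/N = 0.025$, well below the critical storage capacity of roughly $0.138 N$ identified by Amit, Gutfreund, and Sompolinsky), the corrupted pattern is reliably recovered; it is precisely this kind of phase behavior that the spin-glass analysis predicts.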






Instituto Superior Técnico
Av. Rovisco Pais, Lisboa, PT