### 09/01/2020, 16:00 — 17:00 — Room P3.10, Mathematics Building

Stevo Rackovic, *Mathematics Department, Instituto Superior Técnico*

#### Gaussian Process Regression for Animation Rig Towards the Face Model

In professional 3D animation, artists model movements and scenes using rig functions - a constrained set of sliders or controllers that propagates deformations and drives the mechanics of an object or character in 3D-modeling tools. These controllers are built manually for each character and cannot be reused unless the underlying structure is exactly the same. There are often hundreds of adjustable parameters, and artists have to learn the structure anew for each character. This is usually a bottleneck in production that might be avoided by automating the process. D. Holden et al. proposed a possible solution using Gaussian process regression, which proved useful for skeletal (quadruped) characters. We want to extend this approach to a face model, which has a completely different structure from the skeletal one. In this work we explain the model for 3D face animation, the theory of Gaussian process regression and a method for applying it to the problem of interest. Finally, results and examples are presented with a simple animation model we have at our disposal.
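Since the talk centers on Gaussian process regression, a minimal generic GP-regression sketch in Python may help fix ideas; the RBF kernel, hyperparameters and toy 1-D data below are our illustrative choices, not the speaker's rig setup.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential kernel k(x, x') = s^2 exp(-(x - x')^2 / (2 l^2))."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(x_train, y_train, x_test, noise=1e-3):
    """Posterior mean and pointwise variance of a zero-mean GP at x_test."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train)
    Kss = rbf_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)          # K^{-1} y
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Toy 1-D example: recover a smooth function from five noisy-free samples.
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.sin(x)
mu, var = gp_predict(x, y, np.array([0.25, 1.75]))
```

In the rig-approximation setting, `x_train` would hold rig-controller values and `y_train` the corresponding mesh deformations; here we simply interpolate a smooth function between samples.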

### 19/11/2019, 14:00 — 15:00 — Room P3.10, Mathematics Building

Ana Ferreira, *Departamento de Matemática, Instituto Superior Técnico*

#### Extreme Value Theory applied to Longevity of Humans

There has been a long discussion on whether the distribution of human longevity has a finite or an infinite right endpoint. We shall discuss some recent results from Extreme Value Theory (EVT) applied to human longevity. Some basic methods of EVT will be reviewed, with the discussion oriented towards applications to human life-span data. It turns out that the quality of the actual data is a crucial issue. The results are based on data sets from the International Database on Longevity.

Joint work with Fei Huang (RSFAS, College of Business and Economics, Australian National University).
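Whether the right endpoint is finite or infinite is governed by the sign of the extreme value index. As an illustration of the basic EVT machinery such a talk reviews, here is a minimal sketch of the Hill estimator for the heavy-tailed case; the simulated Pareto data are ours, not the IDL longevity data.

```python
import numpy as np

def hill_estimator(sample, k):
    """Hill estimator of the extreme value index gamma > 0,
    based on the k largest order statistics."""
    x = np.sort(sample)[::-1]                    # descending order statistics
    return np.mean(np.log(x[:k]) - np.log(x[k]))

rng = np.random.default_rng(0)
# Classical Pareto with alpha = 2, i.e. true index gamma = 1 / alpha = 0.5.
sample = rng.pareto(2.0, size=50_000) + 1.0
gamma_hat = hill_estimator(sample, k=500)
```

The estimate should be close to the true value 0.5; in practice the choice of `k` is the delicate bias-variance trade-off, and data quality issues such as age rounding matter greatly.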

### 29/10/2019, 14:00 — 15:00 — Room P3.10, Mathematics Building

Wolfgang Schmid, *European University Viadrina, Department of Statistics, Frankfurt, Germany*

#### Monitoring Image Processes

In recent years we have observed dramatic changes in the way in which quality features of manufactured products are designed and inspected. The modeling and monitoring problems arising from new inspection methods and fast multi-stream high-speed sensors are quite complex. These measurement tools are used in emerging technologies such as additive manufacturing. It has been shown that in these fields other types of quality characteristics have to be monitored: it is mainly not the mean, the variance, the covariance matrix or a simple profile that reflects the behavior of the quality characteristics, but shapes, surfaces, images, etc. This is a new area for SPC. Note that more complicated characteristics arise in other fields of application as well, e.g., the monitoring of optimal portfolio weights in finance. Since many new approaches have been developed in recent years in image analysis, spatial statistics and spatio-temporal modeling, a huge number of tools are available to model the underlying processes. Thus the main problem lies in the development of monitoring schemes for such structures.

In this talk new procedures for monitoring image processes are introduced. They are based on multivariate exponential smoothing and cumulative sums, taking into account the local correlation structure. A comparison with existing methods is given. The performance of the analyzed methods is discussed within an extensive simulation study.

The presented results are based on a joint work with Yarema Okhrin and Ivan Semeniuk.
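A pixel-wise multivariate EWMA monitor can be sketched in a few lines. This is a simplification that ignores the local spatial correlation structure the talk incorporates, and all data and chart parameters below are illustrative.

```python
import numpy as np

def ewma_image_monitor(frames, lam=0.2, L=4.0):
    """Pixel-wise EWMA chart for an image stream: smooth each pixel over time
    and flag frames whose standardized deviation from the in-control mean
    image exceeds L (spatial correlation is ignored in this sketch)."""
    mu0 = frames[:20].mean(axis=0)            # phase-I in-control mean image
    sd0 = frames[:20].std(axis=0) + 1e-9
    z = mu0.copy()
    alarms = []
    for t, frame in enumerate(frames):
        z = lam * frame + (1 - lam) * z       # EWMA recursion, pixel-wise
        # asymptotic EWMA standard error: sd * sqrt(lam / (2 - lam))
        score = np.abs(z - mu0) / (sd0 * np.sqrt(lam / (2 - lam)))
        if score.max() > L:
            alarms.append(t)
    return alarms

rng = np.random.default_rng(1)
frames = rng.normal(0.0, 1.0, size=(40, 8, 8))
frames[25:, :4, :4] += 3.0                    # localized mean shift at t = 25
alarms = ewma_image_monitor(frames)
```

The localized shift is picked up within a few frames of its onset; the talk's procedures improve on this naive scheme precisely by modeling the correlation between neighboring pixels.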

### 15/10/2019, 14:00 — 15:00 — Room P3.10, Mathematics Building

Soraia Pereira, *University of Lisbon, Portugal*

#### A LASSO-type model for the bulk and tail of a heavy-tailed response

As is widely known, in an extreme value framework interest focuses on modelling the most extreme observations, disregarding the central part of the distribution; commonly, the effort centers on modelling the tail of the distribution by the generalized Pareto distribution, in a peaks-over-threshold framework. Yet, in most practical situations it would be desirable to model the bulk of the data along with the extreme values. In this talk, I will introduce a novel regression model for the bulk and the tail of a heavy-tailed response. Our regression model builds on the extended generalized Pareto distribution recently proposed by Naveau et al. (2016). The proposed model allows us to learn the effect of covariates on a heavy-tailed response via a LASSO-type specification implemented through a Lagrangian restriction. The performance of the proposed approach will be assessed through a simulation study, and the method will be applied to a real data set.

### 26/09/2019, 14:00 — 15:00 — Room P3.10, Mathematics Building

V. G. Kulkarni, *Department of Statistics and Operations Research, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA*

#### First Come, First Served Queues with Two Classes of Impatient Customers

We study systems with two classes of impatient customers who differ across the classes in their distributions of service times and patience times. The customers are served on a first-come, first-served (FCFS) basis, regardless of their class. Such systems are common in customer call centers, which often segment their arrivals into classes of callers whose requests may differ greatly in their complexity and criticality. We first consider an $M/G/1 + M$ queue and then analyze the $M/M/k + M$ case. Analyzing these systems using a queue length process proves intractable, as it would require us to keep track of the class of each customer at each position in the queue. Consequently, we introduce a virtual waiting time process in which the service times of customers who will eventually abandon the system are not considered. We analyze this process to obtain performance characteristics such as the percentage of customers who receive service in each class, the expected waiting time of customers in each class, and the average number of customers waiting in queue. We use our characterization of the system to perform a numerical analysis of the $M/M/k + M$ system, and find several managerial implications of administering an FCFS system with multiple classes of impatient customers. Finally, we compare the performance of a system based on data from a call center with the steady-state performance measures of a comparable $M/M/k + M$ system. We find that the performance measures of the $M/M/k + M$ system serve as good approximations of the system based on real data.

Joint work with:

Ivo Adan, Eindhoven University of Technology, the Netherlands,

and

Brett Hathaway, Kenan-Flagler School of Business, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.
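The virtual-waiting-time device, in which service times of customers who will abandon are simply never scheduled, also yields a compact simulation. A sketch for the $M/M/k + M$ case follows; the parameter values are illustrative.

```python
import random

def mmk_m_abandon_fraction(lam, mu, k, theta, n_cust=20_000, seed=7):
    """FCFS M/M/k+M simulation: a customer's offered wait is set by the
    earliest server-free epoch; customers whose exponential patience is
    shorter abandon, and their service times are never scheduled
    (the virtual-waiting-time idea)."""
    rng = random.Random(seed)
    t = 0.0
    free_at = [0.0] * k                      # next free epoch of each server
    abandoned = 0
    for _ in range(n_cust):
        t += rng.expovariate(lam)            # Poisson(lam) arrivals
        start = max(t, min(free_at))         # FCFS start-of-service epoch
        if start - t > rng.expovariate(theta):
            abandoned += 1                   # patience ran out while waiting
        else:
            i = free_at.index(min(free_at))
            free_at[i] = start + rng.expovariate(mu)
    return abandoned / n_cust

# Overloaded system: offered load lam/mu = 10 with only k = 8 servers.
frac = mmk_m_abandon_fraction(lam=10.0, mu=1.0, k=8, theta=1.0)
```

In the overloaded regime the fluid approximation predicts an abandonment fraction of roughly $(\lambda - k\mu)/\lambda = 0.2$, which the simulation should reproduce approximately.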

### 20/05/2019, 11:00 — 12:00 — Room P3.10, Mathematics Building

Alexandra Moura, *ISEG and CEMAPRE*

#### Optimal reinsurance of dependent risks

The talk will focus on the optimal reinsurance problem for two dependent risks, from the point of view of the ceding insurance company. We aim at maximizing the expected utility or the adjustment coefficient of the insurer's wealth. The insurer buys reinsurance on each risk separately. By risk we mean a line of business, a portfolio of policies or a policy. We assume a generic known dependence structure, so that the optimal solution depends on the joint distribution. Due to the dependencies, the optimal level of reinsurance for each risk involves a trade-off between the reinsurance premia of both risks. We study the shape of this trade-off and characterize the optimal treaties. We show that an optimal solution exists and provide an optimality condition. Unfortunately, explicit optimal treaties are not easy to compute from this condition. We discuss some strategies to obtain numerical approximations for the optimal treaties, as well as some aspects of the structure of the optimal strategy. Numerical results are presented assuming that the two risks are dependent through a copula structure and that the reinsurance treaty consists of a combination of quota-share and stop-loss. The sensitivity of the optimal reinsurance strategy to several factors is analyzed numerically, including the dependence structure, through the chosen copula, and the dependence strength, by means of the dependence parameter, corresponding to different values of Kendall's tau. A variety of reinsurance premium calculation principles are also considered.

### 06/05/2019, 11:00 — 12:00 — Room P8, Mathematics Building, IST

Manuel Cabral Morais, *Department of Mathematics & CEMAT, Instituto Superior Técnico - Universidade de Lisboa*

#### Improving the ARL profile of the Poisson EWMA chart

The Poisson exponentially weighted moving average (PEWMA) chart was proposed by Borror et al. (1998) to monitor the mean of counts of nonconformities. This chart regrettably fails to have an in-control average run length (ARL) larger than any out-of-control ARL, i.e., the PEWMA chart is ARL-biased. Moreover, due to the discrete character of its control statistic, it is difficult to set the control limits of the PEWMA chart in such a way that the in-control ARL takes a desired value, say ARL0. In this work, we propose an ARL-unbiased counterpart of the PEWMA chart and use the R statistical software to provide gripping illustrations of this chart, with a decidedly improved ARL profile and an in-control ARL equal to ARL0. We also compare the ARL performance of the proposed chart with that of a few competing control charts for the mean of i.i.d. Poisson counts.

Joint work with Sven Knoth (Department of Mathematics and Statistics — Faculty of Economics and Social Sciences — Helmut Schmidt University, Hamburg, Germany)
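For i.i.d. counts the run length of a Shewhart-type chart is geometric, so ARL profiles, and the ARL-bias phenomenon the talk corrects, can be computed directly. A simplified sketch for a Shewhart c-chart follows (not the PEWMA Markov-chain computation itself; the limits below are illustrative).

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), by direct summation."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def arl(lam, lcl, ucl):
    """ARL of a Shewhart chart that signals when X <= lcl or X > ucl:
    for i.i.d. counts the run length is geometric, so ARL = 1 / P(signal)."""
    p_signal = poisson_cdf(lcl, lam) + (1.0 - poisson_cdf(ucl, lam))
    return 1.0 / p_signal

lam0 = 10.0
arl_in = arl(lam0, 1, 18)      # in-control ARL
arl_down = arl(9.0, 1, 18)     # ARL after a downward shift of the mean
```

With these (deliberately asymmetric) limits the out-of-control ARL at $\lambda = 9$ exceeds the in-control ARL, i.e., the chart is ARL-biased; the unbiased designs in the talk adjust the limits (and randomize) so the ARL curve peaks exactly in control.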

### 24/04/2019, 13:00 — 14:00 — Room P4.35, Mathematics Building

Clément Dombry, *Université Franche-Comté, Besançon, France*

#### The coupling method in extreme value theory

One of the main goals of extreme value theory is to infer the probability of extreme events for which only limited observations are available, which requires extrapolation of the tail of the distribution of the observations. One major result is the Balkema-de Haan-Pickands theorem, which provides an approximation of the distribution of exceedances above a high threshold by a generalized Pareto distribution. We revisit these results with coupling arguments and provide quantitative estimates for the Wasserstein distance between the empirical distribution of exceedances and the limit Pareto model. In the second part of the talk, we extend the results to the analysis of a proportional tail model for quantile regression, closely related to the heteroscedastic extremes framework developed by Einmahl et al. (JRSSB 2016). We introduce coupling arguments relying on total variation and Wasserstein distances for the analysis of the asymptotic behavior of estimators of the extreme value index and the integrated skedasis function.

Joint work with B. Bobbia and D. Varron (Université de Franche Comté).
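The Wasserstein comparison can be illustrated numerically: for two equal-size samples, the empirical 1-Wasserstein distance is simply the mean absolute difference of their order statistics. A toy sketch with exponential data, whose exceedances are exactly GPD with shape $\xi = 0$ by memorylessness (our example, not the talk's quantitative bounds):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.exponential(1.0, size=100_000)
u = np.quantile(x, 0.99)            # high threshold
exc = x[x > u] - u                  # exceedances: again Exp(1), memorylessness

# Reference sample from the limiting GPD (shape xi = 0 is the exponential).
ref = rng.exponential(1.0, size=exc.size)

# Empirical 1-Wasserstein distance between two equal-size samples:
# the mean absolute difference of their sorted values.
w1 = np.abs(np.sort(exc) - np.sort(ref)).mean()
```

Here the distance is purely sampling noise of order $n^{-1/2}$; away from this exactly-GPD case, a bias term reflecting the speed of convergence in the Balkema-de Haan-Pickands theorem is added, which is what the coupling bounds quantify.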

### 22/04/2019, 11:00 — 12:00 — Room P8, Mathematics Building, IST

Soraia Pereira, *Faculdade de Ciências da Universidade de Lisboa and CEAUL*

#### Geostatistical analysis of sardine eggs data — a Bayesian approach

Understanding the distribution of animals over space, as well as how that distribution is influenced by environmental covariates, is a fundamental requirement for the effective management of animal populations. This is especially the case for populations which are harvested. The sardine is one of the most important fisheries species, for its economic, sociological, anthropological and cultural value.

Here we intend to understand the spatial distribution of the average number of sardine eggs per $m^3$. Our main objectives are to identify the environmental variables that best explain the spatial variation in sardine egg density and to make predictions at spatial locations that were not observed.

The data present an excess of zeros as well as extreme values. To deal with this, we propose a point-referenced zero-inflated model for the probability of presence together with the positive sardine egg density, and a point-referenced generalized Pareto model for the extremes. Finally, we combine the results of these two models to obtain spatial predictions of the variable of interest. We follow a Bayesian approach, with inference carried out using the R-INLA package in the R software.

### 01/04/2019, 11:00 — 12:00 — Room P8, Mathematics Building, IST

João Xavier, *ISR and Instituto Superior Técnico*

#### Distributed Learning Algorithms for Big Data

Modern datasets are increasingly collected by teams of agents that are spatially distributed: sensor networks, networks of cameras, and teams of robots. To extract information in a scalable manner from those distributed datasets, we need distributed learning. In the vision of distributed learning, no central node exists; the spatially distributed agents are linked by a sparse communication network and exchange short messages between themselves to directly solve the learning problem. To work in the real world, a distributed learning algorithm must cope with several challenges, e.g., correlated data, failures in the communication network, and minimal knowledge of the network topology. In this talk, we present some recent distributed learning algorithms that can cope with such challenges. Although our algorithms are simple extensions of known ones, these extensions require new mathematical proofs that elicit interesting applications of probability theory tools, namely ergodic theory.
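A canonical building block of such algorithms is decentralized averaging: agents repeatedly mix their values with those of their network neighbors and, without any central node, all converge to the global mean. A minimal sketch, with the graph and step size as our illustrative choices:

```python
import numpy as np

def gossip_average(values, edges, rounds=200, step=0.5):
    """Decentralized averaging by repeated local mixing: x <- W x, with a
    symmetric weight matrix W built from the graph Laplacian, drives every
    node to the mean of the initial values."""
    x = np.asarray(values, dtype=float)
    n = x.size
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0               # undirected communication links
    lap = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    W = np.eye(n) - step * lap / A.sum(axis=1).max()
    for _ in range(rounds):
        x = W @ x                             # one round of neighbor exchanges
    return x

# Path graph 0-1-2-3: each node communicates only with its direct neighbors.
x = gossip_average([0.0, 4.0, 8.0, 12.0], [(0, 1), (1, 2), (2, 3)])
```

Every row of `W` mixes a node only with its neighbors, so each update uses purely local messages; since `W` is symmetric with rows summing to one, the iteration preserves the sum and converges to the average (here 6 at every node).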

### 18/03/2019, 11:00 — 12:00 — Room P3.10, Mathematics Building

Manuela de Souto Miranda, *Universidade de Aveiro and CIDMA*

#### Contributions for the detection of multivariate outliers

The detection of outliers in multivariate models is always a difficult matter, but the subject is even more complex when dealing with dependent structures, as is the case with the Simultaneous Equation Model (SEM). Unlike other models defined by systems of equations, such as multivariate regression, the SEM assumes that the response variable in each equation can appear as an explanatory variable in the rest of the system, meaning that explanatory variables can be correlated with the error terms. We present a method of outlier detection that bypasses those difficulties using the asymptotic distribution of adequate robust Mahalanobis distances. The process identifies anomalous data points as outliers of the SEM in simple steps and provides a clear visualization. We illustrate this procedure with a real econometric data set.

### 06/03/2019, 13:00 — 14:00 — Room P3.10, Mathematics Building

Ana Bianco and Graciela Boente, *Universidad de Buenos Aires*

#### Robust logistic regression with sparse predictor variables

Nowadays, dealing with high-dimensional data is a recurrent problem that cuts across modern statistics. One main feature of high-dimensional data is that the dimension $p$, that is, the number of covariates, is high, while the sample size $n$ is relatively small. In this circumstance, the bet-on-sparsity principle suggests proceeding under the assumption that most of the effects are not significant. Sparse covariates are frequent in classification problems, and in this situation the task of variable selection may also be of interest. We focus on the logistic regression model, and our aim is to develop robust and sparse estimators of the regression parameter in order to perform estimation and variable selection at the same time. For this purpose, we introduce a family of penalized M-type estimators for the logistic regression parameter that are stable against atypical data. We explore different penalization functions and we introduce the so-called sign penalization. This new penalty has the advantage that it does not shrink the estimated coefficients to $0$ and that it depends on only one parameter. We will discuss the variable selection capability of the proposal as well as its asymptotic behaviour. Through a numerical study, we compare the finite-sample performance of the proposal with different penalized estimators, either robust or classical, under different scenarios.
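As a point of comparison for the penalized M-estimators discussed in the talk, the classical (non-robust) L1-penalized logistic regression fits in a short proximal-gradient loop. The sign penalty itself is the authors' proposal, so the sketch below uses the standard LASSO penalty on simulated data.

```python
import numpy as np

def l1_logistic(X, y, lam=0.1, step=0.1, iters=2000):
    """Sparse logistic regression via proximal gradient (ISTA) with an L1
    penalty - a classical stand-in for the robust/sign penalties of the talk."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (1.0 / (1.0 + np.exp(-X @ beta)) - y) / n  # logistic grad
        beta = beta - step * grad
        # soft-thresholding: the proximal operator of the L1 penalty
        beta = np.sign(beta) * np.maximum(np.abs(beta) - step * lam, 0.0)
    return beta

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 20))
true = np.zeros(20)
true[:3] = 2.0                                # only 3 active covariates
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X @ true))).astype(float)
beta = l1_logistic(X, y)
```

The soft-thresholding step zeroes out most inactive coefficients while shrinking the active ones toward zero; the sign penalty of the talk is designed precisely to avoid that shrinkage of the selected coefficients.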

### 25/02/2019, 11:00 — 12:00 — Room P3.10, Mathematics Building

Anna Couto, *INESC-ID and CEMAT*

#### A Comprehensive Methodology to Analyse Topic Difficulties in Educational Programmes

We propose a comprehensive Learning Analytics methodology to investigate the level of understanding students achieve in the learning process. The goals of this methodology are

- To identify topics in which students experience difficulties;
- To assess whether these difficulties are recurrent across semesters;
- To decide whether there are conceptual associations between topics in which students experience difficulties; and, more generally,
- To discover statistically significant groups of topics in which students show similar performance.

The proposed methodology uses statistics and data visualization techniques to address the first and second goals, frequent itemset mining to tackle the third, and biclustering to find relationships within educational data, revealing meaningful and statistically significant patterns of students’ performance.

We illustrate the application of the methodology to a Computer Science course.

### 13/12/2018, 14:00 — 15:00 — Room P3.10, Mathematics Building

Margarida G. M. S. Cardoso, *Instituto Universitário de Lisboa (ISCTE-IUL), Business Research Unit (BRU-IUL)*

#### Working towards a typology of indices of agreement for clustering evaluation

Indices of agreement (IA) are commonly used to evaluate the stability of a clustering solution or its agreement with ground truth - internal and external validation of the same solution, respectively.

IA provide different measures of the accordance between two partitions of the same data set, based on contingency-table data. Despite their frequent use in clustering evaluation, there are still open issues regarding the specific thresholds at which each index allows one to conclude about the degree of agreement between the partitions.

To acquire new insights into the behavior of the indices that may help improve clustering evaluation, 14 paired indices of agreement are analyzed within diverse experimental scenarios - with balanced or unbalanced clusters that are poorly, moderately or well separated. The paired indices' observed values are all based on a cross-classification table of counts of pairs of observations that both partitions agree to join and/or separate in the clusters. The IADJUST method is used to learn about the behavior of the indices under the hypothesis of agreement between partitions occurring by chance (H0). It relies on the generation of contingency tables under H0 and is a simulation-based procedure that makes it possible to correct any index of agreement by deducting agreement by chance, overcoming previous limitations of analytical or approximate approaches (Amorim and Cardoso, 2015).

The results suggest a preliminary typology of paired indices of agreement based on their distributional characteristics under H0. Inter-scenarios symbolic data referring to location, dispersion and shape measures of IA distributions under H0 are used to build this typology.

### Reference

Amorim, M. J., & Cardoso, M. G. (2015). *Comparing clustering solutions: The use of adjusted paired indices.* Intelligent Data Analysis, 19(6), 1275-1296.

Joint work with Maria José Amorim (Department of Mathematics of ISEL, Lisbon, Portugal).
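The pair-counting construction behind such indices, and the correction for chance agreement, can be made concrete with the adjusted Rand index (ARI), the best-known member of the family; the IADJUST simulation itself is not reproduced here.

```python
import numpy as np
from math import comb

def adjusted_rand(labels_a, labels_b):
    """Adjusted Rand index from the cross-classification (contingency) table
    of two partitions: pair-counting agreement corrected for chance."""
    a_ids, a = np.unique(labels_a, return_inverse=True)
    b_ids, b = np.unique(labels_b, return_inverse=True)
    table = np.zeros((a_ids.size, b_ids.size), dtype=int)
    for i, j in zip(a, b):
        table[i, j] += 1                       # counts n_ij of shared members
    sum_cells = sum(comb(int(c), 2) for c in table.ravel())
    sum_rows = sum(comb(int(c), 2) for c in table.sum(axis=1))
    sum_cols = sum(comb(int(c), 2) for c in table.sum(axis=0))
    total = comb(len(labels_a), 2)
    expected = sum_rows * sum_cols / total     # agreement expected by chance
    max_index = (sum_rows + sum_cols) / 2
    return (sum_cells - expected) / (max_index - expected)

ari_same = adjusted_rand([0, 0, 1, 1], [1, 1, 0, 0])   # identical up to labels
ari_diff = adjusted_rand([0, 0, 1, 1], [0, 1, 0, 1])   # maximally crossed
```

The subtraction of `expected` is the chance correction; IADJUST replaces this analytical hypergeometric expectation with a simulated H0 distribution of the whole contingency table, which works for any paired index.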

### 29/11/2018, 14:00 — 15:00 — Room P3.10, Mathematics Building

Cláudia Nunes, *CEMAT & DM, Instituto Superior Técnico, Universidade de Lisboa*

#### Feed-in Tariff Contract Schemes and Regulatory Uncertainty

This paper presents a novel analysis of four finite feed-in tariff (FIT) schemes, namely fixed-price, fixed-premium, minimum price guarantee and sliding premium with a cap and a floor, under market and regulatory uncertainty. Using an analytical real options framework, we derive the project value, the optimal investment threshold and the value of the investment opportunity for the four FIT schemes. Regulatory uncertainty is modeled by allowing the tariff to be reduced before the signature of the contract. While market uncertainty defers investment, a higher and more likely tariff reduction accelerates investment. We also present several findings aimed at policymaking decisions, namely regarding the choice, level and duration of the FIT. For instance, the investment threshold of the sliding premium with a cap and a floor is lower than that of the minimum price guarantee, which suggests that the former regime is a better policy than the latter because it accelerates the investment while avoiding overcompensation.

### 15/11/2018, 14:00 — 15:00 — Room P3.10, Mathematics Building

Carina Silva, *Escola Superior de Tecnologia da Saúde de Lisboa do Instituto Politécnico de Lisboa e CEAUL*

#### Selecting differentially expressed genes in sample subgroups on microarray data

A common task in analysing microarray data is to determine which genes are differentially expressed under two (or more) kinds of tissue samples or samples subjected to different experimental conditions. It is well known that biological samples are heterogeneous due to factors such as molecular subtypes or genetic background, which are often unknown to the investigator. For instance, in experiments involving molecular classification of tumours it is important to identify significant subtypes of cancer. Bimodal or multimodal distributions often reflect the presence of mixtures of subsamples.

Consequently, truly differentially expressed genes in sample subgroups may be lost if the usual statistical approaches are used. In this work we propose a graphical tool that identifies genes with up- and down-regulation, as well as genes whose differential expression reveals hidden subclasses, which are usually missed when current statistical methods are used.

### 08/11/2018, 14:00 — 15:00 — Room P3.10, Mathematics Building

Carlos Oliveira, *Grupo de Física Matemática da Universidade de Lisboa*

#### Optimal investment decision under switching regimes of subsidy support

We address the problem of making a managerial decision when the investment project is subsidized, which results in the resolution of an infinite-horizon optimal stopping problem for a switching diffusion driven by either a homogeneous or an inhomogeneous continuous-time Markov chain. We provide a characterization of the value function (and the optimal strategy) of the optimal stopping problem. On the one hand, in the general case, we prove that the value function is the unique viscosity solution to a system of HJB equations. On the other hand, when the Markov chain is homogeneous and the switching diffusion is one-dimensional, we obtain stronger results: the value function is the difference between two convex functions.

### 29/10/2018, 11:00 — 12:00 — Room P3.10, Mathematics Building

Wolfgang Schmid, *European University Viadrina, Department of Statistics, Germany*

#### Monitoring Non-Stationary Processes

In nearly all papers on statistical process control for time-dependent data it is assumed that the underlying process is stationary. However, in finance and economics we are often faced with situations where the process is close to non-stationarity, or even non-stationary.

In this talk the target process is modeled by a multivariate state-space model, which may be non-stationary. Our aim is to monitor its mean behavior. The likelihood ratio method, the sequential probability ratio test, and the Shiryaev-Roberts procedure are applied to derive control charts signaling a change from the assumed mean structure. These procedures depend on certain reference values which have to be chosen by the practitioner in advance. The corresponding generalized approaches are considered as well, and generalized control charts are determined for state-space processes; these schemes have no further design parameters. In an extensive simulation study the behavior of the introduced schemes is compared using various performance criteria such as the average run length, the average delay, the probability of a successful detection, and the probability of a false detection.

### Literature

- Lazariv T. and Schmid W. (2018). Surveillance of non-stationary processes. *AStA - Advances in Statistical Analysis*, https://doi.org/10.1007/s10182-018-00330-4
- Lazariv T. and Schmid W. (2018). Challenges in monitoring non-stationary time series. In *Frontiers in Statistical Process Control*, Vol. 12, pp. 257-275. Berlin: Springer.

Joint work with Taras Lazariv (European University Viadrina, Department of Statistics, Germany).
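For a stationary baseline, the cumulative-sum idea underlying several of these schemes reduces to a few lines. A univariate sketch follows (not the state-space charts of the talk; the reference value $k$ and decision limit $h$ are illustrative).

```python
import numpy as np

def cusum_upper(xs, target, k=0.5, h=5.0):
    """One-sided upper CUSUM for a mean shift:
    S_t = max(0, S_{t-1} + x_t - target - k); signal when S_t > h.
    Returns the index of the first signal, or None."""
    s = 0.0
    for t, x in enumerate(xs):
        s = max(0.0, s + x - target - k)   # accumulate evidence above target+k
        if s > h:
            return t
    return None

rng = np.random.default_rng(5)
# In control for 100 steps, then an upward mean shift of 1.5 at t = 100.
data = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.5, 1.0, 50)])
t_alarm = cusum_upper(data, target=0.0)
```

The reference value `k` plays the role of the reference values the talk's likelihood-based charts require; the generalized schemes discussed there remove exactly this design parameter.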

### 04/10/2018, 14:00 — 15:00 — Room P3.10, Mathematics Building

Manuel Cabral Morais, *CEMAT & DM, Instituto Superior Técnico, Universidade de Lisboa*

#### A thinning-based EWMA chart to monitor counts: some preliminary results

Shewhart control charts are known to be somewhat insensitive to shifts of small and moderate size. Consequently, alternative control schemes such as the cumulative sum (CUSUM) and the exponentially weighted moving average (EWMA) charts have been proposed to speed up the detection of such shifts.

The novel chart we propose relies on an EWMA control statistic where the usual scalar product is replaced by what we call fractional binomial thinning, to avoid the typical over-smoothing ascribable to ceiling, rounding, and flooring operations. The properties of this discrete statistic are, to a moderate extent, similar to those of its continuous EWMA counterpart, and the run length (RL) performance of the associated chart can be computed exactly using the Markov chain approach for independent and identically distributed (i.i.d.) counts. Moreover, this chart is set in such a way that: the average run length (ARL) curve attains a maximum in the in-control situation, i.e., the chart is ARL-unbiased; and the in-control ARL is equal to a pre-specified value.

We use the R statistical software to provide compelling illustrations of this unconventional EWMA chart and to compare its RL performance with the ones of a few competing control charts for the mean of i.i.d. Poisson counts.

### Keywords

Average run length; Exponentially weighted moving average; Fractional binomial thinning; Statistical process control.
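The thinning operator that replaces the scalar product keeps the control statistic integer-valued. A rough sketch of a thinning-based EWMA-type recursion follows; it uses plain binomial thinning, not the fractional binomial thinning of the talk, and all parameters are illustrative.

```python
import numpy as np

def binomial_thinning(alpha, x, rng):
    """Binomial thinning alpha o x: a Binomial(x, alpha) draw, i.e. the sum
    of x independent Bernoulli(alpha) variables, so the result stays integer."""
    return rng.binomial(x, alpha)

def thinning_ewma(counts, lam, rng):
    """Integer-valued EWMA-type recursion z_t = lam o x_t + (1 - lam) o z_{t-1},
    with o denoting binomial thinning (a sketch of the idea only)."""
    z = counts[0]
    path = [z]
    for x in counts[1:]:
        z = binomial_thinning(lam, x, rng) + binomial_thinning(1 - lam, z, rng)
        path.append(z)
    return np.array(path)

rng = np.random.default_rng(8)
counts = rng.poisson(10, size=200)     # in-control i.i.d. Poisson(10) counts
z = thinning_ewma(counts, 0.2, rng)
```

Because thinning preserves the mean, the statistic fluctuates around the in-control level 10 while remaining a genuine count, so control limits can be placed on integers without the rounding operations the abstract mentions.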

### 27/09/2018, 14:00 — 15:00 — Room P3.10, Mathematics Building

Cláudia Soares, *ISR - Instituto de Sistemas e Robótica, Instituto Superior Técnico, Portugal*

#### Distributed learning in large scale networks: from GPS-denied localization to MAP inference

Big Data can elicit greater insight, but storage and computational limitations, and even privacy concerns, challenge learning from massive data sets. The distributed paradigm fits such problems just right: these algorithms work on partial data and fuse intermediate results within local neighborhoods, over a distributed network of computing nodes. In this talk we will take a tour starting at GPS-denied localization and culminating in a general distributed MAP inference algorithm for graphical models.