13/12/2018, 14:00 — 15:00 — Room P3.10, Mathematics Building
Margarida G. M. S. Cardoso, Instituto Universitário de Lisboa (ISCTE-IUL), Business Research Unit (BRU-IUL)
Working towards a typology of indices of agreement for clustering evaluation
Indices of agreement (IA) are commonly used to evaluate the stability of a clustering solution or its agreement with ground truth – internal and external validation of the same solution, respectively.
IA provide different measures of the accordance between two partitions of the same data set, all based on contingency-table data. Despite their frequent use in clustering evaluation, there are still open issues regarding the specific thresholds each index requires to conclude about the degree of agreement between the partitions.
To gain new insights into the behavior of these indices that may help improve clustering evaluation, 14 paired indices of agreement are analyzed within diverse experimental scenarios – with balanced or unbalanced clusters that are poorly, moderately or well separated. The paired indices' observed values are all based on a cross-classification table counting the pairs of observations that both partitions agree to join and/or separate in the clusters. The IADJUST method is used to learn about the behavior of the indices under the hypothesis of agreement between partitions occurring by chance (H0). It relies on the generation of contingency tables under H0 and is a simulation-based procedure that enables any index of agreement to be corrected by deducting the agreement due to chance, overcoming the limitations of previous analytical or approximate approaches (Amorim and Cardoso, 2015).
The results suggest a preliminary typology of paired indices of agreement based on their distributional characteristics under H0. Inter-scenarios symbolic data referring to location, dispersion and shape measures of IA distributions under H0 are used to build this typology.
Amorim, M. J., & Cardoso, M. G. (2015). Comparing clustering solutions: The use of adjusted paired indices. Intelligent Data Analysis, 19(6), 1275-1296.
Joint work with Maria José Amorim (Department of Mathematics of ISEL, Lisbon, Portugal).
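As a rough illustration of the chance-correction idea discussed above — a label-permutation sketch, not the IADJUST procedure itself (which generates contingency tables under H0 directly); the simple Rand index and all data here are assumptions:

```python
import random

def rand_index(a, b):
    """Proportion of object pairs on which two partitions agree
    (both join the pair in a cluster, or both separate it)."""
    n = len(a)
    agree = sum(
        (a[i] == a[j]) == (b[i] == b[j])
        for i in range(n) for j in range(i + 1, n)
    )
    return agree / (n * (n - 1) / 2)

def null_distribution(a, b, n_sim=2000, seed=0):
    """Distribution of the index when agreement occurs only by chance (H0):
    fix partition a and randomly permute the labels of partition b."""
    rng = random.Random(seed)
    b = list(b)
    values = []
    for _ in range(n_sim):
        rng.shuffle(b)
        values.append(rand_index(a, b))
    return values

a = [0, 0, 0, 1, 1, 1, 2, 2, 2]
b = [0, 0, 1, 1, 1, 2, 2, 2, 0]
obs = rand_index(a, b)
null = null_distribution(a, b)
mean_h0 = sum(null) / len(null)
# chance-corrected index: (observed - E[index | H0]) / (max - E[index | H0])
adjusted = (obs - mean_h0) / (1 - mean_h0)
```

The location, dispersion and shape of `null` are exactly the kind of distributional characteristics under H0 on which the proposed typology is built.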
29/11/2018, 14:00 — 15:00 — Room P3.10, Mathematics Building
Cláudia Nunes, CEMAT & DM, Instituto Superior Técnico, Universidade de Lisboa
Feed-in Tariff Contract Schemes and Regulatory Uncertainty
This paper presents a novel analysis of four finite feed-in tariff (FIT) schemes, namely fixed-price, fixed-premium, minimum price guarantee and sliding premium with a cap and a floor, under market and regulatory uncertainty. Using an analytical real options framework, we derive the project value, the optimal investment threshold and the value of the investment opportunity for the four FIT schemes. Regulatory uncertainty is modeled allowing the tariff to be reduced before the signature of the contract. While market uncertainty defers investment, a higher and more likely tariff reduction accelerates investment. We also present several findings that are aimed at policymaking decisions, regarding namely the choice, level and duration of the FIT. For instance, the investment threshold of the sliding premium with a cap and a floor is lower than the minimum price guarantee, which suggests that the first regime is a better policy than the latter because it accelerates the investment while avoiding overcompensation.
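For context, a hedged sketch of the classical real-options benchmark that the FIT-specific thresholds extend (assuming, as is standard, revenue following a geometric Brownian motion and a discount rate $r > \mu$; the paper's scheme-specific payoffs and regulatory jump modify this): with sunk cost $I$ and revenue $X_t$ satisfying $dX_t = \mu X_t\,dt + \sigma X_t\,dW_t$, investment is optimal at the threshold

$$X^* = \frac{\beta_1}{\beta_1 - 1}\,(r - \mu)\,I,$$

where $\beta_1 > 1$ is the positive root of $\tfrac{1}{2}\sigma^2 \beta(\beta - 1) + \mu\beta - r = 0$. Since $\beta_1/(\beta_1-1) > 1$, uncertainty creates a value of waiting; a lower threshold, as found for the sliding premium with a cap and a floor, means earlier investment.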
15/11/2018, 14:00 — 15:00 — Room P3.10, Mathematics Building
Carina Silva, Escola Superior de Tecnologia da Saúde de Lisboa do Instituto Politécnico de Lisboa e CEAUL
Selecting differentially expressed genes in sample subgroups in microarray data
A common task in analysing microarray data is to determine which genes are differentially expressed between two (or more) kinds of tissue samples, or between samples subjected to different experimental conditions. It is well known that biological samples are heterogeneous due to factors such as molecular subtypes or genetic background, which are often unknown to the investigator. For instance, in experiments involving molecular classification of tumours it is important to identify significant subtypes of cancer. Bimodal or multimodal distributions often reflect the presence of mixtures of subsamples.
Consequently, truly differentially expressed genes in sample subgroups may be lost if the usual statistical approaches are used. In this work we propose a graphical tool that identifies genes with up- and down-regulation, as well as genes whose differential expression reveals hidden subclasses that are usually missed by current statistical methods.
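One simple screening statistic for the multimodality mentioned above is Sarle's bimodality coefficient — an assumption-laden stand-in for the talk's graphical tool, shown here only to make the idea concrete (all expression values are hypothetical):

```python
import statistics

def bimodality_coefficient(xs):
    """Sarle's bimodality coefficient (skewness^2 + 1) / kurtosis, using the
    population (biased) moment estimators; values above 5/9 (the value for a
    uniform distribution) suggest bi- or multimodality."""
    n = len(xs)
    m = statistics.fmean(xs)
    devs = [x - m for x in xs]
    m2 = sum(d ** 2 for d in devs) / n
    m3 = sum(d ** 3 for d in devs) / n
    m4 = sum(d ** 4 for d in devs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2          # non-excess kurtosis
    return (skew ** 2 + 1) / kurt

# hypothetical expression values: one unimodal gene, one with two subgroups
unimodal = [2.0, 2.5, 3.0, 3.0, 3.0, 3.5, 4.0]
bimodal = [0.0, 0.1, 0.2, 0.1, 0.0, 5.0, 5.1, 5.2, 5.1, 5.0]
```

Genes flagged this way are exactly the candidates whose subgroup structure a standard two-group test would average away.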
08/11/2018, 14:00 — 15:00 — Room P3.10, Mathematics Building
Carlos Oliveira, Grupo de Física Matemática da Universidade de Lisboa
Optimal investment decision under switching regimes of subsidy support
We address the problem of making a managerial decision when the investment project is subsidized, which results in the resolution of an infinite-horizon optimal stopping problem of a switching diffusion driven by either a homogeneous or an inhomogeneous continuous-time Markov chain. We provide a characterization of the value function (and optimal strategy) of the optimal stopping problem. On the one hand, broadly, we prove that the value function is the unique viscosity solution to a system of HJB equations. On the other hand, when the Markov chain is homogeneous and the switching diffusion is one-dimensional, we obtain stronger results: the value function is the difference between two convex functions.
29/10/2018, 11:00 — 12:00 — Room P3.10, Mathematics Building
Wolfgang Schmid, European University Viadrina, Department of Statistics, Germany
Monitoring Non-Stationary Processes
In nearly all papers on statistical process control for time-dependent data it is assumed that the underlying process is stationary. However, in finance and economics we are often faced with situations where the process is close to non-stationarity or it is even non-stationary.
In this talk the target process is modeled by a multivariate state-space model which may be non-stationary. Our aim is to monitor its mean behavior. The likelihood ratio method, the sequential probability ratio test, and the Shiryaev-Roberts procedure are applied to derive control charts signaling a change from the supposed mean structure. These procedures depend on certain reference values which have to be chosen by the practitioner in advance. The corresponding generalized approaches are considered as well, and generalized control charts are determined for state-space processes. These schemes have no further design parameters. In an extensive simulation study the behavior of the introduced schemes is compared using various performance criteria such as the average run length, the average delay, the probability of a successful detection, and the probability of a false detection.
- Lazariv T. and Schmid W. (2018). Surveillance of non-stationary processes. AStA - Advances in Statistical Analysis, https://doi.org/10.1007/s10182-018-00330-4.
- Lazariv T. and Schmid W. (2018). Challenges in monitoring non-stationary time series. In Frontiers in Statistical Process Control, Vol. 12, pp. 257-275. Berlin: Springer.
Joint work with Taras Lazariv (European University Viadrina, Department of Statistics, Germany).
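The Shiryaev-Roberts procedure mentioned above can be sketched in its textbook form for a known mean shift in i.i.d. Gaussian data — the talk's state-space and generalized versions are considerably more involved, and every parameter value below is an illustrative assumption:

```python
import math

def shiryaev_roberts(xs, mu0=0.0, mu1=1.0, sigma=1.0, threshold=50.0):
    """Shiryaev-Roberts recursion R_n = (1 + R_{n-1}) * L_n, where L_n is the
    likelihood ratio of x_n under the shifted mean mu1 vs. the in-control mean
    mu0. Returns the first n with R_n >= threshold (the alarm time), or None."""
    r = 0.0
    for n, x in enumerate(xs, start=1):
        log_lr = (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma ** 2
        r = (1.0 + r) * math.exp(log_lr)
        if r >= threshold:
            return n
    return None

# in control at mean 0 for 10 points, then an (exaggerated) shift to mean 2
alarm = shiryaev_roberts([0.0] * 10 + [2.0] * 20)
```

The threshold plays the role of the reference value a practitioner must choose in advance; the average run length criteria in the talk calibrate exactly this choice.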
04/10/2018, 14:00 — 15:00 — Room P3.10, Mathematics Building
Manuel Cabral Morais, CEMAT & DM, Instituto Superior Técnico, Universidade de Lisboa
A thinning-based EWMA chart to monitor counts: some preliminary results
Shewhart control charts are known to be somewhat insensitive to shifts of small and moderate size. Expectedly, alternative control schemes such as the cumulative sum (CUSUM) and the exponentially weighted moving average (EWMA) charts have been proposed to speed up the detection of such shifts.
The novel chart we propose relies on an EWMA control statistic where the usual scalar product is replaced by what we call a fractional binomial thinning, to avoid the typical over-smoothing ascribable to ceiling, rounding, and flooring operations. The properties of this discrete statistic are, to a moderate extent, similar to those of its continuous EWMA counterpart, and the run length (RL) performance of the associated chart can be computed exactly using the Markov chain approach for independent and identically distributed (i.i.d.) counts. Moreover, this chart is set in such a way that: the average run length (ARL) curve attains a maximum in the in-control situation, i.e., the chart is ARL-unbiased; and the in-control ARL is equal to a pre-specified value.
We use the R statistical software to provide compelling illustrations of this unconventional EWMA chart and to compare its RL performance with the ones of a few competing control charts for the mean of i.i.d. Poisson counts.
Keywords: Average run length; exponentially weighted moving average; fractional binomial thinning; statistical process control.
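The standard binomial thinning operator that the talk's fractional variant refines can be sketched as follows — this is only the classical operator α∘X (a sum of X Bernoulli(α) trials) plugged into an EWMA-style recursion; how fractional weights are actually handled in the proposed chart is not reproduced here:

```python
import random

rng = random.Random(42)

def binomial_thinning(alpha, x):
    """Classical binomial thinning: alpha ∘ X = number of successes in X
    Bernoulli(alpha) trials, an integer-valued analogue of alpha * X."""
    return sum(1 for _ in range(x) if rng.random() < alpha)

def thinning_ewma(counts, lam=0.2):
    """Integer-valued EWMA-style recursion
    Z_t = (1 - lam) ∘ Z_{t-1} + lam ∘ X_t  (a rough sketch only; the talk's
    fractional binomial thinning differs in how non-integer weights enter)."""
    z = counts[0]
    path = [z]
    for x in counts[1:]:
        z = binomial_thinning(1 - lam, z) + binomial_thinning(lam, x)
        path.append(z)
    return path

counts = [rng.randint(0, 10) for _ in range(200)]   # stand-in for i.i.d. counts
path = thinning_ewma(counts)
```

Because the statistic stays integer-valued, no ceiling, rounding or flooring is ever needed, which is the motivation given in the abstract.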
27/09/2018, 14:00 — 15:00 — Room P3.10, Mathematics Building
Cláudia Soares, ISR - Instituto de Sistemas e Robótica, Instituto Superior Técnico, Portugal
Distributed learning in large scale networks: from GPS-denied localization to MAP inference
Big Data can elicit greater insight, but storage or computational limitations — or even privacy concerns — challenge learning from massive data sets. The distributed paradigm fits such problems just right: such algorithms work on partial data and fuse intermediate results within local neighborhoods, over a distributed network of computing nodes. In this talk we will take a tour starting on GPS-denied localization and culminating on a general distributed MAP inference algorithm for graphical models.
12/06/2018, 11:00 — 12:00 — Room P3.10, Mathematics Building
Ana Prior, Instituto Superior de Engenharia de Lisboa, ISEL, Portugal
Estimation of the drift of a $2n$-dimensional OU process
A $2n$-dimensional Ornstein-Uhlenbeck (OU) process for which the diffusion matrix is singular is considered. This process is used as a model for the dynamic behavior of vibrating engineering structures such as bridges, buildings and dams, among others. We study the problem of estimating the vibration frequencies of the structure or, equivalently, the parameters of the stochastic differential equation (SDE) that governs the OU process.
Firstly, we consider the case where the OU process is perturbed by an independent Wiener process. The maximum likelihood estimator of the drift matrix is obtained and its properties are established. The local asymptotic normality of the estimator is analyzed in detail. Since general regularity conditions do not hold in this case (the diffusion matrix is singular), theoretical results from the classical literature on the subject do not immediately apply, and an alternative approach based on the Laplace transform is used.
Secondly, we consider the case where the OU process is perturbed by two independent fractional Brownian motions. Models involving fractional noises have not been widely used in engineering. However, many problems in engineering involve processes exhibiting long memory. For this reason, the estimation of the parameters of multidimensional state-space linear models, described by SDEs and disturbed by fractional Brownian motion, has potential applications in different areas of engineering. We analyze the problem of estimating the drift parameters of a $2$-dimensional linear stochastic differential equation perturbed by two independent fractional Brownian motions with the same Hurst parameter belonging to $(1/2,1)$. The maximum likelihood estimator of the drift parameters is obtained after a transformation of the original model, making use of the so-called fundamental martingale.
In both cases, a simulation study is presented in the context of a real world situation that illustrates the asymptotic behavior of the maximum likelihood estimator of the drift matrix.
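The flavour of such a simulation study can be conveyed with the simplest possible case — a one-dimensional OU process with scalar drift, far from the talk's $2n$-dimensional singular-diffusion setting; all parameter values are assumptions:

```python
import math
import random

rng = random.Random(7)

# Simulate a scalar OU process dX_t = -theta * X_t dt + sigma dW_t using its
# exact discretization: at the sampling instants it is an AR(1) process.
theta, sigma, dt, n = 2.0, 0.5, 0.01, 200_000
phi = math.exp(-theta * dt)
sd = sigma * math.sqrt((1 - phi ** 2) / (2 * theta))
x = [0.0]
for _ in range(n):
    x.append(phi * x[-1] + sd * rng.gauss(0, 1))

# Drift estimate: least-squares AR(1) coefficient, then theta_hat = -log(phi_hat)/dt.
num = sum(x[i] * x[i + 1] for i in range(n))
den = sum(x[i] ** 2 for i in range(n))
theta_hat = -math.log(num / den) / dt
```

Over a long observation window the estimator concentrates around the true drift, which is the asymptotic behavior the talk's simulation study illustrates in the matrix-valued case.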
29/05/2018, 11:00 — 12:00 — Room P3.10, Mathematics Building
José G. Dias, Instituto Universitário de Lisboa (ISCTE-IUL), BRU-IUL, Lisboa, Portugal
Multiple-valued symbolic data clustering: heuristic and model-based approaches
Symbolic data analysis (SDA) has been developed as an extension of classical data analysis to handle more complex data structures. In this general framework the pair observation/variable is characterized by more than one value: from two (e.g., interval-valued data defined by minimum and maximum values) to multiple-valued variables (e.g., frequencies or proportions).
This research discusses the clustering of multiple-valued symbolic data. First, we discuss an extension of heuristic clustering based on the symmetric Kullback-Leibler distance combined with a complete-linkage rule within the hierarchical clustering framework. Then, we propose a new model-based clustering framework. This new family of models, based on the Dirichlet distribution, includes mixtures of regression/expert models. Results are illustrated with synthetic and demographic (population pyramids) data.
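The symmetric Kullback-Leibler distance used in the heuristic approach can be sketched directly — a minimal version for two frequency/proportion vectors, with an epsilon guard for zero entries as a pragmatic assumption (the population-pyramid data here are invented):

```python
import math

def sym_kl(p, q, eps=1e-12):
    """Symmetric Kullback-Leibler distance between two frequency/proportion
    vectors: KL(p||q) + KL(q||p), with eps guarding zero entries."""
    p = [max(pi, eps) for pi in p]
    q = [max(qi, eps) for qi in q]
    def kl(a, b):
        return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b))
    return kl(p, q) + kl(q, p)

# e.g. two population-pyramid-like distributions over four age bands
p = [0.30, 0.40, 0.20, 0.10]
q = [0.10, 0.20, 0.40, 0.30]
d = sym_kl(p, q)
```

Symmetrizing makes the quantity usable as a pairwise dissimilarity, which is what the complete-linkage rule then aggregates within the hierarchical clustering.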
22/05/2018, 11:00 — 12:00 — Room P3.10, Mathematics Building
Mirela Predescu, BNP Paribas - Risk Analytics and Modelling Team, London
Market Risk Measurement — Theory and Practice
Topics that will be covered in this talk:
- Value-at-Risk (VaR)
- Expected Shortfall (ES)
- VaR/ES Measurement
- Historical Simulation
- Model Building Approach
- Monte Carlo Simulation Approach
- VaR Backtesting
08/05/2018, 11:00 — 12:00 — Room P3.10, Mathematics Building
Nuno Sobreira, Department of Mathematics, Lisbon School of Economics & Management, Universidade de Lisboa
Evaluation of volatility models for forecasting Value-at-Risk and Expected Shortfall in the Portuguese Stock Market
The objective of this paper is to run a forecasting competition of different parametric volatility time series models to estimate Value-at-Risk (VaR) and Expected Shortfall (ES) within the Portuguese Stock Market. This work is also intended to bring new insights about the methods used throughout this exercise. Finally, we want to relate the timing of the exceptions (extreme losses surpassing the VaR) with events at the firm level and with national/international economic conditions.
For these purposes, a number of models from the Generalized Autoregressive Conditional Heteroscedasticity (GARCH) class are used with different distribution functions for the innovations, in particular the Normal, Student-t and Generalized Error Distribution (GED), as well as their corresponding skewed versions. The GARCH models are also used in conjunction with the Generalized Pareto Distribution through extreme value theory.
The performance of these models in forecasting 1% and 5% VaR and ES at 1-day, 5-day and 10-day horizons is analyzed for a set of companies traded on the EURONEXT Lisbon stock exchange. The results obtained for the VaR and ES forecasts are evaluated with backtesting procedures based on a number of statistical tests and compared through different loss functions.
The final results are analyzed in several dimensions. Preliminary analysis shows that the use of extreme value theory generally leads to better results, especially for low values of alpha. This is more evident in the case of the statistical backtests dealing with ES. Moreover, skewed distributions generally do not seem to perform better than their centered counterparts.
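One standard VaR backtest likely among those referred to is Kupiec's proportion-of-failures likelihood-ratio test, sketched here with illustrative exception counts (whether this exact test appears in the talk is an assumption):

```python
import math

def kupiec_pof(n, x, alpha=0.01):
    """Kupiec proportion-of-failures LR statistic for VaR backtesting:
    n observations, x exceptions, nominal exception rate alpha.
    Under the null of correct coverage, LR ~ chi-square with 1 df."""
    if x == 0:
        return -2 * n * math.log(1 - alpha)
    p_hat = x / n
    log_l0 = x * math.log(alpha) + (n - x) * math.log(1 - alpha)
    log_l1 = x * math.log(p_hat) + (n - x) * math.log(1 - p_hat)
    return -2 * (log_l0 - log_l1)

# 250 trading days at 1% VaR: 3 exceptions is close to the expected 2.5,
# while 12 exceptions points to a badly calibrated model
lr_ok = kupiec_pof(250, 3, alpha=0.01)
lr_bad = kupiec_pof(250, 12, alpha=0.01)
```

Comparing each statistic with the 95% chi-square critical value (about 3.84) accepts the first model and rejects the second.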
24/04/2018, 11:00 — 12:00 — Room P3.10, Mathematics Building
Peter M. Kort, Department of Econometrics & Operations Research and CentER, Tilburg University; Department of Economics, University of Antwerp
Dynamic Capital Structure Choice and Investment Timing
The paper considers the problem of an investor that has the option to acquire a firm. Initially this firm is run so as to maximize shareholder value, where the shareholders are risk averse. To do so, it has to decide on investment and dividend levels at each point in time. The firm's capital stock can be financed by equity and debt, where less solvent firms pay a higher interest rate on debt. Revenue is stochastic.
We find that the firm is run such that capital stock and dividends develop in a fixed proportion to the equity. In particular, it turns out that more dividends are paid when the economic environment is more uncertain. We also derive an explicit expression for the threshold value of the equity above which it is optimal for the investor to acquire the firm. This threshold increases in the level of uncertainty, reflecting the value of waiting that uncertainty generates.
17/04/2018, 11:00 — 12:00 — Room P3.10, Mathematics Building
Rita Pimentel, CEMAT-IST
Processes with jumps in Finance
We will address two particular investment problems that share the following feature: the processes that model the uncertainty exhibit discontinuities in their sample paths. These discontinuities — or jumps — are driven by jump processes, here modelled by Poisson processes. Both problems fall in the category of optimal stopping problems: choose a time to take a given action (here, the time to invest, as we consider investment problems) in order to maximize an expected payoff.
In the first problem, we assume that a firm is currently receiving a profit stream from an already operational project, and has the option to invest in a new project, with impact on its profitability. Moreover, we assume that there are two sources of uncertainty that influence the firm's decision about when to invest: the random fluctuations of the revenue (depending on the random demand) and the changing investment cost. And, as already mentioned, both processes exhibit discontinuities in their sample paths.
The second problem is developed in the scope of technology adoption. Technology innovation is an inherently discontinuous process: the technological level does not increase at a steady pace; instead, every now and then some improvement or breakthrough happens. Thus it is natural to assume that technology innovations are driven by jump processes. In this problem we consider a firm producing in a declining market, but with the option to undertake an innovation investment and thereby replace the old product by a new one, paying a constant sunk cost. As the first product is well established, its price is deterministic. Upon investment in the second product, the price may fluctuate according to a geometric Brownian motion. The decision is when to invest in the new product.
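The jump processes driving both problems are Poisson processes, whose sample paths are easy to sketch: inter-arrival times are i.i.d. exponential. The rate and horizon below are illustrative assumptions:

```python
import random

rng = random.Random(3)

def poisson_jump_times(rate, horizon):
    """Jump times of a Poisson process with the given rate on [0, horizon]:
    inter-arrival times are i.i.d. exponential(rate) random variables."""
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            return times
        times.append(t)

# e.g. technology breakthroughs arriving at rate 0.5 per year over 20 years
jumps = poisson_jump_times(0.5, 20.0)
```

Each element of `jumps` is an instant at which the technological level (or the investment cost, in the first problem) changes discontinuously.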
10/04/2018, 11:00 — 12:00 — Room P3.10, Mathematics Building
Vanda M. Lourenço, FCT & CMA, NOVA University of Lisbon, Portugal
Robust inference for ROC regression
The receiver operating characteristic (ROC) curve is the most popular tool for evaluating the diagnostic accuracy of continuous biomarkers. Often, covariate information that affects the biomarker performance is also available and several regression methods have been proposed to incorporate covariates in the ROC framework. In this work, we propose robust inference methods for ROC regression, which can be used to safeguard against the presence of outlying biomarker values. Simulation results suggest that the methods perform well in recovering the true conditional ROC curve and corresponding area under the curve, on a variety of data contamination scenarios. Methods are illustrated using data on age-specific accuracy of glucose as a biomarker of diabetes.
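The empirical (non-robust, covariate-free) baseline behind this framework is easy to state: the area under the ROC curve equals the probability that a diseased subject's marker value exceeds a healthy subject's. The glucose-like values below are invented for illustration:

```python
def empirical_auc(diseased, healthy):
    """Empirical area under the ROC curve for a continuous biomarker:
    the proportion of (diseased, healthy) pairs in which the diseased
    subject's value is larger (ties count 1/2)."""
    pairs = len(diseased) * len(healthy)
    score = sum(
        1.0 if d > h else 0.5 if d == h else 0.0
        for d in diseased for h in healthy
    )
    return score / pairs

# hypothetical glucose-like values for diabetic vs. non-diabetic subjects
auc = empirical_auc([126, 140, 155, 118, 190], [95, 102, 110, 88, 121])
```

A single outlying biomarker value shifts many pairwise comparisons at once, which is precisely the sensitivity the proposed robust inference methods are designed to safeguard against.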
03/04/2018, 11:00 — 12:00 — Room P3.10, Mathematics Building
Cláudia Silvestre, Escola Superior de Comunicação Social, Instituto Politécnico de Lisboa
Challenges of Clustering
Grouping similar objects in order to produce a classification is one of the basic abilities of human beings. It is one of the primary milestones of a child's concrete operational stage and continues to be used throughout adult life, playing a very important role in how we analyse our world. Besides being a practical skill, clustering techniques are also commonly used in several application areas such as social sciences, medicine, biology, engineering and computer science. Despite this wide application, two issues remain open research questions: (i) how many clusters should be selected? and (ii) which are the relevant variables for clustering? These two questions are crucial in order to obtain the best solution. We will answer them using a model-based approach based on finite mixture distributions and information criteria: the Bayesian Information Criterion (BIC), Akaike's Information Criterion (AIC), the Integrated Completed Likelihood (ICL) and Minimum Message Length (MML).
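The first question — how many clusters — is typically answered by penalized likelihood, as with the AIC and BIC mentioned above. A minimal sketch, in which the log-likelihoods of the fitted mixtures are hypothetical numbers (in practice they come from an EM fit):

```python
import math

def aic(log_lik, k):
    """Akaike's Information Criterion: 2k - 2 log L (lower is better)."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Bayesian Information Criterion: k log n - 2 log L (lower is better)."""
    return k * math.log(n) - 2 * log_lik

# hypothetical maximized log-likelihoods of 1-d Gaussian mixtures with g
# components fitted to n = 500 observations; each component contributes a
# mean and a variance, plus g - 1 free mixing weights, so k = 3g - 1
n = 500
fits = {1: -1450.0, 2: -1320.0, 3: -1312.0, 4: -1310.0}
best_bic = min(fits, key=lambda g: bic(fits[g], 3 * g - 1, n))
```

Here the likelihood gain from a third component is too small to pay the BIC penalty, so two clusters are selected; ICL and MML refine the same trade-off with different penalties.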
20/03/2018, 11:00 — 12:00 — Room P3.10, Mathematics Building
Maria Kulikova, CEMAT-IST
Accurate implementations of nonlinear Kalman-like filtering methods with application to chemical engineering
A goal in many practical applications is to combine a priori knowledge about a physical system with experimental data to provide on-line estimation of the states and/or parameters of that system. The time evolution of the (hidden) state is modeled by a dynamic system perturbed by a certain process noise, which models the uncertainties in the system dynamics. The term optimal filtering traditionally refers to a class of methods for estimating the state of a time-varying system which is indirectly observed through noisy measurements. In this talk, we discuss the development of advanced Kalman-like filtering methods for estimating continuous-time nonlinear stochastic systems with discrete measurements. We start with a brief overview of existing nonlinear Bayesian methods. Next, we focus on the numerical implementation of the Kalman-like filters (the extended Kalman filter, the unscented Kalman filter and the cubature Kalman filter) for estimating the state of continuous-discrete models. The standard approach applies the Euler-Maruyama method to discretize the underlying (process) stochastic differential equation (SDE). To reduce the discretization error, subdivisions may additionally be introduced in each sampling interval. Some modern continuous-time filtering methods use higher-order discretizations, e.g. the cubature Kalman filter based on the Ito-Taylor expansion of the underlying SDE. However, all resulting implementations are fixed-step-size methods and do not allow for proper processing of long and irregular sampling intervals (e.g. when measurements are missing). An alternative methodology is to derive the moment differential equations first; the resulting ordinary differential equations (ODEs) are then solved by modern ODE solvers.
This approach allows for variable-step-size solvers and copes accurately with long/irregular sampling intervals. Besides, we use ODE solvers with global error control, which improves the estimation quality further. As a numerical example we consider the batch reactor model studied in the chemical engineering literature.
(Joint work with Gennady Kulikov.)
- Arasaratnam I., Haykin S. (2009). Cubature Kalman filters. IEEE Transactions on Automatic Control, 54(6): 1254-1269.
- Kulikov G. Yu., Kulikova M. V. (2014). Accurate numerical implementation of the continuous-discrete extended Kalman filter. IEEE Transactions on Automatic Control, 59(1): 273-279.
- Arasaratnam I., Haykin S., Hurd T. R. (2010). Cubature Kalman filtering for continuous-discrete systems: theory and simulations. IEEE Transactions on Signal Processing, 58(10): 4977-4993.
- Kulikov G. Yu., Kulikova M. V. (2016). The accurate continuous-discrete extended Kalman filter for radar tracking. IEEE Transactions on Signal Processing, 64(4): 948-958.
- Kulikov G. Yu., Kulikova M. V. (2015). State estimation in chemical systems with infrequent measurements. Proceedings of the European Control Conference, Linz, Austria, pp. 2688-2693.
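The continuous-discrete prediction step described in this talk can be sketched in the scalar case — a fixed-step Euler scheme with interval subdivisions stands in for the adaptive ODE solvers with global error control, and the dynamics, step sizes and noise levels below are illustrative assumptions:

```python
def ekf_predict_1d(m, p, f, df, q, dt, subdivisions=10):
    """Prediction step of a continuous-discrete extended Kalman filter for a
    scalar SDE dx = f(x) dt + sqrt(q) dW: propagate mean m and variance p by
    Euler-discretizing the moment ODEs, with extra subdivisions of the
    sampling interval to reduce the discretization error."""
    h = dt / subdivisions
    for _ in range(subdivisions):
        a = df(m)                       # local linearization of the drift
        m = m + f(m) * h
        p = p + (2 * a * p + q) * h
    return m, p

def ekf_update_1d(m, p, z, r):
    """Measurement update for a linear observation z = x + noise(var r)."""
    k = p / (p + r)                     # Kalman gain
    return m + k * (z - m), (1 - k) * p

# e.g. mean-reverting dynamics f(x) = -0.5 x, observed every dt = 0.5
f = lambda x: -0.5 * x
df = lambda x: -0.5
m, p = 1.0, 1.0
m, p = ekf_predict_1d(m, p, f, df, q=0.1, dt=0.5)
m, p = ekf_update_1d(m, p, z=0.6, r=0.2)
```

Replacing the fixed subdivisions with a variable-step ODE solver on the same moment equations is precisely what lets the talk's filters handle long and irregular sampling intervals.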
13/03/2018, 11:00 — 12:00 — Room P3.10, Mathematics Building
Manuel Cabral Morais, CEMAT-IST
ARL-unbiased geometric control charts for high-yield processes
The geometric distribution is the basic model for the quality characteristic that represents the cumulative count of conforming (CCC) items between two nonconforming ones in a high-yield process.
In order to control increases and decreases in the fraction nonconforming in a timely fashion, the geometric chart should be set in such a way that the average run length (ARL) curve attains a maximum in the in-control situation, i.e., it should be ARL-unbiased.
By exploring the notions of uniformly most powerful unbiased tests with randomization probabilities, we are able not only to eliminate the bias of the ARL function of the existing geometric charts, but also to bring their in-control ARL exactly to a pre-specified value.
Instructive examples are provided to illustrate that the ARL-unbiased geometric charts have the potential to play a major role in the prompt detection of the deterioration and improvement of real high-yield processes.
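The role of the randomization probabilities can be sketched numerically — the limits and probabilities below are illustrative assumptions, not the values the ARL-unbiased design actually computes:

```python
def geometric_pmf_cdf(p, x):
    """CCC count X = number of conforming items before a nonconforming one:
    P(X = x) = (1-p)^x * p and P(X <= x) = 1 - (1-p)^(x+1)."""
    return (1 - p) ** x * p, 1 - (1 - p) ** (x + 1)

def signal_probability(p, lcl, ucl, gamma_l, gamma_u):
    """Probability that a plotted CCC count signals: it falls strictly outside
    [lcl, ucl], or hits a limit and the chart signals with the corresponding
    randomization probability gamma_l / gamma_u."""
    pmf_l, cdf_l = geometric_pmf_cdf(p, lcl)
    pmf_u, cdf_u = geometric_pmf_cdf(p, ucl)
    return (cdf_l - pmf_l) + gamma_l * pmf_l + (1 - cdf_u) + gamma_u * pmf_u

def arl(p, lcl, ucl, gamma_l, gamma_u):
    """For i.i.d. counts the run length is geometric, so ARL = 1/alpha."""
    return 1.0 / signal_probability(p, lcl, ucl, gamma_l, gamma_u)

# hypothetical limits for an in-control fraction nonconforming p0 = 0.001
p0 = 0.001
in_control_arl = arl(p0, lcl=5, ucl=7000, gamma_l=0.3, gamma_u=0.3)
```

The ARL-unbiased design solves for the limits and the two gammas so that this ARL function is maximized at `p0` and equals the pre-specified in-control value there.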
27/02/2018, 11:00 — 12:00 — Room P3.10, Mathematics Building
Reinaldo Castro Souza, Industrial Engineering Dept., PUC-Rio, Rio de Janeiro, Brazil
The challenge of inserting wind power generation into the Brazilian hydrothermal optimal dispatch
Brazil has a total of 4,648 power generation projects in operation, totaling 161 GW of installed capacity, of which 74% comes from hydroelectric power plants and about 7% from intermittent generation sources (wind power in particular). An addition of 25 GW to the country's generation capacity is scheduled for the next few years, with 43% of this increment coming from intermittent sources. Nowadays, planning the Brazilian energy sector basically means making decisions about the dispatch of hydroelectric and thermoelectric plants, where the operation strategy minimizes the expected operation cost during the planning period, composed of fuel costs plus penalties for failing to supply the projected expected load. Given the growing share of wind power generation within the Brazilian energy matrix, mostly in the Northeast region of the country, it is necessary to include this type of generation in the optimal dispatch approach currently used, so that it is effectively considered in long-term planning. This talk presents the preliminary developments toward incorporating this kind of generation stochastically, in order to produce the optimal hydro-thermal-wind dispatch.
Keywords: Hydro, thermal and wind power generation; optimal dispatch; demand forecast; inflow and wind speed uncertainties.
01/02/2018, 11:40 — 12:20 — Room P3.10, Mathematics Building
João Brazuna, Seminário de Investigação em Probabilidades e Estatística I
Martingales and Survival Analysis
Presentation within the research seminar Seminário de Investigação em Probabilidades e Estatística I, part of the Master's programme in Mathematics and Applications (MMA) and the PhD programme in Statistics and Stochastic Processes, in collaboration with CEMAT.
01/02/2018, 11:00 — 11:40 — Room P3.10, Mathematics Building
Maria Almeida Silva, Seminário de Investigação em Probabilidades e Estatística I
Bootstrap confidence intervals
Presentation within the research seminar Seminário de Investigação em Probabilidades e Estatística I, part of the Master's programme in Mathematics and Applications (MMA) and the PhD programme in Statistics and Stochastic Processes, in collaboration with CEMAT.
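The percentile bootstrap interval that typically anchors such a presentation can be sketched in a few lines — the data and the choice of the mean as the statistic are illustrative assumptions:

```python
import random
import statistics

def bootstrap_ci(sample, stat=statistics.mean, n_boot=5000, level=0.95, seed=0):
    """Percentile bootstrap confidence interval: resample with replacement,
    recompute the statistic, and take empirical quantiles of the replicates."""
    rng = random.Random(seed)
    n = len(sample)
    reps = sorted(
        stat([sample[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    lo = reps[int((1 - level) / 2 * n_boot)]
    hi = reps[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi

data = [2.1, 2.5, 1.9, 3.2, 2.8, 2.4, 2.6, 3.0, 2.2, 2.7]
lo, hi = bootstrap_ci(data)
```

Other bootstrap intervals (basic, studentized, BCa) refine this percentile construction when the statistic's distribution is skewed or its scale varies.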