
Probability and Statistics Seminar

Past sessions


21/05/2015, 11:30 — 12:30 — Room P3.10, Mathematics Building
ISEG and CEMAPRE, Universidade de Lisboa

Comparison of Statistical and Deterministic Frameworks of Uncertainty Quantification

Two different approaches to the prediction problem are compared employing a realistic example, combustion of natural gas, with 102 uncertain parameters and 76 quantities of interest. One approach, termed Bound-to-Bound Data Collaboration (abbreviated to B2B), deploys semidefinite programming algorithms where the initial bounds on unknowns are combined with the initial bounds on the experimental data to produce new uncertainty bounds for the unknowns that are consistent with the data and, finally, deterministic uncertainty bounds for prediction in new settings. The other approach is statistical and Bayesian, referred to as BCP (for Bayesian Calibration and Prediction). It places prior distributions on the unknown parameters and on the parameters of the measurement error distributions and produces posterior distributions for the model parameters and posterior distributions for model predictions in new settings. The predictions from the two approaches are consistent: the B2B bounds and the support of the BCP predictive distribution overlap to a very large extent. The BCP predictive distribution is more nuanced than the B2B bounds but depends on stronger assumptions. Interpretation and comparison of the results is closely connected with the assumptions made about the model and experimental data and how they are used in both settings. The principal conclusion is that using both methods protects against possible violations of assumptions in the BCP approach and against overly conservative specifications and predictions in B2B.

Joint work with Michael Frenklach, Andrew Packard (UC Berkeley), Jerome Sacks (National Institute of Statistical Sciences) and Gonzalo Garcia-Donato (Universidad de Castilla-La Mancha)
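
A minimal numerical sketch of the B2B feasible-set idea: for a toy linear surrogate model, prediction bounds are obtained by optimizing the prediction over all parameter values consistent with box priors and interval-valued data. Everything here (the sensitivities, the bounds, and the use of linear programming in place of the semidefinite programming used for quadratic surrogates) is an illustrative assumption, not the paper's setup.

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0, 0.5, -0.2],     # hypothetical sensitivities of two
                  [0.3, -1.0, 0.8]])    # experiments to three parameters x
    lo = np.array([-0.4, -0.6])         # lower bounds from experimental data
    up = np.array([0.5, 0.7])           # upper bounds from experimental data
    c = np.array([0.7, 0.2, -0.5])      # prediction quantity p = c @ x
    box = [(-1.0, 1.0)] * 3             # prior box bounds on the unknowns

    # feasible set: lo <= A @ x <= up, written as stacked inequalities
    A_ub = np.vstack([A, -A])
    b_ub = np.concatenate([up, -lo])
    pmin = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=box).fun
    pmax = -linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=box).fun
    print(f"deterministic prediction bounds: [{pmin:.3f}, {pmax:.3f}]")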

07/05/2015, 11:30 — 12:30 — Room P3.10, Mathematics Building
Faculdade de Ciências da Universidade de Lisboa, CEAUL

The importance of Statistics in Bioinformatics

Statistics acts in several areas of knowledge, Bioinformatics being one of the most recent application fields. In reality, the role of Statistics in Bioinformatics goes beyond mere intervention: it is an integral pillar of Bioinformatics. Statistics has been gaining ground in this area, becoming an essential component of recognized merit. In this seminar the speaker intends to show the importance of Statistics in addressing systems as diverse as protein structure or microarray and NGS data. A set of specific studies in Molecular Biology will be the basis for the presentation of some of the most common statistical methodologies in Bioinformatics. The importance of the available software, including some R packages, is also shown.

16/04/2015, 11:30 — 12:30 — Room P3.10, Mathematics Building
CEMAT and Departamento de Matemática, Instituto Superior Técnico, Universidade de Lisboa

Investment Decisions under Multi-uncertainty and Exogenous Shocks

In this presentation we study the investment problem when both the demand and the investment costs are stochastic. We assume that the processes are independent, and both are modeled using geometric Brownian motions with exogenous jumps driven by independent Poisson processes. We use a real options approach, leading to an optimal stopping problem. Due to the multiple sources of uncertainty, we propose a method to solve the problem explicitly, and we prove that this method leads exactly to the solution of the optimization problem.

Joint work with Rita Pimentel.
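
A Monte Carlo sketch of the model ingredients, assuming invented parameters: demand and cost evolve as independent geometric Brownian motions with exogenous Poisson jumps, and investment is triggered by a naive threshold rule on the demand-to-cost ratio (the talk characterizes the optimal threshold analytically).

    import numpy as np

    rng = np.random.default_rng(0)
    T, n = 10.0, 1000
    dt = T / n

    def jump_gbm(x0, mu, sigma, lam, jump):
        # GBM with multiplicative jumps of relative size `jump`,
        # arriving according to a Poisson process of rate `lam`
        z = rng.standard_normal(n)
        nj = rng.poisson(lam * dt, n)               # jumps per time step
        dlog = (mu - sigma**2 / 2) * dt + sigma * np.sqrt(dt) * z \
               + nj * np.log1p(jump)
        return x0 * np.exp(np.cumsum(dlog))

    X = jump_gbm(1.0, 0.03, 0.20, 0.5, -0.10)       # demand (made-up params)
    C = jump_gbm(5.0, 0.01, 0.15, 0.3, +0.15)       # investment cost
    b_star = 0.25    # hypothetical threshold; the optimal one solves the
                     # stopping problem studied in the talk
    crossed = X / C >= b_star
    print("invest at t =", np.argmax(crossed) * dt if crossed.any() else None)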

26/03/2015, 11:30 — 12:30 — Room P3.10, Mathematics Building
Priberam Labs

Statistical Learning for Natural Language Processing

The field of Natural Language Processing (NLP) deals with automatic processing of large corpora of text such as newswire articles (from online newspaper websites), social media (such as Facebook or Twitter) and user-created content (such as Wikipedia). It has experienced large growth in academia as well as in industry, with large corporations such as Microsoft, Google, Facebook, Apple, Twitter and Amazon, among others, investing strongly in these technologies.

One of the most successful approaches to NLP is statistical learning (also known as machine learning), which uses the statistical properties of corpora of text to infer new knowledge.

In this talk I will present multiple NLP problems and provide a brief overview of how they can be solved with statistical learning. I will also present one of these problems (language detection) in more detail to illustrate how basic properties of Probability Theory are at the core of these techniques.
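
As a hedged illustration of that final point, a character-bigram Bayes classifier with add-one smoothing is already a workable language detector; the two-line "corpora" below are invented and far too small for real use.

    import math
    from collections import Counter

    train = {
        "en": "the quick brown fox jumps over the lazy dog and runs away",
        "pt": "a rapida raposa castanha salta sobre o cao preguicoso e foge",
    }

    def bigrams(s):
        return [s[i:i + 2] for i in range(len(s) - 1)]

    models = {lang: Counter(bigrams(text)) for lang, text in train.items()}

    def log_likelihood(text, lang, alpha=1.0):
        counts = models[lang]
        total = sum(counts.values())
        vocab = len(counts) + 1                      # crude smoothing vocabulary
        # naive Bayes: sum of log P(bigram | lang) with add-alpha smoothing
        return sum(math.log((counts[g] + alpha) / (total + alpha * vocab))
                   for g in bigrams(text))

    text = "the dog runs"
    print(max(models, key=lambda lang: log_likelihood(text, lang)))  # -> en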

12/03/2015, 11:30 — 12:30 — Room P3.10, Mathematics Building
CEMAT and Departamento de Matemática, Instituto Superior Técnico, Universidade de Lisboa

To be or not to be Bayesian: That IS NOT the question

Frequentist and Bayesian approaches to statistical thinking differ in their foundations and have been developed along somewhat separate routes. At a time when more and more powerful statistics is badly needed, special attention should be given to the existing connections between the two paradigms in order to bring together their formidable strengths.

21/01/2015, 11:30 — 12:30 — Room P3.10, Mathematics Building
Norwegian University of Science and Technology

Anticipative Transmission Planning under Uncertainty

Transmission system operators (TSOs) build transmission lines taking generation capacity into account. However, their decision is confounded by policies that promote renewable energy technologies. Thus, what should the size of the transmission line be in order to accommodate subsequent generation expansion? Taking the perspective of a TSO, we use a real options approach not only to determine the optimal timing and sizing of the transmission line but also to explore its effects on generation expansion.

17/12/2014, 11:30 — 12:30 — Room P3.10, Mathematics Building
FCT, Universidade Nova de Lisboa; CEMAT

A robust mixed linear model for heritability estimation in plant studies

Heritability ($H^2$) refers to the extent to which a certain phenotype is genetically determined. Knowledge of $H^2$ is crucial in plant studies to help perform effective selection. Once a trait is known to be highly heritable, association studies are performed so that the SNPs underlying the trait's variation may be found. Here, regression models are used to test for associations between the phenotype and candidate SNPs. SNP imputation ensures that marker information is complete, so the coefficient of determination ($R^2$) and $H^2$ are equivalent. One popular model used in these studies is the animal model, a linear mixed model (LMM) with a specific layout. However, when the normality assumption is violated, this model, like other likelihood-based models, may provide biased results in the association analysis and greatly affect the classical $R^2$. Therefore, a robust version of the REML estimates for LMMs to be used in this context is proposed, as well as a robust version of a recently proposed $R^2$. The performance of both the classical and the robust approaches for the estimation of $H^2$ is evaluated via simulation, and an example of application to a maize data set is presented.

Joint work with P.C. Rodrigues, M.S. Fonseca and A.M. Pires
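
A minimal sketch of the classical (non-robust) pipeline on synthetic data: a random-intercept LMM fitted by REML, with $H^2$ taken as the ratio of genetic to total variance. The robust REML version proposed in the talk is not part of standard libraries, so only the baseline is shown.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    lines = np.repeat(np.arange(40), 5)            # 40 genotypes, 5 replicates
    g = rng.normal(0.0, 1.0, 40)[lines]            # genetic effects, variance 1
    y = 10 + g + rng.normal(0.0, 0.7, lines.size)  # residual variance 0.49
    df = pd.DataFrame({"y": y, "line": lines})

    fit = smf.mixedlm("y ~ 1", df, groups=df["line"]).fit(reml=True)
    var_g = float(fit.cov_re.iloc[0, 0])           # genetic variance component
    var_e = float(fit.scale)                       # residual variance
    print("H2 estimate:", var_g / (var_g + var_e))  # true value ~ 0.67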

10/12/2014, 11:30 — 12:30 — Room P3.10, Mathematics Building
CEAUL, Faculdade de Ciências Médicas da UNL

Statistical methods in cancer research

Understanding trends, particularly long-term trends, in the incidence of diseases, and of cancer in particular, is a major concern of epidemiologists. Several statistical methodologies are available to study cancer incidence rates. Age-Period-Cohort (APC) models may be used to study the variation of incidence rates through time. They analyse age-specific incidence according to three time scales: age at diagnosis (age), date of diagnosis (period) and date of birth (cohort). Classical and Bayesian APC models are available. Understanding geographical variations in health, particularly in small areas, has also become extremely important. Several types of spatial epidemiology studies are available, such as disease mapping, usually used in ecological studies. The geographic mapping of diseases is very important for the definition of policies in oncology, namely the allocation of resources and the identification of clusters with a high incidence of disease. Geographical association studies, which allow the identification of risk factors associated with the spatial variation of a disease, are also indispensable and deserve special attention in disease incidence studies. For this purpose, Bayesian hierarchical models are a common choice.

To quantify cancer survival in the absence of other causes of death, relative survival is also considered in population-based cancer studies. Several approaches to estimating regression models for relative survival by the method of maximum likelihood are available.

Finally, having an idea of the future burden of cancer is also of the utmost importance, namely for planning health services. This is why projections of cancer incidence are so important. Several projection models that differ according to cancer incidence trends are available.

The aim of this study is to investigate spatial and temporal trends in the incidence of colorectal cancer, to estimate relative survival and to make projections. It is a retrospective population-based study that considers data on all colorectal cancers registered by the Southern Portuguese Cancer Registry (ROR Sul) between 1998 and 2006.
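
A hedged sketch of the APC machinery described above, reduced to an age-period Poisson rate model on synthetic counts (adding the cohort term, cohort = period - age, brings in the well-known identifiability problem and is omitted here). All counts and rates are invented.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    cells = [(a, p) for a in range(5) for p in range(4)]  # age x period grid
    df = pd.DataFrame(cells, columns=["age", "period"])
    df["pyears"] = 1e5                                    # person-years at risk
    rate = 1e-4 * np.exp(0.35 * df["age"] + 0.10 * df["period"])
    df["cases"] = rng.poisson(rate * df["pyears"])

    # log E[cases] = log(pyears) + age effects + period effects
    fit = smf.glm("cases ~ C(age) + C(period)", data=df,
                  family=sm.families.Poisson(),
                  offset=np.log(df["pyears"])).fit()
    print(fit.params)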

03/12/2014, 11:30 — 12:30 — Room P3.10, Mathematics Building
Ana Freitas, Instituto Superior de Educação e Ciências; CEMAT

Hypothesis tests for comparing recombination probabilities of two organisms with a set of common markers

Genetic recombination, although a phenomenon that has been studied for a long time, remains a very topical subject, since it is closely related to the evolution and diversification of species. It is a process that generates new combinations of genes, on which natural selection then acts. On the other hand, it is on the basis of estimates of recombination probabilities that genetic maps are built, giving us a picture of the constitution of the various chromosomes that exist in each species. The goal of this work is to present a method, exact if possible, to estimate the variance-covariance structure of the joint estimators of the genetic recombination probabilities, and to propose hypothesis tests for comparing recombination probabilities between two groups of organisms that share a set of common markers. The proposed methodologies are applied to a data set on a eucalyptus species (Eucalyptus globulus) resulting from a backcross. The results obtained allow us to conclude that the proposed methodologies are appropriate for comparing recombination probabilities between the two sexes of a species and may constitute an alternative to the tests usually employed to address this type of question.

19/11/2014, 11:30 — 12:30 — Room P3.10, Mathematics Building
Instituto de Telecomunicações, IST

Network Inference from Co-Occurrences

Inferring network structures is a central problem arising in many fields of science and technology, including communication systems, biology, sociology, and neuroscience. In this talk, after briefly reviewing several network inference problems, we will focus on that of inferring network structure from "co-occurrence" observations. These observations identify which network components (e.g., switches, routers, genes) co-occur in a path, but do not indicate the order in which they occur in that path. Without order information, the number of structures that are data-consistent grows exponentially with the network size. Yet, the basic engineering/evolutionary principles underlying most networks strongly suggest that not all data-consistent structures are equally likely. In particular, nodes that often co-occur are probably closer than nodes that rarely co-occur. This observation suggests modeling co-occurrence observations as independent realizations of a random walk on the network, subjected to random permutations. Treating these permutations as missing data allows us to derive an expectation-maximization (EM) algorithm for estimating the random walk parameters. The model and EM algorithm significantly simplify the problem, but the computational complexity still grows exponentially in the length of each path. We thus propose a polynomial-time Monte Carlo EM algorithm based on importance sampling and derive conditions that ensure convergence of the algorithm with high probability. Finally, we report simulations and experiments with Internet measurements and inference of biological networks that demonstrate the performance of this approach.

The work reported in this talk was done in collaboration with Michael Rabbat (McGill University, Canada) and Robert D. Nowak (University of Wisconsin, USA).
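
The combinatorial burden is easy to see in code: the exact likelihood of an unordered co-occurrence record marginalizes the unknown permutation, which costs k! operations per path of length k. The tiny sketch below (with an invented three-node transition matrix) does this sum exactly; the Monte Carlo EM of the talk exists precisely because this brute force does not scale.

    from itertools import permutations
    import numpy as np

    P = np.array([[0.0, 0.7, 0.3],      # made-up random-walk transition matrix
                  [0.4, 0.0, 0.6],
                  [0.5, 0.5, 0.0]])
    start = np.full(3, 1 / 3)           # initial node distribution

    def cooccurrence_likelihood(nodes):
        # P(walk visits exactly `nodes`, in some order), by brute force
        total = 0.0
        for order in permutations(nodes):         # marginalize the permutation
            p = start[order[0]]
            for u, v in zip(order, order[1:]):
                p *= P[u, v]
            total += p
        return total

    print(cooccurrence_likelihood((0, 1, 2)))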

05/11/2014, 14:30 — 15:30 — Room P3.10, Mathematics Building
Instituto de Telecomunicações

A Stochastic Model for Throughput in Wireless Data Networks with Single-Frequency Operation and Partial Connectivity under Contention-Based Multiple Access Modes

Wireless data (packet) networks operating in a common frequency channel, as happens for example with a Basic Service Set in the IEEE 802.11 (WiFi) system, are subject to intrinsic impairments. One major impairment results from the broadcast nature of the radio channel: if two different transmissions arrive at a receiver with any time overlap, they interfere destructively, so that one or both of the corresponding packets are not correctly received (packet collision), wasting radio channel transmission time and possibly requiring retransmission of the original packet(s).

In order to achieve a better utilization of the scarce radio channel resource, stations in wireless networks use multiple access algorithms to attempt to usefully coordinate their radio transmissions. One example is given by Carrier Sense Multiple Access (CSMA), used as a basis for the sharing of the radio channel in the WiFi system, which establishes that a station should not start a new packet transmission if it can hear any other station transmitting. In a network with radio connectivity between any pair of stations (fully connected) and negligible propagation delays, such an algorithm succeeds in completely preventing collisions. That is, however, not the case if there exist pairs of stations that cannot directly hear each other (partial connectivity). Many other multiple access algorithms have been proposed and studied.

In this talk, a stochastic model will be presented for the study of throughput (i.e., the long-term fraction of time that the radio channel is occupied with successful packet transmissions) in the class of networks described above. The talk will start with a short description of the communication functions and structure of the system under study and of the class of multiple access algorithms considered. Next, a Markovian model will be presented for representing the time evolution of the packet transmissions taking place in the network, and a result will be given on the existence of a product form for its stationary probabilities. The next step will be to show how the desired throughputs can be obtained from the steady-state probabilities of this process and the average durations of the successful packet transmissions. The latter are obtained from the times to absorption in a set of auxiliary (absorbing) Markov chains. Finally, time permitting, reference will be made to results concerning the insensitivity of product-form steady-state solutions, when they exist, to the distribution of packet lengths and retransmission time intervals, by means of a Generalized Semi-Markov Process representation.
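
The product form mentioned above can be made concrete in the classical idealized CSMA model, where the set of simultaneously transmitting stations is a reversible Markov chain on the independent sets of the conflict graph, with pi(S) proportional to the product of per-station activity ratios over S. The 4-node line topology and the ratios below are invented, and collisions are ignored, so this sketches the flavour of the result rather than the talk's exact model.

    import math
    from itertools import combinations

    nodes = range(4)
    conflicts = {(0, 1), (1, 2), (2, 3)}   # pairs of stations that hear each other
    rho = [1.0, 0.8, 1.2, 0.5]             # per-station transmission/idle ratios

    def independent(S):
        # no two members of S are in radio range of each other
        return all(pair not in conflicts for pair in combinations(S, 2))

    states = [S for r in range(len(rho) + 1)
              for S in combinations(nodes, r) if independent(S)]
    weight = {S: math.prod(rho[i] for i in S) for S in states}   # product form
    Z = sum(weight.values())
    for i in nodes:   # long-run fraction of time each station transmits
        print(i, sum(w for S, w in weight.items() if i in S) / Z)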

24/10/2014, 11:30 — 12:30 — Room P4.35, Mathematics Building
CEMAT-IST, UAL

On the misleading signals in simultaneous schemes for the mean vector and covariance matrix of multivariate i.i.d. output

The performance of a product often depends on several quality characteristics. Simultaneous schemes for the process mean vector and the covariance matrix are essential to determine if unusual variation in the location and dispersion of a multivariate normal vector of quality characteristics has occurred.

Misleading signals (MS) are likely to happen while using such simultaneous schemes and correspond to valid signals that lead to a misinterpretation of a shift in mean vector (resp. covariance matrix) as a shift in covariance matrix (resp. mean vector).

This paper focuses on numerical illustrations showing that MS are fairly frequent, and on the use of stochastic ordering to qualitatively assess the impact of changes in the mean vector and covariance matrix on the probabilities of misleading signals in simultaneous schemes for these parameters when dealing with multivariate normal i.i.d. output.

(Joint work with: Manuel Cabral Morais, António Pacheco, CEMAT-IST; Wolfgang Schmid, European University Viadrina.)
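
A rough simulation sketch of the phenomenon, under a deliberately simplified scheme with known in-control parameters (mean 0, identity covariance): the mean chart uses T^2 = n * xbar' xbar and the dispersion chart the total squared deviation about the target mean, both with chi-square limits. A pure mean shift then occasionally fires only the dispersion chart, which is exactly a misleading signal. The design choices and numbers are illustrative, not those of the paper.

    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(3)
    p, n, reps = 2, 5, 200000
    delta = np.array([0.5, 0.0])            # mean shift (illustrative size)
    lim_mean = chi2.ppf(0.995, p)           # limit for T2 = n * ||xbar||^2
    lim_disp = chi2.ppf(0.995, n * p)       # limit for V = sum ||x - mu0||^2

    x = delta + rng.standard_normal((reps, n, p))   # shifted samples
    t2 = n * (x.mean(axis=1) ** 2).sum(axis=1)
    v = (x ** 2).sum(axis=(1, 2))
    ms = np.mean((v > lim_disp) & (t2 <= lim_mean))
    print("P(dispersion signal without mean signal | mean shift) ~", ms)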

08/10/2014, 11:30 — 12:30 — Room P3.10, Mathematics Building
CEMAT-IST

On hitting times for Markov time series of counts with applications to quality control

Examples of time series of counts arise in several areas, for instance in epidemiology, industry, insurance and network analysis. Several time series models for these counts have been proposed and some are based on the binomial thinning operation, namely the integer-valued autoregressive (INAR) model, which mimics the structure and the autocorrelation function of the autoregressive (AR) model.

The detection of shifts in the mean of an INAR process is a recent research subject and can be done by using quality control charts. Underlying the performance analysis of these charts there is an indisputably popular measure: the run length (RL), the number of samples until a signal is triggered by the chart. Since a signal is given as soon as the control statistic falls outside the control limits, the RL is nothing but a hitting time.

In this paper, we use stochastic ordering to assess: the ageing properties of the RL of charts for the process mean of Poisson INAR(1) output; the impact of shifts in model parameters on this RL. We also explore the implications of all these properties, thus casting interesting light on this hitting time for a Markov time series of counts.

(Joint work with António Pacheco, CEMAT-IST.) 
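
A small sketch of the objects in play, with invented parameters: a Poisson INAR(1) process simulated via binomial thinning, and the run length of a naive upper one-sided chart, i.e. the hitting time of the region above the control limit.

    import numpy as np

    rng = np.random.default_rng(4)
    alpha, lam = 0.4, 3.0        # thinning probability and innovation mean
    UCL = 12                     # illustrative upper control limit

    def run_length():
        # start from the stationary marginal, Poisson(lam / (1 - alpha))
        x = rng.poisson(lam / (1 - alpha))
        t = 0
        while True:
            t += 1
            x = rng.binomial(x, alpha) + rng.poisson(lam)  # thinning + innovation
            if x > UCL:          # first passage above the limit = run length
                return t

    rls = [run_length() for _ in range(2000)]
    print("estimated in-control ARL:", np.mean(rls))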

24/09/2014, 11:30 — 12:30 — Room P3.10, Mathematics Building
Nuno Moreira, Instituto Português do Mar e da Atmosfera (IPMA)

Weather forecasts: more than probabilities?

Weather forecasting depends on the existence of observations of physical variables (air temperature, relative humidity, wind, ...) and on the ability to predict the temporal evolution of these variables. It also depends on other meteorological parameters (convergence, vorticity, stability indices, ...) which are not directly measurable but indirectly computed, and which make it possible to construct the future scenario of the atmosphere over ranges of days to weeks.

The forecasting of meteorological parameters rests on running numerical weather prediction models on high-performance supercomputers, available both at the international and at the national level. At the European level, these developments are carried out by the European Centre for Medium-Range Weather Forecasts (ECMWF), which delivers, twice a day and for the whole globe, deterministic results with a spatial resolution of 16 km up to 10 days ahead and probabilistic results with a spatial resolution of 64 km up to 15 days ahead. In Portugal, limited-area numerical models are run with spatial resolutions of 9 km up to 72 hours (ALADIN model) and of 2.5 km up to 48 hours (AROME model), the latter over 3 distinct domains: mainland Portugal, Madeira and the Azores.

While deterministic forecasts present a single solution for a future instant, probabilistic forecasts make it possible to consider ranges of variation of the various meteorological parameters, which becomes particularly important for parameters that are harder to predict, precipitation being one example. The variations in the meteorological parameters result from perturbations of the initial conditions of the numerical models, which seek to represent the influence of the errors present in the meteorological observations.

Thus, weather forecasts in probabilistic format draw on parameters and products available operationally, such as: probability of occurrence, ensemble mean, spread, shift of tail, meteograms, Extreme Forecast Index, spaghetti plots, clusters, ... . For longer ranges (weeks to months), and partly on an experimental basis, anomalies (relative to past periods, i.e. climatology) of some parameters are also used, such as precipitation, air temperature, sea water temperature and mean sea level pressure.
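
A tiny sketch of the probabilistic products named above (ensemble mean, spread, probability of exceedance), computed from an invented 51-member precipitation ensemble:

    import numpy as np

    rng = np.random.default_rng(5)
    members = rng.gamma(shape=2.0, scale=3.0, size=51)   # mm of precipitation
    print("ensemble mean   :", members.mean())
    print("ensemble spread :", members.std(ddof=1))
    print("P(precip > 10mm):", (members > 10).mean())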

17/09/2014, 12:00 — 13:00 — Room P3.10, Mathematics Building
Cláudia Pascoal, CEMAT-IST

Contributions to Variable Selection and Robust Anomaly Detection in Telecommunications

Over the years, we have witnessed an incredibly high level of technological development, in which the Internet plays the leading role. The Internet has not only brought benefits but has also originated new threats, expressed as anomalies/outliers. Consequently, new and improved outlier detection methodologies need to be developed. Accordingly, we propose an anomaly detection method that combines a robust variable selection method and a robust outlier detection procedure based on Principal Component Analysis.

Our method was evaluated using a data set obtained from a network scenario capable of producing a perfect ground-truth under real (but controlled) traffic conditions. The robust variable selection step was essential to eliminate redundant and irrelevant variables that were deteriorating the performance of the anomaly detector. The variable selection methods we considered use a filter strategy based on Mutual Information and Entropy for which we have developed robust estimators. The filter methods incorporate a redundancy component which tries to capture overlaps among variables.

The performance of eight variable selection methods was studied under a theoretical framework that allows reliable comparisons among them, by determining the true/theoretical variable ordering under specific evaluation scenarios, and unveiled problems in the construction of the associated objective functions. Our proposal, maxMIFS, which is associated with a simple objective function, proved to be unaffected by these problems and achieved outstanding results. For these reasons, it was chosen for the preprocessing step. With this approach, the results improved substantially and the main objective of this work was fulfilled: improving the detection of anomalies in Internet traffic flows.
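
A hedged sketch of this family of filter methods on synthetic data: greedy selection by relevance minus a redundancy penalty, here aggregated with a maximum over already-selected features (our reading of the maxMIFS idea; the paper's exact objective function and its robust MI/entropy estimators are not reproduced).

    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    rng = np.random.default_rng(6)
    X = rng.standard_normal((500, 6))
    X[:, 3] = X[:, 0] + 0.1 * rng.standard_normal(500)   # redundant copy of 0
    y = X[:, 0] + 2 * X[:, 1] + 0.5 * rng.standard_normal(500)

    relevance = mutual_info_regression(X, y, random_state=0)
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(3):
        def score(f):
            if not selected:
                return relevance[f]
            redundancy = mutual_info_regression(X[:, selected], X[:, f],
                                                random_state=0)
            return relevance[f] - redundancy.max()   # max-redundancy penalty
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    print("selected features:", selected)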

28/05/2014, 11:30 — 12:30 — Room P3.10, Mathematics Building
CEMAT, IST

Seeing, hearing, doing multivariate statistics

Signal processing is an important task nowadays that arises in various areas such as engineering and applied mathematics. A signal represents time-varying or spatially varying physical quantities. Signals of importance include sound, electromagnetic radiation, images, telecommunication transmission signals, and many others. A signal carries information, and the objective of signal processing is to extract the useful information it carries. The received signal is usually disturbed by electrical, atmospheric or deliberate interference. Due to the random nature of the signal, statistical techniques play an important role in its analysis.

There are many techniques used to analyze these types of data, depending on the focus or research question of the study. Among them are Principal Component Analysis (PCA) and the Fourier transform, in particular the discrete Fourier transform (DFT). The main goal of this work is to explore the relations between PCA and other mathematical transforms, based on Toeplitz and circulant matrices. In this sense, the proposed method relates the theory behind the Fourier transform, through Toeplitz and circulant matrices, to PCA. To illustrate the methodology we consider sounds and images.

Keywords: Circulant Matrix, Fourier Transform, Principal Component Analysis, Signal Processing, Toeplitz Matrix.
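
The central fact behind this connection can be checked numerically: the eigenvalues of a (symmetric) circulant matrix are the DFT of its first column, and its eigenvectors are the Fourier basis, so PCA of circularly stationary signals essentially reproduces the DFT. A minimal check:

    import numpy as np
    from scipy.linalg import circulant

    c = np.array([4.0, 2.0, 1.0, 0.5, 1.0, 2.0])  # symmetric first column
    C = circulant(c)                  # a valid circulant covariance matrix
    eig = np.linalg.eigvalsh(C)       # PCA variances = eigenvalues of C
    dft = np.fft.fft(c).real          # DFT of the first column
    print(np.allclose(np.sort(eig), np.sort(dft)))   # -> True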

14/05/2014, 11:30 — 12:30 — Room P3.10, Mathematics Building
Celfocus: Telco & Media

Mathematics in consultancy

Major telecommunications companies currently operate complex systems with varying interdependencies and yet remarkable reliability. The constant development of, and changes in, their core, along with the different types of users involved in the use of these systems, make them extremely interesting. The maintenance, the service assurance and the positive evolution of this network depend on a deep understanding and control of these complex systems. This knowledge, which concerns the different interactions between internal and exogenous systems, allows one, for instance, to control and increase the speed of problem solving, to correctly approach trouble tickets and to detect the root of problems.

To address these problems, error-safeguarding deconstruction and fuzzy logic have been implemented, emphasizing the control, precision and planning of the systems' behaviour. Statistics and Quality Control brought another perspective to these issues. The first results, obtained from field operations and with a direct influence on the user's experience, are the focus of this presentation.

30/04/2014, 11:30 — 12:30 — Room P3.10, Mathematics Building
CEMAT, Instituto Superior Técnico, Universidade de Lisboa

Estimating Adaptive Market Efficiency Using the Kalman Filter

This paper addresses the adaptive market hypothesis (AMH), which suggests that market efficiency is not a stable property but rather evolves with time. The test of evolving efficiency (TEE) investigates the efficiency of a particular market by using a multi-factor model with time-varying coefficients and GARCH errors. The model is a variant of the stochastic GARCH-in-Mean (GARCH-M) model proposed in 1990, which tests for market efficiency in an absolute sense, i.e. by assuming that market efficiency is unchanged over time. To relax this assumption, the TEE extends all previous tests and provides a mechanism for observing the market learning process by estimating the changes in market efficiency over time. Both the stochastic GARCH-M and the TEE models are estimated using Kalman filtering techniques. The contribution of this paper is two-fold:

  1. we explain in detail the quasi-maximum likelihood estimation (QMLE) procedure based on the standard Kalman filter applied to the stochastic GARCH-M and TEE models;
  2. we estimate the changes in the level of market efficiency in three markets over a period that includes the financial markets crisis of 2007/2008.

The three markets are specifically chosen to reflect a developed (London LSE), a mature emerging (Johannesburg JSE) and an immature emerging (Nairobi NSE) market perspective. Our empirical study suggests that, in spite of the financial crisis, all three markets maintained their pre-crisis level of weak-form efficiency.
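
The core state-space device can be sketched in a few lines, assuming a single factor and homoskedastic noise (the actual TEE adds further factors and GARCH errors and is estimated by QMLE): a regression coefficient that follows a random walk, tracked by a scalar Kalman filter. All values are synthetic.

    import numpy as np

    rng = np.random.default_rng(7)
    T = 300
    x = rng.standard_normal(T)                            # market factor returns
    beta_true = np.cumsum(0.05 * rng.standard_normal(T))  # random-walk beta_t
    y = beta_true * x + 0.3 * rng.standard_normal(T)      # observed returns

    q, r = 0.05 ** 2, 0.3 ** 2     # state and observation noise variances
    beta, P = 0.0, 1.0             # initial state mean and variance
    for t in range(T):
        P += q                                    # predict step
        k = P * x[t] / (x[t] ** 2 * P + r)        # Kalman gain
        beta += k * (y[t] - beta * x[t])          # update with innovation
        P *= 1 - k * x[t]
    print("final beta estimate:", beta, "true:", beta_true[-1])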

09/04/2014, 11:30 — 12:30 — Room P3.10, Mathematics Building
Universidade do Algarve and CEMAT

Stochastic modeling in Communications Networks

Research in stochastic modeling is strongly influenced by applications in diverse domains. Communication networks constitute a lively field that motivates the study of new stochastic models and sometimes the development of new methods of analysis to understand the overall functioning of these complex systems. In this talk, some stochastic models arising in communication networks are presented from the perspective of the speaker's work over the last decade.

02/04/2014, 11:30 — 12:30 — Room P3.10, Mathematics Building
FCT/DMAT of Évora University and CIMA-UE

Traffic Estimation of an M/G/1 Queue Using Probes

The huge growth of the Internet, associated with the appearance of new multimedia applications with high traffic demands, gives an important role to the monitoring of Internet traffic for quality of service assessment.

In this context, Internet probing has been a subject of great interest for researchers, since it makes it possible to measure Internet performance by sending controlled probe packets into the network, whose observed performance can be used to estimate the characteristics of the original traffic.

In this work we consider the estimation of the arrival rate and the service time moments of an Internet router modelled as an M/G/1 queue with probing. The probe inter-arrival times are i.i.d. and probe service times follow a general positive distribution. The only observations used are the arrival times, service times and departure times of the probes. We derive the main equations from which the quantities of interest can be estimated. Two particular probe arrival processes, deterministic and Poisson, are investigated.

Joint work with Nélson Antunes (FCT/DMAT of Algarve University and CEMAT) and António Pacheco (Instituto Superior Técnico and CEMAT)
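
As a crude illustration of what probe observations can reveal (not the estimating equations derived in the talk), the sketch below simulates an M/G/1 queue fed by Poisson traffic plus Poisson probes and exploits PASTA: the fraction of probes that are delayed estimates the total utilisation. The rates and the gamma service law are invented.

    import numpy as np

    rng = np.random.default_rng(8)
    lam, lam_p, n = 0.5, 0.05, 100000      # traffic rate, probe rate, # packets
    t_traffic = np.cumsum(rng.exponential(1 / lam, n))
    horizon = t_traffic[-1]
    t_probe = np.cumsum(rng.exponential(1 / lam_p, int(lam_p * horizon)))
    t_probe = t_probe[t_probe < horizon]

    arrivals = np.concatenate([t_traffic, t_probe])
    is_probe = np.concatenate([np.zeros(n, bool), np.ones(t_probe.size, bool)])
    order = np.argsort(arrivals)
    arrivals, is_probe = arrivals[order], is_probe[order]
    service = rng.gamma(2.0, 0.6, arrivals.size)       # "general" service times

    wait = np.zeros(arrivals.size)
    for i in range(1, arrivals.size):                  # Lindley recursion (FIFO)
        wait[i] = max(0.0, wait[i - 1] + service[i - 1]
                      - (arrivals[i] - arrivals[i - 1]))

    # PASTA: Poisson probes see time averages, so the fraction of delayed
    # probes estimates the utilisation (probe load included)
    print("estimated utilisation:", (wait[is_probe] > 0).mean())
    print("nominal utilisation  :", (lam + lam_p) * 2.0 * 0.6)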
