# Probability and Statistics Seminar

### Robust statistics: an overview

The basic ingredients of a statistical analysis are, in general, a data set, a model, and a number of statistical procedures (estimation methods and tests). These procedures require certain assumptions to be fulfilled in order to function properly. Typical assumptions of this kind are normality of the observations, their independence and distributional identity (i.i.d.), homogeneity of variances, linearity, and stationarity. If one or several of these assumptions do not hold, the results of the statistical procedures may become completely aberrant; such a procedure is called "non-robust". If, on the contrary, the results change only slightly in the presence of small deviations from the assumptions, the procedure is called "robust". The importance of robust statistical procedures stems from the fact that the ideal assumptions are rarely, if ever, met in practice.

Several simple examples will be presented to illustrate the severe effects that violations of the underlying assumptions can have on the results of statistical procedures. After this introduction, the talk continues with a brief presentation of the basic concepts of robust statistics and a discussion of robust methods in two main areas: regression and multivariate analysis. These are precisely the areas where the need for robust methods is strongest and where the research effort has been most concentrated. The talk ends with some considerations on the future of robust statistics.
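One of the simplest illustrations of the kind presented in the talk contrasts the sample mean with the sample median. A minimal sketch (the data here are invented purely for illustration): a single gross outlier drags the mean far from the bulk of the observations, while the median, a robust estimator, is barely affected.

```python
import statistics

# A clean sample, and the same sample with one gross outlier added
clean = [9.8, 10.1, 9.9, 10.2, 10.0, 9.7, 10.3]
contaminated = clean + [1000.0]  # a single aberrant observation

# The mean is dragged far from the bulk of the data...
print(statistics.mean(clean))          # 10.0
print(statistics.mean(contaminated))   # 133.75

# ...while the median (a robust estimator) barely moves.
print(statistics.median(clean))        # 10.0
print(statistics.median(contaminated)) # 10.05
```

Here a single bad observation out of eight shifts the mean by more than an order of magnitude, while the median moves by only 0.05: a concrete instance of the "completely aberrant" behaviour of a non-robust procedure under a small deviation from the assumptions.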