Adrien Saumard, CREST
Thursday, 25 April 2019, 12:15 - 13:15

Over-penalization: a finite sample improvement of the Unbiased Risk Estimation principle for model selection

Abstract: In prediction problems, the Unbiased Risk Estimation principle is one of the most common approaches for selecting among a family of estimators. It is the theoretical ground behind Akaike's celebrated criterion and, more generally, behind theoretically designed penalties. Cross-validation, resampling-based selection techniques, and Stein's unbiased risk estimator also rest on this principle. However, as we shall see, the unbiased risk estimation principle is sub-optimal for finite samples. This is due to a subtle second-order effect in model selection problems, which implies that a slight positive bias in risk estimation actually improves the selection. This phenomenon is well known to experts and is sometimes called the over-penalization problem. We consider a general heuristic for selection within a family of M-estimators that allows us to identify the right amount of over-penalization, which is connected to the deviations of the excess risks of the estimators at hand. We also draw a natural link between the over-penalization problem and a multiple (pseudo-)testing problem over a collection of random events. We then propose efficient modifications of the AIC procedure and of V-fold cross-validation, supported by both theoretical and empirical results.
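To make the over-penalization idea concrete, the following is a minimal sketch (not the talk's actual procedure) of AIC-style model selection where the classical penalty 2k is inflated by a factor c > 1; the function names, the toy polynomial-regression setup, and the specific factor c are illustrative assumptions.

```python
import numpy as np

def overpenalized_aic(log_likelihood, n_params, c=1.0):
    """AIC-style criterion with an over-penalization factor c.

    c = 1.0 recovers the classical AIC penalty 2k; c > 1 adds the
    slight positive bias that, per the abstract, can improve
    finite-sample selection. (This form is illustrative only.)
    """
    return -2.0 * log_likelihood + 2.0 * c * n_params

def gaussian_loglik(y, y_hat):
    # Maximized Gaussian log-likelihood with plug-in variance estimate.
    n = y.size
    sigma2 = np.mean((y - y_hat) ** 2)
    return -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)

# Toy example: select a polynomial degree for noisy linear data.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=x.size)

def select_degree(c):
    scores = []
    for deg in range(6):
        coeffs = np.polyfit(x, y, deg)
        ll = gaussian_loglik(y, np.polyval(coeffs, x))
        # k = (deg + 1) coefficients + 1 variance parameter
        scores.append(overpenalized_aic(ll, deg + 2, c=c))
    return int(np.argmin(scores))

print("classical AIC degree:", select_degree(1.0))
print("over-penalized degree:", select_degree(1.25))
```

Since a larger c only increases the cost of extra parameters, the over-penalized criterion can never select a larger model than the classical one; the question the talk addresses is how to calibrate this extra bias from the data.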

Location: R42.2.113
Contact: Nancy De Munck
