ULB-UCL Seminar
Friday, 24 November 2017, 14:30 - 17:00

David Preinerstorfer (ULB): Uniformly valid confidence intervals post-model-selection (joint with François Bachoc and Lukas Steinberger).

Abstract: We suggest general methods to construct asymptotically uniformly valid confidence intervals post-model-selection. The constructions are based on principles recently proposed by Berk et al. (2013). In particular, the candidate models used can be misspecified, the target of inference is model-specific, and coverage is guaranteed for any data-driven model selection procedure. After developing a general theory, we apply our methods to practically important situations where the candidate set of models, from which a working model is selected, consists of fixed-design homoskedastic or heteroskedastic linear models, or of binary regression models with general link functions. In an extensive simulation study, we find that the proposed confidence intervals perform remarkably well, even when compared to existing methods that are tailored only for specific model selection procedures.
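
To illustrate the phenomenon the talk addresses, here is a minimal simulation sketch (Python; the design, selection rule, and widening constant are illustrative assumptions, not the authors' construction). After selecting a working model by the larger |t|-statistic, the naive 95% interval for the model-specific projection target undercovers, while a crude Bonferroni-style widening, standing in for the sharper PoSI-type constants of Berk et al. (2013), restores validity for any selection rule over the two candidates.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, z95, n_rep = 100, 1.0, 1.96, 20_000

# Fixed design with two correlated columns; candidate working models y~x1, y~x2.
x1 = rng.standard_normal(n)
x2 = 0.8 * x1 + 0.6 * rng.standard_normal(n)
mu = 0.0 * x1  # true mean function (null case); sigma is treated as known

# Model-specific targets: best linear approximation coefficient in each working model.
targets = np.array([x @ mu / (x @ x) for x in (x1, x2)])

# Bonferroni-style constant for 2 candidate models (a conservative stand-in,
# NOT the PoSI constant of Berk et al., which is tailored to the design):
K = 2.24
naive_hits = wide_hits = 0
for _ in range(n_rep):
    y = mu + sigma * rng.standard_normal(n)
    est = np.array([x @ y / (x @ x) for x in (x1, x2)])
    se = np.array([sigma / np.sqrt(x @ x) for x in (x1, x2)])
    j = np.argmax(np.abs(est / se))  # data-driven model selection
    naive_hits += abs(est[j] - targets[j]) <= z95 * se[j]
    wide_hits += abs(est[j] - targets[j]) <= K * se[j]

print(f"naive 95% CI coverage after selection: {naive_hits / n_rep:.3f}")
print(f"widened CI coverage:                   {wide_hits / n_rep:.3f}")
```

With correlated candidate covariates and a null signal, the naive post-selection coverage falls below the nominal 95%, while the widened interval is conservative; the methods of the talk aim at uniformly valid intervals without such crude widening.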

Johan Segers (UCL): Accelerating the Convergence Rate of Monte Carlo Integration through Ordinary Least Squares (joint with François Portier)

Abstract: In numerical integration, control variates are commonly used to reduce the variance of the naive Monte Carlo method. The control functions can be viewed as explanatory variables in a linear regression model with the integrand as the dependent variable. The control functions have a known mean vector and covariance matrix, and using this information or not yields a number of variations of the method. A specific variation arises when the control functions are centered and the integral is estimated as the intercept via the ordinary least squares estimator in the linear regression model. When the number of control functions is kept fixed, all these variations are asymptotically equivalent, with asymptotic variance equal to the variance of the error variable in the regression model. Nevertheless, the ordinary least squares estimator presents particular advantages: it is the only one that correctly integrates constant functions and the control functions. In addition, if the number of control functions grows to infinity with the number of Monte Carlo replicates, the ordinary least squares estimator converges at a faster rate than the Monte Carlo procedure, the integration error having a Gaussian limit whose variance can be estimated consistently by the residual variance in the regression model. An extensive simulation study confirms the superior performance of the ordinary least squares Monte Carlo method for a variety of univariate and multivariate integrands and control functions.
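
As a small illustration of the OLS variation described above, the sketch below (Python; the integrand exp on [0, 1] and the centered polynomial controls h_k(u) = u^k - 1/(k+1) are illustrative choices, not taken from the talk) estimates the integral as the fitted intercept of a least-squares regression and compares the residual spread with the naive Monte Carlo spread.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 2_000, 8  # Monte Carlo sample size, number of control functions

f = np.exp               # integrand on [0, 1]; true integral is e - 1
true_val = np.e - 1.0

u = rng.random(n)
# Centered control functions with known mean zero under U(0, 1):
# h_k(u) = u^k - 1/(k+1), k = 1..m.
H = np.column_stack([u**k - 1.0 / (k + 1) for k in range(1, m + 1)])

# Naive Monte Carlo estimate.
naive = f(u).mean()

# OLS Monte Carlo: regress f(U) on an intercept plus the centered controls;
# because the controls have known mean zero, the intercept estimates the integral.
X = np.column_stack([np.ones(n), H])
coef, *_ = np.linalg.lstsq(X, f(u), rcond=None)
olsmc = coef[0]

# The residual variance consistently estimates the asymptotic variance.
resid = f(u) - X @ coef
print(f"true integral: {true_val:.6f}")
print(f"naive MC:      {naive:.6f}  (error {naive - true_val:+.2e})")
print(f"OLS MC:        {olsmc:.6f}  (error {olsmc - true_val:+.2e})")
print(f"residual sd / naive sd: {resid.std() / f(u).std():.3g}")
```

For a smooth integrand well approximated by the controls, the residual spread is far below the naive Monte Carlo spread, and, in line with the abstract, letting m grow with n (slowly enough) is what yields the faster-than-Monte-Carlo rate.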

Location: Room 2NO906, Campus Plaine
Contact: Nancy De Munck
