1 — 08:30 — The radius of statistical efficiency

Classical results in asymptotic statistics show that the Fisher information matrix controls the difficulty of estimating a statistical model from observed data. In this work, we introduce a companion measure of robustness of an estimation problem: the radius of statistical efficiency (RSE) is the size of the smallest perturbation to the problem data that renders the Fisher information matrix singular. This quantity arises naturally as a statistical analogue of the "distance to ill-posedness", commonly used in numerical analysis and optimization. We compute RSE up to numerical constants for a variety of test bed problems, including principal component analysis, generalized linear models, phase retrieval, blind deconvolution, and matrix completion. In all cases, the RSE quantifies the compatibility between the covariance of the population data and the underlying model parameter.
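
As an illustrative sketch (an assumption of mine, not a result quoted from the talk): for linear regression y = X beta + noise with known noise variance, the Fisher information is X^T X / sigma^2, and the smallest spectral-norm perturbation of the design X that makes this matrix singular equals sigma_min(X). The short Python snippet below computes this proxy on a hypothetical design matrix.

# Illustrative sketch (assumed example, not from the talk): for linear
# regression, the Fisher information is X^T X / sigma^2, and sigma_min(X)
# is the size of the smallest perturbation of X making it rank deficient.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))           # hypothetical design matrix
sigma2 = 1.0                               # noise variance (assumed known)

fisher = X.T @ X / sigma2                  # Fisher information matrix
smallest_sv = np.linalg.svd(X, compute_uv=False)[-1]

print("smallest eigenvalue of Fisher information:", np.linalg.eigvalsh(fisher)[0])
print("distance of X to rank deficiency (spectral norm):", smallest_sv)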

2 — 09:00 — The Role of Subgradients in Second-Order Variational Analysis

Any second-order variational construction is defined at a point in the graph of the subgradient mapping; the second element of such a point is a subgradient. Does it make a difference which subgradient we choose in these constructions? This talk answers this question and demonstrates that certain subgradients indeed allow us to obtain more information about the functions under consideration. Focusing mainly on the behavior of the second subderivative and subgradient proto-derivative of certain composite functions, we demonstrate that choosing the underlying subgradient, utilized in the definitions of these concepts, from the relative interior of the subdifferential ensures stronger second-order variational properties such as strict twice epi-differentiability and strict subgradient proto-differentiability. Using this observation, we provide a simple characterization of continuous differentiability of the proximal mapping of our composite functions. As another application, we discuss the equivalence of metric regularity and strong metric regularity of a class of generalized equations at their nondegenerate solutions. This talk is based on joint works with Nguyen T. V. Hang.
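
As a small hedged illustration of the link between subgradients and differentiability of the proximal mapping (the example is mine, not from the talk): for f = |.|, the proximal mapping is soft-thresholding, and its kinks occur exactly where the associated subgradient is an endpoint of the subdifferential [-1, 1] at 0, i.e. not in its relative interior.

# Hedged illustration (my example, not from the talk): the proximal mapping
# of f(x) = |x| is soft-thresholding.  It fails to be differentiable exactly
# at |x| = lam, where the subgradient v = (x - prox)/lam hits an endpoint of
# the subdifferential [-1, 1] instead of its relative interior (-1, 1).
import numpy as np

def prox_abs(x, lam):
    """Proximal mapping of lam * |.| (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

lam = 1.0
for x in [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]:
    p = prox_abs(x, lam)
    v = (x - p) / lam          # subgradient of |.| at prox_abs(x, lam)
    print(f"x={x:+.1f}  prox={p:+.1f}  subgradient={v:+.1f}")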

3 — 09:30 — Polyhedral Newton-min algorithms for complementarity problems

The semismooth Newton method is a very efficient approach for computing a zero of a large class of nonsmooth equations. When the initial iterate is sufficiently close to a regular zero and the function is strongly semismooth, the generated sequence converges quadratically to that zero, while each iteration only requires solving a linear system.
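
A minimal sketch of such an iteration, on a tiny toy problem of my own (not the authors' code): the nonlinear complementarity problem 0 <= x ⊥ F(x) >= 0 reformulated as H(x) = min(x, F(x)) = 0, with a standard choice of an element of the generalized Jacobian.

# Minimal sketch (toy example of mine, not the authors' code): semismooth
# Newton ("Newton-min") on H(x) = min(x, F(x)) = 0.  Each iteration selects
# an element of the generalized Jacobian and solves a single linear system.
import numpy as np

def F(x):                                    # hypothetical smooth map
    return np.array([x[0]**2 + x[1] - 1.0,
                     x[0] + 3.0 * x[1] - 0.5])

def JF(x):                                   # its Jacobian
    return np.array([[2.0 * x[0], 1.0],
                     [1.0,        3.0]])

def newton_min(x, iters=20, tol=1e-12):
    n = len(x)
    for _ in range(iters):
        Fx = F(x)
        H = np.minimum(x, Fx)
        if np.linalg.norm(H) < tol:
            break
        J = JF(x)
        # row i of the generalized Jacobian: e_i if x_i is the smaller branch,
        # otherwise the gradient of F_i
        A = np.array([np.eye(n)[i] if x[i] <= Fx[i] else J[i] for i in range(n)])
        x = x + np.linalg.solve(A, -H)
    return x

print(newton_min(np.array([1.2, 0.1])))      # converges to the solution (1, 0)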

If the first iterate is far from a zero, however, it is difficult to force convergence using a linesearch or trust regions, because a semismooth Newton direction need not be a descent direction of the associated least-squares merit function, unlike in the differentiable case. We explore this question in the particular case of a nonsmooth equation reformulation of the nonlinear complementarity problem, using the minimum function. We propose a globally convergent algorithm using a modification of the semismooth Newton direction that makes it a descent direction of the least-squares function. Instead of requiring that the direction satisfy a linear system, it must be a feasible point of a convex polyhedron; hence, it can be computed in polynomial time. This polyhedron is defined by inequalities, often very few of them, obtained by linearizing pairs of functions that have close negative values at the current iterate; hence, in a sense, the algorithm feels the proximity of a ``negative kink'' of the minimum function and acts accordingly. This is the plain PNM (Polyhedral Newton-min) algorithm.
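
To make the polyhedral feasibility step concrete, here is a hedged sketch (the constraint data are placeholders of mine, not the PNM construction itself): a point of a polyhedron {d : A d <= b} can be found with a zero-objective linear program via scipy.optimize.linprog, which runs in polynomial time.

# Hedged sketch: finding a point of a polyhedron {d : A d <= b} with a
# zero-objective LP.  The matrix A and vector b are placeholders; in PNM they
# would come from linearizing the pairs of components with close negative
# values, which is not reproduced here.
import numpy as np
from scipy.optimize import linprog

A = np.array([[ 1.0, -1.0],
              [-2.0,  1.0],
              [ 0.5,  0.5]])
b = np.array([1.0, 2.0, 3.0])

res = linprog(c=np.zeros(A.shape[1]),       # feasibility only: zero objective
              A_ub=A, b_ub=b,
              bounds=[(None, None)] * A.shape[1])
print(res.status, res.x)                    # status 0 means a feasible d was found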

In order to avoid, as often as possible, the extra cost of finding a feasible point of a polyhedron, a hybrid algorithm (HNM) is also proposed, in which the Newton-min direction is accepted whenever a sufficient-descent-like criterion is satisfied, which is often the case in practice. Global and fast convergence to regular points is proved.
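
The exact acceptance test is not spelled out in the abstract; the sketch below uses a generic sufficient-descent condition of my own as a stand-in, to show the shape of such a hybrid step.

# Hedged sketch of a hybrid step (the actual HNM test is not given in the
# abstract; the condition grad_theta . d <= -omega * ||d||^2 below is a generic
# stand-in).  Here theta denotes the least-squares merit function 0.5*||H(x)||^2.
import numpy as np

def hybrid_direction(d_newton_min, grad_theta, pnm_direction, omega=1e-4):
    """Accept the cheap Newton-min direction if it is a sufficiently good
    descent direction for theta; otherwise fall back to the (more expensive)
    polyhedral feasibility step supplied by the callable pnm_direction."""
    if grad_theta @ d_newton_min <= -omega * np.dot(d_newton_min, d_newton_min):
        return d_newton_min, "newton-min direction accepted"
    return pnm_direction(), "fallback to PNM feasibility step"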

PNM or HNM may be used for linear complementarity problems (LCPs), since only linear systems and polyhedral feasibility computations are involved, with no LCP subproblem. Preliminary experiments on LCPs were encouraging: in most instances, the Newton-min direction was always accepted. Only in a few instances, and in only one or two iterations, did the algorithm need the expensive feasibility steps, and these steps were helpful in avoiding failures.

4 — 10:00 — Levenberg-Marquardt globalization of the polyhedral Newton-min algorithm for complementarity problems

Complementarity problems, in their rather general formulation, consist in finding a vector x such that 0 <= F(x) ⊥ G(x) >= 0, where F and G are smooth vector functions. Optimality conditions for problems with nonlinear inequality constraints fall within this framework, but the problem has many other important applications. Remarkably, it is possible to reformulate the problem as a nonsmooth equation H(x) = 0. A possible approach then consists in applying nonsmooth Newton-like iterations to this equation. A method of choice is the semismooth Newton algorithm, which benefits from fast local convergence but lacks a global convergence property.
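
For the case H = min(F, G) used below, H(x) = 0 is equivalent to F(x) >= 0, G(x) >= 0 and F_i(x) G_i(x) = 0 for each component i. A tiny check, with placeholder values of mine:

# Tiny check (placeholder values of mine): the componentwise minimum
# H = min(F, G) vanishes exactly when F >= 0, G >= 0 and F_i * G_i = 0,
# i.e. at solutions of the complementarity problem 0 <= F(x) perp G(x) >= 0.
import numpy as np

def is_complementary(Fx, Gx, tol=1e-12):
    return (Fx >= -tol).all() and (Gx >= -tol).all() and (np.abs(Fx * Gx) <= tol).all()

Fx = np.array([0.0, 2.0, 1.5])      # example values of F(x) at some x
Gx = np.array([3.0, 0.0, 0.0])      # example values of G(x) at the same x

H = np.minimum(Fx, Gx)
print(np.allclose(H, 0.0), is_complementary(Fx, Gx))   # both True here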

When H = min(F, G), one can overcome this drawback by using the polyhedral Newton-min (PNM) algorithm with linesearch. The price to pay for this improvement is the need to find a point in a convex polyhedron (instead of solving a linear system) at each iteration. Furthermore, regularity conditions on the cluster points are required to guarantee that these are solutions. One goal of the presented work is to remedy this latter restriction by using a Levenberg-Marquardt (LM) globalization technique. This extension is made difficult in particular because the LM method minimizes a sum of squared linearized functions, which mixes the components, whereas the NM algorithm is designed componentwise and thus enjoys separability properties.
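
For reference, the generic Levenberg-Marquardt step for a residual map H with (generalized) Jacobian J at x solves (J^T J + mu I) d = -J^T H(x); this is the standard LM step, not the specific variant developed in the talk.

# Generic Levenberg-Marquardt step -- a reference sketch, not the specific
# variant developed in the talk.
import numpy as np

def lm_step(H_x, J, mu):
    """Solve (J^T J + mu I) d = -J^T H(x), i.e. minimize the regularized
    sum of squared linearized residuals ||H(x) + J d||^2 + mu ||d||^2."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + mu * np.eye(n), -J.T @ H_x)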

When, at an iterate, some components of F and G are equal, H is not differentiable. A technique used in the PNM algorithm deals with these components by combining F and G, whereas the NM algorithm only uses one of them. The proposed method follows a similar approach, using a componentwise convex combination of F and G. An appropriate choice of these weights allows one to obtain a descent curved path for the cost function, provided that the current iterate is not stationary. A tolerance is used to detect proximity to the kinks of H; it intervenes in the analysis and is also relevant for numerical reasons.
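
A hedged sketch of what such a componentwise convex combination could look like (the weight rule and the tolerance handling below are illustrative assumptions of mine, not the talk's actual choice):

# Hedged sketch (illustrative assumption, not the talk's actual rule): build a
# row of a "combined" Jacobian for H = min(F, G) as the convex combination
# (1 - t_i) * grad F_i + t_i * grad G_i, used only for components detected as
# kink components, i.e. those with |F_i - G_i| below a tolerance.
import numpy as np

def combined_jacobian(Fx, Gx, JF, JG, t, tol=1e-8):
    rows = []
    for i in range(len(Fx)):
        if abs(Fx[i] - Gx[i]) <= tol:          # near a kink of min(F_i, G_i)
            rows.append((1.0 - t[i]) * JF[i] + t[i] * JG[i])
        elif Fx[i] < Gx[i]:                    # F_i is the active branch
            rows.append(JF[i])
        else:                                  # G_i is the active branch
            rows.append(JG[i])
    return np.array(rows)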