1 — 08:30 — Convergence and error estimation of smoothing consensus-based optimization algorithm for nonsmooth nonconvex optimization

  • Wei Bian, Harbin Institute of Technology

Global optimization has received increasing attention across nearly every field of economics, science, and artificial intelligence. Recently, a novel swarm-intelligence model, the consensus-based optimization (CBO) algorithm, was introduced to address global optimization problems. Constrained by the conditions of Itô's formula, the convergence analyses of previous CBO algorithms focus mainly on problems with smooth objective functions. With the help of a smoothing method, this paper proposes an effective CBO algorithm, with theoretical analysis, for finding the global solution of a nonconvex, nonsmooth, and possibly non-Lipschitz continuous minimization problem. We show that the proposed algorithm achieves global consensus for any initial data, and then give a detailed error estimate of the objective function values produced by the algorithm relative to the global minimum. Inspired by smoothing techniques, we give a sufficient condition on the system parameters and initial data guaranteeing that the function value at the global consensus state is sufficiently close to the global minimum. Notably, the conditions on the parameters are independent of the problem dimension. Finally, numerical examples illustrate the appreciable performance of the proposed method on nonsmooth, nonconvex minimization problems.
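For orientation, the minimal Python sketch below illustrates the two ingredients the abstract combines: a discrete-time CBO particle update and a smoothing approximation of a nonsmooth, non-Lipschitz term. The objective, the smoothing function, and all parameter values are illustrative assumptions, not those analyzed in the talk.

```python
import numpy as np

def smoothed_objective(x, mu):
    """Smoothing of the nonsmooth, non-Lipschitz term sum_i |x_i|^{1/2}:
    |t|^{1/2} is approximated by (t^2 + mu^2)^{1/4}, plus a smooth quadratic part.
    (Illustrative test objective; global minimizer is 0.)"""
    return np.sum((x**2 + mu**2)**0.25, axis=-1) + 0.5 * np.sum(x**2, axis=-1)

def cbo_step(X, alpha, lam, sigma, dt, mu, rng):
    """One discrete-time CBO step on a particle population X of shape (N, d)."""
    f_vals = smoothed_objective(X, mu)
    # Gibbs weights concentrate on particles with small objective values.
    w = np.exp(-alpha * (f_vals - f_vals.min()))      # shift for numerical stability
    x_bar = (w[:, None] * X).sum(axis=0) / w.sum()
    drift = X - x_bar
    noise = rng.standard_normal(X.shape)
    # Drift toward the consensus point plus anisotropic (componentwise) diffusion.
    return X - lam * dt * drift + sigma * np.sqrt(dt) * drift * noise

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 10))            # 200 particles in R^10
for k in range(500):
    X = cbo_step(X, alpha=30.0, lam=1.0, sigma=0.7, dt=0.02, mu=1e-3, rng=rng)
print("consensus point:", X.mean(axis=0))             # should be near the global minimizer 0
```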

2 — 09:00 — On solving a rank regularized minimization problem via equivalent factorized column-sparse regularized models

The rank regularized minimization problem is an ideal model for the low-rank matrix completion/recovery problem. The matrix factorization approach can transform the high-dimensional rank regularized problem into a low-dimensional factorized column-sparse regularized problem. The latter greatly facilitates fast computation in applicable algorithms, but one must overcome the simultaneous nonconvexity of the loss and regularization functions. In this talk we consider the factorized column-sparse regularized model. First, we equip this model with bound constraints and establish a certain equivalence between the resulting factorization problem and the rank regularized problem. Further, we strengthen the optimality condition for stationary points of the factorization problem and define the notion of a strong stationary point. Moreover, we establish the equivalence between the factorization problem and a nonconvex relaxation of it in the sense of global minimizers and strong stationary points. To solve the factorization problem, we design two types of algorithms and give an adaptive method to reduce their computational cost. The first algorithm takes the relaxation point of view; after finitely many iterations, its iterates share certain properties of global minimizers of the factorization problem, and we analyze the convergence of its iterates to a strong stationary point. The second algorithm solves the factorization problem directly: we improve the PALM algorithm introduced by Bolte et al. (Math Program Ser A 146:459-494, 2014) for this problem and give improved convergence results. Finally, we conduct numerical experiments to show the promising performance of the proposed model and algorithms for low-rank matrix completion.
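As a rough illustration of the model class (not the authors' algorithms, which additionally involve bound constraints, a nonconvex relaxation, and an adaptive reduction scheme), the sketch below applies a generic PALM-type alternating proximal gradient step to a factorized column-sparse regularized completion model: zeroing matched column pairs of the factors lowers the rank of their product. The regularizer, step sizes, and data are illustrative assumptions.

```python
import numpy as np

def column_hard_threshold(Z, thresh):
    """Prox of lam * (#nonzero columns): a column of Z is kept only if its
    squared norm exceeds thresh = 2 * step * lam."""
    keep = np.sum(Z**2, axis=0) > thresh
    return Z * keep

def palm_column_sparse(M, mask, r, lam, iters=500, seed=0):
    """PALM-style alternating proximal-gradient sketch for
        min_{U,V} 0.5 * || mask * (U V^T - M) ||_F^2 + lam * (#nz cols of U + #nz cols of V)."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.standard_normal((m, r)) / np.sqrt(r)
    V = rng.standard_normal((n, r)) / np.sqrt(r)
    for _ in range(iters):
        R = mask * (U @ V.T - M)                    # masked residual
        LU = max(np.linalg.norm(V, 2)**2, 1e-8)     # Lipschitz bound for the U-block
        U = column_hard_threshold(U - (R @ V) / LU, 2 * lam / LU)
        R = mask * (U @ V.T - M)
        LV = max(np.linalg.norm(U, 2)**2, 1e-8)     # Lipschitz bound for the V-block
        V = column_hard_threshold(V - (R.T @ U) / LV, 2 * lam / LV)
    return U, V

# Toy matrix completion: rank-2 ground truth, roughly 50% of entries observed.
rng = np.random.default_rng(1)
M_true = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 30))
mask = rng.random((40, 30)) < 0.5
U, V = palm_column_sparse(mask * M_true, mask, r=6, lam=0.5)
print("number of nonzero columns of U:", np.sum(np.sum(U**2, axis=0) > 0))
```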

3 — 09:30 — A Corrected Inexact Proximal Augmented Lagrangian Method with a Relative Error Criterion for a Class of Group-quadratic Regularized Optimal Transport Problems

The optimal transport (OT) problem and its related problems have attracted significant attention and have been extensively studied in various applications. In this paper, we focus on a class of group-quadratic regularized OT problems which aim to find solutions with specialized structures that are advantageous in practical scenarios. To solve this class of problems, we propose a corrected inexact proximal augmented Lagrangian method (ciPALM), with the subproblems solved by the semismooth Newton (Ssn) method. We establish that the proposed method exhibits appealing convergence properties under mild conditions. Moreover, our ciPALM distinguishes itself from the recently developed semismooth Newton-based inexact proximal augmented Lagrangian (Snipal) method for linear programming. Specifically, Snipal uses an absolute error criterion for the approximate minimization of the subproblem, for which a summable sequence of tolerance parameters needs to be pre-specified in practical implementations. In contrast, our ciPALM adopts a relative error criterion with a single tolerance parameter, which is friendlier to tune from computational and implementation perspectives. These favorable properties position our ciPALM as a promising candidate for tackling large-scale problems. Various numerical studies validate the effectiveness of employing a relative error criterion for the inexact proximal augmented Lagrangian method, and also demonstrate that our ciPALM is competitive for solving large-scale group-quadratic regularized OT problems.
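The practical difference between the two stopping rules can be illustrated on a much simpler setting than ciPALM. In the toy inexact proximal-point loop below, the inner solver stops via one common form of a relative error criterion driven by a single parameter eta, rather than by a pre-specified summable tolerance sequence. The problem, criterion form, and parameters are illustrative assumptions, not the Ssn subproblem solver or the OT model from the talk.

```python
import numpy as np

def inexact_prox_point(A, b, t=1.0, eta=0.5, outer=50, seed=0):
    """Inexact proximal-point sketch for min 0.5 x'Ax - b'x (A symmetric PD).
    Each subproblem  min_x f(x) + 1/(2t) ||x - x_k||^2  is solved by gradient
    descent until a *relative* error criterion holds:
        ||subproblem gradient|| <= (eta / t) * ||x - x_k||.
    A single parameter eta controls the inexactness; no summable tolerance
    sequence (as an absolute criterion would require) needs to be pre-specified."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(b.size)
    L = np.linalg.norm(A, 2) + 1.0 / t              # Lipschitz constant of subproblem gradient
    for _ in range(outer):
        xk = x.copy()
        for _ in range(1000):                        # inner loop on the proximal subproblem
            g = A @ x - b + (x - xk) / t
            if np.linalg.norm(g) <= (eta / t) * np.linalg.norm(x - xk):
                break
            x = x - g / L
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
print(inexact_prox_point(A, b), np.linalg.solve(A, b))   # the two should be close
```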

4 — 10:00 — ** CANCELLED ** An iteratively reweighted second-order method for nonconvex regularization

This paper considers a class of nonconvex sparsity-promoting regularization problems with a twice continuously differentiable loss function. We present a second-order algorithm for this class of nonconvex and nonsmooth problems. Most existing algorithms are first-order methods; only recently was a hybrid of the proximal gradient method and a subspace regularized Newton method proposed for ℓp regularization. Our new method is also a hybrid method, with the following main features: (i) it is based on the iteratively reweighted method, with the regularization term iteratively approximated by a weighted ℓ1 regularization term, so that it can be applied to various nonconvex regularization problems; (ii) it alternates between a soft-thresholding step for the weighted ℓ1 approximation and a subspace approximate Newton step; (iii) for sufficiently large iteration counts, the iterates keep fixed signs and their nonzero components are bounded away from 0, so the algorithm eventually reverts to a perturbed Newton method; (iv) we prove global convergence and a local quadratic convergence rate under mild assumptions and demonstrate the method's efficiency on a large set of model prediction problems.
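A minimal sketch of the general idea behind features (i)-(iii), assuming a least-squares loss and an ℓp regularizer (both illustrative choices, not necessarily those in the paper): each iteration takes a soft-thresholding step on a weighted ℓ1 surrogate and then a sign-preserving Newton step restricted to the current nonzero support, where the regularizer is twice differentiable.

```python
import numpy as np

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def irl1_hybrid(A, b, lam, p=0.5, eps=1e-4, iters=200):
    """Hybrid iteratively reweighted ℓ1 sketch for
        min_x 0.5 ||Ax - b||^2 + lam * sum_i |x_i|^p,   0 < p < 1.
    Step 1: soft-thresholding (proximal gradient) on the weighted ℓ1 surrogate
            with weights w_i = p (|x_i| + eps)^(p-1).
    Step 2: Newton step restricted to the current nonzero support, accepted
            only if it preserves the signs of the nonzero components."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2)**2                     # Lipschitz constant of the smooth part
    for _ in range(iters):
        w = p * (np.abs(x) + eps)**(p - 1.0)        # reweighting of the ℓp term
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam * w / L)
        S = np.flatnonzero(x)                        # current support
        if S.size:
            xs, AS = x[S], A[:, S]
            gs = AS.T @ (AS @ xs - b) + lam * p * np.sign(xs) * np.abs(xs)**(p - 1.0)
            Hs = AS.T @ AS + np.diag(lam * p * (p - 1.0) * np.abs(xs)**(p - 2.0))
            try:
                step = np.linalg.solve(Hs, gs)
                if np.all(np.sign(xs - step) == np.sign(xs)):
                    x[S] = xs - step
            except np.linalg.LinAlgError:
                pass                                 # skip Newton step if Hs is singular
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120)
x_true[:5] = 3.0 * rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(60)
print(np.flatnonzero(irl1_hybrid(A, b, lam=0.1)))    # ideally close to the true support {0,...,4}
```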