ScholarBank@NUS (https://scholarbank.nus.edu.sg)
The DSpace digital repository system captures, stores, indexes, preserves, and distributes digital research material.
Thu, 02 Dec 2021 20:16:06 GMT

- Title: Semismooth homeomorphisms and strong stability of semidefinite and Lorentz complementarity problems
URI: https://scholarbank.nus.edu.sg/handle/10635/44218
Authors: Pang, J.-S.; Sun, D.; Sun, J.
Abstract: Based on an inverse function theorem for a system of semismooth equations, this paper establishes several necessary and sufficient conditions for an isolated solution of a complementarity problem defined on the cone of symmetric positive semidefinite matrices to be strongly regular/stable. We show further that for a parametric complementarity problem of this kind, if a solution corresponding to a base parameter is strongly stable, then a semismooth implicit solution function exists whose directional derivatives can be computed by solving certain affine problems on the critical cone at the base solution. Similar results are also derived for a complementarity problem defined on the Lorentz cone. The analysis relies on some new properties of the directional derivatives of the projector onto the semidefinite cone and the Lorentz cone.
Date: Wed, 01 Jan 2003 00:00:00 GMT
- Title: Semismooth matrix-valued functions
URI: https://scholarbank.nus.edu.sg/handle/10635/44209
Authors: Sun, D.; Sun, J.
Abstract: Matrix-valued functions play an important role in the development of algorithms for semidefinite programming problems. This paper studies generalized differential properties of such functions related to nonsmooth and smoothing Newton methods. The first part discusses basic properties such as the generalized derivative, Rademacher's theorem, the B-derivative, the directional derivative, and semismoothness. The second part shows that the matrix absolute-value function, the matrix semidefinite-projection function, and the matrix projective residual function are strongly semismooth.
Date: Tue, 01 Jan 2002 00:00:00 GMT
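
As a minimal numerical sketch of two of the functions named in the abstract above (an illustration, not the paper's analysis), both the matrix absolute value and the semidefinite projection follow from one spectral decomposition, and they satisfy the identity P_+(X) = (X + |X|)/2 for symmetric X:

```python
import numpy as np

def matrix_abs(X):
    """Matrix absolute value |X| = (X^2)^(1/2) of a symmetric matrix."""
    w, Q = np.linalg.eigh(X)
    return (Q * np.abs(w)) @ Q.T

def proj_psd(X):
    """Metric projection of a symmetric matrix onto the PSD cone:
    zero out the negative eigenvalues in the spectral decomposition."""
    w, Q = np.linalg.eigh(X)
    return (Q * np.maximum(w, 0.0)) @ Q.T
```

Both maps are strongly semismooth by the paper's results, even though neither is differentiable where an eigenvalue crosses zero.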
- Title: A feasible semismooth asymptotically Newton method for mixed complementarity problems
URI: https://scholarbank.nus.edu.sg/handle/10635/102643
Authors: Sun, D.; Womersley, R.S.; Qi, H.
Abstract: Semismooth Newton methods constitute a major research area for solving mixed complementarity problems (MCPs). Early research on semismooth Newton methods focused mainly on infeasible methods. However, some MCPs are not well defined outside the feasible region, and the equivalent unconstrained reformulations of other MCPs contain local minimizers outside the feasible region. As both situations can make the corresponding infeasible methods fail, more recent attention has turned to feasible methods. In this paper we propose a new feasible semismooth method for MCPs, in which the search direction asymptotically converges to the Newton direction. The new method overcomes the possible non-convergence of the projected semismooth Newton method, which is widely used in various numerical implementations, by minimizing a one-dimensional convex quadratic problem prior to doing (curved) line searches. As with other semismooth Newton methods, the proposed method solves only one linear system of equations at each iteration. The sparsity of the Jacobian of the reformulated system can be exploited, often reducing the size of the system that must be solved. The reason for this is that the projection onto the feasible set increases the likelihood of components of iterates being active. The global and superlinear/quadratic convergence of the proposed method is proved under mild conditions. Numerical results are reported on all problems from the MCPLIB collection [8].
Date: Sun, 01 Dec 2002 00:00:00 GMT
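
For orientation (a generic textbook reformulation, not this paper's specific method), an MCP over a box [lo, hi] is equivalent to finding a zero of the natural residual built from the projection onto the box; the helper name below is hypothetical:

```python
import numpy as np

def natural_residual(x, F, lo, hi):
    """Natural residual of the mixed complementarity problem MCP(F, [lo, hi]):
    r(x) = x - Pi_[lo,hi](x - F(x)); x solves the MCP if and only if r(x) = 0."""
    return x - np.clip(x - F(x), lo, hi)
```

For example, with F(x) = x - 1 on [0, +inf) the unique solution is x = 1, where the residual vanishes.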
- Title: A Fast Globally Linearly Convergent Algorithm for the Computation of Wasserstein Barycenters
URI: https://scholarbank.nus.edu.sg/handle/10635/169192
Authors: Yang, Lei; Li, Jia; Sun, Defeng; Toh, Kim-Chuan
Abstract: In this paper, we consider the problem of computing a Wasserstein barycenter for a set of discrete probability distributions with finite supports, which finds many applications in areas such as statistics, machine learning and image processing. When the support points of the barycenter are pre-specified, this problem can be modeled as a linear program (LP) whose problem size can be extremely large. To handle this large-scale LP, we analyse the structure of its dual problem, which is conceivably more tractable and can be reformulated as a well-structured convex problem with three kinds of block variables and a coupling linear equality constraint. We then adapt a symmetric Gauss-Seidel based alternating direction method of multipliers (sGS-ADMM) to solve the resulting dual problem and establish its global convergence and global linear convergence rate. As a critical component for efficient computation, we also show how all the subproblems involved can be solved exactly and efficiently. This makes our method suitable for computing a Wasserstein barycenter on a large dataset, without introducing an entropy regularization term as is commonly practiced. In addition, our sGS-ADMM can be used as a subroutine in an alternating minimization method to compute a barycenter when its support points are not pre-specified. Numerical results on synthetic datasets and image datasets demonstrate that our method is highly competitive for solving large-scale problems, in comparison to two existing representative methods and the commercial software Gurobi.
Date: Mon, 01 Jan 2018 00:00:00 GMT
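
When the barycenter's support points are fixed, the problem is the LP described above. The sketch below (a hypothetical direct formulation solved with SciPy's LP solver, not the paper's sGS-ADMM; it assumes 1-D supports and squared-Euclidean costs) has one transport plan Pi_k per input distribution plus the barycenter weights w, tied together by row-sum and column-sum equality constraints:

```python
import numpy as np
from scipy.optimize import linprog

def barycenter_lp(support, dists):
    """Fixed-support Wasserstein barycenter of distributions (pts_k, a_k),
    modeled directly as one LP in variables [w; vec(Pi_1); ...; vec(Pi_K)]."""
    m, K = len(support), len(dists)
    sizes = [len(a) for _, a in dists]
    nvar = m + sum(m * n for n in sizes)
    c = np.zeros(nvar)
    off = m
    for (pts, _), n in zip(dists, sizes):
        D = (support[:, None] - pts[None, :]) ** 2   # squared-distance cost
        c[off:off + m * n] = D.ravel() / K
        off += m * n
    A_eq, b_eq = [], []
    off = m
    for (pts, a), n in zip(dists, sizes):
        for i in range(m):                 # row sums of Pi_k equal w
            row = np.zeros(nvar)
            row[i] = -1.0
            row[off + i * n: off + (i + 1) * n] = 1.0
            A_eq.append(row); b_eq.append(0.0)
        for j in range(n):                 # column sums of Pi_k equal a
            row = np.zeros(nvar)
            row[off + j: off + m * n: n] = 1.0
            A_eq.append(row); b_eq.append(a[j])
        off += m * n
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * nvar, method="highs")
    return res.x[:m], res.fun
```

Even this tiny formulation shows why the problem size explodes: the variable count grows with the product of the barycenter support size and each input support size.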
- Title: Solving Karush-Kuhn-Tucker systems via the trust region and the conjugate gradient methods
URI: https://scholarbank.nus.edu.sg/handle/10635/104146
Authors: Qi, H.; Qi, L.; Sun, D.
Abstract: A popular approach to solving the Karush-Kuhn-Tucker (KKT) system, mainly arising from the variational inequality problem, is to reformulate it as a constrained minimization problem with simple bounds. In this paper, we propose a trust region method for solving the reformulated problem, in which the trust region subproblems are solved by the truncated conjugate gradient (CG) method, which is cost effective. Other advantages of the proposed method over existing ones include the fact that a good approximate solution to the trust region subproblem can be found by the truncated CG method and judged in a simple way; also, the working matrix in each iteration is H, instead of the condensed H^T H, where H is a matrix element of the generalized Jacobian of the function used in the reformulation. In fact, the matrix used is of reduced dimension. We pay extra attention to ensuring the success of the truncated CG method as well as the feasibility of the iterates with respect to the simple constraints. Another feature of the proposed method is that we allow the merit function value to increase at some iterations to speed up the convergence. Global and superlinear/quadratic convergence is shown under standard assumptions. Numerical results are reported on a subset of problems from the MCPLIB collection [S. P. Dirkse and M. C. Ferris, Optim. Methods Softw., 5 (1995), pp. 319-345].
Date: Thu, 01 Jan 2004 00:00:00 GMT
- Title: A smoothing Newton-type algorithm of stronger convergence for the quadratically constrained convex quadratic programming
URI: https://scholarbank.nus.edu.sg/handle/10635/102765
Authors: Huang, Z.-H.; Sun, D.; Zhao, G.
Abstract: In this paper we propose a smoothing Newton-type algorithm for the problem of minimizing a convex quadratic function subject to finitely many convex quadratic inequality constraints. The algorithm is shown to converge globally and possess stronger local superlinear convergence. Preliminary numerical results are also reported. © 2006 Springer Science + Business Media, LLC.
Date: Sun, 01 Oct 2006 00:00:00 GMT
- Title: Strong semismoothness of the Fischer-Burmeister SDC and SOC complementarity functions
URI: https://scholarbank.nus.edu.sg/handle/10635/44182
Authors: Sun, D.; Sun, J.
Abstract: We show that the Fischer-Burmeister complementarity functions, associated with the semidefinite cone (SDC) and the second-order cone (SOC), respectively, are strongly semismooth everywhere. Interestingly enough, the proof relies on a relationship between the singular value decomposition of a nonsymmetric matrix and the spectral decomposition of a symmetric matrix. © Springer-Verlag 2005.
Date: Sat, 01 Jan 2005 00:00:00 GMT
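
As a small numerical illustration (not the proof technique of the paper), the matrix-valued Fischer-Burmeister function for the semidefinite cone, phi(X, Y) = X + Y - (X^2 + Y^2)^(1/2), vanishes exactly at complementary PSD pairs:

```python
import numpy as np

def sqrtm_sym(X):
    """Principal square root of a symmetric positive semidefinite matrix."""
    w, Q = np.linalg.eigh(X)
    return (Q * np.sqrt(np.maximum(w, 0.0))) @ Q.T

def fischer_burmeister(X, Y):
    """Matrix Fischer-Burmeister function: zero iff X, Y are PSD and XY = 0."""
    return X + Y - sqrtm_sym(X @ X + Y @ Y)
```

For instance, X = diag(1, 0) and Y = diag(0, 2) are complementary, so phi(X, Y) = 0, while phi(I, I) is nonzero.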
- Title: The SC1 property of the squared norm of the SOC Fischer-Burmeister function
URI: https://scholarbank.nus.edu.sg/handle/10635/44189
Authors: Chen, J.-S.; Sun, D.; Sun, J.
Abstract: We show that the gradient mapping of the squared norm of the Fischer-Burmeister function is globally Lipschitz continuous and semismooth, which provides a theoretical basis for solving nonlinear second-order cone complementarity problems via the conjugate gradient method and the semismooth Newton method. © 2008 Elsevier Ltd. All rights reserved.
Date: Tue, 01 Jan 2008 00:00:00 GMT
- Title: Globally and quadratically convergent algorithm for minimizing the sum of Euclidean norms
URI: https://scholarbank.nus.edu.sg/handle/10635/103346
Authors: Zhou, G.; Toh, K.C.; Sun, D.
Abstract: For the problem of minimizing the sum of Euclidean norms (MSN), most existing quadratically convergent algorithms require a strict complementarity assumption. However, this assumption is not satisfied for a number of MSN problems. In this paper, we present a globally and quadratically convergent algorithm for the MSN problem. In particular, the quadratic convergence result is obtained without assuming strict complementarity. Examples without strictly complementary solutions are given to show that our algorithm can indeed achieve quadratic convergence. Preliminary numerical results are reported.
Date: Sat, 01 Nov 2003 00:00:00 GMT
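
The MSN objective f(x) = sum_i ||x - a_i|| from the abstract above also underlies the classical Weiszfeld iteration, shown here only as a simple, at best linearly convergent baseline, in contrast to the paper's quadratically convergent algorithm:

```python
import numpy as np

def weiszfeld(points, x0, iters=5000):
    """Weiszfeld iteration for the geometric-median problem
    min_x sum_i ||x - a_i||: re-weighted averaging of the data points."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(points - x, axis=1)
        if np.any(d < 1e-12):          # iterate hit a data point; stop
            break
        w = 1.0 / d
        x = (w[:, None] * points).sum(axis=0) / w.sum()
    return x
```

For the four corners of a rectangle, the minimizer is the center, which the iteration recovers from any interior starting point.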
- Title: A squared smoothing Newton method for nonsmooth matrix equations and its applications in semidefinite optimization problems
URI: https://scholarbank.nus.edu.sg/handle/10635/44236
Authors: Sun, J.; Sun, D.; Qi, L.
Abstract: We study a smoothing Newton method for solving a nonsmooth matrix equation that includes semidefinite programming and the semidefinite complementarity problem as special cases. This method, if specialized for solving semidefinite programs, needs to solve only one linear system per iteration and achieves quadratic convergence under strict complementarity and nondegeneracy. We also establish quadratic convergence of this method applied to the semidefinite complementarity problem under the assumption that the Jacobian of the problem is positive definite on the affine hull of the critical cone at the solution. These results are based on the strong semismoothness and complete characterization of the B-subdifferential of a corresponding squared smoothing matrix function, which are of general theoretical interest.
Date: Thu, 01 Jan 2004 00:00:00 GMT
- Title: A proximal point algorithm for log-determinant optimization with group Lasso regularization
URI: https://scholarbank.nus.edu.sg/handle/10635/102737
Authors: Yang, J.; Sun, D.; Toh, K.-C.
Abstract: We consider the covariance selection problem where variables are clustered into groups and the inverse covariance matrix is expected to have a blockwise sparse structure. This problem is realized via penalizing the maximum likelihood estimation of the inverse covariance matrix by group Lasso regularization. We propose to solve the resulting log-determinant optimization problem with the classical proximal point algorithm (PPA). At each iteration, as it is difficult to update the primal variables directly, we first solve the dual subproblem by an inexact semismooth Newton-CG method and then update the primal variables by explicit formulas based on the computed dual variables. We also propose to accelerate the PPA by an inexact generalized Newton's method when the iterate is close to the solution. Theoretically, we prove that at the optimal solution, the nonsingularity of the generalized Hessian matrices of the dual subproblem is equivalent to the constraint nondegeneracy condition for the primal problem. Global and local convergence results are also presented for the proposed PPA. Moreover, based on the augmented Lagrangian function of the dual problem we derive an alternating direction method (ADM), which is easily implementable and is demonstrated to be efficient for random problems. Numerical results, including comparisons with the ADM on both synthetic and real data, are presented to demonstrate that the proposed Newton-CG based PPA is stable and efficient and, in particular, outperforms the ADM when high accuracy is required. © 2013 Society for Industrial and Applied Mathematics.
Date: Tue, 01 Jan 2013 00:00:00 GMT
- Title: Semismoothness of solutions to generalized equations and the Moreau-Yosida regularization
URI: https://scholarbank.nus.edu.sg/handle/10635/104094
Authors: Meng, F.; Sun, D.; Zhao, G.
Abstract: We show that a locally Lipschitz homeomorphism function is semismooth at a given point if and only if its inverse function is semismooth at its image point. We present a sufficient condition for the semismoothness of solutions to generalized equations over cone reducible (nonpolyhedral) convex sets. We prove that the semismoothness of solutions to the Moreau-Yosida regularization of a lower semicontinuous proper convex function is implied by the semismoothness of the metric projector over the epigraph of the convex function. © Springer-Verlag 2005.
Date: Tue, 01 Nov 2005 00:00:00 GMT
- Title: Hankel matrix rank minimization with applications to system identification and realization
URI: https://scholarbank.nus.edu.sg/handle/10635/103363
Authors: Fazel, M.; Pong, T.K.; Sun, D.; Tseng, P.
Abstract: We introduce a flexible optimization framework for nuclear norm minimization of matrices with linear structure, including Hankel, Toeplitz, and moment structures, and we catalog applications from diverse fields under this framework. We discuss various first-order methods for solving the resulting optimization problem, including alternating direction methods of multipliers, proximal point algorithms, and gradient projection methods. We perform computational experiments to compare these methods on system identification problems and system realization problems. For the system identification problem, the gradient projection method (accelerated by Nesterov's extrapolation techniques) and the proximal point algorithm usually outperform other first-order methods in terms of CPU time on both real and simulated data, for small and large regularization parameters, respectively, while for the system realization problem, the alternating direction method of multipliers, as applied to a certain primal reformulation, usually outperforms other first-order methods in terms of CPU time. We also study the convergence of the proximal alternating direction methods of multipliers used in this paper. Copyright © 2013 by SIAM.
Date: Tue, 01 Jan 2013 00:00:00 GMT
- Title: An augmented Lagrangian dual approach for the H-weighted nearest correlation matrix problem
URI: https://scholarbank.nus.edu.sg/handle/10635/102825
Authors: Qi, H.; Sun, D.
Abstract: Higham (2002, IMA J. Numer. Anal., 22, 329-343) considered two types of nearest correlation matrix problems, namely the W-weighted case and the H-weighted case. While the W-weighted case has since been well studied to make several Lagrangian dual-based efficient numerical methods available, the H-weighted case remains numerically challenging. The difficulty of extending those methods from the W-weighted case to the H-weighted case lies in the fact that an analytic formula for the metric projection onto the positive semidefinite cone under the H-weight, unlike the case under the W-weight, is not available. In this paper we introduce an augmented Lagrangian dual-based approach that avoids the explicit computation of the metric projection under the H-weight. This method solves a sequence of unconstrained convex optimization problems, each of which can be efficiently solved by an inexact semismooth Newton method combined with the conjugate gradient method. Numerical experiments demonstrate that the augmented Lagrangian dual approach is not only fast but also robust. © The author 2010. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
Date: Fri, 01 Apr 2011 00:00:00 GMT
- Title: A Newton-CG augmented Lagrangian method for semidefinite programming
URI: https://scholarbank.nus.edu.sg/handle/10635/102695
Authors: Zhao, X.-Y.; Sun, D.; Toh, K.-C.
Abstract: We consider a Newton-CG augmented Lagrangian method for solving semidefinite programming (SDP) problems from the perspective of approximate semismooth Newton methods. In order to analyze the rate of convergence of our proposed method, we characterize the Lipschitz continuity of the corresponding solution mapping at the origin. For the inner problems, we show that the positive definiteness of the generalized Hessian of the objective function in these inner problems, a key property for ensuring the efficiency of using an inexact semismooth Newton-CG method to solve the inner problems, is equivalent to the constraint nondegeneracy of the corresponding dual problems. Numerical experiments on a variety of large-scale SDP problems with the matrix dimension n up to 4,110 and the number of equality constraints m up to 2,156,544 show that the proposed method is very efficient. We are also able to solve the SDP problem fap36 (with n = 4,110 and m = 1,154,467) in the Seventh DIMACS Implementation Challenge much more accurately than in previous attempts. Copyright © 2010, Society for Industrial and Applied Mathematics.
Date: Fri, 01 Jan 2010 00:00:00 GMT
- Title: Löwner's operator and spectral functions in Euclidean Jordan algebras
URI: https://scholarbank.nus.edu.sg/handle/10635/53017
Authors: Sun, D.; Sun, J.
Abstract: We study analyticity, differentiability, and semismoothness of Löwner's operator and spectral functions under the framework of Euclidean Jordan algebras. In particular, we show that many optimization-related classical results in the symmetric matrix space can be generalized within this framework. For example, the metric projection operator over any symmetric cone defined in a Euclidean Jordan algebra is shown to be strongly semismooth. The research also raises several open questions, whose answers would be of strong interest for optimization research. © 2008 INFORMS.
Date: Thu, 01 May 2008 00:00:00 GMT
- Title: Smoothing functions and smoothing Newton method for complementarity and variational inequality problems
URI: https://scholarbank.nus.edu.sg/handle/10635/104140
Authors: Qi, L.; Sun, D.
Abstract: This paper provides for the first time some computable smoothing functions for variational inequality problems with general constraints. It also proposes a new version of the smoothing Newton method and establishes its global and superlinear (quadratic) convergence under conditions weaker than those previously used in the literature. These results are achieved by introducing a general definition for smoothing functions, which includes almost all the existing smoothing functions as special cases.
Date: Mon, 01 Apr 2002 00:00:00 GMT
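
One of the classical special cases covered by a general smoothing-function definition of this kind is the Chen-Harker-Kanzow-Smale function, phi_mu(a, b) = (a + b - sqrt((a - b)^2 + 4 mu^2))/2, which is smooth for mu > 0 and satisfies |phi_mu(a, b) - min(a, b)| <= mu (shown here as a generic illustration, not this paper's new functions):

```python
import numpy as np

def chks(a, b, mu):
    """Chen-Harker-Kanzow-Smale smoothing of min(a, b); exact at mu = 0,
    and within mu of min(a, b) for any mu > 0."""
    return 0.5 * (a + b - np.sqrt((a - b) ** 2 + 4.0 * mu ** 2))
```

Driving mu down along the iterations is what lets a smoothing Newton method approach a solution of the original nonsmooth system.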
- Title: Complementarity functions and numerical experiments on some smoothing Newton methods for second-order-cone complementarity problems
URI: https://scholarbank.nus.edu.sg/handle/10635/44187
Authors: Chen, X.D.; Sun, D.; Sun, J.
Abstract: Two results on the second-order-cone complementarity problem are presented. We show that the squared smoothing function is strongly semismooth. Under monotonicity and strict feasibility we provide a new proof, based on a penalized natural complementarity function, for the solution set of the second-order-cone complementarity problem being bounded. Numerical results of squared smoothing Newton algorithms are reported.
Date: Wed, 01 Jan 2003 00:00:00 GMT
- Title: A quadratically convergent Newton method for computing the nearest correlation matrix
URI: https://scholarbank.nus.edu.sg/handle/10635/102741
Authors: Qi, H.; Sun, D.
Abstract: The nearest correlation matrix problem is to find a correlation matrix which is closest to a given symmetric matrix in the Frobenius norm. The well-studied dual approach is to reformulate this problem as an unconstrained continuously differentiable convex optimization problem. Gradient methods and quasi-Newton methods such as BFGS have been used directly to obtain globally convergent methods. Since the objective function in the dual approach is not twice continuously differentiable, these methods converge at best linearly. In this paper, we investigate a Newton-type method for the nearest correlation matrix problem. Based on recent developments on strongly semismooth matrix-valued functions, we prove the quadratic convergence of the proposed Newton method. Numerical experiments confirm the fast convergence and the high efficiency of the method. © 2006 Society for Industrial and Applied Mathematics.
Date: Sun, 01 Jan 2006 00:00:00 GMT
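
For context, the same problem can be attacked with Higham's alternating-projections scheme (with Dykstra's correction), which converges only linearly; the semismooth Newton method above is the quadratically convergent alternative. A minimal sketch of the baseline:

```python
import numpy as np

def nearest_correlation(A, iters=500):
    """Higham's alternating projections with Dykstra's correction:
    alternate between the PSD cone and the unit-diagonal affine set."""
    Y = A.copy()
    dS = np.zeros_like(A)
    for _ in range(iters):
        R = Y - dS
        w, Q = np.linalg.eigh((R + R.T) / 2)
        X = (Q * np.maximum(w, 0.0)) @ Q.T    # project onto the PSD cone
        dS = X - R                            # Dykstra correction (PSD step only)
        Y = X.copy()
        np.fill_diagonal(Y, 1.0)              # project onto unit diagonal
    return Y
```

The correction term dS is needed because the PSD cone is not affine; plain alternating projections between a cone and an affine set need not reach the nearest point.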
- Title: Constraint nondegeneracy, strong regularity, and nonsingularity in semidefinite programming
URI: https://scholarbank.nus.edu.sg/handle/10635/103046
Authors: Chan, Z.X.; Sun, D.
Abstract: It is known that the Karush-Kuhn-Tucker (KKT) conditions of semidefinite programming can be reformulated as a nonsmooth system via the metric projector over the cone of symmetric and positive semidefinite matrices. We show in this paper that the primal and dual constraint nondegeneracies, the strong regularity, the nonsingularity of the B-subdifferential of this nonsmooth system, and the nonsingularity of the corresponding Clarke's generalized Jacobian, at a KKT point, are all equivalent. Moreover, we prove the equivalence between each of these conditions and the nonsingularity of Clarke's generalized Jacobian of the smoothed counterpart of this nonsmooth system used in several globally convergent smoothing Newton methods. In particular, we establish the quadratic convergence of these methods under the primal and dual constraint nondegeneracies, but without the strict complementarity. © 2008 Society for Industrial and Applied Mathematics.
Date: Tue, 01 Jan 2008 00:00:00 GMT
- Title: On the coderivative of the projection operator onto the second-order cone
URI: https://scholarbank.nus.edu.sg/handle/10635/103776
Authors: Outrata, J.V.; Sun, D.
Abstract: The limiting (Mordukhovich) coderivative of the metric projection onto the second-order cone in \mathbb{R}^{n} is computed. This result is used to obtain a sufficient condition for the Aubin property of the solution map of a parameterized second-order cone complementarity problem and to derive necessary optimality conditions for a mathematical program with a second-order cone complementarity problem among the constraints. © 2008 Springer Science+Business Media B.V.
Date: Mon, 01 Dec 2008 00:00:00 GMT
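
The projection being analyzed has a standard closed form. Writing x = (x0, xb) for the second-order cone {x : ||xb|| <= x0}, the three cases (inside the cone, inside the polar cone, and the boundary formula) can be sketched as follows (illustration only, not the coderivative computation itself):

```python
import numpy as np

def proj_soc(x):
    """Metric projection onto the second-order cone {(x0, xb): ||xb|| <= x0}."""
    x0, xb = x[0], x[1:]
    r = np.linalg.norm(xb)
    if r <= x0:                    # already in the cone
        return x.copy()
    if r <= -x0:                   # in the polar (negative) cone: project to 0
        return np.zeros_like(x)
    s = 0.5 * (x0 + r)             # otherwise project onto the cone's boundary
    return np.concatenate(([s], (s / r) * xb))
```

The case split is exactly where the projection fails to be differentiable, which is why generalized derivatives such as the coderivative are needed.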
- Title: An inexact accelerated proximal gradient method for large scale linearly constrained convex SDP
URI: https://scholarbank.nus.edu.sg/handle/10635/102841
Authors: Jiang, K.; Sun, D.; Toh, K.-C.
Abstract: The accelerated proximal gradient (APG) method, first proposed by Nesterov for minimizing smooth convex functions, later extended by Beck and Teboulle to composite convex objective functions, and studied in a unifying manner by Tseng, has proven to be highly efficient in solving some classes of large scale structured convex optimization (possibly nonsmooth) problems, including nuclear norm minimization problems in matrix completion and ℓ1 minimization problems in compressed sensing. The method has superior worst-case iteration complexity over the classical projected gradient method and usually has good practical performance on problems with appropriate structures. In this paper, we extend the APG method to the inexact setting, where the subproblem in each iteration is solved only approximately, and show that it enjoys the same worst-case iteration complexity as the exact counterpart if the subproblems are progressively solved to sufficient accuracy. We apply our inexact APG method to solve large scale convex quadratic semidefinite programming (QSDP) problems of the form min{1/2〈x, Q(x)〉 + 〈c, x〉 | A(x) = b, x ⪰ 0}, where Q, A are given linear maps and b, c are given data. The subproblem in each iteration is solved by a semismooth Newton-CG (SSNCG) method with warm-start using the iterate from the previous iteration. Our APG-SSNCG method is demonstrated to be efficient for QSDP problems whose positive semidefinite linear maps Q are highly ill-conditioned or rank deficient. © 2012 Society for Industrial and Applied Mathematics.
Date: Sun, 01 Jan 2012 00:00:00 GMT
- Title: Fast Algorithms for Large-Scale Generalized Distance Weighted Discrimination
URI: https://scholarbank.nus.edu.sg/handle/10635/144831
Authors: Lam, Xin Yee; Marron, J. S.; Sun, Defeng; Toh, Kim-Chuan
Date: Thu, 17 May 2018 00:00:00 GMT
- Title: An implementable proximal point algorithmic framework for nuclear norm minimization
URI: https://scholarbank.nus.edu.sg/handle/10635/102836
Authors: Liu, Y.-J.; Sun, D.; Toh, K.-C.
Abstract: The nuclear norm minimization problem is to find a matrix with the minimum nuclear norm subject to linear and second-order cone constraints. Such a problem often arises from the convex relaxation of a rank minimization problem with noisy data, and arises in many fields of engineering and science. In this paper, we study inexact proximal point algorithms in the primal, dual and primal-dual forms for solving the nuclear norm minimization with linear equality and second-order cone constraints. We design efficient implementations of these algorithms and present comprehensive convergence results. In particular, we investigate the performance of our proposed algorithms in which the inner sub-problems are approximately solved by the gradient projection method or the accelerated proximal gradient method. Our numerical results for solving randomly generated matrix completion problems and real matrix completion problems show that our algorithms perform favorably in comparison to several recently proposed state-of-the-art algorithms. Interestingly, our proposed algorithms are connected with other algorithms that have been studied in the literature. © 2011 Springer and Mathematical Optimization Society.
Date: Fri, 01 Jun 2012 00:00:00 GMT
- Title: A further result on an implicit function theorem for locally Lipschitz functions
URI: https://scholarbank.nus.edu.sg/handle/10635/102647
Authors: Sun, D.
Abstract: Let H: R^m × R^n → R^n be a locally Lipschitz function in a neighborhood of (ȳ, x̄) with H(ȳ, x̄) = 0 for some ȳ ∈ R^m and x̄ ∈ R^n. The implicit function theorem in the sense of Clarke (Pacific J. Math. 64 (1976) 97; Optimization and Nonsmooth Analysis, Wiley, New York, 1983) says that if π_x ∂H(ȳ, x̄) is of maximal rank, then there exist a neighborhood Y of ȳ and a Lipschitz function G(·): Y → R^n such that G(ȳ) = x̄ and, for every y in Y, H(y, G(y)) = 0. In this paper, we further show that if H has a superlinear (quadratic) approximate property at (ȳ, x̄), then G has a superlinear (quadratic) approximate property at ȳ. This result is useful in designing Newton's methods for nonsmooth equations. © 2001 Elsevier Science B.V.
Date: Tue, 01 May 2001 00:00:00 GMT
- Title: Strong semismoothness of eigenvalues of symmetric matrices and its application to inverse eigenvalue problems
URI: https://scholarbank.nus.edu.sg/handle/10635/44233
Authors: Sun, D.; Sun, J.
Abstract: It is well known that the eigenvalues of a real symmetric matrix are not everywhere differentiable. A classical result of Ky Fan states that each eigenvalue of a symmetric matrix is the difference of two convex functions, which implies that the eigenvalues are semismooth functions. Based on a recent result of the authors, it is further proved in this paper that the eigenvalues of a symmetric matrix are strongly semismooth everywhere. As an application, it is demonstrated how this result can be used to analyze the quadratic convergence of Newton's method for solving inverse eigenvalue problems (IEPs) and generalized IEPs with multiple eigenvalues.
Tue, 01 Jan 2002 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/442332002-01-01T00:00:00Z
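A quick numerical illustration of the nondifferentiability this abstract refers to (a minimal sketch, not taken from the paper): the largest eigenvalue of the symmetric matrix A(t) = diag(t, -t) equals |t|, which is Lipschitz and semismooth but not differentiable at t = 0.

```python
import numpy as np

def lam_max(t):
    # Largest eigenvalue of the symmetric matrix A(t) = [[t, 0], [0, -t]].
    # numpy's eigvalsh returns eigenvalues in ascending order.
    return np.linalg.eigvalsh(np.array([[t, 0.0], [0.0, -t]]))[-1]

# lam_max(t) = |t|: smooth away from t = 0, nondifferentiable at t = 0.
for t in (-0.5, -0.1, 0.0, 0.1, 0.5):
    assert abs(lam_max(t) - abs(t)) < 1e-12
```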
- Calibrating least squares semidefinite programming with equality and inequality constraintshttps://scholarbank.nus.edu.sg/handle/10635/102966Title: Calibrating least squares semidefinite programming with equality and inequality constraints
Authors: Gao, Y.; Sun, D.
Abstract: In this paper, we consider the least squares semidefinite programming with a large number of equality and inequality constraints. One difficulty in finding an efficient method for solving this problem is due to the presence of the inequality constraints. In this paper, we propose to overcome this difficulty by reformulating the problem as a system of semismooth equations with two level metric projection operators. We then design an inexact smoothing Newton method to solve the resulting semismooth system. At each iteration, we use the BiCGStab iterative solver to obtain an approximate solution to the generated smoothing Newton linear system. Our numerical experiments confirm the high efficiency of the proposed method. © 2009 Society for Industrial and Applied Mathematics.
Thu, 01 Jan 2009 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/1029662009-01-01T00:00:00Z
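The basic building block of such semismooth reformulations is the metric projection onto the positive semidefinite cone. A minimal sketch (an illustration of the standard spectral formula, not the paper's two-level algorithm):

```python
import numpy as np

def proj_psd(A):
    """Metric projection of a symmetric matrix A onto the PSD cone:
    eigendecompose and clip negative eigenvalues to zero."""
    w, Q = np.linalg.eigh(A)
    return (Q * np.maximum(w, 0.0)) @ Q.T   # Q diag(max(w,0)) Q^T

A = np.array([[2.0, -1.0], [-1.0, -3.0]])   # indefinite symmetric matrix
P = proj_psd(A)
# The projection is symmetric, PSD, and idempotent under re-projection.
assert np.allclose(P, P.T)
assert np.linalg.eigvalsh(P).min() >= -1e-12
assert np.allclose(proj_psd(P), P)
```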
- Solving log-determinant optimization problems by a Newton-CG primal proximal point algorithmhttps://scholarbank.nus.edu.sg/handle/10635/104148Title: Solving log-determinant optimization problems by a Newton-CG primal proximal point algorithm
Authors: Wang, C.; Sun, D.; Toh, K.-C.
Abstract: We propose a Newton-CG primal proximal point algorithm (PPA) for solving large scale log-determinant optimization problems. Our algorithm employs the essential ideas of PPA, the Newton method, and the preconditioned CG solver. When applying the Newton method to solve the inner subproblem, we find that the log-determinant term plays the role of a smoothing term as in the traditional smoothing Newton technique. Focusing on the problem of maximum likelihood sparse estimation of a Gaussian graphical model, we demonstrate that our algorithm performs favorably compared to existing state-of-the-art algorithms and is much preferred when a high quality solution is required for problems with many equality constraints. © 2010 Society for Industrial and Applied Mathematics.
Fri, 01 Jan 2010 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/1041482010-01-01T00:00:00Z
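The smoothing role of the log-determinant term comes from its classical derivative, ∇(-log det X) = -X⁻¹ on the positive definite cone. A minimal sketch verifying this gradient by finite differences (an assumption-free textbook identity, not code from the paper):

```python
import numpy as np

def f(X, C):
    # Penalized log-det objective: f(X) = <C, X> - log det X, for X ≻ 0.
    return np.sum(C * X) - np.linalg.slogdet(X)[1]

def grad_f(X, C):
    # Gradient: ∇f(X) = C - X^{-1}.
    return C - np.linalg.inv(X)

rng = np.random.default_rng(0)
C = rng.standard_normal((3, 3)); C = (C + C.T) / 2
X = np.eye(3) + 0.1 * np.ones((3, 3))       # a positive definite point
G = grad_f(X, C)

# Directional finite-difference check along a random symmetric direction.
D = rng.standard_normal((3, 3)); D = (D + D.T) / 2
eps = 1e-6
fd = (f(X + eps * D, C) - f(X - eps * D, C)) / (2 * eps)
assert abs(fd - np.sum(G * D)) < 1e-5
```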
- A dual optimization approach to inverse quadratic eigenvalue problems with partial eigenstructurehttps://scholarbank.nus.edu.sg/handle/10635/102635Title: A dual optimization approach to inverse quadratic eigenvalue problems with partial eigenstructure
Authors: Bai, Z.-J.; Chu, D.; Sun, D.
Abstract: The inverse quadratic eigenvalue problem (IQEP) arises in the field of structural dynamics. It aims to find three symmetric matrices, known as the mass, the damping, and the stiffness matrices, such that they are closest to the given analytical matrices and satisfy the measured data. The difficulty of this problem lies in the fact that in applications the mass matrix should be positive definite and the stiffness matrix positive semidefinite. Based on an equivalent dual optimization version of the IQEP, we present a quadratically convergent Newton-type method. Our numerical experiments confirm the high efficiency of the proposed method. © 2007 Society for Industrial and Applied Mathematics.
Mon, 01 Jan 2007 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/1026352007-01-01T00:00:00Z
- Correlation stress testing for value-at-risk: An unconstrained convex optimization approachhttps://scholarbank.nus.edu.sg/handle/10635/104553Title: Correlation stress testing for value-at-risk: An unconstrained convex optimization approach
Authors: Qi, H.; Sun, D.
Abstract: Correlation stress testing is employed in several financial models for determining the value-at-risk (VaR) of a financial institution's portfolio. The possible lack of mathematical consistency in the target correlation matrix, which must be positive semidefinite, often causes breakdown of these models. The target matrix is obtained by fixing some of the correlations (often contained in blocks of submatrices) in the current correlation matrix while stressing the remaining ones to a certain level to reflect various stressing scenarios. The combination of fixing and stressing effects often leads to mathematical inconsistency of the target matrix. It is then natural to find the nearest correlation matrix to the target matrix with the fixed correlations unaltered. However, the number of fixed correlations could be potentially very large, posing a computational challenge to existing methods. In this paper, we propose an unconstrained convex optimization approach by solving one or a sequence of continuously differentiable (but not twice continuously differentiable) convex optimization problems, depending on different stress patterns. This research takes full advantage of the recently developed theory of strongly semismooth matrix-valued functions, which makes fast convergent numerical methods applicable to the underlying unconstrained optimization problem. Promising numerical results on practical data (RiskMetrics database) and on randomly generated problems of larger sizes are reported. © 2009 Springer Science+Business Media, LLC.
Mon, 01 Mar 2010 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/1045532010-03-01T00:00:00Z
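For context, a classical baseline for the underlying nearest-correlation-matrix problem is Higham-style alternating projections with a Dykstra correction (a slow but simple sketch shown below for illustration; the paper instead solves a smooth unconstrained dual, which scales far better and handles fixed entries):

```python
import numpy as np

def nearest_correlation(G, iters=200):
    """Alternating projections between the PSD cone and the unit-diagonal
    set, with a Dykstra correction applied to the PSD projection only."""
    Y = G.copy()
    dS = np.zeros_like(G)
    for _ in range(iters):
        R = Y - dS                          # apply Dykstra correction
        w, Q = np.linalg.eigh(R)            # project onto the PSD cone
        X = (Q * np.maximum(w, 0.0)) @ Q.T
        dS = X - R                          # update the correction
        Y = X.copy()
        np.fill_diagonal(Y, 1.0)            # project onto unit diagonal
    return Y

# An indefinite "target" matrix (det = -1), so no exact correlation
# matrix matches it and a nearest one must be computed.
G = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])
X = nearest_correlation(G)
assert np.allclose(np.diag(X), 1.0)
assert np.linalg.eigvalsh(X).min() >= -1e-6
```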
- A primal-dual algorithm for minimizing a sum of Euclidean normshttps://scholarbank.nus.edu.sg/handle/10635/102734Title: A primal-dual algorithm for minimizing a sum of Euclidean norms
Authors: Qi, L.; Sun, D.; Zhou, G.
Abstract: We study the problem of minimizing a sum of Euclidean norms. This nonsmooth optimization problem arises in many different kinds of modern scientific applications. In this paper we first transform this problem and its dual problem into a system of strongly semismooth equations, and give some uniqueness theorems for this problem. We then present a primal-dual algorithm for this problem by solving this system of strongly semismooth equations. Preliminary numerical results are reported, which show that this primal-dual algorithm is very promising. © 2002 Elsevier Science B.V. All rights reserved.
Tue, 01 Jan 2002 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/1027342002-01-01T00:00:00Z
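The simplest instance of minimizing a sum of Euclidean norms is the Fermat-Weber point, min_x Σᵢ ‖x - aᵢ‖, classically solved by the Weiszfeld fixed-point iteration. A minimal sketch of that baseline (only linearly convergent and fragile at data points, which is exactly the degeneracy the paper's primal-dual semismooth approach is designed to handle):

```python
import numpy as np

def weiszfeld(points, iters=500):
    """Weiszfeld iteration for min_x sum_i ||x - a_i|| (Fermat-Weber)."""
    x = points.mean(axis=0)                 # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - x, axis=1)
        if np.any(d < 1e-12):               # landed on a data point: stop
            break
        w = 1.0 / d                         # inverse-distance weights
        x = (w[:, None] * points).sum(axis=0) / w.sum()
    return x

a = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
x = weiszfeld(a)
# First-order optimality: unit vectors from x to the points sum to ~0.
u = (a - x) / np.linalg.norm(a - x, axis=1)[:, None]
assert np.linalg.norm(u.sum(axis=0)) < 1e-6
```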
- The rate of convergence of the augmented Lagrangian method for nonlinear semidefinite programminghttps://scholarbank.nus.edu.sg/handle/10635/44217Title: The rate of convergence of the augmented Lagrangian method for nonlinear semidefinite programming
Authors: Sun, D.; Sun, J.; Zhang, L.
Abstract: We analyze the rate of local convergence of the augmented Lagrangian method in nonlinear semidefinite optimization. The presence of the positive semidefinite cone constraint requires extensive tools such as the singular value decomposition of matrices, an implicit function theorem for semismooth functions, and variational analysis on the projection operator in the symmetric matrix space. Without requiring strict complementarity, we prove that, under the constraint nondegeneracy condition and the strong second order sufficient condition, the rate of convergence is linear and the ratio constant is proportional to 1/c, where c is the penalty parameter that exceeds a threshold c̄ > 0. © 2007 Springer-Verlag.
Tue, 01 Jan 2008 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/442172008-01-01T00:00:00Z
- The strong second-order sufficient condition and constraint nondegeneracy in nonlinear semidefinite programming and their implicationshttps://scholarbank.nus.edu.sg/handle/10635/104362Title: The strong second-order sufficient condition and constraint nondegeneracy in nonlinear semidefinite programming and their implications
Authors: Sun, D.
Abstract: For a locally optimal solution to the nonlinear semidefinite programming problem, under Robinson's constraint qualification, the following conditions are proved to be equivalent: the strong second-order sufficient condition and constraint nondegeneracy; the nonsingularity of Clarke's Jacobian of the Karush-Kuhn-Tucker system; the strong regularity of the Karush-Kuhn-Tucker point; and others. © 2006 INFORMS.
Wed, 01 Nov 2006 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/1043622006-11-01T00:00:00Z
- Sub-quadratic convergence of a smoothing Newton algorithm for the P0- and monotone LCPhttps://scholarbank.nus.edu.sg/handle/10635/104214Title: Sub-quadratic convergence of a smoothing Newton algorithm for the P0- and monotone LCP
Authors: Huang, Z.-H.; Qi, L.; Sun, D.
Abstract: Given M ∈ Rn×n and q ∈ Rn, the linear complementarity problem (LCP) is to find (x, s) ∈ Rn × Rn such that (x, s) ≥ 0, s = Mx + q, xTs = 0. By using the Chen-Harker-Kanzow-Smale (CHKS) smoothing function, the LCP is reformulated as a system of parameterized smooth equations. As a result, a smoothing Newton algorithm, which is a modified version of the Qi-Sun-Zhou algorithm [Mathematical Programming, Vol. 87, 2000, pp. 1-35], is proposed to solve the LCP with M assumed to be a P0-matrix (P0-LCP). The proposed algorithm needs only to solve one system of linear equations and to do one line search at each iteration. It is proved in this paper that the proposed algorithm has the following convergence properties: (i) it is well-defined and any accumulation point of the iteration sequence is a solution of the P0-LCP; (ii) it generates a bounded sequence if the P0-LCP has a nonempty and bounded solution set; (iii) if an accumulation point of the iteration sequence satisfies a nonsingularity condition, which implies that the P0-LCP has a unique solution, then the whole iteration sequence converges to this accumulation point sub-quadratically with Q-rate 2-t, where t ∈ (0,1) is a parameter; and (iv) if M is positive semidefinite and an accumulation point of the iteration sequence satisfies a strict complementarity condition, then the whole sequence converges to the accumulation point quadratically. © Springer-Verlag 2003.
Thu, 01 Apr 2004 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/1042142004-04-01T00:00:00Z
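The CHKS smoothing function mentioned in the abstract is φ(μ, a, b) = a + b - √((a-b)² + 4μ²); at μ = 0 it reduces to 2·min(a, b), so φ = 0 encodes the complementarity condition. A toy sketch below solves a small monotone LCP by Newton's method on the smoothed system over a hand-picked decreasing μ sequence (this is a simplified illustration, not the paper's one-line-search algorithm; the matrix M and vector q are invented for the example):

```python
import numpy as np

def phi(mu, a, b):
    # CHKS smoothing function; phi(0, a, b) = 2*min(a, b).
    return a + b - np.sqrt((a - b) ** 2 + 4.0 * mu ** 2)

M = np.array([[2.0, 0.0], [0.0, 3.0]])      # positive definite example
q = np.array([-2.0, 1.0])
n = 2
x = np.ones(n); s = np.ones(n)

for mu in (1.0, 1e-2, 1e-4, 1e-8):          # decreasing smoothing parameter
    for _ in range(20):                     # Newton steps at fixed mu
        r = np.sqrt((x - s) ** 2 + 4.0 * mu ** 2)
        F = np.concatenate([s - M @ x - q, phi(mu, x, s)])
        # Jacobian: [ -M   I ; diag(1-(x-s)/r)  diag(1+(x-s)/r) ]
        J = np.block([[-M, np.eye(n)],
                      [np.diag(1.0 - (x - s) / r),
                       np.diag(1.0 + (x - s) / r)]])
        d = np.linalg.solve(J, -F)
        x += d[:n]; s += d[n:]

# LCP solution: x = (1, 0), s = Mx + q = (0, 1), with x >= 0, s >= 0, x.s = 0.
assert np.allclose(x, [1.0, 0.0], atol=1e-6)
assert np.allclose(s, [0.0, 1.0], atol=1e-6)
```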