
    Serum Early Prostate Cancer Antigen (EPCA) Level and Its Association with Disease Progression in Prostate Cancer in a Chinese Population

    BACKGROUND: Early prostate cancer antigen (EPCA) has been shown to be a prostate cancer (PCa)-associated nuclear matrix protein; however, its serum status and prognostic power in PCa are unknown. The goals of this study were to measure serum EPCA levels in a cohort of patients with PCa prior to treatment and to evaluate the clinical value of serum EPCA. METHODS: Pretreatment serum EPCA levels were determined by ELISA in 77 patients with clinically localized PCa who underwent radical prostatectomy and in 51 patients with locally advanced or metastatic disease who received primary androgen deprivation therapy, and were correlated with clinicopathological variables and disease progression. Serum EPCA levels were also examined in 40 healthy controls. RESULTS: Pretreatment mean serum EPCA levels were significantly higher in PCa patients than in controls (16.84 ± 7.60 ng/ml vs. 4.12 ± 2.05 ng/ml, P < 0.001). Patients with locally advanced and metastatic PCa had significantly higher serum EPCA levels than those with clinically localized PCa (22.93 ± 5.28 ng/ml and 29.41 ± 8.47 ng/ml vs. 15.17 ± 6.03 ng/ml, P = 0.014 and P < 0.001, respectively). A significantly elevated EPCA level was also found in metastatic PCa compared with locally advanced disease (P < 0.001). Increased serum EPCA levels correlated significantly and positively with Gleason score and clinical stage, but not with PSA level or age. On multivariate analysis, pretreatment serum EPCA level was the most significant predictor of biochemical recurrence and androgen-independent progression among the pretreatment variables (HR = 4.860, P < 0.001 and HR = 5.418, P < 0.001, respectively). CONCLUSIONS: Serum EPCA levels are markedly elevated in PCa. The pretreatment serum EPCA level correlates significantly with poor prognosis, showing potential for predicting PCa progression.

    Improving the convergence of non-interior point algorithm for nonlinear complementarity problems

    Recently, based upon the Chen-Harker-Kanzow-Smale smoothing function and the trajectory and the neighbourhood techniques, Hotta and Yoshise proposed a non-interior point algorithm for solving the nonlinear complementarity problem. Their algorithm is globally convergent under a relatively mild condition. In this paper, we modify their algorithm and combine it with the superlinear convergence theory for nonlinear equations. We provide a globally linearly convergent result for a slightly updated version of the Hotta-Yoshise algorithm and show that a further modified Hotta-Yoshise algorithm is globally and superlinearly convergent, with a convergence Q-order 1 + t, under suitable conditions, where t ∈ (0, 1) is an additional parameter.
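    For reference, the Chen-Harker-Kanzow-Smale (CHKS) smoothing function mentioned above is usually written, in one common scaling (a standard form, not quoted from this abstract), as

        \phi_\mu(a, b) = a + b - \sqrt{(a - b)^2 + 4\mu^2}, \qquad \mu > 0,

    which tends to a + b − |a − b| = 2 min(a, b) as μ ↓ 0; the limiting equation \phi_0(a, b) = 0 therefore holds exactly when a ≥ 0, b ≥ 0 and ab = 0, i.e. it encodes the complementarity condition that the algorithm smooths.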

    An augmented Lagrangian dual approach for the H-weighted nearest correlation matrix problem

    Higham (2002, IMA J. Numer. Anal., 22, 329–343) considered two types of nearest correlation matrix problems, namely the W-weighted case and the H-weighted case. While the W-weighted case has since been well studied, with several efficient Lagrangian dual-based numerical methods now available, the H-weighted case remains numerically challenging. The difficulty of extending those methods from the W-weighted case to the H-weighted case lies in the fact that an analytic formula for the metric projection onto the positive semidefinite cone under the H-weight, unlike the case under the W-weight, is not available. In this paper we introduce an augmented Lagrangian dual-based approach that avoids the explicit computation of the metric projection under the H-weight. This method solves a sequence of unconstrained convex optimization problems, each of which can be efficiently solved by an inexact semismooth Newton method combined with the conjugate gradient method. Numerical experiments demonstrate that the augmented Lagrangian dual approach is not only fast but also robust.
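    For context, the two weighted problems considered by Higham can be stated as follows (standard formulations under assumed notation: G is the given symmetric matrix, W a positive definite weight matrix, H a symmetric matrix of nonnegative entry-wise weights, and \circ the Hadamard product):

        \min_X \ \tfrac{1}{2}\|W^{1/2}(X - G)W^{1/2}\|_F^2 \quad \text{(W-weighted)}, \qquad \min_X \ \tfrac{1}{2}\|H \circ (X - G)\|_F^2 \quad \text{(H-weighted)},

    both subject to X_{ii} = 1 for i = 1, ..., n and X positive semidefinite. The lack of an analytic formula for the projection onto the positive semidefinite cone in the H-weighted norm is what makes the second problem the harder one.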

    A quadratically convergent Newton method for the nearest correlation matrix problem

    The nearest correlation matrix problem is to find the correlation matrix closest to a given symmetric matrix in the Frobenius norm. The well-studied dual approach is to reformulate this problem as an unconstrained continuously differentiable convex optimization problem. Gradient methods and quasi-Newton methods such as BFGS have been applied directly to obtain globally convergent methods. Since the objective function in the dual approach is not twice continuously differentiable, these methods converge at best linearly. In this paper, we investigate a Newton-type method for the nearest correlation matrix problem. Based on recent developments on strongly semismooth matrix-valued functions, we prove the quadratic convergence of the proposed Newton method. Numerical experiments confirm the fast convergence and the high efficiency of the method.
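    As a rough illustration of the dual approach mentioned above (a minimal sketch, assuming the standard unweighted formulation; it substitutes an off-the-shelf L-BFGS solver for the paper's semismooth Newton method, and the function names are ours):

    import numpy as np
    from scipy.optimize import minimize

    def psd_projection(A):
        # Metric projection of a symmetric matrix onto the positive semidefinite cone
        w, V = np.linalg.eigh(A)
        return (V * np.maximum(w, 0.0)) @ V.T

    def dual_theta_and_grad(y, G):
        # Dual objective theta(y) = 0.5*||(G + Diag(y))_+||_F^2 - e'y and its gradient
        X = psd_projection(G + np.diag(y))
        return 0.5 * np.sum(X * X) - np.sum(y), np.diag(X) - 1.0

    def nearest_correlation(G):
        # Minimize the (convex, once differentiable) dual and recover X = (G + Diag(y*))_+
        y0 = np.zeros(G.shape[0])
        res = minimize(dual_theta_and_grad, y0, args=(G,), jac=True, method="L-BFGS-B")
        return psd_projection(G + np.diag(res.x))

    # Example: an indefinite symmetric matrix with unit diagonal, repaired to a correlation matrix
    G = np.array([[1.0, 0.9, 0.7], [0.9, 1.0, 0.3], [0.7, 0.3, 1.0]])
    X = nearest_correlation(G)
    print(np.round(X, 4), np.linalg.eigvalsh(X).min())

    Because the dual objective is only once continuously differentiable, a quasi-Newton solver like the one above converges at best linearly; the quadratic rate reported in the paper comes from a generalized Newton iteration that exploits the strong semismoothness of the dual gradient.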

    Solving Karush-Kuhn-Tucker systems via the trust region and the conjugate gradient methods

    A popular approach to solving the Karush-Kuhn-Tucker (KKT) system, mainly arising from the variational inequality problem, is to reformulate it as a constrained minimization problem with simple bounds. In this paper, we propose a trust region method for solving the reformulated problem in which the trust region subproblems are solved by the truncated conjugate gradient (CG) method, which is cost effective. Other advantages of the proposed method over existing ones include the fact that a good approximate solution to the trust region subproblem can be found by the truncated CG method and judged in a simple way, and that the working matrix in each iteration is H, instead of the condensed H^T H, where H is a matrix element of the generalized Jacobian of the function used in the reformulation. In fact, the matrix used is of reduced dimension. We pay extra attention to ensuring the success of the truncated CG method as well as the feasibility of the iterates with respect to the simple constraints. Another feature of the proposed method is that we allow the merit function value to increase at some iterations to speed up the convergence. Global and superlinear/quadratic convergence is shown under standard assumptions. Numerical results are reported on a subset of problems from the MCPLIB collection [S. P. Dirkse and M. C. Ferris, Optim. Methods Softw., 5 (1995), pp. 319-345].
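    The truncated CG subproblem solver referred to above is, in most implementations, the Steihaug-Toint variant; the following generic sketch (our illustration, not code from the paper) approximately solves the trust region subproblem min g'p + 0.5*p'Hp subject to ||p|| <= delta:

    import numpy as np

    def truncated_cg(H, g, delta, tol=1e-8, max_iter=None):
        # Steihaug-Toint truncated CG for: min g'p + 0.5*p'Hp  s.t. ||p|| <= delta
        n = g.size
        max_iter = max_iter or 2 * n
        p = np.zeros(n)
        r = g.copy()          # residual of the Newton system H p = -g (at p = 0, r = g)
        d = -r
        if np.linalg.norm(r) < tol:
            return p
        for _ in range(max_iter):
            Hd = H @ d
            dHd = d @ Hd
            if dHd <= 0.0:
                # Negative curvature: move to the trust-region boundary along d
                return p + _boundary_step(p, d, delta)
            alpha = (r @ r) / dHd
            p_next = p + alpha * d
            if np.linalg.norm(p_next) >= delta:
                # Step leaves the trust region: stop on the boundary
                return p + _boundary_step(p, d, delta)
            r_next = r + alpha * Hd
            if np.linalg.norm(r_next) < tol:
                return p_next
            beta = (r_next @ r_next) / (r @ r)
            d = -r_next + beta * d
            p, r = p_next, r_next
        return p

    def _boundary_step(p, d, delta):
        # Positive tau with ||p + tau*d|| = delta
        a, b, c = d @ d, 2 * (p @ d), p @ p - delta ** 2
        return ((-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)) * d

    # Example: the unconstrained Newton step is too long, so the step stops on the boundary
    H = np.array([[2.0, 0.0], [0.0, -1.0]])
    g = np.array([1.0, 1.0])
    print(truncated_cg(H, g, delta=1.5))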

    Smoothing Functions and a Smoothing Newton Method for Complementarity and Variational Inequality Problems

    In this paper, we discuss smoothing approximations of nonsmooth functions arising from complementarity and variational inequality problems. We present some new results which are essential in designing Newton-type methods. We introduce several new classes of smoothing functions for nonlinear complementarity problems and order complementarity problems. In particular, for the first time, computable smoothing functions for variational inequality problems with general constraints are introduced. We then propose a new version of smoothing Newton methods and establish its global and superlinear (quadratic) convergence under conditions weaker than those in the literature.
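    As a small, self-contained illustration of such smoothing functions (standard examples, not necessarily the new classes introduced in the paper), the sketch below evaluates the CHKS function and a smoothed Fischer-Burmeister function on a complementary pair and shows both residuals vanishing as the smoothing parameter mu decreases:

    import numpy as np

    def chks(a, b, mu):
        # Chen-Harker-Kanzow-Smale: tends to a + b - |a - b| = 2*min(a, b) as mu -> 0
        return a + b - np.sqrt((a - b) ** 2 + 4.0 * mu ** 2)

    def smoothed_fischer_burmeister(a, b, mu):
        # Smoothed Fischer-Burmeister: tends to a + b - sqrt(a**2 + b**2) as mu -> 0
        return a + b - np.sqrt(a ** 2 + b ** 2 + 2.0 * mu ** 2)

    # Both limits are zero exactly when a >= 0, b >= 0 and a*b = 0, so replacing each
    # complementarity pair (x_i, y_i) by phi_mu(x_i, y_i) = 0 yields a smooth system
    # whose solutions approach an NCP solution as mu is driven to zero.
    a, b = 0.0, 3.0
    for mu in (1.0, 1e-2, 1e-6):
        print(mu, chks(a, b, mu), smoothed_fischer_burmeister(a, b, mu))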

    Improving The Convergence Of Non-Interior Point Algorithms For Nonlinear Complementarity Problems

    Recently, based upon the Chen-Harker-Kanzow-Smale smoothing function and the trajectory and the neighbourhood techniques, Hotta and Yoshise proposed a non-interior point algorithm for solving the nonlinear complementarity problem. Their algorithm is globally convergent under a relatively mild condition. In this paper, we modify their algorithm and combine it with the superlinear convergence theory for nonlinear equations. We provide a globally linearly convergent result for a slightly updated version of the Hotta-Yoshise algorithm and show that a further modified Hotta-Yoshise algorithm is globally and superlinearly convergent, with a convergence Q-order 1 + t, under suitable conditions, where t ∈ (0, 1) is an additional parameter. 1. Introduction. Consider the nonlinear complementarity problem (NCP for short): find (x, y) ∈ R^n × R^n such that y − f(x) = 0, x ≥ 0, y ≥ 0, x^T y = 0, (1.1) where f : R^n → R^n is a continuously differentiable function. The...
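    As a tiny worked example of the problem just defined (ours, not the paper's): for n = 1 and f(x) = x − 2, the NCP asks for x ≥ 0 and y = x − 2 ≥ 0 with xy = 0; choosing x = 0 would force y = −2 < 0, so the unique solution is (x, y) = (2, 0).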

    Nonsmooth Equations and Smoothing Newton Methods

    In this article we review and summarize recent developments on nonsmooth equations and smoothing Newton methods. Several new suggestions are presented. 1 Introduction. Suppose that H : R^n → R^n is locally Lipschitz but not necessarily continuously differentiable. Solving H(x) = 0 (1.1) has become one of the most active research directions in mathematical programming. The early study of nonsmooth equations can be traced back to [Eav71, Man75, Man76]. Systems of nonsmooth equations arise from many applications. Pang and Qi [PaQ93] reviewed eight problems in the study of complementarity problems, variational inequality problems and optimization problems that can be reformulated as systems of nonsmooth equations. In this paper, we review recent developments of algorithms for solving nonsmooth equations. Section 2 is devoted to semismooth Newton methods and Section 3 discusses smoothing Newton methods. We make several final remarks in Section 4. 2 Semismooth Newton methods 2.1 L...
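    To make the semismooth Newton idea concrete, here is a minimal sketch (our illustration, not the survey's) that applies a local semismooth Newton iteration to the standard min-function reformulation H(x) = min(x, f(x)) = 0 of an NCP, picking one element of the generalized Jacobian at each step:

    import numpy as np

    def semismooth_newton_ncp(f, jac_f, x0, tol=1e-10, max_iter=50):
        # Solve H(x) = min(x, f(x)) = 0 componentwise, a nonsmooth reformulation of the NCP
        x = x0.astype(float).copy()
        for _ in range(max_iter):
            fx = f(x)
            H = np.minimum(x, fx)
            if np.linalg.norm(H) < tol:
                break
            # One element V of the generalized Jacobian of H at x:
            # row i is the i-th unit vector where x_i <= f_i(x), else row i of f's Jacobian
            V = jac_f(x).copy()
            mask = x <= fx
            V[mask, :] = np.eye(x.size)[mask, :]
            x = x - np.linalg.solve(V, H)  # full (local) semismooth Newton step
        return x

    # Tiny linear complementarity example: f(x) = M x + q with M positive definite;
    # the solution is x = (0.25, 0) with y = f(x) = (0, 2.25).
    M = np.array([[4.0, 1.0], [1.0, 3.0]])
    q = np.array([-1.0, 2.0])
    print(semismooth_newton_ncp(lambda x: M @ x + q, lambda x: M, np.ones(2)))

    Globalization devices (a line search on a merit function, or the smoothing machinery discussed in the article) are omitted from this local sketch.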