
    Approximate least trimmed sum of squares fitting and applications in image analysis

    The least trimmed sum of squares (LTS) regression criterion is a robust statistical method for model fitting in the presence of outliers. Compared with the classical least squares estimator, which uses the entire data set for regression and is consequently sensitive to outliers, LTS identifies the outliers and fits only the remaining data points for improved accuracy. Exactly solving an LTS problem is NP-hard, but as we show here, LTS can be formulated as a concave minimization problem. Since it is usually tractable to globally solve a convex minimization (or, equivalently, concave maximization) problem in polynomial time, inspired by [1] we instead solve an approximate complementary problem of LTS, which is a convex minimization. We show that this complementary problem can be solved efficiently as a second-order cone program, and we propose an iterative procedure to approximately solve the original LTS problem. Our extensive experiments demonstrate that the proposed method is robust, efficient and scalable in dealing with problems where data are contaminated with outliers. We show several applications of our method in image analysis.
    Fumin Shen, Chunhua Shen, Anton van den Hengel and Zhenmin Tan
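    For readers unfamiliar with the LTS criterion itself, the sketch below illustrates it with the classic iterative trimming ("concentration step") heuristic rather than the second-order cone relaxation summarised above; the function name approx_lts_fit, the fixed trimming size h, and the stopping rule are illustrative assumptions, not the authors' algorithm.

        import numpy as np

        def approx_lts_fit(X, y, h, n_iter=50, seed=0):
            # Approximate least trimmed squares: fit ordinary least squares,
            # keep the h points with the smallest squared residuals, refit,
            # and stop when the active subset no longer changes.
            rng = np.random.default_rng(seed)
            subset = rng.choice(X.shape[0], size=h, replace=False)  # random start
            beta = np.zeros(X.shape[1])
            for _ in range(n_iter):
                beta, *_ = np.linalg.lstsq(X[subset], y[subset], rcond=None)
                resid2 = (y - X @ beta) ** 2
                new_subset = np.argsort(resid2)[:h]    # h smallest residuals
                if np.array_equal(np.sort(new_subset), np.sort(subset)):
                    break
                subset = new_subset
            return beta, subset                         # estimate and inlier indices

    On a line-fitting example with, say, 30% gross outliers, ordinary least squares is pulled toward the contaminated points, whereas the trimmed fit recovers the slope from the h retained inliers.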

    Time course and robustness of ERP object and face differences

    Conflicting results have been reported about the earliest “true” ERP differences related to face processing, with the bulk of the literature focusing on the signal in the first 200 ms after stimulus onset. Part of the discrepancy might be explained by uncontrolled low-level differences between the images used to assess the timing of face processing. In the present experiment, we used a set of faces, houses, and noise textures with identical amplitude spectra to equate energy in each spatial frequency band. The timing of face processing was evaluated using face–house and face–noise contrasts, as well as upright–inverted stimulus contrasts. ERP differences were evaluated systematically at all electrodes, across subjects, and in each subject individually, using trimmed means and bootstrap tests. Different strategies were employed to assess the robustness of ERP differential activities in individual subjects and in group comparisons. We report results showing that the most conspicuous and reliable effects were systematically observed in the N170 latency range, starting at about 130–150 ms after stimulus onset.
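    Since the abstract leans on trimmed means and bootstrap tests, here is a minimal sketch of that comparison for one electrode and time point, assuming two 1-D arrays of single-trial amplitudes; the 20% trimming proportion, 2000 bootstrap samples, and function names are illustrative assumptions rather than the paper's exact settings.

        import numpy as np
        from scipy import stats

        def trimmed_diff(a, b, prop=0.2):
            # Difference between 20% trimmed means of two conditions.
            return stats.trim_mean(a, prop) - stats.trim_mean(b, prop)

        def bootstrap_trimmed_diff(a, b, prop=0.2, n_boot=2000, alpha=0.05, seed=0):
            # Percentile bootstrap: resample each condition with replacement,
            # recompute the trimmed-mean difference, and flag an effect when
            # the (1 - alpha) confidence interval excludes zero.
            rng = np.random.default_rng(seed)
            diffs = np.empty(n_boot)
            for i in range(n_boot):
                ra = rng.choice(a, size=a.size, replace=True)
                rb = rng.choice(b, size=b.size, replace=True)
                diffs[i] = trimmed_diff(ra, rb, prop)
            lo, hi = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
            return trimmed_diff(a, b, prop), (lo, hi), not (lo <= 0 <= hi)

    Repeating this test at every electrode and time sample, per subject and at the group level, yields the kind of systematic difference maps described in the abstract (with an appropriate correction for multiple comparisons).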

    Advances of Robust Subspace Face Recognition

    Face recognition has been widely applied in fast video surveillance and security systems and in smart home services in our daily lives. Over the past years, subspace projection methods such as principal component analysis (PCA) and linear discriminant analysis (LDA) have been the standard algorithms for face recognition, and more recently linear regression classification (LRC) has become one of the most popular subspace projection approaches. However, many problems remain unsolved under severe conditions across different environments and applications. In this chapter, the practical problems of partial occlusion, illumination variation, expression differences, pose variation, and low resolution are addressed by several improved subspace projection methods, including robust linear regression classification (RLRC), ridge regression (RR), improved principal component regression (IPCR), unitary regression classification (URC), linear discriminant regression classification (LDRC), generalized linear regression classification (GLRC), and trimmed linear regression (TLR). Experimental results show that these methods perform well and are highly robust against partial occlusion, illumination variation, expression differences, pose variation, and low resolution.
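    As background on the baseline that the improved methods extend, the sketch below shows the basic LRC decision rule as it is commonly described: the probe image is reconstructed from each class's gallery images by least squares and assigned to the class with the smallest residual. The dictionary-of-galleries input format and the name lrc_predict are illustrative assumptions; the robust variants listed above (RLRC, TLR, and so on) add outlier and occlusion handling not shown here.

        import numpy as np

        def lrc_predict(probe, class_galleries):
            # Linear regression classification (LRC): model the probe as a
            # linear combination of each class's gallery vectors and pick the
            # class whose reconstruction residual is smallest.
            best_label, best_err = None, np.inf
            for label, X in class_galleries.items():      # X: (d, n_c) image columns
                beta, *_ = np.linalg.lstsq(X, probe, rcond=None)
                err = np.linalg.norm(probe - X @ beta)    # reconstruction residual
                if err < best_err:
                    best_label, best_err = label, err
            return best_label

    Here probe is a flattened d-dimensional face vector and class_galleries maps each identity to a d x n_c matrix of its training images; ridge regression classification would simply replace the least-squares solve with a regularized one.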