
    Parallel decomposition methods for linearly constrained problems subject to simple bound with application to the SVMs training

    We consider the convex quadratic linearly constrained problem with bounded variables and a huge, dense Hessian matrix that arises in many applications, such as the training problem of bias support vector machines. We propose a decomposition algorithmic scheme suitable for parallel implementations and prove global convergence under suitable conditions. Focusing on support vector machine training, we outline how these assumptions can be satisfied in practice and suggest various specific implementations. Extensions of the theoretical results to general linearly constrained problems are provided. We include numerical results on support vector machines with the aim of showing the viability and effectiveness of the proposed scheme.
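    The core idea of such a decomposition scheme is to fix most variables and repeatedly minimize over a small block, so the huge Hessian is never stored whole and blocks can be handled in parallel. The following is a minimal sketch, not the paper's algorithm: it simplifies the feasible set to the box constraints only (the SVM dual's linear equality constraint is omitted), and the function name and step-length rule are illustrative assumptions.

    ```python
    import numpy as np

    def block_projected_gradient(Q, b, C, block_size=2, n_iter=200, step=None):
        """Minimize 0.5 x^T Q x - b^T x subject to 0 <= x <= C by cycling
        over small blocks of variables (a simplified decomposition scheme)."""
        n = len(b)
        x = np.zeros(n)
        if step is None:
            step = 1.0 / np.linalg.norm(Q, 2)  # safe step: 1 / largest eigenvalue
        for _ in range(n_iter):
            for start in range(0, n, block_size):
                idx = slice(start, start + block_size)
                grad = Q[idx, :] @ x - b[idx]    # partial gradient over the block
                x[idx] = np.clip(x[idx] - step * grad, 0.0, C)  # project onto box
        return x
    ```

    Each inner update touches only one block's rows of `Q`, which is what makes the scheme amenable to distributing the subproblems across processors.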

    Sparse Support Matrix Machines for the Classification of Corrupted Data

    University of Technology Sydney. Faculty of Engineering and Information Technology.
    The support matrix machine is fragile in the presence of outliers: even a few corrupted data points can arbitrarily degrade the quality of the approximation. What if a fraction of the columns are corrupted? In the real world, data is noisy and many features may be redundant or useless, which in turn degrades classification performance. It is therefore important to perform robust feature selection under robust metric learning, filtering out redundant features and ignoring noisy data points, to obtain more interpretable models. To overcome this challenge, we propose a new model that addresses the classification of high-dimensional data by jointly optimizing both the regularizer and the hinge loss. We combine the hinge loss and regularization terms into a spectral elastic net penalty. The regularization term promotes structural sparsity and shares similar sparsity patterns across multiple predictors. It is a spectral extension of the conventional elastic net that combines low rank and joint sparsity to deal with complex, high-dimensional, noisy data. We further extend this approach by combining matrix recovery with feature selection and classification, which can significantly improve performance under the assumption that the data consist of a low-rank clean matrix plus a sparse noise matrix. We perform matrix recovery, feature selection, and classification through joint minimization of the p,q-norm and the nuclear norm under incoherence and ambiguity conditions, and are able to recover an intrinsic matrix of higher rank and data with much denser corruption. Although both methods take full advantage of the low-rank assumption to exploit the strong correlation between the columns and rows of each matrix and extract useful features, they are originally built for binary classification problems.
    To improve robustness against data rich in outliers, we extend this work further and present a novel multiclass support matrix machine that maximizes the inter-class margins (i.e. the margins between pairs of classes). We demonstrate the significance and advantage of our methods on available benchmark datasets for person identification, face recognition, and EEG classification. Results show that our methods achieve significantly better performance, in both time and accuracy, for classifying highly correlated matrix data compared to state-of-the-art methods.
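    The penalty described above couples a nuclear-norm (low-rank) term with a joint-sparsity term, and its standard optimization building block is singular value thresholding, the proximal operator of the nuclear norm. Below is a minimal sketch, not the paper's exact formulation: the function names, the choice of the l2,1 norm for the joint-sparsity term, and the weights `lam`/`tau` are illustrative assumptions.

    ```python
    import numpy as np

    def nuclear_norm(W):
        """Sum of singular values of W."""
        return np.linalg.svd(W, compute_uv=False).sum()

    def svt(W, tau):
        """Singular value thresholding: prox of tau * ||.||_*.
        Shrinks every singular value by tau, promoting low rank."""
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def spectral_elastic_net(W, lam, tau):
        """Hypothetical penalty in the spirit of the abstract:
        low rank (nuclear norm) plus row-wise joint sparsity (l2,1 norm)."""
        row_sparsity = np.sqrt((W ** 2).sum(axis=1)).sum()
        return tau * nuclear_norm(W) + lam * row_sparsity
    ```

    In a proximal or ADMM solver, `svt` would handle the nuclear-norm term while row-wise soft-thresholding would handle the l2,1 term, alternating with a step on the hinge loss.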

    Machine Hyperconsciousness

    Individual animal consciousness appears limited to a single giant component of interacting cognitive modules, instantiating a shifting, highly tunable Global Workspace. Human institutions, by contrast, can support several, often many, such giant components simultaneously, although they generally function far more slowly than the minds of the individuals who compose them. Machines having multiple global workspaces -- hyperconscious machines -- should, however, be able to operate at the few hundred milliseconds characteristic of individual consciousness. Such multitasking -- machine or institutional -- while clearly limiting the phenomenon of inattentional blindness, does not eliminate it, and introduces characteristic failure modes involving the distortion of information sent between global workspaces. This suggests that machines explicitly designed along these principles, while highly efficient at certain sets of tasks, remain subject to canonical and idiosyncratic failure patterns analogous to, but more complicated than, those explored in Wallace (2006a). By contrast, institutions, facing similar challenges, are usually deeply embedded in a highly stabilizing cultural matrix of law, custom, and tradition which has evolved over many centuries. Parallel development of analogous engineering strategies, directed toward ensuring an 'ethical' device, would seem requisite to the successful application of any form of hyperconscious machine technology.

    Large quadratic programs in training Gaussian support vector machines

    We consider the numerical solution of the large convex quadratic program arising in training the learning machines named support vector machines. Since the matrix of the quadratic form is dense and generally large, solution approaches based on explicit storage of this matrix are not practicable. Well-known strategies for this quadratic program are based on decomposition techniques that split the problem into a sequence of smaller quadratic programming subproblems. For the solution of these subproblems we present an iterative projection-type method suited to the structure of the constraints and very effective in the case of Gaussian support vector machines. We develop an appropriate decomposition technique designed to exploit the high performance of the proposed inner solver on medium or large subproblems. Numerical experiments on large-scale benchmark problems allow us to compare this approach with another widely used decomposition technique. Finally, a parallel extension of the proposed strategy is described.
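    A key primitive in such projection-type inner solvers is projecting onto the feasible set of the SVM dual subproblem, a box intersected with one hyperplane, which reduces to a one-dimensional search on the multiplier of the equality constraint. A minimal sketch under that structure follows; the function name, bisection bounds, and tolerance are illustrative choices, not the paper's method.

    ```python
    import numpy as np

    def project_box_hyperplane(z, a, r, C, tol=1e-10):
        """Project z onto {x : 0 <= x <= C, a^T x = r}.

        The projection has the closed form x(l) = clip(z + l * a, 0, C) for some
        multiplier l, and g(l) = a^T x(l) is nondecreasing in l, so the root of
        g(l) = r can be found by bisection."""
        def x_of(lmbda):
            return np.clip(z + lmbda * a, 0.0, C)
        lo, hi = -1e6, 1e6  # assumed bracket for the multiplier
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if a @ x_of(mid) < r:
                lo = mid
            else:
                hi = mid
            if hi - lo < tol:
                break
        return x_of(0.5 * (lo + hi))
    ```

    Because each evaluation is O(n), the projection stays cheap even for large subproblems, which is what makes repeated projected-gradient-style iterations on the subproblem affordable.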