    The Metric Nearness Problem

    Metric nearness refers to the problem of optimally restoring metric properties to distance measurements that are nonmetric due to measurement errors or otherwise. Metric data is important in many settings, for example in clustering, classification, metric-based indexing, query processing, and graph-theoretic approximation algorithms. This paper formulates and solves the metric nearness problem: given a set of pairwise dissimilarities, find a “nearest” set of distances that satisfy the properties of a metric, principally the triangle inequality. To solve this problem, the paper develops efficient triangle-fixing algorithms based on an iterative projection method. An intriguing aspect of the metric nearness problem is that a special case turns out to be equivalent to the all-pairs shortest paths problem. The paper exploits this equivalence and develops a new algorithm for the latter problem using a primal-dual method. Applications to graph clustering are provided as an illustration. We include experiments that demonstrate the computational superiority of triangle fixing over general-purpose convex programming software. Finally, we conclude by suggesting various useful extensions and generalizations of metric nearness.
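
    As an informal illustration of the triangle-fixing idea (a sketch, not the paper's exact algorithm), the code below cyclically projects a dissimilarity matrix onto each triangle-inequality half-space with Dykstra-style corrections, which in the l_2 case converges toward the nearest set of distances satisfying all triangle inequalities. The function name, sweep limit, and tolerance are illustrative choices, and nonnegativity constraints are omitted for brevity.

        import numpy as np
        from itertools import combinations

        def triangle_fix_l2(D, max_sweeps=200, tol=1e-8):
            # D: symmetric nonnegative dissimilarity matrix with zero diagonal.
            # Approximates the l2-nearest matrix obeying every triangle
            # inequality via Dykstra-style cyclic projections onto the
            # half-spaces d[a,b] <= d[a,c] + d[c,b].
            X = D.astype(float).copy()
            n = X.shape[0]
            corr = {}  # one scalar Dykstra correction per ordered constraint
            for _ in range(max_sweeps):
                change = 0.0
                for i, j, k in combinations(range(n), 3):
                    # each unordered triangle yields three inequalities
                    for a, b, c in ((i, j, k), (i, k, j), (j, k, i)):
                        t_prev = corr.get((a, b, c), 0.0)
                        # re-apply the previous correction before projecting
                        ab = X[a, b] + t_prev
                        ac = X[a, c] - t_prev
                        cb = X[c, b] - t_prev
                        # Euclidean projection onto {x_ab - x_ac - x_cb <= 0}
                        viol = max(0.0, (ab - ac - cb) / 3.0)
                        corr[(a, b, c)] = viol
                        X[a, b] = X[b, a] = ab - viol
                        X[a, c] = X[c, a] = ac + viol
                        X[c, b] = X[b, c] = cb + viol
                        change = max(change, abs(viol - t_prev))
                if change < tol:
                    break
            return X

        # Tiny example: d(0,2) = 3 > d(0,1) + d(1,2) = 2 violates the triangle
        # inequality, so the entries are adjusted toward a valid metric.
        D = np.array([[0.0, 1.0, 3.0],
                      [1.0, 0.0, 1.0],
                      [3.0, 1.0, 0.0]])
        print(triangle_fix_l2(D))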

    A Better Alternative to Piecewise Linear Time Series Segmentation

    Time series are difficult to monitor, summarize, and predict. Segmentation organizes a time series into a few intervals with uniform characteristics (flatness, linearity, modality, monotonicity, and so on). For scalability, we require fast, linear-time algorithms. The popular piecewise linear model can determine where the data goes up or down and at what rate. Unfortunately, when the data does not follow a linear model, the computation of the local slope creates overfitting. We propose an adaptive time series model in which the polynomial degree of each interval varies (constant, linear, and so on). Given a number of regressors, the cost of each interval is the number of regressors it uses: constant intervals cost 1 regressor, linear intervals cost 2 regressors, and so on. Our goal is to minimize the Euclidean (l_2) error for a given model complexity. Experimentally, we investigate the model in which intervals can be either constant or linear. Over synthetic random walks, historical stock market prices, and electrocardiograms, the adaptive model provides a more accurate segmentation than the piecewise linear model without increasing the cross-validation error or the running time, while providing a richer vocabulary to applications. Implementation issues, such as numerical stability and real-world performance, are discussed.
    Comment: to appear in SIAM Data Mining 200
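
    One way to make the model concrete is as a budgeted dynamic program over breakpoints. The sketch below is a generic O(n^2 k) formulation, not necessarily the algorithm used in the paper: it minimizes the total squared error over segmentations that mix constant (1 regressor) and linear (2 regressor) intervals under a total regressor budget. The function names and the toy series are illustrative.

        import numpy as np

        def segment_cost(y, degree):
            # Squared l2 residual of a least-squares fit of the given polynomial
            # degree (0 = constant, 1 = linear) to the points in y.
            x = np.arange(len(y))
            coeffs = np.polyfit(x, y, degree)
            resid = y - np.polyval(coeffs, x)
            return float(resid @ resid)

        def adaptive_segmentation(y, budget):
            # best[i][k]: minimal squared error over y[:i] using at most k
            # regressors; constant segments cost 1 regressor, linear cost 2.
            n = len(y)
            INF = float("inf")
            best = [[INF] * (budget + 1) for _ in range(n + 1)]
            back = [[None] * (budget + 1) for _ in range(n + 1)]
            for k in range(budget + 1):
                best[0][k] = 0.0
            for i in range(1, n + 1):
                for j in range(i):                      # candidate segment y[j:i]
                    for deg, cost in ((0, 1), (1, 2)):
                        if i - j <= deg:                # need deg + 1 points to fit
                            continue
                        err = segment_cost(y[j:i], deg)
                        for k in range(cost, budget + 1):
                            cand = best[j][k - cost] + err
                            if cand < best[i][k]:
                                best[i][k] = cand
                                back[i][k] = (j, k - cost, deg)
            # walk the back-pointers to recover (start, end, degree) segments
            segs, i, k = [], n, budget
            while i > 0 and back[i][k] is not None:
                j, k2, deg = back[i][k]
                segs.append((j, i, deg))
                i, k = j, k2
            return best[n][budget], segs[::-1]

        # Example: a flat stretch followed by a ramp, with 3 regressors available
        # (enough for one constant segment plus one linear segment).
        y = np.concatenate([np.full(20, 3.0), 3.0 + 0.25 * np.arange(30)])
        err, segs = adaptive_segmentation(y, budget=3)
        print(err, segs)   # expect a degree-0 segment followed by a degree-1 one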

    A cell-based smoothed finite element method for kinematic limit analysis

    This paper presents a new numerical procedure for kinematic limit analysis problems, which combines the cell-based smoothed finite element method with second-order cone programming. Applying a strain smoothing technique to the standard displacement finite element formulation both rules out volumetric locking and results in an efficient method that provides accurate solutions with minimal computational effort. The non-smooth optimization problem is formulated as one of minimizing a sum of Euclidean norms, ensuring that it can be solved by an efficient second-order cone programming algorithm. Plane stress and plane strain problems governed by the von Mises criterion are considered, but extensions to problems with other yield criteria of a similar conic quadratic form, or to 3D problems, can be envisaged.
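
    To illustrate the reformulation (a generic sketch, not the paper's discretization), a minimize-a-sum-of-Euclidean-norms problem with linear constraints maps directly to a second-order cone program that off-the-shelf conic solvers handle. The cvxpy snippet below uses small random placeholder matrices in place of the smoothed finite element data; the dimensions, names, and constraints are assumptions.

        import numpy as np
        import cvxpy as cp

        # Generic "minimize a sum of Euclidean norms" problem with linear
        # equality constraints; the data is a random placeholder, not a FEM
        # discretization.
        rng = np.random.default_rng(0)
        m, d, n = 8, 3, 12                    # norm terms, term dimension, unknowns
        A = [rng.standard_normal((d, n)) for _ in range(m)]
        b = [rng.standard_normal(d) for _ in range(m)]
        C = rng.standard_normal((2, n))       # e.g. normalization / boundary conditions
        c = np.array([1.0, 0.0])

        x = cp.Variable(n)
        objective = cp.Minimize(sum(cp.norm(A[i] @ x + b[i], 2) for i in range(m)))
        constraints = [C @ x == c]
        prob = cp.Problem(objective, constraints)
        prob.solve()                          # handed to a conic (SOCP) solver
        print(prob.value)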