
    The Metric Nearness Problem

    Metric nearness refers to the problem of optimally restoring metric properties to distance measurements that are nonmetric due to measurement errors or otherwise. Metric data are important in many settings, for example, in clustering, classification, metric-based indexing, query processing, and graph-theoretic approximation algorithms. This paper formulates and solves the metric nearness problem: given a set of pairwise dissimilarities, find a "nearest" set of distances that satisfy the properties of a metric, principally the triangle inequality. For solving this problem, the paper develops efficient triangle-fixing algorithms based on an iterative projection method. An intriguing aspect of the metric nearness problem is that a special case turns out to be equivalent to the all pairs shortest paths problem. The paper exploits this equivalence and develops a new algorithm for the latter problem using a primal-dual method. Applications to graph clustering are provided as an illustration. We include experiments that demonstrate the computational superiority of triangle fixing over general-purpose convex programming software. Finally, we conclude by suggesting various useful extensions and generalizations of metric nearness.
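    The all pairs shortest paths connection mentioned in the abstract can be illustrated with a minimal sketch (this is the decrease-only special case, not the paper's general triangle-fixing algorithm): when dissimilarities may only be decreased, replacing each entry with the shortest-path distance between its endpoints yields a nearest metric, which plain Floyd-Warshall computes.

```python
def apsp_metric_repair(d):
    """d: square list-of-lists of nonnegative dissimilarities.
    Returns the largest metric that is pointwise <= d, obtained by
    taking all-pairs shortest paths (Floyd-Warshall)."""
    n = len(d)
    m = [row[:] for row in d]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # shrink d(i, j) whenever the detour through k is shorter
                if m[i][k] + m[k][j] < m[i][j]:
                    m[i][j] = m[i][k] + m[k][j]
    return m

# Example: d(0, 2) = 10 violates the triangle inequality via node 1,
# since d(0, 1) + d(1, 2) = 2 + 3 = 5.
d = [[0, 2, 10],
     [2, 0, 3],
     [10, 3, 0]]
print(apsp_metric_repair(d))  # d(0, 2) shrinks to 5
```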

    Study on multi-SVM systems and their applications to pattern recognition

    Degree system: new ; Report number: Kō 3136 ; Degree type: Doctor of Engineering ; Conferral date: 2010/7/12 ; Waseda University degree record number: Shin 541

    SparseProp: Efficient Event-Based Simulation and Training of Sparse Recurrent Spiking Neural Networks

    Spiking Neural Networks (SNNs) are biologically inspired models that are capable of processing information in streams of action potentials. However, simulating and training SNNs is computationally expensive due to the need to solve large systems of coupled differential equations. In this paper, we introduce SparseProp, a novel event-based algorithm for simulating and training sparse SNNs. Our algorithm reduces the computational cost of both the forward and backward pass operations from O(N) to O(log(N)) per network spike, thereby enabling numerically exact simulations of large spiking networks and their efficient training using backpropagation through time. By leveraging the sparsity of the network, SparseProp eliminates the need to iterate through all neurons at each spike, employing efficient state updates instead. We demonstrate the efficacy of SparseProp across several classical integrate-and-fire neuron models, including a simulation of a sparse SNN with one million LIF neurons. This results in a speed-up exceeding four orders of magnitude relative to previous event-based implementations. Our work provides an efficient and exact solution for training large-scale spiking neural networks and opens up new possibilities for building more sophisticated brain-inspired models. Comment: 10 pages, 4 figures, accepted at NeurIPS.
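    A minimal sketch of the event-driven idea (an invented toy model, not the authors' implementation, which handles LIF dynamics exactly): keeping each neuron's next spike time in a binary heap means a network spike costs O(log N) heap work plus updates only to the spiking neuron's postsynaptic targets, rather than a pass over all N neurons. Here an idealized perfect integrate-and-fire neuron with an analytically known next-spike time is assumed.

```python
import heapq

def simulate(rates, weights, threshold=1.0, t_end=2.5):
    """Event-based simulation of perfect integrate-and-fire neurons.
    rates[j]: constant drive (v_j grows linearly at this rate);
    weights: dict i -> list of (j, w) sparse synapses; a spike of i
    adds w to v_j. Only the spiker and its targets are touched per event."""
    n = len(rates)
    v = [0.0] * n          # membrane potential at time last[j]
    last = [0.0] * n       # time of neuron j's most recent update
    version = [0] * n      # lazy deletion: stale heap entries are skipped
    heap, spikes = [], []

    def next_spike(j, t):
        return t + (threshold - v[j]) / rates[j]

    for j in range(n):
        heapq.heappush(heap, (next_spike(j, 0.0), version[j], j))

    while heap:
        t, ver, i = heapq.heappop(heap)
        if t > t_end:
            break                      # heap is time-ordered: we are done
        if ver != version[i]:
            continue                   # stale entry: neuron was rescheduled
        spikes.append((t, i))
        v[i], last[i] = 0.0, t         # reset the spiking neuron
        version[i] += 1
        heapq.heappush(heap, (next_spike(i, t), version[i], i))
        for j, w in weights.get(i, []):
            v[j] += rates[j] * (t - last[j]) + w   # bring target up to date
            last[j] = t
            version[j] += 1
            # if the kick pushed v_j over threshold, it fires now
            heapq.heappush(heap, (max(t, next_spike(j, t)), version[j], j))
    return spikes
```

The lazy-deletion trick (a version counter per neuron) keeps each reschedule at O(log N) without needing a decrease-key operation.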

    Churn Management Optimization with Controllable Marketing Variables and Associated Management Costs

    In this paper, we propose a churn management model based on a partial least squares (PLS) optimization method that explicitly considers the management costs of controllable marketing variables for a successful churn management program. A PLS prediction model is first calibrated to estimate the churn probabilities of customers. This PLS prediction model is then transformed into a control model after the relative management costs of controllable marketing variables are estimated through a triangulation method. Finally, a PLS optimization model with marketing objectives and constraints is specified and solved via a sequential quadratic programming method. In our experiments, we observe that while the training and test data sets differ dramatically in their churner distributions (50% vs. 1.8%), four controllable variables in three marketing strategies changed significantly through the optimization process while other variables changed only marginally. We also observe that the most significant variable in a PLS prediction model does not necessarily change most significantly in our PLS optimization model, owing to its high associated management cost, implying a difference between a prediction model and an optimization model. Finally, two marketing models designed for targeting subsets of customers based on churn probability or management costs are presented and discussed.
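    The optimization stage can be caricatured in a few lines. All numbers and names below are invented; the paper uses PLS models and sequential quadratic programming, whereas this stdlib-only sketch substitutes a logistic churn model, a quadratic budget penalty, and projected gradient descent.

```python
import math

def churn_prob(x, beta, beta0=0.5):
    """Toy 'calibrated' churn model (invented coefficients): more
    marketing effort x lowers the predicted churn probability."""
    z = beta0 - sum(b * xi for b, xi in zip(beta, x))
    return 1.0 / (1.0 + math.exp(-z))

def optimize_effort(beta, costs, budget, lr=0.05, steps=2000):
    """Minimize churn over effort levels x in [0, 1]^k under a soft
    budget constraint sum(costs * x) <= budget (quadratic penalty)."""
    def obj(x):
        over = max(0.0, sum(c * xi for c, xi in zip(costs, x)) - budget)
        return churn_prob(x, beta) + 10.0 * over * over

    x = [0.0] * len(beta)
    eps = 1e-6
    for _ in range(steps):
        fx = obj(x)
        grad = []
        for i in range(len(x)):          # forward-difference gradient
            xp = x[:]
            xp[i] += eps
            grad.append((obj(xp) - fx) / eps)
        # projected gradient step: stay inside the box [0, 1]
        x = [min(1.0, max(0.0, xi - lr * gi)) for xi, gi in zip(x, grad)]
    return x
```

The point the abstract makes survives even in this caricature: the variable with the largest model coefficient need not move the most once its management cost enters the objective.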

    Exact Characterization of the Convex Hulls of Reachable Sets

    We study the convex hulls of reachable sets of nonlinear systems with bounded disturbances. Reachable sets play a critical role in control, but remain notoriously challenging to compute, and existing over-approximation tools tend to be conservative or computationally expensive. In this work, we exactly characterize the convex hulls of reachable sets as the convex hulls of solutions of an ordinary differential equation from all possible initial values of the disturbances. This finite-dimensional characterization unlocks a tight estimation algorithm to over-approximate reachable sets that is significantly faster and more accurate than existing methods. We present applications to neural feedback loop analysis and robust model predictive control.
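    A one-dimensional toy conveys the flavor of "hulls of solutions from extreme disturbance values" (an illustration only, not the paper's characterization): for x' = -x + w with disturbance w(t) in [-1, 1] and x(0) = 0, the reachable set at time T is an interval, and its convex hull is spanned by the two trajectories with w pinned at its extremes.

```python
import math

def integrate(w, T=1.0, dt=1e-4):
    """Forward-Euler solution of x' = -x + w from x(0) = 0,
    with the disturbance w held constant."""
    x, t = 0.0, 0.0
    while t < T:
        x += dt * (-x + w)
        t += dt
    return x

# endpoints of the reachable interval at T = 1:
lo, hi = integrate(-1.0), integrate(1.0)
# exact values are -(1 - e^{-1}) and +(1 - e^{-1}), about -0.632 and 0.632
```

In higher dimensions one extreme trajectory per hull direction is needed, which is what makes a finite-dimensional characterization useful.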

    Annales Mathematicae et Informaticae (42.)


    Optimisation: An Introduction CTW.04/TM-5476


    A steepest descent algorithm for the optimal control of a cascaded hydropower system

    Optimal power generation along the cascaded Kainji-Jebba hydroelectric power system has been very difficult to achieve. The reservoirs' operating heads are affected by possible variation in impoundments upstream, stochastic weather-related factors, the availability of the turbo-alternators, and the power generated at any time. Proposed in this paper is an algorithm, based on the steepest descent method, for solving for the optimal release of water in the cascaded hydropower system. The uniqueness of this work lies in the conversion of the infinite-dimensional control problem to a finite one, the introduction of clever techniques for choosing the steepest descent step size in each iteration, and the nonlinear penalty embedded in the procedure. The control algorithm was implemented in an Excel VBA® environment to solve the formulated Lagrange problem within an accuracy of 0.03%. It is recommended for use in system studies and control design for optimal power generation in cascaded hydropower systems.
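    The paper's step-size rules and penalty are tailored to the hydropower problem; as a generic stand-in, steepest descent with Armijo backtracking line search (one standard way of choosing the step size each iteration) can be sketched on a toy objective:

```python
def steepest_descent(f, grad, x0, tol=1e-8, max_iter=500):
    """Generic steepest descent with Armijo backtracking line search."""
    x = x0[:]
    for _ in range(max_iter):
        g = grad(x)
        gg = sum(gi * gi for gi in g)
        if gg < tol * tol:
            break                      # gradient norm below tolerance
        fx, step = f(x), 1.0
        for _ in range(60):            # cap halvings: avoids a float stall
            x_new = [xi - step * gi for xi, gi in zip(x, g)]
            if f(x_new) <= fx - 1e-4 * step * gg:
                break                  # sufficient decrease achieved
            step *= 0.5
        x = x_new
    return x

# toy objective: f(x, y) = (x - 1)^2 + 10 (y + 2)^2, minimized at (1, -2)
f = lambda v: (v[0] - 1) ** 2 + 10 * (v[1] + 2) ** 2
grad = lambda v: [2 * (v[0] - 1), 20 * (v[1] + 2)]
print(steepest_descent(f, grad, [0.0, 0.0]))
```

Backtracking automatically shrinks the step on ill-conditioned directions (the stiff y-axis here), which is the practical role a step-size rule plays in the control algorithm above.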