
    Uncertainty and Error in Combat Modeling, Simulation, and Analysis

    Due to the infrequent and competitive nature of combat, several challenges arise when developing a predictive simulation. First, there is limited data with which to validate such analysis tools. Second, many aspects of combat modeling are highly uncertain and not knowable. This research develops a comprehensive set of techniques for the treatment of uncertainty and error in combat modeling and simulation analysis. First, Evidence Theory is demonstrated as a framework for representing epistemic uncertainty in combat modeling output. Next, a novel method for sensitivity analysis of uncertainty in Evidence Theory is developed. This method generates marginal cumulative plausibility functions (CPFs) and cumulative belief functions (CBFs) and prioritizes the contribution of each factor by the Wasserstein distance (also known as the Kantorovich or Earth Mover's distance) between the CBF and CPF. Using this method, the simulation input factors can be rank-ordered with respect to uncertainty. Lastly, a procedure is developed for prioritizing the impact of modeling choices on simulation output uncertainty in settings where multiple models are employed. This analysis provides insight into the overall sensitivities of the system with respect to multiple modeling choices.
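    The abstract gives the idea but not the formulas, so the following is a minimal sketch under stated assumptions. In evidence theory, a body of evidence assigns mass to focal intervals rather than points: the CBF at x is the mass of intervals lying entirely below x, the CPF is the mass of intervals merely intersecting (-inf, x], and since both are nondecreasing step functions with CPF >= CBF, the Wasserstein-1 distance between them reduces to the area between the two curves. The helper names, intervals, and masses below are hypothetical, not the dissertation's code.

```python
import numpy as np

def cbf_cpf(intervals, masses, grid):
    """Marginal cumulative belief/plausibility functions on a grid.

    intervals: (n, 2) array of focal intervals [lo, hi]
    masses:    (n,) basic probability assignments summing to 1
    """
    lo, hi = intervals[:, 0], intervals[:, 1]
    cbf = np.array([masses[hi <= x].sum() for x in grid])  # Bel((-inf, x])
    cpf = np.array([masses[lo <= x].sum() for x in grid])  # Pl((-inf, x])
    return cbf, cpf

def uncertainty_score(intervals, masses, grid):
    """W1 distance between CBF and CPF: the area between the curves
    (CPF >= CBF pointwise, so no absolute value is needed)."""
    cbf, cpf = cbf_cpf(intervals, masses, grid)
    return np.trapz(cpf - cbf, grid)

# Hypothetical body of evidence for one simulation input factor
intervals = np.array([[0.2, 0.5], [0.4, 0.9], [0.1, 0.3]])
masses = np.array([0.5, 0.3, 0.2])
grid = np.linspace(0.0, 1.0, 501)
print(uncertainty_score(intervals, masses, grid))
```

    Computing this score per input factor and sorting yields the rank ordering described in the abstract: a larger belief-plausibility gap indicates a factor contributing more epistemic uncertainty.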

    Efficient Models and Algorithms for Image Processing for Industrial Applications

    Image processing and computer vision are now part of our daily life and allow artificial intelligence systems to see and perceive the world with a visual system similar to the human one. In the quest to improve performance, computer vision algorithms reach remarkable computational complexity, which is mitigated by the availability of hardware capable of supporting such demands. However, high-performance hardware cannot always be relied upon when one wants to make a research product usable. In this work, we focus on developing computer vision algorithms and methods with low computational complexity but high performance. The first approach studies the relationship between Fourier-based metrics and Wasserstein distances in order to propose alternatives to the latter, considerably reducing the time required to obtain comparable results. The second approach starts from an industrial problem and develops a deep learning model for change detection, obtaining state-of-the-art performance while reducing the required computational complexity by at least a third compared to the existing literature.
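    As a hedged illustration of the first approach (the abstract does not reproduce the thesis's exact metric): in one dimension the Wasserstein-1 distance is already cheap to compute via CDFs, while Fourier-based metrics of negative-Sobolev type compare the low frequencies of the two distributions and cost only an FFT. The weight exponent s, the function name, and the test histograms below are assumptions for the sketch.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def fourier_metric(p, q, s=1.0):
    """Negative-Sobolev-type distance between two histograms: an L2
    distance between their DFTs with a 1/k^(2s) weight on the squared
    terms, so low-frequency discrepancies dominate.  O(n log n)."""
    fp, fq = np.fft.rfft(p), np.fft.rfft(q)
    k = np.arange(1, len(fp))            # skip k = 0 (equal total mass)
    w = 1.0 / k**(2 * s)
    return np.sqrt(np.sum(w * np.abs(fp[1:] - fq[1:])**2))

# Two hypothetical normalized histograms on a common support
x = np.linspace(0.0, 1.0, 256)
p = np.exp(-(x - 0.3)**2 / 0.01); p /= p.sum()
q = np.exp(-(x - 0.6)**2 / 0.01); q /= q.sum()

print(wasserstein_distance(x, x, p, q))  # exact 1-D W1 via CDFs
print(fourier_metric(p, q))              # FFT-based surrogate
```

    In two or more dimensions the exact Wasserstein distance becomes expensive, which is where an FFT-based surrogate of this kind pays off.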

    Averaging on the Bures-Wasserstein manifold: dimension-free convergence of gradient descent

    We study first-order optimization algorithms for computing the barycenter of Gaussian distributions with respect to the optimal transport metric. Although the objective is geodesically non-convex, Riemannian GD empirically converges rapidly, in fact faster than off-the-shelf methods such as Euclidean GD and SDP solvers. This stands in stark contrast to the best-known theoretical results for Riemannian GD, which depend exponentially on the dimension. In this work, we prove new geodesic convexity results which provide stronger control of the iterates, yielding a dimension-free convergence rate. Our techniques also enable the analysis of two related notions of averaging, the entropically-regularized barycenter and the geometric median, providing the first convergence guarantees for Riemannian GD for these problems.
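    The setting admits a compact sketch. For centered Gaussians N(0, Σ_i), Riemannian GD on the Bures-Wasserstein manifold moves the current iterate along the averaged linear optimal transport maps T_i toward each Σ_i; with step size 1 this reduces to the classical fixed-point iteration of Álvarez-Esteban et al. The code below is a generic version of that scheme, not the paper's exact algorithm; the initializer, tolerance, and test matrices are assumptions.

```python
import numpy as np
from scipy.linalg import sqrtm

def transport_map(sigma, sigma_i):
    """Linear optimal transport map from N(0, sigma) to N(0, sigma_i)."""
    root = np.real(sqrtm(sigma))
    root_inv = np.linalg.inv(root)
    return root_inv @ np.real(sqrtm(root @ sigma_i @ root)) @ root_inv

def bw_barycenter(sigmas, weights, eta=1.0, iters=200, tol=1e-10):
    """Riemannian GD for the Bures-Wasserstein barycenter of centered
    Gaussians N(0, sigma_i).  eta = 1 gives the fixed-point iteration."""
    d = sigmas[0].shape[0]
    sigma = np.mean(sigmas, axis=0)         # hypothetical initializer
    for _ in range(iters):
        t_bar = sum(w * transport_map(sigma, s)
                    for w, s in zip(weights, sigmas))
        step = np.eye(d) + eta * (t_bar - np.eye(d))
        new = step @ sigma @ step           # pushforward of the iterate
        if np.linalg.norm(new - sigma) < tol:
            return new
        sigma = new
    return sigma

# Hypothetical input covariances
rng = np.random.default_rng(0)
def rand_spd(d):
    a = rng.normal(size=(d, d))
    return a @ a.T + d * np.eye(d)

sigmas = [rand_spd(3) for _ in range(4)]
print(bw_barycenter(sigmas, [0.25] * 4))
```

    At the barycenter the averaged map t_bar is the identity, so the update is stationary; the paper's contribution concerns the rate at which this kind of iteration converges, not its form.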