
    Neuro-inspired edge feature fusion using Choquet integrals

    It is known that the human visual system performs hierarchical information processing in which early vision cues (or primitives) are fused in the visual cortex to compose complex shapes and descriptors. While some aspects of this process, such as lens adaptation or feature detection, have been extensively studied, others, such as feature fusion, have been mostly left aside. In this work, we elaborate on the fusion of early vision primitives using generalizations of the Choquet integral, a family of novel aggregation operators that has been extensively studied in recent years. We propose to use generalizations of the Choquet integral to sensibly fuse elementary edge cues, in an attempt to model the behaviour of neurons in the early visual cortex. Our proposal leads to a fully-framed edge detection algorithm whose performance is put to the test on state-of-the-art edge detection datasets. The authors gratefully acknowledge the financial support of the Spanish Ministry of Science and Technology (project PID2019-108392GB-I00, AEI/10.13039/501100011033), the Research Services of Universidad Pública de Navarra, CNPq (307781/2016-0, 301618/2019-4), FAPERGS (19/2551-0001660) and PNPD/CAPES (464880/2019-00).
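    For intuition, the standard discrete Choquet integral aggregates n cue strengths with respect to a capacity (fuzzy measure) on subsets of the cues. The Python sketch below implements that standard form with a hypothetical cardinality-based capacity; the paper's specific generalizations and its choice of measure are not reproduced here.

        import numpy as np

        def choquet(x, mu):
            """Discrete Choquet integral of x with respect to a capacity mu.

            x  : 1-D array of cue strengths in [0, 1].
            mu : callable mapping a frozenset of indices to [0, 1], monotone,
                 with mu(frozenset()) = 0 and mu(all indices) = 1.
            """
            order = np.argsort(x)              # indices sorted by ascending value
            result, prev = 0.0, 0.0
            for k, i in enumerate(order):
                subset = frozenset(order[k:])  # indices whose value is >= x[i]
                result += (x[i] - prev) * mu(subset)
                prev = x[i]
            return result

        # Example: fuse three hypothetical edge cues with a cardinality-based capacity.
        cues = np.array([0.2, 0.9, 0.4])
        mu = lambda a: (len(a) / cues.size) ** 0.5
        print(choquet(cues, mu))               # lies between min(cues) and max(cues)

    The aggregate always falls between the smallest and largest cue; how the capacity weights coalitions of cues is exactly the degree of freedom the paper exploits.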

    Playing with Duality: An Overview of Recent Primal-Dual Approaches for Solving Large-Scale Optimization Problems

    Optimization methods are at the core of many problems in signal/image processing, computer vision, and machine learning. It has long been recognized that looking at the dual of an optimization problem may drastically simplify its solution. Deriving efficient strategies which jointly bring into play the primal and the dual problems is, however, a more recent idea which has generated many important new contributions in recent years. These novel developments are grounded in recent advances in convex analysis, discrete optimization, parallel processing, and non-smooth optimization with an emphasis on sparsity issues. In this paper, we aim to present the principles of primal-dual approaches while giving an overview of numerical methods which have been proposed in different contexts. We show the benefits which can be drawn from primal-dual algorithms both for solving large-scale convex optimization problems and discrete ones, and we provide various application examples to illustrate their usefulness.
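    As a concrete illustration of the primal-dual idea, here is a minimal Python sketch of one well-known scheme, the Chambolle-Pock algorithm, applied to 1-D total-variation denoising, min_x 0.5*||x - b||^2 + lam*||D x||_1. The survey covers many more variants and problem classes; the step sizes and iteration count below are illustrative choices, not recommendations.

        import numpy as np

        def tv_denoise_1d(b, lam=1.0, n_iter=200):
            """Chambolle-Pock primal-dual iteration for 1-D TV denoising."""
            D = lambda x: np.diff(x)                                   # K: forward differences
            Dt = lambda y: np.concatenate(([-y[0]], -np.diff(y), [y[-1]]))  # K^T (adjoint)
            tau = sigma = 0.25                                         # tau*sigma*||K||^2 < 1
            x = b.copy(); x_bar = x.copy(); y = np.zeros(b.size - 1)
            for _ in range(n_iter):
                # dual ascent step, then projection onto {||y||_inf <= lam} (prox of F*)
                y = np.clip(y + sigma * D(x_bar), -lam, lam)
                # primal descent step, then prox of the quadratic data term
                x_new = (x - tau * Dt(y) + tau * b) / (1.0 + tau)
                x_bar = 2.0 * x_new - x                                # over-relaxation
                x = x_new
            return x

        # Example: denoise a noisy step signal (illustrative parameters).
        rng = np.random.default_rng(0)
        b = np.concatenate((np.zeros(50), np.ones(50))) + 0.1 * rng.standard_normal(100)
        x_hat = tv_denoise_1d(b, lam=0.5)

    The dual variable y handles the non-smooth TV term through a simple projection, which is precisely the simplification the dual view buys.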

    Acta Cybernetica: Volume 21, Number 1.


    A Statistical Framework for Improved Automatic Flaw Detection in Nondestructive Evaluation Images

    Nondestructive evaluation (NDE) techniques are widely used to detect flaws in critical components of systems like aircraft engines, nuclear power plants, and oil pipelines in order to prevent catastrophic events. Many modern NDE systems generate image data. In some applications an experienced inspector performs the tedious task of visually examining every image to provide accurate conclusions about the existence of flaws. This approach is labor-intensive and can cause misses due to operator ennui. Automated evaluation methods seek to eliminate human-factors variability and improve throughput. Simple methods based on peak amplitude in an image are sometimes employed, and a trained-operator-controlled refinement that uses a dynamic threshold based on signal-to-noise ratio (SNR) has also been implemented. We develop an automated and optimized detection procedure that mimics these operations. The primary goal of our methodology is to reduce the number of images requiring expert visual evaluation by filtering out images that are overwhelmingly definitive on the existence or absence of a flaw. We use an appropriate model for the observed values of the SNR-detection criterion to estimate the probability of detection. Our methodology outperforms current methods in terms of its ability to detect flaws.
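    The filtering idea can be pictured with the toy Python sketch below: compute a peak-over-background SNR per image and route only the ambiguous band to a human inspector. This is not the paper's statistical model; the percentile-based background estimate and the two thresholds t_lo and t_hi are hypothetical stand-ins for thresholds one would calibrate from an estimated probability-of-detection curve.

        import numpy as np

        def snr_triage(image, t_lo=2.0, t_hi=6.0):
            """Toy SNR-based triage: send only ambiguous images to an inspector.

            Returns 'no_flaw' or 'flaw' when the SNR criterion is overwhelmingly
            definitive, and 'inspect' otherwise."""
            img = np.asarray(image, dtype=float)
            background = img[img < np.percentile(img, 90)]  # assumed flaw-free pixels
            snr = (img.max() - background.mean()) / background.std()
            if snr < t_lo:
                return "no_flaw", snr
            if snr > t_hi:
                return "flaw", snr
            return "inspect", snr

    Widening the [t_lo, t_hi] band trades inspector workload against the risk of auto-deciding a borderline image, which is the trade-off the paper's probability-of-detection model makes explicit.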

    Currency security and forensics: a survey

    By its definition, the word currency refers to an agreed medium of exchange; a nation's currency is the formal medium enforced by the elected governing entity. Throughout history, issuers have faced one common threat: counterfeiting. Despite technological advancements, overcoming counterfeit production remains a distant prospect. Scientific determination of authenticity requires a deep understanding of the raw materials and manufacturing processes involved. This survey serves as a synthesis of the current literature, aiming to understand the technology and the mechanics involved in currency manufacture and security, whilst identifying gaps in that literature. Ultimately, a robust currency is desired.

    Variable autonomy assignment algorithms for human-robot interactions.

    As robotic agents become increasingly present in human environments, task completion rates during human-robot interaction have grown into an increasingly important topic of research. Safe collaborative robots executing tasks under human supervision often augment their perception and planning capabilities through traded or shared control schemes. However, such systems are often prescribed only at the most abstract level, with the meticulous details of implementation left to the designer's prerogative. Without a rigorous structure for implementing controls, the work of design is frequently left to ad hoc mechanisms with only bespoke guarantees of systematic efficacy, if any such proof is forthcoming at all. Herein, I present two quantitatively defined models for implementing sliding-scale variable autonomy, in which levels of autonomy are determined by the relative efficacy of autonomous subroutines. I experimentally test the resulting Variable Autonomy Planning (VAP) algorithm against a traditional traded control scheme in a pick-and-place task, and apply the Variable Autonomy Tasking algorithm to the implementation of a robot performing a complex sanitation task in real-world environs. Results show that prioritizing autonomy levels with higher success rates, as encoded into VAP, allows users to effectively and intuitively select optimal autonomy levels for efficient task completion. Further, the Pareto optimal design structure of the VAP+ algorithm allows for significant performance improvements to be made through intervention planning based on systematic input determining failure probabilities through sensorized measurements.

    This thesis describes the design, analysis, and implementation of these two algorithms, with a particular focus on the VAP+ algorithm. The core conceit is that they are methods for rigorously defining locally optimal plans for traded control shared between a human and one or more autonomous processes. The VAP+ algorithm is derived from an earlier algorithmic model, the VAP algorithm, developed to address the issue of rigorous, repeatable assignment of autonomy levels based on system data, which provides guarantees on the basis of the failure-rate sorting of paired autonomous and manual subtask achievement systems. Using only probability ranking to define levels of autonomy, the VAP algorithm is able to sort modules into optimizable ordered sets, but is limited to solving only sequential task assignments. By constructing a joint cost metric for the entire plan, and by implementing a back-to-front calculation scheme for this metric, the VAP+ algorithm can generate optimal planning solutions which minimize the expected cost, as amortized over time, funds, accuracy, or any combination thereof. The algorithm is additionally very efficient, and is able to perform on-line assessments of environmental changes to the conditional probabilities associated with plan choices, should a suitable model for determining these probabilities be present. This system, as a paired set of two algorithms and a design augmentation, forms the VAP+ algorithm in full.
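    To make the back-to-front expected-cost idea concrete, here is a hypothetical Python sketch in the spirit of the description above; it is not the thesis's exact VAP+ formulation. Each subtask carries assumed success probabilities and unit costs for its autonomous and manual versions, and a failed attempt is simply retried in the same mode, so finishing one subtask in mode m costs c_m / p_m in expectation.

        def plan_autonomy(subtasks):
            """Back-to-front expected-cost assignment of autonomy per subtask.

            subtasks: list of dicts, e.g.
              {"name": "grasp", "p_auto": 0.7, "c_auto": 1.0,
                                "p_manual": 0.99, "c_manual": 5.0}
            """
            plan, cost_to_go = [], 0.0
            for task in reversed(subtasks):        # back-to-front accumulation
                e_auto = task["c_auto"] / task["p_auto"]        # retry-until-success
                e_manual = task["c_manual"] / task["p_manual"]  # expected costs
                mode = "auto" if e_auto <= e_manual else "manual"
                cost_to_go += min(e_auto, e_manual)
                plan.append((task["name"], mode))
            plan.reverse()
            return plan, cost_to_go

    Under this simplifying independence assumption the per-task choice is greedy-optimal; the thesis's richer model couples choices through conditional failure probabilities, which is where the backward pass over the joint cost metric earns its keep.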

    Patch-wise adaptive weights smoothing

    Image reconstruction from noisy data has a long history of methodological development and is based on a variety of ideas. In this paper we introduce a new method called patch-wise adaptive smoothing, which extends the Propagation-Separation approach by using comparisons of local patches of image intensities to define locally adaptive weighting schemes for an improved balance of reduced variability and bias in the reconstruction result. We present the implementation of the new method in the R package aws and demonstrate its properties on a number of examples in comparison with other state-of-the-art image reconstruction methods.
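    To illustrate what weights derived from patch comparisons look like, the following Python sketch performs a single non-local-means-style weighting step. The actual propagation-separation scheme in the aws package is iterative and adaptive; the bandwidth h, patch size, and search radius here are arbitrary illustrative choices.

        import numpy as np

        def patch_weights_smooth(img, h=0.1, patch=3, radius=5):
            """One smoothing step with weights from local patch comparisons."""
            pad = patch // 2
            padded = np.pad(img, pad + radius, mode="reflect")
            out = np.zeros_like(img, dtype=float)
            rows, cols = img.shape
            for i in range(rows):
                for j in range(cols):
                    ci, cj = i + pad + radius, j + pad + radius
                    ref = padded[ci - pad:ci + pad + 1, cj - pad:cj + pad + 1]
                    wsum, acc = 0.0, 0.0
                    for di in range(-radius, radius + 1):
                        for dj in range(-radius, radius + 1):
                            ni, nj = ci + di, cj + dj
                            cand = padded[ni - pad:ni + pad + 1,
                                          nj - pad:nj + pad + 1]
                            d2 = np.mean((ref - cand) ** 2)  # patch dissimilarity
                            w = np.exp(-d2 / h ** 2)         # similarity weight
                            wsum += w
                            acc += w * padded[ni, nj]
                    out[i, j] = acc / wsum
            return out

    Pixels whose surrounding patches resemble the reference patch contribute large weights, which is how the method reduces variance without averaging across structural edges.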

    LCCC focus period and workshop on Dynamics and Control in Networks
