
    Cognitive GPR for subsurface sensing based on edge computing and deep reinforcement learning

    Ground penetrating radars (GPRs) have been extensively used in many industrial applications, such as coal mining, structural health monitoring, subsurface utility detection and localization, and autonomous driving. Most existing GPR systems are human-operated, because configuring their operation requires experience in interpreting the collected GPR data. To achieve the best subsurface sensing performance, it is desirable to design an autonomous GPR system that can operate adaptively under varying sensing conditions. In this research, a generic architecture for cognitive GPRs based on edge computing is first studied, and the operation of cognitive GPRs under this architecture is formulated as a sequential decision process. A cognitive GPR based on 2D B-scan image analysis and a deep Q-learning network (DQN) is then investigated. A novel entropy-based reward function is designed for the DQN model using the results of subsurface object detection (via region-of-interest identification) and recognition (via classification). Furthermore, to acquire a global view of subsurface objects with complex shape configurations, 2D B-scan image analysis is extended to 3D GPR data analysis, termed "Scan Cloud." A scan-cloud-enabled cognitive GPR is studied based on an advanced deep reinforcement learning method, deep deterministic policy gradient (DDPG), with a new reward function derived from 3D GPR data. The proposed methods are evaluated using the GPR modeling and simulation software gprMax. Simulation results show that the proposed cognitive GPRs outperform other GPR systems in terms of detection accuracy, operating time, and object reconstruction.
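    The entropy-based reward idea described in the abstract can be sketched as follows. This is an illustrative formulation only; the function name and the normalisation are assumptions, not the paper's exact definition. The intuition is that lower entropy over the classifier's output for a detected subsurface object means a more confident recognition and therefore a higher reward for the agent's chosen sensing action.

```python
import math

def entropy_reward(class_probs):
    """Hypothetical entropy-based reward (illustrative, not the paper's
    exact formula): a confident classification of the detected object
    has low entropy and yields a reward close to 1, while a maximally
    uncertain classification yields a reward of 0."""
    entropy = -sum(p * math.log(p) for p in class_probs if p > 0)
    max_entropy = math.log(len(class_probs))
    return 1.0 - entropy / max_entropy  # normalised to [0, 1]
```

    In a DQN setting, such a reward would be computed after each sensing action from the classifier's output over the identified region of interest, steering the agent towards configurations that make subsurface objects easier to recognise.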

    Knowledge-based vision and simple visual machines

    The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than by traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature, and indeed the existence, of representations posited to be used within natural vision systems (i.e., animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best placeholders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach to the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong.

    Machine Analysis of Facial Expressions


    Streaming visualisation of quantitative mass spectrometry data based on a novel raw signal decomposition method

    As data rates rise, there is a danger that informatics for high-throughput LC-MS becomes more opaque and inaccessible to practitioners. It is therefore critical that efficient visualisation tools are available to facilitate quality control, verification, validation, interpretation, and sharing of raw MS data and the results of MS analyses. Currently, MS data are stored as contiguous spectra. Recall of individual spectra is quick, but panoramas, zooming, and panning across whole datasets necessitate processing and memory overheads impractical for interactive use. Moreover, visualisation is challenging if significant quantification data are missing due to data-dependent acquisition of MS/MS spectra. To tackle these issues, we leverage our seaMass technique for novel signal decomposition. LC-MS data are modelled as a 2D surface through selection of a sparse set of weighted B-spline basis functions from an over-complete dictionary. By ordering and spatially partitioning the weights with an R-tree data model, efficient streaming visualisations are achieved. In this paper, we describe the core MS1 visualisation engine and the overlay of MS/MS annotations. This enables the mass spectrometrist to quickly inspect whole runs for ionisation/chromatographic issues, MS/MS precursors for coverage problems, or putative biomarkers for interferences, for example. The open-source software is available from http://seamass.net/viz/

    User preferences on route instruction types for mobile indoor route guidance

    Adaptive mobile wayfinding systems are being developed to ease wayfinding in indoor environments. They present wayfinding information to the user, adapted to the context. Wayfinding information can be communicated using different types of route instructions, such as text, photos, videos, symbols, or a combination thereof. The need for a particular type of route instruction may vary between decision points, for example because of their complexity, and may also differ with user characteristics (e.g., age, gender, level of education). To determine this need for information, an online survey was conducted in which participants rated 10 different route instruction types at several decision points in a case-study building. Results show that the types with additional text were preferred over those without. The photo instructions combined with text generally received the highest ratings, especially from first-time visitors. 3D simulations were appreciated at complex decision points and by younger people. When text (with symbols) is used as a route instruction type, it is best suited to the start or end instruction.

    Cognitive visual tracking and camera control

    Cognitive visual tracking is the process of observing and understanding the behaviour of a moving person. This paper presents an efficient solution for extracting, in real time, high-level information from an observed scene and generating the most appropriate commands for a set of pan-tilt-zoom (PTZ) cameras in a surveillance scenario. Such a high-level feedback control loop, which is the main novelty of our work, serves to reduce uncertainties in the observed scene and to maximize the amount of information extracted from it. It is implemented with a distributed camera system using SQL tables as virtual communication channels, and Situation Graph Trees for knowledge representation, inference, and high-level camera control. A set of experiments in a surveillance scenario shows the effectiveness of our approach and its potential for real applications of cognitive vision.
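    The SQL-tables-as-channels idea can be illustrated with a minimal sketch. The schema and column names below are invented for illustration and are not the authors' design: one process inserts high-level scene observations into a table, and the camera controller polls it and writes PTZ commands into a second table, decoupling the two components.

```python
import sqlite3

# Minimal sketch of SQL tables as virtual communication channels
# (schema and column names are hypothetical, not from the paper):
# a tracker process posts high-level observations, and the camera
# controller reads them and issues PTZ commands via a second table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE observations (cam_id INTEGER, target_x REAL, target_y REAL)")
conn.execute(
    "CREATE TABLE commands (cam_id INTEGER, pan REAL, tilt REAL)")

# Tracker side: post the normalised image position of a tracked target.
conn.execute("INSERT INTO observations VALUES (1, 0.8, -0.2)")

# Controller side: read observations and write a centering command
# (here, simply steering pan/tilt towards the target position).
for cam_id, x, y in conn.execute("SELECT * FROM observations"):
    conn.execute("INSERT INTO commands VALUES (?, ?, ?)", (cam_id, x, y))

commands = conn.execute("SELECT * FROM commands").fetchall()
```

    In a distributed deployment the same pattern works over a networked database server, with each camera node reading only the command rows addressed to its `cam_id`.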

    Interactive Spaces. Models and Algorithms for Reality-based Music Applications

    Reality-based interfaces link the user's physical space with the computer's digital content, bringing intuition, plasticity, and expressiveness. Moreover, applications designed around motion- and gesture-tracking technologies involve many psychological factors, such as spatial cognition and implicit knowledge. These elements form the background of the three music applications presented here, each exploiting the characteristics of a different interactive space: a user-centred three-dimensional space, a bi-dimensional floor camera space, and a small sensor-centred three-dimensional space. The basic idea is to exploit each application's spatial properties in order to convey musical knowledge, allowing users to act inside the designed space and to learn through it in an enactive way.