755 research outputs found

    New Generation of Instrumented Ranges: Enabling Automated Performance Analysis

    Military training conducted on physical ranges that match a unit’s future operational environment provides an invaluable experience. Today, to conduct a training exercise while ensuring a unit’s performance is closely observed, evaluated, and reported on in an After Action Review, the unit requires a number of instructors to accompany its different elements. Training organized on ranges for urban warfighting brings an additional level of complexity: the high level of occlusion typical of these environments multiplies the number of evaluators needed. While units have great need for such training opportunities, they may not have the necessary human resources to conduct them successfully. In this paper we report on our US Navy/ONR-sponsored project aimed at a new generation of instrumented ranges and the early results we have achieved. We suggest a radically different concept: instead of recording multiple video streams that must be reviewed and evaluated by a number of instructors, our system focuses on capturing dynamic individual warfighter pose data and performing automated performance evaluation. We will use an in situ network of automatically controlled pan-tilt-zoom video cameras and personal position and orientation sensing devices. Our system will record video, reconstruct dynamic 3D individual poses, analyze the data, recognize events, evaluate performance, generate reports, provide real-time free exploration of recorded data, and even allow the user to generate ‘what-if’ scenarios that were never recorded. The most direct benefit for an individual unit will be the ability to conduct training with fewer human resources while obtaining a more quantitative account of its performance (dispersion across the terrain, ‘weapon flagging’ incidents, number of patrols conducted). Instructors will have immediate feedback on some elements of the unit’s performance.
    Having data sets for multiple units will enable historical trend analysis, thus providing new insights and benefits for the entire service.
    Sponsor: Office of Naval Research
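One of the quantitative measures the abstract mentions, dispersion across the terrain, can be computed directly from the recorded position data. The sketch below is an illustrative assumption of how such a metric might be defined (mean distance from the squad centroid); the paper does not specify the formula, and the function name and units are hypothetical.

```python
import math

def unit_dispersion(positions):
    """Mean distance of each warfighter from the squad centroid.

    `positions` is a list of (x, y) coordinates in metres taken from the
    position-sensing devices. The definition used here (mean distance to
    the centroid) is an illustrative assumption, not the paper's metric.
    """
    n = len(positions)
    cx = sum(p[0] for p in positions) / n
    cy = sum(p[1] for p in positions) / n
    return sum(math.hypot(x - cx, y - cy) for x, y in positions) / n
```

A metric like this could be recomputed per frame of recorded pose data to chart how a unit's spread evolves over an exercise.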

    Multi-touch Detection and Semantic Response on Non-parametric Rear-projection Surfaces

    The ability of human beings to physically touch our surroundings has had a profound impact on our daily lives. Young children learn to explore their world by touch; likewise, many simulation and training applications benefit from natural touch interactivity. As a result, modern interfaces supporting touch input are ubiquitous. Typically, such interfaces are implemented on integrated touch-display surfaces with simple geometry that can be mathematically parameterized, such as planar surfaces and spheres; for more complicated non-parametric surfaces, such parameterizations are not available. In this dissertation, we introduce a method for generalizable optical multi-touch detection and semantic response on uninstrumented non-parametric rear-projection surfaces using an infrared-light-based multi-camera multi-projector platform. In this paradigm, touch input allows users to manipulate complex virtual 3D content that is registered to and displayed on a physical 3D object. Detected touches trigger responses with specific semantic meaning in the context of the virtual content, such as animations or audio responses. The broad problem of touch detection and response can be decomposed into three major components: determining if a touch has occurred, determining where a detected touch has occurred, and determining how to respond to a detected touch. Our fundamental contribution is the design and implementation of a relational lookup table architecture that addresses these challenges through the encoding of coordinate relationships among the cameras, the projectors, the physical surface, and the virtual content. Detecting the presence of touch input primarily involves distinguishing between touches (actual contact events) and hovers (near-contact proximity events). We present and evaluate two algorithms for touch detection and localization utilizing the lookup table architecture. 
One of the algorithms, a bounded plane sweep, is additionally able to estimate hover-surface distances, which we explore for interactions above surfaces. The proposed method is designed to operate with low latency and to be generalizable. We demonstrate touch-based interactions on several physical parametric and non-parametric surfaces, and we evaluate both system accuracy and the accuracy of typical users in touching desired targets on these surfaces. In a formative human-subject study, we examine how touch interactions are used in the context of healthcare and present an exploratory application of this method in patient simulation. A second study highlights the advantages of touch input on content-matched physical surfaces achieved by the proposed approach, such as decreases in induced cognitive load, increases in system usability, and increases in user touch performance. In this experiment, novice users were nearly as accurate when touching targets on a 3D head-shaped surface as when touching targets on a flat surface, and their self-perception of their accuracy was higher.
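The core idea of the relational lookup table, mapping observed camera coordinates to surface locations and semantic responses, and distinguishing touches from hovers, can be sketched as follows. This is a minimal illustration assuming a dictionary-based table, a single hover/touch distance cutoff, and invented entry names; the dissertation's actual architecture encodes full camera/projector/surface/content relationships.

```python
# Hypothetical lookup table: camera-pixel coordinates -> surface point
# (UV on the physical object) and a semantic response tied to the
# virtual content registered there. Entries are illustrative.
lookup = {
    (120, 85): {"surface_uv": (0.31, 0.62), "response": "play_heartbeat_audio"},
    (121, 85): {"surface_uv": (0.32, 0.62), "response": "play_heartbeat_audio"},
}

HOVER_THRESHOLD_MM = 10.0  # assumed touch/hover cutoff, not from the source

def respond(pixel, distance_mm):
    """Classify an event at `pixel` and return (kind, surface_uv, response)."""
    entry = lookup.get(pixel)
    if entry is None:
        return None  # pixel does not map onto the projected content
    if distance_mm > HOVER_THRESHOLD_MM:
        return ("hover", entry["surface_uv"], None)   # near-contact proximity
    return ("touch", entry["surface_uv"], entry["response"])  # actual contact
```

The split mirrors the three sub-problems the abstract names: whether a touch occurred (the distance test), where it occurred (the surface UV), and how to respond (the semantic entry).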

    Modelling false positive reduction in maritime object detection

    Target detection has become a very significant research area in computer vision, with applications in military, maritime surveillance, and defence and security. Maritime target detection in critical sea conditions produces a number of false positives with existing algorithms, owing to sea waves, the dynamic nature of the ocean, camera motion, sea glint, sensor noise, sea spray, swell, and the presence of birds. The main question addressed in this research is how object detection in the maritime environment can be improved by reducing false positives while maintaining the detection rate. Most previous work on object detection still fails to address the problem of false positives and false negatives caused by background clutter. Many researchers have tried to reduce false positives by applying filters, but filtering degrades image quality, leading to more false alarms during detection. Although radar has previously been the most widely used technology, it still fails to detect very small objects and may only be applied in special circumstances. To improve maritime target detection, an empirical research method was proposed to answer questions about existing target detection algorithms and the techniques used to reduce false positives. Visible images were used to retrain a pre-trained Faster R-CNN with Inception v2. The pre-trained model was retrained on five samples of increasing size; for the last two samples, the data was duplicated to increase the size. For testing purposes, 20 test images were used to evaluate all the models. The results of this study showed that the deep learning method performed best in detecting maritime vessels, that increasing the dataset improved detection performance, and that false positives were reduced. Duplicating images did not yield the best results; however, the results were promising for the first three models with increasing data.
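A common post-processing step for suppressing false positives from a detector such as Faster R-CNN is to discard detections below a confidence threshold. The sketch below illustrates that general technique; the threshold value and the `(label, score)` representation are assumptions for illustration, not details reported in the study.

```python
def filter_detections(detections, score_threshold=0.7):
    """Drop low-confidence detections to suppress false positives.

    `detections` are (label, score) pairs such as those a Faster R-CNN
    detector emits per predicted box. The 0.7 threshold is an
    illustrative assumption, not a value from the study.
    """
    return [d for d in detections if d[1] >= score_threshold]
```

For example, a detection set containing a sea-glint artifact scored at 0.41 would be removed, while vessel detections scored above the threshold are kept. In practice the threshold trades false positives against false negatives and is tuned on a validation set.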