
    Real-Time High Resolution Integrated Optical Micro-Spectrometer

    A real-time integrated planar single-mode waveguide grating micro-spectrometer with a high resolution of 0.5 nm over a 120 nm wide range of the visible spectrum, from 525 nm to 645 nm, is demonstrated. A CMOS sensor captures the output image of the micro-spectrometer, and an f = 1 cm lens focuses the diffracted monochromatic light onto the sensor. An algorithm based on a simple polynomial equation uses two known reference wavelengths to convert x-pixel numbers of the CMOS sensor into a wavelength spectrum. The output of the micro-spectrometer in this design has comparatively less noise than usual spectrometric measurements. The design uses built-in MATLAB functions such as 'findpeaks' to locate the input laser peaks and their central pixel numbers, and 'polyfit' to find the coefficients needed to calibrate the wavelength spectrum.
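    The two-reference-point polynomial calibration described above can be sketched as follows; this is a minimal NumPy analog of the MATLAB 'polyfit' step, with hypothetical pixel positions and reference laser wavelengths (the abstract does not give the actual values).

    ```python
    import numpy as np

    def calibrate_pixels_to_wavelength(peak_pixels, ref_wavelengths, deg=1):
        """Fit a polynomial mapping CMOS x-pixel index to wavelength (nm).

        peak_pixels: pixel positions of known reference laser peaks
        ref_wavelengths: their known wavelengths in nm
        """
        coeffs = np.polyfit(peak_pixels, ref_wavelengths, deg)
        return np.poly1d(coeffs)

    # Two hypothetical reference lasers inside the 525-645 nm range
    to_nm = calibrate_pixels_to_wavelength([120, 1080], [532.0, 632.8])
    ```

    With two reference wavelengths, a degree-1 fit is exactly determined; locating the reference peaks themselves (the 'findpeaks' step) would precede this in practice.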

    Standoff Methods for the Detection of Threat Agents: A Review of Several Promising Laser-Based Techniques

    Detection of explosives, explosive precursors, or other threat agents presents a number of technological challenges for optical sensing methods. Detecting trace levels of threat agents against a complex background is chief among these challenges; however, the related issues of multiple target distances (from standoff to proximity) and sampling time scales (from passive mines to rapid rate-of-march convoy protection) for different applications make it unlikely that a single technique will be ideal for all sensing situations. A number of methods spanning the range of optical sensor technologies exist which, when integrated, could produce a fused sensor system possessing a high level of sensitivity to threat agents and a moderate standoff real-time capability appropriate for portal screening of personnel or vehicles. In this work, we focus on several promising, and potentially synergistic, laser-based methods for sensing threat agents. For each method, we briefly outline the technique and report on the current level of capability.

    Comprehensive Use of Curvature for Robust and Accurate Online Surface Reconstruction

    Interactive real-time scene acquisition from hand-held depth cameras has recently developed much momentum, enabling applications in ad-hoc object acquisition, augmented reality and other fields. A key challenge to online reconstruction remains error accumulation in the reconstructed camera trajectory, due to drift-inducing instabilities in the range scan alignments of the underlying iterative-closest-point (ICP) algorithm. Various strategies have been proposed to mitigate that drift, including SIFT-based pre-alignment, color-based weighting of ICP pairs, stronger weighting of edge features, and so on. In our work, we focus on surface curvature as a feature that is detectable on range scans alone and hence does not depend on accurate multi-sensor alignment. In contrast to previous work that took curvature into consideration, however, we treat curvature as an independent quantity that we consistently incorporate into every stage of the real-time reconstruction pipeline, including densely curvature-weighted ICP, range image fusion, local surface reconstruction, and rendering. Using multiple benchmark sequences, and in direct comparison to other state-of-the-art online acquisition systems, we show that our approach significantly reduces drift, both when analyzing individual pipeline stages in isolation and across the online reconstruction pipeline as a whole.
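    One common way to weight ICP correspondences by curvature agreement is a Gaussian falloff on the curvature mismatch; the sketch below illustrates that general idea only, it is not the paper's actual formulation, and the tolerance parameter `sigma` is an assumption.

    ```python
    import numpy as np

    def curvature_weights(kappa_src, kappa_dst, sigma=0.1):
        """Down-weight ICP point pairs whose curvature estimates disagree.

        kappa_src, kappa_dst: per-correspondence curvature values from the
        source and destination range scans (hypothetical inputs).
        sigma: tolerance on curvature mismatch (assumed parameter).
        Returns a weight in (0, 1] per correspondence: 1 for identical
        curvatures, falling off smoothly as the mismatch grows.
        """
        diff = np.abs(np.asarray(kappa_src) - np.asarray(kappa_dst))
        return np.exp(-(diff / sigma) ** 2)
    ```

    Such weights would multiply the per-pair residuals in the ICP error term, so pairs with inconsistent curvature contribute less to the alignment.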

    3D Capturing Performances of Low-Cost Range Sensors for Mass-Market Applications

    Since the advent of the first Kinect as a motion-controller device for the Microsoft XBOX platform (November 2010), several similar active and low-cost range sensing devices have been introduced on the mass market for many purposes, including gesture-based interfaces, 3D multimedia interaction, robot navigation, finger tracking, 3D body scanning for garment design, and proximity sensors for automotive. However, given their capability to generate a real-time stream of range images, these devices have also been used in some projects as general-purpose range devices, with performance that for some applications might be satisfying. This paper describes the working principle of the various devices, analyzing them in terms of systematic errors and random errors to explore their applicability to standard 3D capturing problems. Five actual devices have been tested, featuring three different technologies: i) Kinect V1 by Microsoft, Structure Sensor by Occipital, and Xtion PRO by ASUS, all based on different implementations of the Primesense sensor; ii) F200 by Intel/Creative, implementing the Realsense pattern projection technology; iii) Kinect V2 by Microsoft, equipped with the Canesta TOF camera. A critical analysis of the results tries first of all to compare them, and secondly to identify the range of applications for which such devices could actually work as a viable solution.
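    A standard way to separate the two error classes mentioned above is to record many frames of a static target: the per-pixel standard deviation over time measures random error, while the deviation of the averaged depth map from the known target geometry reveals systematic error. A minimal sketch of the first half, assuming depth frames are available as a NumPy stack:

    ```python
    import numpy as np

    def depth_error_stats(frames):
        """Per-pixel statistics from repeated range images of a static scene.

        frames: (N, H, W) stack of depth frames of an unmoving target.
        Returns the mean depth map and the per-pixel standard deviation;
        the latter estimates the sensor's random error, while comparing
        the mean map against a fitted plane (not done here) would expose
        systematic error.
        """
        frames = np.asarray(frames, dtype=float)
        return frames.mean(axis=0), frames.std(axis=0)
    ```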

    Autonomous construction agents: An investigative framework for large sensor network self-management

    Recent technological advances have made it cost-effective to utilize massive, heterogeneous sensor networks. To gain appreciable value from these informational systems, there must be a control scheme that coordinates information flow to produce meaningful results. This paper focuses on tools developed to manage the coordination of autonomous construction agents using stigmergy, in which a set of basic low-level rules is implemented through various environmental cues. Using VE-Suite, an open-source virtual engineering software package, an interactive environment is created to explore various informational configurations for the construction problem. A simple test case is developed within the framework, and construction times are analyzed for possible functional relationships between the performance of a particular set of parameters and a given control process. Initial experiments for the test case show that sensor saturation occurs relatively quickly, at 5-7 sensors, and that construction time is generally independent of sensor range except for small numbers of sensors. Further experiments using this framework are needed to define other aspects of sensor performance. These trends can then be used to help decide what kinds of sensing capabilities are required to simultaneously achieve the most cost-effective solution and provide the required value of information when applied to the development of real-world sensor applications.
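    The core of stigmergic coordination is that agents communicate only through markers left in the environment. The toy rule below is purely illustrative (it is not VE-Suite's API or the paper's actual rule set): an agent deposits a marker at its current cell, then moves toward the neighboring cell with the strongest marker.

    ```python
    import random

    def stigmergic_step(grid, pos, deposit=1.0):
        """One low-level stigmergy rule (illustrative sketch).

        grid: dict mapping (x, y) cells to accumulated marker strength,
        acting as the shared environment. pos: agent's current cell.
        The agent deposits a marker, then moves to the strongest-marked
        4-neighbor, breaking ties at random.
        """
        x, y = pos
        grid[(x, y)] = grid.get((x, y), 0.0) + deposit
        neighbors = [(x + dx, y + dy)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        best = max(grid.get(n, 0.0) for n in neighbors)
        return random.choice([n for n in neighbors
                              if grid.get(n, 0.0) == best])
    ```

    Running many such agents over a shared `grid` produces emergent trails without any direct agent-to-agent messaging, which is the coordination mechanism the framework investigates.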

    Advanced Mid-Water Tools for 4D Marine Data Fusion and Visualization

    Mapping and charting of the seafloor underwent a revolution approximately 20 years ago with the introduction of multibeam sonars -- sonars that provided complete, high-resolution coverage of the seafloor rather than sparse measurements. The initial focus of these sonar systems was the charting of depths in support of safety of navigation and offshore exploration; more recently, innovations in processing software have led to approaches for characterizing seafloor type and mapping seafloor habitat in support of fisheries research. In recent years, a new generation of multibeam sonars has been developed that, for the first time, has the ability to map the water column along with the seafloor. This ability will potentially allow multibeam sonars to address a number of critical ocean problems, including the direct mapping of fish and marine mammals, the location of mid-water targets and, if water column properties are appropriate, a wide range of physical oceanographic processes. This potential relies on suitable software to make use of all of the newly available data. Currently, the users of these sonars have a limited view of the mid-water data in real time and limited capacity to store it, replay it, or run further analysis. The data also need to be integrated with other sensor assets such as bathymetry, backscatter, sub-bottom, and seafloor characterizations so that a "complete" picture of the marine environment under analysis can be realized. Software tools developed for this type of data integration should support a wide range of sonars with a unified format across the wide variety of mid-water sonar types. This paper describes the evolution and result of an effort to create a software tool that meets these needs, and details case studies using the new tools in the areas of fisheries research, static target search, wreck surveys and physical oceanographic processes.

    A Novel Real-Time Edge-Cloud Big Data Management and Analytics Framework for Smart Cities

    Exposing city information to dynamic, distributed, powerful, scalable, and user-friendly big data systems is expected to enable the implementation of a wide range of new opportunities; however, the size, heterogeneity and geographical dispersion of the data often make it difficult to combine, analyze and consume them in a single system. In the context of the H2020 CLASS project, we describe an innovative framework aiming to facilitate the design of advanced big-data analytics workflows. The proposal covers the whole compute continuum, from edge to cloud, and relies on a well-organized distributed infrastructure exploiting: a) edge solutions with advanced computer vision technologies enabling the real-time generation of "rich" data from a vast array of sensor types; b) cloud data management techniques offering efficient storage, real-time querying and updating of the high-frequency incoming data at different granularity levels. We specifically focus on obstacle detection and tracking for edge processing, and consider a traffic density monitoring application, with hierarchical data aggregation features for cloud processing; the discussed techniques will constitute the groundwork enabling many further services. The tests are performed on the real use case of the Modena Automotive Smart Area (MASA).
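    The hierarchical aggregation idea can be illustrated with a minimal sketch: edge-side detections (object positions) are binned into spatial cells, and the cell size sets the granularity level at which the cloud side stores density. The function and its inputs are hypothetical, not the project's actual API.

    ```python
    from collections import defaultdict

    def aggregate_counts(detections, cell_size):
        """Bin per-object detections into density counts on a spatial grid.

        detections: iterable of (x, y) positions from edge-side trackers
        (hypothetical input). cell_size: grid cell edge length; a larger
        value corresponds to a coarser aggregation level.
        """
        counts = defaultdict(int)
        for x, y in detections:
            counts[(int(x // cell_size), int(y // cell_size))] += 1
        return dict(counts)

    dets = [(5, 5), (15, 5), (105, 5)]
    fine = aggregate_counts(dets, 10)     # fine-grained density grid
    coarse = aggregate_counts(dets, 100)  # coarser, block-level density
    ```

    Re-aggregating the same detections at several cell sizes gives the multi-granularity views the cloud layer can serve without re-querying the edge.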

    Long Range Automated Persistent Surveillance

    This dissertation addresses long-range automated persistent surveillance, with a focus on three topics: sensor planning, size-preserving tracking, and high-magnification imaging. For sensor planning, an overlapped field of view should be reserved so that camera handoff can be executed successfully before the object of interest becomes unidentifiable or untraceable. We design a sensor planning algorithm that not only maximizes coverage but also ensures uniform and sufficient overlap between camera fields of view for an optimal handoff success rate. This algorithm works for environments with multiple dynamic targets using different types of cameras. Significantly improved handoff success rates are illustrated via experiments using floor plans of various scales. Size-preserving tracking automatically adjusts the camera's zoom for a consistent view of the object of interest. Target scale estimation is carried out based on the paraperspective projection model, which compensates for the center offset and accounts for system latency and tracking errors. A computationally efficient foreground segmentation strategy, 3D affine shapes, is proposed. The 3D affine shapes feature direct and real-time implementation and improved flexibility in accommodating the target's 3D motion, including off-plane rotations. The effectiveness of the scale estimation and foreground segmentation algorithms is validated via both offline and real-time tracking of pedestrians at various resolution levels. Face image quality assessment and enhancement compensate for the degradation in face recognition rates caused by high system magnifications and long observation distances. A class of adaptive sharpness measures is proposed to evaluate and predict this degradation. A wavelet-based enhancement algorithm with automated frame selection is developed and proves efficient, yielding a considerably elevated face recognition rate for severely blurred long-range face images.
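    As a point of reference for the sharpness-based quality assessment above, a basic gradient-energy score is sketched below; it is one simple member of the family of sharpness measures, not the dissertation's adaptive formulation.

    ```python
    import numpy as np

    def gradient_sharpness(img):
        """Gradient-energy sharpness score for a 2-D grayscale image.

        Returns the mean squared gradient magnitude: a sharp edge yields
        large, concentrated gradients, while blur spreads the same
        intensity transition over many weak gradients, lowering the score.
        """
        gy, gx = np.gradient(np.asarray(img, dtype=float))
        return float(np.mean(gx ** 2 + gy ** 2))
    ```

    A quality-driven frame selector could rank frames of the same face by such a score and keep only the sharpest for recognition or enhancement.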

    e-DOTS: AN INDOOR TRACKING SOLUTION

    Poster abstract. Accurately tracking an object as it moves in a large indoor area is attractive due to its applicability to a wide range of domains. For example, a typical healthcare setup may benefit from tracking its assets, such as specialized equipment, in real time and thus optimize their usage. Existing techniques, such as GPS, that focus on outdoor tracking do not provide accurate estimations of location within the confines of an indoor setup. Prevalent approaches that attempt to provide indoor tracking primarily rely on a homogeneous type of sensor when estimating an object's location. Such a homogeneous view is neither beneficial nor sufficient due to the specific characteristics of single types of sensors. This research aims to create a distributed tracking system composed of many different kinds of inexpensive, off-the-shelf sensors to address this challenge. Specifically, the proposed system, called the Enhanced Distributed Object Tracking System (e-DOTS), will incorporate sensors such as web cameras, publicly available wireless access points, and inexpensive RFID tracking tags to achieve accurate tracking over a large indoor area in real time. As an object, in addition to moving in a known indoor setup, may move through an unknown confined area, e-DOTS needs to incorporate opportunistic discovery of available sensors, select a proper subset of them, and fuse their readings in real time to achieve an accurate estimation of the current position of that object. A preliminary prototype of e-DOTS has been created and experimented with. The results of these validations are promising and suggest that e-DOTS can achieve its desired goals. Further research is aimed at incorporating different kinds of sensors, different fusion techniques (e.g., Federated Kalman Filtering), and various discovery mechanisms to improve the tracking accuracy and the associated response time.
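    Fusing readings from heterogeneous sensors, each with its own accuracy, is the core operation described above. The sketch below uses inverse-variance weighting, a much simpler stand-in for the Federated Kalman Filtering the abstract mentions as future work; sensor variances here are hypothetical.

    ```python
    def fuse_position_estimates(estimates):
        """Fuse independent 1-D position readings by inverse-variance weighting.

        estimates: list of (position, variance) pairs, one per sensor
        (e.g., a camera, a Wi-Fi access point, an RFID reader, with
        variances reflecting each sensor's accuracy).
        Returns (fused_position, fused_variance); more accurate sensors
        (smaller variance) pull the estimate harder, and the fused
        variance is smaller than any single sensor's.
        """
        inv_vars = [1.0 / v for _, v in estimates]
        total = sum(inv_vars)
        fused = sum(x / v for x, v in estimates) / total
        return fused, 1.0 / total
    ```

    Extending this per-axis, with time-varying variances and a motion model, leads naturally to the Kalman-filter-based fusion the authors plan to explore.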

    Modeling Camera Effects to Improve Visual Learning from Synthetic Data

    Recent work has focused on generating synthetic imagery to increase the size and variability of training data for learning visual tasks in urban scenes. This includes increasing the occurrence of occlusions or varying environmental and weather effects. However, few have addressed modeling variation in the sensor domain. Sensor effects can degrade real images, limiting the generalizability of networks trained on synthetic data and tested in real environments for visual tasks. This paper proposes an efficient, automatic, physically-based augmentation pipeline that varies sensor effects (chromatic aberration, blur, exposure, noise, and color cast) in synthetic imagery. In particular, this paper illustrates that augmenting synthetic training datasets with the proposed pipeline reduces the domain gap between the synthetic and real domains for the task of object detection in urban driving scenes.
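    A minimal sketch of such an augmentation step is given below, applying blur, exposure shift, color cast, and additive noise to a float RGB image. It illustrates the general idea only: the parameter values are arbitrary, the blur uses a simple averaging kernel rather than the paper's physically-based models, and chromatic aberration is omitted.

    ```python
    import numpy as np

    def augment_sensor_effects(img, rng, blur_sigma=1.0, noise_std=0.02,
                               exposure=1.2, color_cast=(1.05, 1.0, 0.95)):
        """Apply illustrative sensor effects to a float RGB image in [0, 1].

        img: (H, W, 3) array; rng: a numpy Generator for reproducible noise.
        All parameter defaults are assumptions, not the paper's values.
        """
        out = np.asarray(img, dtype=float)
        # separable averaging blur (a crude stand-in for optical blur)
        k = max(1, int(2 * blur_sigma) | 1)   # odd kernel width
        kernel = np.ones(k) / k
        for axis in (0, 1):
            out = np.apply_along_axis(
                lambda m: np.convolve(m, kernel, mode='same'), axis, out)
        out = out * exposure                          # exposure shift
        out = out * np.asarray(color_cast)            # per-channel color cast
        out = out + rng.normal(0.0, noise_std, out.shape)  # sensor noise
        return np.clip(out, 0.0, 1.0)
    ```

    In a training pipeline, the effect magnitudes would be randomly sampled per image so the detector sees a distribution of sensor degradations rather than a single fixed one.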