    Activity Monitoring Made Easier by Smart 360-degree Cameras

    This paper proposes the use of smart 360-degree cameras for activity monitoring. By exploiting the geometric properties of these cameras and adopting off-the-shelf tracking algorithms adapted to equirectangular images, the paper shows how simple it becomes to deploy a camera network and to detect the presence of pedestrians in predefined regions of interest, requiring only minimal information about the camera, namely its height. The paper further shows that smart 360-degree cameras can enhance motion understanding in the environment and proposes a simple method to estimate a heatmap of the scene that highlights regions where pedestrians are most often present. Quantitative and qualitative results demonstrate the effectiveness of the proposed approach.
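
    As a rough sketch of the geometry such cameras expose (not the paper's actual implementation), the Python snippet below maps a tracked pedestrian's foot pixel in an equirectangular frame to ground-plane coordinates using only the camera height, then votes it into a presence heatmap. The frame size, mount height, horizon placement, and grid parameters are all assumed values for illustration.

        import numpy as np

        def pixel_to_ground(u, v, width, height, cam_height):
            # Equirectangular mapping: u spans 360 deg of azimuth, v spans
            # +90..-90 deg of elevation, horizon assumed at v = height / 2.
            lon = (u / width) * 2.0 * np.pi - np.pi
            lat = np.pi / 2.0 - (v / height) * np.pi
            if lat >= 0.0:
                return None                      # at or above horizon: ray never hits the floor
            r = cam_height / np.tan(-lat)        # radial distance to the floor point
            return r * np.cos(lon), r * np.sin(lon)

        def vote(heatmap, x, y, cell=0.25, span=10.0):
            # Accumulate a ground point into a coarse occupancy grid (metres per cell).
            i, j = int((x + span) / cell), int((y + span) / cell)
            if 0 <= i < heatmap.shape[0] and 0 <= j < heatmap.shape[1]:
                heatmap[i, j] += 1

        heatmap = np.zeros((80, 80))
        pt = pixel_to_ground(1500, 700, 1920, 960, cam_height=2.5)  # hypothetical foot pixel
        if pt is not None:
            vote(heatmap, *pt)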

    Feature-based calibration of distributed smart stereo camera networks

    A distributed smart camera network is a collective of vision-capable devices with enough processing power to execute algorithms for collaborative vision tasks. A true 3D sensing network applies to a broad range of applications, and local stereo vision capabilities at each node offer the potential for a particularly robust implementation. A novel spatial calibration method for such a network is presented, which obtains pose estimates suitable for collaborative 3D vision in a distributed fashion using two stages of registration on robust 3D features. The method is first described in a general, modular sense, assuming some ideal vision and registration algorithms. Then, existing algorithms are selected for a practical implementation. The method is designed independently of networking details, making only a few basic assumptions about the underlying network's capabilities. Experiments using both software simulations and physical devices are designed and executed to demonstrate performance.
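
    For context, the pairwise building block of such a calibration can be illustrated with the classic closed-form (Kabsch/Umeyama) rigid registration of matched 3D features. This is a generic stand-in, since the paper deliberately keeps the choice of registration algorithm modular; the synthetic features and poses below are purely illustrative.

        import numpy as np

        def rigid_registration(src, dst):
            # Least-squares rigid transform (R, t) with R @ src_i + t ~= dst_i,
            # where src and dst are (N, 3) arrays of matched 3D feature positions.
            c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
            H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = c_dst - R @ c_src
            return R, t

        # Synthetic check: recover node B's pose relative to node A from shared features.
        theta = np.deg2rad(30.0)
        R_ab = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                         [np.sin(theta),  np.cos(theta), 0.0],
                         [0.0,            0.0,           1.0]])
        feats_a = np.random.default_rng(0).uniform(-2.0, 2.0, size=(20, 3))
        feats_b = feats_a @ R_ab.T + np.array([1.0, 0.5, 0.0])
        R, t = rigid_registration(feats_b, feats_a)      # maps B's coordinates into A's frame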

    Single-pass inline pipeline 3D reconstruction using depth camera array

    A novel inline inspection (ILI) approach using a depth camera array (DCA) is introduced to create high-fidelity, dense 3D pipeline models. A new camera calibration method is introduced to register the color and depth information of the cameras into a unified pipe model. By incorporating the calibration outcomes into a robust camera motion estimation approach, dense and complete 3D pipe surface reconstruction is achieved using only the inline image data collected by a self-powered ILI rover in a single pass through a straight pipeline. Laboratory experiments demonstrate one-millimeter geometric accuracy and 0.1-pixel photometric accuracy. On a longer pipeline, the proposed method generates a dense 3D surface reconstruction at millimeter-level accuracy with less than 0.5% distance error. The achieved performance highlights its potential as a useful tool for efficient inline, non-destructive evaluation of pipeline assets.
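
    A minimal sketch of the color/depth registration step, using a generic pinhole formulation rather than the paper's calibration method: the snippet back-projects a depth pixel to 3D, transforms it into the color camera's frame, and reprojects it to find the pixel whose color it should receive. The intrinsics and extrinsics below are made-up placeholder values; in practice they come from calibration.

        import numpy as np

        # Placeholder calibration values (fx, fy, cx, cy and a depth-to-color extrinsic).
        K_DEPTH = np.array([[365.0, 0.0, 256.0], [0.0, 365.0, 212.0], [0.0, 0.0, 1.0]])
        K_COLOR = np.array([[610.0, 0.0, 320.0], [0.0, 610.0, 240.0], [0.0, 0.0, 1.0]])
        R_D2C = np.eye(3)                        # depth-to-color rotation
        T_D2C = np.array([0.025, 0.0, 0.0])      # depth-to-color translation, metres

        def depth_pixel_to_color(u, v, z):
            # Back-project depth pixel (u, v) at range z metres, move it into
            # the color camera's frame, and reproject to color-image coordinates.
            xyz_depth = z * np.linalg.inv(K_DEPTH) @ np.array([u, v, 1.0])
            xyz_color = R_D2C @ xyz_depth + T_D2C
            uvw = K_COLOR @ xyz_color
            return uvw[0] / uvw[2], uvw[1] / uvw[2]

        u_c, v_c = depth_pixel_to_color(300, 200, z=0.8)   # color pixel to sample RGB from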

    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the first-person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Given this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches combine different image features with quantitative methods to accomplish specific objectives such as object detection, activity recognition, and user-machine interaction. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges, and opportunities within the field.
    Keywords: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction