
    A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks

    From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. Practical applications often use sparsely placed cameras to maximize visibility, while using as few cameras as possible to minimize cost. In general, it is challenging to calibrate sparse camera networks due to the lack of shared scene features across different camera views. In this paper, we propose a novel algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Our work has a number of novel features. First, to cope with the wide separation between different cameras, we establish view correspondences by using a spherical calibration object. We show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsic calibration using rigid transformation, which is optimal only for pinhole cameras, we systematically test different view transformation functions including rigid transformation, polynomial transformation, and manifold regression to determine the most robust mapping that generalizes well to unseen data. Third, we reformulate the celebrated bundle adjustment procedure to minimize the global 3D reprojection error so as to fine-tune the initial estimates. Finally, our scalable client-server architecture is computationally efficient: the calibration of a five-camera system, including data capture, can be done in minutes using only commodity PCs. Our proposed framework is compared with other state-of-the-art systems using both quantitative measurements and visual alignment results of the merged point clouds.
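    The abstract lists rigid transformation as one of the candidate view transformation functions. As an illustration only, not the authors' implementation, the sketch below fits a rigid mapping between corresponding 3D sphere-center observations from two cameras using the standard least-squares (Kabsch) solution; the arrays src and dst are hypothetical (N, 3) correspondence sets.

    import numpy as np

    def fit_rigid_transform(src, dst):
        # Least-squares rigid fit (Kabsch): find R, t minimizing ||R @ src_i + t - dst_i||^2.
        src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_mean).T @ (dst - dst_mean)      # 3x3 cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dst_mean - R @ src_mean
        return R, t

    # Example: mean 3D alignment error on held-out correspondences.
    # R, t = fit_rigid_transform(train_src, train_dst)
    # err = np.linalg.norm(test_src @ R.T + t - test_dst, axis=1).mean()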

    Object Tracking Based on Satellite Videos: A Literature Review

    Video satellites have recently become an attractive method of Earth observation, providing consecutive images of the Earth’s surface for continuous monitoring of specific events. The development of on-board optical and communication systems has enabled the various applications of satellite image sequences. However, satellite video-based target tracking is a challenging research topic in remote sensing due to its relatively low spatial and temporal resolution. Thus, this survey systematically investigates current satellite video-based tracking approaches and benchmark datasets, focusing on five typical tracking applications: traffic target tracking, ship tracking, typhoon tracking, fire tracking, and ice motion tracking. The essential aspects of each tracking target are summarized, such as the tracking architecture, the fundamental characteristics, primary motivations, and contributions. Furthermore, popular visual tracking benchmarks and their respective properties are discussed. Finally, a revised multi-level dataset based on WPAFB videos is generated and quantitatively evaluated for future development in the satellite video-based tracking area. In addition, the 54.3% of tracklets with the lowest Difficulty Score (DS) form the Easy group, while 27.2% and 18.5% of the tracklets are grouped into the Medium-DS and Hard-DS groups, respectively.
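    As a hedged illustration of the dataset regrouping described above, tracklets with a precomputed DS could be partitioned as in the following sketch; the paper defines its own Difficulty Score and cut-offs, so the threshold values here are placeholders.

    def group_by_difficulty(tracklets, easy_max=0.3, medium_max=0.6):
        # tracklets: iterable of (tracklet_id, difficulty_score) pairs.
        # The two thresholds are hypothetical; the paper's own cut-offs yield
        # the reported 54.3% / 27.2% / 18.5% Easy / Medium-DS / Hard-DS split.
        groups = {"Easy": [], "Medium-DS": [], "Hard-DS": []}
        for tid, ds in tracklets:
            if ds <= easy_max:
                groups["Easy"].append(tid)
            elif ds <= medium_max:
                groups["Medium-DS"].append(tid)
            else:
                groups["Hard-DS"].append(tid)
        return groups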

    Processing RGB Color Sensors for Measuring the Circadian Stimulus of Artificial and Daylight Light Sources

    The three main tasks of modern lighting design are to support visual performance, satisfy color emotion (color quality), and promote positive non-visual outcomes. In view of large-scale applications, the use of simple and inexpensive RGB color sensors to monitor related visual and non-visual illumination parameters seems highly promising for the future development of human-centered lighting control systems. In this context, the present work proposes a new methodology to assess the circadian effectiveness of the prevalent lighting conditions for daylight and artificial light sources in terms of the physiologically relevant circadian stimulus (CS) metric using such color sensors. In the case of daylight, the raw sensor readouts were processed in such a way that the CIE daylight model could be applied as an intermediate step to estimate its spectral composition, from which CS can eventually be calculated straightforwardly. Maximal CS prediction errors of less than 0.0025 were observed when tested on real data. For artificial light sources, on the other hand, the CS approximation method of Truong et al. was applied to estimate their circadian effectiveness from the sensor readouts. In this case, a maximal CS prediction error of 0.028 was observed, which is considerably larger compared to daylight, but still in an acceptable range for typical indoor lighting applications. The use of RGB color sensors is thus shown to be suitable for estimating the circadian effectiveness of both types of illumination with sufficient accuracy for practical applications.
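    The daylight branch described above relies on the CIE daylight model as an intermediate step. The sketch below shows only that standard step, assuming a correlated colour temperature (CCT) has already been estimated from the sensor readouts and that the tabulated CIE basis spectra S0, S1, S2 are available; the CCT estimation and the subsequent CS computation follow the paper and are not reproduced here.

    import numpy as np

    def cie_daylight_spd(cct, S0, S1, S2):
        # Standard CIE daylight model: chromaticity (x, y) from CCT, then mixing
        # weights M1, M2 for the S0/S1/S2 basis spectra (valid for 4000-25000 K).
        if cct <= 7000.0:
            x = -4.6070e9 / cct**3 + 2.9678e6 / cct**2 + 0.09911e3 / cct + 0.244063
        else:
            x = -2.0064e9 / cct**3 + 1.9018e6 / cct**2 + 0.24748e3 / cct + 0.237040
        y = -3.000 * x**2 + 2.870 * x - 0.275
        m = 0.0241 + 0.2562 * x - 0.7341 * y
        M1 = (-1.3515 - 1.7703 * x + 5.9114 * y) / m
        M2 = (0.0300 - 31.4424 * x + 30.0717 * y) / m
        return np.asarray(S0) + M1 * np.asarray(S1) + M2 * np.asarray(S2)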

    Interactive energy minimizing segmentation frameworks

    [no abstract]

    SHELDON: Smart habitat for the elderly

    An insightful document concerning active and assisted living from different perspectives: furniture and habitat, ICT solutions, and healthcare.