
    Keyframe-based monocular SLAM: design, survey, and future directions

    Extensive research in the field of monocular SLAM over the past fifteen years has yielded workable systems that have found their way into various applications in robotics and augmented reality. Although filter-based monocular SLAM systems were once common, the more efficient keyframe-based solutions are becoming the de facto methodology for building a monocular SLAM system. The objective of this paper is threefold: first, the paper serves as a guideline for people seeking to design their own monocular SLAM according to specific environmental constraints. Second, it presents a survey that covers the various keyframe-based monocular SLAM systems in the literature, detailing the components of their implementation and critically assessing the specific strategies made in each proposed solution. Third, the paper provides insight into the direction of future research in this field, to address the major limitations still facing monocular SLAM, namely illumination changes, initialization, highly dynamic motion, poorly textured scenes, repetitive textures, map maintenance, and failure recovery.
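    A core design decision in any keyframe-based system is when to promote a frame to a keyframe. The sketch below shows one common heuristic (names and thresholds are illustrative assumptions, not taken from any specific system in the survey): insert a keyframe when tracking quality drops or too many frames have elapsed.

```python
# Hypothetical keyframe-selection heuristic for keyframe-based monocular
# SLAM. `tracked_ratio` is the fraction of map points from the last
# keyframe still tracked in the current frame; thresholds are illustrative.

def should_insert_keyframe(tracked_ratio, frames_since_last_kf,
                           min_ratio=0.7, max_gap=20):
    """Insert a new keyframe when tracking degrades or the temporal gap
    since the last keyframe grows too large."""
    if tracked_ratio < min_ratio:        # too few map points still tracked
        return True
    if frames_since_last_kf >= max_gap:  # enforce a maximum frame gap
        return True
    return False

print(should_insert_keyframe(0.9, 5))   # healthy tracking -> False
print(should_insert_keyframe(0.5, 5))   # tracking degraded -> True
```

    Real systems typically add further conditions (e.g., minimum parallax to the reference keyframe, or whether local mapping is idle), which is exactly the kind of per-system strategy the survey compares.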

    Group-In: Group Inference from Wireless Traces of Mobile Devices

    This paper proposes Group-In, a wireless scanning system to detect static or mobile groups of people in indoor or outdoor environments. Group-In collects only wireless traces from Bluetooth-enabled mobile devices for group inference. The key problem addressed in this work is to detect not only static groups but also moving groups, with a multi-phased approach based only on noisy wireless Received Signal Strength Indicators (RSSIs) observed by multiple wireless scanners without localization support. We propose new centralized and decentralized schemes to process the sparse and noisy wireless data, and leverage graph-based clustering techniques for group detection from short-term and long-term aspects. Group-In provides two outcomes: 1) group detection in short time intervals such as two minutes and 2) long-term linkages such as a month. To verify the performance, we conduct two experimental studies. One consists of 27 controlled scenarios in lab environments. The other is a real-world scenario where we place Bluetooth scanners in an office environment, and employees carry beacons for more than one month. Both the controlled and real-world experiments result in high-accuracy group detection in short time intervals and sampling liberties in terms of the Jaccard index and pairwise similarity coefficient.
    Comment: This work has been funded by the EU Horizon 2020 Programme under Grant Agreements No. 731993 AUTOPILOT and No. 871249 LOCUS projects. The content of this paper does not reflect the official opinion of the EU. Responsibility for the information and views expressed therein lies entirely with the authors. Proc. of ACM/IEEE IPSN'20, 202
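    One of the two evaluation metrics named in the abstract, the Jaccard index, scores how well a detected group of device IDs matches a ground-truth group. A minimal sketch (not Group-In's actual code; device names are made up):

```python
# Jaccard index between a detected group and a ground-truth group,
# each represented as a collection of device identifiers.

def jaccard(detected, truth):
    """|intersection| / |union|; 1.0 for a perfect match, 0.0 for disjoint sets."""
    detected, truth = set(detected), set(truth)
    if not detected and not truth:
        return 1.0  # two empty groups agree trivially
    return len(detected & truth) / len(detected | truth)

print(jaccard({"dev1", "dev2", "dev3"}, {"dev2", "dev3", "dev4"}))  # 0.5
```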

    Building with Drones: Accurate 3D Facade Reconstruction using MAVs

    Automatic reconstruction of 3D models from images using multi-view Structure-from-Motion (SfM) methods has been one of the most fruitful outcomes of computer vision. These advances, combined with the growing popularity of Micro Aerial Vehicles as an autonomous imaging platform, have made 3D vision tools ubiquitous for a large number of Architecture, Engineering and Construction applications among audiences mostly unskilled in computer vision. However, to obtain high-resolution and accurate reconstructions of a large-scale object using SfM, there are many critical constraints on the quality of image data, which often become sources of inaccuracy because current 3D reconstruction pipelines do not help users determine the fidelity of input data during image acquisition. In this paper, we present and advocate a closed-loop interactive approach that performs incremental reconstruction in real time and gives users online feedback about quality parameters, such as Ground Sampling Distance (GSD) and image redundancy, on a surface mesh. We also propose a novel multi-scale camera network design to prevent scene drift caused by incremental map building, and release the first multi-scale image sequence dataset as a benchmark. Further, we evaluate our system on real outdoor scenes, and show that our interactive pipeline combined with a multi-scale camera network approach provides compelling accuracy in multi-view reconstruction tasks when compared against state-of-the-art methods.
    Comment: 8 pages, 2015 IEEE International Conference on Robotics and Automation (ICRA '15), Seattle, WA, US
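    The Ground Sampling Distance fed back to the user can be computed from the standard pinhole relation between flying height, focal length, and sensor geometry. A minimal sketch under that standard model (the example numbers are illustrative, not from the paper):

```python
# Ground Sampling Distance (GSD): the ground footprint of one image pixel,
# for a nadir-looking pinhole camera at a given altitude.

def ground_sampling_distance(altitude_m, focal_mm, sensor_width_mm, image_width_px):
    """GSD in metres per pixel = (altitude * sensor width) / (focal length * image width)."""
    return (altitude_m * sensor_width_mm) / (focal_mm * image_width_px)

# e.g. 30 m altitude, 8.8 mm-wide sensor, 8.8 mm focal length, 4000 px image width
gsd = ground_sampling_distance(30.0, 8.8, 8.8, 4000)
print(f"{gsd * 100:.2f} cm/px")  # 0.75 cm/px
```

    Flying lower or zooming in shrinks the GSD (finer detail), which is why an online display of this quantity helps a pilot judge whether the current pass will support the required reconstruction resolution.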

    SysMART Indoor Services: A System of Smart and Connected Supermarkets

    Smart gadgets are being embedded in almost every aspect of our lives. From smart cities to smart watches, modern industries are increasingly supporting the Internet of Things (IoT). SysMART aims at making supermarkets smart and productive, with a touch of modern lifestyle. While similar implementations to improve the shopping experience exist, they tend mainly to replace the shopping activity at the store with online shopping. Although online shopping reduces time and effort, it deprives customers of enjoying the experience. SysMART relies on cutting-edge devices and technology to simplify and reduce the time required during grocery shopping inside the supermarket. In addition, the system monitors and maintains perishable products in good condition suitable for human consumption. SysMART is built using state-of-the-art technologies that support rapid prototyping and precision data acquisition. The selected development environment is LabVIEW with its world-class interfacing libraries. The paper comprises a detailed system description, development strategy, interface design, software engineering, and a thorough analysis and evaluation.
    Comment: 7 pages, 11 figures

    High-Precision Fruit Localization Using Active Laser-Camera Scanning: Robust Laser Line Extraction for 2D-3D Transformation

    Recent advancements in deep learning-based approaches have led to remarkable progress in fruit detection, enabling robust fruit identification in complex environments. However, much less progress has been made on fruit 3D localization, which is equally crucial for robotic harvesting. Complex fruit shape/orientation, fruit clustering, varying lighting conditions, and occlusions by leaves and branches have greatly restricted existing sensors from achieving accurate fruit localization in the natural orchard environment. In this paper, we report on the design of a novel localization technique, called Active Laser-Camera Scanning (ALACS), to achieve accurate and robust fruit 3D localization. The ALACS hardware setup comprises a red line laser, an RGB color camera, a linear motion slide, and an external RGB-D camera. Leveraging the principles of dynamic-targeting laser triangulation, ALACS enables precise transformation of the projected 2D laser line from the surface of apples to 3D positions. To facilitate laser pattern acquisition, a Laser Line Extraction (LLE) method is proposed for robust and high-precision feature extraction on apples. Comprehensive evaluations of LLE demonstrated its ability to extract precise patterns under variable lighting and occlusion conditions. The ALACS system achieved average apple localization accuracies of 6.9–11.2 mm at distances ranging from 1.0 m to 1.6 m, compared to 21.5 mm by a commercial RealSense RGB-D camera, in an indoor experiment. Orchard evaluations demonstrated that ALACS achieved a 95% fruit detachment rate versus a 71% rate by the RealSense camera. By overcoming the challenges of apple 3D localization, this research contributes to the advancement of robotic fruit harvesting technology.
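    The 2D-to-3D transformation in laser triangulation rests on a similar-triangles argument. The sketch below shows the textbook geometry, not the ALACS pipeline itself: a laser beam parallel to the camera's optical axis at a known lateral baseline projects onto the image at a pixel offset from the principal point, and depth follows as z = f·b/u (all numbers are illustrative).

```python
# Textbook laser-triangulation depth recovery (illustrative geometry,
# not the ALACS implementation): a laser parallel to the optical axis
# at lateral offset `baseline_mm` images at pixel offset `u_px`.

def depth_from_laser_pixel(u_px, focal_px, baseline_mm):
    """Depth in mm of the laser spot: z = f * b / u (similar triangles)."""
    if u_px <= 0:
        raise ValueError("laser spot must be offset from the principal point")
    return focal_px * baseline_mm / u_px

# e.g. focal length 1400 px, baseline 100 mm, spot imaged 100 px off-centre
print(depth_from_laser_pixel(100.0, 1400.0, 100.0))  # 1400.0 mm
```

    Because the pixel offset shrinks as depth grows, accuracy degrades with range, which is consistent with the paper reporting accuracy as a function of distance (1.0 m to 1.6 m).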

    Multi-wavelength, multi-beam, photonic based sensor for object discrimination and positioning

    Over the last decade, substantial research efforts have been dedicated towards the development of advanced laser scanning systems for discrimination in perimeter security, defence, agriculture, transportation, surveying and geosciences. Military forces, in particular, have already started employing laser scanning technologies for projectile guidance, surveillance, satellite and missile tracking, and target discrimination and recognition. However, laser scanning is a relatively new security technology, though it has previously been utilized for a wide variety of civil and military applications. Terrestrial laser scanning has found new use as an active optical sensor for indoor and outdoor perimeter security. A laser scanning technique with moving parts was tested at the British Home Office - Police Scientific Development Branch (PSDB) in 2004. It was found that laser scanning has the capability to detect humans at 30 m range and vehicles at 80 m range with low false alarm rates. However, laser scanning with moving parts is much more sensitive to vibrations than a multi-beam stationary optic approach. Mirror device scanners are slow, bulky and expensive, and being inherently mechanical they wear out as a result of acceleration, cause deflection errors and require regular calibration. Multi-wavelength laser scanning represents a potential evolution from object detection to object identification and classification, where detailed features of objects and materials are discriminated by measuring their reflectance characteristics at specific wavelengths and matching them with their spectral reflectance curves. With the recent advances in the development of high-speed sensors and high-speed data processors, the implementation of multi-wavelength laser scanners for object identification has now become feasible. 
    A two-wavelength photonic-based sensor for object discrimination has recently been reported, based on the use of an optical cavity for generating a laser spot array and maintaining adequate overlap between tapped collimated laser beams of different wavelengths over a long optical path. While this approach is capable of discriminating between objects of different colours, its main drawback is the limited number of security-related objects that can be discriminated. This thesis proposes and demonstrates the concept of a novel photonic-based multi-wavelength sensor for object identification and position finding. The sensor employs a laser combination module for input wavelength signal multiplexing and beam overlapping, a custom-made curved optical cavity for multi-beam spot generation through internal beam reflection and transmission, and a high-speed imager for scattered reflectance spectral measurements. Experimental results show that five different laser wavelengths, namely 473 nm, 532 nm, 635 nm, 670 nm and 785 nm, are necessary for discriminating various intruding objects of interest through spectral reflectance and slope measurements. Various objects were selected to demonstrate the proof of concept. We also demonstrate that the object position (coordinates) is determined using the triangulation method, which is based on the projection of laser spots along determined angles onto intruding objects and the measurement of their reflectance spectra using an image sensor. Experimental results demonstrate the ability of the multi-wavelength spectral reflectance sensor to simultaneously discriminate between different objects and predict their positions over a 6 m range with an accuracy exceeding 92%. A novel optical design is used to provide additional transverse laser beam scanning for the identification of camouflage materials. 
    A camouflage material, which has complex patterns within a single sample, is chosen to illustrate the discrimination capability of the sensor, and is successfully detected and discriminated from other objects over a 6 m range by scanning the laser beam spots along the transverse direction. By using more wavelengths at optimised points in the spectrum where different objects show different optical characteristics, better discrimination can be accomplished.
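    The discrimination step described above amounts to matching a measured five-wavelength reflectance vector against reference spectral signatures. A minimal nearest-neighbour sketch of that idea follows; the material names and reflectance values are invented for illustration and are not measurements from the thesis.

```python
# Hypothetical spectral-reflectance discrimination: match a measured
# reflectance vector sampled at the five wavelengths named in the thesis
# (473, 532, 635, 670, 785 nm) to the closest reference signature.
# Reference values below are made up for illustration.

REFERENCES = {
    "foliage":    [0.05, 0.12, 0.08, 0.07, 0.45],
    "skin":       [0.30, 0.35, 0.42, 0.45, 0.55],
    "camouflage": [0.10, 0.15, 0.12, 0.11, 0.20],
}

def classify(measured):
    """Return the reference material with minimum squared spectral distance."""
    def dist(ref):
        return sum((m - r) ** 2 for m, r in zip(measured, ref))
    return min(REFERENCES, key=lambda name: dist(REFERENCES[name]))

print(classify([0.06, 0.11, 0.09, 0.08, 0.44]))  # foliage
```

    A real implementation would also use the slope between adjacent wavelengths, as the thesis does, since slopes are less sensitive to overall illumination level than raw reflectance magnitudes.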