
    Smart environment monitoring through micro unmanned aerial vehicles

    In recent years, improvements in small-scale Unmanned Aerial Vehicles (UAVs) in terms of flight time, automatic control, and remote transmission have promoted the development of a wide range of practical applications. In aerial video surveillance, monitoring broad areas still poses many challenges, because several tasks, including mosaicking, change detection, and object detection, must be carried out in real time. In this thesis work, a small-scale-UAV-based vision system for maintaining regular surveillance over target areas is proposed. The system works in two modes. The first mode monitors an area of interest over several flights. During the first flight, it creates an incremental geo-referenced mosaic of the area of interest and classifies all known elements (e.g., persons) found on the ground with a previously trained, improved Faster R-CNN architecture. In subsequent reconnaissance flights, the system searches the mosaic for any changes (e.g., the disappearance of persons) using an algorithm based on histogram equalization and RGB Local Binary Patterns (RGB-LBP); if changes are found, the mosaic is updated. The second mode performs real-time classification, again using our improved Faster R-CNN model, which is useful for time-critical operations. Thanks to several design features, the system works in real time and performs mosaicking and change detection at low altitude, allowing the classification even of small objects. The proposed system was tested on the whole set of challenging video sequences in the UAV Mosaicking and Change Detection (UMCD) dataset and on other public datasets. Evaluation with well-known performance metrics showed remarkable results in mosaic creation and updating, as well as in change detection and object detection.
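To illustrate the change-detection idea, here is a minimal, hypothetical sketch of block-wise RGB-LBP comparison in NumPy. It is not the authors' implementation: the chi-square distance and the threshold value are assumptions made for the example.

```python
import numpy as np

def lbp_channel(ch):
    """8-neighbour Local Binary Pattern codes for one image channel.
    Border pixels are dropped; each interior pixel gets an 8-bit code."""
    c = ch[1:-1, 1:-1]
    # neighbours clockwise starting from the top-left pixel
    shifts = [ch[:-2, :-2], ch[:-2, 1:-1], ch[:-2, 2:], ch[1:-1, 2:],
              ch[2:, 2:], ch[2:, 1:-1], ch[2:, :-2], ch[1:-1, :-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(shifts):
        code |= ((n >= c).astype(np.uint8) << bit)
    return code

def rgb_lbp_hist(img):
    """Concatenated, L1-normalised 256-bin LBP histograms of the R, G, B channels."""
    hists = [np.bincount(lbp_channel(img[..., k]).ravel(), minlength=256)
             for k in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def changed(block_a, block_b, threshold=0.25):
    """Flag a change when the chi-square distance between the two
    RGB-LBP histograms exceeds a (hypothetical) threshold."""
    ha, hb = rgb_lbp_hist(block_a), rgb_lbp_hist(block_b)
    d = 0.5 * np.sum((ha - hb) ** 2 / (ha + hb + 1e-10))
    return d > threshold
```

In a full pipeline, histogram equalization would be applied to both mosaic and incoming frame before comparison, and the test would run per image block rather than on whole frames.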

    Extended Field Laser Confocal Microscopy (EFLCM): Combining automated Gigapixel image capture with in silico virtual microscopy

    Background: Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity because a single laser beam scans the sample, allowing only a few microseconds of signal collection per pixel. This limitation has been overcome by the introduction of parallel-beam illumination techniques combined with cold-CCD-camera-based image capture. Methods: Using microlens-enhanced Nipkow spinning-disc confocal illumination together with fully automated image capture and large-scale in silico image processing, we have developed a system for the acquisition, presentation, and analysis of maximum-resolution confocal panorama images several gigapixels in size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). Results: We show that EFLCM can create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs, and full-size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths, in single-event or time-lapse fashion, on fixed slides, in live-cell imaging chambers, or in microtiter plates. Conclusion: The observer-independent image capture of EFLCM allows quantitative measurement of fluorescence intensities and morphological parameters on large numbers of cells. EFLCM therefore bridges the gap between mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. It can also be used as a high content analysis (HCA) instrument for automated screening processes.
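A gigapixel mosaic of this kind is assembled from thousands of stage-scanned tiles. The NumPy sketch below shows only the naive placement step, assuming a row-major scan with a known, fixed overlap; a real pipeline such as EFLCM would also register and blend neighbouring tiles.

```python
import numpy as np

def assemble_mosaic(tiles, grid_shape, overlap=0):
    """Place stage-scanned tiles into one large mosaic.
    tiles: list of 2-D arrays in row-major scan order, all the same shape.
    overlap: overlapping pixels between neighbours, simply overwritten here
    (a stand-in for the registration/blending a real pipeline performs)."""
    rows, cols = grid_shape
    th, tw = tiles[0].shape
    step_h, step_w = th - overlap, tw - overlap
    mosaic = np.zeros((step_h * rows + overlap, step_w * cols + overlap),
                      dtype=tiles[0].dtype)
    for idx, tile in enumerate(tiles):
        r, c = divmod(idx, cols)
        mosaic[r*step_h : r*step_h + th, c*step_w : c*step_w + tw] = tile
    return mosaic
```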

    Image Processing in Dense Forest Areas using Unmanned Aerial System (UAS)

    Description: A detailed workflow using Structure from Motion (SfM) techniques for processing high-resolution Unmanned Aerial System (UAS) NIR and RGB imagery in a dense forest environment where obtaining control points is difficult due to limited access and safety issues. Abstract: Imagery collected via Unmanned Aerial System (UAS) platforms has become popular in recent years due to improvements in Digital Single-Lens Reflex (DSLR) camera resolution (centimeter and sub-centimeter), lower operating costs compared to human-piloted aircraft, and the ability to collect data over areas with limited ground access. Many different applications (e.g., forestry, agriculture, geology, archaeology) already take advantage of UAS data. Although there are numerous UAS image-processing workflows, the approach can differ for each application. In this study, we developed a workflow for UAS imagery collected over a dense forest area (coniferous/deciduous forest and contiguous wetlands) that allows users to process large datasets with acceptable mosaicking and georeferencing errors. Imagery was acquired with near-infrared (NIR) and red-green-blue (RGB) cameras, with no ground control points. The image quality of two different UAS collection platforms was compared. Agisoft Metashape, a photogrammetric suite that uses SfM techniques, was used to process the imagery. The results showed that a UAS with a consumer-grade Global Navigation Satellite System (GNSS) onboard achieved better image alignment than a UAS with a lower-quality GNSS.
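Comparisons of georeferencing quality like the one above typically come down to a summary error statistic. The sketch below, with illustrative names and a local metric coordinate frame assumed, computes horizontal and vertical RMSE between SfM-estimated camera positions and the GNSS positions logged at capture time; it is not the study's actual evaluation code.

```python
import numpy as np

def georef_rmse(estimated, gnss):
    """Root-mean-square georeferencing error between SfM-estimated camera
    positions and logged GNSS positions. Both arrays are N x 3 (x, y, z)
    in metres, in the same local frame. Returns (horizontal, vertical) RMSE."""
    d = np.asarray(estimated, float) - np.asarray(gnss, float)
    horizontal = np.sqrt(np.mean(np.sum(d[:, :2] ** 2, axis=1)))
    vertical = np.sqrt(np.mean(d[:, 2] ** 2))
    return horizontal, vertical
```

A platform with a consumer-grade GNSS would be expected to show smaller values of both statistics after alignment than one with a lower-quality receiver.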

    The VISTA Science Archive

    We describe the VISTA Science Archive (VSA) and its first public release of data from five of the six VISTA Public Surveys. The VSA exists to support the VISTA Surveys through their lifecycle: the VISTA Public Survey consortia can use it during their quality control assessment of survey data products before submission to the ESO Science Archive Facility (ESO SAF); it supports their exploitation of survey data prior to its publication through the ESO SAF; and, subsequently, it provides the wider community with survey science exploitation tools that complement the data product repository functionality of the ESO SAF. This paper has been written in conjunction with the first public release of public survey data through the VSA and is designed to help its users understand the data products available and how the functionality of the VSA supports their varied science goals. We describe the design of the database and outline the database-driven curation processes that take data from nightly pipeline-processed and calibrated FITS files to create science-ready survey datasets. Much of this design, and the codebase implementing it, derives from our earlier WFCAM Science Archive (WSA), so this paper concentrates on the VISTA-specific aspects and on improvements made to the system in the light of experience gained in operating the WSA. Comment: 22 pages, 16 figures. Minor edits to fonts and typos after sub-editing. Published in A&

    Design of Immersive Online Hotel Walkthrough System Using Image-Based (Concentric Mosaics) Rendering

    Conventional hotel booking websites represent their services and facilities only through 2D photos, which are static and cannot be moved or rotated. An image-based virtual walkthrough for the hospitality industry is a promising technology for attracting more customers. In this project, research was carried out to create an image-based rendering (IBR) virtual walkthrough and a panoramic-based walkthrough using only Macromedia Flash Professional 8, Photovista Panorama 3.0, and Reality Studio for image interaction. The web pages were built with Macromedia Dreamweaver Professional 8, and the images are displayed in Adobe Flash Player 8 or higher. The image-based walkthrough uses the concentric mosaics technique, while the panoramic-based walkthrough applies image mosaicing. The two walkthroughs are compared, and the study also examines the relationship between the number of pictures and the smoothness of the walkthrough. Each technique has its advantages: the image-based walkthrough supports real-time navigation, since the user can move right, left, forward, and backward, whereas in the panoramic-based walkthrough the user can only view 360 degrees from a fixed spot.
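The fixed-viewpoint limitation of the panoramic walkthrough comes down to viewport extraction: the viewer only ever re-windows one cylindrical image. A minimal NumPy sketch of that step, with illustrative names and a single-row-panorama assumption, might look like this (concentric mosaics, by contrast, interpolate between rays captured on many circles, which is what enables translation):

```python
import numpy as np

def pano_view(panorama, yaw_deg, fov_deg=90):
    """Extract the viewport of a cylindrical panorama for a given yaw.
    panorama: H x W array covering 360 degrees horizontally; the slice
    wraps around the seam so any yaw is valid."""
    h, w = panorama.shape[:2]
    width = int(round(w * fov_deg / 360.0))
    start = int(round((yaw_deg % 360.0) / 360.0 * w))
    cols = (start + np.arange(width)) % w   # wrap past the right edge
    return panorama[:, cols]
```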

    Enhanced video indirect ophthalmoscopy (VIO) via robust mosaicing

    Indirect ophthalmoscopy (IO) is the standard of care for evaluation of the neonatal retina. When recorded on video from a head-mounted camera, IO images have low quality and narrow Field of View (FOV). We present an image fusion methodology for converting a video IO recording into a single, high quality, wide-FOV mosaic that seamlessly blends the best frames in the video. To this end, we have developed fast and robust algorithms for automatic evaluation of video quality, artifact detection and removal, vessel mapping, registration, and multi-frame image fusion. Our experiments show the effectiveness of the proposed methods
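A common first step in "best frame" selection of this kind is a focus score; frames that score poorly are excluded before registration and fusion. The sketch below is a generic illustration using the variance of a discrete Laplacian, not the paper's actual quality metric, and the `keep` fraction is an assumption.

```python
import numpy as np

def sharpness(frame):
    """Variance of a 4-neighbour discrete Laplacian: a common focus score.
    Blurry or featureless frames score low."""
    f = frame.astype(float)
    lap = (-4 * f[1:-1, 1:-1] + f[:-2, 1:-1] + f[2:, 1:-1]
           + f[1:-1, :-2] + f[1:-1, 2:])
    return lap.var()

def best_frames(frames, keep=0.5):
    """Return the indices (sorted) of the sharpest fraction of frames,
    to be passed on to registration and multi-frame fusion."""
    scores = [sharpness(f) for f in frames]
    order = np.argsort(scores)[::-1]        # best first
    n = max(1, int(len(frames) * keep))
    return sorted(order[:n].tolist())
```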

    Comparing radial and tangential geometries for cylindrical panoramas

    Cameras generally have a field of view only large enough to capture a portion of their surroundings. The goal of immersion is to replace many of your senses with virtual ones, so that the virtual environment feels as real as possible. Panoramic cameras are used to capture the entire 360° view, also known as a panoramic image. Virtual reality makes use of these panoramic images to provide a more immersive experience than viewing images on a 2D screen. This thesis, in the field of computer vision, focuses on establishing a multi-camera geometry for generating a cylindrical panoramic image and on implementing it with the cheapest cameras possible. The specific goal of this project is to propose a camera geometry that reduces the artifact problems related to parallax in the panoramic image. We present a new approach to cylindrical panoramic imaging from multiple cameras placed evenly around a circle. Instead of looking outward, which is the traditional "radial" configuration, we propose making the optical axes tangent to the camera circle, a "tangential" configuration. Besides an analysis and comparison of the radial and tangential geometries, we provide an experimental setup with real panoramas obtained under realistic conditions.
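The two configurations can be written down directly. The sketch below, a simplified 2-D (top-down) model with illustrative names, generates camera centres on a circle and their optical-axis directions for both cases; in the radial case each axis points outward along its radius, in the tangential case it is rotated 90 degrees to lie along the circle.

```python
import numpy as np

def camera_ring(n, radius, tangential=False):
    """Positions and optical-axis directions (unit 2-D vectors) of n cameras
    evenly spaced on a circle of the given radius.
    Radial: axes point outward along the radius.
    Tangential: axes are rotated 90 degrees, tangent to the circle."""
    theta = 2 * np.pi * np.arange(n) / n
    centers = radius * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    if tangential:
        axes = np.stack([-np.sin(theta), np.cos(theta)], axis=1)
    else:
        axes = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    return centers, axes
```

In the tangential configuration each optical axis is perpendicular to its own radius, which changes where neighbouring fields of view overlap and hence how parallax artifacts appear in the stitched panorama.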