
    Diffuse retro-reflective imaging for improved mosquito tracking around human baited bednets

    Robust imaging techniques for tracking insects have been essential tools in numerous laboratory and field studies on pests, beneficial insects and model systems. Recent innovations in optical imaging systems and associated signal processing have enabled detailed characterisation of nocturnal mosquito behaviour around bednets and improvements in bednet design, a global essential for protecting populations against malaria. Nonetheless, challenges remain around ease of use for large-scale in situ recordings and around reliably extracting data in the critical areas of the bednet where the optical signal is attenuated. Here we introduce a retro-reflective screen at the back of the measurement volume, which simultaneously provides diffuse illumination and removes optical alignment issues while requiring only one-sided access to the measurement space. The illumination becomes significantly more uniform, although noise-removal algorithms are needed to reduce the effects of shot noise, particularly across low-intensity bednet regions. By systematically introducing mosquitoes in front of and behind the bednet in laboratory experiments, we demonstrate robust tracking in these challenging areas. Overall, the retro-reflective imaging setup delivers mosquito segmentation rates in excess of 90%, compared with less than 70% for back-lit systems.
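The core segmentation step the abstract describes — detecting small dark targets against a diffusely lit background while suppressing shot noise in the dim bednet regions — can be sketched roughly as follows. This is a minimal illustration, not the authors' pipeline; the function name, threshold, and the simple neighbourhood-vote denoising rule are all assumptions:

```python
import numpy as np

def segment_mosquitoes(frame, background, thresh=12, min_pixels=3):
    """Segment dark moving targets against a diffusely lit background.

    Hypothetical sketch: `thresh` and `min_pixels` are illustrative
    values, not calibrated parameters from the study.
    """
    # Absolute difference against a static background estimate.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff > thresh

    # Crude shot-noise suppression: a 3x3 neighbourhood vote, so
    # isolated single-pixel detections (typical of shot noise in the
    # low-intensity bednet region) are discarded while compact
    # multi-pixel targets survive.
    padded = np.pad(mask, 1)
    neighbours = sum(
        padded[1 + dy : padded.shape[0] - 1 + dy,
               1 + dx : padded.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    )
    return mask & (neighbours >= min_pixels)
```

A real system would track the surviving blobs across frames; here the point is only that uniform retro-reflective illumination lets a single global threshold plus local denoising do the heavy lifting.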

    Trying to break new ground in aerial archaeology

    Aerial reconnaissance continues to be a vital tool for landscape-oriented archaeological research. Although a variety of remote sensing platforms operate within the earth’s atmosphere, the majority of aerial archaeological information is still derived from oblique photographs collected during observer-directed reconnaissance flights, a prospection approach which has dominated archaeological aerial survey for the past century. The resulting highly biased imagery is generally catalogued in sub-optimal (spatial) databases, if at all, after which a small selection of images is orthorectified and interpreted. For decades, this has been the standard approach. Although many innovations, including digital cameras, inertial units, photogrammetry and computer vision algorithms, geographic(al) information systems and computing power have emerged, their potential has not yet been fully exploited in order to re-invent and highly optimise this crucial branch of landscape archaeology. The authors argue that a fundamental change is needed to transform the way aerial archaeologists approach data acquisition and image processing. By addressing the very core concepts of geographically biased aerial archaeological photographs and proposing new imaging technologies, data handling methods and processing procedures, this paper gives a personal opinion on how the methodological components of aerial archaeology, and specifically aerial archaeological photography, should evolve during the next decade if developing a more reliable record of our past is to be our central aim. In this paper, a possible practical solution is illustrated by outlining a turnkey aerial prospection system for total coverage survey together with a semi-automated back-end pipeline that takes care of photograph correction and image enhancement as well as the management and interpretative mapping of the resulting data products. 
In this way, the proposed system addresses one of many bias issues in archaeological research: the bias we impart to the visual record as a result of selective coverage. While the total coverage approach outlined here may not altogether eliminate survey bias, it can vastly increase the amount of useful information captured during a single reconnaissance flight while mitigating the discriminating effects of observer-based, on-the-fly target selection. Furthermore, the information contained in this paper should make it clear that with current technology it is feasible to do so. This can radically alter the basis for aerial prospection and move landscape archaeology forward, beyond the inherently biased patterns that are currently created by airborne archaeological prospection.

    Mapping and classification of ecologically sensitive marine habitats using unmanned aerial vehicle (UAV) imagery and object-based image analysis (OBIA)

    Nowadays, emerging technologies, such as long-range transmitters, increasingly miniaturized components for positioning, and enhanced imaging sensors, have led to an upsurge in the availability of new ecological applications for remote sensing based on unmanned aerial vehicles (UAVs), sometimes referred to as “drones”. In fact, structure-from-motion (SfM) photogrammetry coupled with imagery acquired by UAVs offers a rapid and inexpensive tool to produce high-resolution orthomosaics, giving ecologists a new way for responsive, timely, and cost-effective monitoring of ecological processes. Here, we adopted a lightweight quadcopter as an aerial survey tool and an object-based image analysis (OBIA) workflow to demonstrate the strength of such methods in producing very high spatial resolution maps of sensitive marine habitats. Three different coastal environments were mapped using the autonomous flight capability of a lightweight UAV equipped with a fully stabilized consumer-grade RGB digital camera. In particular, we investigated a Posidonia oceanica seagrass meadow, a rocky coast with nurseries for juvenile fish, and two sandy areas hosting biogenic reefs of Sabellaria alveolata. We adopted, for the first time, UAV-based raster thematic maps of these key coastal habitats, produced after OBIA classification, as a new method for fine-scale, low-cost, and time-saving characterization of sensitive marine environments, which may lead to more effective and efficient monitoring and management of natural resources.
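The defining move in OBIA is that classification operates on image objects (segments) rather than individual pixels: each segment gets summary features (spectral means, brightness, area), to which class rules are applied. A toy sketch of that object-level step, assuming the orthomosaic has already been segmented into a label map — the feature set, rule form, and every threshold below are illustrative, not the study's classification scheme:

```python
import numpy as np

def classify_segments(rgb, labels, rules):
    """Assign a habitat class to each image object (segment).

    `rgb` is an H x W x 3 float array in [0, 1], `labels` an H x W
    integer segment map (0 = unlabelled), and `rules` a list of
    (class_name, predicate) pairs tried in order on per-object features.
    """
    classes = {}
    for seg_id in np.unique(labels):
        if seg_id == 0:
            continue
        pix = rgb[labels == seg_id]          # N x 3 pixels of this object
        feats = {
            "mean_r": pix[:, 0].mean(),
            "mean_g": pix[:, 1].mean(),
            "mean_b": pix[:, 2].mean(),
            "brightness": pix.mean(),
            "area": pix.shape[0],            # pixel count of the object
        }
        # First matching rule wins; objects matching nothing stay unclassified.
        classes[seg_id] = next(
            (name for name, pred in rules if pred(feats)), "unclassified"
        )
    return classes

# Illustrative rules only: dark green objects -> seagrass, bright -> sand.
RULES = [
    ("seagrass", lambda f: f["mean_g"] > f["mean_r"] and f["brightness"] < 0.5),
    ("sand", lambda f: f["brightness"] >= 0.6),
]
```

Production OBIA tools replace the hand-written rules with trained classifiers and add texture and shape features, but the object-level aggregation shown here is what distinguishes the approach from per-pixel classification.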

    ToF cameras for eye-in-hand robotics

    This work was supported by the Spanish Ministry of Science and Innovation under project PAU+ DPI2011-27510, by the EU Project IntellAct FP7-ICT2009-6-269959 and by the Catalan Research Commission through SGR-00155. Peer reviewed.

    3D Modeling of a Complex Building: From Multi-View Image Fusion to Google Earth Publication

    This paper presents a pipeline for producing a 3D model of a complex building by integrating UAV and terrestrial images, and for adapting the model for interactive publication on Google Earth, so as to make better models available for visualization and use. The main steps of the procedure are the optimization of the UAV flight, the integration of the UAV and ground-level images, and the optimization of the model for publication on Google Earth (GE). The case study is the Eremo di Santa Rosalia convent in Sicily, a building with several staggered elevations located in the hills of the hinterland, for which the online platforms indicate only the position on Google Maps (GM) and Google Earth, with a photo from above and a non-urban road whose GM path does not correspond to the GE photo. The process highlights the integration of the models and showcases a workflow for publishing the combined 3D model on the GE platform.

    Background Subtraction Based on Color and Depth Using Active Sensors

    Depth information has been used in computer vision for a wide variety of tasks. Since active range sensors are currently available at low cost, high-quality depth maps can be used as relevant input for many applications. Background subtraction and video segmentation algorithms can be improved by fusing depth and color inputs, which are complementary and allow one to solve many classic color segmentation issues. In this paper, we describe a fusion method that combines color and depth on top of an advanced color-based algorithm. This technique has been evaluated on a complete dataset recorded with a Microsoft Kinect, which enables comparison with the original method. The proposed method outperforms the others in almost every test, showing more robustness to illumination changes, shadows, reflections and camouflage. This work was supported by the projects of excellence from Junta de Andalucia MULTIVISION (TIC-3873), ITREBA (TIC-5060) and VITVIR (P11-TIC-8120), the national project ARC-VISION (TEC2010-15396), and the EU Project TOMSY (FP7-270436).
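The complementarity the abstract points to can be made concrete: depth is immune to shadows, reflections and color camouflage but has holes where the sensor returns no reading, so a natural fusion lets valid depth override the color decision and falls back to color elsewhere. The sketch below illustrates that idea only; the parameter names and the simple override rule are assumptions, not the paper's exact fusion model:

```python
import numpy as np

def fuse_color_depth(color_fg_prob, depth, depth_bg,
                     depth_thresh=0.05, invalid=0.0):
    """Combine a color-based foreground probability with a depth cue.

    `color_fg_prob` is the per-pixel foreground probability from a
    color-based background subtractor; `depth`/`depth_bg` are the
    current and background depth maps (metres), with `invalid`
    marking missing sensor readings.
    """
    depth_valid = (depth != invalid) & (depth_bg != invalid)
    depth_fg = depth_valid & (np.abs(depth - depth_bg) > depth_thresh)
    color_fg = color_fg_prob > 0.5

    # Where depth is valid it decides alone, suppressing color false
    # positives from shadows and reflections (which leave depth
    # unchanged); where depth is missing, color decides alone.
    return np.where(depth_valid, depth_fg, color_fg)
```

A shadow cast on the background is the instructive case: the color channel fires, but the depth difference is zero, so the fused mask stays background.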