
    A novel video-tracking system to quantify the behaviour of nocturnal mosquitoes attacking human hosts in the field

    Many vectors of malaria and other infections spend most of their adult life within human homes, the environment where they bloodfeed and rest, and where control has been most successful. Yet, knowledge of peri-domestic mosquito behaviour is limited, particularly how mosquitoes find and attack human hosts or how insecticides affect behaviour. This is partly because technology for tracking mosquitoes in their natural habitats, traditional dwellings in disease-endemic countries, has never been available. We describe a sensing device that enables observation and recording of nocturnal mosquitoes attacking humans, with or without a bed net, in the laboratory and in rural Africa. The device addresses requirements for sub-millimetre resolution over a 2.0 × 1.2 × 2.0 m volume while using minimum irradiance. Data processing strategies to extract individual mosquito trajectories and algorithms to describe behaviour during host/net interactions are introduced. Results from UK laboratory and Tanzanian field tests showed that Culex quinquefasciatus activity was higher and focused on the bed net roof when a human host was present, in both colonized and wild populations. Both C. quinquefasciatus and Anopheles gambiae exhibited similar behavioural modes, with average flight velocities varying by less than 10%. The system offers considerable potential for investigations in vector biology and many other fields.
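The trajectory-extraction step mentioned above, linking per-frame mosquito detections into individual tracks, can be sketched as greedy nearest-neighbour association between consecutive frames. This is a minimal illustration under our own assumptions, not the authors' actual pipeline; the function name and the `max_disp` gating parameter are hypothetical:

```python
import numpy as np

def link_detections(prev_pts, curr_pts, max_disp):
    """Greedily link detections in two consecutive frames.

    prev_pts, curr_pts: (N, 2) arrays of detection coordinates.
    max_disp: maximum allowed displacement (pixels) between frames.
    Returns a dict mapping index in prev_pts -> index in curr_pts.
    """
    links, used = {}, set()
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(curr_pts - p, axis=1)  # distances to all candidates
        j = int(np.argmin(d))
        if d[j] <= max_disp and j not in used:    # gate and one-to-one constraint
            links[i] = j
            used.add(j)
    return links
```

Chaining these per-frame links over a video yields candidate trajectories; a real tracker would also handle missed detections and crossing tracks.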

    Spherical Regression: Learning Viewpoints, Surface Normals and 3D Rotations on n-Spheres

    Many computer vision challenges require continuous outputs, but tend to be solved by discrete classification. The reason is classification's natural containment within a probability n-simplex, as defined by the popular softmax activation function. Regular regression lacks such a closed geometry, leading to unstable training and convergence to suboptimal local minima. Starting from this insight we revisit regression in convolutional neural networks. We observe that many continuous output problems in computer vision are naturally contained in closed geometrical manifolds, like the Euler angles in viewpoint estimation or the normals in surface normal estimation. A natural framework for posing such continuous output problems are n-spheres, which are naturally closed geometric manifolds defined in the R^(n+1) space. By introducing a spherical exponential mapping on n-spheres at the regression output, we obtain well-behaved gradients, leading to stable training. We show how our spherical regression can be utilized for several computer vision challenges, specifically viewpoint estimation, surface normal estimation and 3D rotation estimation. For all these problems our experiments demonstrate the benefit of spherical regression. All paper resources are available at https://github.com/leoshine/Spherical_Regression. Comment: CVPR 2019 camera ready.
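The spherical exponential mapping the abstract describes can be sketched as exponentiation followed by L2 normalization, so the output is guaranteed to lie on the unit n-sphere regardless of the input. This is a hedged reading of the abstract, not necessarily the paper's exact formulation:

```python
import numpy as np

def spherical_exp(v):
    """Map an unconstrained vector v onto the unit n-sphere.

    Exponentiate elementwise, then L2-normalize. Shifting by the max
    before exponentiating improves numerical stability and does not
    change the result, since the normalization cancels the shift.
    """
    e = np.exp(v - np.max(v))
    return e / np.sqrt(np.sum(e ** 2))
```

Because every output has unit L2 norm, a network head using this activation always produces a valid point on the sphere, which is what gives the closed geometry and bounded gradients the abstract attributes to stable training.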

    GPU accelerated cone based shooting bouncing ray tracing

    2019 Summer. Includes bibliographical references. Ray tracing can be used as an alternative method to solve complex Computational Electromagnetics (CEM) problems that would require significant time using traditional full-wave CEM solvers. Ray tracing is considered a high-frequency asymptotic solver, sacrificing accuracy for speed via approximation. Two prominent categories of ray tracing exist today: image theory techniques and ray launching techniques. Image theory involves the calculation of image points for each continuous plane within a structure. Ray launching ray tracing spawns rays in numerous directions and tracks the intersections these rays have with the environment. While image theory ray tracing typically provides more accurate solutions than ray launching techniques, due to more exact computations, it is much slower because of the exponential time complexity of the algorithm. This paper discusses a ray launching technique called shooting and bouncing rays (SBR) ray tracing that applies NVIDIA graphics processing units (GPUs) to achieve significant performance benefits for solving CEM problems. The GPUs are used as a tool to parallelize the core ray tracing algorithm and also to provide access to the NVIDIA OptiX ray tracing application programming interface (API), which efficiently traces rays within complex structures. The algorithm presented enables quick and efficient simulations to optimize the placement of communication nodes within complex structures. The processes and techniques used in the development of the solver are described, and the solver is validated and applied to various structures, with comparisons to commercially available ray tracing software.
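The per-ray kernel that an SBR solver parallelizes on the GPU can be illustrated with a toy bounce loop against a single infinite plane. This is a simplified sketch of the shooting-and-bouncing idea under our own assumptions, not the thesis's OptiX implementation; the function names are ours:

```python
import numpy as np

def reflect(d, n):
    """Specular reflection of unit direction d about unit surface normal n."""
    return d - 2.0 * np.dot(d, n) * n

def trace_bounces(origin, direction, plane_point, plane_normal, n_bounces):
    """Trace one ray against a single infinite plane, recording hit points.

    In a real SBR solver, one such independent per-ray loop runs for each
    of thousands of launched rays, which is why GPUs map to it so well.
    """
    hits = []
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    for _ in range(n_bounces):
        denom = np.dot(d, plane_normal)
        if abs(denom) < 1e-12:
            break                       # ray parallel to the plane
        t = np.dot(plane_point - o, plane_normal) / denom
        if t <= 1e-9:
            break                       # intersection behind the ray origin
        o = o + t * d                   # advance to the hit point
        hits.append(o.copy())
        d = reflect(d, plane_normal)    # bounce and continue
    return hits
```

A production solver replaces the single plane with an acceleration structure over the full mesh and accumulates field contributions (amplitude, phase, polarization) at each bounce.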

    A probabilistic integrated object recognition and tracking framework for video sequences

    Recognition and tracking of multiple objects in video sequences is one of the main challenges in computer vision and currently attracts a lot of attention from researchers. Almost all the reported approaches are very application-dependent, and there is a lack of a general methodology for dynamic object recognition and tracking that can be instantiated in particular cases. In this thesis, the work is oriented towards the definition and development of such a methodology, which integrates object recognition and tracking from a general perspective using a probabilistic framework called PIORT (probabilistic integrated object recognition and tracking framework). It includes several modules for which a variety of techniques and methods can be applied. Some of them are well known, but other methods have been designed, implemented and tested during the development of this thesis. The first step in the proposed framework is a static recognition module that provides class probabilities for each pixel of the image from a set of local features. These probabilities are updated dynamically and supplied to a tracking decision module capable of handling full and partial occlusions. The two specific methods presented use RGB colour features and differ in the classifier implemented: one is a Bayesian method based on maximum likelihood and the other is based on a neural network. The experimental results have shown that, on the one hand, the neural net based approach performs similarly to, and sometimes better than, the Bayesian approach when they are integrated within the tracking framework. On the other hand, our PIORT methods have achieved better results when compared to other published tracking methods. All these methods have been tested experimentally on several test video sequences taken with still and moving cameras, including full and partial occlusions of the tracked object, in indoor and outdoor scenarios and in a variety of cases with different levels of task complexity. This allowed the evaluation of both the general methodology and the alternative methods that compose its modules.
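The dynamic update of per-pixel class probabilities described above can be sketched as a recursive Bayes step: the posterior is the prior times the likelihood of the new observation, renormalized over classes. This is a generic illustration of the idea, not the thesis's exact update rule; the function name is ours:

```python
import numpy as np

def update_class_probs(prior, likelihood):
    """Recursive Bayes update of per-pixel class probabilities.

    prior:      (..., K) current class probabilities per pixel.
    likelihood: (..., K) likelihood of the new observation under each class.
    Returns the renormalized posterior with the same shape.
    """
    post = prior * likelihood
    return post / post.sum(axis=-1, keepdims=True)
```

Applied frame by frame, pixels whose observations consistently favour one class converge towards it, which is what lets the tracking decision module ride out brief occlusions when the likelihood is uninformative.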

    Diffraction Analysis with UWB Validation for ToA Ranging in the Proximity of Human Body and Metallic Objects

    The time-of-arrival (ToA)-based localization technique performs best in line-of-sight (LoS) conditions, and its accuracy degrades drastically in the proximity of micro-metals and the human body, where LoS conditions are not met. This calls for modeling and formulation of the Direct Path (DP) to help mitigate ranging error. However, current propagation tools and models are mainly designed for telecommunication applications, focusing on the delay spread of the wireless channel profile, whereas ToA-based localization strives to model the DP component. This thesis addresses the limitations of existing propagation tools and models in computationally capturing the effects of micro-metals and the human body on ToA-based indoor localization. Solutions for each computational technique are validated by empirical measurements using Ultra-Wide-Band (UWB) signals. The Finite-Difference-Time-Domain (FDTD) numerical method is used to estimate the ranging errors, and a combination of Uniform-Theory-of-Diffraction (UTD) ray theory and geometrical ray optics is used to model the path loss and the ToA of the DP obstructed by micro-metals. Analytical UTD ray theory and geometrical ray optics are also exploited to model the path loss and the ToA of the first path obstructed by the human body in scattering scenarios. The proposed scattering solution is then extended to analytically model the path loss and ToA of the DP obstructed by a human body in angular motion in radiation scenarios.
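The basic quantities behind ToA ranging can be illustrated with a small numeric sketch: the range implied by a measured ToA, the resulting ranging error, and the free-space (Friis) path loss that diffraction models such as UTD extend. This is generic background arithmetic, not the thesis's model; the function names are ours:

```python
import math

C = 299_792_458.0  # speed of light in vacuum (m/s)

def toa_range_error(measured_toa_ns, true_distance_m):
    """Ranging error: distance implied by the measured ToA minus true distance.

    Obstruction by metal or the body delays the direct path, inflating the
    measured ToA and producing a positive ranging error.
    """
    estimated_distance = measured_toa_ns * 1e-9 * C
    return estimated_distance - true_distance_m

def fspl_db(distance_m, freq_hz):
    """Free-space path loss (Friis) in dB, the LoS baseline that
    diffraction-based models add obstruction losses on top of."""
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / C)
```

For example, a direct path delayed by just 1 ns adds roughly 0.3 m of ranging error, which is why sub-nanosecond DP modeling matters for UWB indoor localization.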

    Autonomous Localization Of A Uav In A 3d Cad Model

    This thesis presents a novel method of indoor localization and autonomous navigation of Unmanned Aerial Vehicles (UAVs) within a building, given a prebuilt Computer Aided Design (CAD) model of the building. The proposed system is novel in that it combines machine learning and traditional computer vision techniques to provide a robust method of localizing and navigating a drone autonomously in indoor and GPS-denied environments, leveraging preexisting knowledge of the environment. The goal of this work is to devise a method that enables a UAV to deduce its current pose within a CAD model quickly and accurately while maintaining efficient use of resources. A 3-dimensional CAD model of the building to be navigated is provided as input to the system, along with the required goal position. Initially, the UAV has no prior estimate of its location within the building. The system, comprising a stereo camera system and an Inertial Measurement Unit (IMU) as its sensors, then generates a globally consistent map of its surroundings using a Simultaneous Localization and Mapping (SLAM) algorithm. In addition to the map, it also stores spatially correlated 3D features. These 3D features are then used to generate correspondences between the SLAM map and the 3D CAD model. The correspondences are then used to compute a transformation between the SLAM map and the 3D CAD model, thus effectively localizing the UAV in the 3D CAD model. Our method successfully localized the UAV in the test building in an average of 15 seconds across the scenarios tested, contingent upon the abundance of target features in the observed data. Due to the absence of a motion capture system, the results were verified by placing tags at strategic known locations on the ground and measuring the error between the projection of the estimated UAV location onto the ground and the tag position.
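Computing a rigid transformation from 3D point correspondences, the step that aligns the SLAM map with the CAD model, is classically solved in closed form with the Kabsch/Umeyama least-squares alignment. The sketch below assumes clean, already-matched correspondences and is our illustration of that standard technique, not necessarily the thesis's implementation:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid alignment: find rotation R and translation t
    such that R @ p + t ≈ q for corresponding rows of P and Q (both N x 3).
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)      # centroids
    H = (P - cp).T @ (Q - cq)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper rotation (reflection) if one appears.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

With noisy correspondences this would typically be wrapped in RANSAC to reject outlier matches before the final least-squares fit.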