1,092 research outputs found

    Smart environment monitoring through micro unmanned aerial vehicles

    In recent years, improvements in small-scale Unmanned Aerial Vehicles (UAVs) in terms of flight time, automatic control, and remote transmission have been promoting the development of a wide range of practical applications. In aerial video surveillance, the monitoring of broad areas still presents many challenges because several tasks, including mosaicking, change detection, and object detection, must be performed in real time. In this thesis work, a small-scale UAV based vision system to maintain regular surveillance over target areas is proposed. The system works in two modes. The first mode allows an area of interest to be monitored over several flights. During the first flight, it creates an incremental geo-referenced mosaic of the area of interest and classifies all known elements (e.g., persons) found on the ground using a previously trained, improved Faster R-CNN architecture. In subsequent reconnaissance flights, the system searches for any changes (e.g., the disappearance of persons) that may have occurred in the mosaic using an algorithm based on histogram equalization and RGB Local Binary Patterns (RGB-LBP). If changes are present, the mosaic is updated. The second mode performs real-time classification with the same improved Faster R-CNN model, which is useful for time-critical operations. Thanks to several design choices, the system works in real time and performs the mosaicking and change detection tasks at low altitude, thus allowing even small objects to be classified. The proposed system was tested using the whole set of challenging video sequences contained in the UAV Mosaicking and Change Detection (UMCD) dataset and other public datasets. The evaluation of the system with well-known performance metrics has shown remarkable results in terms of mosaic creation and updating, as well as change detection and object detection.
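
    The change detection step combines histogram equalization with RGB Local Binary Patterns. As a rough, hedged sketch of how such a block-wise comparison between two registered views could be organized (the block size, distance measure, and threshold below are illustrative assumptions, not the thesis's parameters), using OpenCV and scikit-image:

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def rgb_lbp_histogram(patch, points=8, radius=1):
    """Concatenate uniform-LBP histograms computed on each RGB channel."""
    hists = []
    for c in range(3):
        lbp = local_binary_pattern(patch[:, :, c], points, radius, method="uniform")
        hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
        hists.append(hist)
    return np.concatenate(hists)

def detect_changes(reference, current, block=64, threshold=0.25):
    """Flag blocks whose RGB-LBP description differs between two registered images."""
    def equalize(img):
        # Histogram equalization on the luminance channel reduces illumination differences.
        ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
        ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
        return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

    ref, cur = equalize(reference), equalize(current)
    changed = []
    h, w = ref.shape[:2]
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            d = np.linalg.norm(
                rgb_lbp_histogram(ref[y:y + block, x:x + block])
                - rgb_lbp_histogram(cur[y:y + block, x:x + block])
            )
            if d > threshold:                # illustrative threshold
                changed.append((x, y, block, block))
    return changed
```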

    MusA: Using Indoor Positioning and Navigation to Enhance Cultural Experiences in a Museum

    In recent years there has been growing interest in the use of multimedia mobile guides in museum environments. Mobile devices can detect the user's context and provide information that helps visitors discover and follow the logical and emotional connections that develop during the visit. In this scenario, location-based services (LBS) currently represent an asset, and the choice of the technology used to determine users' positions, combined with the definition of methods that can effectively convey information, becomes a key issue in the design process. In this work, we present MusA (Museum Assistant), a general framework for the development of multimedia interactive guides for mobile devices. Its main feature is a vision-based indoor positioning system that enables the provision of several LBS, from way-finding to the contextualized communication of cultural contents, aimed at providing a meaningful exploration of exhibits according to visitors' personal interests and curiosity. Starting from a thorough description of the system architecture, the article presents the implementation of two mobile guides, developed to address adults and children respectively, and discusses the evaluation of the user experience and the visitors' appreciation of these applications.
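
    As a minimal illustration of the general idea behind vision-based indoor positioning (not MusA's actual marker format or content model, which the abstract does not specify), a camera frame can be scanned for a coded marker whose payload is then mapped to a gallery zone and its location-based content:

```python
import cv2

# Hypothetical mapping from marker payloads to museum zones and content;
# the real MusA content model is not described in the abstract.
ZONE_CONTENT = {
    "room-3-wall-A": {"zone": "Renaissance Hall", "audio": "renaissance_intro.mp3"},
    "room-5-case-2": {"zone": "Egyptian Collection", "audio": "egypt_case2.mp3"},
}

def locate_and_fetch(frame):
    """Detect a coded marker in the camera frame and return location-based content."""
    detector = cv2.QRCodeDetector()
    payload, corners, _ = detector.detectAndDecode(frame)
    if not payload or payload not in ZONE_CONTENT:
        return None  # no marker visible: fall back to the last known position
    # The marker corners could additionally be used to estimate the user's pose
    # relative to the marker for finer-grained way-finding.
    return ZONE_CONTENT[payload]
```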

    PPF - A Parallel Particle Filtering Library

    We present the parallel particle filtering (PPF) software library, which enables hybrid shared-memory/distributed-memory parallelization of particle filtering (PF) algorithms by combining the Message Passing Interface (MPI) with multithreading for multi-level parallelism. The library is implemented in Java and relies on OpenMPI's Java bindings for inter-process communication. It includes dynamic load balancing, multi-thread balancing, and several algorithmic improvements for PF, such as input-space domain decomposition. The PPF library hides the difficulties of efficient parallel programming of PF algorithms and provides application developers with the necessary tools for parallel implementation of PF methods. We demonstrate the capabilities of the PPF library using two distributed PF algorithms in two scenarios with different numbers of particles. The PPF library runs a 38 million particle problem, corresponding to more than 1.86 GB of particle data, on 192 cores with 67% parallel efficiency. To the best of our knowledge, the PPF library is the first open-source software that offers a parallel framework for PF applications.
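
    The PPF library itself is written in Java on top of OpenMPI's Java bindings; as a language-neutral sketch of the distributed-memory part of such a scheme (particles partitioned across MPI ranks, weights normalized globally), here is a simplified Python/mpi4py analogue in which the transition and likelihood models are placeholders:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_TOTAL = 1_000_000                 # total particle count, split across ranks
n_local = N_TOTAL // size

# Local slice of the particle set (3-D state) and its weights.
particles = np.random.randn(n_local, 3)
weights = np.full(n_local, 1.0 / N_TOTAL)

def propagate(p):
    return p + np.random.normal(0.0, 0.1, p.shape)      # placeholder transition model

def likelihood(p, z):
    return np.exp(-0.5 * np.sum((p - z) ** 2, axis=1))  # placeholder observation model

for observation in [np.zeros(3)]:                        # stream of measurements
    particles = propagate(particles)
    weights *= likelihood(particles, observation)
    # Normalizing the weights needs the sum over *all* ranks, not just the local one.
    total = comm.allreduce(weights.sum(), op=MPI.SUM)
    weights /= total
    # A full implementation such as PPF adds distributed resampling,
    # dynamic load balancing, and intra-rank multithreading on top of this.

estimate = comm.allreduce((weights[:, None] * particles).sum(axis=0), op=MPI.SUM)
if rank == 0:
    print("weighted state estimate:", estimate)
```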

    Blending techniques for underwater photomosaics

    The creation of consistent underwater photomosaics is typically hampered by local misalignments and inhomogeneous illumination of the image frames, which introduce visible seams that complicate post-processing of the mosaics for object recognition and shape extraction. In this thesis, methods are proposed to improve blending techniques for underwater photomosaics, and the results are compared with those of traditional methods. Five specific techniques drawn from various areas of image processing, computer vision, and computer graphics have been tested: illumination correction based on the median mosaic, thin plate spline warping, perspective warping, and graph-cut applied in the gradient domain and in the wavelet domain. A combination of the first two methods yields globally homogeneous underwater photomosaics with preserved continuous features. Further improvements are obtained with the graph-cut technique applied in the spatial domain.
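
    The first of these techniques, illumination correction based on the median mosaic, can be sketched as flat-fielding each frame against a strongly smoothed per-pixel median of the registered frames; the kernel size and clipping below are illustrative choices rather than the thesis's parameters:

```python
import cv2
import numpy as np

def illumination_correct(frames, blur_ksize=101):
    """Flat-field each frame against the per-pixel median of the (registered) frames."""
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
    median = np.median(stack, axis=0)
    # Heavy blurring keeps only the low-frequency illumination pattern.
    pattern = cv2.GaussianBlur(median, (blur_ksize, blur_ksize), 0)
    pattern = np.maximum(pattern, 1e-3)          # avoid division by zero
    corrected = []
    for f in stack:
        out = f / pattern * pattern.mean()       # remove the pattern, keep mean brightness
        corrected.append(np.clip(out, 0, 255).astype(np.uint8))
    return corrected
```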

    The Exploitation of Data from Remote and Human Sensors for Environment Monitoring in the SMAT Project

    In this paper, we outline the functionalities of a system that integrates and controls a fleet of Unmanned Aerial Vehicles (UAVs). The UAVs carry a set of payload sensors employed for territorial surveillance, whose outputs are stored in the system and analysed by the data exploitation functions at different levels. In particular, we detail the second-level data exploitation function, whose aim is to improve the interpretation of sensor data in post-mission activities. It is concerned with the mosaicking of the aerial images and the enrichment of cartography by human sensors, i.e. social media users. We also describe the software architecture for the development of a mash-up (the integration of information and functionalities coming from the Web) and the possibility of using human sensors in the monitoring of the territory, a field in which, traditionally, only hardware sensors have been involved. JRC.H.6 - Digital Earth and Reference Data
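
    As a minimal, hypothetical illustration of the cartography enrichment idea (the paper's actual web mash-up architecture is not reproduced here), geotagged reports from human sensors can be projected onto a geo-referenced mosaic through its affine geotransform:

```python
def lonlat_to_pixel(lon, lat, geotransform):
    """Map a lon/lat pair to mosaic pixel coordinates using a GDAL-style
    affine geotransform (x0, px_w, rot1, y0, rot2, px_h); rotation terms
    are ignored here for simplicity."""
    x0, px_w, _, y0, _, px_h = geotransform
    col = int((lon - x0) / px_w)
    row = int((lat - y0) / px_h)
    return col, row

# Hypothetical geotagged reports from social media users ("human sensors").
reports = [
    {"lon": 7.642, "lat": 45.041, "text": "smoke near the riverbank"},
    {"lon": 7.655, "lat": 45.052, "text": "flooded underpass"},
]

geotransform = (7.60, 1e-4, 0.0, 45.10, 0.0, -1e-4)   # illustrative values
annotations = [(lonlat_to_pixel(r["lon"], r["lat"], geotransform), r["text"])
               for r in reports]
```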

    Generalized least squares-based parametric motion estimation and segmentation

    Motion analysis is one of the most important fields of computer vision, since the real world is in continuous motion and far more information can be obtained from moving scenes than from static ones. This thesis focuses mainly on developing motion estimation algorithms for application to image registration and motion segmentation problems. One of the main objectives of this work is to develop a highly accurate image registration technique that is tolerant to outliers and able to operate even in the presence of large deformations such as translations, rotations, scale changes, and global and non-spatially-uniform illumination changes. Another objective of this thesis is to address motion estimation and segmentation in two-image sequences quasi-simultaneously and without a priori knowledge of the number of motion models present. The experiments reported in this work show that the algorithms proposed in this thesis obtain highly accurate results.

    This thesis proposes several techniques related to the motion estimation problem. In particular, it deals with global motion estimation for image registration and with motion segmentation. In the first case, we will suppose that the majority of the pixels of the image follow the same motion model, although the possibility of a large number of outliers is also considered. In the motion segmentation problem, the presence of more than one motion model will be considered. In both cases, sequences of two consecutive grey-level images will be used. A new generalized least squares-based motion estimator will be proposed. The proposed formulation of the motion estimation problem provides an additional constraint that helps to match the pixels using image gradient information. This is achieved through the use of a weight for each observation, assigning high weight values to observations considered inliers and low values to those considered outliers. To avoid falling into a local minimum, the proposed motion estimator uses a feature-based (SIFT-based) method to obtain good initial motion parameters. Therefore, it can deal with large motions such as translations, rotations, scale changes, viewpoint changes, etc. The accuracy of our approach has been tested on challenging real images using both affine and projective motion models. Two motion estimation techniques, which use M-estimators to deal with outliers within an iteratively reweighted least squares-based strategy, have been selected to compare the accuracy of our approach. The results obtained show that the proposed motion estimator can achieve results as accurate as those of M-estimator-based techniques, and even better in most cases. The problem of accurately estimating the motion under non-uniform illumination changes will also be considered. A modification of the proposed global motion estimator will be proposed to deal with this kind of illumination change. In particular, a dynamic image model in which the illumination factors are functions of location will be used in place of the brightness constancy assumption, allowing for a more general and accurate image model.
    Experiments using challenging images will be performed, showing that the combination of both techniques is feasible and provides accurate estimates of the motion parameters even in the presence of strong illumination changes between the images. The last part of the thesis deals with the combined motion estimation and segmentation problem. The proposed algorithm uses temporal information, through the proposed generalized least squares motion estimation process, and spatial information, through an iterative region-growing algorithm that classifies regions of pixels into the different motion models present in the sequence. In addition, it can extract the different moving regions of the scene while estimating their motion quasi-simultaneously and without a priori information about the number of moving objects in the scene. The performance of the algorithm will be tested on synthetic and real images with multiple objects undergoing different types of motion.
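
    The abstract describes per-observation weights that separate inliers from outliers during estimation. As an illustrative stand-in (an iteratively reweighted least squares fit rather than the thesis's generalized least squares formulation), an affine motion model can be fitted to the linearized brightness constancy equations with robust weights:

```python
import numpy as np

def affine_motion_irls(Ix, Iy, It, iters=10, c=4.685):
    """Estimate affine motion parameters from image gradients with iteratively
    reweighted least squares (Tukey biweight); a simplified stand-in for the
    generalized least squares estimator proposed in the thesis."""
    h, w = Ix.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # One linearized brightness-constancy equation per pixel:
    #   Ix*(a1 + a2*x + a3*y) + Iy*(a4 + a5*x + a6*y) = -It
    A = np.stack([Ix, Ix * xs, Ix * ys, Iy, Iy * xs, Iy * ys], axis=-1).reshape(-1, 6)
    b = -It.reshape(-1)
    weights = np.ones(len(b))
    params = np.zeros(6)
    for _ in range(iters):
        sw = np.sqrt(weights)[:, None]
        params, *_ = np.linalg.lstsq(A * sw, b * sw.ravel(), rcond=None)
        r = A @ params - b
        s = 1.4826 * np.median(np.abs(r)) + 1e-12        # robust scale (MAD)
        u = np.abs(r) / (c * s)
        weights = np.where(u < 1.0, (1.0 - u ** 2) ** 2, 0.0)  # Tukey biweight
    return params  # (a1, a2, a3, a4, a5, a6)
```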

    Abstracted Workflow Framework with a Structure from Motion Application

    In scientific and engineering disciplines, from academia to industry, there is an increasing need for the development of custom software to perform experiments, construct systems, and develop products. The natural initial mindset is to shortcut and bypass all overhead and process rigor in order to obtain an immediate result for the problem at hand, under the misconception that the software will simply be thrown away at the end. In the majority of cases, it turns out that the software persists for many years and often ends up in production systems for which it was not initially intended. In the current study, a framework is proposed, usable in both industry and academic settings, that mitigates the underlying problems associated with developing scientific and engineering software. The result is software that is much more maintainable, documented, and usable by others, specifically allowing new users to extend the capabilities of components already implemented in the framework. There is a multi-disciplinary need in the fields of imaging science, computer science, and software engineering for a unified implementation model, which motivates the development of an abstracted software framework. Structure from motion (SfM) has been identified as one use case where the abstracted workflow framework can improve research efficiency and eliminate implementation redundancies in scientific fields. The SfM process begins by obtaining 2D images of a scene from different perspectives. Features are extracted from the images and correspondences are established between them. This provides enough information to initialize the problem for fully automated processing. Transformations are established between views, and 3D points are computed via triangulation algorithms. The camera model parameters for all views/images are solved through bundle adjustment, establishing a highly consistent point cloud. The initial sparse point cloud and camera matrices are used to generate a dense point cloud through patch-based techniques or densification algorithms such as Semi-Global Matching (SGM). The point cloud can be visualized or exploited by both humans and automated techniques. In some cases the point cloud is draped with the original imagery in order to enhance the 3D model for a human viewer. The SfM workflow can be implemented in the abstracted framework, making it easy for multiple users to leverage and extend. Like many processes in scientific and engineering domains, the workflow described for SfM is complex and requires many disparate components to form a functional system, often utilizing algorithms implemented by many users in different languages and environments, and without knowledge of how each component fits into the larger system. In practice, this generally leads to issues in interfacing the components, building the software for the desired platforms, understanding its concept of operations, and adapting it to the desired function for a particular application. In addition, other scientists and engineers instinctively wish to analyze the performance of the system, establish new algorithms, optimize existing processes, and establish new functionality based on current research. This requires a framework whereby new components can easily be plugged in without affecting the currently implemented functionality. The need for a universal programming environment establishes the motivation for the development of the abstracted workflow framework.
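
    A minimal two-view slice of the SfM pipeline just described (feature extraction, matching, relative pose, triangulation) could look like the following OpenCV sketch; the camera intrinsics K are assumed to be known, and bundle adjustment and densification are omitted:

```python
import cv2
import numpy as np

def two_view_reconstruction(img1, img2, K):
    """Sparse two-view reconstruction: features -> matches -> pose -> 3D points."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Ratio-test matching to establish correspondences.
    matcher = cv2.BFMatcher()
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Relative pose from the essential matrix (RANSAC rejects outlier matches).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate inlier correspondences into an initial sparse point cloud.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inl = mask.ravel() > 0
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
    return (pts4d[:3] / pts4d[3]).T, R, t
```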
This software implementation, named Catena, provides base classes from which new components must derive in order to operate within the framework. The derivation mandates that certain requirements be satisfied in order to provide a complete implementation. Additionally, the developer must provide documentation of the component in terms of its overall function and inputs. The interface input and output values corresponding to the component must be defined in terms of their respective data types, and the implementation uses mechanisms within the framework to retrieve and send the values. This process requires the developer to componentize their algorithm rather than implement it monolithically. Although the requirements placed on the developer are slightly greater, the benefits realized from using Catena far outweigh the overhead and result in extensible software. This thesis provides a basis for the abstracted workflow framework concept and the Catena software implementation. The benefits are also illustrated through a detailed examination of the SfM process as an example application.
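
    The abstract does not give Catena's actual API, so the class and method names below are hypothetical; they only sketch the component contract described above (derive from a base class, document the component, and declare typed inputs and outputs):

```python
from abc import ABC, abstractmethod

class Component(ABC):
    """Hypothetical base class in the spirit of the framework described above:
    components declare typed inputs/outputs and a documented processing step."""
    inputs = {}      # name -> expected type
    outputs = {}     # name -> produced type
    description = ""

    def __init__(self):
        self._values = {}

    def set_input(self, name, value):
        expected = self.inputs[name]
        if not isinstance(value, expected):
            raise TypeError(f"{name} expects {expected.__name__}")
        self._values[name] = value

    def run(self):
        missing = set(self.inputs) - set(self._values)
        if missing:
            raise ValueError(f"missing inputs: {missing}")
        return self.process(**self._values)

    @abstractmethod
    def process(self, **kwargs):
        ...

class FeatureExtractor(Component):
    """Example component: one stage of the SfM workflow."""
    description = "Extracts keypoints and descriptors from an image file."
    inputs = {"image_path": str}
    outputs = {"keypoints": list, "descriptors": list}

    def process(self, image_path):
        # Real logic would call a feature detector here.
        return {"keypoints": [], "descriptors": []}
```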

    Development Of A High Performance Mosaicing And Super-Resolution Algorithm

    In this dissertation, a high-performance mosaicing and super-resolution algorithm is described. The scale-invariant feature transform (SIFT)-based mosaicing algorithm builds an initial mosaic, which is iteratively updated by the robust super-resolution algorithm to achieve the final high-resolution mosaic. Two different types of datasets are used for testing: high-altitude balloon data and unmanned aerial vehicle data. To evaluate our algorithm, five performance metrics are employed: mean square error, peak signal-to-noise ratio, singular value decomposition, slope of the reciprocal singular value curve, and cumulative probability of blur detection. Extensive testing shows that the proposed algorithm is effective in improving the captured aerial data and that the performance metrics are effective in quantifying the evaluation of the algorithm.
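
    The dissertation's robust super-resolution algorithm is not detailed in this abstract; as a generic illustration of how a high-resolution estimate can be refined iteratively from several aligned low-resolution frames, a standard iterative back-projection scheme looks like this (all parameters are illustrative, and registration is assumed to be done already):

```python
import cv2
import numpy as np

def iterative_back_projection(lr_frames, scale=2, iters=20, step=0.2):
    """Simple iterative back-projection super-resolution for low-resolution frames
    already registered to a common grid (a robust SR algorithm, like the one in the
    dissertation, would also handle alignment and outliers; this sketch does not)."""
    h, w = lr_frames[0].shape[:2]
    hr = cv2.resize(lr_frames[0].astype(np.float32), (w * scale, h * scale),
                    interpolation=cv2.INTER_CUBIC)
    for _ in range(iters):
        correction = np.zeros_like(hr)
        for lr in lr_frames:
            # Simulate the imaging process: blur then downsample the HR estimate.
            simulated = cv2.resize(cv2.GaussianBlur(hr, (5, 5), 1.0), (w, h),
                                   interpolation=cv2.INTER_AREA)
            error = lr.astype(np.float32) - simulated
            # Back-project the residual onto the high-resolution grid.
            correction += cv2.resize(error, (w * scale, h * scale),
                                     interpolation=cv2.INTER_CUBIC)
        hr += step * correction / len(lr_frames)
    return np.clip(hr, 0, 255).astype(np.uint8)
```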