
    Software Defined Multi-Spectral Imaging for Arctic Sensor Networks

    Availability of off-the-shelf infrared sensors combined with high-definition visible cameras has made possible the construction of a Software Defined Multi-Spectral Imager (SDMSI) combining long-wave infrared, near-infrared, and visible imaging. The SDMSI requires a real-time embedded processor to fuse images and to create real-time depth maps for opportunistic uplink in sensor networks. Researchers at Embry-Riddle Aeronautical University, working with the University of Alaska Anchorage at the Arctic Domain Awareness Center and the University of Colorado Boulder, have built several versions of a low-cost, drop-in-place SDMSI to test alternatives for power-efficient image fusion. The SDMSI is intended for field applications including marine security, search and rescue operations, and environmental surveys in the Arctic region. Based on Arctic marine sensor network mission goals, the team has designed the SDMSI to rank images by saliency and to provide on-camera fusion and depth mapping. A major challenge has been designing the camera computing system to operate within a 10 to 20 Watt power budget. This paper presents a power analysis of three options: 1) multi-core, 2) field programmable gate array with multi-core, and 3) graphics processing unit with multi-core. For each test, the power consumed for common fusion workloads has been measured at a range of frame rates and resolutions. Detailed analyses from our power efficiency comparison for workloads specific to stereo depth mapping and sensor fusion are summarized. Preliminary mission feasibility results from testing with off-the-shelf long-wave infrared and visible cameras in Alaska and Arizona are also summarized to demonstrate the value of the SDMSI for applications such as ice tracking, ocean color, soil moisture, and animal and marine vessel detection and tracking. The goal is to select the most power-efficient solution for the SDMSI for use on UAVs (Unoccupied Aerial Vehicles) and other drop-in-place installations in the Arctic. The selected prototype will be field tested in Alaska in the summer of 2016.
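    As a rough illustration of the kind of power-efficiency comparison described above, the sketch below computes energy per fused frame and frames per watt for the three candidate architectures. All power and frame-rate numbers are placeholders, not measurements from the paper.

```python
# Hypothetical sketch: comparing energy per fused frame across the three
# candidate architectures named in the abstract. The power and frame-rate
# figures below are placeholders, not results from the paper.

measurements = {
    # platform: (average power in watts, sustained fusion frame rate in fps)
    "multi-core":        (10.0, 12.0),
    "FPGA + multi-core": (14.0, 30.0),
    "GPU + multi-core":  (18.0, 45.0),
}

BUDGET_WATTS = 20.0  # upper end of the 10-20 W budget mentioned in the abstract

for platform, (watts, fps) in measurements.items():
    joules_per_frame = watts / fps   # energy cost of one fused frame
    frames_per_watt = fps / watts    # throughput delivered per watt
    within_budget = watts <= BUDGET_WATTS
    print(f"{platform:18s}  {joules_per_frame:5.2f} J/frame  "
          f"{frames_per_watt:5.2f} fps/W  within budget: {within_budget}")
```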

    Deep Learning-Based Object Detection in Maritime Unmanned Aerial Vehicle Imagery: Review and Experimental Comparisons

    With the advancement of maritime unmanned aerial vehicles (UAVs) and deep learning technologies, UAV-based object detection has become increasingly significant in the maritime industry and ocean engineering. Endowed with intelligent sensing capabilities, maritime UAVs enable effective and efficient maritime surveillance. To further promote the development of maritime UAV-based object detection, this paper provides a comprehensive review of challenges, relevant methods, and UAV aerial datasets. Specifically, we first briefly summarize four challenges for object detection on maritime UAVs, i.e., object feature diversity, device limitations, maritime environment variability, and dataset scarcity. We then focus on computational methods for improving maritime UAV-based object detection performance, covering scale-aware methods, small object detection, view-aware methods, rotated object detection, lightweight methods, and others. Next, we review UAV aerial image/video datasets and propose a maritime UAV aerial dataset named MS2ship for ship detection. Furthermore, we conduct a series of experiments to present the performance evaluation and robustness analysis of object detection methods on maritime datasets. Finally, we discuss and give an outlook on future work for maritime UAV-based object detection. The MS2ship dataset is available at https://github.com/zcj234/MS2ship.
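    One of the directions above, rotated object detection, evaluates detections against rotated ground-truth boxes. Below is a minimal, hedged sketch of rotated-box IoU using shapely; the (cx, cy, w, h, angle) parameterization and the example boxes are illustrative assumptions, not the MS2ship annotation format.

```python
# Hedged illustration: IoU between two rotated bounding boxes, the kind of
# overlap measure used when evaluating rotated ship detections in UAV imagery.
import math
from shapely.geometry import Polygon

def rotated_box_polygon(cx, cy, w, h, angle_deg):
    """Return a shapely Polygon for a box of size w x h rotated about its center."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    corners = []
    for dx, dy in [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]:
        corners.append((cx + dx * cos_a - dy * sin_a,
                        cy + dx * sin_a + dy * cos_a))
    return Polygon(corners)

def rotated_iou(box_a, box_b):
    """Intersection-over-union of two (cx, cy, w, h, angle_deg) boxes."""
    pa, pb = rotated_box_polygon(*box_a), rotated_box_polygon(*box_b)
    inter = pa.intersection(pb).area
    union = pa.area + pb.area - inter
    return inter / union if union > 0 else 0.0

# Two overlapping example boxes, one rotated by 15 degrees
print(rotated_iou((50, 50, 40, 20, 0), (55, 52, 40, 20, 15)))
```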

    Contributions to improve the technologies supporting unmanned aircraft operations

    Unmanned Aerial Vehicles (UAVs), in their smaller versions known as drones, are becoming increasingly important in today's societies. The systems that make them up present a multitude of challenges, of which error can be considered the common denominator. The environment is perceived through sensors that have errors, and the models that interpret the information and/or define behaviors are approximations of the world and therefore also have errors. Explaining error allows the limits of deterministic models to be extended to address real-world problems. The performance of the technologies embedded in drones depends on our ability to understand, model, and control the error of the systems that integrate them, as well as of any new technologies that may emerge. Flight controllers integrate various subsystems that are generally dependent on other systems. One example is the guidance system. Guidance systems provide the propulsion controller with the information needed to accomplish a desired mission. To this end, the flight controller includes a guidance control law that reacts to the information perceived by the perception and navigation systems. The error of any of these subsystems propagates through the ecosystem of the controller, so the study of each of them is essential. Among the strategies for error control are state-space estimators, where the Kalman filter has been a great ally of engineers since its appearance in the 1960s. Kalman filters are at the heart of information fusion systems, minimizing the error covariance of the system and allowing the measured states to be filtered, and estimated in the absence of observations. State-space models (SSMs) are built on a set of hypotheses about the world: that the models must be linear and Markovian, and that their error must be Gaussian. In general, systems are not linear, so linearizations are performed on models that are already approximations of the world. In other cases, the noise to be controlled is not Gaussian, but it is approximated by that distribution in order to deal with it. Moreover, many systems are not Markovian, i.e., their states do not depend only on the previous state, and state-space models cannot handle these additional dependencies. This thesis presents a collection of studies in which error is formulated and reduced. First, the error in a computer vision-based precision landing system is studied; then, estimation and filtering problems are addressed from a deep learning perspective; finally, classification with deep learning over trajectories is studied. The first study of the collection examines the consequences of error propagation in a machine vision-based precision landing system and proposes a set of strategies to reduce the impact on the guidance system and, ultimately, the error. The next two studies approach the estimation and filtering problem from a deep learning perspective, where error is a function to be minimized through learning. The last study of the collection deals with a trajectory classification problem with real data.
    This work thus covers the two main fields of deep learning, regression and classification, where error is treated as a probability of class membership.
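    As a concrete reference for the filtering machinery discussed above, here is a minimal linear Kalman filter sketch showing the predict/update cycle, the propagation of the error covariance, and how the update is skipped when no observation is available. The constant-velocity model and all numbers are illustrative assumptions, not taken from the thesis.

```python
# Minimal linear Kalman filter sketch: the filter propagates a state estimate
# and its error covariance, and skips the update step when no measurement
# arrives (e.g., during sensor dropouts). 1-D constant-velocity model.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])              # we only observe position
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[0.5]])                   # measurement noise covariance

x = np.array([[0.0], [1.0]])            # initial state estimate
P = np.eye(2)                           # initial error covariance

def step(x, P, z=None):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update only when a measurement is present
    if z is not None:
        y = z - H @ x                        # innovation
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [np.array([[1.1]]), None, np.array([[3.2]])]:  # None = missing observation
    x, P = step(x, P, z)
    print(x.ravel())
```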

    Space Remote Sensing and Detecting Systems of Oceangoing Ships

    This paper introduces the implementation of space remote sensing and detection systems for oceangoing ships as an alternative to the Radio Automatic Identification System (R-AIS), the Satellite Automatic Identification System (S-AIS), Long Range Identification and Tracking (LRIT), and other current vessel tracking systems. This paper does not cover the new Global Ship Tracking (GST) project, an autonomous and discrete satellite network designed by the Space Science Centre (SSC) for research and postgraduate studies in Satellite Communication, Navigation and Surveillance (CNS) at Durban University of Technology (DUT). Ship detection from satellite remote sensing imagery is a crucial application for maritime safety and security, covering, among others, ship tracking, detection and traffic surveillance, oil spill detection and discharge control, sea pollution monitoring, sea ice monitoring, and protection against illegal fishing activities. Establishing a modern sea-surface and ship monitoring system requires enhancement of Satellite Synthetic Aperture Radar (SSAR), which is discussed here as a modern observation infrastructure integrated with ship surveillance and detection via the SSAR TerraSAR-X spacecraft, ship surveillance and detection via the SSAR Radarsat spacecraft, and a Vessels Detecting System (VDS) via SSAR.
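    For illustration only, the sketch below implements a simple cell-averaging, CFAR-style threshold, a common baseline for flagging bright ship pixels against SAR sea clutter. It is a generic example rather than the TerraSAR-X/Radarsat/VDS processing chain discussed in the paper; the window size, threshold factor, and toy scene are arbitrary.

```python
# Hedged sketch of a cell-averaging, CFAR-style detector for SAR ship detection.
# A pixel is flagged when it exceeds the local background mean by k local
# standard deviations; no guard cells are used in this simplified version.
import numpy as np
from scipy.ndimage import uniform_filter

def cfar_detect(intensity, window=31, k=3.0):
    """Flag pixels brighter than the local mean plus k local standard deviations."""
    local_mean = uniform_filter(intensity, size=window)
    local_sq_mean = uniform_filter(intensity ** 2, size=window)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
    return intensity > local_mean + k * local_std

# Toy scene: speckle-like sea clutter with one bright "ship" patch
rng = np.random.default_rng(0)
scene = rng.exponential(scale=1.0, size=(200, 200))
scene[100:104, 120:130] += 20.0
mask = cfar_detect(scene)
print("detected pixels:", int(mask.sum()))
```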

    2015 Oil Observing Tools: A Workshop Report

    Since 2010, the National Oceanic and Atmospheric Administration (NOAA) and the National Aeronautics and Space Administration (NASA) have provided satellite-based pollution surveillance in United States waters to regulatory agencies such as the United States Coast Guard (USCG). These technologies provide agencies with useful information regarding possible oil discharges. Unfortunately, there has been confusion as to how to interpret the images collected by these satellites and other aerial platforms, which can generate misunderstandings during spill events. Remote sensor packages on aircraft and satellites have advantages and disadvantages vis-à-vis human observers, because they do not “see” features or surface oil the same way. In order to improve observation capabilities during oil spills, applicable technologies must be identified and then evaluated with respect to their advantages and disadvantages for the incident. In addition, differences between sensors (e.g., visual, IR, multispectral sensors, radar) and platform packages (e.g., manned/unmanned aircraft, satellites) must be understood so that reasonable approaches can be taken where applicable, and any data must then be correctly interpreted for decision support. NOAA convened an Oil Observing Tools Workshop to focus on the above actions and to identify training gaps for oil spill observers and remote sensing interpretation, in order to improve future oil surveillance, observation, and mapping during spills. The Coastal Response Research Center (CRRC) assisted NOAA’s Office of Response and Restoration (ORR) with this effort. The workshop was held on October 20-22, 2015 at NOAA’s Gulf of Mexico Disaster Response Center in Mobile, AL. The expected outcome of the workshop was an improved understanding, and greater use, of technology to map and assess oil slicks during actual spill events. Specific workshop objectives included:
    • Identify new developments in oil observing technologies useful for real-time (or near real-time) mapping of spilled oil during emergency events.
    • Identify merits and limitations of current technologies and their usefulness to emergency response mapping of oil and reliable prediction of oil surface transport and trajectory forecasts. Current technologies include the traditional human aerial observer, unmanned aircraft surveillance systems, aircraft with specialized sensor packages, and satellite Earth observing systems.
    • Assess training needs for visual observation (human observers with cameras) and sensor technologies (including satellites) to build skills and enhance proper interpretation for decision support during actual events.

    Robust Multi-sensor Data Fusion for Practical Unmanned Surface Vehicles (USVs) Navigation

    The development of practical Unmanned Surface Vehicles (USVs) is attracting increasing attention, driven by their assorted military and commercial application potential. However, addressing the uncertainties present in practical navigational sensor measurements of a USV in the maritime environment remains the main challenge of this development. This research aims to develop a multi-sensor data fusion system that autonomously provides a USV with reliable navigational information on its own position and heading, and that detects dynamic target ships in the surrounding environment, in a holistic fashion. A multi-sensor data fusion algorithm based on the Unscented Kalman Filter (UKF) has been developed to generate more accurate estimates of the USV’s navigational data under practical environmental disturbances. A novel covariance matching adaptive estimation algorithm has been proposed to deal with the issues caused by unknown and varying sensor noise in practice and to improve system robustness. Measures have been designed to determine system reliability numerically, to recover the USV trajectory during short-term sensor signal loss, and to autonomously detect and discard permanently malfunctioning sensors, thereby enabling tolerance of potential sensor faults. The performance of the algorithms has been assessed through theoretical simulations as well as with experimental data collected from a real-world USV project conducted in collaboration with Plymouth University. To increase the degree of autonomy of USVs in perceiving surrounding environments, target detection and prediction algorithms using an Automatic Identification System (AIS) in conjunction with a marine radar have been proposed to provide full detection of multiple dynamic targets over a wider coverage range, remedying the narrow detection range and sensor uncertainties of the AIS. The detection algorithms have been validated in simulations using practical environments with water current effects. The performance of the developed multi-sensor data fusion system in providing reliable navigational data and perceiving the surrounding environment for USV navigation has been comprehensively demonstrated.
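    As a hedged reference point for the adaptive estimation idea above, the sketch below shows a textbook innovation-based covariance-matching update for the measurement noise covariance R, using a sliding window of filter innovations. It is a generic form, not a reproduction of the thesis’s specific adaptive UKF; the window length, matrices, and example values are placeholders.

```python
# Generic covariance-matching sketch: estimate the measurement noise covariance
# R online from a sliding window of innovations, R_hat = C_v - H P H^T, where
# C_v is the sample innovation covariance and P comes from the predict step.
import numpy as np
from collections import deque

class AdaptiveR:
    def __init__(self, dim_z, window=30):
        self.innovations = deque(maxlen=window)
        self.dim_z = dim_z

    def update(self, innovation, H, P):
        """innovation = z - H @ x_pred; H and P come from the filter's predict step."""
        self.innovations.append(innovation.reshape(-1, 1))
        V = np.hstack(self.innovations)
        C_v = (V @ V.T) / V.shape[1]        # sample innovation covariance
        R_hat = C_v - H @ P @ H.T           # covariance-matching estimate
        # Keep the estimate positive semi-definite by clipping negative eigenvalues
        w, U = np.linalg.eigh(R_hat)
        return U @ np.diag(np.maximum(w, 1e-6)) @ U.T

# Example with a 1-D position measurement (illustrative numbers only)
adapt = AdaptiveR(dim_z=1)
H = np.array([[1.0, 0.0]])
P = np.eye(2)
for innov in [0.3, -0.5, 0.8, 0.1]:
    R_est = adapt.update(np.array([innov]), H, P)
print("adapted R:", R_est)
```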

    Object Tracking Based on Satellite Videos: A Literature Review

    Video satellites have recently become an attractive means of Earth observation, providing consecutive images of the Earth’s surface for continuous monitoring of specific events. The development of on-board optical and communication systems has enabled various applications of satellite image sequences. However, satellite video-based target tracking is a challenging research topic in remote sensing due to its relatively low spatial and temporal resolution. This survey therefore systematically investigates current satellite video-based tracking approaches and benchmark datasets, focusing on five typical tracking applications: traffic target tracking, ship tracking, typhoon tracking, fire tracking, and ice motion tracking. For each tracking application, the essential aspects are summarized, such as the tracking architecture, fundamental characteristics, primary motivations, and contributions. Furthermore, popular visual tracking benchmarks and their respective properties are discussed. Finally, a revised multi-level dataset based on WPAFB videos is generated and quantitatively evaluated for future development of the satellite video-based tracking area. In addition, the 54.3% of tracklets with the lowest Difficulty Scores (DS) are selected as the Easy group, while 27.2% and 18.5% of the tracklets are grouped into the Medium-DS and Hard-DS groups, respectively.
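    As a small, hedged illustration of how such a DS-based split could be produced, the sketch below partitions tracklets into Easy, Medium, and Hard groups at the 54.3% and 81.5% Difficulty Score quantiles. The random DS values are placeholders; the real grouping comes from the revised WPAFB dataset described above.

```python
# Illustrative split of tracklets into Easy / Medium / Hard groups by
# Difficulty Score (DS) quantiles, mirroring the 54.3% / 27.2% / 18.5%
# proportions mentioned in the abstract. DS values here are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
ds = rng.random(1000)                        # one difficulty score per tracklet

easy_cut = np.quantile(ds, 0.543)            # lowest 54.3% -> Easy
medium_cut = np.quantile(ds, 0.543 + 0.272)  # next 27.2%  -> Medium

groups = np.where(ds <= easy_cut, "Easy",
         np.where(ds <= medium_cut, "Medium", "Hard"))
for name in ("Easy", "Medium", "Hard"):
    print(name, int((groups == name).sum()))
```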