A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community
In recent years, deep learning (DL), a re-branding of neural networks (NNs),
has risen to the top in numerous areas, namely computer vision (CV), speech
recognition, natural language processing, etc. Whereas remote sensing (RS)
possesses a number of unique challenges, primarily related to sensors and
applications, inevitably RS draws from many of the same theories as CV; e.g.,
statistics, fusion, and machine learning, to name a few. This means that the RS
community should be aware of, if not at the leading edge of, advancements
like DL. Herein, we provide the most comprehensive survey of state-of-the-art
RS DL research. We also review recent new developments in the DL field that can
be used in DL for RS. Namely, we focus on theories, tools and challenges for
the RS community. Specifically, we focus on unsolved challenges and
opportunities as they relate to (i) inadequate data sets, (ii)
human-understandable solutions for modelling physical phenomena, (iii) Big
Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and
learning algorithms for spectral, spatial and temporal data, (vi) transfer
learning, (vii) an improved theoretical understanding of DL systems, (viii)
high barriers to entry, and (ix) training and optimizing DL models.

Comment: 64 pages, 411 references. To appear in the Journal of Applied Remote Sensing.
Converting Optical Videos to Infrared Videos Using Attention GAN and Its Impact on Target Detection and Classification Performance
To apply powerful deep-learning-based algorithms for object detection and classification in infrared videos, it is necessary to have more training data in order to build high-performance models. However, in many surveillance applications, one can have far more optical videos than infrared videos. This lack of IR video datasets can be mitigated if optical-to-infrared video conversion is possible. In this paper, we present a new approach for converting optical videos to infrared videos using deep learning. The basic idea is to focus on target areas using an attention generative adversarial network (attention GAN), which preserves the fidelity of target areas. The approach does not require paired images. The performance of the proposed attention GAN has been demonstrated using objective and subjective evaluations. Most importantly, the impact of attention GAN has been demonstrated in improved target detection and classification performance using real infrared videos.
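The attention idea can be pictured as a mask-weighted blend of the generator's output and the input frame, so that high-attention (target) regions come from the conversion network while the rest of the frame is altered less. A minimal sketch, assuming single-channel frames and a precomputed attention mask in [0, 1]; the function name and toy data are illustrative, not the authors' code:

```python
import numpy as np

def compose_with_attention(optical, generated_ir, attention):
    """Blend a generated infrared frame with the optical input using an
    attention mask: attended (target) pixels are taken from the generator,
    while low-attention background pixels stay close to the input.
    All arrays are H x W, single channel, attention values in [0, 1]."""
    return attention * generated_ir + (1.0 - attention) * optical

# toy example: the mask focuses the conversion on a central "target" patch
optical = np.full((4, 4), 0.2)        # stand-in optical frame
generated_ir = np.full((4, 4), 0.9)   # stand-in generator output
attention = np.zeros((4, 4))
attention[1:3, 1:3] = 1.0             # hypothetical target region

out = compose_with_attention(optical, generated_ir, attention)
```

In a full attention GAN the mask itself is predicted by the network and trained jointly with the generator; this composition step is what lets the adversarial loss concentrate on target fidelity.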
Urban Informatics
This open access book is the first to systematically introduce the principles of urban informatics and its application to every aspect of the city that involves its functioning, control, management, and future planning. It introduces new models and tools being developed to understand and implement these technologies that enable cities to function more efficiently – to become ‘smart’ and ‘sustainable’. The smart city has quickly emerged as computers have become ever smaller to the point where they can be embedded into the very fabric of the city, as well as being central to new ways in which the population can communicate and act. When cities are wired in this way, they have the potential to become sentient and responsive, generating massive streams of ‘big’ data in real time as well as providing immense opportunities for extracting new forms of urban data through crowdsourcing. This book offers a comprehensive review of the methods that form the core of urban informatics from various kinds of urban remote sensing to new approaches to machine learning and statistical modelling. It provides a detailed technical introduction to the wide array of tools information scientists need to develop the key urban analytics that are fundamental to learning about the smart city, and it outlines ways in which these tools can be used to inform design and policy so that cities can become more efficient with a greater concern for environment and equity
Multimodal Navigation for Accurate Space Rendezvous Missions
© Cranfield University 2021. All rights reserved. No part of this publication may be reproduced without the written permission of the copyright owner.

Relative navigation is paramount in space missions that involve rendezvousing
between two spacecraft. It demands accurate and continuous estimation of the six
degree-of-freedom relative pose, as this stage involves close-proximity, fast-reaction
operations that can last up to five orbits. This has been routinely achieved thanks to
active sensors such as lidar, but their large size, cost, power consumption and limited operational
range remain a stumbling block for en masse on-board integration. With the onset
of faster processing units, lighter and cheaper passive optical sensors are emerging as
the suitable alternative for autonomous rendezvous in combination with computer
vision algorithms. Current vision-based solutions, however, are limited by adverse
illumination conditions such as solar glare, shadowing, and eclipse. These effects are
exacerbated when the target does not hold cooperative markers to accommodate the
estimation process and is incapable of controlling its rotational state.
This thesis explores novel model-based methods that exploit sequences of monocular images acquired by an on-board camera to accurately carry out spacecraft
relative pose estimation for non-cooperative close-range rendezvous with a known
artificial target. The proposed solutions tackle the current challenges of imaging in
the visible spectrum and investigate the contribution of the long wavelength infrared
(or “thermal”) band towards a combined multimodal approach.
As part of the research, a visible-thermal synthetic dataset of a rendezvous
approach with the defunct satellite Envisat is generated from the ground up using a
realistic orbital camera simulator. From the rendered trajectories, the performance
of several state-of-the-art feature detectors and descriptors is first evaluated for
both modalities in a tailored scenario for short- and wide-baseline image processing
transforms. Multiple combinations, including the pairing of algorithms with their
non-native counterparts, are tested. Computational runtimes are assessed on an
embedded hardware board.
From the insight gained, a method to estimate the pose on the visible band is
derived from minimising geometric constraints between online local point and edge
contour features matched to keyframes generated offline from a 3D model of the
target. The combination of both feature types is demonstrated to achieve a pose
solution for a tumbling target using a sparse set of training images, bypassing the
need for hardware-accelerated real-time renderings of the model.
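The notion of minimising geometric constraints over both feature types can be illustrated in a simplified planar setting: a Gauss-Newton sketch that stacks 2-D point residuals and 1-D point-to-line ("edge") residuals into one least-squares problem. All names and synthetic data below are illustrative; the thesis method operates on full SE(3) poses with keyframes rendered from a 3D model.

```python
import numpy as np

def rot(th):
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s], [s, c]])

def gauss_newton_pose(pts, pt_refs, edges, edge_refs, normals, iters=10):
    """Jointly minimise 2-D point residuals and 1-D point-to-line ('edge')
    residuals to recover a planar pose (theta, tx, ty), mimicking the
    combination of point and edge-contour constraints in one cost."""
    th, t = 0.0, np.zeros(2)
    for _ in range(iters):
        R = rot(th)
        dR = np.array([[-np.sin(th), -np.cos(th)],
                       [ np.cos(th), -np.sin(th)]])    # dR/dtheta
        rows, res = [], []
        for p, q in zip(pts, pt_refs):                 # point constraints
            res.append(R @ p + t - q)
            rows.append(np.column_stack([dR @ p, np.eye(2)]))
        for p, q, n in zip(edges, edge_refs, normals): # edge constraints
            res.append(np.array([n @ (R @ p + t - q)]))
            rows.append(np.array([[n @ (dR @ p), n[0], n[1]]]))
        J, r = np.vstack(rows), np.concatenate(res)
        step = np.linalg.lstsq(J, -r, rcond=None)[0]   # Gauss-Newton step
        th, t = th + step[0], t + step[1:]
    return th, t

# synthetic check: recover a known planar transform from mixed features
true_th, true_t = 0.3, np.array([1.0, -2.0])
pts = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 1.0]])
pt_refs = pts @ rot(true_th).T + true_t
edges = np.array([[2.0, 1.0], [0.5, -1.0]])
edge_refs = edges @ rot(true_th).T + true_t
normals = np.array([[1.0, 0.0], [0.0, 1.0]])          # edge line normals
th, t = gauss_newton_pose(pts, pt_refs, edges, edge_refs, normals)
```

Edge residuals constrain only the direction along the contour normal, which is why combining them with even a sparse set of point matches yields a well-conditioned pose solution.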
The proposed algorithm is then augmented with an extended Kalman filter
which processes each feature-induced minimisation output as individual pseudo-measurements, fusing them to estimate the relative pose and velocity states at
each time-step. Both the minimisation and filtering are established using Lie group
formalisms, allowing for the covariance of the solution computed by the former to be automatically incorporated as measurement noise in the latter, providing
an automatic weighting of each feature type directly related to the quality of the
matches. The predicted states are then used to search for new feature matches in the
subsequent time-step. Furthermore, a method to derive a coarse viewpoint estimate
to initialise the nominal algorithm is developed based on probabilistic modelling of
the target’s shape. The robustness of the complete approach is demonstrated for
several synthetic and laboratory test cases involving two types of target undergoing
extreme illumination conditions.
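The covariance-weighted fusion of feature-induced pseudo-measurements can be sketched with plain Kalman updates in Euclidean space (the thesis formulates this on Lie groups; the names, dimensions and toy numbers here are illustrative only):

```python
import numpy as np

def fuse_pseudo_measurements(x_pred, P_pred, measurements):
    """Sequentially apply Kalman updates for a list of direct-state
    pseudo-measurements (z, R), where R is the covariance reported by each
    feature-based minimisation. Measurements with larger R (poorer feature
    matches) are automatically down-weighted by the Kalman gain."""
    x, P = x_pred.copy(), P_pred.copy()
    I = np.eye(len(x))
    for z, R in measurements:
        H = I                             # pseudo-measurement observes the state directly
        S = H @ P @ H.T + R               # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
        x = x + K @ (z - H @ x)
        P = (I - K @ H) @ P
    return x, P

# two 1-D "feature" measurements of the same state: the tighter one dominates
x0, P0 = np.array([0.0]), np.eye(1) * 1e6      # diffuse prior
meas = [(np.array([1.0]), np.eye(1) * 0.01),   # e.g. point features: good matches
        (np.array([3.0]), np.eye(1) * 100.0)]  # e.g. edge features: poor matches
x, P = fuse_pseudo_measurements(x0, P0, meas)
```

Because each minimisation hands the filter its own covariance, no hand-tuned per-feature weights are needed: the fused estimate lands near the low-covariance measurement.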
Lastly, an innovative deep learning-based framework is developed by processing
the features extracted by a convolutional front-end with long short-term memory cells,
thus proposing the first deep recurrent convolutional neural network for spacecraft
pose estimation. The framework is used to compare the performance achieved by
visible-only and multimodal input sequences, where the addition of the thermal band
is shown to greatly improve the performance during sunlit sequences. Potential
limitations of this modality are also identified, such as when the target’s thermal
signature is comparable to Earth's during eclipse.
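The recurrent stage of such a framework can be sketched as a single LSTM cell stepping over per-frame feature vectors, assuming the convolutional front-end has already reduced each frame to a vector; the sizes, random weights and names below are illustrative stand-ins for the learned network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """One LSTM step over a per-frame feature vector x (the output of a
    convolutional front-end); h and c are the hidden and cell states.
    W stacks the input/forget/cell/output gate weights row-wise."""
    z = W @ np.concatenate([x, h]) + b
    n = len(h)
    i, f = sigmoid(z[:n]), sigmoid(z[n:2 * n])       # input and forget gates
    g, o = np.tanh(z[2 * n:3 * n]), sigmoid(z[3 * n:])  # candidate and output gate
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
feat_dim, hid = 8, 4                        # illustrative sizes
W = rng.normal(scale=0.1, size=(4 * hid, feat_dim + hid))
b = np.zeros(4 * hid)
h = c = np.zeros(hid)
for _ in range(5):                          # five "frames" of a sequence
    x = rng.normal(size=feat_dim)           # stand-in for CNN features
    h, c = lstm_step(x, h, c, W, b)
pose_features = h                           # a regression head would map h to pose
```

The recurrence is what lets the network exploit temporal consistency across the rendezvous sequence rather than regressing each frame's pose independently.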
Information technology and military performance
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Political Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 519-544).

Militaries have long been eager to adopt the latest information technology (IT) in a quest to improve knowledge of and control over the battlefield. At the same time, uncertainty and confusion have remained prominent in the actual experience of war. IT usage sometimes improves knowledge, but it sometimes contributes to tactical blunders and misplaced hubris. As militaries invest intensively in IT, they also tend to develop larger headquarters staffs, depend more heavily on planning and intelligence, and employ a larger percentage of personnel in knowledge work rather than physical combat. Both optimists and pessimists about the so-called "revolution in military affairs" have tended to overlook the ways in which IT is profoundly and ambiguously embedded in everyday organizational life. Technocrats embrace IT to "lift the fog of war," but IT often becomes a source of breakdowns, misperception, and politicization. To describe the conditions under which IT usage improves or degrades organizational performance, this dissertation develops the notion of information friction, an aggregate measure of the intensity of organizational struggle to coordinate IT with the operational environment. It articulates hypotheses about how the structure of the external battlefield, internal bureaucratic politics, and patterns of human-computer interaction can either exacerbate or relieve friction, thus degrading or improving performance. Technological determinism alone cannot account for the increasing complexity and variable performance of information phenomena. Information friction theory is empirically grounded in a participant-observation study of U.S. special operations in Iraq from 2007 to 2008.
To test the external validity of insights gained through fieldwork in Iraq, a historical study of the 1940 Battle of Britain examines IT usage in a totally different structural, organizational, and technological context. These paired cases show that high information friction, and thus degraded performance, can arise with sophisticated IT, while lower friction and impressive performance can occur with far less sophisticated networks. The social context, not just the quality of the technology, makes all the difference. Many shorter examples from recent military history are included to illustrate concepts. This project should be of broad interest to students of organizational knowledge, IT, and military effectiveness.

by Jon Randall Lindsay. Ph.D.