7,912 research outputs found
Vision technology/algorithms for space robotics applications
The thrust of automation and robotics for space applications has been motivated by increased productivity, improved reliability, increased flexibility, and higher safety, as well as by the need to automate time-consuming tasks, increase the productivity/performance of crew-accomplished tasks, and perform tasks beyond the capability of the crew. This paper provides a review of efforts currently in progress in the area of robotic vision. Both systems and algorithms are discussed. The evolution of future vision/sensing is projected to include the fusion of multisensors, ranging from microwave to optical, with multimode capability covering position, attitude, recognition, and motion parameters. The key features of the overall system design will be small size and weight, fast signal processing, robust algorithms, and accurate parameter determination. These aspects of vision/sensing are also discussed.
Dynamic Denoising of Tracking Sequences
©2008 IEEE. DOI: 10.1109/TIP.2008.920795
In this paper, we describe an approach to the problem of simultaneously enhancing image sequences and tracking the objects of interest they represent. The enhancement part of the algorithm is based on Bayesian wavelet denoising, chosen for its exceptional ability to incorporate diverse a priori information into the process of image recovery. In particular, we demonstrate that, in dynamic settings, useful statistical priors can come both from reasonable assumptions on the properties of the image to be enhanced and from the images observed before the current scene. The use of such priors forms the main contribution of the present paper: the proposal of dynamic denoising as a tool for simultaneously enhancing and tracking image sequences. Within the proposed framework, the previous observations of a dynamic scene are employed to enhance its present observation. The mechanism that fuses the information within successive image frames is Bayesian estimation, while the transfer of useful information between images is governed by a Kalman filter used for both prediction and estimation of the dynamics of the tracked objects.
Therefore, in this methodology, the processes of target tracking and image enhancement "collaborate" in an interlacing manner, rather than being applied separately. The dynamic denoising is demonstrated on several examples of SAR imagery. The results reported in this paper indicate a number of advantages of the proposed dynamic denoising over "static" approaches, in which the tracking images are enhanced independently of each other.
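The Kalman recursion that transfers object dynamics between frames in such a scheme can be sketched as follows; the constant-velocity model and the noise covariances below are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x : state estimate, P : state covariance,
    z : new measurement (e.g. a tracked object's position).
    """
    # Predict the next state from the dynamic model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the measurement.
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Constant-velocity model in 1-D: state = [position, velocity].
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # dynamics (assumed)
H = np.array([[1.0, 0.0]])               # we observe position only
Q = 0.01 * np.eye(2)                     # process noise (assumed)
R = np.array([[0.25]])                   # measurement noise (assumed)

x, P = np.array([0.0, 0.0]), np.eye(2)
for z in [1.0, 2.1, 2.9, 4.2]:           # noisy positions over frames
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
```

After a few frames the filter locks onto the roughly unit velocity of the measurements; in the paper's framework, the predicted state is what lets priors learned from past frames be warped onto the current one.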
Off-line processing of ERS-1 synthetic aperture radar data with high precision and high throughput
The first European remote sensing satellite ERS-1 will be launched by the European Space Agency (ESA) in 1989. The expected lifetime is two to three years. The spacecraft sensors will primarily support ocean investigations and, to a limited extent, land applications. The prime sensor is the Active Microwave Instrumentation (AMI), operating in C-Band either as Synthetic Aperture Radar (SAR) or as Wave-Scatterometer and simultaneously as Wind-Scatterometer. In Europe there will be two distinct types of processing for ERS-1 SAR data: Fast Delivery Processing and Precision Processing. Fast Delivery Processing will be carried out at the ground stations, and up to three Fast Delivery products per pass will be delivered to end users via satellite within three hours after data acquisition. Precision Processing will be carried out in delayed time, and products will not be generated until several days or weeks after data acquisition. However, a wide range of products will be generated by several Processing and Archiving Facilities (PAF) in a joint effort coordinated by ESA. The German Remote Sensing Data Center (Deutsches Fernerkundungsdatenzentrum, DFD) will develop and operate one of these facilities. The related activities include the acquisition, processing and evaluation of such data for scientific, public and commercial users. Based on this experience, the German Remote Sensing Data Center is presently performing a Phase-B study regarding the development of a SAR processor for ERS-1. The conceptual design of this processing facility is briefly outlined.
DroTrack: High-speed Drone-based Object Tracking Under Uncertainty
We present DroTrack, a high-speed visual single-object tracking framework for drone-captured video sequences. Most existing object tracking methods are designed to tackle well-known challenges such as occlusion and cluttered backgrounds. The complex motion of drones, i.e., multiple degrees of freedom in three-dimensional space, causes high uncertainty, which leads to inaccurate location predictions and fuzziness in scale estimations. DroTrack addresses these issues by discovering the dependency between object representation and motion geometry. We implement an effective object segmentation based on Fuzzy C-Means (FCM), incorporating spatial information into the membership function to cluster the most discriminative segments. We then enhance the object segmentation by using a pre-trained Convolutional Neural Network (CNN) model. DroTrack also leverages the geometrical angular motion to estimate a reliable object scale. We discuss the experimental results and performance evaluation using two datasets of 51,462 drone-captured frames. The combination of the FCM segmentation and the angular scaling increased DroTrack's precision and decreased its average centre location error. DroTrack outperforms all the high-speed trackers and achieves comparable results to deep learning trackers. DroTrack offers high frame rates, up to 1000 frames per second (fps), with better location precision than a set of state-of-the-art real-time trackers.
Comment: 10 pages, 12 figures, FUZZ-IEEE 2020
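The FCM clustering at the core of the segmentation step alternates between membership and centroid updates. A plain (non-spatial) sketch is below; DroTrack additionally injects spatial information into the membership function, which this generic version does not:

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means: alternate membership and centroid updates."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)        # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m                           # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # Distance of every point to every centre, shape (n, c).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
    return U, centers

# Two well-separated 2-D blobs stand in for object/background pixels.
X = np.vstack([np.random.default_rng(1).normal(0, 0.1, (20, 2)),
               np.random.default_rng(2).normal(5, 0.1, (20, 2))])
U, centers = fcm(X)
labels = U.argmax(axis=1)
```

Spatial variants typically re-weight each `U[i, k]` by the memberships of neighbouring pixels before the centroid update, which is one way the "most discriminative segments" can be favoured.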
Quantitative Precipitation Nowcasting: A Lagrangian Pixel-Based Approach
Short-term high-resolution precipitation forecasting has important implications for navigation, flood forecasting, and other hydrological and meteorological concerns. This article introduces a pixel-based algorithm for Short-term Quantitative Precipitation Forecasting (SQPF) using radar-based rainfall data. The proposed algorithm, called Pixel-Based Nowcasting (PBN), tracks severe storms with a hierarchical mesh-tracking algorithm to capture storm advection in space and time at high resolution from radar images. The extracted advection field is then extended to nowcast the rainfall field over the next 3 hr based on a pixel-based Lagrangian dynamic model. The proposed algorithm is compared with two other nowcasting algorithms (WCN: Watershed-Clustering Nowcasting and PER: PERsistency) for ten thunderstorm events over the conterminous United States. Object-based verification metrics and traditional statistics have been used to evaluate the performance of the proposed algorithm. It is shown that the proposed algorithm is superior to the comparison algorithms and is effective in tracking and predicting severe storm events for the next few hours. © 2012 Elsevier B.V.
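The Lagrangian extrapolation step underlying such nowcasts can be sketched as advecting each pixel's rain value along the estimated motion field. The backward nearest-neighbour scheme below is a generic illustration, not the PBN algorithm itself:

```python
import numpy as np

def lagrangian_nowcast(rain, u, v, steps=1):
    """Advect a 2-D rain field along a per-pixel displacement field (u, v).

    Backward (semi-Lagrangian) scheme: each pixel takes the value found
    one displacement vector upstream, with nearest-neighbour lookup and
    coordinates clipped to the grid.
    """
    ny, nx = rain.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    out = rain
    for _ in range(steps):
        src_y = np.clip(np.rint(yy - v), 0, ny - 1).astype(int)
        src_x = np.clip(np.rint(xx - u), 0, nx - 1).astype(int)
        out = out[src_y, src_x]
    return out

# Uniform eastward motion of 1 pixel/step moves a rain cell to the right.
rain = np.zeros((5, 5)); rain[2, 1] = 1.0
u = np.ones((5, 5)); v = np.zeros((5, 5))
fcst = lagrangian_nowcast(rain, u, v, steps=2)  # cell ends up at column 3
```

Operational schemes usually interpolate sub-pixel source coordinates and blend in growth/decay terms; the persistence baseline (PER) corresponds to `steps=0`, i.e., no advection at all.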
A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community
In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL models.
Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing
Can We "Sense" the Call of The Ocean? Current Advances in Remote Sensing Computational Imaging for Marine Debris Monitoring
Especially due to the unconscious use of petroleum products, the ocean faces a growing danger: plastic pollution. Plastic pollutes not only the ocean but also the air and food directly, whilst endangering ocean wildlife through ingestion and entanglement. During the last decade in particular, public initiatives and academic institutions have spent an enormous amount of time on finding possible solutions to marine plastic pollution. Remote sensing imagery sits in a crucial place for these efforts, since it provides highly informative earth observation products. Despite this, detection and monitoring of the marine environment in the context of plastic pollution is still in its early stages, and the current technology leaves important room for development in the computational efforts. This paper contributes to the literature with a thorough and rich review, and aims to highlight notable literature milestones in marine debris monitoring applications by promoting the computational imaging methodology behind these approaches.
Comment: 25 pages, 11 figures
U.S. Unmanned Aerial Vehicles (UAVS) and Network Centric Warfare (NCW) impacts on combat aviation tactics from Gulf War I through 2007 Iraq
Unmanned aerial vehicles (UAVs) are an increasingly important element of many modern militaries. Their success on battlefields in Afghanistan, Iraq, and around the globe has driven demand for a variety of types of unmanned vehicles. Their proven value consists in low risk and low cost, and their capabilities include persistent surveillance, tactical and combat reconnaissance, resilience, and dynamic re-tasking. This research evaluates past, current, and possible future operating environments for several UAV platforms to survey the changing dynamics of combat-aviation tactics and make recommendations regarding UAV employment scenarios to the Turkish military. While UAVs have already established their importance in military operations, ongoing evaluations of UAV operating environments, capabilities, technologies, concepts, and organizational issues inform the development of future systems. To what extent will UAV capabilities increasingly define tomorrow's missions, requirements, and results in surveillance and combat tactics? Integrating UAVs and concepts of operations (CONOPS) on future battlefields is an emergent science. Managing a transition from manned to unmanned and remotely piloted aviation platforms involves new technological complexity and new aviation personnel roles, especially for combat pilots. Managing a UAV military transformation involves cultural change, which can be measured in decades.
http://archive.org/details/usunmannedaerial109454211
Turkish Air Force authors. Approved for public release; distribution is unlimited.
Space-based Global Maritime Surveillance. Part I: Satellite Technologies
Maritime surveillance (MS) is crucial for search and rescue operations,
fishery monitoring, pollution control, law enforcement, migration monitoring,
and national security policies. Since the early days of seafaring, MS has been
a critical task for providing security in human coexistence. Several
generations of sensors providing detailed maritime information have become
available for large offshore areas in real time: maritime radar sensors in the
1950s and the automatic identification system (AIS) in the 1990s among them.
However, ground-based maritime radars and AIS data do not always provide a
comprehensive and seamless coverage of the entire maritime space. Therefore,
the exploitation of space-based sensor technologies installed on satellites
orbiting around the Earth, such as satellite AIS data, synthetic aperture
radar, optical sensors, and global navigation satellite systems reflectometry,
becomes crucial for MS and to complement the existing terrestrial technologies.
In the first part of this work, we provide an overview of the main available
space-based sensor technologies and present the advantages and limitations of
each technology in the scope of MS. The second part, related to artificial
intelligence, signal processing and data fusion techniques, is provided in a
companion paper, titled: "Space-based Global Maritime Surveillance. Part II:
Artificial Intelligence and Data Fusion Techniques" [1].
Comment: This paper has been submitted to IEEE Aerospace and Electronic Systems Magazine