Combinatorial Channel Signature Modulation for Wireless ad-hoc Networks
In this paper we introduce a novel modulation and multiplexing method which
facilitates highly efficient and simultaneous communication between multiple
terminals in wireless ad-hoc networks. We term this method Combinatorial
Channel Signature Modulation (CCSM). The CCSM method is particularly efficient
in situations where communicating nodes operate in highly time dispersive
environments. This is all achieved with a minimal MAC layer overhead, since all
users are allowed to transmit and receive at the same time/frequency (full
simultaneous duplex). The CCSM method has its roots in sparse modelling and the
receiver is based on compressive sampling techniques. Towards this end, we
develop a new low complexity algorithm termed Group Subspace Pursuit. Our
analysis suggests that CCSM at least doubles the throughput when compared to
the state of the art.
Comment: 6 pages, 7 figures, to appear in IEEE International Conference on Communications ICC 201
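The abstract does not detail the proposed Group Subspace Pursuit algorithm. As a rough illustration of the family it extends, here is a minimal sketch of standard Subspace Pursuit for recovering a K-sparse vector from compressive measurements; all dimensions, seeds, and values are made up for the example and are not from the paper.

```python
import numpy as np

def subspace_pursuit(A, y, K, max_iter=20):
    """Standard Subspace Pursuit: recover a K-sparse x from y = A @ x."""
    # Initial support: the K columns most correlated with y.
    support = np.argsort(np.abs(A.T @ y))[-K:]
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef
    for _ in range(max_iter):
        # Expand with K new candidates, refit, prune back to K.
        cand = np.argsort(np.abs(A.T @ r))[-K:]
        merged = np.union1d(support, cand)
        c_m, *_ = np.linalg.lstsq(A[:, merged], y, rcond=None)
        new_support = merged[np.argsort(np.abs(c_m))[-K:]]
        c_new, *_ = np.linalg.lstsq(A[:, new_support], y, rcond=None)
        r_new = y - A[:, new_support] @ c_new
        if np.linalg.norm(r_new) >= np.linalg.norm(r):
            break                                # no further improvement
        support, coef, r = new_support, c_new, r_new
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Synthetic check: 4 active entries, 50 measurements, 100 unknowns.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[7, 23, 55, 90]] = [1.0, -0.8, 0.5, 2.0]
y = A @ x_true
x_hat = subspace_pursuit(A, y, K=4)
```

The group variant in the paper additionally selects whole blocks of correlated atoms at once; the expand/refit/prune loop above is the common skeleton.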
Multimodal learning from visual and remotely sensed data
Autonomous vehicles are often deployed to perform exploration and monitoring missions in unseen environments. In such applications, there is often a compromise between the information richness and the acquisition cost of different sensor modalities. Visual data is usually very information-rich, but requires in-situ acquisition with the robot. In contrast, remotely sensed data has a larger range and footprint, and may be available prior to a mission. In order to effectively and efficiently explore and monitor the environment, it is critical to make use of all of the sensory information available to the robot. One important application is the use of an Autonomous Underwater Vehicle (AUV) to survey the ocean floor. AUVs can take high resolution in-situ photographs of the sea floor, which can be used to classify different regions into various habitat classes that summarise the observed physical and biological properties. This is known as benthic habitat mapping. However, since AUVs can only image a tiny fraction of the ocean floor, habitat mapping is usually performed with remotely sensed bathymetry (ocean depth) data, obtained from shipborne multibeam sonar. With the recent surge in unsupervised feature learning and deep learning techniques, a number of previous techniques have investigated the concept of multimodal learning: capturing the relationship between different sensor modalities in order to perform classification and other inference tasks. This thesis proposes related techniques for visual and remotely sensed data, applied to the task of autonomous exploration and monitoring with an AUV. Doing so enables more accurate classification of the benthic environment, and also assists autonomous survey planning. The first contribution of this thesis is to apply unsupervised feature learning techniques to marine data. 
The proposed techniques are used to extract features from image and bathymetric data separately, and the performance is compared with that of features traditionally used for each sensor modality. The second contribution is the development of a multimodal learning architecture that captures the relationship between the two modalities. The model is robust to missing modalities, which means it can extract better features for large-scale benthic habitat mapping, where only bathymetry is available. The model is used to perform classification with various combinations of modalities, demonstrating that multimodal learning provides a large performance improvement over the baseline case. The third contribution is an extension of the standard learning architecture using a gated feature learning model, which enables the model to better capture the ‘one-to-many’ relationship between visual and bathymetric data. This opens up further inference capabilities, with the ability to predict visual features from bathymetric data, which allows image-based queries. Such queries are useful for AUV survey planning, especially when supervised labels are unavailable. The final contribution is the novel derivation of a number of information-theoretic measures to aid survey planning. The proposed measures predict the utility of unobserved areas, in terms of the amount of expected additional visual information. As such, they are able to produce utility maps over a large region that can be used by the AUV to determine the most informative locations from a set of candidate missions. The models proposed in this thesis are validated through extensive experiments on real marine data. Furthermore, the introduced techniques have applications in various other areas within robotics. As such, this thesis concludes with a discussion on the broader implications of these contributions, and the future research directions that arise as a result of this work.
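The thesis's specific information-theoretic measures are not given in the abstract. A common, much simpler surrogate for "expected additional information" is the Shannon entropy of predicted class distributions over a survey grid; the sketch below (grid size, class count, and probabilities all invented for illustration) shows how such a utility map ranks candidate cells.

```python
import numpy as np

def entropy_utility_map(class_probs):
    """Shannon entropy (nats) of per-cell class distributions.

    class_probs: (H, W, C) array; each cell holds a predicted
    distribution over C habitat classes (e.g. from bathymetry).
    """
    p = np.clip(class_probs, 1e-12, 1.0)     # guard log(0)
    return -(p * np.log(p)).sum(axis=-1)

# Toy 2x2 grid, 3 habitat classes: one certain cell, one uniform cell.
probs = np.array([[[1.0, 0.0, 0.0], [0.5, 0.5, 0.0]],
                  [[0.8, 0.1, 0.1], [1/3, 1/3, 1/3]]])
utility = entropy_utility_map(probs)
best = np.unravel_index(np.argmax(utility), utility.shape)
# The uniform cell (1, 1) is the most informative to visit.
```

A planner would then prefer missions covering high-entropy cells, which is the same "visit where prediction is most uncertain" intuition the abstract describes.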
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial for SLAM users. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
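The standard SLAM formulation the survey refers to is maximum a posteriori estimation over a factor graph of poses and constraints. A toy 1D instance with synthetic, deliberately inconsistent odometry and a single loop closure reduces to linear least squares (all measurement values here are invented for illustration):

```python
import numpy as np

# 1D pose graph: poses x0..x3, x0 anchored at the origin.
# Odometry constraints x_{i+1} - x_i = u_i and one loop-closure
# constraint x3 - x0 = 3.0; the measurements disagree slightly,
# so the optimizer has an error to reconcile.
odometry = [1.1, 0.9, 1.05]
loop_closure = 3.0

# Unknowns: x1, x2, x3 (x0 = 0). One row per constraint.
A = np.array([[ 1.0,  0.0, 0.0],   # x1 - x0 = 1.1
              [-1.0,  1.0, 0.0],   # x2 - x1 = 0.9
              [ 0.0, -1.0, 1.0],   # x3 - x2 = 1.05
              [ 0.0,  0.0, 1.0]])  # x3 - x0 = 3.0 (loop closure)
b = np.array(odometry + [loop_closure])

x, *_ = np.linalg.lstsq(A, b, rcond=None)
# The loop closure pulls x3 toward 3.0 and redistributes the
# accumulated odometry error along the whole trajectory.
```

Real SLAM systems solve the same kind of problem with nonlinear 2D/3D pose constraints and iterative relinearization, but the least-squares core is the same.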
Sparse representations in multi-kernel dictionaries for in-situ classification of underwater objects
2017 Spring. Includes bibliographical references.
The performance of kernel-based pattern classification algorithms depends strongly on the selection of the kernel function and its parameters. Consequently, in recent years there has been growing interest in machine learning algorithms that select kernel functions automatically from a predefined dictionary of kernels. In this work we develop a general mathematical framework for multi-kernel classification that uses sparse representation theory to automatically select the kernel functions and parameters that best represent a set of training samples. We construct a dictionary of different kernel functions with different parametrizations. Using a sparse approximation algorithm, we represent the ideal score of each training sample as a sparse linear combination of the kernel functions in the dictionary evaluated at all training samples. Moreover, we incorporate the operator's high-level concepts into the learning process by applying in-situ learning to new, unseen samples whose scores cannot be represented suitably by the previously selected representative samples. Finally, we evaluate the viability of this method for in-situ classification of a database of underwater object images. Results are presented in terms of ROC curves, confusion matrices, and correct classification rate.
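The abstract does not name the sparse approximation algorithm used. As a generic stand-in, the sketch below builds a multi-kernel dictionary (RBF kernels at three widths, each evaluated at every training sample) and sparse-codes the ideal score vector with ISTA, a standard proximal-gradient solver; the data, widths, and regularization are all invented for illustration.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian (RBF) kernel matrix between row-sample sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def ista(D, y, lam=0.1, n_iter=500):
    """Sparse coding of y in dictionary D: min 0.5||Da-y||^2 + lam||a||_1."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2       # 1 / Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a - step * (D.T @ (D @ a - y))       # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrink
    return a

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 2))                 # training samples
y = np.sign(X[:, 0])                             # ideal +/-1 class scores
# Multi-kernel dictionary: RBF kernels at three widths, each evaluated
# at every training sample -> a 30 x 90 dictionary.
D = np.hstack([rbf_kernel(X, X, g) for g in (0.1, 1.0, 10.0)])
a = ista(D, y)
selected = np.flatnonzero(a)    # chosen (kernel width, sample) atoms
```

The nonzero coefficients simultaneously pick which kernel widths and which representative samples explain the scores, which is the selection effect the paper describes.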
Sparse channel estimation for multicarrier underwater acoustic communication : from subspace methods to compressed sensing
Author Posting. © IEEE, 2009. This article is posted here by permission of IEEE for personal use, not for redistribution. The definitive version was published in IEEE Transactions on Signal Processing 58 (2010): 1708-1721, doi:10.1109/TSP.2009.2038424.
In this paper, we investigate various channel estimators
that exploit channel sparsity in the time and/or Doppler
domain for a multicarrier underwater acoustic system. We use a
path-based channel model, where the channel is described by a
limited number of paths, each characterized by a delay, Doppler
scale, and attenuation factor, and derive the exact inter-carrier interference
(ICI) pattern. For channels that have limited Doppler
spread we show that subspace algorithms from the array processing
literature, namely Root-MUSIC and ESPRIT, can be applied
for channel estimation. For channels with Doppler spread, we
adopt a compressed sensing approach, in the form of Orthogonal
Matching Pursuit (OMP) and Basis Pursuit (BP) algorithms, and
utilize overcomplete dictionaries with an increased path delay
resolution. Numerical simulation and experimental data of an
OFDM block-by-block receiver are used to evaluate the proposed
algorithms in comparison to the conventional least-squares (LS)
channel estimator. We observe that subspace methods can tolerate
small to moderate Doppler effects, and outperform the LS
approach when the channel is indeed sparse. On the other hand,
compressed sensing algorithms uniformly outperform the LS and
subspace methods. Coupled with a channel equalizer mitigating
ICI, the compressed sensing algorithms can effectively handle
channels with significant Doppler spread.C. Berger, S. Zhou, and P. Willett are supported by ONR
grants N00014-09-10613, N00014-07-1-0805, and N00014-09-1-0704
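The OMP stage of such a receiver fits in a few lines. This toy version simplifies to a unit-spaced delay grid (a unitary DFT dictionary) with no Doppler, whereas the paper's dictionaries are overcomplete with finer delay resolution; the subcarrier count, path delays, and gains below are made up for the example.

```python
import numpy as np

def omp(A, y, n_paths):
    """Orthogonal Matching Pursuit: greedily select dictionary atoms."""
    support, r = [], y.copy()
    for _ in range(n_paths):
        k = int(np.argmax(np.abs(A.conj().T @ r)))   # best-matching atom
        support.append(k)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef                 # refit, update residual
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = coef
    return x

n_sc = 64                                   # OFDM subcarriers
k = np.arange(n_sc)[:, None]
tau = np.arange(n_sc)[None, :]              # integer-sample delay grid
# Each column: frequency response of a unit-gain path at delay tau.
A = np.exp(-2j * np.pi * k * tau / n_sc) / np.sqrt(n_sc)

# Sparse channel: three discrete paths (delay index, complex gain).
h = np.zeros(n_sc, dtype=complex)
h[[5, 20, 41]] = [1.0, 0.6j, 0.3]
y = A @ h                                   # noiseless received pilots
h_hat = omp(A, y, n_paths=3)
```

Refining `tau` to fractional-sample spacing yields the overcomplete, higher-resolution dictionary the paper uses; the greedy loop is unchanged.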
Long-Term Simultaneous Localization and Mapping in Dynamic Environments.
One of the core competencies required for autonomous mobile robotics is the ability to use sensors to perceive the environment. From this noisy sensor data, the robot must build a representation of the environment and localize itself within this representation. This process, known as simultaneous localization and mapping (SLAM), is a prerequisite for almost all higher-level autonomous behavior in mobile robotics. By associating the robot's sensory observations as it moves through the environment, and by observing the robot's ego-motion through proprioceptive sensors, constraints are placed on the trajectory of the robot and the configuration of the environment. This results in a probabilistic optimization problem to find the most likely robot trajectory and environment configuration given all of the robot's previous sensory experience. SLAM has been well studied under the assumptions that the robot operates for a relatively short time period and that the environment is essentially static during operation. However, performing SLAM over long time periods while modeling the dynamic changes in the environment remains a challenge.
The goal of this thesis is to extend the capabilities of SLAM to enable long-term autonomous operation in dynamic environments. The contribution of this thesis has three main components: First, we propose a framework for controlling the computational complexity of the SLAM optimization problem so that it does not grow unbounded with exploration time. Second, we present a method to learn visual feature descriptors that are more robust to changes in lighting, allowing for improved data association in dynamic environments. Finally, we use the proposed tools in SLAM systems that explicitly model the dynamics of the environment in the map by representing each location as a set of example views that capture how the location changes with time.
We experimentally demonstrate that the proposed methods enable long-term SLAM in dynamic environments using a large, real-world vision and LIDAR dataset collected over the course of more than a year. This dataset captures a wide variety of dynamics: from short-term scene changes including moving people, cars, changing lighting, and weather conditions; to long-term dynamics including seasonal conditions and structural changes caused by construction.
PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/111538/1/carlevar_1.pd
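The abstract does not describe the thesis's specific complexity-control framework. One standard idea in long-term SLAM is fixed-lag smoothing: old poses are marginalized into a prior so each solve has constant size no matter how long the robot runs. A toy 1D sketch, assuming unit information weights and made-up measurements:

```python
import numpy as np

def solve_window(prior, odometry):
    """Linear least squares over one window of a 1D pose chain.

    `prior` summarizes all marginalized history as a unit-weight
    constraint on the window's first pose, so each solve involves
    only len(odometry) + 1 variables regardless of trajectory length.
    """
    n = len(odometry)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    A[0, 0] = 1.0                     # prior: x_0 = prior
    b[0] = prior
    for i, u in enumerate(odometry):
        A[i + 1, i] = -1.0            # odometry: x_{i+1} - x_i = u
        A[i + 1, i + 1] = 1.0
        b[i + 1] = u
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# 100 unit odometry steps processed in constant-size windows of 5.
steps = [1.0] * 100
window = 5
prior = 0.0
for t in range(0, len(steps), window):
    x = solve_window(prior, steps[t:t + window])
    prior = x[-1]                     # carry the last estimate forward
# prior now estimates the final pose (100.0 for these measurements).
```

Carrying only a mean forward discards cross-correlations; real systems propagate the full marginal information matrix, but the constant-cost structure is the same.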