
    Multi-Sensor Data Fusion for Robust Environment Reconstruction in Autonomous Vehicle Applications

    In autonomous vehicle systems, understanding the surrounding environment is essential for an intelligent vehicle to make every movement decision on the road. Knowledge of the neighboring environment enables the vehicle to detect moving objects and, in particular, irregular events such as jaywalking or a sudden lane change, and thus avoid collisions. This local situational awareness depends largely on the advanced sensors (e.g., camera, LIDAR, RADAR) mounted on the vehicle. The main focus of this work is to formulate the problem of reconstructing the vehicle environment using point cloud data from the LIDAR and RGB color images from the camera. Building on the widely used iterative closest point (ICP) registration method, an expectation-maximization ICP (EM-ICP) technique is proposed to automatically mosaic multiple point cloud sets into a larger one. Motion trajectories of the moving objects are then analyzed to detect irregular events. Another contribution of this work is the fusion of color information (from the RGB images captured by the camera) with the three-dimensional point cloud data for a richer representation of the environment. Finally, histogram of oriented gradients (HOG) based techniques are employed to detect pedestrians and vehicles.

    Using both camera and LIDAR, an autonomous vehicle can gather information and reconstruct a map of the surrounding environment up to a certain distance. The capability to communicate and cooperate among vehicles can improve automated driving decisions by providing an extended and more precise view of the surroundings. In this work, a transmission power control algorithm is studied together with an adaptive content control algorithm to achieve a more accurate map of the vehicle environment. To exchange local sensor data among the vehicles, an adaptive communication scheme is proposed that controls the lengths and contents of the messages depending on the load of the communication channel. Exchanging this information can extend the tracking region of a vehicle beyond the area sensed by its own sensors. In this experiment, the combined effect of the power control and the message length and content control algorithms is exploited to improve the accuracy of the map of the surroundings in a cooperative automated vehicle system.
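
    The registration step above builds on the classic ICP loop: alternately match each point to its nearest neighbor in the target cloud, then solve for the rigid transform that best aligns the matches. Below is a minimal numpy/scipy sketch of plain point-to-point ICP (not the EM-ICP variant proposed in the paper; all names are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One point-to-point ICP iteration: nearest-neighbour matching,
    then the optimal rigid transform via the Kabsch/SVD method."""
    _, idx = cKDTree(dst).query(src)
    matched = dst[idx]
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=30):
    """Iterate matching and alignment until the clouds agree."""
    cur = src.copy()
    for _ in range(iters):
        R, t = icp_step(cur, dst)
        cur = cur @ R.T + t
    return cur

# toy check: recover a small known rotation + translation
rng = np.random.default_rng(0)
dst = rng.random((100, 3))
th = 0.02
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.02, -0.01, 0.01])
aligned = icp(src, dst)
print(np.abs(aligned - dst).max())   # max residual after alignment
```

    EM-ICP replaces the hard nearest-neighbor assignment with soft (probabilistic) correspondences, which makes mosaicking multiple scans more robust to noise and partial overlap.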

    The Radon Signed Cumulative Distribution Transform and its applications in classification of Signed Images

    Here we describe a new image representation technique based on the mathematics of transport and optimal transport. The method relies on a combination of the well-known Radon transform for images and a recently proposed signal representation method called the Signed Cumulative Distribution Transform. The newly proposed method generalizes previous transport-related image representation methods to arbitrary functions (images), and thus can be used in more applications. We describe the new transform and some of its mathematical properties, and demonstrate its ability to partition image classes with real and simulated data. In comparison to existing transport transform methods, as well as deep learning-based classification methods, the new transform more accurately represents the information content of signed images, and thus can be used to obtain higher classification accuracies. The implementation of the proposed method in Python is integrated into the software package PyTransKit, available on GitHub.
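
    The CDT underlying the SCDT represents a nonnegative signal by the quantile function of its normalized version; the SCDT extends this to signed signals by transforming the positive and negative parts of the Jordan decomposition separately (the full method also applies the Radon transform to reduce images to 1-D projections; see PyTransKit for the reference implementation). A simplified numpy sketch of the 1-D case, with illustrative names:

```python
import numpy as np

def cdt(x, s, n=256):
    """Cumulative Distribution Transform of a nonnegative signal s on grid x,
    taken against a uniform reference: the transform is the quantile function
    of the normalized signal, sampled at n interior probability levels."""
    pdf = np.asarray(s, float) / np.trapz(s, x)
    cdf = np.concatenate(([0.0], np.cumsum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(x))))
    cdf /= cdf[-1]
    q = np.linspace(0.5 / n, 1.0 - 0.5 / n, n)
    return np.interp(q, cdf, x)          # inverse CDF by interpolation

def scdt(x, s, n=256):
    """Signed CDT (simplified sketch): Jordan-decompose s into nonnegative
    parts and transform each, keeping the total mass of each part."""
    parts = []
    for part in (np.maximum(s, 0.0), np.maximum(-s, 0.0)):
        mass = np.trapz(part, x)
        parts.append((cdt(x, part, n) if mass > 0 else np.zeros(n), mass))
    return parts

# translation of the signal becomes a constant shift of the transform
x = np.linspace(-2.0, 3.0, 4001)
g = lambda mu: np.exp(-0.5 * ((x - mu) / 0.1) ** 2)
shift = cdt(x, g(0.5)) - cdt(x, g(0.4))
print(shift.round(3))       # ~0.1 at every quantile level
```

    This translation-to-shift property is what lets transport transforms turn nonlinear signal variations into linear ones in transform space.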

    Data-driven Identification of Parametric Governing Equations of Dynamical Systems Using the Signed Cumulative Distribution Transform

    This paper presents a novel data-driven approach to identifying the partial differential equation (PDE) parameters of a dynamical system. Specifically, we adopt a mathematical "transport" model for the solution of the dynamical system at specific spatial locations, which allows us to accurately estimate the model parameters, including those associated with structural damage. This is accomplished by means of a newly developed mathematical transform, the signed cumulative distribution transform (SCDT), which is shown to convert the general nonlinear parameter estimation problem into a simple linear regression. The approach has the additional practical advantage of requiring no a priori knowledge of the source of the excitation (or, alternatively, of the initial conditions). Using training data, we devise a coarse regression procedure to recover different PDE parameters from the PDE solution measured at a single location. Numerical experiments show that the proposed regression procedure detects and estimates PDE parameters with superior accuracy compared to a number of recently developed machine learning methods. Furthermore, a damage identification experiment conducted on a publicly available dataset provides strong evidence of the proposed method's effectiveness in structural health monitoring (SHM) applications. The Python implementation of the proposed system identification technique is integrated into the software package PyTransKit (https://github.com/rohdelab/PyTransKit).
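
    The key property exploited here is that, in a transport-transform domain, simple signal deformations act affinely on the transform, so parameter estimation reduces to linear regression. A hedged toy illustration (using empirical quantile functions as a stand-in for the SCDT, with translation as the parameter to recover; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
q = np.linspace(0.05, 0.95, 64)      # probability levels for the transform

def transform(samples):
    """Empirical quantile function, a stand-in for a CDT-type transform."""
    return np.quantile(samples, q)

# training data: sample sets generated at known parameter values tau
taus = np.linspace(-1.0, 1.0, 21)
feats = np.array([transform(rng.normal(t, 0.3, 5000)) for t in taus])

# in transform space the map tau -> features is affine, so a per-quantile
# ordinary least-squares line fit captures the model
A = np.polyfit(taus, feats, 1)       # A[0]: slopes, A[1]: intercepts

# estimate tau for an unseen signal by inverting the fitted linear model
test = transform(rng.normal(0.37, 0.3, 5000))
tau_hat = np.mean((test - A[1]) / A[0])
print(tau_hat)                       # close to the true value 0.37
```

    The paper's regression over PDE parameters follows the same pattern, with the SCDT of the measured solution in place of the quantile features used here.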

    Geodesic Properties of a Generalized Wasserstein Embedding for Time Series Analysis

    Transport-based metrics and related embeddings (transforms) have recently been used to model signal classes in which nonlinear structures or variations are present. In this paper, we study the geodesic properties of time series data under a generalized Wasserstein metric, and the geometry of their signed cumulative distribution transforms in the embedding space. Moreover, we show how understanding such geometric characteristics can add interpretability to certain time series classifiers and inspire more robust classifiers.
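
    In one dimension the Wasserstein metric and its geodesics have closed forms through quantile functions, which is what makes CDT/SCDT embeddings computationally attractive. A small numpy sketch (illustrative, not the paper's generalized metric):

```python
import numpy as np

def wasserstein2_1d(u, v, n=1000):
    """Closed-form 1-D 2-Wasserstein distance: L2 distance between the
    empirical quantile functions of the two sample sets."""
    q = np.linspace(0.5 / n, 1.0 - 0.5 / n, n)
    return np.sqrt(np.mean((np.quantile(u, q) - np.quantile(v, q)) ** 2))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 20000)
b = rng.normal(1.0, 1.0, 20000)
w = wasserstein2_1d(a, b)
print(w)            # close to 1.0, the mean shift between N(0,1) and N(1,1)

# the Wasserstein geodesic between the two interpolates quantile functions,
# so its midpoint is itself a distribution, centered halfway between them
q = np.linspace(0.0005, 0.9995, 1000)
mid = 0.5 * np.quantile(a, q) + 0.5 * np.quantile(b, q)
print(mid.mean())   # near 0.5
```

    In the embedding space the geodesic becomes a straight line between transform vectors, which is the geometric fact the paper leverages for interpretability.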

    Multi-Sensor Data Fusion For Vehicle Detection In Autonomous Vehicle Applications

    In autonomous vehicle systems, sensing the surrounding environment is important for an intelligent vehicle to make the right decision about its actions. Understanding the neighboring environment from sensing data enables the vehicle to be aware of other moving objects nearby (e.g., vehicles or pedestrians) and therefore avoid collisions. This local situational awareness depends largely on extracting information from a variety of sensors (e.g., camera, LIDAR, RADAR), each of which has its own operating conditions (e.g., lighting, range, power). One of the open issues in the reconstruction and understanding of the environment of an autonomous vehicle is how to fuse locally sensed data to support a specific decision task such as vehicle detection. In this paper, we study the problem of fusing data from camera and LIDAR sensors and propose a novel 6D (RGB+XYZ) data representation to support visual inference. This work extends the previous Position- and Intensity-included Histogram of Oriented Gradients (PIHOG or pHOG) from color space to the proposed 6D space, targeting more reliable vehicle detection than a single-sensor approach. Our experimental results validate the effectiveness of the proposed multi-sensor data fusion approach: it achieves a detection accuracy of 73% on the challenging KITTI dataset.
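
    For reference, the plain HOG descriptor that PIHOG generalizes can be sketched in a few lines of numpy. This is a minimal, unnormalized version for a grayscale image, not the paper's 6D pHOG; all names are illustrative:

```python
import numpy as np

def hog(img, cell=8, bins=9):
    """Minimal histogram-of-oriented-gradients descriptor: per-cell
    histograms of gradient orientation weighted by gradient magnitude
    (block normalization omitted for brevity)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180          # unsigned orientation
    H, W = img.shape
    out = np.zeros((H // cell, W // cell, bins))
    for i in range(H // cell):
        for j in range(W // cell):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            out[i, j], _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
    return out.ravel()

# a vertical edge puts all gradient energy into the 0-degree bin
img = np.zeros((16, 16))
img[:, 8:] = 1.0
f = hog(img)
print(f.shape)
```

    PIHOG extends this idea by histogramming over the fused 6D (RGB+XYZ) channels instead of a single intensity channel, so the descriptor reflects both appearance and geometry.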

    End-to-End Signal Classification in Signed Cumulative Distribution Transform Space

    This paper presents a new end-to-end signal classification method using the signed cumulative distribution transform (SCDT). We adopt a transport-based generative model to define the classification problem, then exploit mathematical properties of the SCDT to render the problem easier in transform domain, and solve for the class of an unknown sample using a nearest local subspace (NLS) search algorithm in SCDT domain. Experiments show that the proposed method provides highly accurate classification results while being data efficient, robust to out-of-distribution samples, and competitive in computational complexity with deep learning end-to-end classification methods. The implementation of the proposed method in Python is integrated into the software package PyTransKit (https://github.com/rohdelab/PyTransKit).
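
    The nearest local subspace idea can be illustrated independently of the SCDT: for each class, fit a low-dimensional subspace to the training samples closest to the query, and assign the query to the class whose local subspace reconstructs it with the smallest residual. A hedged numpy sketch on synthetic data (the paper applies this search in SCDT domain; all names are illustrative):

```python
import numpy as np

def nls_predict(x, class_sets, k=8, dim=4):
    """Nearest Local Subspace search: for each class, take the k training
    samples closest to x, fit a dim-dimensional local subspace via SVD, and
    return the class whose subspace reconstructs x with smallest residual."""
    best, label = np.inf, None
    for c, X in class_sets.items():
        local = X[np.argsort(np.linalg.norm(X - x, axis=1))[:k]]
        mu = local.mean(0)
        U = np.linalg.svd(local - mu, full_matrices=False)[2][:dim].T
        r = x - mu
        resid = np.linalg.norm(r - U @ (U.T @ r))
        if resid < best:
            best, label = resid, c
    return label

# toy data: two classes concentrated near different 2-D planes in R^8
rng = np.random.default_rng(0)
def make_class(axes, n=50):
    Z = np.zeros((n, 8))
    Z[:, axes] = rng.normal(size=(n, 2))
    return Z + 0.01 * rng.normal(size=(n, 8))
sets = {0: make_class((0, 1)), 1: make_class((4, 5))}

x = np.zeros(8)
x[0], x[1] = 1.2, -0.7          # lies in class 0's plane
print(nls_predict(x, sets))     # 0
```

    Searching locally, rather than fitting one global subspace per class, lets the classifier follow curved class manifolds in the transform space.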

    Invariance encoding in sliced-Wasserstein space for image classification with limited training data

    Deep convolutional neural networks (CNNs) are broadly considered the state of the art in generic end-to-end image classification. However, they are known to underperform when training data are limited, and thus require data augmentation strategies that render the method computationally expensive and not always effective. Rather than using data augmentation to encode invariances, as is typically done in machine learning, here we propose to mathematically augment a nearest subspace classification model in sliced-Wasserstein space by exploiting certain mathematical properties of the Radon Cumulative Distribution Transform (R-CDT), a recently introduced image transform. We demonstrate that for a particular type of learning problem, our mathematical solution has advantages over data augmentation with deep CNNs in terms of classification accuracy and computational complexity, and is particularly effective under a limited training data setting. The method is simple, effective, computationally efficient, non-iterative, and requires no parameters to be tuned. Python code implementing our method is available at https://github.com/rohdelab/mathematical_augmentation, and the method is integrated into the software package PyTransKit, available at https://github.com/rohdelab/PyTransKit.
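
    The core idea can be sketched as a nearest-subspace classifier whose per-class basis is extended with vectors that encode a known invariance: since translating a signal adds (approximately) a constant in CDT-type transform spaces, appending the all-ones vector to each class basis encodes translation invariance without augmenting the data. A toy numpy sketch under that assumption (not the authors' implementation; all names are illustrative):

```python
import numpy as np

def fit_subspaces(class_sets, dim=2, encode_translation=True):
    """Per class: an orthonormal basis from the SVD of the training matrix.
    Appending the all-ones vector encodes translation invariance, since a
    translated signal maps to the original plus a constant in CDT-type spaces."""
    bases = {}
    for c, X in class_sets.items():
        M = np.vstack([X, np.ones(X.shape[1])]) if encode_translation else X
        bases[c] = np.linalg.svd(M, full_matrices=False)[2][:dim].T
    return bases

def predict(x, bases):
    """Assign x to the class whose subspace reconstructs it best."""
    resid = {c: np.linalg.norm(x - U @ (U.T @ x)) for c, U in bases.items()}
    return min(resid, key=resid.get)

# toy "transform space" data: each class consists of scalings of one template
t = np.linspace(0.0, 3.0, 16)
d0, d1 = np.sin(t), np.cos(t)
rng = np.random.default_rng(0)
X0 = np.outer(rng.uniform(0.5, 2.0, 20), d0)
X1 = np.outer(rng.uniform(0.5, 2.0, 20), d1)
bases = fit_subspaces({0: X0, 1: X1})

x = 1.3 * d0 + 0.4 * np.ones(16)    # class-0 signal plus a constant shift
print(predict(x, bases))            # 0: the shift is absorbed by the basis
```

    Because the invariance is built into the subspace analytically, no augmented training samples are generated, which is the source of the computational savings the abstract describes.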