Stabilizing the Maximal Entropy Moment Method for Rarefied Gas Dynamics at Single-Precision
The maximal entropy moment method (MEM) is a systematic solution to a challenging problem: generating extended hydrodynamic equations valid for both dense and rarefied gases. However, simulating MEM requires solving a computationally expensive and ill-conditioned maximal entropy problem, which causes numerical overflow and breakdown when the numerical precision is insufficient, especially for flows such as high-speed shock waves. It also prevents modern GPUs from accelerating MEM with their enormous single-precision floating-point compute power. This paper aims to stabilize MEM, making it possible to simulate very strong normal shock waves on modern GPUs at single precision. We improve the condition number of the maximal entropy problem by proposing gauge transformations, which move not only the flow fields but also the hydrodynamic equations into a more optimal coordinate system. We address numerical overflow and breakdown in the maximal entropy problem by employing the canonical form of the distribution and a modified Newton optimization method. Moreover, we discover a counter-intuitive phenomenon: over-refining the spatial mesh beyond the mean free path degrades the stability of MEM. With these techniques, we accomplish single-precision GPU simulations of high-speed shock waves up to Mach 10 using the 35-moment MEM, while previous methods only reached Mach 4 at double precision.
Comment: 56 pages, 8 figures
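The ill-conditioned maximal entropy problem at the heart of MEM can be made concrete with a toy example. The sketch below is an illustrative assumption, not the paper's stabilized gauge-transformed scheme: it recovers the Lagrange multipliers of a 1-D maximum-entropy distribution f(v) = exp(Σₖ λₖ vᵏ) from target moments via a damped Newton iteration on the convex dual potential. The backtracking step hints at why an unguarded Newton method overflows at low precision.

```python
import math

def _dual(lam, moments, vs, dv):
    # Dual potential  Phi(lam) = integral exp(lam . phi(v)) dv  -  lam . m_target.
    # Overflow in the exponential is treated as an infinitely bad trial point.
    try:
        val = sum(math.exp(sum(l * v**k for k, l in enumerate(lam))) for v in vs) * dv
    except OverflowError:
        return float("inf")
    return val - sum(l * m for l, m in zip(lam, moments))

def _solve(A, b):
    # Tiny Gaussian elimination with partial pivoting (A is SPD here).
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def max_entropy_newton(moments, vs, dv, tol=1e-9, max_iter=200):
    """Recover lam so that f(v) = exp(sum_k lam[k] v**k) reproduces the
    target raw moments on the velocity grid vs (spacing dv)."""
    n = len(moments)
    lam = [0.0] * n
    for _ in range(max_iter):
        f = [math.exp(sum(l * v**k for k, l in enumerate(lam))) for v in vs]
        m = [sum(fi * v**k for fi, v in zip(f, vs)) * dv for k in range(n)]
        grad = [mk - tk for mk, tk in zip(m, moments)]
        if max(map(abs, grad)) < tol:
            break
        # Hessian H[j][k] = <v^(j+k)>_f : the (ill-conditioned) moment matrix.
        H = [[sum(fi * v**(j + k) for fi, v in zip(f, vs)) * dv for k in range(n)]
             for j in range(n)]
        step = _solve(H, grad)
        # Backtracking keeps Newton from overshooting into overflow.
        t, phi0 = 1.0, _dual(lam, moments, vs, dv)
        while t > 1e-12:
            trial = [l - t * s for l, s in zip(lam, step)]
            if _dual(trial, moments, vs, dv) < phi0:
                lam = trial
                break
            t *= 0.5
    return lam
```

For the standard normal targets (m₀, m₁, m₂) = (1, 0, 1), the iteration recovers λ ≈ (−½ log 2π, 0, −½), i.e. the Maxwellian.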
Plasma line generation and spectral estimation from Arecibo Observatory radar data
The incoherent scatter radar (ISR) signal spectrum is a statistical measure of Bragg-scattered radio waves from thermal fluctuations of the electron density in the ionosphere. The ISR spectrum consists of up- and down-shifted electron plasma lines and a double-humped ion-line component, associated with electron density waves governed by the dispersion relations of Langmuir and ion-acoustic waves, respectively. Such ISR spectral measurements can be conducted at the Arecibo Observatory, one of the most important centers in the world for research in radio astronomy, planetary radar, and terrestrial aeronomy [Altschuler, 2002]. Although ISR measurements have been routinely taken at Arecibo since the early 1960s, full-spectrum ISR measurements including the high-frequency plasma-line components became possible only very recently [Vierinen et al., 2017], as a result of critical upgrades in hardware configuration and computing resources. This thesis describes the estimation and analysis of the full Arecibo ISR spectrum using Arecibo line- and Gregorian-feed data collected with Echotec and USRP receivers in September 2016 and processed using GPU-based parallel programming technology. In the spectral analysis, the "CLEAN" algorithm is used to deconvolve the measured ISR spectrograms from the frequency/height mixing caused by the finite pulse length effect. CLEANed spectrograms are subsequently fitted to a Gaussian spectral model at each height to extract an estimate of the plasma-line frequency for each height.
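The pulse-length deconvolution step can be illustrated with a minimal Hogbom-style CLEAN in one dimension (a generic sketch, not the exact Arecibo processing chain): iteratively locate the strongest residual peak, attribute a fraction of it to a point component, and subtract the correspondingly shifted and scaled instrumental response.

```python
def clean_1d(dirty, psf, gain=0.1, n_iter=500, threshold=0.0):
    """Hogbom-style 1-D CLEAN deconvolution (toy sketch).

    dirty: the measured spectrum; psf: the known instrumental response,
    assumed centered at its own peak. Returns the CLEAN components and
    the final residual.
    """
    residual = list(dirty)
    components = [0.0] * len(dirty)
    center = max(range(len(psf)), key=lambda i: abs(psf[i]))
    for _ in range(n_iter):
        # Locate the strongest remaining feature.
        peak = max(range(len(residual)), key=lambda i: abs(residual[i]))
        if abs(residual[peak]) <= threshold:
            break
        # Attribute a fraction (the loop gain) of it to a point component.
        amp = gain * residual[peak] / psf[center]
        components[peak] += amp
        # Subtract the shifted, scaled PSF from the residual.
        for j, p in enumerate(psf):
            k = peak + (j - center)
            if 0 <= k < len(residual):
                residual[k] -= amp * p
    return components, residual
```

With a well-separated pair of point components, the loop recovers the true amplitudes and drives the residual to the threshold.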
Phase Transition in Extended Thermodynamics Triggers Sub-shocks
Extended thermodynamics commonly uses polynomial moments to model non-equilibrium transport, but faces a crisis due to sub-shocks: anomalous discontinuities in gas properties that appear when predicting shock waves. The cause of sub-shocks is still unclear, challenging the validity of extended thermodynamics. This paper reveals, for the first time, that sub-shocks arise from intrinsic limitations of polynomials that lead to a discontinuous phase transition. Extended thermodynamics therefore requires alternative moments beyond polynomials to avoid sub-shocks.
Comment: 5 pages, 4 figures
Enhanced Learning Strategies for Tactile Shape Estimation and Grasp Planning of Unknown Objects
Grasping is one of the key capabilities for a robot operating and interacting with humans in a real environment. Conventional approaches require accurate information on both the object shape and the robotic system model; their performance can therefore be easily degraded by noisy sensor data or modeling errors. Moreover, identifying the shape of an unknown object under vision-denied conditions remains a challenging problem in the robotics field. To address these issues, this thesis investigates the estimation of unknown object shapes using tactile exploration, and task-oriented grasp planning for novel objects using enhanced learning techniques.
In order to rapidly estimate the shape of an unknown object, this thesis presents a novel multi-fidelity-based optimal sampling method that improves on existing shape estimation via tactile exploration. Gaussian process regression is used for implicit surface modeling with a sequential sampling strategy. The main objective is to make the selection of sample points more efficient and systematic, so that the unknown shape can be estimated quickly and accurately from highly limited sample points (e.g., fewer than 1% of the data points of the true shape). Specifically, we propose to select the next best sample point based on two optimization criteria: 1) the mutual information (MI) for uncertainty reduction, and 2) the local curvature for fidelity enhancement. The combination of these two objectives leads to an optimal sampling process that balances exploration of the whole shape against exploitation of local areas where higher fidelity (i.e., denser sampling) is required. Simulation and experimental results demonstrate the advantage of the proposed method in estimation speed and accuracy over the conventional approach, allowing us to reconstruct recognizable 3D shapes using only around 0.4% of the original data set, optimally selected.
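A minimal sketch of the sequential sampling loop, under simplifying assumptions: a 1-D toy surface, an RBF-kernel Gaussian process, posterior variance standing in for the mutual-information term, and a hypothetical user-supplied `curvature` callback as the fidelity measure. This is not the thesis's method, only the shape of the idea.

```python
import math

def rbf(a, b, ell=0.5):
    # Squared-exponential kernel on scalar inputs.
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting (kernel matrices are small here).
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, xstar, noise=1e-6, ell=0.5):
    # Standard GP regression: mean k*^T K^-1 y, variance k(x*,x*) - k*^T K^-1 k*.
    n = len(xs)
    K = [[rbf(xs[i], xs[j], ell) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    k = [rbf(x, xstar, ell) for x in xs]
    alpha = solve(K, list(ys))
    mean = sum(ki * ai for ki, ai in zip(k, alpha))
    v = solve(K, k)
    var = rbf(xstar, xstar, ell) - sum(ki * vi for ki, vi in zip(k, v))
    return mean, max(var, 0.0)

def next_sample(xs, ys, candidates, curvature=None, weight=1.0):
    # Next-best-sample: posterior variance (exploration) plus an optional
    # local-curvature bonus (exploitation of high-detail regions).
    def score(x):
        var = gp_posterior(xs, ys, x)[1]
        return var + (weight * curvature(x) if curvature else 0.0)
    return max(candidates, key=score)
```

With data at 0 and 1, the candidate farthest from all observations wins on the variance term, mirroring the exploration half of the criterion.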
With the object shape available, this thesis also introduces a knowledge-based approach to quickly generate a task-oriented grasp for a novel object. A comprehensive training dataset, consisting of specific tasks and geometrical and physical knowledge of grasping, is built up from physical experiments. To analyze and efficiently utilize the training data, a multi-step clustering algorithm is developed based on a self-organizing map. A number of representative grasps are then selected from the entire training dataset and used to generate a suitable grasp for a novel object; the number of representative grasps is determined automatically using the proposed auto-growing method. In addition, to improve the accuracy and efficiency of the proposed clustering algorithm, we also develop a novel method to localize the initial centroids while capturing the outliers. Simulation results illustrate that the proposed initialization method and the auto-growing method outperform conventional approaches in terms of accuracy and efficiency. Furthermore, the proposed knowledge-based grasp planning is validated on a real robot. The results demonstrate the effectiveness of this approach for generating task-oriented grasps for novel objects.
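The clustering stage can be sketched with a minimal self-organizing map on a one-dimensional lattice (an illustrative toy with deterministic initialization, not the thesis's multi-step algorithm, its auto-growing method, or its outlier-aware initialization; the 2-D "grasp feature" vectors are assumed):

```python
import math

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_som(data, n_nodes=4, n_epochs=60, lr0=0.5, radius0=1.5):
    """Fit a 1-D lattice of prototype vectors to the data (toy SOM)."""
    dim = len(data[0])
    lo = [min(x[d] for x in data) for d in range(dim)]
    hi = [max(x[d] for x in data) for d in range(dim)]
    # Spread the lattice across the data's bounding box (deterministic init).
    nodes = [[lo[d] + (hi[d] - lo[d]) * i / (n_nodes - 1) for d in range(dim)]
             for i in range(n_nodes)]
    for epoch in range(n_epochs):
        frac = epoch / n_epochs
        lr = lr0 * (1.0 - frac)                    # learning rate decays
        radius = max(radius0 * (1.0 - frac), 0.5)  # neighborhood shrinks
        for x in data:
            # Best-matching unit: the prototype nearest to this sample.
            bmu = min(range(n_nodes), key=lambda i: dist2(nodes[i], x))
            for i in range(n_nodes):
                # Pull the BMU and its lattice neighbors toward the sample.
                h = math.exp(-((i - bmu) ** 2) / (2.0 * radius ** 2))
                if h > 1e-3:
                    for d in range(dim):
                        nodes[i][d] += lr * h * (x[d] - nodes[i][d])
    return nodes
```

Representative grasps would then be read off as the converged prototypes; in the thesis, the auto-growing method additionally adapts the number of nodes.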
Deep Neural Networks for Network Intrusion Detection
Networks have become an indispensable part of people's lives. With the rapid development of new technologies such as 5G and the Internet of Things, people are increasingly dependent on networks, and the scale and complexity of networks are ever-growing. As a result, cyber threats are becoming more diverse, frequent, and sophisticated, posing great risks to the massively networked society: the confidential information of network users can be leaked; the integrity of data transferred over the network can be tampered with; and the computing infrastructures connected to the network can be attacked. The network intrusion detection system (NIDS) therefore plays a crucial role in offering modern society a secure and reliable network communication environment.
Rule-based NIDSs are effective at identifying known cyber-attacks but ineffective against novel attacks, and hence are unable to cope with today's ever-evolving threat landscape. Machine learning (ML)-based NIDSs, with their intelligent and automated capabilities, can recognize both known and unknown attacks. Traditional ML-based designs, however, achieve high threat detection performance at the cost of a large number of false alarms, leading to alert fatigue. Advanced deep learning (DL)-based designs with deep neural networks can effectively mitigate this problem and achieve better generalization than traditional ML-based NIDSs. Yet existing DL-based designs are not mature enough, and there is still large room for improvement.
To tackle the above problems, in this thesis we first propose a two-stage deep neural network architecture, DualNet, for network intrusion detection. DualNet is constructed from a general feature extraction stage and a crucial feature learning stage. It can effectively reuse spatial-temporal features in accordance with their importance to facilitate the entire learning process and mitigate the performance degradation problem that occurs in deep learning. DualNet is evaluated on the traditional, widely used NSL-KDD dataset and the modern, near-real-world UNSW-NB15 dataset, demonstrating the high detection accuracy it achieves.
Based on DualNet, we then propose an enhanced design, EnsembleNet. EnsembleNet is a deep ensemble neural network model, built from a set of specially designed deep neural networks that are integrated by an aggregation algorithm. The model also has an alert-output enhancement design to facilitate the security team's response to intrusions and hence reduce security risks. EnsembleNet is evaluated on two modern datasets, the near-real-world UNSW-NB15 dataset and the more recent and comprehensive TON_IoT dataset, showing that EnsembleNet has high generalization capability.
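A minimal sketch of the aggregation and alert-enhancement idea (the weighted soft vote and the severity labels here are illustrative assumptions, not EnsembleNet's actual aggregation algorithm or alert design):

```python
def aggregate_alerts(model_probs, weights=None, threshold=0.5):
    """Weighted soft vote over per-model attack probabilities, with a
    coarse severity label attached to aid analyst triage."""
    if weights is None:
        weights = [1.0] * len(model_probs)
    # Ensemble score: weighted mean of the member networks' outputs.
    score = sum(w * p for w, p in zip(weights, model_probs)) / sum(weights)
    # Alert-output enhancement: a severity tier alongside the binary verdict.
    severity = "high" if score >= 0.9 else "medium" if score >= 0.7 else "low"
    return {"attack": score >= threshold, "score": score, "severity": severity}
```

Averaging the members' probabilities smooths out individual false alarms, which is one way an ensemble can trade alert volume against detection performance.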
Our evaluations on the UNSW-NB15 dataset, which is close to real-world network traffic, demonstrate that DualNet and EnsembleNet outperform state-of-the-art ML-based designs, achieving higher threat detection performance while maintaining a lower false alarm rate. This also demonstrates that deep neural networks have great application potential in network intrusion detection.
DyCL: Dynamic Neural Network Compilation Via Program Rewriting and Graph Optimization
A DL compiler's primary function is to translate DNN programs written in high-level DL frameworks such as PyTorch and TensorFlow into portable executables, which can then be flexibly executed by the deployed host programs. However, existing DL compilers rely on a tracing mechanism, which involves feeding a runtime input to a neural network program and tracing the program's execution path to generate the computational graph necessary for compilation. Unfortunately, this mechanism falls short when dealing with modern dynamic neural networks (DyNNs), whose computational graphs vary depending on the inputs. Consequently, conventional DL compilers struggle to compile DyNNs into executable code accurately. To address this limitation, we propose DyCL, a general approach that enables any existing DL compiler to successfully compile DyNNs. DyCL tackles the dynamic nature of DyNNs by introducing a compilation mechanism that redistributes the control and data flow of the original DNN programs during the compilation process. Specifically, DyCL develops program analysis and program transformation techniques to convert a dynamic neural network into multiple sub-neural networks, each of which is devoid of conditional statements and is compiled independently. Furthermore, DyCL synthesizes a host module that models the control flow of the DyNNs and facilitates the invocation of the sub-neural networks. Our evaluation demonstrates the effectiveness of DyCL, achieving a 100% success rate in compiling all dynamic neural networks. Moreover, the compiled executables generated by DyCL exhibit significantly improved performance, running between … and … faster than the original DyNNs executed on general-purpose DL frameworks.
Comment: This paper has been accepted to ISSTA 202
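The core rewriting idea can be sketched in plain Python (a toy stand-in: the real system operates on traced DL programs, and `sub_net_a`, `sub_net_b`, and `host_forward` are hypothetical names): a data-dependent branch is split into conditional-free sub-networks, and a synthesized host module reproduces the control flow.

```python
# A toy "dynamic network": the branch taken depends on the input, so a
# single traced computational graph cannot capture both paths.
def dynamic_forward(x):
    if sum(x) > 0:
        return [2.0 * v for v in x]   # branch A: scaling layer
    return [v + 1.0 for v in x]       # branch B: shifting layer

# Rewrite step: each branch becomes a conditional-free sub-network that
# a conventional tracing compiler could handle in isolation.
def sub_net_a(x):
    return [2.0 * v for v in x]

def sub_net_b(x):
    return [v + 1.0 for v in x]

# Synthesized host module: it reproduces the original control flow and
# dispatches to the appropriate (separately compiled) sub-network.
COMPILED = {True: sub_net_a, False: sub_net_b}

def host_forward(x):
    return COMPILED[sum(x) > 0](x)
```

For any input, the host module and the original dynamic program agree, which is the correctness property the compilation scheme must preserve.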