
    Advanced Aviation Weather Radar Data Processing and Real-Time Implementations

    The objectives of this dissertation are to develop an enhanced, intelligent radar signal and data processing framework for aviation hazard detection, classification and monitoring, and to implement it in real time on massively parallel platforms. A variety of radar sensor platforms, including an airborne precipitation radar and several ground-based weather radars, are used to prove the concept. As a focused example of the proposed approach, this research applies evolutionary machine learning technology to turbulence level classification for civil aviation. An artificial neural network (ANN) approach based on radar observations is developed for classifying the cube root of the Eddy Dissipation Rate (EDR^{1/3}), a widely accepted measure of turbulence intensity. The approach is validated using typhoon weather data collected by the Hong Kong Observatory's (HKO) Terminal Doppler Weather Radar (TDWR) located near Hong Kong International Airport (HKIA), comparing HKO-TDWR EDR^{1/3} detections and predictions with in situ EDR^{1/3} measurements from commercial aircraft. The test results verify that the machine learning approach performs reasonably well for both the detection and prediction tasks. As a preliminary step toward acceleration with general-purpose graphics processing units (GPGPUs), this research introduces a practical approach for implementing real-time processing algorithms for general surveillance radar on NVIDIA graphics processing units (GPUs). The pulse compression algorithms are implemented using Compute Unified Device Architecture (CUDA) libraries such as the CUDA Basic Linear Algebra Subroutines (cuBLAS) and the CUDA Fast Fourier Transform (cuFFT) library, which are adapted from open-source libraries and optimized for NVIDIA GPUs. For more advanced adaptive processing algorithms, such as adaptive pulse compression, customized kernel optimization is investigated. A statistical optimization approach is developed for this purpose that requires little knowledge of the physical configuration of the kernels, and it is found to improve performance significantly. Benchmark performance is compared with CPU performance in terms of processing acceleration. The proposed implementation framework can be used in various radar systems, including ground-based phased-array radar, airborne sense-and-avoid radar, and aerospace surveillance radar. After investigating the GPGPU-based radar signal processing chain, the machine learning approach is benchmarked on an embedded GPU platform. The results indicate that the turbulence detection method developed in this research can meet real-time requirements as well as Size, Weight and Power (SWaP) restrictions on embedded GPGPU platforms.
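
    The pulse compression step mentioned above is, at its core, matched filtering of the received signal against the transmitted waveform, which the cited cuFFT/cuBLAS implementation carries out in the frequency domain. Below is a minimal NumPy sketch of that idea; the chirp parameters and function names are illustrative assumptions rather than the dissertation's code, and a GPU port would replace the np.fft calls with cuFFT and batch over range lines.

        import numpy as np

        def pulse_compress(rx, tx):
            """Frequency-domain matched filtering (pulse compression) sketch.

            rx: received baseband samples; tx: transmitted waveform replica.
            """
            n = len(rx) + len(tx) - 1                 # full correlation length
            RX = np.fft.fft(rx, n)
            TX = np.fft.fft(tx, n)
            return np.fft.ifft(RX * np.conj(TX))      # cross-correlate with the replica

        # Illustrative linear-FM (chirp) pulse and a noisy echo at a known delay.
        fs, T, B = 1e6, 50e-6, 200e3                  # sample rate, pulse width, bandwidth
        t = np.arange(int(fs * T)) / fs
        tx = np.exp(1j * np.pi * (B / T) * t**2)      # baseband chirp replica
        rx = np.zeros(2048, dtype=complex)
        rx[300:300 + len(tx)] = tx                    # echo delayed by 300 samples
        rx += 0.1 * (np.random.randn(2048) + 1j * np.random.randn(2048))

        profile = np.abs(pulse_compress(rx, tx))
        print("peak at sample", int(np.argmax(profile)))  # expect ~300, the echo delay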

    GPU Computing for Cognitive Robotics

    This thesis presents the first investigation of the impact of GPU computing on cognitive robotics by providing a series of novel experiments in the area of action and language acquisition in humanoid robots and computer vision. Cognitive robotics is concerned with endowing robots with high-level cognitive capabilities to enable the achievement of complex goals in complex environments. Reaching the ultimate goal of developing cognitive robots will require tremendous amounts of computational power, which was until recently provided mostly by standard CPU processors. CPU cores are optimised for serial code execution at the expense of parallel execution, which renders them relatively inefficient for high-performance computing applications. The ever-increasing market demand for high-performance, real-time 3D graphics has evolved the GPU into a highly parallel, multithreaded, many-core processor with extraordinary computational power and very high memory bandwidth. These vast computational resources of modern GPUs can now be exploited by most cognitive robotics models, which tend to be inherently parallel. Various cognitive models have been developed that address important scientific questions concerning action-language acquisition and computer vision. While these models have provided important scientific insights, their complexity and scope have not advanced much in recent years. The experimental tasks as well as the scale of these models are often minimised to avoid excessive training times, which grow exponentially with the number of neurons and the amount of training data. This impedes further progress towards complex neurocontrollers that could bring cognitive robotics research a step closer to the ultimate goal of creating intelligent machines. This thesis presents several cases where applying GPU computing to cognitive robotics algorithms resulted in large-scale neurocontrollers of previously unseen complexity, enabling the novel experiments described herein. European Commission Seventh Framework Programme.
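
    As a rough illustration of the parallelism argument above, the sketch below times a large fully connected layer, the dominant cost in large neurocontrollers, on CPU and, when available, on GPU using PyTorch. The layer and batch sizes are arbitrary assumptions chosen only to make the comparison visible, not figures from the thesis.

        import time
        import torch

        def time_forward(device, batch=1024, n_in=2048, n_out=2048, iters=10):
            """Average time for a dense-layer forward pass on the given device."""
            layer = torch.nn.Linear(n_in, n_out).to(device)
            x = torch.randn(batch, n_in, device=device)
            if device.type == "cuda":
                torch.cuda.synchronize()
            start = time.perf_counter()
            with torch.no_grad():
                for _ in range(iters):
                    _ = torch.tanh(layer(x))          # neuron activations for one batch
            if device.type == "cuda":
                torch.cuda.synchronize()              # wait for queued GPU kernels
            return (time.perf_counter() - start) / iters

        cpu_t = time_forward(torch.device("cpu"))
        print(f"CPU: {cpu_t * 1e3:.1f} ms per batch")
        if torch.cuda.is_available():
            gpu_t = time_forward(torch.device("cuda"))
            print(f"GPU: {gpu_t * 1e3:.1f} ms per batch ({cpu_t / gpu_t:.0f}x faster)")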

    Efficient and accurate stereo matching for cloth manipulation

    Due to recent developments in robotics, research on robots that can assist with everyday household tasks, and robotic cloth manipulation in particular, has become popular in recent years. Stereo matching forms a crucial part of robotic vision and aims to derive depth information from image pairs captured by stereo cameras. Although stereo vision is widely adopted for cloth manipulation robots in the research community, it remains a challenging task: robotic vision requires very accurate depth output in a relatively short timespan in order to perform cloth manipulation successfully in real time. In this thesis, we aim to develop a stereo-matching-based robotic vision system that is both efficient and effective for robotic cloth manipulation. Effectiveness refers to the accuracy of the depth map generated by the stereo matching algorithms, which must capture enough detail for the robot to achieve the given task on cloth materials, while efficiency refers to the time the stereo matching requires to process the images. With respect to efficiency, we first explore a variety of hardware architectures, such as multi-core CPUs and graphics processors (GPUs), to accelerate stereo matching, and demonstrate that the parallelised stereo matching algorithm can be significantly accelerated, achieving 12x and 176x speed-ups on multi-core CPU and GPU respectively, compared with a single-threaded SISD (Single Instruction, Single Data) CPU implementation. In terms of effectiveness, because no cloth-based testbeds with depth-map ground truth exist for evaluating stereo matching accuracy in this context, we created five different testbeds to facilitate evaluation of stereo matching for cloth manipulation. In addition, we adapted a guided filtering algorithm into a pyramidal stereo matching framework that works directly on unrectified images, and evaluated its accuracy using the cloth testbeds. We demonstrate that the proposed approach is not only efficient but also accurate and well suited to the characteristics of cloth manipulation tasks; this also shows that applying stereo matching directly to unrectified images, rather than relying on image rectification, is both effective and efficient. Finally, we explore whether efficiency can be improved further while maintaining reasonable accuracy for robotic cloth manipulation (i.e., trading off accuracy for efficiency). We use a foveated matching algorithm, inspired by biological vision systems, and find that it is effective in making this trade-off, achieving almost the same level of accuracy for both cloth grasping and flattening tasks with a two- to three-fold acceleration. We also demonstrate that machine learning techniques can be used to predict the optimal foveation level so that the robot accomplishes cloth manipulation tasks successfully and much more efficiently. To summarize, this thesis studies stereo matching extensively, contributing to the long-term goal of developing efficient yet accurate robotic stereo matching for cloth manipulation.
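
    To make the parallelism claim concrete, the sketch below implements a minimal block-matching stereo step in NumPy: every pixel's disparity search is independent of every other pixel's, which is exactly the structure that maps well onto multi-core CPUs and GPUs. The window size, disparity range and rectified-pair setup are illustrative assumptions; the thesis's guided-filter and foveated variants refine this basic scheme and work on unrectified images.

        import numpy as np

        def block_match(left, right, max_disp=16, win=5):
            """Naive SAD block matching on a rectified pair (illustration only).

            Each pixel's disparity is computed independently, so the double loop
            below is what a multi-core or GPU implementation parallelises.
            """
            h, w = left.shape
            half = win // 2
            disp = np.zeros((h, w), dtype=np.int32)
            for y in range(half, h - half):
                for x in range(half + max_disp, w - half):
                    patch = left[y - half:y + half + 1, x - half:x + half + 1]
                    costs = [
                        np.abs(patch - right[y - half:y + half + 1,
                                             x - d - half:x - d + half + 1]).sum()
                        for d in range(max_disp)
                    ]
                    disp[y, x] = int(np.argmin(costs))   # disparity with lowest SAD cost
            return disp

        # Tiny synthetic example: the right view is the left view shifted by 4 pixels.
        left = np.random.rand(64, 96).astype(np.float32)
        right = np.roll(left, -4, axis=1)
        d = block_match(left, right)
        print("median disparity:", int(np.median(d[8:-8, 24:-8])))  # expect ~4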

    Novel system of pavement cracking detection algorithms using 1mm 3D surface data

    Pavement cracking is one of the major concerns in pavement design and management. Automated pavement cracking detection has developed rapidly in recent years; however, no method has been widely accepted so far, because none can maintain consistently high detection accuracy across various pavement surfaces. Using 1 mm 3D data collected by the WayLink Digital Highway Data Vehicle (DHDV), this study proposes an entire system of algorithms, consisting of a Fully Automated Cracking Detection Subsystem, an Interactive Cracking Detection Subsystem and a Noisy Pattern Detection Subsystem, to improve the adaptability, reliability and interactivity of pavement cracking detection. The Fully Automated Cracking Detection Subsystem utilizes 3D Shadow Simulation to find locally lower areas and then eliminates noise through subsequent noise-suppression procedures. The assumption behind 3D Shadow Simulation is that locally lower areas will be shadowed under light cast at a certain projection angle. According to a Precision-Recall analysis on two real pavement segments, the fully automated subsystem achieves a high level of Precision and Recall on both segments. The Interactive Cracking Detection Subsystem implements an interactive algorithm proposed in this study, which improves its detection accuracy through adjustments based on the operator's feedback, providing a slower but more flexible and confident approach to pavement cracking detection. The case study demonstrates that the interactive subsystem can retrieve almost 100 percent of cracks with nearly no noise. The Noisy Pattern Detection Subsystem excludes pavement joints and grooves from cracking detection so that false-positive errors on rigid pavements can be reduced significantly. This subsystem applies Support Vector Machines (SVMs) to train classifiers that recognize transverse grooves, transverse joints, longitudinal grooves and longitudinal joints respectively. Based on the trained classifiers, pattern extraction procedures are developed to find the exact locations of pavement joints and grooves. The Non-dominated Sorting Genetic Algorithm II (NSGA-II), a multi-objective genetic algorithm, is employed to optimize the parameters of the fully automated subsystem in pursuit of high Precision and high Recall simultaneously. In addition, an Auxiliary Prediction Model (APM) is proposed to assist NSGA-II with faster convergence and better diversity. Finally, CPU-based and GPU-based parallel computing techniques, including multi-GPU, GPU streaming, multi-core and multi-threading, are combined to increase the processing speed of all computational tasks that can be executed concurrently.
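
    A minimal sketch of the 3D Shadow Simulation idea described above: a point on a 1 mm elevation profile is flagged as "shadowed" (a crack candidate) if some neighbouring point within a window, viewed under a light ray descending at a fixed projection angle, would block the light from reaching it. The window length, projection angle and tolerance here are illustrative assumptions, not the study's calibrated values.

        import numpy as np

        def shadowed(profile, angle_deg=10.0, window=30, spacing=1.0, tol=0.05):
            """Flag locally lower points that would lie in shadow under oblique light.

            profile: surface elevations (mm) along one row of the 1 mm 3D data.
            A point is shadowed if some neighbour within `window` samples to its left
            is high enough that a ray descending at `angle_deg` passes above the point.
            """
            drop = np.tan(np.radians(angle_deg)) * spacing    # ray descent per sample
            mask = np.zeros(len(profile), dtype=bool)
            for i in range(len(profile)):
                for j in range(max(0, i - window), i):
                    ray_height = profile[j] - drop * (i - j)  # light ray from neighbour j
                    if profile[i] < ray_height - tol:         # point sits below the ray
                        mask[i] = True
                        break
            return mask

        # Synthetic row: flat surface with a 2 mm deep, 5 mm wide crack.
        row = np.zeros(200)
        row[100:105] = -2.0
        print(np.flatnonzero(shadowed(row)))   # indices inside the crack are flagged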

    Proceedings, MSVSCC 2014

    Proceedings of the 8th Annual Modeling, Simulation & Visualization Student Capstone Conference held on April 17, 2014 at VMASC in Suffolk, Virginia

    Deep Model for Improved Operator Function State Assessment

    A deep learning framework is presented for engagement assessment using EEG signals. Deep learning is a recently developed machine learning technique that has been applied in many domains. In this paper, we propose a deep learning strategy for operator function state (OFS) assessment. Fifteen pilots participated in a simulated flight from Seattle to Chicago, and EEG signals were recorded for each pilot throughout the four-hour simulation. We labeled 20 minutes of data as engaged or disengaged to fine-tune the deep network and used the remaining, much larger amount of unlabeled data to initialize the network. The trained deep network was then used to assess whether a pilot was engaged during the four-hour simulation.
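
    The pretrain-then-fine-tune recipe described above can be sketched as follows, assuming PyTorch and random tensors standing in for the EEG feature windows; the layer sizes, window dimensionality and training schedule are illustrative assumptions, not the paper's architecture.

        import torch
        from torch import nn

        FEATS = 128                                   # stand-in for per-window EEG features

        # 1) Initialize the network with unlabeled data via autoencoder pretraining.
        encoder = nn.Sequential(nn.Linear(FEATS, 64), nn.ReLU(), nn.Linear(64, 32), nn.ReLU())
        decoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, FEATS))
        unlabeled = torch.randn(5000, FEATS)          # placeholder for the unlabeled windows
        opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
        for _ in range(20):                           # reconstruction pretraining epochs
            opt.zero_grad()
            loss = nn.functional.mse_loss(decoder(encoder(unlabeled)), unlabeled)
            loss.backward()
            opt.step()

        # 2) Fine-tune encoder + classifier head on the small labeled set.
        head = nn.Linear(32, 2)
        labeled_x = torch.randn(200, FEATS)           # placeholder for the labeled 20 minutes
        labeled_y = torch.randint(0, 2, (200,))       # 0 = disengaged, 1 = engaged
        opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
        for _ in range(50):
            opt.zero_grad()
            loss = nn.functional.cross_entropy(head(encoder(labeled_x)), labeled_y)
            loss.backward()
            opt.step()

        # 3) Assess engagement over the full session, window by window.
        session = torch.randn(1000, FEATS)
        engaged = head(encoder(session)).argmax(dim=1)
        print("fraction of windows judged engaged:", engaged.float().mean().item())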

    Supercomputing Frontiers

    This open access book constitutes the refereed proceedings of the 6th Asian Supercomputing Conference, SCFA 2020, which was planned to be held in February 2020, but unfortunately, the physical conference was cancelled due to the COVID-19 pandemic. The 8 full papers presented in this book were carefully reviewed and selected from 22 submissions. They cover a range of topics including file systems, memory hierarchy, HPC cloud platform, container image configuration workflow, large-scale applications, and scheduling

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: Instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
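
    To make the event representation concrete, the short sketch below treats an event as a (timestamp, x, y, polarity) tuple and accumulates a batch of events into a signed polarity image, one of the simplest event-processing representations covered by such surveys. The sensor resolution and event counts are illustrative assumptions.

        import numpy as np

        # Each event: timestamp (s), pixel x, pixel y, polarity (+1 brightness up, -1 down).
        H, W = 180, 240                               # assumed sensor resolution

        def accumulate(events, t0, t1, shape=(H, W)):
            """Sum event polarities per pixel over [t0, t1): a simple event frame."""
            frame = np.zeros(shape, dtype=np.int32)
            for t, x, y, p in events:
                if t0 <= t < t1:
                    frame[y, x] += p                  # signed count of brightness changes
            return frame

        # Synthetic event stream: random events over 10 ms.
        rng = np.random.default_rng(0)
        events = [(rng.uniform(0, 0.01), rng.integers(0, W), rng.integers(0, H),
                   rng.choice([-1, 1])) for _ in range(10000)]

        frame = accumulate(events, 0.0, 0.005)        # events from the first 5 ms only
        print("active pixels:", int(np.count_nonzero(frame)))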

    Mining a Small Medical Data Set by Integrating the Decision Tree and t-test

    Although several researchers have used statistical methods to show that aspiration followed by injection of 95% ethanol left in situ (retention) is an effective treatment for ovarian endometriomas, very few discuss the different conditions that could produce different recovery rates for patients. Therefore, this study combines statistical methods and decision tree techniques to analyze the postoperative status of ovarian endometriosis patients under different conditions. Since the collected data set is small, containing only 212 records, all of the data are used for training. Consequently, instead of using the resulting tree to generate rules directly, the value at each node is used as a cut point to generate all possible rules from the tree first; the rules are then verified with t-tests to discover useful descriptive rules. Experimental results show that this approach can find new and interesting knowledge about recurrent ovarian endometriomas under different conditions.
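
    A minimal sketch of the tree-plus-t-test workflow described above, using scikit-learn and SciPy on synthetic data (the features, dataset and significance threshold are illustrative assumptions, not the study's clinical variables): each internal-node threshold of the fitted tree is treated as a cut point, and a t-test checks whether the outcome differs significantly across that cut.

        import numpy as np
        from scipy import stats
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(1)
        X = rng.normal(size=(212, 4))                 # 212 records, 4 synthetic features
        y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=212) > 0).astype(int)

        # Train on all records (the data set is too small to hold out a test split).
        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

        # Treat every internal-node threshold as a candidate cut point, then verify
        # each candidate rule with a t-test on the outcome across the two sides.
        t = tree.tree_
        for node in range(t.node_count):
            if t.children_left[node] == -1:           # skip leaf nodes
                continue
            feat, thr = t.feature[node], t.threshold[node]
            below, above = y[X[:, feat] <= thr], y[X[:, feat] > thr]
            stat, p = stats.ttest_ind(below, above, equal_var=False)
            verdict = "keep" if p < 0.05 else "drop"  # assumed significance level
            print(f"rule: feature_{feat} <= {thr:.2f}  (t={stat:.2f}, p={p:.3f}) -> {verdict}")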