14 research outputs found

    Implementation of AI/Deep Learning Disruption Predictor into a Plasma Control System

    This paper reports on advances to the state-of-the-art deep-learning disruption prediction models based on the Fusion Recurrent Neural Network (FRNN) originally introduced in a 2019 Nature publication. In particular, the predictor now features not only the disruption score, as an indicator of the probability of an imminent disruption, but also a real-time sensitivity score that indicates the underlying reasons for the imminent disruption. This adds valuable physics interpretability to the deep-learning model and can provide helpful guidance for control actuators now that it is fully implemented in a modern Plasma Control System (PCS). The advance is a significant step in moving from modern deep-learning disruption prediction to real-time control and brings novel AI-enabled capabilities relevant to the future burning-plasma ITER system. Our analyses use large amounts of data from JET and DIII-D vetted in the earlier Nature publication. In addition to predicting when a shot will disrupt, this paper addresses why, by carrying out sensitivity studies. FRNN is accordingly extended to use many more channels of information, including measured DIII-D signals such as (i) the n1rms signal, which is correlated with the n = 1 modes with finite frequency, including neoclassical tearing mode and sawtooth dynamics; (ii) the bolometer data, indicative of plasma impurity content; and (iii) q-min, the minimum value of the safety factor, relevant to the key physics of kink modes. The additional channels and interpretability features expand the ability of the deep-learning FRNN software to provide information about disruption subcategories as well as more precise and direct guidance for the actuators in a plasma control system.
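    The pairing of a disruption score with per-channel sensitivity scores described above can be illustrated schematically. The following is a toy sketch, not the FRNN architecture: the channel names follow the abstract (n1rms, bolometer, q-min), while the weights, the recurrent update, and the finite-difference sensitivity are purely illustrative assumptions.

```python
import math

# Hypothetical toy recurrent scorer; weights are illustrative, not trained values.
CHANNELS = ["n1rms", "bolometer", "q_min"]
WEIGHTS = {"n1rms": 2.0, "bolometer": 1.5, "q_min": -1.8}  # assumed signs only

def disruption_score(history):
    """Accumulate channel measurements over a shot with a simple recurrent
    update, then squash to a probability-like disruption score in (0, 1)."""
    h = 0.0
    for frame in history:  # frame: dict mapping channel name -> measurement
        x = sum(WEIGHTS[c] * frame[c] for c in CHANNELS)
        h = 0.9 * h + 0.1 * x  # leaky recurrent state update
    return 1.0 / (1.0 + math.exp(-h))

def sensitivity(history, eps=1e-4):
    """Finite-difference sensitivity of the score to each channel's most
    recent value -- a crude stand-in for the real-time sensitivity score."""
    base = disruption_score(history)
    sens = {}
    for c in CHANNELS:
        bumped = [dict(f) for f in history]
        bumped[-1][c] += eps
        sens[c] = (disruption_score(bumped) - base) / eps
    return sens
```

    With this sketch, a channel whose increase pushes the score up (e.g. n1rms here) gets a positive sensitivity, hinting at which physics channel is driving the prediction.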

    Machine learning application in complicated burning plasmas for future magnetic fusion exploration


    Automotive sensor fusion systems for traffic aware adaptive cruise control

    The autonomous driving (AD) industry is advancing at a rapid pace. New sensing technologies for tracking vehicles, controlling vehicle behavior, and communicating with infrastructure are being added to commercial vehicles. These technologies reduce on-road fatalities, improve ride quality, and improve vehicle fuel economy. This research explores two types of automotive sensor fusion systems: a novel radar/camera sensor fusion system that uses a long short-term memory (LSTM) neural network (NN) to perform data fusion, improving tracking capabilities in a simulated environment, and a traditional radar/camera sensor fusion system deployed in Mississippi State’s entry in the EcoCAR Mobility Challenge (a 2019 Chevrolet Blazer) as an adaptive cruise control (ACC) system that functions in on-road applications. Along with tracking vehicles, pedestrians, and cyclists, the sensor fusion system deployed in the 2019 Chevrolet Blazer uses vehicle-to-everything (V2X) communication to communicate with infrastructure such as traffic lights to optimize and autonomously control vehicle acceleration through a connected corridor.
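    The learned LSTM fusion above requires simulated training data, but the underlying idea of combining radar and camera estimates of the same target can be sketched with a classical baseline. This is a hypothetical minimal example, assuming independent sensors with known variances, not the system described in the abstract:

```python
def fuse(radar_est, radar_var, cam_est, cam_var):
    """Inverse-variance-weighted fusion of two independent position estimates.
    The more certain sensor (smaller variance) dominates the fused estimate,
    and the fused variance is smaller than either input variance."""
    w_radar = 1.0 / radar_var
    w_cam = 1.0 / cam_var
    fused = (w_radar * radar_est + w_cam * cam_est) / (w_radar + w_cam)
    fused_var = 1.0 / (w_radar + w_cam)
    return fused, fused_var
```

    For example, fusing a radar range of 10.0 m and a camera range of 12.0 m with equal variances yields 11.0 m with half the variance; an LSTM fusion network effectively learns a richer, time-dependent version of this weighting from data.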

    Composite Analysis-Based Machine Learning for Prediction of Tropical Cyclone-Induced Sea Surface Height Anomaly

    Sea surface height anomaly (SSHA) induced by tropical cyclones (TCs) is closely associated with oscillations and is a crucial proxy for thermocline structure and ocean heat content in the upper ocean. The prediction of TC-induced SSHA, however, has rarely been investigated. This study presents a new composite analysis-based random forest (RF) approach to predict daily TC-induced SSHA. The proposed method utilizes the TC’s characteristics and pre-storm upper-oceanic parameters as input features to predict TC-induced SSHA up to 30 days after TC passage. Simulation results suggest that the proposed method is skillful at inferring both the amplitude and temporal evolution of SSHA induced by TCs of different intensity groups. Using a TC-centered 5°×5° box, the proposed method achieves highly accurate prediction of TC-induced SSHA over the Western North Pacific with a root mean square error of 0.024 m, outperforming alternative machine learning methods and the numerical model. Moreover, the proposed method also demonstrates good prediction performance in different geographical regions, i.e., the South China Sea and the Western North Pacific subtropical ocean. The study provides insight into the application of machine learning in improving the prediction of SSHA influenced by extreme weather conditions. Accurate prediction of TC-induced SSHA allows for better preparedness and response, reducing the impact of extreme events (e.g., storm surge) on people and property.
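    The 0.024 m figure quoted above is a root mean square error. A minimal sketch of that skill metric follows; the random forest itself would require the composite TC and upper-ocean training features, so only the evaluation step is shown here:

```python
import math

def rmse(predicted, observed):
    """Root mean square error between predicted and observed SSHA values
    (same units as the inputs, e.g. metres)."""
    assert len(predicted) == len(observed), "series must be the same length"
    n = len(predicted)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)
```

    In practice an RMSE like this would be computed per grid cell of the TC-centered box and compared against baselines (e.g. climatology or a numerical model) to establish skill.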

    Driver Behavior Analysis Based on Real On-Road Driving Data in the Design of Advanced Driving Assistance Systems

    The number of vehicles on the roads increases every day. According to the National Highway Traffic Safety Administration (NHTSA), the overwhelming majority of serious crashes (over 94 percent) are caused by human error. The broad aim of this research is to develop a driver behavior model using real on-road data in the design of Advanced Driving Assistance Systems (ADASs). For several decades, these systems have been a focus of many researchers and vehicle manufacturers seeking to increase vehicle and road safety and assist drivers in different driving situations. Some studies have concentrated on drivers as the main actor in most driving circumstances. The way a driver monitors the traffic environment partially indicates the level of driver awareness. As an objective, we carry out a quantitative and qualitative analysis of driver behavior to identify the relationship between a driver’s intention and his/her actions. The RoadLAB project developed an instrumented vehicle equipped with an On-Board Diagnostics system (OBD-II), a stereo imaging system, and a non-contact eye tracker to record synchronized driving data on the driver’s cephalo-ocular behavior, the vehicle itself, and the traffic environment. We analyze several behavioral features of the drivers to uncover the relevant relationship between driver behavior and the anticipation of the next driver maneuver, as well as to reach a better understanding of driver behavior while in the act of driving. Moreover, we detect and classify road lanes in urban and suburban areas, as they provide contextual information. Our experimental results show that our proposed models reached an F1 score of 84% for driver maneuver prediction and an accuracy of 94% for lane type classification.
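    The two reported metrics can be computed from a confusion matrix. A minimal sketch for the binary case follows; this is illustrative only, since the actual maneuver and lane-type tasks may involve more than two classes:

```python
def f1_and_accuracy(y_true, y_pred):
    """Binary F1 score and accuracy from paired label lists (labels 0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return f1, correct / len(y_true)
```

    F1 balances precision and recall, which matters for maneuver prediction where positive events (e.g. lane changes) are rare, whereas plain accuracy is adequate for the more balanced lane-type classification task.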

    Explainable shared control in assistive robotics

    Shared control plays a pivotal role in designing assistive robots to complement human capabilities during everyday tasks. However, traditional shared control relies on users forming an accurate mental model of expected robot behaviour. Without this accurate mental image, users may encounter confusion or frustration whenever their actions do not elicit the intended system response, creating a misalignment between the respective internal models of the robot and the human. The Explainable Shared Control paradigm introduced in this thesis attempts to resolve such model misalignment by jointly considering assistance and transparency. There are two perspectives on transparency in Explainable Shared Control: the human's and the robot's. Augmented reality is presented as an integral component that addresses the human viewpoint by visually unveiling the robot's internal mechanisms, whilst the robot perspective requires an awareness of human "intent", for which a clustering framework composed of a deep generative model is developed for human intention inference. Both transparency constructs are implemented atop a real assistive robotic wheelchair and tested with human users. An augmented reality headset is incorporated into the robotic wheelchair, and different interface options are evaluated across two user studies to explore their influence on mental model accuracy. Experimental results indicate that this setup facilitates transparent assistance by improving recovery times from adverse events associated with model misalignment. As for human intention inference, the clustering framework is applied to a dataset collected from users operating the robotic wheelchair. Findings from this experiment demonstrate that the learnt clusters are interpretable and meaningful representations of human intent. This thesis serves as a first step in the interdisciplinary area of Explainable Shared Control. The contributions to shared control, augmented reality, and representation learning contained within this thesis are likely to help future research advance the proposed paradigm, and thus bolster the prevalence of assistive robots.
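    The thesis's intention-inference framework is built on a deep generative model; as a much simpler schematic of the same underlying idea, grouping scalar trajectory features (e.g. normalized joystick deflections) into discrete intents, plain one-dimensional k-means might look like the following hypothetical sketch:

```python
def kmeans_1d(points, k=2, iters=10):
    """Cluster scalar feature values into k groups; each resulting centre
    can be read as the prototype of one discrete 'intent'."""
    # Spread initial centres across the sorted data (simple deterministic init).
    centres = sorted(points)[::max(1, len(points) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in points:
            nearest = min(range(k), key=lambda j: abs(x - centres[j]))
            groups[nearest].append(x)
        # Move each centre to the mean of its group (keep old centre if empty).
        centres = [sum(g) / len(g) if g else centres[i] for i, g in enumerate(groups)]
    return centres
```

    A deep generative clustering model plays the same role on far richer, high-dimensional wheelchair-operation data, with the added benefit that the learnt clusters remain interpretable representations of intent.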