A pilot study of operating department practitioners undertaking high-risk learning: a comparison of experiential, part-task and hi-fidelity simulation teaching methods
Health care learners commonly rely on opportunistic experiential learning in clinical placements in order to develop cognitive and psychomotor clinical skills. In recent years there has been an increasing effort to develop effective alternative, non-opportunistic methods of learning, in an attempt to bypass the questionable tradition of relying on patients to practice on.
As part of such efforts, there is increased utilisation of simulation-based education. However, the effectiveness of simulation in health care education arguably varies between professions (Liaw, Chan, Scherpbier, Rethans, & Pua, 2012; Oberleitner, Broussard, & Bourque, 2011; Ross, 2012). This pilot study compares the effectiveness of three educational (or ‘teaching’) methods in the development of clinical knowledge and skills during Rapid Sequence Induction (RSI) of anaesthesia, a potentially life-threatening clinical situation. Students of Operating Department Practice (ODP) undertook either a) traditional classroom-based and experiential learning, b) part-task training, or c) fully immersive scenario-based simulated learning.
An empirical learning-based validation procedure for simulation workflow
Simulation workflow is a top-level model for the design and control of
simulation process. It connects multiple simulation components with time and
interaction restrictions to form a complete simulation system. Before the
construction and evaluation of the component models, the validation of
upper-layer simulation workflow is of utmost importance in a simulation
system. However, methods aimed specifically at validating simulation
workflows are very limited. Many existing validation techniques are
domain-dependent and rely on cumbersome questionnaire design and expert
scoring. This paper therefore presents an empirical learning-based validation
procedure that implements semi-automated evaluation of simulation
workflows. First, representative
features of general simulation workflow and their relations with validation
indices are proposed. The calculation process of workflow credibility based on
Analytic Hierarchy Process (AHP) is then introduced. In order to make full use
of the historical data and implement more efficient validation, four learning
algorithms, including the back-propagation neural network (BPNN), extreme
learning machine (ELM), evolving neo-fuzzy neuron (eNFN) and fast incremental
Gaussian mixture network (FIGMN), are introduced for constructing the
empirical relation between
the workflow credibility and its features. A case study on a landing-process
simulation workflow is established to test the feasibility of the proposed
procedure. The experimental results also provide a useful overview of how
the state-of-the-art learning algorithms perform on the credibility
evaluation of simulation models.
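As a concrete illustration of the AHP step described above, the sketch below derives index weights from a pairwise-comparison matrix via the principal-eigenvector method and combines them with index scores into a single workflow-credibility value. The matrix, the three validation indices, and the scores are all invented for illustration; they are not taken from the paper.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix over three validation indices
# (e.g. timing consistency, interaction correctness, output plausibility).
# Entry [i, j] states how much more important index i is than index j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
])

# Principal-eigenvector method: the weight vector is the eigenvector of A
# associated with its largest eigenvalue, normalised so weights sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()

# Consistency ratio check (random index RI = 0.58 for a 3x3 matrix).
lam_max = np.real(eigvals).max()
ci = (lam_max - 3) / (3 - 1)
cr = ci / 0.58

# Workflow credibility is the weighted sum of per-index scores in [0, 1].
index_scores = np.array([0.9, 0.7, 0.8])  # hypothetical expert/learned scores
credibility = float(w @ index_scores)
```

In the paper's setting the learned models (BPNN, ELM, etc.) would replace the hand-assigned scores by mapping workflow features to credibility directly, reusing historical validation data.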
Generative Models for Fast Calorimeter Simulation. LHCb case
Simulation is one of the key components in high energy physics. Historically
it has relied on Monte Carlo methods, which require a tremendous amount of
computational resources. These methods may have difficulties meeting the
expected demands of the High Luminosity Large Hadron Collider (HL-LHC), so
the experiment is in urgent need of new fast simulation techniques. We
introduce a new deep learning framework based on Generative Adversarial
Networks which can be faster than traditional simulation methods by five
orders of magnitude with reasonable simulation accuracy. This approach will
allow physicists to produce the large amount of simulated data needed by the
next HL-LHC experiments using limited computing resources.
Comment: Proceedings of the presentation at the CHEP 2018 Conference
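To make the speed argument concrete, the toy sketch below shows the generator side of such a framework: a small two-layer MLP mapping latent noise to non-negative "calorimeter cell" energies in a single batched forward pass. The architecture, layer sizes, and (untrained, random) weights are all assumptions for illustration; the paper's actual generator is trained adversarially against a discriminator on detector-simulation data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generator: latent vector -> flattened 30x30 grid of cell energies.
# Random weights stand in for weights learned by adversarial training.
LATENT, HIDDEN, CELLS = 16, 64, 30 * 30

W1 = rng.normal(0.0, 0.1, (LATENT, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, CELLS))

def generate(n):
    """Sample n showers in one vectorised forward pass."""
    z = rng.normal(size=(n, LATENT))
    h = np.maximum(z @ W1, 0.0)        # ReLU hidden layer
    return np.log1p(np.exp(h @ W2))    # softplus keeps energies non-negative

showers = generate(1000)
```

The speed-up claimed in the abstract comes from replacing a per-particle physics loop with exactly this kind of amortised, batched network inference.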
Cause Identification of Electromagnetic Transient Events using Spatiotemporal Feature Learning
This paper presents a spatiotemporal unsupervised feature learning method for
cause identification of electromagnetic transient events (EMTE) in power grids.
The proposed method is formulated based on the availability of
time-synchronized high-frequency measurements, using a convolutional
neural network (CNN) as the spatiotemporal feature representation along with
a softmax function. Unlike existing threshold-based or energy-based event
analysis methods, such as the support vector machine (SVM), autoencoder, and
tapered multi-layer perceptron (t-MLP) neural network, the proposed feature
learning is carried out with respect to both time and space. The effectiveness
of the proposed feature learning and the subsequent cause identification is
validated through the EMTP simulation of different events such as line
energization, capacitor bank energization, lightning, fault, and high-impedance
fault in the IEEE 30-bus system, and the real-time digital simulation (RTDS)
of the WSCC 9-bus system.
Comment: 9 pages, 7 figures
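The core idea of a spatiotemporal representation with a softmax read-out can be sketched as follows: a 2-D filter slides over a (sensors x time) measurement grid, so one kernel spans both space (rows) and time (columns), and the pooled feature feeds softmax class probabilities over candidate causes. The input sizes, kernel, and cause labels are hypothetical stand-ins, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d(x, k):
    """Valid 2-D cross-correlation of a (sensors x time) measurement grid."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# Hypothetical input: 9 spatially distributed sensors x 64 time samples.
x = rng.normal(size=(9, 64))

# One random 3x5 kernel stands in for a learned spatiotemporal filter:
# its rows span sensors (space), its columns span samples (time).
feat = np.maximum(conv2d(x, rng.normal(size=(3, 5))), 0.0).mean()

# Linear read-out over the event classes named in the abstract.
causes = ["line energization", "capacitor bank", "lightning", "fault", "HIF"]
logits = feat * rng.normal(size=len(causes))
probs = softmax(logits)
```

Because the kernel covers several sensors and several samples at once, a single activation already encodes a joint space-time pattern, which is the contrast the abstract draws against purely temporal threshold or energy features.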
Combining Model-Based and Model-Free Updates for Trajectory-Centric Reinforcement Learning
Reinforcement learning (RL) algorithms for real-world robotic applications
need a data-efficient learning process and the ability to handle complex,
unknown dynamical systems. These requirements are handled well by model-based
and model-free RL approaches, respectively. In this work, we aim to combine the
advantages of these two types of methods in a principled manner. By focusing on
time-varying linear-Gaussian policies, we enable a model-based algorithm based
on the linear quadratic regulator (LQR) that can be integrated into the
model-free framework of path integral policy improvement (PI2). We can further
combine our method with guided policy search (GPS) to train arbitrary
parameterized policies such as deep neural networks. Our simulation and
real-world experiments demonstrate that this method can solve challenging
manipulation tasks with comparable or better performance than model-free
methods while maintaining the sample efficiency of model-based methods. A video
presenting our results is available at
https://sites.google.com/site/icml17pilqr
Comment: Paper accepted to the International Conference on Machine Learning
(ICML) 2017
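The PI2 ingredient of the combined update can be sketched in a few lines: sample noisy controls around the current time-varying Gaussian policy mean, weight each sample by the softmax of its negative cost, and move the mean toward the cost-weighted average. The quadratic cost, temperature, and dimensions below are toy assumptions; the paper's method additionally mixes in an LQR-based model update at each step.

```python
import numpy as np

rng = np.random.default_rng(2)

# PI2-style update for one time step of a linear-Gaussian policy.
K, dim, eta = 32, 4, 1.0          # samples, control dimension, temperature
u_mean = np.zeros(dim)            # current control mean at this time step
noise = rng.normal(size=(K, dim))
samples = u_mean + noise

# Toy per-sample trajectory cost with optimum at u = 1 (an assumption).
cost = np.sum((samples - 1.0) ** 2, axis=1)

# Softmax over negative costs: cheap rollouts get exponentially more weight.
w = np.exp(-(cost - cost.min()) / eta)
w /= w.sum()

u_new = w @ samples               # cost-weighted averaging of the samples
```

The model-based half of the algorithm would refine `u_new` further using the fitted linear-Gaussian dynamics, which is what preserves sample efficiency.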
Review of the Learning-based Camera and Lidar Simulation Methods for Autonomous Driving Systems
Perception sensors, particularly camera and Lidar, are key elements of
Autonomous Driving Systems (ADS) that enable them to comprehend their
surroundings for informed driving and control decisions. Therefore, developing
realistic camera and Lidar simulation methods, also known as camera and Lidar
models, is of paramount importance to effectively conduct simulation-based
testing for ADS. Moreover, the rise of deep learning-based perception models
has propelled the prevalence of perception sensor models as valuable tools for
synthesising diverse training datasets. The traditional sensor simulation
methods rely on computationally expensive physics-based algorithms,
specifically in complex systems such as ADS. Hence, the current potential
resides in learning-based models, driven by the success of deep generative
models in synthesising high-dimensional data. This paper reviews the current
state-of-the-art in learning-based sensor simulation methods and validation
approaches, focusing on two main types of perception sensors: cameras and
Lidars. This review covers two categories of learning-based approaches, namely
raw-data-based and object-based models. Raw-data-based methods are explained
concerning the employed learning strategy, while object-based models are
categorised based on the type of error considered. Finally, the paper
illustrates commonly used validation techniques for evaluating perception
sensor models and highlights the existing research gaps in the area.
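As a toy example of the object-based category the review distinguishes, the sketch below starts from ground-truth object points and injects two error types per scan: per-point dropout (missed returns) and Gaussian range noise along each ray. The error rates, point cloud, and function are illustrative assumptions, not a model from any surveyed work.

```python
import numpy as np

rng = np.random.default_rng(3)

def lidar_model(points, p_drop=0.1, range_sigma=0.02):
    """Object-based Lidar error model: dropout plus range noise."""
    keep = rng.random(len(points)) > p_drop        # missed returns
    pts = points[keep]
    r = np.linalg.norm(pts, axis=1, keepdims=True) # true range per point
    noisy_r = r + rng.normal(0.0, range_sigma, r.shape)
    return pts / r * noisy_r                       # perturb along the ray

# Hypothetical ground-truth points of an object, 5-10 m from the sensor.
truth = rng.uniform(5.0, 10.0, size=(500, 3))
scan = lidar_model(truth)
```

A learning-based variant would fit the dropout probability and noise distribution to real scans rather than hand-picking them, which is exactly where the review's "type of error considered" taxonomy applies.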
Deep Reinforcement Learning-Based Channel Allocation for Wireless LANs with Graph Convolutional Networks
Last year, IEEE 802.11 Extremely High Throughput Study Group (EHT Study
Group) was established to initiate discussions on new IEEE 802.11 features.
Coordinated control methods of the access points (APs) in the wireless local
area networks (WLANs) are discussed in EHT Study Group. The present study
proposes a deep reinforcement learning-based channel allocation scheme using
graph convolutional networks (GCNs). As the deep reinforcement learning
method, we use the well-known double deep Q-network. In densely deployed
WLANs, the number of available topologies of APs is extremely high, and thus we
extract the features of the topological structures based on GCNs. We apply GCNs
to a contention graph where APs within their carrier sensing ranges are
connected to extract the features of carrier sensing relationships.
Additionally, to improve the learning speed especially in an early stage of
learning, we employ a game theory-based method to collect the training data
independently of the neural network model. The simulation results indicate
that the proposed method controls the channels appropriately compared with
existing methods.
- …