Fast Monte Carlo Simulations for Quality Assurance in Radiation Therapy
Monte Carlo (MC) simulation is generally considered the most accurate method for dose calculation in radiation therapy. However, it suffers from low simulation efficiency (hours to days) and complex configuration, which impede its application in clinical studies. The recent rise of MRI-guided radiation platforms (e.g. ViewRay's MRIdian system) brings an urgent need for fast MC algorithms, because the strong magnetic field they introduce may cause large errors in other dose-calculation algorithms. My dissertation focuses on resolving the conflict between the accuracy and efficiency of MC simulations through four approaches: (1) GPU parallel computation, (2) transport mechanism simplification, (3) variance reduction, and (4) DVH constraints. Accordingly, we took several steps to thoroughly study the performance and accuracy impact of these methods. As a result, three Monte Carlo simulation packages named gPENELOPE, gDPMvr and gDVH were developed to strike a balance between performance and accuracy in different application scenarios. For example, the most accurate, gPENELOPE, is typically used as a gold standard for radiation meter modeling, while the fastest, gDVH, is used for quick in-patient dose calculation, reducing the calculation time from 5 hours to 1.2 minutes (250 times faster) while introducing only 1% error. In addition, a cross-platform GUI integrating the simulation kernels and 3D visualization was developed to make the toolkit more user-friendly. After the fast MC infrastructure was established, we successfully applied it to four radiotherapy scenarios: (1) validating the vendor-provided Co-60 radiation head model by comparing the dose calculated by gPENELOPE to experimental data; (2) quantitatively studying the effect of the magnetic field on the dose distribution and proposing a strategy to improve treatment planning efficiency; (3) evaluating the accuracy of the built-in MC algorithm of MRIdian's treatment planning system; and (4) performing quick quality assurance (QA) for "online adaptive radiation therapy", which does not allow enough time for experimental QA. Many other time-sensitive applications (e.g. dose accumulation under motion) will also benefit greatly from our fast MC infrastructure.
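The dissertation's packages are not public here, but the core idea of MC dose calculation can be sketched in a few lines: sample each photon's free path from an exponential distribution and tally where energy is deposited. This toy 1-D water-phantom model (the attenuation coefficient `mu`, the bin size, and the "deposit everything at first interaction" rule are all simplifying assumptions, not the dissertation's physics) also shows why efficiency matters: statistical noise falls only as the square root of the photon count.

```python
import random

def mc_dose_1d(n_photons, mu=0.2, depth_cm=30.0, bin_cm=1.0, seed=0):
    """Toy 1-D Monte Carlo: photons travel into a water phantom, the free
    path is sampled from an exponential distribution (attenuation
    coefficient mu, per cm), and each photon deposits all its energy at
    the first interaction site. Returns the dose fraction per depth bin."""
    rng = random.Random(seed)
    nbins = int(depth_cm / bin_cm)
    dose = [0.0] * nbins
    for _ in range(n_photons):
        d = rng.expovariate(mu)   # sampled depth of first interaction
        if d < depth_cm:
            dose[int(d / bin_cm)] += 1.0
    # Normalize to a per-photon dose fraction.
    return [x / n_photons for x in dose]

dose = mc_dose_1d(100_000)
```

The resulting depth-dose curve falls off roughly as exp(-mu * z); a real engine like gPENELOPE additionally transports secondary particles, scattering, and (on MRIdian) the Lorentz force on charged particles.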
Distributed Graph Neural Network Training: A Survey
Graph neural networks (GNNs) are a class of deep learning models that are
trained on graphs and have been successfully applied in various domains.
Despite their effectiveness, it is still challenging for GNNs to scale
efficiently to large graphs. As a remedy, distributed computing has become a
promising solution for training large-scale GNNs, since it can provide
abundant computing resources. However, the dependencies imposed by the graph
structure increase the difficulty of achieving high-efficiency distributed
GNN training, which suffers from massive communication and workload
imbalance. In recent years, many efforts have been made on distributed GNN
training, and an array of training algorithms and systems have been proposed.
Yet, there is a lack of a systematic review of the optimization techniques
for the distributed execution of GNN training. In this survey, we analyze
three major challenges in distributed GNN training: massive feature
communication, loss of model accuracy, and workload imbalance. We then
introduce a new taxonomy of the optimization techniques that address these
challenges, classifying existing techniques into four categories: GNN data
partition, GNN batch generation, GNN execution model, and GNN communication
protocol. We carefully discuss the techniques in each category. Finally, we
summarize existing distributed GNN systems for multi-GPU, GPU-cluster, and
CPU-cluster settings, respectively, and discuss future directions for
distributed GNN training.
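The "GNN data partition" category the survey describes can be illustrated with a minimal sketch (not from the survey): assign nodes to machines and count cut edges, since each cut edge forces one machine to fetch a remote node's feature vector during message passing. The modulo partitioner below is deliberately naive; real systems use METIS-style or streaming partitioners to shrink this cut.

```python
def edge_cut(edges, num_parts):
    """Assign each node to a machine with a toy modulo partitioner and
    count cut edges. Every cut edge implies cross-machine feature
    communication in distributed GNN training, so partitioners aim to
    minimize this count while keeping machine loads balanced."""
    part = {u: u % num_parts for e in edges for u in e}
    cut = sum(1 for u, v in edges if part[u] != part[v])
    return part, cut

# A 4-cycle plus one chord, split across 2 machines.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
assignment, cut = edge_cut(edges, 2)
```

Here 4 of the 5 edges are cut, showing how badly a structure-oblivious partitioner can do on even a tiny graph.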
Time series forecasting for a call center in a Warsaw holding company
Internship Report presented as a partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics.
In recent years, artificial intelligence and cognitive technologies have been actively adopted in
industries that use conversational marketing. Workforce managers face the constant challenge
of balancing service-level priorities against the related service costs. This problem is especially
acute when inaccurate forecasts lead to inefficient scheduling decisions, which in turn have a
dramatic impact on customer engagement and experience, and thus on the call center's profitability.
The main trigger for this project was Company X's struggle to estimate the number of inbound
phone calls expected in the upcoming 40 days. An accurate phone-call volume forecast could
significantly improve both the consultants' time management and the service quality. With this
goal in mind, the main focus of the internship is to conduct a set of experiments with various
types of predictive models and to identify the best-performing one for the analyzed use case.
After a thorough review of the literature on time series analysis, the empirical part of the
internship describes the development of both univariate and multivariate statistical models. The
methods also include two types of recurrent neural networks commonly used for time series
prediction. The exogenous variables used in the multivariate models are derived from the
company's Media Planning department, which stores information about the ads published in the
newspapers. The research shows that the statistical models outperformed the neural networks in
this specific application. The report first gives an overview of the statistical and neural
network models used; a comparative study of all tested models is then conducted and the single
best-performing model is selected. The experiments showed that the SARIMAX model yields the best
predictions for the analyzed use case, and it is therefore recommended that the company adopt it
for better staff management, driving a more pleasant customer experience in the call center.
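Fitting a SARIMAX model requires a statistics library (e.g. statsmodels), but the baseline that any such model must beat on weekly-patterned call volumes can be sketched in plain Python: the seasonal-naive forecast, which predicts each future day with the value observed one season earlier. The daily volumes below are invented for illustration; the 7-day season and 40-day horizon mirror the report's setting.

```python
def seasonal_naive_forecast(history, season=7, horizon=40):
    """Seasonal-naive baseline: forecast day t+h with the observation
    one season back, i.e. the most recent same-day-of-week value.
    A standard yardstick for SARIMA/SARIMAX-style models."""
    return [history[-season + (h % season)] for h in range(horizon)]

# 8 weeks of (invented) daily inbound call volumes, weekdays busier
# than weekends, mimicking a weekly seasonal pattern.
calls = [120, 130, 125, 140, 150, 90, 60] * 8
fc = seasonal_naive_forecast(calls, season=7, horizon=40)
```

If a SARIMAX model (with, say, ad-schedule exogenous variables) cannot beat this baseline on a held-out window, the extra complexity is not earning its keep.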
Translating Videos to Commands for Robotic Manipulation with Deep Recurrent Neural Networks
We present a new method to translate videos to commands for robotic
manipulation using deep Recurrent Neural Networks (RNNs). Our framework first
extracts deep features from the input video frames with a deep Convolutional
Neural Network (CNN). Two RNN layers with an encoder-decoder architecture are
then used to encode the visual features and sequentially generate the output
words as the command. We demonstrate that the translation accuracy can be
improved by allowing a smooth transition between the two RNN layers and by
using a state-of-the-art feature extractor. The experimental results on our new
challenging dataset show that our approach outperforms recent methods by a fair
margin. Furthermore, we combine the proposed translation module with a vision
and planning system to let a robot perform various manipulation tasks. Finally,
we demonstrate the effectiveness of our framework on the full-size humanoid robot
WALK-MAN.
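The encoder-decoder structure the abstract describes can be sketched with a vanilla RNN in NumPy. Everything here (random weights, toy dimensions, the stand-in for CNN features, greedy decoding) is an illustrative assumption, not the paper's architecture, which uses trained deep feature extractors and LSTM/GRU cells; the point is the shape of the pipeline: fold per-frame features into one state, then emit command words one at a time.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_step(x, h, Wx, Wh):
    """One vanilla RNN step: h' = tanh(Wx @ x + Wh @ h)."""
    return np.tanh(Wx @ x + Wh @ h)

def encode(frames, Wx, Wh):
    """Encoder RNN: fold the per-frame visual features into one state."""
    h = np.zeros(Wh.shape[0])
    for f in frames:
        h = rnn_step(f, h, Wx, Wh)
    return h

def decode_greedy(h, embed, Wx, Wh, Wout, max_len=5, eos=0):
    """Decoder RNN: greedily emit command words until EOS or max_len."""
    word, out = eos, []
    for _ in range(max_len):
        h = rnn_step(embed[word], h, Wx, Wh)
        word = int(np.argmax(Wout @ h))
        if word == eos:
            break
        out.append(word)
    return out

# Toy dimensions: 4-D "CNN features", 8-D hidden state, 6-word vocabulary.
D, H, V = 4, 8, 6
frames = rng.standard_normal((10, D))   # stand-in for per-frame CNN features
embed = rng.standard_normal((V, D))     # word embedding table
Wx, Wh = rng.standard_normal((H, D)), rng.standard_normal((H, H))
Wout = rng.standard_normal((V, H))
cmd = decode_greedy(encode(frames, Wx, Wh), embed, Wx, Wh, Wout)
```

With trained weights, `cmd` would be a word-index sequence such as "pick up bottle"; here the untrained output only demonstrates the data flow.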
Differentiable Algorithm Networks for Composable Robot Learning
This paper introduces the Differentiable Algorithm Network (DAN), a
composable architecture for robot learning systems. A DAN is composed of neural
network modules, each encoding a differentiable robot algorithm and an
associated model; and it is trained end-to-end from data. DAN combines the
strengths of model-driven modular system design and data-driven end-to-end
learning. The algorithms and models act as structural assumptions to reduce the
data requirements for learning; end-to-end learning allows the modules to adapt
to one another and compensate for imperfect models and algorithms, in order to
achieve the best overall system performance. We illustrate the DAN methodology
through a case study on a simulated robot system, which learns to navigate in
complex 3-D environments with only local visual observations and an image of a
partially correct 2-D floor map.
Comment: RSS 2019 camera ready. Video is available at https://youtu.be/4jcYlTSJF4
Scene Understanding for Autonomous Manipulation with Deep Learning
Over the past few years, deep learning techniques have achieved tremendous success
in many visual understanding tasks such as object detection, image segmentation,
and caption generation. Despite this thriving progress in computer vision and natural
language processing, deep learning has not yet shown significant impact in robotics.
Due to the gap between theory and application, there are many challenges in
applying the results of deep learning to real robotic systems. In this study,
our long-term goal is to bridge the gap between computer vision and robotics by
developing visual methods that can be used on real robots. In particular, this work
tackles two fundamental visual problems for autonomous robotic manipulation: affordance
detection and fine-grained action understanding. Theoretically, we propose
different deep architectures to further improve the state of the art in each problem.
Empirically, we show that the outcomes of our proposed methods can be applied to
real robots and allow them to perform useful manipulation tasks.