2 research outputs found
Distributed Q-Learning for Dynamically Decoupled Systems
Control of large-scale networked systems often requires complex models of the
interactions among the agents. However, in many applications, building
accurate models of the agents or of their interactions may be infeasible or
computationally prohibitive due to the curse of dimensionality or the
complexity of these interactions. Meanwhile, data-guided control methods can
circumvent model complexity by synthesizing the controller directly from
observed data. In this paper, we propose a distributed Q-learning algorithm to
design a feedback mechanism based on a given underlying graph structure
parameterizing the agents' interaction network. We assume that the distributed
nature of the system arises from the cost function of the corresponding
control problem and show that, for the specific case of identical dynamically
decoupled systems, the learned controller converges to the optimal Linear
Quadratic Regulator (LQR) controller for each subsystem. We provide a
convergence analysis and verify the result with an example.
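To illustrate the core idea of model-free Q-learning converging to the LQR gain, here is a minimal sketch for a single scalar subsystem. This is a hypothetical toy example, not the paper's distributed algorithm: the dynamics (a, b), the cost weights, and the least-squares policy-iteration scheme are all assumptions chosen for illustration. The learner never uses the model directly; it fits a quadratic Q-function from sampled transitions and improves the gain greedily, then the result is compared against the exact gain from the scalar Riccati equation.

```python
import numpy as np

# Hypothetical scalar system x_{t+1} = a*x_t + b*u_t with cost x^2 + u^2.
# Q-learning via least-squares policy iteration on Q(x,u) = [x u] H [x u]^T.
a, b = 0.9, 1.0
rng = np.random.default_rng(0)

def step(x, u):
    # The learner only sees samples from this map, never (a, b) directly.
    return a * x + b * u

k = 0.0  # initial gain; a = 0.9 makes the open loop already stable
for _ in range(20):
    X, y = [], []
    for _ in range(200):
        x = rng.normal()
        u = -k * x + 0.1 * rng.normal()   # exploratory input
        xn = step(x, u)
        un = -k * xn                      # next action follows current policy
        # Features for the symmetric H: [x^2, 2xu, u^2]
        phi = np.array([x * x, 2 * x * u, u * u])
        phin = np.array([xn * xn, 2 * xn * un, un * un])
        # Bellman equation: Q(x,u) - Q(x',u') = cost(x,u)
        X.append(phi - phin)
        y.append(x * x + u * u)
    Hxx, Hxu, Huu = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)[0]
    k = Hxu / Huu  # greedy policy improvement: argmin_u Q(x, u)

# Exact LQR gain from the scalar discrete-time Riccati equation, for comparison
P = 1.0
for _ in range(500):
    P = 1 + a * a * P - (a * b * P) ** 2 / (1 + b * b * P)
k_star = a * b * P / (1 + b * b * P)
print(k, k_star)  # learned gain vs. exact LQR gain
```

Because the toy dynamics are deterministic, the least-squares fit recovers the policy's Q-function exactly and the iteration matches the usual quadratic convergence of LQR policy iteration; the paper's contribution is the distributed, graph-structured version of this idea.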
A Survey on Impact of Transient Faults on BNN Inference Accelerators
Over the past years, the philosophy for designing artificial intelligence
algorithms has shifted significantly towards automatically extracting
composable systems from massive data volumes. This paradigm shift has been
expedited by the big data boom, which enables easy access to and analysis of
very large data sets. The best-known class of big data analysis techniques is
deep learning. These models require significant computation power and
extremely high memory access rates, which necessitates novel approaches to
reduce memory accesses and improve power efficiency, together with the
development of domain-specific hardware accelerators that support current and
future data sizes and model structures. Current trends in designing
application-specific integrated circuits barely consider the essential
requirement that complex neural network computation remain resilient in the
presence of soft errors. A soft error may strike either memory storage or
combinational logic in the hardware accelerator and affect the architectural
behavior such that the precision of the results falls below the minimum
allowable correctness.
In this study, we demonstrate that the impact of soft errors on a customized
deep learning algorithm called the Binarized Neural Network (BNN) can cause
drastic image misclassification. Our experimental results show that
image-classification accuracy can drop by as much as 76.70% and 19.25% in the
lfcW1A1 and cnvW1A1 networks, respectively, across the CIFAR-10 and MNIST
datasets during fault injection in the worst-case scenario.
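The mechanism behind such accuracy drops can be sketched with a tiny example. This is a hypothetical illustration, not the paper's lfcW1A1/cnvW1A1 setup or its fault-injection campaign: in a BNN, each weight is stored as a single bit encoding +1 or -1, so one transient bit flip negates the weight and shifts the affected output logit by exactly 2, which can be enough to cross a decision boundary.

```python
import numpy as np

# Hypothetical binarized layer (weights and inputs are +/-1), 2 output classes.
rng = np.random.default_rng(1)

x = np.sign(rng.normal(size=9))        # binarized input vector
W = np.sign(rng.normal(size=(2, 9)))   # binarized weight matrix

clean_logits = W @ x
clean_pred = int(np.argmax(clean_logits))

faulty_W = W.copy()
faulty_W[0, 0] = -faulty_W[0, 0]       # soft error: a single flipped weight bit
faulty_logits = faulty_W @ x
faulty_pred = int(np.argmax(faulty_logits))

# The flipped bit moves logit 0 by 2*W[0,0]*x[0], i.e. magnitude exactly 2;
# with binarized logits spaced 2 apart, one flip can change the prediction.
print(clean_logits[0] - faulty_logits[0])
```

Because binarization leaves no redundancy per weight, a single upset has the maximum possible per-weight effect, which is consistent with the large worst-case accuracy drops reported above.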