Automatic March tests generation for static and dynamic faults in SRAMs
Modern memory production technologies introduce new classes of faults, usually referred to as dynamic memory faults. Although some hand-made March tests that deal with these new faults have been published, the problem of automatically generating March tests for dynamic faults has yet to be addressed. In this paper we propose a new approach to automatically generate March tests of minimal length for both static and dynamic faults. The proposed approach resorts to a formal model to represent faulty behaviors in a memory and to simplify the generation of the corresponding tests.
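As a concrete illustration of what a March test is, the sketch below runs the classical hand-made March C- algorithm (not a test produced by the proposed generator) against a small simulated SRAM with an injected stuck-at fault; the memory class and the element encoding are our own assumptions for illustration:

```python
# Minimal sketch: the classical March C- test against a simulated SRAM.
# Element notation: a direction, then (read expected value / write value)
# operations applied to every cell in that order.

class FaultySRAM:
    """Bit-addressable memory with an optional stuck-at fault."""
    def __init__(self, size, stuck_at=None):
        self.mem = [0] * size
        self.stuck_at = stuck_at  # (address, forced_value) or None

    def write(self, addr, val):
        self.mem[addr] = val
        if self.stuck_at and addr == self.stuck_at[0]:
            self.mem[addr] = self.stuck_at[1]  # fault overrides the write

    def read(self, addr):
        return self.mem[addr]

# March C-: {up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); up(r0)}
MARCH_C_MINUS = [
    ("up",   [("w", 0)]),
    ("up",   [("r", 0), ("w", 1)]),
    ("up",   [("r", 1), ("w", 0)]),
    ("down", [("r", 0), ("w", 1)]),
    ("down", [("r", 1), ("w", 0)]),
    ("up",   [("r", 0)]),
]

def run_march(mem, size, elements):
    """Return True if the memory passes (no fault detected)."""
    for direction, ops in elements:
        addrs = range(size) if direction == "up" else range(size - 1, -1, -1)
        for a in addrs:
            for op, val in ops:
                if op == "w":
                    mem.write(a, val)
                elif mem.read(a) != val:
                    return False  # observed value differs: fault detected
    return True

good = FaultySRAM(8)
bad = FaultySRAM(8, stuck_at=(3, 0))          # cell 3 stuck at 0
print(run_march(good, 8, MARCH_C_MINUS))      # True  (passes)
print(run_march(bad, 8, MARCH_C_MINUS))       # False (stuck-at-0 caught)
```

March C- covers the classical static faults (stuck-at, transition, coupling); the paper's contribution is generating such sequences automatically, including for dynamic faults, which this hand-made example does not address.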
DeepCare: A Deep Dynamic Memory Model for Predictive Medicine
Personalized predictive medicine necessitates the modeling of patient illness
and care processes, which inherently have long-term temporal dependencies.
Healthcare observations, recorded in electronic medical records, are episodic
and irregular in time. We introduce DeepCare, an end-to-end deep dynamic neural
network that reads medical records, stores previous illness history, infers
current illness states and predicts future medical outcomes. At the data level,
DeepCare represents care episodes as vectors and models patient health
state trajectories through explicit memory of historical records. Built on Long
Short-Term Memory (LSTM), DeepCare introduces time parameterizations to handle
irregular timed events by moderating the forgetting and consolidation of memory
cells. DeepCare also incorporates medical interventions that change the course
of illness and shape future medical risk. Moving up to the health state level,
historical and present health states are then aggregated through multiscale
temporal pooling, before passing through a neural network that estimates future
outcomes. We demonstrate the efficacy of DeepCare for disease progression
modeling, intervention recommendation, and future risk prediction. On two
important cohorts with heavy social and economic burden -- diabetes and mental
health -- the results show improved modeling and risk prediction accuracy.
Comment: Accepted at JBI under the new name: "Predicting healthcare
trajectories from medical records: A deep learning approach"
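The time parameterization described above, moderating the forget gate by the elapsed time between events, can be sketched as follows. The decay function g(dt) = 1/log(e + dt), the stacked-weight layout, and all dimensions are our illustrative assumptions, not DeepCare's exact equations:

```python
# Illustrative sketch: an LSTM step whose forgetting is moderated by the
# elapsed time dt between irregular events (e.g. days between hospital
# visits), in the spirit of DeepCare's time parameterization.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, dt, W, U, b):
    """One LSTM step; dt is the time since the previous event."""
    z = W @ x + U @ h + b                  # stacked gate pre-activations
    i, f, o, g = np.split(z, 4)
    decay = 1.0 / np.log(np.e + dt)        # longer gaps -> stronger decay
    f = sigmoid(f) * decay                 # time-moderated forgetting
    c = f * c + sigmoid(i) * np.tanh(g)    # consolidate new information
    h = sigmoid(o) * np.tanh(c)
    return h, c

# Tiny usage example with random parameters
rng = np.random.default_rng(0)
d, n = 3, 4                                # input dim, hidden dim
W = rng.normal(size=(4 * n, d))
U = rng.normal(size=(4 * n, n))
b = np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for x, dt in [(rng.normal(size=d), 1.0), (rng.normal(size=d), 90.0)]:
    h, c = lstm_step(x, h, c, dt, W, U, b)  # irregular gaps between visits
print(h.shape)  # (4,)
```

The design point is that a 90-day gap shrinks the forget gate more than a 1-day gap, so distant history fades unless reinforced by new observations.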
Dynamic Key-Value Memory Networks for Knowledge Tracing
Knowledge Tracing (KT) is the task of tracing the evolving knowledge state of
students with respect to one or more concepts as they engage in a sequence of
learning activities. One important purpose of KT is to personalize the practice
sequence to help students learn knowledge concepts efficiently. However,
existing methods such as Bayesian Knowledge Tracing and Deep Knowledge Tracing
either model knowledge state for each predefined concept separately or fail to
pinpoint exactly which concepts a student is good at or unfamiliar with. To
solve these problems, this work introduces a new model called Dynamic Key-Value
Memory Networks (DKVMN) that can exploit the relationships between underlying
concepts and directly output a student's mastery level of each concept. Unlike
standard memory-augmented neural networks that use a single memory matrix or
two static memory matrices, our model has one static matrix, called the key,
which stores the knowledge concepts, and one dynamic matrix, called the value,
which stores and updates the mastery levels of the corresponding concepts.
Experiments show that our model consistently outperforms the state-of-the-art
model on a range of KT datasets. Moreover, the DKVMN model can automatically
discover underlying concepts of exercises, a task typically performed by human
annotators, and depict the changing knowledge state of a student.
Comment: To appear in the 26th International Conference on World Wide Web (WWW), 2017
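The static-key / dynamic-value mechanism can be sketched in a few lines of NumPy. The erase-then-add update below is simplified (no learned erase/add projections), so every dimension and rule here is an assumption for illustration, not the paper's exact architecture:

```python
# Minimal sketch of a DKVMN-style read/write cycle: a static key matrix
# indexes latent concepts; a dynamic value matrix holds per-concept
# mastery state that is updated after each attempted exercise.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

N, d = 5, 4                       # N latent concepts, embedding dim d
rng = np.random.default_rng(1)
M_key = rng.normal(size=(N, d))   # static: one slot per latent concept
M_val = np.zeros((N, d))          # dynamic: per-concept mastery state

def read(q):
    """Attend over keys with an exercise embedding q; read mastery."""
    w = softmax(M_key @ q)        # correlation weight per concept
    return w @ M_val, w

def write(w, erase, add):
    """Erase-then-add update of the value matrix (simplified)."""
    global M_val
    M_val = M_val * (1 - np.outer(w, erase)) + np.outer(w, add)

q = rng.normal(size=d)            # embedding of an attempted exercise
r, w = read(q)                    # mastery summary before the attempt
write(w, erase=np.full(d, 0.1), add=np.full(d, 0.5))
r2, _ = read(q)                   # mastery summary after the attempt
print(np.linalg.norm(r2) > np.linalg.norm(r))  # True: state was updated
```

The correlation weights w are what lets the model report a per-concept mastery level: each exercise reads from and writes to the concepts it correlates with, rather than to one monolithic state.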
Autonomous Reinforcement of Behavioral Sequences in Neural Dynamics
We introduce a dynamic neural algorithm called Dynamic Neural (DN)
SARSA(\lambda) for learning a behavioral sequence from delayed reward.
DN-SARSA(\lambda) combines Dynamic Field Theory models of behavioral sequence
representation, classical reinforcement learning, and a computational
neuroscience model of working memory, called Item and Order working memory,
which serves as an eligibility trace. DN-SARSA(\lambda) is implemented on both
a simulated and real robot that must learn a specific rewarding sequence of
elementary behaviors from exploration. Results show that DN-SARSA(\lambda)
performs at the level of the discrete SARSA(\lambda), validating the feasibility
of general reinforcement learning without compromising neural dynamics.
Comment: Sohrob Kazerounian and Matthew Luciw are joint first authors.
The true reinforced random walk with bias
We consider a self-attracting random walk in dimension d=1, in presence of a
field of strength s, which biases the walker toward a target site. We focus on
the dynamic case (true reinforced random walk), where memory effects are
implemented at each time step, differently from the static case, where memory
effects are accounted for globally. We analyze in detail the asymptotic
long-time behavior of the walker through the main statistical quantities (e.g.
distinct sites visited, end-to-end distance) and we discuss a possible mapping
between such dynamic self-attracting model and the trapping problem for a
simple random walk, in analogy with the static model. Moreover, we find that,
for any s>0, the random walk behavior switches to ballistic and that field
effects always prevail over memory effects without any singularity, already in
d=1; this is in contrast with the behavior observed in the static model.
Comment: To appear in New J. Phys.
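A toy discretization of such a walk (our own choice of reinforcement rule, not necessarily the authors' exact model) makes the ballistic regime easy to observe: each step's direction is weighted by the field s and by an attraction of strength u toward already-visited sites, with the memory updated at every step:

```python
# Toy 1-D true (dynamic) reinforced random walk with a bias field s > 0
# toward the right. Memory is updated at each time step: neighbors that
# were already visited are more attractive by a factor exp(u).
import math, random

def walk(T, s, u, seed=0):
    rng = random.Random(seed)
    x, visited = 0, {0}
    for _ in range(T):
        # neighbor weight = field bias * attraction to visited sites
        w_right = math.exp(s) * (math.exp(u) if x + 1 in visited else 1.0)
        w_left = math.exp(-s) * (math.exp(u) if x - 1 in visited else 1.0)
        x += 1 if rng.random() < w_right / (w_right + w_left) else -1
        visited.add(x)           # memory effect applied at every step
    return x, len(visited)       # end-to-end distance, distinct sites

for T in (1000, 4000):
    x, distinct = walk(T, s=0.3, u=1.0)
    print(T, x, distinct)        # displacement grows roughly linearly in T
```

Consistent with the abstract's claim, any s > 0 in this toy version produces a positive drift velocity (displacement linear in time), so the field dominates the self-attraction without a phase transition at finite s.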