Live Programming Environment for Deep Learning with Instant and Editable Neural Network Visualization
Artificial intelligence (AI) techniques such as deep learning have achieved significant success in a variety of application domains. Several visualization techniques have been proposed for understanding the overall behavior of the neural network defined by deep learning code. However, they show the visualization only after the code or network definition is written, and building deep neural network models in a code editor remains complicated and unfriendly for novices. In this paper, to help users better understand the behavior of networks, we augment a code editor with an instant and editable visualization of the network model, inspired by live programming, which provides continuous feedback to the programmer.
Using Cognitive Computing for Learning Parallel Programming: An IBM Watson Solution
While modern parallel computing systems provide high-performance resources, utilizing them to the fullest extent requires advanced programming expertise. Programming for parallel computing systems is much more difficult than programming for sequential systems. OpenMP is an extension of the C++ programming language that enables programmers to express parallelism using compiler directives. While OpenMP eases parallel programming by reducing the lines of code that the programmer needs to write, deciding how and when to use these compiler directives is up to the programmer. Novice programmers may make mistakes that lead to performance degradation or unexpected program behavior. Cognitive computing has shown impressive results in various domains, such as health or marketing. In this paper, we describe the use of the IBM Watson cognitive system for educating novice parallel programmers. Using the dialogue service of IBM Watson, we have developed a solution that assists the programmer in avoiding common OpenMP mistakes. To evaluate our approach, we conducted a survey with a number of novice parallel programmers at Linnaeus University and obtained encouraging results with respect to the usefulness of our approach.
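To illustrate the kind of common OpenMP mistake such an assistant targets, here is a minimal sketch of our own (not taken from the paper): parallelizing a reduction loop without a reduction clause introduces a data race on the accumulator.

    #include <cstdio>

    int main() {
        const int n = 1000000;
        double sum = 0.0;

        // Novice mistake: writing plain "#pragma omp parallel for" here
        // lets all threads update `sum` concurrently, a data race that
        // produces wrong results. The reduction clause fixes it by giving
        // each thread a private copy of `sum` and combining them at the end.
        #pragma omp parallel for reduction(+ : sum)
        for (int i = 0; i < n; ++i) {
            sum += 1.0 / (i + 1);
        }

        printf("harmonic sum: %f\n", sum);  // ~14.39 for n = 1e6
        return 0;
    }

Compiled with OpenMP enabled (e.g. g++ -fopenmp), the loop runs in parallel and still produces the correct sum; removing the reduction clause makes the result nondeterministic.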
A Tale of Two DRAGGNs: A Hybrid Approach for Interpreting Action-Oriented and Goal-Oriented Instructions
Robots operating alongside humans in diverse, stochastic environments must be
able to accurately interpret natural language commands. These instructions
often fall into one of two categories: those that specify a goal condition or
target state, and those that specify explicit actions, or how to perform a
given task. Recent approaches have used reward functions as a semantic
representation of goal-based commands, which allows for the use of a
state-of-the-art planner to find a policy for the given task. However, these
reward functions cannot be directly used to represent action-oriented commands.
We introduce a new hybrid approach, the Deep Recurrent Action-Goal Grounding
Network (DRAGGN), for task grounding and execution that handles natural
language from either category as input, and generalizes to unseen environments.
Our robot-simulation results demonstrate that a system successfully
interpreting both goal-oriented and action-oriented task specifications brings
us closer to robust natural language understanding for human-robot interaction.
Comment: Accepted at the 1st Workshop on Language Grounding for Robotics at ACL 2017.
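The distinction the abstract draws (goal commands grounding to reward functions for a planner, action commands grounding to explicit action sequences) can be sketched as follows. This is a conceptual illustration only, not the DRAGGN architecture itself: the actual model is a recurrent neural network, and the keyword test, state names, and action names below are invented stand-ins.

    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    // A goal-oriented command grounds to a reward function a planner can optimize.
    using RewardFn = std::function<double(const std::string& state)>;

    // An action-oriented command grounds to an explicit action sequence.
    using ActionSeq = std::vector<std::string>;

    struct Grounding {
        bool is_goal;       // which branch of the hybrid model produced output
        RewardFn reward;    // set when is_goal == true
        ActionSeq actions;  // set when is_goal == false
    };

    // Stand-in for the learned grounding model: a trivial keyword heuristic.
    Grounding ground(const std::string& command) {
        if (command.find("go to") != std::string::npos) {
            // Goal condition: reward 1 in the target state, 0 elsewhere;
            // a planner then searches for a policy that reaches it.
            return {true,
                    [](const std::string& s) { return s == "blue_room" ? 1.0 : 0.0; },
                    {}};
        }
        // Explicit "how" instructions: execute the steps directly, no planner.
        return {false, nullptr, {"forward", "forward", "turn_left"}};
    }

    int main() {
        Grounding g = ground("go to the blue room");
        std::cout << (g.is_goal ? "plan against reward function"
                                : "execute action sequence")
                  << "\n";
    }

Routing the two command types to different output representations is what lets a hybrid approach use a state-of-the-art planner for goals while still executing explicit action sequences directly.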
GART: The Gesture and Activity Recognition Toolkit
Presented at the 12th International Conference on Human-Computer Interaction, Beijing, China, July 2007. The original publication is available at www.springerlink.com.
The Gesture and Activity Recognition Toolkit (GART) is
a user interface toolkit designed to enable the development of gesture-based
applications. GART provides an abstraction to machine learning
algorithms suitable for modeling and recognizing different types of
gestures. The toolkit also provides support for the data collection and
the training process. In this paper, we present GART and its machine
learning abstractions. Furthermore, we detail the components of the
toolkit and present two example gesture recognition applications.