The Challenges in Modeling Human Performance in 3D Space with Fitts’ Law
With the rapid growth of virtual reality technologies, object interaction is
becoming increasingly immersive, shedding light on human perception and opening
promising directions for evaluating human performance under different settings.
This technological growth has sharply increased the need for a human
performance metric in 3D space. Fitts' law is perhaps the most widely used
predictive model of human movement in HCI history, capturing human movement in
lower dimensions.
advanced extension of a 3D human performance model based on Fitts' law, a
standardized metric is still missing. Moreover, most of the extensions to date
assume or limit their findings to certain settings, effectively disregarding
important variables that are fundamental to 3D object interaction. In this
review, we investigate and analyze the most prominent extensions of Fitts' law
and compare their characteristics, pinpointing potentially important aspects
for deriving a higher-dimensional performance model. Lastly, we discuss the
complexities, frontiers, and potential challenges that may lie ahead.

Comment: Accepted at the ACM CHI 2021 Conference on Human Factors in Computing
Systems (CHI '21 Extended Abstracts).
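The extensions surveyed above build on the classic one-dimensional Fitts' law. As a point of reference, here is a minimal sketch of its widely used Shannon formulation, MT = a + b * log2(D/W + 1); the constants `a` and `b` are empirically fitted per user and device, and the values below are illustrative placeholders, not fitted data.

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Predicted movement time (seconds) under the Shannon formulation
    MT = a + b * log2(D / W + 1).

    distance: amplitude D of the movement to the target.
    width: size W of the target along the movement axis.
    a, b: regression constants (illustrative values, not fitted data).
    """
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A farther or smaller target raises the index of difficulty, so the
# predicted movement time increases accordingly.
```

The open question the review raises is precisely how `distance` and `width` generalize once targets are volumetric and approached from arbitrary directions in 3D.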
Intrinsic Language-Guided Exploration for Complex Long-Horizon Robotic Manipulation Tasks
Current reinforcement learning algorithms struggle in sparse and complex
environments, most notably in long-horizon manipulation tasks entailing a
plethora of different sequences. In this work, we propose the Intrinsically
Guided Exploration from Large Language Models (IGE-LLMs) framework. By
leveraging LLMs as an assistive intrinsic reward, IGE-LLMs guides the
exploratory process in reinforcement learning to address intricate,
long-horizon robotic manipulation tasks with sparse rewards. We evaluate our
framework and
related intrinsic learning methods in an environment challenged with
exploration, and a complex robotic manipulation task challenged by both
exploration and long horizons. Results show IGE-LLMs (i) exhibit notably higher
performance than related intrinsic methods and the direct use of LLMs in
decision-making, (ii) can be combined with and complement existing learning
methods, highlighting their modularity, (iii) are fairly insensitive to
different intrinsic scaling parameters, and (iv) maintain robustness against
increased levels of uncertainty and longer horizons.

Comment: 8 pages, 3 figures.
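The core idea of using an LLM as an assistive intrinsic reward can be sketched as simple reward shaping: the sparse task reward is augmented by a scaled score from a language model. The function name, the `lam` weight, and the stubbed `llm_score` input are illustrative assumptions, not the paper's actual interface.

```python
def combined_reward(r_extrinsic, llm_score, lam=0.01):
    """Shape a sparse task reward with an assistive intrinsic term.

    r_extrinsic: the environment's (often zero) task reward.
    llm_score: a scalar in [0, 1] from a language model rating how
        promising the current state/action is for the task (stubbed here).
    lam: intrinsic scaling weight; the abstract reports fairly low
        sensitivity to this parameter.
    """
    return r_extrinsic + lam * llm_score

# On most steps of a sparse task r_extrinsic == 0, so the intrinsic
# term alone provides a gradient that guides exploration.
```

Because the intrinsic term is purely additive, it can be layered on top of other learning methods, which is consistent with the modularity claim in the abstract.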
Hybrid hierarchical learning for solving complex sequential tasks using the robotic manipulation network ROMAN
Solving long sequential tasks remains a non-trivial challenge in the field of embodied artificial intelligence. Enabling a robotic system to perform diverse sequential tasks with a broad range of manipulation skills is a notable open problem and continues to be an active area of research. In this work, we present a hybrid hierarchical learning framework, the robotic manipulation network ROMAN, to address the challenge of solving multiple complex tasks over long time horizons in robotic manipulation. By integrating behavioural cloning, imitation learning and reinforcement learning, ROMAN achieves task versatility and robust failure recovery. It consists of a central manipulation network that coordinates an ensemble of various neural networks, each specializing in different recombinable subtasks to generate their correct in-sequence actions for solving complex long-horizon manipulation tasks. Our experiments show that, by orchestrating and activating these specialized manipulation experts, ROMAN generates correct sequential activations accomplishing long sequences of sophisticated manipulation tasks and achieving adaptive behaviours beyond demonstrations, while exhibiting robustness to various sensory noises. These results highlight the significance and versatility of ROMAN's dynamic adaptability featuring autonomous failure recovery capabilities, and underline its potential for various autonomous manipulation tasks that require adaptive motor skills.
RObotic MAnipulation Network (ROMAN) – Hybrid Hierarchical Learning for Solving Complex Sequential Tasks
Solving long sequential tasks poses a significant challenge in embodied
artificial intelligence. Enabling a robotic system to perform diverse
sequential tasks with a broad range of manipulation skills is an active area of
research. In this work, we present a Hybrid Hierarchical Learning framework,
the Robotic Manipulation Network (ROMAN), to address the challenge of solving
multiple complex tasks over long time horizons in robotic manipulation. ROMAN
achieves task versatility and robust failure recovery by integrating
behavioural cloning, imitation learning, and reinforcement learning. It
consists of a central manipulation network that coordinates an ensemble of
various neural networks, each specialising in distinct re-combinable sub-tasks
to generate their correct in-sequence actions for solving complex long-horizon
manipulation tasks. Experimental results show that by orchestrating and
activating these specialised manipulation experts, ROMAN generates correct
sequential activations for accomplishing long sequences of sophisticated
manipulation tasks and achieving adaptive behaviours beyond demonstrations,
while exhibiting robustness to various sensory noises. These results
demonstrate the significance and versatility of ROMAN's dynamic adaptability
featuring autonomous failure recovery capabilities, and highlight its potential
for various autonomous manipulation tasks that demand adaptive motor skills.

Comment: To appear in Nature Machine Intelligence. Includes the main and
supplementary manuscripts. Total of 70 pages, with 9 figures and 17 tables.
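The hierarchical coordination described in the abstract, a central manipulation network that activates one specialized expert at a time to produce in-sequence actions, can be sketched as a simple gating loop. All names below (`central_net`, `experts`, `env_step`) are illustrative assumptions, not ROMAN's actual interfaces.

```python
def run_manipulation_sequence(state, central_net, experts, env_step,
                              max_steps=100):
    """Sketch of hierarchical expert gating for a sequential task.

    central_net: callable returning one activation score per expert.
    experts: list of callables, each a specialized sub-task policy.
    env_step: callable (state, action) -> (next_state, done).
    Returns the trace of (expert_index, action) pairs taken in sequence.
    """
    trace = []
    for _ in range(max_steps):
        scores = central_net(state)                   # one score per expert
        k = max(range(len(experts)), key=scores.__getitem__)
        action = experts[k](state)                    # activated specialist acts
        trace.append((k, action))
        state, done = env_step(state, action)
        if done:
            break
    return trace
```

The key design point matching the abstract is that only the central network reasons about sequencing, while each expert remains a recombinable specialist for one sub-task.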