Internet of robotic things: converging sensing/actuating, hypoconnectivity, artificial intelligence and IoT platforms
The Internet of Things (IoT) concept is evolving rapidly and influencing new developments in various application domains, such as the Internet of Mobile Things (IoMT), Autonomous Internet of Things (A-IoT), Autonomous System of Things (ASoT), Internet of Autonomous Things (IoAT), Internet of Things Clouds (IoT-C) and the Internet of Robotic Things (IoRT), all of which are advancing by using IoT technology. The IoT influence presents new development and deployment challenges in different areas, such as seamless platform integration, context-based cognitive network integration, new mobile sensor/actuator network paradigms, things identification (addressing and naming in IoT), dynamic things discoverability and many others. The IoRT raises new convergence challenges that need to be addressed: on one side, the programmability and communication of multiple heterogeneous mobile/autonomous/robotic things for cooperation, along with their coordination, configuration, exchange of information, security, safety and protection. Developments in IoT heterogeneous parallel processing/communication and dynamic systems based on parallelism and concurrency require new ideas for integrating intelligent "devices", such as collaborative robots (COBOTS), into IoT applications. Dynamic maintainability, self-healing, self-repair of resources, changing resource state, (re-)configuration and context-based IoT systems for service implementation and integration with IoT network service composition are of paramount importance when new "cognitive devices" become active participants in IoT applications. This chapter provides an overview of the IoRT concept, technologies, architectures and applications, together with comprehensive coverage of future challenges, developments and applications.
Learning Deployable Navigation Policies at Kilometer Scale from a Single Traversal
Model-free reinforcement learning has recently been shown to be effective at
learning navigation policies from complex image input. However, these
algorithms tend to require large amounts of interaction with the environment,
which can be prohibitively costly to obtain on robots in the real world. We
present an approach for efficiently learning goal-directed navigation policies
on a mobile robot, from only a single coverage traversal of recorded data. The
navigation agent learns an effective policy over a diverse action space in a
large heterogeneous environment consisting of more than 2km of travel, through
buildings and outdoor regions that collectively exhibit large variations in
visual appearance, self-similarity, and connectivity. We compare pretrained
visual encoders that enable precomputation of visual embeddings to achieve a
throughput of tens of thousands of transitions per second at training time on a
commodity desktop computer, allowing agents to learn from millions of
trajectories of experience in a matter of hours. We propose multiple forms of
computationally efficient stochastic augmentation to enable the learned policy
to generalise beyond these precomputed embeddings, and demonstrate successful
deployment of the learned policy on the real robot without fine-tuning, despite
environmental appearance differences at test time. The dataset and code
required to reproduce these results and apply the technique to other datasets
and robots are made publicly available at rl-navigation.github.io/deployable.
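The key efficiency idea in the abstract, caching embeddings from a frozen pretrained encoder and then adding cheap stochastic augmentation in embedding space, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the fixed random projection standing in for a pretrained CNN, and the names `precompute_embeddings`, `augment` and `sigma`, are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed random projection stands in for a frozen pretrained visual
# encoder; real embeddings would come from a CNN run once over the data.
IMG_DIM, EMB_DIM = 3072, 64          # e.g. 32x32x3 frames -> 64-d embeddings
W = rng.standard_normal((IMG_DIM, EMB_DIM)) / np.sqrt(IMG_DIM)

def precompute_embeddings(images):
    # Done once before training, so each epoch touches only small cached
    # vectors instead of re-encoding raw images.
    return images.reshape(len(images), -1) @ W

def augment(embeddings, sigma=0.1):
    # Cheap stochastic augmentation in embedding space: additive Gaussian
    # noise discourages the policy from memorising the exact cached
    # vectors, helping it generalise at deployment time.
    return embeddings + sigma * rng.standard_normal(embeddings.shape)

images = rng.random((1000, IMG_DIM))    # stand-in traversal frames
cache = precompute_embeddings(images)   # computed once, reused every epoch
batch = augment(cache[:256])            # fresh noise drawn per training batch
```

Because the augmentation is a single vectorised addition over cached arrays, per-batch cost is tiny, which is what makes throughput of tens of thousands of transitions per second on a desktop machine plausible.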
Fuzzy transfer learning in human activity recognition.
Assisted living environments incorporate different technological solutions to improve quality of life and well-being. In recent years, there has been growing interest in the research community in how to develop evolving solutions to aid assisted living. Different techniques have been studied to address the need for technological systems which are intelligent enough to evolve their knowledge to solve tasks which have not been previously encountered. One such approach is Transfer Learning (TL), for example between humans and robots.
Humans excel at dealing with everyday activities, learning and adapting to different activities. This involves a range of complex techniques which enable lifelong learning from observation of our environment. To obtain similar learning in assistive agents, TL is needed. The aim of the research reported in this thesis is to address the challenge associated with the learning and reuse of knowledge by assistive agents in an Ambient Assisted Living (AAL) environment. In this thesis, a novel approach to transfer learning of human activities is presented, combining three methods: TL, Fuzzy Systems (FS) and Human Activity Recognition (HAR). Through the incorporation of FS into the proposed approach, the uncertainty evident in the dynamic nature of human activities is embedded into the learning model.
This research is focused on applications in assistive robotics, with the purpose of enabling assistive robots in AAL environments to acquire knowledge of activities performed by humans. To achieve this, an extensive investigation into existing learning methods applied to human activities is conducted. The investigation encompasses the current state of the art of TL approaches employed in skill transfer across different but contextually related activities.
To address the research questions identified in the thesis, the contributions of the methodology employed fall into three main categories. 1) Firstly, a novel framework for human activity learning from observed information is developed. Experiments are conducted on selected human activities to acquire enough information for building the framework; from the acquired information, relevant extracted features are used in a learning model to recognise different activities. 2) Secondly, the sequence of occurrence of tasks in an activity needs to be considered in the learning process; therefore, a novel technique for adaptive learning of activity sequences from acquired information is developed. 3) Finally, from the sequences obtained, a novel technique is developed for the transfer of human activity across the heterogeneous feature spaces existing between a human and an assistive robot. These categories form the basis of the TL framework modelled in this research.
The proposed framework is applied to TL of human activity using experimentally generated data and benchmark datasets covering various classes of human activities. The results presented in this thesis show that exploring the process of human activity learning is an important aspect of the TL framework. The features extracted sufficiently distinguish the relevant patterns for each activity, and the results demonstrate the ability of the methodology to learn and predict human actions with a high degree of certainty. This encourages the use of TL in assisted living environments and other applications; such applications of TL could be a potential driver of the next revolution in artificial intelligence.
Robust Place Categorization With Deep Domain Generalization
Traditional place categorization approaches in robot vision assume that training and test images have similar visual appearance. Therefore, any seasonal, illumination, and environmental changes typically lead to severe degradation in performance. To cope with this problem, recent works have proposed adopting domain adaptation techniques. While effective, these methods assume that some prior information about the scenario where the robot will operate is available at training time. Unfortunately, in many cases, this assumption does not hold, as we often do not know where a robot will be deployed. To overcome this issue, in this paper, we present an approach that aims at learning classification models able to generalize to unseen scenarios. Specifically, we propose a novel deep learning framework for domain generalization. Our method develops from the intuition that, given a set of different classification models associated with known domains (e.g., corresponding to multiple environments or robots), the best model for a new sample in the novel domain can be computed directly at test time by optimally combining the known models. To implement our idea, we exploit recent advances in deep domain adaptation and design a convolutional neural network architecture with novel layers performing a weighted version of batch normalization. Our experiments, conducted on three common datasets for robot place categorization, confirm the validity of our contribution.
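The core idea of weighting per-domain batch normalization statistics at test time can be illustrated with a small numerical sketch. This is a simplified stand-in for the paper's learned network layers, under assumed details: the per-feature domain statistics are invented, and the mixing weights here come from a softmax over distances to each domain's mean rather than a learned assignment.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax, used to turn domain scores into weights.
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical (mean, variance) batch-norm statistics collected per feature
# for two known training domains.
domain_stats = [
    (np.array([0.0, 1.0]), np.array([1.0, 0.5])),   # domain A
    (np.array([2.0, -1.0]), np.array([0.5, 2.0])),  # domain B
]

def weighted_bn(x, stats, eps=1e-5):
    # Score each known domain by how close the sample is to its mean,
    # convert scores to convex mixing weights, and blend the per-domain
    # normalised outputs: a weighted version of batch normalization.
    dists = np.array([np.linalg.norm(x - m) for m, _ in stats])
    w = softmax(-dists)
    out = np.zeros_like(x, dtype=float)
    for wi, (m, v) in zip(w, stats):
        out += wi * (x - m) / np.sqrt(v + eps)
    return out, w

x = np.array([1.9, -0.8])               # sample from an unseen domain
y, w = weighted_bn(x, domain_stats)     # weights favour the closer domain B
```

The convex combination means a test sample resembling one training domain is normalised mostly with that domain's statistics, while a genuinely novel sample receives a blend, which is what lets the model generalize without any target-domain data at training time.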