Fuzzy transfer learning in human activity recognition
Assisted living environments incorporate different technological solutions to improve quality of life and well-being. In recent years, there has been growing interest in the research community in developing evolving solutions to aid assisted living. Different techniques have been studied to address the need for technological systems intelligent enough to evolve their knowledge to solve tasks that have not been previously encountered. One such approach is Transfer Learning (TL), for example between humans and robots.
Humans excel at dealing with everyday activities, learning and adapting to different activities. This relies on complex techniques that enable lifelong learning from observation of our environment. To obtain similar learning in assistive agents, TL is needed. The aim of the research reported in this thesis is to address the challenge associated with learning and reuse of knowledge by assistive agents in an Ambient Assisted Living (AAL) environment. In this thesis, a novel approach to transfer learning of human activities is presented, combining three methods: TL, Fuzzy Systems (FS) and Human Activity Recognition (HAR). Through the incorporation of FS into the proposed approach, the uncertainty evident in the dynamic nature of human activities is embedded into the learning model.
This research is focused on applications in assistive robotics, with the purpose of enabling assistive robots in AAL environments to acquire knowledge of activities performed by humans. To achieve this, an extensive investigation into existing learning methods applied to human activities is conducted. The investigation encompasses the current state of the art in TL approaches employed for skill transfer across different but contextually related activities.
To address the research questions identified in the thesis, the contributions of the methodology fall into three main categories. 1) Firstly, a novel framework for human activity learning from observed information. Experiments are conducted on selected human activities to acquire enough information for building the framework. From the acquired information, relevant extracted features are used in a learning model to recognise different activities. 2) Secondly, the sequence of occurrence of tasks in an activity needs to be considered in the learning process. Therefore, a novel technique for adaptive learning of activity sequences from acquired information is developed. 3) Finally, from the sequence obtained, a novel technique is developed for transferring human activity across the heterogeneous feature spaces of a human and an assistive robot. These categories form the basis of the TL framework modelled in this research.
The proposed framework is applied to TL of human activity using experimentally generated data and benchmark datasets covering various classes of human activities. The results presented in this thesis show that exploring the process of human activity learning is an important aspect of the TL framework. The features extracted sufficiently distinguish relevant patterns for each activity. The results also demonstrate the ability of the methodology to learn and predict human actions with a high degree of certainty. This encourages the use of TL in assisted living environments and other applications, and such applications of TL could be a driver of the next revolution in artificial intelligence.
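As a rough illustration of how a fuzzy system can embed the uncertainty of human activities into a learning model, the sketch below fuzzifies a single activity feature (task duration) with triangular membership functions. The linguistic terms and breakpoints are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch: triangular fuzzy membership functions for one activity
# feature (task duration in seconds). Terms and breakpoints are assumed
# for illustration only.
def triangular(x, a, b, c):
    """Membership degree of x in the triangular fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Fuzzify a measured duration against three linguistic terms.
terms = {"short": (0, 5, 15), "medium": (10, 20, 30), "long": (25, 40, 60)}
duration = 18.0
memberships = {name: triangular(duration, *p) for name, p in terms.items()}
# A duration of 18 s belongs mostly to "medium" (degree 0.8), with no
# support for "short" or "long" under these assumed breakpoints.
```

Rather than forcing a crisp label, the membership degrees carry the ambiguity of the observation forward, which is the property that makes fuzzy systems attractive for modelling variable human behaviour.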
On the Integration of Adaptive and Interactive Robotic Smart Spaces
© 2015 Mauro Dragone et al. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License (CC BY-NC-ND 3.0). Enabling robots to seamlessly operate as part of smart spaces is an important and extended challenge for robotics R&D and a key enabler for a range of advanced robotic applications, such as Ambient Assisted Living (AAL) and home automation. The integration of these technologies is currently being pursued from two largely distinct viewpoints. On the one hand, people-centred initiatives focus on improving user acceptance by tackling human-robot interaction (HRI) issues, often adopting a social robotics approach, and by giving the designer, and to a limited degree the final user(s), control over personalisation and product customisation features. On the other hand, technology-driven initiatives are building impersonal but intelligent systems that are able to pro-actively and autonomously adapt their operations to fit changing requirements and evolving users' needs, but which largely ignore and do not leverage human-robot interaction and may thus lead to poor user experience and acceptance. In order to inform the development of a new generation of smart robotic spaces, this paper analyses and compares different research strands with a view to proposing possible integrated solutions with both advanced HRI and online adaptation capabilities. Peer reviewed.
Healthcare Robotics
Robots have the potential to be a game changer in healthcare: improving
health and well-being, filling care gaps, supporting care givers, and aiding
health care workers. However, before robots are able to be widely deployed, it
is crucial that both the research and industrial communities work together to
establish a strong evidence-base for healthcare robotics, and surmount likely
adoption barriers. This article presents a broad contextualization of robots in
healthcare by identifying key stakeholders, care settings, and tasks; reviewing
recent advances in healthcare robotics; and outlining major challenges and
opportunities to their adoption.
Comment: 8 pages, Communications of the ACM, 201
Human activity learning for assistive robotics using a classifier ensemble
Assistive robots in ambient assisted living environments can be equipped with learning capabilities to effectively learn and execute human activities. This paper proposes a human activity learning (HAL) system for application in assistive robotics. An RGB-depth sensor is used to acquire information about human activities, and a set of statistical, spatial and temporal features encoding key aspects of those activities is extracted from the acquired data. Redundant features are removed, and the relevant features are used in the HAL model. An ensemble of three individual classifiers (support vector machines, K-nearest neighbour and random forest) is employed to learn the activities. The performance of the proposed system is improved compared with that of other methods using a single classifier. The approach is evaluated on an experimental dataset created for this work and on a benchmark dataset, the Cornell Activity Dataset (CAD-60). Experimental results show that the overall performance achieved by the proposed system is comparable to the state of the art and has the potential to benefit applications in assistive robots by reducing the time spent in learning activities.
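A minimal sketch of the classifier-combination idea described above, using scikit-learn's soft-voting ensemble over the same three model families. The synthetic features stand in for the statistical, spatial and temporal features extracted from RGB-D data; dataset sizes and hyperparameters are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: a soft-voting ensemble of SVM, k-NN and random forest
# classifiers, as in the abstract's HAL system. Synthetic data stands in
# for the extracted activity features (assumption for illustration).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Four "activity" classes, 20-dimensional feature vectors.
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("svm", SVC(probability=True, random_state=0)),
                ("knn", KNeighborsClassifier(n_neighbors=5)),
                ("rf", RandomForestClassifier(random_state=0))],
    voting="soft")  # average predicted class probabilities across models
ensemble.fit(X_tr, y_tr)
accuracy = ensemble.score(X_te, y_te)
```

Soft voting averages the per-class probabilities of the three models, so a confident random forest can outvote an uncertain SVM; this is one common way an ensemble can outperform any single constituent classifier.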
Multidimensional Capacitive Sensing for Robot-Assisted Dressing and Bathing
Robotic assistance presents an opportunity to benefit the lives of many
people with physical disabilities, yet accurately sensing the human body and
tracking human motion remain difficult for robots. We present a
multidimensional capacitive sensing technique that estimates the local pose of
a human limb in real time. A key benefit of this sensing method is that it can
sense the limb through opaque materials, including fabrics and wet cloth. Our
method uses a multielectrode capacitive sensor mounted to a robot's end
effector. A neural network model estimates the position of the closest point on
a person's limb and the orientation of the limb's central axis relative to the
sensor's frame of reference. These pose estimates enable the robot to move its
end effector with respect to the limb using feedback control. We demonstrate
that a PR2 robot can use this approach with a custom six electrode capacitive
sensor to assist with two activities of daily living: dressing and bathing. The
robot pulled the sleeve of a hospital gown onto able-bodied participants' right
arms, while tracking human motion. When assisting with bathing, the robot moved
a soft wet washcloth to follow the contours of able-bodied participants' limbs,
cleaning their surfaces. Overall, we found that multidimensional capacitive
sensing presents a promising approach for robots to sense and track the human
body during assistive tasks that require physical human-robot interaction.
Comment: 8 pages, 16 figures, International Conference on Rehabilitation Robotics 201
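The regression step described above, from multielectrode capacitance readings to a local limb pose, can be sketched as a small feed-forward network. The layer sizes, the (untrained) weights and the 4-D pose parameterisation below are illustrative assumptions; this does not reproduce the authors' trained model.

```python
# Hedged sketch: a tiny feed-forward network mapping six capacitance
# readings to a local limb pose estimate (lateral/vertical offset of the
# closest point plus pitch/yaw of the limb's central axis). Weights are
# random placeholders, not a trained model.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 6)), np.zeros(16)   # hidden layer, 6 inputs
W2, b2 = rng.normal(size=(4, 16)), np.zeros(4)    # 4-D pose output

def estimate_pose(capacitance):
    """Forward pass: 6 sensor readings -> (dy, dz, pitch, yaw)."""
    h = np.maximum(0.0, W1 @ capacitance + b1)     # ReLU hidden units
    return W2 @ h + b2

readings = rng.normal(size=6)   # stand-in for one capacitive measurement
pose = estimate_pose(readings)  # would feed the robot's feedback controller
```

In the paper's setup, the pose estimate closes the loop: the robot servos its end effector relative to the limb, so the network must run at control rates, which such a small model does easily.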
Overcoming barriers and increasing independence: service robots for elderly and disabled people
This paper discusses the potential for service robots to overcome barriers and increase independence of
elderly and disabled people. It includes a brief overview of the existing uses of service robots by disabled and elderly
people and advances in technology which will make new uses possible and provides suggestions for some of these new
applications. The paper also considers the design and other conditions to be met for user acceptance. It also discusses
the complementarity of assistive service robots and personal assistance and considers the types of applications and
users for which service robots are and are not suitable.
Simultaneous Feature and Body-Part Learning for Real-Time Robot Awareness of Human Behaviors
Robot awareness of human actions is an essential research problem in robotics
with many important real-world applications, including human-robot
collaboration and teaming. Over the past few years, depth sensors have become a
standard device widely used by intelligent robots for 3D perception, which can
also offer human skeletal data in 3D space. Several methods based on skeletal
data were designed to enable robot awareness of human actions with satisfactory
accuracy. However, previous methods treated all body parts and features as
equally important, without the capability to identify discriminative body parts and
features. In this paper, we propose a novel simultaneous Feature And Body-part
Learning (FABL) approach that simultaneously identifies discriminative body
parts and features, and efficiently integrates all available information
together to enable real-time robot awareness of human behaviors. We formulate
FABL as a regression-like optimization problem with structured
sparsity-inducing norms to model interrelationships of body parts and features.
We also develop an optimization algorithm to solve the formulated problem,
which possesses a theoretical guarantee to find the optimal solution. To
evaluate FABL, three experiments were performed using public benchmark
datasets, including the MSR Action3D and CAD-60 datasets, as well as a Baxter
robot in practical assistive living applications. Experimental results show
that our FABL approach obtains high recognition accuracy with a processing
speed on the order of 10^4 Hz, which makes FABL a promising method
to enable real-time robot awareness of human behaviors in practical robotics
applications.
Comment: 8 pages, 6 figures, accepted by ICRA'1
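The "regression with structured sparsity-inducing norms" idea can be illustrated with a row-wise L2,1 (group-sparsity) penalty solved by proximal gradient descent: whole feature rows are driven exactly to zero, which is how discriminative features or body parts get selected. The data, regulariser weight and step size below are illustrative assumptions, not the FABL formulation's actual values.

```python
# Hedged sketch: proximal gradient descent for multi-output least squares
# with a row-wise L2,1 penalty, a standard structured sparsity-inducing
# norm of the kind FABL uses to select discriminative features.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 30))           # 100 samples, 30 features
W_true = np.zeros((30, 5))               # 5 output dimensions (classes)
W_true[:5] = rng.normal(size=(5, 5))     # only the first 5 features matter
Y = X @ W_true

def prox_l21(W, t):
    """Row-wise soft thresholding: shrinks whole feature rows to zero."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return W * scale

lam = 5.0                                # illustrative penalty weight
step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of grad
W = np.zeros((30, 5))
for _ in range(500):
    grad = X.T @ (X @ W - Y)             # gradient of 0.5*||XW - Y||_F^2
    W = prox_l21(W - step * grad, step * lam)

active_rows = int((np.linalg.norm(W, axis=1) > 1e-6).sum())
```

Because the penalty acts on entire rows of W rather than individual entries, a feature is kept or discarded jointly across all outputs, mirroring how FABL treats a body part's features as a group.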
Robotic ubiquitous cognitive ecology for smart homes
Robotic ecologies are networks of heterogeneous robotic devices pervasively embedded in everyday environments, where they cooperate to perform complex tasks. While their potential makes them increasingly popular, one fundamental problem is how to make them both autonomous and adaptive, so as to reduce the amount of preparation, pre-programming and human supervision that they require in real-world applications. The RUBICON project develops learning solutions which yield cheaper, adaptive and efficient coordination of robotic ecologies. The approach we pursue builds upon a unique combination of methods from cognitive robotics, machine learning, planning and agent-based control, and wireless sensor networks. This paper illustrates the innovations advanced by RUBICON on each of these fronts before describing how the resulting techniques have been integrated and applied to a smart home scenario. The resulting system is able to provide useful services and pro-actively assist users in their activities. RUBICON learns through an incremental and progressive approach driven by the feedback received from its own activities and from the user, while also self-organising the manner in which it uses available sensors, actuators and other functional components in the process. This paper summarises some of the lessons learned by adopting such an approach and outlines promising directions for future work.
Assistive robotics: research challenges and ethics education initiatives
Assistive robotics is a fast-growing field aimed at helping healthcare workers in hospitals, rehabilitation centers and nursing homes, as well as empowering people with reduced mobility at home, so that they can autonomously fulfil their daily living activities. The need to function in dynamic human-centered environments poses new research challenges: robotic assistants need to have friendly interfaces, be highly adaptable and customisable, be very compliant and intrinsically safe to people, and be able to handle deformable materials.
Besides technical challenges, assistive robotics also raises ethical challenges, which have led to the emergence of a new discipline: Roboethics. Several institutions are developing regulations and standards, and many ethics education initiatives include content on human-robot interaction and human dignity in assistive situations.
In this paper, the state of the art in assistive robotics is briefly reviewed, and educational materials from a university course on Ethics in Social Robotics and AI focusing on the assistive context are presented. Peer reviewed. Postprint (author's final draft).