201 research outputs found

    Perception and manipulation for robot-assisted dressing

    Assistive robots have the potential to provide tremendous support to disabled and elderly people in their daily dressing activities. This thesis presents a series of perception and manipulation algorithms for robot-assisted dressing, including: garment perception and grasping prior to robot-assisted dressing, real-time user posture tracking during dressing for (simulated) impaired users with limited upper-body movement capability, and finally a pipeline for robot-assisted dressing of (simulated) paralyzed users who have lost the ability to move their limbs. First, the thesis explores learning suitable grasping points on a garment prior to robot-assisted dressing. Robots should be endowed with the ability to autonomously recognize the garment state, grasp and hand the garment to the user, and subsequently complete the dressing process. This is addressed by introducing a supervised deep neural network to locate grasping points. To reduce the amount of real data required, which is costly to collect, the power of simulation is leveraged to produce large amounts of labeled data. Second, unexpected user movements should be taken into account when planning robot dressing trajectories. Tracking such user movements with vision sensors is challenging due to severe visual occlusions created by the robot and the clothes. A probabilistic real-time tracking method is proposed that uses Bayesian networks in latent spaces and fuses multi-modal sensor information. The latent spaces are created before dressing by modeling the user movements, taking the user's movement limitations and preferences into account. The tracking method is then combined with hierarchical multi-task control to minimize the force between the user and the robot. The proposed method enables the Baxter robot to provide personalized dressing assistance for users with (simulated) upper-body impairments. Finally, a pipeline for dressing (simulated) paralyzed patients using a mobile dual-armed robot is presented. The robot grasps a hospital gown naturally hung on a rail and moves around the bed to complete the upper-body dressing of a hospital training manikin. To further improve simulations for garment grasping, the thesis proposes assigning more realistic physical property values to the simulated garment. This is achieved by measuring physical similarity in the latent space using a contrastive loss, which maps physically similar examples to nearby points.
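    The contrastive objective mentioned at the end of this abstract can be illustrated with a minimal sketch (function name, margin, and data are illustrative assumptions, not the thesis code): physically similar garment pairs are pulled together in latent space, while dissimilar pairs are pushed at least a margin apart.

```python
import torch
import torch.nn.functional as F

def physical_contrastive_loss(z1, z2, similar, margin=1.0):
    """Pairwise contrastive loss over latent garment embeddings.

    z1, z2  : (batch, dim) latent vectors of two simulated garment examples
    similar : (batch,) 1.0 if the pair is physically similar, else 0.0
    margin  : minimum latent distance enforced between dissimilar pairs
    """
    d = F.pairwise_distance(z1, z2)                      # Euclidean distance in latent space
    pull = similar * d.pow(2)                            # similar pairs are pulled together
    push = (1.0 - similar) * F.relu(margin - d).pow(2)   # dissimilar pairs pushed apart
    return (pull + push).mean()

# toy usage with random embeddings and labels
z1, z2 = torch.randn(8, 32), torch.randn(8, 32)
labels = torch.randint(0, 2, (8,)).float()
print(physical_contrastive_loss(z1, z2, labels).item())
```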

    Visual grasp point localization, classification and state recognition in robotic manipulation of cloth: an overview

    This manuscript version is made available under the CC-BY-NC-ND 4.0 license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Cloth manipulation by robots is gaining popularity among researchers because of its relevance, mainly (but not only) in domestic and assistive robotics. The required science and technologies are beginning to mature enough for the challenges posed by the manipulation of soft materials, and many contributions have appeared in recent years. This survey provides a systematic review of existing techniques for the basic perceptual tasks of grasp point localization, state estimation and classification of cloth items, from the perspective of their manipulation by robots. This choice is grounded in the fact that any manipulative action requires instructing the robot where to grasp, and most garment-handling activities depend on correctly recognizing the type to which a particular cloth item belongs and its state. The high inter- and intra-class variability of garments, the continuous nature of the possible deformations of cloth, and the evident difficulty of predicting their localization and extent on the garment piece are challenges that have encouraged researchers to propose a plethora of methods to confront such problems, with some promising results. The present review constitutes a first effort to furnish a structured framework of these works, with the aim of helping future contributors gain both insight and perspective on the subject.

    Sensing Highly Non-Rigid Objects with RGBD Sensors for Robotic Systems

    The goal of this research is to enable a robotic system to manipulate clothing and other highly non-rigid objects using an RGBD sensor. The focus of this thesis is to define and test various algorithms and models that are used to solve parts of the laundry process (i.e. handling, classifying, sorting, unfolding, and folding). First, a system is presented for automatically extracting and classifying items in a pile of laundry. Using only visual sensors, the robot identifies and extracts items sequentially from the pile. When an item is removed and isolated, a model is captured of the shape and appearance of the object, which is then compared against a dataset of known items. The contributions of this part of the laundry process are a novel method for extracting articles of clothing from a pile of laundry, a novel method of classifying clothing using interactive perception, and a multi-layer approach termed L-M-H (more specifically L-C-S-H) for clothing classification. This thesis describes two different approaches to classify clothing into categories. The first approach relies upon silhouettes, edges, and other low-level image measurements of the articles of clothing. Experiments with the first approach demonstrate the ability of the system to efficiently classify and label items into one of six categories (pants, shorts, short-sleeve shirt, long-sleeve shirt, socks, or underwear). These results show that, on average, classification rates using robot interaction are 59% higher than those that do not use interaction. The second approach relies upon color, texture, shape, and edge information from 2D and 3D data within a local and global perspective. The multi-layer approach compartmentalizes the problem into a high (H) layer, multiple mid-level (characteristics (C), selection masks (S)) layers, and a low (L) layer. This approach produces 'local' solutions to solve the global classification problem. Experiments demonstrate the ability of the system to efficiently classify each article of clothing into one of seven categories (pants, shorts, shirts, socks, dresses, cloths, or jackets). These results show that, on average, the classification rates improve by +27.47% for three categories, +17.90% for four categories, and +10.35% for seven categories over the baseline system, using support vector machines. Second, an algorithm is presented for automatically unfolding a piece of clothing. A piece of cloth is pulled in different directions at various points in order to flatten it. Features of the cloth are extracted and used to determine a valid location and orientation at which to interact with it; these features include the peak region, corner locations, and continuity / discontinuity of the cloth. A two-stage algorithm is presented, introducing a novel solution to the unfolding / flattening problem using interactive perception. Simulations using 3D simulation software and experiments with robot hardware demonstrate the ability of the algorithm to flatten pieces of laundry from different starting configurations. These results show that, at best, the algorithm flattens a piece of cloth from 11.1% to 95.6% of the canonical configuration. Third, an energy minimization algorithm is presented that is designed to estimate the configuration of a deformable object. This approach uses an RGBD image to calculate feature correspondence (using SURF features), depth values, and boundary locations. Input from a Kinect sensor is used to segment the deformable surface from the background using an alpha-beta swap algorithm. Using this segmentation, the system creates an initial mesh model without prior information about the surface geometry, and it reinitializes the configuration of the mesh model after a loss of input data. This approach is able to handle in-plane rotation, out-of-plane rotation, and varying changes in translation and scale. Results are reported for the proposed algorithm on a dataset consisting of seven shirts, two pairs of shorts, two posters, and a pair of pants. The approach is compared against a simulated shirt model in order to calculate the mean square error of the distance from the vertices of the mesh model to the ground truth provided by the simulation model.
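    As a rough illustration of the kind of support-vector-machine baseline mentioned above, the sketch below trains a multi-class SVM on pre-computed garment feature vectors. The features, dimensionality, and random data are placeholders, not the thesis's actual L-C-S-H pipeline or dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

CATEGORIES = ["pants", "shorts", "shirts", "socks", "dresses", "cloths", "jackets"]

# Placeholder features: in practice these would be silhouette, edge, color and
# texture descriptors extracted from RGBD views of each isolated garment.
rng = np.random.default_rng(0)
X = rng.normal(size=(700, 64))                   # 700 items, 64-D feature vectors
y = rng.integers(0, len(CATEGORIES), size=700)   # ground-truth category indices

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # multi-class SVM (one-vs-one)
clf.fit(X_train, y_train)

print("baseline accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```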

    Towards Intelligent Playful Environments for Animals based on Natural User Interfaces

    The study of animals' interactions with technology and the development of animal-centered technological systems have been gaining attention since the emergence of the research area of Animal Computer Interaction (ACI). ACI aims to improve animals' welfare and wellbeing in several scenarios by developing suitable technology for the animal, following an animal-centered approach. Among the research lines ACI is exploring, there has been significant interest in animals' playful interactions with technology. Technologically mediated playful activities have the potential to provide mental and physical stimulation for animals in different environmental contexts, which could in turn help to improve their wellbeing. As we embark on the era of the Internet of Things, current technological playful activities for animals have not yet explored the development of pervasive solutions that could provide animals with more adaptation to their preferences while offering more varied technological stimuli. Instead, playful technology for animals is usually based on digital interactions rather than exploring tangible devices or augmenting the interactions with different stimuli. In addition, these playful activities are predefined and do not change over time, and they require a human to provide the device or technology to the animal. If humans could focus more on their participation as active players of an interactive system aimed at animals, instead of being concerned with holding a device for the animal or keeping the system running, this might help to create stronger bonds between species and foster better relationships with animals. Moreover, animals' mental and physical stimulation are important aspects that could be fostered if the playful systems designed for them offered a varied range of outputs, were tailored to the animal's behaviors, and prevented the animal from getting used to the system and losing interest. Therefore, this thesis proposes the design and development of technological playful environments based on Natural User Interfaces that can adapt and react to the animals' natural interactions. These pervasive scenarios would allow animals to play by themselves or with a human, providing more engaging and dynamic playful activities that are capable of adapting over time.
    Pons Tomás, P. (2018). Towards Intelligent Playful Environments for Animals based on Natural User Interfaces [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/113075

    Robotic Platforms for Assistance to People with Disabilities

    People with congenital and/or acquired disabilities make up a large share of today's dependent population. Robotic platforms to help people with disabilities are being developed with the aim of providing both rehabilitation treatment and assistance to improve their quality of life. High demand for robotic platforms that provide assistance during rehabilitation is expected, given the state of global health in the wake of the COVID-19 pandemic, which has left countries facing major challenges in ensuring the health and autonomy of their disabled populations. Robotic platforms are necessary to ensure assistance and rehabilitation for disabled people in the current global situation, and their capabilities must be continuously improved to benefit the healthcare sector in terms of chronic disease prevention, assistance, and autonomy. For this reason, research on human–robot interaction in these assistive environments must grow and advance, because this topic demands sensitive and intelligent robotic platforms equipped with complex sensory systems, advanced handling functionalities, safe control strategies, and intelligent computer vision algorithms. This Special Issue has published eight papers covering recent advances in the field of robotic platforms to assist disabled people in daily or clinical environments. The papers address innovative solutions in this field, including affordable assistive robotic devices, new computer vision techniques for intelligent and safe human–robot interaction, and advances in mobile manipulators for assistive tasks.

    Development and evaluation of a haptic framework supporting telerehabilitation robotics and group interaction

    Telerehabilitation robotics has grown remarkably in the past few years. It can provide intensive training to people with special needs remotely, while allowing therapists to observe the whole process. Telerehabilitation robotics is a promising way to support routine care: it can help to transform face-to-face, one-on-one treatment sessions, which require intensive human resources and are restricted to specialised care centres, into technology-based treatments with less human involvement that are easy to access remotely from anywhere. However, limitations such as network latency, jitter, and delay over the internet can negatively affect user experience and the quality of a treatment session. Moreover, the lack of social interaction, since all treatments are performed over the internet, can reduce patients' motivation. Together, these limitations make it very difficult to deliver an efficient recovery plan. This thesis developed and evaluated a new framework designed to facilitate telerehabilitation robotics. The framework integrates multiple cutting-edge technologies to generate playful activities that involve group interaction with binaural audio, visual, and haptic feedback alongside robot interaction in a variety of environments. The research questions asked were: 1) Can activity mediated by technology motivate and influence the behaviour of users, so that they engage in the activity and sustain a good level of motivation? 2) Will working as a group enhance users' motivation and interaction? 3) Can we transfer a real-life activity involving group interaction to the virtual domain and deliver it reliably via the internet? There were three goals in this work: first, to compare people's behaviours and motivations while doing a task in a group and on their own; second, to determine whether group interaction in virtual and real environments differed in terms of performance, engagement, and strategy for completing the task; and finally, to test the effectiveness of the framework against benchmarks drawn from the socially assistive robotics literature. Three studies were conducted to achieve the first goal, two with healthy participants and one with seven autistic children. The first study observed how people react in a challenging group task, while the other two compared group and individual interactions. The results showed that group interactions were more enjoyable than individual interactions and most likely had more positive effects on user behaviour. This suggests that the group interaction approach has the potential to motivate individuals to make more movements and be more active, and could be applied in the future to more serious therapy. A further study measured the performance of group interaction in virtual and real environments and identified which aspects influence users' strategies for dealing with the task. The results from this study helped to build a better understanding of how to predict a user's behaviour in a collaborative task. A simulation was run to compare the results generated by the predictor with the real data; it showed that, with an appropriate training method, the predictor can perform very well. This thesis has demonstrated the feasibility of group interaction via the internet using robotic technology, which could be beneficial for people who require social interaction in their treatments (e.g. stroke patients and autistic children) without regular visits to clinical centres.
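    As a small aside on the network limitations noted above, the sketch below measures round-trip time and jitter to a UDP echo endpoint. The host, port, and thresholds are assumptions for illustration only and are unrelated to the framework's actual transport layer.

```python
import socket
import statistics
import time

def measure_rtt(host="127.0.0.1", port=9999, probes=50, timeout=0.5):
    """Send timestamped UDP probes to an echo endpoint and report RTT statistics.

    Jitter is approximated here as the standard deviation of round-trip times;
    a telerehabilitation system would track this continuously and adapt the
    haptic/visual update rate or warn the therapist when quality degrades.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    rtts = []
    for i in range(probes):
        t0 = time.perf_counter()
        sock.sendto(str(i).encode(), (host, port))
        try:
            sock.recvfrom(64)
            rtts.append((time.perf_counter() - t0) * 1000.0)   # milliseconds
        except socket.timeout:
            pass                                               # count as a lost probe
    if rtts:
        print(f"mean RTT {statistics.mean(rtts):.2f} ms, "
              f"jitter {statistics.pstdev(rtts):.2f} ms, "
              f"loss {100 * (1 - len(rtts) / probes):.0f}%")

# measure_rtt()  # requires a UDP echo server listening on the given host/port
```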

    State of the art of audio- and video-based solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications. Europe is facing ever more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, highlighting the need for action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than wearable sensors, which may hinder one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals, due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to coping with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential of the silver economy is overviewed.
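    The video-based vital-sign monitoring mentioned above can be illustrated with a deliberately simplified sketch of heart-rate estimation: average the green channel of a face region over time, then pick the dominant frequency in the physiological band. Real systems add face tracking, skin segmentation, and far more robust signal processing; the data below are synthetic.

```python
import numpy as np

def estimate_heart_rate(green_trace, fps, lo=0.7, hi=3.0):
    """Estimate heart rate from the mean green-channel intensity of a face ROI.

    green_trace : 1-D array, one sample per video frame
    fps         : video frame rate in Hz
    lo, hi      : plausible heart-rate band in Hz (roughly 42-180 bpm)
    """
    x = np.asarray(green_trace, dtype=float)
    x = x - x.mean()                                  # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)              # restrict to the physiological band
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                                # Hz -> beats per minute

# synthetic trace: a 72 bpm pulse sampled at 30 fps for 10 s, plus noise
fps, bpm = 30, 72
t = np.arange(0, 10, 1.0 / fps)
trace = np.sin(2 * np.pi * (bpm / 60.0) * t) + 0.3 * np.random.randn(len(t))
print(f"estimated heart rate: {estimate_heart_rate(trace, fps):.1f} bpm")
```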

    Recent Developments in Smart Healthcare

    Medicine is undergoing a sector-wide transformation thanks to advances in computing and networking technologies. Healthcare is changing from reactive and hospital-centered to preventive and personalized, from disease-focused to well-being-centered. In essence, healthcare systems, as well as fundamental medical research, are becoming smarter. We anticipate significant improvements in areas ranging from molecular genomics and proteomics to decision support for healthcare professionals through big data analytics, to support for behavior change through technology-enabled self-management and social and motivational support. Furthermore, with smart technologies, healthcare delivery can also be made more efficient, higher quality, and lower cost. For this Special Issue, we received a total of 45 submissions and accepted 19 outstanding papers that span several interesting topics in smart healthcare, including public health, health information technology (Health IT), and smart medicine.

    Computer-Assisted Interactive Documentary and Performance Arts in Illimitable Space

    The major component of the research described in this thesis is 3D computer graphics, specifically realistic physics-based softbody simulation and haptic responsive environments. Minor components include advanced human-computer interaction environments, non-linear documentary storytelling, and theatre performance. The journey of this research has been unusual because it requires a researcher with solid knowledge and background in multiple disciplines, who also has to be creative and sensitive in order to combine the possible areas into a new research direction. [...] It focuses on advanced computer graphics and emerges from experimental cinematic works and theatrical artistic practices. Several development pieces and installations were completed to prove, evaluate, and make convincing the described concepts. [...] To summarize, the resulting work involves not only artistic creativity, but also solving and combining technological hurdles in motion tracking, pattern recognition, force feedback control, etc., with the available documentary footage on film, video, or images, and text, via a variety of devices [....] and programming and installing all the needed interfaces such that it all works in real time. Thus, the contribution to the advancement of knowledge lies in solving these interfacing problems and the real-time aspects of the interaction, which have uses in the film industry, the fashion industry, new-age interactive theatre, computer games, and web-based technologies and services for entertainment and education. It also includes building on this experience to integrate Kinect- and haptic-based interaction, artistic scenery rendering, and other forms of control. This research connects seemingly disjoint fields, such as computer graphics, documentary film, interactive media, and theatre performance. Comment: PhD thesis copy; 272 pages, 83 figures, 6 algorithms.
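    To give a flavour of the physics-based softbody simulation referenced above, the sketch below advances a mass-spring cloth patch with position Verlet integration and a few constraint-relaxation passes. It is a generic illustration under assumed parameters, not the simulation used in the installations.

```python
import numpy as np

def step_cloth(pos, prev, rest, springs, dt=1.0 / 60.0, gravity=(0.0, -9.8, 0.0), iters=5):
    """One position-Verlet step of a mass-spring cloth patch.

    pos, prev : (n, 3) current and previous particle positions
    rest      : (m,)   rest length of each spring
    springs   : (m, 2) particle indices connected by each spring
    """
    g = np.asarray(gravity)
    new = pos + (pos - prev) + g * dt * dt            # Verlet integration with gravity
    a, b = springs[:, 0], springs[:, 1]
    for _ in range(iters):                            # relax distance constraints
        d = new[b] - new[a]
        length = np.linalg.norm(d, axis=1, keepdims=True) + 1e-9
        corr = 0.5 * (length - rest[:, None]) * d / length
        np.add.at(new, a, corr)                       # move endpoints toward rest length
        np.add.at(new, b, -corr)
    return new, pos                                   # new state and new "previous" state

# toy patch: two particles joined by one spring stretched past its rest length
pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
prev = pos.copy()
pos, prev = step_cloth(pos, prev, rest=np.array([1.0]), springs=np.array([[0, 1]]))
print(pos)
```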

    Intelligent Sensors for Human Motion Analysis

    The book, "Intelligent Sensors for Human Motion Analysis," contains 17 articles published in the Special Issue of the Sensors journal. These articles deal with many aspects related to the analysis of human movement. New techniques and methods for pose estimation, gait recognition, and fall detection have been proposed and verified. Some of them will trigger further research, and some may become the backbone of commercial systems