4 research outputs found
Hardware for recognition of human activities: a review of smart home and AAL related technologies
Activity recognition (AR), viewed from the applied perspective of ambient assisted living (AAL) and smart homes (SH), has become a subject of great interest. Promising a better quality of life, AR applied in contexts such as health, security, and energy consumption can lead to solutions capable of reaching even the people most in need. This study was motivated by the fact that the development, deployment, and transfer of AR solutions to society and industry rest not only on software development but also on the hardware devices used. The current paper identifies hardware contributions to activity recognition through a scientific literature review of the Web of Science (WoS) database. This work found four dominant groups of technologies used for AR in SH and AAL (smartphones, wearables, video, and electronic components) and two emerging technologies: Wi-Fi and assistive robots. Many of these technologies overlap across research works. Through bibliometric network analysis, the present review identified gaps and potential new combinations of technologies for advancing this emerging worldwide field. The review also relates the use of these six technologies to applications in health conditions, health care, emotion recognition, occupancy, mobility, posture recognition, localization, fall detection, and generic activity recognition. The above can serve as a road map that allows readers to carry out approachable projects, deploy applications in different socioeconomic contexts, and establish networks with the community involved in this topic. This analysis shows that the activity recognition research community accepts that specific goals cannot be achieved with a single hardware technology but can be achieved with joint solutions; this paper shows how such technologies work together in this regard.
A Survey of Applications and Human Motion Recognition with Microsoft Kinect
Microsoft Kinect, a low-cost motion sensing device, enables users to interact with computers or game consoles naturally through gestures and spoken commands without any other peripheral equipment. As such, it has attracted intense interest in research and development on Kinect technology. In this paper, we present a comprehensive survey of Kinect applications and the latest research and development on motion recognition using data captured by the Kinect sensor. On the applications front, we review the applications of Kinect technology in a variety of areas, including healthcare, education and performing arts, robotics, sign language recognition, retail services, workplace safety training, as well as 3D reconstruction. On the technology front, we provide an overview of the main features of both versions of the Kinect sensor together with the depth sensing technologies used, and review the literature on human motion recognition techniques used in Kinect applications. We provide a classification of motion recognition techniques to highlight the different approaches used in human motion recognition. Furthermore, we compile a list of publicly available Kinect datasets. These datasets are valuable resources for researchers investigating better methods for human motion recognition and lower-level computer vision tasks such as segmentation, object detection, and human pose estimation.
Fall recovery subactivity recognition with RGB-D cameras
Accidental falls have been identified as a cause of mortality for elders who live alone around the globe. Following a fall, additional injury can be sustained if proper fall recovery techniques are not followed. These secondary complications can be reduced if the person has access to safe recovery procedures or is assisted, either by a person or by a robot. We propose a framework for in situ robotic assistance in post-fall recovery scenarios. In order to assist autonomously, robots need to recognize an individual's posture and subactivities (e.g., falling, rolling, moving to hands and knees, crawling, pushing up through the legs, sitting, or standing). Human body skeleton tracking through RGB-D pose estimation methods fails to identify body parts during key phases of fall recovery due to high occlusion rates in fallen and recovering postures. To address this issue, we investigated how low-level image features can be leveraged to recognize an individual's subactivities. The depth cuboid similarity feature (DCSF) approach was improved with M-partitioned histograms of depth cuboid prototypes, integration of activity progression direction, and removal of outlier spatiotemporal interest points. Our modified DCSF algorithm was evaluated on a unique RGB-D multiview dataset, achieving 87.43 ± 1.74% accuracy across all 3003 (C(15,10)) combinations of training-test groups drawn from 15 subjects over 10 trials. This result was significantly higher than that of the nearest competitor, with a faster training phase. This work could lead to more accurate in situ robotic assistance for fall recovery, saving the lives of fall victims.
Kalana Ishara Withanage, Ivan Lee, Russell Brinkworth, Shylie Mackintosh and Dominic Thewli
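The evaluation count quoted in the abstract above comes from a leave-subjects-out protocol: every way of choosing 10 training subjects from the 15 available yields one training-test split, giving the binomial coefficient C(15,10). A minimal check of that arithmetic (the variable names here are illustrative, not from the paper):

```python
import math

# Number of ways to pick 10 training subjects out of 15,
# i.e. the C(15,10) training-test combinations cited in the abstract.
n_subjects = 15
n_train = 10
n_splits = math.comb(n_subjects, n_train)
print(n_splits)  # 3003
```

This confirms the 3003 figure reported for the exhaustive cross-validation, since C(15,10) = 15! / (10! · 5!) = 3003.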
The 1991 Goddard Conference on Space Applications of Artificial Intelligence
The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The papers in these proceedings fall into the following areas: planning and scheduling, fault monitoring/diagnosis/recovery, machine vision, robotics, system development, information management, knowledge acquisition and representation, distributed systems, tools, neural networks, and miscellaneous applications.