122 research outputs found

    Gesture recognition intermediary robot for abnormality detection in human activities


    A framework for anomaly detection in activities of daily living using an assistive robot

    This paper presents an overview of ongoing research that incorporates an assistive robotic platform to improve the detection of anomalies in the daily living activities of older adults. The approach involves learning a person's daily behavioural routine and detecting deviations from that known routine which may constitute an abnormality. Current approaches suffer from a high rate of false alarms and therefore lead to dissatisfaction among clients and carers; this may be connected to behavioural changes in human activities due to seasonal or other physical factors. To address this, a framework for anomaly detection is proposed which incorporates an assistive robotic platform as an intermediary. Instances classified as anomalous are first confirmed with the monitored individual through the intermediary. The proposed framework has the potential to mitigate the false alarm rate generated by current approaches.
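The confirm-before-alert loop the abstract describes can be sketched in a few lines. This is a hypothetical illustration only: the function names, the hour/activity routine model, and the `min_support` threshold are all assumptions, not the paper's actual design.

```python
from collections import Counter

def learn_routine(observed_days):
    """Learn, per hour, which activities are usually seen at that time."""
    routine = {}
    for day in observed_days:
        for hour, activity in day:
            routine.setdefault(hour, Counter())[activity] += 1
    return routine

def is_anomalous(routine, hour, activity, min_support=2):
    """Flag an activity rarely or never seen at this hour before."""
    return routine.get(hour, Counter())[activity] < min_support

def handle_event(routine, hour, activity, confirm_with_user):
    """Raise an alarm only if the monitored person does not confirm."""
    if not is_anomalous(routine, hour, activity):
        return "normal"
    # Intermediary step: the robot queries the monitored individual first.
    if confirm_with_user(hour, activity):
        return "confirmed-normal"   # a routine change, not an emergency
    return "alarm"

# Example: three observed days, then an event at an unusual hour.
days = [[(8, "breakfast"), (13, "lunch")]] * 3
routine = learn_routine(days)
print(handle_event(routine, 3, "cooking", lambda h, a: False))  # alarm
```

The key point is the extra branch: an anomalous instance only becomes an alarm after the intermediary fails to obtain confirmation, which is how the framework suppresses false positives.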

    High Level Learning Using the Temporal Features of Human Demonstrated Sequential Tasks

    Modelling human-led demonstrations of high-level sequential tasks is fundamental to a number of practical inference applications, including vision-based policy learning and activity recognition. Demonstrations of these tasks are captured as videos with long durations and similar spatial contents. Learning from this data is challenging since inference cannot be conducted solely on spatial feature presence and must instead consider how spatial features play out across time. To be successful, these temporal representations must generalize to variations in the duration of activities and be able to capture relationships between events expressed across the scale of an entire video. Contemporary deep learning architectures that represent time (convolution-based and recurrent neural networks) do not address these concerns. Representations learned by these models describe temporal features in terms of fixed durations such as minutes, seconds, and frames. They are also developed sequentially and must use unreasonably large models to capture temporal features expressed at scale. Probabilistic temporal models have been successful in representing the temporal information of videos in a duration-invariant manner that is robust to scale; however, this has only been accomplished through the use of user-defined spatial features. Such abstractions make unrealistic assumptions about the content being expressed in these videos and the quality of the perception model, and they also limit the potential applications of trained models. To that end, I present D-ITR-L, a temporal wrapper that extends the spatial features extracted from a typical CNN architecture and transforms them into temporal features. D-ITR-L-derived temporal features are duration-invariant and can identify temporal relationships between events at the scale of a full video. Validation of this claim is conducted through various vision-based policy learning and action recognition settings.
    Additionally, these studies show that challenging visual domains such as human-led demonstrations of high-level sequential tasks can be effectively represented when using a D-ITR-L-based model.
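The duration-invariance idea in the abstract can be illustrated with a toy example: represent each spatial feature by the interval over which it is active, then describe the video by the relations between those intervals rather than by absolute frame counts. This is a hedged sketch only; the function names and the simplified Allen-style relation set below are illustrative assumptions, not the actual D-ITR-L architecture.

```python
def active_interval(activations):
    """(start, end) frame indices where a binary feature is active."""
    on = [i for i, a in enumerate(activations) if a]
    return (on[0], on[-1]) if on else None

def relation(iv_a, iv_b):
    """Simplified Allen-style relation between two intervals."""
    (s1, e1), (s2, e2) = iv_a, iv_b
    if e1 < s2:
        return "before"
    if (s1, e1) == (s2, e2):
        return "equals"
    if s1 <= s2 and e2 <= e1:
        return "contains"
    return "overlaps"

def temporal_signature(features):
    """Map each ordered feature pair to its interval relation."""
    sig = {}
    names = sorted(features)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            iv_a, iv_b = active_interval(features[a]), active_interval(features[b])
            if iv_a and iv_b:
                sig[(a, b)] = relation(iv_a, iv_b)
    return sig

# The same two events at different speeds yield the same signature:
short = {"grasp": [1, 1, 0, 0], "lift": [0, 0, 1, 1]}
long_ = {"grasp": [1] * 20 + [0] * 20, "lift": [0] * 20 + [1] * 20}
print(temporal_signature(short) == temporal_signature(long_))  # True
```

Because the signature depends only on how intervals relate, not on how many frames each spans, a demonstration performed quickly and the same demonstration performed slowly map to the same representation.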

    Haptics: Science, Technology, Applications

    This open access book constitutes the proceedings of the 12th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2020, held in Leiden, The Netherlands, in September 2020. The 60 papers presented in this volume were carefully reviewed and selected from 111 submissions. They were organized in topical sections on haptic science, haptic technology, and haptic applications. This year's focus is on accessibility.

    Articulated human tracking and behavioural analysis in video sequences

    Recently, there has been a dramatic growth of interest in the observation and tracking of human subjects through video sequences. Arguably, the principal impetus has come from the perceived demand for technological surveillance; however, applications in entertainment, intelligent domiciles and medicine are also increasing. This thesis examines human articulated tracking and the classification of human movement, first separately and then as a sequential process. First, this thesis considers the development and training of a 3D model of human body structure and dynamics. To process video sequences, an observation model is also designed with a multi-component likelihood based on edge, silhouette and colour. This is defined on the articulated limbs, and visible from a single camera or multiple cameras, each of which may be calibrated from that sequence. Second, for behavioural analysis, we develop a methodology in which actions and activities are described by semantic labels generated from a Movement Cluster Model (MCM). Third, a Hierarchical Partitioned Particle Filter (HPPF) was developed for human tracking that allows multi-level parameter search consistent with the body structure. This tracker relies on the articulated motion prediction provided by the MCM at pose or limb level. Fourth, tracking and movement analysis are integrated to generate a probabilistic activity description with action labels. The implemented algorithms for tracking and behavioural analysis are tested extensively and independently against ground truth on human tracking and surveillance datasets. Dynamic models are shown to predict and generate synthetic motion, while the MCM recovers both periodic and non-periodic activities, defined either on the whole body or at the limb level. Tracking results are comparable with the state of the art; however, the integrated behaviour analysis adds to the value of the approach. Funded by the Overseas Research Students Awards Scheme (ORSAS).
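The predict/weight/resample cycle at the heart of any particle-filter tracker, including the HPPF described above, can be shown with a minimal 1-D example. This is a generic sketch only: the hierarchical, per-limb partitioning and the MCM-driven motion prediction of the thesis are not reproduced here, and all parameter values are illustrative.

```python
import math
import random

def particle_filter(observations, n_particles=500, motion_noise=0.5,
                    obs_noise=1.0, seed=0):
    """Track a 1-D position from noisy observations."""
    rng = random.Random(seed)
    particles = [rng.uniform(-5, 5) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # Predict: propagate each particle with a random-walk motion model.
        particles = [p + rng.gauss(0, motion_noise) for p in particles]
        # Weight: Gaussian likelihood of the observation for each particle.
        weights = [math.exp(-((z - p) / obs_noise) ** 2 / 2) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # Resample: draw particles in proportion to their weights.
        particles = rng.choices(particles, weights=weights, k=n_particles)
        estimates.append(sum(particles) / n_particles)
    return estimates

# A target moving steadily from 0 toward 4.5; estimates should follow it.
truth = [i * 0.5 for i in range(10)]
print([round(e, 2) for e in particle_filter(truth)])
```

The hierarchical variant in the thesis applies this same cycle level by level, so that the torso pose constrains the search space for the limbs rather than sampling the full articulated state at once.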

    A survey on deep transfer learning and edge computing for mitigating the COVID-19 pandemic

    This is an accepted manuscript of an article published by Elsevier in Journal of Systems Architecture on 30/06/2020, available online: https://doi.org/10.1016/j.sysarc.2020.101830 The accepted version of the publication may differ from the final published version. Global health systems periodically face pandemics, such as the current COVID-19 disease. The rates of spread and infection of this disease are very high. A huge number of people in most countries were infected within six months of its first reported appearance, and it keeps spreading. Health systems are not fully prepared for any pandemic, so mitigation with existing capacity becomes necessary. On the other hand, the modern era largely depends on Artificial Intelligence (AI), including Data Science, and Deep Learning (DL) is one of the current flag-bearers of these techniques. It could be used to mitigate COVID-19-like pandemics in terms of stopping the spread, diagnosing the disease, drug and vaccine discovery, treatment, patient care, and much more. But DL requires large datasets as well as powerful computing resources. A shortage of reliable datasets during a running pandemic is a common phenomenon, so Deep Transfer Learning (DTL), which learns from one task and can work on another, would be effective. In addition, Edge Devices (ED) such as IoT devices, webcams, drones, intelligent medical equipment, and robots are very useful in a pandemic situation. This type of equipment makes infrastructures sophisticated and automated, which helps cope with an outbreak, but such devices are equipped with low computing resources, so applying DL on them is also challenging; there too, DTL would be effective. This article studies the potential and challenges of these issues, describes the relevant technical backgrounds, and reviews the related recent state of the art.
    This article also draws a pipeline of DTL over Edge Computing as future work to assist the mitigation of any pandemic.
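The transfer-learning recipe the survey motivates, reusing a frozen pretrained feature extractor and fitting only a small classifier head on scarce target data, can be sketched with a toy example. The extractor below is a stand-in for a real pretrained CNN, and every name and hyperparameter here is an illustrative assumption, not anything from the article.

```python
import math
import random

def pretrained_features(x):
    """Frozen feature extractor (stand-in for a pretrained CNN)."""
    return [x, x * x, math.sin(x)]

def train_head(data, lr=0.1, epochs=200, seed=0):
    """Fit a logistic-regression head on top of the frozen features."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in range(3)]
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            f = pretrained_features(x)
            p = 1 / (1 + math.exp(-(sum(wi * fi for wi, fi in zip(w, f)) + b)))
            g = p - y                      # gradient of the log-loss
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    f = pretrained_features(x)
    return 1 / (1 + math.exp(-(sum(wi * fi for wi, fi in zip(w, f)) + b)))

# Tiny target task: classify the sign of x with only six labelled points.
data = [(-2, 0), (-1, 0), (-0.5, 0), (0.5, 1), (1, 1), (2, 1)]
w, b = train_head(data)
print(round(predict(w, b, 1.5), 3), round(predict(w, b, -1.5), 3))
```

Because only the three head weights are trained, the cost of adaptation stays tiny, which is the property that makes DTL plausible both for small pandemic datasets and for resource-constrained edge devices.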