2 research outputs found

    An approach to provide dynamic, illustrative, video-based guidance within a goal-driven smart home

    The global population is ageing in a never-before-seen way, bringing with it a rising prevalence of ageing-related cognitive ailments such as dementia. This ageing is coupled with a reduction in the global support ratio, limiting the availability of formal and informal support and therefore the capacity to care for those suffering from these ailments. Assistive Smart Homes (SHs) are a promising technology for assisting with activities of daily living, supporting sufferers of cognitive ailments and increasing their independence and quality of life. Traditional SH systems have deficiencies that have been partially addressed by goal-driven SH systems, which incorporate flexible activity models, called goals. Goals may be combined to provide assistance with dynamic and variable activities. This paradigm shift, however, introduces the need to provide dynamic assistance within such SHs. This study presents a novel approach that achieves this through video-based content analysis and a mechanism for matching analysed videos to dynamic activities/goals. The mechanism behind this approach is detailed, followed by an evaluation that showed promising results.
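
    As a concrete illustration of the matching idea, the sketch below shows one plausible way annotated clips could be aligned with the steps of a dynamic goal. It is a minimal sketch under assumed data structures (VideoClip, Goal) and an assumed coverage score; none of these names or rules come from the paper itself.

```python
# A minimal, illustrative sketch (not the paper's implementation) of
# matching annotated video clips to the steps of a dynamic goal.
# VideoClip, Goal, the coverage score, and the example data are all
# hypothetical assumptions for illustration.
from dataclasses import dataclass


@dataclass
class VideoClip:
    uri: str
    actions: set[str]        # actions found by video content analysis


@dataclass
class Goal:
    name: str
    steps: list[set[str]]    # each step is a set of required actions


def match_clips(goal, library):
    """For each goal step, pick the clip whose annotations best cover it."""
    sequence = []
    for step in goal.steps:
        best, best_score = None, 0.0
        for clip in library:
            score = len(step & clip.actions) / len(step)  # coverage ratio
            if score > best_score:
                best, best_score = clip, score
        sequence.append(best)   # None if no clip matches the step at all
    return sequence


library = [
    VideoClip("clips/fill_kettle.mp4", {"fill kettle", "boil water"}),
    VideoClip("clips/add_teabag.mp4", {"add teabag", "pour water"}),
]
goal = Goal("make tea", [{"fill kettle", "boil water"},
                         {"add teabag", "pour water"}])
print([clip.uri for clip in match_clips(goal, library)])
# ['clips/fill_kettle.mp4', 'clips/add_teabag.mp4']
```

    In a real deployment the clip annotations would come from automated content analysis rather than being written by hand.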

    Automatic Summarization of Activities Depicted in Instructional Videos by Use of Speech Analysis

    Existing activity recognition-based assistive living solutions have adopted a relatively rigid approach to modelling activities. To address the deficiencies of such approaches, a goal-oriented solution has been proposed that offers a method of flexibly modelling activities. This approach does, however, have a disadvantage: the performance of goals may vary, requiring different video clips to be associated with these variations. To address this shortcoming, rich metadata is needed to facilitate the automatic sequencing and matching of appropriate video clips. This paper introduces a mechanism for automatically generating rich metadata that details the actions depicted in video files, facilitating such matching and sequencing. The mechanism was evaluated with 14 video files, producing annotations with a high degree of accuracy.
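
    The sketch below illustrates the general flavour of speech-driven annotation: given a time-stamped transcript of a video's narration (speech-to-text is assumed to have already run), simple verb-object patterns are extracted as action metadata. The verb list, phrase pattern, and output shape are assumptions for illustration, not the paper's mechanism.

```python
# A hypothetical sketch of generating action metadata from a video's
# narration. Speech recognition is assumed to have already produced a
# time-stamped transcript; ACTION_VERBS and the "<verb> the <object>"
# pattern are illustrative assumptions, not the paper's method.
import re

ACTION_VERBS = {"fill", "boil", "pour", "add", "stir", "open", "place"}


def extract_actions(transcript):
    """Turn (timestamp, utterance) pairs into action annotations."""
    annotations = []
    for start, utterance in transcript:
        # Match simple "<verb> the <object>" phrases in the narration.
        for verb, obj in re.findall(r"\b(\w+)\s+the\s+(\w+)", utterance.lower()):
            if verb in ACTION_VERBS:
                annotations.append({"time": start, "action": f"{verb} {obj}"})
    return annotations


transcript = [
    (2.0, "First, fill the kettle with water."),
    (9.5, "Now boil the water and add the teabag."),
]
print(extract_actions(transcript))
# [{'time': 2.0, 'action': 'fill kettle'},
#  {'time': 9.5, 'action': 'boil water'},
#  {'time': 9.5, 'action': 'add teabag'}]
```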
