
    Real Time Hand Movement Trajectory Tracking for Enhancing Dementia Screening in Ageing Deaf Signers of British Sign Language

    Real-time hand movement trajectory tracking based on machine learning approaches may assist the early identification of dementia in ageing Deaf individuals who are users of British Sign Language (BSL), since there are few clinicians with appropriate communication skills and a shortage of sign language interpreters. Unlike other computer vision systems used in dementia stage assessment, such as RGB-D video captured with a depth camera, activities of daily living (ADL) monitored through information and communication technology (ICT) facilities, or X-ray, computed tomography (CT) and magnetic resonance imaging (MRI) images fed to machine learning algorithms, the system developed here analyses the sign language space envelope (sign trajectories/depth/speed) and facial expression of Deaf individuals using ordinary 2D videos. In this work, we are interested in providing a more accurate segmentation of objects of interest in relation to the background, so that accurate real-time hand trajectories (trajectory path and speed) can be obtained. The paper presents and evaluates two types of hand movement trajectory models. In the first model, the hand sign trajectory is tracked by skin colour segmentation. In the second model, the hand sign trajectory is tracked using Part Affinity Fields based on the OpenPose skeleton model [1, 2]. Comparison of results between the two models demonstrates that the second model provides improved tracking accuracy and robustness. The pattern differences in facial and trajectory motion data obtained from the presented models will be beneficial not only for dementia screening of Deaf individuals, but also for the assessment of other acquired neurological impairments associated with motor changes, for example stroke and Parkinson's disease.
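
    The first model's idea, tracking a hand by skin-colour segmentation and following the centroid of the largest skin-coloured blob, can be sketched in a few lines of OpenCV. The sketch below is illustrative only: the HSV thresholds, the video path and the "largest blob is the hand" heuristic are assumptions, not the values or logic reported in the paper.

```python
# Minimal sketch: skin-colour segmentation + centroid tracking per frame.
import cv2
import numpy as np

# Illustrative HSV skin-colour bounds; a real system would calibrate per subject and lighting.
LOWER_SKIN = np.array([0, 40, 60], dtype=np.uint8)
UPPER_SKIN = np.array([25, 180, 255], dtype=np.uint8)

def track_hand_trajectory(video_path: str) -> list[tuple[int, int]]:
    """Return the (x, y) centroid of the largest skin-coloured blob in each frame."""
    trajectory = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER_SKIN, UPPER_SKIN)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        hand = max(contours, key=cv2.contourArea)  # assume the largest blob is the signing hand
        m = cv2.moments(hand)
        if m["m00"] > 0:
            trajectory.append((int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])))
    cap.release()
    return trajectory
```

    Successive centroids approximate the trajectory path, and frame-to-frame displacement gives an estimate of signing speed; the second (OpenPose-based) model replaces this colour heuristic with learned keypoints.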

    Machine Learning for Enhancing Dementia Screening in Ageing Deaf Signers of British Sign Language

    Real-time hand movement trajectory tracking based on machine learning approaches may assist the early identification of dementia in ageing Deaf individuals who are users of British Sign Language (BSL), since there are few clinicians with appropriate communication skills and a shortage of sign language interpreters. In this paper, we introduce an automatic dementia screening system for ageing Deaf signers of BSL, using a Convolutional Neural Network (CNN) to analyse the sign space envelope and facial expression of BSL signers recorded in ordinary 2D videos from the BSL corpus. Our approach introduces a sub-network (the multi-modal feature extractor) comprising an accurate real-time hand trajectory tracking model and a real-time facial landmark motion analysis model. The experiments show the effectiveness of our deep learning based approach on sign space tracking, facial motion tracking and early-stage dementia assessment tasks.
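
    The overall shape of such a multi-modal feature extractor, one branch per modality fused before a screening head, can be sketched as below. This is a minimal PyTorch sketch of the general pattern: the layer sizes, input dimensions and two-class output are illustrative assumptions, not the architecture reported in the paper.

```python
# Sketch of a two-branch (trajectory + facial motion) network fused for screening.
import torch
import torch.nn as nn

class MultiModalScreeningNet(nn.Module):
    def __init__(self, traj_dim: int = 128, face_dim: int = 68 * 2, hidden: int = 64):
        super().__init__()
        # Branch encoding hand-trajectory (sign space envelope) features.
        self.traj_branch = nn.Sequential(nn.Linear(traj_dim, hidden), nn.ReLU())
        # Branch encoding facial landmark motion features.
        self.face_branch = nn.Sequential(nn.Linear(face_dim, hidden), nn.ReLU())
        # Fused classifier producing an illustrative two-class screening score.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2)
        )

    def forward(self, traj_feats: torch.Tensor, face_feats: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.traj_branch(traj_feats), self.face_branch(face_feats)], dim=1)
        return self.classifier(fused)

# Example forward pass with a batch of 4 illustrative feature vectors.
model = MultiModalScreeningNet()
logits = model(torch.randn(4, 128), torch.randn(4, 68 * 2))
```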

    A Multi-modal Machine Learning Approach and Toolkit to Automate Recognition of Early Stages of Dementia among British Sign Language Users

    The ageing population trend is correlated with an increased prevalence of acquired cognitive impairments such as dementia. Although there is no cure for dementia, a timely diagnosis helps in obtaining necessary support and appropriate medication. Researchers are therefore working urgently to develop effective technological tools that can help doctors undertake early identification of cognitive disorders. In particular, screening for dementia in ageing Deaf signers of British Sign Language (BSL) poses additional challenges, as the diagnostic process is bound up with conditions such as the quality and availability of interpreters, as well as appropriate questionnaires and cognitive tests. On the other hand, deep learning based approaches for image and video analysis and understanding are promising, particularly the adoption of Convolutional Neural Networks (CNNs), which require large amounts of training data. In this paper, however, we demonstrate novelty in the following ways: a) a multi-modal machine learning based toolkit for automatic recognition of early stages of dementia among BSL users, in which features from several parts of the body contributing to the sign envelope, e.g., hand-arm movements and facial expressions, are combined; b) universality, in that the technique is language independent and can therefore be applied to users of any sign language; c) given the trade-off between the complexity and accuracy of machine learning (ML) prediction models, as well as the limited amount of training and testing data available, we show that our approach is not over-fitted and has the potential to scale up.
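
    One standard way to sanity-check the over-fitting claim under limited data is to compare training and validation scores across cross-validation folds. The sketch below illustrates that general procedure with a stand-in linear classifier and synthetic features; it is not the paper's actual model, data or evaluation protocol.

```python
# Cross-validation sketch: a large train/validation gap would indicate over-fitting.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

# Illustrative stand-in features: rows are signers, columns are combined
# hand-trajectory and facial-motion descriptors; labels mark screening outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))
y = rng.integers(0, 2, size=60)

scores = cross_validate(
    LogisticRegression(max_iter=1000), X, y, cv=5, return_train_score=True
)
print("mean train accuracy:     ", scores["train_score"].mean())
print("mean validation accuracy:", scores["test_score"].mean())
```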

    State of the art of audio- and video based solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalises on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large range of sensing, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach.

    This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethics-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential offered by the silver economy is overviewed.

    Accessibility of Health Data Representations for Older Adults: Challenges and Opportunities for Design

    Health data from consumer off-the-shelf wearable devices are often conveyed to users through visual data representations and analyses. However, these are not always accessible to people with disabilities or older people due to low vision, cognitive impairments or literacy issues. Because of trade-offs between aesthetic predominance and information overload, real-time user feedback may not be conveyed easily from sensor devices through visual cues such as graphs and text. These difficulties may hinder critical data understanding. Additional auditory and tactile feedback can provide immediate and accessible cues from these wearable devices, but it is first necessary to understand the limitations of existing data representations. To avoid higher cognitive and visual overload, auditory and haptic cues can be designed to complement, replace or reinforce visual cues. In this paper, we outline the challenges in existing data representation and the evidence needed to enhance the accessibility of health information from personal sensing devices used to monitor health parameters such as blood pressure, sleep, activity, heart rate and more. By creating innovative and inclusive user feedback, users will be more likely to engage and interact with new devices and their own data.

    A Survey of Assistive Technology (AT) Knowledge and Experiences of Healthcare Professionals in the UK and France: Challenges and Opportunities for Workforce Development

    Background: Assistive Technologies (AT) in healthcare can increase independence and quality of life for users. Concurrently, new AT devices offer opportunities for individualised care solutions. Nonetheless, AT remains under-utilised and is poorly integrated in practice by healthcare professionals (HCPs). Although occupational therapists (OTs), physiotherapists and speech and language therapists (SLTs) consider that AT solutions can offer problem-solving approaches to personalised care, they have a more limited understanding of how to apply AT in their practice. In this paper, we report findings of a survey on AT knowledge and experiences of HCPs in the UK and France. Training needs, which were also explored in the survey, are presented in a separate paper on the development of online training for the ADAPT project. Method: A survey of 37 closed/open questions was developed in English and French by a team of healthcare researchers. Content was informed by published surveys and studies. Email invitations were circulated to contacts in Health Trusts in the UK and France ADAPT regions, and the survey was hosted on an online platform. Knowledge questions addressed AT understanding and views of its impact on users' lives. Experience questions focussed on current practices, prescription, follow-up, abandonment and practice standards. 429 HCPs completed the survey (UK = 167; FR = 262) between June and November 2018. Key results: Participants were mainly female (UK 89.2%; FR 82.8%) and qualified for 10+ years (UK 66.5%; FR 62.2%). A key group in both countries were OTs (UK 34.1%; FR 46.6%), with more physiotherapists and SLTs in the UK (16.8% and 16.8% vs. FR 6.5% and 2.3%), and more nurses in France (22.1% vs. UK 10.8%). More HCPs were qualified to degree level in France (75.2%; UK 48.5%, p < 0.001). In terms of knowledge, most HCPs agreed that AT helps people complete otherwise difficult or impossible tasks (UK 86.2%; FR 94.3%) and that successful AT adoption always depends on support from carers, family and professionals (UK 52.7%; FR 66.2%). There were some notable differences between countries that require further exploration. For example, more French HCPs thought that AT is provided by trial and error (84.7%, UK 45.5%, p < 0.001), while more UK HCPs believed that AT promotes autonomous living (93.4%; FR 42.8%, p < 0.001). Also, more French HCPs considered that AT refers exclusively to technologically advanced electronic devices (71.8%, UK 28.8%, p < 0.001). In both countries, the top AT prescribers were OTs, physiotherapists and SLTs. Most respondents had little or no knowledge of comparing/choosing AT (UK 86.8%; FR 76.7%) and stated they would benefit from interdisciplinary clinical standards (UK 80.8%; FR 77.1%). Around a third of HCPs did not know if AT users had access to adequate resources/support (UK 34.1%; FR 27.5%), and a similar proportion rated themselves as capable of monitoring continued effective use of AT (UK 38.9%; FR 34.8%). Conclusion: Knowledge and application of AT varied between the two countries due to differences in healthcare provision and support mechanisms. Survey findings suggest that HCPs recognised the value of AT for users' improved care, but had low confidence in their ability to choose appropriate AT solutions and monitor continued use, and would welcome interdisciplinary AT clinical standards.

    Training Needs and Development of Online AT Training for Healthcare Professionals in UK and France

    Background: Assistive Technology (AT) solutions for people with disabilities have become part of mainstream care provision. Despite the advantages AT offers, abandonment and non-compliance are challenges for healthcare professionals (HCPs) introducing this technology to clients. Studies of abandonment reveal that a third of all devices provided to service users end up stored unused. A key need is training to make informed decisions about AT tailored to individual needs and circumstances. In an online survey undertaken by the ADAPT project, HCPs identified AT training needs and barriers. Currently, a programme is being developed aimed at introducing AT concepts and enhancing practice among a wide range of HCPs. Method: Survey questions explored gaps, availability, qualifications and barriers relating to AT training in England and France. A series of consultation meetings with ADAPT partners took place. An advisory group consisting of longstanding AT users, their formal/informal carers and HCPs (occupational therapist, speech and language therapist, psychologist and biomedical engineer) contributed to the discussions on survey findings, the development and evaluation of AT training for HCPs, key content areas and means of delivery. Key results: Most HCPs had no AT-specific qualifications (UK 94.6%; FR 81.3%) nor in-service AT training (UK 65.1%; FR 66.4%). They either did not know of AT courses (UK 63.3%) or knew that none existed (FR 72.5%). Barriers to AT training were mainly local training (UK 62.7%, FR 50%) and funding (UK 62.7%, FR 55.7%). Some training priorities were clearer for French HCPs: overall knowledge of AT devices (82.1%, UK 45.8%), customization of AT (65.3%, UK 30.1%), assessing the patient holistically (53.4%, UK 25.3%), and educating patients/carers (56.5%, UK 28.3%) (p < 0.001). Variances may be due to differing country-specific HCP education approaches. A third of both groups also highlighted abandonment, client follow-up, powered wheelchair training and prescribing AT. To bridge gaps in knowledge and address the identified training needs of HCPs, the online interactive training programme starts by introducing the foundations of AT, including definitions, types/uses of AT, legislation/policies and AT in practice. More specialist units build and expand on specific areas, e.g. AT for mobility, communication, assessment and evidence-based practice. The biopsychosocial model of health and the World Health Organisation's (WHO) International Classification of Functioning, Disability and Health (ICF) framework underpin the development of content. The ICF shifts focus from disability to health and functioning, in line with a social model of rehabilitation. E-learning comprises existing videos, AT textbook material and bespoke animated presentations. Self-assessment and evaluation of training are embedded, and learners receive a certificate of completion. Training was piloted with a group of HCP trainees and post-registration HCPs, who commented on the relevance of AT content, clarity, accessibility of presentation, and usefulness. Users found the training very useful, especially the legislation/policies and AT literature. Conclusion: Overall, survey results suggest that both UK and French HCPs' training on AT solutions is limited and highly variable. There is a need for cross-channel AT professional competencies, availability of work-based training and funding support. The development of online, interactive training aims to increase professional confidence and competence in this area, as well as the evidence base for AT.