
    Using Mobile Augmented Reality to Improve Attention in Adults with Autism Spectrum Disorder

    Adults on the autism spectrum commonly experience impairments in attention management that hinder many of the other cognitive functions needed to appreciate relationships between sensory stimuli. As autistic individuals generally identify as visual learners, the effective use of visual aids can be critical in developing life skills. In this brief paper, we propose a Mobile Augmented Reality for Attention (MARA) application that addresses the lack of supportive, simple, and cost-effective solutions for autistic adults to train attention management skills. We present the proposed design, configuration, and implementation. Lastly, we discuss future directions for research.

    Evaluating Context-Aware Applications Accessed Through Wearable Devices as Assistive Technology for Students with Disabilities

    The purpose of these two single-subject design studies was to evaluate the use of wearable and context-aware technologies for college students with intellectual disability and autism as tools to increase independence and vocational skills. There is a compelling need for tools and strategies that facilitate independence and self-sufficiency and that address poor outcomes in adulthood for students with disabilities. Technology is considered a great equalizer for people with disabilities. The proliferation of new technologies allows access to real-time, contextually based information as a means to compensate for limitations in cognitive functioning and to decrease the complexity of the prerequisite skills required by earlier technologies. Six students participated in the two single-subject design studies: three students participated in Study I and three different students participated in Study II. The results are discussed in the context of applying new technology applications to help individuals with intellectual disability and autism self-manage technological supports to learn new skills, set reminders, and enhance independence. During Study I, students were successfully taught to use a wearable smartglasses device, which delivered digital auditory and visual information, to complete three novel vocational tasks. The results indicated that all students learned all vocational tasks using the wearable device. Students also continued to use the device beyond the initial training phase to self-direct their learning and self-manage prompts for task completion as needed. During Study II, students were successfully taught to use a wearable smartwatch device to enter novel appointments for the coming week and to complete the tasks associated with each appointment. The results indicated that all students were able to self-operate the wearable device to enter appointments, attend all appointments on time, and complete all associated tasks.

    Augmented Reality Action Assistance and Learning for Cognitively Impaired People. A Systematic Literature Review

    Blattgerste J, Renner P, Pfeiffer T. Augmented Reality Action Assistance and Learning for Cognitively Impaired People. A Systematic Literature Review. In: The 12th PErvasive Technologies Related to Assistive Environments Conference (PETRA '19). New York, NY, USA: ACM; 2019.
    Augmented reality (AR) is a promising tool for many situations in which assistance is needed, as it allows instructions and feedback to be contextualized. While research and development in this area have been driven primarily by industry, AR could also have a huge impact on those who need assistance the most: cognitively impaired people of all ages. In recent years, some primary research on applying AR for action assistance and learning for this target group has been conducted. However, the research field is sparsely covered and contributions are hard to categorize; an overview of the current state of research is missing. We contribute to filling this gap by providing a systematic literature review covering 52 publications. We describe the often rather technical publications at an abstract level and quantitatively assess their usage purpose, the targeted age group, and the type of AR device used. Additionally, we provide insights into the current challenges and opportunities of AR learning and action assistance for people with cognitive impairments, and we discuss trends in the research field, including potential future work for researchers to focus on.

    Assistive Technology for Children with Autism Spectrum Disorder Who Experience Stress and Anxiety

    With the development of current technology and the influence of Industry 4.0 through ICTs, the IoT, smart systems and products, and many other advances, Assistive Technology (AT) is an important and integral part of the daily life of many people with disabilities. Autism Spectrum Disorder (ASD) is a special category of disorder that can benefit greatly from its use. The purpose of this research is to collect data on Assistive Technology aimed at the detection, prevention, and reduction of anxiety and stress, a characteristic that has been shown to exist and to be expressed in various ways in people with ASD. The introduction analyses basic definitions regarding the neurobiology of stress and ASD. The main part relates AT, stress, and anxiety to ASD, and describes and documents AT devices with regard to their use for anxiety and stress in children and adolescents with ASD. The assistive equipment and devices are divided into two main categories: 1) low-tech and 2) mid-to-high tech. The results reveal a significant research gap in the use of AT to combat stress and anxiety, as well as the difficulty of integrating many promising options (especially in the mid-to-high-tech domain) into the daily life of people with ASD as easy and economical solutions.

    Smart and Secure Augmented Reality for Assisted Living

    Augmented reality (AR) is one of the biggest technology trends; it enables people to see the real-life surrounding environment with a layer of virtual information overlaid on it. Assistive devices use this combination of information to help people better understand the environment and consequently be more efficient. In particular, AR has been extremely useful in the area of Ambient Assisted Living (AAL). AR-based AAL solutions are designed to support people in maintaining their autonomy and to compensate for slight physical and mental restrictions by instructing them on everyday tasks. Detecting visual attention for assistive purposes is a significant challenge, since in dynamic, cluttered environments objects constantly overlap and partial object occlusion is frequent. Current solutions use egocentric object recognition techniques; however, their lack of accuracy affects the system's ability to predict users' needs and consequently to provide them with the proper support. Another issue is the manner in which sensitive data is treated. This highly private information is crucial for improving the quality of healthcare services, yet current blockchain approaches are used only as permission management systems while the data is still stored locally, leaving a potential risk of security breaches. Privacy within the blockchain itself is also a concern: as most investigations tackle privacy issues with off-chain approaches, effective solutions for on-chain data privacy are lacking. Finally, blockchain size has been shown to be a limiting factor even for chains that store simple transactional data, let alone the massive blocks that would be required for storing medical imaging studies. To tackle these major issues, this research proposes a framework for a smarter and more secure AR-based solution for AAL. First, a combination of head-worn eye-tracker cameras and egocentric video is designed to improve the accuracy of visual-attention object recognition in free-living settings; a heuristic function generates a probability estimate of visual attention over the objects within an egocentric video. Second, a novel methodology for storing large, sensitive AR-based AAL data in a decentralized fashion is introduced, leveraging the IPFS (InterPlanetary File System) protocol to address the blockchain's storage limitations. In parallel, a solution on the Secret Network blockchain is developed to tackle the existing lack of privacy in smart contracts, providing data privacy at both the transactional and computational levels. In addition, a new off-chain component encapsulating a governing body for permission management is included to address the loss or eventual theft of private keys. The research findings show that the visual-attention object detection approach is applicable to cluttered environments and outperforms current methods. This study also produced an egocentric indoor dataset annotated with human fixations during natural exploration of a cluttered environment; compared to previous work, this dataset is more realistic because it was recorded in real settings with variation in object overlap and object size. With respect to the novel decentralized storage methodology, results indicate that sensitive data can be stored and queried efficiently using the Secret Network blockchain. The proposed approach achieves both computational and transactional privacy at significantly lower cost and mitigates the risk of permanently losing access to a patient's on-chain data records. The proposed framework can be applied as an assistive technology in a wide range of sectors requiring AR-based solutions with high-precision visual-attention object detection, efficient data access, high-integrity data storage, and full data privacy and security.
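The abstract does not give the heuristic function itself. Purely as an illustration of the general idea, the sketch below scores each detected object by how close gaze fixations fall to its bounding box and normalises the scores into a probability estimate over objects; the function name, the Gaussian fall-off, and the `sigma` parameter are assumptions for this sketch, not the thesis's actual formulation.

```python
import math

def attention_scores(fixations, detections, sigma=50.0):
    """Estimate a probability distribution of visual attention over detected objects.

    fixations:  list of (x, y) gaze points for one egocentric video segment.
    detections: dict mapping object label -> (x_min, y_min, x_max, y_max) bounding box.
    sigma:      assumed spread (in pixels) of the Gaussian fall-off around each fixation.
    """
    raw = {label: 0.0 for label in detections}
    for gx, gy in fixations:
        for label, (x0, y0, x1, y1) in detections.items():
            # Distance from the gaze point to the nearest point of the bounding box.
            dx = max(x0 - gx, 0.0, gx - x1)
            dy = max(y0 - gy, 0.0, gy - y1)
            dist = math.hypot(dx, dy)
            # Gaussian fall-off: fixations on or near a box contribute the most.
            raw[label] += math.exp(-(dist ** 2) / (2.0 * sigma ** 2))
    total = sum(raw.values()) or 1.0
    return {label: score / total for label, score in raw.items()}

# Illustrative usage with made-up coordinates:
fixations = [(320, 240), (330, 250), (500, 100)]
detections = {"cup": (300, 220, 360, 280), "kettle": (480, 80, 560, 160)}
print(attention_scores(fixations, detections))
```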

    Efficient Deep Learning-Driven Systems for Real-Time Video Expression Recognition

    The ability to detect, recognize, and interpret facial expressions is an important skill for humans to have, given the abundance of social interactions we face daily, yet it is also something most of us take for granted. As social animals, understanding expressions not only enables us to gauge current emotional states but also allows us to recognize conversational cues such as level of interest, speaking turns, and level of understanding. A core challenge faced by individuals with autism spectrum disorder is an impaired ability to infer other people's emotions from their facial expressions, which can cause problems in creating and sustaining meaningful, positive relationships, leading to difficulties integrating into society and a higher prevalence of depression and loneliness. With the significant recent advances in machine learning, one potential solution is to leverage assistive technology to help these individuals better recognize facial expressions. Such a technology requires reasonable accuracy in order to provide users with correct information, but it must also operate under a real-time constraint to be relevant and seamless in a social setting. Due to the dynamic and transient nature of human facial expressions, a challenge during classification is the use of temporal information to provide additional context to a scene; many applications require the real-time aspect to be preserved, so temporal information must be leveraged efficiently. Consequently, we explore the dynamic and transient nature of facial expressions through a novel deep time-windowed convolutional neural network design called TimeConvNets, which is capable of encoding spatiotemporal information efficiently. We compare against other methods capable of leveraging temporal information and show that TimeConvNets can provide a real-time solution that is accurate while being architecturally and computationally less complex. Even with the strong performance the TimeConvNet architecture offers, additional architectural modifications tailored specifically to human facial expression classification can likely yield further gains. We therefore explore a human-machine collaborative design strategy to further reduce and optimize these facial expression classifiers. EmotionNet Nano was created and tailored specifically for expression classification on edge devices, leveraging human experience combined with the meticulousness and speed of machines. Experimental results on the CK+ facial expression benchmark dataset demonstrate that the proposed EmotionNet Nano networks achieve accuracy comparable to the state of the art while requiring significantly fewer parameters, and they are capable of performing inference in real time, making them suitable for deployment on a variety of platforms including mobile phones. Training these models requires a high-quality expression dataset, specifically one that retains temporal information between consecutive image frames. We introduce FaceParty as a solution: a more difficult dataset created by the modified aggregation of six public video facial expression datasets, with details provided for replication. We hope that models trained on FaceParty can achieve increased generalization to faces in the wild due to the nature of the dataset.
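The abstract does not describe the TimeConvNet architecture in detail. The sketch below only illustrates one common way to encode a short temporal window with ordinary 2D convolutions, by stacking consecutive grayscale frames along the channel axis; the layer sizes, the 48x48 input resolution, and the `window` parameter are illustrative assumptions, not the published design.

```python
import torch
import torch.nn as nn

class TimeWindowedCNN(nn.Module):
    """Toy expression classifier over a short window of consecutive frames.

    A window of `window` grayscale frames (assumed 48x48) is stacked along the
    channel dimension, so temporal context is handled by plain 2D convolutions.
    """
    def __init__(self, window: int = 5, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(window, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 48x48 -> 24x24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 24x24 -> 12x12
        )
        self.classifier = nn.Linear(64 * 12 * 12, num_classes)

    def forward(self, x):      # x: (batch, window, 48, 48)
        return self.classifier(self.features(x).flatten(1))

# Illustrative usage: a batch of 8 windows of 5 consecutive 48x48 frames.
logits = TimeWindowedCNN()(torch.randn(8, 5, 48, 48))
print(logits.shape)            # torch.Size([8, 7])
```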

    Inclusive Augmented and Virtual Reality: A Research Agenda

    Augmented and virtual reality experiences present significant barriers for disabled people, making it challenging to fully engage with immersive platforms. Whilst researchers have started to explore potential solutions addressing these accessibility issues, we currently lack a comprehensive understanding of the research areas requiring further investigation to support the development of inclusive AR/VR systems. To address current gaps in knowledge, we led a series of multidisciplinary sandpits with relevant stakeholders (i.e., academic researchers, industry specialists, people with lived experience of disability, assistive technologists, and representatives from disability organisations, charities, and special needs educational institutions) to collaboratively explore research challenges, opportunities, and solutions. Based on insights shared by participants, we present a research agenda identifying key areas where further work is required in relation to specific forms of disability (i.e., across the spectrum of physical, visual, cognitive, and hearing impairments), including wider considerations associated with the development of more accessible immersive platforms.

    Real-time 3D Graphic Augmentation of Therapeutic Music Sessions for People on the Autism Spectrum

    This thesis looks at the requirements analysis, design, development, and evaluation of an application, CymaSense, as a means of improving the communicative behaviours of autistic participants through therapeutic music sessions via the addition of a visual modality. Autism spectrum condition (ASC) is a lifelong neurodevelopmental disorder that can affect people in a number of ways, commonly through difficulties in communication. Interactive audio-visual feedback can be an effective way to enhance music therapy for people on the autism spectrum; a multi-sensory approach encourages musical engagement in clients, increasing levels of communication and social interaction beyond the sessions. Cymatics describes the visible geometry that results from vibration passing through a variety of media, typically salt on a brass plate or water. The research reported in this thesis focuses on how an interactive audio-visual application based on Cymatics might improve communication for people on the autism spectrum. A requirements analysis was conducted through interviews with four therapeutic music practitioners, aimed at identifying working practices with autistic clients. CymaSense was designed for autistic users, exploring effective audio-visual feedback and developing meaningful cross-modal mappings for practitioner-client musical communication. The CymaSense mappings were tested by 17 high-functioning autistic participants and by 30 neurotypical participants. The application was then trialled as a multimodal intervention with eight autistic participants over a 12-week series of therapeutic music sessions. The study captured the experiences of the users and identified the behavioural changes that resulted, including information on how CymaSense could be developed further. This dissertation contributes evidence that multimodal applications can be used within therapeutic music sessions as a tool to increase communicative behaviours in autistic participants.
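The thesis abstract does not spell out its audio-to-visual mappings. As a rough illustration of one plausible cross-modal mapping, the sketch below maps a note's pitch to the number of lobes in a cymatics-like rose figure and its loudness to the figure's size; the function name, parameter ranges, and mapping choices are assumptions for this sketch, not CymaSense's actual mappings.

```python
import math

def cymatic_pattern(pitch_hz: float, loudness: float, points: int = 360):
    """Map a note's pitch and loudness to a simple cymatics-like rose curve.

    pitch_hz:  fundamental frequency of the note (assumed 55-880 Hz range).
    loudness:  normalised amplitude in [0, 1].
    Returns a list of (x, y) vertices describing one closed visual figure.
    """
    # Higher pitches get more lobes; louder notes get a larger radius.
    lobes = 3 + int(9 * (min(max(pitch_hz, 55.0), 880.0) - 55.0) / (880.0 - 55.0))
    radius = 0.2 + 0.8 * min(max(loudness, 0.0), 1.0)
    vertices = []
    for i in range(points):
        theta = 2.0 * math.pi * i / points
        r = radius * abs(math.cos(lobes * theta))   # rose-curve silhouette
        vertices.append((r * math.cos(theta), r * math.sin(theta)))
    return vertices

# Illustrative usage: an A4 note (440 Hz) played moderately loud.
print(len(cymatic_pattern(440.0, 0.6)))  # 360 vertices for one figure
```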

    Multimodal Data Analysis of Dyadic Interactions for an Automated Feedback System Supporting Parent Implementation of Pivotal Response Treatment

    Parents fulfill a pivotal role in the early childhood development of social and communication skills. In children with autism, the development of these skills can be delayed. Applied behavior analysis (ABA) techniques have been created to aid in skill acquisition; among these, pivotal response treatment (PRT) has been empirically shown to foster improvements. Research into PRT implementation has also shown that parents can be trained to be effective interventionists for their children. The current difficulty in PRT training is how to disseminate training to parents who need it, and how to support and motivate practitioners after training. Evaluation of the parents' fidelity of implementation is often undertaken using video probes that depict the dyadic interaction occurring between the parent and the child during PRT sessions. These videos are time-consuming for clinicians to process and often result in only minimal feedback for the parents. Current trends in technology could be utilized to alleviate the manual cost of extracting data from the videos, affording greater opportunities for clinician-created feedback as well as automated assessments. The naturalistic context of the video probes, along with the dependence on ubiquitous recording devices, creates a difficult scenario for classification tasks: the domain of the PRT video probes can be expected to have high levels of both aleatory and epistemic uncertainty. Addressing these challenges requires examination of the multimodal data along with implementation and evaluation of classification algorithms, which is explored through the use of a new dataset of PRT videos. The relationship between the parent and the clinician is important; the clinician can provide support and help build self-efficacy in addition to providing knowledge and modeling of treatment procedures. Facilitating this relationship along with automated feedback not only provides the opportunity to present expert feedback to the parent, but also allows the clinician to aid in personalizing the classification models. By utilizing a human-in-the-loop framework, clinicians can help address the uncertainty in the classification models by providing additional labeled samples. This allows the system to improve classification and provides a person-centered approach to extracting multimodal data from PRT video probes.
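The dissertation abstract describes the human-in-the-loop idea only at a high level. The sketch below shows one standard realisation, uncertainty sampling, in which the clinician labels the video segments the model is least confident about; the classifier, the feature representation of the segments, and the `clinician_label` callback are placeholders assumed for illustration, not the author's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def human_in_the_loop(labeled_X, labeled_y, unlabeled_X, clinician_label,
                      rounds=3, per_round=5):
    """Uncertainty sampling: the clinician labels the segments the model is least sure about."""
    model = LogisticRegression(max_iter=1000)
    X, y = list(labeled_X), list(labeled_y)
    pool = list(range(len(unlabeled_X)))            # indices of still-unlabeled video segments
    for _ in range(rounds):
        model.fit(np.array(X), np.array(y))
        proba = model.predict_proba(np.array([unlabeled_X[i] for i in pool]))
        # Pick the least-confident segments (smallest top-class probability).
        picked = {pool[r] for r in np.argsort(proba.max(axis=1))[:per_round]}
        for idx in picked:
            X.append(unlabeled_X[idx])
            y.append(clinician_label(idx))          # clinician supplies the missing label
        pool = [i for i in pool if i not in picked]
    model.fit(np.array(X), np.array(y))
    return model

# Illustrative usage with random 10-dimensional features standing in for video segments:
rng = np.random.default_rng(0)
model = human_in_the_loop(rng.normal(size=(20, 10)), rng.integers(0, 2, 20),
                          rng.normal(size=(100, 10)), clinician_label=lambda i: int(i % 2))
```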