12 research outputs found

    Exploring intrinsic and extrinsic motivations to participate in a crowdsourcing project to support blind and partially sighted students

    There have been a number of crowdsourcing projects to support people with disabilities. However, there is little exploration of what motivates people to participate in such crowdsourcing projects. In this study we investigated how different motivational factors can affect the participation of people in a crowdsourcing project to support visually disabled students. We are developing “DescribeIT”, a crowdsourcing project to support blind and partially sighted students by having sighted people describe images in digital learning resources. We investigated participants’ behavior in the DescribeIT project using three conditions: one intrinsic motivation condition and two extrinsic motivation conditions. The results showed that participants were significantly intrinsically motivated to participate in the DescribeIT project. In addition, participants’ intrinsic motivation dominated the effect of the two extrinsic motivational factors in the extrinsic conditions.

    Comparing Map Learning between Touchscreen-Based Visual and Haptic Displays: A Behavioral Evaluation with Blind and Sighted Users

    The ubiquity of multimodal smart devices affords new opportunities for eyes-free applications for conveying graphical information to both sighted and visually impaired users. Using previously established haptic design guidelines for generic rendering of graphical content on touchscreen interfaces, the current study evaluates the learning and mental representation of digital maps, representing a key real-world translational eyes-free application. Two experiments involving 12 blind participants and 16 sighted participants compared cognitive map development and test performance on a range of spatio-behavioral tasks across three information-matched learning-mode conditions: (1) our prototype vibro-audio map (VAM), (2) traditional hardcopy-tactile maps, and (3) visual maps. Results demonstrated that when perceptual parameters of the stimuli were matched between modalities during haptic and visual map learning, test performance was highly similar (functionally equivalent) between the learning modes and participant groups. These results suggest equivalent cognitive map formation between both blind and sighted users and between maps learned from different sensory inputs, providing compelling evidence supporting the development of amodal spatial representations in the brain. The practical implications of these results include empirical evidence supporting a growing interest in the efficacy of multisensory interfaces as a primary interaction style for people both with and without vision. Findings challenge the long-held assumption that blind people exhibit deficits on global spatial tasks compared to their sighted peers, with results also providing empirical support for the methodological use of sighted participants in studies pertaining to technologies primarily aimed at supporting blind users.

    Exploring asymmetric roles in mixed-ability gaming

    Master's thesis, Informática, Universidade de Lisboa, Faculdade de Ciências, 2020. Noticeably, the majority of mainstream games, digital and tabletop alike, are still designed for players with a standard set of abilities. As such, people with some form of disability often face insurmountable challenges to play mainstream games, or are limited to playing games specifically designed for them. By conducting an initial study, we share multiplayer gaming experiences of people with visual impairments, collected from interviews with 10 adults and 10 minors, and 140 responses to an online survey. We include the perspectives of 17 sighted people who play with someone who has a visual impairment, collected in a second online survey. We found that people with visual impairments are playing diverse games, but face limitations in playing with others who have different visual abilities. What stood out is the lack of intersection in gaming opportunities and, consequently, in the habits and interests of people with different visual abilities. In this study, we highlight barriers associated with these experiences beyond inaccessibility issues, and discuss implications and opportunities for the design of mixed-ability gaming. As expected, we found a worrying absence of games that cater to different abilities. In this context, we explored ability-based asymmetric roles as a design approach to create engaging and challenging mixed-ability play. We designed and developed two collaborative testbed games exploring asymmetric interdependent roles. In a remote study with 13 mixed-visual-ability pairs, we assessed how roles affected perceptions of engagement, competence, and autonomy, using a mixed-methods approach. The games provided an engaging and challenging experience, in which differences in visual ability were not limiting. Our results underline how experiences unequal by design can give rise to an equitable joint experience.

    Multi-Sensory Interaction for Blind and Visually Impaired People

    This book conveys the visual elements of artwork to the visually impaired through various sensory elements, opening a new perspective for appreciating visual artwork. In addition, the technique of expressing a color code by integrating patterns, temperatures, scents, music, and vibrations was explored, and future research topics were presented. A holistic experience using multi-sensory interaction acquired by people with visual impairment was provided to convey the meaning and contents of the work through rich multi-sensory appreciation. A method that allows people with visual impairments to engage with artwork using a variety of senses, including touch, temperature, tactile pattern, and sound, helps them to appreciate artwork at a deeper level than can be achieved with hearing or touch alone. The development of such art appreciation aids for the visually impaired will ultimately improve their cultural enjoyment and strengthen their access to culture and the arts. The development of these new-concept aids ultimately expands opportunities for the non-visually impaired as well as the visually impaired to enjoy works of art, and breaks down the boundaries between the disabled and the non-disabled in the field of culture and arts through continuous efforts to enhance accessibility. In addition, the developed multi-sensory expression and delivery tool can be used as an educational tool to increase product and artwork accessibility and usability through multi-modal interaction. Training the multi-sensory experiences introduced in this book may lead to more vivid visual imagery, or seeing with the mind’s eye.

    A technology-assisted handwriting model to facilitate writing for dysgraphic students

    Dysgraphia is a student learning problem related to handwriting skills that are not in line with the student's age. To overcome the problem, various handwriting-skills intervention methods for dysgraphic students have been conducted. However, past research on the intervention methods has not covered all levels of handwriting skills, from the basic levels to automaticity. This embodies a research gap, especially the absence of a comprehensive technology-assisted support model. The function of the dysgraphic student writing support application will be more effective with the presence of interaction design (IxD) guidelines. Thus, this study proposes a technology-assisted handwriting model of dysgraphic students (DCHM) to improve their writing proficiency at the levels of visualization, imagination and automation. The model involves a combination of letter formation components (transcription, visualization, imagination, text generation and cognitive) and ICT support (letter formation animation, control, tracing, arrow animation, feedback and repetition). To achieve the objective, three phases of research methods are involved, namely 1) categorising and analysing handwriting patterns, designing the model and verifying the model; 2) developing the prototype; and 3) evaluating the prototype. The reliability of the model was tested through users' evaluation using the handwriting legibility scale (HLS). Comparisons were made based on handwriting samples before and after the intervention with the assistance of the prototype. The findings demonstrated that HLS is able to identify the sensitivity of each parameter in DCHM. To be more specific, the score achieved for global legibility and effort to read the script is 92 percent, while layout on the page and letter formation attained 96 percent and alterations to writing scored 100 percent.
    The overall evaluation of the model showed positive scores for the aspects of legibility, memory, correct letter formation, and ultimately automaticity achievement in handwriting. In conclusion, these findings confirmed that the implementation of DCHM in the prototype can significantly enhance the mastery of handwriting among dysgraphic students. More importantly, this study has contributed substantially to the field of interaction design by providing a novel understanding of the design model.

    FINDING OBJECTS IN COMPLEX SCENES

    Object detection is one of the fundamental problems in computer vision that has great practical impact. Current object detectors work well under certain conditions. However, challenges arise when scenes become more complex. Scenes are often cluttered, and object detectors trained on Internet-collected data fail when there are large variations in objects’ appearance. We believe the key to tackling those challenges is to understand the rich context of objects in scenes, which includes: the appearance variations of an object due to viewpoint and lighting condition changes; the relationships between objects and their typical environment; and the composition of multiple objects in the same scene. This dissertation aims to study the complexity of scenes from those aspects. To facilitate collecting training data with large variations, we design a novel user interface, ARLabeler, utilizing the power of Augmented Reality (AR) devices. Instead of labeling images from the Internet passively, we put an observer in the real world with full control over the scene complexities. Users walk around freely and observe objects from multiple angles. Lighting can be adjusted. Objects can be added and/or removed from the scene to create rich compositions. Our tool opens new possibilities to prepare data for complex scenes. We also study challenges in deploying object detectors in real-world scenes: detecting curb ramps in street view images. A system, Tohme, is proposed to combine detection results from detectors with human crowdsourcing verifications. One core component is a meta-classifier that estimates the complexity of a scene and assigns it to a human (accurate but costly) or a computer (low cost but error-prone) accordingly. One of the insights from Tohme is that context is crucial in detecting objects. To understand the complex relationship between objects and their environment, we propose a standalone context model that predicts where an object can occur in an image.
    By combining this model with object detection, it can find regions where an object is missing. It can also be used to find out-of-context objects. To take a step beyond single-object-based detections, we explicitly model the geometrical relationships between groups of objects and use the layout information to represent scenes as a whole. We show that such a strategy is useful in retrieving indoor furniture scenes with natural language inputs.
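    The human-or-computer triage step described in this abstract can be sketched as a simple cost-aware router. This is a hypothetical illustration only, not Tohme's actual code: the feature names, weights, and threshold below are all assumptions chosen to make the idea concrete.

```python
# Hypothetical sketch of a Tohme-style meta-classifier triage step:
# estimate scene complexity, then route each scene either to human
# verification (accurate but costly) or to the automatic detector
# (low cost but error-prone).

def scene_complexity(features):
    """Toy complexity score: a weighted sum over assumed scene features."""
    weights = {"clutter": 0.5, "occlusion": 0.3, "detector_confidence": -0.4}
    return sum(weights[k] * features.get(k, 0.0) for k in weights)

def route(features, threshold=0.35):
    """Send a scene to a human when estimated complexity exceeds threshold."""
    return "human" if scene_complexity(features) > threshold else "computer"

simple_scene = {"clutter": 0.2, "occlusion": 0.1, "detector_confidence": 0.9}
hard_scene = {"clutter": 0.9, "occlusion": 0.7, "detector_confidence": 0.2}
print(route(simple_scene))  # low complexity  -> "computer"
print(route(hard_scene))    # high complexity -> "human"
```

    In a real system the weighted sum would be replaced by a trained classifier, and the threshold would be tuned against the relative cost of crowd labor versus detector errors.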

    Creating Age-friendly Communities

    The "Creating Age-friendly Communities: Housing and Technology" publication presents contemporary, innovative, and insightful narratives, debates, and frameworks based on an international collection of papers from scholars spanning the fields of gerontology, social sciences, architecture, computer science, and gerontechnology. This extensive collection of papers aims to move the narrative and debates forward in this interdisciplinary field of age-friendly cities and communities.

    The development of a SmartAbility Framework to enhance multimodal interaction for people with reduced physical ability.

    Assistive technologies are an evolving market due to the number of people worldwide who have conditions resulting in reduced physical ability (also known as disability). Various classification schemes exist to categorise disabilities, as well as government legislation to ensure equal opportunities within the community. However, there is a notable absence of a process to map physical conditions to technologies in order to improve Quality of Life for this user group. This research is characterised primarily under the Human Computer Interaction (HCI) domain, although aspects of Systems of Systems (SoS) and Assistive Technologies have been applied. The thesis focuses on examples of multimodal interactions leading to the development of a SmartAbility Framework that aims to assist people with reduced physical ability by utilising their abilities to suggest interaction mediums and technologies. The framework was developed through a predominantly Interpretivism methodology approach consisting of a variety of research methods including state-of-the-art literature reviews, requirements elicitation, feasibility trials and controlled usability evaluations to compare multimodal interactions. The developed framework was subsequently validated through the involvement of the intended user community and domain experts, and supported by a concept demonstrator incorporating the SmartATRS case study. The aim and objectives of this research were achieved through the following key outputs and findings:
    - A comprehensive state-of-the-art literature review focussing on physical conditions and their classifications, HCI concepts relevant to multimodal interaction (Ergonomics of human-system interaction, Design For All and Universal Design), SoS definition and analysis techniques involving System of Interest (SoI), and currently-available products with potential uses as assistive technologies.
    - A two-phased requirements elicitation process applying surveys and semi-structured interviews to elicit the daily challenges for people with reduced physical ability, their interests in technology and the requirements for assistive technologies obtained through collaboration with a manufacturer.
    - Findings from feasibility trials involving monitoring brain activity using an electroencephalograph (EEG), tracking facial features through Tracking Learning Detection (TLD), applying iOS Switch Control to track head movements and investigating smartglasses.
    - Results of controlled usability evaluations comparing multimodal interactions with the technologies deemed to be feasible from the trials. The user community of people with reduced physical ability was involved during the process to maximise the usefulness of the data obtained.
    - An initial SmartDisability Framework developed from the results and observations ascertained through requirements elicitation, feasibility trials and controlled usability evaluations, which was validated through an approach of semi-structured interviews and a focus group.
    - An enhanced SmartAbility Framework to address the SmartDisability validation feedback by reducing the number of elements, using simplified and positive terminology and incorporating concepts from Quality Function Deployment (QFD).
    - A final consolidated version of the SmartAbility Framework that has been validated through semi-structured interviews with additional domain experts and addressed all key suggestions.
    The results demonstrated that it is possible to map technologies to people with physical conditions by considering the abilities that they can perform independently, without external support or the exertion of significant physical effort. This led to a realisation that the term ‘disability’ has a negative connotation that can be avoided through use of the phrase ‘reduced physical ability’. It is important to promote this rationale to the wider community through exploitation of the framework. This requires a SmartAbility smartphone application to be developed that allows users to input their abilities in order for recommendations of interaction mediums and technologies to be provided.
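    The ability-to-technology mapping at the core of this framework can be sketched as a simple lookup from reported abilities to candidate interaction mediums. This is a hypothetical illustration, not the framework itself: the ability names and the dictionary entries are assumptions, loosely drawn from the technologies the abstract mentions (EEG, TLD facial tracking, iOS Switch Control).

```python
# Hypothetical sketch of a SmartAbility-style recommender: map the abilities
# a user can perform independently to candidate interaction mediums.
# The keys and values below are illustrative assumptions, not the
# framework's actual mapping.

ABILITY_TO_MEDIUMS = {
    "head_movement": ["iOS Switch Control head tracking"],
    "facial_expression": ["TLD facial-feature tracking"],
    "brain_activity": ["EEG-based control"],
    "speech": ["voice commands"],
}

def recommend(abilities):
    """Return candidate interaction mediums for the abilities a user reports."""
    mediums = []
    for ability in abilities:
        mediums.extend(ABILITY_TO_MEDIUMS.get(ability, []))
    return mediums

print(recommend(["head_movement", "speech"]))
# -> ['iOS Switch Control head tracking', 'voice commands']
```

    A deployed version would presumably weight recommendations by usability-evaluation results rather than returning a flat list.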

    Machine Learning Driven Emotional Musical Prosody for Human-Robot Interaction

    This dissertation presents a method for non-anthropomorphic human-robot interaction using a newly developed concept entitled Emotional Musical Prosody (EMP). EMP consists of short expressive musical phrases capable of conveying emotions, which can be embedded in robots to accompany mechanical gestures. The main objective of EMP is to improve human engagement with, and trust in, robots while avoiding the uncanny valley. We contend that music, one of the most emotionally meaningful human experiences, can serve as an effective medium to support human-robot engagement and trust. EMP allows for the development of personable, emotion-driven agents, capable of giving subtle cues to collaborators while presenting a sense of autonomy. We present four research areas aimed at developing and understanding the potential role of EMP in human-robot interaction. The first research area focuses on collecting and labeling a new EMP dataset from vocalists, and using this dataset to generate prosodic emotional phrases through deep learning methods. Through extensive listening tests, the collected dataset and generated phrases were validated with a high level of accuracy by a large subject pool. The second research effort focuses on understanding the effect of EMP in human-robot interaction with industrial and humanoid robots. Here, significant results were found for improved trust, perceived intelligence, and likeability of EMP-enabled robotic arms, but not for humanoid robots. We also found significant results for improved trust in a social robot, as well as perceived intelligence, creativity and likeability in a robotic musician. The third and fourth research areas shift to broader use cases and potential methods to use EMP in HRI. The third research area explores the effect of robotic EMP on different personality types, focusing on extraversion and neuroticism.
    For robots, personality traits offer a unique way to implement custom responses, individualized to human collaborators. We discovered that humans prefer robots with emotional responses based on high extraversion and low neuroticism, with some correlation with the human collaborators’ own personality traits. The fourth and final research area focused on scaling up EMP to support interaction between groups of robots and humans. Here, we found that improvements in trust and likeability carried across from single robots to groups of industrial arms. Overall, the thesis suggests EMP is useful for improving trust and likeability for industrial robots, social robots and robot musicians, but not for humanoid robots. The thesis bears future implications for HRI designers, showing the extensive potential of careful audio design and the wide range of outcomes audio can have on HRI.

    Enabling people with dementia and mild cognitive impairment to maintain physically active lives: what role can technology play?

    Ph.D. thesis. People with dementia and mild cognitive impairment (MCI) tend to be inactive, despite evidence that physical activity can improve cognition. To date, interventions to support physical activity have been lacking. This thesis explores the barriers, motivators and facilitators of physical activity for people with mild dementia and MCI, and the opportunities for digital technologies to facilitate more active lives. In the first of three stages of human-centred design research, eight people with mild dementia, seven with MCI and eleven of their spouses shared their experiences of physical activity through diary-probe-led interviews. Next, in design workshops with experts in health research, engineering and design, concepts for technologies to support physical activity were developed, informed by personas that described participants’ experiences. Finally, storyboard illustrations of the concept technologies were presented to participants for their critique in focus groups. Thematic data analysis was conducted at each stage. This thesis makes three key contributions to the literature on physical activity in MCI and dementia. First, the importance of everyday activities for an active and fulfilled life is revealed. Second, for people with dementia a variety of barriers to activity are identified, including motivational impairment and difficulties performing everyday activities, whereas MCI appears to have negligible impact. Third, the significance of partners in an active life is revealed, particularly for those with dementia. In response to these findings, technologies to support physical activity in dementia are proposed; however, participants’ responses indicate that human interventions and low-tech solutions should be prioritised. This enquiry also provides novel insights into methods for human-centred design with people with MCI and mild dementia.
    This thesis highlights the importance of working with people with dementia and MCI to develop technologies and services that facilitate the valued, purposeful activities that contribute to physically active and fulfilled lives.