409 research outputs found

    Multi-Sensory Interaction for Blind and Visually Impaired People

    This book conveyed the visual elements of artwork to visually impaired people through various sensory channels, opening a new perspective for appreciating visual artwork. In addition, a technique for expressing a color code by integrating patterns, temperatures, scents, music, and vibrations was explored, and future research topics were presented. A holistic experience using multi-sensory interaction was provided to convey the meaning and contents of a work to people with visual impairment through rich multi-sensory appreciation. A method that allows people with visual impairments to engage with artwork using a variety of senses, including touch, temperature, tactile pattern, and sound, helps them appreciate artwork at a deeper level than can be achieved with hearing or touch alone. The development of such art appreciation aids for the visually impaired will ultimately improve their cultural enjoyment and strengthen their access to culture and the arts. These new aids also expand opportunities for non-visually impaired as well as visually impaired people to enjoy works of art and, through continuous efforts to enhance accessibility, break down the boundaries between the disabled and the non-disabled in the field of culture and the arts. In addition, the developed multi-sensory expression and delivery tool can be used as an educational tool to increase product and artwork accessibility and usability through multi-modal interaction. Training with the multi-sensory experiences introduced in this book may lead to more vivid visual imagery, or seeing with the mind's eye.

    The Anatomy of Virtual Manipulative Apps: Using Grounded Theory to Conceptualize and Evaluate Educational Apps that Contain Virtual Manipulatives

    This exploratory qualitative study used grounded theory to investigate the anatomy of educational apps that contain virtual manipulatives. For this study, the researcher observed 100 virtual manipulatives within educational apps designed for the iPad in order to expand explanations of, and build theory about, virtual manipulatives within apps. Affordance theory framed all six phases of the study, in which the researcher identified virtual manipulatives situated within educational apps, conducted observer-as-participant structured and unstructured observations, analyzed component data including field notes and memos using open and axial coding, created a conceptual framework, developed an evaluation tool prototype for evaluating virtual manipulatives within educational apps, and used that prototype to evaluate additional virtual manipulatives within educational apps. The constant comparative method of open and axial coding was used to analyze the observation data, which included field notes, memos, and video recordings. This in-depth qualitative analysis yielded six results concerning the components and relationships within educational apps that contain virtual manipulatives: (1) virtual manipulatives within apps comprise two components, dynamic mathematical objects and features; (2) there are three distinct types of dynamic mathematical objects; (3) there are eight categories of features; (4) a single virtual manipulative can contain one or multiple objects; (5) varying relationships can exist between the dynamic objects and the features within a virtual manipulative; and (6) varying relationships can exist among the virtual manipulatives within an educational app in terms of their number, type, and the ways the user proceeds through them. A conceptual framework was also developed during the study to illustrate the components and relationships that emerged from the analysis and to serve as the basis for an evaluation tool prototype for educational apps that contain virtual manipulatives. The components, relationships, framework, and evaluation tool prototype developed during this study advance the literature on virtual manipulatives and provide researchers with a common language for evaluating these apps.
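As a rough illustration of the framework's components, the sketch below models the "anatomy" the study describes as simple data types. This is an assumption-laden reading of the abstract, not the study's own evaluation tool: the concrete names of the three object types and eight feature categories are not enumerated here, so placeholder fields stand in for them.

```python
# Illustrative sketch only: the component relationships reported by the study,
# expressed as data types. Placeholder fields stand in for the three object
# types and eight feature categories, which are not enumerated in this abstract.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DynamicMathObject:
    object_type: str   # one of the three object types identified in the study

@dataclass
class Feature:
    category: str      # one of the eight feature categories identified in the study

@dataclass
class VirtualManipulative:
    # Result (1): a manipulative comprises dynamic mathematical objects and features.
    # Result (4): it can contain one or multiple objects.
    objects: List[DynamicMathObject] = field(default_factory=list)
    features: List[Feature] = field(default_factory=list)

@dataclass
class EducationalApp:
    # Result (6): an app can contain multiple manipulatives with varying relationships.
    manipulatives: List[VirtualManipulative] = field(default_factory=list)
```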

    Applying touch gesture to improve application accessing speed on mobile devices.

    The touch gesture shortcut is one of the most significant contributions to Human-Computer Interaction (HCI). It is used in many fields, e.g., performing web browsing tasks (moving to the next page, adding bookmarks, etc.) on a smartphone, manipulating a virtual object on a tabletop device, and communicating between two touch screen devices. Compared with the traditional Graphical User Interface (GUI), the touch gesture shortcut is more efficient, more natural, more intuitive, and easier to use. With the rapid development of smartphone technology, an increasing number of data items are accumulating on users' mobile devices, such as contacts, installed apps and photos. As a result, it has become troublesome to find a target item on a mobile device with the traditional GUI. For example, to find a target app, the user may have to slide and browse through several screens. This thesis addresses this challenge by proposing two alternative methods of using a touch gesture shortcut to find a target item (an app, as an example) on a mobile device. Current touch gesture shortcut methods either employ a universal built-in system-defined shortcut template or a gesture-item set defined by users before using the device. In either case, users need to learn or define the gestures first, and then recall and draw them to reach the target item according to the template or predefined set. Evidence has shown that, compared with the GUI, the touch gesture shortcut has an advantage when performing several types of tasks, e.g., text editing, picture drawing and audio control, but it is unknown whether it is quicker or more effective than the traditional GUI for finding target apps. This thesis first conducts an exploratory study to understand users' memorisation of their Personalized Gesture Shortcuts (PGS) for 15 frequently used mobile apps. An experiment is then conducted to investigate (1) users' recall accuracy on the PGS for finding both frequently and infrequently used target apps, and (2) the speed with which users are able to access the target apps relative to the GUI. The results show that the PGS produced a clear speed advantage (1.3 s faster on average) over the traditional GUI, while there was an approximately 20% failure rate due to unsuccessful recall of the PGS. To address the unsuccessful recall problem, this thesis explores ways of developing a new interactive approach based on the touch gesture shortcut but without requiring recall or predefinition before use. Named the Intelligent Launcher in this thesis, it predicts and launches any intended target app from an unconstrained gesture drawn by the user. To explore how to achieve this, a third experiment investigated the relationship between the reasons underlying a user's gesture creation and the gesture shape (handwriting, non-handwriting or abstract) they used as their shortcut. According to the results, and unlike existing approaches, the thesis proposes that the launcher should predict the user's intended app from three types of gestures: first, non-handwriting gestures, via the visual similarity between the gesture and the app's icon; second, handwriting gestures, via the app's library name plus functionality; and third, abstract gestures, via the app's usage history. In light of these findings, the Intelligent Launcher was designed and developed based on assumptions drawn from the empirical data.
This thesis introduces the interaction, the architecture and the technical details of the launcher, and describes how the data from the third experiment are used to improve the predictions with a machine learning method, i.e., a Markov Model. An evaluation experiment shows that the Intelligent Launcher achieved user satisfaction with a prediction accuracy of 96%. It is still difficult to know which type of gesture a user tends to use, so a fourth experiment explored the factors that influence the choice of touch gesture shortcut type for accessing a target app. The results show that (1) those who preferred a name-based method used it more consistently and created more letter gestures than those who preferred the other three methods; (2) those who preferred the keyword app search method created more letter gestures than other types; (3) those who preferred an iOS system created more drawing gestures than other types; (4) letter gestures were more often used for frequently used apps, whereas drawing gestures were more often used for infrequently used apps; and (5) participants tended to use the same creation method as their preferred method on different days of the experiment. This thesis contributes to the body of Human-Computer Interaction knowledge. It proposes two alternative methods that are more efficient and flexible for finding a target item among a large number of items. The PGS method has been confirmed as effective and has a clear speed advantage. The Intelligent Launcher demonstrates a novel way of predicting a target item from the gesture the user draws. The findings concerning the relationship between a user's choice of gesture for the shortcut and individual factors have informed the design of a more flexible touch gesture shortcut interface for "target item finding" tasks. Among the different types of data items one might search for, the Intelligent Launcher focuses on finding target apps, since the variety in an app's visual appearance and functionality makes it more difficult to predict than other targets, such as a standard phone setting, a contact or a website. However, we believe that the ideas presented in this thesis can be further extended to other types of items, such as videos or photos in a Photo Library, places on a map or clothes in an online store. Moreover, this study leads the way in exploiting the advantages of machine learning methods in touch gesture shortcut interactions.
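The abstract does not give implementation detail, but a minimal sketch of the prediction step it describes might look like the following. Everything here is an assumption for illustration: `icon_similarity` and `name_match` are hypothetical stand-ins for real matchers, and `UsageMarkovModel` is one plausible reading of "a machine learning method, i.e., a Markov Model", not the launcher's actual code.

```python
# Illustrative sketch only (not the thesis's implementation): ranking candidate
# apps for an unconstrained gesture using the three cues described above.
from collections import defaultdict

class UsageMarkovModel:
    """First-order Markov model over the sequence of launched apps (assumed design)."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, previous_app, next_app):
        self.counts[previous_app][next_app] += 1

    def probability(self, previous_app, candidate_app):
        total = sum(self.counts[previous_app].values())
        return self.counts[previous_app][candidate_app] / total if total else 0.0

def icon_similarity(gesture_strokes, app_name):
    """Hypothetical visual matcher between a drawn shape and an app's icon."""
    return 0.0

def name_match(recognised_text, app_name):
    """Hypothetical matcher between recognised handwriting and an app's name."""
    return 1.0 if recognised_text and recognised_text.lower() in app_name.lower() else 0.0

def rank_candidates(gesture_type, gesture_strokes, recognised_text,
                    previous_app, installed_apps, usage_model):
    """Score every installed app with the cue appropriate to the gesture type."""
    scores = {}
    for app in installed_apps:
        if gesture_type == "handwriting":        # letters -> match the app's name
            scores[app] = name_match(recognised_text, app)
        elif gesture_type == "non_handwriting":  # drawings -> match the app's icon
            scores[app] = icon_similarity(gesture_strokes, app)
        else:                                    # abstract -> usage history
            scores[app] = usage_model.probability(previous_app, app)
    return sorted(scores, key=scores.get, reverse=True)
```

A real launcher would likely blend these cues rather than switch between them, but the switch keeps the sketch close to the three-way mapping stated in the abstract.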

    Metaphors of Mobile Communications (Metafore mobilnih komunikacija)

    Mobile communications are a fast-developing field of information and communication technology whose exploration within the analytical framework of cognitive linguistics, based on a sample of 1005 entries, reveals the pervasive presence of metaphor, metonymy, analogy and conceptual integration. The analysis of the sample, consisting of words and phrases related to mobile media, mobile operating systems and interface design, the terminology of mobile networking, as well as the slang and textisms employed by mobile gadget users, shows that these cognitive mechanisms play a key role in facilitating interaction between people and a wide range of mobile computing devices, from laptops and PDAs to mobile phones, tablets and wearables. They are the cornerstones of comprehension that underlie the principles of functioning of graphical user interfaces and direct manipulation in computing environments. A separate sample of 660 emoticons and emoji, exhibiting the potential for semantic expansion, was also analyzed, in view of the significance of pictograms for text-based communication in the form of text messages or exchanges on social media sites regularly accessed via mobile devices.

    Designing for Shareable Interfaces in the Wild

    Despite excitement about the potential of interactive tabletops to support collaborative work, there have been few empirical demonstrations of their effectiveness (Marshall et al., 2011). In particular, while lab-based studies have explored the effects of individual design features, there has been a dearth of studies evaluating the success of systems in the wild. For this technology to be of value, designers and systems builders require a better understanding of how to develop and evaluate tabletop applications for deployment in real-world settings. This dissertation reports on two systems designed through a process that incorporated ethnography-style observations, iterative design and in-the-wild evaluation. The first study focused on collaborative learning in a medical setting. To address the fact that visitors to a hospital emergency ward were leaving with an incomplete understanding of their diagnosis and treatment, a system was prototyped in a working Emergency Room (ER) with doctors and patients. The system was found to be helpful, but adoption issues hampered its impact. The second study focused on a planning application for visitors to a tourist information centre. Issues and opportunities for a successful, contextually fitted system were addressed, and the system was found to be effective in supporting group planning activities by novice users, in particular by facilitating users' first experiences, providing effective signage and offering assistance to guide the user through the application. This dissertation contributes to the understanding of multi-user systems through a literature review of tabletop systems, collaborative tasks, design frameworks and the evaluation of prototypes. Some support was found for the claim that tabletops are a useful technology for collaboration, and several issues were discussed. Contributions to understanding in this field are delivered through design guidelines, heuristics, frameworks, and recommendations, in addition to the two case studies, to help guide future tabletop system creators.

    Feral Ecologies: A Foray into the Worlds of Animals and Media

    This dissertation wonders what non-human animals can illuminate about media in the visible contact zones where they meet. It treats these zones as rich field sites from which to excavate neglected material-discursive-semiotic relationships between animals and media. What these encounters demonstrate is that animals are historically and theoretically implicated in the imagination and materialization of media and their attendant processes of communication. Chapter 1 addresses how animals have been excluded from the cultural production of knowledge as a result of an anthropocentric perspective that renders them invisible or reduces them to ciphers for human meanings. It combines ethology and cinematic realism to craft a reparative, non-anthropocentric way of looking that is able to accommodate the plenitude of animals and their traces, and grant them the ontological heft required to exert productive traction in the visual field. Chapter 2 identifies an octopus's encounter with a digital camera, and its chance cinematic inscription, as part of a larger phenomenon of accidental animal videos. Because non-humans are the catalysts for their production, these videos offer welcome realist counterpoints to traditional wildlife imagery, and affirm cinema's ability to intercede non-anthropocentrically between humans and the world. Realism is essential to cinematic communication, and that realism is ultimately an achievement of non-human intervention. Chapter 3 investigates how an Internet hoax about a non-human ape playing with an iPad in a zoo led to the development of Apps for Apes, a real-life enrichment project that pairs captive orangutans with iPads. It contextualizes and criticizes this project's discursive underpinnings but argues that the contingencies that transpire at the touchscreen interface shift our understanding of communication away from sharing minds and toward respecting immanence and accommodating difference. Finally, Chapter 4 examines a publicity stunt wherein a digital data-carrying homing pigeon races against the Internet to meet a computer. Rather than a competition, this is a continuation of a longstanding collaboration between the carrier pigeon and the infrastructure of modern communications. The carrier pigeon is not external but rather endemic to our understanding of communication as a material process that requires movement and coordination to make connections.

    Using pressure input and thermal feedback to broaden haptic interaction with mobile devices

    Pressure input and thermal feedback are two under-researched aspects of touch in mobile human-computer interfaces. Pressure input could provide a wide, expressive range of continuous input for mobile devices. Thermal stimulation could provide an alternative means of conveying information non-visually. This thesis investigated 1) how accurate pressure-based input on mobile devices could be when the user was walking and provided with only audio feedback, and 2) what forms of thermal stimulation are both salient and comfortable and so could be used to design structured thermal feedback for conveying multi-dimensional information. The first experiment tested control of pressure on a mobile device when sitting and using audio feedback. Targeting accuracy was >= 85% when maintaining 4-6 levels of pressure across 3.5 Newtons, using only audio feedback and a Dwell selection technique. Two further experiments tested control of pressure-based input when walking and found that accuracy was very high (>= 97%), even when walking and using only audio feedback, when a rate-based input method was used. A fourth experiment tested how well each digit of one hand could apply pressure to a mobile phone, individually and in combination with others. Each digit could apply pressure highly accurately, but not equally so, and some performed better in combination than alone. 2- or 3-digit combinations were more precise than 4- or 5-digit combinations. Experiment 5 compared one-handed, multi-digit pressure input using all 5 digits to traditional two-handed multitouch gestures for a combined zooming and rotating map task. Results showed comparable performance, with multitouch being ~1% more accurate but pressure input being ~0.5 s faster overall. Two further experiments, one conducted when sitting indoors and one when walking indoors, tested how salient and subjectively comfortable/intense various forms of thermal stimulation were. Faster or larger changes were more salient, faster to detect and less comfortable, and cold changes were more salient and faster to detect than warm changes. The two final studies designed two-dimensional structured 'thermal icons' that could convey two pieces of information. When indoors, icons were correctly identified with 83% accuracy. When outdoors, accuracy dropped to 69% when sitting and 61% when walking. This thesis provides the first detailed study of how precisely pressure can be applied to mobile devices when walking with only audio feedback, and the first systematic study of how to design thermal feedback for interaction with mobile devices in mobile environments.
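As an illustration of the pressure-input technique described above, here is a minimal sketch of quantising a continuous pressure reading into discrete levels and confirming a target with a dwell timeout. Only the 3.5 N range and the 4-6 level counts come from the abstract; the dwell duration and all names are assumptions, not the thesis's implementation.

```python
# Illustrative sketch only: quantising a continuous pressure reading into discrete
# levels and confirming a target with a dwell timeout, in the spirit of the
# pressure-input experiments above. The dwell duration is a hypothetical value.
MAX_PRESSURE_N = 3.5      # pressure range reported in the experiments (Newtons)
NUM_LEVELS = 6            # the studies tested 4-6 discrete levels
DWELL_SECONDS = 1.0       # hypothetical dwell time required to confirm a selection

def pressure_to_level(pressure_newtons: float) -> int:
    """Map a raw pressure value onto one of NUM_LEVELS equal-width bands."""
    clamped = max(0.0, min(pressure_newtons, MAX_PRESSURE_N))
    return min(int(clamped / MAX_PRESSURE_N * NUM_LEVELS), NUM_LEVELS - 1)

def dwell_select(pressure_samples):
    """Return the level held steadily for DWELL_SECONDS, or None.

    `pressure_samples` is an iterable of (timestamp_seconds, pressure_newtons)
    pairs, e.g. streamed from a force-sensitive touch screen.
    """
    current_level, held_since = None, None
    for timestamp, pressure in pressure_samples:
        level = pressure_to_level(pressure)
        if level != current_level:
            current_level, held_since = level, timestamp
        elif timestamp - held_since >= DWELL_SECONDS:
            return current_level
    return None
```

A rate-based variant, like the one the walking experiments favoured, would instead map pressure to the speed at which the selected level changes rather than to the level itself.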

    Functional Animation: Interactive Animation in Digital Artifacts
