10 research outputs found

    Enhanced Virtuality: Increasing the Usability and Productivity of Virtual Environments

    Get PDF
    With steadily increasing display resolution, more accurate tracking, and falling prices, Virtual Reality (VR) systems are close to establishing themselves successfully in the market. Various tools help developers create complex multi-user interactions within adaptive virtual environments. However, the spread of VR systems also brings additional challenges: diverse input devices with unfamiliar shapes and button layouts stand in the way of intuitive interaction. Moreover, the limited feature set of existing software forces users to fall back on conventional PC- or touch-based systems. Collaborating with other users at the same location also poses challenges regarding the calibration of different tracking systems and collision avoidance. In remote collaboration, interaction is further affected by latency and connection losses. Finally, users have different requirements for the visualization of content within virtual worlds, e.g., size, orientation, color, or contrast. A strict replication of real environments in VR wastes potential and cannot accommodate users' individual needs. To address these problems, this thesis presents solutions in the areas of input, collaboration, and augmentation of virtual worlds and users, aimed at increasing the usability and productivity of VR. First, PC-based hardware and software are carried over into the virtual world to preserve the familiarity and feature set of existing applications in VR. Virtual stand-ins for physical devices, e.g., keyboard and tablet, and a VR mode for applications allow users to transfer real-world skills into the virtual world. Furthermore, an algorithm is presented that calibrates multiple co-located VR devices with high accuracy, low hardware requirements, and little effort. Since VR headsets hide the user's real surroundings, the relevance of full-body avatar visualization for collision avoidance and remote collaboration is demonstrated. In addition, personalized spatial and temporal modifications are presented that increase users' usability, work performance, and social presence. Discrepancies between the virtual worlds that arise from personal adaptations are compensated by avatar redirection methods. Finally, some of the methods and findings are integrated into an exemplary application to demonstrate their practical applicability. This thesis shows that virtual environments can build on real-world skills and experiences to ensure familiar and easy interaction and collaboration among users. Moreover, individual augmentations of virtual content and avatars make it possible to overcome real-world limitations and enhance the experience of VR environments.
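
    The abstract does not detail the co-location calibration algorithm. One standard building block for aligning two tracking coordinate systems is rigid point-set registration over corresponding tracked points (the Kabsch algorithm); the Python sketch below illustrates that general approach under the assumption that point correspondences between the two systems are already known. It is a sketch of the generic technique, not the thesis's actual method.

```python
import numpy as np

def kabsch_align(P, Q):
    """Find rotation R and translation t minimizing ||R @ P_i + t - Q_i||.

    P, Q: (N, 3) arrays of corresponding points tracked by two
    co-located VR systems (correspondences assumed known).
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)   # centroids
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Hypothetical usage: map tracked positions from system A into system B.
P = np.random.rand(10, 3)                     # points seen by system A
t_true = np.array([0.5, 0.0, 1.2])
Q = P + t_true                                # same points seen by system B
R, t = kabsch_align(P, Q)
print(np.allclose(P @ R.T + t, Q))            # True
```

    The SVD-based solution is a least-squares fit, so additional point pairs (e.g., from a tracked object visible to both systems) reduce the influence of tracking noise.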

    Predicting and Reducing the Impact of Errors in Character-Based Text Entry

    Get PDF
    This dissertation focuses on the effect of errors in character-based text entry techniques. The effect of errors is examined from theoretical, behavioral, and practical standpoints. This document starts with a review of the existing literature. It then presents the results of a user study that investigated the effect of different error correction conditions on popular text entry performance metrics. Results showed that the way errors are handled has a significant effect on all frequently used error metrics. The outcomes also provided an understanding of how users notice and correct errors. Building on this, the dissertation then presents a new high-level, method-agnostic model for predicting the cost of error correction with a given text entry technique. Unlike existing models, it accounts for both human and system factors and is general enough to be used with most character-based techniques. A user study verified the model by measuring the effects of a faulty keyboard on text entry performance. Subsequently, the work explores potential user adaptation to a gesture recognizer's misrecognitions in two user studies. Results revealed that users gradually adapt to misrecognition errors by replacing the erroneous gestures with alternative ones, if available. Also, users adapt to a frequently misrecognized gesture faster if it occurs more often than the other error-prone gestures. Finally, this work presents a new hybrid approach to simulate pressure detection on standard touchscreens. The new approach combines the existing touch-point- and time-based methods. Results of two user studies showed that it can simulate pressure detection more reliably for at least two pressure levels: regular (~1 N) and extra (~3 N). A new pressure-based text entry technique is then presented that does not require tapping outside the virtual keyboard to reject an incorrect or unwanted prediction. Instead, the technique requires users to apply extra pressure to the tap on the next target key. The performance of the new technique was compared with the conventional technique in a user study. Results showed that for inputting short English phrases with 10% non-dictionary words, the new technique increases entry speed by 9% and decreases error rates by 25%. Also, most users (83%) favored the new technique over the conventional one. Together, the research presented in this dissertation gives more insight into how errors affect text entry and presents improved text entry methods.
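
    The hybrid pressure simulation is described only at a high level. A plausible reading is that a tap's contact size (the touch-point cue) and its dwell time (the time cue) are combined to separate regular (~1 N) from extra (~3 N) presses; the Python sketch below illustrates such a combination. The feature names and threshold values are hypothetical, not taken from the dissertation.

```python
def classify_pressure(contact_area_mm2: float, dwell_ms: float) -> str:
    """Classify a tap as 'regular' or 'extra' pressure.

    Combines a touch-point feature (contact area grows as the finger
    presses harder) with a time feature (harder taps tend to dwell
    longer). Thresholds are hypothetical placeholders; a real system
    would calibrate them per user or per device.
    """
    AREA_THRESHOLD = 80.0    # mm^2, assumed
    DWELL_THRESHOLD = 150.0  # ms, assumed
    area_vote = contact_area_mm2 > AREA_THRESHOLD
    time_vote = dwell_ms > DWELL_THRESHOLD
    # Require both cues to agree before reporting 'extra' pressure,
    # trading some sensitivity for fewer false positives.
    return "extra" if (area_vote and time_vote) else "regular"

print(classify_pressure(60.0, 90.0))    # regular
print(classify_pressure(110.0, 220.0))  # extra
```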

    Two one-handed tilting-based writing techniques on a smartphone

    Get PDF
    Text entry is a vital part of operating a mobile device, and it is often done using a virtual keyboard such as QWERTY. Text entry using a virtual keyboard often faces difficulties, as each button is small and intangible, which can lead to high error rates and low text entry speed. This thesis reports a user experiment on two novel tilting-based text entry techniques, with and without a button press for key selection. The experiment focused on two main issues: 1) the performance of the tilting-based methods in comparison to the commonly used reference method, the virtual QWERTY keyboard; and 2) evaluation of subjective satisfaction with the novel methods. The experiment was conducted using the TEMA software running on an Android smartphone with a relatively small screen. All writing was done with one hand only. The participants were able to comprehend and learn to use the new methods without any major problems. The development of text entry skill with the new methods was clear, as the mean text entry rates improved by 63-80 percent. The reference method, QWERTY, remained the fastest of the three throughout the experiment. The tilting-based technique with a key press for selection had the lowest total error rate at the end of the experiment, closely followed by QWERTY. Interview and questionnaire results showed that in some cases the tilting-based method was the preferred method of the three. Many of the shortcomings of tilt-based methods found during the experiment can be addressed in further development, and these methods are likely to prove competitive on devices with very small displays. Tilting has potential as part of other interaction techniques besides text entry, and could be used to increase the bandwidth between the device and the user without significantly increasing cognitive load.
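
    The abstract does not specify how tilt maps to characters. A common scheme in tilt-based text entry is to quantize the device's pitch and roll into a grid of zones, one key per zone; the Python sketch below shows that generic mapping. The 3x3 grid and the +/-30 degree working range are illustrative assumptions, not the thesis's design.

```python
# Hypothetical 3x3 character grid selected by device tilt.
GRID = [["a", "b", "c"],
        ["d", "e", "f"],
        ["g", "h", "i"]]

def tilt_to_key(pitch_deg: float, roll_deg: float) -> str:
    """Map pitch/roll (degrees, from the accelerometer) to a key.

    Tilt beyond +/-30 degrees saturates at the grid edge; the
    working range is an assumed design parameter.
    """
    def zone(angle: float) -> int:
        clamped = max(-30.0, min(30.0, angle))
        return min(2, int((clamped + 30.0) / 20.0))  # zone 0, 1, or 2
    return GRID[zone(pitch_deg)][zone(roll_deg)]

print(tilt_to_key(-25.0, 0.0))   # 'b' (tilted forward, level roll)
print(tilt_to_key(25.0, 25.0))   # 'i'
```

    In the variant with a button press, the selected key would only be committed when the button fires; in the press-free variant, a dwell time over one zone could serve as the commit signal.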

    Keyboard layout in eye gaze communication access: typical vs. ALS

    Get PDF
    The purpose of the current investigation was to determine which of three keyboard layouts is the most efficient for typical as well as neurologically compromised first-time users of eye gaze access. All participants (16 neurotypical, 16 with amyotrophic lateral sclerosis; ALS) demonstrated hearing and reading abilities sufficient to interact with all stimuli. Participants from each group answered questions about technology use and vision status. Participants with ALS also noted the date of their first disease-related symptoms, initial symptoms, and date of diagnosis. Once a speech generating device (SGD) with eye gaze access capabilities was calibrated to an individual participant's eyes, s/he practiced using the access method. Then all participants spelled words, phrases, and a longer phrase on each of three keyboard layouts (i.e., standard QWERTY, alphabetic with highlighted vowels, frequency of occurrence). Accuracy of response, error rate, and eye typing time were determined for each participant on all layouts. Results indicated that both groups shared equivalent experience with technology. Additionally, neurotypical adults typed more accurately than the ALS group on all keyboards. The ALS group made more errors in eye typing than the neurotypical participants, but accuracy and disease status were independent of one another. Although the neurotypical group had a higher efficiency ratio (i.e., accurate keystrokes to total active task time) for the frequency layout, there were no such differences for the QWERTY or alphabetic keyboards. No differences were observed between the groups for either typing rate or preference ratings on any keyboard, though most participants preferred the standard QWERTY layout. No relationships were identified between the preference order of the three keyboards and efficiency scores or the quantitative variables (i.e., rate, accuracy, error scores). There was no relationship between time since ALS diagnosis and preference ratings for any of the three keyboard layouts. It appears that individuals with spinal-onset ALS perform similarly to their neurotypical peers with respect to first-time use of eye gaze access for typing words and phrases on three different keyboard layouts. Ramifications of the results as well as future directions for research are discussed.
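
    The efficiency ratio is defined only parenthetically above; written out as a formula, it is:

```latex
\[
\text{efficiency ratio} \;=\; \frac{\text{accurate keystrokes}}{\text{total active task time}}
\]
```

    Higher values mean more correct keystrokes per unit of active eye-typing time, which makes layouts comparable even when task durations differ.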

    Supporting the Development Process of Multimodal and Natural Automotive User Interfaces

    Get PDF
    Nowadays, driving a car places multi-faceted demands on the driver that go beyond maneuvering a vehicle through road traffic. The number of additional functions for entertainment, infotainment and comfort has increased rapidly in recent years. Each new function in the car is designed to make driving as pleasant as possible, but it also increases the risk that the driver will be distracted from the primary driving task. One of the most important goals for designers of new and innovative automotive user interfaces is therefore to keep driver distraction to a minimum while providing appropriate support to the driver. This goal can be achieved by providing tools and methods that support a human-centred development process. In this dissertation, a design space is presented that helps to analyze the use of context, to generate new ideas for automotive user interfaces, and to document them. Furthermore, new opportunities for rapid prototyping are introduced. To be able to evaluate new automotive user interfaces and interaction concepts regarding their effect on driving performance, driving simulation software was developed within the scope of this dissertation. In addition, research results in the field of multimodal, implicit and eye-based interaction in the car are presented. The different case studies illustrate systematic and comprehensive research on the opportunities of these kinds of interaction, as well as their effects on driving performance. We developed a prototype of a vibration steering wheel that communicates navigation instructions. Another steering wheel prototype has a display integrated in the middle and enables handwriting input. A further case study explores a visual placeholder concept to assist drivers when using in-car displays while driving. When a driver looks at a display and then at the street, the last gaze position on the display is highlighted to assist the driver when switching attention back to the display. This speeds up the process of resuming an interrupted task. In another case study, we compared gaze-based interaction with touch and speech input. In the last case study, a driver-passenger video link system is introduced that enables the driver to have eye contact with the passenger without turning his head. On the whole, this dissertation shows that by using a new human-centred development process, modern interaction concepts can be developed in a meaningful way.
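
    The visual placeholder concept is concrete enough to sketch: record the last gaze point whenever the driver's gaze leaves the display, and highlight that point once when gaze returns. The Python sketch below illustrates this event logic; the class and method names are hypothetical, not from the dissertation's implementation.

```python
from typing import Optional, Tuple

class GazePlaceholder:
    """Remember where the driver last looked on an in-car display,
    and highlight that spot when attention returns to the display."""

    def __init__(self) -> None:
        self.last_gaze: Optional[Tuple[int, int]] = None
        self.on_display = False

    def update(self, gaze_xy: Optional[Tuple[int, int]]) -> Optional[Tuple[int, int]]:
        """Feed one gaze sample; gaze_xy is None while the driver looks
        at the road. Returns a point to highlight, or None."""
        if gaze_xy is None:                  # gaze has left the display
            self.on_display = False
            return None
        returning = not self.on_display      # first sample back on display
        self.on_display = True
        highlight = self.last_gaze if returning else None
        self.last_gaze = gaze_xy             # keep tracking the position
        return highlight

ph = GazePlaceholder()
ph.update((120, 45))         # looking at the display, nothing to restore
ph.update(None)              # glance at the road
print(ph.update((130, 50)))  # back on the display -> highlights (120, 45)
```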

    Investigating retrospective interoperability between the accessible and mobile webs with regard to user input

    Get PDF
    The World Wide Web (Web) has become a key technology for providing access to on-line information. Mobile Web users, who access the Web using small devices such as mobile phones and Personal Digital Assistants (PDAs), make errors when entering text and controlling cursors. These errors are caused both by the characteristics of a device and by the environment in which it is used, and are called situational impairments. Disabled Web users, on the other hand, have difficulties in accessing the Web due to impairments in visual, hearing or motor abilities. We assert that errors experienced by Mobile Web users are similar in scope to those hindering motor-impaired Web users with dexterity issues, and that existing solutions from the motor-impaired users domain can be migrated to the Mobile Web domain to address the common errors. Results of a systematic literature survey revealed 12 error types that affect both Mobile Web users and disabled Web users. These errors range from failing to locate a key to failing to pin-point a cursor. User experiments confirmed that Mobile Web users and motor-impaired Web users share errors in scope: they both miss key presses, press additional keys, unintentionally press a key more than once, or press a key too long. In addition, both small device users and motor-impaired desktop users have difficulties in performing clicking, multiple clicking and drag selecting. Furthermore, when small device users are moving, both the scope and the magnitude of the errors are shared. In order to address these errors, we migrated existing solutions from the disabled Web users domain into the Mobile Web users domain. We developed a typing error correction system for Mobile Web users. Results of the user evaluation indicated that the proposed system can significantly reduce the error rates of Mobile Web users. This work makes an important contribution to both the Web accessibility field and the Mobile Web field. By leveraging research from the Web accessibility field into the Mobile Web field, we have linked two disjoint domains together. We have migrated solutions from one domain to another, and thus have improved the usability and accessibility of the Mobile Web.
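
    The abstract names the shared error classes (missed keys, extra keys, repeated keys, over-long presses) but not the internals of the correction system. As a minimal illustration of handling one such class, the Python sketch below drops repeated presses of the same key that arrive within a short time window; the 300 ms default is an assumed parameter, not a value from the thesis.

```python
def filter_repeats(events, window_ms: float = 300.0):
    """Drop repeated presses of the same key arriving within
    `window_ms` of the previous press, treating them as unintended.

    events: iterable of (timestamp_ms, key) tuples, time-ordered.
    Returns the filtered list of events.
    """
    filtered = []
    last_time = {}  # key -> timestamp of its last accepted press
    for t, key in events:
        if key in last_time and t - last_time[key] < window_ms:
            continue                  # likely an accidental repeat
        last_time[key] = t
        filtered.append((t, key))
    return filtered

taps = [(0, "h"), (120, "h"), (500, "i"), (620, "!"), (650, "!")]
print(filter_repeats(taps))  # [(0, 'h'), (500, 'i'), (620, '!')]
```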

    A Study on Asian Character Input Interfaces Based on the Commonality of Syllable Notation (Onsetsu hyoki no kyotsusei ni motozuita Ajia moji nyuryoku intafesu ni kansuru kenkyu)

    Get PDF
    Degree system: new; Report number: Kou 3450; Degree type: Doctor of Philosophy (Global Information and Telecommunication Studies); Date conferred: 2011/10/26; Waseda University degree number: Shin 577

    Mobile phones interaction techniques for second economy people

    Get PDF
    Second economy people in developing countries are people living in communities that are underserved in terms of basic amenities and social services. Due to literacy challenges and user accessibility problems in rural communities, it is often difficult to design user interfaces that conform to the capabilities and cultural experiences of low-literacy rural community users. Rural community users are technologically illiterate and lack knowledge of the potential of information and communication technologies. In order to embrace new technology, users need to perceive the user interface and application as useful and easy to interact with. This requires a proper understanding of the users and their socio-cultural environment, which enables the interfaces and interactions to conform to their behaviours and motivations as well as their cultural experiences and preferences, and thus enhances usability and user experience. Mobile phones have the potential to increase access to information and provide a platform for economic development in rural communities. Rural communities have economic potential in terms of agriculture and micro-enterprises, and information technology can be used to enhance socio-economic activities and improve rural livelihoods. We conducted a study to design user interfaces for a mobile commerce application for micro-entrepreneurs in a rural community in South Africa. The aim of the study was to design mobile interfaces and interaction techniques that are easy to use and meet the cultural preferences and experiences of users who have little to no previous experience of mobile commerce technology, to explore the potential of information technologies for rural community users, and to bring mobile value-added services to rural micro-entrepreneurs. We applied a user-centred design approach in the Dwesa community and used qualitative and quantitative research methods to collect data for the design of the user interfaces (a graphical user interface and a voice user interface) and the mobile commerce application. We identified and used several interface elements to design, and finally evaluate, the graphical user interface. The statistical analysis of the evaluation results shows that users in the community have a positive perception of the usefulness of the application, its ease of use, and their intention to use it. Community users with no prior experience of this technology were able to learn and understand the interface, and recorded minimal errors and a high level of precision during task performance when they interacted with the shop-owner graphical user interface. The voice user interface designed in this study consists of two flavours (dual tone multi-frequency input and voice input) for rural users. The evaluation results show that community users recorded higher task success rates and fewer errors with the dual tone multi-frequency input interface than with the voice-only input interface. Also, a higher percentage of users preferred the dual tone multi-frequency input interface. The t-test analysis performed on task completion times and error rates showed a statistically significant difference between the dual tone multi-frequency input interface and the voice input interface. The interfaces were easy to learn, understand and use. Properly designed user interfaces that meet the experience and capabilities of low-literacy users in rural areas will improve usability and user experience. Adaptation of interfaces to users' culture and preferences will enhance the accessibility of information services among different user groups in different regions. This will promote technology acceptance in rural communities for socio-economic benefits. The user interfaces presented in this study can be adapted to different cultures to provide similar services for marginalised communities in developing countries.
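
    The abstract reports a t-test on completion times without giving the procedure. For two independent groups of task times, the standard choice is an independent two-sample t-test, sketched below in Python with scipy; the timing values are hypothetical placeholders, not the study's data.

```python
from scipy import stats

# Hypothetical task completion times in seconds (NOT the study's data),
# one sample per participant for each input flavour.
dtmf_times = [42.0, 38.5, 45.2, 40.1, 39.7, 44.0, 41.3, 37.9]
voice_times = [55.4, 49.8, 60.2, 52.7, 58.1, 54.3, 57.6, 50.9]

# Independent two-sample t-test; Welch's variant (equal_var=False)
# avoids assuming equal variances between the two groups.
t_stat, p_value = stats.ttest_ind(dtmf_times, voice_times, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Significant difference between DTMF and voice input times.")
```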

    Designing EdgeWrite Versions for Japanese Text Entry

    No full text

    Investigating User Experience Using Gesture-based and Immersive-based Interfaces on Animation Learners

    Get PDF
    Creating animation is a very exciting activity. However, the long and laborious process can be extremely challenging. Keyframe animation is a complex technique that takes a long time to complete, as the procedure involves changing the poses of characters by modifying the time and space of an action, frame by frame. This involves the laborious, repetitive process of constantly reviewing the results of the animation in order to make sure the movement timing is accurate. A new approach to animation is required in order to provide a more intuitive animating experience. With the evolution of interaction design and the Natural User Interface (NUI) becoming widespread in recent years, a NUI-based animation system is expected to offer better usability and efficiency that would benefit animation. This thesis investigates the effectiveness of gesture-based and immersive-based interfaces as part of animation systems. A practice-based element of this research is a prototype of the hand gesture interface, which was created based on experiences from reflective practice. An experimental design is employed to investigate the usability and efficiency of gesture-based and immersive-based interfaces in comparison to a conventional GUI/WIMP application. The findings showed that gesture-based and immersive-based interfaces can appeal to animators in terms of the efficiency of the system; however, there was no difference in usability preference between the two interfaces. Most of our participants were comfortable with the NUI interfaces and new technologies used in the animation process, but for detailed work and taking control of the application, the conventional GUI/WIMP was preferable. Despite the awkwardness of devising gesture-based and immersive-based interfaces for animation, the concept of the system showed potential for a faster animation process, an enjoyable learning system, and stimulating interest in a kinaesthetic learning experience.