
    Design and Development of a Multimodal Vest for Virtual Immersion and Guidance

    This paper focuses on the development of a haptic vest that enhances immersion and realism in virtual environments through vibrotactile feedback. The first steps toward touch-based communication are presented in order to establish an actuation method based on vibration motors; the resulting vibrotactile patterns help users move inside virtual reality (VR). The research investigates human torso resolution and the perception of vibration patterns, evaluating different kinds of actuators at different locations on the vest. Finally, determining an appropriate distribution of vibration patterns allowed the generation of sensations that, for instance, help to guide the user in a mixed or virtual reality environment.

    Proceedings of the 1st joint workshop on Smart Connected and Wearable Things 2016

    These are the Proceedings of the 1st joint workshop on Smart Connected and Wearable Things (SCWT'2016, Co-located with IUI 2016). The SCWT workshop integrates the SmartObjects and IoWT workshops. It focusses on the advanced interactions with smart objects in the context of the Internet-of-Things (IoT), and on the increasing popularity of wearables as advanced means to facilitate such interactions

    Designing smart garments for rehabilitation


    Active-Proprioceptive-Vibrotactile and Passive-Vibrotactile Haptics for Navigation

    Navigation is a complex activity and an enabling skill that humans take for granted. It is vital for humans as it fosters spatial awareness, enables exploration, facilitates efficient travel, ensures safety, supports daily activities, promotes cognitive development, and provides a sense of independence. Humans have created tools for diverse activities, including navigation. Usually, these tools for navigation are vision-based, but for situations where visual channels are obstructed, unavailable, or are to be complemented for immersion or multi-tasking, touch-based tools exist. These touch-based tools or devices are called haptic displays. Many different types of haptic displays are employed by a range of fields from telesurgery to education and navigation. In the context of navigation, certain classes of haptic displays are more popular than others, for example, passive multi-element vibrotactile haptic displays, such as haptic belts. However, certain other classes of haptic displays, such as active proprioceptive vibrotactile and passive single-element vibrotactile, may be better suited for certain practical situations and may prove to be more effective and intuitive for navigational tasks than a popular option, such as a haptic belt. However, these other classes have not been evaluated and cross-compared in the context of navigation. This research project aims to contribute towards the understanding and, consequently, the improvement of designs and user experience of navigational haptic displays by thoroughly evaluating and cross-comparing the effectiveness and intuitiveness of three classes of haptic display (passive single-element vibrotactile; passive multi-element vibrotactile; and various active proprioceptive vibrotactile) for navigation. Evaluation and cross-comparisons take into account quantitative measures, for example, accuracy, response time, number of repeats taken, experienced mental workload, and perceived usability, as well as qualitative feedback collected through informal interviews during the testing of the prototypes. Results show that the passive single-element vibrotactile and active proprioceptive vibrotactile classes can be used as effective and intuitive navigational displays. Furthermore, results shed light on the multifaceted nature of haptic displays and their impact on user performance, preferences, and experiences. Quantitative findings related to performance combined with qualitative findings emphasise that one size does not fit all, and a tailored approach is necessary to address the varying needs and preferences of users

    Ubiquitous haptic feedback in human-computer interaction through electrical muscle stimulation

    [no abstract]

    Haptics: Science, Technology, Applications

    This open access book constitutes the proceedings of the 13th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2022, held in Hamburg, Germany, in May 2022. The 36 regular papers included in this book were carefully reviewed and selected from 129 submissions. They were organized in topical sections as follows: haptic science; haptic technology; and haptic applications

    Image Based Ringgit Banknote Recognition for Visually Impaired

    Visually impaired people face a number of difficulties in interacting with the environment because most of the information it encodes is visual. In particular, they have trouble identifying and recognizing different currency denominations. Many devices are available on the market, but they either cannot detect Malaysian Ringgit banknotes or are very expensive. The objective of this project was to develop an automated system, or algorithm, that can recognize and classify different Ringgit banknotes for visually impaired persons based on banknote images. In this project, features based on the RGB values of six classes of banknotes (RM1, RM5, RM10, RM20, RM50, and RM100) were extracted using Matlab software. Three features called RB, RG, and GB, derived from the RGB values, were used with classification algorithms such as k-Nearest Neighbors (k-NN) and Decision Tree Classifier (DTC) to recognize each class of banknote. Ten-fold cross validation was used to select the optimized k-NN and DTC models, based on the smallest cross-validation loss. The performance of the optimized k-NN and DTC models was then presented in a confusion matrix. Results show that the proposed k-NN and DTC models achieved 99.7% accuracy, with the RM50 class causing the main reduction in performance. In conclusion, an image-based automated system that recognizes Malaysian banknotes using k-NN and DTC classifiers has been successfully developed.
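    As a rough illustration of the pipeline this abstract describes, the sketch below re-creates it in Python with scikit-learn rather than the authors' Matlab code. It assumes the RB, RG, and GB features are pairwise differences of the per-channel mean intensities of a banknote image (the abstract does not define them precisely) and substitutes a randomly generated placeholder dataset for real Ringgit images; the ten-fold cross validation used to pick the k-NN and decision-tree settings follows the abstract.

```python
# Hypothetical reconstruction (not the authors' Matlab code): RB/RG/GB are
# assumed to be pairwise differences of the mean R, G, B values of an image,
# and the dataset below is a random placeholder standing in for real banknotes.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
CLASSES = ["RM1", "RM5", "RM10", "RM20", "RM50", "RM100"]

def rgb_features(image):
    """Reduce an HxWx3 RGB image to the assumed (RB, RG, GB) feature vector."""
    r, g, b = image[..., 0].mean(), image[..., 1].mean(), image[..., 2].mean()
    return np.array([r - b, r - g, g - b])

# Placeholder data: 10 fake "images" per class with a class-dependent colour shift.
X = np.array([
    rgb_features(rng.random((32, 64, 3)) + 0.05 * i * np.array([1.0, 0.5, 0.0]))
    for i in range(len(CLASSES)) for _ in range(10)
])
y = np.repeat(CLASSES, 10)

# Ten-fold cross validation picks the k-NN and decision-tree settings with the
# smallest cross-validation loss (i.e. highest CV accuracy), as in the abstract.
knn = GridSearchCV(KNeighborsClassifier(), {"n_neighbors": [1, 3, 5, 7]}, cv=10)
dtc = GridSearchCV(DecisionTreeClassifier(random_state=0),
                   {"max_depth": [3, 5, 10, None]}, cv=10)
for name, model in [("k-NN", knn), ("DTC", dtc)]:
    model.fit(X, y)
    print(name, model.best_params_, f"CV accuracy: {model.best_score_:.3f}")
```

    With real banknote images in place of the placeholder data, the confusion matrix reported in the abstract could then be produced from held-out predictions, for example with sklearn.metrics.confusion_matrix.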

    Contextual awareness, messaging and communication in nomadic audio environments

    Thesis (M.S.) -- Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1998. Includes bibliographical references (p. 119-122). By Nitin Sawhney.

    Touch- and Walkable Virtual Reality to Support Blind and Visually Impaired Peoples' Building Exploration in the Context of Orientation and Mobility

    Access to digital content and information is becoming increasingly important for successful participation in today's increasingly digitized civil society. Such information is mostly presented visually, which restricts access for blind and visually impaired people. The most fundamental barrier is often basic orientation and mobility (and consequently, social mobility), including gaining knowledge about unknown buildings before visiting them. To bridge such barriers, technological aids should be developed and deployed. A trade-off is needed between technologically low-threshold, accessible, and disseminable aids and interactive-adaptive but complex systems. The adaptation of virtual reality (VR) technology spans a wide range of development and decision options. The main benefits of VR technology are increased interactivity, updatability, and the possibility to explore virtual spaces and models as proxies of real ones without real-world hazards and without depending on the limited availability of sighted assistants. However, virtual objects and environments have no physicality. Therefore, this thesis aims to research which VR interaction forms are reasonable (i.e., offer a reasonable dissemination potential) for making virtual representations of real buildings touchable or walkable in the context of orientation and mobility. Although there are already developments and evaluations of VR technology that are disjoint in content and technology, there is a lack of empirical evidence. Additionally, this thesis provides an overview of the different interactions. Having considered human physiology, assistive media (e.g., tactile maps), and technological characteristics, the current state of the art of VR is introduced, and its application for blind and visually impaired users, and the way to get there, is discussed by introducing a novel taxonomy. In addition to the interaction itself, characteristics of the user and the device, the application context, and the user-centered development and evaluation are used as classifiers. The following chapters are motivated by explorative approaches on the 'small scale' (using so-called data gloves) and on the 'large scale' (using avatar-controlled VR locomotion). They report empirical studies with blind and visually impaired users and give formative insight into how virtual objects within hands' reach can be grasped using haptic feedback and how different kinds of VR locomotion can be applied to explore virtual environments. From this, device-independent technological possibilities and challenges for further improvements are derived. On the basis of this knowledge, subsequent research can focus on aspects such as the specific design of interactive elements, temporally and spatially collaborative application scenarios, and the evaluation of an entire application workflow (i.e., scanning the real environment and exploring it virtually for training purposes, as well as designing the entire application in a long-term accessible manner).
