
    Computational Personalization through Physical and Aesthetic Featured Digital Fabrication

    Thesis (Master of Science in Informatics)--University of Tsukuba, no. 41269, 2019.3.2

    Wearable Devices and their Implementation in Various Domains

    Wearable technologies are networked devices that collect data, track activities and customize experiences to users' needs and desires. They are equipped with microchip sensors and wireless communications, all mounted into consumer electronics, accessories and clothes. They use sensors to measure temperature, humidity, motion, heartbeat and more. Wearables are embedded in various domains, such as healthcare, sports, agriculture and navigation systems. Each wearable device is equipped with sensors, network ports, a data processor, a camera and more. To allow monitoring and synchronizing multiple parameters, typical wearables have multi-sensor capabilities and are configurable for the application purpose. For the wearer's convenience, wearables are lightweight, modestly shaped and multifunctional. Wearables perform the following tasks: sense, analyze, store, transmit and apply. The processing may occur on the wearer or at a remote location. For example, if dangerous gases are detected, the data are processed and an alert is issued. The data may also be transmitted to a remote location for testing, and the results can be communicated in real time to the user. Each scenario requires personalized mobile information processing, which transforms the sensory data into information and then into knowledge that will be of value to the individual responding to the situation.
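    The sense → analyze → transmit → apply loop described in the abstract can be sketched roughly as follows. This is a minimal illustration, not taken from any specific device: the gas-ppm threshold, field names and alert format are all hypothetical.

```python
DANGEROUS_PPM = 50  # illustrative threshold for a gas-sensor reading

def analyze(sample):
    """Turn a raw sensory reading into information: is it dangerous?"""
    return sample["gas_ppm"] >= DANGEROUS_PPM

def process_samples(samples):
    """Run the pipeline over a stream of readings and collect alerts."""
    alerts = []
    for sample in samples:
        if analyze(sample):
            # On a real wearable this alert would be applied locally
            # (vibration, display) or transmitted to a remote location.
            alerts.append(f"ALERT: {sample['gas_ppm']} ppm at t={sample['t']}")
    return alerts

readings = [
    {"t": 0, "gas_ppm": 12},
    {"t": 1, "gas_ppm": 63},
    {"t": 2, "gas_ppm": 8},
]
print(process_samples(readings))  # only the t=1 reading triggers an alert
```

    Whether `analyze` runs on the wearer or remotely is a deployment choice; the pipeline shape stays the same.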

    Assessment of Audio Interfaces for use in Smartphone Based Spatial Learning Systems for the Blind

    Recent advancements in the field of indoor positioning and mobile computing promise the development of smartphone-based indoor navigation systems. Currently, the preliminary implementations of such systems only use visual interfaces—meaning that they are inaccessible to blind and low-vision users. According to the World Health Organization, about 39 million people in the world are blind. This necessitates the development and evaluation of non-visual interfaces for indoor navigation systems that support safe and efficient spatial learning and navigation behavior. This thesis research empirically evaluated several different approaches through which spatial information about the environment can be conveyed through audio. In the first experiment, blindfolded participants standing at an origin in a lab learned the distance and azimuth of target objects that were specified by four audio modes. The first three modes were perceptual interfaces and did not require cognitive mediation on the part of the user. The fourth mode was a non-perceptual mode in which object descriptions were given via spatial language using clockface angles. After learning the targets through the four modes, the participants spatially updated the positions of the targets and localized them by walking to each of them from two indirect waypoints. The results indicate that the hand-motion-triggered mode was better than the head-motion-triggered mode and comparable to the auditory snapshot mode. In the second experiment, blindfolded participants learned target object arrays with two spatial audio modes and a visual mode. In the first mode, head tracking was enabled, whereas in the second mode hand tracking was enabled. In the third mode, serving as a control, the participants were allowed to learn the targets visually. We again compared spatial updating performance with these modes and found no significant performance differences between modes. These results indicate that we can develop 3D audio interfaces on sensor-rich, off-the-shelf smartphone devices, without the need for expensive head-tracking hardware. Finally, a third study evaluated room layout learning performance by blindfolded participants with an Android smartphone. Three perceptual modes and one non-perceptual mode were tested for cognitive map development. As expected, the perceptual interfaces performed significantly better than the non-perceptual, language-based mode in an allocentric pointing judgment and in overall subjective rating. In sum, the perceptual interfaces led to better spatial learning performance and higher user ratings, and there was no significant difference between cognitive maps developed through spatial audio based on tracking the user's head or hand. These results have important implications as they support the development of accessible, perceptually driven interfaces for smartphones.
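    The non-perceptual spatial-language mode described above specifies targets by distance and clockface angle. A small sketch of how such a description maps to a planar target position (the helper names and the axis convention are assumptions for illustration, not the thesis's implementation):

```python
import math

def clockface_to_azimuth_deg(hour):
    """Convert a clockface direction (12 = straight ahead) to an
    azimuth in degrees, clockwise from ahead; e.g. 3 o'clock -> 90."""
    return (hour % 12) * 30

def target_offset(distance_m, hour):
    """Hypothetical helper: planar (x, y) offset of a target from a
    spatial-language description like '3 o'clock, 2 meters'.
    x is rightward and y is forward from the listener's origin."""
    az = math.radians(clockface_to_azimuth_deg(hour))
    return (distance_m * math.sin(az), distance_m * math.cos(az))

print(target_offset(2.0, 3))  # 3 o'clock, 2 m -> roughly (2.0, 0.0)
```

    A perceptual interface would instead render the same azimuth directly as a spatialized sound source, so no mental conversion of the clockface angle is needed.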

    Visual Search Behavior in Individuals With Retinitis Pigmentosa During Level Walking and Obstacle Crossing

    Purpose: To investigate the visual search strategy of individuals with retinitis pigmentosa (RP) when negotiating a floor-based obstacle compared with level walking, and compared with those with normal vision. Methods: Wearing a mobile eye tracker, individuals with RP and normal vision walked along a level walkway or walked along the walkway negotiating a floor-based obstacle. In the level walking condition, tape was placed on the floor to act as an object attracting visual attention. Analysis compared where individuals looked within the environment. Results: In the obstacle compared with the level walking condition: (1) the RP group reduced the length of time and the number of times they looked Ahead, and increased the time and how often they looked at features on the ground (Object and Down, P < 0.05); and (2) the visually normal group reduced the time (by 19%) they looked Ahead (P = 0.076), and increased the time and how often they looked at the Object (P < 0.05). Compared with the normal vision group, in both level walking and obstacle conditions, the RP group reduced the time looking Ahead and looked for longer and more often Down (P < 0.05). Conclusions: The RP group demonstrated a more active visual search pattern, looking at more areas on the ground in both level walking and obstacle crossing compared with visual normals. This gaze strategy was invariant across conditions. This is most likely due to the constricted visual field and the inability to rely on inferior peripheral vision to acquire information from the floor within the environment when walking.

    Assisted Viewpoint Interaction for 3D Visualization

    Many three-dimensional visualizations are characterized by the use of a mobile viewpoint that offers multiple perspectives on a set of visual information. To effectively control the viewpoint, the viewer must simultaneously manage the cognitive tasks of understanding the layout of the environment and knowing where to look to find relevant information, along with mastering the physical interaction required to position the viewpoint in meaningful locations. Numerous systems attempt to address these problems by catering to two extremes: simplified controls or direct presentation. This research attempts to promote hybrid interfaces that offer a supportive, yet unscripted exploration of a virtual environment. Attentive navigation is a specific technique designed to actively redirect viewers' attention while accommodating their independence. User evaluation shows that this technique effectively facilitates several visualization tasks, including landmark recognition, survey knowledge acquisition, and search sensitivity. Unfortunately, it also proves to be excessively intrusive, leading viewers to occasionally struggle for control of the viewpoint. Additional design iterations suggest that formalized coordination protocols between the viewer and the automation can mute the shortcomings and enhance the effectiveness of the initial attentive navigation design. The implications of this research generalize to inform the broader requirements for human-automation interaction through the visual channel. Potential applications span a number of fields, including visual representations of abstract information, 3D modeling, virtual environments, and teleoperation experiences.
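    The core idea of attentive navigation—redirecting the viewer's attention without fully seizing the viewpoint—can be sketched as a weighted blend of the user's chosen view direction and the direction toward a point of interest. This is a minimal illustration under assumed conventions (unit direction vectors, a scalar blend weight), not the dissertation's actual implementation.

```python
def blend_view_direction(user_dir, poi_dir, weight):
    """Nudge the viewer's chosen view direction toward a point of
    interest without taking full control. weight = 0 leaves the user
    fully in charge; weight = 1 fully redirects the view (the
    'excessively intrusive' extreme the study warns about)."""
    blended = [u + weight * (p - u) for u, p in zip(user_dir, poi_dir)]
    norm = sum(c * c for c in blended) ** 0.5
    return [c / norm for c in blended]  # renormalize to a unit vector

# Halfway blend between looking ahead (+x) and a landmark to the side (+y):
print(blend_view_direction([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], 0.5))
```

    A coordination protocol of the kind the abstract describes would effectively modulate `weight` over time, e.g. backing off when the viewer actively fights the redirection.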

    Enabling audio-haptics

    This thesis deals with possible solutions to facilitate orientation, navigation and overview of non-visual interfaces and virtual environments with the help of sound in combination with force-feedback haptics.

    Facilitating Social Interaction between Visually Impaired and Sighted Children through Toys and Games

    This project examines different approaches to designing toys and games which promote social interaction between blind and sighted children. Onsite research includes literature reviews, classroom observations, a parent workshop and interviews in Copenhagen, Denmark, in collaboration with the Videncenter for Synshandicap. We conclude that the play environment is as important as the toys and games themselves, and that the best approach is one in which play resources are developed specifically for blind children, using techniques such as audio and tactile adaptations.

    Virtual Reality for the Visually Impaired

    This thesis aims to illuminate and describe the problems that the development of virtual reality poses for visually impaired people. After discussing how and why this is a problem, the thesis provides some possible solutions for developing virtual reality into a more accessible technology, specifically for the visually impaired. As the popularity of virtual reality increases in digital culture, especially with Facebook announcing its development of the Metaverse, there is a need for a future virtual reality environment that everyone can use, and it is in these early stages of development that the problem of inaccessibility must be addressed. Because virtual reality is a relatively new medium in digital culture, research on its use by visually impaired people has significant gaps, and as relatively few researchers are exploring this topic, my research will hopefully lead to more activity in this important area. My research questions therefore address the current limitations of virtual reality, filling in some of the most significant gaps in this research area. My thesis does this by conducting interviews and surveys to gather data that identify the crucial limitations visually impaired people experience when trying to use virtual reality technology. The findings in this thesis further address the problem, proposing a possible solution and emphasizing the importance of accessibility for the visually impaired in the future development of virtual reality. If digital companies and developers address this problem now, we can have a future where visually impaired people are treated more equally, with technologies developed specifically for them to experience virtual worlds. Master's Thesis in Digital Culture, DIKULT350, MAHF-DIKU.

    Proceedings of the 1st joint workshop on Smart Connected and Wearable Things 2016

    These are the Proceedings of the 1st joint workshop on Smart Connected and Wearable Things (SCWT'2016, co-located with IUI 2016). The SCWT workshop integrates the SmartObjects and IoWT workshops. It focuses on advanced interactions with smart objects in the context of the Internet of Things (IoT), and on the increasing popularity of wearables as advanced means to facilitate such interactions.