168 research outputs found

    Pictures in Your Mind: Using Interactive Gesture-Controlled Reliefs to Explore Art

    Tactile reliefs offer many benefits over the more classic raised-line drawings or tactile diagrams, as depth, 3D shape, and surface textures are directly perceivable. Although often created for blind and visually impaired (BVI) people, a wider range of people may benefit from such multimodal material. However, some reliefs remain difficult to understand without proper guidance or accompanying verbal descriptions, hindering autonomous exploration. In this work, we present a gesture-controlled interactive audio guide (IAG) based on recent low-cost depth cameras that can be operated directly with the hands on relief surfaces during tactile exploration. The interactively explorable, location-dependent verbal and captioned descriptions promise rapid tactile access to 2.5D spatial information in home or education settings, through online resources, or as a kiosk installation in public places. We present a working prototype, discuss design decisions, and report the results of two evaluation studies: the first with 13 BVI test users and a second, follow-up study with 14 test users spanning a wide range of differences and difficulties associated with perception, memory, cognition, and communication. The participant-led research method of this latter study prompted new, significant, and innovative developments.
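
    A minimal sketch of the interaction loop this abstract describes: a depth camera reports where a pointing hand touches the relief, and a location-dependent description is played. The region layout, coordinate space, and audio call below are illustrative assumptions, not the paper's implementation.

        # Minimal sketch of region-triggered audio, assuming a depth camera
        # yields fingertip coordinates in normalized relief-surface space
        # and regions are stored as simple rectangles. `play_description`
        # stands in for any TTS/audio backend.
        from dataclasses import dataclass

        @dataclass
        class Region:
            name: str
            x0: float; y0: float; x1: float; y1: float
            description: str

        REGIONS = [
            Region("sky", 0.0, 0.0, 1.0, 0.3, "The upper third shows a cloudy sky."),
            Region("figure", 0.3, 0.3, 0.7, 1.0, "A standing figure fills the centre."),
        ]

        def play_description(text: str) -> None:
            print(f"[audio] {text}")  # placeholder for a real TTS call

        def on_pointing_gesture(x: float, y: float) -> None:
            """Called when the depth camera detects a deliberate pointing pose."""
            for region in REGIONS:
                if region.x0 <= x <= region.x1 and region.y0 <= y <= region.y1:
                    play_description(region.description)
                    return
            play_description("No description available here.")

        on_pointing_gesture(0.5, 0.6)  # -> plays the figure description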

    Photo2Relief: Let Human in the Photograph Stand Out

    In this paper, we propose a technique for making humans in photographs protrude like reliefs. Unlike previous methods, which mostly focus on the face and head, our method aims to generate artworks that depict the whole-body activity of the character. One challenge is that there is no ground truth available for supervised deep learning. We introduce a sigmoid variant function to manipulate gradients carefully and train our neural networks with a loss function defined in the gradient domain. The second challenge is that actual photographs are often taken under different lighting conditions. We address this with an image-based rendering technique, acquiring rendered images and depth data under different lighting conditions. To make a clear division of labor between network modules, a two-scale architecture is proposed to create high-quality reliefs from a single photograph. Extensive experimental results on a variety of scenes show that our method is a highly effective solution for generating digital 2.5D artwork from photographs. Comment: 10 pages, 11 figures.
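
    The abstract does not specify the gradient-domain loss or the sigmoid variant; the following is a hedged sketch of what such a loss could look like, assuming a PyTorch setup, finite-difference image gradients, and an invented scale hyperparameter a that squashes large depth discontinuities before comparison.

        # Illustrative sketch (not the authors' exact formulation): compare
        # sigmoid-compressed finite-difference gradients of predicted and
        # target relief maps, so depth outliers do not dominate training.
        import torch
        import torch.nn.functional as F

        def sigmoid_variant(g: torch.Tensor, a: float = 10.0) -> torch.Tensor:
            # Zero-centred sigmoid: maps gradients into (-1, 1).
            return 2.0 * torch.sigmoid(a * g) - 1.0

        def image_gradients(d: torch.Tensor):
            # d: (B, 1, H, W) relief/depth map; forward differences in x and y.
            gx = d[..., :, 1:] - d[..., :, :-1]
            gy = d[..., 1:, :] - d[..., :-1, :]
            return gx, gy

        def gradient_domain_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
            pgx, pgy = image_gradients(pred)
            tgx, tgy = image_gradients(target)
            return (F.mse_loss(sigmoid_variant(pgx), sigmoid_variant(tgx))
                    + F.mse_loss(sigmoid_variant(pgy), sigmoid_variant(tgy)))

        pred = torch.rand(1, 1, 64, 64, requires_grad=True)
        target = torch.rand(1, 1, 64, 64)
        loss = gradient_domain_loss(pred, target)
        loss.backward()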

    Multi-Sensory Interaction for Blind and Visually Impaired People

    This book conveys the visual elements of artwork to visually impaired people through various sensory channels, opening a new perspective for appreciating visual artwork. It explores a technique for expressing a color code through patterns, temperatures, scents, music, and vibrations, and presents future research topics. A holistic experience using multi-sensory interaction conveys the meaning and contents of a work through rich multi-sensory appreciation. A method that allows people with visual impairments to engage with artwork using a variety of senses, including touch, temperature, tactile pattern, and sound, helps them appreciate artwork at a deeper level than hearing or touch alone can achieve. The development of such art-appreciation aids for the visually impaired will ultimately improve their cultural enjoyment and strengthen their access to culture and the arts. Such aids also expand opportunities for sighted as well as visually impaired people to enjoy works of art and, through continuous efforts to enhance accessibility, break down the boundaries between disabled and non-disabled people in the field of culture and the arts. In addition, the developed multi-sensory expression and delivery tools can be used as educational tools to increase the accessibility and usability of products and artworks through multi-modal interaction. Training the multi-sensory experiences introduced in this book may lead to more vivid visual imagery, or seeing with the mind's eye.
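
    As a purely hypothetical illustration of such a color code, the sketch below maps hues to bundles of non-visual cues; the concrete patterns, temperatures, and vibration frequencies are invented for demonstration and are not taken from the book.

        # Hypothetical color-to-multisensory code: each hue maps to a
        # tactile pattern, a surface temperature, and a vibration frequency.
        SENSORY_CODE = {
            "red":    {"pattern": "dense dots",    "temperature_c": 40, "vibration_hz": 250},
            "blue":   {"pattern": "smooth waves",  "temperature_c": 15, "vibration_hz": 80},
            "yellow": {"pattern": "sparse ridges", "temperature_c": 30, "vibration_hz": 160},
        }

        def describe_color(color: str) -> str:
            cue = SENSORY_CODE[color]
            return (f"{color}: {cue['pattern']} texture, "
                    f"{cue['temperature_c']} °C surface, "
                    f"{cue['vibration_hz']} Hz vibration")

        for c in SENSORY_CODE:
            print(describe_color(c))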

    inFORM: Dynamic Physical Affordances and Constraints through Shape and Object Actuation

    Past research on shape displays has primarily focused on rendering content and user interface elements through shape output, with less emphasis on dynamically changing UIs. We propose utilizing shape displays in three different ways to mediate interaction: to facilitate by providing dynamic physical affordances through shape change, to restrict by guiding users with dynamic physical constraints, and to manipulate by actuating physical objects. We outline potential interaction techniques and introduce Dynamic Physical Affordances and Constraints with our inFORM system, built on top of a state-of-the-art shape display, which provides variable stiffness rendering and real-time user input through direct touch and tangible interaction. A set of motivating examples demonstrates how dynamic affordances, constraints, and object actuation can create novel interaction possibilities. Funding: National Science Foundation (U.S.) Graduate Research Fellowship (Grant 1122374); Swedish Research Council (Fellowship); Blanceflor Foundation (Scholarship).
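
    A conceptual sketch of one dynamic physical affordance on a pin-based shape display: pins are raised to form a pressable button, and a press is detected when measured pin heights drop below the commanded ones. The grid size, units, and API below are assumptions for illustration, not inFORM's actual interface.

        # Sketch of a "button" affordance on a hypothetical pin grid:
        # commanded heights define the shape; a press is a measured height
        # well below the commanded height in the button region.
        import numpy as np

        ROWS, COLS = 30, 30
        heights = np.zeros((ROWS, COLS))          # commanded pin heights (mm)

        def render_button(r0, c0, r1, c1, h=20.0):
            """Raise a rectangular pin region to height h to afford pressing."""
            heights[r0:r1, c0:c1] = h

        def detect_press(measured: np.ndarray, r0, c0, r1, c1, drop=5.0) -> bool:
            """Report a press when any pin is pushed down by more than `drop` mm."""
            region_cmd = heights[r0:r1, c0:c1]
            region_meas = measured[r0:r1, c0:c1]
            return bool(np.any(region_cmd - region_meas > drop))

        render_button(10, 10, 15, 15)
        measured = heights.copy()
        measured[12, 12] -= 8.0                   # simulate a finger pushing a pin
        print(detect_press(measured, 10, 10, 15, 15))  # True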

    Free-form Shape Modeling in XR: A Systematic Review

    Shape modeling research in Computer Graphics has been an active area for decades. The ability to create and edit complex 3D shapes is of key importance in Computer-Aided Design, Animation, Architecture, and Entertainment. With the growing popularity of Virtual and Augmented Reality, new applications and tools have been developed for artistic content creation; real-time interactive shape modeling has become increasingly important across the continuum of virtual and augmented reality environments (eXtended Reality, XR). Shape modeling in XR opens new possibilities for intuitive design in an accessible way. Artificial Intelligence (AI) approaches that generate shape information from text prompts are set to change how artists create and edit 3D models. There has been a substantial body of research on interactive 3D shape modeling, yet there is no recent extensive review of the existing techniques or of what AI shape generation means for shape modeling in interactive XR environments. In this state-of-the-art paper, we fill this gap in the literature by surveying free-form shape modeling work in XR, focusing on sculpting and 3D sketching, the most intuitive forms of free-form shape modeling. We classify and discuss these works across five dimensions: contribution of the articles, domain setting, interaction tool, auto-completion, and collaborative designing. The paper concludes by discussing the disconnect between interactive 3D sculpting and sketching and how this will likely evolve with the prevalence of AI shape-generation tools in the future.

    Non-Visual Drawing Tool: Co-Designing a Cross-Sensory 3D Drawing Interface for and with Blind and Partially Sighted Drawers during Covid-19

    Drawing as an activity aids problem solving, collaboration, and presentation in design, science, and engineering, in addition to artistic creativity and expression in the arts. Although drawings are some of the most ancient and cross-cultural examples of human creativity, blind and low-vision learners still lack an inclusive and effective drawing tool, even in the digital age. Raised-line drawing kits aim to provide this, but blind participants found them barely comprehensible, most likely because a line representing a surface edge reflects a visual bias that violates haptic principles of perception. In contrast, participants found 3D models to be more effective. For this reason, a drawing tool for the blind should afford 3D perceptual cues. Furthermore, my investigation of blind and sighted drawers reveals how they continuously react to their prior marks while developing their drawings. How could this be afforded by a 3D drawing tool non-visually? Through co-design sessions (conducted during the Covid-19 pandemic) with blind and partially sighted drawers (BPSD), I prototyped a 3D construction kit with a digital interface that translates 3D haptic drawings made with the custom-designed kit into an online virtual environment, suitable for 3D printing and collaboration.
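
    One plausible translation step, sketched under stated assumptions: tracked block placements from the construction kit are written out as a cube-per-block Wavefront OBJ mesh that a virtual environment or 3D-printing pipeline could consume. The block positions and the cube representation are illustrative, not the prototype's actual format.

        # Hedged sketch: one unit cube per tracked (x, y, z) block placement,
        # exported as a Wavefront OBJ file (1-based, global vertex indices).
        CUBE_VERTS = [(0,0,0),(1,0,0),(1,1,0),(0,1,0),(0,0,1),(1,0,1),(1,1,1),(0,1,1)]
        CUBE_FACES = [(1,2,3,4),(5,8,7,6),(1,5,6,2),(2,6,7,3),(3,7,8,4),(4,8,5,1)]

        def blocks_to_obj(blocks, path="drawing.obj"):
            """Write one unit cube per block placement as a Wavefront OBJ mesh."""
            with open(path, "w") as f:
                for i, (bx, by, bz) in enumerate(blocks):
                    for vx, vy, vz in CUBE_VERTS:
                        f.write(f"v {bx+vx} {by+vy} {bz+vz}\n")
                    off = i * 8  # offset face indices for this cube's vertices
                    for face in CUBE_FACES:
                        f.write("f " + " ".join(str(off + v) for v in face) + "\n")

        blocks_to_obj([(0, 0, 0), (1, 0, 0), (1, 1, 0)])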
    • …