
    Altering one's body-perception through e-textiles and haptic metaphors

    Tajadura-Jiménez A, Väljamäe A and Kuusk K (2020) Altering One's Body-Perception Through E-Textiles and Haptic Metaphors. Front. Robot. AI 7:7. Technologies are rapidly changing our perception of reality, moving from augmented to virtual to magical. While e-textiles are a key component in exergame and space suits, the transformative potential of the internal side of garments to create embodied experiences remains largely unexplored. This paper results from an art-science collaborative project that combines recent neuroscience findings, body-centered design principles and 2D vibrotactile array-based fabrics to alter one's body perception. We describe an iterative design process intertwined with two user studies on how various vibration patterns within textile, designed as spatial haptic metaphors, affect body perception and emotional responses. Our results show potential in treating materials (e.g., rocks) as sensations to design for body perceptions (e.g., being heavy, strong) and emotional responses. We discuss these results in terms of sensory effects on body perception and their synergetic impact on research on embodiment in virtual environments, human-computer interaction, and e-textile design. The work brings a new perspective to the sensorial design of embodied experiences, based on "material perception" and haptic metaphors, and highlights opportunities opened by haptic clothing to change body perception. This work was partially supported by the PSI2016-79004-R Magic Shoes project grant (AEI/FEDER, UE) from the Ministerio de Economía, Industria y Competitividad of Spain, and by the Magic Lining VERTIGO project as part of the STARTS program of the European Commission, based on technological elements from the project Magic Shoes. AT-J was supported by grant RYC-2014-15421 from the Ministerio de Economía, Industria y Competitividad of Spain, and AV was supported by the Estonian Research Council grant PUT1518.
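
    To make the idea of a spatial haptic metaphor concrete, the sketch below (a hypothetical Python example, not the authors' implementation) generates amplitude envelopes for a small 2D vibrotactile grid so that activation sweeps slowly downward, suggesting a heavy material such as stone. The grid size, update rate and envelope width are illustrative assumptions.

```python
# Hypothetical sketch of a spatial haptic metaphor on a 2D vibrotactile
# array, in the spirit of the paper; the actuator layout, rates, and this
# "sinking stone" pattern are illustrative assumptions, not the authors' code.
import numpy as np

ROWS, COLS = 4, 4   # assumed 4x4 actuator grid woven into the lining
FS = 100            # control-signal update rate in Hz (assumption)
DURATION = 2.0      # seconds per pattern repetition

def stone_metaphor(rows=ROWS, cols=COLS, fs=FS, duration=DURATION):
    """Amplitude envelopes (time x rows x cols) for a slow downward sweep,
    suggesting a heavy material sinking along the torso."""
    n = int(fs * duration)
    t = np.linspace(0.0, 1.0, n)
    env = np.zeros((n, rows, cols))
    for r in range(rows):
        center = (r + 0.5) / rows  # each row peaks later than the one above
        env[:, r, :] = np.exp(-((t - center) ** 2) / (2 * 0.08 ** 2))[:, None]
    return env  # values in [0, 1], to be scaled to motor drive levels

pattern = stone_metaphor()
print(pattern.shape)  # (200, 4, 4): one envelope frame per control tick
```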

    A transdisciplinary collaborative journey leading to sensorial clothing

    Recent science funding initiatives have enabled participants from a diverse array of disciplines to engage in common spaces for developing solutions for new wearables. These initiatives include collaborations between the arts and sciences, fields which have traditionally contributed very different forms of knowledge, methodology, and results. However, many such collaborations turn out to be science communication and dissemination activities that make no concrete contribution to technological innovation. Magic Lining, a transdisciplinary collaborative project involving artistic and scientific partners working in the fields of e-textile design, cognitive neuroscience and human-computer interaction, creates a shared experiential knowledge space. This article focuses on the research question of how a transdisciplinary collaborative design process, involving material explorations, prototyping, first-person-perspective and user studies, can lead to the creation of a garment that invites various perceptual and emotional responses in its wearer. The article reflects on the design journey, highlighting the transdisciplinary team's research-through-design experience and shared language for knowledge exchange. This process has revealed new research paths for an emerging field of 'sensorial clothing', combining the various team members' fields of expertise and resulting in a wearable prototype. This work was partially supported by the VERTIGO project as part of the STARTS program of the European Commission, based on technological elements from the project Magic Shoes (grant PSI2016-79004-R, Ministerio de Economía, Industria y Competitividad of Spain, AEI/FEDER). The work was also supported by the project Magic outFIT, funded by the Spanish Agencia Estatal de Investigación (PID2019-105579RB-I00/AEI/10.13039/501100011033). Aleksander Väljamäe's work was supported by the Estonian Research Council grant PUT1518, and Ana Tajadura-Jiménez's work was supported by grant RYC-2014-15421, Ministerio de Economía, Industria y Competitividad of Spain.

    Prototyping a method for the assessment of real-time EEG sonifications

    This paper presents a first step in the development of a methodology for comparing the ability of different sonifications to convey the fine temporal detail of the electroencephalography (EEG) brainwave signal in real time. In EEG neurofeedback, a person's EEG activity is monitored and presented back to them, to help them learn how to modify their brain activity. Learning theory suggests that the more rapidly and accurately the feedback follows behaviour, the more efficient the learning will be. A critical issue is therefore how to assess the ability of a sonification to convey rapid and temporally complex EEG data for neurofeedback. To allow for replication, this study used sonifications of pre-recorded EEG data and asked participants to track aspects of the signal in real time using a mouse. The study showed that, although imperfect, this approach is a practical way to compare the suitability of EEG sonifications for tracking detailed EEG signals in real time, and that the combination of quantitative and qualitative data helped characterise the relative efficacy of different sonifications.
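
    One plausible way to score such mouse-based tracking, sketched below in Python, is to find the lag-maximized correlation between the sonified EEG feature and the participant's mouse trace. The sampling rate, lag window and choice of metric are assumptions, not necessarily the paper's actual analysis.

```python
# Minimal sketch of a tracking-accuracy metric for this paradigm (assumed,
# not the paper's method): normalize both signals, then search for the lag
# at which the Pearson correlation between them is highest.
import numpy as np

def tracking_score(eeg_feature, mouse_trace, fs=60, max_lag_s=1.0):
    """Return (best correlation, lag in seconds) for two equal-length 1-D
    signals sampled at fs Hz; positive lag means the mouse lags the signal."""
    x = np.asarray(eeg_feature, dtype=float)
    y = np.asarray(mouse_trace, dtype=float)
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    best_r, best_lag = -np.inf, 0
    for lag in range(int(max_lag_s * fs) + 1):
        xi = x[: len(x) - lag] if lag else x   # trim so both slices align
        r = np.corrcoef(xi, y[lag:])[0, 1]
        if r > best_r:
            best_r, best_lag = r, lag
    return best_r, best_lag / fs

# Toy check: a sine tracked with a 0.3 s delay should peak near lag = 0.3 s
t = np.linspace(0, 10, 600)
sig = np.sin(2 * np.pi * 0.5 * t)
r, lag = tracking_score(sig, np.roll(sig, 18))
print(round(r, 3), lag)
```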

    Auditorily-induced illusory self-motion: A review

    The aim of this paper is to provide a first review of studies related to auditorily-induced self-motion (vection). These studies have been scarce and scattered over the years and across several research communities, including clinical audiology, multisensory perception of self-motion and its neural correlates, ergonomics, and virtual reality. The reviewed studies provide evidence that auditorily-induced vection has behavioral, physiological and neural correlates. Although the auditory contribution to self-motion perception appears to be weaker than that of the visual modality, specific acoustic cues appear to be instrumental for a number of domains, including posture prosthesis, navigation in unusual gravitoinertial environments (in the air, in space, or underwater), non-visual navigation, and multisensory integration during self-motion. A number of open research questions are highlighted, opening avenues for more active and systematic studies in this area.

    A Feasibility Study Regarding Implementation of Holographic Audio Rendering Techniques Over Broadcast Networks

    At present, 5-channel surround sound is the standard for high-quality audio reproduction. In the near future, new rendering systems with a higher number of audio channels will be introduced to the consumer market. One emerging audio rendering technology is Wave Field Synthesis (WFS), which creates a perceptually correct spatial impression over the entire listening area. The main problem for the feasibility…
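
    For context, the core of WFS can be sketched as computing a distance-dependent delay and gain for each loudspeaker so that the array reproduces the wavefront of a virtual point source. The minimal Python sketch below rests on illustrative assumptions (a linear array, simple 1/r attenuation) and omits the pre-equalization and edge tapering used in practice.

```python
# Illustrative sketch of the core WFS driving-function idea: each speaker
# in a linear array replays the virtual source's signal with a distance-
# dependent delay and attenuation. Geometry and constants are assumptions;
# real WFS adds pre-filtering (approx. 3 dB/octave) and array tapering.
import numpy as np

C = 343.0  # speed of sound, m/s

def wfs_delays_gains(source_xy, speaker_xs, speaker_y=0.0):
    """Per-speaker delay (s) and normalized amplitude for a virtual point
    source behind a linear loudspeaker array along the x-axis."""
    sx, sy = source_xy
    dists = np.hypot(np.asarray(speaker_xs) - sx, speaker_y - sy)
    delays = dists / C                      # farther speakers fire later
    gains = 1.0 / np.maximum(dists, 0.1)    # 1/r spreading, clamped near zero
    return delays, gains / gains.max()      # loudest speaker normalized to 1

# Example: 8 speakers spaced 0.2 m apart, source 1.5 m behind the array
delays, gains = wfs_delays_gains((0.7, -1.5), np.arange(8) * 0.2)
print(delays.round(4), gains.round(2))
```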

    Exploring Physiology-Based Interactions in Performing Art Using Artistic Interventions / Kunstiliste sekkumiste kasutamine füsioloogiapõhiste interaktsioonide uurimiseks etenduskunstis

    Abstract: Technological innovations like physiological computing offer new possibilities for exploring audience-performer interaction. To avoid the technological solutionism that often accompanies biosensor applications in performing art, an artistic interventions approach was used. This paper describes a recent art-science residency consisting of three artistic experiments: an audience electrodermal-activity-driven soundscape in a dance improvisation, a "lie detector" applied to an actor just after a performance, and a heart-rate-driven personal discotheque installation. Both artist and scientist provide reflections on the future development of this transdisciplinary field from the performing art perspective.
    Extended abstract (translated from Estonian): Contemporary interactive theatre relies on technological innovation, and new technologies are increasingly used in creating the content of artworks themselves, whether clothing that senses spectators' reactions, a streamed theatre performance, or sensors measuring spectators' neurophysiological responses. Performing artist Taavet Jansen and neuroscientist Aleksander Väljamäe worked with the physiological reactions of audiences and performers during an art-science residency at Tallinn University from February 2019 to June 2019. Physiological reactions were studied during or after three artistic experiments; this article describes those experiments and the artist's reflections on his research journey, and discusses how such interaction possibilities could be used on streaming-theatre platforms. For each experiment, the authors also share their thoughts and suggest what could be done differently if it were repeated. The artistic experiment "Neurochoreographic Experiment No. 4" used an interactive setup in which sensors attached to four spectators measured their arousal level (electrodermal activity / galvanic skin response) during an improvisational dance performance at Kanuti Gildi SAAL in Tallinn on 06.06.2019. The spectators' reactions were used to manipulate the sound design in real time. This setup produced an artistically intriguing feedback effect in which the spectators' involuntary reactions began to influence the production as a whole: the spectators involuntarily gained a "voice" that all participants could interpret in the context of the whole. The artistic experiment "Macbeth" used arousal-measuring sensors to record an actor's reactions during an interview with questions about his role-building in a performance that had ended 10 minutes before the interview began; the role in question was Alo Kõrve's Macbeth in the Tallinn City Theatre production "Macbeth". The interview, conducted by prosecutor Steven-Hristo Evestus, aimed to understand which techniques the actor uses when creating his role and, with the help of technology, to analyse whether the actor is aware of the decisions made on stage. The sound and light installation "Heartrate Party" was based on a concept in which a visitor's heart rate influenced the tempo of the installation's entire sound and light design. Heart rate was measured with a dedicated sensor, and the video design both instructed participants and gave feedback on success or failure. The installation was open in the basement hall of Kanuti Gildi SAAL in Tallinn on 05.-07.06.2020 and was visited by 20 spectators. In the first half of 2020, when the COVID-19 pandemic caused a state of emergency across the world, theatres and performance venues were not allowed to organise public events, and theatres began streaming their performances on online platforms.
Since streaming theatre opens up many new possibilities for staging performances, we also analyse the concepts used in the residency from a streaming-theatre perspective. All the concepts mentioned above could be partly transferred to an online environment, but they presuppose technological readiness on the viewers' side. Sensor technologies make it possible to analyse and record the reactions and behaviour of spectators of streamed performances; since their use in this context is still new, questions of ethics and personal data remain to be worked out. In conclusion, we argue that a great deal of research still lies ahead, and methods for interpreting the data collected from performers and spectators have yet to be developed. Often, data are used for interaction through rather direct mappings: the numerical value of a biosignal is translated directly into some value relevant to the audiovisual design. Such a mapping gives perfect synchrony with what happens on stage, but leaves unanswered the important question of what these data express, what they mean. Art often lacks the theoretical knowledge that would help frame intuitively made artistic decisions; such knowledge transfer between art and science would open new possibilities for interpreting artworks, as well as for enriching science.
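
    The direct mapping questioned above can be made concrete with a minimal sketch: a biosignal's numerical value is rescaled straight into an audiovisual parameter, here (as an assumed example) a visitor's heart rate driving an installation's tempo, much as in "Heartrate Party". The ranges below are invented for illustration.

```python
# Minimal sketch of a direct biosignal-to-design mapping of the kind the
# authors describe and question; the ranges are illustrative assumptions.
def map_linear(value, in_lo, in_hi, out_lo, out_hi):
    """Clamp value to [in_lo, in_hi] and rescale it to [out_lo, out_hi]."""
    value = min(max(value, in_lo), in_hi)
    frac = (value - in_lo) / (in_hi - in_lo)
    return out_lo + frac * (out_hi - out_lo)

# A heart rate of 50-120 bpm drives the sound/light tempo between 60 and
# 140 BPM; such a mapping gives perfect synchrony but says nothing about
# what the data mean.
tempo = map_linear(value=72, in_lo=50, in_hi=120, out_lo=60, out_hi=140)
print(tempo)
```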

    Magic lining: crafting multidisciplinary experiential knowledge by changing wearer's body-perception through vibrotactile clothing

    Our complex and rapidly changing world presents us with profound societal challenges, but also offers tremendous opportunities for new technology to respond to those challenges. Several recent EU initiatives have enabled participants from a diverse array of disciplines to engage in common spaces for developing solutions to existing challenges and to imagine possible futures. This includes collaborations between the arts and sciences, fields which have traditionally contributed very different forms of knowledge, methodology, results and measures of success. They also speak very different languages. Magic Lining is a collaborative project involving participants from the fields of e-textile design, neuroscience and human-computer interaction (HCI). Magic Lining combines the findings of these disciplines to develop a 'vibrotactile' garment built from soft, interactive materials and designed to alter the wearer's perception of their own body. Here we explain the process of designing the first prototype garment: a dress that produces in its wearer the sensation that their body is made of some other material (stone, air, etc.) and in turn elicits various perceptual and emotional responses (feeling strong, feeling calm, etc.). We reflect on the collaborative process, highlighting the multidisciplinary team's experience in finding a common space and language for sharing cognitive and experiential knowledge. We share our insights into the various outcomes of the collaboration, also giving our views on the benefits of and potential improvements for this kind of process. This work was partially supported by the VERTIGO project as part of the STARTS program of the European Commission, based on technological elements from Magic Shoes. Aleksander Väljamäe's work was supported by the Estonian Research Council grant PUT1518, and Ana Tajadura-Jiménez's work was supported by grants RYC-2014-15421 and PSI2016-79004-R (AEI/FEDER, UE), Ministerio de Economía, Industria y Competitividad of Spain.

    Spatial sound in auditory vision substitution systems

    Current auditory vision sensory substitution (AVSS) systems might be improved by directly mapping an image into a matrix of concurrently active sound sources in a virtual acoustic space. This mapping would be similar to existing techniques for tactile substitution of vision, where point arrays are used successfully. This paper gives an overview of the current auditory displays used to sonify 2D visual information and discusses the feasibility of new perceptually motivated AVSS methods encompassing spatial sound.
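
    A minimal sketch of such a mapping, under assumed parameters (an 8x8 grid, a +/-45 degree azimuth span, pitch rising with elevation): each image cell becomes a candidate sound source whose level follows local brightness and whose virtual position follows its cell coordinates.

```python
# Hedged sketch of the AVSS idea the paper proposes: an image becomes a
# matrix of concurrently active sound sources, one per cell. Brightness
# controls level, cell position controls virtual location; the grid size,
# angular spans and frequencies are illustrative assumptions.
import numpy as np

def image_to_source_matrix(img, grid=(8, 8)):
    """Downsample a 2-D grayscale image (values 0-1) to grid cells; return
    (azimuth_deg, elevation_deg, amplitude, frequency_hz) per active cell."""
    rows, cols = grid
    h, w = img.shape
    sources = []
    for r in range(rows):
        for c in range(cols):
            cell = img[r * h // rows:(r + 1) * h // rows,
                       c * w // cols:(c + 1) * w // cols]
            amp = float(cell.mean())
            if amp < 0.05:
                continue                       # dark cells stay silent
            az = (c / (cols - 1) - 0.5) * 90   # left-right span of +/-45 deg
            el = (0.5 - r / (rows - 1)) * 60   # top rows map higher up
            freq = 300 * 2 ** (2 * (0.5 - r / (rows - 1)))  # pitch rises with height
            sources.append((az, el, amp, freq))
    return sources

sources = image_to_source_matrix(np.random.rand(64, 64))
print(len(sources), sources[0])
```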

    Filling-in visual motion with sounds

    Information about the motion of objects can be extracted by multiple sensory modalities, and, as a consequence, object motion perception typically involves the integration of multisensory information. Often, in naturalistic settings, the flow of such information can be rather discontinuous (e.g., a cat racing through the furniture in a cluttered room is partly seen and partly heard). This study addressed audiovisual interactions in the perception of time-sampled object motion by measuring adaptation after-effects. We found significant auditory after-effects following adaptation to unisensory auditory and visual motion in depth, sampled at 12.5 Hz. The visually induced (cross-modal) auditory motion after-effect was eliminated if visual adaptors flashed at half the rate (6.25 Hz). Remarkably, adding high-rate acoustic flutter (12.5 Hz) to this ineffective, sparsely time-sampled visual adaptor restored the auditory after-effect to a level comparable to that seen with high-rate bimodal adaptors (flashes and beeps). Our results suggest that this auditory-induced reinstatement of the motion after-effect from the poor visual signals resulted from the occurrence of sound-induced illusory flashes. This effect was found to depend both on the directional congruency between modalities and on the rate of auditory flutter. The auditory filling-in of time-sampled visual motion supports the feasibility of using reduced frame rate visual content in multisensory broadcasting and virtual reality applications.