234 research outputs found

    A survey on hardware and software solutions for multimodal wearable assistive devices targeting the visually impaired

    The market penetration of user-centric assistive devices has rapidly increased in the past decades. Growth in computational power, accessibility, and cognitive device capabilities has been accompanied by significant reductions in weight, size, and price, as a result of which mobile and wearable equipment is becoming part of our everyday life. In this context, a key focus of development has been on rehabilitation engineering and on developing assistive technologies targeting people with various disabilities, including hearing loss, visual impairments, and others. Applications range from simple health monitoring such as sport activity trackers, through medical applications including sensory (e.g. hearing) aids and real-time monitoring of life functions, to task-oriented tools such as navigational devices for the blind. This paper provides an overview of recent trends in software- and hardware-based signal processing relevant to the development of wearable assistive solutions.

    A multimodal framework for interactive sonification and sound-based communication


    MEDIATION: An eMbEddeD System for Auditory Feedback of Hand-water InterAcTION while Swimming

    Cesarini D, Calvaresi D, Farnesi C, et al. MEDIATION: An eMbEddeD System for Auditory Feedback of Hand-water InterAcTION while Swimming. Procedia Engineering. 2016;147:324-329.
    In swimming, the proper perception of moving water masses is a key factor. This paper presents an embedded system that acquires pressure values at the swimmer's hands and transforms them into sound. The sound, obtained through sonification, serves as an auditory representation of hand-water interaction while swimming, providing auditory feedback for the swimmer and an augmented communication channel between the trainer and the athlete. The developed system is self-contained, battery powered, and able to work continuously for over eight hours, making it a viable solution for daily use in swimmers' training. Preliminary results from in-pool experiments with both novice and experienced swimmers demonstrate the high acceptability of this technology and its promising future evolution and usage possibilities.
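    The paper's core idea, turning a hand-pressure reading into an audible pitch, can be sketched as a simple linear mapping. The function below is a hypothetical illustration; the pressure and frequency ranges are assumptions, not values from the MEDIATION system:

```python
# Hypothetical sketch: linear pressure-to-pitch sonification.
# The ranges below are assumptions for illustration only.

def pressure_to_pitch(p, p_min=0.0, p_max=5.0, f_min=220.0, f_max=880.0):
    """Map a pressure reading (assumed range p_min..p_max, e.g. kPa)
    to a frequency between f_min and f_max Hz, clamping out-of-range
    input so extreme readings stay audible rather than unbounded."""
    p = min(max(p, p_min), p_max)
    t = (p - p_min) / (p_max - p_min)
    return f_min + t * (f_max - f_min)

# A stroke cycle rendered as a sequence of target pitches:
stroke = [0.2, 1.1, 3.4, 4.8, 2.0, 0.5]
pitches = [pressure_to_pitch(p) for p in stroke]
```

    In a real-time system these frequencies would drive an oscillator continuously; the mapping shape (linear here, possibly logarithmic in practice) is a design choice the paper does not specify.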

    Augmenting the Spatial Perception Capabilities of Users Who Are Blind

    People who are blind face a series of challenges and limitations resulting from their inability to see, forcing them either to seek the assistance of a sighted individual or to work around the challenge by way of an inefficient adaptation (e.g. following the walls in a room to reach a door rather than walking in a straight line to it). These challenges are directly related to blind users' lack of the spatial perception capabilities normally provided by the human vision system. To overcome these spatial perception challenges, modern technologies can be used to convey spatial perception data through sensory substitution interfaces. This work is the culmination of several projects addressing varying spatial perception problems for blind users. First we consider the development of non-visual natural user interfaces for interacting with large displays, exploring the haptic interaction space to find useful and efficient haptic encodings for the spatial layout of items on large displays. Multiple interaction techniques are presented which build on prior research (Folmer et al. 2012), and the efficiency and usability of the most efficient of these encodings is evaluated with blind children. Next we evaluate the use of wearable technology in aiding blind individuals' navigation through large open spaces that lack the tactile landmarks used in traditional white cane navigation. We explore the design of a computer vision application with an unobtrusive aural interface to minimize the user's veering while crossing a large open space. Together, these projects represent an exploration into the use of modern technology in augmenting the spatial perception capabilities of blind users.

    Context Aware Computing or the Sense of Context

    Ubiquitous and pervasive systems, special categories of embedded systems, can be used to sense the context of their surroundings. In particular, context-aware systems are able to alter their internal state and behaviour based on the context they perceive. To help people perform their activities better, such systems can use the knowledge gathered about the context. A large research and industrial effort, geared towards the innovation of sensors, processors, operating systems, communication protocols, and frameworks, provides many "enabling" technologies, such as Wireless Sensor Networks or smartphones. However, despite that significant effort, the adoption of pervasive systems to enhance sports monitoring, training, and assistive technologies is still rather limited.
    This thesis identifies two main issues behind this low uptake of pervasive technologies, both mainly related to users. On one side is the attempt of computer science experts and researchers to push the adoption of information technology based solutions while partially neglecting interaction with end users; on the other side is scarce attention to human-computer interaction. The first can be translated into a lack of attention to what is relevant in the context of the user's (special) needs. The second is represented by the widespread use of graphical user interfaces to present information, which requires a high level of cognitive effort. While literature studies can provide knowledge about the user's context, only direct contact with users enriches that knowledge with awareness, providing a precise identification of the factors that are most relevant to the user. To successfully apply pervasive technologies to the fields of sports engineering and assistive technology, the identification of relevant factors is a necessary premise, and it represents the main methodological approach used throughout this thesis. The thesis analyses different sports (rowing, swimming, running) and a disability (blindness) to show how the proposed design methodology is put into practice. Relevant factors were identified thanks to tight collaboration with users and experts in the respective fields. The process of identification is described, together with the solutions tailored to each field of use. The use of sonification, i.e. conveying information as sound, is proposed to address the second issue, concerning user interfaces. Sonification can ease the real-time exploitation of performance information in sport activities and can help partially compensate for the disability of blind users. In rowing, the synchrony level of the team was identified as one of the relevant factors for effective propulsion.
    The problem of detecting the synchrony level is analysed by means of a network of wireless accelerometers, and two different solutions are proposed. The first is based on Pearson's correlation index and the second on an emergent approach called stigmergy. Both approaches were successfully tested in the laboratory and in the field. Moreover, two applications, for smartphones and PCs, were developed to provide telemetry and sonification of a rowing boat's motion. In the field of swimming, an investigation of the widespread belief that kinematics is the relevant factor in swimmers' effective propulsion drew attention to the importance of studying the so-called "feel for water" experienced by elite swimmers. An innovative system was designed to sense and communicate the fluid-dynamic effects caused by moving water masses around swimmers' hands. The system transforms water pressure, measured with piezo probes around the hands, into auditory biofeedback for swimmers and trainers, as the basis for a new way of communicating about the feel for water. The system was successfully tested in the field and proved able to provide real-time information to the swimmer and the trainer. In running, two relevant parameters were identified: the flight time and contact time of the feet. An innovative system was designed to obtain these parameters from a single trunk-mounted accelerometer and was implemented on a smartphone. To achieve this, it was necessary to design and implement a method to virtually realign the accelerometer's axes and to extract the flight and contact phases from the realigned signal. The complete smartphone application was successfully tested in the field against specialized equipment, proving its suitability for enhancing runners' training with a pervasive system.
    To explore the possibilities of sonification as an assistive technology, we started a collaboration with a research group at the University of Applied Sciences, Geneva, Switzerland, focused on a project called SeeColOr (See Color with an Orchestra). In particular, we had the opportunity to implement the SeeColOr system on smartphones, enabling blind users to use that technology on low-cost, lightweight devices. Moreover, the thesis discusses issues related to environmental sensing in extreme environments, such as glaciers, using Wireless Sensor Network technology. Since the technology is similar to that used in the other contexts presented, the lessons learned can easily be reused. The main problems are related to the high difficulty and low reliability of this innovative technology with respect to "legacy" commercially available solutions, usually based on bigger and more expensive devices called dataloggers. The thesis presents these problems and the proposed solutions to show the application of the design approach pursued and refined throughout the experimental activities and the research that implemented them.
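    The first rowing-synchrony measure named in the abstract above, Pearson's correlation between crew members' accelerometer traces, can be sketched as follows. This is a minimal illustration under assumed signal shapes, not the thesis implementation:

```python
import math

def pearson(x, y):
    """Pearson's correlation coefficient for two equal-length signals.
    (Undefined for constant signals, which have zero variance.)"""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two stroke traces sampled over one stroke cycle (idealised as sine
# waves): a small phase lag between rowers lowers r slightly below 1,
# while a half-cycle lag drives it towards -1, signalling poor synchrony.
stroke_a = [math.sin(2 * math.pi * i / 50) for i in range(50)]
stroke_b = [math.sin(2 * math.pi * i / 50 + 0.1) for i in range(50)]
r = pearson(stroke_a, stroke_b)
```

    In practice the correlation would be computed over a sliding window of the wireless accelerometer streams; the window length and signal model here are assumptions.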

    16th Sound and Music Computing Conference SMC 2019 (28–31 May 2019, Malaga, Spain)

    The 16th Sound and Music Computing Conference (SMC 2019) took place in Malaga, Spain, 28–31 May 2019, and was organized by the Application of Information and Communication Technologies Research group (ATIC) of the University of Malaga (UMA). The associated SMC 2019 Summer School took place 25–28 May 2019, and the First International Day of Women in Inclusive Engineering, Sound and Music Computing Research (WiSMC 2019) took place on 28 May 2019. The SMC 2019 topics of interest covered a wide selection of subjects related to acoustics, psychoacoustics, music, technology for music, audio analysis, musicology, sonification, music games, machine learning, serious games, immersive audio, sound synthesis, and more.

    Safe and Sound: Proceedings of the 27th Annual International Conference on Auditory Display

    Complete proceedings of the 27th International Conference on Auditory Display (ICAD 2022), 24–27 June 2022, held online as a virtual conference.

    Computer-aided investigation of interaction mediated by an AR-enabled wearable interface

    Dierker A. Computer-aided investigation of interaction mediated by an AR-enabled wearable interface. Bielefeld: Universitätsbibliothek Bielefeld; 2012.
    This thesis presents an approach for facilitating the analysis of nonverbal behaviour during human-human interaction, alleviating much of the work researchers must do, from experiment control and data acquisition through tagging to the final analysis of the data. To this end, software and hardware techniques such as sensor technology, machine learning, object tracking, data processing, visualisation, and Augmented Reality are combined into an Augmented-Reality-enabled Interception Interface (ARbInI), a modular wearable interface for two users. The interface mediates the users' interaction, intercepting and influencing it. ARbInI consists of two identical, mutually coupled setups of sensors and displays. By combining cameras and microphones with sensors, the system can efficiently record rich multimodal interaction cues. The recorded data can be analysed online and offline for interaction features (e.g. head gestures in head movements, objects in joint attention, speech times) using integrated machine-learning approaches, and the classified features can be tagged in the data. For detailed analysis, the recorded multimodal data is transferred automatically into file bundles loadable in a standard annotation tool, where the data can be further tagged by hand. For statistical analyses of the complete multimodal corpus, a toolbox for a standard statistics program allows the corpus to be imported directly and automates the analysis of multimodal and complex relationships between arbitrary data types. When the optional multimodal Augmented Reality techniques integrated into ARbInI are used, the camera records exactly what the participant can see, no more and no less.
    This provides additional advantages during an experiment: (a) the experiment can be controlled via the auditory or visual displays, ensuring controlled experimental conditions; (b) the experiment can be deliberately disturbed, making it possible to investigate how problems in interaction are discovered and solved; and (c) the experiment can be enhanced by interactively incorporating the user's behaviour, making it possible to investigate how users cope with novel interaction channels. The thesis introduces criteria for the design of scenarios in which interaction analysis can benefit from the experimentation interface and presents a set of such scenarios. These scenarios are applied in several empirical studies, collecting multimodal corpora that particularly include head gestures. The capabilities of computer-aided interaction analysis for investigating speech, visual attention, and head movements are illustrated on this empirical data. The effects of the head-mounted display (HMD) are evaluated thoroughly in two studies. The results show that HMD users need more head movements to achieve the same shift of gaze direction, and perform fewer head gestures with slower velocity and fewer repetitions, compared to non-HMD users. This suggests a reduced willingness to perform head movements when they are not necessary. Moreover, compensation strategies emerge, such as leaning backwards to enlarge the field of view, and increasing the number of utterances or changing the reference to objects to compensate for the absence of mutual eye contact. Two studies investigate interaction while actively inducing misunderstandings; here the participants use compensation strategies such as multiple verification questions and arbitrary gaze movements. Additionally, an enhancement method that highlights the visual attention of the interaction partner is evaluated in a search task, showing a significantly shorter reaction time and fewer errors.

    NON-VERBAL COMMUNICATION WITH PHYSIOLOGICAL SENSORS. THE AESTHETIC DOMAIN OF WEARABLES AND NEURAL NETWORKS

    Historically, communication implies the transfer of information between bodies, yet this phenomenon is constantly adapting to new technological and cultural standards. In a digital context, it is commonplace to envision systems that revolve around verbal modalities. However, behavioural analysis grounded in psychology research calls attention to the emotional information disclosed by non-verbal social cues, in particular actions that are involuntary. This notion has circulated widely through various interdisciplinary computing research fields, from which multiple studies have arisen correlating non-verbal activity with socio-affective inferences. These are often derived from some form of motion capture and other wearable sensors measuring the 'invisible' bioelectrical changes that occur inside the body. This thesis proposes a motivation and methodology for using physiological sensory data as an expressive resource for technology-mediated interactions. It starts from a thorough discussion of state-of-the-art technologies and established design principles on this topic, then applies them in a novel approach alongside a selection of practice works that complement it. We advocate for aesthetic experience, experimenting with abstract representations. Atypically for prevailing Affective Computing systems, the intention is not to infer or classify emotion but rather to create new opportunities for rich gestural exchange unconfined to the verbal domain. Given the preliminary proposition of non-representation, we justify a correspondence with modern Machine Learning and multimedia interaction strategies, applying an iterative, human-centred approach to improve personalisation without compromising the emotional potential of bodily gesture.
    Where related studies in the past have successfully provoked strong design concepts through innovative fabrications, these are typically limited to simple linear, one-to-one mappings and often neglect multi-user environments; we foresee a vast potential here. In our use cases, we adopt neural network architectures to generate highly granular biofeedback from low-dimensional input data. We present the following proofs of concept: Breathing Correspondence, a wearable biofeedback system inspired by Somaesthetic design principles; Latent Steps, a real-time autoencoder to represent bodily experiences from sensor data, designed for dance performance; and Anti-Social Distancing Ensemble, an installation for public space interventions, analysing physical distance to generate a collective soundscape. Key findings are extracted from the individual reports to formulate an extensive technical and theoretical framework around this topic. The projects first aim to embrace some alternative perspectives already established within Affective Computing research. From there, these concepts evolve further, bridging theories from contemporary creative and technical practices with the advancement of biomedical technologies.
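    The kind of distance-to-sound mapping behind an installation like Anti-Social Distancing Ensemble can be sketched as follows. This is a hypothetical illustration; the function names, gain formula, and reference distance are assumptions, not details taken from the project:

```python
import math
from itertools import combinations

def mean_pairwise_distance(positions):
    """Average Euclidean distance over all pairs of visitor positions
    (2D coordinates in metres, assumed); 0.0 for fewer than two people."""
    pairs = list(combinations(positions, 2))
    if not pairs:
        return 0.0
    return sum(math.dist(a, b) for a, b in pairs) / len(pairs)

def distance_to_gain(d, d_ref=2.0):
    """Assumed mapping: closer crowds produce a louder collective
    soundscape, with gain in [0, 1] and 0.5 at the reference distance."""
    return max(0.0, min(1.0, d_ref / (d + d_ref)))

# Three visitors in a room; their mean spacing drives one shared gain.
visitors = [(0.0, 0.0), (3.0, 4.0), (6.0, 0.0)]
gain = distance_to_gain(mean_pairwise_distance(visitors))
```

    A real installation would update this mapping continuously from tracking data and drive a synthesiser with the resulting parameter; the specific inverse-distance curve here is only one plausible design choice.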