
    Ultra-high-speed imaging of bubbles interacting with cells and tissue

    Ultrasound contrast microbubbles are exploited in molecular imaging, where bubbles are directed to target cells and where their high ultrasound scattering cross-section allows pathologies to be detected at the molecular level. In therapeutic applications, vibrating bubbles close to cells may alter the permeability of cell membranes, making these systems highly interesting for ultrasound-mediated drug and gene delivery. In a more extreme regime, bubbles are driven by shock waves to sonoporate or kill cells through the intense stresses or jets that follow inertial bubble collapse. Here, we elucidate some of the underlying mechanisms using the 25-Mfps camera Brandaris128, resolving the bubble dynamics and their interaction with cells. We quantify acoustic microstreaming around oscillating bubbles close to rigid walls and evaluate the shear stresses on nonadherent cells. In a study of the fluid-dynamical interaction of cavitation bubbles with adherent cells, we find that the nonspherical collapse of bubbles is responsible for cell detachment. We also visualized the dynamics of vibrating microbubbles in contact with endothelial cells, followed by fluorescence imaging of the transport of propidium iodide (used as a membrane-integrity probe) into these cells, showing a direct correlation between cell deformation and cell membrane permeability.
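    As a back-of-the-envelope check on the temporal resolution involved (the 2 MHz drive frequency below is an assumed, typical contrast-agent value, not one stated in the abstract), the interframe time of a 25-Mfps recording can be compared with the period of a bubble oscillation:

    ```latex
    \Delta t = \frac{1}{25 \times 10^{6}\,\mathrm{s^{-1}}} = 40\ \mathrm{ns},
    \qquad
    \frac{25\ \mathrm{Mfps}}{2\ \mathrm{MHz}} = 12.5\ \text{frames per oscillation cycle}.
    ```

    At roughly a dozen frames per cycle, the radial dynamics of an individual microbubble can be resolved within a single ultrasound cycle.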

    Audiovisual Reproduction in Surrounding Display: Effect of Spatial Width of Audio and Video

    Multimodal perception integrates information from multiple sensory channels into a unified experience that contains more information than the sum of the separate unimodal percepts. As a result, traditional quality metrics for unimodal services cannot reflect the perceived quality in multimodal situations, and new quality-estimation methods are needed. In this work, audiovisual perception was studied with an immersive audiovisual display, consisting of a video screen with a 226° field of view and 3D sound reproduction with 20 loudspeakers. The aim of the study was to observe the crossmodal interaction of the auditory and visual modalities when the spatial widths of the audio and video reproduction were limited. A subjective study was organized in which the overall perceived degradation of the stimuli was evaluated with Degradation Category Rating (DCR) for four different types of audiovisual content. In addition, free descriptions of the most prominent degrading factors were collected. The participants' individual tendencies to experience immersion were screened prior to the experiment with a questionnaire. The results show that video width is the dominant factor in the perceived degradation of a stimulus. Audio width also had an impact when the video width was at its maximum. An individual tendency to experience immersion was not found to have a significant impact on perceived degradation in this study. Only slight content effects were observed. Constrained correspondence analysis of the free-description data suggests that the highest perceived degradation was caused by incorrect audio direction, reduced video width, and missing essential content.
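    As a minimal sketch of how DCR data of this kind are commonly reduced (the 5-point degradation scale follows ITU-T P.910; the subject count, width levels, and ratings below are hypothetical, not values from the study):

    ```python
    import numpy as np

    # Hypothetical DCR ratings on the 5-point degradation scale
    # (5 = imperceptible ... 1 = very annoying).
    # ratings[s, c] = score given by subject s to condition c.
    rng = np.random.default_rng(0)
    conditions = [(vw, aw)
                  for vw in (60, 120, 226)    # video width in degrees (assumed levels)
                  for aw in (60, 120, 226)]   # audio width in degrees (assumed levels)
    ratings = rng.integers(1, 6, size=(20, len(conditions)))  # 20 hypothetical subjects

    # Degradation mean opinion score (DMOS) and 95% confidence interval per condition.
    dmos = ratings.mean(axis=0)
    ci95 = 1.96 * ratings.std(axis=0, ddof=1) / np.sqrt(ratings.shape[0])

    for (vw, aw), m, c in zip(conditions, dmos, ci95):
        print(f"video {vw:3d}°, audio {aw:3d}°: DMOS = {m:.2f} ± {c:.2f}")
    ```

    A main-effects comparison of the per-condition DMOS values is then what supports conclusions such as "video width is the dominant factor".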

    Realising the head-shadow benefit to cochlear implant users

    Cochlear implant (CI) users struggle to understand speech in noise. They suffer from elevated hearing thresholds and, with practically no binaural unmasking, rely heavily on better-ear listening and lip reading. Traditional measures of spatial release from masking (SRM) quantify the speech reception threshold (SRT) improvement due to the azimuthal separation of speech and interferers when directly facing the speech source. The Jelfs et al. (2011) model of SRM predicts substantial benefits of orienting the head away from the target speech. Audio-only and audio-visual (AV) SRTs in normally hearing (NH) listeners and CI users confirmed model predictions of speech-facing SRM and head-orientation benefit (HOB). The lip-reading benefit (LRB) was not disrupted by a modest 30° head orientation. When attending to speech with a gradually diminishing speech-to-noise ratio (SNR), CI users were found to make little spontaneous use of their available HOB. Following a simple instruction to explore their HOB, CI users immediately reached SNRs as much as 5 dB lower. AV speech presentation significantly inhibited head movements (it nearly eradicated CI users' spontaneous head turns) but had a limited impact on the SNRs reached post-instruction compared with audio-only presentation. NH listeners age-matched to our CI participants made more spontaneous head turns in the free-head experiment but were poorer than CI users at exploiting their HOB post-instruction, despite exhibiting a larger objective HOB. NH listeners' and CI users' LRB measured 3 and 5 dB, respectively. Our findings both dispel the erroneous belief held by CI professionals that facing the speech source constitutes an optimal listening strategy (whether for lip-reading or for optimising the use of microphone directionality) and pave the way for translational applications.
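    In equation form (the notation here is ours, introduced only to make the two headline quantities explicit):

    ```latex
    \mathrm{SRM} = \mathrm{SRT}_{\text{co-located}} - \mathrm{SRT}_{\text{separated}},
    \qquad
    \mathrm{HOB} = \mathrm{SRT}_{\text{facing}} - \mathrm{SRT}_{\text{head turned}} .
    ```

    Since a lower SRT means better speech reception, positive values denote a benefit; the 5 dB figure above is a HOB realised by CI users once instructed to turn their heads.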

    Acoustic event detection and localization using distributed microphone arrays

    Automatic acoustic scene analysis is a complex task that involves several functionalities: detection (time), localization (space), separation, recognition, etc. This thesis focuses on both acoustic event detection (AED) and acoustic source localization (ASL) when several sources may be simultaneously present in a room. In particular, the experimental work is carried out in a meeting-room scenario. Unlike previous works, which either employed models of all possible sound combinations or additionally used video signals, this thesis tackles the problem of temporally overlapping sounds by exploiting the signal diversity that results from using multiple microphone-array beamformers. The core of this thesis work is a rather computationally efficient approach that consists of three processing stages. In the first, a set of (null-)steering beamformers carries out diverse partial signal separations using multiple arbitrarily located linear microphone arrays, each composed of a small number of microphones. In the second stage, each beamformer output goes through a classification step, which uses models of all the targeted sound classes (HMM-GMM in the experiments). In the third stage, the classifier scores, whether intra- or inter-array, are combined using a probabilistic criterion (such as MAP) or a machine-learning fusion technique (the fuzzy integral (FI) in the experiments); a sketch of this combination step follows the abstract. This processing scheme is applied to a set of problems of increasing complexity, defined by the assumptions made about the identities (plus time endpoints) and/or positions of the sounds. The thesis starts with the problem of unambiguously mapping identities to positions, continues with AED (positions assumed) and ASL (identities assumed), and ends with the integration of AED and ASL in a single system, which needs no assumption about identities or positions. The evaluation experiments are carried out in a meeting-room scenario where two sources are temporally overlapped; one of them is always speech and the other is an acoustic event from a pre-defined set. Two different databases are used: one produced by merging signals actually recorded in the UPC department's smart-room, and another consisting of overlapping sound signals recorded directly in the same room in a rather spontaneous way. The experimental results with a single array show that the proposed detection system performs better than either the model-based system or a blind-source-separation-based system. Moreover, the product-rule-based combination and the FI-based fusion of the scores from the multiple arrays further improve the accuracies. The posterior position assignment, in turn, is performed with a very small error rate. Regarding ASL, and assuming an accurate AED system output, the 1-source localization performance of the proposed system is slightly better than that of the widely used SRP-PHAT system operating in an event-based mode, and it performs significantly better than the latter in the more complex 2-source scenario. Finally, though the joint system suffers a slight degradation in classification accuracy with respect to the case where the source positions are known, it shows the advantage of carrying out the two tasks, recognition and localization, with a single system, and it allows the inclusion of information about the prior probabilities of the source positions. It is also worth noting that, although the acoustic scenario used for experimentation is rather limited, the approach and its formalism were developed for the general case, where the number and identities of the sources are not constrained.
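    A minimal sketch of the third-stage score combination under the product rule (the array count, class set, and posterior values are illustrative; the HMM-GMM scoring and the fuzzy-integral alternative are not reproduced here):

    ```python
    import numpy as np

    # scores[a, k] = posterior P(class k | output of array a's beamformer),
    # e.g. produced by per-array HMM-GMM classifiers (values are illustrative).
    classes = ["speech", "door_knock", "keyboard_typing", "phone_ring"]
    scores = np.array([
        [0.10, 0.60, 0.20, 0.10],   # array 1
        [0.15, 0.55, 0.20, 0.10],   # array 2
        [0.05, 0.70, 0.15, 0.10],   # array 3
    ])

    # Product rule: multiply per-array posteriors (sum of logs for numerical
    # stability), renormalize, then take the MAP decision over classes.
    log_fused = np.log(scores).sum(axis=0)
    fused = np.exp(log_fused - log_fused.max())
    fused /= fused.sum()

    print("fused posteriors:", dict(zip(classes, fused.round(3))))
    print("detected event:", classes[int(np.argmax(fused))])
    ```

    The same renormalized-product pattern applies whether the scores being combined are intra-array (across the beamformers of one array) or inter-array.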

    Evaluation of room acoustic qualities and defects by use of auralization
