
    Acoustic Space Learning for Sound Source Separation and Localization on Binaural Manifolds

    In this paper we address the problems of modeling the acoustic space generated by a full-spectrum sound source and of using the learned model for the localization and separation of multiple sources that simultaneously emit sparse-spectrum sounds. We lay theoretical and methodological grounds in order to introduce the binaural manifold paradigm. We perform an in-depth study of the latent low-dimensional structure of the high-dimensional interaural spectral data, based on a corpus recorded with a human-like audiomotor robot head. A non-linear dimensionality reduction technique is used to show that these data lie on a two-dimensional (2D) smooth manifold parameterized by the motor states of the listener, or equivalently, the sound source directions. We propose a probabilistic piecewise affine mapping model (PPAM) specifically designed to deal with high-dimensional data exhibiting an intrinsic piecewise linear structure. We derive a closed-form expectation-maximization (EM) procedure for estimating the model parameters, followed by Bayes inversion for obtaining the full posterior density function of a sound source direction. We extend this solution to deal with missing data and redundancy in real-world spectrograms, and hence for 2D localization of natural sound sources such as speech. We further generalize the model to the challenging case of multiple sound sources and we propose a variational EM framework. The associated algorithm, referred to as variational EM for source separation and localization (VESSL), yields a Bayesian estimation of the 2D locations and time-frequency masks of all the sources. Comparisons of the proposed approach with several existing methods reveal that the combination of acoustic-space learning with Bayesian inference enables our method to outperform state-of-the-art methods.
    Comment: 19 pages, 9 figures, 3 tables
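The Bayes-inversion idea in this abstract can be illustrated with a toy sketch: a mixture of K affine maps sends a 2D source direction x to a high-dimensional interaural feature y, and the posterior p(x|y) is a Gaussian mixture whose mean serves as the direction estimate. Everything below (dimensions, the random affine pieces, the noise level) is a synthetic assumption for illustration, not the paper's actual PPAM/VESSL implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: K affine pieces map a 2D source direction x
# (azimuth, elevation) to a Dy-dimensional interaural feature vector y.
K, Dx, Dy = 3, 2, 8
A = rng.normal(size=(K, Dy, Dx))        # per-piece affine matrices
b = rng.normal(size=(K, Dy))            # per-piece offsets
mu = rng.normal(size=(K, Dx))           # Gaussian prior mean on x per piece
Sigma = np.stack([np.eye(Dx)] * K)      # prior covariance on x per piece
noise = 0.05                            # isotropic observation noise std

def posterior_mean(y):
    """Bayes inversion: p(x|y) is a Gaussian mixture; return its mean."""
    means, logw = [], []
    for k in range(K):
        # Conditional Gaussian update for x given y and piece k.
        S_inv = np.linalg.inv(Sigma[k]) + A[k].T @ A[k] / noise**2
        S = np.linalg.inv(S_inv)
        m = S @ (np.linalg.inv(Sigma[k]) @ mu[k]
                 + A[k].T @ (y - b[k]) / noise**2)
        # Marginal likelihood of y under piece k (up to shared constants).
        C = A[k] @ Sigma[k] @ A[k].T + noise**2 * np.eye(Dy)
        r = y - (A[k] @ mu[k] + b[k])
        logw.append(-0.5 * (r @ np.linalg.solve(C, r)
                            + np.log(np.linalg.det(C))))
        means.append(m)
    w = np.exp(np.array(logw) - max(logw))
    w /= w.sum()
    return sum(wk * mk for wk, mk in zip(w, means))

# Simulate an observation from piece 1 and invert it.
x_true = mu[1] + 0.1 * rng.normal(size=Dx)
y_obs = A[1] @ x_true + b[1] + noise * rng.normal(size=Dy)
x_hat = posterior_mean(y_obs)
```

The responsibilities w play the role of the E-step assignments in the paper's EM procedure; here they are computed once for a single observation.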

    Direct growth of 2D and 3D graphene nano-structures over large glass substrates by tuning a sacrificial Cu-template layer

    We demonstrate direct growth of two-dimensional (2D) and three-dimensional (3D) graphene structures on glass substrates. By starting from catalytic copper nanoparticles of different densities and using chemical vapour deposition (CVD) techniques, different 2D and 3D morphologies can be obtained, including sponge-like, nano-ball and conformal graphene structures. More importantly, we show that the initial copper template can be completely removed via sublimation during CVD and, if need be, subsequent metal etching. This allows optical transmission close to that of the bare substrate, which, combined with electrical conductivity, makes the proposed technique very attractive for creating graphene with a high surface-to-volume ratio for a wide variety of applications, including antiglare display screens, solar cells, light-emitting diodes, and gas and biological plasmonic sensors.

    The first ultra-high resolution Digital Terrain Model of the shallow-water sector around Lipari Island (Aeolian Islands, Italy)

    Very high resolution bathymetric maps obtained from multibeam echosounder data are crucial for generating accurate Digital Terrain Models from which the morphological setting of active volcanic areas can be analyzed in detail. Here we show and discuss the main results from the first multibeam bathymetric survey performed in shallow waters around the island of Lipari, the largest and most densely populated of the Aeolian Islands (southern Italy). Data have been collected in the depth range of 0.1-150 m and complete the already existing high-resolution multibeam bathymetry acquired between 100 and 1300 m water depth. The new ultra-high resolution bathymetric maps at 0.1-0.5 m provide new insights into the shallow seafloor of Lipari, allowing us to detail a large spectrum of volcanic, erosive-depositional and anthropic features. Moreover, the presented data allow us to outline the recent morphological evolution of the shallow coastal sector of this active volcanic island, indicating the presence of potential geo-hazard factors in shallow waters.
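The core processing step behind such a Digital Terrain Model, turning scattered multibeam soundings into a regular depth grid, can be sketched as mean-binning at the target cell size. The synthetic seafloor, the 100 m x 100 m extent and the 0.5 m cell below are illustrative assumptions, not the survey's actual data or gridding algorithm.

```python
import numpy as np

# Synthetic soundings: easting, northing (m) and depth (m) with a gentle
# slope, a sinusoidal ridge pattern and measurement noise.
rng = np.random.default_rng(1)
n = 20000
x = rng.uniform(0, 100, n)
y = rng.uniform(0, 100, n)
depth = 0.5 * x + 2 * np.sin(y / 10) + rng.normal(0, 0.1, n)

cell = 0.5                               # DTM resolution in metres
nx, ny = int(100 / cell), int(100 / cell)
ix = (x / cell).astype(int)              # column index of each sounding
iy = (y / cell).astype(int)              # row index of each sounding

# Accumulate per-cell depth sums and hit counts, then average;
# np.add.at performs unbuffered accumulation for repeated indices.
sums = np.zeros((ny, nx))
counts = np.zeros((ny, nx))
np.add.at(sums, (iy, ix), depth)
np.add.at(counts, (iy, ix), 1)
dtm = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
```

Cells without soundings are left as NaN; a production pipeline would typically interpolate them and apply outlier rejection before averaging.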

    Scan and paint: theory and practice of a sound field visualization method

    Sound visualization techniques have played a key role in the development of acoustics throughout history. The development of measurement apparatus and techniques for displaying sound and vibration phenomena has provided excellent tools for building understanding about specific problems. Traditional methods, such as step-by-step measurements or simultaneous multichannel systems, have a strong tradeoff between time requirements, flexibility, and cost. However, if the sound field can be assumed time stationary, scanning methods allow us to assess variations across space with a single transducer, as long as the position of the sensor is known. The proposed technique, Scan and Paint, is based on the acquisition of sound pressure and particle velocity by manually moving a P-U probe (pressure-particle velocity sensor) across a sound field whilst filming the event with a camera. The sensor position is extracted by applying automatic colour tracking to each frame of the recorded video. It is then possible to visualize sound variations across the space in terms of sound pressure, particle velocity, or acoustic intensity. In this paper, both the theoretical foundations of the method and its practical applications are explored, including scanning transfer path analysis, source radiation characterization, operational deflection shapes, virtual phased arrays, material characterization, and acoustic intensity vector field mapping.
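The scanning idea rests on time stationarity: because the field does not change during the sweep, each short analysis window yields one local measurement, painted at the tracked probe position for that window. The sketch below illustrates this with synthetic data; the monopole-like field, sample rate, window length and the in-phase velocity assumption are all illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 48000
t = np.arange(fs * 4) / fs               # 4 s scan
# Probe swept left to right across a 1 m aperture (positions would come
# from the colour tracking of the video in the real method).
pos_x = np.linspace(0.0, 1.0, t.size)
# Stationary field: amplitude decays away from a source at x = 0.3 m.
amp = 1.0 / (0.2 + np.abs(pos_x - 0.3))
p = amp * np.sin(2 * np.pi * 500 * t)            # pressure (Pa)
u = amp * np.sin(2 * np.pi * 500 * t) / 413.0    # in-phase velocity (m/s)

win = 4800                                # 100 ms analysis windows
n_win = t.size // win
intensity = np.empty(n_win)
x_win = np.empty(n_win)
for k in range(n_win):
    s = slice(k * win, (k + 1) * win)
    intensity[k] = np.mean(p[s] * u[s])   # active intensity I = <p*u>
    x_win[k] = pos_x[s].mean()            # paint position for this window
```

Plotting `intensity` against `x_win` gives a one-dimensional "painted" intensity map; the full method extends this to 2D grids and to pressure and particle-velocity maps.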

    Seeing (ultra)sound in real-time through the Acousto-PiezoLuminescent lens

    In this contribution, we focus on a recently developed piezoluminescent phosphor BaSi2O2N2:Eu (BaSiON), and report on Acoustically induced PiezoLuminescence (APL). Insonification of the BaSiON phosphor with (ultra)sound waves leads to intense light emission patterns which are clearly visible to the naked eye. The emitted light intensity has been measured with a calibrated photometer, revealing that it is directly proportional to the applied acoustic power. As such, APL can be used to devise a simple but effective acoustic power sensor. Further, the emitted APL light pattern has a specific geometrical shape which we successfully linked to the pressure field of the incident (ultra)sonic wave. This is explicitly demonstrated for an ultrasonic (f = 3.3 MHz) transducer. By varying the insonification distance (from near- to far-field), multiple 2D slices of the transducer's radiation field light up on the BaSiON phosphor plate. By simply photographing these light patterns, and stacking them one after another, the 3D spatial radiation field of the ultrasonic transducer was reconstructed. Good agreement was found with both classical scanning hydrophone experiments and simulations. Recently we found that APL can also be activated by acoustic waves in the kHz range, thus covering a wide frequency range. First preliminary results are shown.
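The slice-stacking reconstruction described above amounts to photographing one 2D intensity pattern per insonification distance and assembling the images along the propagation axis. A minimal sketch, with Gaussian-beam slices as synthetic stand-ins for the real photographs (slice count, grid size and beam model are all assumptions):

```python
import numpy as np

# Ten slice distances along the propagation axis (mm) and a 64x64
# "photograph" grid; r2 is the squared radius from the beam axis.
z_positions = np.linspace(5, 50, 10)
ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx]
r2 = (xx - nx / 2) ** 2 + (yy - ny / 2) ** 2

slices = []
for z in z_positions:
    width = 4 + 0.2 * z                  # beam broadens with distance
    slices.append(np.exp(-r2 / (2 * width ** 2)))

# Stack the 2D slices into a 3D radiation-field volume.
volume = np.stack(slices, axis=0)        # shape (n_slices, ny, nx)

# Axial profile: on-axis brightness versus distance.
axial = volume[:, ny // 2, nx // 2]
```

With real photographs, each slice would first be intensity-calibrated (using the measured proportionality between light output and acoustic power) before stacking.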

    Wearable performance

    This is the post-print version of the article; copyright © 2009 Taylor & Francis.
    Wearable computing devices worn on the body provide the potential for digital interaction in the world. A new stage of computing technology at the beginning of the 21st Century links the personal and the pervasive through mobile wearables. The convergence between the miniaturisation of microchips (nanotechnology), intelligent textile or interfacial materials production, advances in biotechnology and the growth of wireless, ubiquitous computing emphasises not only mobility but integration into clothing or the human body. In artistic contexts one expects such integrated wearable devices to have the two-way function of interface instruments (e.g. sensor data acquisition and exchange) worn for particular purposes, either for communication with the environment or various aesthetic and compositional expressions. 'Wearable performance' briefly surveys the context for wearables in the performance arts and distinguishes display and performative/interfacial garments. It then focuses on the authors' experiments with 'design in motion' and digital performance, examining prototyping at the DAP-Lab which involves transdisciplinary convergences between fashion and dance, interactive system architecture, electronic textiles, wearable technologies and digital animation. The concept of an 'evolving' garment design that is materialised (mobilised) in live performance between partners originates from DAP Lab's work with telepresence and distributed media addressing the 'connective tissues' and 'wearabilities' of projected bodies through a study of shared embodiment and perception/proprioception in the wearer (tactile sensory processing). Such notions of wearability are applied both to the immediate sensory processing on the performer's body and to the processing of the responsive, animate environment.