
    Rolling sound synthesis : work in progress

    This paper presents a physically informed rolling sound synthesis model for the MetaSon synthesis platform. The aim of this sound synthesis platform will be briefly described. As shown in the state of the art, both in terms of sound effects and proposed controls, existing models can be improved. Some details on asymmetric rolling objects will be given and the sound synthesis model will be presented. Perspectives for further studies and work in progress will be discussed.

    Subjective evaluation of spatial distortions induced by a sound source separation process

    The fields of video games, simulations and virtual reality are now tending to develop increasingly high-performance, realistic and immersive technologies. Efforts are made in terms of sound devices and sound processing to synthesize realistic sound scenes in a 3-D environment. One of the greatest challenges is the ability to analyze a 3-D audio stream corresponding to a complex sound scene into its basic components (i.e. individual sound sources), to modify the scene (e.g. to change the locations of sound sources) and to resynthesize a modified 3-D audio stream. This situation is referred to as "spatial remix".

    Actually, the spatial remix problem is still an open field. Work in progress relies on sound separation algorithms to analyze a sound scene, but these techniques are not perfect and can damage the reconstructed source signals. These degradations are referred to as "separation artefacts", including transient alteration of the target source and rejections of other sources into the target source. Objective and subjective evaluations of separation artefacts have been conducted [1], but these studies usually consider the separated source signals alone, i.e. when each source is listened to separately. This is different from the spatial remix problem, where all sources are listened to simultaneously.

    In that case, one may wonder if the separation artefacts can affect the spatial image of the synthesized 3-D sound scene. According to the perceptual mechanisms involved in spatial hearing, hypotheses can be made on the kinds of spatial distortions that could occur in this context. Indeed, as transients are important cues for precisely localizing sound sources, their alteration may result in localization blur or source widening. On the other hand, when separated sources are spatialized and played simultaneously, rejections of one source into another may also produce unwanted effects such as a feeling of moving sources and the emergence of "phantom" sources.

    This paper presents a new methodology to perceptually evaluate the spatial distortions that can occur in a spatial remix context. It consists in carrying out a localization test on complex scenes composed of three synthetic musical instruments played on a set of loudspeakers. In order to eliminate possible issues related to the spatial audio rendering device, we consider a simple case: only three spatial positions, each corresponding to a single loudspeaker. The spatial remix is then restrained to a simple permutation of the source locations.

    The test is run through a virtual interface, using a head-mounted display. The subject is placed in a simple visual virtual environment and is asked to surround with a remote the areas where each instrument is perceived. This experimental device allows the subject to report precisely both instrument position and size. A single instrument can also be spotted at multiple locations. Perceived source positions are approximated as ellipses, from which center position and dimensions can easily be deduced. In order to quantify spatial distortions, the localization task is performed on both clean and degraded versions of the same musical extract. Localization performances in both cases are then compared, taking the clean-sources case as a reference.

    In this paper, the methodology is applied to assess the quality of the Non-negative Matrix Factorization source separation algorithm developed by Leglaive [2], which performs separation on convolutive mixtures. Our study reveals that the source separation process leads to perceptible degradations of the spatial image. Three main kinds of spatial distortions have been characterized. First, in the majority of degraded cases, "phantom" sources have been observed; this artifact mainly concerns percussive sources. The results also show a significant increase in the perceived width of the degraded sources. Finally, azimuth and elevation localization errors are significantly higher in the case of scenes composed of separated sources.

    [1] V. Emiya, E. Vincent, N. Harlander and V. Hohmann, "Subjective and Objective Quality Assessment of Audio Source Separation," IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 7, pp. 2046-2057, Sept. 2011.
    [2] S. Leglaive, R. Badeau and G. Richard, "Separating time-frequency sources from time-domain convolutive mixtures using non-negative matrix factorization," 2017 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), New Paltz, NY, 2017, pp. 264-268.
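
    As a rough illustration of the ellipse approximation mentioned above, the sketch below reduces a set of points marked by a subject to an ellipse (center, axis lengths, orientation) and compares a clean and a degraded version of the same source. The function names and toy data are placeholders, not the authors' implementation.

```python
import numpy as np

def ellipse_from_points(points):
    """Approximate a marked region (N x 2 array of azimuth/elevation points,
    in degrees) by an ellipse: center, axis lengths, orientation angle."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    cov = np.cov(pts, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    axes = 2.0 * np.sqrt(eigvals)                   # ~1-sigma ellipse axes
    angle = np.degrees(np.arctan2(eigvecs[1, -1], eigvecs[0, -1]))
    return center, axes, angle

def localization_error(center_clean, center_degraded):
    """Error (in azimuth/elevation degrees) between perceived centers."""
    return float(np.linalg.norm(np.asarray(center_degraded) - np.asarray(center_clean)))

# Toy markings for one instrument, clean vs. degraded (invented data).
rng = np.random.default_rng(0)
clean = rng.normal([30.0, 0.0], [2.0, 1.5], size=(40, 2))      # tight region
degraded = rng.normal([33.0, 4.0], [6.0, 4.0], size=(40, 2))   # wider, shifted

c_clean, ax_clean, _ = ellipse_from_points(clean)
c_deg, ax_deg, _ = ellipse_from_points(degraded)
print("source widening:", ax_deg - ax_clean)
print("localization error:", localization_error(c_clean, c_deg))
```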

    In-situ estimation of non-regulated pollutant emission factors in an urban area of Nantes, France, with fleet composition characterization

    The purpose of this study is to estimate the in-situ emission factors of several pollutants (particle number [PN], black carbon [BC] and several volatile and semi-volatile organic compounds [VOCs and SVOCs]) in an urban area of Nantes, France, under real-world traffic conditions and with characterization of the fleet composition. The fleet composition and driving conditions are characterized by the number of vehicles, their speeds and their types (passenger cars [PCs], light commercial vehicles [LCVs], heavy-duty vehicles [HDVs]) as well as their characteristics (make, model, fuel, engine, Euro emission standard, etc.). The number of vehicles passing on the boulevard is around 20,000 per day, with about 44% of Euro 5 and Euro 6 vehicles. The impacts of fleet composition on emissions were analyzed by ANOVA. The results show that the fleet composition has a significant impact on emissions of the different pollutants. A higher percentage of gasoline PCs between Euro 4 and Euro 6 and of Euro 4 diesel PCs induces higher BC emission. A higher percentage of old gasoline and diesel vehicles (≤ Euro 3) induces higher emissions of toluene, ethylbenzene and m+p- and o-xylene. Furthermore, the emission factors estimated in this work were compared to those calculated in other in-situ studies and show good agreement. For the chassis-bench comparison, the in-situ PN and BC emission factors are in the same range as those measured for diesel vehicles without a particle filter and gasoline vehicles with a direct injection system. These EFs are also comparable to those of old heavy-duty vehicles without a particle filter (5×10^13 to 2×10^14 #/km).
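
    For readers unfamiliar with this kind of analysis, the snippet below runs a one-way ANOVA on emission factors grouped by fleet-composition class, the type of test reported above. The group labels and values are invented for illustration and do not come from the study.

```python
from scipy import stats

# Hypothetical per-window black-carbon emission factors (mg/km) grouped by the
# dominant fleet-composition class observed during each measurement window.
# Labels and values are illustrative placeholders, not the study's data.
ef_by_fleet_class = {
    "high_share_euro4_6_gasoline": [4.1, 3.8, 4.5, 4.0, 4.3],
    "high_share_old_diesel":       [6.2, 6.8, 5.9, 6.5, 7.0],
    "mixed_recent_fleet":          [3.2, 3.5, 3.0, 3.4, 3.1],
}

# One-way ANOVA: does fleet composition have a significant effect on the EF?
f_stat, p_value = stats.f_oneway(*ef_by_fleet_class.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Fleet composition has a significant effect on BC emission factors.")
```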

    A synthesis model with intuitive control capabilities for rolling sounds

    This paper presents a physically inspired source-filter model for rolling sound synthesis. The model, which is suitable for real-time implementation, is based on qualitative and quantitative observations obtained from a physics-based model described in the literature. In the first part of the paper, the physics-based model is presented, followed by a perceptual experiment whose aim is to identify the perceptually relevant information characterizing the rolling interaction. On the basis of this experiment, we hypothesize that the particular pattern of the interaction force is responsible for the perception of a rolling object. A complete analysis-synthesis scheme of this interaction force is then provided, along with a description of the calibration of the proposed source-filter sound synthesis process. Finally, a mapping strategy for intuitive control of the proposed synthesis process (i.e. size and velocity of the rolling object and roughness of the surface) is proposed and validated by a listening test.
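
    By way of illustration only, here is a minimal source-filter sketch in the spirit of the approach described above: an irregular train of impacts, standing in loosely for the rolling interaction force, is filtered by a small bank of resonators representing object modes. The parameter names and values are placeholders, not the calibrated model of the paper.

```python
import numpy as np
from scipy.signal import lfilter

fs, dur = 44100, 2.0
n = int(fs * dur)

# --- Source: irregular impact train; rate and amplitude jitter loosely stand
# in for rolling velocity and surface roughness (placeholder values).
rng = np.random.default_rng(1)
rate_hz = 60.0            # mean impact rate
roughness = 0.5           # scales amplitude jitter
source = np.zeros(n)
t = 0.0
while t < dur:
    source[int(t * fs)] = 1.0 + roughness * rng.standard_normal()
    t += rng.exponential(1.0 / rate_hz)

# --- Filter: a small bank of second-order resonators modeling object modes.
def resonator(x, f0, q, fs):
    """Second-order resonant filter with pole radius set from f0 and Q."""
    r = np.exp(-np.pi * f0 / (q * fs))
    theta = 2 * np.pi * f0 / fs
    b, a = [1 - r], [1, -2 * r * np.cos(theta), r * r]
    return lfilter(b, a, x)

modes = [(350.0, 30.0), (820.0, 40.0), (1900.0, 50.0)]   # (freq Hz, Q)
y = sum(resonator(source, f0, q, fs) for f0, q in modes)
y /= np.max(np.abs(y)) + 1e-12                            # normalize output
```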

    The spatial pattern of light determines the kinetics and modulates backpropagation of optogenetic action potentials

    Optogenetics offers an unprecedented ability to spatially target neuronal stimulations. This study investigated via simulation, for the first time, how the spatial pattern of excitation affects the response of channelrhodopsin-2 (ChR2) expressing neurons. First, we describe a methodology for modeling ChR2 in the NEURON simulation platform. Then, we compare the four most commonly considered illumination strategies (somatic, dendritic, axonal and whole cell) in a paradigmatic model of a cortical layer V pyramidal cell. We show that the spatial pattern of illumination has an important impact on the efficiency of stimulation and the kinetics of the spiking output. Whole-cell illumination synchronizes the depolarization of the dendritic tree and the soma and evokes spiking characteristics with a distinct pattern, including an increased bursting rate and enhanced backpropagation of action potentials (bAPs). This type of illumination is the most efficient, as a given irradiance threshold was achievable with only 6% of the ChR2 density needed in the case of somatic illumination. Targeting only the axon initial segment requires a high ChR2 density to achieve a given threshold irradiance, and prolonged illumination does not yield sustained spiking. We also show that patterned illumination can be used to modulate the bAPs and hence spatially modulate the direction and amplitude of spike-timing-dependent plasticity protocols. We further found the irradiance threshold to increase in proportion to the demyelination level of an axon, suggesting that measurements of the irradiance threshold (for example relative to the soma) could be used to remotely probe a loss of the neural myelin sheath, which is a hallmark of several neurodegenerative diseases.
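
    To illustrate how a light pulse can be turned into a ChR2 conductance in a simulation, the sketch below integrates a generic three-state photocycle (closed/open/desensitized) with SciPy. The rate constants and pulse timing are placeholders, and this is not the NEURON ChR2 model used in the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative three-state ChR2 photocycle: Closed -> Open -> Desensitized -> Closed.
# All rate constants below are placeholders, not the parameters of the paper.
k_a = 0.5     # /ms per unit irradiance, light-driven opening C -> O
k_d = 0.1     # /ms, O -> D (desensitization)
k_r = 0.004   # /ms, D -> C (recovery in the dark)
g_max = 1.0   # normalized maximal conductance

def irradiance(t_ms):
    """Square light pulse between 10 and 510 ms (arbitrary choice)."""
    return 1.0 if 10.0 <= t_ms <= 510.0 else 0.0

def photocycle(t, y):
    c, o, d = y
    light = irradiance(t)
    dc = -k_a * light * c + k_r * d
    do = k_a * light * c - k_d * o
    dd = k_d * o - k_r * d
    return [dc, do, dd]

sol = solve_ivp(photocycle, (0.0, 1000.0), [1.0, 0.0, 0.0],
                max_step=1.0, dense_output=True)
t = np.linspace(0.0, 1000.0, 2001)
open_fraction = sol.sol(t)[1]
conductance = g_max * open_fraction   # would drive a current I = g * (V - E_ChR2)
```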

    Navigating in a space of synthesized interaction-sounds: rubbing, scratching and rolling sounds

    In this paper, we investigate a control strategy for synthesized interaction sounds. The framework of our research is based on the {action/object} paradigm, which considers that sounds result from an action on an object. This paradigm presumes that there exist sound invariants, i.e. perceptually relevant signal morphologies that carry information about the action or the object. Some of these auditory cues are considered for rubbing, scratching and rolling interactions. A generic sound synthesis model allowing the production of these three types of interaction is detailed, together with a control strategy for this model. The proposed control strategy allows users to navigate continuously in an "action space" and to morph between interactions, e.g. from rubbing to rolling.
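
    A toy sketch of what navigating such an "action space" might look like in code: synthesis parameters are interpolated with barycentric weights over rubbing, scratching and rolling presets. The parameter names and preset values are invented placeholders, not the paper's mapping.

```python
# Placeholder parameter presets for the three interaction "anchors".
# Names and values are illustrative, not the paper's synthesis parameters.
PRESETS = {
    "rubbing":    {"impact_rate": 400.0, "impact_sharpness": 0.2, "amplitude_jitter": 0.1},
    "scratching": {"impact_rate": 150.0, "impact_sharpness": 0.8, "amplitude_jitter": 0.6},
    "rolling":    {"impact_rate": 40.0,  "impact_sharpness": 0.5, "amplitude_jitter": 0.3},
}

def morph(weights):
    """Interpolate synthesis parameters from barycentric weights over the
    action space, e.g. {'rubbing': 0.5, 'rolling': 0.5} is halfway between."""
    total = sum(weights.values())
    params = {name: 0.0 for name in next(iter(PRESETS.values()))}
    for action, w in weights.items():
        for name, value in PRESETS[action].items():
            params[name] += (w / total) * value
    return params

print(morph({"rubbing": 0.5, "rolling": 0.5}))
```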

    Counting Groves-Ledyard equilibria via degree theory

    We study the Groves-Ledyard mechanism for determining optimal amounts of public goods in economies whose agents have the most general class of preferences for which a Pareto amount of public goods can be computed independently of income distribution. We use degree theory on affine spaces to show that the number of equilibria in such economies grows exponentially as the number of agents in the economy increases. The large number of equilibria in such simple economic models raises doubts as to whether the Groves-Ledyard mechanism is a workable solution to the Free Rider Problem since individuals may have incentives to falsify their preferences in order to drive the adjustment process to a preferred Nash equilibrium.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/25100/1/0000532.pd

    Impact of safety-related dose reductions or discontinuations on sustained virologic response in HCV-infected patients: Results from the GUARD-C Cohort

    BACKGROUND: Despite the introduction of direct-acting antiviral agents for chronic hepatitis C virus (HCV) infection, peginterferon alfa/ribavirin remains relevant in many resource-constrained settings. The non-randomized GUARD-C cohort investigated baseline predictors of safety-related dose reductions or discontinuations (sr-RD) and their impact on sustained virologic response (SVR) in patients receiving peginterferon alfa/ribavirin in routine practice. METHODS: A total of 3181 HCV-mono-infected treatment-naive patients were assigned to 24 or 48 weeks of peginterferon alfa/ribavirin by their physician. Patients were categorized by time-to-first sr-RD (Week 4/12). Detailed analyses of the impact of sr-RD on SVR24 (HCV RNA <50 IU/mL) were conducted in 951 Caucasian, noncirrhotic genotype (G)1 patients assigned to peginterferon alfa-2a/ribavirin for 48 weeks. The probability of SVR24 was identified by a baseline scoring system (range: 0-9 points) on which scores of 5 to 9 and <5 represent high and low probability of SVR24, respectively. RESULTS: SVR24 rates were 46.1% (754/1634), 77.1% (279/362), 68.0% (514/756), and 51.3% (203/396), respectively, in G1, 2, 3, and 4 patients. Overall, 16.9% and 21.8% of patients experienced ≥1 sr-RD for peginterferon alfa and ribavirin, respectively. Among Caucasian noncirrhotic G1 patients: female sex, lower body mass index, pre-existing cardiovascular/pulmonary disease, and low hematological indices were prognostic factors for sr-RD; SVR24 was lower in patients with ≥1 vs. no sr-RD by Week 4 (37.9% vs. 54.4%; P = 0.0046) and Week 12 (41.7% vs. 55.3%; P = 0.0016); sr-RD by Week 4/12 significantly reduced SVR24 in patients with scores <5 but not ≥5. CONCLUSIONS: sr-RD to peginterferon alfa-2a/ribavirin significantly impacts SVR24 rates in treatment-naive G1 noncirrhotic Caucasian patients. Baseline characteristics can help select patients with a high probability of SVR24 and a low probability of sr-RD with peginterferon alfa-2a/ribavirin.
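
    As a purely hypothetical sketch of how a 0-9 point baseline score with a ≥5 cut-off could be applied, the snippet below sums placeholder point contributions and classifies the probability of SVR24 as high or low. The features and point weights are not those of the GUARD-C scoring system, which the abstract does not specify.

```python
# Hypothetical 0-9 point baseline score classifying patients into high (>= 5)
# vs. low (< 5) probability of SVR24. Features and weights are placeholders.
def svr24_score(patient):
    score = 0
    score += patient.get("favorable_viral_load_points", 0)   # e.g. 0-3 points
    score += patient.get("favorable_age_points", 0)          # e.g. 0-2 points
    score += patient.get("favorable_fibrosis_points", 0)     # e.g. 0-2 points
    score += patient.get("favorable_metabolic_points", 0)    # e.g. 0-2 points
    return min(score, 9)

def svr24_probability_class(patient):
    return "high" if svr24_score(patient) >= 5 else "low"

example = {"favorable_viral_load_points": 3, "favorable_age_points": 2,
           "favorable_fibrosis_points": 1, "favorable_metabolic_points": 0}
print(svr24_score(example), svr24_probability_class(example))
```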

    Measurement of the Bottom-Strange Meson Mixing Phase in the Full CDF Data Set

    We report a measurement of the bottom-strange meson mixing phase \beta_s using the time evolution of B0_s -> J/\psi (-> \mu+ \mu-) \phi (-> K+ K-) decays in which the quark-flavor content of the bottom-strange meson is identified at production. This measurement uses the full data set of proton-antiproton collisions at sqrt(s) = 1.96 TeV collected by the Collider Detector experiment at the Fermilab Tevatron, corresponding to 9.6 fb-1 of integrated luminosity. We report confidence regions in the two-dimensional space of \beta_s and the B0_s decay-width difference \Delta\Gamma_s, and measure \beta_s in [-\pi/2, -1.51] U [-0.06, 0.30] U [1.26, \pi/2] at the 68% confidence level, in agreement with the standard model expectation. Assuming the standard model value of \beta_s, we also determine \Delta\Gamma_s = 0.068 +- 0.026 (stat) +- 0.009 (syst) ps-1 and the mean B0_s lifetime, \tau_s = 1.528 +- 0.019 (stat) +- 0.009 (syst) ps, which are consistent and competitive with determinations by other experiments.
    Comment: 8 pages, 2 figures, Phys. Rev. Lett. 109, 171802 (2012)

    Words cluster phonetically beyond phonotactic regularities

    Recent evidence suggests that cognitive pressures associated with language acquisition and use could affect the organization of the lexicon. On one hand, consistent with noisy channel models of language (e.g., Levy, 2008), the phonological distance between wordforms should be maximized to avoid perceptual confusability (a pressure for dispersion). On the other hand, a lexicon with high phonological regularity would be simpler to learn, remember and produce (e.g., Monaghan et al., 2011) (a pressure for clumpiness). Here we investigate wordform similarity in the lexicon, using measures of word distance (e.g., phonological neighborhood density) to ask whether there is evidence for dispersion or clumpiness of wordforms in the lexicon. We develop a novel method to compare lexicons to phonotactically-controlled baselines that provide a null hypothesis for how clumpy or sparse wordforms would be as the result of only phonotactics. Results for four languages, Dutch, English, German and French, show that the space of monomorphemic wordforms is clumpier than what would be expected by the best chance model according to a wide variety of measures: minimal pairs, average Levenshtein distance and several network properties. This suggests a fundamental drive for regularity in the lexicon that conflicts with the pressure for words to be as phonologically distinct as possible.
    Keywords: Linguistics; Lexical design; Communication; Phonotactic
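
    For illustration, the snippet below computes two of the clumpiness measures named above, average pairwise Levenshtein distance and minimal-pair count, over a toy wordlist. The wordlist and the absence of a phonotactically-controlled baseline are simplifications, not the study's setup.

```python
from itertools import combinations

def levenshtein(a, b):
    """Standard edit distance between two wordforms."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def clumpiness_measures(lexicon):
    """Average pairwise Levenshtein distance and number of minimal pairs
    (wordform pairs at edit distance 1) for a list of wordforms."""
    dists = [levenshtein(a, b) for a, b in combinations(lexicon, 2)]
    return sum(dists) / len(dists), sum(d == 1 for d in dists)

# Toy wordlist (placeholder, not one of the study's lexicons).
words = ["cat", "bat", "hat", "dog", "log", "fish"]
avg_dist, minimal_pairs = clumpiness_measures(words)
print(f"average Levenshtein distance: {avg_dist:.2f}, minimal pairs: {minimal_pairs}")
```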