
    Music Mixing Surface


    Lithium depletion in solar-like stars: effect of overshooting based on realistic multi-dimensional simulations

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional, fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used in finance, meteorology, and environmental science. In this letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ~50 Myr to ~4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion based on a single additional assumption, namely that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.
    Comment: 7 pages, 3 figures; accepted for publication in ApJ Letters
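    An extreme-value-based extra-mixing coefficient of the kind described above can be sketched as follows. The Gumbel-shaped profile, the parameter names (`d0`, `mu`, `lam`), and their values here are illustrative assumptions, not the calibrated form from the paper, which is fit to the MUSIC simulations.

```python
import math

def extra_mixing_coefficient(r, r_cb, d0=1.0e8, mu=0.005, lam=0.002):
    """Illustrative extra-mixing diffusion coefficient below a convective
    boundary at radius r_cb. Inside the convective envelope the medium is
    fully mixed; below the boundary, mixing decays following a Gumbel
    (extreme-value) survival function of the normalized penetration depth.
    d0, mu, lam are hypothetical parameters, not the paper's calibration."""
    if r >= r_cb:
        return d0  # fully mixed inside the convective envelope
    depth = (r_cb - r) / r_cb  # normalized distance below the boundary
    # Gumbel survival: chance that a convective plume penetrates this deep
    return d0 * (1.0 - math.exp(-math.exp(-(depth - mu) / lam)))
```

    The coefficient stays close to `d0` just below the boundary and falls off sharply with depth, which is the qualitative behaviour a one-dimensional stellar evolution code would need from such a prescription.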

    Deep Remix: Remixing Musical Mixtures Using a Convolutional Deep Neural Network

    Audio source separation is a difficult machine learning problem, and performance is measured by comparing extracted signals with the component source signals. However, if separation is motivated by the ultimate goal of re-mixing, then complete separation is not necessary, and hence separation difficulty and separation quality depend on the nature of the re-mix. Here, we use a convolutional deep neural network (DNN), trained to estimate 'ideal' binary masks for separating voice from music, to perform re-mixing of the vocal balance by operating directly on the individual magnitude components of the musical mixture spectrogram. Our results demonstrate that small changes in vocal gain may be applied with very little distortion to the ultimate re-mix. Our method may be useful for re-mixing existing mixes.
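    The re-mixing step itself, applying a gain to the mask-selected magnitude bins, can be sketched as below. This is a minimal sketch assuming a precomputed binary mask; the DNN that estimates the mask (the paper's actual contribution) is not shown, and the function name is hypothetical.

```python
import numpy as np

def remix_vocal_gain(mix_mag, vocal_mask, gain):
    """Re-mix by scaling the masked vocal magnitudes of a mixture.

    mix_mag:    magnitude spectrogram of the mixture (freq x time)
    vocal_mask: binary mask, 1 where a bin is judged vocal-dominated
                (standing in for the DNN-estimated ideal binary mask)
    gain:       linear gain applied to the vocal component
    """
    vocal = mix_mag * vocal_mask          # bins attributed to the voice
    backing = mix_mag * (1 - vocal_mask)  # everything else
    return backing + gain * vocal         # recombine with the new balance
```

    The modified magnitudes would then be recombined with the mixture phase and inverted back to audio; small values of `gain` around 1 correspond to the small vocal-balance changes the abstract reports as low-distortion.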

    Weeps Happiness: The Dysfunctional Drama of the White Album

    With Wilde’s words in mind, listen again to the White Album, or simply its opening. About seven seconds into the first track, “Back in the U.S.S.R.,” as we hear the descent of a jet—a masterful, momentous sound, universally recognized—there’s another, much odder sound: a sound that is not monumental at all, and that no one could recognize. If you know the Beatles, you know the sound; you can hear it in your head this moment if you try. But what is it? A throat imitating a guitar? A guitar imitating a throat? It’s like something out of Spike Jones. Yet it isn’t to any apparent purpose, comedic or musical. It’s simply there. It has always been there. And whether we’ve thought about it or not, it has influenced how we hear every sound that follows it. That unassuming oddball sound is our introduction to the underworld of the White Album. This underworld is a place of contradiction and comedy, absurdity and enigma. As the inversion of a world in which everything must be for a reason, it thrives on reasonlessness: proximities that are jarring but intriguing, that make no rational sense but ring bells all over the imagination. It’s full of cries and whispers, mumbles and mutterings, things you can barely make out and things you’ll never make out. Sounds that are wedged in the spaces between songs, or that creep into the margins of the music, as if to undermine it, or inspire it. Sounds that die away, only to rise again as shouts or moans, aftermaths that alter your entire conception of what you thought you heard.

    Monte Carlo aided design of the inner muon veto detectors for the Double Chooz experiment

    The Double Chooz neutrino experiment aims to measure the last unknown neutrino mixing angle theta_13 using two identical detectors positioned at sites near and far from the reactor cores of the Chooz nuclear power plant. To suppress correlated background induced by cosmic muons in the detectors, they are protected by veto detector systems. One of these systems is the inner muon veto, an active liquid-scintillator-based detector instrumented with encapsulated photomultiplier tubes (PMTs). In this paper we describe the Monte Carlo aided design process of the inner muon veto, which resulted in a detector configuration with 78 PMTs yielding an efficiency of 99.978 ± 0.004% for rejecting muon events and an efficiency of >98.98% for rejecting correlated events induced by muons. A veto detector of this design is currently in use at the far detector site and will be built and incorporated as the muon identification system at the near site of the Double Chooz experiment.
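    An efficiency quoted with an uncertainty like 99.978 ± 0.004% follows from simple binomial counting over the simulated muon sample. A toy sketch (the counts below are hypothetical, not Double Chooz statistics):

```python
import math

def veto_efficiency(n_rejected, n_total):
    """Efficiency and binomial standard error for a veto configuration,
    as one would quote from a Monte Carlo sample of n_total muons."""
    eff = n_rejected / n_total
    err = math.sqrt(eff * (1 - eff) / n_total)  # binomial standard error
    return eff, err
```

    For example, with a hypothetical sample of 100,000 simulated muons of which 99,978 are rejected, the binomial error comes out near 0.005%, illustrating how the size of the quoted uncertainty is tied to the Monte Carlo sample size.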

    Iris segmentation

    The quality of eye image data becomes degraded particularly when the image is taken in a non-cooperative acquisition environment, such as under visible-wavelength illumination. Consequently, this environmental condition may lead to noisy eye images and incorrect localization of the limbic and pupillary boundaries, and eventually degrades the performance of an iris recognition system. Hence, this study compared several segmentation methods to address the abovementioned issues. The results show that the circular Hough transform is the best segmentation method, with the best overall accuracy, error rate, and decidability index, and that it is more tolerant to 'noise' such as reflections.
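    The core of the circular Hough transform used to localise the limbic and pupillary boundaries can be sketched as a voting procedure over candidate circle centres. This is a minimal single-radius sketch; a real iris segmenter would also search over a range of radii and run it on an edge map of the eye image.

```python
import numpy as np

def hough_circle_center(edge, radius):
    """Return the (row, col) of the best circle centre for a known radius.

    edge: 2-D boolean array of edge pixels. Each edge pixel votes for all
    centres lying a distance `radius` away; the accumulator peak is the
    centre supported by the most edge evidence.
    """
    h, w = edge.shape
    acc = np.zeros((h, w))
    thetas = np.linspace(0, 2 * np.pi, 180, endpoint=False)
    ys, xs = np.nonzero(edge)
    for y, x in zip(ys, xs):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # unbuffered accumulation
    return np.unravel_index(np.argmax(acc), acc.shape)
```

    Because every point on a circular boundary votes for the same centre, the method tolerates partial occlusion and specular reflections better than direct boundary fitting, which matches the robustness to 'noise' reported above.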

    Final Research Report for Sound Design and Audio Player

    This deliverable describes the work on Task 4.3, Algorithms for sound design and feature developments for audio player. The audio player runs on the in-store player (ISP) and takes care of rendering the music playlists via beat-synchronous automatic DJ mixing, taking advantage of the rich musical content description extracted in T4.2 (beat markers, structural segmentation into intro and outro, musical and sound content classification). The deliverable covers prototypes and final results on: (1) automatic beat-synchronous mixing by beat alignment and time stretching – we developed an algorithm for beat alignment and scheduling of time-stretched tracks; (2) compensation of play duration changes introduced by time stretching – in order to make the playlist generator independent of beat mixing, we chose to readjust the tempo of played tracks such that their stretched duration is the same as their original duration; (3) prospective research on the extraction of data from DJ mixes – to alleviate the lack of extensive ground-truth databases of DJ mixing practices, we propose steps towards extracting this data from existing mixes by alignment and unmixing of the tracks in a mix; we also show how these methods can be evaluated even without labelled test data, and propose an open dataset for further research; (4) a description of the software player module, a GUI-less application to run on the ISP that performs streaming of tracks from disk and beat-synchronous mixing. The estimation of cue points where tracks should cross-fade is now described in D4.7 Final Research Report on Auto-Tagging of Music.
    Funding: EC/H2020/688122/EU/Artist-to-Business-to-Business-to-Consumer Audio Branding System/ABC D
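    The beat-alignment scheduling in item (1) reduces, in the simplest constant-tempo case, to choosing a playback-rate factor and a start time for the incoming track. A minimal sketch under that assumption; the function name and signature are illustrative, not the actual player API.

```python
def beat_sync_params(bpm_out, bpm_in, beat_out, first_beat_in):
    """Rate factor and start time to beat-align an incoming track.

    bpm_out, bpm_in: tempos of the outgoing and incoming tracks
    beat_out:        time (s) of the outgoing beat where the mix starts
    first_beat_in:   time (s) of the incoming track's first beat
    Returns (rate, start): time-stretch playback rate for the incoming
    track and the schedule time at which it must start so that its first
    beat lands exactly on beat_out.
    """
    rate = bpm_out / bpm_in            # tempo multiplier to match tempos
    start = beat_out - first_beat_in / rate  # stretched lead-in duration
    return rate, start
```

    Stretching by `rate` changes the track's duration by the factor `1/rate`, which is precisely the play-duration change that item (2) compensates for so the playlist generator can ignore beat mixing.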