5,822 research outputs found

    Two Bipolar Outflows and Magnetic Fields in a Multiple Protostar System, L1448 IRS 3

    We performed spectral line observations of CO J=2-1, 13CO J=1-0, and C18O J=1-0 and polarimetric observations in the 1.3 mm continuum and CO J=2-1 toward a multiple protostar system, L1448 IRS 3, in the Perseus molecular complex at a distance of ~250 pc, using the BIMA array. In the 1.3 mm continuum, two sources (IRS 3A and 3B) were clearly detected with estimated envelope masses of 0.21 and 1.15 solar masses, and one source (IRS 3C) was marginally detected with an upper mass limit of 0.03 solar masses. In CO J=2-1, we revealed two outflows originating from IRS 3A and 3B. The masses, mean number densities, momenta, and kinetic energies of the outflow lobes were estimated. Based on those estimates and the outflow features, we concluded that the two outflows are interacting and that the IRS 3A outflow is nearly perpendicular to the line of sight. In addition, we estimated the velocity, inclination, and opening angle of the IRS 3B outflow using Bayesian statistics. When the opening angle is ~20 degrees, we constrain the velocity to ~45 km/s and the inclination angle to ~57 degrees. Linear polarization was detected in both the 1.3 mm continuum and CO J=2-1. The linear polarization in the continuum indicates a magnetic field at the central source (IRS 3B) perpendicular to the outflow direction, and the linear polarization in CO J=2-1 was detected in the outflow regions, parallel or perpendicular to the outflow direction. Moreover, we comprehensively discuss whether the binary system of IRS 3A and 3B is gravitationally bound, based on the velocity differences detected in the 13CO J=1-0 and C18O J=1-0 observations and on the outflow features. The specific angular momentum of the system was estimated as ~3e20 cm^2/s, comparable to the values obtained from previous studies of binaries and molecular clouds in Taurus. (Comment: accepted by ApJ; 20 pages, 2 tables, 10 figures)
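The lobe momenta and kinetic energies quoted above follow from p = m v and E = m v^2 / 2 once an envelope mass and a characteristic outflow velocity are adopted. A minimal sketch of that arithmetic in CGS units; the input numbers below are illustrative placeholders, not values from the paper:

```python
# Bulk properties of a molecular outflow lobe from its mass and velocity.

M_SUN_G = 1.989e33        # solar mass in grams
KM_S = 1.0e5              # 1 km/s in cm/s

def outflow_properties(mass_msun, velocity_kms):
    """Return momentum (g cm/s) and kinetic energy (erg) of a lobe."""
    m = mass_msun * M_SUN_G
    v = velocity_kms * KM_S
    momentum = m * v                   # p = m v
    kinetic_energy = 0.5 * m * v ** 2  # E = (1/2) m v^2
    return momentum, kinetic_energy

# Hypothetical lobe: 0.1 solar masses moving at 10 km/s
p, e = outflow_properties(0.1, 10.0)
```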

    Automatic Calibration of Modified FM Synthesis to Harmonic Sounds Using Genetic Algorithms

    Many audio synthesis techniques have been successful in reproducing the sounds of musical instruments. Several of these techniques require parameter calibration. However, this task can be difficult and time-consuming, especially when there is no intuitive correspondence between a parameter value and the change in the produced sound. Searching the parameter space for a given synthesis technique is, therefore, a task more naturally suited to an automatic optimization scheme. Genetic algorithms (GA) have been used rather extensively for this purpose, and in particular for calibrating Classic FM (ClassicFM) synthesis to mimic recorded harmonic sounds. In this work, we use a GA to further explore its modified counterpart, Modified FM (ModFM), which has not been used as widely and whose ability to produce musical sounds has not been as fully explored. We fully automate the calibration of a ModFM synthesis model for the reconstruction of harmonic instrument tones using a GA. In this algorithm, we tune parameters and operators, such as the crossover probability and the mutation operator, for a closer match. As an evaluation, we show that the GA system automatically generates harmonic musical instrument sounds closely matching the target recordings, a match comparable to the application of GA to ClassicFM synthesis.
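A toy version of this calibration loop can be sketched in a few lines. This is not the authors' implementation: the synthesizer below is plain Chowning (Classic) FM rather than ModFM, the fitness is a simple time-domain error rather than a perceptual or spectral measure, and all parameter ranges are made up for illustration. The GA structure (elitist selection, crossover, Gaussian mutation) is the part that carries over:

```python
import math
import random

random.seed(0)

SR, N = 8000, 256
T = [n / SR for n in range(N)]

def fm_tone(fc, ratio, index):
    """Classic (Chowning) FM: sin(2*pi*fc*t + index * sin(2*pi*fc*ratio*t))."""
    return [math.sin(2 * math.pi * fc * t
                     + index * math.sin(2 * math.pi * fc * ratio * t))
            for t in T]

# The "recorded" target: an FM tone whose parameters the GA must rediscover.
TARGET = fm_tone(440.0, 2.0, 1.5)

def fitness(genes):
    """Mean squared error against the target (lower is better)."""
    cand = fm_tone(440.0, *genes)
    return sum((a - b) ** 2 for a, b in zip(cand, TARGET)) / N

def evolve(pop_size=30, gens=40, pc=0.7, pm=0.3):
    """Minimal elitist GA over (modulation ratio, modulation index)."""
    pop = [(random.uniform(0.5, 4.0), random.uniform(0.0, 5.0))
           for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 5]          # keep the fittest fifth
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = (a[0], b[1]) if random.random() < pc else a   # crossover
            if random.random() < pm:                              # mutation
                child = (child[0] + random.gauss(0.0, 0.1),
                         child[1] + random.gauss(0.0, 0.1))
            children.append(child)
        pop = elite + children
        cand = min(pop, key=fitness)
        if fitness(cand) < fitness(best):    # track the best ever seen
            best = cand
    return best
```

Because the best individual ever seen is tracked explicitly, the error is non-increasing across generations, which is the property that makes this search usable even when the parameter-to-sound mapping is unintuitive.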

    Experiments with time-frequency inversions


    Pitch-Informed Solo and Accompaniment Separation

    This thesis addresses the development of a system for pitch-informed solo and accompaniment separation, capable of separating main instruments from the music accompaniment regardless of the musical genre of the track or the type of accompaniment. For the solo instrument, only pitched monophonic instruments were considered, in a single-channel scenario where no panning or spatial location information is available. In the proposed method, pitch information is used as an initial stage of a sinusoidal modeling approach that attempts to estimate the spectral information of the solo instrument from a given audio mixture. Instead of estimating the solo instrument on a frame-by-frame basis, the proposed method gathers information on tone objects to perform separation. Tone-based processing allowed the inclusion of novel processing stages for attack refinement, transient interference reduction, common amplitude modulation (CAM) of tone objects, and better estimation of the non-harmonic elements that can occur in musical instrument tones. 
The proposed solo and accompaniment algorithm is an efficient method suitable for real-world applications. A study was conducted to better model the magnitude, frequency, and phase of isolated musical instrument tones. As a result of this study, temporal envelope smoothness, the inharmonicity of musical instruments, and phase expectation were exploited in the proposed separation method. Additionally, an algorithm for harmonic/percussive separation based on phase expectation was proposed. The algorithm shows improved perceptual quality with respect to state-of-the-art methods for harmonic/percussive separation. The proposed solo and accompaniment method obtained perceptual quality scores comparable to other state-of-the-art algorithms in the SiSEC 2011 and SiSEC 2013 campaigns, and outperformed the comparison algorithm on the instrumental dataset described in this thesis. As a use case of solo and accompaniment separation, a listening test was conducted to assess separation quality requirements in the context of music education. Results from the listening test showed that solo and accompaniment tracks should be optimized differently to suit the quality requirements of music education. The Songs2See application was presented as commercial music learning software that includes the proposed solo and accompaniment separation method.
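The phase-expectation cue behind the harmonic/percussive separation can be illustrated compactly: for a steady sinusoid, the phase measured in a DFT bin advances between analysis frames by exactly 2*pi*f*H/fs (hop H, sample rate fs), whereas transient (percussive) content does not follow this prediction. A minimal sketch, with illustrative parameter values that are not taken from the thesis:

```python
import cmath
import math

SR, N, HOP = 8000, 256, 60   # hop chosen so the expected advance is nonzero mod 2*pi

def bin_phase(signal, start, freq):
    """Phase of the single-bin DFT of signal[start:start+N] at `freq`."""
    acc = 0j
    for n in range(N):
        acc += signal[start + n] * cmath.exp(-2j * math.pi * freq * n / SR)
    return cmath.phase(acc)

f = 500.0                    # bin-aligned: 500 Hz = 16 * (8000 / 256)
tone = [math.sin(2 * math.pi * f * n / SR) for n in range(N + HOP)]

# For a steady sinusoid, the inter-frame phase advance matches the expectation
# 2*pi*f*HOP/SR; deviations from it flag transient (percussive) content.
measured = (bin_phase(tone, HOP, f) - bin_phase(tone, 0, f)) % (2 * math.pi)
expected = (2 * math.pi * f * HOP / SR) % (2 * math.pi)
```

A separation mask can then be built per bin from how far the measured advance deviates from the expected one, routing compliant bins to the harmonic signal and non-compliant bins to the percussive signal.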

    Investigating computational models of perceptual attack time

    The perceptual attack time (PAT) is the compensation for the differing attack components of sounds when seeking a perceptually isochronous presentation of sounds. It has applications in scheduling and is related to, but not necessarily the same as, the moment of perceptual onset. This paper describes a computational investigation of PAT over a set of 25 synthesised stimuli, and a larger database of 100 sounds equally divided between synthesised and ecological. Ground-truth PATs for modelling were obtained by the alternating presentation paradigm, in which subjects adjusted the relative start time of a reference click and the sound to be judged. Whilst fitting the experimental data from the 25-sound set was plausible, difficulties with existing models were found in the case of the larger test set. A pragmatic solution was obtained using a neural net architecture. In general, learnt schemata of sound classification may be implicated in resolving the multiple detection cues evoked by complex sounds.
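A crude physical proxy for attack time, which candidate PAT models typically start from, is the rise time of the short-time energy envelope between a low and a high fraction of its peak. The sketch below is an illustration of that baseline feature, not one of the paper's models; the stimuli, thresholds, and window size are all invented for the example:

```python
import math

SR = 8000

def synth_tone(attack_s, length_s=0.5, freq=440.0):
    """Sine tone with a linear attack ramp (a stand-in for a test stimulus)."""
    n_total = int(length_s * SR)
    n_attack = max(1, int(attack_s * SR))
    return [min(1.0, n / n_attack) * math.sin(2 * math.pi * freq * n / SR)
            for n in range(n_total)]

def envelope(signal, win=64):
    """Short-time RMS energy envelope."""
    return [math.sqrt(sum(x * x for x in signal[i:i + win]) / win)
            for i in range(0, len(signal) - win, win)]

def attack_proxy(signal, lo=0.1, hi=0.9, win=64):
    """Seconds between the envelope crossing lo and hi fractions of its peak."""
    env = envelope(signal, win)
    peak = max(env)
    t_lo = next(i for i, e in enumerate(env) if e >= lo * peak)
    t_hi = next(i for i, e in enumerate(env) if e >= hi * peak)
    return (t_hi - t_lo) * win / SR
```

Such a physical rise time tracks PAT well for simple synthetic tones but, as the paper reports, breaks down for complex ecological sounds, where learnt classification cues appear to intervene.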

    A spectral-envelope synthesis model to study perceptual blend between wind instruments

    Wind instrument sounds can be shown to be characterized by pitch-invariant spectral maxima, or formants. An acoustical signal-analysis approach is pursued to obtain spectral-envelope descriptions that reveal these pitch-invariant spectral traits. Spectral envelopes are estimated empirically by applying a curve-fitting procedure to a composite distribution of partial-tone frequencies and amplitudes obtained across an instrument’s pitch range. A source-filter synthesis model is designed based on two independent formant filters with their frequency responses matched to the spectral-envelope estimates. This is then used in perceptual experiments in which parameter variations of the synthesis filter are manipulated systematically to investigate their contribution to the degree of perceived blend between the synthesized sound and a recorded instrument sound. The perceptual relevance is assessed through two tasks in which participants either produce the best attainable blend by directly controlling synthesis parameters or rate the degree of blend for 5 parameter presets. Behavioral data from both experiments suggest the utility of this formant-based model for correlating pitch-invariant acoustical description with perceptual relevance, as both formant frequency and magnitude appear to affect perceived blend.
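The two-formant source-filter idea can be sketched as a harmonic source (an impulse train) passed through two resonant filters whose outputs are summed. The formant frequencies and bandwidths below are hypothetical placeholders; in the paper they are matched to measured spectral-envelope estimates:

```python
import math

SR = 8000

def resonator(signal, freq, bw):
    """Two-pole resonator (a minimal formant filter):
    y[n] = x[n] + a1*y[n-1] + a2*y[n-2], poles at radius r, angle 2*pi*freq/SR."""
    r = math.exp(-math.pi * bw / SR)
    a1 = 2 * r * math.cos(2 * math.pi * freq / SR)
    a2 = -r * r
    y, y1, y2 = [], 0.0, 0.0
    for x in signal:
        v = x + a1 * y1 + a2 * y2
        y.append(v)
        y1, y2 = v, y1
    return y

def band_energy(signal, freq, n=1024):
    """Magnitude of a single DFT bin near `freq` (crude spectral probe)."""
    re = sum(signal[k] * math.cos(2 * math.pi * freq * k / SR) for k in range(n))
    im = sum(signal[k] * math.sin(2 * math.pi * freq * k / SR) for k in range(n))
    return math.hypot(re, im)

# Harmonic source: impulse train at 125 Hz (pitch-invariant filters mean the
# same two resonators serve any fundamental).
period = SR // 125
source = [1.0 if n % period == 0 else 0.0 for n in range(2048)]

# Two independent formant branches summed, mimicking the two-formant model.
out = [a + b for a, b in zip(resonator(source, 1000.0, 80.0),
                             resonator(source, 2500.0, 120.0))]
```

Varying `freq` and the relative branch gains is the kind of systematic parameter manipulation the blend experiments perform: the filters impose spectral maxima on whatever harmonics the source provides, so the envelope shape stays put as pitch changes.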

    Guide to the William Russo Collection

    This guide describes the organization and scope of the William Russo archival collection, housed within the College Archives & Special Collections at Columbia College Chicago. William "Bill" Russo (1928-2003) was a composer, conductor, musician, teacher, and author; he was also the founder of the Chicago Jazz Ensemble at Columbia College Chicago, the founder and director of the Chicago Free Theater, and the director of the High School Jazz Festival.

    Hearing Emergence: Towards Sound-Based Self-Organisation

    A fascination with models derived from the natural organisation of organisms has a long history of influence in the arts. This paper discusses emergence as a complex behaviour and its manifestations in the sonic domain. We address issues inherent in the use of visual/spatial metaphors for sonic representation and propose an approach based on sound interaction within biological complex systems.