
    Utilising Tree-Based Ensemble Learning for Speaker Segmentation

    In audio and speech processing, accurate detection of the change points between multiple speakers in speech segments is an important stage for several applications such as speaker identification and tracking. Bayesian Information Criterion (BIC)-based approaches are the most traditionally used, as they have proved very effective for this task. The main criticism levelled against BIC-based approaches is the use of a penalty parameter in the BIC function: the use of this parameter means that fine tuning is required for each variation of the acoustic conditions, and when tuned for a certain condition, the model becomes biased towards the data used for training, limiting its generalisation ability. In this paper, we propose a BIC-based, tuning-free approach for speaker segmentation through the use of ensemble learning. A forest of segmentation trees is constructed, in which each tree is trained on a sampled version of the speech segment. During tree construction, a set of randomly selected points in the input sequence is examined as potential segmentation points; the point that yields the highest ΔBIC is chosen, and the same process is repeated on the resulting left and right segments, so that each node of the tree stores the highest ΔBIC together with the associated point index. After building the forest, the accumulated ΔBIC over all trees is calculated for each point, and the positions of the local maxima are taken as speaker change points. The proposed approach is tested on artificially created conversations from the TIMIT database and shows very accurate results, comparable to state-of-the-art methods, with a 9% (absolute) higher F1 score than the standard ΔBIC approach with an optimally tuned penalty parameter.
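
    The accumulation idea can be made concrete with a short sketch. This is a hypothetical illustration, not the authors' implementation: delta_bic uses a penalty-free Gaussian ΔBIC over feature frames, candidate change points are drawn uniformly at random, the per-tree resampling of the speech segment is omitted, and all names and parameters are invented.

        import numpy as np

        def delta_bic(frames, split):
            """Penalty-free Gaussian Delta-BIC for splitting `frames` at `split`."""
            n, d = frames.shape
            def logdet_cov(x):
                # Ridge term keeps the covariance invertible for short segments.
                return np.linalg.slogdet(np.cov(x, rowvar=False) + 1e-6 * np.eye(d))[1]
            return (n * logdet_cov(frames)
                    - split * logdet_cov(frames[:split])
                    - (n - split) * logdet_cov(frames[split:]))

        def grow_tree(frames, lo, hi, scores, rng, n_candidates=10, min_len=20):
            """One segmentation tree: pick the random candidate point with the
            highest Delta-BIC, accumulate it, then recurse left and right."""
            if hi - lo <= 2 * min_len:
                return
            candidates = rng.integers(lo + min_len, hi - min_len, size=n_candidates)
            gains = [delta_bic(frames[lo:hi], c - lo) for c in candidates]
            best = int(np.argmax(gains))
            point = int(candidates[best])
            scores[point] += max(gains[best], 0.0)   # accumulate positive evidence only
            grow_tree(frames, lo, point, scores, rng, n_candidates, min_len)
            grow_tree(frames, point, hi, scores, rng, n_candidates, min_len)

        def change_points(frames, n_trees=50):
            """Accumulate Delta-BIC over the forest; local maxima of the
            accumulated score are returned as candidate change points."""
            scores = np.zeros(len(frames))
            for seed in range(n_trees):
                grow_tree(frames, 0, len(frames), scores, np.random.default_rng(seed))
            return [i for i in range(1, len(scores) - 1)
                    if scores[i] > 0 and scores[i] >= scores[i - 1] and scores[i] > scores[i + 1]]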

    Appearance and disappearance of superconductivity in SmFe1-xNixAsO (x = 0.0 to 1.0)

    Bulk polycrystalline Ni-substituted SmFe1-xNixAsO (x = 0.0 to 1.0) samples are synthesized by the solid-state reaction route in an evacuated sealed quartz tube. The cell volume decreases with increasing Ni content in SmFe1-xNixAsO, indicating successful substitution of the smaller Ni ion at the Fe site. Resistivity measurements show that the spin-density-wave (SDW) transition is suppressed drastically with Ni doping, and superconductivity subsequently appears in a narrow range of x from 0.04 to 0.10, with a maximum Tc of 9 K at x = 0.06. For higher Ni content (x > 0.10), the system becomes metallic and superconductivity is not observed down to 2 K. Magneto-transport [R(T)H] measurements yield an upper critical field [Hc2(0)] of up to 300 kOe. The flux-flow activation energy (U/kB) is estimated at ~98.37 K in a 0.1 T field. Magnetic susceptibility measurements also confirm bulk superconductivity for the x = 0.04, 0.06 and 0.08 samples. The lower critical field (Hc1) is around 100 Oe at 2 K for the x = 0.06 sample. Heat capacity CP(T) measurements exhibit a hump-like transition pertaining to the SDW in the Fe planes at around 150 K and an antiferromagnetic (AFM) ordering of the Sm spins below 5.4 K [TN(Sm)]. Although the SDW hump for the Fe spins disappears in the Ni-doped samples, TN(Sm) remains unaltered but with a reduced transition height, i.e., decreased entropy. In conclusion, the complete phase diagram of SmFe1-xNixAsO (x = 0.0 to 1.0) is studied in terms of its structural, electrical, magnetic and thermal properties.
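
    For context, the activation energy quoted above is conventionally extracted from the thermally activated flux-flow (TAFF) model, rho(T) = rho0 exp(-U0/(kB T)), so U0/kB is minus the slope of ln(rho) versus 1/T in the broadened resistive transition. A minimal sketch of that fit, assuming the resistivity data near the transition are available as arrays; the names and the synthetic data are illustrative, not the paper's analysis code:

        import numpy as np

        def activation_energy(T, rho):
            """Estimate U0/kB (in kelvin) from the TAFF model
            rho(T) = rho0 * exp(-U0 / (kB * T)): ln(rho) is linear in 1/T
            with slope -U0/kB in the broadened resistive transition."""
            mask = rho > 0                      # ln is undefined at rho = 0
            slope, _ = np.polyfit(1.0 / T[mask], np.log(rho[mask]), 1)
            return -slope                       # U0/kB in kelvin

        # Illustrative synthetic data: U0/kB = 100 K, rho0 = 1 mOhm*cm
        T = np.linspace(6.0, 9.0, 30)
        rho = 1e-3 * np.exp(-100.0 / T)
        print(f"U0/kB ~ {activation_energy(T, rho):.1f} K")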

    A Novel Approach to the Common Due-Date Problem on Single and Parallel Machines

    This paper presents a novel idea for the general case of the Common Due-Date (CDD) scheduling problem. The problem is to schedule a certain number of jobs on a single machine or on parallel machines, where all the jobs have different processing times but a common due date. The objective is to minimize the total penalty incurred due to earliness or tardiness of the job completions. This work presents exact polynomial algorithms for optimizing a given job sequence on single and identical parallel machines, with a run-time complexity of O(n log n) in both cases, where n is the number of jobs. We also show that our approach for the parallel-machine case is suitable for non-identical parallel machines. We prove the optimality for the single-machine case and the runtime complexities of both. We then extend our approach to one particular dynamic case of the CDD and conclude the chapter with our results for the benchmark instances provided in the OR-Library.
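
    For reference, the CDD objective for a fixed job sequence is the sum of per-job earliness and tardiness penalties around the common due date. The sketch below only evaluates a left-shifted single-machine schedule; the per-job weights and all names are illustrative assumptions rather than the paper's notation, and the paper's O(n log n) optimization of the schedule itself is not reproduced here.

        def cdd_penalty(processing_times, due_date, alpha, beta):
            """Total earliness/tardiness penalty of a job sequence on one machine,
            with each job started as early as possible (no inserted idle time).

            processing_times: job durations in sequence order
            alpha, beta: per-job earliness / tardiness penalty weights
            """
            total, completion = 0.0, 0.0
            for p, a, b in zip(processing_times, alpha, beta):
                completion += p                            # job finishes here
                earliness = max(0.0, due_date - completion)
                tardiness = max(0.0, completion - due_date)
                total += a * earliness + b * tardiness
            return total

        # Example: three jobs with a common due date of 10
        print(cdd_penalty([4, 3, 5], 10, alpha=[1, 1, 1], beta=[2, 2, 2]))
        # completions 4, 7, 12 -> penalty 6 + 3 + 2*2 = 13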

    Possibilistic KNN regression using tolerance intervals

    Regression methods that minimize predictive risk usually look for precise values that tend toward the true response value. In some situations, however, it may be more reasonable to predict intervals rather than precise values. In this paper, we focus on finding such intervals for the K-nearest neighbors (KNN) method with precise values for inputs and output. In KNN, prediction intervals are usually built by considering the local probability distribution of the neighborhood. In situations where we do not have enough data in the neighborhood to obtain statistically significant distributions, we would rather build intervals that take such distribution uncertainty into account. To this end, we suggest using tolerance intervals to build the maximally specific possibility distribution that bounds each population quantile of the true distribution (with a fixed confidence level) that might have generated our sample set. We then propose a new interval regression method based on KNN which takes advantage of this possibility distribution to choose, for each instance, the value of K offering a good trade-off between precision and the uncertainty due to the limited sample size. Finally, we apply our method to an aircraft trajectory prediction problem.
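
    The tolerance-interval building block can be sketched as follows. This assumes approximately Gaussian neighborhood responses and uses Howe's approximation for the two-sided normal tolerance factor; the paper's full construction (stacking tolerance intervals over several coverage levels into a possibility distribution, and selecting K per instance) is not reproduced, and all names are illustrative.

        import numpy as np
        from scipy import stats

        def tolerance_interval(sample, coverage=0.9, confidence=0.95):
            """Two-sided normal tolerance interval (Howe's approximation):
            with probability `confidence`, the interval covers at least a
            `coverage` fraction of the population behind `sample`."""
            n = len(sample)
            z = stats.norm.ppf((1 + coverage) / 2)
            chi2 = stats.chi2.ppf(1 - confidence, n - 1)
            k = z * np.sqrt((n - 1) * (1 + 1 / n) / chi2)
            mean, sd = np.mean(sample), np.std(sample, ddof=1)
            return mean - k * sd, mean + k * sd

        def knn_interval(X, y, x_query, k_neighbors=10, **kwargs):
            """Tolerance interval over the responses of the K nearest neighbors."""
            dists = np.linalg.norm(X - x_query, axis=1)
            neighbors = y[np.argsort(dists)[:k_neighbors]]
            return tolerance_interval(neighbors, **kwargs)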

    Hygroscopicity distribution concept for measurement data analysis and modeling of aerosol particle mixing state with regard to hygroscopic growth and CCN activation

    This paper presents a general concept and mathematical framework of particle hygroscopicity distribution for the analysis and modeling of aerosol hygroscopic growth and cloud condensation nucleus (CCN) activity. The cumulative distribution function of particle hygroscopicity, H(κ, Dd), is defined as the number fraction of particles with a given dry diameter, Dd, and an effective hygroscopicity parameter smaller than κ. From hygroscopicity tandem differential mobility analyzer (HTDMA) and size-resolved CCN measurement data, H(κ, Dd) can be derived by solving the κ-Köhler model equation. Alternatively, H(κ, Dd) can be predicted from measurement or model data resolving the chemical composition of single particles. A range of model scenarios is used to explain and illustrate the concept, and exemplary practical applications are shown with HTDMA and CCN measurement data from polluted megacity and pristine rainforest air. Lognormal distribution functions are found to be suitable for approximately describing the hygroscopicity distributions of the investigated atmospheric aerosol samples. For detailed characterization of aerosol hygroscopicity distributions, including externally mixed particles of low hygroscopicity such as freshly emitted soot, we suggest that size-resolved CCN measurements with a wide range and high resolution of water vapor supersaturation and dry particle diameter be combined with comprehensive HTDMA measurements and size-resolved or single-particle measurements of aerosol chemical composition, including refractory components. In field and laboratory experiments, hygroscopicity distribution data from HTDMA and CCN measurements can complement mixing-state information from optical, chemical and volatility-based techniques. Moreover, we propose and intend to use hygros…
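
    Deriving H(κ, Dd) from size-resolved CCN data rests on the κ-Köhler model (Petters & Kreidenweis, 2007), which maps a dry diameter and κ to a critical supersaturation. A minimal numerical sketch with standard constants at 298 K; the brute-force scan and all names are illustrative, not the paper's code.

        import numpy as np

        # Physical constants (SI units)
        SIGMA_W = 0.072    # surface tension of water near 298 K [J m^-2]
        M_W = 0.018015     # molar mass of water [kg mol^-1]
        RHO_W = 997.0      # density of water [kg m^-3]
        R = 8.314          # gas constant [J mol^-1 K^-1]

        def kohler_saturation(D, Dd, kappa, T=298.15):
            """Saturation ratio S(D) over a droplet of wet diameter D grown from
            a dry particle of diameter Dd with hygroscopicity parameter kappa."""
            kelvin = np.exp(4 * SIGMA_W * M_W / (R * T * RHO_W * D))
            raoult = (D**3 - Dd**3) / (D**3 - Dd**3 * (1 - kappa))
            return raoult * kelvin

        def critical_supersaturation(Dd, kappa, T=298.15):
            """Critical supersaturation (%) found by scanning wet diameters."""
            D = np.geomspace(Dd * 1.001, Dd * 1e3, 100_000)
            return (kohler_saturation(D, Dd, kappa, T).max() - 1) * 100

        # Example: 100 nm dry particle with kappa = 0.3 -> s_c of roughly 0.2 %
        print(f"s_c ~ {critical_supersaturation(100e-9, 0.3):.2f} %")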

    From Fake Supergravity to Superstars

    The fake supergravity method is applied to 5-dimensional asymptotically AdS spacetimes containing gravity coupled to a real scalar and an abelian gauge field. The motivation is to obtain bulk solutions with R x S^3 symmetry in order to explore the AdS/CFT correspondence when the boundary gauge theory is on R x S^3. A fake supergravity action, invariant under local supersymmetry through linear order in fermion fields, is obtained. The gauge field makes things more restrictive than in previous applications of fake supergravity, which allowed quite general scalar potentials. Here the superpotential must take the form W(φ) ~ exp(−kφ) + c exp(2φ/(3k)), and the only freedom is the choice of the constant k. The fermion transformation rules of fake supergravity lead to fake Killing spinor equations. From their integrability conditions, we obtain first-order differential equations which we solve analytically to find singular electrically charged solutions of the Lagrangian field equations. A Schwarzschild mass term can be added to produce a horizon which shields the singularity. The solutions, which include "superstars", turn out to be known in the literature. We compute their holographic parameters.

    Improved Holographic QCD

    We provide a review of holographic models based on Einstein-dilaton gravity with a potential in five dimensions. Such theories, for a judicious choice of potential, are very close to the physics of large-N YM theory both at zero and finite temperature. The zero-temperature glueball spectra as well as the finite-temperature thermodynamic functions compare well with lattice data. The model can be used to calculate transport coefficients, such as the bulk viscosity, the drag force and jet-quenching parameters, relevant for the physics of the Quark-Gluon Plasma.

    Charmless B_s → PP, PV, VV Decays Based on the Six-Quark Effective Hamiltonian with Strong Phase Effects II

    We provide a systematic study of charmless B_s → PP, PV, VV decays (P and V denote pseudoscalar and vector mesons, respectively) based on an approximate six-quark operator effective Hamiltonian from QCD. The calculation of the relevant hard-scattering kernels is carried out, and the resulting transition form factors are consistent with the results of QCD sum-rule calculations. Taking into account important classes of power corrections involving "chirally enhanced" terms and the vertex corrections, as well as weak annihilation contributions with a non-trivial strong phase, we present predictions for the branching ratios and CP asymmetries of B_s decays into PP, PV and VV final states, and also for the corresponding polarization observables in VV final states. It is found that the weak annihilation contributions with a non-trivial strong phase have remarkable effects on the observables in the color-suppressed and penguin-dominated decay modes. In addition, we discuss SU(3) flavor symmetry and show that the symmetry relations are generally respected.
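
    For readers outside the field, the observables quoted above follow standard definitions; the forms below are the common textbook conventions (normalizations vary between papers), recalled here for convenience rather than taken from the authors' text.

        % Direct CP asymmetry of a B_s decay to a final state f
        \begin{equation}
          A_{CP} \equiv
          \frac{\Gamma(\bar{B}_s \to \bar{f}) - \Gamma(B_s \to f)}
               {\Gamma(\bar{B}_s \to \bar{f}) + \Gamma(B_s \to f)}
        \end{equation}

        % Longitudinal polarization fraction in a B_s -> VV decay, with
        % helicity amplitudes A_0 (longitudinal) and A_{\pm} (transverse)
        \begin{equation}
          f_L \equiv
          \frac{|A_0|^2}{|A_0|^2 + |A_+|^2 + |A_-|^2}
        \end{equation}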