
    Resonance contributions to HBT correlation radii

    We study the effect of resonance decays on intensity interferometry for heavy-ion collisions. Collective expansion of the source leads to a dependence of the two-particle correlation function on the pair momentum K. This opens the possibility of reconstructing the dynamics of the source from the K-dependence of the measured HBT radii. Here we address the extent to which resonance decays can fake such a flow signal. Within a simple parametrization of the emission function we present a comprehensive analysis of the interplay of flow and resonance decays in the one- and two-particle spectra. We discuss in detail the non-Gaussian features of the correlation function introduced by long-lived resonances and the resulting problems in extracting meaningful HBT radii. We propose to define them in terms of the second-order q-moments of the correlator C(q, K). We show that this yields a more reliable characterisation of the correlator in terms of its width and the correlation strength 'lambda' than other commonly used fit procedures. The normalized fourth-order q-moments (kurtosis) provide a quantitative measure of the non-Gaussian features of the correlator. At least for the class of models studied here, the kurtosis helps to separate the effects of expansion flow and resonance decays, and provides the cleanest signal to distinguish between scenarios with and without transverse flow. Comment: 23 pages, two-column RevTeX, 12 EPS figures included; minor changes following referee comments
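
    The q-moment definitions referred to in the abstract can be illustrated in one dimension; the paper itself works with the full three-dimensional correlator C(q, K) and a specific emission-function model, which are not reproduced here. For a Gaussian correlator C = 1 + lambda*exp(-q^2 R^2), the second-order q-moment gives R^2 = 1/(2<q^2>) and the normalized fourth-order moment (kurtosis) vanishes. A minimal Python sketch of these moment-based quantities:

        import numpy as np

        def q_moment_radius_and_kurtosis(q, C):
            """q: uniform 1-D grid of relative momenta (1/fm); C: sampled correlator C(q)."""
            w = C - 1.0                            # weight entering the q-moments
            q2 = np.sum(q**2 * w) / np.sum(w)      # second-order q-moment <q^2>
            q4 = np.sum(q**4 * w) / np.sum(w)      # fourth-order q-moment <q^4>
            R = np.sqrt(1.0 / (2.0 * q2))          # Gaussian-equivalent HBT radius
            kurt = q4 / (3.0 * q2**2) - 1.0        # normalized kurtosis, 0 for a Gaussian
            lam = np.sum(w) / np.sum(np.exp(-q**2 * R**2))  # correlation strength
            return R, kurt, lam

        # Consistency check on an exactly Gaussian correlator
        q = np.linspace(0.0, 2.0, 1000)            # 1/fm
        C = 1.0 + 0.7 * np.exp(-q**2 * 5.0**2)     # lambda = 0.7, R = 5 fm
        print(q_moment_radius_and_kurtosis(q, C))  # ~ (5.0, 0.0, 0.7)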

    Analyze the Efficiency of Blind Signal Extraction Algorithms in a Background of Impulse Noise Based on the Maximization of the Absolute Value of the Kurtosis.

    In this paper, the efficiency of algorithms for blind extraction of a pulse signal from a background of impulse noise, based on maximization of the absolute value of the kurtosis, is analyzed. Blind extraction algorithms with a fixed-point update are synthesized and considered in combination with gradient algorithms. The convergence of these algorithms is shown for zero and non-zero initial conditions. A lemma and two theorems are formulated, allowing the blind extraction of the signal to be proved and the number of solutions relevant to signal extraction to be determined. Modeling established that the fixed-point algorithm based on maximization of the absolute kurtosis value is more efficient, separating the desired pulse signal with a signal-to-noise ratio 30 dB higher than the gradient algorithm with the same objective function. Computer modeling of the AbsoKurt and AbsoKurtFP algorithms was carried out in Simulink using MATLAB.
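
    As an illustration of the underlying idea (not the paper's AbsoKurt or AbsoKurtFP implementations, whose details are not given in the abstract), the following sketch applies the classic fixed-point update for the kurtosis contrast to whitened observations in order to pull out one impulsive source; names and parameters are illustrative.

        import numpy as np

        def whiten(X):
            """Center and whiten observations X of shape (n_channels, n_samples)."""
            Xc = X - X.mean(axis=1, keepdims=True)
            cov = Xc @ Xc.T / Xc.shape[1]
            d, E = np.linalg.eigh(cov)
            return E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ Xc

        def extract_by_kurtosis(X, n_iter=200, tol=1e-8, seed=0):
            """Fixed-point extraction of one component with maximal |kurtosis|."""
            Z = whiten(X)
            w = np.random.default_rng(seed).standard_normal(Z.shape[0])
            w /= np.linalg.norm(w)
            for _ in range(n_iter):
                y = w @ Z
                # Fixed-point rule for the kurtosis contrast: w <- E[z y^3] - 3 w
                w_new = (Z * y**3).mean(axis=1) - 3.0 * w
                w_new /= np.linalg.norm(w_new)
                if abs(abs(w_new @ w) - 1.0) < tol:
                    w = w_new
                    break
                w = w_new
            return w @ Z   # estimated impulsive source, up to scale and sign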

    Early hospital mortality prediction using vital signals

    Early hospital mortality prediction is critical as intensivists strive to make efficient medical decisions about severely ill patients staying in intensive care units. As a result, various methods have been developed to address this problem based on clinical records. However, some laboratory test results are time-consuming to obtain and process. In this paper, we propose a novel method to predict mortality using features extracted from the heart signals of patients within the first hour of ICU admission. In order to predict the risk, quantitative features have been computed based on the heart rate signals of ICU patients. Each signal is described in terms of 12 statistical and signal-based features. The extracted features are fed into eight classifiers: decision tree, linear discriminant, logistic regression, support vector machine (SVM), random forest, boosted trees, Gaussian SVM, and K-nearest neighbors (K-NN). To derive insight into the performance of the proposed method, several experiments have been conducted using the well-known clinical dataset named Medical Information Mart for Intensive Care III (MIMIC-III). The experimental results demonstrate the capability of the proposed method in terms of precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC). The decision tree classifier satisfies both accuracy and interpretability better than the other classifiers, producing an F1-score and AUC equal to 0.91 and 0.93, respectively. This indicates that heart rate signals can be used to predict mortality in ICU patients, achieving performance comparable with existing predictors that rely on high-dimensional features from clinical records, which need to be processed and may contain missing information. Comment: 11 pages, 5 figures; preprint of a paper accepted at IEEE/ACM CHASE 2018 and published in the Smart Health journal
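
    A hedged sketch of the kind of pipeline the abstract describes; the exact 12-feature set and preprocessing used by the authors are not listed here, so the features below are illustrative summary statistics of a first-hour heart-rate series fed to one of the listed classifiers.

        import numpy as np
        from scipy import stats
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import f1_score, roc_auc_score

        def hr_features(hr):
            """Summary statistics of one patient's first-hour heart-rate signal."""
            hr = np.asarray(hr, dtype=float)
            d = np.diff(hr)
            return np.array([hr.mean(), hr.std(), hr.min(), hr.max(),
                             stats.skew(hr), stats.kurtosis(hr),
                             np.abs(d).mean(),          # mean absolute successive difference
                             np.sqrt(np.mean(d**2))])   # RMSSD-like variability measure

        def train_and_score(X_raw, y):
            """X_raw: list of per-patient heart-rate arrays; y: 0/1 mortality labels."""
            X = np.vstack([hr_features(hr) for hr in X_raw])
            X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
            clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
            return (f1_score(y_te, clf.predict(X_te)),
                    roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))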

    Skewness and Kurtosis as Indicators of Non-Gaussianity in Galactic Foreground Maps

    Observational cosmology is entering an era in which high precision will be required in both measurement and data analysis. Accuracy, however, can only be achieved with a thorough understanding of potential sources of contamination from foreground effects. Our primary focus will be on non-Gaussian effects in foregrounds. This issue will be crucial for coming experiments to determine B-mode polarization. We propose a novel method for investigating a data set in terms of skewness and kurtosis in locally defined regions that collectively cover the entire sky. The method is demonstrated on two sky maps: (i) the SMICA map of Cosmic Microwave Background fluctuations provided by the Planck Collaboration and (ii) a version of the Haslam map at 408 MHz that describes synchrotron radiation. We find that skewness and kurtosis can be evaluated in combination to reveal local physical information. In the present case, we demonstrate that the local properties of both maps are predominantly Gaussian. This result was expected for the SMICA map; that it also applies to the Haslam map is surprising. The approach described here has a generality and flexibility that should make it useful in a variety of astrophysical and cosmological contexts. Comment: 15 pages, 7 figures, minor changes, as published in JCAP
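
    A minimal sketch of the local-moments idea, assuming a simple flat 2-D pixelization rather than the sky regions used for the actual SMICA and Haslam maps: tile the map into patches and evaluate skewness and excess kurtosis in each.

        import numpy as np
        from scipy import stats

        def local_skew_kurtosis(sky_map, patch=32):
            """Per-patch skewness and excess kurtosis for a 2-D map."""
            ny, nx = sky_map.shape
            skew = np.full((ny // patch, nx // patch), np.nan)
            kurt = np.full_like(skew, np.nan)
            for i in range(ny // patch):
                for j in range(nx // patch):
                    block = sky_map[i*patch:(i+1)*patch, j*patch:(j+1)*patch].ravel()
                    skew[i, j] = stats.skew(block)
                    kurt[i, j] = stats.kurtosis(block)   # excess kurtosis, 0 for a Gaussian
            return skew, kurt

        # For a Gaussian field the per-patch values scatter around (0, 0),
        # with a spread set by the number of pixels per patch.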

    ICA-Based Fetal Monitoring


    Seismic Ray Impedance Inversion

    This thesis investigates a prestack seismic inversion scheme implemented in the ray-parameter domain. Conventionally, most prestack seismic inversion methods are performed in the incidence-angle domain. However, inversion using the concept of ray impedance, which honours the ray-path variation that accompanies elastic-parameter variation according to Snell's law, shows a better capacity to discriminate between lithologies than conventional elastic impedance inversion. The procedure starts by transforming the data into the ray-parameter domain and then implements ray impedance inversion along constant-ray-parameter profiles. For the different constant-ray-parameter profiles, mixed-phase wavelets are initially estimated from the high-order statistics of the data and further refined after a proper well-to-seismic tie. With the estimated wavelets, a Cauchy inversion method is used to recover seismic reflectivity sequences suitable for blocky impedance inversion. The impedance inversion from reflectivity sequences adopts a standard generalised linear inversion scheme, whose results are utilised to identify rock properties and facilitate quantitative interpretation. It has also been demonstrated that elastic parameters can be further inverted from ray impedance values without eliminating an extra density term or introducing Gardner's relation to absorb it. Ray impedance inversion is extended to P-S converted waves by introducing the definition of converted-wave ray impedance. This quantity shows some advantages in connecting prestack converted-wave data with well logs, compared with the shear-wave elastic impedance derived from the Aki-Richards approximation to the Zoeppritz equations. An analysis of P-P and P-S wave data within the framework of ray impedance is conducted on a real multicomponent dataset, which reduces the uncertainty in lithology identification. Inversion is the key method in generating the examples throughout the thesis, as we believe it renders robust solutions to geophysical problems. Apart from the reflectivity-sequence, ray-impedance and elastic-parameter inversions mentioned above, inversion methods are also adopted in transforming prestack data from the offset domain to the ray-parameter domain, in mixed-phase wavelet estimation, and in the registration of P-P and P-S waves for joint analysis. The ray impedance inversion methods are successfully applied to different types of datasets. For each step towards ray impedance inversion, the advantages, disadvantages and limitations of the algorithms adopted are detailed. In conclusion, the ray-impedance analyses demonstrated in this thesis compare favourably with classical elastic impedance methods, and the author recommends them for wider application.
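
    The blocky impedance inversion mentioned above rests on the standard relation between layer reflectivity and impedance; the sketch below shows only that relation and its recursive inverse, not the thesis's Cauchy-regularised generalised linear inversion.

        import numpy as np

        def reflectivity_from_impedance(z):
            """r_k = (Z_{k+1} - Z_k) / (Z_{k+1} + Z_k) for a layered impedance profile."""
            z = np.asarray(z, dtype=float)
            return (z[1:] - z[:-1]) / (z[1:] + z[:-1])

        def impedance_from_reflectivity(r, z0):
            """Recursive inverse: Z_{k+1} = Z_k (1 + r_k) / (1 - r_k), given the top value Z_0."""
            z = [float(z0)]
            for rk in np.asarray(r, dtype=float):
                z.append(z[-1] * (1.0 + rk) / (1.0 - rk))
            return np.array(z)

        # Round trip on a synthetic three-layer model
        z_true = np.array([2.0e6, 2.6e6, 2.3e6])   # impedance in kg m^-2 s^-1
        r = reflectivity_from_impedance(z_true)
        assert np.allclose(impedance_from_reflectivity(r, z_true[0]), z_true)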

    Analytical Solutions to the Mass-Anisotropy Degeneracy with Higher Order Jeans Analysis: A General Method

    The Jeans analysis is often used to infer the total density of a system by relating the velocity moments of an observable tracer population to the underlying gravitational potential. This technique has recently been applied in the search for Dark Matter in objects such as dwarf spheroidal galaxies, where the presence of Dark Matter is inferred via stellar velocities. A precise account of the density is needed to constrain the expected gamma-ray flux from DM self-annihilation and to distinguish between cold and warm dark matter models. Unfortunately, the traditional method of fitting the second-order Jeans equation to the tracer dispersion suffers from an unbreakable degeneracy of solutions due to the unknown velocity anisotropy of the projected system. To tackle this degeneracy one can appeal to higher moments of the Jeans equations. By introducing an analog of the Binney anisotropy parameter at fourth order, beta', we create a framework that encompasses all solutions to the fourth-order Jeans equations, rather than only those in the literature that impose unnecessary correlations between the anisotropy of the second- and fourth-order moments. The condition beta' = f(beta) ensures that the degeneracy is lifted, and we interpret the separable augmented density system as the order-independent case beta' = beta. For a generic choice of beta' we present the line-of-sight projection of the fourth moment and show how it can be incorporated into a joint likelihood analysis of the dispersion and kurtosis. Having presented the mathematical framework, we then use it to develop a statistical method for placing constraints on dark matter density parameters from discrete velocity data. The method is tested on simulated dwarf spheroidal data sets, leading to results which motivate the study of real dwarf spheroidal data sets. Comment: 21 pages, 15 figures. Accepted by MNRAS. Typo corrected in eq. 3
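
    As a small illustration of the observables entering such a joint likelihood (not the paper's framework itself, which requires the fourth-order Jeans equations and the beta' parameter), the binned line-of-sight dispersion and kurtosis can be computed directly from discrete velocity data:

        import numpy as np
        from scipy import stats

        def dispersion_kurtosis_profile(R, v_los, n_bins=10):
            """Velocity dispersion and kurtosis in radial bins of projected radius R."""
            R, v_los = np.asarray(R, float), np.asarray(v_los, float)
            edges = np.quantile(R, np.linspace(0.0, 1.0, n_bins + 1))
            idx = np.clip(np.searchsorted(edges, R, side="right") - 1, 0, n_bins - 1)
            centres = 0.5 * (edges[:-1] + edges[1:])
            sigma = np.array([v_los[idx == k].std(ddof=1) for k in range(n_bins)])
            kappa = np.array([stats.kurtosis(v_los[idx == k], fisher=False)
                              for k in range(n_bins)])   # equals 3 for a Gaussian
            return centres, sigma, kappa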

    Efficient Signatures Verification System Based on Artificial Neural Networks

    Biometrics refers to systems that authenticate human identity using features such as retina scans, thumb and fingerprint scanning, face recognition and signature recognition. Signatures are a simple and natural method of verifying a person's identity: a signature can be saved as an image and verified by matching, using neural networks. Signature verification can be offline or online. In this work, we present a system for offline signature verification. The user submits a number of signatures, from which two types of features are extracted: statistical features and structural features. A vector obtained from each of them is used to train a propagation neural network in the verification stage. A test signature is then taken from the user and compared with those the network was trained on. A test experiment was carried out with two sets of data. The first set, with four signatures from each user, is used to train the propagation neural network in its verification stage. The second set, consisting of one signature sample from each of the 20 persons, is used as the test set for the system. A negative identification test was also carried out, using the signature of one person against the other users' signatures. The experimental results showed excellent false rejection and false acceptance rates.
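
    A hedged sketch of one possible realisation (the exact statistical and structural features and network architecture are not described in the abstract, so the descriptors and network size below are illustrative): global statistical features of a binarised signature image fed to a small feed-forward network trained by back-propagation.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def signature_features(img):
            """img: 2-D grayscale array; returns simple statistical descriptors."""
            ink = img < img.mean()                     # crude binarisation: dark pixels are ink
            ys, xs = np.nonzero(ink)
            h, w = ys.ptp() + 1, xs.ptp() + 1          # bounding-box height and width
            return np.array([ink.mean(),               # ink density
                             h / w,                    # aspect ratio
                             xs.mean() / img.shape[1], # normalised centroid x
                             ys.mean() / img.shape[0], # normalised centroid y
                             xs.std() / img.shape[1],  # horizontal spread
                             ys.std() / img.shape[0]]) # vertical spread

        def train_verifier(images, labels):
            """images: list of signature images; labels: user identities."""
            X = np.vstack([signature_features(im) for im in images])
            net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
            return net.fit(X, labels)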

    A novel lip geometry approach for audio-visual speech recognition

    By identifying lip movements and characterizing their associations with speech sounds, the performance of speech recognition systems can be improved, particularly when operating in noisy environments. Various methods have been studied by research groups around the world in recent years to incorporate lip movements into speech recognition; however, exactly how best to incorporate the additional visual information is still not known. This study aims to extend the knowledge of the relationships between visual and speech information, specifically using lip geometry information because of its robustness to head rotation and the smaller number of features required to represent movement. A new method has been developed to extract lip geometry information, to perform classification and to integrate the visual and speech modalities. This thesis makes several contributions. First, this work presents a new method to extract lip geometry features using the combination of a skin-colour filter, a border-following algorithm and a convex hull approach. The proposed method was found to improve lip shape extraction performance compared to existing approaches. Lip geometry features including height, width, ratio, area, perimeter and various combinations of these features were evaluated to determine which performs best when representing speech in the visual domain. Second, a novel template matching technique able to adapt to dynamic differences in the way words are uttered by speakers has been developed, which determines the best fit of an unseen feature signal to those stored in a database template. Third, following an evaluation of integration strategies, a novel method has been developed based on an alternative decision fusion strategy, in which the outcome from the visual or speech modality is chosen by measuring the quality of the audio, based on kurtosis and skewness analysis and driven by white-noise confusion. Finally, the performance of the new methods introduced in this work is evaluated using the CUAVE and LUNA-V data corpora under a range of different signal-to-noise ratio conditions using the NOISEX-92 dataset.
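
    A rough sketch of the kind of extraction pipeline described above, using OpenCV's border-following contour extraction and convex hull; the colour thresholds, region handling and feature normalisation here are placeholders rather than the tuned values developed in the thesis.

        import cv2
        import numpy as np

        def lip_geometry_features(bgr_frame, lower_hsv=(0, 40, 60), upper_hsv=(20, 180, 255)):
            """Return simple lip geometry features from a colour mouth-region frame."""
            hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
            # Border following: keep the largest external contour (OpenCV 4.x return signature)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return None
            hull = cv2.convexHull(max(contours, key=cv2.contourArea))
            x, y, w, h = cv2.boundingRect(hull)
            return {"height": h,
                    "width": w,
                    "ratio": h / w,
                    "area": cv2.contourArea(hull),
                    "perimeter": cv2.arcLength(hull, True)}   # closed hull perimeter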