10 research outputs found

    Reconstructing mass profiles of simulated galaxy clusters by combining Sunyaev-Zeldovich and X-ray images

    Full text link
    We present a method to recover mass profiles of galaxy clusters by combining thermal Sunyaev-Zeldovich (tSZ) and X-ray imaging data, thereby avoiding the use of any X-ray spectroscopic information. This method, which develops the geometrical deprojection technique presented in Ameglio et al. (2007), implements the solution of the hydrostatic equilibrium equation. To quantify the efficiency of our mass reconstructions, we apply our technique to a set of hydrodynamical simulations of galaxy clusters. We propose two versions of our mass-reconstruction method. Method 1 is completely model-independent, while Method 2 instead assumes the analytic mass profile proposed by Navarro et al. (1997) (NFW). We find that the main source of bias in recovering the mass profiles is deviations from hydrostatic equilibrium, which cause an underestimate of the mass of about 10 per cent at r_500 and up to 20 per cent at the virial radius. Method 1 provides a reconstructed mass that is biased low by about 10 per cent, with a 20 per cent scatter, with respect to the true mass profiles. Method 2 proves more stable, reducing the scatter to 10 per cent, but with a larger bias of 20 per cent, mainly induced by the deviations from equilibrium in the outskirts. To better understand the results of Method 2, we check how well it recovers the relation between mass and concentration parameter. When analyzing the 3D mass profiles, we find that including the inner 5 per cent of the virial radius in the fit biases the halo concentration high. Also, at fixed mass, hotter clusters tend to have larger concentrations. Our procedure recovers the concentration parameter essentially unbiased, but with a scatter of about 50 per cent. Comment: 13 pages, 11 figures, submitted to MNRAS.
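
    For reference, the hydrostatic step in this kind of reconstruction amounts to the standard mass estimator built from the deprojected gas density and temperature profiles; Method 2 would then fit the NFW form to that estimate. A minimal sketch, assuming SI-unit profile arrays and a fully ionized intracluster medium (this is not the authors' code):

    ```python
    import numpy as np

    G   = 6.674e-11   # gravitational constant [m^3 kg^-1 s^-2]
    k_B = 1.381e-23   # Boltzmann constant [J K^-1]
    m_p = 1.673e-27   # proton mass [kg]
    mu  = 0.59        # mean molecular weight, assumed (fully ionized ICM)

    def hydrostatic_mass(r, n_e, T):
        """M(<r) = -k_B T r / (G mu m_p) * (dln n_e/dln r + dln T/dln r),
        evaluated on deprojected density n_e(r) and temperature T(r)."""
        dln_n = np.gradient(np.log(n_e), np.log(r))
        dln_T = np.gradient(np.log(T), np.log(r))
        return -k_B * T * r / (G * mu * m_p) * (dln_n + dln_T)

    def nfw_mass(r, rho_s, r_s):
        """Enclosed mass of an NFW profile,
        rho(r) = rho_s / ((r/r_s) * (1 + r/r_s)^2)."""
        x = r / r_s
        return 4.0 * np.pi * rho_s * r_s**3 * (np.log(1.0 + x) - x / (1.0 + x))
    ```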

    Prosody based co-analysis for continuous recognition of coverbal gestures

    No full text
    Although recognition of natural speech and gestures has been studied extensively, previous attempts at combining them in a unified framework to boost classification were mostly semantically motivated, e.g., keyword-gesture co-occurrence. Such formulations inherit the complexity of natural language processing. This paper presents a Bayesian formulation that uses a phenomenon of gesture and speech articulation to improve the accuracy of automatic recognition of continuous coverbal gestures. Prosodic features from the speech signal were co-analyzed with the visual signal to learn the prior probability of co-occurrence of prominent spoken segments with particular kinematical phases of gestures. It was found that this co-analysis helps in detecting and disambiguating small hand movements, which subsequently improves the rate of continuous gesture recognition. The efficacy of the proposed approach was demonstrated on a large database collected from the Weather Channel broadcast. This formulation opens new avenues for bottom-up frameworks of multimodal integration.
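
    The co-analysis can be pictured as a Bayesian update: a prior over gesture phases, learned from co-occurrence with prosodically prominent speech segments, reweights the visual likelihood. A toy sketch with an assumed phase set and made-up probabilities (not the paper's implementation):

    ```python
    import numpy as np

    PHASES = ["rest", "preparation", "stroke", "retraction"]  # assumed phase set

    def posterior_phase(visual_likelihood, prominent, prior_given_prominence):
        """Bayes' rule: P(phase | visual, prosody) is proportional to
        P(visual | phase) * P(phase | prominence)."""
        unnorm = visual_likelihood * prior_given_prominence[prominent]
        return unnorm / unnorm.sum()

    # An ambiguous small hand movement: near-flat visual likelihood.
    likelihood = np.array([0.30, 0.25, 0.25, 0.20])
    prior = {True:  np.array([0.05, 0.15, 0.65, 0.15]),   # prominent speech
             False: np.array([0.40, 0.25, 0.10, 0.25])}   # non-prominent speech
    print(dict(zip(PHASES, posterior_phase(likelihood, True, prior))))
    # Prosodic prominence tips the posterior toward "stroke".
    ```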

    A real-time framework for natural multimodal interaction with large screen displays

    No full text
    This paper presents a framework for designing a natural multimodal human-computer interaction (HCI) system. The core of the proposed framework is a principled method for combining information derived from audio and visual cues. To achieve natural interaction, both audio and visual modalities are fused, along with feedback through a large screen display. Careful design, with due consideration of all aspects of the system's interaction cycle and integration, has resulted in a successful system. The performance of the proposed framework has been validated through the development of several prototype systems as well as commercial applications for the retail and entertainment industries. To assess the impact of these multimodal systems (MMS), informal studies were conducted. It was found that the system performed according to its specifications in 95% of the cases and that users showed ad-hoc proficiency, indicating natural acceptance of such systems.
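
    One common realization of such audio-visual combination is confidence-weighted late fusion of per-modality hypotheses. A minimal sketch with hypothetical labels and weights (the paper does not publish its fusion code):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        label: str     # e.g. a spoken command or a pointed-at screen target
        score: float   # per-modality classifier confidence in [0, 1]

    def fuse(audio, visual, w_audio=0.5, w_visual=0.5):
        """Linearly combine modality confidences per label and return
        the best fused hypothesis; the weights are assumed tunable."""
        combined = {}
        for hyp, w in [(h, w_audio) for h in audio] + [(h, w_visual) for h in visual]:
            combined[hyp.label] = combined.get(hyp.label, 0.0) + w * hyp.score
        best = max(combined, key=combined.get)
        return best, combined[best]

    # Speech suggests "select" while the hand points at the map region:
    audio  = [Hypothesis("select_map", 0.7), Hypothesis("zoom_out", 0.2)]
    visual = [Hypothesis("select_map", 0.6), Hypothesis("select_menu", 0.3)]
    print(fuse(audio, visual))   # ('select_map', 0.65)
    ```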

    Multi-modal Contact-Less Human Computer Interaction

    No full text
    We describe a contact-less Human Computer Interaction (HCI) system that aims to give paraplegics the opportunity to use computers without the need for additional invasive hardware. The proposed system is multi-modal, combining both visual and speech input. Visual input is provided through a standard web camera that captures images of the user's face; image processing techniques track head movements, making it possible to interact with the computer through head motion. Speech input is used to activate commonly used tasks that are normally triggered with the mouse or keyboard. The performance of the proposed system was evaluated using a number of specially designed test applications. According to the quantitative results, most HCI tasks can be performed with the same ease and accuracy as when using the touch pad of a portable computer.
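
    The head-motion side of such a system reduces to mapping tracked face displacement to cursor motion, typically with a dead zone to suppress jitter. A minimal sketch with assumed gain and dead-zone values (the face tracker itself is omitted):

    ```python
    def head_to_cursor(face_xy, neutral_xy, gain=8.0, dead_zone=3.0):
        """Map the tracked face position (pixels) relative to a calibrated
        neutral position to a relative cursor move. Displacements inside
        the dead zone are ignored to filter small involuntary movements."""
        dx = face_xy[0] - neutral_xy[0]
        dy = face_xy[1] - neutral_xy[1]
        move_x = gain * dx if abs(dx) > dead_zone else 0.0
        move_y = gain * dy if abs(dy) > dead_zone else 0.0
        return move_x, move_y

    print(head_to_cursor((325, 242), (320, 240)))   # (40.0, 0.0)
    ```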

    Robust Recognition of Emotion from Speech

    No full text
    This paper presents robust recognition of a subset of emotions by animated agents from salient spoken words. To develop and evaluate a model for each emotion in the chosen subset, both prosodic and acoustic features were used to extract intonational patterns and correlates of emotion from speech samples. The computed features were projected using a combination of linear projection techniques to obtain a compact and clustered representation. The projected features were used to build models of the emotions with a set of classifiers organized in a hierarchical fashion. The performance of the models was evaluated using a number of classifiers from the WEKA machine learning toolbox. Empirical analysis indicated that lexical information computed from both prosodic and acoustic features at the word level yielded robust classification of emotions.
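
    The projection-then-classify pipeline described here can be approximated with scikit-learn in place of WEKA: PCA for a compact representation, LDA for a class-clustered projection, then a classifier. A sketch on synthetic stand-in features (the hierarchical classifier organization is omitted):

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import SVC

    # Stand-in for word-level prosodic + acoustic feature vectors (4 emotions).
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(200, 40)), rng.integers(0, 4, size=200)

    model = make_pipeline(
        PCA(n_components=20),                          # compact representation
        LinearDiscriminantAnalysis(n_components=3),    # clustered projection
        SVC(kernel="rbf"),                             # final emotion classifier
    )
    model.fit(X, y)
    print(model.score(X, y))
    ```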

    Receptor density influences ligand-induced dopamine D2L receptor homodimerization

    No full text
    Chronic treatments with dopamine D2 receptor ligands induce fluctuations in D2 receptor density. Since D2 receptors tend to assemble as homodimers, we hypothesized that receptor density might influence constitutive and ligand-induced homodimerization. Using a nanoluciferase-based complementation assay to monitor dopamine D2L receptor homodimerization in a cellular model enabling tetracycline-controlled expression of dopamine D2L receptors, we observed that increasing receptor density promoted constitutive dopamine D2L receptor homodimerization. Receptor full agonists promoted homodimerization, while antagonists and partial agonists disrupted dopamine D2L receptor homodimers. High receptor densities enhanced this inhibitory effect only for receptor antagonists. Taken together, our findings indicate that both receptor density and receptor ligands influence dopamine D2L receptor homodimerization, albeit without a strict correlation with the ligands' intrinsic activity, highlighting further complexity in dopaminergic pharmacology.
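
    The density dependence reported here is what a simple mass-action picture of a monomer-dimer equilibrium would predict (an illustrative textbook model, not the authors'): for

    $$2M \rightleftharpoons D, \qquad K_d = \frac{[M]^2}{[D]}, \qquad R_T = [M] + 2[D],$$

    solving the quadratic for the free monomer concentration gives

    $$[M] = \frac{K_d}{4}\left(\sqrt{1 + \frac{8R_T}{K_d}} - 1\right), \qquad f_{\mathrm{dimer}} = \frac{2[D]}{R_T} = 1 - \frac{[M]}{R_T},$$

    so the dimer fraction rises monotonically toward 1 as the total receptor density R_T grows, consistent with the constitutive homodimerization observed at high expression levels.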