
    Can giant radio halos probe the merging rate of galaxy clusters?

    Radio and X-ray observations of galaxy clusters probe a direct link between cluster mergers and giant radio halos (RH), suggesting that these sources can be used as probes of the cluster merging rate with cosmic time. In this paper we carry out an explorative study that combines the observed fractions of merging clusters ($f_m$) and of clusters with RH ($f_{RH}$) with the merging rate predicted by cosmological simulations, and attempt to infer constraints on the merger properties of clusters that appear disturbed in X-rays and of clusters with RH. We use morphological parameters to identify merging systems and analyze the currently largest sample of clusters with radio and X-ray data ($M_{500} > 6 \times 10^{14}\,M_\odot$ and $0.2 < z < 0.33$, from the Planck SZ cluster catalogue). We find that in this sample $f_m \sim 62$-$67\%$ while $f_{RH} \sim 44$-$51\%$. The comparison of the theoretical $f_m$ with the observed one allows us to constrain the combination $(\xi_m, \tau_m)$, where $\xi_m$ and $\tau_m$ are the minimum merger mass ratio and the timescale of the merger-induced disturbance. Assuming $\tau_m \sim 2$-$3$ Gyr, as constrained by simulations, we find that the observed $f_m$ matches the theoretical one for $\xi_m \sim 0.1$-$0.18$. This is consistent with optical and near-IR observations of clusters in the sample ($\xi_m \sim 0.14$-$0.16$). The fact that RH are found only in a fraction of merging clusters may suggest that the merger events generating RH are characterized by a larger mass ratio; this seems supported by optical/near-IR observations of RH clusters in the sample ($\xi_m \sim 0.2$-$0.25$). Alternatively, RH may be generated in all mergers, but their lifetime is shorter than $\tau_m$ (by $\sim f_{RH}/f_m$). This is an explorative study; however, it suggests that follow-up studies using the forthcoming radio surveys and adequate numerical simulations have the potential to derive quantitative constraints on the link between the cluster merging rate and RH at different cosmic epochs and for different cluster masses.
    Comment: 10 pages, 3 figures, accepted for publication in A&A
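    The closing lifetime argument lends itself to a quick back-of-the-envelope check: if every merger produced a RH, $f_{RH}$ would match $f_m$, so the shortfall can be read as a halo lifetime $\tau_{RH} \sim \tau_m \,(f_{RH}/f_m)$. The sketch below brackets this using only the ranges quoted in the abstract; it is an illustration of the arithmetic, not the authors' analysis.

```python
# Back-of-the-envelope estimate of the radio halo lifetime implied by the
# abstract: tau_RH ~ tau_m * (f_RH / f_m). All values are the ranges quoted above.

f_m = (0.62, 0.67)    # observed fraction of merging clusters
f_RH = (0.44, 0.51)   # observed fraction of clusters hosting a radio halo
tau_m = (2.0, 3.0)    # merger-induced disturbance timescale [Gyr], from simulations

# Bracket tau_RH with the extreme combinations of the quoted ranges.
tau_RH_lo = tau_m[0] * f_RH[0] / f_m[1]
tau_RH_hi = tau_m[1] * f_RH[1] / f_m[0]
print(f"tau_RH ~ {tau_RH_lo:.1f}-{tau_RH_hi:.1f} Gyr")  # roughly 1.3-2.5 Gyr
```

    Under these numbers the implied halo lifetime is roughly 1.3-2.5 Gyr, i.e. shorter than the assumed merger disturbance timescale, as the abstract argues.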

    Anxious to see you: Neuroendocrine mechanisms of social vigilance and anxiety during adolescence.

    Social vigilance is a behavioral strategy commonly used in adverse or changing social environments. In animals, a combination of avoidance and vigilance allows an individual to evade potentially dangerous confrontations while monitoring the social environment to identify favorable changes. However, prolonged use of this behavioral strategy in humans is associated with an increased risk of anxiety disorders, a major burden for human health. Elucidating the mechanisms of social vigilance in animals could provide important clues for new treatment strategies for social anxiety. Importantly, the prevalence of social anxiety increases significantly during adolescence. We hypothesize that many of the actions typically characterized as anxiety behaviors begin to emerge during this time as strategies for navigating more complex social structures. Here, we consider how the social environment and the pubertal transition shape the neural circuits that modulate social vigilance, focusing on the bed nucleus of the stria terminalis and the prefrontal cortex. The emergence of gonadal hormone secretion during adolescence has important effects on the function and structure of these circuits and may play a role in the emergence of a notable sex difference in anxiety rates across adolescence. However, the significance of these changes in the context of anxiety is still uncertain, as not enough studies are sufficiently powered to evaluate sex as a biological variable. We conclude that greater integration between human and animal models will aid the development of more effective strategies for treating social anxiety.

    Validity and reliability of the Structured Clinical Interview for Depersonalization-Derealization Spectrum (SCI-DER).

    This study evaluates the validity and reliability of a new instrument developed to assess symptoms of depersonalization: the Structured Clinical Interview for the Depersonalization-Derealization Spectrum (SCI-DER). The instrument is based on a spectrum model that emphasizes soft signs and sub-threshold syndromes as well as clinical and subsyndromal manifestations. Items of the interview include, in addition to the DSM-IV criteria for depersonalization, a number of features derived from clinical experience and from a review of phenomenological descriptions. Study participants included 258 consecutive patients with mood and anxiety disorders: 16.7% bipolar I disorder, 18.6% bipolar II disorder, 32.9% major depression, 22.1% panic disorder, 4.7% obsessive-compulsive disorder, and 1.5% generalized anxiety disorder; 2.7% of patients were also diagnosed with depersonalization disorder. A comparison group of 42 unselected controls was enrolled at the same site. The SCI-DER showed excellent reliability and good concurrent validity with the Dissociative Experiences Scale. It significantly discriminated subjects with any diagnosis of mood and anxiety disorders from controls, and subjects with depersonalization disorder from controls. The hypothesized structure of the instrument was confirmed empirically.
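    For readers unfamiliar with the two statistics reported here, the sketch below shows how internal-consistency reliability (Cronbach's alpha) and concurrent validity (a Pearson correlation with Dissociative Experiences Scale scores) are conventionally computed. Everything in it is simulated: the 40-item count, the dichotomous scoring, and the data are placeholders, not the SCI-DER study's materials.

```python
import numpy as np

# Conventional reliability/validity statistics on simulated interview data.
# 258 subjects mirrors the sample size above; the 40 dichotomous items and the
# latent-trait data-generating process are purely hypothetical.
rng = np.random.default_rng(0)
trait = rng.normal(size=(258, 1))                              # latent severity per subject
items = (rng.normal(size=(258, 40)) + trait > 0).astype(int)   # 40 correlated yes/no items

def cronbach_alpha(x: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

total = items.sum(axis=1)
des = total + rng.normal(0, 5, size=258)   # fake DES scores tracking the same trait

print(f"reliability alpha = {cronbach_alpha(items):.2f}")
print(f"concurrent validity r = {np.corrcoef(total, des)[0, 1]:.2f}")
```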

    Fast and Accurate Error Simulation for CNNs Against Soft Errors

    The great quest for adopting AI-based computation in safety- and mission-critical applications motivates interest in methods for assessing the robustness of an application with respect not only to its training/tuning but also to errors due to faults, in particular soft errors affecting the underlying hardware. Two strategies exist: architecture-level fault injection and application-level functional error simulation. We present a framework for the reliability analysis of Convolutional Neural Networks (CNNs) via an error simulation engine that exploits a set of validated error models extracted from a detailed fault injection campaign. These error models are defined based on the corruption patterns that faults induce in the output of CNN operators, and they bridge the gap between fault injection and error simulation, exploiting the advantages of both approaches. We compared our methodology against SASSIFI for the accuracy of functional error simulation with respect to fault injection, and against TensorFI in terms of speedup of the error simulation strategy. Experimental results show that our methodology achieves about 99% accuracy in reproducing fault effects with respect to SASSIFI, and a speedup ranging from 44x up to 63x with respect to TensorFI, which implements only a limited set of error models.
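    As a concrete picture of what application-level functional error simulation means, the sketch below corrupts the output tensor of a stand-in CNN operator according to named corruption patterns. The two patterns and the injection function are hypothetical illustrations; the framework's real error models are the ones extracted and validated from its fault injection campaign.

```python
import numpy as np

# Functional error simulation: emulate a hardware fault by corrupting an
# operator's output according to an error model, instead of injecting the
# fault at the hardware/architecture level.
def inject_error(fmap: np.ndarray, pattern: str, rng: np.random.Generator) -> np.ndarray:
    out = fmap.copy()
    c, h, w = out.shape
    ci, hi, wi = rng.integers(c), rng.integers(h), rng.integers(w)
    if pattern == "single_point":    # one output value corrupted
        out[ci, hi, wi] = rng.normal(0.0, 1e3)
    elif pattern == "same_row":      # a whole row of one feature map corrupted
        out[ci, hi, :] = rng.normal(0.0, 1e3, size=w)
    return out

rng = np.random.default_rng(42)
fmap = rng.standard_normal((16, 32, 32))   # hypothetical conv output (C, H, W)
corrupted = inject_error(fmap, "same_row", rng)
print("corrupted elements:", np.count_nonzero(corrupted != fmap))  # 32
```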

    153 MHz GMRT follow-up of steep-spectrum diffuse emission in galaxy clusters

    In this paper we present new high-sensitivity 153 MHz Giant Metrewave Radio Telescope follow-up observations of the diffuse steep-spectrum cluster radio sources in the galaxy clusters Abell 521, Abell 697, and Abell 1682. Abell 521 hosts a relic and, together with Abell 697, it also hosts a giant very steep spectrum radio halo. Abell 1682 is a more complex system with candidate steep-spectrum diffuse emission. We imaged the diffuse radio emission in these clusters at 153 MHz and provide flux density measurements of all the sources at this frequency. Our new flux density measurements, coupled with the existing data at higher frequencies, allow us to study the total spectrum of the halos and the relic over at least one order of magnitude in frequency. Our images confirm the presence of a very steep "diffuse component" in Abell 1682. We found that the spectrum of the relic in Abell 521 can be fitted by a single power law with $\alpha = 1.45 \pm 0.02$ from 153 MHz to 5 GHz. Moreover, we confirm that the halos in Abell 521 and Abell 697 have a very steep spectrum, with $\alpha = 1.8$-$1.9$ and $\alpha = 1.52 \pm 0.05$ respectively. Even with the inclusion of the 153 MHz flux density information it is impossible to discriminate between power-law and curved spectra, as derived from homogeneous turbulent re-acceleration. The latter are favored on the basis of simple energetic arguments, and we expect that LOFAR will finally unveil the shape of the spectra of radio halos below 100 MHz, thus providing clues on their origin.
    Comment: 11 pages, 6 figures, 3 tables, accepted for publication in A&A
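    The single power-law fit quoted here reduces to a straight-line fit in log-log space, since $S(\nu) \propto \nu^{-\alpha}$ implies $\log S = -\alpha \log \nu + \mathrm{const}$. The sketch below shows the standard procedure; the flux densities are placeholders, not the measured values from the paper.

```python
import numpy as np

# Fit S(nu) ∝ nu^-alpha by linear regression in log-log space; alpha is the
# negative of the fitted slope. Frequencies span 153 MHz to 5 GHz as in the
# paper, but the flux densities below are made-up placeholders.
freq_ghz = np.array([0.153, 0.325, 0.610, 1.4, 5.0])    # observing frequencies [GHz]
flux_mjy = np.array([900.0, 300.0, 120.0, 35.0, 5.5])   # hypothetical flux densities [mJy]

slope, _ = np.polyfit(np.log10(freq_ghz), np.log10(flux_mjy), 1)
print(f"alpha = {-slope:.2f}")
```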

    Selective Hardening of CNNs based on Layer Vulnerability Estimation

    There is an increasing interest in employing Convolutional Neural Networks (CNNs) in safety-critical application fields. In such scenarios, it is vital to ensure that the application fulfills the reliability requirements expressed by customers and design standards. On the other hand, given the extremely high computational requirements of CNNs, it is also paramount to achieve high performance. To meet both reliability and performance requirements, partial and selective replication of the layers of the CNN can be applied. In this paper, we identify the most critical layers of a CNN in terms of vulnerability to faults and selectively duplicate them to achieve a target reliability vs. execution time trade-off. To this end, we perform a design space exploration to identify the layers to be duplicated. Results on the application of the proposed approach to four case study CNNs are reported.
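    A minimal sketch of the kind of selection step such an approach implies is shown below: given per-layer vulnerability estimates and the execution-time cost of duplicating each layer, greedily duplicate the layers with the best vulnerability reduction per unit cost until a time budget is exhausted. The layer names, scores, costs, and the greedy heuristic itself are illustrative assumptions, not the paper's actual design space exploration.

```python
# Greedy selective duplication under a time budget (all numbers hypothetical).
layers = [  # (name, estimated vulnerability, duplication cost in ms)
    ("conv1", 0.30, 1.2),
    ("conv2", 0.25, 2.5),
    ("conv3", 0.10, 3.0),
    ("fc1",   0.35, 0.8),
]
budget_ms = 3.0

duplicated = []
# Pick layers by vulnerability reduction per millisecond of added latency.
for name, vuln, cost in sorted(layers, key=lambda l: l[1] / l[2], reverse=True):
    if cost <= budget_ms:
        duplicated.append(name)
        budget_ms -= cost
print("duplicated layers:", duplicated)  # ['fc1', 'conv1'] under this budget
```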

    Analyzing the Reliability of Alternative Convolution Implementations for Deep Learning Applications

    Convolution represents the core of Deep Learning (DL) applications, enabling the automatic extraction of features from raw input data. Several implementations of the convolution operator have been proposed, and their impact on the performance of DL applications has been studied. However, no specific reliability-related analysis has been carried out. In this paper, we apply the CLASSES cross-layer reliability analysis methodology for an in-depth study aimed at: i) analyzing and characterizing the effects of Single Event Upsets occurring in Graphics Processing Units while executing the convolution operators; and ii) identifying whether a convolution implementation is more robust than the others. The outcomes can then be exploited to tailor better hardening schemes for DL applications, improving reliability and reducing overhead.
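    To see why the choice of implementation can matter for reliability, the sketch below computes the same convolution two ways, directly and via im2col + matrix multiply, and emulates a single upset in the flattened weight buffer that the im2col path reuses for every output pixel: one corrupted value then propagates to the whole output map. This is a simplified CPU-side illustration under assumed fault locations, not the CLASSES methodology or a GPU experiment.

```python
import numpy as np

def conv_direct(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    # Straightforward 3x3 "valid" convolution with explicit loops.
    h, w = x.shape[0] - 2, x.shape[1] - 2
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (x[i:i+3, j:j+3] * k).sum()
    return out

def conv_im2col(x: np.ndarray, k: np.ndarray, corrupt: bool = False) -> np.ndarray:
    # Same convolution as im2col + GEMM; optionally corrupt one reused weight.
    h, w = x.shape[0] - 2, x.shape[1] - 2
    cols = np.stack([x[i:i+3, j:j+3].ravel() for i in range(h) for j in range(w)])
    kf = k.ravel().copy()
    if corrupt:
        kf[0] = 1e6   # emulate an upset in the shared weight buffer
    return (cols @ kf).reshape(h, w)

rng = np.random.default_rng(0)
x, k = rng.standard_normal((8, 8)), rng.standard_normal((3, 3))
clean, faulty = conv_direct(x, k), conv_im2col(x, k, corrupt=True)
# The single upset corrupts every output pixel (36 of 36 here), whereas a
# transient fault in one direct-loop iteration would corrupt a single pixel.
print("corrupted outputs:", np.count_nonzero(~np.isclose(clean, faulty)))
```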

    Approximation-Based Fault Tolerance in Image Processing Applications

    Image processing applications exhibit an intrinsic degree of fault tolerance due to i) the redundant nature of images, and ii) the possible ability of the consumers of the application output to effectively carry out their task even when it is slightly corrupted. In this application scenario the classical Duplication with Comparison (DWC) scheme, which rejects images (and requires re-executions) when the two replicas' outputs differ in a per-pixel comparison, may be over-conservative. In this article, we propose a novel lightweight fault-tolerance scheme specifically tailored for image processing applications. The proposed scheme enhances the state of the art by: i) improving the DWC scheme by replacing one of the two exact replicas with an approximated counterpart, and ii) distinguishing between usable and unusable images, instead of corrupted and uncorrupted ones, by means of a Convolutional Neural Network-based checker. To tune the proposed scheme we introduce a specific design methodology that optimizes both the execution time and the fault detection capability of the hardened system. We report the results of the application of the proposed approach to two case studies; our proposal achieves an average execution time reduction larger than 30% w.r.t. DWC with re-execution, while misclassifying less than 4% of unusable images.
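    A minimal sketch of the scheme's structure is given below, with a simple mean-absolute-difference threshold standing in for the CNN-based usability checker: run the exact filter and a cheaper approximate replica, and trigger re-execution only when they disagree beyond a tolerance, rather than on any per-pixel mismatch as in classical DWC. The two filters, the tolerance value, and the threshold checker are illustrative assumptions.

```python
import numpy as np

def exact_blur(img: np.ndarray) -> np.ndarray:
    # Exact replica: 3x3 mean filter on the interior pixels.
    out = img.astype(float).copy()
    out[1:-1, 1:-1] = sum(img[1+di:img.shape[0]-1+di, 1+dj:img.shape[1]-1+dj]
                          for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0
    return out

def approx_blur(img: np.ndarray) -> np.ndarray:
    # Approximate replica: cheaper 5-point cross filter.
    out = img.astype(float).copy()
    out[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] + img[1:-1, 1:-1]) / 5.0
    return out

def usable(exact: np.ndarray, approx: np.ndarray, tol: float = 0.2) -> bool:
    # Stand-in for the CNN checker: accept unless the replicas diverge too much.
    return float(np.mean(np.abs(exact - approx))) < tol

img = np.random.default_rng(1).random((64, 64))
out = exact_blur(img)
print("usable" if usable(out, approx_blur(img)) else "re-execute")
```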