
    X-ray study of a sample of FR0 radio galaxies: unveiling the nature of the central engine

    FR0s are compact radio sources that represent the bulk of the Radio-Loud (RL) AGN population, but they are still poorly understood. Pilot studies of these sources have already been performed at radio and optical wavelengths: here we present the first X-ray study of a sample of 19 FR0 radio galaxies selected from the SDSS/NVSS/FIRST sample of Best & Heckman (2012), with redshift ≤ 0.15, radio size ≤ 10 kpc and optically classified as low-excitation galaxies (LEG). The X-ray spectra are modeled with a power-law component absorbed by the Galactic column density with, in some cases, a contribution from thermal extended gas. The X-ray photons are likely produced by the jet, as attested by the observed correlation between the X-ray (2-10 keV) and radio (5 GHz) luminosities, similar to FRIs. The estimated Eddington-scaled luminosities indicate a low accretion rate. Overall, we find that the X-ray properties of FR0s are indistinguishable from those of FRIs, thus adding another similarity between AGN associated with compact and extended radio sources. A comparison between FR0s and low-luminosity BL Lacs rules out important beaming effects in the X-ray emission of the compact radio galaxies. FR0s have different X-ray properties with respect to young radio sources (e.g. GPS/CSS sources), which are generally characterized by higher X-ray luminosities and more complex spectra. In conclusion, the paucity of extended radio emission in FR0s is probably related to the intrinsic properties of their jets, which prevent the formation of extended structures, and/or to intermittent activity of their engines. Comment: Accepted for publication in MNRAS (18 pages, 4 figures)
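    The Eddington-scaled luminosity mentioned above is a simple ratio; the following sketch (not code from the paper, with purely illustrative numbers for the luminosity, black-hole mass and bolometric correction) shows the arithmetic behind a "low accretion rate" claim:

```python
# Illustrative sketch (not from the paper): how an Eddington-scaled
# luminosity is obtained from the 2-10 keV luminosity. All numbers
# below (luminosity, black-hole mass, bolometric correction) are
# hypothetical, chosen only to show the arithmetic.

L_EDD_PER_MSUN = 1.26e38  # erg/s per solar mass (standard Eddington value)

def eddington_ratio(l_x_erg_s, m_bh_msun, bol_correction=10.0):
    """Return L_bol/L_Edd given the 2-10 keV luminosity in erg/s."""
    l_bol = bol_correction * l_x_erg_s      # assumed bolometric correction
    l_edd = L_EDD_PER_MSUN * m_bh_msun
    return l_bol / l_edd

# A hypothetical FR0-like source: L_X ~ 1e41 erg/s, M_BH ~ 1e8 M_sun
ratio = eddington_ratio(1e41, 1e8)
print(f"L_bol/L_Edd ~ {ratio:.1e}")  # ~7.9e-05: a very low accretion rate
```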

    The gamma-ray emission region in the FRII Radio Galaxy 3C 111

    The Broad Line Radio Galaxy 3C 111, characterized by a Fanaroff-Riley II (FRII) radio morphology, is one of the sources of the Misaligned Active Galactic Nuclei sample, consisting of Radio Galaxies and Steep Spectrum Radio Quasars, recently detected by the Fermi-Large Area Telescope. Our analysis of the 24-month gamma-ray light curve shows that 3C 111 was only occasionally detected at high energies. It was bright at the end of 2008 and faint, below the Fermi-Large Area Telescope sensitivity threshold, for the rest of the time. A multifrequency campaign on 3C 111, ongoing in the same period, revealed an increase of the mm, optical and X-ray fluxes in 2008 September-November, interpreted by Chatterjee et al. (2011) as due to the passage of a superluminal knot through the jet core. The temporal coincidence of the mm-optical-X-ray outburst with the GeV activity suggests a co-spatiality of the events, allowing, for the first time, the localization of the gamma-ray dissipative zone in a FRII jet. We argue that the GeV photons of 3C 111 are produced in a compact region confined within 0.1 pc and at a distance of about 0.3 pc from the black hole. Comment: 8 pages, 4 figures. ApJL in press

    The On-Site Analysis of the Cherenkov Telescope Array

    The Cherenkov Telescope Array (CTA) observatory will be one of the largest ground-based very high-energy gamma-ray observatories. The On-Site Analysis will be the first CTA scientific analysis of data acquired from the array of telescopes, at both the northern and southern sites. The On-Site Analysis will have two pipelines: the Level-A pipeline (also known as Real-Time Analysis, RTA) and the Level-B pipeline. The RTA performs data quality monitoring and must be able to issue automated alerts on variable and transient astrophysical sources within 30 seconds of the last acquired Cherenkov event that contributes to the alert, with a sensitivity no worse than a factor of 3 below that of the final pipeline. The Level-B Analysis has a better sensitivity (no worse than a factor of 2 below the final one), and its results should be available within 10 hours of the acquisition of the data: for this reason this analysis can be performed at the end of an observation or the next morning. The latency (in particular for the RTA) and sensitivity requirements are challenging because of the large data rate, a few GByte/s. The remote connection to the CTA candidate site, with a rather limited network bandwidth, makes the size of the exported data extremely critical and prevents any real-time processing of the data outside the site of the telescopes. For these reasons the analysis will be performed on-site, with infrastructure co-located with the telescopes, limited electrical power availability and a reduced possibility of human intervention. This means, for example, that the on-site hardware infrastructure should have low power consumption. 
A substantial effort towards the optimization of high-throughput computing services is envisioned, to provide hardware and software solutions with high throughput and low power consumption at low cost. Comment: In Proceedings of the 34th International Cosmic Ray Conference (ICRC2015), The Hague, The Netherlands. All CTA contributions at arXiv:1508.0589
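    The case for on-site processing follows from simple arithmetic on the quoted data rate; the sketch below uses an assumed observing-night length, not a figure from the text:

```python
# Back-of-envelope sketch of why the data must be processed on-site.
# The few-GB/s rate is from the text; the night length is an assumption.

data_rate_gb_s = 3.0   # "a few GByte/s" raw Cherenkov data rate
night_hours = 8.0      # assumed duration of an observing night

volume_tb = data_rate_gb_s * night_hours * 3600 / 1024
print(f"~{volume_tb:.0f} TB of raw data per night")  # ~84 TB
# Shipping tens of TB per night over a limited site link is impractical,
# hence the co-located On-Site Analysis infrastructure.
```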

    The OAS-Bologna Computing Cluster and the Scientific Software Available on It

    Within the Osservatorio di Astrofisica e Scienza dello Spazio di Bologna (OAS) there has always been a need for powerful, shared computing resources on which scientists can process the scientific data of projects and missions. The OAS Cluster was created to meet this need and is hosted in the computing centre of the OAS building at the CNR campus in Bologna. All OAS members can ask to be registered in the cluster's LDAP authentication system and thereby access the login nodes over the Internet, from which they can submit their jobs. The cluster can be used in two ways: interactively, or in the batch mode more typical of a cluster. Interactive access allows jobs, including graphical ones, to be launched in real time through suitable virtual-console programs; this kind of processing runs directly on the login nodes. The batch mode uses the Slurm queueing system to submit jobs in an organized way to the more powerful compute nodes, which are not directly accessible to users. The cluster also provides shared storage, organized into HOME for users' personal data, DATA for processing results, PROGRAMMI for storing processing modules, and LUSTRE for parallel computing. The advantage of a cluster is that users find the programs and compilers they need already installed in their main versions, and have at their disposal good computing power and storage space, so they can work far more comfortably than on their personal computers. The OAS Cluster cannot match large commercial or research clusters in power, but, being tailored to the needs of OAS members, it serves the institute well and can act as a training ground for later access to larger facilities when needed. 
The document describes in detail the hardware and software characteristics of the cluster, including all the main scientific software packages installed, with a brief explanation of their use
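    The batch mode described above can be illustrated with a minimal Slurm submission; the partition name, job name and analysis script here are hypothetical, not taken from the actual OAS configuration:

```python
# Minimal sketch of batch use of a Slurm cluster from a login node.
# Partition and script names are hypothetical examples.
import pathlib
import subprocess
import textwrap

job = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=reduce
    #SBATCH --partition=compute   # hypothetical partition name
    #SBATCH --ntasks=1
    #SBATCH --mem=8G
    python reduce_data.py         # hypothetical analysis script
""")

script = pathlib.Path("job.sh")
script.write_text(job)
# On a login node this would queue the job on the compute nodes:
# subprocess.run(["sbatch", str(script)], check=True)
```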

    Multiband Observations of the Quasar PKS 2326-502 during Active and Quiescent Gamma-Ray States in 2010-2012

    Quasi-simultaneous observations of the Flat Spectrum Radio Quasar PKS 2326-502 were carried out in the γ-ray, X-ray, UV, optical, near-infrared, and radio bands. Using these observations, we are able to characterize the spectral energy distribution (SED) of the source during two flaring γ-ray states and one quiescent state. These data were used to constrain one-zone leptonic models of the SEDs of each flare and investigate the physical conditions giving rise to them. While modeling one flare required only changes in the electron spectrum compared to the quiescent state, modeling the other flare required changes in both the electron spectrum and the size of the emitting region. These results are consistent with an emerging pattern of two broad classes of flaring states seen in blazars. Type 1 flares are explained by changes solely in the electron distribution, whereas type 2 flares require a change in an additional parameter. This suggests that different flares, even in the same source, may result from different physical conditions or different regions in the jet.

    A prototype for the real-time analysis of the Cherenkov Telescope Array

    The Cherenkov Telescope Array (CTA) observatory will be one of the largest ground-based very-high-energy (VHE) γ-ray observatories. CTA will achieve a factor of 10 improvement in sensitivity, from some tens of GeV to beyond 100 TeV, with respect to existing telescopes. The CTA observatory will be capable of issuing alerts on variable and transient sources to maximize the scientific return. To capture these phenomena during their evolution and for effective communication to the astrophysical community, speed is crucial. This requires a system with a reliable automated trigger that can issue alerts immediately upon detection of γ-ray flares. This will be accomplished by means of a Real-Time Analysis (RTA) pipeline, a key system of the CTA observatory. The latency and sensitivity requirements of the alarm system pose a challenge because of the anticipated large data rate, between 0.5 and 8 GB/s. As a consequence, substantial efforts toward the optimization of high-throughput computing services are envisioned. For these reasons our working group has started the development of a prototype of the Real-Time Analysis pipeline. The main goals of this prototype are to test: (i) a set of frameworks and design patterns useful for the inter-process communication between software processes running in memory; (ii) the sustainability of the foreseen CTA data rate in terms of data throughput with different hardware (e.g. accelerators) and software configurations; (iii) the reuse of non-real-time algorithms, or how much we need to simplify algorithms to be compliant with CTA requirements; (iv) interface issues between the different CTA systems. In this work we focus on goals (i) and (ii).
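    Goal (ii), sustaining a given data throughput between processes, can be illustrated with a toy shared-memory producer/consumer; this is an illustrative sketch, not the prototype's code, and the buffer sizes are far below the real CTA rates:

```python
# Toy sketch of measuring data throughput through a shared-memory buffer
# between two processes. Sizes are illustrative and far below the real
# 0.5-8 GB/s CTA rates; this is not the actual prototype.
import time
from multiprocessing import get_context, shared_memory

ctx = get_context("fork")        # POSIX-only; keeps the example guard-free
BUF_SIZE = 8 * 1024 * 1024       # 8 MiB frames, illustrative
FRAMES = 8

def consumer(shm_name, ready, done):
    shm = shared_memory.SharedMemory(name=shm_name)
    for _ in range(FRAMES):
        ready.get()                       # wait for a filled frame
        _ = shm.buf[0] + shm.buf[-1]      # stand-in for real analysis
        done.put(True)                    # hand the frame back
    shm.close()

shm = shared_memory.SharedMemory(create=True, size=BUF_SIZE)
ready, done = ctx.Queue(), ctx.Queue()
worker = ctx.Process(target=consumer, args=(shm.name, ready, done))
worker.start()
t0 = time.perf_counter()
for i in range(FRAMES):
    shm.buf[:] = bytes([i % 256]) * BUF_SIZE  # producer writes a frame
    ready.put(i)
    done.get()
rate = BUF_SIZE * FRAMES / (time.perf_counter() - t0) / 1e9
print(f"~{rate:.2f} GB/s through shared memory (toy figure)")
worker.join()
shm.close()
shm.unlink()
```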