45 research outputs found

    The properties of the AGN torus as revealed from a set of unbiased NuSTAR observations

    The obscuration observed in active galactic nuclei (AGN) is mainly caused by dust and gas distributed in a torus-like structure surrounding the supermassive black hole (SMBH). However, the X-ray properties of the AGN obscuring torus have not yet been fully investigated, due to the lack of high-quality data and proper models. In this work, we perform a broadband X-ray spectral analysis of a large, unbiased sample of obscured AGN (with line-of-sight column density 23 ≤ log(N_H) ≤ 24) in the nearby universe with high-quality archival NuSTAR data. The source spectra are analyzed using the recently developed borus02 model, which enables us to accurately characterize the physical and geometrical properties of AGN obscuring tori. We also compare our results obtained from the unbiased Compton-thin AGN with those of Compton-thick AGN. We find that Compton-thin and Compton-thick AGN may possess similar tori, whose average column density is Compton-thick (N_H,tor,ave ∼ 1.4×10^24 cm^-2), but they are observed through different (under-dense or over-dense) regions of the tori. We also find that the obscuring torus medium is significantly inhomogeneous, with the torus average column densities differing significantly from the line-of-sight column densities for most of the sources in the sample. The average torus covering factor of sources in our unbiased sample is c_f = 0.67, suggesting that the fraction of unobscured AGN is ∼33%. We develop a new method to measure the intrinsic line-of-sight column density distribution of AGN in the nearby universe, and find the result to be in good agreement with the constraints from recent population synthesis models.
    Comment: 16 pages, 14 figures, 7 tables; accepted by A&A
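
The covering-factor arithmetic and column-density classification in the abstract above can be sketched in a few lines of Python. This is our illustration, not the paper's code; the classification thresholds follow the conventional log(N_H) boundaries quoted in the abstract.

```python
# Illustrative sketch (not the paper's analysis): relate the torus covering
# factor to the expected unobscured fraction, and classify AGN by the log10
# of their line-of-sight column density N_H (in cm^-2).

def unobscured_fraction(covering_factor):
    """Fraction of sight lines that miss the torus entirely."""
    return 1.0 - covering_factor

def classify_by_nh(log_nh):
    """Conventional classification by log10(N_H / cm^-2)."""
    if log_nh < 22:
        return "unobscured"
    if log_nh < 24:
        return "Compton-thin"   # the sample selection: 23 <= log N_H <= 24
    return "Compton-thick"      # log N_H >= 24

cf = 0.67  # average covering factor reported for the sample
print(f"unobscured fraction ~ {unobscured_fraction(cf):.0%}")  # ~33%
print(classify_by_nh(23.5))   # Compton-thin
print(classify_by_nh(24.15))  # Compton-thick (log10(1.4e24) ~ 24.15)
```

The ∼33% unobscured fraction follows directly from c_f = 0.67 under the assumption that the covering factor equals the probability of an obscured sight line.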

    Multi-Modality Pathology Segmentation Framework: Application to Cardiac Magnetic Resonance Images

    Multiple sequences of cardiac magnetic resonance (CMR) images can provide complementary information for myocardial pathology (scar and edema). However, it is still challenging to fuse this underlying information effectively for pathology segmentation. This work presents an automatic cascade pathology segmentation framework based on multi-modality CMR images. It mainly consists of two neural networks: an anatomical structure segmentation network (ASSN) and a pathological region segmentation network (PRSN). Specifically, the ASSN aims to segment the anatomical structure where the pathology may exist, and it can provide a spatial prior for the pathological region segmentation. In addition, we integrate a denoising auto-encoder (DAE) into the ASSN to generate segmentation results with plausible shapes. The PRSN is designed to segment the pathological regions based on the result of the ASSN, in which a fusion block based on channel attention is proposed to better aggregate information from multi-modality CMR images. Experiments on the MyoPS2020 challenge dataset show that our framework can achieve promising performance for myocardial scar and edema segmentation.
    Comment: 12 pages, MyoPS 2020
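
The channel-attention fusion idea can be sketched without any deep-learning framework. The toy below (our simplification, not the authors' PRSN block, which would use learned gating layers) squeezes each channel by global average pooling, turns the pooled value into a gate, reweights the channels, and sums the modalities.

```python
import math

# Toy channel-attention fusion for multi-modality feature maps, each of
# shape C x H x W (nested lists). Assumption: a real implementation learns
# the gating function; here a plain sigmoid of the pooled value stands in.

def global_avg_pool(feature_map):
    """One scalar per channel: the mean over the H x W spatial grid."""
    return [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
            for ch in feature_map]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention_fuse(modalities):
    """Reweight each modality's channels by a per-channel gate, then sum."""
    fused = None
    for fm in modalities:
        weights = [sigmoid(s) for s in global_avg_pool(fm)]
        scaled = [[[v * w for v in row] for row in ch]
                  for ch, w in zip(fm, weights)]
        if fused is None:
            fused = scaled
        else:
            fused = [[[a + b for a, b in zip(r1, r2)]
                      for r1, r2 in zip(c1, c2)]
                     for c1, c2 in zip(fused, scaled)]
    return fused

# Two single-channel 1x2 "modalities": the informative one dominates.
fused = channel_attention_fuse([[[[1.0, 1.0]]], [[[0.0, 0.0]]]])
```

The design point is that channels with stronger global activation receive higher weight, so the fusion emphasises the modality carrying signal at that channel.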

    Automatic initialization and quality control of large-scale cardiac MRI segmentations

    Continuous advances in imaging technologies enable ever more comprehensive phenotyping of human anatomy and physiology. Concomitant reduction of imaging costs has resulted in widespread use of imaging in large clinical trials and population imaging studies. Magnetic Resonance Imaging (MRI), in particular, offers one-stop-shop multidimensional biomarkers of cardiovascular physiology and pathology. A wide range of analysis methods offer sophisticated cardiac image assessment and quantification for clinical and research studies. However, most methods have only been evaluated on relatively small databases often not accessible for open and fair benchmarking. Consequently, published performance indices are not directly comparable across studies, and their translation and scalability to large clinical trials or population imaging cohorts is uncertain. Most existing techniques still rely on considerable manual intervention for the initialization and quality control of the segmentation process, which becomes prohibitive when dealing with thousands of images. The contributions of this paper are three-fold. First, we propose a fully automatic method for initializing cardiac MRI segmentation, using image features and random forest regression to predict an initial position of the heart and key anatomical landmarks in an MRI volume. In processing a full imaging database, the technique predicts the optimal corrective displacements and positions in relation to the initial rough intersections of the long- and short-axis images. Second, we introduce for the first time a quality control measure capable of identifying incorrect cardiac segmentations without visual assessment. The method uses statistical, pattern and fractal descriptors in a random forest classifier to detect failures to be corrected or removed from subsequent statistical analysis. Finally, we validate these new techniques within a full pipeline for cardiac segmentation applicable to large-scale cardiac MRI databases.
The results obtained based on over 1200 cases from the Cardiac Atlas Project show the promise of fully automatic initialization and quality control for population studies.
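
One of the "fractal descriptors" a quality-control classifier could use is a box-counting estimate of the fractal dimension of a segmentation contour. The sketch below is our illustration of that descriptor, not necessarily the paper's exact feature.

```python
import math

# Box-counting fractal dimension of a 2-D point set (e.g. the pixel
# coordinates of a segmentation contour). A smooth contour gives a value
# near 1; jagged, failed segmentations tend to score higher.

def box_count(points, box_size):
    """Number of boxes of the given size occupied by the point set."""
    return len({(x // box_size, y // box_size) for x, y in points})

def box_counting_dimension(points, sizes=(1, 2, 4, 8)):
    """Least-squares slope of log N(s) versus log(1/s)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(points, s)) for s in sizes]
    n = len(sizes)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# A straight line of 64 pixels has dimension exactly 1.
line = [(i, 0) for i in range(64)]
print(box_counting_dimension(line))  # ~1.0
```

Such scalar descriptors can then be stacked into the feature vector fed to the random forest classifier described above.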

    Deep Generative Model-based Quality Control for Cardiac MRI Segmentation

    In recent years, convolutional neural networks have demonstrated promising performance in a variety of medical image segmentation tasks. However, when a trained segmentation model is deployed into the real clinical world, the model may not perform optimally. A major challenge is the potential for poor-quality segmentations generated due to degraded image quality or domain shift issues. There is a timely need to develop an automated quality control method that can detect poor segmentations and provide feedback to clinicians. Here we propose a novel deep generative model-based framework for quality control of cardiac MRI segmentation. It first learns a manifold of good-quality image-segmentation pairs using a generative model. The quality of a given test segmentation is then assessed by evaluating the difference from its projection onto the good-quality manifold. In particular, the projection is refined through iterative search in the latent space. The proposed method achieves high prediction accuracy on two publicly available cardiac MRI datasets. Moreover, it shows better generalisation ability than traditional regression-based methods. Our approach provides a real-time and model-agnostic quality control for cardiac MRI segmentation, which has the potential to be integrated into clinical image analysis workflows.
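
The projection-by-latent-search step can be illustrated with a toy one-dimensional "decoder". This is a deliberately simplified stand-in (the paper uses a learned deep generative model): we search the latent space for the code whose decoding is closest to the test input, and the residual distance serves as the quality score.

```python
# Toy illustration of quality scoring by projection onto a generative
# model's manifold. 'decoder' is a hypothetical stand-in for a trained
# generative model; the iterative search mimics the latent refinement.

def decoder(z):
    """Maps a 1-D latent code to a tiny 'segmentation' feature vector."""
    return [z, z * z]

def project_to_manifold(target, z0=0.0, step=0.5, iters=200):
    """Greedy iterative search in latent space: at each step, keep the
    candidate latent code whose decoding is closest to the target."""
    def dist(z):
        d = decoder(z)
        return sum((a - b) ** 2 for a, b in zip(d, target))
    z = z0
    for _ in range(iters):
        z = min([z - step, z, z + step], key=dist)
        step *= 0.97  # shrink the search step as the search converges
    return z, dist(z)

# A sample lying on the manifold (it equals decoder(0.5)) scores ~0;
# off-manifold inputs leave a positive residual, flagging poor quality.
z_hat, quality_score = project_to_manifold([0.5, 0.25])
```

The key property is model-agnosticism: the score only needs a decoder and a distance, not access to the segmentation network being audited.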

    High Throughput Computation of Reference Ranges of Biventricular Cardiac Function on the UK Biobank Population Cohort

    The exploitation of large-scale population data has the potential to improve healthcare by discovering and understanding patterns and trends within this data. To enable high throughput analysis of cardiac imaging data automatically, a pipeline should comprise quality monitoring of the input images, segmentation of the cardiac structures, assessment of the segmentation quality, and parsing of cardiac functional indexes. We present a fully automatic, high throughput image parsing workflow for the analysis of cardiac MR images, and test its performance on the UK Biobank (UKB) cardiac dataset. The proposed pipeline is capable of performing end-to-end image processing including: data organisation, image quality assessment, shape model initialisation, segmentation, segmentation quality assessment, and functional parameter computation; all without any user interaction. To the best of our knowledge, this is the first paper tackling the fully automatic 3D analysis of the UKB population study, providing reference ranges for all key cardiovascular functional indexes, from both left and right ventricles of the heart. We tested our workflow on a reference cohort of 800 healthy subjects for which manual delineations and reference functional indexes exist. Our results show statistically significant agreement between the manually obtained reference indexes and those automatically computed using our framework.
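
The "functional parameter computation" stage boils down to simple formulas applied to the segmented volumes. A minimal sketch (our illustration of the standard definitions, not the pipeline's code):

```python
# Standard biventricular functional indexes derived from segmented
# end-diastolic (EDV) and end-systolic (ESV) volumes, in millilitres.

def stroke_volume(edv_ml, esv_ml):
    """SV = EDV - ESV (mL)."""
    return edv_ml - esv_ml

def ejection_fraction(edv_ml, esv_ml):
    """EF (%) = (EDV - ESV) / EDV * 100."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

print(stroke_volume(120.0, 50.0))      # 70.0 mL
print(ejection_fraction(120.0, 50.0))  # ~58.3%, a typical healthy LV value
```

Reference ranges, as reported in the paper, are then the population percentiles of such indexes computed per subject.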

    Quantitative CMR population imaging on 20,000 subjects of the UK Biobank imaging study: LV/RV quantification pipeline and its evaluation

    Population imaging studies generate data for developing and implementing personalised health strategies to prevent, or more effectively treat, disease. Large prospective epidemiological studies acquire imaging for pre-symptomatic populations. These studies enable the early discovery of alterations due to impending disease, and enable early identification of individuals at risk. Such studies pose new challenges requiring automatic image analysis. To date, few large-scale population-level cardiac imaging studies have been conducted. One such study stands out for its sheer size, careful implementation, and availability of top-quality expert annotation: the UK Biobank (UKB). The resulting massive imaging datasets (targeting ca. 100,000 subjects) have put published approaches for cardiac image quantification to the test. In this paper, we present and evaluate a cardiac magnetic resonance (CMR) image analysis pipeline that properly scales up and can provide a fully automatic analysis of the UKB CMR study. Without manual user interactions, our pipeline performs end-to-end image analytics from multi-view cine CMR images all the way to anatomical and functional bi-ventricular quantification, all while maintaining relevant quality controls of the CMR input images and the resulting image segmentations. To the best of our knowledge, this is the first published attempt to fully automate the extraction of global and regional reference ranges of all key functional cardiovascular indexes, from both left and right cardiac ventricles, for a population of 20,000 subjects imaged at 50 time frames per subject, for a total of one million CMR volumes. In addition, our pipeline provides 3D anatomical bi-ventricular models of the heart. These models enable the extraction of detailed information on the morphodynamics of the two ventricles for subsequent association with genetic, omics, lifestyle, exposure, and other information provided in population imaging studies.
We validated our proposed CMR analytics pipeline against manual expert readings on a reference cohort of 4620 subjects with contour delineations and corresponding clinical indexes. Our results show broad and significant agreement between the manually obtained reference indexes and those automatically computed via our framework. 80.67% of subjects were processed with a mean contour distance of less than 1 pixel, and 17.50% with a mean contour distance between 1 and 2 pixels. Finally, we compare our pipeline with a recently published approach reporting on UKB data, and based on deep learning. Our comparison shows similar performance in terms of segmentation accuracy with respect to human experts.
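
The mean contour distance used in the validation above can be sketched as the symmetric average of nearest-neighbour distances between the automatic and manual contour points (our reconstruction of a standard definition; the paper may differ in detail).

```python
import math

# Symmetric mean contour distance between two contours given as lists of
# (x, y) pixel coordinates. Values below 1 pixel indicate contours that
# agree to sub-pixel accuracy, as in the 80.67% figure quoted above.

def nearest_dist(p, contour):
    """Distance from point p to its closest point on the other contour."""
    return min(math.dist(p, q) for q in contour)

def mean_contour_distance(auto_pts, manual_pts):
    d_auto = sum(nearest_dist(p, manual_pts) for p in auto_pts) / len(auto_pts)
    d_manual = sum(nearest_dist(q, auto_pts) for q in manual_pts) / len(manual_pts)
    return 0.5 * (d_auto + d_manual)

# Two parallel two-point contours one pixel apart:
print(mean_contour_distance([(0, 0), (1, 0)], [(0, 1), (1, 1)]))  # 1.0
```

This brute-force version is O(n*m); large cohorts would use a spatial index, but the metric itself is unchanged.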

    Uncovering de novo gene birth in yeast using deep transcriptomics

    De novo gene origination has recently been established as an important mechanism for the formation of new genes. In organisms with a large genome, intergenic and intronic regions provide plenty of raw material for new transcriptional events to occur, but little is known about how de novo transcripts originate in more densely-packed genomes. Here, we identify 213 de novo originated transcripts in Saccharomyces cerevisiae using deep transcriptomics and genomic synteny information from multiple yeast species grown in two different conditions. We find that about half of the de novo transcripts are expressed from regions that already harbor other genes in the opposite orientation; these transcripts show similar expression changes in response to stress as their overlapping counterparts, and some appear to translate small proteins. Thus, a large fraction of de novo genes in yeast are likely to co-evolve with already existing genes.
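
The opposite-orientation overlap test at the heart of that finding is a plain interval check. A minimal sketch (ours, not the study's pipeline), with transcripts and genes as hypothetical (chrom, start, end, strand) tuples:

```python
# Flag a de novo transcript that overlaps an annotated gene on the
# opposite strand. Coordinates are half-open [start, end) intervals;
# the example values are illustrative, not from the study.

def overlaps_antisense(transcript, genes):
    chrom, start, end, strand = transcript
    for g_chrom, g_start, g_end, g_strand in genes:
        same_chrom = g_chrom == chrom
        opposite = g_strand != strand
        overlap = start < g_end and g_start < end
        if same_chrom and opposite and overlap:
            return True
    return False

genes = [("chrIV", 1000, 2000, "+")]
print(overlaps_antisense(("chrIV", 1500, 2500, "-"), genes))  # True
print(overlaps_antisense(("chrIV", 1500, 2500, "+"), genes))  # False (same strand)
```

Applied genome-wide, this partitions candidates into the overlapping-antisense class ("about half" in the abstract) and the intergenic remainder.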

    A search for spectral hysteresis and energy-dependent time lags from X-ray and TeV gamma-ray observations of Mrk 421

    Blazars are variable emitters across all wavelengths over a wide range of timescales, from months down to minutes. It is therefore essential to observe blazars simultaneously at different wavelengths, especially in the X-ray and gamma-ray bands, where the broadband spectral energy distributions usually peak. In this work, we report on three "target-of-opportunity" (ToO) observations of Mrk 421, one of the brightest TeV blazars, triggered by a strong flaring event at TeV energies in 2014. These observations feature long, continuous, and simultaneous exposures with XMM-Newton (covering the X-ray and optical/ultraviolet bands) and VERITAS (covering the TeV gamma-ray band), along with contemporaneous observations from other gamma-ray facilities (MAGIC and Fermi-LAT) and a number of radio and optical facilities. Although neither rapid flares nor a significant X-ray/TeV correlation is detected, these observations reveal subtle changes in the X-ray spectrum of the source over the course of a few days. We search the simultaneous X-ray and TeV data for spectral hysteresis patterns and time delays, which could provide insight into the emission mechanisms and the source properties (e.g. the radius of the emitting region, the strength of the magnetic field, and related timescales). The observed broadband spectra are consistent with a one-zone synchrotron self-Compton model. We find that the power spectral density distribution at ≳ 4×10^-4 Hz from the X-ray data can be described by a power-law model with an index value between 1.2 and 1.8, and we do not find evidence for a steepening of the power spectral index (often associated with a characteristic length scale) compared to the previously reported values at lower frequencies.
    Comment: 45 pages, 15 figures
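
Fitting a power-law index to a power spectral density, as in the abstract above, reduces to a straight-line fit in log-log space: P(f) ∝ f^(-α) means log P is linear in log f with slope -α. A minimal sketch (our illustration, not the paper's timing analysis):

```python
import math

# Least-squares estimate of the power-law index alpha of a PSD,
# P(f) ~ f^(-alpha), from (frequency, power) pairs.

def psd_power_law_index(freqs, powers):
    xs = [math.log10(f) for f in freqs]
    ys = [math.log10(p) for p in powers]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # alpha

# Synthetic PSD with index 1.5, starting at the 4e-4 Hz regime quoted above:
freqs = [4e-4 * 2 ** k for k in range(6)]
powers = [f ** -1.5 for f in freqs]
print(psd_power_law_index(freqs, powers))  # ~1.5
```

Real analyses additionally account for measurement noise and the sampling window, which bias the raw periodogram; this sketch shows only the index-estimation step.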

    The High-Energy X-ray Probe (HEX-P): the circum-nuclear environment of growing supermassive black holes

    Ever since the discovery of the first active galactic nuclei (AGN), substantial observational and theoretical effort has been invested into understanding how massive black holes have evolved across cosmic time. Circum-nuclear obscuration is now established as a crucial component, with almost every observed AGN known to display signatures of some level of obscuration in its X-ray spectrum. However, despite more than six decades of effort, substantial open questions remain: how does the accretion power impact the structure of the circum-nuclear obscurer? What are the dynamical properties of the obscurer? Can dense circum-nuclear obscuration exist around intrinsically weak AGN? How many intermediate-mass black holes occupy the centers of dwarf galaxies? In this paper, we showcase a number of next-generation prospects attainable with the High-Energy X-ray Probe (HEX-P) to contribute toward solving these questions in the 2030s. The uniquely broad (0.2–80 keV) and strictly simultaneous X-ray passband of HEX-P makes it ideally suited for studying the temporal co-evolution between the central engine and circum-nuclear obscurer. Improved sensitivities and reduced background will enable the development of spectroscopic models complemented by current and future multi-wavelength observations. We show that the angular resolution of HEX-P both below and above 10 keV will enable the discovery and confirmation of accreting massive black holes at both low accretion power and low black hole masses, even when concealed by thick obscuration. In combination with other next-generation observations of the dusty hearts of nearby galaxies, HEX-P will be pivotal in paving the way toward a complete picture of black hole growth and galaxy co-evolution.

    ProteinHistorian: Tools for the Comparative Analysis of Eukaryote Protein Origin

    The evolutionary history of a protein reflects the functional history of its ancestors. Recent phylogenetic studies identified distinct evolutionary signatures that characterize proteins involved in cancer, Mendelian disease, and different ontogenic stages. Despite the potential to yield insight into the cellular functions and interactions of proteins, such comparative phylogenetic analyses are rarely performed, because they require custom algorithms. We developed ProteinHistorian to make tools for performing analyses of protein origins widely available. Given a list of proteins of interest, ProteinHistorian estimates the phylogenetic age of each protein, quantifies enrichment for proteins of specific ages, and compares variation in protein age with other protein attributes. ProteinHistorian allows flexibility in the definition of protein age by including several algorithms for estimating ages from different databases of evolutionary relationships. We illustrate the use of ProteinHistorian with three example analyses. First, we demonstrate that proteins with high expression in human, compared to chimpanzee and rhesus macaque, are significantly younger than those with human-specific low expression. Next, we show that human proteins with annotated regulatory functions are significantly younger than proteins with catalytic functions. Finally, we compare protein length and age in many eukaryotic species and, as expected from previous studies, find a positive, though often weak, correlation between protein age and length. ProteinHistorian is available through a web server with an intuitive interface and as a set of command line tools; this allows biologists and bioinformaticians alike to integrate these approaches into their analysis pipelines. ProteinHistorian's modular, extensible design facilitates the integration of new datasets and algorithms. 
The ProteinHistorian web server, source code, and pre-computed ages for 32 eukaryotic genomes are freely available under the GNU public license at http://lighthouse.ucsf.edu/ProteinHistorian/.
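
The final age-versus-length comparison described above is a plain correlation analysis. A minimal sketch using a Pearson correlation on hypothetical values (not ProteinHistorian output):

```python
import math

# Pearson correlation between protein age and protein length.
# The sample values below are illustrative placeholders only.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

ages_myr = [100, 400, 800, 1500, 2000]  # hypothetical phylogenetic ages (Myr)
lengths = [180, 260, 300, 410, 520]     # hypothetical protein lengths (aa)
r = pearson(ages_myr, lengths)          # positive, consistent with the trend above
```

A full analysis would report the correlation per species across all proteins, along with a significance test, which is how the "positive, though often weak" trend in the abstract would be quantified.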