
    End-to-End Assessment of a Large Aperture Segmented Ultraviolet Optical Infrared (UVOIR) Telescope Architecture

    Key challenges of a future large-aperture, segmented Ultraviolet Optical Infrared (UVOIR) telescope capable of performing a spectroscopic survey of hundreds of exoplanets will be achieving sufficient stability for 10^-10 contrast measurements and sufficient throughput and sensitivity for high-yield Exo-Earth spectroscopic detection. Our team has collectively assessed an optimized end-to-end architecture comprising a high-throughput coronagraph capable of working with a segmented telescope, a cost-effective, heritage-based stable segmented telescope, a control architecture that minimizes the number of new technologies, and an Exo-Earth yield assessment to evaluate potential performance. These efforts are combined through integrated modeling, coronagraph evaluations, and Exo-Earth yield calculations to assess the potential performance of the selected architecture. In addition, we discuss the scalability of this architecture to larger apertures and the technological tall poles to enabling it.

    Mid-infrared spectroscopy of infrared-luminous galaxies at z~0.5-3

    We present results on low-resolution mid-infrared (MIR) spectra of 70 infrared-luminous galaxies obtained with the Infrared Spectrograph (IRS) onboard Spitzer. We selected sources from the European Large Area Infrared Survey (ELAIS) with S15 > 0.8 mJy and photometric or spectroscopic z > 1. About half of the sample are QSOs in the optical, while the remaining sources are galaxies, comprising both obscured AGN and starbursts. We classify the spectra, using well-known infrared diagnostics as well as a new one that we propose, into three types of source: those dominated by an unobscured AGN (QSOs), obscured AGN, and starburst-dominated sources. Starbursts concentrate at z ~ 0.6-1.0, favored by the shift of the 7.7-micron PAH band into the 15-micron selection band, while AGN spread over the 0.5 < z < 3.1 range. Star formation rates (SFR) are estimated for individual sources from the luminosity of the PAH features. An estimate of the average PAH luminosity in QSOs and obscured AGN is obtained from the composite spectrum of all sources with reliable redshifts. The estimated mean SFR in the QSOs is 50-100 M_sun yr^-1, but the implied FIR luminosity is 3-10 times lower than that obtained from stacking analysis of the FIR photometry, suggesting destruction of the PAH carriers by energetic photons from the AGN. The SFR estimated in obscured AGN is 2-3 times higher than in QSOs of similar MIR luminosity. This discrepancy might not be due to luminosity effects or selection bias alone, but could instead indicate a connection between obscuration and star formation. However, the observed correlation between silicate absorption and the slope of the near- to mid-infrared spectrum is compatible with the obscuration of the AGN emission in these sources being produced in a dust torus. Comment: 32 pages, 24 figures, 15 tables, accepted for publication in MNRAS

    Estimation of trabecular bone parameters in children from multisequence MRI using texture-based regression

    PURPOSE: This paper presents a statistical approach for the prediction of trabecular bone parameters from low-resolution multisequence magnetic resonance imaging (MRI) in children, thus addressing the limitations of high-resolution modalities such as HR-pQCT, including the significant exposure of young patients to radiation and the limited applicability of such modalities to peripheral bones in vivo. METHODS: A statistical predictive model is constructed from a database of MRI and HR-pQCT datasets to relate the low-resolution MRI appearance in the cancellous bone to the trabecular parameters extracted from the high-resolution images. The MRI appearance is described consistently across subjects by a collection of feature descriptors that capture the texture properties inside the cancellous bone and are invariant to the geometry and size of the trabecular areas. The predictive model is built by fitting to the training data a nonlinear partial least squares regression between the input MRI features and the output trabecular parameters. RESULTS: Detailed validation based on a sample of 96 datasets shows correlations >0.7 between the trabecular parameters predicted from low-resolution multisequence MRI with the proposed statistical model and the values extracted from high-resolution HR-pQCT. CONCLUSIONS: The obtained results indicate the promise of the proposed predictive technique for the estimation of trabecular parameters in children from multisequence MRI, thus reducing the need for high-resolution radiation-based scans for a fragile population that is still developing and growing.
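The texture-to-parameter mapping described above can be sketched with a minimal linear PLS1 (NIPALS) regression on synthetic data. The feature values, dimensions, and the use of a linear rather than nonlinear PLS are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 96 subjects, 20 texture features, one
# trabecular parameter. The real inputs would be the MRI texture
# descriptors and HR-pQCT measurements from the database described above.
n, p = 96, 20
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + 0.5 * rng.normal(size=n)

def pls1(X, y, n_components=5):
    """Minimal linear PLS1 (NIPALS). The paper fits a *nonlinear* PLS;
    this linear version only illustrates the latent-variable regression."""
    Xm, ym = X.mean(axis=0), y.mean()
    Xk, yk = X - Xm, y - ym
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk                     # weight: max covariance with target
        w /= np.linalg.norm(w)
        t = Xk @ w                        # latent scores
        tt = t @ t
        P.append(Xk.T @ t / tt)           # feature loadings
        q.append((yk @ t) / tt)           # target loading
        W.append(w)
        Xk = Xk - np.outer(t, P[-1])      # deflate X and y
        yk = yk - q[-1] * t
    W, P, q = np.column_stack(W), np.column_stack(P), np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)   # coefficients in feature space
    return lambda Xnew: (np.asarray(Xnew) - Xm) @ B + ym

predict = pls1(X, y)
r = np.corrcoef(predict(X), y)[0, 1]
print(f"in-sample correlation: {r:.2f}")
```

With a handful of latent components the in-sample correlation on this synthetic linear data easily clears the >0.7 level the paper reports for its (harder) out-of-sample prediction.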

    An innovative integral field unit upgrade with 3D-printed micro-lenses for the RHEA at Subaru

    In the new era of Extremely Large Telescopes (ELTs) currently under construction, challenging requirements drive spectrograph designs towards techniques that efficiently use a facility's light collection power. Operating in the single-mode (SM) regime, close to the diffraction limit, reduces the footprint of the instrument compared to a conventional high-resolving-power spectrograph. The custom-built fiber injection system, topped with 3D-printed micro-lenses, for the replicable high-resolution exoplanet and asteroseismology spectrograph (RHEA) at Subaru, combined with the extreme adaptive optics of SCExAO, proved its high efficiency in a lab environment, reaching up to ~77% of the theoretically predicted performance.

    Indications of M-Dwarf Deficits in the Halo and Thick Disk of the Galaxy

    We compared the number of faint stars detected in deep survey fields with the current stellar distribution model of the Galaxy and found that the number detected in the H band is significantly smaller than the predicted number. This indicates that M-dwarfs, the major component, are fewer in the halo and the thick disk. We used archived data of several surveys in both the north and south fields of GOODS (Great Observatories Origins Deep Survey): MODS in GOODS-N, and ERS and CANDELS in GOODS-S. The number density of M-dwarfs in the halo has to be 20 +/- 13% relative to that in the solar vicinity in order for the detected number of stars fainter than 20.5 mag in the H band to match the value predicted by the model. In the thick disk, the number density of M-dwarfs must be reduced to 52 +/- 13%, or the scale height must be decreased to approximately 600 pc. Alternatively, the overall fractions of the halo and thick disk can be significantly reduced to achieve the same effect, because our sample mainly consists of faint M-dwarfs. Our results imply that the M-dwarf population in regions distant from the Galactic plane is significantly smaller than previously thought. We then discuss, using the best-fit number densities, the implications this has for the reliability of the model's predictions of non-companion faint stars in direct-imaging extrasolar planet surveys.
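A toy version of this comparison, integrating assumed thin-disk, thick-disk, and halo density laws along a high-latitude sight line and then applying the abstract's best-fit rescalings, can be sketched as follows. Every structural parameter here is an illustrative stand-in, not the Galactic model actually fitted in the paper.

```python
import numpy as np

# Toy line-of-sight star-count integral toward a high-latitude field
# (GOODS-like). All density normalizations and scale heights below are
# illustrative stand-ins, NOT the Galactic model used in the paper.
b = np.deg2rad(55.0)                          # assumed Galactic latitude
d = np.linspace(10.0, 20000.0, 4000)          # distance along sight line [pc]
dd = d[1] - d[0]
z = d * np.sin(b)                             # height above the plane [pc]

def predicted_counts(thick_norm=1.0, halo_norm=1.0):
    thin  = np.exp(-z / 300.0)                               # thin disk
    thick = 0.08 * thick_norm * np.exp(-z / 900.0)           # thick disk
    halo  = 0.005 * halo_norm * (np.maximum(z, 1e3) / 1e3) ** -2.5
    # Volume element grows as d^2, so sum density * d^2 along the line.
    return float(((thin + thick + halo) * d**2).sum() * dd)  # ∝ star count

baseline = predicted_counts()
adjusted = predicted_counts(thick_norm=0.52, halo_norm=0.20)  # abstract's fits
print(f"adjusted / baseline counts: {adjusted / baseline:.2f}")
```

Rescaling the thick-disk and halo M-dwarf densities downward necessarily lowers the predicted faint-star counts, which is the direction needed to reconcile the model with the deep H-band observations.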

    The Habitable Exoplanet Observatory (HabEx) Mission Concept Study Final Report

    The Habitable Exoplanet Observatory, or HabEx, has been designed to be the Great Observatory of the 2030s. For the first time in human history, technologies have matured sufficiently to enable an affordable space-based telescope mission capable of discovering and characterizing Earthlike planets orbiting nearby bright sunlike stars in order to search for signs of habitability and biosignatures. Such a mission can also be equipped with instrumentation that will enable broad and exciting general astrophysics and planetary science not possible from current or planned facilities. HabEx is a space telescope with unique imaging and multi-object spectroscopic capabilities at wavelengths ranging from ultraviolet (UV) to near-IR. These capabilities allow for a broad suite of compelling science that cuts across the entire NASA astrophysics portfolio. HabEx has three primary science goals: (1) Seek out nearby worlds and explore their habitability; (2) Map out nearby planetary systems and understand the diversity of the worlds they contain; (3) Enable new explorations of astrophysical systems from our own solar system to external galaxies by extending our reach in the UV through near-IR. This Great Observatory science will be selected through a competed GO program, and will account for about 50% of the HabEx primary mission. The preferred HabEx architecture is a 4m, monolithic, off-axis telescope that is diffraction-limited at 0.4 microns and is in an L2 orbit. HabEx employs two starlight suppression systems: a coronagraph and a starshade, each with its own dedicated instrument. Comment: Full report: 498 pages. Executive Summary: 14 pages. More information about HabEx can be found here: https://www.jpl.nasa.gov/habex

    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy.
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as specific markers for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.
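The flux-versus-abundance distinction emphasized above can be illustrated with toy numbers for a standard inhibitor-based flux assay; the values and scenarios are hypothetical, chosen only to show the arithmetic.

```python
# Toy numbers showing why autophagosome abundance alone cannot distinguish
# increased induction from a degradation block. Units are arbitrary
# densitometry values for LC3-II (an autophagosome marker); "inhibitor"
# stands for a lysosomal blocker such as bafilomycin A1.

def flux(lc3_untreated, lc3_with_inhibitor):
    # Net flux: cargo that would have been degraded in lysosomes,
    # revealed by blocking that degradation.
    return lc3_with_inhibitor - lc3_untreated

basal      = flux(1.0, 2.0)   # resting cells
induction  = flux(2.0, 6.0)   # stimulus raises both synthesis and degradation
degr_block = flux(3.0, 3.2)   # autophagosomes pile up, little extra accrues

# Both perturbed conditions show more LC3-II at steady state than basal
# (2.0 and 3.0 vs 1.0), but only induction increases flux.
print(basal, induction, round(degr_block, 1))
```

The two perturbed conditions are indistinguishable by autophagosome counts alone; only the inhibitor comparison separates genuine induction from a trafficking or degradation block.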

    An explicit link between Gaussian fields and Gaussian Markov random fields: the stochastic partial differential equation approach

    Continuously indexed Gaussian fields (GFs) are the most important ingredient in spatial statistical modelling and geostatistics. The specification through the covariance function gives an intuitive interpretation of the field properties. On the computational side, GFs are hampered by the big-n problem, since the cost of factorizing dense matrices is cubic in the dimension. Although computational power today is at an all-time high, this still seems to be a computational bottleneck in many applications. Along with GFs, there is the class of Gaussian Markov random fields (GMRFs), which are discretely indexed. The Markov property makes the precision matrix involved sparse, which enables the use of numerical algorithms for sparse matrices that, for fields in R^2, use only the square root of the time required by general algorithms. The specification of a GMRF is through its full conditional distributions, but its marginal properties are not transparent in such a parameterization. We show that, using an approximate stochastic weak solution to (linear) stochastic partial differential equations, we can, for some GFs in the Matérn class, provide an explicit link, for any triangulation of R^d, between GFs and GMRFs, formulated as a basis function representation. The consequence is that we can take the best from the two worlds and do the modelling by using GFs but do the computations by using GMRFs. Perhaps more importantly, our approach generalizes to other covariance functions generated by SPDEs, including oscillating and non-stationary GFs, as well as GFs on manifolds. We illustrate our approach by analysing global temperature data with a non-stationary model defined on a sphere.
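A minimal sketch of the GF-to-GMRF link, assuming a regular 1-D grid and the alpha = 2 case rather than the paper's finite-element triangulations: discretizing the SPDE produces a precision matrix with only a few nonzero bands, which is what makes sparse-matrix algorithms applicable.

```python
import numpy as np

# Sketch: discretizing the SPDE (kappa^2 - Laplacian) x = white noise on a
# regular 1-D grid yields a sparse GMRF precision matrix. Grid spacing,
# boundary handling, and kappa are illustrative assumptions; the paper
# works with finite-element bases on triangulations of R^d.
n, kappa = 200, 0.3
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # minus discrete Laplacian
A = kappa**2 * np.eye(n) + L
Q = A.T @ A                                             # precision (pentadiagonal)

nnz = np.count_nonzero(Q)
print(f"nonzero fraction of Q: {nnz / n**2:.3f}")       # 5 bands -> very sparse

# Sampling works through the Cholesky factor of Q, never the dense
# covariance: if Q = C C^T, then x = C^{-T} z with z ~ N(0, I) has
# covariance Q^{-1}.
C = np.linalg.cholesky(Q)
x = np.linalg.solve(C.T, np.random.default_rng(1).normal(size=n))
```

The covariance Q^{-1} of this field is a dense matrix, yet all computations above touch only the pentadiagonal Q, which is the practical payoff of the Markov representation.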