
    A Bayesian approach to star-galaxy classification

    Star-galaxy classification is one of the most fundamental data-processing tasks in survey astronomy, and a critical starting point for the scientific exploitation of survey data. For bright sources this classification can be done with almost complete reliability, but for the numerous sources close to a survey's detection limit each image encodes only limited morphological information. In this regime, from which many of the new scientific discoveries are likely to come, it is vital to utilise all the available information about a source, both from multiple measurements and also prior knowledge about the star and galaxy populations. It is also more useful and realistic to provide classification probabilities than decisive classifications. All these desiderata can be met by adopting a Bayesian approach to star-galaxy classification, and we develop a very general formalism for doing so. An immediate implication of applying Bayes's theorem to this problem is that it is formally impossible to combine morphological measurements in different bands without using colour information as well; however, we develop several approximations that disregard colour information as much as possible. The resultant scheme is applied to data from the UKIRT Infrared Deep Sky Survey (UKIDSS), and tested by comparing the results to deep Sloan Digital Sky Survey (SDSS) Stripe 82 measurements of the same sources. The Bayesian classification probabilities obtained from the UKIDSS data agree well with the deep SDSS classifications both overall (a mismatch rate of 0.022, compared to 0.044 for the UKIDSS pipeline classifier) and close to the UKIDSS detection limit (a mismatch rate of 0.068, compared to 0.075 for the UKIDSS pipeline classifier). The Bayesian formalism developed here can be applied to improve the reliability of any star-galaxy classification scheme based on the measured values of morphology statistics alone.
    Comment: Accepted 22 November 2010, 19 pages, 17 figures.
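    The full formalism is developed in the paper; as a rough illustration of the kind of calculation involved, the sketch below computes a posterior star probability from per-band morphology statistics, assuming (hypothetically) Gaussian likelihoods for the star and galaxy populations and independence between bands, an approximation the paper cautions is only strictly valid once colour information is also used. All parameter values are placeholders.

```python
# Minimal sketch of a Bayesian star-galaxy probability from morphology
# statistics, assuming Gaussian per-band likelihoods and independence
# between bands; not the paper's full formalism.
import numpy as np
from scipy.stats import norm

def p_star(morph_stats, prior_star=0.3,
           star_params=(0.0, 1.0), galaxy_params=(4.0, 2.0)):
    """Posterior P(star | morphology statistics in several bands).

    morph_stats   : per-band morphology statistics for one source
    prior_star    : assumed prior probability that a source is a star
    star_params   : assumed (mean, sigma) of the statistic for stars
    galaxy_params : assumed (mean, sigma) of the statistic for galaxies
    """
    like_star = np.prod([norm.pdf(m, *star_params) for m in morph_stats])
    like_gal = np.prod([norm.pdf(m, *galaxy_params) for m in morph_stats])
    num = prior_star * like_star
    return num / (num + (1.0 - prior_star) * like_gal)

# Example: a faint source measured in three bands (hypothetical values)
print(p_star([0.8, 1.5, -0.2]))   # a classification probability, not a hard label
```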

    A Simple Likelihood Method for Quasar Target Selection

    We present a new method for quasar target selection using photometric fluxes and a Bayesian probabilistic approach. For our purposes we target quasars using Sloan Digital Sky Survey (SDSS) photometry to a magnitude limit of g=22. The efficiency and completeness of this technique are measured using the Baryon Oscillation Spectroscopic Survey (BOSS) data taken in 2010. This technique was used for the uniformly selected (CORE) sample of targets in BOSS year-one spectroscopy, to be realized in the 9th SDSS data release. When targeting at a density of 40 objects per sq-deg (the BOSS quasar targeting density) the efficiency of this technique in recovering z>2.2 quasars is 40%. The completeness compared to all quasars identified in BOSS data is 65%. This paper also describes possible extensions and improvements for this technique.
    Comment: Updated to accepted version for publication in the Astrophysical Journal. 10 pages, 10 figures, 3 tables.
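    As a rough illustration of a likelihood-based selection of this general kind (not the paper's exact estimator), the sketch below weights Gaussian flux likelihoods of a candidate against quasar and stellar template sets and returns a relative quasar probability; every flux, error and surface density below is a placeholder, not BOSS or SDSS data.

```python
# Hedged sketch of likelihood-based quasar target selection: compare a
# candidate's photometric fluxes against template sets of known quasars and
# stars, assuming Gaussian flux errors. All numbers are placeholders.
import numpy as np

def likelihood(candidate_flux, flux_err, templates):
    """Sum of Gaussian likelihoods of the candidate against a template set."""
    chi2 = np.sum(((candidate_flux - templates) / flux_err) ** 2, axis=1)
    return np.sum(np.exp(-0.5 * chi2))

def p_qso(candidate_flux, flux_err, qso_templates, star_templates,
          qso_density=40.0, star_density=2000.0):
    """Relative probability that the candidate is a quasar, weighted by
    assumed sky surface densities of each class."""
    l_qso = qso_density * likelihood(candidate_flux, flux_err, qso_templates)
    l_star = star_density * likelihood(candidate_flux, flux_err, star_templates)
    return l_qso / (l_qso + l_star)

rng = np.random.default_rng(0)
qso_templates = rng.normal(1.0, 0.3, size=(500, 5))    # five-band fluxes, arbitrary units
star_templates = rng.normal(2.0, 0.5, size=(5000, 5))
print(p_qso(np.array([1.1, 0.9, 1.0, 1.2, 0.8]), 0.2, qso_templates, star_templates))
```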

    Galaxy formation as a cosmological tool - I. The galaxy merger history as a measure of cosmological parameters

    As galaxy formation and evolution over long cosmic time-scales depends to a large degree on the structure of the universe, the assembly history of galaxies is potentially a powerful approach for learning about the universe itself. In this paper we examine the merger history of dark matter halos based on the Extended Press-Schechter formalism as a function of cosmological parameters, redshift and halo mass. We calculate how major halo mergers are influenced by changes in the cosmological values of $\Omega_{\rm m}$, $\Omega_{\Lambda}$, $\sigma_{8}$, the dark matter particle temperature (warm vs. cold dark matter), and the value of a constant and evolving equation-of-state parameter $w(z)$. We find that the merger fraction at a given halo mass varies by up to a factor of three for halos forming under the assumption of Cold Dark Matter, within different underlying cosmological parameters. We find that the current measurements of the merger history, as measured through observed galaxy pairs as well as through structure, are in agreement with the concordance cosmology, with the current best fit giving $1 - \Omega_{\rm m} = \Omega_{\Lambda} = 0.84^{+0.16}_{-0.17}$. To obtain a more accurate constraint, competitive with recently measured cosmological parameters from Planck and WMAP, requires a measured merger accuracy of $\delta f_{\rm m} \sim 0.01$, implying surveys with an accurately measured merger history over 2-20 deg$^{2}$, which will be feasible with the next generation of imaging and spectroscopic surveys such as Euclid and LSST.
    Comment: MNRAS, accepted for publication, 22 pages.
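    The parameter constraint quoted above comes from comparing measured merger fractions with model predictions. The toy sketch below shows the shape of such a fit: a grid search in $\Omega_{\Lambda}$ against hypothetical pair-fraction measurements with the target accuracy $\delta f_{\rm m} \sim 0.01$. The model function is a simple stand-in for illustration, not the Extended Press-Schechter calculation used in the paper, and all data values are invented.

```python
# Toy sketch of constraining a cosmological parameter from the merger
# fraction: compare measured merger fractions with model predictions over a
# parameter grid and take the minimum chi^2. model_merger_fraction is a
# placeholder scaling, NOT the paper's Extended Press-Schechter model.
import numpy as np

def model_merger_fraction(z, omega_lambda):
    # illustrative scaling only: fraction rising with redshift and mildly
    # suppressed as Omega_Lambda grows
    return 0.05 * (1.0 + z) ** 2 * (1.0 - 0.5 * omega_lambda)

z_obs = np.array([0.3, 0.6, 1.0, 1.5])
f_obs = np.array([0.05, 0.08, 0.14, 0.22])   # hypothetical pair fractions
sigma_f = np.full_like(f_obs, 0.01)          # target accuracy delta f_m ~ 0.01

grid = np.linspace(0.0, 1.0, 101)
chi2 = [np.sum(((f_obs - model_merger_fraction(z_obs, ol)) / sigma_f) ** 2)
        for ol in grid]
print("best-fit Omega_Lambda:", grid[int(np.argmin(chi2))])
```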

    Recurrent Tissue-Specific mtDNA Mutations are Common in Humans

    Mitochondrial DNA (mtDNA) variation can affect phenotypic variation; therefore, knowing its distribution within and among individuals is of importance to understanding many human diseases. Intra-individual mtDNA variation (heteroplasmy) has generally been assumed to be random. We used massively parallel sequencing to assess heteroplasmy across ten tissues and demonstrate that in unrelated individuals there are tissue-specific, recurrent mutations. Certain tissues, notably kidney, liver and skeletal muscle, displayed the identical recurrent mutations that were undetectable in other tissues in the same individuals. Using RFLP analyses, we validated one of the tissue-specific mutations in the two sequenced individuals and replicated the patterns in two additional individuals. These recurrent mutations all occur within or in very close proximity to sites that regulate mtDNA replication, strongly implying that these variations alter the replication dynamics of the mutated mtDNA genome. These recurrent variants are all independent of each other and do not occur in the mtDNA coding regions. The most parsimonious explanation of the data is that these frequently repeated mutations experience tissue-specific positive selection, probably through a replication advantage.
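    The core measurement here is a per-site, per-tissue heteroplasmy level derived from sequencing read counts. The sketch below shows one simple way such tissue-specific sites could be flagged; the read counts, the position and the 2% detection threshold are hypothetical, and the paper's actual pipeline is not reproduced here.

```python
# Illustrative sketch of flagging tissue-specific heteroplasmy: for each
# mtDNA position, compute the minor-allele fraction per tissue from read
# counts and report sites detected in some tissues but not others.
def minor_allele_fraction(counts):
    """counts: dict base -> read count at one position in one tissue."""
    total = sum(counts.values())
    major = max(counts.values())
    return (total - major) / total if total else 0.0

def tissue_specific_sites(pileups, threshold=0.02):
    """pileups: {position: {tissue: {base: count}}} -> tissue-specific sites."""
    hits = {}
    for pos, tissues in pileups.items():
        above = {t for t, c in tissues.items()
                 if minor_allele_fraction(c) >= threshold}
        if above and above != set(tissues):   # present in some tissues, absent in others
            hits[pos] = sorted(above)
    return hits

# Hypothetical pileup: a variant visible in kidney but not in blood
example = {16093: {"kidney": {"T": 950, "C": 50}, "blood": {"T": 998, "C": 2}}}
print(tissue_specific_sites(example))   # {16093: ['kidney']}
```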

    A quantitative assessment of the annual contribution of platform downwearing to beach sediment budget: Happisburgh, England, UK

    Field and numerical investigations at Happisburgh, on the east coast of England, UK, sought to characterize beach thickness and determine geologic framework controls on coastal change. After a major failure of coastal protection infrastructure, removal of about 1 km of coastal defence along the otherwise protected cliffed coastline of Happisburgh triggered rapid erosion of ca. 140 m over 20 years. Previous sensitivity studies suggest that beach thickness plays a major role in coastal recession. These studies were limited, however, by a lack of beach volume data. In this study, we have integrated the insights gained from our understanding of the Quaternary geology of the area, a novel non-intrusive passive seismic survey method, and a novel 3D representation of the subsurface source and transportable material into a coastal modelling environment to explore the role of beach thickness on the back wearing and downwearing of the cliffs and consolidated platform, respectively. Results show that beach thickness is non-homogeneous along the study site: we estimate that the contribution to the near-shore sediment budget via platform downwearing is of a similar order of magnitude to the sediment lost from the beach, and therefore non-negligible. We provide a range of evidence to support the idea that the Happisburgh beach is a relatively thin layer perched on a sediment-rich platform of sand and gravel. This conceptualization differs from previous publications, which assume that the platform is mostly till and fine material. This has direct implications for regional sediment management along this coastline. The present study contributes to our understanding of a poorly known aspect of coastal sediment budgeting and outlines a quantitative approach that allows for simple integration of geological understanding into coastline evolution assessments worldwide.
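    The budget comparison at the heart of this result is, in outline, a volume-rate calculation: sediment released annually by platform lowering versus the annual volume lost from the beach. The sketch below shows that arithmetic with entirely hypothetical numbers; it is not the Happisburgh data or the paper's modelling environment.

```python
# Back-of-the-envelope sketch of the sediment budget comparison described
# above: annual supply from platform downwearing versus annual beach loss.
# All values are placeholders, not the Happisburgh measurements.
platform_area_m2 = 1.0e6          # submerged platform area contributing sediment
downwearing_rate_m_per_yr = 0.02  # vertical lowering of the consolidated platform
transportable_fraction = 0.6      # sand/gravel fraction of the platform material

platform_supply = platform_area_m2 * downwearing_rate_m_per_yr * transportable_fraction
beach_loss_m3_per_yr = 1.5e4      # hypothetical net beach volume change

print(f"platform supply: {platform_supply:.0f} m^3/yr, "
      f"beach loss: {beach_loss_m3_per_yr:.0f} m^3/yr, "
      f"ratio: {platform_supply / beach_loss_m3_per_yr:.2f}")
```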

    The inflationary trispectrum

    We calculate the trispectrum of the primordial curvature perturbation generated by an epoch of slow-roll inflation in the early universe, and demonstrate that the non-gaussian signature imprinted at horizon crossing is unobservably small, of order tau_NL < r/50, where r < 1 is the tensor-to-scalar ratio. Therefore any primordial non-gaussianity observed in future microwave background experiments is likely to have been synthesized by gravitational effects on superhorizon scales. We discuss the application of Maldacena's consistency condition to the trispectrum.
    Comment: 23 pages, 2 diagrams drawn with feynmp.sty, uses iopart.cls. v2, replaced with version accepted by JCAP. Estimate of maximal tau_NL refined in Section 5, resulting in a smaller numerical value. Sign errors in Eq. (44) and Eq. (48) corrected. Some minor notational changes.
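    In display form, the bound quoted above and its numerical implication (using only r < 1, as stated in the abstract) read:

```latex
% The trispectrum bound from the abstract, with r the tensor-to-scalar ratio.
\tau_{\rm NL} \lesssim \frac{r}{50}, \qquad r < 1
\;\;\Rightarrow\;\; \tau_{\rm NL} \lesssim 2 \times 10^{-2}.
```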

    Abrupt climatic events during the last glacial-interglacial transition in Alaska

    Evidence is mounting that abrupt climatic shifts occurred during the last glacial-interglacial transition (LGIT) in the North Atlantic and other regions. However, few high-resolution climatic records of the LGIT exist from the high latitudes of the North Pacific rim. We analyzed lake sediments from southwestern Alaska for biogenic silica, organic carbon, organic nitrogen, diatom assemblages, and compound-specific hydrogen isotopes. Results reveal climatic changes coincident with the Younger Dryas, Intra-Allerød Cold Period, and Pre-Boreal Oscillation. However, major discrepancies exist in the paleoclimate patterns of the Bølling-Allerød interstadial between our data and the GISP2 δ18O record from Greenland, and the causes are uncertain. These data suggest that the North Pacific and North Atlantic experienced similar reversals during climatic warming of the LGIT, but that the Bølling-Allerød cooling trend in the GISP2 δ18O record is probably not a hemispheric or global pattern.

    Development of an automatic delineation of cliff top and toe on very irregular planform coastlines (CliffMetrics v1.0)

    We describe a new algorithm that automatically delineates the cliff top and toe of a cliffed coastline from a digital elevation model (DEM). The algorithm builds upon existing methods but is specifically designed to resolve very irregular planform coastlines with many bays and capes, such as parts of the coastline of Great Britain. The algorithm automatically and sequentially delineates and smooths shoreline vectors, generates orthogonal transects and elevation profiles with a minimum spacing equal to the DEM resolution, and extracts the position and elevation of the cliff top and toe. Outputs include the non-smoothed raster and smoothed vector coastlines, normals to the coastline (as vector shapefiles), xyz profiles (as comma-separated value, CSV, files), and the cliff top and toe (as point shapefiles). The algorithm also automatically assesses the quality of each profile and omits low-quality profiles (i.e. those for which extraction of the cliff top and toe is not possible). The performance of the proposed algorithm is compared with an existing method, which was not specifically designed for very irregular coastlines, and with boundaries manually digitized by numerous professionals. We also assess the reproducibility of the results using different DEM resolutions (5, 10 and 50 m), different user-defined parameter sets related to the degree of coastline smoothing, and the threshold used to identify the cliff top and toe. The model output sensitivity is found to be smaller than the manual digitization uncertainty. The code and a manual are publicly available in a GitHub repository.
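    To give a feel for the per-transect step (extracting a cliff toe and top from an elevation profile), the sketch below uses a simple slope threshold on one cross-shore profile. It is an illustration of the general idea only, not the CliffMetrics implementation, which also smooths the coastline, builds orthogonal transects and applies quality checks; the profile values and the 0.3 threshold are hypothetical.

```python
# Simplified sketch: find a cliff toe and top along one cross-shore elevation
# profile by marking where the steep cliff face begins and ends.
import numpy as np

def cliff_toe_and_top(distance_m, elevation_m, slope_threshold=0.3):
    """Return (toe, top) as (distance, elevation) pairs, or None if no cliff."""
    slope = np.gradient(elevation_m, distance_m)
    steep = np.where(np.abs(slope) >= slope_threshold)[0]
    if steep.size == 0:
        return None                      # low-quality profile: no cliff face found
    toe_i, top_i = steep[0], steep[-1]   # assumes elevation increases landward
    return ((distance_m[toe_i], elevation_m[toe_i]),
            (distance_m[top_i], elevation_m[top_i]))

# Example profile: flat beach, steep cliff face, flat cliff-top surface
d = np.arange(0, 60.0, 5.0)
z = np.array([1, 1, 1.2, 1.5, 5, 10, 15, 18, 18.2, 18.3, 18.3, 18.4])
print(cliff_toe_and_top(d, z))
```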