
    Finding strong lenses in CFHTLS using convolutional neural networks

    We train and apply convolutional neural networks, a machine learning technique developed to learn from and classify image data, to Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) imaging for the identification of potential strong lensing systems. An ensemble of four convolutional neural networks was trained on images of simulated galaxy-galaxy lenses. The training sets consisted of a total of 62,406 simulated lenses and 64,673 non-lens negative examples generated with two different methodologies. The networks were able to learn the features of simulated lenses with an accuracy of up to 99.8% and a purity and completeness of 94-100% on a test set of 2000 simulations. An ensemble of trained networks was applied to all of the 171 square degrees of the CFHTLS wide-field image data, identifying 18,861 candidates including 63 known and 139 other potential lens candidates. A second search of 1.4 million early-type galaxies, selected from the survey catalog as potential deflectors, identified 2,465 candidates including 117 previously known lens candidates, 29 confirmed lenses/high-quality lens candidates, 266 novel probable or potential lenses and 2,097 candidates we classify as false positives. For the catalog-based search we estimate a completeness of 21-28% with respect to detectable lenses and a purity of 15%, with a false-positive rate of 1 in 671 images tested. We predict a human astronomer reviewing candidates produced by the system would identify ~20 probable lenses and 100 possible lenses per hour in a sample selected by the robot. Convolutional neural networks are therefore a promising tool for use in the search for lenses in current and forthcoming surveys such as the Dark Energy Survey and the Large Synoptic Survey Telescope. Comment: 16 pages, 8 figures. Accepted by MNRAS.
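
    As a rough illustration of the classification step described above, the sketch below builds a single binary lens/non-lens CNN in Keras and averages the scores of several such networks at prediction time. The cutout size, layer widths and framework are assumptions for illustration, not the architecture used in the paper.

```python
# Minimal sketch of a binary lens / non-lens CNN classifier, assuming Keras and
# small single-band postage-stamp cutouts. The cutout size and layer widths are
# illustrative, not the networks trained in the paper.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

IMG_SIZE = 44  # assumed cutout size in pixels


def build_lens_classifier() -> keras.Model:
    model = keras.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # P(lens)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model


def ensemble_predict(models, cutouts):
    """Average the scores of independently trained networks (the 'ensemble')."""
    return np.mean([m.predict(cutouts, verbose=0) for m in models], axis=0)
```

    Averaging the sigmoid outputs of several independently trained networks, with a score threshold tuned on simulations, mirrors the purity/completeness trade-off described in the abstract.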

    The genetic architecture underlying the evolution of a rare piscivorous life history form in brown trout after secondary contact and strong introgression

    Identifying the genetic basis underlying phenotypic divergence and reproductive isolation is a longstanding problem in evolutionary biology. Genetic signals of adaptation and reproductive isolation are often confounded by a wide range of factors, such as variation in demographic history or genomic features. Brown trout (Salmo trutta) in the Loch Maree catchment, Scotland, exhibit reproductively isolated divergent life history morphs, including a rare piscivorous (ferox) life history form displaying larger body size, greater longevity and delayed maturation compared to sympatric benthivorous brown trout. Using a dataset of 16,066 SNPs, we analyzed the evolutionary history and genetic architecture underlying this divergence. We found that ferox trout and benthivorous brown trout most likely evolved after recent secondary contact of two distinct glacial lineages, and identified 33 genomic outlier windows across the genome, several of which most likely formed through selection. We further identified twelve candidate genes and biological pathways related to growth, development and immune response potentially underpinning the observed phenotypic differences. The identification of clear genomic signals divergent between life history phenotypes and potentially linked to reproductive isolation, through size-assortative mating, as well as the identification of the underlying demographic history, highlights the power of genomic studies of young species pairs for understanding the factors shaping genetic differentiation.
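
    The window-based outlier scan mentioned above can be illustrated with a short sketch: average per-SNP differentiation between the two morphs in fixed-size windows along the genome and flag windows in the upper tail. The column names, window size and thresholds below are illustrative assumptions, not the study's pipeline.

```python
# Hedged sketch of a window-based differentiation scan: average per-SNP F_ST
# between the two morphs in fixed-size windows and flag windows in the upper
# tail as candidate outliers. Column names, window size and thresholds are
# illustrative assumptions, not the study's pipeline.
import pandas as pd


def outlier_windows(snps: pd.DataFrame, window_bp: int = 100_000,
                    quantile: float = 0.99) -> pd.DataFrame:
    """snps needs one row per SNP with columns 'chrom', 'pos' and 'fst'."""
    snps = snps.assign(window=snps["pos"] // window_bp)
    win = (snps.groupby(["chrom", "window"], as_index=False)
               .agg(mean_fst=("fst", "mean"), n_snps=("fst", "size")))
    win = win[win["n_snps"] >= 5]                # require a minimum SNP density
    cutoff = win["mean_fst"].quantile(quantile)  # empirical upper-tail threshold
    return win[win["mean_fst"] >= cutoff]
```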

    Potential implications of practice effects in Alzheimer's disease prevention trials.

    Introduction: Practice effects (PEs) present a potential confound in clinical trials with cognitive outcomes. A single-blind placebo run-in design, with repeated cognitive outcome assessments before randomization to treatment, can minimize effects of practice on trial outcome. Methods: We investigated the potential implications of PEs in Alzheimer's disease prevention trials using placebo arm data from the Alzheimer's Disease Cooperative Study donepezil/vitamin E trial in mild cognitive impairment. Frequent ADAS-Cog measurements early in the trial allowed us to compare two competing trial designs: a 19-month trial with randomization after initial assessment, versus a 15-month trial with a 4-month single-blind placebo run-in and randomization after the second administration of the ADAS-Cog. Standard power calculations assuming a mixed-model repeated-measures analysis plan were used to calculate sample size requirements for a hypothetical future trial designed to detect a 50% slowing of cognitive decline. Results: On average, ADAS-Cog 13 scores improved at first follow-up, consistent with a PE, and progressively worsened thereafter. The observed change for a 19-month trial (1.18 points) was substantially smaller than that for a 15-month trial with 4-month run-in (1.79 points). To detect a 50% slowing in progression under the standard design (i.e., a 0.59-point slowing), a future trial would require 3.4 times more subjects than would be required to detect the comparable percent slowing (i.e., 0.90 points) with the run-in design. Discussion: Assuming the improvement at first follow-up observed in this trial represents PEs, the rate of change from the second assessment forward is a more accurate representation of symptom progression in this population and is the appropriate reference point for describing treatment effects characterized as percent slowing of symptom progression; failure to accommodate this leads to an oversized clinical trial. We conclude that PEs are an important potential consideration when planning future trials.
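
    The sample-size argument can be illustrated with a standard back-of-envelope calculation: under the usual two-sample normal approximation, the required number of subjects per arm scales with (sigma/delta)^2, so a smaller detectable change inflates the trial quadratically. The sketch below uses an assumed standard deviation for ADAS-Cog change and does not reproduce the paper's mixed-model calculation, which yields the reported factor of 3.4.

```python
# Back-of-envelope sample-size illustration: with the usual two-sample normal
# approximation, the subjects needed per arm scale with (sigma / delta)^2.
# The standard deviation below is an assumed placeholder; the factor of 3.4
# reported in the paper comes from a full mixed-model repeated-measures
# calculation that this simple formula does not reproduce.
from scipy.stats import norm


def n_per_arm(delta: float, sigma: float, alpha: float = 0.05,
              power: float = 0.80) -> float:
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sigma / delta) ** 2


sigma = 6.0  # assumed SD of ADAS-Cog change, for illustration only
print(n_per_arm(delta=0.59, sigma=sigma))  # standard design: 0.59-point slowing
print(n_per_arm(delta=0.90, sigma=sigma))  # run-in design: 0.90-point slowing
# Effect size alone gives a ratio of (0.90 / 0.59)^2 ~ 2.3; the paper's
# mixed-model calculation over the two designs' actual assessment schedules
# yields the larger reported factor of 3.4.
```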

    Towards automatic pulmonary nodule management in lung cancer screening with deep learning

    The introduction of lung cancer screening programs will produce an unprecedented amount of chest CT scans in the near future, which radiologists will have to read in order to decide on a patient follow-up strategy. According to the current guidelines, the workup of screen-detected nodules strongly relies on nodule size and nodule type. In this paper, we present a deep learning system based on multi-stream multi-scale convolutional networks, which automatically classifies all nodule types relevant for nodule workup. The system processes raw CT data containing a nodule without the need for any additional information such as nodule segmentation or nodule size and learns a representation of 3D data by analyzing an arbitrary number of 2D views of a given nodule. The deep learning system was trained with data from the Italian MILD screening trial and validated on an independent set of data from the Danish DLCST screening trial. We analyze the advantage of processing nodules at multiple scales with a multi-stream convolutional network architecture, and we show that the proposed deep learning system achieves performance in classifying nodule type that surpasses that of classical machine learning approaches and is within the inter-observer variability among four experienced human observers. Comment: Published in Scientific Reports.
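
    A minimal sketch of the multi-stream idea is given below, assuming Keras, a fixed number of 2D views per nodule and illustrative patch sizes and class counts; it shows the general pattern of per-view convolutional streams merged before a nodule-type classifier, not the architecture published in the paper.

```python
# Minimal sketch of a multi-stream, multi-scale 2D-view classifier in Keras:
# each nodule is represented by several 2D patches extracted at different
# scales, each patch is processed by its own small convolutional stream, and
# the streams are concatenated before a nodule-type softmax. Patch size,
# number of streams and number of classes are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

PATCH = 64       # assumed patch size in pixels
N_STREAMS = 3    # e.g. views extracted at three physical scales
N_CLASSES = 4    # placeholder for the nodule types of interest


def conv_stream(x):
    for filters in (24, 48):
        x = layers.Conv2D(filters, 3, activation="relu", padding="same")(x)
        x = layers.MaxPooling2D()(x)
    return layers.Flatten()(x)


inputs = [layers.Input(shape=(PATCH, PATCH, 1)) for _ in range(N_STREAMS)]
merged = layers.concatenate([conv_stream(x) for x in inputs])
merged = layers.Dense(256, activation="relu")(merged)
outputs = layers.Dense(N_CLASSES, activation="softmax")(merged)

model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```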

    Benthic assemblages of the Anton Dohrn seamount (NE Atlantic): defining deep-sea biotopes to support habitat mapping and management efforts with a focus on vulnerable marine ecosystems

    In 2009 the NW and SE flanks of Anton Dohrn Seamount were surveyed using multibeam echosounder and video ground-truthing to characterise megabenthic biological assemblages (biotopes) and assess those which clearly adhere to the definition of Vulnerable Marine Ecosystems, for use in habitat mapping. A combination of multivariate analysis of still imagery and video ground-truthing defined 13 comprehensive descriptions of biotopes that function as mapping units in an applied context. The data reveal that the NW and SE sides of Anton Dohrn Seamount (ADS) are topographically complex and harbour diverse biological assemblages, some of which agree with current definitions of ‘listed’ habitats of conservation concern. Ten of these biotopes could easily be considered Vulnerable Marine Ecosystems: three coral gardens, four cold-water coral reefs, two xenophyophore communities and one sponge-dominated community, with the remaining biotopes requiring more detailed assessment. Coral gardens were only found on positive geomorphic features, namely parasitic cones and radial ridges, on both sides of the seamount at depths of 1311–1740 m. Two cold-water coral reefs (equivalent to summit reef) were mapped on the NW side of the seamount: Lophelia pertusa reef associated with the cliff-top mounds at a depth of 747–791 m and Solenosmilia variabilis reef on a radial ridge at a depth of 1318–1351 m. Xenophyophore communities were mapped from both sides of the seamount at depths of 1099–1770 m and were either associated with geomorphic features or were in close proximity (< 100 m) to them. The sponge-dominated community was found on the steep escarpment on either side of the seamount at depths of 854–1345 m. Multivariate diversity revealed the xenophyophore biotopes to be the least diverse, and a hard-substratum biotope characterised by serpulids and the sessile holothurian Psolus squamatus to be the most diverse.

    Mapping Cosmic Dawn and Reionization: Challenges and Synergies

    Cosmic dawn and the Epoch of Reionization (EoR) are among the least explored observational eras in cosmology: a time at which the first galaxies and supermassive black holes formed and reionized the cold, neutral Universe of the post-recombination era. With current instruments, only a handful of the brightest galaxies and quasars from that time are detectable as individual objects, due to their extreme distances. Fortunately, a multitude of multi-wavelength intensity mapping measurements, ranging from the redshifted 21 cm background in the radio to the unresolved X-ray background, contain a plethora of synergistic information about this elusive era. The coming decade will likely see direct detections of inhomogeneous reionization with CMB and 21 cm observations, and a slew of other probes covering overlapping areas and complementary physical processes will provide crucial additional information and cross-validation. To maximize scientific discovery and return on investment, coordinated survey planning and joint data analysis should be a high priority, closely coupled to computational models and theoretical predictions. Comment: 5 pages, 1 figure, submitted to the Astro2020 Decadal Survey Science White Paper call.

    On the Linkage between Antarctic Surface Water Stratification and Global Deep-Water Temperature

    The suggestion is advanced that the remarkably low static stability of Antarctic surface waters may arise from a feedback loop involving global deep-water temperatures. If deep-water temperatures are too warm, this promotes Antarctic convection, thereby strengthening the inflow of Antarctic Bottom Water into the ocean interior and cooling the deep ocean. If deep waters are too cold, this promotes Antarctic stratification allowing the deep ocean to warm because of the input of North Atlantic Deep Water. A steady-state deep-water temperature is achieved such that the Antarctic surface can barely undergo convection. A two-box model is used to illustrate this feedback loop in its simplest expression and to develop basic concepts, such as the bounds on the operation of this loop. The model illustrates the possible dominating influence of Antarctic upwelling rate and Antarctic freshwater balance on global deep-water temperatures
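
    A toy time-stepping version of this feedback can make the argument concrete: cool the deep box toward an AABW end member whenever the deep ocean is warm enough to permit Antarctic convection, and warm it toward a NADW end member otherwise, so the deep temperature settles near the convection threshold. All numbers and the form of the stability criterion below are illustrative assumptions, not the paper's two-box model.

```python
# Toy illustration of the proposed feedback, not the paper's two-box model:
# when the deep ocean is warm enough to destabilize the Antarctic surface
# column, convection and AABW formation cool the deep box; otherwise the
# stratified state lets NADW input warm it. All values are assumed.
T_AABW, T_NADW = -1.0, 2.5         # assumed end-member temperatures (deg C)
TAU_COOL, TAU_WARM = 200.0, 800.0  # assumed relaxation timescales (years)
T_CONVECT = 0.5                    # assumed deep-temperature threshold for convection


def step(T_deep: float, dt: float = 1.0) -> float:
    if T_deep > T_CONVECT:                 # surface barely unstable: convection
        dT = (T_AABW - T_deep) / TAU_COOL  # bottom-water formation cools the deep box
    else:                                  # surface stratified
        dT = (T_NADW - T_deep) / TAU_WARM  # NADW input warms the deep box
    return T_deep + dt * dT


T = 2.0
for _ in range(5000):
    T = step(T)
print(f"quasi-steady deep temperature ~ {T:.2f} C")  # settles near the convection threshold
```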

    Optimization of Approximate Maps for Linear Systems Arising in Discretized PDEs

    Generally, discretization of partial differential equations (PDEs) creates a sequence of linear systems A_k x_k = b_k, k = 0, 1, 2, ..., N, with well-known and structured sparsity patterns. Preconditioners are often necessary to achieve fast convergence when solving these linear systems with iterative solvers. Instead of computing a preconditioner for each system from scratch, we can use preconditioner updates for closely related systems. One such preconditioner update is the sparse approximate map (SAM), which is based on the sparse approximate inverse preconditioner and computed using a least-squares approximation. A SAM then acts as a map from one matrix in the sequence to another nearby one for which we have an effective preconditioner. To efficiently compute an effective SAM update (i.e., one that facilitates fast convergence of the iterative solver), we seek to compute an optimal sparsity pattern. In this paper, we examine several sparsity patterns for computing the SAM update to characterize optimal or near-optimal sparsity patterns for linear systems arising from discretized PDEs. Comment: 12 pages, 11 figures, submitted to Proceedings in Applied Mathematics and Mechanics (PAMM).
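
    To make the least-squares construction concrete, the sketch below computes a SPAI-style sparse map column by column for a prescribed sparsity pattern, minimizing ||A_tgt N - A_src||_F so that a preconditioner built for A_src can be reused for A_tgt. The column-wise implementation and the example pattern choice are assumptions for illustration rather than the authors' code; the choice of pattern is precisely what the paper investigates.

```python
# SPAI-style sketch of a sparse approximate map: given A_src (for which a good
# preconditioner already exists) and A_tgt (a later matrix in the sequence),
# find a sparse N with a prescribed sparsity pattern minimizing
# ||A_tgt @ N - A_src||_F, one small least-squares problem per column.
# The pattern choice below (the sparsity of A_tgt itself) is only one option;
# characterizing better patterns is the subject of the paper.
import numpy as np
import scipy.sparse as sp


def sparse_approximate_map(A_tgt, A_src, pattern):
    A_tgt, A_src = sp.csc_matrix(A_tgt), sp.csc_matrix(A_src)
    pattern = sp.csc_matrix(pattern)
    n = A_tgt.shape[1]
    cols = []
    for j in range(n):
        J = pattern[:, j].indices                     # allowed nonzero rows of column j of N
        if len(J) == 0:
            cols.append(sp.csc_matrix((n, 1)))        # empty column in the pattern
            continue
        sub = A_tgt[:, J].toarray()
        I = np.flatnonzero(np.any(sub != 0, axis=1))  # rows of A_tgt touched by columns J
        rhs = A_src[:, j].toarray().ravel()[I]
        coef, *_ = np.linalg.lstsq(sub[I, :], rhs, rcond=None)
        cols.append(sp.csc_matrix((coef, (J, np.zeros_like(J))), shape=(n, 1)))
    return sp.hstack(cols, format="csr")


# Usage sketch: N = sparse_approximate_map(A_tgt, A_src, pattern=A_tgt), after
# which a preconditioner built for A_src can be applied to A_tgt @ N.
```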