
    Geographic variation, null hypotheses, and subspecies limits in the California Gnatcatcher: A response to McCormack and Maley

    We interpreted the results of nuclear DNA sequencing to be inconsistent with the recognition of California Gnatcatcher (Polioptila californica) subspecies. McCormack and Maley (2015) suggested that our data did support 2 taxa, one of which was P. c. californica, listed as Threatened under the Endangered Species Act (ESA). We summarize here how 2 sets of researchers with access to the same data reached different conclusions by including different analyses. We included the southern subspecies' boundary from the taxonomy of Atwood (1991), the taxonomic basis for the ESA listing, which resulted in an Analysis of Molecular Variance that provided no support for subspecies. In contrast, using a novel taxonomic hypothesis without precedent in the literature, McCormack and Maley (2015) found statistically significant FST values for 2 loci, which they suggested support recognition of P. c. californica. We propose that our mitochondrial and nuclear data had sufficient power to capture geographical structure at either the phylogenetic (monophyly) or traditional "75% rule" level. McCormack and Maley (2015) suggested that finding an absence of population structure was a "negative result," whereas we consider it to be the null hypothesis for a species with gene flow and no geographical barriers. We interpret the unstructured mtDNA and nuclear DNA trees, the STRUCTURE analysis supporting one group, the identification of just 26% (and not 75%) of individuals of P. c. californica with the most diagnostic nuclear locus, the overall GST suggesting that over 98% of the variation is explained by nontaxonomic sources, and the lack of evidence of ecological differentiation to indicate that P. c. californica is not a valid subspecies. McCormack and Maley (2015) suggest that statistically significant differences at 2 loci that explained <6% of the genetic variation, and previous morphological data, support recognition of P. c. californica. If ornithology continues to recognize subspecies, these different standards should be reconciled.
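
    As a rough illustration of the "75% rule" diagnosability standard invoked above, the Python sketch below checks what fraction of one population's individuals carry a putatively diagnostic allele; the genotype lists, allele labels, and population names are invented placeholders, not data from the study.

        # Illustrative check in the spirit of the "75% rule": a subspecies is usually
        # considered diagnosable only if at least 75% of its individuals can be told
        # apart from (nearly all of) the other population at some character.
        # The genotype lists below are invented placeholders, not data from the study.

        californica = ["A", "G", "G", "A", "G", "G", "G", "A", "G", "G", "G"]  # 3 of 11 carry "A"
        southern    = ["G", "G", "A", "G", "G", "G", "G", "G", "G", "G"]       # 1 of 10 carries "A"

        diagnostic_allele = "A"  # allele hypothetically enriched in P. c. californica

        frac_californica = californica.count(diagnostic_allele) / len(californica)
        frac_southern = southern.count(diagnostic_allele) / len(southern)

        print(f"P. c. californica carrying the diagnostic allele: {frac_californica:.0%}")
        print(f"southern birds carrying the same allele: {frac_southern:.0%}")
        print("meets the 75% rule" if frac_californica >= 0.75 else "falls short of the 75% rule")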

    PHYLOGEOGRAPHY OF THE CALIFORNIA GNATCATCHER (POLIOPTILA CALIFORNICA) USING MULTILOCUS DNA SEQUENCES AND ECOLOGICAL NICHE MODELING: IMPLICATIONS FOR CONSERVATION

    An important step in conservation is to identify whether threatened populations are evolutionarily discrete and significant to the species. A prior mitochondrial DNA (mtDNA) phylogeographic study of the California Gnatcatcher (Polioptila californica) revealed no geographic structure and, thus, did not support the subspecies validity of the threatened coastal California Gnatcatcher (P. c. californica). The U.S. Fish and Wildlife Service concluded that mtDNA data alone were insufficient to test subspecies taxonomy. We sequenced eight nuclear loci to search for historically discrete groupings that might have been missed by the mtDNA study (which we confirmed with new ND2 sequences). Phylogenetic analyses of the nuclear loci revealed no historically significant groupings and a low level of divergence (GST = 0.013). Sequence data suggested an older population increase in southern populations, consistent with niche modeling that suggested a northward range expansion following the Last Glacial Maximum (LGM). The signal of population increase was most evident in the mtDNA data, revealing the importance of including loci with short coalescence times. The threatened subspecies inhabits the distinctive Coastal Sage Scrub ecosystem, which might indicate ecological differentiation, but a test of niche divergence was not significant. The best available genetic, morphological, and ecological data indicate a southward population displacement during the LGM followed by northward range expansion, without significant isolating barriers having led to evolutionarily discrete subspecies or distinct population segments that would qualify as listable units under the Endangered Species Act.
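
    As a rough illustration of the GST value quoted above (GST = 0.013, i.e. only about 1% of variation partitioned among populations), the sketch below computes Nei's GST from per-population allele frequencies; the two populations and their frequencies are invented placeholders, not the gnatcatcher data.

        import numpy as np

        # Nei's G_ST = (H_T - H_S) / H_T, where H_S is the mean within-population
        # expected heterozygosity and H_T the heterozygosity of the pooled sample.
        # The allele frequencies below (two populations, one biallelic locus) are
        # invented placeholders, not values from the gnatcatcher dataset.

        pop_freqs = np.array([[0.55, 0.45],   # population 1 allele frequencies
                              [0.45, 0.55]])  # population 2 allele frequencies

        h_s = np.mean(1.0 - np.sum(pop_freqs**2, axis=1))  # mean within-population heterozygosity
        pooled = pop_freqs.mean(axis=0)                    # pooled allele frequencies
        h_t = 1.0 - np.sum(pooled**2)                      # total heterozygosity

        g_st = (h_t - h_s) / h_t
        print(f"H_S = {h_s:.3f}, H_T = {h_t:.3f}, G_ST = {g_st:.3f}")  # ~0.01 for these toy values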

    The DEEP2 Galaxy Redshift Survey: Design, Observations, Data Reduction, and Redshifts

    We describe the design and data sample of the DEEP2 Galaxy Redshift Survey, the densest and largest precision-redshift survey of galaxies at z ~ 1 completed to date. The survey has conducted a comprehensive census of massive galaxies, their properties, environments, and large-scale structure down to absolute magnitude M_B = -20 at z ~ 1 via ~90 nights of observation on the DEIMOS spectrograph at Keck Observatory. DEEP2 covers an area of 2.8 deg^2 divided into four separate fields, observed to a limiting apparent magnitude of R_AB = 24.1. Objects with z < 0.7 are rejected based on BRI photometry in three of the four DEEP2 fields, allowing galaxies with z > 0.7 to be targeted ~2.5 times more efficiently than in a purely magnitude-limited sample. Approximately sixty percent of eligible targets are chosen for spectroscopy, yielding nearly 53,000 spectra and more than 38,000 reliable redshift measurements. Most of the targets which fail to yield secure redshifts are blue objects that lie beyond z ~ 1.45. The DEIMOS 1200-line/mm grating used for the survey delivers high spectral resolution (R ~ 6000), accurate and secure redshifts, and unique internal kinematic information. Extensive ancillary data are available in the DEEP2 fields, particularly in the Extended Groth Strip, which has evolved into one of the richest multiwavelength regions on the sky. DEEP2 surpasses other deep precision-redshift surveys at z ~ 1 in terms of galaxy numbers, redshift accuracy, sample number density, and amount of spectral information. We also provide an overview of the scientific highlights of the DEEP2 survey thus far. This paper is intended as a handbook for users of the DEEP2 Data Release 4, which includes all DEEP2 spectra and redshifts, as well as for the publicly available DEEP2 DEIMOS data reduction pipelines. [Abridged] Comment: submitted to ApJS; data products available for download at http://deep.berkeley.edu/DR4
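
    The photometric pre-selection described above can be sketched as a simple cut in (B-R, R-I) color space combined with the R_AB <= 24.1 magnitude limit; the cut coefficients below are invented round numbers chosen only to illustrate the idea, not the actual DEEP2 selection function.

        import numpy as np

        # Toy BRI pre-selection in the spirit of DEEP2: keep objects whose colors are
        # consistent with z > 0.7. The magnitude limit matches the text; the color-cut
        # coefficients are invented placeholders, NOT the survey's real selection.

        def passes_preselection(B, R, I, r_limit=24.1, slope=2.5, offset=-0.3, floor=1.1):
            bright_enough = R <= r_limit
            br, ri = B - R, R - I
            # at z > 0.7 the 4000 A break moves into R-I, so high-z candidates tend to
            # have blue B-R and/or red R-I; keep anything on that side of a toy boundary
            high_z_colors = (br < slope * ri + offset) | (ri > floor)
            return bright_enough & high_z_colors

        # three toy objects
        B = np.array([24.9, 23.8, 25.3])
        R = np.array([23.9, 23.2, 24.0])
        I = np.array([23.0, 23.0, 22.7])
        print(passes_preselection(B, R, I))  # boolean mask of objects kept for spectroscopy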

    The FAIR Guiding Principles for scientific data management and stewardship

    There is an urgent need to improve the infrastructure supporting the reuse of scholarly data. A diverse set of stakeholders—representing academia, industry, funding agencies, and scholarly publishers—have come together to design and jointly endorse a concise and measurable set of principles that we refer to as the FAIR Data Principles. The intent is that these may act as a guideline for those wishing to enhance the reusability of their data holdings. Distinct from peer initiatives that focus on the human scholar, the FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. This Comment is the first formal publication of the FAIR Principles, and includes the rationale behind them and some exemplar implementations in the community.
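
    To make the emphasis on machine-actionability concrete, the sketch below shows one minimal way a dataset description could expose FAIR-relevant fields (persistent identifier, rich metadata, access protocol, license, provenance) in machine-readable form; the field names and values are illustrative only and are not a schema prescribed by the FAIR Principles.

        import json

        # A minimal, hypothetical machine-readable metadata record illustrating the four
        # FAIR facets. The field names and values are illustrative; the FAIR Principles
        # do not mandate any particular schema or vocabulary.
        record = {
            "identifier": "https://doi.org/10.xxxx/example",  # Findable: globally unique, persistent ID
            "title": "Example dataset",
            "keywords": ["example", "FAIR", "metadata"],       # Findable: rich, indexable metadata
            "access": {                                        # Accessible: standard, open protocol
                "protocol": "https",
                "landing_page": "https://repository.example.org/dataset/123",
            },
            "format": "text/csv",                              # Interoperable: shared, open format
            "license": "https://creativecommons.org/licenses/by/4.0/",  # Reusable: clear usage license
            "provenance": "Derived from survey X with pipeline v1.2 (hypothetical)",  # Reusable: provenance
        }

        print(json.dumps(record, indent=2))  # serializable, hence harvestable by machines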

    Correlations Between Gene Expression and Mercury Levels in Blood of Boys With and Without Autism

    Gene expression in blood was correlated with mercury levels in blood of 2- to 5-year-old boys with autism (AU) compared to age-matched typically developing (TD) control boys. This was done to address the possibility that the two groups might metabolize toxicants, such as mercury, differently. RNA was isolated from blood and gene expression assessed on whole genome Affymetrix Human U133 expression microarrays. Mercury levels were measured using an inductively coupled plasma mass spectrometer. Analysis of covariance (ANCOVA) was performed and partial correlations between gene expression and mercury levels were calculated, after correcting for age and batch effects. To reduce false positives, only genes shared by the ANCOVA models were analyzed. Of the 26 genes that correlated with mercury levels in both AU and TD boys, 11 were significantly different between the groups (P(Diagnosis*Mercury) ≤ 0.05). The expression of a large number of genes (n = 316) correlated with mercury levels in TD but not in AU boys (P ≤ 0.05), the most represented biological functions being cell death and cell morphology. Expression of 189 genes correlated with mercury levels in AU but not in TD boys (P ≤ 0.05), the most represented biological functions being cell morphology, amino acid metabolism, and antigen presentation. These data and those in our companion study on correlation of gene expression and lead levels show that AU and TD children display different correlations between transcript levels and low levels of mercury and lead. These findings might suggest different genetic transcriptional programs associated with mercury in AU compared to TD children.
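
    The partial-correlation step described above (relating transcript levels to blood mercury after removing age and batch effects) can be sketched by regressing both variables on the covariates and correlating the residuals; all arrays below are randomly generated stand-ins, not measurements from the study.

        import numpy as np
        from scipy import stats

        # Sketch of a partial correlation between one gene's expression and blood mercury,
        # controlling for age and batch. All data are randomly generated placeholders.
        rng = np.random.default_rng(1)
        n = 30
        age = rng.uniform(2, 5, n)                    # years
        batch = rng.integers(0, 2, n).astype(float)   # two hypothetical microarray batches
        mercury = 0.3 * age + rng.normal(0, 0.2, n)   # blood Hg (arbitrary units)
        expression = 0.5 * mercury + 0.2 * batch + rng.normal(0, 0.3, n)

        def residuals(y, covariates):
            """Residuals of y after ordinary least-squares regression on the covariates."""
            X = np.column_stack([np.ones(len(y))] + covariates)
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return y - X @ beta

        r, p = stats.pearsonr(residuals(expression, [age, batch]),
                              residuals(mercury, [age, batch]))
        print(f"partial r = {r:.2f}, p = {p:.3f}")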

    TXS 0506+056 with Updated IceCube Data

    Past results from the IceCube Collaboration have suggested that the blazar TXS 0506+056 is a potential source of astrophysical neutrinos. However, in the years since, there have been numerous updates to event processing and reconstruction, as well as improvements to the statistical methods used to search for astrophysical neutrino sources. These improvements, in combination with additional years of data, have resulted in the identification of NGC 1068 as a second neutrino source candidate. This talk will re-examine time-dependent neutrino emission from TXS 0506+056 using the most recent northern-sky data sample that was used in the analysis of NGC 1068. The results of using this updated data sample to obtain a significance and flux fit for the 2014 TXS 0506+056 "untriggered" neutrino flare are reported.
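
    As background on what an "untriggered" flare fit estimates, the sketch below shows a stripped-down, time-only version of an unbinned likelihood with a Gaussian flare profile on top of a uniform background; the real IceCube analysis also uses per-event direction and energy information, and every number here is a toy value, not the TXS 0506+056 data.

        import numpy as np
        from scipy.optimize import minimize_scalar
        from scipy.stats import norm

        # Toy, time-only flare likelihood: n_s events drawn from a Gaussian time profile,
        # the rest from a uniform background. All numbers are invented toy values.
        rng = np.random.default_rng(0)
        times = np.concatenate([rng.uniform(0, 1000, 500),  # days; uniform background
                                rng.normal(400, 10, 20)])   # clustered toy "flare"
        livetime = 1000.0
        t0, sigma_t = 400.0, 10.0                           # assumed flare center and width

        def neg_log_likelihood(ns):
            n = len(times)
            sig = norm.pdf(times, t0, sigma_t)              # signal time PDF
            bkg = 1.0 / livetime                            # uniform background time PDF
            return -np.sum(np.log(ns / n * sig + (1.0 - ns / n) * bkg))

        fit = minimize_scalar(neg_log_likelihood, bounds=(0.0, 100.0), method="bounded")
        ts = 2.0 * (neg_log_likelihood(0.0) - fit.fun)      # test statistic vs. background-only
        print(f"best-fit n_s = {fit.x:.1f}, TS = {ts:.1f}")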

    Searches for IceCube Neutrinos Coincident with Gravitational Wave Events


    Conditional normalizing flows for IceCube event reconstruction


    Galactic Core-Collapse Supernovae at IceCube: “Fire Drill” Data Challenges and follow-up

    The next Galactic core-collapse supernova (CCSN) presents a once-in-a-lifetime opportunity to make astrophysical measurements using neutrinos, gravitational waves, and electromagnetic radiation. CCSNe local to the Milky Way are extremely rare, so it is paramount that detectors are prepared to observe the signal when it arrives. The IceCube Neutrino Observatory, a gigaton water-Cherenkov detector beneath the surface at the South Pole, is sensitive to the burst of neutrinos released by a Galactic CCSN at a level >10σ. This burst of neutrinos precedes optical emission by hours to days, enabling neutrinos to serve as an early warning for follow-up observation. IceCube's detection capabilities make it a cornerstone of the global network of neutrino detectors monitoring for Galactic CCSNe, the SuperNova Early Warning System (SNEWS 2.0). In this contribution, we describe IceCube's sensitivity to Galactic CCSNe and strategies for operational readiness, including "fire drill" data challenges. We also discuss coordination with SNEWS 2.0.
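
    To give a feel for the ">10σ" statement, the sketch below estimates the Gaussian significance of a collective hit-rate excess over a steady background in a short burst window; the sensor count, noise rate, window, and signal size are invented round numbers, not IceCube's actual detector rates or CCSN signal expectation.

        import numpy as np

        # Back-of-the-envelope significance of a collective rate excess, in the spirit of
        # a CCSN burst search: compare the summed counts in a short window with the
        # expectation from steady background noise. All numbers are invented placeholders.
        n_sensors = 5000              # hypothetical number of optical sensors summed
        bkg_rate_per_sensor = 500.0   # Hz, hypothetical steady noise rate per sensor
        window = 0.5                  # s, hypothetical burst search window
        signal_counts = 1.5e4         # hypothetical extra hits from the neutrino burst

        mu_bkg = n_sensors * bkg_rate_per_sensor * window  # expected background counts
        significance = signal_counts / np.sqrt(mu_bkg)     # Gaussian approximation, valid for large mu_bkg
        print(f"expected background = {mu_bkg:.3g} counts, excess ~ {significance:.1f} sigma")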

    All-Energy Search for Solar Atmospheric Neutrinos with IceCube

    The interaction of cosmic rays with the solar atmosphere generates a secondary flux of mesons that decay into photons and neutrinos – the so-called solar atmospheric flux. Although the gamma-ray component of this flux has been observed in Fermi-LAT and HAWC Observatory data, the neutrino component remains undetected. The energy distribution of those neutrinos follows a soft spectrum that extends from the GeV to the multi-TeV range, making large Cherenkov neutrino telescopes suitable instruments for probing this flux. In this contribution, we will discuss current progress of a search for the solar atmospheric neutrino flux by the IceCube Neutrino Observatory using all available data since 2011. Compared to the previous analysis, which considered only high-energy muon neutrino tracks, we will additionally consider events produced by all flavors of neutrinos down to GeV-scale energies. These new events should improve our analysis sensitivity since the flux falls quickly with energy. Determining the magnitude of the neutrino flux is essential, since it is an irreducible background to indirect solar dark matter searches.
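
    The point that the flux falls quickly with energy can be made concrete with a toy calculation: folding a soft power-law flux with a rising detector acceptance shows that most of the expected events come from the lowest accessible energies, which is why extending the selection down to GeV energies helps. The spectral index, normalization, and acceptance scaling below are illustrative assumptions, not the measured solar atmospheric flux or IceCube's actual effective area.

        import numpy as np

        # Toy estimate of how much of the expected event rate comes from low energies when
        # a soft power-law flux is folded with a rising acceptance. The spectral index and
        # acceptance scaling are illustrative assumptions, not measured quantities.
        energies = np.logspace(0, 4, 1000)        # 1 GeV to 10 TeV
        flux = energies**-3.0                     # assumed soft power law (arbitrary normalization)
        acceptance = energies**1.5                # assumed rising detector acceptance

        integrand = flux * acceptance             # differential expected event rate
        dE = np.diff(energies)
        mid = 0.5 * (integrand[:-1] + integrand[1:])   # trapezoid rule by hand
        rate_total = np.sum(mid * dE)
        rate_below_100 = np.sum((mid * dE)[energies[:-1] < 100.0])
        print(f"fraction of expected events below 100 GeV: {rate_below_100 / rate_total:.0%}")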