
    Echolocation detections and digital video surveys provide reliable estimates of the relative density of harbour porpoises

    Acknowledgements: We would like to thank Erik Rexstad and Rob Williams for useful reviews of this manuscript. The collection of visual and acoustic data was funded by the UK Department of Energy & Climate Change, the Scottish Government, Collaborative Offshore Wind Research into the Environment (COWRIE) and Oil & Gas UK. Digital aerial surveys were funded by Moray Offshore Renewables Ltd, and additional funding for analysis of the combined datasets was provided by Marine Scotland. Collaboration between the University of Aberdeen and Marine Scotland was supported by MarCRF. We thank colleagues at the University of Aberdeen, Moray First Marine, NERI, Hi-Def Aerial Surveying Ltd and Ravenair for essential support in the field, particularly Tim Barton, Bill Ruck, Rasmus Nielson and Dave Rutter. Thanks also to Andy Webb, David Borchers, Len Thomas, Kelly McLeod, David L. Miller, Dinara Sadykova and Thomas Cornulier for advice on survey design and statistical approaches. Data Accessibility: Data are available from the Dryad Digital Repository: http://dx.doi.org/10.5061/dryad.cf04g

    Direct and Absolute Quantification of over 1800 Yeast Proteins via Selected Reaction Monitoring

    Defining intracellular protein concentration is critical in molecular systems biology. Although strategies for determining relative protein changes are available, defining robust absolute values in copies per cell has proven significantly more challenging. Here we present a reference data set quantifying over 1800 Saccharomyces cerevisiae proteins by direct means using protein-specific stable-isotope labeled internal standards and selected reaction monitoring (SRM) mass spectrometry, far exceeding any previous study. This was achieved by careful design of over 100 QconCAT recombinant proteins as standards, defining 1167 proteins in terms of copies per cell and upper limits on a further 668, with robust CVs routinely less than 20%. The selected reaction monitoring-derived proteome is compared with existing quantitative data sets, highlighting the disparities between methodologies. Coupled with a quantification of the transcriptome by RNA-seq taken from the same cells, these data support revised estimates of several fundamental molecular parameters: a total protein count of ∼100 million molecules per cell, a median of ∼1000 proteins per transcript, and a linear model of protein translation explaining 70% of the variance in translation rate. This work contributes a “gold-standard” reference yeast proteome (including 532 values based on high quality, dual peptide quantification) that can be widely used in systems models and for other comparative studies.

    Reliable and accurate quantification of the proteins present in a cell or tissue remains a major challenge for post-genome scientists. Proteins are the primary functional molecules in biological systems, and knowledge of their abundance and dynamics is an important prerequisite to a complete understanding of natural physiological processes, or dysfunction in disease. Accordingly, much effort has been spent in the development of reliable, accurate and sensitive techniques to quantify the cellular proteome, the complement of proteins expressed at a given time under defined conditions (1). Moreover, the ability to model a biological system, and thus characterize it in kinetic terms, requires that protein concentrations be defined in absolute numbers (2, 3). Given the high demand for accurate quantitative proteome data sets, there has been a continual drive to develop methodology to accomplish this, typically using mass spectrometry (MS) as the analytical platform. Many recent studies have highlighted the capabilities of MS to provide good coverage of the proteome at high sensitivity, often using yeast as a demonstrator system (4–10), suggesting that quantitative proteomics has now “come of age” (1). However, given that MS is not inherently quantitative, most of these approaches produce relative quantitation and do not typically measure the absolute concentrations of individual molecular species by direct means.

    For the yeast proteome, epitope tagging studies using green fluorescent protein or tandem affinity purification tags provide an alternative to MS. Here, collections of modified strains are generated that incorporate a detectable, and therefore quantifiable, tag that supports immunoblotting or fluorescence techniques (11, 12). However, such strategies for copies per cell (cpc) quantification rely on genetic manipulation of the host organism and hence do not quantify endogenous, unmodified protein. Similarly, the tagging can alter protein levels, in some instances hindering protein expression completely (11). Even so, epitope tagging methods have been of value to the community, yielding high coverage quantitative data sets for the majority of the yeast proteome (11, 12).

    MS-based methods do not rely on such nonendogenous labels and can reach genome-wide levels of coverage. Accurate estimation of absolute concentrations, i.e. protein copy number per cell, also usually necessitates the use of (one or more) external or internal standards from which to derive absolute abundance (4). Examples include a comprehensive quantification of the Leptospira interrogans proteome that used a 19-protein subset quantified by selected reaction monitoring (SRM) to calibrate label-free data (8, 13). It is worth noting that epitope tagging methods, although also absolute, rely on a very limited set of standards for the quantitative western blots and necessitate incorporation of a suitable immunogenic tag (11). Other recent, innovative approaches that exploit total ion signal and internal scaling to estimate protein cellular abundance (10, 14) avoid the use of internal standards, though they do rely on targeted proteomic data to validate their approach.

    The use of targeted SRM strategies to derive proteomic calibration standards highlights the advantages of SRM over label-free approaches in terms of accuracy, precision, dynamic range and limit of detection, and the technique has gained currency for its reliability and sensitivity (3, 15–17). Indeed, SRM is often referred to as the “gold standard” proteomic quantification method, being particularly well suited when the proteins to be quantified are known, and when appropriate surrogate peptides for protein quantification can be selected a priori and matched with stable isotope-labeled (SIL) standards (18–20). In combination with SIL peptide standards, which can be generated through a variety of means (3, 15), SRM can be used to quantify low copy number proteins, reaching down to ∼50 cpc in yeast (5). However, although SRM methodology has been used extensively for S. cerevisiae protein quantification by us and others (19, 21, 22), it has not been applied to large protein cohorts because of the requirement to generate large numbers of attendant SIL peptide standards; the largest published data set covers only a few tens of proteins.

    It therefore remains a challenge to robustly quantify an entire eukaryotic proteome in absolute terms by direct means using targeted MS, and this is the focus of our present study, the Census Of the Proteome of Yeast (CoPY). We present here direct and absolute quantification of nearly 2000 endogenous proteins from S. cerevisiae grown at steady state in chemostat culture, using the SRM-based QconCAT approach. Although arguably not quantification of the entire proteome, this represents an accurate and rigorous collection of direct yeast protein quantifications, providing a gold-standard data set of endogenous protein levels for future reference and comparative studies. The highly reproducible SIL-SRM MS data, with robust CVs typically less than 20%, are compared with other extant data sets that were obtained via alternative analytical strategies. We also report a matched high quality transcriptome from the same cells using RNA-seq, which supports additional calculations including a refined estimate of the total protein content of yeast cells and a simple linear model of translation explaining 70% of the variance between RNA and protein levels in yeast chemostat cultures. These analyses confirm the validity of our data and approach, which we believe represent a state-of-the-art absolute quantification compendium of a significant proportion of a model eukaryotic proteome.
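
    The arithmetic that converts an SRM measurement against a spiked stable-isotope standard into copies per cell is simple enough to sketch. The Python below is a minimal illustration only: the function name and all input values are assumptions made for the example, not numbers from the study.

        # Minimal sketch of SIL-SRM absolute quantification arithmetic.
        # All inputs are illustrative assumptions, not values from the paper.

        AVOGADRO = 6.022e23  # molecules per mole

        def copies_per_cell(light_heavy_ratio, standard_fmol, cells_analyzed):
            """Estimate protein copies per cell from an SRM light/heavy ratio.

            light_heavy_ratio: peak area of the endogenous ('light') surrogate
                peptide divided by that of the spiked heavy-labeled standard.
            standard_fmol: amount of heavy standard spiked into the digest (fmol).
            cells_analyzed: number of cells represented in the analyzed sample.
            """
            endogenous_fmol = light_heavy_ratio * standard_fmol
            molecules = endogenous_fmol * 1e-15 * AVOGADRO
            return molecules / cells_analyzed

        # A 1:1 ratio against 10 fmol of standard, measured over 1e7 cells,
        # corresponds to roughly 600 copies per cell.
        print(f"{copies_per_cell(1.0, 10.0, 1e7):.3g} copies per cell")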

    Estimating bycatch mortality for marine mammals: concepts and best practices

    Support for this project was provided by the Lenfest Ocean Program (Contract ID: #31008). Fisheries bycatch is the greatest current source of human-caused deaths of marine mammals worldwide, with severe impacts on the health and viability of many populations. Recent regulations enacted in the United States under the Fish and Fish Product Import Provisions of its Marine Mammal Protection Act require nations with fisheries exporting fish and fish products to the United States (hereafter, “export fisheries”) to have or establish marine mammal protection standards that are comparable in effectiveness to the standards for United States commercial fisheries. In many cases, this will require estimating marine mammal bycatch in those fisheries. Bycatch estimation is conceptually straightforward but can be difficult in practice, especially when resources (funding) are limited or for fisheries consisting of many small vessels with geographically dispersed landing sites. This paper describes best practices for estimating bycatch mortality, which is an important ingredient of bycatch assessment and mitigation. We discuss a general bycatch estimator and how to obtain its requisite bycatch-rate and fisheries-effort data. Scientific observer programs provide the most robust bycatch estimates and consequently are discussed at length, including characteristics such as study design, data collection, statistical analysis, and common sources of estimation bias. We also discuss alternative approaches and data types, such as those based on self-reporting and electronic vessel-monitoring systems. This guide is intended to be useful to managers and scientists in countries having or establishing programs aimed at managing marine mammal bycatch, especially those conducting first-time assessments of fisheries impacts on marine mammal populations.
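
    The general bycatch estimator discussed here is, at its core, a ratio estimator: an observed bycatch rate extrapolated to total fleet effort. The sketch below is a hedged illustration under that reading; the function name and example numbers are hypothetical, and the paper's treatment of stratification and variance is not reproduced.

        # Minimal ratio-style bycatch estimator: observed rate times total effort.
        # Illustrative only; real analyses stratify and attach variance estimates.

        def estimate_total_bycatch(observed_bycatch, observed_effort, total_effort):
            """Total bycatch = (observed bycatch / observed effort) * total effort."""
            bycatch_rate = observed_bycatch / observed_effort
            return bycatch_rate * total_effort

        # Example: 12 animals over 400 observed sets, with 10,000 sets fleet-wide,
        # gives an estimate of 300 animals.
        print(estimate_total_bycatch(12, 400, 10_000))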

    Genetic epilepsy with febrile seizures plus: definite and borderline phenotypes

    Generalised epilepsy with febrile seizures plus (GEFS+) is the most studied familial epilepsy syndrome. However, characteristics of UK families have not previously been reported. Among the first 80 families recruited to our families study, four broad subphenotypes were identified: families with classical GEFS+; families with borderline GEFS+; families with unclassified epilepsy; and families with an alternative syndromal diagnosis. Borderline GEFS+ families shared many characteristics of classical GEFS+ families, such as prominent febrile seizures plus and early onset febrile seizures, but included more adults with focal epilepsies (rather than the idiopathic generalised epilepsies predominating in GEFS+) and double the prevalence of migraine. Thus the authors believe that a novel and robust familial epilepsy phenotype has been identified. Subcategorising families with epilepsy is helpful in targeting both clinical and research resources. Most families with GEFS+ have no identified causal mutation, and so predicting genetic homogeneity by identifying endophenotypes becomes more important.

    Systems, interactions and macrotheory

    A significant proportion of early HCI research was guided by one very clear vision: that the existing theory base in psychology and cognitive science could be developed to yield engineering tools for use in the interdisciplinary context of HCI design. While interface technologies and heuristic methods for behavioral evaluation have rapidly advanced in both capability and breadth of application, progress toward deeper theory has been modest, and some now believe it to be unnecessary. A case is presented for developing new forms of theory, based around generic “systems of interactors.” An overlapping, layered structure of macro- and microtheories could then serve an explanatory role, and could also bind together contributions from the different disciplines. Novel routes to formalizing and applying such theories provide a host of interesting and tractable problems for future basic research in HCI.

    Estimating the abundance of marine mammal populations

    Support for this project was provided by the Lenfest Ocean Program. Motivated by the need to estimate the abundance of marine mammal populations to inform conservation assessments, especially relating to fishery bycatch, this paper provides background on abundance estimation and reviews the various methods available for pinnipeds, cetaceans and sirenians. We first give an “entry-level” introduction to abundance estimation, including fundamental concepts and the importance of recognizing sources of bias and obtaining a measure of precision. Each of the primary methods available to estimate abundance of marine mammals is then described, including data collection and analysis, common challenges in implementation, and the assumptions made, violation of which can lead to bias. The main method for estimating pinniped abundance is extrapolation of counts of animals (pups or all ages) on land or ice to the whole population. Cetacean and sirenian abundance is primarily estimated from transect surveys conducted from ships, small boats or aircraft. If individuals of a species can be recognized from natural markings, mark-recapture analysis of photo-identification data can be used to estimate the number of animals using the study area. Throughout, we cite example studies that illustrate the methods described. To estimate the abundance of a marine mammal population, key issues include: defining the population to be estimated, considering candidate methods based on strengths and weaknesses in relation to a range of logistical and practical issues, being aware of the resources required to collect and analyze the data, and understanding the assumptions made. We conclude with a discussion of some practical issues, given the various challenges that arise during implementation.
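
    Two of the estimators this review covers have standard textbook forms that are easy to sketch: the conventional line-transect estimator and Chapman's bias-corrected mark-recapture estimator for photo-identification data. The Python below is a hedged illustration with made-up inputs, not a reproduction of any analysis cited in the paper.

        # Textbook forms of two abundance estimators; all inputs are illustrative.

        def line_transect_abundance(n, area, half_width, line_length, p):
            """Line-transect estimator: N = n * A / (2 * w * L * p), where p is
            the average detection probability within the strip of half-width w."""
            covered_area = 2 * half_width * line_length
            density = n / (covered_area * p)
            return density * area

        def chapman_estimator(n1, n2, m2):
            """Chapman's bias-corrected Lincoln-Petersen estimator: n1 and n2
            individuals identified in two samples, m2 seen in both."""
            return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

        # 60 sightings along 500 km of trackline (1 km half-width, p = 0.6)
        # in a 10,000 km^2 study area -> 1,000 animals.
        print(line_transect_abundance(60, 10_000, 1.0, 500.0, 0.6))
        # 50 and 80 photo-identified animals with 20 matches -> ~196 animals.
        print(chapman_estimator(50, 80, 20))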

    Ocean mass, sterodynamic effects, and vertical land motion largely explain US coast relative sea level rise

    © The Author(s), 2021. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Harvey, T., Hamlington, B. D., Frederikse, T., Nerem, R. S., Piecuch, C. G., Hammond, W. C., Blewitt, G., Thompson, P. R., Bekaert, D. P. S., Landerer, F. W., Reager, J. T., Kopp, R. E., Chandanpurkar, H., Fenty, I., Trossman, D. S., Walker, J. S., & Boening, C. W. Ocean mass, sterodynamic effects, and vertical land motion largely explain US coast relative sea level rise. Communications Earth & Environment, 2(1), (2021): 233, https://doi.org/10.1038/s43247-021-00300-w. Regional sea-level changes are caused by several physical processes that vary both in space and time. As a result of these processes, large regional departures from the long-term rate of global mean sea-level rise can occur. Identifying and understanding these processes at particular locations is the first step toward generating reliable projections and assisting in improved decision making. Here we quantify to what degree contemporary ocean mass change, sterodynamic effects, and vertical land motion influence the sea-level rise observed at tide-gauge locations around the contiguous U.S. from 1993 to 2018. We are able to explain tide gauge-observed relative sea-level trends at 47 of 55 sampled locations. Locations where we cannot explain observed trends are potentially indicative of shortcomings in our coastal sea-level observational network or estimates of uncertainty. The research was carried out in part at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. C.G.P. was supported by NASA grant 80NSSC20K1241. B.D.H., T.C.H., and T.F. were supported by NASA JPL Task 105393.281945.02.25.04.59. R.E.K. and J.S.W. were supported by the U.S. National Aeronautics and Space Administration (grants 80NSSC17K0698, 80NSSC20K1724 and JPL task 105393.509496.02.08.13.31) and the U.S. National Science Foundation (grant ICER-1663807). P.R.T. acknowledges financial support from the NOAA Global Ocean Monitoring and Observing program in support of the University of Hawaii Sea Level Center (NA11NMF4320128). The ECCO project is funded by the NASA Physical Oceanography; Modeling, Analysis, and Prediction; and Cryosphere Programs.
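
    The comparison performed at each tide gauge amounts to a budget check: the observed relative sea-level trend against the sum of ocean mass change, sterodynamic effects, and vertical land motion. The sketch below illustrates that bookkeeping with invented numbers; the sign convention and example values are assumptions for illustration, not figures from the study.

        # Sea-level budget bookkeeping at a single tide gauge (illustrative).
        # Relative sea level rises when the land subsides, so vertical land
        # motion (positive up) enters with a negative sign.

        def budget_residual(rsl_trend, ocean_mass, sterodynamic, vlm):
            """Residual (mm/yr) between an observed RSL trend and the process sum."""
            explained = ocean_mass + sterodynamic - vlm
            return rsl_trend - explained

        # 4.5 mm/yr observed; 2.0 (mass) + 1.5 (sterodynamic) plus 1.0 mm/yr of
        # subsidence (vlm = -1.0) closes the budget with zero residual.
        print(budget_residual(4.5, 2.0, 1.5, -1.0))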

    Prime Focus Spectrograph (PFS) for the Subaru Telescope: Overview, recent progress, and future perspectives

    PFS (Prime Focus Spectrograph), a next generation facility instrument on the 8.2-meter Subaru Telescope, is a very wide-field, massively multiplexed, optical and near-infrared spectrograph. Exploiting the Subaru prime focus, 2394 reconfigurable fibers will be distributed over the 1.3 deg field of view. The spectrograph has been designed with three arms of blue, red, and near-infrared cameras to simultaneously observe spectra from 380 nm to 1260 nm in one exposure at a resolution of ~1.6-2.7 Å. An international collaboration is developing this instrument under the initiative of Kavli IPMU. The project is now entering the construction phase, aiming to undertake system integration in 2017-2018 and subsequently carry out engineering operations in 2018-2019. This article gives an overview of the instrument, the current project status and future paths forward. Comment: 17 pages, 10 figures. Proceedings of SPIE Astronomical Telescopes and Instrumentation 2016.
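
    The quoted wavelength range and resolution element imply a resolving power that varies by arm, which is a one-line calculation. The snippet below is a back-of-envelope illustration derived only from the numbers in the abstract; it is not an official instrument specification.

        # Resolving power R = lambda / delta_lambda from the quoted figures.

        def resolving_power(wavelength_nm, delta_lambda_angstrom):
            """Convert the wavelength to Angstroms (1 nm = 10 A) and divide."""
            return (wavelength_nm * 10.0) / delta_lambda_angstrom

        # Blue end: 380 nm with a 1.6 A element -> R ~ 2400.
        # Near-infrared end: 1260 nm with a 2.7 A element -> R ~ 4700.
        print(round(resolving_power(380, 1.6)))   # 2375
        print(round(resolving_power(1260, 2.7)))  # 4667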