
    Kaneohe Bay Sewage Diversion Experiment: Perspectives on Ecosystem Responses to Nutritional Perturbation

    Kaneohe Bay, Hawaii, received increasing amounts of sewage from the 1950s through 1977. Most sewage was diverted from the bay in 1977 and early 1978. This investigation, begun in January 1976 and continued through August 1979, described the bay over that period, with particular reference to the responses of the ecosystem to sewage diversion. The sewage was a nutritional subsidy. All of the inorganic nitrogen and most of the inorganic phosphorus introduced into the ecosystem were taken up biologically before being advected from the bay. The major uptake was by phytoplankton, and the internal water-column cycle between dissolved nutrients, phytoplankton, zooplankton, microheterotrophs, and detritus supported a rate of productivity far exceeding the rate of nutrient loading. These water-column particles were partly washed out of the ecosystem and partly sedimented and became available to the benthos. The primary benthic response to nutrient loading was a large buildup of detritivorous heterotrophic biomass. Cycling of nutrients among heterotrophs, autotrophs, detritus, and inorganic nutrients was important. With sewage diversion, the biomass of both plankton and benthos decreased rapidly. Benthic biological composition has not yet returned to pre-sewage conditions, partly because some key organisms are long-lived and partly because the bay substratum has been perturbed by both the sewage and other human influences.

    Predictors of placement from a juvenile detention facility

    The purpose of this project was to determine whether certain personal, socioeconomic, and court-related factors are significantly related to the differential placement of delinquent and dependent children from the detention facility at the Donald E. Long Home. A stratified random sample was composed of 173 placements of children who were held in detention after a preliminary hearing. The review of the literature revealed that little systematic information is known regarding the placement process as it relates to the differential placement of children from a detention facility. A code sheet was developed for recording the information in the children’s records maintained by the court. Fourteen variables were ultimately selected for analysis of their relationship to differential placement. These variables were subjected to three statistical approaches: a descriptive analysis of the random sample, testing of the significance of each variable to the alternatives in placement by either chi-square or analysis of variance, and testing of several variables in combination by discriminant function. This study was limited by the fact that only demographic variables were tested. Although three individual variables were found to have a high degree of significance in relation to placement, the data produced within the scope of this research project do not provide an effective placement profile. The need for additional research in the area of the differential placement process is clearly indicated. Suggestions are made for future research.
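As an illustration of the kind of bivariate test the abstract mentions (chi-square between a single variable and placement outcome), here is a minimal sketch of Pearson's chi-square statistic for a contingency table. The function name and the example counts are invented for illustration, not taken from the study.

```python
import numpy as np

def chi_square_independence(table):
    """Pearson chi-square statistic and degrees of freedom for a
    contingency table (rows: levels of one variable, columns: placement
    alternatives)."""
    table = np.asarray(table, dtype=float)
    row_totals = table.sum(axis=1, keepdims=True)
    col_totals = table.sum(axis=0, keepdims=True)
    # Expected counts under independence: row total * column total / grand total
    expected = row_totals * col_totals / table.sum()
    chi2 = ((table - expected) ** 2 / expected).sum()
    dof = (table.shape[0] - 1) * (table.shape[1] - 1)
    return chi2, dof

# Hypothetical 2x2 table: variable level (rows) vs. placement outcome (columns)
chi2, dof = chi_square_independence([[10, 20], [20, 10]])
```

The statistic would then be compared against the chi-square distribution with `dof` degrees of freedom to judge significance.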

    Which missing value imputation method to use in expression profiles: a comparative study and two selection schemes

    Background: Gene expression data frequently contain missing values; however, most downstream analyses for microarray experiments require complete data. Many methods have been proposed in the literature to estimate missing values using the correlation patterns within the gene expression matrix. Each method has its own advantages, but the specific conditions under which each method is preferred remain largely unclear. In this report we describe an extensive evaluation of eight current imputation methods on multiple types of microarray experiments, including time series, multiple exposures, and multiple exposures × time series data. We then introduce two complementary selection schemes for determining the most appropriate imputation method for any given data set. Results: We found that the optimal imputation algorithms (LSA, LLS, and BPCA) are all highly competitive with each other, and that no method is uniformly superior on all the data sets we examined. The success of each method can also depend on the underlying "complexity" of the expression data, where we take complexity to indicate the difficulty in mapping the gene expression matrix to a lower-dimensional subspace. We developed an entropy measure to quantify the complexity of expression matrices and found that, by incorporating this information, the entropy-based selection (EBS) scheme is useful for selecting an appropriate imputation algorithm. We further propose a simulation-based self-training selection (STS) scheme. This technique has been used previously for microarray data imputation, but for different purposes. The scheme selects the optimal or near-optimal method with high accuracy, but at an increased computational cost. Conclusion: Our findings provide insight into the problem of which imputation method is optimal for a given data set. Three top-performing methods (LSA, LLS and BPCA) are competitive with each other. Global-based imputation methods (PLS, SVD, BPCA) performed better on microarray data with lower complexity, while neighbour-based methods (KNN, OLS, LSA, LLS) performed better on data with higher complexity. We also found that the EBS and STS schemes serve as complementary and effective tools for selecting the optimal imputation algorithm.
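To make the neighbour-based family concrete, here is a toy sketch of KNN-style imputation: a missing entry in a row is filled with the average of that column across the k rows most similar on the columns both rows observe. This is a simplified illustration of the general idea, not the evaluated implementations.

```python
import numpy as np

def knn_impute(X, k=2):
    """Fill NaN entries in X by averaging the k most similar rows.

    Similarity is root-mean-square distance over the columns observed
    in both rows. A toy version of neighbour-based imputation.
    """
    X = np.asarray(X, dtype=float)
    filled = X.copy()
    for i in range(X.shape[0]):
        missing = np.isnan(X[i])
        if not missing.any():
            continue
        dists = []
        for j in range(X.shape[0]):
            if j == i:
                continue
            shared = ~np.isnan(X[i]) & ~np.isnan(X[j])
            if not shared.any():
                continue
            d = np.sqrt(np.mean((X[i, shared] - X[j, shared]) ** 2))
            dists.append((d, j))
        dists.sort()
        neighbours = [j for _, j in dists[:k]]
        for col in np.where(missing)[0]:
            vals = [X[j, col] for j in neighbours if not np.isnan(X[j, col])]
            if vals:
                filled[i, col] = np.mean(vals)
    return filled
```

Global methods such as SVD or BPCA instead model the whole matrix at once, which is why they tend to win on low-complexity data that fits a low-dimensional subspace.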

    Grooming coercion and the post-conflict trading of social services in wild Barbary macaques

    In animal and human societies, social services such as protection from predators are often exchanged between group members. The tactics that individuals display to obtain a service depend on its value and on differences between individuals in their capacity to aggressively obtain it. Here we analysed the exchange of valuable social services (i.e. grooming and relationship repair) in the aftermath of a conflict, in wild Barbary macaques (Macaca sylvanus). The relationship repair function of post-conflict affiliation (i.e. reconciliation) was apparent in the victim but not in the aggressor. Conversely, we found evidence for grooming coercion by the aggressor; when the victim failed to give grooming soon after a conflict, they received renewed aggression from the aggressor. We argue that post-conflict affiliation between former opponents can be better described as a trading of social services rather than coercion alone, as both animals obtain some benefits (i.e. grooming for the aggressor and relationship repair for the victim). Our study is the first to test the importance of social coercion in the aftermath of a conflict. Differences in competitive abilities can affect the exchange of services and the occurrence of social coercion in animal societies. This may also help explain the variance between populations and species in their social behaviour and conflict management strategies.

    Temporally ordered collective creep and dynamic transition in the charge-density-wave conductor NbSe3

    We have observed an unusual form of creep at low temperatures in the charge-density-wave (CDW) conductor NbSe3. This creep develops when CDW motion becomes limited by thermally activated phase advance past individual impurities, demonstrating the importance of local pinning and related short-length-scale dynamics. Unlike in vortex lattices, elastic collective dynamics on longer length scales results in temporally ordered motion and a finite threshold field. A first-order dynamic phase transition from creep to high-velocity sliding produces "switching" in the velocity-field characteristic. Comment: 4 pages, 4 eps figures; minor clarifications. To be published in Phys. Rev. Lett.

    Accelerating Bayesian hierarchical clustering of time series data with a randomised algorithm

    We live in an era of abundant data. This has necessitated the development of new and innovative statistical algorithms to get the most from experimental data. For example, faster algorithms make practical the analysis of larger genomic data sets, allowing us to extend the utility of cutting-edge statistical methods. We present a randomised algorithm that accelerates the clustering of time series data using the Bayesian Hierarchical Clustering (BHC) statistical method. BHC is a general method for clustering any discretely sampled time series data. In this paper we focus on a particular application to microarray gene expression data. We define and analyse the randomised algorithm, before presenting results on both synthetic and real biological data sets. We show that the randomised algorithm leads to substantial gains in speed with minimal loss in clustering quality. The randomised time series BHC algorithm is available as part of the R package BHC, which can be downloaded from Bioconductor (version 2.10 and above) via http://bioconductor.org/packages/2.10/bioc/html/BHC.html. We have also made available a set of R scripts which can be used to reproduce the analyses carried out in this paper. These are available from https://sites.google.com/site/randomisedbhc/.

    Bayesian correlated clustering to integrate multiple datasets

    Motivation: The integration of multiple datasets remains a key challenge in systems biology and genomic medicine. Modern high-throughput technologies generate a broad array of different data types, providing distinct – but often complementary – information. We present a Bayesian method for the unsupervised integrative modelling of multiple datasets, which we refer to as MDI (Multiple Dataset Integration). MDI can integrate information from a wide range of different datasets and data types simultaneously (including the ability to model time series data explicitly using Gaussian processes). Each dataset is modelled using a Dirichlet-multinomial allocation (DMA) mixture model, with dependencies between these models captured via parameters that describe the agreement among the datasets. Results: Using a set of 6 artificially constructed time series datasets, we show that MDI is able to integrate a significant number of datasets simultaneously, and that it successfully captures the underlying structural similarity between the datasets. We also analyse a variety of real S. cerevisiae datasets. In the 2-dataset case, we show that MDI’s performance is comparable to the present state of the art. We then move beyond the capabilities of current approaches and integrate gene expression, ChIP-chip and protein-protein interaction data, to identify a set of protein complexes for which genes are co-regulated during the cell cycle. Comparisons to other unsupervised data integration techniques – as well as to non-integrative approaches – demonstrate that MDI is very competitive, while also providing information that would be difficult or impossible to extract using other methods.

    Bayesian hierarchical clustering for studying cancer gene expression data with unknown statistics

    Clustering analysis is an important tool in studying gene expression data. The Bayesian hierarchical clustering (BHC) algorithm can automatically infer the number of clusters and uses Bayesian model selection to improve clustering quality. In this paper, we present an extension of the BHC algorithm. Our Gaussian BHC (GBHC) algorithm represents data as a mixture of Gaussian distributions. It uses a normal-gamma distribution as a conjugate prior on the mean and precision of each of the Gaussian components. We tested GBHC over 11 cancer and 3 synthetic datasets. The results on cancer datasets show that in sample clustering, GBHC on average produces a clustering partition that is more concordant with the ground truth than those obtained from other commonly used algorithms. Furthermore, GBHC frequently infers a number of clusters that is close to the ground truth. In gene clustering, GBHC also produces a clustering partition that is more biologically plausible than several other state-of-the-art methods. This suggests GBHC as an alternative tool for studying gene expression data. The implementation of GBHC is available at https://sites.google.com/site/gaussianbhc
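The normal-gamma prior the abstract mentions is the standard conjugate prior for a Gaussian with unknown mean and precision, which is what makes the marginal likelihoods in BHC-style model selection tractable. Below is a sketch of the textbook posterior update; the hyperparameter names are the conventional ones, not taken from the GBHC paper.

```python
import numpy as np

def normal_gamma_update(x, mu0=0.0, kappa0=1.0, alpha0=1.0, beta0=1.0):
    """Posterior hyperparameters of a NormalGamma(mu0, kappa0, alpha0, beta0)
    prior after observing data x from a Gaussian with unknown mean and
    precision (standard conjugate update)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xbar = x.mean()
    kappa_n = kappa0 + n
    # Posterior mean: precision-weighted average of prior mean and sample mean
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
    alpha_n = alpha0 + n / 2.0
    # Posterior rate: prior rate + within-sample scatter + prior-vs-sample shift
    beta_n = (beta0
              + 0.5 * ((x - xbar) ** 2).sum()
              + kappa0 * n * (xbar - mu0) ** 2 / (2.0 * kappa_n))
    return mu_n, kappa_n, alpha_n, beta_n
```

Because the update is closed-form, the evidence for "these points form one Gaussian cluster" can be computed analytically, which is the quantity BHC-style merges compare.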

    Integration of the Old and New Lake Suigetsu (Japan) Terrestrial Radiocarbon Calibration Data Sets

    The varved sediment profile of Lake Suigetsu, central Japan, offers an ideal opportunity from which to derive a terrestrial record of atmospheric radiocarbon across the entire range of the 14C dating method. Previous work by Kitagawa and van der Plicht (1998a,b, 2000) provided such a data set; however, problems with the varve-based age scale of their SG93 sediment core precluded the use of this data set for 14C calibration purposes. Lake Suigetsu was re-cored in summer 2006, with the retrieval of overlapping sediment cores from 4 parallel boreholes enabling complete recovery of the sediment profile for the present “Suigetsu Varves 2006” project (Nakagawa et al. 2012). Over 550 14C determinations have been obtained from terrestrial plant macrofossils picked from the latter SG06 composite sediment core, which, coupled with the core’s independent varve chronology, provides the only non-reservoir-corrected 14C calibration data set across the 14C dating range. Here, physical matching of archive U-channel sediment from SG93 to the continuous SG06 sediment profile is presented. We show the excellent agreement between the respective projects’ 14C data sets, allowing the integration of 243 14C determinations from the original SG93 project into a composite Lake Suigetsu 14C calibration data set comprising 808 individual 14C determinations, spanning the last 52,800 cal yr.