    Primary production of a rough fescue ecosystem in Western Montana


    Why People Fail to use Condoms for STD and HIV Prevention

    The world is almost 30 years into the AIDS pandemic. People know how to prevent HIV through abstinence, monogamy, and condom use. Despite this awareness, people still put themselves at risk for HIV and other sexually transmitted diseases. Why? This thesis catalogues the various reasons why people fail to use condoms during sexual intercourse. The qualitative information represents specific selections from anonymous personal interviews with over 1,500 individuals, combined with other available data and information from other HIV field workers and organizations. The findings show four major categories of influences affecting an individual's decision to engage in unprotected sexual intercourse: (1) partner influence, (2) perception of risk, (3) desire for health, and (4) personal barriers to condom use. Each major category is explained and analyzed. Finally, a series of practical solutions is offered to address each of the different barriers to HIV and STD prevention.

    True single-cell proteomics using advanced ion mobility mass spectrometry

    In this thesis, I present the development of a novel mass spectrometry (MS) platform and scan modes, in conjunction with a versatile and robust liquid chromatography (LC) platform, that addresses current sensitivity and robustness limitations in MS-based proteomics. I demonstrate how this technology benefits high-speed, ultra-high-sensitivity proteomics studies at scale. This culminated in a first-of-its-kind label-free MS-based single-cell proteomics platform and its application to spatial tissue proteomics. I also investigate the vastly underexplored ‘dark matter’ of the proteome, validating novel microproteins that contribute to human cellular function.

    First, we developed a novel trapped ion mobility spectrometry (TIMS) platform for proteomics applications, which multiplies sequencing speed and sensitivity through ‘parallel accumulation – serial fragmentation’ (PASEF), and applied it to the first high-sensitivity, large-scale projects in the biomedical arena. Next, to explore the collisional cross section (CCS) dimension in TIMS, we measured over 1 million peptide CCS values, which enabled us to train a deep learning model that predicts CCS solely from the linear amino acid sequence. We also translated the principles of TIMS and PASEF to the field of lipidomics, with parallel benefits in throughput and sensitivity.

    The core of my PhD is the development of a robust, ultra-high-sensitivity LC-MS platform for the high-throughput analysis of single-cell proteomes. Improvements in ion transfer efficiency, robust very-low-flow LC, and a PASEF data-independent acquisition scan mode together increased measurement sensitivity by up to 100-fold. We quantified single-cell proteomes to a depth of up to 1,400 proteins per cell. A fundamental result from comparisons to single-cell RNA sequencing data is that single cells have a stable core proteome, whereas the transcriptome is dominated by Poisson noise, emphasizing the need for both complementary technologies.

    Building on these achievements in single-cell proteomics technology, we elucidated the image-guided, spatially and cell-type-resolved proteome of whole organs and tissues from minute sample amounts. We combined clearing of rodent and human organs, unbiased 3D imaging, target tissue identification, isolation, and unbiased MS-based proteomics to describe early-stage β-amyloid plaque proteome profiles in a disease model of familial Alzheimer's disease. Automated, artificial-intelligence-driven isolation and pooling of single cells of the same phenotype allowed us to analyze the cell-type-resolved proteome of cancer tissues, revealing remarkable spatial differences in the proteome.

    Last, we systematically elucidated the pervasive translation of noncanonical human open reading frames by combining state-of-the-art ribosome profiling, CRISPR screens, imaging, and MS-based proteomics. We performed unbiased analyses of these small novel proteins and proved by LC-MS their physical existence as HLA peptides, their role as essential interaction partners of protein complexes, and their cellular function.
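    The CCS-prediction step is the easiest to illustrate: a regressor that maps a peptide's linear amino acid sequence to its collisional cross section. Below is a minimal sketch in PyTorch; the bidirectional LSTM, the layer sizes, and the 40-residue cap are illustrative assumptions, not the architecture the thesis actually trained on the million-peptide dataset.

```python
# Minimal sketch: predict peptide CCS from the linear amino acid sequence.
# Architecture and hyperparameters are illustrative assumptions only.
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i + 1 for i, aa in enumerate(AMINO_ACIDS)}  # 0 is padding

def encode(peptide: str, max_len: int = 40) -> torch.Tensor:
    """Map a peptide string to a fixed-length tensor of residue indices."""
    idx = [AA_INDEX[aa] for aa in peptide[:max_len]]
    return torch.tensor(idx + [0] * (max_len - len(idx)))

class CCSPredictor(nn.Module):
    """Bidirectional LSTM regressor: residue indices -> scalar CCS."""
    def __init__(self, n_tokens: int = 21, emb: int = 32, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(n_tokens, emb, padding_idx=0)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.lstm(self.embed(x))               # (batch, seq, 2*hidden)
        return self.head(h.mean(dim=1)).squeeze(-1)   # one CCS per peptide

model = CCSPredictor()
batch = torch.stack([encode("ACDEFGHIK"), encode("LMNPQRSTVWY")])
print(model(batch).shape)  # torch.Size([2])
```

    A model like this would be fit to the measured CCS values with an ordinary regression loss such as mean squared error; the thesis's point is that the linear sequence alone carries enough signal to train such a predictor.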

    Lean Principles, Learning, and Knowledge Work: Evidence from a Software Services Provider

    In this paper, we examine the applicability of lean production to knowledge work by investigating the implementation of a lean production system at an Indian software services firm. We first discuss specific aspects of knowledge work—task uncertainty, process invisibility, and architectural ambiguity—that call into question the relevance of lean production in this setting. Then, combining a detailed case study and empirical analysis, we find that lean software projects perform better than non-lean software projects at the company for most performance outcomes. We document the influence of the lean initiative on internal processes and examine how the techniques affect learning by improving both problem identification and problem resolution. Finally, we extend the lean production framework by highlighting the need to (1) identify problems early in the process and (2) keep problems and solutions together in time, space, and person.

    Cutting to Order in the Rough Mill: A Sampling Approach

    A cutting order is a list of dimension parts along with demanded quantities. The cutting-order problem is to minimize the total cost of filling a cutting order from a given lumber supply. Similar cutting-order problems arise in many industrial situations outside of forest products. This paper adapts an earlier linear-programming approach that was developed for uniform, defect-free stock materials. The adaptation presented here allows the method to handle nonuniform stock material (e.g., lumber) containing defects that are not known in advance of cutting. The main differences are the use of a random sample to construct the linear program and the use of prices rather than cutting patterns to specify a solution. The primary result of this research is that the expected cost of filling an order under the proposed method is approximately equal to the minimum possible expected cost for sufficiently large order and sample sizes. A secondary result is a lower bound on the minimum possible expected cost. Computer simulations suggest that the proposed method attains nearly minimal expected costs on moderately large orders.
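    As background for the adaptation described above, here is a minimal sketch of the classical pattern-based cutting-stock linear program using scipy.optimize.linprog; the patterns, demands, and costs are invented numbers. The paper's method replaces the fixed, defect-free patterns with a random sample of boards and specifies its solution through prices, which correspond to the dual values the sketch prints at the end.

```python
# Minimal sketch of the classical pattern-based cutting-stock LP.
# All numbers are invented for illustration.
import numpy as np
from scipy.optimize import linprog

# patterns[i, j] = parts of size i yielded by cutting one board with pattern j
patterns = np.array([
    [2, 1, 0],
    [0, 1, 3],
])
demand = np.array([40, 30])        # required quantity of each part size
cost = np.array([1.0, 1.0, 1.0])   # cost of one board cut with each pattern

# minimize cost @ x  subject to  patterns @ x >= demand,  x >= 0
res = linprog(cost, A_ub=-patterns, b_ub=-demand,
              bounds=(0, None), method="highs")
print("boards per pattern:", res.x)
# Dual values on the demand constraints are nonpositive under SciPy's sign
# convention; their magnitudes act as per-part prices.
print("part prices:", -res.ineqlin.marginals)
```

    In the paper's sampling approach, prices of this kind, estimated from a random sample of boards, specify the solution in place of fixed cutting patterns, since the actual patterns cannot be enumerated when defects are unknown before cutting.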

    Aeroponic test bed for hypergravity research

    Taking one pound of food to space costs over $10,000. A plant growth chamber in space would help reduce the cost of transporting food by providing a healthy, long-term food source for extended space missions. Currently, knowledge of the gravity-response mechanisms of plants is too limited to deploy such a system. The overarching goal of this project is to add to the body of knowledge on growing plants in space through research on the effect of hypergravity on Cherry Belle radish growth. To accomplish this goal, an aeroponic test bed was constructed that induces hypergravitational fields ranging from 3 g to 5 g while also providing the nutrients and lighting necessary for growing Cherry Belle radishes.
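    The rotor speed such a test bed needs follows from the centripetal-acceleration relation a = ω²r. The back-of-envelope sketch below computes the required rpm for the 3 g to 5 g range; the 0.5 m arm radius is an assumed value for illustration, since the abstract does not give the test bed's dimensions.

```python
# Back-of-envelope: rotor speed for a given hypergravity level,
# from a = omega^2 * r. The 0.5 m radius is an assumed value.
import math

G = 9.81  # standard gravity, m/s^2

def rpm_for_g_level(g_level: float, radius_m: float) -> float:
    """Rotational speed in rev/min giving g_level * G at radius_m."""
    omega = math.sqrt(g_level * G / radius_m)  # angular speed, rad/s
    return omega * 60.0 / (2.0 * math.pi)

for g in (3, 4, 5):
    print(f"{g} g at r = 0.5 m -> {rpm_for_g_level(g, 0.5):.0f} rpm")
```

    At that assumed radius, 3 g to 5 g corresponds to roughly 73 to 95 rpm; a longer arm reaches the same g-levels at proportionally lower speeds, since omega scales as 1/sqrt(r).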

    Robust Machine Learning Applied to Astronomical Datasets I: Star-Galaxy Classification of the SDSS DR3 Using Decision Trees

    We provide classifications for all 143 million non-repeat photometric objects in the Third Data Release of the Sloan Digital Sky Survey (SDSS) using decision trees trained on 477,068 objects with SDSS spectroscopic data. We demonstrate that these star/galaxy classifications are expected to be reliable for approximately 22 million objects with r < ~20. The general machine learning environment Data-to-Knowledge and supercomputing resources enabled extensive investigation of the decision tree parameter space. This work presents the first public release of objects classified in this way for an entire SDSS data release. The objects are classified as galaxy, star, or nsng (neither star nor galaxy), with an associated probability for each class. To demonstrate how to make effective use of these classifications, we perform several important tests. First, we detail selection criteria within the probability space defined by the three classes to extract samples of stars and galaxies at a given completeness and efficiency. Second, we investigate the efficacy of the classifications and the effect of extrapolating from the spectroscopic regime by performing blind tests on objects in the SDSS, 2dF Galaxy Redshift, and 2dF QSO Redshift (2QZ) surveys. Given the photometric limits of our spectroscopic training data, we effectively begin to extrapolate past our star-galaxy training set at r ~ 18. By comparing the number counts of our training sample with the classified sources, however, we find that our efficiencies appear to remain robust to r ~ 20. As a result, we expect our classifications to be accurate for 900,000 galaxies and 6.7 million stars, and to remain robust via extrapolation for a total of 8.0 million galaxies and 13.9 million stars. [Abridged]
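    The probability-space selection step is straightforward to sketch. The example below trains a scikit-learn decision tree on synthetic Gaussian features standing in for the SDSS photometry, applies a probability cut for the galaxy class, and reports the resulting completeness and efficiency. The threshold, the feature model, and the evaluation on the training set are all illustrative simplifications; the paper's own tests are blind, against independent surveys.

```python
# Minimal sketch: probabilistic galaxy/star/nsng classification with a
# decision tree, plus a completeness/efficiency cut in probability space.
# Synthetic Gaussian features stand in for real photometric measurements.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1.0, (500, 4)) for m in (-1.0, 0.0, 1.0)])
y = np.repeat(["galaxy", "star", "nsng"], 500)

tree = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X, y)
p_galaxy = tree.predict_proba(X)[:, list(tree.classes_).index("galaxy")]

threshold = 0.8                  # cut in the three-class probability space
selected = p_galaxy >= threshold
is_galaxy = y == "galaxy"
completeness = (selected & is_galaxy).sum() / is_galaxy.sum()
efficiency = (selected & is_galaxy).sum() / max(selected.sum(), 1)
print(f"completeness = {completeness:.2f}, efficiency = {efficiency:.2f}")
```

    Raising the threshold trades completeness for efficiency, which is the knob the paper's selection criteria expose to downstream users of the catalog.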