22 research outputs found

    Enhanced Branch-and-Bound Framework for a Class of Sequencing Problems


    Robust estimation of bacterial cell count from optical density

    Optical density (OD) is widely used to estimate the density of cells in liquid culture, but cannot be compared between instruments without a standardized calibration protocol and is challenging to relate to actual cell count. We address this with an interlaboratory study comparing three simple, low-cost, and highly accessible OD calibration protocols across 244 laboratories, applied to eight strains of constitutive GFP-expressing E. coli. Based on our results, we recommend calibrating OD to estimated cell count using serial dilution of silica microspheres, which produces highly precise calibration (95.5% of residuals <1.2-fold), is easily assessed for quality control, assesses the instrument's effective linear range, and can be combined with fluorescence calibration to obtain units of Molecules of Equivalent Fluorescein (MEFL) per cell, allowing direct comparison and data fusion with flow cytometry measurements. In our study, fluorescence per cell measurements showed only a 1.07-fold mean difference between plate reader and flow cytometry data.
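    As an illustration of the recommended approach, the sketch below shows one way to turn a microsphere serial-dilution series into an OD-to-estimated-count conversion factor. It is a minimal sketch under assumed numbers and names (the stock concentration, well volume, and the functions fit_od_calibration and od_to_cell_count are all illustrative), not the study's published calibration protocol.

```python
# Illustrative sketch: calibrate OD to estimated particle count using a
# serial dilution of silica microspheres. All numbers are assumptions.
import numpy as np

def fit_od_calibration(microsphere_counts, od_readings, blank_od):
    """Fit OD-per-particle from a dilution series (least squares through origin).

    microsphere_counts : known particles per well at each dilution step
    od_readings        : measured OD600 for the same wells
    blank_od           : OD of a particle-free blank
    """
    net_od = np.asarray(od_readings) - blank_od
    counts = np.asarray(microsphere_counts)
    # Keep only points with positive net OD; a real protocol would also
    # trim points outside the instrument's effective linear range.
    mask = net_od > 0
    return np.sum(net_od[mask] * counts[mask]) / np.sum(counts[mask] ** 2)

def od_to_cell_count(sample_od, blank_od, od_per_particle):
    """Convert a sample OD reading to an estimated cell (particle) count."""
    return max(sample_od - blank_od, 0.0) / od_per_particle

# Usage with a hypothetical 2-fold dilution series of a 3e8 particles/mL
# microsphere stock, 200 uL per well.
top_well_particles = 3e8 * 0.2
counts = top_well_particles / 2 ** np.arange(8)
ods = np.array([0.85, 0.44, 0.23, 0.12, 0.065, 0.035, 0.02, 0.012])
k = fit_od_calibration(counts, ods, blank_od=0.04)
print(od_to_cell_count(sample_od=0.30, blank_od=0.04, od_per_particle=k))
```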

    Research on Joint Sparse Representation Learning Approaches

    Dimensionality reduction techniques such as feature extraction and feature selection are critical tools in artificial intelligence, machine learning, and pattern recognition tasks. Previous studies of dimensionality reduction share three common problems: 1) Conventional techniques are disturbed by noisy data. When determining useful features, noise can adversely affect the result; since noise is inevitable, dimensionality reduction techniques must be robust to it. 2) Conventional techniques separate graph learning from informative feature determination. They typically construct a data structure graph first and keep it unchanged while performing feature extraction or feature selection, so the result depends strongly on the initial graph. 3) Conventional techniques analyze the intrinsic structure of the data only partially and unsystematically, preserving either the global structure or the local manifold structure of the data. As a result, it is difficult for a single technique to perform well across different datasets. We propose three learning models that overcome these problems for various tasks in different learning environments. Specifically, our research outcomes are as follows: 1) We propose a learning model that joins Sparse Representation (SR) and Locality Preserving Projection (LPP), named Joint Sparse Representation and Locality Preserving Projection for Feature Extraction (JSRLPP), to extract informative features in an unsupervised setting. JSRLPP performs feature extraction and data structure learning simultaneously and captures both the global and local structure of the data; the sparse matrix in the model directly handles different types of noise. Comprehensive experiments confirm that the proposed model performs impressively against state-of-the-art approaches. 2) We propose a learning model that joins SR and Data Residual Relationships (DRR), named Unsupervised Feature Selection with Adaptive Residual Preserving (UFSARP), to select informative features in an unsupervised setting. The model not only reduces the disturbance caused by different types of noise but also effectively enforces similar samples to have similar reconstruction residuals, and it carries out graph construction and feature determination simultaneously. Experimental results show that the proposed framework improves feature selection. 3) We propose a learning model that joins SR and Low-rank Representation (LRR), named Sparse Representation based Classifier with Low-rank Constraint (SRCLC), to extract informative features in a supervised setting. The Low-rank Constraint (LRC) regularizes both the within-class and between-class structure, while the sparse matrix handles noise and irrelevant features. Extensive experiments confirm that SRCLC achieves impressive improvements over other approaches. In summary, to obtain appropriate feature subsets, we propose three learning models in supervised and unsupervised settings for the tasks of feature extraction and feature selection respectively. Comprehensive experimental results on public databases demonstrate that our models outperform state-of-the-art approaches.
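    To make the shared ingredients of these models concrete, the sketch below combines an L2,1-norm row-sparsity penalty (the "sparse matrix" that absorbs noise) with a graph-Laplacian locality-preserving term, solved by iteratively reweighted least squares. The objective, function names, and data are illustrative assumptions; this is not the thesis's exact JSRLPP, UFSARP, or SRCLC formulation.

```python
# Illustrative sketch of sparse, locality-preserving feature weighting.
# Assumed objective:  min_W ||XW - Y||_F^2 + alpha*||W||_{2,1}
#                          + beta * tr(W^T X^T L X W)
import numpy as np

def knn_laplacian(X, k=5):
    """Unnormalized graph Laplacian of a k-nearest-neighbour affinity graph."""
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    A = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1:k + 1]           # skip the point itself
        A[i, idx] = np.exp(-d2[i, idx] / (d2[i, idx].mean() + 1e-12))
    A = np.maximum(A, A.T)                         # symmetrize affinities
    return np.diag(A.sum(axis=1)) - A

def sparse_locality_projection(X, Y, alpha=0.1, beta=0.1, k=5, iters=30):
    """Learn a row-sparse projection W; row norms rank feature importance."""
    d = X.shape[1]
    L = knn_laplacian(X, k)
    XtX, XtY, XtLX = X.T @ X, X.T @ Y, X.T @ L @ X
    W = np.linalg.solve(XtX + alpha * np.eye(d) + beta * XtLX, XtY)
    for _ in range(iters):
        # Reweighting matrix for the L2,1 penalty (IRLS step).
        D = np.diag(1.0 / (2.0 * np.linalg.norm(W, axis=1) + 1e-12))
        W = np.linalg.solve(XtX + alpha * D + beta * XtLX, XtY)
    return W

# Usage: rank features of random data against pseudo one-hot targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))
Y = np.eye(3)[rng.integers(0, 3, size=60)]
W = sparse_locality_projection(X, Y)
print(np.argsort(-np.linalg.norm(W, axis=1)))      # features by importance
```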

    SVM-DT-based adaptive and collaborative intrusion detection


    Sparse ranking model adaptation for cross domain learning to rank
