8 research outputs found

    Robust estimation of bacterial cell count from optical density

    Optical density (OD) is widely used to estimate the density of cells in liquid culture, but it cannot be compared between instruments without a standardized calibration protocol and is challenging to relate to actual cell count. We address this with an interlaboratory study comparing three simple, low-cost, and highly accessible OD calibration protocols across 244 laboratories, applied to eight strains of constitutively GFP-expressing E. coli. Based on our results, we recommend calibrating OD to estimated cell count using serial dilution of silica microspheres. This protocol produces highly precise calibration (95.5% of residuals <1.2-fold), is easily assessed for quality control, characterizes the instrument's effective linear range, and can be combined with fluorescence calibration to obtain units of Molecules of Equivalent Fluorescein (MEFL) per cell, allowing direct comparison and data fusion with flow cytometry measurements. In our study, per-cell fluorescence measurements showed only a 1.07-fold mean difference between plate reader and flow cytometry data.
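    The microsphere calibration described above reduces to fitting a proportionality between known particle counts and measured OD within the instrument's linear range. The sketch below illustrates this with hypothetical dilution-series numbers (the particle counts and OD readings are invented for illustration, not taken from the study):

```python
import numpy as np

# Hypothetical calibration data: a 2-fold serial dilution of silica
# microspheres with known particle counts per well, and the
# blank-subtracted OD600 readings they produce on one plate reader.
particles = np.array([5e7, 2.5e7, 1.25e7, 6.25e6, 3.125e6, 1.5625e6])
od600     = np.array([0.48, 0.24, 0.121, 0.060, 0.031, 0.015])

# Fit a single proportionality constant (particles per OD unit) by
# least squares through the origin, appropriate within the linear range.
particles_per_od = float(particles @ od600 / (od600 @ od600))

def od_to_count(od):
    """Convert a blank-subtracted OD reading to an estimated cell count."""
    return od * particles_per_od

estimate = od_to_count(0.30)
```

A real protocol would also check residuals against the dilution series to detect where the instrument leaves its linear range before trusting the conversion.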

    Compression of Deep Convolutional Neural Network Using Additional Importance-Weight-Based Filter Pruning Approach

    The success of the convolutional neural network (CNN) has come with a tremendous growth of diverse CNN structures, making them hard to deploy on resource-limited platforms. These over-sized models contain a large number of filters in the convolutional layers, which are responsible for almost 99% of the computation. A key question arises: do we really need all those filters? By removing entire filters, the computational cost can be significantly reduced. Hence, in this article, a filter pruning method, a process of discarding a subset of unimportant or weak filters from the original CNN model, is proposed, which alleviates the shortcomings of over-sized CNN architectures in terms of storage space and inference time. The proposed filter pruning strategy compresses the model by assigning additional importance weights to convolutional filters. These additional importance weights help each filter learn its responsibility and contribute more efficiently. We adopted different initialization strategies to learn more about filters from different aspects and prune accordingly. Furthermore, unlike existing pruning approaches, the proposed method uses a predefined error tolerance level instead of a pruning rate. Extensive experiments on two widely used image segmentation datasets, Inria and AIRS, and two widely known CNN models for segmentation, TernausNet and standard U-Net, verify that our pruning approach can efficiently compress CNN models with negligible or no loss of accuracy. For instance, our approach reduced 85% of all floating point operations (FLOPs) of TernausNet on Inria with a negligible drop of 0.32% in validation accuracy. This compressed network is six times smaller and almost seven times faster (on a cluster of GPUs) than the original TernausNet, while the drop in accuracy is less than 1%. Moreover, we reduced the FLOPs by 84.34% without significantly deteriorating the output performance on the AIRS dataset for TernausNet. The proposed pruning method effectively reduced the number of FLOPs and parameters of the CNN model while almost retaining the original accuracy. The compact model can be deployed on any embedded device without specialized hardware. We show that the performance of the pruned CNN model is very similar to that of the original unpruned CNN model. We also report numerous ablation studies to validate our approach.
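    The core loop implied by the abstract, ranking filters by a learned importance weight and pruning until a predefined error tolerance is exhausted, can be sketched as follows. The importance values, the toy accuracy evaluator, and the tolerance below are all hypothetical placeholders, not values from the paper:

```python
import numpy as np

# Hypothetical learned importance weights, one scalar per filter in a
# convolutional layer (the paper learns these jointly with the network).
importance = np.array([0.91, 0.05, 0.62, 0.02, 0.88, 0.11, 0.47, 0.03])

def select_filters(importance, eval_accuracy, tolerance):
    """Greedily prune the least-important filters, stopping when the
    accuracy drop would exceed the predefined error tolerance level."""
    order = np.argsort(importance)                       # weakest first
    baseline = eval_accuracy(np.ones(len(importance), dtype=bool))
    keep = np.ones(len(importance), dtype=bool)
    for idx in order:
        trial = keep.copy()
        trial[idx] = False                               # try pruning it
        if baseline - eval_accuracy(trial) <= tolerance:
            keep = trial                                 # safe to prune
        else:
            break                                        # tolerance spent
    return keep

# Toy evaluator: accuracy degrades with the total importance removed.
def toy_accuracy(mask):
    return 0.95 - 0.1 * importance[~mask].sum()

mask = select_filters(importance, toy_accuracy, tolerance=0.02)
```

The tolerance-driven stopping rule is what replaces a fixed pruning rate: the number of pruned filters falls out of the accuracy budget rather than being chosen in advance.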

    An optimal-score-based filter pruning for deep convolutional neural networks

    Convolutional Neural Networks (CNNs) have achieved excellent performance in the processing of high-resolution images. Most of these networks contain many deep layers in pursuit of greater segmentation performance. However, over-sized CNN models result in overwhelming memory usage and large inference costs. Earlier studies have revealed that over-sized deep neural models tend to contain abundant redundant filters that are very similar to one another and contribute little or nothing to the model's output. Therefore, we propose a novel optimal-score-based filter pruning (OSFP) approach that prunes redundant filters according to their relative similarity in feature space. OSFP not only speeds up learning in the network but also eradicates redundant filters, leading to improvement in segmentation performance. We empirically demonstrate on widely used segmentation network models (TernausNet, classical U-Net, and VGG16 U-Net) and benchmark datasets (the Inria Aerial Image Labeling Dataset and Aerial Imagery for Roof Segmentation (AIRS)) that computation costs (in terms of Floating Point Operations (FLOPs) and parameters) are reduced significantly, while accuracy is maintained or even improved.
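    Similarity-based redundancy scoring of the kind described here can be illustrated by flattening each filter's weights and comparing filters pairwise by cosine similarity; the exact scoring used by OSFP is not given in the abstract, so the sketch below is a generic similarity-in-weight-space stand-in with an invented toy layer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer: 6 filters of shape (in_channels=3, 3, 3), with
# filter 5 made a near-duplicate of filter 0 to mimic redundancy.
filters = rng.normal(size=(6, 3, 3, 3))
filters[5] = filters[0] + 0.01 * rng.normal(size=(3, 3, 3))

def redundancy_scores(filters):
    """Score each filter by its maximum cosine similarity to any other
    filter in the layer; high scores mark redundant filters."""
    flat = filters.reshape(len(filters), -1)
    flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)
    sim = flat @ flat.T
    np.fill_diagonal(sim, -np.inf)        # ignore self-similarity
    return sim.max(axis=1)

scores = redundancy_scores(filters)
prune = int(np.argmax(scores))            # most redundant filter
```

Only one filter of a near-duplicate pair needs to be removed, since its partner carries essentially the same information forward.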

    Octadecyltrichlorosilane-Modified Superhydrophobic-Superoleophilic Stainless Steel Mesh for Oil-Water Separation

    Recently, oil-water separation has become an important topic due to its increasing commercial demand. Here, a simple technique to prepare a superhydrophobic-superoleophilic mesh for oil-water separation is reported. A pre-cleaned stainless steel mesh is modified with octadecyltrichlorosilane (ODS) through a solution immersion method. The modified mesh shows a water contact angle (WCA) of 158 ± 2° and an oil contact angle (OCA) of 0°. The prepared superhydrophobic-superoleophilic mesh effectively separates various oils, including petrol, kerosene, diesel, vegetable oil, and coconut oil, from oil-water mixtures with a separation efficiency greater than 95% and stable recyclability over 10 cycles. For a low-viscosity oil (petrol), the modified mesh shows a permeation flux of 2086.95 ± 104.34 L/(m²·h), higher than that obtained for high-viscosity oils. This work was financially supported by the DST-INSPIRE Faculty Scheme, Department of Science and Technology (DST), Govt. of India [DST/INSPIRE/04/2015/000281]. S.S.L. acknowledges financial assistance from Henan University, Kaifeng, P. R. China. The authors greatly appreciate the support of the National Natural Science Foundation of China (21950410531).
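    The two figures of merit quoted above follow from simple definitions: permeation flux J = V / (A · t), volume permeated per unit mesh area per unit time in L/(m²·h), and separation efficiency as the percentage of fed oil recovered. The numbers below are illustrative choices that reproduce the order of magnitude reported for petrol, not measurements from the paper:

```python
# Permeation flux J = V / (A * t), in L/(m^2*h).
volume_l = 0.00348        # litres of petrol passed through the mesh
area_m2 = 1e-4            # effective mesh area in m^2 (1 cm^2)
time_h = 1.0 / 60.0       # permeation time in hours (1 minute)

flux = volume_l / (area_m2 * time_h)   # ~2088 L/(m^2*h)

# Separation efficiency: mass of oil collected over mass of oil fed, in %.
def efficiency(collected_g, fed_g):
    return 100.0 * collected_g / fed_g
```

On these definitions, recovering 9.6 g of a 10 g oil feed corresponds to the 96% regime the abstract reports as "greater than 95%".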