118 research outputs found

    Detection of Carbon Monoxide Using Polymer-Composite Films with a Porphyrin-Functionalized Polypyrrole

    Post-fire air constituents of interest to NASA include CO and some acid gases (HCl and HCN). CO is an important analyte to sense in human habitats, since it is a marker for both pre-fire detection and post-fire cleanup. The need exists for a sensor that can be incorporated into an existing sensing-array architecture: the CO sensor must be a low-power chemiresistor that operates at room temperature, and its fabrication techniques must be compatible with ceramic substrates. Early work on the JPL Electronic Nose indicated that some of the existing polymer-carbon black sensors might be suitable. In addition, a CO sensor based on polypyrrole functionalized with an iron porphyrin was demonstrated to be promising and able to meet the requirements. First, pyrrole was polymerized in a ferric chloride/iron porphyrin solution in methanol; the iron porphyrin is 5,10,15,20-tetraphenyl-21H,23H-porphine iron(III) chloride. This creates a polypyrrole that is functionalized with the porphyrin. After synthesis, the polymer is dried in an oven. Sensors were made from the functionalized polypyrrole by binding it with a small amount of polyethylene oxide, but this composite made films that were too resistive to be measured in the device. Carbon black was therefore added to bring the sensing-film resistivity within a measurable range: a suspension was created in methanol using the functionalized polypyrrole (90% by weight), polyethylene oxide (600,000 MW, 5% by weight), and carbon black (5% by weight). The sensing films were then deposited like the polymer-carbon black sensors, and after deposition the substrates were dried in a vacuum oven for four hours at 60 °C. These sensors showed good response to CO at concentrations over 100 ppm. While the sensing mechanism is based on the functionalized polypyrrole, the actual composite is more robust and flexible: the polymer binder helps keep the sensor material from delaminating from the electrodes, and the carbon black improves the conductivity of the material.
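As a worked example of the 90/5/5 formulation above (illustrative arithmetic only; the helper name and batch size are hypothetical, not from the original work):

```python
# Illustrative arithmetic only: component masses for the 90/5/5 (wt%) sensing
# composite described above. The function name and batch size are hypothetical.

def composite_masses(total_mg, fractions=(0.90, 0.05, 0.05)):
    """Return (functionalized polypyrrole, PEO, carbon black) masses in mg."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "weight fractions must sum to 1"
    return tuple(total_mg * f for f in fractions)

# A 200 mg batch needs 180 mg polypyrrole, 10 mg PEO, and 10 mg carbon black.
print(composite_masses(200.0))
```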

    System for detecting and estimating concentrations of gas or liquid analytes

    A sensor system for detecting and estimating concentrations of various gas or liquid analytes. In an embodiment, the resistances of a set of sensors are measured to provide a set of responses over time, where the resistances are indicative of gas or liquid sorption, depending upon the sensors. A concentration vector for the analytes is estimated from the set of responses by satisfying a criterion of goodness. Other embodiments are described and claimed.
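The abstract does not specify the "criterion of goodness". A minimal sketch of the estimation step, assuming a linear sensing model and an ordinary least-squares criterion (the sensitivity matrix and analyte values below are invented for illustration):

```python
import numpy as np

# Rows: four chemiresistive sensors; columns: two analytes.
# The sensitivity matrix below is invented for illustration.
S = np.array([[0.8, 0.1],
              [0.2, 0.9],
              [0.5, 0.5],
              [0.3, 0.7]])

c_true = np.array([100.0, 20.0])   # ppm, synthetic "ground truth"
r = S @ c_true                     # noiseless sensor responses

# Estimate the concentration vector by least squares, then clip negative
# estimates to zero, since concentrations cannot be negative.
c_hat, *_ = np.linalg.lstsq(S, r, rcond=None)
c_hat = np.clip(c_hat, 0.0, None)
print(c_hat)
```

With noisy responses the same least-squares step gives the best linear fit rather than an exact recovery; a nonnegative least-squares solver would be the natural refinement.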

    Co-polymer films for sensors

    Embodiments include a sensor comprising a co-polymer, the co-polymer comprising a first monomer and a second monomer. For some embodiments, the first monomer is poly-4-vinyl pyridine and the second monomer is poly-4-vinyl pyridinium propylamine chloride. For some embodiments, the first monomer is polystyrene and the second monomer is poly-2-vinyl pyridinium propylamine chloride. For some embodiments, the first monomer is poly-4-vinyl pyridine and the second monomer is poly-4-vinyl pyridinium benzylamine chloride. Other embodiments are described and claimed.

    Weakly supervised approaches for quality estimation


    Binary credal classification under sparsity constraints.

    Binary classification is a well-known problem in statistics. Besides classical methods, several techniques, such as the naive credal classifier (for categorical data) and imprecise logistic regression (for continuous data), have been proposed to handle sparse data. However, a convincing approach to the classification problem in high-dimensional settings (i.e., when the number of attributes is larger than the number of observations) is yet to be explored in the context of imprecise probability. In this article, we propose a sensitivity analysis based on a penalised logistic regression scheme that works as a binary classifier for high-dimensional cases. We use an approach based on a set of likelihood functions (i.e., an imprecise likelihood) that assigns a set of weights to the attributes, ensuring a robust selection of the important attributes while the model is trained. A sensitivity analysis on the weights of the penalty term results in a set of sparse constraints that helps to identify imprecision in the dataset.
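A rough sketch of the idea, not the authors' implementation: refit an L1-penalised logistic regression over a set of penalty weights and collect the attributes selected at every weight. The solver (plain proximal gradient) and the synthetic high-dimensional data are assumptions.

```python
import numpy as np

def fit_l1_logistic(X, y, lam, steps=3000, lr=0.1):
    """L1-penalised logistic regression via proximal gradient (ISTA)."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(steps):
        grad = X.T @ (1.0 / (1.0 + np.exp(-(X @ w))) - y) / n
        step = w - lr * grad
        w = np.sign(step) * np.maximum(np.abs(step) - lr * lam, 0.0)  # soft-threshold
    return w

# High-dimensional synthetic data: 100 attributes, 60 observations,
# and only the first two attributes determine the class label.
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 100))
y = (X[:, 0] - X[:, 1] > 0).astype(float)

# Sensitivity analysis: which attributes survive across a set of penalty weights?
supports = [set(np.flatnonzero(np.abs(fit_l1_logistic(X, y, lam)) > 1e-6))
            for lam in (0.02, 0.05, 0.1)]
robust = set.intersection(*supports)   # attributes selected at every penalty weight
print(sorted(int(j) for j in robust))
```

The intersection over penalty weights plays the role of a robust (imprecise) selection: attributes that persist across the whole set of penalties are kept.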

    Experimental study of mercury removal from exhaust gases

    An initial study has been made of the use of synthetic zeolites for mercury capture from exhaust gases. Synthetic zeolites (Na-X and Na-P1) and, for comparison, a natural zeolite (clinoptilolite) and bromine-impregnated activated carbon (AC/Br) were tested for mercury uptake from a gaseous stream. The materials were subjected to mercury adsorption tests and their thermal stability was evaluated. The untreated synthetic zeolites had negligible mercury uptake, but after impregnation with silver the adsorption of mercury was markedly improved. The silver-impregnated synthetic zeolite Na-X adsorbed significantly more mercury before breakthrough than the bromine-impregnated activated carbon, indicating the potential of zeolite derived from coal fly ash as a new sorbent for the capture of mercury from flue gases.

    Gene and pathway identification with Lp penalized Bayesian logistic regression

    Background: Identifying genes and pathways associated with diseases such as cancer has been a subject of considerable research in recent years in bioinformatics and computational biology. It has been demonstrated that the magnitude of differential expression does not necessarily indicate biological significance: even a very small change in the expression of a particular gene may have dramatic physiological consequences if the protein encoded by that gene plays a catalytic role in a specific cell function. Moreover, highly correlated genes may function together in the same biological pathway. Finally, in sparse logistic regression with an Lp (p < 1) penalty, the degree of sparsity obtained is determined by the value of the regularization parameter, which usually must be carefully tuned through cross-validation, a time-consuming process. Results: In this paper, we propose a simple Bayesian approach that integrates the regularization parameter out analytically using a new prior, so parameter selection is eliminated entirely from the model. The proposed algorithm (BLpLog) is typically two or three orders of magnitude faster than the original algorithm and free from bias in performance estimation. We also define a novel similarity measure and develop an integrated algorithm to hunt for regulatory genes with low expression changes but high correlation with the selected genes. Pathways of those correlated genes were identified with DAVID (http://david.abcc.ncifcrf.gov/). Conclusion: Experimental results with gene expression data demonstrate that the proposed methods can be used to identify important genes and pathways related to cancer and to build a parsimonious model for future patient predictions.

    Graphical modeling of binary data using the LASSO: a simulation study

    Background: Graphical models have been identified as a promising approach to modeling high-dimensional clinical data. They provide a probabilistic tool to display, analyze and visualize net-like dependence structures by drawing a graph describing the conditional dependencies between the variables. Until now, the main focus of research has been on building Gaussian graphical models for continuous multivariate data following a multivariate normal distribution; satisfactory solutions for binary data were missing. We adapted the method of Meinshausen and Bühlmann to binary data and used the LASSO for logistic regression. The objective of this paper was to examine the performance of the Bolasso for the development of graphical models for high-dimensional binary data. We hypothesized that the performance of the Bolasso is superior to that of competing LASSO methods for identifying graphical models. Methods: We analyzed the Bolasso for deriving graphical models in comparison with other LASSO-based methods. Model performance was assessed in a simulation study with random data generated via symmetric local logistic regression models and Gibbs sampling. Main outcome variables were the Structural Hamming Distance and the Youden Index. We applied the results of the simulation study to real-life functioning data from patients with head and neck cancer. Results: Bootstrap aggregating, as incorporated in the Bolasso algorithm, greatly improved performance at higher sample sizes, while the number of bootstraps had minimal impact. The Bolasso performed reasonably well with a cutpoint of 0.90 and a small penalty term. Optimizing prediction for the Bolasso leads to very conservative models compared with AIC, BIC or cross-validated optimal penalty terms. Conclusions: Bootstrap aggregating may improve variable selection if the underlying selection process is not too unstable due to small sample size and if one is mainly interested in reducing the false discovery rate. We propose using the Bolasso for graphical modeling in large sample sizes.
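The bootstrap-aggregated selection step described above can be sketched for one node's neighbourhood as follows. This is an illustrative reconstruction, not the paper's code: the L1-penalised logistic solver, the data, and all parameters except the 0.90 cutpoint are invented.

```python
import numpy as np

def fit_l1_logistic(X, y, lam=0.08, steps=2000, lr=0.1):
    """L1-penalised logistic regression via proximal gradient (ISTA)."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(steps):
        grad = X.T @ (1.0 / (1.0 + np.exp(-(X @ w))) - y) / n
        step = w - lr * grad
        w = np.sign(step) * np.maximum(np.abs(step) - lr * lam, 0.0)  # soft-threshold
    return w

def bolasso_support(X, y, n_boot=30, cutpoint=0.90, seed=0):
    """Keep variables selected in at least `cutpoint` of the bootstrap refits."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # bootstrap resample of the rows
        counts += np.abs(fit_l1_logistic(X[idx], y[idx])) > 1e-6
    return np.flatnonzero(counts / n_boot >= cutpoint)

# Synthetic binary data: only the first two variables drive the outcome.
rng = np.random.default_rng(1)
X = rng.standard_normal((150, 20))
y = (X[:, 0] + X[:, 1] + 0.3 * rng.standard_normal(150) > 0).astype(float)
sel = bolasso_support(X, y)
print(sel)
```

For a full Meinshausen-Bühlmann style graph, this neighbourhood selection would be repeated with each variable in turn as the response, and an edge drawn when a variable appears in the selected support.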