Detection of Carbon Monoxide Using Polymer-Composite Films with a Porphyrin-Functionalized Polypyrrole
Post-fire air constituents of interest to NASA include CO and some acid gases (HCl and HCN). CO is an important analyte to sense in human habitats because it is a marker for both pre-fire detection and post-fire cleanup. The need exists for a sensor that can be incorporated into an existing sensing-array architecture. The CO sensor needs to be a low-power chemiresistor that operates at room temperature, and the sensor fabrication techniques must be compatible with ceramic substrates. Early work on the JPL Electronic Nose indicated that some of the existing polymer-carbon black sensors might be suitable. In addition, a CO sensor based on polypyrrole functionalized with an iron porphyrin was demonstrated to be promising and capable of meeting the requirements. First, pyrrole was polymerized in a ferric chloride/iron porphyrin solution in methanol; the iron porphyrin is 5,10,15,20-tetraphenyl-21H,23H-porphine iron(III) chloride. This creates a polypyrrole that is functionalized with the porphyrin. After synthesis, the polymer is dried in an oven. Sensors were made from the functionalized polypyrrole by binding it with a small amount of polyethylene oxide (600 MW). This composite made films that were too resistive to be measured in the device, so carbon black was subsequently added to the composite to bring the sensing-film resistivity within a measurable range. A suspension was created in methanol using the functionalized polypyrrole (90% by weight), polyethylene oxide (600,000 MW, 5% by weight), and carbon black (5% by weight). The sensing films were then deposited in the same way as the polymer-carbon black sensors. After deposition, the substrates were dried in a vacuum oven for four hours at 60 °C. These sensors showed a good response to CO at concentrations over 100 ppm. While sensing is based on the functionalized polypyrrole, the resulting composite is more robust and flexible: the polymer binder helps keep the sensor material from delaminating from the electrodes, and the carbon black improves the conductivity of the material.
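The abstract reports the sensing result only qualitatively ("good response ... over 100 ppm"). A common figure of merit for chemiresistors of this kind is the fractional resistance change on exposure; the sketch below assumes that metric, and the resistance values in it are illustrative placeholders, not measured data from this work.

```python
# Minimal sketch: quantifying a chemiresistor's response as the fractional
# resistance change (R_gas - R_baseline) / R_baseline, a common figure of
# merit for polymer-carbon black sensing films. The resistance values below
# are illustrative assumptions, not data from the abstract above.

def fractional_response(r_baseline_ohms: float, r_exposed_ohms: float) -> float:
    """Return the fractional resistance change for one gas exposure."""
    return (r_exposed_ohms - r_baseline_ohms) / r_baseline_ohms

# Hypothetical baseline and CO-exposure resistances for one sensing film.
print(f"dR/R0 = {fractional_response(12_000.0, 12_600.0):.3f}")  # 0.050
```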
System for detecting and estimating concentrations of gas or liquid analytes
A sensor system for detecting and estimating concentrations of various gas or liquid analytes. In an embodiment, the resistances of a set of sensors are measured to provide a set of responses over time, where the resistances are indicative of gas or liquid sorption, depending upon the sensors. A concentration vector for the analytes is estimated by satisfying a criterion of goodness using the set of responses. Other embodiments are described and claimed.
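The patent abstract does not specify the "criterion of goodness" or the response model. The sketch below is one plausible instantiation, assuming a linear sensor model R = S·c and a nonnegative least-squares fit; the sensitivity matrix and response values are illustrative assumptions.

```python
# Hedged sketch: estimating an analyte concentration vector from an array of
# sensor responses, assuming a linear response model R = S @ c and using a
# least-squares "criterion of goodness". The numbers are illustrative; the
# patent does not specify this particular model or criterion.
import numpy as np
from scipy.optimize import nnls

# Rows: sensors, columns: analytes (assumed sensitivities, arbitrary units).
S = np.array([
    [0.8, 0.1, 0.05],
    [0.2, 0.9, 0.10],
    [0.1, 0.3, 0.70],
    [0.5, 0.5, 0.20],
])

responses = np.array([0.45, 0.62, 0.33, 0.51])  # measured sensor responses

# Nonnegative least squares keeps the estimated concentrations physical (>= 0).
concentrations, residual = nnls(S, responses)
print("estimated concentrations:", np.round(concentrations, 3))
print("residual norm:", round(residual, 3))
```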
Co-polymer films for sensors
Embodiments include a sensor comprising a co-polymer, the co-polymer comprising a first monomer and a second monomer. For some embodiments, the first monomer is poly-4-vinyl pyridine, and the second monomer is poly-4-vinyl pyridinium propylamine chloride. For some embodiments, the first monomer is polystyrene and the second monomer is poly-2-vinyl pyridinium propylamine chloride. For some embodiments, the first monomer is poly-4-vinyl pyridine, and the second monomer is poly-4-vinyl pyridinium benzylamine chloride. Other embodiments are described and claimed.
Experimental study of mercury removal from exhaust gases
An initial study has been made of the use of synthetic zeolites for mercury capture from exhaust gases. Synthetic zeolites (Na-X and Na-P1) and, for comparison, a natural zeolite (clinoptilolite) and bromine-impregnated activated carbon (AC/Br) were tested for mercury uptake from a gaseous stream. The materials were subjected to mercury adsorption tests, and their thermal stability was evaluated. The untreated synthetic zeolites had negligible mercury uptake, but after impregnation with silver their adsorption of mercury was markedly improved. The silver-impregnated synthetic zeolite Na-X adsorbed significantly more mercury before breakthrough than the bromine-impregnated activated carbon, indicating the potential of zeolite derived from coal fly ash as a new sorbent for capture of mercury from flue gases.
Predicting risk for Alcohol Use Disorder using longitudinal data with multimodal biomarkers and family history: a machine learning study.
Predictive models have succeeded in distinguishing between individuals with Alcohol Use Disorder (AUD) and controls. However, predictive models identifying who is prone to develop AUD, and the biomarkers indicating a predisposition to AUD, remain unclear. Our sample (n = 656) included offspring and non-offspring of European American (EA) and African American (AA) ancestry from the Collaborative Study on the Genetics of Alcoholism (COGA) who were recruited as early as age 12, were unaffected at first assessment, and were reassessed years later as AUD (DSM-5) (n = 328) or unaffected (n = 328). Machine learning analysis was performed on 220 EEG measures, 149 alcohol-related single nucleotide polymorphisms (SNPs) from a recent large genome-wide association study (GWAS) of alcohol use/misuse, and two family history features (mother DSM-5 AUD and father DSM-5 AUD), using a supervised linear support vector machine (SVM) classifier to test which features assessed before developing AUD predict those who go on to develop AUD. Age-, gender-, and ancestry-stratified analyses were performed. Results indicate significantly higher accuracy rates for the AA compared with the EA prediction models, and a trend toward higher model accuracy among females compared with males for both ancestries. The combined EEG and SNP feature model outperformed models based on only EEG features or only SNP features for both EA and AA samples. This multidimensional superiority was confirmed in a follow-up analysis in the AA age groups (12-15, 16-19, 20-30) and the EA age group (16-19). In both ancestry samples, the youngest age group achieved a higher accuracy score than the two older age groups. Maternal AUD increased the model's accuracy in both ancestry samples. Several discriminative EEG measures and SNP features were identified, including lower posterior gamma, higher slow-wave connectivity (delta, theta, alpha), higher frontal gamma ratio, higher beta correlation in the parietal area, and 5 SNPs: rs4780836, rs2605140, rs11690265, rs692854, and rs13380649. Results highlight the significance of sampling uniformity followed by stratified (e.g., ancestry, gender, developmental period) analysis, and a wider selection of features, to generate better prediction scores allowing a more accurate estimation of AUD development.
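As a rough illustration of the modeling approach described above (concatenating EEG, SNP, and family-history features and training a linear SVM), here is a minimal sketch on synthetic placeholder data. The feature counts come from the abstract, but the data, preprocessing, and hyperparameters are assumptions, not the COGA pipeline.

```python
# Hedged sketch of the general approach: combine EEG, SNP, and family-history
# features and train a supervised linear SVM classifier. All data below are
# synthetic placeholders; only the feature counts (220 EEG, 149 SNP, 2 family
# history) and the sample size (656) are taken from the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 656
X_eeg = rng.normal(size=(n, 220))            # EEG measures (placeholder)
X_snp = rng.integers(0, 3, size=(n, 149))    # SNP allele counts 0/1/2 (placeholder)
X_fam = rng.integers(0, 2, size=(n, 2))      # maternal/paternal AUD flags (placeholder)
y = rng.integers(0, 2, size=n)               # AUD vs. unaffected (placeholder labels)

X = np.hstack([X_eeg, X_snp, X_fam])
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

model = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
model.fit(X_tr, y_tr)
print("held-out accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 3))
```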
Weakly supervised approaches for quality estimation
Currently, quality estimation (QE) is mostly addressed using supervised learning approaches. In this paper we show that unsupervised and weakly supervised approaches (using a small training set) perform almost as well as supervised ones, at a significantly lower cost. More generally, we study the various possible definitions, parameters, evaluation methods and approaches for QE, in order to show that there are multiple possible configurations for this task.
Binary credal classification under sparsity constraints.
Binary classification is a well known problem in statistics. Besides classical methods, several techniques such as the naive credal classifier (for categorical data) and imprecise logistic regression (for continuous data) have been proposed to handle sparse data. However, a convincing approach to the classification problem in high dimensional settings (i.e., when the number of attributes is larger than the number of observations) is yet to be explored in the context of imprecise probability. In this article, we propose a sensitivity analysis based on a penalised logistic regression scheme that works as a binary classifier for high dimensional cases. We use an approach based on a set of likelihood functions (an imprecise likelihood) that assigns a set of weights to the attributes, ensuring a robust selection of the important attributes while training the model at the same time. We perform a sensitivity analysis on the weights of the penalty term, resulting in a set of sparse constraints that helps to identify imprecision in the dataset.
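To make the sensitivity-analysis idea concrete, the sketch below varies the penalty weight of an ordinary L1-penalised logistic regression on a high dimensional toy dataset and tracks which attributes remain selected across the whole range. This is a plain lasso-style proxy for the idea, not the authors' imprecise-likelihood formulation; the dataset and penalty grid are assumptions.

```python
# Hedged sketch: fit L1-penalised logistic regression over a range of penalty
# weights and keep the attributes selected at every weight, giving a robust,
# set-valued view of attribute importance. This is a simple stand-in for the
# sensitivity analysis described above, not the paper's exact method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# High-dimensional toy data: more attributes than observations.
X, y = make_classification(n_samples=60, n_features=200, n_informative=5,
                           random_state=0)

selected_per_weight = []
for C in [0.05, 0.1, 0.2, 0.5, 1.0]:            # C is the inverse penalty strength
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    clf.fit(X, y)
    selected = set(np.flatnonzero(clf.coef_[0]))
    selected_per_weight.append(selected)
    print(f"C={C}: {len(selected)} attributes selected")

# Attributes selected at every penalty weight are robust to the choice of weight.
robust = set.intersection(*selected_per_weight)
print("always selected:", sorted(robust))
```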
Gene and pathway identification with Lp penalized Bayesian logistic regression
Background: Identifying genes and pathways associated with diseases such as cancer has been a subject of considerable research in recent years in the area of bioinformatics and computational biology. It has been demonstrated that the magnitude of differential expression does not necessarily indicate biological significance. Even a very small change in the expression of a particular gene may have dramatic physiological consequences if the protein encoded by this gene plays a catalytic role in a specific cell function. Moreover, highly correlated genes may function together on the same pathway biologically. Finally, in sparse logistic regression with an Lp (p < 1) penalty, the degree of sparsity obtained is determined by the value of the regularization parameter. Usually this parameter must be carefully tuned through cross-validation, which is time consuming. Results: In this paper, we propose a simple Bayesian approach to integrate the regularization parameter out analytically using a new prior. Therefore, there is no longer a need for parameter selection, as it is eliminated entirely from the model. The proposed algorithm (BLpLog) is typically two or three orders of magnitude faster than the original algorithm and free from bias in performance estimation. We also define a novel similarity measure and develop an integrated algorithm to hunt for regulatory genes with low expression changes but high correlation with the selected genes. Pathways of those correlated genes were identified with DAVID (http://david.abcc.ncifcrf.gov/). Conclusion: Experimental results with gene expression data demonstrate that the proposed methods can be utilized to identify important genes and pathways that are related to cancer and to build a parsimonious model for future patient predictions.
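The two-stage idea above (select a sparse gene panel, then hunt for correlated companion genes) can be sketched as follows. The sketch uses an L1 penalty as a convex stand-in for the Lp (p < 1) penalty, plain Pearson correlation as a stand-in for the paper's novel similarity measure, and synthetic data; it is not the BLpLog algorithm itself.

```python
# Hedged sketch of the two-stage idea: (1) select genes with a sparse logistic
# regression (L1 penalty here, as a convex stand-in for the Lp, p < 1, penalty),
# then (2) hunt for additional genes highly correlated with the selected ones.
# Pearson correlation replaces the paper's similarity measure; data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, n_genes = 80, 500
X = rng.normal(size=(n_samples, n_genes))                       # expression matrix
X[:, 10] = X[:, 0] + rng.normal(scale=0.3, size=n_samples)      # a co-regulated gene
y = (X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=n_samples) > 0).astype(int)

# Stage 1: sparse logistic regression selects a small gene panel.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.2).fit(X, y)
selected = np.flatnonzero(clf.coef_[0])
print("selected genes:", selected)

# Stage 2: find unselected genes highly correlated with any selected gene.
corr = np.corrcoef(X, rowvar=False)                             # gene-gene correlations
threshold = 0.6
correlated = {
    j for s in selected
    for j in np.flatnonzero(np.abs(corr[s]) > threshold)
    if j not in selected
}
print("correlated companion genes:", sorted(correlated))
```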
