57 research outputs found

    Principal Component Neural Networks for Modeling, Prediction, and Optimization of Hot Mix Asphalt Dynamics Modulus

    The dynamic modulus of hot mix asphalt (HMA) is a fundamental material property that defines the stress-strain relationship based on viscoelastic principles and is a function of HMA properties, loading rate, and temperature. Because of the large number of influential predictors (factors) and their nonlinear interrelationships, developing predictive models for the dynamic modulus can be a challenging task. In this research, results obtained from a series of laboratory tests, including mixture dynamic modulus, aggregate gradation, dynamic shear rheometer (on asphalt binder), and mixture volumetrics, are used to create a database. This database is used to develop a model for estimating the dynamic modulus. First, the highly correlated predictor variables are detected; Principal Component Analysis (PCA) is then used to reduce the problem dimensionality and to produce a set of orthogonal pseudo-inputs, from which two separate predictive models are developed using linear regression analysis and Artificial Neural Networks (ANN). These models are compared to existing predictive models using both statistical analysis and Receiver Operating Characteristic (ROC) analysis. Empirically based predictive models can behave unpredictably outside the convex hull of their input variable space, and using them outside that space is risky and is not common practice among design engineers. To prevent extrapolation, an input hyper-space is added as a constraint to the model. To demonstrate an application of the proposed framework, it is used to solve design-based optimization problems; two such problems, optimal design and inverse design, are presented and solved using a mean-variance mapping optimization algorithm. The resulting design parameters satisfy current asphalt pavement design specifications and can be used as a first step in solving real-life design problems.
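    The modeling workflow described above (detect correlated predictors, project onto orthogonal principal components, then fit a linear regression and an ANN on the pseudo-inputs) can be illustrated with a minimal scikit-learn sketch. The synthetic data, predictor count, retained-variance threshold, and network size below are illustrative assumptions, not values from the study, and the input hyper-space constraint and mean-variance mapping optimization are omitted.

```python
# Minimal sketch of a PCA-then-regression / PCA-then-ANN pipeline.
# Data, feature count, and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))   # placeholder for mixture, binder, and gradation predictors
y = X @ rng.normal(size=12) + rng.normal(scale=0.1, size=200)  # placeholder dynamic modulus

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# PCA yields orthogonal pseudo-inputs and reduces dimensionality;
# keeping 95% of the variance is an assumption, not the study's choice.
linear_model = make_pipeline(StandardScaler(), PCA(n_components=0.95), LinearRegression())
ann_model = make_pipeline(StandardScaler(), PCA(n_components=0.95),
                          MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))

for name, model in [("PCA + linear regression", linear_model), ("PCA + ANN", ann_model)]:
    model.fit(X_train, y_train)
    print(name, "R^2 on held-out data:", round(model.score(X_test, y_test), 3))
```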

    A novel data mining method to identify assay-specific signatures in functional genomic studies

    BACKGROUND: The high-dimensional data produced by functional genomic (FG) studies make it difficult to visualize relationships between gene products and experimental conditions (i.e., assays). Although dimensionality reduction methods such as principal component analysis (PCA) have been very useful, their application to identifying assay-specific signatures has been limited by the lack of appropriate methodologies. This article proposes a new and powerful PCA-based method for the identification of assay-specific gene signatures in FG studies. RESULTS: The proposed method (PM) is unique for several reasons. First, it is the only one, to our knowledge, that uses gene contribution, the product of the loading and the expression level, to obtain assay signatures. The PM develops and exploits two types of assay-specific contribution plots, which are new to the application of PCA in the FG area. The first type plots the assay-specific gene contribution against the given order of the genes and reveals variations in distribution between assay-specific gene signatures, as well as outliers within assay groups that indicate the degree of importance of the most dominant genes. The second type plots the contribution of each gene, in ascending or descending order, against a constantly increasing index. This type of plot reveals assay-specific gene signatures defined by the inflection points in the curve; in addition, sharp regions within the signature define the genes that contribute the most to the signature. We propose and use curvature as an appropriate metric to characterize these sharp regions, thus identifying the subset of genes contributing the most to the signature. Finally, the PM uses the full dataset to determine the final gene signature, thus eliminating the chance of gene exclusion by poor screening in earlier steps. The strengths of the PM are demonstrated using a simulation study and two studies of real DNA microarray data: a study of classification of human tissue samples and a study of E. coli cultures with different medium formulations. CONCLUSION: We have developed a PCA-based method that effectively identifies assay-specific signatures in ranked groups of genes from the full data set in a more efficient and straightforward procedure than current approaches. Although this work demonstrates the ability of the PM to identify assay-specific signatures in DNA microarray experiments, the approach could be useful in areas such as proteomics and metabolomics.
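    The core idea, gene contribution as the product of a PCA loading and an expression level, followed by an ordered-contribution curve whose sharp region is located via curvature, can be sketched as below. The simulated expression matrix, the use of the first principal component, and the curvature-based cutoff are illustrative assumptions and do not reproduce the authors' exact procedure.

```python
# Minimal sketch of an assay-specific gene-contribution signature.
# Simulated data and cutoff choice are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
expr = rng.normal(size=(12, 500))      # assays x genes (simulated)
expr[:4, :20] += 3.0                   # a gene block elevated in the first assay group

pca = PCA(n_components=2).fit(expr)
loadings = pca.components_             # shape (n_components, n_genes)

assay = 0
# Gene contribution for one assay: loading on PC1 times that assay's expression level.
contrib = loadings[0] * expr[assay]

# Ordered-contribution curve: sort descending and look for the sharp region.
order = np.argsort(contrib)[::-1]
sorted_contrib = contrib[order]

# Discrete curvature of the ordered-contribution curve; large values flag the
# sharp region separating signature genes from the rest.
d1 = np.gradient(sorted_contrib)
d2 = np.gradient(d1)
curvature = np.abs(d2) / (1.0 + d1**2) ** 1.5

cutoff = int(np.argmax(curvature[:100]))   # restrict to the head of the curve (assumption)
signature_genes = order[:cutoff + 1]
print("signature size:", signature_genes.size)
```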

    Use of Discrete-Time Forecast Modeling to Enhance Feedback Control and Physically Unrealizable Feedforward Control with Applications

    When the manipulated variable (MV) has a significantly large time delay in changing the controlled variable (CV), using the currently measured CV in the feedback error can result in very deficient feedback control (FBC). However, control strategies that use forecast modeling to estimate future CV values and use them in the feedback error have the potential to control as well as a feedback controller with no MV deadtime acting on the measured CV. This work evaluates and compares FBC algorithms that use discrete-time forecast modeling when the MV has a large deadtime. When a feedforward control (FFC) law results in a physically unrealizable (PU) controller, the common approach is to use approximations to obtain a physically realizable feedforward controller; using a discrete-time forecast modeling method, this work demonstrates an effective approach to PU FFC. The Smith Predictor is a popular control strategy when the CV has measurement deadtime but the MV does not, and this work demonstrates the equivalence of the discrete-time forecast modeling approach to the Smith Predictor FBC approach. Thus, this work demonstrates the effectiveness of the discrete-time forecast modeling approach for FBC with MV or disturbance variable (DV) deadtime and for PU FFC.
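    A small simulation can illustrate the general idea of feeding back a forecast of the CV rather than its delayed measurement when the MV acts through a large deadtime. The first-order-plus-deadtime process, the PI tuning, and the Smith-predictor-style forecast structure below are all assumptions for illustration; they are not the paper's specific algorithm.

```python
# Minimal sketch of forecast-based feedback control with MV deadtime.
# Process model, deadtime, and PI gains are illustrative assumptions.

a, b = 0.9, 0.1          # discrete first-order process: cv[k+1] = a*cv[k] + b*mv[k-d]
d = 10                   # MV deadtime in samples
Kc, Ki = 2.0, 0.2        # PI controller gains (assumed)
setpoint, n_steps = 1.0, 200

cv = 0.0
mv_hist = [0.0] * (d + 1)   # past MV moves needed by both the plant and the forecast
integral = 0.0

for k in range(n_steps):
    # Forecast the CV d steps ahead using the process model and the MV moves
    # already in the pipeline, then feed back the forecast error.
    cv_forecast = cv
    for mv_past in mv_hist[-d:]:
        cv_forecast = a * cv_forecast + b * mv_past

    error = setpoint - cv_forecast
    integral += error
    mv = Kc * error + Ki * integral

    # Plant update: the MV applied d samples ago reaches the CV now.
    cv = a * cv + b * mv_hist[-d]
    mv_hist.append(mv)

print("final CV:", round(cv, 3), "(setpoint", setpoint, ")")
```

    With a perfect process model, this forecast-based loop behaves like the corresponding deadtime-free PI loop, which is the intuition behind its equivalence to a Smith-Predictor structure.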

    Development of a Model-Based Noninvasive Glucose Monitoring Device for Non-Insulin Dependent People

    Continuous glucose monitoring (CGM) effectively improves glucose control, as opposed to infrequent glucose measurements (i.e., using lancet meters), by providing frequent blood glucose concentration (BGC) readings that allow this variation to be better associated with changes in behavior. Currently, the most widely used CGM devices rely on a sensor that is inserted invasively under the skin. Because of this invasive nature and the replacement cost of the sensors, the primary users of current CGM devices are insulin-dependent people (type 1 and some type 2 diabetics); most non-insulin-dependent diabetics use only lancet glucose measurements. The ultimate goal of this research is the development of CGM technology that overcomes these limitations (i.e., invasive sensors and their cost) in an effort to increase CGM use among non-insulin-dependent people. To meet this objective, this preliminary work has developed a methodology to mathematically infer BGC from measurements of non-invasive input variables, which can be thought of as a “virtual” or “soft” sensor approach. In this work, virtual sensors are developed and evaluated on 20 subjects using four BGC measurements per day and eight input variables representing meals, activity, stress, and clock time. Up to four weeks of data are collected for each subject. The first evaluation consists of 3 days of training and up to 25 days of testing data; the second consists of one week of training, one week of validation, and two weeks of testing data; and the third consists of two weeks of training, one week of validation, and one week of testing data. Model acceptability is determined on an individual basis from the fitted correlation to CGM testing data. For the 3-day, 1-week, and 2-week training studies, 35%, 55%, and 65% of the subjects, respectively, met the Acceptability Criteria that we established based on the concept of usefulness.
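    The "virtual sensor" idea, inferring BGC from a handful of non-invasive inputs with a data-driven model trained on time-ordered data and judged by its correlation to held-out measurements, can be sketched as below. The simulated inputs, the ridge-regression model, and the correlation threshold of 0.5 are illustrative assumptions, not the model form or acceptability criterion used in the study.

```python
# Minimal sketch of a soft-sensor model for blood glucose concentration (BGC).
# Inputs, model choice, and acceptance threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_days, per_day = 28, 4                 # four BGC measurements per day, four weeks
n = n_days * per_day
X = rng.normal(size=(n, 8))             # placeholders for meal, activity, stress, clock-time inputs
bgc = 120 + X @ rng.normal(scale=5.0, size=8) + rng.normal(scale=8.0, size=n)

# Time-ordered split: two weeks training, one week validation, one week testing.
train, valid, test = np.split(np.arange(n), [14 * per_day, 21 * per_day])

model = Ridge(alpha=1.0).fit(X[train], bgc[train])

def fitted_correlation(idx):
    pred = model.predict(X[idx])
    return np.corrcoef(pred, bgc[idx])[0, 1]

print("validation correlation:", round(fitted_correlation(valid), 2))
r_test = fitted_correlation(test)
print("test correlation:", round(r_test, 2),
      "-> acceptable" if r_test > 0.5 else "-> not acceptable")
```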