
    Model based analysis of real-time PCR data from DNA binding dye protocols

    BACKGROUND: Reverse transcription followed by real-time PCR is widely used for quantification of specific mRNA, and with the use of double-stranded DNA binding dyes it is becoming a standard for microarray data validation. Despite the kinetic information generated by real-time PCR, most popular analysis methods assume constant amplification efficiency among samples, introducing strong biases when amplification efficiencies are not the same. RESULTS: We present here a new mathematical model based on the classic exponential description of the PCR, but modeling amplification efficiency as a sigmoidal function of the product yield. The model was validated with experimental results and used for the development of a new method for real-time PCR data analysis. This model-based method for real-time PCR data analysis showed the best accuracy and precision compared with previous methods when used for quantification of in silico-generated and experimental real-time PCR results. Moreover, the method is suitable for the analysis of samples with similar or dissimilar amplification efficiency. CONCLUSION: The presented method showed the best accuracy and precision. Moreover, it does not depend on calibration curves, making it ideal for fully automated high-throughput applications.
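    The central modeling idea, a per-cycle amplification efficiency that decays sigmoidally as product accumulates, can be illustrated with a short simulation. This is a minimal sketch under assumed, illustrative parameter values (e_max, f_half, k and the starting amount are placeholders, not the paper's fitted estimators):

```python
import numpy as np

def efficiency(f, e_max=0.95, f_half=0.5, k=10.0):
    """Amplification efficiency as a sigmoidal function of product yield f
    (normalized to the plateau). Parameter values are illustrative only."""
    return e_max / (1.0 + np.exp(k * (f - f_half)))

def simulate_pcr(f0=1e-6, cycles=40):
    """Iterate the exponential growth law F_{n+1} = F_n * (1 + E(F_n))."""
    trace = [f0]
    for _ in range(cycles):
        f = trace[-1]
        trace.append(f * (1.0 + efficiency(f)))
    return np.array(trace)

curve = simulate_pcr()
print(curve[:5])   # early cycles: near-constant efficiency, exponential growth
print(curve[-1])   # late cycles: efficiency collapses and the signal plateaus
```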

    Multi-Information Source Fusion and Optimization to Realize ICME: Application to Dual Phase Materials

    Integrated Computational Materials Engineering (ICME) calls for the integration of computational tools into the materials and parts development cycle, while the Materials Genome Initiative (MGI) calls for the acceleration of the materials development cycle through the combination of experiments, simulation, and data. As they stand, neither ICME nor MGI prescribes how to achieve the necessary tool integration or how to efficiently exploit the computational tools, in combination with experiments, to accelerate the development of new materials and materials systems. This paper addresses the first issue by putting forward a framework for the fusion of information that exploits correlations among sources/models and between the sources and 'ground truth'. The second issue is addressed through a multi-information source optimization framework that identifies, given current knowledge, the next best information source to query and where in the input space to query it via a novel value-gradient policy. The querying decision takes into account the ability to learn correlations between information sources, the resource cost of querying an information source, and what a query is expected to provide in terms of improvement over the current state. The framework is demonstrated on the optimization of a dual-phase steel to maximize its strength-normalized strain hardening rate. The ground truth is represented by a microstructure-based finite element model, while three low-fidelity information sources (reduced-order models based on different homogenization assumptions: isostrain, isostress and isowork) are used to efficiently and optimally query the materials design space. Comment: 19 pages, 11 figures, 5 tables.
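    The querying step can be pictured as a cost-aware selection rule. The sketch below is a generic expected-gain-per-unit-cost heuristic standing in for the paper's value-gradient policy; the source names come from the abstract, while the costs, correlations, candidate design points and predicted improvements are invented placeholders:

```python
# Generic cost-aware source selection; NOT the paper's value-gradient policy.
SOURCES = [
    # (name, assumed query cost, assumed correlation with ground truth)
    ("isostrain", 1.0, 0.80),
    ("isostress", 1.0, 0.70),
    ("isowork", 2.0, 0.90),
    ("FE ground truth", 50.0, 1.00),
]

def next_query(candidates):
    """Pick the (source, design point) with the best discounted gain per cost.

    candidates: iterable of (source_index, design_point, predicted_improvement),
    where predicted_improvement would come from a surrogate of the
    strength-normalized strain hardening rate at that design point.
    """
    def score(cand):
        src_idx, _, improvement = cand
        _, cost, corr = SOURCES[src_idx]
        return corr * improvement / cost  # discount by fidelity, pay for cost
    return max(candidates, key=score)

# Three hypothetical candidate queries (source index, design point, improvement).
candidates = [(0, {"martensite_fraction": 0.3}, 0.10),
              (2, {"martensite_fraction": 0.4}, 0.15),
              (3, {"martensite_fraction": 0.4}, 0.18)]
print(next_query(candidates))
```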

    Automation and analysis of high-dimensionality experiments in biocatalytic reaction screening

    Biological catalysts are increasingly used in industry in high-throughput screening for drug discovery or for the biocatalytic synthesis of active pharmaceutical intermediates (APIs). Their activity is dependent on high-dimensionality physiochemical processes which are affected by numerous potentially interacting factors such as temperature, pH, substrates, solvents, salinity, and so on. To generate accurate models that map the performance of such systems, it is critical to develop effective experimental and analytical frameworks. However, investigating numerous factors of interest can become unfeasible for conventional manual experimentation, which can be time-consuming and prone to human error. In this thesis, an effective framework for the execution and analysis of high-dimensionality experiments implementing a Design of Experiments (DoE) methodology was created. DoE applies a statistical framework to the simultaneous investigation of multiple factors of interest. To convert the DoE design into a physically executable experiment, the Synthace Life Sciences R&D cloud platform was used, where experimental conditions were translated into liquid handling instructions and executed on multiple automated devices. The framework was exemplified by quantifying the activity of an industrially relevant biocatalyst, the CV2025 ω-transaminase (TAm) enzyme from Chromobacterium violaceum, for the conversion of S-methylbenzylamine (MBA) and pyruvate into acetophenone and sodium alanine. The automation and analysis of high-dimensionality experiments for screening of the CV2025 TAm biocatalytic reaction were carried out in three sequential stages. In the first stage, the basic process of Synthace-driven automated DoE execution was demonstrated by executing traditional DoE studies. This comprised a screening study that investigated the impact of nine factors of interest, after which an optimisation study was conducted, taking forward five factors of interest and using two automated devices to optimise assay conditions further. In total, 480 experimental conditions were executed and analysed to generate mathematical models that identified an optimum. Robust assay conditions were identified which increased enzyme activity >3-fold over the starting conditions. In the second stage, non-biological considerations that impact absorbance-based assay performance were systematically investigated. These considerations were critical to ensuring reliable and precise data generation from future high-dimensionality experiments and included confirming spectrophotometer settings, selecting microplate type and reaction volume, testing device precision, and managing evaporation as a function of time. The final stage of the work involved development of a framework for the implementation of a modern type of DoE design called a space-filling design (SFD). SFDs sample factors of interest at numerous settings and can provide a fine-grained characterisation of high-dimensional systems in a single experimental run. However, they are rarely used in biological research due to the large number of experiments required and their demanding, highly variable pipetting requirements. The established framework enabled the execution and analysis of an automated end-to-end SFD where 3,456 experimental conditions were prepared to investigate a 12-dimensional space characterising CV2025 TAm activity. Factors of interest included temperature, pH, buffering agent types, enzyme stability, and co-factor, substrate, salt, and solvent concentrations. MATLAB scripts were developed to calculate the important biocatalysis metrics of product yield and initial rate, which were then used to build mathematical models that were physically validated to confirm successful model prediction. The implementation of the framework provided greater insight into the numerous factors influencing CV2025 TAm activity, in more dimensions than previously reported in the literature, and to our knowledge this is the first large-scale study that employs an SFD for assay characterisation. The developed framework is generic in nature and represents a powerful tool for rapid one-step characterisation of high-dimensionality systems. Industrial implementation of the framework could help reduce the time and costs involved in the development of high-throughput screens and biocatalytic reaction optimisation.
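    The thesis computes the two biocatalysis metrics in MATLAB; the sketch below shows the same calculations in Python under assumed inputs. The sampling interval, the product concentrations (taken to be already converted from absorbance via a calibration curve) and the substrate loading are illustrative placeholders:

```python
import numpy as np

def initial_rate(time_min, conc_mM, n_points=5):
    """Initial rate (mM/min) from a linear fit to the first few time points."""
    slope, _ = np.polyfit(time_min[:n_points], conc_mM[:n_points], 1)
    return slope

def product_yield(final_conc_mM, substrate_mM):
    """Fraction of the limiting substrate converted to product."""
    return final_conc_mM / substrate_mM

# Illustrative acetophenone concentrations sampled every 2 minutes.
t = np.arange(0, 20, 2.0)
c = np.array([0.0, 0.8, 1.5, 2.1, 2.6, 3.0, 3.3, 3.5, 3.6, 3.7])
print(initial_rate(t, c), product_yield(c[-1], substrate_mM=5.0))
```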

    The effects of regional insolation differences upon advanced solar thermal electric power plant performance and energy costs

    The performance and cost of four 10 MWe advanced solar thermal electric power plants sited in various regions of the continental United States were studied. Each region has different insolation characteristics, which result in varying collector field areas, plant performance, capital costs and energy costs. The regional variation in solar plant performance was assessed in relation to the expected rise in the future cost of residential and commercial electricity supplied by conventional utility power systems in the same regions. A discussion of the regional insolation data base is presented along with a description of the solar systems' performance and costs. A range for the forecast cost of conventional electricity by region and nationally over the next several decades is given.

    The effects of regional insolation differences upon advanced solar thermal electric power plant performance and energy costs

    The performance and cost of the 10 MWe advanced solar thermal electric power plants sited in various regions of the continental United States were determined. The regional insolation data base is discussed. A range for the forecast cost of conventional electricity by region and nationally over the next several decades is presented.

    A new real-time PCR method to overcome significant quantitative inaccuracy due to slight amplification inhibition

    BACKGROUND: Real-time PCR analysis is a sensitive DNA quantification technique that has recently gained considerable attention in biotechnology, microbiology and molecular diagnostics. Although the cycle-threshold (Ct) method is the present "gold standard", it is far from being a standard assay. Uniform reaction efficiency among samples is the most important assumption of this method. Nevertheless, some authors have reported that it may not be correct and that a slight PCR efficiency decrease of about 4% could result in an error of up to 400% using the Ct method. This reaction efficiency decrease may be caused by inhibiting agents used during nucleic acid extraction or copurified from the biological sample. We propose a new method (Cy0) that does not require the assumption of equal reaction efficiency between unknowns and standard curve. RESULTS: The Cy0 method is based on the fit of Richards' equation to real-time PCR data by nonlinear regression in order to obtain the best-fit estimators of reaction parameters. Subsequently, these parameters were used to calculate the Cy0 value, which minimizes the dependence on PCR kinetics. The Ct, second derivative (Cp), sigmoidal curve fitting (SCF) and Cy0 methods were compared using two criteria: precision and accuracy. Our results demonstrated that, in optimal amplification conditions, these four methods are equally precise and accurate. However, when PCR efficiency was slightly decreased, by diluting the amplification mix or adding a biological inhibitor such as IgG, the SCF, Ct and Cp methods were markedly impaired while the Cy0 method gave significantly more accurate and precise results. CONCLUSION: Our results demonstrate that Cy0 represents a significant improvement over the standard methods for obtaining reliable and precise nucleic acid quantification even in sub-optimal amplification conditions, overcoming the underestimation caused by the presence of some PCR inhibitors.
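    The Cy0 idea can be sketched as follows: fit a Richards curve to the raw fluorescence readings by nonlinear regression, then take the cycle at which the tangent through the curve's inflection point crosses the cycle axis. The code below is a minimal illustration on synthetic data; the curve parameterization, the starting guesses and the near-zero baseline are assumptions made here, not necessarily the authors' exact procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

def richards(x, fb, fmax, b, c, d):
    """Richards (generalized logistic) curve: baseline fb plus a sigmoid."""
    return fb + fmax / (1.0 + np.exp(-(x - c) / b)) ** d

# Synthetic fluorescence readings for cycles 1..40.
cycles = np.arange(1, 41, dtype=float)
rng = np.random.default_rng(0)
fluo = richards(cycles, 0.02, 1.0, 1.5, 22.0, 1.0) + rng.normal(0, 0.005, cycles.size)

# Nonlinear least-squares fit of the reaction parameters.
p0 = [0.0, fluo.max(), 1.0, cycles[np.argmax(np.diff(fluo))], 1.0]
(fb, fmax, b, c, d), _ = curve_fit(richards, cycles, fluo, p0=p0, maxfev=10000)

# Cy0: cycle where the tangent at the inflection point crosses the cycle axis.
x_infl = c + b * np.log(d)                       # inflection point of the curve
slope = fmax / (b * (1.0 + 1.0 / d) ** (d + 1))  # dF/dx at the inflection point
cy0 = x_infl - richards(x_infl, fb, fmax, b, c, d) / slope
print(round(cy0, 2))
```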

    Simulation based dimensionless waterflood performance curves for predicting recovery

    Thesis (M.S.), University of Alaska Fairbanks, 2000. Predicting waterflood recovery with simulation-based dimensionless performance curves has advantages over the more traditional approaches in certain applications. This work discusses the advantages of the type-curve approach in moderately mature fields where high-resolution history matches are required. The method also has advantages when uncertainty analysis is important. The dimensionless type-curve methodology can be applied to many different fields. A case study of a large, complex field is presented to show how the curves are created and how they can be applied. In this field, a study of the geology and stratigraphy indicated that reservoir continuity, permeability variance, and effects of faulting were the most important drivers of recovery efficiency. Simulations were performed on 45 datasets to describe waterflood performance over the range of variation. A spreadsheet program was created to predict recovery for any reservoir description, based on interpolations of the simulation results. The dimensionless curves can be used to predict full-field performance, as the basis of an integrated evaluation tool and/or for comparing actual performance to predicted performance. Using correlations to predict recoveries allows for ease of sensitivity analyses and ease of application by casual users in an organization.
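    The prediction step, interpolating between simulated dimensionless performance curves, can be sketched as follows. The curve shapes, the use of pore volumes injected as dimensionless time, and a single blending variable for permeability variance are illustrative assumptions, not the study's actual 45-case parameterization:

```python
import numpy as np
from scipy.interpolate import interp1d

# Two illustrative dimensionless type curves: recovery factor versus pore
# volumes injected, "simulated" at low and high permeability variance.
pv = np.linspace(0.0, 2.0, 21)
rf_low_var = 0.60 * (1 - np.exp(-2.0 * pv))    # placeholder curve shapes
rf_high_var = 0.45 * (1 - np.exp(-1.2 * pv))

def predict_recovery(pv_injected, variance_score):
    """Blend the bounding type curves for a given reservoir description.

    variance_score in [0, 1]: 0 = low permeability variance, 1 = high.
    """
    low = interp1d(pv, rf_low_var)(pv_injected)
    high = interp1d(pv, rf_high_var)(pv_injected)
    return (1 - variance_score) * low + variance_score * high

print(predict_recovery(0.5, 0.3))
```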

    Comparison of multi-layer bus interconnection and a network on chip solution

    Abstract. This thesis explains the basic subjects that must be taken into consideration when designing network-on-chip solutions in the semiconductor world. For example, general topologies such as mesh, torus, octagon and fat tree are explained. In addition, network interfaces, switches, arbitration, flow control, routing, error avoidance and error handling are discussed. Furthermore, the design flow, computer-aided design tools and a few comprehensive studies are covered. Several networks are designed for minimum latency, although there are also versions that trade performance for decreased bus widths. These networks are compared with a corresponding multi-layer bus interconnection, and both synthesis and register-transfer-level simulations are run. Results for throughput, latency, logic area and power consumption are gathered and compared. It was discovered that overall throughput was well balanced with the network-on-chip solutions, although their maximum throughput was limited by protocol conversions. The multi-layer bus interconnection was capable of providing a few times smaller latencies and higher throughputs when only a single interface was injecting traffic at a time. However, with parallel traffic and high performance requirements a network-on-chip solution provided better results, even though the difference decreased when performance requirements were lower. Furthermore, it was discovered that the network-on-chip solutions required approximately 3–4 times higher total cell area than the multi-layer bus interconnection and that the resources were mainly located at the network interfaces and switches. In addition, power consumption was approximately 2–3 times higher and was mostly caused by dynamic consumption.
    Comparison of a multi-layer bus architecture and a network-on-chip solution. Abstract. This thesis covers the most important topics that must be taken into account when designing network-on-chip bus solutions in the semiconductor world. For example, common structures such as mesh, torus, octagon and tree topologies are briefly covered. In addition, network interfaces, switches, arbitration, flow control, routing, error avoidance and error handling are introduced. Finally, the most essential stages of the design flow and commercial tools suitable for them are described, and the results of a few earlier publications are briefly discussed. In the thesis, a design tool is used to implement a few network-on-chip solutions with the goal of achieving the lowest possible latency. Versions with somewhat higher latency but smaller bus widths are also designed. The designed network-on-chip solutions are additionally compared with a more traditional multi-layer bus architecture. For example, synthesis and simulation results such as logic area, power consumption, latency and performance are compared with each other. The thesis found that the network-on-chip solutions implemented with the design tool enabled a more even performance, although their maximum achieved performance and minimum latency were determined by the delay caused by protocol conversion. It was observed that the more traditional methods achieved roughly twice the performance and lower latency when there was no other traffic in the network. As parallel traffic increased, the network-on-chip solution offered on average better performance when the performance requirements placed on it were high, but as the performance requirements decreased the differences narrowed. Furthermore, it was noted that the area used by the network-on-chip solutions was approximately 3–4 times larger than with the multi-layer bus architecture and that the resources were located mostly in the network interfaces and switches. In addition, power consumption was found to be approximately 2–3 times higher, although it was found to consist mainly of dynamic consumption.

    Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1

    Systems for Strategic Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercube, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures, using candidate SDI weapons-to-target assignment algorithms as workloads, were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed, and capabilities that will be required for both individual tools and an integrated toolset were identified.

    Vehicular Networks and Outdoor Pedestrian Localization

    This thesis focuses on vehicular networks and outdoor pedestrian localization. In particular, it targets secure positioning in vehicular networks and pedestrian localization for safety services in outdoor environments. The former research topic must cope with three major challenges, concerning users' privacy, the computational costs of security, and the system's trust in user correctness. This thesis addresses those issues by proposing a new lightweight privacy-preserving framework for continuous tracking of vehicles. The proposed solution is evaluated in both dense and sparse vehicular settings through simulation and experiments in real-world testbeds. In addition, this thesis explores the benefit of using low frequency bands for the transmission of control messages in vehicular networks. The latter topic is motivated by the significant number of traffic accidents involving pedestrians distracted by their smartphones. This thesis proposes two different localization solutions specifically for pedestrian safety: a GPS-based approach and a shoe-mounted inertial sensor method. The GPS-based solution is more suitable for rural and suburban areas, while it is not applicable in dense urban environments due to large positioning errors. The inertial sensor approach instead overcomes the limitations of the previous technique in urban environments. Indeed, by exploiting accelerometer data, this architecture is able to precisely detect transitions from safe to potentially unsafe walking locations without the need for any absolute positioning system.
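    As a rough illustration of how shoe-mounted accelerometer data can separate walking from standing still, the sketch below thresholds the variance of the acceleration magnitude over short windows. It is a generic stand-in, not the detection architecture developed in the thesis; the sample rate, window length and threshold are arbitrary:

```python
import numpy as np

def is_walking(acc_xyz, fs=50.0, window_s=2.0, var_threshold=0.5):
    """Flag windows in which the variance of |acceleration| suggests stepping.

    acc_xyz: (N, 3) accelerometer samples in m/s^2, fs: sample rate in Hz.
    Returns one boolean per non-overlapping window.
    """
    mag = np.linalg.norm(acc_xyz, axis=1)
    win = int(window_s * fs)
    n_windows = len(mag) // win
    windows = mag[: n_windows * win].reshape(n_windows, win)
    return windows.var(axis=1) > var_threshold

# Synthetic example: 4 s of standing still followed by 4 s of walking.
fs = 50.0
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
still = np.tile([0.0, 0.0, 9.81], (len(t), 1)) + rng.normal(0, 0.05, (len(t), 3))
steps = still + np.c_[np.zeros_like(t), np.zeros_like(t), 3.0 * np.sin(2 * np.pi * 2.0 * t)]
print(is_walking(np.vstack([still, steps]), fs=fs))  # expect [False, False, True, True]
```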