
    Optimisation of Mobile Communication Networks - OMCO NET

    The mini-conference “Optimisation of Mobile Communication Networks” focuses on advanced methods for search and optimisation applied to wireless communication networks. It is sponsored by the Research & Enterprise Fund of Southampton Solent University. The conference strives to widen knowledge of advanced search methods capable of optimising wireless communication networks. The aim is to provide a forum for the exchange of recent knowledge, new ideas and trends in this progressive and challenging area. The conference will popularise new successful approaches to resolving hard tasks such as minimisation of transmit power and cooperative and optimal routing

    Evaluation of modelling approaches for predicting the spatial distribution of soil organic carbon stocks at the national scale

    Soil organic carbon (SOC) plays a major role in the global carbon budget. It can act as a source or a sink of atmospheric carbon, thereby possibly influencing the course of climate change. Improving the tools that model the spatial distribution of SOC stocks at national scales is a priority, both for monitoring changes in SOC and as an input for global carbon cycle studies. In this paper, we compare and evaluate two recent and promising modelling approaches. First, we considered several increasingly complex boosted regression trees (BRT), a convenient and efficient multiple regression model from the statistical learning field. Second, we considered a robust geostatistical approach coupled to the BRT models. The different approaches were tested on the dataset from the French Soil Monitoring Network with a consistent cross-validation procedure. We showed that when a limited number of predictors were included in the BRT model, the standalone BRT predictions were significantly improved by robust geostatistical modelling of the residuals. However, when data for several SOC drivers were included, the standalone BRT predictions were not significantly improved by geostatistical modelling. In this latter situation, the BRT predictions may therefore be considered adequate without geostatistical modelling, provided that i) care is exercised in model fitting and validation, and ii) the dataset does not allow for modelling of local spatial autocorrelation, as is the case for many national systematic sampling schemes.
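
    As a minimal sketch of the two-stage idea described in this abstract, the snippet below fits boosted regression trees on environmental covariates and then kriges the residuals against spatial coordinates, using generic scikit-learn tools. The data, covariates and kernel are invented for illustration and do not reproduce the paper's robust geostatistical procedure.

```python
# Regression-kriging sketch: BRT trend + Gaussian-process model of the residuals.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

rng = np.random.default_rng(0)
n = 200
coords = rng.uniform(0, 100, size=(n, 2))        # sampling site coordinates (synthetic)
covariates = rng.normal(size=(n, 5))             # hypothetical SOC drivers (climate, land use, ...)
soc = (covariates @ np.array([1.5, -0.8, 0.3, 0.0, 0.6])
       + np.sin(coords[:, 0] / 10) + rng.normal(0, 0.3, n))

# Stage 1: boosted regression trees on the environmental covariates.
brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
brt.fit(covariates, soc)
residuals = soc - brt.predict(covariates)

# Stage 2: kriging of the BRT residuals against the spatial coordinates.
gp = GaussianProcessRegressor(Matern(length_scale=20.0, nu=1.5) + WhiteKernel(),
                              normalize_y=True)
gp.fit(coords, residuals)

# Prediction at a new site = BRT trend + kriged residual.
new_cov, new_xy = rng.normal(size=(1, 5)), np.array([[50.0, 50.0]])
soc_hat = brt.predict(new_cov) + gp.predict(new_xy)
```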

    Synergistic Modeling of in-vitro and in-vivo data via Stochastic Kriging with Qualitative Factors (SKQ)

    The objective of this study is to model in-vivo propranolol plasma concentration after administering oral propranolol extended-release (ER) tablets. In-vivo data are typically expensive and scarce. To save the time and cost needed to achieve high-quality in-vivo profiles, this work utilizes both in-vitro and in-vivo data. The ensemble of in-vitro and in-vivo data is modeled by stochastic kriging with qualitative factors (SKQ), which treats in-vivo and in-vitro as the two distinct levels of a qualitative factor. By synergistically modeling both types of data, SKQ provides fitted in-vivo profiles whose quality is much higher than those obtained from modeling in-vivo data alone.
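
    A hedged sketch of the pooling idea only: the data source (in-vitro vs. in-vivo) is appended as an extra qualitative input to a single Gaussian process, so the scarce in-vivo observations can borrow strength from the in-vitro data. This is not the authors' SKQ estimator; the concentration profiles and kernel below are made up for illustration.

```python
# Pool two data sources in one GP by adding a qualitative "source" input.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
t_vitro = np.linspace(0, 24, 15)                               # hours (dense, cheap data)
t_vivo = np.linspace(0, 24, 5)                                 # hours (scarce, expensive data)
y_vitro = np.exp(-t_vitro / 8.0) + rng.normal(0, 0.02, 15)
y_vivo = 0.9 * np.exp(-t_vivo / 7.0) + rng.normal(0, 0.05, 5)

# Inputs: (time, source), where source = 0 for in-vitro and 1 for in-vivo.
X = np.column_stack([np.concatenate([t_vitro, t_vivo]),
                     np.concatenate([np.zeros(15), np.ones(5)])])
y = np.concatenate([y_vitro, y_vivo])

gp = GaussianProcessRegressor(RBF(length_scale=[5.0, 1.0]) + WhiteKernel(),
                              normalize_y=True)
gp.fit(X, y)

# Fitted in-vivo profile (source = 1) on a dense time grid, with uncertainty.
grid = np.column_stack([np.linspace(0, 24, 100), np.ones(100)])
mean, sd = gp.predict(grid, return_std=True)
```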

    Comparison of Gaussian process modeling software

    Gaussian process fitting, or kriging, is often used to create a model from a set of data. Many software packages are available for this purpose, but we show that very different results can be obtained from different packages even when using the same data and model. We describe the parameterization, features, and optimization used by eight different fitting packages that run on four different platforms. We then compare these eight packages using various data functions and data sets, revealing stark differences between the packages. In addition to the prediction accuracy, the predictive variance, which is important for evaluating the precision of predictions and is often used in stopping criteria, is also evaluated.
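
    For concreteness, the two quantities compared across packages, the predictive mean and the predictive variance at held-out points, can be extracted as below. scikit-learn stands in here as just one of many possible fitters, and the test function is invented for illustration.

```python
# Fit a GP, then score predictive mean (accuracy) and predictive variance (calibration).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(40, 2))
f = lambda x: np.sin(6 * x[:, 0]) * np.cos(4 * x[:, 1])          # toy test function
y = f(X) + rng.normal(0, 0.05, 40)

gp = GaussianProcessRegressor(RBF([0.2, 0.2]) + WhiteKernel(), normalize_y=True)
gp.fit(X, y)

X_test = rng.uniform(0, 1, size=(200, 2))
mean, sd = gp.predict(X_test, return_std=True)

rmse = np.sqrt(np.mean((mean - f(X_test)) ** 2))                 # prediction accuracy
coverage = np.mean(np.abs(f(X_test) - mean) <= 1.96 * sd)        # calibration of the variance
print(f"RMSE = {rmse:.3f}, 95% interval coverage = {coverage:.2f}")
```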

    Design of Computer Experiments


    A Kriging Method for Modeling Cycle Time-Throughput Profiles in Manufacturing

    In semiconductor manufacturing, the steady-state behavior of a wafer fab system can be characterized by its cycle time-throughput profiles. These profiles quantify the relationship between the cycle time of a product and the system throughput and product mix. The objective of this work is to efficiently generate such cycle time-throughput profiles, which can further assist decision making in production planning. In this research, a metamodeling approach based on the stochastic kriging model with qualitative factors (SKQ) is adopted to quantify the target relationship of interest. Furthermore, a sequential experimental design procedure is developed to improve the efficiency of the simulation experiments. For the initial design, a Sequential Conditional Maximin algorithm is utilized. For the follow-up designs, batches of design points are determined using a Particle Swarm Optimization algorithm. The procedure is applied to a Jackson network as well as a scaled-down wafer fab system. In both examples, the prediction performance of the SKQ model is promising. It is also shown that the SKQ model provides narrower confidence intervals than the stochastic kriging (SK) model by pooling information across the levels of the qualitative variables.
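
    The snippet below is a rough sketch of the space-filling intent behind the initial design: among many random candidate designs, keep the one whose smallest pairwise distance is largest. This is a plain maximin search, not the paper's Sequential Conditional Maximin algorithm or its PSO-based follow-up batches, and the design space is hypothetical.

```python
# Plain maximin search over random candidate designs in the unit hypercube.
import numpy as np
from scipy.spatial.distance import pdist

def maximin_design(n_points, n_dims, n_candidates=2000, seed=0):
    """Return the best of n_candidates random designs under the maximin criterion."""
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    for _ in range(n_candidates):
        design = rng.uniform(0, 1, size=(n_points, n_dims))
        score = pdist(design).min()              # smallest pairwise distance in this design
        if score > best_score:
            best, best_score = design, score
    return best

# e.g. 10 initial settings over (normalized) throughput and product mix
initial_design = maximin_design(n_points=10, n_dims=2)
```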

    Kriging-Based Design of Experiments for Multi-Source Exposure-Response Studies in Nanotoxicology

    One of the major challenges with toxicology studies of nanomaterials (NMs), compared to traditional materials or chemicals, lies in the large variety (or sources) of NMs caused by their various physico-chemical properties. How can multi-source biological experiments be efficiently designed to characterize NM toxicity in terms of exposure-response profiles? This work addresses this question with a two-stage experimental design procedure developed on the basis of the statistical model stochastic kriging with qualitative factors (SKQ). For a given experimental budget, the SKQ-based design method aims at achieving the highest-quality SKQ model, which synergistically models the exposure-response data from multiple sources (e.g., NM types). The method determines the experimental design (that is, the sampling locations as well as the allocation of runs) in such a way that the resulting sampling data allow SKQ to realize its maximum potential to pool information across multiple sources for efficient modeling. Built in a two-stage framework that enables a learning process for the target exposure-response relationships, the SKQ-based design procedure also inherits the general advantages of stochastic kriging, in that the design is particularly tailored to model possibly nonlinear and complex relationships and heterogeneous data variances. Through simulation studies, the efficiency of the SKQ-based procedure for multi-source experiments is demonstrated in comparison with two alternative design methods.
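
    A simple illustration of the "allocation" half of the design decision: given a fixed budget of runs and rough noise estimates at candidate exposure levels, assign more replicates to the noisier levels. This is a generic heuristic for heteroscedastic responses, not the SKQ-specific criterion optimized in the paper, and the noise values are hypothetical.

```python
# Proportional-to-noise replicate allocation under a fixed run budget.
import numpy as np

def allocate_replicates(noise_sd, total_budget):
    """Allocate runs roughly in proportion to noise, with at least one replicate per level."""
    weights = noise_sd / noise_sd.sum()
    alloc = np.maximum(1, np.floor(weights * total_budget)).astype(int)
    # Hand any leftover runs to the noisiest exposure levels first.
    for i in np.argsort(-noise_sd):
        if alloc.sum() >= total_budget:
            break
        alloc[i] += 1
    return alloc

# Hypothetical noise estimates at five NM dose levels, 30 runs in total.
print(allocate_replicates(np.array([0.2, 0.5, 0.8, 0.4, 0.1]), 30))
```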

    A New Gaussian Process Method For Modeling and Design of Multi-Source Data in Exposure-Response Toxicology Studies

    One of the most fundamental steps in risk assessment is to quantify the exposure-response relationship for the material or chemical of interest. This work develops a new statistical method, referred to as SKQ (stochastic kriging with qualitative factors), to synergistically model exposure-response data, which often arise from multiple sources (e.g., laboratories, animal providers, and shapes of nanomaterials) in toxicology studies. Compared to existing methods, SKQ has several distinct features. First, SKQ integrates data across multiple sources and allows more accurate information to be derived from limited data. Second, SKQ is highly flexible and able to model practically any continuous response surface (e.g., a dose-time-response surface). Third, SKQ is able to accommodate variance heterogeneity across experimental conditions and to provide valid statistical inference (i.e., quantify the uncertainty of the model estimates). Through empirical studies, we have demonstrated SKQ's ability to efficiently model exposure-response surfaces by pooling information across multiple data sources. Based on the SKQ modeling and inference, a design of experiments (DOE) procedure is developed to guide biological experiments for the efficient quantification of exposure-response relationships. Built on SKQ, the DOE procedure inherits the advantages of SKQ and is particularly tailored for experimental data arising from multiple sources, with non-normality and variance heterogeneity, and for mapping nonlinear exposure-response relationships. The design procedure is built in a sequential two-stage paradigm that allows for a learning process: in the first stage, preliminary experiments are performed to gain information about the underlying exposure-response curve and variance structure; in the second stage, the information obtained from the first stage is used to guide the second-stage experiments. Matlab's global optimization function MultiStart is employed to search for optimal designs that lead to exposure-response models of the highest quality. SKQ and SKQ-based DOE fit into the mosaic of efficient decision-making methods for assessing the risk of a tremendously large variety of nanomaterials, and help to alleviate sustainability concerns regarding the enormous number of new nanomaterials.
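
    Below is a skeletal, hypothetical version of the two-stage paradigm: fit a model on the stage-1 runs, then place the stage-2 runs where the fitted model is most uncertain. The actual procedure optimizes an SKQ-specific design criterion with Matlab's MultiStart; the variance-driven rule and sigmoid response here are only stand-ins to show the flow.

```python
# Two-stage sequential design skeleton: learn from stage 1, target uncertainty in stage 2.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

# Stage 1: a small preliminary dose-response experiment (synthetic sigmoid response).
dose1 = np.linspace(0, 10, 6)[:, None]
resp1 = 1 / (1 + np.exp(-(dose1.ravel() - 5))) + rng.normal(0, 0.05, 6)

gp = GaussianProcessRegressor(RBF(2.0) + WhiteKernel(), normalize_y=True)
gp.fit(dose1, resp1)

# Stage 2: choose follow-up doses where the stage-1 model is least certain.
candidates = np.linspace(0, 10, 200)[:, None]
_, sd = gp.predict(candidates, return_std=True)
dose2 = np.sort(candidates[np.argsort(sd)[-4:]].ravel())
print("stage-2 doses:", dose2)
```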