48,993 research outputs found

    Designing A Calibration Set in Spectral Space for Efficient Development of An NIR Method For Tablet Analysis

    Designing a calibration set is the first step in developing a spectroscopic calibration method for quantitative analysis of pharmaceutical tablets. This step is critical because successful model development depends on the suitability of the calibration data. For spectroscopy-based methods, traditional concentration-based techniques for designing calibration sets tend to contain redundant information while lacking information necessary for a successful calibration model. The traditional approach also applies the same design to different spectroscopic techniques and different formulations, and therefore cannot be tailored to a specific technique or formulation. A method for designing a calibration set in Near Infrared (NIR) spectral space was developed for quantitative analysis of tablets. The pure-component NIR spectra of a tablet formulation were used to define the spectral space of that formulation. This method minimizes sample requirements, providing an efficient means of developing multivariate spectroscopic calibrations. Multiple comparative studies were conducted between commonly employed experimental design approaches to calibration development and the newly developed spectral-space-based technique. The comparisons were conducted on single-API (Active Pharmaceutical Ingredient) and multiple-API formulations to quantify model drugs using NIR spectroscopy. Partial least squares (PLS) models were developed from the respective calibration designs. Model performance was comprehensively assessed based on the ability to predict API concentrations in independent prediction sets. Similar prediction performance was achieved using the smaller calibration set designed in spectral space compared to the traditionally designed large calibration sets, and improved prediction performance was observed for the spectrally designed calibration sets compared to traditionally designed calibration sets of equal size. Spectral space was also used to incorporate physico-chemical information into the calibration design, providing an efficient means of developing a robust calibration model, which is critical to ensure consistent performance over the model lifecycle. A weight-coefficient-based technique was developed for selecting loading vectors in the PLS model to aid in building a robust calibration model. It was also demonstrated that the optimal structures of calibration sets differ between NIR and Raman spectroscopy for the same tablet formulation, and between two APIs for the same spectroscopic technique, indicating that the calibration design must be formulation- and technique-specific. This study demonstrates that a calibration set designed in spectral space provides an efficient means of developing spectroscopic multivariate calibrations for tablet analysis, and it opens the opportunity to design formulation- and technique-specific calibration sets that optimize calibration capability.
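
    The abstract does not spell out the design algorithm, but the general idea of choosing a compact calibration set in spectral space and then fitting a PLS model can be sketched as follows. This is a minimal illustration, not the authors' procedure: the maximin selection on PCA scores, the placeholder spectra `X`, the concentrations `y` and all parameter values are assumptions for demonstration only.

```python
# Minimal sketch (not the paper's exact procedure): select calibration samples
# in spectral space via a greedy maximin criterion on PCA scores, then fit a
# PLS model. Spectra X (n_samples x n_wavelengths) and reference API
# concentrations y are synthetic placeholders; all names are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 600))          # placeholder NIR spectra
y = rng.uniform(5, 15, size=200)         # placeholder API concentrations (% w/w)

# Project spectra into a low-dimensional spectral space.
scores = PCA(n_components=5).fit_transform(X)

def maximin_select(T, k):
    """Greedy maximin selection of k samples in score space."""
    chosen = [int(np.argmax(np.linalg.norm(T - T.mean(0), axis=1)))]
    for _ in range(k - 1):
        d = np.min(np.linalg.norm(T[:, None, :] - T[chosen], axis=2), axis=1)
        chosen.append(int(np.argmax(d)))
    return np.array(chosen)

cal = maximin_select(scores, k=30)       # small calibration set chosen in spectral space
pls = PLSRegression(n_components=4).fit(X[cal], y[cal])
print("Predicted API concentrations:", pls.predict(X[:5]).ravel())
```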

    Polyhedral Predictive Regions For Power System Applications

    Despite substantial improvement in the development of forecasting approaches, conditional and dynamic uncertainty estimates ought to be accommodated in decision-making for power system operation and markets, in order to yield either cost-optimal decisions in expectation or decisions with probabilistic guarantees. The representation of uncertainty serves as an interface between forecasting and decision-making problems, with different approaches handling various objects and their parameterization as input. Following substantial developments based on scenario-based stochastic methods, robust and chance-constrained optimization approaches have gained increasing attention. These often rely on polyhedra as a representation of the convex envelope of uncertainty. In this work, we aim to bridge the gap between the probabilistic forecasting literature and such optimization approaches by generating forecasts in the form of polyhedra with probabilistic guarantees. For that, we see polyhedra as parameterized objects under alternative definitions (under the $L_1$ and $L_\infty$ norms), the parameters of which may be modelled and predicted. We additionally discuss assessing the predictive skill of such multivariate probabilistic forecasts. An application and related empirical investigation allow us to verify the probabilistic calibration and predictive skill of our polyhedra.
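
    As a rough illustration of the kind of object involved (not the authors' parameterization), a polyhedral predictive region centred on a point forecast can be calibrated empirically: pick a norm ($L_1$ or $L_\infty$), score past forecast-error vectors under that norm, and set the region's radius to the quantile matching the target coverage. The error data, coverage level and function names below are placeholder assumptions.

```python
# Minimal sketch of the general idea (not the authors' exact method): a
# polyhedral predictive region {e : ||e||_p <= r} around a multivariate point
# forecast, with r calibrated so past errors fall inside with the target
# probability, for p = 1 (cross-polytope) or p = inf (hyper-rectangle).
import numpy as np

rng = np.random.default_rng(1)
errors = rng.normal(scale=0.1, size=(1000, 24))   # synthetic 24-step forecast errors

def polyhedral_radius(errors, norm, coverage=0.9):
    """Empirical radius giving the requested coverage under the L1 or Linf norm."""
    if norm == "l1":
        scores = np.abs(errors).sum(axis=1)
    elif norm == "linf":
        scores = np.abs(errors).max(axis=1)
    else:
        raise ValueError("norm must be 'l1' or 'linf'")
    return np.quantile(scores, coverage)

r_box = polyhedral_radius(errors, "linf")   # box around the forecast
r_l1 = polyhedral_radius(errors, "l1")      # cross-polytope around the forecast
print(f"Linf radius: {r_box:.3f}, L1 radius: {r_l1:.3f}")
```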

    Ellipsoidal Prediction Regions for Multivariate Uncertainty Characterization

    While substantial advances are observed in probabilistic forecasting for power system operation and electricity market applications, most approaches are still developed in a univariate framework. This prevents them from informing about the interdependence structure among locations, lead times and variables of interest. Such dependencies are key in a large share of operational problems involving, for instance, renewable power generation, load and electricity prices. The few methods that account for dependencies translate to sampling scenarios based on given marginals and dependence structures. However, for classes of decision-making problems based on robust or interval chance-constrained optimization, the necessary inputs take the form of polyhedra or ellipsoids. Consequently, we propose a systematic framework to readily generate and evaluate ellipsoidal prediction regions with predefined probability and minimum volume. A skill score is proposed for quantitative assessment of the quality of prediction ellipsoids. A set of experiments illustrates the discrimination ability of the proposed scoring rule for misspecification of ellipsoidal prediction regions. Application results based on three datasets with wind power, PV power and electricity prices allow us to assess the skill of the resulting ellipsoidal prediction regions in terms of calibration, sharpness and overall skill.
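
    A stripped-down sketch of the underlying construction (a simplification, not the paper's full minimum-volume framework): estimate the centre and covariance of multivariate forecast errors, and size the ellipsoid by the empirical quantile of squared Mahalanobis distances so that it attains the target coverage. The synthetic errors and the 95% level below are illustrative assumptions.

```python
# Minimal sketch: an ellipsoidal prediction region
# {x : (x - mu)^T S^{-1} (x - mu) <= c} built from the sample mean and
# covariance of multivariate forecast errors, with c set to the empirical
# quantile of Mahalanobis distances for the target coverage.
import numpy as np

rng = np.random.default_rng(2)
errors = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=2000)

mu = errors.mean(axis=0)
S = np.cov(errors, rowvar=False)
S_inv = np.linalg.inv(S)

# Squared Mahalanobis distance of each observation to the centre.
d2 = np.einsum("ij,jk,ik->i", errors - mu, S_inv, errors - mu)
c = np.quantile(d2, 0.95)                 # size for a 95% region

inside = d2 <= c
print(f"Empirical coverage: {inside.mean():.3f}")   # ~0.95 by construction
```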

    Non-destructive soluble solids content determination for ‘Rocha’ Pear Based on VIS-SWNIR spectroscopy under ‘Real World’ sorting facility conditions

    In this paper we report a method to determine the soluble solids content (SSC) of 'Rocha' pear (Pyrus communis L. cv. Rocha) based on their short-wave NIR reflectance spectra (500-1100 nm) measured in conditions similar to those found in packinghouse fruit sorting facilities. We obtained 3300 reflectance spectra from pears acquired from different lots and producers, with diverse storage times and ripening stages. The macroscopic properties of the pears, such as size, temperature and SSC, were measured under controlled laboratory conditions. For the spectral analysis, we implemented a computational pipeline that incorporates multiple pre-processing techniques, including a feature selection procedure, various multivariate regression models and three different validation strategies. This benchmark allowed us to find the best model/pre-processing combination for SSC prediction from our data. Of the several calibration models tested, we found that Support Vector Machines provides the best prediction metrics, with an RMSEP of around 0.82 °Brix and 1.09 °Brix for the internal and external validation strategies respectively. The latter validation was implemented to assess the prediction accuracy of this calibration method under more 'real world'-like conditions. We also show that incorporating information about fruit temperature and size into the calibration models improves SSC predictability. Our results indicate that the methodology presented here could be implemented in existing packinghouse facilities for single-fruit SSC characterization.
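
    The modelling step can be sketched with scikit-learn's SVR on standardized spectra, with temperature and size appended as extra feature columns. This is a hedged outline of one plausible setup, not the authors' tuned pipeline; the synthetic spectra, feature ranges and hyperparameters are placeholders.

```python
# Minimal sketch: Support Vector Regression on reflectance spectra augmented
# with fruit temperature and size, predicting soluble solids content (°Brix).
# All data below are synthetic placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
spectra = rng.normal(size=(300, 120))             # VIS-SWNIR reflectance (500-1100 nm)
temperature = rng.uniform(10, 30, size=(300, 1))  # fruit temperature (°C)
size = rng.uniform(50, 90, size=(300, 1))         # fruit diameter (mm)
ssc = rng.uniform(9, 15, size=300)                # reference SSC (°Brix)

X = np.hstack([spectra, temperature, size])
X_tr, X_te, y_tr, y_te = train_test_split(X, ssc, test_size=0.25, random_state=0)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X_tr, y_tr)
rmsep = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(f"RMSEP: {rmsep:.2f} °Brix")
```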

    Quantitative spectroscopic analysis of heterogeneous mixtures: the correction of multiplicative effects caused by variations in physical properties of samples

    Spectral measurements of complex heterogeneous mixture samples are often affected by significant multiplicative effects resulting from light scattering, due to physical variations (e.g. particle size and shape, sample packing and sample surface) inherent in the individual samples. Therefore, separating the spectral contributions due to variations in chemical composition from those caused by physical variations is crucial to accurate quantitative spectroscopic analysis of heterogeneous samples. In this work, an improved strategy has been proposed to estimate the multiplicative parameters accounting for multiplicative effects in each measured spectrum, and hence mitigate their detrimental influence on the quantitative spectroscopic analysis of heterogeneous samples. The basic assumption of the proposed method is that light scattering due to physical variations has the same effect on the spectral contributions of each of the spectroscopically active chemical components in the same sample mixture. Based on this underlying assumption, the proposed method efficiently estimates the multiplicative parameters by solving a simple quadratic programming problem. The performance of the proposed method has been tested on two publicly available benchmark data sets (near-infrared total diffuse transmittance spectra of four-component suspension samples, and near-infrared spectral data of meat samples) and compared with empirical approaches designed for the same purpose. The proposed method provided appreciable improvement in the quantitative spectroscopic analysis of heterogeneous mixture samples. The study indicates that accurate quantitative spectroscopic analysis of heterogeneous mixture samples can be achieved by combining spectroscopic techniques with smart modeling methodology.
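
    The paper estimates the multiplicative parameters by solving a quadratic program; as a simpler, related baseline for readers unfamiliar with such corrections, the sketch below applies classic multiplicative scatter correction (MSC), which fits each spectrum against the mean spectrum and divides out the fitted slope. This is a swapped-in baseline, not the paper's method, and the synthetic spectra and noise levels are illustrative only.

```python
# Classic multiplicative scatter correction (MSC): fit each spectrum against
# the mean spectrum, x_i ≈ a_i + b_i * x_ref, then correct it as (x_i - a_i) / b_i.
# Synthetic placeholder spectra with random offsets and slopes.
import numpy as np

rng = np.random.default_rng(4)
base = np.sin(np.linspace(0, 3, 400)) + 2.0
X = np.array([b * base + a + rng.normal(scale=0.01, size=400)
              for a, b in zip(rng.uniform(-0.2, 0.2, 50), rng.uniform(0.8, 1.2, 50))])

def msc(X):
    """Multiplicative scatter correction against the mean spectrum."""
    x_ref = X.mean(axis=0)
    X_corr = np.empty_like(X)
    for i, x in enumerate(X):
        b, a = np.polyfit(x_ref, x, deg=1)     # x ≈ a + b * x_ref
        X_corr[i] = (x - a) / b
    return X_corr

X_corrected = msc(X)
print("Residual spread after MSC:", X_corrected.std(axis=0).mean())
```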

    Estimating Nuisance Parameters in Inverse Problems

    Many inverse problems include nuisance parameters which, while not of direct interest, are required to recover primary parameters. Structure present in these problems allows efficient optimization strategies; a well-known example is variable projection, in which nonlinear least squares problems that are linear in some parameters can be optimized very efficiently. In this paper, we extend the idea of projecting out a subset of the variables to a broad class of maximum likelihood (ML) and maximum a posteriori (MAP) problems with nuisance parameters, such as variance or degrees of freedom. As a result, we are able to incorporate nuisance parameter estimation into large-scale constrained and unconstrained inverse problem formulations. We apply the approach to a variety of problems, including estimation of unknown variance parameters in the Gaussian model, degrees-of-freedom (d.o.f.) parameter estimation in the context of robust inverse problems, automatic calibration, and optimal experimental design. Using numerical examples, we demonstrate improved recovery of primary parameters for several large-scale inverse problems. The proposed approach is compatible with a wide variety of algorithms and formulations, and its implementation requires only minor modifications to existing algorithms.
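
    Variable projection itself, the building block the paper generalizes, is easy to illustrate: for a model that is linear in some coefficients and nonlinear in one parameter, the linear part is solved by least squares inside the objective and only the nonlinear parameter is optimized. The exponential-plus-offset model, data and bounds below are assumptions for demonstration, not taken from the paper.

```python
# Minimal sketch of variable projection: the model y ≈ c1*exp(-theta*t) + c2
# is linear in (c1, c2) and nonlinear in theta, so the linear coefficients are
# projected out via least squares inside the objective and only theta is
# optimized. Data and parameter values are synthetic placeholders.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
t = np.linspace(0, 5, 100)
y = 2.0 * np.exp(-1.3 * t) + 0.5 + rng.normal(scale=0.02, size=t.size)

def projected_residual(theta):
    """Least-squares residual after projecting out the linear coefficients."""
    A = np.column_stack([np.exp(-theta * t), np.ones_like(t)])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((A @ c - y) ** 2)

res = minimize_scalar(projected_residual, bounds=(0.1, 5.0), method="bounded")
theta_hat = res.x
A = np.column_stack([np.exp(-theta_hat * t), np.ones_like(t)])
c_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"theta ≈ {theta_hat:.3f}, c ≈ {c_hat.round(3)}")
```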