    Model-based Methods of Classification: Using the mclust Software in Chemometrics

    Due to recent advances in methods and software for model-based clustering, and to the interpretability of the results, clustering procedures based on probability models are increasingly preferred over heuristic methods. The clustering process estimates a model for the data that allows for overlapping clusters, producing a probabilistic clustering that quantifies the uncertainty of observations belonging to components of the mixture. The resulting clustering model can also be used for some other important problems in multivariate analysis, including density estimation and discriminant analysis. Examples of the use of model-based clustering and classification techniques in chemometric studies include multivariate image analysis, magnetic resonance imaging, microarray image segmentation, statistical process control, and food authenticity. We review model-based clustering and related methods for density estimation and discriminant analysis, and show how the R package mclust can be applied in each instance.
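
    The abstract describes model-based clustering via finite Gaussian mixtures, where each observation gets a membership probability rather than a hard label. mclust itself is an R package; as a rough Python sketch of the same idea, the snippet below uses scikit-learn's GaussianMixture on toy data (the data set and all settings are illustrative, not from the paper), with mclust-style uncertainty computed as one minus the largest membership probability.

```python
# Hedged sketch: model-based clustering with soft assignments, using
# scikit-learn's GaussianMixture as a Python analogue of R's mclust.
# Data and settings are illustrative, not from the paper.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two overlapping 2-D Gaussian components as toy data.
X = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.8, size=(150, 2)),
    rng.normal(loc=[2.0, 1.5], scale=0.8, size=(150, 2)),
])

gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(X)

labels = gmm.predict(X)                 # hard cluster labels
probs = gmm.predict_proba(X)            # membership probabilities
uncertainty = 1.0 - probs.max(axis=1)   # mclust-style uncertainty

print("mean membership uncertainty:", uncertainty.mean())
# gmm.score_samples(X) gives the fitted log-density, so the same model
# also serves for density estimation, as the abstract notes.
```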

    Comparing Implementations of Estimation Methods for Spatial Econometrics

    Recent advances in the implementation of spatial econometric model estimation techniques have made it desirable to compare results, which should agree across software implementations for the same data. These estimation techniques are associated with methods for estimating impacts (emanating effects), which are also presented and compared. This review constitutes an up-to-date comparison of the generalized method of moments (GMM) and maximum likelihood (ML) implementations now available. The comparison uses the cross-sectional US county data set provided by Drukker, Prucha, and Raciborski (2011c, pp. 6-7). The comparisons are cast in the context of alternatives using the MATLAB Spatial Econometrics toolbox, Stata, Python with PySAL (GMM), and R packages including spdep, sphet, and McSpatial.
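
    The abstract compares GMM and ML estimators across software stacks; the sketch below is a hedged illustration of one such comparison in Python with PySAL's spreg, on synthetic lattice data rather than the county data set of Drukker et al., and attribute layouts may differ across spreg versions.

```python
# Hedged sketch: fitting a spatial-lag model by GMM (spatial 2SLS) and by
# ML with PySAL's spreg on synthetic lattice data, then comparing the
# estimated spatial autoregressive coefficient. Illustrative only.
import numpy as np
from libpysal.weights import lat2W
from spreg import GM_Lag, ML_Lag

rng = np.random.default_rng(42)
w = lat2W(15, 15)            # contiguity weights on a 15 x 15 lattice
w.transform = "r"            # row-standardize, as is conventional
W = w.full()[0]              # dense weight matrix for data generation

n = W.shape[0]
x = rng.normal(size=(n, 1))
rho_true, beta_true = 0.5, 1.0
e = rng.normal(size=(n, 1))
# Data-generating process: y = (I - rho W)^{-1} (x beta + e)
y = np.linalg.solve(np.eye(n) - rho_true * W, x * beta_true + e)

gm = GM_Lag(y, x, w=w, name_y="y", name_x=["x"])
ml = ML_Lag(y, x, w=w, name_y="y", name_x=["x"])

# In spreg, the spatial-lag coefficient is the last element of betas.
print("true rho:", rho_true)
print("GMM rho :", gm.betas[-1, 0])
print("ML  rho :", ml.betas[-1, 0])
```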

    Accelerated Predictive Healthcare Analytics with Pumas, a High Performance Pharmaceutical Modeling and Simulation Platform

    Pharmacometric modeling establishes a causal, quantitative relationship between administered dose, tissue exposures, desired and undesired effects, and a patient's risk factors. These models are employed to de-risk drug development and guide precision-medicine decisions. Recent technological advances have made it easy to collect detailed, real-time data, but existing pharmacometric tools were not designed to handle heterogeneous big data and complex models, and their estimation methods are outdated for modern healthcare challenges. We set out to design a platform that facilitates domain-specific modeling and its integration with modern analytics, to foster innovation and readiness for the data deluge in healthcare. New specialized estimation methodologies have been developed that allow dramatic performance advances in areas that have not seen major improvements in decades. New ODE solver algorithms, such as coefficient-optimized higher-order integrators and new automatic stiffness-detection algorithms that are robust to frequent discontinuities, give rise to up to 4x performance improvements across a wide range of stiff and non-stiff systems seen in pharmacometric applications. These methods combine with JIT compiler techniques to further specialize the solution process for individual systems, enabling statically sized optimizations and discrete sensitivity analysis via forward-mode automatic differentiation, which further enhance the accuracy and performance of solving and parameter estimation. We demonstrate that when all of these techniques are combined with a validated clinical-trial dosing mechanism and a non-compartmental analysis (NCA) suite, real applications such as NLME parameter estimation see run times halved while retaining the same accuracy; meanwhile, in areas with less prior software optimization, such as optimal experimental design, we see orders-of-magnitude performance enhancements. Together, these results show a fast, modern domain-specific modeling framework that lays a platform for innovation via upcoming integrations with modern analytics.
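
    Pumas itself is built in Julia; as a language-neutral illustration of the automatic stiffness handling the abstract highlights, the sketch below solves a one-compartment pharmacokinetic model with first-order absorption using SciPy's LSODA method, which switches between stiff and non-stiff integrators on the fly. The model and parameter values are illustrative placeholders, not taken from the paper.

```python
# Hedged sketch: a one-compartment PK model with first-order absorption,
# integrated with SciPy's LSODA solver (automatic stiff/non-stiff
# switching). Parameters are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

ka, ke, V = 1.2, 0.3, 10.0    # absorption rate (1/h), elimination rate (1/h), volume (L)

def pk(t, y):
    """y[0] = drug amount in gut depot, y[1] = amount in central compartment."""
    gut, central = y
    return [-ka * gut, ka * gut - ke * central]

dose = 100.0                  # oral dose placed in the gut depot at t = 0
sol = solve_ivp(pk, (0.0, 24.0), [dose, 0.0], method="LSODA",
                dense_output=True)

times = np.linspace(0.0, 24.0, 9)
conc = sol.sol(times)[1] / V  # plasma concentration = amount / volume
for t, c in zip(times, conc):
    print(f"t = {t:5.1f} h   C = {c:6.3f} mg/L")
```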

    Benchmarking CPUs and GPUs on embedded platforms for software receiver usage

    Smartphones containing multi-core central processing units (CPUs) and powerful many-core graphics processing units (GPUs) bring supercomputing technology into your pocket (or into embedded devices). This can be exploited to produce power-efficient, customized receivers with flexible correlation schemes and more advanced positioning techniques. For example, promising techniques such as the Direct Position Estimation paradigm, or tracking solutions based on particle filtering, seem very appealing in challenging environments but are computationally quite demanding. This article sheds some light on recent embedded processor developments, benchmarks Fast Fourier Transform (FFT) and correlation algorithms on representative embedded platforms, and relates the results to their use in GNSS software radios. The use of embedded CPUs for signal tracking seems straightforward, but more research is required to fully achieve the nominal peak performance of an embedded GPU for FFT computation. Electrical power consumption is also measured at several load levels.
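
    As a small taste of the kind of benchmark the article reports, the snippet below times FFT-based circular correlation, the core operation in GNSS signal acquisition, on the CPU with NumPy. The code length, signal, and repetition count are synthetic stand-ins, not the article's actual test setup.

```python
# Hedged sketch: timing FFT-based circular correlation (the acquisition
# workhorse in GNSS software receivers) with NumPy on the CPU.
import time
import numpy as np

N = 2**14                                     # samples per code period (illustrative)
rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=N)        # pseudorandom spreading code
signal = np.roll(code, 1234) + 0.5 * rng.normal(size=N)  # delayed code + noise

def fft_correlate(x, c):
    """Circular cross-correlation via ifft(fft(x) * conj(fft(c)))."""
    return np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(c)))

reps = 200
start = time.perf_counter()
for _ in range(reps):
    corr = fft_correlate(signal, code)
elapsed = time.perf_counter() - start

print(f"{reps} correlations of length {N}: {1e3 * elapsed / reps:.3f} ms each")
print("estimated code delay (samples):", int(np.argmax(np.abs(corr))))  # ~1234
```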

    Thirty Years of Spatial Econometrics

    In this paper, I give a personal view on the development of the field of spatial econometrics during the past thirty years. I argue that it has moved from the margins to the mainstream of applied econometrics and social science methodology. I distinguish three broad phases in the development, which I refer to as preconditions, takeoff and maturity. For each of these phases I describe the main methodological focus and list major contributions. I conclude with some speculations about future directions.