
    Sequential and adaptive Bayesian computation for inference and optimization

    With the advent of cheap and ubiquitous measurement devices, more data is measured, recorded, and archived today in a relatively short span of time than was recorded throughout all of history. Moreover, advances in computation have made it possible to model much more complicated phenomena and to use vast amounts of data to calibrate the resulting high-dimensional models. In this thesis, we are interested in two fundamental problems which are repeatedly faced in practice as the dimensions of models and datasets grow steadily: the problem of inference in high-dimensional models and the problem of optimization when the number of data points is very large.

    The inference problem becomes difficult when the model one wants to calibrate and estimate is defined in a high-dimensional space. The behavior of computational algorithms in high-dimensional spaces is complicated and defies intuition. Computational methods which work accurately for inferring low-dimensional models, for example, may fail to deliver the same performance on high-dimensional models. In recent years, owing to the significant interest in high-dimensional models, there has been a plethora of work in signal processing and machine learning on computational methods which are robust in high-dimensional spaces. In particular, the high-dimensional stochastic filtering problem has attracted significant attention, as it arises in multiple fields of crucial importance such as geophysics, aerospace, and control. A class of algorithms called particle filters has received special attention and become a fruitful field of research because of its accuracy and robustness in low-dimensional systems. In short, these methods keep a cloud of particles (samples in a state space) which describes the empirical probability distribution over the state variable of interest. Particle filters use a model of the phenomenon of interest to propagate and predict future states, and an observation model to assimilate observations and correct the state estimates. The most common particle filter, the bootstrap particle filter (BPF), consists of an iterative sampling-weighting-resampling scheme. However, BPFs largely fail at inferring high-dimensional dynamical systems, for a number of reasons.

    In this work, we propose a novel particle filter, named the nudged particle filter (NuPF), which specifically aims at improving the performance of particle filters in high-dimensional systems. The algorithm relies on the idea of nudging, which has been widely used in the geophysics literature to tackle high-dimensional inference problems. In addition to the standard sampling-weighting-resampling steps of the particle filter, we define a general nudging step based on the gradients of the likelihoods, which generalizes several of the nudging schemes proposed in the literature. This step modifies a fraction of the particles generated in the sampling step, moving them towards regions where their likelihood is high. This scheme results in significantly improved behavior in high-dimensional models, and the resulting NuPF is able to track high-dimensional systems successfully. Unlike the nudging schemes previously proposed in the literature, the NuPF does not rely on Gaussianity assumptions and can be defined for a general likelihood.
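    To make the nudging step concrete, the following is a minimal Python sketch of a gradient-based nudge, assuming a differentiable log-likelihood; the function names, defaults, and the random choice of which particles to nudge are illustrative assumptions, not the exact scheme of the thesis.

        import numpy as np

        def nudge(particles, grad_log_lik, step=0.1, frac=0.1, rng=None):
            # Gradient nudging: push a randomly chosen fraction of the
            # particles uphill along the gradient of the log-likelihood
            # of the current observation.
            #   particles    : (N, d) array produced by the sampling step
            #   grad_log_lik : maps an (m, d) array of states to the (m, d)
            #                  gradients of log p(y_t | x_t)
            rng = rng or np.random.default_rng()
            n_move = int(frac * particles.shape[0])
            idx = rng.choice(particles.shape[0], size=n_move, replace=False)
            particles[idx] += step * grad_log_lik(particles[idx])
            return particles

    The step is inserted between sampling and weighting; the importance weights are then computed as usual, without correcting for the nudge, which is why only a fraction of the particles is moved.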
    We analytically prove that, because we move only a fraction of the particles and not all of them, the algorithm has a convergence rate that matches standard Monte Carlo algorithms. More precisely, the NuPF has the same asymptotic convergence guarantees as the bootstrap particle filter. As a byproduct, we also show that the nudging step improves the robustness of the particle filter against model misspecification. Model misspecification occurs when the true data-generating system and the model posed by the user of the algorithm differ significantly; in this case, the majority of computational inference methods fail due to the discrepancy between the modeling assumptions and the observed data. Specifically, we prove that the NuPF generates particle systems which have provably higher marginal likelihoods than the standard bootstrap particle filter. This theoretical result is attained by showing that the NuPF can be interpreted as a bootstrap particle filter for a modified state-space model. Finally, we demonstrate the empirical behavior of the NuPF with several examples. In particular, we show results on high-dimensional linear state-space models, a misspecified Lorenz 63 model, a high-dimensional Lorenz 96 model, and a misspecified object tracking model. In all examples, the NuPF infers the states successfully.

    The second problem, the so-called scalability problem in optimization, arises because of the large number of data points in modern datasets. With the increasing abundance of data, many problems in signal processing, statistical inference, and machine learning turn into large-scale optimization problems. For example, in signal processing, one might be interested in estimating a sparse signal given a large number of corrupted observations. Similarly, maximum-likelihood inference problems in statistics result in large-scale optimization problems. Another significant application domain is machine learning, where the most important training methods are defined as optimization problems. Classical optimization methods developed over the past decades are inefficient for these problems, since they need to compute function evaluations or gradients over all the data for a single iteration. For this reason, a class of optimization methods termed stochastic optimization methods has emerged. Algorithms of this class are designed to tackle problems defined over large numbers of data points. In short, these methods use a subsample of the dataset to update the parameter estimate, and do so iteratively until some convergence criterion is met. However, a major difficulty has to be addressed: although the convergence theory for these algorithms is well understood, they can behave unstably in practice. In particular, the most commonly used stochastic optimization method, stochastic gradient descent, can easily diverge if its step size is poorly set. Over the years, practitioners have developed a number of rules of thumb to alleviate these stability issues. We argue in this thesis that one way to develop robust stochastic optimization methods is to frame them as inference methods. In particular, we show that stochastic optimization schemes can be recast as, and understood as, inference algorithms.
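    As a toy illustration of the step-size sensitivity mentioned above (an assumed least-squares setup, not an experiment from the thesis), the sketch below runs stochastic gradient descent twice: with a small step size it converges near the true parameter, while with a large one the iterates blow up.

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.normal(size=1000)
        y = 3.0 * x + 0.1 * rng.normal(size=1000)   # linear data, true slope 3.0

        def sgd(lr, iters=200):
            # Minimise mean((w * x_i - y_i)^2) using one data point per iteration.
            w = 0.0
            for _ in range(iters):
                i = rng.integers(len(x))
                grad = 2.0 * x[i] * (w * x[i] - y[i])   # stochastic gradient
                w -= lr * grad
            return w

        print(sgd(lr=0.05))   # converges to roughly 3.0
        print(sgd(lr=5.0))    # diverges: |w| grows until it overflows to inf/nan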
    Framing the problem as an inference problem opens the way to comparing these methods to optimal inference algorithms and understanding why they might fail or behave unstably. In this vein, we show that there is an intrinsic relationship between a class of stochastic optimization methods, called incremental proximal methods, and Kalman (and extended Kalman) filters. The filtering approach to stochastic optimization results in an automatic calibration of the step size, which removes the step-size-related instability problems. The probabilistic interpretation of stochastic optimization problems also paves the way for new optimization methods based on strategies which are popular in the inference literature. In particular, one can use sampling methods to solve the inference problem and hence obtain the global minimum. In this manner, we propose a parallel sequential Monte Carlo optimizer (PSMCO), which aims to solve stochastic optimization problems. The PSMCO is designed as a zeroth-order method which does not use gradients; it uses only subsets of the data points to move at each iteration. The PSMCO obtains an estimate of a global minimum at each iteration by utilizing a cheap kernel density estimator. We prove that the resulting estimator converges to a global minimum almost surely as the number of Monte Carlo samples tends to infinity. We also empirically demonstrate that the algorithm is able to reconstruct multiple global minima and solve difficult global optimization problems.

    By further exploiting the relationship between inference and optimization, we also propose a probabilistic and online matrix factorization method, termed the dictionary filter, to solve large-scale matrix factorization problems. Matrix factorization methods have received significant interest from the machine learning community due to their expressive representations of high-dimensional data and the interpretability of their estimates. As the majority of matrix factorization methods are defined as optimization problems, they suffer from the same issues as stochastic optimization methods. In particular, when using stochastic gradient descent, one might need to resort to trial and error before settling on a step size. To alleviate these problems, we introduce a matrix-variate probabilistic model for which inference yields a matrix factorization scheme. The scheme is online, in the sense that it uses only a single data point at a time to update the factors. The algorithm is related to optimization schemes, namely to the incremental proximal method defined over a matrix-variate cost function. Using the intuition we developed for the optimization-inference relationship, we devise a model which results in update rules for matrix factorization similar to those of the incremental proximal method. However, the probabilistic updates are more stable and efficient. Moreover, the algorithm has no step-size parameter to tune, as its role is played by the posterior covariance matrix. We demonstrate the utility of the algorithm on a missing-data problem and a video processing problem. We show that the algorithm can be used successfully in machine learning problems and that several promising extensions of the method can be constructed easily.

    Doctoral program: Programa Oficial de Doctorado en Multimedia y Comunicaciones. Thesis committee: Ricardo Cao Abad (president), Michael Peter Wiper (secretary), Nicholas Paul Whiteley (member).
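    As an illustration of the PSMCO idea summarised in the abstract above, here is a minimal gradient-free sequential Monte Carlo optimizer sketch over a one-dimensional parameter. The function names, the resample-and-jitter move, and the use of the best particle in place of a kernel density estimator are simplifying assumptions, not the PSMCO as defined in the thesis.

        import numpy as np

        def smc_optimizer(loss_fn, data, n_particles=500, n_passes=5,
                          batch=32, jitter=0.05, init_scale=5.0, rng=None):
            # loss_fn(theta, minibatch) -> scalar loss of parameter theta on the batch.
            rng = rng or np.random.default_rng()
            theta = init_scale * rng.standard_normal(n_particles)  # particle cloud
            for _ in range(n_passes):
                rng.shuffle(data)
                for s in range(0, len(data), batch):
                    mb = data[s:s + batch]
                    # Reweight particles with the pseudo-likelihood exp(-loss).
                    logw = -np.array([loss_fn(t, mb) for t in theta])
                    w = np.exp(logw - logw.max())
                    w /= w.sum()
                    # Resample proportionally to the weights, then jitter (move step).
                    theta = theta[rng.choice(n_particles, n_particles, p=w)]
                    theta += jitter * rng.standard_normal(n_particles)
            # Point estimate of the global minimiser: the best particle on one batch
            # (the thesis instead extracts it with a cheap kernel density estimator).
            scores = np.array([loss_fn(t, data[:batch]) for t in theta])
            return theta[np.argmin(scores)]

    The resample-and-jitter pair plays the role of a move step and lets the cloud track several minima at once, in line with the multimodal experiments mentioned above.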

    Convergence rates for optimised adaptive importance samplers

    Adaptive importance samplers are adaptive Monte Carlo algorithms to estimate expectations with respect to some target distribution which adapt themselves to obtain better estimators over a sequence of iterations. Although it is straightforward to show that they have the same O(1/√N) convergence rate as standard importance samplers, where N is the number of Monte Carlo samples, the behaviour of adaptive importance samplers over the number of iterations has been left relatively unexplored. In this work, we investigate an adaptation strategy based on convex optimisation which leads to a class of adaptive importance samplers termed optimised adaptive importance samplers (OAIS). These samplers rely on the iterative minimisation of the χ²-divergence between an exponential-family proposal and the target. The analysed algorithms are closely related to the class of adaptive importance samplers which minimise the variance of the weight function. We first prove non-asymptotic error bounds for the mean squared errors (MSEs) of these algorithms, which explicitly depend on the number of iterations and the number of samples together. The non-asymptotic bounds derived in this paper imply that, when the target belongs to the exponential family, the L2 errors of the optimised samplers converge to the optimal rate of O(1/√N), and the rate of convergence in the number of iterations is provided explicitly. When the target does not belong to the exponential family, the rate of convergence is the same but the asymptotic L2 error increases by a factor √ρ⋆ > 1, where ρ⋆ − 1 is the minimum χ²-divergence between the target and an exponential-family proposal. This work was supported by The Alan Turing Institute for Data Science and AI under EPSRC Grant EP/N510129/1. J.M. acknowledges the support of the Spanish Agencia Estatal de Investigación (awards TEC2015-69868-C2-1-R ADVENTURE and RTI2018-099655-B-I00 CLARA) and the Office of Naval Research (Award No. N00014-19-1-2226).
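    A minimal sketch of such an optimised adaptive importance sampler follows, adapting only the mean of a unit-covariance Gaussian proposal by stochastic gradient descent on ρ(μ) = E_q[w(x)²] (so that ρ − 1 is the χ²-divergence, matching the ρ⋆ notation above). The gradient estimate uses the standard score-function identity; all names, defaults, and the fixed covariance are illustrative assumptions, not the exact OAIS algorithm.

        import numpy as np

        def oais_gaussian_mean(log_target, dim, iters=200, n=500, lr=0.05, rng=None):
            # Adapt the mean of the proposal q_mu = N(mu, I) by SGD on
            # rho(mu) = E_{q_mu}[w(x)^2], with w = pi / q_mu.
            # log_target must map an (n, dim) array to n unnormalised log-densities.
            rng = rng or np.random.default_rng()
            mu = np.zeros(dim)
            for _ in range(iters):
                x = mu + rng.standard_normal((n, dim))       # x ~ q_mu
                log_q = -0.5 * np.sum((x - mu) ** 2, axis=1)
                log_w = log_target(x) - log_q                # unnormalised log-weights
                w2 = np.exp(2.0 * (log_w - log_w.max()))     # w^2 up to a constant
                w2 /= w2.sum()                               # self-normalise; rescales
                                                             # but keeps the direction
                # Score-function identity: grad rho = -E_q[w^2 * grad_mu log q_mu],
                # and grad_mu log q_mu(x) = x - mu for a unit-covariance Gaussian.
                grad = -(w2[:, None] * (x - mu)).sum(axis=0)
                mu -= lr * grad
            return mu                                        # adapted proposal mean

    After adaptation, expectations under the target are estimated by ordinary self-normalised importance sampling from the final proposal, which is where the O(1/√N) rate above applies.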

    Nudging the particle filter

    Document deposited in the arXiv.org repository. Version: arXiv:1708.07801v2 [stat.CO].

    We investigate a new sampling scheme to improve the performance of particle filters in scenarios where either (a) there is a significant mismatch between the assumed model dynamics and the actual system producing the available observations, or (b) the system of interest is high dimensional and the posterior probability tends to concentrate in relatively small regions of the state space. The proposed scheme generates nudged particles, i.e., subsets of particles which are deterministically pushed towards specific areas of the state space where the likelihood is expected to be high, an operation known as nudging in the geophysics literature. This is a device that can be plugged into any particle filtering scheme, as it does not involve modifications to the classical algorithmic steps of sampling, computation of weights, and resampling. Since the particles are modified but the importance weights do not account for this modification, the use of nudging leads to additional bias in the resulting estimators. However, we prove analytically that particle filters equipped with the proposed device still attain asymptotic convergence (with the same error rates as conventional particle methods) as long as the nudged particles are generated according to simple and easy-to-implement rules. Finally, we show numerical results that illustrate the improvement in performance and robustness that can be attained using the proposed scheme. In particular, we show the results of computer experiments involving a misspecified Lorenz 63 model, object tracking with misspecified models, and a high-dimensional chaotic Lorenz 96 model. For the examples we have investigated, the new particle filter outperforms conventional algorithms empirically, while incurring only negligible computational overhead. This work was partially supported by the Ministerio de Economía y Competitividad of Spain (TEC2015-69868-C2-1-R ADVENTURE), the Office of Naval Research Global (N62909-15-1-2011), and the regional government of Madrid (program CA SICAM-CM S2013/ICE-2845).
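    Because nudging is a plug-in device, it can be dropped into a standard bootstrap particle filter without touching the sampling, weighting, or resampling code. The sketch below illustrates this, with all function names and signatures assumed for illustration; note that the weights deliberately ignore the nudge, as described above.

        import numpy as np

        def nudged_bpf(y, propagate, loglik, grad_loglik, n=1000, d=1,
                       step=0.1, frac=0.1, rng=None):
            # propagate(x)        : samples x_t ~ p(x_t | x_{t-1}) for an (n, d) array
            # loglik(y_t, x)      : log p(y_t | x_t) for each particle, shape (n,)
            # grad_loglik(y_t, x) : gradient of log p(y_t | x_t) w.r.t. x_t
            rng = rng or np.random.default_rng()
            x = rng.normal(size=(n, d))                     # initial particles
            means = []
            for y_t in y:
                x = propagate(x)                            # 1. sampling (prediction)
                k = int(frac * n)                           # 2. nudge a small subset
                x[:k] += step * grad_loglik(y_t, x[:k])     #    (random after resampling)
                logw = loglik(y_t, x)                       # 3. weighting, with NO
                w = np.exp(logw - logw.max())               #    correction for the nudge
                w /= w.sum()
                means.append(w @ x)                         # filtered posterior mean
                x = x[rng.choice(n, size=n, p=w)]           # 4. multinomial resampling
            return np.array(means)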

    Application of Radiofrequency in Pain Management

    The application of radiofrequency is a treatment for many clinical conditions, such as trigeminal neuralgia, complex regional pain syndrome, chronic postsurgical pain, cancer pain, hyperhidrosis, and facet joint pain, requiring ablation of different nerve locations. In this procedure, a constant high-frequency, high-temperature electrical current is applied to the target tissue. Sluijter achieved significant pain relief using radiofrequency current at a temperature below 42°C, which produces a strong electromagnetic field with no thermal lesion and is referred to as pulsed radiofrequency. Pulsed radiofrequency is a non-neurodestructive and therefore less painful technique, and it serves as an alternative to continuous radiofrequency. Many studies have demonstrated favorable outcomes with pulsed radiofrequency compared to continuous radiofrequency.

    The Effect of Propeller Pitch on Ship Propulsion

    The appropriate choice of a marine engine, identified by using self-propulsion model tests, is compulsory, in particular with respect to the improvement of vessel performance. Numerical simulations or experimental methods provide insight into the flow problem, where fixed-pitch or controllable-pitch propellers are preferred. While calculation methods are time consuming and computationally demanding for both propeller types, hydrodynamic performance assessment involves a heavier workload for controllable-pitch propellers. This paper aims to describe and demonstrate the practicability and effectiveness of the self-propulsion estimation (SPE) method in understanding the effect of propeller pitch on ship propulsion. Technically, the hydrostatic and geometric characteristics of the vessel and the open-water propeller performance are the focal aspects that affect the self-propulsion parameters estimated by the SPE method. The input coefficients for the SPE have been identified using a code that generates propeller open-water performance curves. The propellers used to study pitch variations are based on the Wageningen B-series propeller database. The method was first validated on the full-size Seiun Maru ship, whose sea trial tests are available in the literature. After extensive calculations for the full-size KCS and DTC at service speeds, the study focused on the effect of the Froude number on propulsion parameters. These calculations have demonstrated that greater propeller pitch does not improve propulsion efficiency, and that maximum propeller efficiency changes with a ship's forward speed.

    HRCT findings of pulmonary sarcoidosis; relation to pulmonary function tests

    BACKGROUND: Chest X-ray has several limitations in detecting the extent of pulmonary disease in sarcoidosis. It might not reflect the degree of pulmonary involvement in patients with sarcoidosis when compared to computed tomography of the thorax. We aimed to investigate the HRCT findings of pulmonary sarcoidosis and to determine whether relations exist between HRCT findings and PFTs. In addition, we aimed to investigate the accordance between HRCT findings and conventional chest X-ray staging of pulmonary sarcoidosis. METHOD: 45 patients with sarcoidosis, with a mean age of 29.7 ± 8.4 years, were evaluated. Six of them were female and 39 were male. The type, distribution, and extent of the parameters on HRCT/CT were evaluated and scored. Chest X-rays were evaluated for the stage of pulmonary sarcoidosis. Correlations were investigated between HRCT/CT parameter scores, chest X-ray stages, and pulmonary function parameters. RESULTS: Nodules, micronodules, ground-glass opacity, and consolidation were the most common HRCT findings. There were significant correlations between pulmonary function parameters, HRCT pattern scores, and chest X-ray stages. A significant correlation between the chest X-ray score and the total HRCT score was found. CONCLUSIONS: Pulmonary sarcoidosis patients might have various pulmonary parenchymal changes on HRCT. Thorax HRCT was superior to chest X-ray in detecting pulmonary parenchymal abnormalities. The degree of pulmonary involvement might be closely related to the loss of pulmonary function measured by PFTs. Chest X-ray is considered to have a role in the evaluation of pulmonary sarcoidosis.

    Dietary suppression of MHC-II expression in intestinal stem cells enhances intestinal tumorigenesis [preprint]

    Little is known about how interactions between diet, immune recognition, and intestinal stem cells (ISCs) impact the early steps of intestinal tumorigenesis. Here, we show that a high-fat diet (HFD) reduces the expression of the major histocompatibility complex II (MHC-II) genes in ISCs. This decline in ISC MHC-II expression under an HFD correlates with an altered intestinal microbiome composition and is recapitulated in antibiotic-treated and germ-free mice on a control diet. Mechanistically, pattern recognition receptor and IFNγ signaling regulate MHC-II expression in ISCs. Although MHC-II expression on ISCs is dispensable for stem cell function in organoid cultures in vitro, upon loss of the tumor suppressor gene Apc under an HFD, MHC-II− ISCs harbor greater in vivo tumor-initiating capacity than their MHC-II+ counterparts, thus implicating a role for epithelial MHC-II in suppressing tumorigenesis. Finally, ISC-specific genetic ablation of MHC-II in engineered Apc-mediated intestinal tumor models increases tumor burden in a cell-autonomous manner. These findings highlight how an HFD alters the immune recognition properties of ISCs through the regulation of MHC-II expression in a manner that could contribute to intestinal tumorigenesis.

    Relationship between psychosocial status, diabetes mellitus, and left ventricular systolic function in patients with stable multivessel coronary artery disease

    Background: Negative emotional conditions contribute to the development of coronary artery disease (CAD). Depression and anxiety are prognostic factors in patients with CAD. The aim of our study was to investigate the association between emotional conditions and left ventricular (LV) systolic function in CAD. Methods: 168 patients (102 men, 66 women, mean age 66.3 ± 9.9 years) with stable angina and multivessel disease (MVD) were included in the study. According to the LV ejection fraction (LVEF) on echocardiography, patients were divided into two groups: the preserved group (LVEF > 50%) and the impaired group (LVEF < 50%). The preserved group consisted of 94 patients and the impaired group of 74 patients. Emotional status was evaluated using the Hamilton Depression (HAM-D), Hamilton Anxiety (HAM-A), Beck Depression Inventory (BDI), and Beck Anxiety Inventory (BAI) scores. Results: The prevalence of diabetes mellitus (DM) was significantly higher in the impaired group than in the preserved group (29.8% vs 56.8%, p < 0.01). The HAM-D, HAM-A, BAI, and BDI scores were higher in the impaired group than in the preserved group (HAM-D: 12.1 ± 3.3 vs 14.5 ± 2.3, p = 0.03; HAM-A: 12.7 ± 3.4 vs 14.3 ± 2.2, p = 0.01; BAI: 18.6 ± 6.4 vs 22.1 ± 6.6, p = 0.01; BDI: 13.9 ± 2.5 vs 17.2 ± 2.0, p = 0.002). In multivariate analysis, BDI scores (odds ratio [OR] 2.197, 95% confidence interval [CI] 1.101–4.387; p = 0.026), HAM-A scores (OR 1.912, 95% CI 1.092–2.974; p = 0.041), and DM (OR 2.610, 95% CI 1.313–5.183; p = 0.006) were important risk factors for LV dysfunction in stable patients with MVD. Conclusions: This study demonstrated that emotional status and DM are factors associated with impaired LV systolic function in patients with stable CAD.

    Determination of right ventricular dysfunction using the speckle tracking echocardiography method in patients with obstructive sleep apnea

    Background: The speckle tracking echocardiography (STE) method shows the presence of right ventricular (RV) dysfunction before the advent of RV failure and pulmonary hypertension in patients with cardiopulmonary disease. We aimed to assess subclinical RV dysfunction in obstructive sleep apnea (OSA) using the STE method. Method: Twenty-one healthy individuals and 58 OSA patients were included. According to severity as determined by the apnea–hypopnea index (AHI), OSA patients were examined in three groups: mild, moderate, and severe. The RV free wall was used in the STE examination. Results: RV strain (ST, %) and systolic strain rate (STR-S, 1/s) decreased with disease severity (ST: healthy −34.05 ± 4.29, mild −31.4 ± 5.37, moderate −22.75 ± 4.89, severe −20.89 ± 5.59, p < 0.003; STR-S: healthy −2.93 ± 0.64, mild −2.85 ± 0.73, moderate −2.06 ± 0.43, severe −1.43 ± 0.33, p < 0.03). With increasing disease severity, the RV early diastolic strain rate (STR-E) decreased and the late diastolic strain rate (STR-A) increased (STR-E: healthy 2.38 ± 0.63, mild 2.32 ± 0.84, moderate 1.66 ± 0.55, severe 1 ± 0.54, p < 0.003; STR-A: healthy 2.25 ± 0.33, mild 2.32 ± 0.54, moderate 2.79 ± 0.66, severe 3.29 ± 0.54, p < 0.03). The STR-E/A ratio also showed a decreasing trend with disease severity (healthy 1.08 ± 0.34, mild 1.06 ± 0.46, moderate 0.62 ± 0.22, severe 0.34 ± 0.23, p < 0.03). Conclusions: Subclinical RV dysfunction can be established in OSA patients even in the absence of pulmonary hypertension and pathologies which could have adverse effects on RV function. In addition to conventional, Doppler, and tissue Doppler echocardiography, the STE method can determine RV dysfunction in the subclinical phase. (Cardiol J 2012; 19, 2: 130–139)

    The role of oxidative stress and effect of alpha-lipoic acid in reexpansion pulmonary edema – an experimental study

    Introduction: We investigated the role of oxidative stress in the pathogenesis of reexpansion pulmonary edema (RPE) and the effect of alpha-lipoic acid (ALA) in its prevention.