135 research outputs found

    On Monte Carlo time-dependent variational principles

    Get PDF
    [no abstract available]

    Fuzzy EOQ Model with Trapezoidal and Triangular Functions Using Partial Backorder

    Get PDF
    The fuzzy EOQ model is an EOQ model that can estimate costs from available information. Trapezoidal fuzzy membership functions can be used to estimate the existing costs; a trapezoidal membership function has several breakpoints, each with its own membership value. The fuzzy total cost TR̃C obtained from the trapezoidal model will be higher than the usual TRC of the crisp EOQ model. This paper aims to determine the company's optimal inventory, namely the optimal Q and optimal V; using the partial-backorder model, the optimal Q and V, the optimal number of units each time an order is placed, can be found. The EOQ model is very closely related to inventory when the fuzzy EOQ model with triangular and trapezoidal membership functions and partial backorder is used. The optimal Q and V values of the fuzzy models increase because the trapezoidal and triangular membership functions take different values depending on the requirements of each membership function. Therefore, a fuzzy model can solve the company's problem of estimating costs for the coming period
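    The abstract leaves the model's components implicit. As a rough sketch (not the paper's actual fuzzy model), the standard triangular and trapezoidal membership functions and the classical crisp EOQ order quantity can be written as follows; all parameter values below are hypothetical:

```python
import math

def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def trapezoidal(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], flat on [b, c], falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

def eoq(demand, order_cost, holding_cost):
    """Classical crisp EOQ order quantity: Q* = sqrt(2 * D * S / h)."""
    return math.sqrt(2 * demand * order_cost / holding_cost)

print(triangular(5, 0, 5, 10))      # 1.0 at the peak
print(trapezoidal(6, 0, 4, 8, 12))  # 1.0 on the plateau
print(round(eoq(1000, 50, 2), 2))   # 223.61, the crisp Q*
```

    The fuzzy TR̃C of the paper would be obtained by evaluating the total cost with fuzzy (membership-weighted) cost parameters instead of the crisp ones used here.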

    Probabilistic Models for Joint Segmentation, Detection and Tracking

    Get PDF
    Migration of cells and subcellular particles plays a crucial role in many processes in living organisms. Despite its importance, systematic research into cell motility has only been possible in the last two decades, owing to the rapid development of non-invasive imaging techniques and digital cameras. Modern imaging systems allow the study of large populations with thousands of cells. Manual analysis of the acquired data is infeasible, because gaining insight into the underlying biochemical processes sometimes requires determining the shape, velocity and other characteristics of individual cells. Thus there is a high demand for automatic methods

    Essays on risk and uncertainty in financial decision making: Bayesian inference of multi-factor affine term structure models and dynamic optimal portfolio choices for robust preferences

    Full text link
    Thesis (Ph.D.)--Boston University. This thesis studies model inference about risk and decision making under model uncertainty in two specific settings. The first part of the thesis develops a Bayesian Markov Chain Monte Carlo (MCMC) estimation method for multi-factor affine term structure models. Affine term structure models are popular because they provide closed-form solutions for the valuation of fixed income securities. Efficient estimation methods for the parameters of these models, however, are not readily available. The MCMC algorithms developed provide more accurate estimates than alternative estimation methods. The superior performance of the MCMC algorithms is first documented in a simulation study. Convergence of the algorithm used to sample posterior distributions is documented in numerical experiments. The Bayesian MCMC methodology is then applied to yield data. The in-sample pricing errors obtained are significantly smaller than those of alternative methods. A Bayesian forecast analysis documents the significantly superior predictive power of the MCMC approach. Finally, Bayesian model selection criteria are discussed. Incorporating aspects of model uncertainty into the optimal allocation of risk has become an important topic in finance. The second part of the thesis considers an optimal dynamic portfolio choice problem for an ambiguity-averse investor. It introduces new preferences that allow the separation of risk aversion and ambiguity aversion. The novel representation is based on generalized divergence measures that capture richer forms of model uncertainty than traditional relative entropy measures. The novel preferences are shown to have a homothetic stochastic differential utility representation. Based on this representation, optimal portfolio policies are derived using numerical schemes for forward-backward stochastic differential equations.
The optimal portfolio policy is shown to contain new hedging motives induced by the investor's attitude toward model uncertainty. Ambiguity concerns introduce additional horizon effects, boost effective risk aversion, and overall reduce optimal investment in risky assets. These findings have important implications for the design of optimal portfolios in the presence of model uncertainty
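    The thesis's MCMC algorithms for affine term structure models are not reproduced here, but the core idea of sampling a posterior by Markov Chain Monte Carlo can be illustrated with a minimal random-walk Metropolis sketch on a toy one-dimensional target (a standard normal, chosen purely for illustration):

```python
import math
import random

random.seed(0)

def log_posterior(theta):
    # Toy target: standard normal log-density (a stand-in for the far more
    # complex posterior of a multi-factor affine term structure model).
    return -0.5 * theta * theta

def metropolis(log_post, theta0, n_steps, step=0.5):
    """Random-walk Metropolis sampler, the basic building block of MCMC."""
    theta, samples = theta0, []
    for _ in range(n_steps):
        proposal = theta + random.gauss(0.0, step)
        # Accept with probability min(1, posterior ratio).
        if math.log(random.random()) < log_post(proposal) - log_post(theta):
            theta = proposal
        samples.append(theta)
    return samples

draws = metropolis(log_posterior, theta0=3.0, n_steps=5000)
mean = sum(draws) / len(draws)
print(round(mean, 2))  # posterior mean estimate, should be near 0
```

    Convergence diagnostics of the kind documented in the thesis amount to checking that such chains forget their starting point (here theta0 = 3.0) and mix through the target distribution.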

    Methods for Quantum Dynamics, Localization and Quantum Machine Learning

    Get PDF

    Sonar image interpretation for sub-sea operations

    Get PDF
    Mine Counter-Measure (MCM) missions are conducted to neutralise underwater explosives. Automatic Target Recognition (ATR) assists operators by increasing the speed and accuracy of data review. ATR embedded on vehicles enables adaptive missions which increase the speed of data acquisition. This thesis addresses three challenges: the speed of data processing, the robustness of ATR to environmental conditions, and the large quantities of data required to train an algorithm. The main contribution of this thesis is a novel ATR algorithm. The algorithm uses features derived from the projection of 3D boxes to produce a set of 2D templates. The template responses are independent of grazing angle, range and target orientation. Integer skewed integral images are derived to accelerate the calculation of the template responses. The algorithm is compared to the Haar cascade algorithm. For a single model of sonar and cylindrical targets, the algorithm reduces the Probability of False Alarm (PFA) by 80% at a Probability of Detection (PD) of 85%. When the algorithm is trained on target data from another model of sonar, the PD is only 6% lower even though no representative target data was used for training. The second major contribution is an adaptive ATR algorithm that uses local sea-floor characteristics to address the problem of ATR robustness with respect to the local environment. A dual-tree wavelet decomposition of the sea-floor and a Markov Random Field (MRF) based graph-cut algorithm are used to segment the terrain. A Neural Network (NN) is then trained to filter ATR results based on the local sea-floor context. It is shown, for the Haar cascade algorithm, that the PFA can be reduced by 70% at a PD of 85%. The speed of data processing is addressed using novel pre-processing techniques. The standard three-class MRF for sonar image segmentation is formulated using graph cuts. Consequently, a 1.2 million pixel image is segmented in 1.2 seconds.
Additionally, local estimation of class models is introduced to remove range dependent segmentation quality. Finally, an A* graph search is developed to remove the surface return, a line of saturated pixels often detected as false alarms by ATR. The A* search identifies the surface return in 199 of 220 images tested with a runtime of 2.1 seconds. The algorithm is robust to the presence of ripples and rocks
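    The "integer skewed integral images" above are a specialization of the standard summed-area table, which lets any axis-aligned box sum be computed in constant time after one linear pass over the image. A minimal sketch of the ordinary (non-skewed) version, on a hypothetical 3×3 image:

```python
def integral_image(img):
    """Summed-area table with a zero-padded border:
    I[y+1][x+1] = sum of img over rows 0..y and columns 0..x."""
    h, w = len(img), len(img[0])
    I = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            I[y + 1][x + 1] = I[y][x + 1] + row_sum
    return I

def box_sum(I, x0, y0, x1, y1):
    """Sum of the image over the inclusive box [x0..x1] x [y0..y1] in O(1),
    using four corner lookups -- the operation template matching repeats."""
    return I[y1 + 1][x1 + 1] - I[y0][x1 + 1] - I[y1 + 1][x0] + I[y0][x0]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
I = integral_image(img)
print(box_sum(I, 0, 0, 2, 2))  # 45, the whole image
print(box_sum(I, 1, 1, 2, 2))  # 5 + 6 + 8 + 9 = 28
```

    The thesis's skewed variant applies the same four-lookup trick to parallelogram-shaped regions, matching the geometry of the projected 3D box templates.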

    Quelques extensions des level sets et des graph cuts et leurs applications à la segmentation d'images et de vidéos

    Get PDF
    Image processing techniques are now widespread across a large number of domains: medical imaging, film post-production, games... Automatic detection and extraction of regions of interest inside an image, a volume or a video is a challenging problem, since it is a starting point for many applications in image processing. However, although many techniques have been developed in recent years, the state-of-the-art methods suffer from some drawbacks: the Level Sets method only provides a local minimum, while the Graph Cuts method comes from the combinatorial optimization community and could take better advantage of the specificity of image processing problems. In this thesis, we propose two extensions of the previously cited methods in order to soften or remove these drawbacks. We first discuss the existing methods and show how they are related to the segmentation problem through an energy formulation. Then we introduce stochastic perturbations to the Level Sets method and build a more generic framework: the Stochastic Level Sets (SLS). We then provide a direct application of the SLS to image segmentation that yields a better minimization of energies; essentially, it allows the contours to escape from local minima. Then we propose a new formulation of an existing Graph Cuts algorithm in order to introduce some concepts of interest to the image processing community, such as initialization of the algorithm for speed improvement. We also provide a new approach for layer extraction from a video sequence that retrieves both the visible and hidden layers in it
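    The Graph Cuts formulation mentioned above reduces binary segmentation to a minimum s-t cut, which by the max-flow/min-cut theorem can be computed with a max-flow algorithm. A minimal Edmonds-Karp sketch on a hypothetical two-pixel graph (capacities chosen arbitrarily; a real segmentation graph encodes unary and smoothness costs over millions of pixels):

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp max-flow on an adjacency matrix; the returned value
    equals the cost of the minimum cut separating source from sink."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:
            return total  # no augmenting path left: flow is maximal
        # Find the bottleneck capacity along the path, then augment.
        bottleneck, v = float("inf"), sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# Tiny two-pixel "image": node 0 = source (object model), nodes 1-2 = pixels,
# node 3 = sink (background model). Source/sink edges carry unary costs,
# pixel-pixel edges carry smoothness costs.
cap = [[0, 9, 1, 0],
       [0, 0, 3, 2],
       [0, 3, 0, 8],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # 6
```

    The thesis's contribution concerns initializing such computations for speed; the cut itself is found exactly as sketched here, only at vastly larger scale.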