163 research outputs found

    Sequential Design for Optimal Stopping Problems

    We propose a new approach to solving optimal stopping problems via simulation. Working within the backward dynamic programming/Snell envelope framework, we augment the methodology of Longstaff-Schwartz, which focuses on approximating the stopping strategy. Namely, we introduce adaptive generation of the stochastic grids anchoring the simulated sample paths of the underlying state process. This allows for active learning of the classifiers partitioning the state space into the continuation and stopping regions. To this end, we examine sequential design schemes that adaptively place new design points close to the stopping boundaries. We then discuss dynamic regression algorithms that can implement such recursive estimation and local refinement of the classifiers. The new algorithm is illustrated with a variety of numerical experiments, showing that an order-of-magnitude savings in design size can be achieved. We also compare with existing benchmarks in the context of pricing multi-dimensional Bermudan options.
    Comment: 24 pages
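
The sequential designs proposed here build on the standard Longstaff-Schwartz backward recursion. As a point of reference, below is a minimal sketch of that baseline for a one-dimensional Bermudan put; all parameter values and the quadratic regression basis are illustrative choices, not the paper's setup. The paper's contribution would replace the fixed simulated grid here with adaptively placed design points near the stopping boundary.

```python
import numpy as np

def longstaff_schwartz_put(S0=100.0, K=100.0, r=0.05, sigma=0.2,
                           T=1.0, n_steps=50, n_paths=20000, seed=0):
    """Baseline Longstaff-Schwartz: regress continuation values on a
    polynomial basis and exercise where immediate payoff beats them."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Simulate geometric Brownian motion paths of the underlying.
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1)
    S = S0 * np.exp(log_paths)                  # shape (n_paths, n_steps)
    cash = np.maximum(K - S[:, -1], 0.0)        # payoff at maturity
    disc = np.exp(-r * dt)
    for t in range(n_steps - 2, -1, -1):
        cash *= disc                            # discount one step back
        itm = K - S[:, t] > 0                   # regress only in-the-money
        if itm.sum() > 10:
            x = S[itm, t]
            coeff = np.polyfit(x, cash[itm], 2)  # quadratic basis
            cont = np.polyval(coeff, x)          # estimated continuation value
            exercise = K - x
            stop = exercise > cont               # stopping region on this grid
            idx = np.where(itm)[0][stop]
            cash[idx] = exercise[stop]
    return disc * cash.mean()
```

The fixed grid of simulated paths is exactly what the adaptive schemes above refine: new design points are concentrated where `exercise` and `cont` are close, i.e. near the classifier boundary.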

    Towards Learning Representations in Visual Computing Tasks

    The performance of most visual computing tasks depends on the quality of the features extracted from the raw data. An insightful feature representation improves many learning algorithms by exposing the underlying explanatory factors that relate unobserved inputs to outputs. A good representation should also handle anomalies in the data, such as missing samples and noisy inputs caused by undesired, external factors of variation, and should reduce data redundancy. Over the years, many feature extraction processes have been invented to produce good representations of raw images and videos. These processes can be categorized into three groups. The first group contains processes that are hand-crafted for a specific task. Hand-engineering features requires domain expertise and manual labor, but the resulting feature extraction process is interpretable and explainable. The next group contains latent-feature extraction processes. While the original features lie in a high-dimensional space, the factors relevant to a task often lie on a lower-dimensional manifold. Latent-feature extraction employs hidden variables to expose underlying data properties that cannot be directly measured from the input, and enforces a specific structure, such as sparsity or low rank, in the derived representation through sophisticated optimization techniques. The last category is that of deep features, obtained by passing raw input data with minimal pre-processing through a deep network whose parameters are computed by iteratively minimizing a task-based loss. In this dissertation, I present four pieces of work in which I create and learn suitable data representations. The first task employs hand-crafted features to perform clinically relevant retrieval of diabetic retinopathy images. The second task uses latent features to perform content-adaptive image enhancement. The third task ranks a pair of images based on their aestheticism. The goal of the last task is to capture localized image artifacts in small datasets with patch-level labels. For the latter two tasks, I propose novel deep architectures and show significant improvement over previous state-of-the-art approaches. A suitable combination of feature representations, augmented with an appropriate learning approach, can increase performance for most visual computing tasks.
    Dissertation/Thesis: Doctoral Dissertation, Computer Science, 201

    A predictive energy management strategy for multi-mode plug-in hybrid electric vehicles based on multi neural networks

    Online optimal energy management of plug-in hybrid electric vehicles has been continually investigated for better fuel economy. This paper proposes a predictive energy management strategy based on multiple neural networks for a multi-mode plug-in hybrid electric vehicle. To this end, the offline optimal results used for knowledge learning are first derived by dynamic programming and Pontryagin's minimum principle. Then, a mode-recognition neural network is trained on the optimal results of dynamic programming, and a recurrent neural network is exploited, for the first time, for online co-state estimation. Consequently, a velocity-prediction-based online model predictive control framework is established, with co-state correction and slacked constraints, to solve for the real-time optimal control sequence. A series of numerical simulation results validates that the optimal performance yielded by the global optimal strategy can be exploited online to attain a satisfactory cost reduction, compared with the equivalent consumption minimization strategy, with the assistance of the estimated real-time co-state and slacked reference. In addition, the computation time of the proposed algorithm decreases by 23.40% compared with a conventional Pontryagin's-minimum-principle-based model predictive control scheme, demonstrating its potential for online application.
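
The offline stage that generates training data for such strategies can be sketched, under heavy simplification, as a backward dynamic program over a discretized battery state of charge. Everything below is an illustrative assumption rather than the paper's vehicle model: the function name, the quadratic surrogate fuel map, the grid sizes, and the power limits.

```python
import numpy as np

def dp_energy_split(demand, soc0, n_soc=51, soc_min=0.3, soc_max=0.9,
                    capacity=3600.0, p_batt_max=20.0, dt=1.0):
    """Backward dynamic program over a discretized battery state of charge
    (SOC): for each step of a predicted demand profile, choose the battery
    share of the power split that minimizes a surrogate engine fuel cost."""
    def fuel(p_eng):
        return 0.01 * p_eng ** 2 + p_eng      # convex surrogate fuel map

    soc_grid = np.linspace(soc_min, soc_max, n_soc)
    V = np.zeros(n_soc)                        # terminal cost: final SOC is free
    policy = np.zeros((len(demand), n_soc))
    for t in range(len(demand) - 1, -1, -1):
        V_new = np.full(n_soc, np.inf)
        for i, soc in enumerate(soc_grid):
            for p_batt in np.linspace(0.0, min(p_batt_max, demand[t]), 11):
                soc_next = soc - p_batt * dt / capacity
                if soc_next < soc_min:
                    continue                   # battery depleted: infeasible split
                # first grid index at or above soc_next (coarse snap to grid)
                j = min(np.searchsorted(soc_grid, soc_next), n_soc - 1)
                c = fuel(demand[t] - p_batt) + V[j]
                if c < V_new[i]:
                    V_new[i], policy[t, i] = c, p_batt
        V = V_new
    i0 = int(np.argmin(np.abs(soc_grid - soc0)))
    return V[i0], policy
```

Tabulated optimal costs and controls of this kind are what the mode-recognition and co-state networks described above would be trained to reproduce online.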

    Invariance transformations for processing NDE signals

    The ultimate objective in nondestructive evaluation (NDE) is the characterization of materials on the basis of information in the response from energy/material interactions. This is commonly referred to as the inverse problem. Inverse problems are in general ill-posed, and full analytical solutions are seldom tractable. Pragmatic approaches to solving them employ a constrained search technique that limits the space of all possible solutions. A more modest goal is therefore to use the received signal to characterize defects in objects in terms of location, size and shape. However, the NDE signal received by the sensors is influenced not only by the defect, but also by the operational parameters associated with the experiment. This dissertation deals with invariant pattern recognition techniques that render NDE signals insensitive to operational variables while preserving or enhancing defect-related information. Such techniques are comprised of invariance transformations that operate on the raw signals prior to interpretation by subsequent defect characterization schemes. Invariance transformations are studied in the context of the magnetic flux leakage (MFL) inspection technique, the method of choice for inspecting natural gas transmission pipelines buried underground.
    The magnetic flux leakage signal received by the scanning device is very sensitive to a number of operational parameters. Factors that have a major impact on the signal include variations in the permeability of the pipe-wall material and the velocity of the inspection tool. This study describes novel approaches to compensate for the effects of these variables.
    Two types of invariance schemes, feature selection and signal compensation, are studied. In the feature selection approach, the invariance transformation is recast as a problem in interpolation of scattered, multi-dimensional data. A variety of interpolation techniques are explored, the most powerful among them being feed-forward neural networks. The second parametric variation is compensated for by using restoration filters. The filter kernels are derived using a constrained, stochastic least-squares optimization technique or by adaptive methods. Both linear and non-linear filters are studied as tools for signal compensation.
    Results showing the successful application of these invariance transformations to real and simulated MFL data are presented.
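
A minimal stand-in for the restoration-filter idea is a frequency-domain Wiener-style deconvolution, a stock regularized least-squares filter. The dissertation's constrained stochastic least-squares kernels are more elaborate; the function below, with its illustrative name and noise-power parameter, is only a sketch of the general mechanism.

```python
import numpy as np

def wiener_restore(measured, kernel, noise_power=1e-2):
    """Frequency-domain Wiener-style restoration: apply the regularized
    inverse conj(H) / (|H|^2 + noise_power) of the degradation kernel,
    which trades off inversion accuracy against noise amplification."""
    n = len(measured)
    H = np.fft.rfft(kernel, n)                        # kernel frequency response
    G = np.conj(H) / (np.abs(H) ** 2 + noise_power)   # regularized inverse filter
    return np.fft.irfft(np.fft.rfft(measured) * G, n)
```

A larger `noise_power` damps frequencies where the kernel response is weak, which is the same stabilizing role the constraint plays in a constrained least-squares derivation.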

    The Importance of Quantum Information in the Stock Market and Financial Decision Making in Conditions of Radical Uncertainty

    “The Universe is a coin that’s already been flipped, heads or tails predetermined: all we’re doing is uncovering it. The ‘paradox’ is only a conflict between reality and your feeling of what reality ‘ought to be’.” (Richard Feynman)
    The research proceeds along two parallel directions. The first is gaining an understanding of the applicability of quantum mechanics/quantum physics to human decision-making processes in the stock market, with quantum information as a decision-making lever; the second is neuroscience and artificial intelligence, using postulates analogous to the postulates of quantum mechanics under conditions of radical uncertainty. Radical uncertainty, which draws on quantum mechanics’ claim that there is no causal certainty, is everywhere in our world. “Radical uncertainty is characterized by vagueness, ignorance, indeterminacy, ambiguity and lack of information. It tends to create ‘mysteries’ rather than ‘puzzles’ with defined solutions. Mysteries are ill-defined problems in which action is required, but the future is uncertain, the consequences unpredictable, and disagreement inevitable. How should we make decisions in these circumstances?” (J. Kay and M. King, 2020). Meanwhile, “uncertainty and ambiguity are at the very core of the stock market” and “narratives are the currency of uncertainty” (N. Mangee, 2022).


    Bio-inspired log-polar based color image pattern analysis in multiple frequency channels

    The main topic addressed in this thesis is color image pattern recognition based on the lateral-inhibition subtraction phenomenon, combined with a complex log-polar mapping, in multiple spatial frequency channels. It is shown that the individual red, green and blue channels have different recognition performances when placed in the context of former work by Dragan Vidacic. The green channel is observed to perform better than the other two, with the blue channel performing poorest. Following the application of a contrast-stretching function, object recognition performance improves in all channels. Multiple spatial frequency filters were designed to simulate the filtering channels of the human visual system. After these preprocessing steps, Dragan Vidacic's methodology is followed in order to determine the benefits obtained from the preprocessing steps under investigation. It is shown that performance gains are realized through such preprocessing.
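
The complex log-polar mapping at the heart of such pipelines can be sketched as a nearest-neighbor resampling of the image onto a log-radius/angle grid, under which scalings and rotations of the input become translations of the map. The function name and grid sizes below are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def log_polar_map(img, n_r=32, n_theta=64):
    """Resample a grayscale image about its centre onto a log-polar grid:
    rows are log-spaced radii, columns are angles, so a scaling of the
    input shifts rows and a rotation shifts columns (circularly)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    rs = np.exp(np.linspace(0.0, np.log(r_max), n_r))          # log-spaced radii
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    # Nearest-neighbor sample coordinates, clipped to the image bounds.
    ys = np.clip(np.rint(cy + rs[:, None] * np.sin(thetas)).astype(int), 0, h - 1)
    xs = np.clip(np.rint(cx + rs[:, None] * np.cos(thetas)).astype(int), 0, w - 1)
    return img[ys, xs]                                          # shape (n_r, n_theta)
```

This translation property is what makes subsequent correlation-based recognition insensitive to scale and in-plane rotation.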

    Reinforcement Learning Environment for Orbital Station-Keeping

    In this thesis, a reinforcement learning environment for orbital station-keeping is created and tested with one of the most widely used reinforcement learning algorithms, Proximal Policy Optimization (PPO). The thesis also covers the foundations of reinforcement learning, from its taxonomy to a description of PPO, and gives a thorough explanation of the physics required to build the RL environment. Optuna optimizes PPO's hyper-parameters for the created environment via distributed computing. The thesis then presents and analyzes the results of training a PPO agent six times.
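
For orientation, the core of PPO is its clipped surrogate objective. The short sketch below is illustrative rather than the thesis code: it shows only the clipping that keeps each policy update close to the policy that collected the data.

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """PPO clipped surrogate objective (to be maximized). The probability
    ratio pi_new/pi_old is clipped to [1-eps, 1+eps], so a single update
    cannot exploit an advantage estimate by moving the policy too far."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Taking the minimum makes the clipped bound pessimistic in both
    # directions: large ratios with positive advantage are capped, and
    # small ratios with negative advantage are floored.
    return np.minimum(unclipped, clipped).mean()
```

In a full training loop this objective is maximized by gradient ascent over minibatches of collected trajectories, which is the part Optuna's hyper-parameter search (learning rate, `eps`, batch size, and so on) tunes.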