Scalable approximate FRNN-OWA classification
Fuzzy Rough Nearest Neighbour classification with Ordered Weighted Averaging operators (FRNN-OWA) is an algorithm that classifies unseen instances according to their membership in the fuzzy upper and lower approximations of the decision classes. Previous research has shown that the use of OWA operators increases the robustness of this model. However, calculating membership in an approximation requires a nearest neighbour search. In practice, the query time complexity of exact nearest neighbour search algorithms in more than a handful of dimensions is near-linear, which limits the scalability of FRNN-OWA. Therefore, we propose approximate FRNN-OWA, a modified model that calculates upper and lower approximations of decision classes using the approximate nearest neighbours returned by Hierarchical Navigable Small Worlds (HNSW), a recent approximate nearest neighbour search algorithm with logarithmic query time complexity at constant near-100% accuracy. We demonstrate that approximate FRNN-OWA is sufficiently robust to match the classification accuracy of exact FRNN-OWA while scaling much more efficiently. We test four parameter configurations of HNSW, and evaluate their performance by measuring classification accuracy and construction and query times for samples of various sizes from three large datasets. We find that with two of the parameter configurations, approximate FRNN-OWA achieves near-identical accuracy to exact FRNN-OWA for most sample sizes within query times that are up to several orders of magnitude faster.
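The decision rule behind FRNN-OWA can be sketched in a few lines. The sketch below is illustrative, not the authors' implementation: it assumes a Gaussian similarity, additive OWA weights, and a brute-force exact search standing in for HNSW (the paper's point is precisely that HNSW replaces this exact search at scale). All names (`similarity`, `frnn_owa_classify`, etc.) are hypothetical.

```python
import math

def similarity(a, b):
    # Gaussian-style similarity derived from squared Euclidean distance
    # (an illustrative choice of fuzzy relation, not the paper's)
    return math.exp(-sum((x - y) ** 2 for x, y in zip(a, b)))

def additive_weights(k):
    # Additive OWA weights: w_1 >= ... >= w_k, summing to 1
    s = k * (k + 1) / 2
    return [(k - i) / s for i in range(k)]

def frnn_owa_classify(x, X, y, k=3):
    """Assign x to the class maximizing the mean of its OWA-based
    upper and lower approximation memberships."""
    w = additive_weights(k)
    scores = {}
    for c in set(y):
        sims_in = [similarity(x, p) for p, lbl in zip(X, y) if lbl == c]
        dists_out = [1 - similarity(x, p) for p, lbl in zip(X, y) if lbl != c]
        # Upper approximation: soft maximum over the k most similar members
        # of c (k largest similarities = k nearest neighbours in c)
        upper = sum(wi * v for wi, v in zip(w, sorted(sims_in, reverse=True)))
        # Lower approximation: soft minimum over the k nearest non-members,
        # pairing the largest weight with the smallest value
        lower = sum(wi * v for wi, v in zip(w, sorted(dists_out)))
        scores[c] = (upper + lower) / 2
    return max(scores, key=scores.get)
```

In the full algorithm, the candidate neighbour lists would come from an HNSW index query rather than from scanning the whole training set.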
Generating drawdown-realistic financial price paths using path signatures
A novel generative machine learning approach for the simulation of sequences
of financial price data with drawdowns quantifiably close to empirical data is
introduced. Applications such as pricing drawdown insurance options or
developing portfolio drawdown control strategies call for a host of
drawdown-realistic paths. Historical scenarios may be insufficient to
effectively train and backtest the strategy, while standard parametric Monte
Carlo does not adequately preserve drawdowns. We advocate a non-parametric
Monte Carlo approach combining a variational autoencoder generative model with
a drawdown reconstruction loss function. To overcome issues of numerical
complexity and non-differentiability, we approximate drawdown as a linear
function of the moments of the path, known in the literature as path
signatures. We prove the required regularity of the drawdown function and
consistency of the approximation. Furthermore, we obtain close numerical
approximations using linear regression for fractional Brownian and empirical
data. We argue that linear combinations of the moments of a path yield a
mathematically non-trivial smoothing of the drawdown function, which gives one
leeway to simulate drawdown-realistic price paths by including drawdown
evaluation metrics in the learning objective. We conclude with numerical
experiments on mixed equity, bond, real estate and commodity portfolios and
obtain a host of drawdown-realistic paths.
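The quantity being preserved has a simple exact definition: maximum drawdown is the largest relative peak-to-trough decline along the path. A minimal pure-Python sketch (the function name is illustrative) also makes the non-differentiability visible, since the computation is a max of running maxima, which is what motivates the smooth signature-based approximation in the abstract:

```python
def max_drawdown(prices):
    """Largest relative peak-to-trough decline along a price path:
    max over t of (running_max_t - price_t) / running_max_t."""
    peak = prices[0]
    mdd = 0.0
    for p in prices:
        peak = max(peak, p)              # running maximum so far
        mdd = max(mdd, (peak - p) / peak)  # worst decline from that peak
    return mdd
```

For example, the path 100 → 120 → 90 → 110 → 80 peaks at 120 and troughs at 80, giving a maximum drawdown of 40/120 = 1/3.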
X-ray CT Image Reconstruction on Highly-Parallel Architectures.
Model-based image reconstruction (MBIR) methods for X-ray CT use accurate
models of the CT acquisition process, the statistics of the noisy measurements,
and noise-reducing regularization to produce potentially higher quality images
than conventional methods even at reduced X-ray doses. They do this by
minimizing a statistically motivated high-dimensional cost function; the high
computational cost of numerically minimizing this function has prevented MBIR
methods from reaching ubiquity in the clinic. Modern highly-parallel hardware
like graphics processing units (GPUs) may offer the computational resources to
solve these reconstruction problems quickly, but simply "translating" existing
algorithms designed for conventional processors to the GPU may not fully
exploit the hardware's capabilities.
This thesis proposes GPU-specialized image denoising and image reconstruction
algorithms. The proposed image denoising algorithm uses group coordinate
descent with carefully structured groups. The algorithm converges very
rapidly: in one experiment, it denoises a 65 megapixel image in about 1.5
seconds, while the popular Chambolle-Pock primal-dual algorithm running on the
same hardware takes over a minute to reach the same level of accuracy.
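The structural idea of group coordinate descent can be illustrated on a toy 1-D problem. This is not the thesis's algorithm or regularizer: it assumes a quadratic smoothness penalty and even/odd index groups, chosen so that all coordinates within a group decouple once the other group is fixed. That decoupling is exactly the property that makes such "carefully structured groups" parallelizable on a GPU; every name here is illustrative.

```python
def gcd_denoise_1d(y, beta=1.0, iters=100):
    """Group coordinate descent on the toy denoising objective
        0.5 * sum_i (x_i - y_i)^2 + 0.5 * beta * sum_i (x_{i+1} - x_i)^2
    using even/odd index groups. Within one group, each coordinate's
    update is independent of the others, so all of them could run in
    parallel (the GPU-friendly structure)."""
    x = list(y)
    n = len(x)
    for _ in range(iters):
        for parity in (0, 1):  # the two structured groups
            for i in range(parity, n, 2):
                nbrs = [x[j] for j in (i - 1, i + 1) if 0 <= j < n]
                # Exact minimizer of the objective in x_i, others fixed
                x[i] = (y[i] + beta * sum(nbrs)) / (1 + beta * len(nbrs))
    return x
```

Because each group update solves its coordinates exactly, the iteration decreases the objective monotonically and converges quickly on this quadratic problem.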
For X-ray CT reconstruction, this thesis uses duality and group coordinate
ascent to propose an alternative to the popular ordered subsets (OS) method.
Similar to OS, the proposed method can use a subset of the data to update the
image. Unlike OS, the proposed method is convergent. In one helical CT
reconstruction experiment, an implementation of the proposed algorithm using
one GPU converges more quickly than a state-of-the-art algorithm converges
using four GPUs. Using four GPUs, the proposed algorithm reaches near
convergence of a wide-cone axial reconstruction problem with over 220 million
voxels in only 11 minutes.
PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies.
http://deepblue.lib.umich.edu/bitstream/2027.42/113551/1/mcgaffin_1.pd
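For context, the classical ordered-subsets idea that the thesis improves on can be sketched on a toy least-squares cost: each update uses only one subset of the data rows, with the subset gradient scaled up to mimic the full gradient. This sketch shows the baseline OS heuristic (which, as the abstract notes, is not convergent in general), not the thesis's convergent duality-based method; all names are illustrative.

```python
def os_gradient_descent(A, b, n_subsets=2, step=0.1, epochs=200):
    """Ordered-subsets-style updates for 0.5 * sum_i (a_i . x - b_i)^2.
    Each inner update uses one subset of the measurement rows, scaled by
    n_subsets so the subset gradient approximates the full gradient."""
    n, d = len(A), len(A[0])
    x = [0.0] * d
    # Interleaved row subsets, cycled through in order
    subsets = [list(range(s, n, n_subsets)) for s in range(n_subsets)]
    for _ in range(epochs):
        for sub in subsets:
            g = [0.0] * d
            for i in sub:
                r = sum(A[i][j] * x[j] for j in range(d)) - b[i]
                for j in range(d):
                    g[j] += A[i][j] * r  # subset gradient A_s^T (A_s x - b_s)
            for j in range(d):
                x[j] -= step * n_subsets * g[j]  # scaled to mimic full data
    return x
```

Each epoch touches every measurement once but updates the image n_subsets times, which is the source of OS's speedup over full-gradient methods.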
First order algorithms in variational image processing
Variational methods in imaging are nowadays developing towards a quite
universal and flexible tool, allowing for highly successful approaches on tasks
like denoising, deblurring, inpainting, segmentation, super-resolution,
disparity, and optical flow estimation. The overall structure of such
approaches is of the form $D(Ku) + \alpha R(u) \to \min_u$, where the
functional $D$ is a data fidelity term also depending on some input data $f$
and measuring the deviation of $Ku$ from such $f$, and $R$ is a
regularization functional. Moreover, $K$ is an (often linear) forward
operator modeling the dependence of data on an underlying image $u$, and
$\alpha$ is a positive regularization parameter. While $D$ is often smooth
and (strictly) convex, the current practice almost exclusively uses nonsmooth
regularization functionals. The majority of successful techniques uses
nonsmooth and convex functionals like the total variation and
generalizations thereof, or $\ell_1$-norms of coefficients arising from
scalar products with some frame system. The efficient solution of such
variational problems in imaging demands appropriate algorithms. Taking into
account the
specific structure as a sum of two very different terms to be minimized,
splitting algorithms are a quite canonical choice. Consequently this field has
revived the interest in techniques like operator splittings or augmented
Lagrangians. Here we shall provide an overview of methods currently developed
and recent results as well as some computational studies providing a comparison
of different methods and also illustrating their success in applications.
Comment: 60 pages, 33 figures
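A minimal instance of the splitting idea is forward-backward splitting (ISTA) for the model above with $D(Ku) = \frac{1}{2}\|Ax - b\|^2$ and $R(u) = \|x\|_1$: a gradient step on the smooth fidelity term followed by the proximal map of the nonsmooth regularizer, which for the $\ell_1$-norm is soft thresholding. This sketch is one simple representative of the algorithm class the survey covers, not a summary of it; names and parameter choices are illustrative.

```python
def soft_threshold(v, t):
    # Proximal map of t * |.|: shrink each entry toward zero by t
    return [max(abs(vi) - t, 0.0) * (1 if vi > 0 else -1) for vi in v]

def ista(A, b, alpha, step, iters=100):
    """Forward-backward splitting for min_x 0.5*||Ax - b||^2 + alpha*||x||_1.
    Forward: gradient step on the smooth fidelity term.
    Backward: proximal (soft-thresholding) step on the l1 regularizer."""
    m, d = len(A), len(A[0])
    x = [0.0] * d
    for _ in range(iters):
        # gradient of 0.5*||Ax - b||^2 is A^T (Ax - b)
        r = [sum(A[i][j] * x[j] for j in range(d)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(d)]
        x = soft_threshold([x[j] - step * g[j] for j in range(d)], step * alpha)
    return x
```

The split mirrors the structure emphasized in the abstract: the smooth term is handled by an explicit gradient step, the nonsmooth term by its (cheap, closed-form) proximal operator.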