Cost-effective HPC clustering for computer vision applications
We present a cost-effective and flexible realization of high-performance computing (HPC) clustering and its potential for solving computationally intensive problems in computer vision. The software foundation supporting the parallel programming is the GNU parallel Knoppix package, with message passing interface (MPI) based Octave, Python and C interface capabilities. The implementation is of particular interest in applications where the main objective is to reuse the existing hardware infrastructure and to keep within the overall budget. We present benchmark results and compare and contrast the performance of Octave and MATLAB.
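The cluster pattern described (MPI-based master/worker parallelism over Octave, Python and C) can be sketched, very loosely, on a single machine with Python's standard thread pool; the smoothing kernel below is a hypothetical stand-in for a real vision workload, not code from the paper:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a per-row computer-vision kernel: a 3-point
# moving average over one row of pixel intensities.
def smooth_row(row):
    return [sum(row[max(0, i - 1):i + 2]) / len(row[max(0, i - 1):i + 2])
            for i in range(len(row))]

# A toy "image": 4 rows of pixel intensities.
image = [[float((r * 7 + c * 3) % 10) for c in range(8)] for r in range(4)]

# Each row goes to whichever worker is free -- the same master/worker data
# parallelism that MPI's scatter/gather provides across cluster nodes.
with ThreadPoolExecutor(max_workers=2) as pool:
    result = list(pool.map(smooth_row, image))
```

On a real cluster the rows would be scattered to nodes over MPI rather than to threads, but the division of work is the same.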
Optimization of 3-D Wavelet Decomposition on Multiprocessors
In this work we discuss various ideas for the optimization of 3-D wavelet/subband decomposition on shared-memory MIMD computers. We theoretically evaluate the characteristics of these approaches and verify the results on parallel computers. Experiments are conducted on a shared-memory as well as a virtual shared-memory architecture.
Distributed computing methodology for training neural networks in an image-guided diagnostic application
Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and, thus, for training neural networks with various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
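The key property the methodology exploits is that, for a sum-over-examples cost function, both the error and its gradient decompose into sums over disjoint chunks of the training set, so each machine can evaluate its chunk independently and a master only has to add the partial results. A minimal serial sketch of that decomposition (the paper itself uses PVM across heterogeneous hosts; the one-parameter model here is purely illustrative):

```python
# For a sum-of-squared-errors cost, error and gradient both decompose into
# sums over disjoint chunks of the training set, so each (virtual) machine
# can evaluate its chunk independently.

def partial_error_and_gradient(chunk, w):
    """Error and gradient of a toy 1-parameter linear model y = w*x on one chunk."""
    err, grad = 0.0, 0.0
    for x, y in chunk:
        r = w * x - y
        err += r * r
        grad += 2.0 * r * x
    return err, grad

data = [(x, 2.0 * x) for x in range(1, 9)]   # toy training set, true w = 2
chunks = [data[0:4], data[4:8]]              # partitioned across 2 "hosts"

w = 0.0
for _ in range(50):
    # In PVM these partial evaluations run in parallel; the master sums them.
    partials = [partial_error_and_gradient(c, w) for c in chunks]
    grad = sum(g for _, g in partials)
    w -= 0.001 * grad
```

The low synchronization the abstract mentions comes from this structure: workers only exchange scalars (partial error and gradient) once per epoch.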
On the block wavelet transform applied to the boundary element method
This paper follows an earlier work by Bucher et al. [1] on the application of wavelet transforms to the boundary element method, which shows how to reuse models stored in compressed form to solve new models with the same geometry but arbitrary load cases - the so-called virtual assembly technique. The extension presented in this paper involves a new computational procedure created to perform the required two-dimensional wavelet transforms by blocks, theoretically allowing the compression of matrices of arbitrary size. Details of the computer implementation that allows the use of this methodology for very large models or at high compression ratios are given. A numerical application shows a standard PC being used to solve a 131,072 DOF model, previously compressed, for 100 distinct load cases in less than 1 hour – or 33 seconds for each load case.
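The compress-once, reuse-many idea can be illustrated in a few lines. The sketch below substitutes plain entry thresholding for the paper's block wavelet transform, and only applies the compressed operator to many load vectors (the paper additionally solves the resulting systems); names and sizes are illustrative:

```python
# Compress an influence matrix once (here by dropping small entries, as a
# stand-in for wavelet compression), then reuse it for many load cases.

def compress(matrix, threshold):
    """Keep only entries with |a_ij| >= threshold, stored as a sparse dict."""
    return {(i, j): a
            for i, row in enumerate(matrix)
            for j, a in enumerate(row) if abs(a) >= threshold}

def apply_sparse(sparse, x, n):
    y = [0.0] * n
    for (i, j), a in sparse.items():
        y[i] += a * x[j]
    return y

# Toy influence matrix: strong diagonal, rapidly decaying off-diagonal terms.
n = 6
A = [[1.0 if i == j else 0.5 ** abs(i - j) * 0.01 for j in range(n)] for i in range(n)]
sparse_A = compress(A, 0.004)                  # compression step, done once

# 100 load cases in the paper; here, the n unit loads.
load_cases = [[float(k == m) for k in range(n)] for m in range(n)]
solutions = [apply_sparse(sparse_A, b, n) for b in load_cases]
```

The per-load-case cost scales with the number of retained coefficients rather than with the full matrix size, which is what makes the 33-seconds-per-load-case figure possible on a standard PC.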
Spectral Representations of One-Homogeneous Functionals
This paper discusses a generalization of spectral representations related to convex one-homogeneous regularization functionals, e.g. total variation or -norms. Those functionals serve as a substitute for a Hilbert space structure (and the related norm) in classical linear spectral transforms, e.g. Fourier and wavelet analysis. We discuss three meaningful definitions of spectral representations by scale space and variational methods and prove that (nonlinear) eigenfunctions of the regularization functionals are indeed atoms in the spectral representation. Moreover, we verify further useful properties related to orthogonality of the decomposition and the Parseval identity. The spectral transform is motivated by total variation and further developed to higher order variants. Moreover, we show that the approach can recover Fourier analysis as a special case using an appropriate -type functional and discuss a coupled sparsity example.
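For orientation, the objects involved can be sketched as follows; these are the standard definitions from the spectral total-variation literature, not quoted from this paper:

```latex
% Sketch of the standard setting (assumed, not quoted from the paper):
% J is convex and one-homogeneous, i.e. J(cu) = |c|\, J(u).
% A (nonlinear) eigenfunction u with eigenvalue \lambda satisfies
%   \lambda u \in \partial J(u),
% where \partial J denotes the subdifferential. The scale-space definition
% of the spectral representation starts from the gradient flow
%   \partial_t u(t) = -p(t), \qquad p(t) \in \partial J(u(t)), \qquad u(0) = f,
% and reads off the spectral response from the second time derivative,
%   \phi(t) = t\, \partial_{tt} u(t),
% so that eigenfunctions of J appear as single atoms (peaks) of \phi.
```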
Introducing the Filtered Park’s and Filtered Extended Park’s Vector Approach to Detect Broken Rotor Bars in Induction Motors Independently from the Rotor Slots Number
[EN] The Park's Vector Approach (PVA), together with its variations, has been one of the most widespread diagnostic methods for electrical machines and drives. Regarding the broken rotor bars fault diagnosis in induction motors, the common practice is to rely on the width increase of the Park's Vector (PV) ring and then apply some more sophisticated signal processing methods. It is shown in this paper that this method can be unreliable and is strongly dependent on the magnetic poles and rotor slot numbers. To overcome this constraint, the novel Filtered Park's/Extended Park's Vector Approach (FPVA/FEPVA) is introduced. The investigation is carried out with FEM simulations and experimental testing. The results prove to satisfyingly coincide, whereas the proposed advanced FPVA method is desirably reliable. (C) 2017 Elsevier Ltd. All rights reserved.The authors acknowledge the support of the Portuguese Foundation for Science and Technology under Project No. SFRH/BSAB/118741/2016, and also the support of the Spanish 'Ministerio de Economia y Competitividad' (MINECO) and FEDER program in the framework of the 'Proyectos I+D del Subprograma de Generacion de Conocimiento, Programa Estatal de Fomento de la Investigacion Cientifica y Tecnica de Excelencia' (ref: DPI2014-52842-P).Gyftakis, KN.; Marques Cardoso, AJ.; Antonino-Daviu, J. (2017). Introducing the Filtered Park's and Filtered Extended Park's Vector Approach to Detect Broken Rotor Bars in Induction Motors Independently from the Rotor Slots Number. Mechanical Systems and Signal Processing. 93:30-50. https://doi.org/10.1016/j.ymssp.2017.01.046S30509
Fractal Image Compression on MIMD Architectures II: Classification Based Speed-up Methods
Since fractal image compression is computationally very expensive, speed-up techniques are required in addition to parallel processing in order to compress large images in reasonable time. In this paper we discuss parallel fractal image compression algorithms suited for MIMD architectures which employ block classification as a speed-up method.
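The classification speed-up works because the expensive part of fractal encoding is the search for, per range block, a matching domain block; bucketing blocks by a cheap feature restricts each search to one bucket. A minimal sketch with a hypothetical brightness-based classifier (the paper's classification scheme and MIMD distribution are not reproduced here):

```python
# Blocks are bucketed by a cheap feature (here, a brightness class), and each
# range block is compared only against domain blocks in the same bucket,
# shrinking the expensive search in fractal encoding.

def block_class(block, levels=4):
    mean = sum(block) / len(block)
    return min(int(mean * levels / 256), levels - 1)

def best_match(range_block, domain_buckets):
    """Exhaustive search restricted to the range block's own class."""
    candidates = domain_buckets.get(block_class(range_block), [])
    return min(candidates,
               key=lambda d: sum((a - b) ** 2 for a, b in zip(range_block, d)),
               default=None)

# Toy 4-pixel blocks with intensities in [0, 255].
domains = [[10, 12, 11, 9], [200, 210, 205, 199], [100, 98, 101, 99]]
buckets = {}
for d in domains:
    buckets.setdefault(block_class(d), []).append(d)

match = best_match([198, 202, 207, 201], buckets)
```

In the parallel setting each MIMD node can hold its own buckets, so classification composes naturally with the data distribution.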
Sampling Sparse Signals on the Sphere: Algorithms and Applications
We propose a sampling scheme that can perfectly reconstruct a collection of
spikes on the sphere from samples of their lowpass-filtered observations.
Central to our algorithm is a generalization of the annihilating filter method,
a tool widely used in array signal processing and finite-rate-of-innovation
(FRI) sampling. The proposed algorithm can reconstruct spikes from
spatial samples. This sampling requirement improves over
previously known FRI sampling schemes on the sphere by a factor of four for
large . We showcase the versatility of the proposed algorithm by applying it
to three different problems: 1) sampling diffusion processes induced by
localized sources on the sphere, 2) shot noise removal, and 3) sound source
localization (SSL) by a spherical microphone array. In particular, we show how
SSL can be reformulated as a spherical sparse sampling problem.Comment: 14 pages, 8 figures, submitted to IEEE Transactions on Signal
Processin
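The annihilating filter at the heart of the method is easiest to see in one dimension: the moments of a stream of decaying exponentials are annihilated by a filter whose roots are exactly the exponentials' bases. A toy demonstration for two components (the paper generalizes this tool to the sphere; the numbers below are illustrative):

```python
import cmath

# Toy 1-D annihilating-filter demo: moments s[n] = u1**n + u2**n of two
# decaying exponentials are annihilated by the filter (1, h1, h2) whose
# roots are exactly u1 and u2.

u1, u2 = 0.5, 0.25
s = [u1 ** n + u2 ** n for n in range(4)]

# Solve the 2x2 annihilation system:
#   s[2] + h1*s[1] + h2*s[0] = 0
#   s[3] + h1*s[2] + h2*s[1] = 0
det = s[1] * s[1] - s[0] * s[2]
h1 = (s[0] * s[3] - s[1] * s[2]) / det
h2 = (s[2] * s[2] - s[1] * s[3]) / det

# The roots of z**2 + h1*z + h2 recover the exponentials' bases.
disc = cmath.sqrt(h1 * h1 - 4 * h2)
roots = sorted([((-h1 + disc) / 2).real, ((-h1 - disc) / 2).real])
```

On the sphere the same annihilation idea is applied to suitably transformed spherical harmonic coefficients, which is where the paper's generalization comes in.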
Advances in machine learning algorithms for financial risk management
In this thesis, three novel machine learning techniques are introduced to address distinct yet interrelated challenges involved in financial risk management tasks. These approaches collectively offer a comprehensive strategy, beginning with the precise classification of credit risks, advancing through the nuanced forecasting of financial asset volatility, and ending with the strategic optimisation of financial asset portfolios.
Firstly, a Hybrid Dual-Resampling and Cost-Sensitive technique is proposed to combat the prevalent issue of class imbalance in financial datasets, particularly in credit risk assessment. The key process involves the creation of heuristically balanced datasets to address the problem effectively. It uses a resampling technique based on Gaussian mixture modelling to generate a synthetic minority class from the minority class data, and concurrently uses k-means clustering on the majority class. Feature selection is then performed using the Extra Tree Ensemble technique. Subsequently, a cost-sensitive logistic regression model is applied to predict the probability of default using the heuristically balanced datasets. The results underscore the effectiveness of the proposed technique, with superior performance observed in comparison to other imbalanced preprocessing approaches. This advancement in credit risk classification lays a solid foundation for understanding individual financial behaviours, a crucial first step in the broader context of financial risk management.
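The cost-sensitive component can be sketched independently of the resampling machinery: per-class weights inversely proportional to class frequency are folded into the logistic regression gradient, so minority errors cost more. The toy one-feature model and inverse-frequency weighting below are illustrative assumptions, not the thesis's pipeline (which additionally uses GMM resampling, k-means undersampling and Extra Tree feature selection):

```python
import math, random

# Toy imbalanced dataset: 20 majority points (label 0) around 0,
# 4 minority points (label 1) around 3.
random.seed(0)
X = [random.gauss(0.0, 1.0) for _ in range(20)] + [random.gauss(3.0, 1.0) for _ in range(4)]
y = [0] * 20 + [1] * 4

# Cost-sensitive weights: inverse class frequency (a common heuristic).
n, n1 = len(y), sum(y)
w_class = {0: n / (2 * (n - n1)), 1: n / (2 * n1)}

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Weighted logistic regression via plain gradient descent.
a, b = 0.0, 0.0            # slope, intercept
lr = 0.1
for _ in range(500):
    ga = gb = 0.0
    for xi, yi in zip(X, y):
        p = sigmoid(a * xi + b)
        w = w_class[yi]    # minority errors are up-weighted here
        ga += w * (p - yi) * xi
        gb += w * (p - yi)
    a -= lr * ga / n
    b -= lr * gb / n
```

With the weights in place, the learned boundary sits between the two class means instead of being dragged toward the majority class.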
Building on this foundation, the thesis then explores the forecasting of financial asset volatility, a critical aspect of understanding market dynamics. A novel model that combines a Triple Discriminator Generative Adversarial Network with a continuous wavelet transform is proposed. The model is able to decompose volatility time series into signal-like and noise-like frequency components, allowing the separate detection and monitoring of non-stationary volatility data. The network comprises a wavelet transform component consisting of continuous wavelet transform and inverse wavelet transform components, an auto-encoder component made up of encoder and decoder networks, and a Generative Adversarial Network consisting of triple Discriminator and Generator networks. The proposed Generative Adversarial Network employs, as part of its framework, an ensemble of unsupervised loss derived from the Generative Adversarial Network component during training, supervised loss, and reconstruction loss. Data from nine financial assets are employed to demonstrate the effectiveness of the proposed model. This approach not only enhances our understanding of market fluctuations but also bridges the gap between individual credit risk assessment and macro-level market analysis.
Finally, the thesis ends with a novel technique for portfolio optimisation. This involves a model-free reinforcement learning strategy that takes historical Low, High, and Close prices of assets as input and produces asset weights as output. A deep Capsule Network is employed to simulate the investment strategy, which involves reallocating the different assets to maximise the expected return on investment based on deep reinforcement learning. To provide more learning stability in an online training process, a Markov Differential Sharpe Ratio reward function is proposed as the reinforcement learning objective function. Additionally, a Multi-Memory Weight Reservoir is introduced to facilitate the learning process and the optimisation of computed asset weights, helping to sequentially re-balance the portfolio throughout a specified trading period. The use of insights gained from volatility forecasting in this strategy shows the interconnected nature of the financial markets. Comparative experiments with other models demonstrate that the proposed technique is capable of achieving superior results based on risk-adjusted reward performance measures.
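Reward functions of this kind build on the classical differential Sharpe ratio of Moody and Saffell, which turns the Sharpe ratio into a per-step reward by tracking exponential moving estimates of the first and second moments of returns; the thesis's Markov variant may differ in detail, so the sketch below is the classical quantity only:

```python
# Classical differential Sharpe ratio (Moody & Saffell): A and B are
# exponential moving estimates of the first and second moments of the
# return R, and D is the per-step reward.

def differential_sharpe(returns, eta=0.05):
    A, B = 0.0, 0.0
    rewards = []
    for R in returns:
        dA, dB = eta * (R - A), eta * (R * R - B)
        var = B - A * A              # running variance estimate
        D = 0.0
        if var > 1e-12:              # undefined until the moments initialise
            D = (B * dA - 0.5 * A * dB) / var ** 1.5
        rewards.append(D)
        A, B = A + dA, B + dB
    return rewards

# A steady stream of small positive returns yields positive per-step rewards
# once the moving moments are initialised.
rewards = differential_sharpe([0.01, 0.012, 0.008, 0.011, 0.009, 0.010])
```

Because each reward depends only on the latest return and the running moments, it suits exactly the online training setting the thesis describes.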
In a nutshell, this thesis not only addresses individual challenges in financial risk management but also incorporates them into a comprehensive framework: from enhancing the accuracy of credit risk classification, through improving the understanding of market volatility, to optimising investment strategies. These methodologies collectively show the potential of machine learning to improve financial risk management.
Wavelet Regularization of a Fourier-Galerkin Method for Solving the 2D Incompressible Euler Equations
We employ a Fourier-Galerkin method to solve the 2D incompressible Euler equations and study several ways to regularize the solution by wavelet filtering at each timestep. Real-valued orthogonal wavelets and complex-valued wavelets are considered, combined with either linear or non-linear filtering. The results are compared with those obtained via classical viscous and hyperviscous regularization methods. Wavelet regularization using complex-valued wavelets performs as well in terms of L2 convergence rate to the reference solution. The compression rate for homogeneous 2D turbulence is around 3 for this method, suggesting that memory and CPU time could be reduced in an adaptive wavelet computation. Our results also suggest L2 convergence to the reference solution without any regularization, in contrast to what is obtained for the 1D Burgers equation.
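The distinction between linear and non-linear wavelet filtering can be shown on a toy 1-D signal with a one-level Haar transform (the paper filters 2-D vorticity fields with real and complex wavelets at every timestep; this sketch only illustrates the two filtering rules):

```python
# One-level Haar transform: linear filtering damps every detail coefficient
# by a fixed factor; non-linear filtering thresholds, keeping only the
# large detail coefficients (e.g. a sharp feature).

def haar_forward(x):
    s2 = 2 ** 0.5
    approx = [(a + b) / s2 for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / s2 for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    s2 = 2 ** 0.5
    x = []
    for a, d in zip(approx, detail):
        x += [(a + d) / s2, (a - d) / s2]
    return x

signal = [1.0, 1.1, 0.9, 1.0, 5.0, 1.0, 1.05, 0.95]

a, d = haar_forward(signal)
# Linear filtering: uniform damping of detail coefficients.
linear = haar_inverse(a, [0.5 * c for c in d])
# Non-linear filtering: magnitude thresholding of detail coefficients.
nonlinear = haar_inverse(a, [c if abs(c) > 0.5 else 0.0 for c in d])
```

The non-linear rule preserves the large jump exactly while discarding the small fluctuations, which is why thresholding retains sharp flow features that uniform damping smears.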