VI Workshop on Computational Data Analysis and Numerical Methods: Book of Abstracts
The VI Workshop on Computational Data Analysis and Numerical Methods (WCDANM) will be held on June 27-29, 2019, in the Department of Mathematics of the University of Beira Interior (UBI), Covilhã, Portugal. It is a unique opportunity to disseminate scientific research related to Mathematics in general, with particular relevance to Computational Data Analysis and Numerical Methods in both theoretical and practical fields, using new techniques and giving special emphasis to applications in Medicine, Biology, Biotechnology, Engineering, Industry, Environmental Sciences, Finance, Insurance, Management and Administration. The meeting will provide a forum for discussion and debate of ideas of interest to the scientific community in general. New scientific collaborations among colleagues are expected from this meeting, namely collaborations in Masters and PhD projects. The event is open to the entire scientific community (with or without communication/poster)
Enhanced independent vector analysis for audio separation in a room environment
Independent vector analysis (IVA) is studied as a frequency domain blind source separation method, which can theoretically avoid the permutation problem by retaining the dependency between different frequency bins of the same source vector while removing the dependency between different source vectors. This thesis focuses upon improving the performance of independent vector analysis when it is used to solve the audio separation problem in a room environment.
A specific stability problem of IVA, i.e. the block permutation problem, is identified and analyzed. Then a robust IVA method is proposed to solve this problem by exploiting the phase continuity of the unmixing matrix. Moreover, an auxiliary function based IVA algorithm with an overlapped chain type source prior is proposed as well to mitigate this problem.
An informed IVA scheme is then proposed which exploits geometric information about the sources, obtained from video, to provide an intelligent initialization for optimal convergence. The proposed informed IVA algorithm also achieves faster convergence in terms of iteration numbers and better separation performance. A pitch-based evaluation method is defined to judge the separation performance objectively when the information describing the mixing matrix and sources is missing.
In order to improve the separation performance of IVA, an appropriate multivariate source prior is needed to better preserve the dependency structure within the source vectors. A particular multivariate generalized Gaussian distribution is adopted as the source prior. The nonlinear score function derived from this proposed source prior contains the fourth order relationships between different frequency bins, which provides a more informative and stronger dependency structure compared with the original IVA algorithm and thereby improves the separation performance.
Copula theory is a central tool for modelling nonlinear dependency structures. The t copula is proposed to describe the dependency structure within frequency-domain speech signals due to its tail dependency property: if one variable takes an extreme value, the other variables are expected to take extreme values as well. A multivariate Student's t distribution, constructed from a t copula with univariate Student's t marginal distributions, is proposed as the source prior, and the IVA algorithm with this source prior is derived.
The proposed algorithms are tested with real speech signals in different reverberant room environments, using both modelled room impulse responses and real room recordings. State-of-the-art criteria are used to evaluate the separation performance, and the experimental results confirm the advantage of the proposed algorithms
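To make the bin-coupling concrete, the score nonlinearity of the standard spherical Laplacian source prior used in baseline IVA can be sketched as below. This is only an illustration of the baseline; the fourth-order and copula-based priors proposed in the thesis replace this nonlinearity.

```python
import numpy as np

def iva_score(Y):
    """Score function of the spherical Laplacian source prior used in
    baseline IVA (the thesis's fourth-order and copula priors replace
    this nonlinearity).  Y: complex array of shape
    (n_freq_bins, n_frames) holding one separated source.
    Dividing each bin by the norm taken ACROSS bins is what ties the
    frequency bins of one source together and avoids the permutation
    problem."""
    eps = 1e-12  # guard against division by zero in silent frames
    norm = np.sqrt((np.abs(Y) ** 2).sum(axis=0, keepdims=True))
    return Y / (norm + eps)
```

Each frame's score vector has (near) unit norm across frequency, so every bin's update depends on the energy of all other bins of the same source.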
Risk measure changes and portfolio optimization theory
Essays on Quantitative Risk Management
The costly lessons from the global crisis of the past decade reinforce the importance, as well as the challenges, of risk management. This thesis explores several core concepts of quantitative risk management and provides further insight.
We start with rating migration risk and propose a Mixture of Markov Chains (MMC) model to account for stochastic business cycle effects in credit rating migration risk. The model shows superior in-sample estimation and out-of-sample prediction relative to its rivals. Compared with the naive approach, the economic application suggests that banks using the MMC estimator will increase capital requirements in economic expansions and free up capital during recessions, aligning with the Basel III macroprudential initiative of reducing the recession-vs-expansion gap in capital buffers.
Subsequently we move to the key concept of dependence by investigating the importance of dynamic linkages between credit and equity markets. We propose a flexible regime-switching copula model to explore the dynamics of dependence and possible structural breaks, with special consideration of tail dependence. The study reveals a high-dependence regime that coincides with the recent financial crisis. The backtesting results confirm the new model's superiority in out-of-sample VaR forecasting over purely dynamic or static copulas, emphasising the relevance of appropriately modelling complex dependence structures for risk management.
Finally we discuss risk measures and how they affect portfolio optimisation. We contend that more successful portfolio management can be achieved by combining extreme value analysis, to describe downside tail risk, with dynamic copulas, to model nonlinear dependence structures. Conditional Value-at-Risk is adopted as a pertinent measure of downside tail risk for portfolio optimisation. Using both realised portfolio returns and a set of out-of-sample Monte Carlo experiments, our novel portfolio strategy is compared with the de facto mean-variance (MV) approach. The results suggest that the MV approach produces suboptimal portfolios with a less desirable risk-return tradeoff
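As a point of reference for the risk measure adopted above, the empirical estimator of Conditional Value-at-Risk, the average loss in the worst 1−α tail, can be sketched as follows. This is a generic textbook estimator, not the thesis's full extreme-value-plus-dynamic-copula pipeline.

```python
import numpy as np

def sample_cvar(losses, alpha=0.95):
    """Empirical Conditional Value-at-Risk: the mean of the worst
    (1 - alpha) fraction of losses.  Generic estimator shown for
    illustration; the thesis combines CVaR with extreme value
    analysis and dynamic copulas."""
    losses = np.sort(np.asarray(losses, dtype=float))
    cut = int(np.floor(alpha * len(losses)))  # index of the empirical VaR quantile
    return losses[cut:].mean()                # average loss beyond VaR
```

For example, on losses 1..100 at α = 0.95, the estimator averages the five worst losses (96..100) and returns 98.0.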
Merging Data Sources to Predict Remaining Useful Life – An Automated Method to Identify Prognostic Parameters
The ultimate goal of most prognostic systems is accurate prediction of the remaining useful life (RUL) of individual systems or components based on their use and performance. This class of prognostic algorithms is termed Degradation-Based, or Type III Prognostics. As equipment degrades, measured parameters of the system tend to change; these sensed measurements, or appropriate transformations thereof, may be used to characterize degradation. Traditionally, individual-based prognostic methods use a measure of degradation to make RUL estimates. Degradation measures may include sensed measurements, such as temperature or vibration level, or inferred measurements, such as model residuals or physics-based model predictions. Often, it is beneficial to combine several measures of degradation into a single parameter. Selection of an appropriate parameter is key for making useful individual-based RUL estimates, but methods to aid in this selection are absent in the literature. This dissertation introduces a set of metrics which characterize the suitability of a prognostic parameter. Parameter features such as trendability, monotonicity, and prognosability can be used to compare candidate prognostic parameters to determine which is most useful for individual-based prognosis. Trendability indicates the degree to which the parameters of a population of systems have the same underlying shape. Monotonicity characterizes the underlying positive or negative trend of the parameter. Finally, prognosability gives a measure of the variance in the critical failure value of a population of systems. By quantifying these features for a given parameter, the metrics can be used with any traditional optimization technique, such as Genetic Algorithms, to identify the optimal parameter for a given system. An appropriate parameter may be used with a General Path Model (GPM) approach to make RUL estimates for specific systems or components. 
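One common formulation of the monotonicity metric described above can be sketched as follows; the dissertation's exact metric definitions may differ, so this is illustrative only.

```python
import numpy as np

def monotonicity(paths):
    """Monotonicity of a candidate prognostic parameter: for each
    unit's degradation path, score the imbalance between positive and
    negative first differences, then average over the population.
    1 = strictly monotonic, 0 = trendless.  One common formulation,
    shown for illustration; the dissertation's exact definition may
    differ."""
    scores = []
    for p in paths:
        d = np.diff(np.asarray(p, dtype=float))  # step-to-step changes
        scores.append(abs((d > 0).sum() - (d < 0).sum()) / len(d))
    return float(np.mean(scores))
```

A strictly increasing path scores 1.0, while a path that alternates up and down scores 0.0; a Genetic Algorithm or other optimizer can then maximise such a score over candidate parameter combinations.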
A dynamic Bayesian updating methodology is introduced to incorporate prior information in the GPM methodology. The proposed methods are illustrated with two applications: first, to the simulated turbofan engine data provided in the 2008 Prognostics and Health Management Conference Prognostics Challenge and, second, to data collected in a laboratory milling equipment wear experiment. The automated system was shown to identify appropriate parameters in both situations and facilitate Type III prognostic model development
Graduate School of Engineering and Management Catalog 2018-2019
The Graduate Catalog represents the offerings, programs, and requirements in effect at the time of publication
On semiparametric regression and data mining
Semiparametric regression is playing an increasingly large role in the analysis of datasets
exhibiting various complications (Ruppert, Wand & Carroll, 2003). In particular semiparametric
regression a plays prominent role in the area of data mining where such
complications are numerous (Hastie, Tibshirani & Friedman, 2001). In this thesis we
develop fast, interpretable methods addressing many of the difficulties associated with
data mining applications including: model selection, missing value analysis, outliers and
heteroscedastic noise.
We focus on function estimation using penalised splines via mixed model methodology (Wahba, 1990; Speed, 1991; Ruppert et al., 2003). In dealing with the difficulties associated with data mining applications, many of the models we consider deviate from typical normality assumptions. These models lead to likelihoods involving analytically intractable integrals. Thus, in keeping with the aim of speed, we seek analytic approximations to such integrals, which are typically faster than numeric alternatives.
These analytic approximations include not only the popular penalised quasi-likelihood (PQL) approximations (Breslow & Clayton, 1993) but also variational approximations. Originating in physics, variational approximations are a relatively new class of approximations to statistics which are simple, fast, flexible and effective. They have recently been applied to statistical problems in machine learning, where they are rapidly gaining popularity (Jordan, Ghahramani, Jaakkola & Saul, 1999; Corduneanu & Bishop, 2001; Ueda & Ghahramani, 2002; Bishop & Winn, 2003; Winn & Bishop, 2005).
We develop variational approximations to generalized linear mixed models (GLMMs); Bayesian GLMMs; simple missing values models; and outlier and heteroscedastic noise models, which are, to the best of our knowledge, new. These methods are quite effective and extremely fast, with fitting taking minutes if not seconds on a typical 2008 computer.
We also make a contribution to variational methods themselves. Variational approximations often underestimate the variance of posterior densities in Bayesian models (Humphreys & Titterington, 2000; Consonni & Marin, 2004; Wang & Titterington, 2005). We develop grid-based variational posterior approximations. These approximations combine a sequence of variational posterior approximations, can be extremely accurate and are reasonably fast
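The variance-underestimation phenomenon cited above has a simple closed-form illustration for a correlated bivariate Gaussian target: each mean-field factor has variance 1/Λᵢᵢ, the reciprocal of the precision diagonal, which is at most the true marginal variance. The following is a textbook sketch, not taken from the thesis.

```python
import numpy as np

# Mean-field variational approximation to a correlated bivariate
# Gaussian target (textbook illustration, not from the thesis).
# For a Gaussian target with precision matrix Lambda, each mean-field
# factor q_i is Gaussian with variance 1 / Lambda_ii, which
# understates the true marginal variance Sigma_ii whenever the
# variables are correlated.
rho = 0.9
Sigma = np.array([[1.0, rho], [rho, 1.0]])  # true covariance
Lambda = np.linalg.inv(Sigma)               # precision matrix

q_var = 1.0 / np.diag(Lambda)   # mean-field marginal variances, = 1 - rho**2
true_var = np.diag(Sigma)       # true marginal variances, = 1
```

With ρ = 0.9 the mean-field variance is 1 − ρ² = 0.19, barely a fifth of the true marginal variance 1.0, which is exactly the deficiency the grid-based approximations above are designed to correct.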
DEVELOPMENT AND TESTING OF UNIVERSAL PRESSURE DROP MODELS IN PIPELINES USING ABDUCTIVE AND ARTIFICIAL NEURAL NETWORKS
Determination of pressure drop in pipeline systems is difficult. Conventional methods (empirical correlations and mechanistic methods) have not been successful in providing accurate estimates. Artificial Neural Networks and polynomial Group Method of Data Handling techniques have received wide recognition for discovering hidden and highly nonlinear relationships between input and output patterns. The potential of both Artificial Neural Networks (ANN) and Abductory Induction Mechanism (AIM) techniques is revealed in this study by generating generic models for pressure drop estimation in pipeline systems that carry multiphase fluids (oil, gas, and water) over a wide range of angles of inclination. No past study was found that utilizes both techniques in an attempt to solve this problem. A total of 335 data sets collected from different Middle Eastern fields have been used in developing the models. The data cover a wide range of variables, such as oil rate (2200 to 25000 bbl/d), water rate (up to 8424 bbl/d), angle of inclination (-52 to 208 degrees), pipe length (500 to 26700 ft) and gas rate (1078 to 19658 MSCFD).
For the ANN model, a ratio of 2:1:1 between training, validation, and testing sets yielded the best training/testing performance. The ANN model was developed using the resilient back-propagation learning algorithm. The purpose of generating another model using the polynomial Group Method of Data Handling technique was to reduce the problem of dimensionality that affects the accuracy of ANN modeling. The Group Method of Data Handling algorithm found that pipe length, wellhead pressure, and angle of inclination have a pronounced effect on pressure drop estimation under these conditions. The best available empirical correlations and mechanistic models adopted by the industry were tested against the data and the developed models.
Graphical and statistical tools were utilized to compare the performance of the new models with that of other empirical correlations and mechanistic models. Thorough verification has indicated that the developed Artificial Neural Networks model outperforms all tested empirical correlations and mechanistic models, as well as the polynomial Group Method of Data Handling model, in terms of highest correlation coefficient, lowest average absolute percent error, lowest standard deviation, lowest maximum error, and lowest root mean square error.
The study offers a reliable and quick means of pressure drop estimation in pipelines carrying multiphase fluids over a wide range of angles of inclination using Artificial Neural Networks and Group Method of Data Handling techniques. A Graphical User Interface (GUI) has been generated to help apply the ANN model results, while an applicable equation can be used for the Group Method of Data Handling model. While the conventional methods were not successful in providing accurate estimates of this property, the second approach (the Group Method of Data Handling technique) was able to provide a reliable estimate with only three input parameters involved, and the modeling accuracy was not greatly harmed by this reduction
Analysis and design of a capsule landing system and surface vehicle control system for Mars exploration
Problems related to an unmanned exploration of the planet Mars by means of an autonomous roving planetary vehicle are investigated. These problems include: design, construction and evaluation of the vehicle itself and its control and operating systems. More specifically, vehicle configuration, dynamics, control, propulsion, hazard detection systems, terrain sensing and modelling, obstacle detection concepts, path selection, decision-making systems, and chemical analyses of samples are studied. Emphasis is placed on development of a vehicle capable of gathering specimens and data for an Augmented Viking Mission or to provide the basis for a Sample Return Mission