Bi-Objective Nonnegative Matrix Factorization: Linear Versus Kernel-Based Models
Nonnegative matrix factorization (NMF) is a powerful class of feature
extraction techniques that has been successfully applied in many fields, notably
signal and image processing. Current NMF techniques have been limited to a
single-objective problem in either its linear or nonlinear kernel-based
formulation. In this paper, we propose to revisit the NMF as a multi-objective
problem, in particular a bi-objective one, where the objective functions
defined in both input and feature spaces are taken into account. By taking
advantage of the weighted-sum method from the multi-objective optimization
literature, the proposed bi-objective NMF determines a set of nondominated,
Pareto optimal, solutions instead of a single optimal decomposition. Moreover,
the corresponding Pareto front is studied and approximated. Experimental
results on unmixing real hyperspectral images confirm the efficiency of the
proposed bi-objective NMF compared with state-of-the-art methods.
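The weighted-sum idea behind the bi-objective formulation can be illustrated in a few lines. The sketch below uses random objective values as a toy stand-in for candidate decompositions, not the paper's actual NMF updates: sweeping the weight between the two objectives and keeping each scalarized minimizer yields only nondominated (Pareto-optimal) points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: each "candidate decomposition" has two objective values
# (f1: input-space fit, f2: feature-space fit). Hypothetical data, not
# an actual NMF run.
f = rng.random((200, 2))

def weighted_sum_front(f, n_weights=21):
    """Select, for each weight alpha, the candidate minimizing
    alpha*f1 + (1-alpha)*f2; the selected set lies on the convex
    part of the Pareto front."""
    selected = set()
    for alpha in np.linspace(0.0, 1.0, n_weights):
        scores = alpha * f[:, 0] + (1.0 - alpha) * f[:, 1]
        selected.add(int(np.argmin(scores)))
    return sorted(selected)

def is_nondominated(f, i):
    """True if no other candidate is at least as good in both
    objectives and strictly better in at least one."""
    others = np.delete(f, i, axis=0)
    dominates = np.all(others <= f[i], axis=1) & np.any(others < f[i], axis=1)
    return not np.any(dominates)

front = weighted_sum_front(f)
assert all(is_nondominated(f, i) for i in front)
```

Note that the weighted-sum scalarization recovers only the convex part of the Pareto front, which is why the paper additionally studies and approximates the front itself.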
A Novel Hybrid Dimensionality Reduction Method using Support Vector Machines and Independent Component Analysis
Due to the increasing demand for high-dimensional data analysis in applications such as electrocardiogram signal analysis and gene expression analysis for cancer detection, dimensionality reduction has become a viable process for extracting essential information from data, so that high-dimensional data can be represented in a more condensed form with much lower dimensionality, both improving classification accuracy and reducing computational complexity. Conventional dimensionality reduction methods can be categorized into stand-alone and hybrid approaches. A stand-alone method utilizes a single criterion, from either a supervised or an unsupervised perspective; a hybrid method integrates both criteria. Compared with stand-alone dimensionality reduction methods, the hybrid approach is promising because it simultaneously exploits the supervised criterion for better classification accuracy and the unsupervised criterion for better data representation. However, several issues challenge the efficiency of the hybrid approach: (1) the difficulty of finding a subspace that seamlessly integrates both criteria in a single hybrid framework, (2) the robustness of performance on noisy data, and (3) nonlinear data representation capability.
This dissertation presents a new hybrid dimensionality reduction method that seeks a projection by optimizing both structural risk (the supervised criterion, from the Support Vector Machine, SVM) and data independence (the unsupervised criterion, from Independent Component Analysis, ICA). The projection from the SVM contributes directly to classification performance from the supervised perspective, whereas maximizing independence among features with ICA constructs a projection that improves classification accuracy indirectly, through a better intrinsic data representation. For the linear dimensionality reduction model, I introduce orthogonality to interrelate the projections from SVM and ICA, while a redundancy removal process eliminates part of the projection vectors from the SVM, leading to more effective dimensionality reduction. The orthogonality-based linear hybrid method is then extended to an uncorrelatedness-based algorithm with nonlinear data representation capability, in which SVM and ICA are integrated into a single framework through an uncorrelated subspace based on a kernel implementation.
Experimental results show that the proposed approaches achieve higher classification performance, with better robustness, at relatively lower dimensions than conventional methods on high-dimensional datasets.
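The orthogonality idea can be sketched generically: project unsupervised (ICA-style) directions onto the orthogonal complement of a supervised (SVM-style) direction via Gram-Schmidt, so the combined projection contains no redundant axes. The vectors below are random stand-ins for illustration, not outputs of the dissertation's actual SVM/ICA training.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10  # input dimensionality

# Hypothetical supervised direction (e.g., an SVM hyperplane normal)
# and unsupervised directions (e.g., ICA unmixing vectors).
w_svm = rng.standard_normal(d)
W_ica = rng.standard_normal((3, d))

def orthogonalize(w_sup, W_unsup):
    """Gram-Schmidt: make each unsupervised direction orthogonal to the
    supervised one (and to previously accepted directions), so the
    stacked projection has no redundant components."""
    basis = [w_sup / np.linalg.norm(w_sup)]
    for v in W_unsup:
        for b in basis:
            v = v - (v @ b) * b          # remove component along b
        norm = np.linalg.norm(v)
        if norm > 1e-10:                 # drop near-redundant vectors
            basis.append(v / norm)
    return np.vstack(basis)

P = orthogonalize(w_svm, W_ica)
# All projection axes are mutually orthonormal.
assert np.allclose(P @ P.T, np.eye(len(P)), atol=1e-8)
```

The near-zero-norm check mirrors, in spirit, the redundancy removal step: directions already explained by earlier projection vectors are discarded rather than kept.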
Efficient Covariance Matrix Update for Variable Metric Evolution Strategies
Randomized direct search algorithms for continuous domains, such as Evolution Strategies, are basic tools in machine learning. They are especially needed when the gradient of an objective function (e.g., a loss, energy, or reward function) cannot be computed or estimated efficiently. Application areas include supervised and reinforcement learning as well as model selection. These randomized search strategies often rely on normally distributed additive variations of candidate solutions. In order to search efficiently in non-separable and ill-conditioned landscapes, the covariance matrix of the normal distribution must be adapted, amounting to a variable metric method. Consequently, Covariance Matrix Adaptation (CMA) is considered state-of-the-art in Evolution Strategies. In order to sample from the normal distribution, the adapted covariance matrix needs to be decomposed, requiring in general O(n^3) operations, where n is the search space dimension. We propose a new update mechanism which can replace a rank-one covariance matrix update and the computationally expensive decomposition of the covariance matrix. The newly developed update rule reduces the computational complexity of the rank-one covariance matrix adaptation to O(n^2) without resorting to outdated distributions. We derive new versions of the elitist Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and the multi-objective CMA-ES. These algorithms are equivalent to the original procedures except that the update step for the variable metric distribution scales better in the problem dimension. We also introduce a simplified variant of the non-elitist CMA-ES with the incremental covariance matrix update and investigate its performance. Apart from the reduced time complexity of the distribution update, the algebraic computations involved in all new algorithms are simpler than in the original versions.
The new update rule improves the performance of the CMA-ES for large-scale machine learning problems in which the objective function can be evaluated fast.
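A rank-one update of a factor A with C = A A^T can be written in closed form, avoiding any re-decomposition of C. The sketch below shows a generic rank-one factor update of this kind (not necessarily the paper's exact rule) and verifies the identity numerically; the full algorithm would also maintain A^{-1} incrementally so that every step stays O(n^2), whereas a plain solve is used here for brevity.

```python
import numpy as np

def rank_one_factor_update(A, v, alpha, beta):
    """Given C = A A^T, return A' such that
    A' A'^T = alpha * C + beta * v v^T,
    without re-decomposing C. With w = A^{-1} v obtained by a cheap
    solve, every operation here is O(n^2)."""
    w = np.linalg.solve(A, v)           # w = A^{-1} v
    nw2 = w @ w
    coeff = (np.sqrt(alpha) / nw2) * (np.sqrt(1.0 + (beta / alpha) * nw2) - 1.0)
    return np.sqrt(alpha) * A + coeff * np.outer(A @ w, w)

# Quick numerical check on a random SPD matrix (illustration only).
rng = np.random.default_rng(2)
n = 6
M = rng.standard_normal((n, n))
C = M @ M.T + n * np.eye(n)             # symmetric positive definite
A = np.linalg.cholesky(C)
v = rng.standard_normal(n)
alpha, beta = 0.9, 0.1

A_new = rank_one_factor_update(A, v, alpha, beta)
C_new = alpha * C + beta * np.outer(v, v)
assert np.allclose(A_new @ A_new.T, C_new, atol=1e-8)
```

Note that A' is in general no longer triangular; this is fine, since sampling only requires some factor A with C = A A^T, not a Cholesky factor specifically.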
International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book
The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions.
This book comprises the full conference program. It contains the scientific program, both in survey style and in full detail, as well as information on the social program, the venue, special meetings, and more.
Graph-based ship traffic partitioning for intelligent maritime surveillance in complex port waters
Maritime Situational Awareness (MSA) is a critical component of intelligent maritime traffic surveillance. However, it becomes increasingly challenging to gain MSA accurately given the growing complexity of ship traffic patterns due to multi-ship interactions, possibly involving both classical manned ships and emerging autonomous ships. This study proposes a new traffic partitioning methodology to realise the optimal maritime traffic partition in complex waters. The methodology combines conflict criticality and spatial distance to generate conflict-connected and spatially compact traffic clusters, thereby improving the interpretability of traffic patterns and supporting ship anti-collision risk management. First, a composite similarity measure is designed using a probabilistic conflict detection approach and a newly formulated maritime traffic route network learned through maritime knowledge mining. Then, an extended graph-based clustering framework is used to produce balanced traffic clusters with high intra-connections but low inter-connections. The proposed methodology is thoroughly demonstrated and tested using Automatic Identification System (AIS) trajectory data from the Ningbo-Zhoushan Port. The experimental results show that the proposed methodology 1) has effective performance in decomposing the traffic complexity, 2) can assist in identifying high-risk/density traffic clusters, and 3) is sufficiently generic to handle various traffic scenarios in complex geographical waters. Therefore, this study makes significant contributions to intelligent maritime surveillance and provides a theoretical foundation for promoting maritime anti-collision risk management for the future mixed traffic of both manned and autonomous ships.
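The clustering step can be illustrated with a generic graph-partitioning sketch: spectral bisection on a toy similarity graph with hypothetical weights (standing in for the composite conflict/spatial similarity), not the paper's extended framework or real AIS data. Cutting along the sign of the Fiedler vector separates two weakly linked traffic groups into clusters with high intra- but low inter-connections.

```python
import numpy as np

# Toy similarity graph: two ship "traffic clusters" linked weakly.
# Weights are hypothetical composite similarities, not AIS-derived.
W = np.array([
    [0.0, 5.0, 4.0, 0.1, 0.0, 0.0],
    [5.0, 0.0, 6.0, 0.0, 0.0, 0.0],
    [4.0, 6.0, 0.0, 0.2, 0.0, 0.0],
    [0.1, 0.0, 0.2, 0.0, 7.0, 5.0],
    [0.0, 0.0, 0.0, 7.0, 0.0, 6.0],
    [0.0, 0.0, 0.0, 5.0, 6.0, 0.0],
])

def spectral_bisect(W):
    """Split a weighted graph into two clusters with high intra- and
    low inter-connection, using the sign of the Fiedler vector (the
    Laplacian eigenvector with the second-smallest eigenvalue)."""
    D = np.diag(W.sum(axis=1))
    L = D - W                          # unnormalized graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)
    fiedler = eigvecs[:, 1]            # second-smallest eigenvalue's vector
    return (fiedler > 0).astype(int)

labels = spectral_bisect(W)
# Ships 0-2 and ships 3-5 land in different clusters.
assert labels[0] == labels[1] == labels[2]
assert labels[3] == labels[4] == labels[5]
assert labels[0] != labels[3]
```

The paper's framework additionally enforces balanced cluster sizes and operates on a conflict-aware similarity, but the cut objective it optimizes follows the same intra-versus-inter connection principle.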
Multi-objective community detection applied to social and COVID-19 constructed networks
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.
Community Detection plays an integral part in network analysis, as it facilitates understanding the structure and functional characteristics of a network. Communities organize real-world networks into densely connected groups of nodes. This thesis provides a critical analysis of Community Detection and highlights its main areas, including algorithms, evaluation metrics, applications, and datasets in social networks.
After defining the research gap, this thesis proposes two Attribute-Based Label Propagation algorithms that maximize both Modularity and homogeneity; homogeneity is treated once as an objective function and once as a constraint. To better capture the homogeneity of real-world networks, a new Penalized Homogeneity degree (PHd) is proposed, which can easily be personalized to the characteristics of the network.
For the first time, COVID-19 tracing data are used to form two network datasets: the first is based on virus transmission between countries of the world, while the second is an attributed network based on virus transmission among contact-tracing records in the Kingdom of Bahrain. Disease-tracking networks of this type had not previously been constructed from COVID-19 data or studied as a community detection problem. The proposed datasets are validated and tested in several experiments, and the proposed Penalized Homogeneity measure is personalized and used to evaluate the attributed network.
Extensive experiments and analyses are carried out to evaluate the proposed methods and benchmark the results against other well-known algorithms, comparing Modularity, the proposed PHd, and accuracy measures. The proposed methods achieve the best performance among the compared methods, with 26.6% better Modularity and 33.96% better PHd on the proposed dataset, as well as noteworthy results on benchmark datasets, with Modularity improvements of 7.24% and 4.96%, respectively, and PHd values of 27% and 81.9%.
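The Modularity objective that the proposed algorithms maximize has a compact closed form. Below is a minimal sketch using standard Newman modularity on a toy graph (not one of the thesis's COVID-19 networks, and without the PHd term): a partition that follows the graph's dense groups scores higher than an arbitrary split.

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity:
    Q = (1 / 2m) * sum_ij [A_ij - k_i k_j / 2m] * [c_i == c_j]."""
    k = A.sum(axis=1)                  # node degrees
    two_m = k.sum()                    # 2m = total degree
    same = labels[:, None] == labels[None, :]
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Two triangles joined by a single bridge edge.
A = np.zeros((6, 6))
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

good = np.array([0, 0, 0, 1, 1, 1])    # the two triangles
bad = np.array([0, 1, 0, 1, 0, 1])     # arbitrary split
assert modularity(A, good) > modularity(A, bad)
```

For this graph the triangle partition gives Q = 5/14 (about 0.357), while the arbitrary split scores far lower, which is the signal a label-propagation-style optimizer climbs.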