An optimal factor analysis approach to improve the wavelet-based image resolution enhancement techniques
Existing wavelet-based image resolution enhancement techniques rest on many assumptions, such as restrictions on how low-resolution images are generated and on the selection of wavelet functions, which limit their applicability across fields. This paper first identifies the factors that most affect the performance of these techniques and quantitatively evaluates the impact of the existing assumptions. An approach called Optimal Factor Analysis, employing a genetic algorithm, is then introduced to increase the applicability and fidelity of the existing methods. Moreover, a new Figure of Merit is proposed to assist parameter selection and better measure overall performance. The experimental results show that the proposed approach improves the performance of the selected image resolution enhancement methods and has the potential to be extended to other methods.
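A common baseline that papers in this area build on treats the observed low-resolution image as the approximation (LL) subband and inverse-transforms it with zero detail subbands to double the resolution. The sketch below implements that baseline for the Haar wavelet in plain NumPy; it is an illustrative starting point, not the paper's Optimal Factor Analysis method.

```python
import numpy as np

def haar_synthesis_2x(ll):
    """Upscale an image 2x by treating it as the Haar approximation (LL)
    band and inverse-transforming with zero detail (LH, HL, HH) bands."""
    h, w = ll.shape
    out = np.zeros((2 * h, 2 * w))
    # With zero details, each LL coefficient spreads equally over its 2x2
    # block (inverse of the orthonormal 2D Haar analysis LL = (a+b+c+d)/2).
    out[0::2, 0::2] = ll / 2.0
    out[0::2, 1::2] = ll / 2.0
    out[1::2, 0::2] = ll / 2.0
    out[1::2, 1::2] = ll / 2.0
    return out
```

The choices this paper optimizes (which wavelet, how the low-resolution image is assumed to have been generated) are exactly the assumptions hard-coded here.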
Forecasting bus passenger flows by using a clustering-based support vector regression approach
As a significant component of the intelligent transportation system, forecasting bus passenger
flows plays a key role in resource allocation, network planning, and frequency setting. However, it remains
challenging to recognize high fluctuations, nonlinearity, and periodicity of bus passenger flows due to
varied destinations and departure times. For this reason, a novel forecasting model named affinity
propagation-based support vector regression (AP-SVR) is proposed based on clustering and nonlinear
simulation. In the proposed approach, a clustering algorithm is first used to generate clustering-based
intervals. A support vector regression (SVR) is then exploited to forecast the passenger flow for each
cluster, with the use of particle swarm optimization (PSO) for obtaining the optimized parameters. Finally,
the prediction results of the SVR are rearranged in chronological order. The proposed model
is tested using real bus passenger data from a bus line over four months. Experimental results demonstrate
that the proposed model performs better than other peer models in terms of absolute percentage error and
mean absolute percentage error. The results suggest that a deterministic clustering technique with stable
cluster assignments (AP) can improve the forecasting performance significantly.
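The AP-SVR pipeline described above can be sketched with scikit-learn: affinity propagation groups observations into flow-level clusters, one SVR is fitted per cluster, and predictions are written back at their original time stamps. The passenger data here are synthetic, and the PSO parameter search is replaced by SVR defaults for brevity.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Hypothetical hourly passenger counts with a daily period plus noise.
t = np.arange(200, dtype=float)
flow = 50 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)

# 1) Affinity propagation partitions observations into flow-level intervals.
labels = AffinityPropagation(random_state=0).fit_predict(flow.reshape(-1, 1))

# 2) One SVR per cluster (the paper tunes C, gamma, and epsilon with PSO;
#    defaults are used here).
pred = np.empty_like(flow)
for k in np.unique(labels):
    idx = np.where(labels == k)[0]
    model = SVR(kernel="rbf").fit(idx.reshape(-1, 1), flow[idx])
    # 3) Writing predictions back at their time indices restores
    #    chronological order.
    pred[idx] = model.predict(idx.reshape(-1, 1))

mape = np.mean(np.abs((pred - flow) / flow)) * 100
```

In-sample fit only; a real evaluation would hold out the final weeks of the four-month series.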
Image tag completion by local learning
The problem of tag completion is to learn the missing tags of an image. In
this paper, we propose to learn a tag scoring vector for each image by local
linear learning. A local linear function is used in the neighborhood of each
image to predict the tag scoring vectors of its neighboring images. We
construct a unified objective function for the learning of both tag scoring
vectors and local linear function parame- ters. In the objective, we impose the
learned tag scoring vectors to be consistent with the known associations to the
tags of each image, and also minimize the prediction error of each local linear
function, while reducing the complexity of each local function. The objective
function is optimized by an alternate optimization strategy and gradient
descent methods in an iterative algorithm. We compare the proposed algorithm
against different state-of-the-art tag completion methods, and the results show
its advantages.
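The neighborhood-based idea can be illustrated with a much-simplified stand-in: iteratively fill missing tag scores from an image's nearest neighbors in feature space, while pinning observed entries to the known tags. This replaces the paper's local linear functions and alternate optimization with plain neighbor averaging, so it is a hypothetical sketch, not the authors' algorithm.

```python
import numpy as np

def complete_tags(features, tags, mask, k=3, iters=20, lam=0.5):
    """Fill missing tag scores by averaging over the k nearest neighbours
    in feature space (simplified stand-in for local linear learning).
    mask[i, j] is True where the tag assignment is observed."""
    # Pairwise distances -> k nearest neighbours per image.
    d = np.linalg.norm(features[:, None] - features[None, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :k]
    scores = np.where(mask, tags, 0.0)
    for _ in range(iters):
        # Predict each image's scores from its neighbourhood...
        pred = scores[nbrs].mean(axis=1)
        # ...then keep observed entries consistent with the known tags.
        scores = np.where(mask, tags, lam * scores + (1 - lam) * pred)
    return scores
```

The consistency term (`np.where(mask, tags, ...)`) and the neighborhood prediction term mirror the two ingredients of the paper's unified objective.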
Machine Learning and Integrative Analysis of Biomedical Big Data.
Recent developments in high-throughput technologies have accelerated the accumulation of massive amounts of omics data from multiple sources: genome, epigenome, transcriptome, proteome, metabolome, etc. Traditionally, data from each source (e.g., genome) are analyzed in isolation using statistical and machine learning (ML) methods. Integrative analysis of multi-omics and clinical data is key to new biomedical discoveries and advancements in precision medicine. However, data integration poses new computational challenges as well as exacerbating those associated with single-omics studies. Specialized computational approaches are required to effectively and efficiently perform integrative analysis of biomedical data acquired from diverse modalities. In this review, we discuss state-of-the-art ML-based approaches for tackling five specific computational challenges associated with integrative analysis: the curse of dimensionality, data heterogeneity, missing data, class imbalance, and scalability.
Magnification Control in Self-Organizing Maps and Neural Gas
We consider different ways to control the magnification in self-organizing
maps (SOM) and neural gas (NG). Starting from early approaches of magnification
control in vector quantization, we then concentrate on different approaches for
SOM and NG. We show that three structurally similar approaches can be applied
to both algorithms: localized learning, concave-convex learning, and winner
relaxing learning. Thereby, the approach of concave-convex learning in SOM is
extended to a more general description, whereas the concave-convex learning for
NG is new. In general, the control mechanisms generate only slightly different
behavior comparing both neural algorithms. However, we emphasize that the NG
results are valid for any data dimension, whereas in the SOM case the results
hold only for the one-dimensional case.
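The localized-learning flavor of magnification control can be sketched in one dimension: the winner's learning rate is scaled by an estimate of the local data density raised to a control exponent. The density estimate (a histogram), the parameter values, and the omission of the SOM neighborhood function are all simplifications of this sketch, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.beta(2, 5, 5000)  # hypothetical 1-D data with non-uniform density

# Histogram-based density estimate used to modulate the learning rate.
hist, edges = np.histogram(data, bins=20, density=True)

def local_density(x):
    i = np.clip(np.searchsorted(edges, x) - 1, 0, hist.size - 1)
    return hist[i]

# Localized learning: the winner's step size grows with the local density
# raised to the control exponent m, which biases the asymptotic codebook
# density and thereby the magnification.
w = np.sort(rng.uniform(0, 1, 10))       # codebook weights
eps0, m = 0.05, 0.5                      # base rate and control exponent
for x in data:
    s = np.argmin(np.abs(w - x))         # winner unit
    eps = eps0 * local_density(x) ** m   # density-modulated learning rate
    w[s] += min(eps, 1.0) * (x - w[s])   # winner-only update; the SOM
                                         # neighbourhood term is omitted
```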
Adaptive pattern recognition by mini-max neural networks as a part of an intelligent processor
In this decade and progressing into the 21st century, NASA will have missions including the Space Station and the Earth-related planetary sciences. To support these missions, a high degree of sophistication in machine automation and an increasing data-processing throughput rate are necessary. Meeting these challenges requires intelligent machines designed to support the necessary automation in remote, hazardous space environments. There are two approaches to designing these intelligent machines. One is the knowledge-based expert-system approach, namely AI. The other is a non-rule-based approach built on parallel and distributed computing for adaptive fault tolerance, namely Neural or Natural Intelligence (NI). The union of AI and NI is the solution to the problem stated above. The NI segment of this unit extracts features automatically by applying Cauchy simulated annealing to a mini-max cost energy function. The features discovered by NI can then be passed to the AI system for further processing, and vice versa. This exchange increases reliability, for AI can follow the NI-formulated algorithm exactly and can provide the context knowledge base as constraints for neurocomputing. The mini-max cost function that solves for the unknown features can furthermore yield a top-down architectural design of neural networks by means of a Taylor series expansion of the cost function. A typical mini-max cost function consists of the sample variance of each class in the numerator and the separation of the class centers in the denominator. Thus, when the total cost energy is minimized, the conflicting goals of intraclass clustering and interclass segregation are achieved simultaneously.
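The described cost, variance within classes over separation between class centers, can be written directly. The exact form of the paper's energy may differ; this is an illustrative instance of the stated structure.

```python
import numpy as np

def minimax_cost(X, y):
    """Mini-max cost: intraclass sample variance in the numerator and
    interclass centre separation in the denominator, so minimizing it
    tightens each class while pushing class centres apart."""
    classes = np.unique(y)
    within = sum(np.var(X[y == c], axis=0).sum() for c in classes)
    centers = np.array([X[y == c].mean(axis=0) for c in classes])
    between = sum(np.linalg.norm(ci - cj) ** 2
                  for i, ci in enumerate(centers)
                  for cj in centers[i + 1:])
    return within / between
```

Well-separated, compact classes yield a small cost; overlapping classes yield a large one, matching the simultaneous clustering/segregation goals in the abstract.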