Hybridizing and applying computational intelligence techniques
As computers are increasingly relied upon to perform tasks of increasing complexity affecting many aspects of society, it is imperative that the underlying computational methods performing the tasks have high performance in terms of effectiveness and scalability. A common solution for performing such complex tasks is the use of computational intelligence (CI) techniques. CI techniques use approaches influenced by nature to solve problems in which traditional modeling approaches fail due to impracticality, intractability, or mathematical ill-posedness. While CI techniques can perform considerably better than traditional modeling approaches when solving complex problems, the scalability performance of a given CI technique alone is not always optimal. Hybridization is a popular process by which a better performing CI technique is created from the combination of multiple existing techniques in a logical manner. In the first paper in this thesis, a novel hybridization of two CI techniques, accuracy-based learning classifier systems (XCS) and cluster analysis, is presented that improves upon the efficiency and, in some cases, the effectiveness of XCS. A number of tasks in software engineering are performed manually, such as defining expected output in model transformation testing. Especially as the number and size of projects that rely on such manually performed tasks grow, it is critical that automated approaches are employed to reduce or eliminate manual effort from these tasks in order to scale efficiently. The second paper in this thesis details a novel application of a CI technique, multi-objective simulated annealing, to the task of test case model generation to reduce the resulting effort required to manually update expected transformation output. --Abstract, page iv
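The abstract does not specify how the multi-objective simulated annealing in the second paper is configured; as a minimal sketch of the general technique, one can combine Pareto dominance with a temperature-controlled acceptance probability. All names here, and the scalarized "worsening" term, are illustrative assumptions rather than the thesis's actual design:

```python
import math
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def accept(current, candidate, temperature):
    """Dominance-based acceptance rule for multi-objective simulated annealing.

    A dominating (or non-worse) candidate is accepted outright; otherwise it
    is accepted with a probability that decays with the scalarized worsening
    and with falling temperature, allowing early escapes from local optima.
    """
    if dominates(candidate, current):
        return True
    delta = sum(candidate) - sum(current)  # crude scalarization of the worsening
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / temperature)
```

As the temperature is lowered over the run, worsening moves become progressively rarer, so the search shifts from exploration to refinement of the Pareto front.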
Massive Science with VO and Grids
There is a growing need for massive computational resources for the analysis of new astronomical datasets. To tackle this problem, we present here our first steps towards marrying two new and emerging technologies: the Virtual Observatory (e.g. AstroGrid) and the computational grid (e.g. TeraGrid, COSMOS, etc.). We discuss the construction of VOTechBroker, a modular software tool designed to abstract the tasks of submission and management of a large number of computational jobs to a distributed computer system. The broker will also interact with the AstroGrid workflow and MySpace environments. We discuss our planned usage of the VOTechBroker in computing a huge number of n-point correlation functions from the SDSS data and massive model-fitting of millions of CMBfast models to WMAP data. We also discuss other applications, including the determination of the XMM Cluster Survey selection function and the construction of new WMAP maps. Comment: Invited talk at ADASSXV conference, published as ASP Conference Series, Vol. XXX, 2005, C. Gabriel, C. Arviset, D. Ponz and E. Solano, eds. 9 pages
MILCS: A mutual information learning classifier system
This paper introduces a new variety of learning classifier system (LCS), called MILCS, which utilizes mutual information as fitness feedback. Unlike most LCSs, MILCS is specifically designed for supervised learning. MILCS's design draws on an analogy to the structural learning approach of cascade correlation networks. We present preliminary results, and contrast them to results from XCS. We discuss the explanatory power of the resulting rule sets, and introduce a new technique for visualizing explanatory power. Final comments include future directions for this research, including investigations in neural networks and other systems. Copyright 2007 ACM
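The abstract does not give MILCS's exact fitness computation. As a sketch of the underlying idea only, the empirical mutual information between a rule's predicted labels and the true labels could serve as a supervised fitness signal; the function below is a hypothetical illustration, not the paper's implementation:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two discrete sequences.

    A rule whose predictions xs carry more information about the true labels
    ys receives a higher score, which could then be used as fitness feedback.
    """
    n = len(xs)
    px = Counter(xs)            # marginal counts of predictions
    py = Counter(ys)            # marginal counts of labels
    pxy = Counter(zip(xs, ys))  # joint counts
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        p_indep = (px[x] / n) * (py[y] / n)
        mi += p_joint * math.log2(p_joint / p_indep)
    return mi
```

Predictions that match the labels perfectly on a balanced binary problem score 1 bit, while predictions independent of the labels score 0, giving a graded supervised signal.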
A brief history of learning classifier systems: from CS-1 to XCS and its variants
© 2015, Springer-Verlag Berlin Heidelberg. The direction set by Wilson’s XCS is that modern Learning Classifier Systems can be characterized by their use of rule accuracy as the utility metric for the search algorithm(s) discovering useful rules. Such searching typically takes place within the restricted space of co-active rules for efficiency. This paper gives an overview of the evolution of Learning Classifier Systems up to XCS, and then of some of the subsequent developments of Wilson’s algorithm for different types of learning.
Diffusion Mechanism in Residual Neural Network: Theory and Applications
Diffusion, a fundamental internal mechanism emerging in many physical processes, describes the interaction among different objects. In many learning tasks with limited training samples, diffusion connects the labeled and unlabeled data points and is a critical component for achieving high classification accuracy. Many existing deep learning approaches directly impose the fusion loss when training neural networks. In this work, inspired by convection-diffusion ordinary differential equations (ODEs), we propose a novel diffusion residual network (Diff-ResNet) that internally introduces diffusion into the architecture of neural networks. Under the structured data assumption, it is proved that the proposed diffusion block can increase the distance-diameter ratio, which improves the separability of inter-class points and reduces the distance among local intra-class points. Moreover, this property can be easily adopted by residual networks for constructing separable hyperplanes. Extensive experiments on synthetic binary classification, semi-supervised graph node classification, and few-shot image classification across various datasets validate the effectiveness of the proposed method.
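As a rough sketch of what such a diffusion block computes — assuming, beyond what the abstract states, that diffusion amounts to explicit Euler steps of graph smoothing on feature vectors — each point drifts toward the weighted average of its neighbours, which shrinks intra-class spread while leaving weakly connected clusters apart:

```python
def diffusion_step(points, weights, step=0.25):
    """One explicit Euler step of graph diffusion on feature vectors.

    points:  list of feature vectors (lists of floats)
    weights: symmetric affinity matrix; weights[i][j] couples points i and j
    step:    Euler step size for the diffusion ODE
    """
    n = len(points)
    dim = len(points[0])
    new_points = []
    for i in range(n):
        # Drift toward neighbours, weighted by affinity: sum_j w_ij (x_j - x_i)
        drift = [0.0] * dim
        for j in range(n):
            for d in range(dim):
                drift[d] += weights[i][j] * (points[j][d] - points[i][d])
        new_points.append([points[i][d] + step * drift[d] for d in range(dim)])
    return new_points
```

Iterating this step on points connected by high affinities pulls same-cluster points together, which is the intuition behind the improved distance-diameter ratio claimed in the abstract.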