Covering rough sets based on neighborhoods: An approach without using neighborhoods
Rough set theory, a mathematical tool for dealing with inexact or uncertain
knowledge in information systems, originally described the indiscernibility
of elements by equivalence relations. Covering rough sets are a natural
extension of classical rough sets by relaxing the partitions arising from
equivalence relations to coverings. Recently, some topological concepts such as
neighborhood have been applied to covering rough sets. In this paper, we
further investigate covering rough sets based on neighborhoods through their
approximation operations. We show that the upper approximation based on
neighborhoods can be defined equivalently without using neighborhoods. To
analyze the coverings themselves, we introduce unary and composition operations
on coverings. A notion of homomorphism is provided to relate two covering
approximation spaces. We also examine the properties of approximations
preserved by the operations and homomorphisms, respectively. Comment: 13 pages; to appear in International Journal of Approximate Reasoning
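The neighborhood-based operators the abstract refers to can be illustrated with a minimal sketch (a toy example using the standard definitions, not code from the paper): the neighborhood of x is the intersection of all covering blocks that contain x, and the lower/upper approximations test inclusion or intersection of that neighborhood with the target set.

```python
# Toy sketch (standard definitions, not code from the paper): the
# neighborhood of x is the intersection of all covering blocks that
# contain x; the approximations test that neighborhood against X.

def neighborhood(x, cover):
    blocks = [set(K) for K in cover if x in K]
    n = blocks[0]
    for K in blocks[1:]:
        n &= K
    return n

def lower_approx(X, U, cover):
    # elements whose whole neighborhood lies inside X
    return {x for x in U if neighborhood(x, cover) <= X}

def upper_approx(X, U, cover):
    # elements whose neighborhood meets X
    return {x for x in U if neighborhood(x, cover) & X}

U = {1, 2, 3, 4}
cover = [{1, 2, 3}, {2, 3, 4}, {1, 4}]   # a covering of U, not a partition
X = {1, 2}
lo = lower_approx(X, U, cover)   # {1}
up = upper_approx(X, U, cover)   # {1, 2, 3}
```

Here, for instance, the neighborhood of 2 is {1,2,3} ∩ {2,3,4} = {2,3}, which meets X but is not contained in it, so 2 belongs to the upper but not the lower approximation.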
A Novel Progressive Multi-label Classifier for Class-incremental Data
In this paper, a progressive learning algorithm for multi-label
classification to learn new labels while retaining the knowledge of previous
labels is designed. New output neurons corresponding to new labels are added
and the neural network connections and parameters are automatically
restructured as if the label had been introduced from the beginning. This work
is the first of its kind: a multi-label classifier for class-incremental
learning. It is useful for real-world applications such as robotics, where
streaming data are available and the number of labels is often unknown. Based
on the Extreme Learning Machine framework, a novel universal classifier with
plug and play capabilities for progressive multi-label classification is
developed. Experimental results on various benchmark synthetic and real
datasets validate the efficiency and effectiveness of our proposed algorithm. Comment: 5 pages, 3 figures, 4 tables
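A toy sketch of the class-incremental idea (not the authors' ELM-based algorithm, which solves output weights by least squares; the class name and the LMS update here are illustrative assumptions): a fixed random hidden layer plus one linear output unit per label, where a previously unseen label simply adds a new, zero-initialized output neuron.

```python
import math, random

random.seed(0)

class ProgressiveMultiLabel:
    """Toy class-incremental multi-label classifier: a fixed random
    hidden layer (ELM-style) plus one linear output unit per label;
    a previously unseen label simply adds a new output neuron."""

    def __init__(self, n_in, n_hidden=10, lr=0.05):
        self.lr = lr
        self.W = [[random.uniform(-1, 1) for _ in range(n_in)]
                  for _ in range(n_hidden)]
        self.b = [random.uniform(-1, 1) for _ in range(n_hidden)]
        self.out = {}                       # label -> output weights

    def _hidden(self, x):
        return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + bi)
                for row, bi in zip(self.W, self.b)]

    def add_label(self, label):
        # new output neuron, as if the label existed from the start
        self.out[label] = [0.0] * len(self.W)

    def partial_fit(self, x, labels):
        for lab in labels:
            if lab not in self.out:
                self.add_label(lab)
        h = self._hidden(x)
        for lab, w in self.out.items():
            target = 1.0 if lab in labels else 0.0
            y = sum(wi * hi for wi, hi in zip(w, h))
            for i, hi in enumerate(h):      # simple LMS update
                w[i] += self.lr * (target - y) * hi

    def predict(self, x):
        h = self._hidden(x)
        return {lab for lab, w in self.out.items()
                if sum(wi * hi for wi, hi in zip(w, h)) > 0.5}
```

The paper's classifier computes output weights within the ELM framework (a least-squares solution); the per-sample LMS update above just keeps the streaming sketch short.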
Evolving granular systems (Sistemas granulares evolutivos)
Advisor: Fernando Antonio Campos Gomide. Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
Abstract: In recent years there has been increasing interest in computational modeling approaches to deal with real-world data streams. Methods and algorithms have been proposed to uncover meaningful knowledge from very large (often unbounded) data sets with, in principle, no apparent value. This thesis introduces a framework for evolving granular modeling of uncertain data streams. Evolving granular systems comprise an array of online modeling approaches inspired by the way in which humans deal with complexity. These systems explore the information flow in dynamic environments and derive from it models that can be linguistically understood. In particular, information granulation is a natural technique to dispense with unnecessary details and emphasize the transparency, interpretability, and scalability of information systems. Uncertain (granular) data arise from imprecise perception or description of the value of a variable. Broadly stated, various factors can affect one's choice of data representation such that the representing object conveys the meaning of the concept it is being used to represent.
Of particular concern to this work are numerical, interval, and fuzzy types of granular data, and interval, fuzzy, and neurofuzzy modeling frameworks. Learning in evolving granular systems is based on incremental algorithms that build model structure from scratch on a per-sample basis and adapt model parameters whenever necessary. This learning paradigm is important because it avoids redesigning and retraining models whenever the environment changes. Application examples in classification, function approximation, time-series prediction, and control using real and synthetic data illustrate the usefulness of the proposed granular approaches and framework. The behavior of nonstationary data streams with gradual and abrupt regime shifts is also analyzed in the realm of evolving granular computing. We shed light upon the role of interval, fuzzy, and neurofuzzy computing in processing uncertain data and providing high-quality approximate solutions and rule summaries of input-output data sets. The approaches and framework introduced constitute a natural extension of evolving intelligent systems over numeric data streams to evolving granular systems over granular data streams. Doctorate in Electrical Engineering (Automation).
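The incremental, granule-building learning described above can be illustrated with a toy sketch (an assumption for illustration, not the thesis' actual algorithms): interval granules represented as per-class hyperboxes that expand to absorb nearby samples of the same class, with a new granule created whenever no existing box can expand without exceeding a maximum width.

```python
# Toy sketch (illustrative assumption, not the thesis' algorithms):
# evolving interval granules as per-class hyperboxes that expand to
# absorb nearby samples and are created when no box can expand
# without exceeding a maximum width RHO (the granularity).

RHO = 0.3   # maximum allowed box width per dimension

def fit_sample(granules, x, label):
    for g in granules:
        if g["label"] != label:
            continue
        lo = [min(l, xi) for l, xi in zip(g["lo"], x)]
        hi = [max(h, xi) for h, xi in zip(g["hi"], x)]
        if all(h - l <= RHO for l, h in zip(lo, hi)):
            g["lo"], g["hi"] = lo, hi       # expand an existing granule
            return
    granules.append({"label": label, "lo": list(x), "hi": list(x)})

def classify(granules, x):
    hits = [g["label"] for g in granules
            if all(l <= xi <= h
                   for l, xi, h in zip(g["lo"], x, g["hi"]))]
    return hits[0] if hits else None

granules = []
stream = [([0.1, 0.1], "a"), ([0.2, 0.15], "a"), ([0.9, 0.8], "b")]
for x, y in stream:
    fit_sample(granules, x, y)   # one pass, no retraining
```

Each sample is processed once and the model structure grows only when needed, which is the point of the incremental paradigm: no redesign or retraining when the environment changes.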
Some Thoughts on Hypercomputation
Hypercomputation is a relatively new branch of computer science that emerged
from the idea that the Church--Turing Thesis, which is supposed to describe
what is computable and what is noncomputable, cannot possibly be true. Because
of its apparent validity, the Church--Turing Thesis has been used to
investigate the possible limits of intelligence of any imaginable life form,
and, consequently, the limits of information processing, since living beings
are, among others, information processors. However, in the light of
hypercomputation, which seems to be feasible in our universe, one cannot impose
arbitrary limits on what intelligence can achieve unless there are specific
physical laws that prohibit the realization of something. In addition,
hypercomputation allows us to ponder aspects of communication between
intelligent beings that have not been considered before.
Multiobjective programming for type-2 hierarchical fuzzy inference trees
This paper proposes a design for a hierarchical fuzzy inference tree (HFIT). An HFIT produces an
optimal tree-like structure: a natural hierarchy that accommodates simplicity by
combining several low-dimensional fuzzy inference systems (FISs), and thereby
provides a high degree of approximation accuracy. The construction of an HFIT takes place in two phases.
Firstly, a nondominated sorting based multiobjective genetic programming (MOGP) is applied to obtain a
simple tree structure (low model complexity) with high accuracy. Secondly, the differential evolution
algorithm is applied to optimize the parameters of the obtained tree. In that tree, each node has a
different combination of inputs, and the evolutionary process governs this combination. Hence,
HFIT nodes are heterogeneous in nature, which leads to high diversity among the rules generated
by the HFIT. Additionally, the HFIT provides automatic feature selection, because the MOGP-based
structural optimization accepts only the inputs relevant to the knowledge contained in the
data. The HFIT was studied in the context of both type-1 and type-2 FISs, and its performance was
evaluated through six application problems. Moreover, the proposed multiobjective HFIT was compared
both theoretically and empirically with recently proposed FISs methods from the literature, such as
McIT2FIS, TSCIT2FNN, SIT2FNN, RIT2FNS-WB, eT2FIS, MRIT2NFS, IT2FNN-SVR, etc. From the
obtained results, it was found that the HFIT provided less complex and highly accurate models compared
to the models produced by most of the other methods. Hence, the proposed HFIT is an efficient and
competitive alternative to the other FISs for function approximation and feature selection.
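The second phase can be sketched as a generic DE/rand/1/bin loop (a minimal sketch; the paper's actual membership-function parameters and operator settings are not reproduced, and the stand-in objective below is an illustrative assumption):

```python
import random

random.seed(1)

def differential_evolution(loss, dim, bounds=(-2.0, 2.0),
                           pop_size=20, f=0.8, cr=0.9, gens=150):
    """Minimal DE/rand/1/bin sketch, the kind of loop used in HFIT's
    second phase to tune the tree's parameters."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    cost = [loss(p) for p in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample(
                [j for j in range(pop_size) if j != i], 3)
            jr = random.randrange(dim)       # force one mutated gene
            trial = [pop[a][j] + f * (pop[b][j] - pop[c][j])
                     if (random.random() < cr or j == jr) else pop[i][j]
                     for j in range(dim)]
            tc = loss(trial)
            if tc <= cost[i]:                # greedy selection
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# stand-in objective: recover the center/width of one membership function
target = [0.5, 0.2]
loss = lambda p: sum((pi - ti) ** 2 for pi, ti in zip(p, target))
params, err = differential_evolution(loss, dim=2)
```

Because DE only needs loss evaluations, the same loop tunes whatever real-valued parameter vector the MOGP-selected tree exposes.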
Memristors for the Curious Outsiders
We present both an overview and a perspective of recent experimental advances
and proposed new approaches to performing computation using memristors. A
memristor is a 2-terminal passive component with a dynamic resistance depending
on an internal parameter. We provide a brief historical introduction, as well
as an overview of the physical mechanisms that lead to memristive behavior.
This review is meant to guide nonpractitioners in the field of memristive
circuits and their connection to machine learning and neural computation. Comment: Perspective paper for MDPI Technologies; 43 pages
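The definition above (a 2-terminal element whose resistance depends on an internal state) can be made concrete with a toy simulation of the frequently cited HP linear-drift model; all parameter values below are illustrative assumptions, not values from the review:

```python
import math

# Toy simulation of an HP-style linear-drift memristor model
# (parameter values are illustrative assumptions): resistance
# interpolates between R_ON and R_OFF via an internal state
# w in [0, 1], and w drifts in proportion to the device current.

R_ON, R_OFF = 100.0, 16e3      # limiting resistances (ohms)
K_DRIFT = 1e5                  # drift constant mu*R_ON/D**2 (assumed)
DT, STEPS = 1e-5, 100_000      # 1 s of simulated time

w, rs = 0.1, []
for n in range(STEPS):
    v = math.sin(2 * math.pi * 50 * n * DT)   # 50 Hz, 1 V sine drive
    r = R_ON * w + R_OFF * (1 - w)            # state-dependent resistance
    rs.append(r)
    i = v / r
    w += K_DRIFT * i * DT                     # linear ionic drift
    w = min(max(w, 0.0), 1.0)                 # keep the state in [0, 1]
```

Plotting i against v for this loop would show the pinched hysteresis loop that is the signature of memristive behavior: the resistance traced on the upswing of each cycle differs from the downswing.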
How do life, economy and other complex systems escape the heat death?
The primordial confrontation underlying the existence of our universe can be
conceived as the battle between entropy and complexity. The law of
ever-increasing entropy (Boltzmann H-theorem) evokes an irreversible,
one-directional evolution (or rather involution) going uniformly and
monotonically from birth to death. Since the 19th century, this concept has been one
of the cornerstones, and at the same time one of the puzzles, of statistical mechanics. On
the other hand, there is the empirical experience where one witnesses the
emergence, growth and diversification of new self-organized objects with
ever-increasing complexity. When modeling them in terms of simple discrete
elements one finds that the emergence of collective complex adaptive objects is
a rather generic phenomenon governed by a new type of laws. These 'emergence'
laws, not connected directly with the fundamental laws of physical reality,
nor acting 'in addition' to them but acting through them, were called 'More is
Different' by Phil Anderson, 'das Maass' by Hegel, etc. Even though the
'emergence laws' act through the intermediary of the fundamental laws that
govern the individual elementary agents, it turns out that different systems
apparently governed by very different fundamental laws: gravity, chemistry,
biology, economics, social psychology, end up often with similar emergence laws
and outcomes. In particular the emergence of adaptive collective objects endows
the system with a granular structure which in turn causes specific macroscopic
cycles of intermittent fluctuations. Comment: 42 pages, 18 figures
Deep Learning Methods for Partial Differential Equations and Related Parameter Identification Problems
Recent years have witnessed a growth in mathematics for deep learning--which
seeks a deeper understanding of the concepts of deep learning with mathematics
and explores how to make it more robust--and deep learning for mathematics,
where deep learning algorithms are used to solve problems in mathematics. The
latter has popularised the field of scientific machine learning where deep
learning is applied to problems in scientific computing. Specifically, more and
more neural network architectures have been developed to solve specific classes
of partial differential equations (PDEs). Such methods exploit properties that
are inherent to PDEs and thus solve the PDEs better than standard feed-forward
neural networks, recurrent neural networks, or convolutional neural networks.
This has had a great impact in the area of mathematical modeling where
parametric PDEs are widely used to model most natural and physical processes
arising in science and engineering. In this work, we review such methods as
well as their extensions for parametric studies and for solving the related
inverse problems. We also show their relevance in some industrial
applications.
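The core idea of exploiting PDE structure can be shown with a minimal illustration (not a specific architecture from this review): solve -u'' = f on (0, 1) with homogeneous Dirichlet conditions, using a sine-series trial function that satisfies the boundary conditions by construction and training only on the PDE residual at collocation points.

```python
import math

# Illustrative sketch (not a specific method from this review):
# solve -u'' = f on (0, 1), u(0) = u(1) = 0, with
# f(x) = pi^2 sin(pi x), whose exact solution is u(x) = sin(pi x).
# The trial function u(x) = sum_k a_k sin(k pi x) / (k pi)^2
# satisfies the boundary conditions by construction, so training
# only drives the PDE residual at collocation points to zero.

K, N, LR, STEPS = 3, 32, 0.1, 500
xs = [(i + 1) / (N + 1) for i in range(N)]   # interior collocation points
a = [0.0] * K

def residual(x):
    # -u''(x) - f(x) for the spectral trial function
    return (sum(a[k] * math.sin((k + 1) * math.pi * x) for k in range(K))
            - math.pi ** 2 * math.sin(math.pi * x))

for _ in range(STEPS):                       # plain gradient descent
    rs = [residual(x) for x in xs]
    for k in range(K):
        g = sum(2 * r * math.sin((k + 1) * math.pi * x)
                for r, x in zip(rs, xs)) / N
        a[k] -= LR * g

def u(x):
    return sum(a[k] * math.sin((k + 1) * math.pi * x)
               / ((k + 1) * math.pi) ** 2 for k in range(K))
```

A plain feed-forward network would have to learn the boundary behavior from data; here the ansatz encodes it exactly, which is the kind of inherent-property exploitation the specialized PDE architectures in this review generalize.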