Nonlinear and distributed sensory estimation
Methods to improve sensor performance with regard to nonlinearity, noise, and bandwidth are investigated, and new algorithms are developed. The need for the proposed research stems from the ever-increasing demand for greater precision and improved reliability in sensor measurements. After describing the current state of the art on sensor issues such as nonlinearity and bandwidth, research goals are set to establish new approaches to sensor usage. The investigation begins with a detailed distortion analysis of nonlinear sensors. The need for efficient distortion compensation procedures is further justified by showing how a slight deviation from the linearity assumption leads to severe distortion in both the time and frequency domains. It is argued that, with a suitable distortion compensation technique, the effectively infinite bandwidth that nonlinear distortion would otherwise demand of a sensor can be avoided. Several distortion compensation techniques are developed, and their performance is validated by simulation and experimental results. As with any model-based technique, modeling errors and model uncertainty affect the performance of the proposed scheme; this motivates robust signal reconstruction. A treatment of this problem is given, and a novel technique is developed that uses a nominal model instead of an accurate model and produces results that are robust to model uncertainty. The means to attain a high operating bandwidth are developed by utilizing several low-bandwidth pass-band sensors. It is pointed out that, instead of using a single sensor to measure a high-bandwidth signal, there are many advantages to using an array of several pass-band sensors. Having shown that sensor arrays are both economical and practical, several multi-sensor fusion schemes are developed to facilitate their implementation.
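As a minimal illustration of the distortion-compensation idea, the sketch below inverts a hypothetical memoryless cubic sensor characteristic with Newton's method. The model y = x + a·x³, its coefficient, and the numeric values are illustrative assumptions, not the dissertation's actual sensor model or compensation scheme.

```python
# Hypothetical sketch: compensating a memoryless cubic sensor nonlinearity
# y = x + a*x**3 by numerically inverting it with Newton's method.
# The coefficient `a` and the reading below are illustrative only.

def compensate(y, a=0.1, tol=1e-10, max_iter=50):
    """Recover the true input x from a distorted reading y = x + a*x**3."""
    x = y  # initial guess: assume the distortion is small
    for _ in range(max_iter):
        f = x + a * x**3 - y          # residual of the sensor model
        fprime = 1 + 3 * a * x**2     # derivative of the model w.r.t. x
        step = f / fprime
        x -= step
        if abs(step) < tol:
            break
    return x

# A reading of 1.1 from a sensor with a = 0.1 corresponds to x = 1.0,
# since 1.0 + 0.1 * 1.0**3 = 1.1.
recovered = compensate(1.1, a=0.1)
```

Any invertible one-to-one sensor characteristic can be treated the same way; only the residual and derivative lines change.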
Another aspect of this dissertation is the development of means to deal with outliers in sensor measurements. As faulty sensor data detection is an essential element of multi-sensor network implementation, used to improve system reliability and robustness, several sensor scheduling configurations are derived to identify and remove outliers.
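The outlier-removal idea can be sketched generically as follows; this median-absolute-deviation filter is a common stand-in for fault detection across redundant sensors, not the sensor scheduling configurations derived in the dissertation, and the readings are made up.

```python
# Illustrative sketch (not the dissertation's scheme): flagging outliers in
# redundant multi-sensor readings with the median absolute deviation (MAD).

from statistics import median

def remove_outliers(readings, k=3.0):
    """Keep readings within k scaled MADs of the median."""
    m = median(readings)
    mad = median(abs(r - m) for r in readings)
    if mad == 0:
        return [r for r in readings if r == m] or list(readings)
    scale = 1.4826 * mad  # consistent with the std. dev. under Gaussian noise
    return [r for r in readings if abs(r - m) <= k * scale]

readings = [5.01, 4.98, 5.02, 4.99, 9.7]   # last sensor is faulty
clean = remove_outliers(readings)          # the faulty reading is dropped
```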
Distributed Random Set Theoretic Soft/Hard Data Fusion
Research on multisensor data fusion aims at providing the enabling technology to combine information from several sources in order to form a unified picture. The literature on fusion of conventional data provided by non-human (hard) sensors is vast and well-established. In comparison to conventional fusion systems, where input data are generated by calibrated electronic sensor systems with well-defined characteristics, research on soft data fusion considers combining human-based data expressed preferably in unconstrained natural language form. Fusion of soft and hard data is even more challenging, yet necessary in some applications, and has received little attention in the past. Being a rather new area of research, soft/hard data fusion is still at a fledgling stage, with even its challenging problems yet to be adequately defined and explored.
This dissertation develops a framework to enable fusion of both soft and hard data
with the Random Set (RS) theory as the underlying mathematical foundation. Random
set theory is an emerging theory within the data fusion community that, due to its powerful
representational and computational capabilities, is gaining more and more attention among
the data fusion researchers. Motivated by the unique characteristics of the random set
theory and the main challenge of soft/hard data fusion systems, i.e. the need for a unifying
framework capable of processing both unconventional soft data and conventional hard data,
this dissertation argues in favor of a random set theoretic approach as the first step towards
realizing a soft/hard data fusion framework.
Several challenging problems related to soft/hard fusion systems are addressed in the
proposed framework. First, an extension of the well-known Kalman filter within random
set theory, called Kalman evidential filter (KEF), is adopted as a common data processing
framework for both soft and hard data. Second, a novel ontology (syntax+semantics)
is developed to allow for modeling soft (human-generated) data assuming target tracking
as the application. Third, as soft/hard data fusion is mostly aimed at large networks of
information processing, a new approach is proposed to enable distributed estimation of
soft, as well as hard data, addressing the scalability requirement of such fusion systems.
Fourth, a method for modeling trust in the human agents is developed, which enables the
fusion system to protect itself from erroneous/misleading soft data through discounting
such data on the fly. Fifth, leveraging recent developments in the RS-theoretic data
fusion literature, a novel soft data association algorithm is developed and deployed to
extend the proposed target tracking framework to the multi-target case. Finally, the
multi-target tracking framework is complemented by introducing a distributed classification
approach applicable to target classes described with soft human-generated data.
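For orientation, the hard-data core that the KEF generalizes is the ordinary Kalman recursion. A minimal scalar predict/update cycle, with illustrative model and noise values (this is the standard textbook filter, not the KEF itself), looks like:

```python
# A standard scalar Kalman filter for hard (sensor) data only; the KEF in
# the dissertation extends this recursion to random-set soft evidence.
# All numeric values here are illustrative.

def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.25):
    """One predict/update cycle: state mean x, variance P, measurement z."""
    # Predict
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update
    S = H * P_pred * H + R            # innovation variance
    K = P_pred * H / S                # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95]:       # noisy measurements of a constant state
    x, P = kalman_step(x, P, z)       # estimate converges toward ~1, P shrinks
```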
In addition, this dissertation presents a novel data-centric taxonomy of data fusion
methodologies. In particular, several categories of fusion algorithms are identified
and discussed based on the data-related challenges they address. The taxonomy is intended
to provide the reader with a generic and comprehensive view of the contemporary data fusion
literature, and can also serve as a reference for data fusion practitioners by providing
conducive design guidelines, in terms of algorithm choice, regarding the specific
data-related challenges expected in a given application.
Multivariable Fuzzy Control Based Mobile Robot Odor Source Localization via Semitensor Product
In order to take full advantage of multisensor information, a MIMO fuzzy control system based on the semitensor product (STP) is set up for mobile robot odor source localization (OSL). Multisensor information, such as vision, olfaction, laser, wind speed, and wind direction, forms the input of the fuzzy control system, and the corresponding searching strategies, such as random searching (RS), nearest distance-based vision searching (NDVS), and odor source declaration (OSD), are the outputs. Fuzzy control rules are expressed as algebraic equations over the multisensor information via the STP. In the proposed fuzzy control system, any output can be updated without influencing the other searching strategies. The proposed STP-based MIMO fuzzy control scheme thus provides a theoretical framework for mobile robot OSL. Experimental results show the efficiency of the proposed method.
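The semitensor product itself has a compact definition that a short sketch can make concrete: for A of size m x n and B of size p x q, with t = lcm(n, p), the left STP is A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}), which reduces to the ordinary matrix product when n = p. The example matrices below are arbitrary, chosen only to show the dimension-mismatched case.

```python
# Sketch of the left semitensor product (STP) used to turn fuzzy rules into
# algebraic equations: for A (m x n) and B (p x q), with t = lcm(n, p),
#   A stp B = (A kron I_{t/n}) @ (B kron I_{t/p}).
# The example matrices are arbitrary.

import numpy as np
from math import lcm

def stp(A, B):
    """Left semitensor product of two 2-D arrays."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

A = np.array([[1.0, 2.0]])            # 1 x 2
B = np.array([[3.0, 4.0]])            # 1 x 2: inner dimensions do not match
C = stp(A, B)                         # STP still yields a result, here 1 x 4
```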
Intelligent data mining using artificial neural networks and genetic algorithms : techniques and applications
Data Mining (DM) refers to the analysis of observational datasets to find
relationships and to summarize the data in ways that are both understandable
and useful. Many DM techniques exist. Compared with other DM techniques,
Intelligent Systems (ISs) based approaches, which include Artificial Neural
Networks (ANNs), fuzzy set theory, approximate reasoning, and derivative-free
optimization methods such as Genetic Algorithms (GAs), are tolerant of
imprecision, uncertainty, partial truth, and approximation. They provide
flexible information processing capability for handling real-life situations. This
thesis is concerned with the ideas behind the design, implementation, testing, and
application of a novel IS-based DM technique. The unique contribution of this
thesis lies in the implementation of a hybrid IS DM technique (Genetic Neural
Mathematical Method, GNMM) for solving novel practical problems, the
detailed description of this technique, and the illustration of several
applications solved by it.
GNMM consists of three steps: (1) GA-based input variable selection, (2) Multi-
Layer Perceptron (MLP) modelling, and (3) mathematical programming based
rule extraction. In the first step, GAs are used to evolve an optimal set of MLP
inputs. An adaptive method based on the average fitness of successive
generations is used to adjust the mutation rate, and hence the
exploration/exploitation balance. In addition, GNMM uses the elite group and
appearance percentage to minimize the randomness associated with GAs. In
the second step, MLP modelling serves as the core DM engine in performing
classification/prediction tasks. An Independent Component Analysis (ICA)
based weight initialization algorithm is used to determine optimal weights
before the commencement of training algorithms. The Levenberg-Marquardt
(LM) algorithm is used to achieve a second-order speedup compared to
conventional Back-Propagation (BP) training. In the third step, mathematical
programming based rule extraction is not only used to identify the premises of
multivariate polynomial rules, but also to explore features from the extracted
rules based on data samples associated with each rule. Therefore, the
methodology can provide regression rules and features not only in the
polyhedrons with data instances, but also in the polyhedrons without data
instances.
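The adaptive mutation idea in step (1) can be sketched as follows: raise the mutation rate when average fitness stagnates between successive generations (favouring exploration), lower it when fitness improves (favouring exploitation). The update rule, scaling factor, and bounds below are illustrative placeholders, not GNMM's exact scheme.

```python
# Hedged sketch of adapting a GA mutation rate from the average fitness of
# successive generations. The constants are illustrative, not GNMM's values.

def adapt_mutation_rate(rate, avg_fitness_prev, avg_fitness_curr,
                        factor=1.5, lo=0.001, hi=0.5):
    """Return an adjusted mutation rate based on fitness progress."""
    if avg_fitness_curr > avg_fitness_prev:   # improving: exploit more
        rate /= factor
    else:                                     # stagnating: explore more
        rate *= factor
    return min(max(rate, lo), hi)             # clamp to sensible bounds

rate = 0.05
rate = adapt_mutation_rate(rate, 10.0, 12.0)  # improvement -> rate shrinks
rate = adapt_mutation_rate(rate, 12.0, 11.8)  # stagnation  -> rate grows back
```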
A total of six datasets from environmental and medical disciplines were used
as case study applications. These datasets involve the prediction of
longitudinal dispersion coefficient, classification of electrocorticography
(ECoG)/Electroencephalogram (EEG) data, eye bacteria Multisensor Data
Fusion (MDF), and diabetes classification (denoted by Data I through to Data VI). GNMM was applied to all these six datasets to explore its effectiveness,
but the emphasis is different for different datasets. For example, the emphasis
of Data I and II was to give a detailed illustration of how GNMM works; Data III
and IV aimed to show how to deal with difficult classification problems; the
aim of Data V was to illustrate the averaging effect of GNMM; and finally Data
VI was concerned with the GA parameter selection and benchmarking GNMM
with other IS DM techniques such as Adaptive Neuro-Fuzzy Inference System
(ANFIS), Evolving Fuzzy Neural Network (EFuNN), Fuzzy ARTMAP, and
Cartesian Genetic Programming (CGP). In addition, datasets obtained from
published works (i.e. Data II & III) or public domains (i.e. Data VI) where
previous results were present in the literature were also used to benchmark
GNMM’s effectiveness.
As a closely integrated system GNMM has the merit that it needs little human
interaction. With some predefined parameters, such as GA’s crossover
probability and the shape of ANNs’ activation functions, GNMM is able to
process raw data until human-interpretable rules are extracted. This is
an important feature in practice, as users of a DM system quite often
have little or no need to fully understand the internal components of such a
system. Through case study applications, it has been shown that the GA-based
variable selection stage is capable of: filtering out irrelevant and noisy
variables, improving the accuracy of the model; making the ANN structure less
complex and easier to understand; and reducing the computational complexity
and memory requirements. Furthermore, rule extraction ensures that the MLP
training results are easily understandable and transferable.
The application of data fusion to reinforced concrete NDT - Volume 1
Research into different approaches to concrete non-destructive testing is presented. However, the main consideration is how data fusion methods can add value to the interpretation of the data gathered from different sensor sources. Mathematical models for data fusion and simultaneous adjustment of inhomogeneous data are used, which increase the accuracy and reliability of the subsequent surface repair decision.
The aim of the approach is to adjust any kind of data in a combined way by giving adequate weights to each measurement. An assessment of the quality of different sensor data is part of this. A comparison of different sensors is given.
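One standard way to give such "adequate weights" to inhomogeneous measurements is inverse-variance weighting, sketched below; the sensor values and variances are made up for illustration and are not taken from the study's data.

```python
# Illustrative sketch of weighted adjustment of inhomogeneous measurements:
# combine readings of the same quantity with inverse-variance weights, so
# more precise sensors count for more. All values below are made up.

def fuse(readings, variances):
    """Inverse-variance weighted mean and the variance of the fused value."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * r for w, r in zip(weights, readings)) / total
    return fused, 1.0 / total

# Two NDT sensors measuring the same quantity (e.g. cover depth in mm),
# the first four times more precise than the second:
fused, var = fuse([25.0, 27.0], [1.0, 4.0])
# The fused value lies closer to the more precise first sensor, and its
# variance is smaller than either individual sensor's.
```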
A Graphical User Interface, developed in the research, gives surface representations of the spatial data. This allows the differences between surface reconstruction results to be examined.
The validation of the approach for multi-sensor fusion in surface reconstruction is described and demonstrated using data from areas of concrete that were surveyed and subsequently repaired.
The results of the numerical experiments are interpreted, and conclusions are drawn for the approach's processing chain. With further development, the experimental software could be useful for industrial applications.