435 research outputs found
Neuro-fuzzy software for intelligent control and education
Integrated master's thesis. Electrical and Computer Engineering (Automation major). Faculdade de Engenharia, Universidade do Porto. 200
AUTOMATED INTERPRETATION OF THE BACKGROUND EEG USING FUZZY LOGIC
A new framework is described for managing uncertainty and for dealing with artefact
corruption, to introduce objectivity into the interpretation of the electroencephalogram
(EEG).
Conventionally, EEG interpretation is time-consuming and subjective, and is known to
show significant inter- and intra-observer variation. A need therefore exists to automate
EEG interpretation to provide a more consistent and efficient assessment.
However, automated analysis of EEGs by computer is complicated by two major
factors: the difficulty of adequately capturing, in machine form, the skills and subjective
expertise of the experienced electroencephalographer, and the lack of a reliable means of
dealing with the range of EEG artefacts (signal contamination). In this thesis, a new
framework is described which introduces objectivity in two important outcomes of
clinical evaluation of the EEG, namely, the clinical factual report and the clinical
'conclusion', by capturing the subjective expertise of the electroencephalographer and
dealing with the problem of artefact corruption.
The framework is separated into two stages to assist piecewise optimisation and to cater
for different requirements. The first stage, 'quantitative analysis', relies on novel digital
signal processing algorithms and cluster analysis techniques to reduce data and identify
and describe background activities in the EEG. To deal with artefact corruption, an
artefact removal strategy, based on new reliable techniques for artefact identification, is
used to ensure that only artefact-free activities enter the analysis. The outcome is a
quantitative analysis, which efficiently describes the background activity in the record,
and can support future clinical investigations in neurophysiology. In clinical practice,
many of the EEG features are described by the clinicians in natural language terms, such
as very high, extremely irregular, somewhat abnormal, etc. The second stage of the
framework, 'qualitative analysis', captures the subjectivity and linguistic uncertainty
expressed by the clinical experts, using novel, intelligent models based on fuzzy logic, to
provide an analysis closely comparable to the clinical interpretation made in practice.
The outcome of this stage is an EEG report with qualitative descriptions to complement
the quantitative analysis.
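The idea of translating a measured EEG feature into graded linguistic terms can be sketched with fuzzy membership functions. The term names and frequency breakpoints below are hypothetical illustrations, not the thesis's actual model.

```python
# Illustrative sketch: mapping a quantitative EEG feature (dominant
# background-rhythm frequency, in Hz) to linguistic terms with triangular
# fuzzy membership functions. Terms and breakpoints are invented.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical terms for the dominant rhythm frequency.
TERMS = {
    "slow":   (4.0, 6.0, 8.0),
    "normal": (7.0, 10.0, 13.0),
    "fast":   (12.0, 15.0, 18.0),
}

def describe(freq_hz):
    """Return each term's degree of membership for a measured frequency."""
    return {term: round(triangular(freq_hz, *abc), 2)
            for term, abc in TERMS.items()}
```

A 9 Hz rhythm, for example, would be labelled mostly "normal" with zero membership in "slow" and "fast", mirroring how a clinician's graded vocabulary can sit on top of numeric measurements.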
The system was evaluated using EEG records from 1 patient with Alzheimer's disease
and 2 age-matched normal controls for the factual report, and 3 patients with Alzheimer's
disease and 7 age-matched normal controls for the 'conclusion'. Good agreement was
found between factual reports produced by the system and factual reports produced by
qualified clinicians. Further, the 'conclusion' produced by the system achieved 100%
discrimination between the two subject groups. After a thorough evaluation, the system
should significantly aid the process of EEG interpretation and diagnosis.
Fuzzy Natural Logic in IFSA-EUSFLAT 2021
The present book contains five papers accepted and published in the Special Issue, “Fuzzy Natural Logic in IFSA-EUSFLAT 2021”, of the journal Mathematics (MDPI). These papers are extended versions of the contributions presented in the conference “The 19th World Congress of the International Fuzzy Systems Association and the 12th Conference of the European Society for Fuzzy Logic and Technology jointly with the AGOP, IJCRS, and FQAS conferences”, which took place in Bratislava (Slovakia) from September 19 to September 24, 2021. Fuzzy Natural Logic (FNL) is a system of mathematical fuzzy logic theories that enables us to model natural language terms and rules while accounting for their inherent vagueness and allows us to reason and argue using the tools developed in them. FNL includes, among others, the theory of evaluative linguistic expressions (e.g., small, very large, etc.), the theory of fuzzy and intermediate quantifiers (e.g., most, few, many, etc.), and the theory of fuzzy/linguistic IF–THEN rules and logical inference. The papers in this Special Issue use the various aspects and concepts of FNL mentioned above and apply them to a wide range of problems both theoretically and practically oriented. This book will be of interest for researchers working in the areas of fuzzy logic, applied linguistics, generalized quantifiers, and their applications
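The evaluative expressions and hedges mentioned above can be given a minimal computational sketch in the classic Zadeh style, where "very" concentrates a membership degree and "somewhat" dilates it. This is a simplification for illustration only; FNL's actual theory of evaluative expressions is considerably richer.

```python
# Simplified Zadeh-style sketch of evaluative linguistic expressions with
# hedges. "very" acts as concentration (squaring), "somewhat" as dilation
# (square root). The 'small' predicate and its range are invented.

def small(x, ceiling=10.0):
    """Degree to which x (clipped to [0, ceiling]) counts as 'small'."""
    x = min(max(x, 0.0), ceiling)
    return 1.0 - x / ceiling

def very(mu):
    return mu ** 2      # concentration: sharpens the membership

def somewhat(mu):
    return mu ** 0.5    # dilation: relaxes the membership
```

For instance, a value that is "small" to degree 0.8 is "very small" only to degree 0.64, capturing how hedges modify the strength of a linguistic claim.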
Constructing 3D faces from natural language interface
This thesis presents a system by which 3D images of human faces can be constructed
using a natural language interface. The driving force behind the project was the need to
create a system whereby a machine could produce artistic images from verbal or
composed descriptions. This research is the first to look at constructing and modifying
facial image artwork using a natural language interface.
Specialised modules have been developed to control geometry of 3D polygonal head
models in a commercial modeller from natural language descriptions. These modules
were produced from research on human physiognomy, 3D modelling techniques and
tools, facial modelling and natural language processing. [Continues.]
Intelligent Pattern Analysis of the Foetal Electrocardiogram
The aim of the project on which this thesis is based is to develop reliable techniques for
foetal electrocardiogram (ECG) based monitoring, to reduce incidents of unnecessary
medical intervention and foetal injury during labour. Worldwide, electronic foetal
monitoring is based almost entirely on the cardiotocogram (CTG), which is a continuous
display of the foetal heart rate (FHR) pattern together with the contraction of the womb.
Despite the widespread use of the CTG, there is no significant improvement in foetal
outcome. In the UK alone, it is estimated that birth-related negligence claims cost the health
authorities over £400M per annum. An expert system, known as INFANT, has recently
been developed to assist CTG interpretation. However, the CTG alone does not always
provide all the information required to improve the outcome of labour. The widespread use
of ECG analysis has been hindered by the difficulties with poor signal quality and the
difficulties in applying the specialised knowledge required for interpreting ECG patterns, in
association with other events in labour, in an objective way.
A fundamental investigation and development of optimal signal enhancement techniques
that maximise the available information in the ECG signal, along with different techniques
for detecting individual waveforms from poor quality signals, has been carried out. To
automate the visual interpretation of the ECG waveform, novel techniques have been
developed that allow reliable extraction of key features and hence allow a detailed ECG
waveform analysis. Fuzzy logic is used to automatically classify the ECG waveform shape
using these features by using knowledge that was elicited from expert sources and derived
from example data. This allows the subtle changes in the ECG waveform to be
automatically detected in relation to other events in labour, thus improving the clinician's
position for making an accurate diagnosis. To ensure the interpretation is based on reliable
information and takes place in the proper context, a new and sensitive index for assessing
the quality of the ECG has been developed.
New techniques to capture, for the first time in machine form, the clinical expertise /
guidelines for electronic foetal monitoring have been developed based on fuzzy logic and
finite state machines. The software model provides a flexible framework to further develop
and optimise rules for ECG pattern analysis. The signal enhancement, QRS detection and
pattern recognition of important ECG waveform shapes have had extensive testing and
results are presented. Results show that no significant loss of information is incurred as a
result of the signal enhancement and feature extraction techniques.
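The combination of guidelines and finite state machines described above can be sketched as a small transition table: events derived from the signal drive the monitor between clinical states. The states, events, and transitions here are invented for illustration; the thesis's actual rule base is far richer and couples the state machine with fuzzy logic.

```python
# Minimal sketch of encoding monitoring guidelines as a finite state
# machine. States and events are hypothetical placeholders.

TRANSITIONS = {
    ("normal",     "st_elevation"): "suspicious",
    ("suspicious", "st_elevation"): "abnormal",
    ("suspicious", "normal_beat"):  "normal",
    ("abnormal",   "normal_beat"):  "suspicious",
}

def step(state, event):
    """Advance the monitor state; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

def run(events, state="normal"):
    """Fold a stream of classified waveform events into a final state."""
    for e in events:
        state = step(state, e)
    return state
```

A table-driven design like this makes the guideline rules explicit data, so they can be reviewed, tested, and re-tuned without touching the control logic.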
Hybrid approaches based on computational intelligence and semantic web for distributed situation and context awareness
2011-2012. The research work focuses on Situation Awareness and Context Awareness topics.
Specifically, Situation Awareness involves being aware of what is happening in the vicinity
to understand how information, events, and one’s own actions will impact goals and objectives,
both immediately and in the near future. Thus, Situation Awareness is especially
important in application domains where the information flow can be quite high and poor
decision making may lead to serious consequences.
On the other hand, Context Awareness is considered a process to support user applications
to adapt interfaces, tailor the set of application-relevant data, increase the precision of
information retrieval, discover services, make the user interaction implicit, or build smart
environments.
Despite being slightly different, Situation and Context Awareness involve common
problems, such as: the lack of support for the acquisition and aggregation of dynamic environmental
information from the field (i.e. sensors, cameras, etc.); the lack of formal approaches
to knowledge representation (i.e. contexts, concepts, relations, situations, etc.)
and processing (reasoning, classification, retrieval, discovery, etc.); and the lack of automated
and distributed systems, with considerable computing power, to support reasoning over the
huge quantity of knowledge extracted from sensor data.
Thus, the thesis investigates new approaches for distributed Context and Situation Awareness
and proposes to apply them to achieve related research objectives such
as knowledge representation, semantic reasoning, pattern recognition and information retrieval.
The research work starts from the study and analysis of the state of the art in terms of
techniques, technologies, tools and systems to support Context/Situation Awareness. The
main aim is to develop a new contribution in this field by integrating techniques deriving
from the fields of Semantic Web, Soft Computing and Computational Intelligence. From
an architectural point of view, several frameworks are defined according to the
multi-agent paradigm.
Furthermore, preliminary experimental results have been obtained in application
domains such as Airport Security, Traffic Management, Smart Grids and
Healthcare.
Finally, future work will proceed in the following directions: Semantic Modeling of
Fuzzy Control, Temporal Issues, Automatic Ontology Elicitation, Extension to Other
Application Domains, and Further Experiments. [edited by author]
INTELLIGENT TECHNIQUES FOR HANDLING UNCERTAINTY IN THE ASSESSMENT OF NEONATAL OUTCOME
Objective assessment of the neonatal outcome of labour is important, but it is a difficult
and challenging problem. It is an invaluable source of information which can be used to
provide feedback to clinicians, to audit a unit's overall performance, and can guide subsequent
neonatal care. Current methods are inadequate as they fail to distinguish damage that
occurred during labour from damage that occurred before or after labour. Analysis of the
chemical acid-base status of blood taken from the umbilical cord of an infant immediately
after delivery provides information on any damage suffered by the infant due to lack of oxygen
during labour. However, this process is complex and error prone, and requires expertise
which is not always available on labour wards.
A model of clinical expertise required for the accurate interpretation of umbilical acid-base
status was developed, and encapsulated in a rule-based expert system. This expert system
checks results to ensure their consistency, identifies whether the results come from arterial
or venous vessels, and then produces an interpretation of their meaning. This 'crisp' expert
system was validated, verified and commercially released, and has since been installed at
twenty-two hospitals across the United Kingdom.
The assessment of umbilical acid-base status is characterised by uncertainty in both the basic
data and the knowledge required for its interpretation. Fuzzy logic provides a technique
for representing both these forms of uncertainty in a single framework. A 'preliminary'
fuzzy-logic based expert system to interpret error-free results was developed, based on the
knowledge embedded in the crisp expert system. Its performance was compared against clinicians
in a validation test, and was initially found to be poor in comparison
with the clinicians and inferior to the crisp expert system. An automatic tuning algorithm
was developed to modify the behaviour of the fuzzy model utilised in the expert system.
Sub-normal membership functions were used to weight terms in the fuzzy expert system in
a novel manner. This resulted in an improvement in the performance of the fuzzy expert
system to a level comparable to the clinicians, and superior to the crisp expert system.
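The idea behind sub-normal membership functions, i.e. membership curves whose peak is deliberately scaled below 1 so that a term contributes with reduced weight during inference, can be sketched as follows. The shapes, pH-like breakpoints, and weight value are hypothetical, not taken from the thesis.

```python
# Sketch of sub-normal membership functions: a triangular membership
# curve whose peak is 'weight' (< 1) rather than 1.0, so the term is
# down-weighted in the fuzzy inference. All parameters are invented.

def triangular(x, a, b, c):
    """Standard (normal) triangular membership, peak 1.0 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def subnormal(x, a, b, c, weight):
    """Triangular membership scaled so its peak is 'weight' instead of 1."""
    return weight * triangular(x, a, b, c)
```

Scaling the peak rather than reshaping the support leaves the term's region of applicability unchanged while reducing how strongly it can fire, which is one plausible way such weights can tune an expert system toward expert behaviour.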
Experimental work was carried out to evaluate the imprecision in umbilical cord acid-base
parameters. This information, in conjunction with fresh knowledge elicitation sessions, allowed
the creation of a more comprehensive fuzzy expert system, to validate and interpret
all acid-base data. This 'integrated' fuzzy expert system was tuned using the comparison
data obtained previously, and incorporated vessel identification rules and interpretation rules,
with numeric and linguistic outputs for each. The performance of each of the outputs was
evaluated in a rigorous validation study. This demonstrated excellent agreement with the
experts for the numeric outputs, and agreement on a par with the experts for the linguistic
outputs. The numeric interpretation produced by the fuzzy expert system is a novel single
dimensional measure that accurately represents the severity of acid-base results.
The development of the crisp and fuzzy expert systems represents a major achievement and
constitutes a significant contribution to the assessment of neonatal outcome. Plymouth Postgraduate Medical School
A finder and representation system for knowledge carriers based on granular computing
In one of his publications, Aristotle states "All human beings by their nature desire to know" [Kraut 1991]. This desire is initiated the day we are born and accompanies us for the rest of our life. While at a young age our parents serve as one of the principal sources of knowledge, this changes over the course of time. Technological advances, and particularly the introduction of the Internet, have given us new possibilities to share and access knowledge from almost anywhere at any given time. Being able to access and share large collections of written-down knowledge is only one part of the equation. Just as important is its internalization, which in many cases can prove difficult to accomplish. Hence, being able to request assistance from someone who holds the necessary knowledge is of great importance, as it can positively stimulate the internalization procedure. However, digitalization does not only provide a larger pool of knowledge sources to choose from but also more people that can potentially be activated in a bid to receive personalized assistance with a given problem statement or question. While this is beneficial, it raises the issue that it is hard to keep track of who knows what. For this task, so-called Expert Finder Systems have been introduced, which are designed to identify and suggest the most suitable candidates to provide assistance. Throughout this Ph.D. thesis, a novel type of Expert Finder System is introduced that is capable of capturing the knowledge users within a community hold, from explicit and implicit data sources. This is accomplished with the use of granular computing, natural language processing and a set of metrics that have been introduced to measure and compare the suitability of candidates. Furthermore, the knowledge requirements of a problem statement or question are assessed, in order to ensure that only the most suitable candidates are recommended to provide assistance.
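The core matching step of an expert finder, scoring candidates against the knowledge a question requires, can be sketched with a simple weighted term overlap. This stand-in metric and the example profiles are invented for illustration; the thesis's granular-computing approach is more elaborate.

```python
# Toy sketch of ranking candidate experts by the overlap between the
# knowledge terms a question requires and each user's weighted profile.
# The metric and data are hypothetical placeholders.

def suitability(question_terms, profile):
    """Sum the profile's weights for the terms the question requires."""
    return sum(profile.get(t, 0.0) for t in question_terms)

def rank(question_terms, profiles):
    """Return user names sorted from most to least suitable."""
    scores = {user: suitability(question_terms, p)
              for user, p in profiles.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

With profiles like `{"ada": {"fuzzy": 0.9, "eeg": 0.4}, "bob": {"fuzzy": 0.2}}`, a question tagged with "fuzzy" and "eeg" would rank "ada" first, illustrating how per-term weights let the system compare candidates rather than merely filter them.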