Fuzzy Sets, Fuzzy Logic and Their Applications
The present book contains 20 articles selected from the 53 manuscripts submitted to the Special Issue “Fuzzy Sets, Fuzzy Logic and Their Applications” of the MDPI journal Mathematics. The articles, which appear in the book in the order in which they were accepted and were published in Volumes 7 (2019) and 8 (2020) of the journal, cover a wide range of topics connected to the theory and applications of fuzzy systems and their extensions and generalizations. This range includes, among others, management of uncertainty in a fuzzy environment; fuzzy assessment methods of human-machine performance; fuzzy graphs; fuzzy topological and convergence spaces; bipolar fuzzy relations; type-2 fuzzy sets; and intuitionistic, interval-valued, complex, picture, and Pythagorean fuzzy sets, soft sets, and algebras. The applications presented are oriented to finance, the fuzzy analytic hierarchy process, green supply chain industries, smart health practice, and hotel selection. This wide range of topics makes the book interesting for all those working in the wider area of fuzzy sets, fuzzy systems, and fuzzy logic, and for those with the proper mathematical background who wish to become familiar with recent advances in fuzzy mathematics, which has entered almost all sectors of human life and activity.
Evolutionary Computation
This book presents several recent advances in evolutionary computation, especially evolution-based optimization methods and hybrid algorithms for several applications, from optimization and learning to pattern recognition and bioinformatics. It also presents new algorithms based on several analogies and metaphors, one of which draws on philosophy, specifically the philosophy of praxis and dialectics. In addition, the book presents interesting applications in bioinformatics, especially the use of particle swarms to discover gene expression patterns in DNA microarrays. The book therefore features representative work in the field of evolutionary computation and the applied sciences. The intended audience is graduate and undergraduate students, researchers, and anyone who wishes to become familiar with the latest research in this field.
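The core idea of evolution-based optimization can be illustrated with a minimal sketch. The following (1+1) evolution strategy is a generic textbook example, not an algorithm from the book; the fitness function and parameters are hypothetical:

```python
import random

def evolve(fitness, x0, sigma=0.5, generations=200, seed=42):
    """Minimal (1+1) evolution strategy: mutate the current solution with
    Gaussian noise and keep the mutant only if it is at least as fit
    (lower fitness is better)."""
    rng = random.Random(seed)
    parent = list(x0)
    best = fitness(parent)
    for _ in range(generations):
        child = [g + rng.gauss(0.0, sigma) for g in parent]  # mutation
        f = fitness(child)
        if f <= best:  # selection: keep the better of parent and child
            parent, best = child, f
    return parent, best

# Example: minimize the sphere function f(x) = sum(x_i^2)
solution, value = evolve(lambda x: sum(g * g for g in x), [3.0, -2.0])
```

Real evolutionary algorithms add populations, recombination, and adaptive mutation rates, but the mutate-evaluate-select loop above is the common skeleton.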
The Fuzziness in Molecular, Supramolecular, and Systems Chemistry
Fuzzy logic is a good model for the human ability to compute with words. It is based on the theory of fuzzy sets. A fuzzy set differs from a classical set in that it breaks the Law of the Excluded Middle: an item may belong to a fuzzy set and to its complement at the same time, with the same or different degrees of membership. The degree of membership of an item in a fuzzy set can be any real number between 0 and 1. This property enables us to deal with all those statements whose truth is a matter of degree. Fuzzy logic plays a relevant role in the field of Artificial Intelligence because it enables decision-making in complex situations involving many intertwined variables. Traditionally, fuzzy logic is implemented through software on a computer or, even better, through analog electronic circuits. Recently, the idea of using molecules and chemical reactions to process fuzzy logic has been promoted. In fact, the molecular world is fuzzy in its essence. The overlapping of quantum states, on the one hand, and the conformational heterogeneity of large molecules, on the other, enable context-specific functions to emerge in response to changing environmental conditions. Moreover, analog input–output relationships, involving not only electrical but also other physical and chemical variables, can be exploited to build fuzzy logic systems. The development of “fuzzy chemical systems” is tracing a new path in the field of artificial intelligence. This new path shows that artificially intelligent systems can be implemented not only through software and electronic circuits but also through solutions of properly chosen chemical compounds. The design of chemical artificial intelligence systems and chemical robots promises to have a significant impact on science, medicine, the economy, security, and wellbeing.
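The fuzzy-set properties described above can be made concrete with a small sketch. The membership function and its thresholds below are illustrative choices, not taken from the text:

```python
def tall_membership(height_cm):
    """Fuzzy membership in the set 'tall': ramps linearly from 0 at
    160 cm to 1 at 190 cm (illustrative thresholds)."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30

h = 175
mu = tall_membership(h)      # degree of membership in 'tall'
mu_not = 1.0 - mu            # degree of membership in the complement
# Unlike a classical set, both degrees can be positive at once, so the
# Law of the Excluded Middle no longer holds:
both = min(mu, mu_not)       # membership in 'tall AND not-tall' > 0
```

For a height of 175 cm the item belongs to ‘tall’ and to ‘not tall’ with degree 0.5 each, which is exactly the behaviour a classical set forbids.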
Therefore, it is my great pleasure to announce a Special Issue of Molecules entitled “The Fuzziness in Molecular, Supramolecular, and Systems Chemistry.” All researchers who experience the fuzziness of the molecular world or use fuzzy logic to understand complex chemical systems will be interested in this book.
Target recognition techniques for multifunction phased array radar
This thesis, submitted for the degree of Doctor of Philosophy at University College London, is a
discussion and analysis of combined stepped-frequency and pulse-Doppler target recognition methods
which enable a multifunction phased array radar designed for automatic surveillance and multi-target
tracking to offer a Non-Cooperative Target Recognition (NCTR) capability. The primary challenge
is to investigate the feasibility of NCTR via the use of high range resolution profiles. Given that stepped-frequency waveforms effectively trade time for enhanced bandwidth, and thus resolution, attention is paid to the design of a compromise between resolution and dwell time. A secondary challenge is to
investigate the additional benefits to overall target classification when the number of coherent pulses
within an NCTR waveform is expanded to enable the extraction of spectral features which can help
to differentiate particular classes of target. As with increased range resolution, the price for this extra
information is a further increase in dwell time. The response to the primary and secondary challenges
described above has involved the development of a number of novel techniques, which are summarized
below:
• Design and execution of a series of experiments to further the understanding of multifunction phased array radar NCTR techniques
• Development of a ‘Hybrid’ stepped frequency technique which enables a significant extension
of range profiles without the proportional trade in resolution as experienced with ‘Classical’
techniques
• Development of an ‘end to end’ NCTR processing and visualization pipeline
• Use of ‘Doppler fraction’ spectral features to enable aircraft target classification via propulsion
mechanism. Combination of Doppler fraction and physical length features to enable broad
aircraft type classification.
• Optimization of NCTR method classification performance as a function of feature and waveform
parameters.
• Generic waveform design tools to enable delivery of time costly NCTR waveforms within operational
constraints.
The thesis is largely based upon an analysis of experimental results obtained using the multifunction
phased array radar MESAR2, based at BAE Systems on the Isle of Wight. The NCTR
mode of MESAR2 consists of the transmission and reception of successive multi-pulse coherent bursts
upon each target being tracked. Each burst is stepped in frequency resulting in an overall bandwidth
sufficient to provide sub-metre range resolution. A sequence of experiments (static trials, moving point-target trials, and full aircraft trials) is described, and an analysis of the robustness of target length and Doppler spectra feature measurements from NCTR mode data recordings is presented. A
recorded data archive of 1498 NCTR looks upon 17 different trials aircraft using five different varieties
of stepped frequency waveform is used to determine classification performance as a function of
various signal processing parameters and extent (numbers of pulses) of the data used. From analysis
of the trials data, recommendations are made with regards to the design of an NCTR mode for an
operational system that uses stepped frequency techniques by design choice.
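The time/bandwidth trade at the heart of the abstract can be sketched with the standard range-resolution relation, ΔR = c / 2B. All the waveform parameters below are hypothetical round numbers, not MESAR2 values:

```python
# Illustrative arithmetic for the stepped-frequency trade between dwell
# time and range resolution (hypothetical parameters, not MESAR2's).
C = 3e8  # speed of light, m/s

def range_resolution(total_bandwidth_hz):
    """Range resolution of a coherently processed waveform: c / (2B)."""
    return C / (2.0 * total_bandwidth_hz)

n_steps = 32           # number of frequency-stepped bursts
step_hz = 10e6         # frequency increment per burst
pulses_per_burst = 16  # coherent pulses in each burst
pri_s = 1e-3           # pulse repetition interval, seconds

bandwidth = n_steps * step_hz                  # synthesized bandwidth: 320 MHz
resolution_m = range_resolution(bandwidth)     # ~0.47 m: sub-metre
dwell_s = n_steps * pulses_per_burst * pri_s   # 0.512 s spent on one target
```

Doubling the number of steps would halve the resolution cell but also double the dwell time, which is exactly the compromise the thesis investigates.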
Computational Approaches to Drug Profiling and Drug-Protein Interactions
Despite substantial increases in R&D spending within the pharmaceutical industry, de novo drug design has become a time-consuming endeavour. High attrition rates have led to a long period of stagnation in drug approvals. Due to the extreme costs associated with
introducing a drug to the market, locating and understanding the reasons for clinical failure
is key to future productivity. As part of this PhD, three main contributions were made in
this respect. First, the web platform LigNFam enables users to interactively explore similarity relationships between ‘drug-like’ molecules and the proteins they bind. Secondly,
two deep-learning-based binding site comparison tools were developed, competing with
the state-of-the-art over benchmark datasets. The models have the ability to predict off-target interactions and potential candidates for target-based drug repurposing. Finally, the
open-source ScaffoldGraph software was presented for the analysis of hierarchical scaffold
relationships and has already been used in multiple projects, including integration into a
virtual screening pipeline to increase the tractability of ultra-large screening experiments.
Together, and with existing tools, the contributions made will aid in the understanding of drug-protein relationships, particularly in the fields of off-target prediction and drug repurposing, helping to design better drugs faster.
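A standard way to score the kind of molecular similarity a platform like LigNFam explores is the Tanimoto coefficient on binary fingerprints. The sketch below is a generic illustration with made-up fingerprints; the abstract does not say which similarity measure the platform uses:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity between two binary fingerprints,
    represented as sets of 'on' bit positions: |A ∩ B| / |A ∪ B|."""
    a, b = set(fp_a), set(fp_b)
    if not a and not b:
        return 1.0  # two empty fingerprints are trivially identical
    return len(a & b) / len(a | b)

# Hypothetical fingerprints for two 'drug-like' molecules
mol1 = {1, 4, 7, 9, 12}
mol2 = {1, 4, 8, 9, 13}
score = tanimoto(mol1, mol2)  # 3 shared bits / 7 distinct bits ≈ 0.43
```

In practice the fingerprints come from a cheminformatics toolkit (e.g. circular fingerprints over a SMILES string), but the similarity arithmetic is the same.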
Methods for Analysing Endothelial Cell Shape and Behaviour in Relation to the Focal Nature of Atherosclerosis
The aim of this thesis is to develop automated methods for the analysis of the spatial patterns and the functional behaviour of endothelial cells viewed under microscopy, with applications to the understanding of atherosclerosis.
Initially, a radial search approach to segmentation was attempted in order to
trace the cell and nuclei boundaries using a maximum likelihood algorithm; it
was found inadequate to detect the weak cell boundaries present in the available
data. A parametric cell shape model was then introduced to fit an equivalent
ellipse to the cell boundary by matching phase-invariant orientation fields of the
image and a candidate cell shape. This approach succeeded on good quality
images, but failed on images with weak cell boundaries. Finally, a support
vector machines based method, relying on a rich set of visual features, and a
small but high quality training dataset, was found to work well on large numbers
of cells even in the presence of strong intensity variations and imaging noise.
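The ‘equivalent ellipse’ idea above can be sketched with second-order image moments, a standard moment-based construction; this is an illustration only, not the thesis's phase-invariant orientation-field matching, and the point set is a toy example:

```python
import math

def equivalent_ellipse_angle(points):
    """Major-axis orientation (radians) of the equivalent ellipse of a
    point set, computed from second central moments."""
    n = len(points)
    cx = sum(x for x, _ in points) / n  # centroid
    cy = sum(y for _, y in points) / n
    mxx = sum((x - cx) ** 2 for x, _ in points) / n
    myy = sum((y - cy) ** 2 for _, y in points) / n
    mxy = sum((x - cx) * (y - cy) for x, y in points) / n
    # Orientation of the ellipse sharing these second moments
    return 0.5 * math.atan2(2.0 * mxy, mxx - myy)

# An elongated 'cell' mask aligned with the x-axis
cell = [(x, y) for x in range(10) for y in range(3)]
angle = equivalent_ellipse_angle(cell)  # ~0.0: major axis along x
```

The same moments also give the equivalent ellipse's axis lengths, and hence the length-to-width ratio studied later in the thesis.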
Using the segmentation results, several standard shear-stress-dependent parameters of cell morphology were studied, and evidence for similar behaviour in some cell shape parameters was obtained in in vivo cells and their nuclei.
Nuclear and cell orientations around immature and mature aortas were broadly
similar, suggesting that the pattern of flow direction near the wall stayed approximately
constant with age. The relation was less strong for the cell and
nuclear length-to-width ratios.
Two novel shape analysis approaches were attempted to find other properties
of cell shape which could be used to annotate or characterise patterns, since a
wide variability in cell and nuclear shapes was observed which did not appear
to fit the standard parameterisations. Although no firm conclusions can yet be
drawn, the work lays the foundation for future studies of cell morphology.
To draw inferences about patterns in the functional response of cells to flow, which may play a role in the progression of disease, single-cell analysis was performed using calcium-sensitive fluorescence probes. Calcium transient rates were found to change with flow, but more importantly, local patterns of synchronisation in multi-cellular groups were discernible and appear to change with flow. The patterns suggest a new functional mechanism in flow-mediation of cell-cell calcium signalling.
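One simple way to quantify the kind of cell-cell synchronisation described above is the Pearson correlation between two cells' calcium traces. The traces below are made up, and the abstract does not state which synchronisation measure was actually used:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equally sampled traces."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical binarised calcium transient trains for three cells:
# cell_b fires in step with cell_a, cell_c does not.
cell_a = [0, 1, 0, 0, 1, 0, 1, 0]
cell_b = [0, 1, 0, 0, 1, 0, 1, 0]
cell_c = [1, 0, 0, 1, 0, 1, 0, 0]
sync_ab = pearson(cell_a, cell_b)  # 1.0: perfectly synchronised
sync_ac = pearson(cell_a, cell_c)  # negative: anti-phase firing
```

Applied pairwise across a multi-cellular group, such a score highlights locally synchronised clusters whose extent can then be compared across flow conditions.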
Stories within Immersive Virtual Environments
How can we use immersive and interactive technologies to portray stories? How can we take advantage of the fact that within immersive virtual environments people tend to respond realistically to virtual situations and events to develop narrative content? Stories in such a medium would allow the participant to contribute to the story and interact with the virtual characters, while the narrative plot would not change, or would change only within the limits decided a priori. Participants in such a narrative would be able to freely interact within the virtual environments and yet still be aware of the main thrust of the stories presented. How can we preserve the ‘respond as if it is real’ phenomenon induced by these technologies, but also develop an unfolding plot in this environment? In other words, can we develop a story, conserving its structure, its psychological and cultural richness, and the emotional and cognitive involvement it supposes, in an interactive and immersive audiovisual space? In recent years, Virtual Reality therapy has shown that an Immersive Virtual Environment (IVE) with a predetermined plot can be experienced as an interactive narrative. For example, in the context of Post Traumatic Stress Disorder treatment, the reactions of the participants and the therapeutic impact suggest that an IVE is a qualitatively different experience from classical audiovisual content. However, the methods to develop such content are not systematic, and the consistency of the experience is only guaranteed by a therapist or operator controlling the unfolding narrative in real time. Can a story with a strong classical plot be rendered in an automated and interactive immersive virtual environment?
Preliminaries of orthogonal layered defence using functional and assurance controls in industrial control systems
Industrial Control Systems (ICSs) are responsible for the automation of different processes and the overall control of systems that include highly sensitive potential targets such as nuclear facilities, energy-distribution, water-supply, and mass-transit systems. The increased complexity and rapid evolution of their threat landscape, and the fact that these systems form part of the Critical National Infrastructure (CNI), make them an emerging domain of conflict and terrorist attacks, and a playground for cyber-exploitation. Existing layered-defence approaches are increasingly criticised for their inability to adequately protect against resourceful and persistent adversaries. It is therefore essential that emerging techniques, such as orthogonality, be combined with existing security strategies to leverage defence advantages against adaptive and often asymmetrical attack vectors. The concept of orthogonality is relatively new and unexplored in an ICS environment; it consists of having an assurance control as well as a functional control at each layer. Our work seeks to partially articulate a framework in which multiple functional and assurance controls are introduced at each layer of an ICS architectural design to further enhance security while maintaining critical real-time transfer of command-and-control traffic.
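The orthogonality idea — pairing an assurance control with a functional control at every layer — can be sketched as a simple invariant over a layered architecture. The layer names and controls below are hypothetical illustrations, not the paper's framework:

```python
# Hypothetical three-layer ICS defence model: each layer pairs a
# functional control (does the protecting) with an assurance control
# (verifies the protection is still working).
layers = {
    "network":     {"functional": "firewall rules",
                    "assurance":  "traffic-log audit"},
    "supervisory": {"functional": "role-based access control",
                    "assurance":  "command-integrity checks"},
    "field":       {"functional": "safety interlocks",
                    "assurance":  "sensor cross-validation"},
}

def is_orthogonal(defence):
    """The orthogonality invariant: every layer must carry both a
    functional and an assurance control."""
    return all("functional" in controls and "assurance" in controls
               for controls in defence.values())

ok = is_orthogonal(layers)  # True for the model above
```

A classical defence-in-depth stack would fail this check wherever a layer relies on functional controls alone, which is exactly the gap the orthogonal approach targets.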
Computational Analysis of Fundus Images: Rule-Based and Scale-Space Models
Fundus images are one of the most important imaging examinations in modern ophthalmology because they are simple, inexpensive and, above all, noninvasive. Nowadays, the acquisition and storage of high-resolution fundus images is relatively easy and fast. Therefore, fundus imaging has become a fundamental investigation in retinal lesion detection, ocular health monitoring and screening programmes. Given the large volume and clinical complexity associated with these images, their analysis and interpretation by trained clinicians becomes a time-consuming task and is prone to human error. Therefore, there is a growing interest in developing automated approaches that are affordable and have high sensitivity and specificity. These automated approaches need to be robust if they are to be used in the general population to diagnose and track retinal diseases. To be effective, the automated systems must be able to recognize normal structures and distinguish them from pathological clinical manifestations.
The main objective of the research leading to this thesis was to develop automated systems capable of recognizing and segmenting retinal anatomical structures and the retinal pathological clinical manifestations associated with the most common retinal diseases. In particular, these automated algorithms were developed on the premise of robustness and efficiency to deal with the difficulties and complexity inherent in these images. Four objectives were considered in the analysis of fundus images: segmentation of exudates; localization of the optic disc; detection of the midline of blood vessels and segmentation of the vascular network; and detection of microaneurysms. In addition, we also evaluated the detection of diabetic retinopathy on fundus images using the microaneurysm detection method. An overview of the state of the art is presented to compare the performance of the developed approaches with the main methods described in the literature for each of the previously described objectives. To facilitate the comparison of methods, the state of the art has been divided into rule-based methods and machine-learning-based methods. In the research reported in this thesis, rule-based methods built on image processing were preferred over machine-learning-based methods. In particular, scale-space methods proved to be effective in achieving the set goals.
Two different approaches to exudate segmentation were developed. The first approach is based on scale-space curvature in combination with the local maximum of a scale-space blob detector and dynamic thresholds. The second approach is based on the analysis of the distribution function of the maximum values of the noise map in combination with morphological operators and adaptive thresholds. Both approaches perform a correct segmentation of the exudates and cope well with the uneven illumination and contrast variations in the fundus images.
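The role an adaptive threshold plays under uneven illumination can be shown with a minimal sketch. This mean-based local threshold is a generic illustration, not the thesis's exact method, and the image values are made up:

```python
def adaptive_threshold(img, window=1, offset=0.0):
    """Mark pixels brighter than the mean of their local
    (2*window+1)^2 neighbourhood plus an offset — a minimal sketch of
    the kind of dynamic threshold that copes with illumination drift,
    where a single global threshold would fail."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[a][b]
                    for a in range(max(0, i - window), min(h, i + window + 1))
                    for b in range(max(0, j - window), min(w, j + window + 1))]
            if img[i][j] > sum(vals) / len(vals) + offset:
                out[i][j] = 1
    return out

# A bright 'exudate' (90) on a background whose brightness drifts
# from left (10) to right (40): a global threshold above 40 would work
# here only because the lesion is extreme; the local rule needs no tuning.
img = [[10, 20, 30, 40],
       [10, 20, 90, 40],
       [10, 20, 30, 40]]
mask = adaptive_threshold(img, window=1, offset=5.0)
```

Only the lesion pixel exceeds its local mean by more than the offset, so the mask contains a single positive pixel despite the four-fold background drift.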
Optic disc localization was achieved using a new technique called cumulative sum fields, which was combined with a vascular enhancement method. The algorithm proved to be reliable and efficient, especially for pathological images. The robustness of the method was tested on 8 datasets. The detection of the midline of the blood vessels was achieved using a modified corner detector in combination with binary filters and dynamic thresholding. Segmentation of the vascular network was achieved using a new scale-space blood vessel enhancement method. The developed methods have proven effective in detecting the midline of blood vessels and segmenting vascular networks.
The microaneurysm detection method relies on a scale-space microaneurysm detection and labelling system. A new approach based on the neighbourhood of the microaneurysms was used for labelling. Microaneurysm detection enabled the assessment of diabetic retinopathy detection. The microaneurysm detection method proved to be competitive with other methods, especially with high-resolution images. Diabetic retinopathy detection with the developed microaneurysm detection method showed similar performance to other methods and human experts.
The results of this work show that it is possible to develop reliable and robust scale-space methods that can detect various anatomical structures and pathological features of the retina. Furthermore, the results obtained in this work show that although recent research has focused on machine learning methods, scale-space methods can achieve very competitive results and typically have greater independence from image acquisition. The methods developed in this work may also be relevant for the future definition of new descriptors and features that can significantly improve the results
of automated methods.