A Novel Retinal Blood Vessel Segmentation Algorithm Using Fuzzy Segmentation
Assessment of blood vessels in retinal images is an important factor in diagnosing many medical disorders. Changes in the retinal vessels caused by pathologies can be readily identified by segmenting the vessels, supporting early diagnosis of diseases such as glaucoma, diabetic retinopathy, macular degeneration, hypertensive retinopathy, and arteriosclerosis. In this paper, we propose an automatic blood vessel segmentation method. The proposed algorithm starts by extracting blood vessel centerline pixels. The blood vessels are enhanced using modified morphological operations, and salt-and-pepper noise is removed from the retinal images using an Adaptive Fuzzy Switching Median filter. The final segmentation is obtained using an iterative region-growing method that merges the binary images produced by the centerline detection stage with the image produced by the fuzzy vessel segmentation stage. The method is applied to two publicly available databases, DRIVE and STARE; experimental results obtained on the green-channel images are presented and compared with recently published methods. The results demonstrate that our algorithm is an effective method for detecting retinal blood vessels. DOI: http://dx.doi.org/10.11591/ijece.v4i4.625
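A minimal sketch of the pipeline outlined in the abstract, with stand-ins for the paper's operators: a plain median filter replaces the Adaptive Fuzzy Switching Median filter, a standard black top-hat replaces the modified morphological enhancement, and connected-component size filtering crudely approximates the iterative region-growing/merging step. The threshold and sizes are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def segment_vessels(green_channel, selem_size=5, thresh=10, min_size=20):
    """Rough vessel segmentation on the green channel of a fundus image."""
    # 1. Suppress salt-and-pepper noise (stand-in for the AFSM filter).
    denoised = ndimage.median_filter(green_channel, size=3)
    # 2. Enhance dark, elongated structures: the black top-hat highlights
    #    vessels, which are darker than the retinal background.
    enhanced = ndimage.black_tophat(denoised, size=selem_size)
    # 3. Binarise the enhanced image.
    binary = enhanced > thresh
    # 4. Keep connected components large enough to be vessel segments,
    #    a crude proxy for the iterative region-growing/merging step.
    labels, n = ndimage.label(binary)
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    return np.isin(labels, 1 + np.flatnonzero(sizes >= min_size))

# Synthetic example: a dark "vessel" line on a bright background.
img = np.full((64, 64), 200, dtype=np.uint8)
img[30:33, 5:60] = 120            # dark elongated structure
mask = segment_vessels(img)
print(mask[31, 30], mask[5, 5])   # vessel pixel True, background False
```

On a real fundus image the green channel would be taken as `img[:, :, 1]`, and the threshold would typically be tuned per database rather than fixed.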
Proceedings of the Second Joint Technology Workshop on Neural Networks and Fuzzy Logic, volume 2
Documented here are papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by NASA and the University of Texas, Houston. Topics addressed included adaptive systems, learning algorithms, network architectures, vision, robotics, neurobiological connections, speech recognition and synthesis, fuzzy set theory and application, control and dynamics processing, space applications, fuzzy logic and neural network computers, approximate reasoning, and multi-object decision making.
A survey on online active learning
Online active learning is a paradigm in machine learning that aims to select the most informative data points to label from a data stream. The problem of minimizing the cost of collecting labeled observations has gained considerable attention in recent years, particularly in real-world applications where data arrives only in unlabeled form. Annotating each observation can be time-consuming and costly, making it difficult to obtain large amounts of labeled data. To overcome this issue, many active learning strategies have been proposed over recent decades, aiming to select the most informative observations for labeling in order to improve the performance of machine learning models. These approaches can be broadly divided into two categories: static pool-based and stream-based active learning. Pool-based active learning involves selecting a subset of observations from a closed pool of unlabeled data, and it has been the focus of many surveys and literature reviews. However, the growing availability of data streams has led to an increase in the number of approaches that focus on online active learning, which involves continuously selecting and labeling observations as they arrive in a stream. This work provides an overview of the most recently proposed approaches for selecting the most informative observations from data streams in the context of online active learning. We review the various techniques that have been proposed and discuss their strengths and limitations, as well as the challenges and opportunities that exist in this area of research. Our review aims to provide a comprehensive and up-to-date overview of the field and to highlight directions for future work.
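The pool-based/stream-based distinction above can be made concrete with a toy sketch of stream-based active learning under a fixed uncertainty threshold: an online logistic model sees instances one at a time and pays for a label only when its prediction is uncertain. The model, threshold, and synthetic stream are illustrative assumptions, not any specific method from the surveyed literature.

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class StreamActiveLearner:
    def __init__(self, dim, lr=0.5, threshold=0.2):
        self.w = [0.0] * dim
        self.lr = lr
        self.threshold = threshold   # query when |p - 0.5| < threshold
        self.queries = 0

    def predict_proba(self, x):
        return sigmoid(sum(wi * xi for wi, xi in zip(self.w, x)))

    def observe(self, x, oracle):
        """Process one stream instance; request a label only if uncertain."""
        p = self.predict_proba(x)
        if abs(p - 0.5) < self.threshold:        # uncertain -> ask the oracle
            y = oracle(x)
            self.queries += 1
            g = p - y                            # logistic-loss gradient
            self.w = [wi - self.lr * g * xi for wi, xi in zip(self.w, x)]

random.seed(0)
oracle = lambda x: 1 if x[0] + x[1] > 0 else 0   # hidden true concept
learner = StreamActiveLearner(dim=2)
stream = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(500)]
for x in stream:
    learner.observe(x, oracle)

# Labels were requested for only a fraction of the stream.
print(learner.queries, "labels out of", len(stream))
```

A pool-based strategy would instead rank the whole unlabeled pool and pick the top candidates; the stream setting forces an irrevocable query-or-discard decision per instance, which is what the surveyed approaches address.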
Artificial Intelligence Application in Machine Condition Monitoring and Fault Diagnosis
The subject of machine condition monitoring and fault diagnosis, as a part of system maintenance, has gained considerable interest due to the potential benefits of reduced maintenance budgets, enhanced productivity, and improved machine availability. Artificial intelligence (AI) is a successful approach to machine condition monitoring and fault diagnosis, since these techniques can serve as tools for routine maintenance. This chapter summarizes and reviews recent research and developments in signal analysis through artificial intelligence for machine condition monitoring and fault diagnosis. Many different methods based on intelligent systems such as artificial neural networks (ANN), fuzzy logic systems (FLS), genetic algorithms (GA), and support vector machines (SVM) have previously been developed. However, the use of acoustic emission (AE) signal analysis together with AI techniques for machine condition monitoring and fault diagnosis is still rare. In the future, applications of AI in machine condition monitoring and fault diagnosis will need further encouragement and attention due to this gap in the literature.
Machine learning methods for sign language recognition: a critical review and analysis.
Sign language is an essential tool for bridging the communication gap between hearing and hearing-impaired people. However, the diversity of over 7000 present-day sign languages, with variability in motion, hand shape, and the position of body parts, makes automatic sign language recognition (ASLR) a complex task. To overcome this complexity, researchers are investigating better ways of developing ASLR systems, seeking intelligent solutions, and have demonstrated remarkable success. This paper analyses the research published on intelligent systems in sign language recognition over the past two decades. A total of 649 publications related to decision support and intelligent systems for sign language recognition (SLR) are extracted from the Scopus database and analysed. The extracted publications are analysed using the bibliometric VOSviewer software to (1) obtain the publications' temporal and regional distributions and (2) create cooperation networks between affiliations and authors and identify productive institutions in this context. Moreover, reviews of techniques for vision-based sign language recognition are presented, and the various feature extraction and classification techniques used in SLR to achieve good results are discussed. The literature review presented in this paper shows the importance of incorporating intelligent solutions into sign language recognition systems and reveals that a fully satisfactory intelligent system for sign language recognition remains an open problem. Overall, it is expected that this study will facilitate knowledge accumulation and the creation of intelligent SLR systems, and will provide readers, researchers, and practitioners with a roadmap to guide future directions.
Increasing the robustness of autonomous systems to hardware degradation using machine learning
Autonomous systems perform predetermined tasks (missions) with minimal supervision. In most applications, the state of the world changes over time, and sensors are employed to measure part or all of that state. However, sensors often fail during operation, thereby feeding decision-making with incorrect information about the world. Moreover, hardware degradation may alter the dynamic behaviour, and consequently the capabilities, of an autonomous system, rendering the original mission infeasible. This thesis applies machine learning to yield powerful and robust tools that can facilitate autonomy in modern systems. Incremental kernel regression is used for dynamic modelling; algorithms of this sort are easy to train and highly adaptive, allowing the model to be adjusted whenever the environment of operation changes. Bayesian reasoning provides a rigorous framework for addressing uncertainty, and, using Bayesian networks, complex inference questions regarding hardware degradation can be answered. Specifically, adaptive modelling is combined with Bayesian reasoning to yield recursive estimation algorithms that are robust to sensor failures. Two solutions are presented by extending existing recursive estimation algorithms from the robotics literature. The algorithms are deployed on an underwater vehicle and their performance is assessed in real-world experiments; a comparison against standard filters is also provided. Next, the previous algorithms are extended to consider sensor and actuator failures jointly. An algorithm that can detect thruster failures in an autonomous underwater vehicle has been developed; it also adapts the dynamic model online to compensate for the detected fault. The performance of this algorithm was likewise tested in a real-world application. Going one step further than hardware fault detection, prognostics predict how much longer a particular hardware component can operate normally.
Ubiquitous sensors in modern systems make data-driven prognostics a viable solution. However, training is based on skewed datasets: datasets in which samples from the faulty region of operation are far fewer than those from the healthy region of operation. This thesis presents a prognostic algorithm that tackles the problem of such imbalanced (skewed) datasets.
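The thesis's prognostic algorithm is not reproduced here; the sketch below only illustrates the class-imbalance problem it targets and one common, generic mitigation, random oversampling of the minority (faulty) class so that a learner sees both regimes at comparable rates. The data and function names are hypothetical.

```python
import random

def oversample_minority(samples, labels, minority_label=1, seed=0):
    """Duplicate minority-class samples until both classes are equal in size."""
    rng = random.Random(seed)
    minority = [(x, y) for x, y in zip(samples, labels) if y == minority_label]
    majority = [(x, y) for x, y in zip(samples, labels) if y != minority_label]
    # Draw enough random duplicates of minority samples to close the gap.
    deficit = len(majority) - len(minority)
    extra = [rng.choice(minority) for _ in range(deficit)]
    balanced = majority + minority + extra
    rng.shuffle(balanced)
    xs, ys = zip(*balanced)
    return list(xs), list(ys)

# Skewed toy data: 95 "healthy" readings, 5 "faulty" ones.
readings = [0.1] * 95 + [0.9] * 5
states   = [0] * 95 + [1] * 5
xs, ys = oversample_minority(readings, states)
print(sum(ys), "faulty of", len(ys))   # classes now balanced: 95 of 190
```

Alternatives in the same spirit include undersampling the healthy class or weighting the loss per class; which is appropriate depends on how scarce the faulty samples are.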
Character Recognition
Character recognition is one of the pattern recognition technologies most widely used in practical applications. This book presents recent advances relevant to character recognition, from technical topics such as image processing, feature extraction, and classification, to new applications including human-computer interfaces. The goal of this book is to provide a reference source for academic research and for professionals working in the character recognition field.