Forecasting bus passenger flows by using a clustering-based support vector regression approach
As a significant component of the intelligent transportation system, forecasting bus passenger
flows plays a key role in resource allocation, network planning, and frequency setting. However, it remains
challenging to capture the high fluctuations, nonlinearity, and periodicity of bus passenger flows caused by
varied destinations and departure times. For this reason, a novel forecasting model, affinity
propagation-based support vector regression (AP-SVR), is proposed based on clustering and nonlinear
simulation. In the proposed approach, a clustering algorithm is first used to generate clustering-based
intervals. A support vector regression (SVR) is then exploited to forecast the passenger flow for each
cluster, using particle swarm optimization (PSO) to obtain the optimized parameters. Finally,
the SVR predictions for all clusters are rearranged into chronological order. The proposed model
is tested using real bus passenger data from a bus line over four months. Experimental results demonstrate
that the proposed model performs better than other peer models in terms of absolute percentage error and
mean absolute percentage error. The results indicate that a deterministic clustering technique with stable
cluster results (AP) can significantly improve forecasting performance.
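The three-stage pipeline described above can be sketched with off-the-shelf components. In this minimal sketch, synthetic passenger-flow data stand in for the paper's bus-line records, scikit-learn's AffinityPropagation and SVR implement the clustering and regression stages, and the PSO parameter search is replaced by fixed SVR hyperparameters for brevity; everything here is illustrative, not the authors' code.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.svm import SVR

rng = np.random.default_rng(0)
t = np.arange(200)                                   # chronological trip index
flow = 100 + 30 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 5, t.size)

# Stage 1: affinity propagation groups similar flow records.
X = np.column_stack([t % 24, flow])                  # hour-of-day + flow level
labels = AffinityPropagation(random_state=0).fit_predict(X)

# Stage 2: one SVR per cluster (PSO tuning replaced by fixed parameters).
pred = np.empty_like(flow)
for c in np.unique(labels):
    idx = np.where(labels == c)[0]
    svr = SVR(C=10.0, gamma="scale").fit(idx.reshape(-1, 1).astype(float), flow[idx])
    # Stage 3: writing back through idx restores chronological order.
    pred[idx] = svr.predict(idx.reshape(-1, 1).astype(float))
```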
Evolving fuzzy and neuro-fuzzy approaches in clustering, regression, identification, and classification: A Survey
Major assumptions in computational intelligence and machine learning are that a historical dataset is available for model development and that the resulting model will, to some extent, handle similar instances during its online operation. However, in many real-world applications these assumptions may not hold: the amount of previously available data may be insufficient to represent the underlying system, and the environment and the system may change over time. As the amount of data increases, it is no longer feasible to process it efficiently with iterative algorithms, which typically require multiple passes over the same portions of data. Evolving modeling from data streams has emerged as a framework to address these issues through self-adaptation, single-pass learning steps, and the evolution as well as contraction of model components on demand and on the fly. This survey focuses on evolving fuzzy rule-based models and neuro-fuzzy networks for clustering, classification, regression, and system identification in online, real-time environments where learning and model development must be performed incrementally. (C) 2019 Published by Elsevier Inc.
Igor Škrjanc, Jose Antonio Iglesias and Araceli Sanchis thank the Chair of Excellence of Universidad Carlos III de Madrid and the Bank of Santander Program for their support. Igor Škrjanc is grateful to the Slovenian Research Agency (research program P2-0219, Modeling, simulation and control). Daniel Leite acknowledges the Minas Gerais Foundation for Research and Development (FAPEMIG), process APQ-03384-18. Igor Škrjanc and Edwin Lughofer acknowledge the support of the "LCM — K2 Center for Symbiotic Mechatronics" within the framework of the Austrian COMET-K2 program. Fernando Gomide is grateful to the Brazilian National Council for Scientific and Technological Development (CNPq) for grant 305906/2014-3.
Imparting 3D representations to artificial intelligence for a full assessment of pressure injuries.
During recent decades, researchers have shown great interest in machine learning techniques for extracting meaningful information from the large amounts of data being collected each day. In the medical field especially, images play a significant role in the detection of several health issues. Hence, medical image analysis contributes substantially to the diagnosis process and is a suitable environment for interaction with the technology of intelligent systems. Deep Learning (DL) has recently captured the interest of researchers, as it has proven efficient in detecting underlying features in the data and has outperformed classical machine learning methods. The main objective of this dissertation is to prove the efficiency of Deep Learning techniques in tackling, through medical imaging, one of the important health issues facing our society. Pressure injuries are a dermatology-related health issue associated with increased morbidity and health care costs. Managing pressure injuries appropriately is increasingly important for all professionals in wound care. Using 2D photographs and 3D meshes of these wounds, collected from collaborating hospitals, our mission is to create intelligent systems for a full non-intrusive assessment of these wounds. Five main tasks have been achieved in this study: a literature review of wound imaging methods using machine learning techniques; the classification and segmentation of the tissue types inside the pressure injury; the segmentation of these wounds; the design of an end-to-end system which measures all the necessary quantitative information from 3D meshes for an efficient assessment of PIs; and the integration of the assessment imaging techniques in a web-based application.
Granular computing approach for intelligent classifier design
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London. Granular computing facilitates dealing with information by providing a theoretical framework in which information is handled as granules at different levels of granularity (different levels of specificity/abstraction). It aims to provide an abstract, explainable description of the data by forming granules that represent the features or the underlying structure of corresponding subsets of the data. In this thesis, a granular computing approach to the design of intelligent classification systems is proposed. The approach is applied to different classification systems to investigate its efficiency: fuzzy inference systems, neural networks, neuro-fuzzy systems, and classifier ensembles are considered. Each of the considered systems is designed using the proposed approach, and its classification performance is evaluated and compared to that of the standard system. The proposed approach is based on constructing information granules from data at multiple levels of granularity. The granulation process is performed using a modified fuzzy c-means algorithm that takes the classification problem into account. Clustering is followed by a coarsening process that merges small clusters into large ones to form a lower granularity level. The resulting granules are used to build each of the considered binary classifiers in different settings and approaches.
Granules produced by the proposed granulation method are used to build a fuzzy classifier for each granulation level or set of levels. The performance of the classifiers is evaluated on real-life data sets using two measures: accuracy and area under the receiver operating characteristic curve. Experimental results show that fuzzy systems constructed using the proposed method achieve better classification performance. In addition, the proposed approach is used for the design of neural network classifiers: granules from one or more granulation levels are used to train the classifiers at different levels of specificity/abstraction. With this approach, the classification problem is broken down into the modelling of classification rules represented by the information granules, resulting in a more interpretable system. Experimental results show that neural network classifiers trained using the proposed approach achieve better classification performance for most of the data sets. In a similar manner, the proposed approach is used for the training of neuro-fuzzy systems, yielding similar improvements in classification performance. Lastly, neural networks built using the proposed approach are used to construct a classifier ensemble: information granules are used to generate and train the base classifiers, and the final ensemble output is produced by a weighted-sum combiner, whose weights are determined automatically by a genetic algorithm. Based on the experimental results, the proposed approach improves the classification performance of the base classifiers for most of the data sets. Higher Committee for Education Development in Iraq (HCED).
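The granulation-then-coarsening idea above can be sketched in a few lines. The sketch below implements plain fuzzy c-means (the thesis's class-aware modification is omitted) and a coarsening step that merges small granules into their nearest larger neighbour; the data and thresholds are illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: alternate weighted-center and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))            # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

def coarsen(centers, U, min_size):
    """Merge granules whose hard-membership count is below min_size into the
    nearest larger granule, forming a lower granularity level."""
    hard = U.argmax(axis=1)
    sizes = np.bincount(hard, minlength=len(centers))
    keep = np.where(sizes >= min_size)[0]
    if keep.size == 0:                            # degenerate case: keep the largest
        keep = np.array([sizes.argmax()])
    mapping = {k: keep[np.argmin(np.linalg.norm(centers[k] - centers[keep], axis=1))]
               for k in range(len(centers))}
    return np.array([mapping[h] for h in hard])

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(5.0, 0.3, (30, 2))])
centers, U = fuzzy_c_means(X, c=3)
levels = coarsen(centers, U, min_size=10)         # coarser granulation level
```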
Risk prediction analysis for post-surgical complications in cardiothoracic surgery
Cardiothoracic surgery patients have the risk of developing surgical site infections
(SSIs), which causes hospital readmissions, increases healthcare costs and may lead to
mortality. The first 30 days after hospital discharge are crucial for preventing these
kinds of infections. As an alternative to a hospital-based diagnosis, an automatic digital
monitoring system can help with the early detection of SSIs by analyzing daily images
of patient’s wounds. However, analyzing a wound automatically is one of the biggest
challenges in medical image analysis.
The proposed system is integrated into a research project called CardioFollowAI,
which developed a digital telemonitoring service to follow-up the recovery of cardiothoracic
surgery patients. The present work aims to tackle the problem of SSIs by predicting
the existence of worrying alterations in wound images taken by patients, with the help of
machine learning and deep learning algorithms. The developed system is divided into a
segmentation model which detects the wound region area and categorizes the wound type,
and a classification model which predicts the occurrence of alterations in the wounds.
The dataset consists of 1337 images with chest wounds (WC), drainage wounds (WD)
and leg wounds (WL) from 34 cardiothoracic surgery patients. For segmenting the images,
an architecture with a MobileNet encoder and a U-Net decoder was used to obtain
the regions of interest (ROI) and assign the wound class. The classification model was
divided into three sub-classifiers, one per wound type, in order to improve the model's
performance. Color and textural features were extracted from the wound ROIs to feed
one of three machine learning classifiers (random forest, support vector machine, and
k-nearest neighbors), which predict the final output.
The segmentation model achieved a final mean IoU of 89.9%, a dice coefficient of
94.6% and a mean average precision of 90.1%, showing good results. As for the algorithms
that performed classification, the WL classifier exhibited the best results with an
87.6% recall and 52.6% precision, while the WC classifier achieved a 71.4% recall and 36.0%
precision. The WD classifier had the worst performance, with a 68.4% recall and 33.2% precision.
The obtained results demonstrate the feasibility of this solution, which can be a start for
preventing SSIs through image analysis with artificial intelligence.
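The classification stage described above (color and textural features feeding a classical classifier) can be sketched as follows. The feature set here is a generic illustration, not the thesis's exact descriptors, and the ROIs are synthetic stand-ins for the wound crops.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def roi_features(roi):
    """roi: HxWx3 uint8 crop. Per-channel mean/std capture colour;
    gradient-magnitude statistics serve as a crude texture proxy."""
    roi = roi.astype(float)
    feats = [roi[..., c].mean() for c in range(3)] + [roi[..., c].std() for c in range(3)]
    gy, gx = np.gradient(roi.mean(axis=2))
    mag = np.hypot(gx, gy)
    return np.array(feats + [mag.mean(), mag.std()])

rng = np.random.default_rng(0)
# Toy stand-in data: "altered" ROIs are shifted toward red.
normal = [rng.integers(80, 120, (32, 32, 3)).astype(np.uint8) for _ in range(20)]
altered = [np.clip(rng.integers(80, 120, (32, 32, 3)) + np.array([80, 0, 0]),
                   0, 255).astype(np.uint8) for _ in range(20)]
X = np.array([roi_features(r) for r in normal + altered])
y = np.array([0] * 20 + [1] * 20)                 # 1 = worrying alteration
clf = RandomForestClassifier(random_state=0).fit(X, y)
```

In the thesis's setting one such classifier would be trained per wound type (WC, WD, WL), each on ROIs produced by the segmentation model.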
Cognitive Models and Computational Approaches for improving Situation Awareness Systems
2016 - 2017. The world of the Internet of Things is pervaded by complex environments
with smart services available every time and everywhere. In
such a context, a serious open issue is the capability of information
systems to support adaptive and collaborative decision processes
in perceiving and elaborating huge amounts of data. This requires
the design and realization of novel socio-technical systems based on
the “human-in-the-loop” paradigm. The presence of both humans
and software in such systems demands adequate levels of Situation
Awareness (SA). To achieve and maintain proper levels of
SA is a daunting task due to the intrinsic technical characteristics
of systems and the limitations of human cognitive mechanisms.
In the scientific literature, such issues hindering the SA formation
process are defined as SA demons.
The objective of this research is to contribute to the resolution
of the SA demons by means of the identification of information
processing paradigms for an original support to the SA and the
definition of new theoretical and practical approaches based on
cognitive models and computational techniques.
The research work starts with an in-depth analysis and some
preliminary verifications of methods, techniques, and systems of
SA. A major outcome of this analysis is that there is only a limited
use of the Granular Computing paradigm (GrC) in the SA
field, despite the fact that SA and GrC share many concepts and
principles. The research work continues with the definition of contributions
and original results for the resolution of significant SA
demons, exploiting some of the approaches identified in the analysis
phase (i.e., ontologies, data mining, and GrC). The first contribution addresses the issues related to the poor perception of data
by users. We propose a semantic approach for the quality-aware
sensor data management which uses a data imputation technique
based on association rule mining. The second contribution proposes
an original ontological approach to situation management,
namely the Adaptive Goal-driven Situation Management. The approach
uses the ontological modeling of goals and situations and
a mechanism that suggests the most relevant goals to the users at
a given moment. Lastly, the adoption of the GrC paradigm allows
the definition of a novel model for representing and reasoning
on situations based on a set theoretical framework. This model
has been instantiated using the rough sets theory. The proposed
approaches and models have been implemented in prototypical systems.
Their capabilities in improving SA in real applications have
been evaluated with typical methodologies used for SA systems.
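The set-theoretic situation model above is instantiated with rough set theory. As a generic illustration of that machinery (not the thesis's actual model), the sketch below computes lower and upper approximations of a target set of situations under an indiscernibility relation; the situation names and attributes are hypothetical.

```python
from collections import defaultdict

def partition(objects, attrs):
    """Group objects by their values on the chosen attributes:
    the indiscernibility (equivalence) classes."""
    blocks = defaultdict(set)
    for name, desc in objects.items():
        blocks[tuple(desc[a] for a in attrs)].add(name)
    return list(blocks.values())

def approximations(objects, attrs, target):
    """Lower approximation: classes fully contained in the target set.
    Upper approximation: classes that intersect it."""
    lower, upper = set(), set()
    for block in partition(objects, attrs):
        if block <= target:
            lower |= block
        if block & target:
            upper |= block
    return lower, upper

# Hypothetical situations described by two observable attributes.
objs = {
    "s1": {"alarm": 1, "motion": 1},
    "s2": {"alarm": 1, "motion": 0},
    "s3": {"alarm": 1, "motion": 1},
    "s4": {"alarm": 0, "motion": 0},
}
low, up = approximations(objs, ["alarm"], {"s1", "s3"})
```

With only the `alarm` attribute the target set {s1, s3} is rough (empty lower approximation); adding `motion` makes it exactly definable.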
Smart hierarchical WiFi localization system for indoors
Extraordinary Doctoral Award of the UAH, academic year 2013-2014. In recent years, the number of applications for smartphones and tablets has grown rapidly. Many of these applications make use of the localization capabilities of these devices. To provide their location, it is necessary to identify the user's position robustly and in real time. Traditionally, this localization has been performed using GPS, which provides accurate positioning outdoors. Unfortunately, its low accuracy indoors rules out its use there. Different technologies are used to provide indoor localization. Among them, WiFi is one of the most widely used thanks to important advantages such as the availability of WiFi access points in most buildings and the fact that measuring the WiFi signal is free of cost, even on private networks. Unfortunately, it also has some disadvantages: indoors, the signal is highly dependent on the structure of the building, so undesired effects appear, such as multipath or small-scale variations. Moreover, WiFi networks are deployed to maximize connectivity without considering their possible use for localization, so environments are usually densely populated with access points, increasing co-channel interference, which causes variations in the received signal level. The objective of this thesis is the indoor localization of mobile devices using, as the only information, the signal strength received from the access points in the environment. The final goal is to develop a WiFi localization system for mobile devices that can be used in any environment and by any device, in real time.
To achieve this objective, a hierarchical localization system based on fuzzy classifiers is proposed, which performs localization in topologically described environments. This system provides robust localization in different scenarios, paying special attention to large environments. To this end, the designed system creates a hierarchical partition of the environment using K-Means. The localization system is then trained with different supervised classification algorithms to localize new WiFi measurements. Finally, a probabilistic system has been designed to track the position of the moving device using a Bayesian filter. The system has been tested in a real multi-floor environment, obtaining a total mean error below 3 meters.
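The partition-then-classify stage of the hierarchy above can be sketched as follows. Synthetic RSS fingerprints stand in for real measurements, K-Means builds the topological regions, and a k-NN classifier replaces the thesis's fuzzy classifiers for brevity; all values are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# RSS (dBm) from 4 access points, sampled around 3 distinct spots.
spots = np.array([[-40, -70, -80, -90],
                  [-80, -45, -70, -85],
                  [-90, -75, -50, -60]])
rss = np.vstack([s + rng.normal(0, 2, (50, 4)) for s in spots])

# Stage 1: K-Means partitions the fingerprints into topological regions.
regions = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(rss)

# Stage 2: a supervised classifier localizes new readings to a region.
locator = KNeighborsClassifier(n_neighbors=5).fit(rss, regions)
new_reading = np.array([[-41, -69, -79, -91]])    # taken near the first spot
```

A Bayesian filter over the sequence of region predictions would then smooth the trajectory of a moving device, as the thesis describes.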
Self-labeling techniques for semi-supervised time series classification: an empirical study
The increasing amount of unlabeled time series data renders the semi-supervised paradigm a suitable approach for tackling classification problems with a reduced quantity of labeled data. Self-labeled techniques stand out among semi-supervised classification methods due to their simplicity and their lack of strong assumptions about the distribution of the labeled and unlabeled data. This paper addresses the relevance of these techniques in the time series classification context by means of an empirical study that compares successful self-labeled methods in conjunction with various learning schemes and dissimilarity measures. Our experiments involve 35 time series datasets with different ratios of labeled data, aiming to measure the transductive and inductive classification capabilities of the self-labeled methods studied. The results show that the nearest-neighbor rule is a robust choice for the base classifier. In addition, the amending and multi-classifier self-labeled approaches represent a promising way to perform semi-supervised classification in the time series context.
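The core self-labeling loop (train on labeled data, pseudo-label confident unlabeled instances, retrain) can be sketched with scikit-learn's self-training wrapper around the nearest-neighbor rule that the study found robust. The toy sequences below are illustrative stand-ins for real time series, and plain Euclidean k-NN stands in for the paper's dissimilarity measures.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
# Toy "time series": two classes of short sequences with different levels.
X = np.vstack([rng.normal(0, 1, (40, 20)), rng.normal(3, 1, (40, 20))])
y = np.array([0] * 40 + [1] * 40)

y_semi = y.copy()
y_semi[np.arange(80) % 5 != 0] = -1    # hide 80% of labels; -1 marks unlabeled

# Self-training: the 3-NN base classifier iteratively pseudo-labels
# the unlabeled sequences it is confident about, then refits.
clf = SelfTrainingClassifier(KNeighborsClassifier(n_neighbors=3)).fit(X, y_semi)
```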
Data-stream driven Fuzzy-granular approaches for system maintenance
Intelligent systems are now inherent to society, supporting a synergistic human-machine collaboration. Beyond economic and climate factors, energy consumption is strongly affected by the performance of computing systems, and poor software functioning may invalidate any improvement attempt. In addition, data-driven machine learning algorithms are the basis for human-centered applications, and their interpretability is one of the most important features of computational systems. Software maintenance is a critical discipline for supporting automatic, life-long system operation. As most software registers its inner events in logs, log analysis is an approach to maintaining system operation. Logs are Big Data assembled in large-flow streams: unstructured, heterogeneous, imprecise, and uncertain. This thesis addresses fuzzy and neuro-granular methods that provide maintenance solutions for anomaly detection (AD) and log parsing (LP), dealing with data uncertainty and identifying ideal time periods for detailed software analyses. LP provides a deeper semantic interpretation of the anomalous occurrences. The solutions evolve over time and are general-purpose, being highly applicable, scalable, and maintainable. Granular classification models, namely the Fuzzy set-Based evolving Model (FBeM), the evolving Granular Neural Network (eGNN), and the evolving Gaussian Fuzzy Classifier (eGFC), are compared on the AD problem. The evolving Log Parsing (eLP) method is proposed for the automatic parsing of system logs. All the methods use recursive mechanisms to create, update, merge, and delete information granules according to the data behavior. For the first time in the evolving intelligent systems literature, the proposed method, eLP, is able to process streams of words and sentences.
In terms of AD accuracy, FBeM achieved (85.64 ± 3.69)%, eGNN reached (96.17 ± 0.78)%, eGFC obtained (92.48 ± 1.21)%, and eLP reached (96.05 ± 1.04)%. Besides being competitive, eLP also generates a log grammar and presents a higher level of model interpretability.
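The create/update mechanism shared by these evolving granular classifiers can be sketched generically: granules are created when an incoming sample is far from all existing ones and updated incrementally otherwise, in a single pass over the stream. This is a generic illustration of the evolving principle, not FBeM, eGNN, eGFC, or eLP themselves; the radius and data are assumptions.

```python
import numpy as np

class EvolvingGranules:
    """Single-pass granular classifier sketch: each granule is a
    [center, count, label] triple, created and updated on the fly."""
    def __init__(self, radius=1.0):
        self.radius = radius
        self.granules = []

    def _nearest(self, x):
        if not self.granules:
            return None, np.inf
        d = [np.linalg.norm(x - g[0]) for g in self.granules]
        i = int(np.argmin(d))
        return i, d[i]

    def learn_one(self, x, y):
        i, dist = self._nearest(x)
        if dist <= self.radius and self.granules[i][2] == y:
            g = self.granules[i]
            g[1] += 1
            g[0] += (x - g[0]) / g[1]      # incremental mean update
        else:
            self.granules.append([x.astype(float).copy(), 1, y])

    def predict_one(self, x):
        return self.granules[self._nearest(x)[0]][2]

rng = np.random.default_rng(0)
model = EvolvingGranules(radius=2.0)
for x0, x1 in zip(rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))):
    model.learn_one(x0, 0)                 # interleaved two-class stream
    model.learn_one(x1, 1)
```

Merging and deleting granules, as the thesis's methods do, would be additional recursive operations over `self.granules`.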
An interpretable multi-stage forecasting framework for energy consumption and CO2 emissions for the transportation sector
The transportation sector is deemed one of the primary sources of energy consumption and greenhouse gases throughout the world. To design sustainable transport, it is imperative to comprehend the relationships and evaluate the interactions among the variables that may influence transport energy consumption and CO2 emissions. Unlike recently published papers, this study strives to achieve a balance between machine learning (ML) model accuracy and model interpretability, using the Shapley additive explanation (SHAP) method, for forecasting the energy consumption and CO2 emissions of the UK's transportation sector. To this end, this paper proposes an interpretable multi-stage forecasting framework that simultaneously maximises ML model accuracy and determines the relationship between the predictions and the influential variables by revealing the contribution of each variable to the predictions. For the UK's transportation sector, the experimental results indicate that road carbon intensity is the variable contributing most to both energy consumption and CO2 emissions predictions. Unlike in other studies, population and GDP per capita are found to be uninfluential. The proposed multi-stage forecasting framework may assist policymakers in making more informed energy decisions and establishing more accurate investment.
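SHAP assigns each input variable its Shapley value: its average marginal contribution to a prediction over all variable coalitions. A brute-force version of that computation can be sketched exactly (viable only for a handful of variables; the SHAP library approximates it efficiently). The model and data below are illustrative, not the paper's UK transport dataset.

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_values(predict, x, background):
    """Exact Shapley values for one instance x; variables absent from a
    coalition are replaced by their background (training-set) means."""
    n = len(x)
    base = background.mean(axis=0)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                z_without = base.copy()
                z_without[list(S)] = x[list(S)]    # coalition S present
                z_with = z_without.copy()
                z_with[i] = x[i]                   # add variable i
                phi[i] += w * (predict(z_with) - predict(z_without))
    return phi

w = np.array([2.0, -1.0, 0.5])                     # toy linear "forecaster"
predict = lambda z: float(w @ z)
rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, (50, 3))
x = np.array([1.0, 2.0, 3.0])
phi = shapley_values(predict, x, background)
```

For a linear model the Shapley value of variable i reduces to w_i (x_i - mean_i), and the values sum to the gap between the prediction and the background prediction, which makes this sketch easy to verify.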