4,069 research outputs found

    A review of clustering techniques and developments

    © 2017 Elsevier B.V. This paper presents a comprehensive study of clustering: existing methods and developments made over time. Clustering is defined as unsupervised learning in which objects are grouped on the basis of some similarity inherent among them. There are different methods for clustering objects, such as hierarchical, partitional, grid-based, density-based and model-based. The approaches used in these methods are discussed together with their state of the art and applicability. The measures of similarity, as well as the evaluation criteria, which are the central components of clustering, are also presented. The applications of clustering in fields such as image segmentation, object and character recognition and data mining are highlighted.
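
    The partitional family surveyed here can be illustrated with a minimal from-scratch k-means sketch (illustrative only, with invented toy points; hierarchical, grid-, density- and model-based methods work differently):

```python
# Minimal pure-Python k-means: alternate an assignment step and an
# update step until the centroids settle. Toy data, not from the paper.
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # initialise with k data points
    clusters = []
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = tuple(sum(vals) / len(cl) for vals in zip(*cl))
    return centroids, clusters

points = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
centroids, clusters = kmeans(points, k=2)
```

    On these two well-separated toy groups the algorithm converges in a couple of iterations; choosing k and the similarity measure is exactly where the survey's evaluation criteria come in.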

    NNVA: Neural Network Assisted Visual Analysis of Yeast Cell Polarization Simulation

    Complex computational models are often designed to simulate real-world physical phenomena in many scientific disciplines. However, these simulation models tend to be computationally very expensive and involve a large number of simulation input parameters, which need to be analyzed and properly calibrated before the models can be applied in real scientific studies. We propose a visual analysis system to facilitate interactive exploratory analysis of the high-dimensional input parameter space of a complex yeast cell polarization simulation. The proposed system can assist the computational biologists who designed the simulation model in visually calibrating the input parameters: they modify parameter values and immediately visualize the predicted simulation outcome, without needing to run the original expensive simulation for every instance. Our visual analysis system is driven by a trained neural-network-based surrogate model as the backend analysis framework. Surrogate models are widely used in the simulation sciences to efficiently analyze computationally expensive simulation models. In this work, we demonstrate the advantage of using neural networks as surrogate models for visual analysis by incorporating recent advances in uncertainty quantification, interpretability and explainability of neural-network-based models. We utilize the trained network to perform interactive parameter sensitivity analysis of the original simulation at multiple levels of detail, and to recommend optimal parameter configurations using the activation maximization framework of neural networks. We also facilitate detailed analysis of the trained network to extract useful insights about the simulation model that the network learned during training.
    Comment: Published in IEEE Transactions on Visualization and Computer Graphics.
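
    The parameter-sensitivity idea can be sketched with a one-at-a-time sweep against a surrogate. The paper's surrogate is a trained neural network; the analytic function below is a hypothetical stand-in so the sweep logic itself is runnable:

```python
# One-at-a-time sensitivity analysis via central finite differences.
# `surrogate` is an invented stand-in for the trained network's prediction.

def surrogate(params):
    # Hypothetical cheap-to-evaluate proxy for the simulation outcome.
    a, b, c = params
    return 3.0 * a + 0.5 * b ** 2 + 0.01 * c

def sensitivities(f, params, eps=1e-4):
    # Perturb each input up and down; the slope measures how strongly
    # that parameter moves the predicted outcome near this point.
    grads = []
    for i in range(len(params)):
        hi = list(params); hi[i] += eps
        lo = list(params); lo[i] -= eps
        grads.append((f(hi) - f(lo)) / (2 * eps))
    return grads

s = sensitivities(surrogate, [1.0, 2.0, 3.0])
# The first parameter dominates and the third is nearly inert: candidates
# to prioritise or fix, respectively, during interactive calibration.
```

    Because the surrogate is cheap, such sweeps can run at interactive rates, which is what makes the visual calibration loop feasible.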

    An advanced short-term wind power forecasting framework based on the optimized deep neural network models

    With the continued growth of wind power penetration into conventional power grid systems, wind power forecasting plays an increasingly important role in organizing and deploying electrical and energy systems. Wind power time series, however, often present non-linear and non-stationary characteristics, making them quite challenging to estimate precisely. This paper proposes a novel hybrid model named Evol-CNN to predict short-term wind power at 10-min intervals up to 3 hr ahead, based on a deep convolutional neural network (CNN) and an evolutionary search optimizer. Specifically, we develop an improved version of the Grey Wolf Optimization (GWO) algorithm by incorporating two effective modifications into its original structure. The proposed GWO algorithm is more effective than the original version because it converges faster and is better able to escape from local optima. It is used to find optimal hyperparameter values for the deep CNN model, and the resulting CNN model is employed to predict the wind power time series. The main advantage of the proposed Evol-CNN model is that it enhances the capability of time series forecasting models to obtain more accurate predictions. Several forecasting benchmarks are compared with the Evol-CNN model to assess its effectiveness. The simulation results indicate that Evol-CNN has a significant advantage over the competing benchmarks and achieves the minimum error for 10-min, 1-hr and 3-hr ahead forecasting. © 2022 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
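
    The original, unmodified GWO update can be sketched as follows, minimising a sphere test function rather than CNN validation error; the paper's two modifications are not reproduced here:

```python
# Textbook Grey Wolf Optimization: the pack moves toward a blend of the
# three best wolves (alpha, beta, delta), with the control parameter `a`
# shrinking from 2 to 0 to shift from exploration to exploitation.
import random

def gwo(obj, dim, n_wolves=12, iters=100, lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    best = min(wolves, key=obj)
    for t in range(iters):
        wolves.sort(key=obj)
        if obj(wolves[0]) < obj(best):
            best = wolves[0][:]
        alpha, beta, delta = wolves[0][:], wolves[1][:], wolves[2][:]
        a = 2.0 - 2.0 * t / iters
        new_pack = []
        for w in wolves:
            pos = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A = 2 * a * r1 - a
                    C = 2 * r2
                    # Step toward (or around) each leader's estimate of the prey.
                    x += leader[d] - A * abs(C * leader[d] - w[d])
                pos.append(min(hi, max(lo, x / 3.0)))
            new_pack.append(pos)
        wolves = new_pack
    return min(wolves + [best], key=obj)

# Sphere function as a stand-in objective; the paper instead minimises
# forecasting error over CNN hyperparameter choices.
best = gwo(lambda v: sum(x * x for x in v), dim=3)
```

    In the hyperparameter-tuning setting, each wolf's position would encode a CNN configuration and `obj` would train and score that configuration, which is why a faster-converging GWO variant matters.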

    A mobile augmented reality application for supporting real-time skin lesion analysis based on deep learning

    Melanoma is considered the deadliest skin cancer, and it is difficult to treat in an advanced state. Diagnoses are performed visually by dermatologists, by naked-eye observation. This paper proposes an augmented reality smartphone application for supporting the dermatologist in the real-time analysis of a skin lesion. The app augments the camera view with information related to the lesion features generally measured by the dermatologist when formulating a diagnosis. The lesion is also classified by a deep learning approach to identify melanoma. The real-time process adopted for generating the augmented content is described, its real-time performance is evaluated, and a user study is conducted. Results revealed that the real-time process can be executed entirely on the smartphone and that the support provided is judged favourably by the target users.

    Automatic detection of early repolarization pattern in ECG signals with waveform prototype-based learning

    Abstract. The early repolarization (ER) pattern was considered a benign finding until 2008, when it was associated with sudden cardiac arrest (SCA). Since then, the medical community's interest in the topic has grown, creating the need for methods to detect the pattern and analyze the risk of SCA. This thesis presents an automatic detection method for ER using supervised classification. The novelty of the method lies in the features used to construct the classification models: prototypes composed of fragments of the ECG signal where the ER pattern is located. Three classifier models were included and compared: linear discriminant analysis (LDA), the k-nearest neighbor (KNN) algorithm and the support vector machine (SVM). The method was tested on a dataset of 5676 subjects, manually labeled by an experienced analyst following the medical guidelines. The algorithm for the detection of ER is composed of several stages. First, the ECG signals are processed to locate characteristic points and remove unwanted noise. Then, the features are extracted from the signals and the classifiers are trained. Finally, the results are fused and the detection of ER is evaluated. The different classifiers achieved accuracies above 90%, demonstrating the discriminative power of the features for separating ECG signals with and without the ER pattern. Additionally, dimensionality reduction of the features was implemented with Isomap and generalized regression neural networks (GRNN) without affecting the performance of the method. Moreover, critical cases that are difficult to label were analyzed based on their distances to the classifier decision boundary, improving the sensitivity of the detection. Hence, the method presented here could be used to discriminate between ECG signals with and without the ER pattern.
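
    The prototype-fragment idea can be sketched as a nearest-prototype classifier. The fragment values below are invented toy numbers, not real ECG data, and the thesis's actual pipeline (filtering, fiducial-point detection, and LDA/KNN/SVM fusion) is far richer:

```python
# Nearest-prototype classification: labelled signal fragments act as
# prototypes, and a new fragment takes the label of its closest one.

def dist(a, b):
    # Squared Euclidean distance between two equal-length fragments.
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Hypothetical fragments around the ST onset (toy values only).
prototypes = [
    ([0.0, 0.1, 0.4, 0.2, 0.0], "ER"),      # notch/slur-like upstroke
    ([0.0, 0.0, 0.1, 0.0, 0.0], "no_ER"),   # flat ST onset
]

def classify(fragment):
    return min(prototypes, key=lambda p: dist(p[0], fragment))[1]

label = classify([0.0, 0.1, 0.35, 0.15, 0.0])  # near the ER prototype
```

    Distance to the decision boundary, which the thesis uses to flag hard-to-label cases, falls out naturally from the same distance computations.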

    Design and validation of structural health monitoring system based on bio-inspired algorithms

    The need to ensure the proper performance of structures in service has made structural health monitoring (SHM) a priority research area. Researchers around the world have focused their efforts on developing new ways to continuously monitor structures and analyze the data collected during inspection, in order to provide information about the current state of a structure and avoid possible catastrophes. For an effective analysis of the data, it is crucial to develop methodologies that can assess structures with low computational cost and high reliability. These desirable features can be found in biological systems, and they can be emulated by computational systems. The use of bio-inspired algorithms is a recent approach that has demonstrated its effectiveness for data analysis in different areas. Since these algorithms are based on the emulation of biological systems that have proven themselves over many generations, it is possible to mimic the evolutionary process and its adaptability using computational algorithms. In pattern recognition especially, several such algorithms have shown good performance; widely used examples are neural networks, fuzzy systems and genetic algorithms. This thesis concerns the development of bio-inspired methodologies for structural damage detection and classification. The document is organized in five chapters. Chapter 1 (Introduction) gives an overview of the problem statement, the objectives, general results, a brief theoretical background and a description of the different experimental setups. Chapters 2 to 4 contain the journal papers published by the author of this thesis. The discussion of the results, conclusions and future work can be found in Chapter 5.
Finally, Appendix A includes other contributions such as a book chapter and some conference papers.

    Addendum to Informatics for Health 2017: Advancing both science and practice

    This article presents presentation and poster abstracts that were mistakenly omitted from the original publication.

    Intelligent conceptual mould layout design system (ICMLDS): innovation report

    Family Mould Cavity Runner Layout Design (FMCRLD) is the most demanding and critical task in the early Conceptual Mould Layout Design (CMLD) phase. The traditional experience-dependent manual FMCRLD workflow results in long design lead times, non-optimal designs and costly errors. However, no previous research, existing commercial software package or patented technology supports FMCRLD automation and optimisation. The nature of FMCRLD is non-repetitive and generative, and its optimisation involves solving a complex two-level combinatorial layout design optimisation problem. This research developed the Intelligent Conceptual Mould Layout Design System (ICMLDS) prototype, based on an innovative nature-inspired evolutionary FMCRLD approach for FMCRLD automation and optimisation using a Genetic Algorithm (GA) and Shape Grammar (SG). The ICMLDS prototype has proven to be a powerful intelligent design tool as well as an interactive design-training tool that can encourage and accelerate mould designers' exploration, exploitation and optimisation of design alternatives, yielding better designs in less time. This previously unavailable capability enables the supporting company not only to innovate its existing traditional mould-making business but also to explore new business opportunities in the high-value low-volume market for high-precision injection-moulded parts (such as telecommunications, consumer electronics and medical devices). The innovation of this research also provides deeper insight into the art of evolutionary design and opens research opportunities for the evolutionary design approach in a wide variety of new application areas, including hot runner layout design, ejector layout design, cooling layout design and architectural space layout design.
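
    The genetic-algorithm half of the GA-plus-shape-grammar approach can be sketched on a toy bitstring fitness (OneMax); the actual system evolves grammar-generated mould layouts against far richer objectives, which this illustration does not attempt:

```python
# Minimal generational GA: tournament selection, one-point crossover,
# point mutation. Fitness here is just the number of 1-bits (OneMax).
import random

def ga_onemax(n_bits=20, pop_size=30, gens=60, seed=2):
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():
            # Tournament selection: best of three random individuals.
            return max(rng.sample(pop, 3), key=fitness)
        nxt = []
        while len(nxt) < pop_size:
            a, b = pick(), pick()
            cut = rng.randrange(1, n_bits)       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:               # occasional point mutation
                child[rng.randrange(n_bits)] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = ga_onemax()
```

    In the ICMLDS setting, an individual would encode shape-grammar derivation choices rather than raw bits, and fitness would score the resulting cavity and runner layout.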

    Let’s augment the future together!: Augmented reality troubleshooting support for IT/OT rolling stock failures

    The railway industry is moving to a socio-technological system that relies on computer control and human-machine interfaces. Opportunities arise for creating new services and commercial business cases by using technological innovations and traffic management systems. The convergence of Information Technology (IT) with Operational Technology (OT) is critical for cost-effective and reliable railway operations. However, this convergence introduces complexities, leading to more intricate rolling stock system failures. Operators therefore require assistance in their troubleshooting and maintenance strategy to simplify decision-making and action-taking. Augmented Reality (AR) emerges as a pivotal tool for troubleshooting in this context. AR enhances the operator’s ability to visualize, contextualize, and understand complex data by overlaying real-time and virtual information onto physical objects. AR supports the identification of IT/OT rolling stock system failures, offers troubleshooting directions, and streamlines maintenance procedures, ultimately enhancing decision-making and action-taking. This thesis investigates how AR can support operators in navigating the troubleshooting and maintenance challenges posed by IT/OT rolling stock system failures in the railway industry.

    On practical machine learning and data analysis

    This thesis discusses and addresses some of the difficulties associated with practical machine learning and data analysis. Introducing data-driven methods in, e.g., industrial and business applications can lead to large gains in productivity and efficiency, but the cost and complexity are often overwhelming. Creating machine learning applications in practice often involves a large amount of manual labour, which typically must be performed by an experienced analyst who may have little prior experience with the application area. Here we discuss some of the hurdles faced in a typical analysis project and suggest measures and methods to simplify the process. One of the most important issues when applying machine learning methods to complex data, such as industrial applications, is that the processes generating the data are modelled in an appropriate way. Relevant aspects have to be formalised and represented in a way that allows us to perform our calculations efficiently. We present a statistical modelling framework, Hierarchical Graph Mixtures, based on a combination of graphical models and mixture models. It allows us to create consistent, expressive statistical models that simplify the modelling of complex systems. Using a Bayesian approach, we allow for the encoding of prior knowledge and make the models applicable in situations where relatively little data are available. Detecting structure in data, such as clusters and dependency structure, is very important both for understanding an application area and for specifying the structure of, e.g., a hierarchical graph mixture. We discuss how this structure can be extracted for sequential data: by using the inherent dependency structure of sequential data, we construct an information-theoretic measure of correlation that does not suffer from the problems most common correlation measures have with this type of data.
In many diagnosis situations it is desirable to perform classification in an iterative and interactive manner. The matter is often complicated by very limited amounts of knowledge and examples when a system to be diagnosed is first brought into use. We describe how to create an incremental classification system based on a statistical model trained from empirical data, and show how the limited available background information can still be used initially to obtain a functioning diagnosis system. To minimise the effort with which results are achieved in data analysis projects, we need to address not only the models used but also the methodology and tools that can help simplify the process. We present a methodology for data preparation and a software library intended for rapid analysis, prototyping and deployment. Finally, we study a few example applications, presenting tasks in classification, prediction and anomaly detection. The examples include demand prediction for supply chain management, approximation of complex simulators to speed up parameter optimisation, and fraud detection and classification within a media-on-demand system.
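
    The flavour of an information-theoretic correlation measure can be sketched with an empirical mutual-information estimate over aligned symbol sequences; the thesis's actual measure additionally exploits the sequences' dependency structure and is not reproduced here:

```python
# Empirical mutual information between two aligned discrete sequences:
# zero when the sequences are independent, and (for these toy inputs)
# one bit when one sequence fully determines the other.
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

a = [0, 1, 0, 1, 0, 1, 0, 1]
mi_dependent = mutual_information(a, a)        # identical sequences
mi_constant = mutual_information(a, [0] * 8)   # constant partner
```

    Unlike linear correlation coefficients, this measure also picks up non-linear dependencies between symbol streams, which is what makes it attractive for sequential data.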