14 research outputs found

    Sistemas granulares evolutivos

    Advisor: Fernando Antonio Campos Gomide. Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. Abstract: In recent years there has been increasing interest in computational modeling approaches to deal with real-world data streams. Methods and algorithms have been proposed to uncover meaningful knowledge from very large (often unbounded) data sets that, in principle, have no apparent value. This thesis introduces a framework for evolving granular modeling of uncertain data streams. Evolving granular systems comprise an array of online modeling approaches inspired by the way in which humans deal with complexity. These systems explore the information flow in dynamic environments and derive from it models that can be linguistically understood. In particular, information granulation is a natural technique to dispense with unnecessary details and emphasize the transparency, interpretability, and scalability of information systems. Uncertain (granular) data arise from imprecise perception or description of the value of a variable. Broadly stated, various factors can affect one's choice of data representation such that the representing object conveys the meaning of the concept it is being used to represent.
Of particular concern to this work are numerical, interval, and fuzzy types of granular data, and interval, fuzzy, and neuro-fuzzy modeling frameworks. Learning in evolving granular systems is based on incremental algorithms that build the model structure from scratch on a per-sample basis and adapt model parameters whenever necessary. This learning paradigm is important because it avoids redesigning and retraining models whenever the system changes. Application examples in classification, function approximation, time-series prediction, and control using real and synthetic data illustrate the usefulness of the proposed granular approaches and framework. The behavior of nonstationary data streams with gradual and abrupt regime shifts is also analyzed in the realm of evolving granular computing. We shed light on the role of interval, fuzzy, and neuro-fuzzy computing in processing uncertain data and providing high-quality approximate solutions and rule summaries of input-output data sets. The approaches and framework introduced constitute a natural extension of evolving intelligent systems over numeric data streams to evolving granular systems over granular data streams. Doctorate in Automation; Doctor of Electrical Engineering.
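The per-sample, build-from-scratch learning described above can be illustrated with a minimal single-pass sketch. This is not the thesis's actual algorithm, only a simplified illustration of the idea: a new granule is created when a sample falls outside the radius of every existing granule (structural adaptation), otherwise the nearest granule's prototype is nudged toward the sample (parametric adaptation). The class name and radius parameter are hypothetical.

```python
import math

class EvolvingGranules:
    """Single-pass granular learner sketch: each granule is a prototype
    updated incrementally; a new granule is created when a sample falls
    outside the radius of every existing one. Hypothetical illustration,
    not the algorithm from the thesis."""

    def __init__(self, radius=1.0):
        self.radius = radius
        self.centers = []   # granule prototypes
        self.counts = []    # samples absorbed per granule

    def learn(self, x):
        """Absorb one sample; return the index of the granule it joined."""
        if not self.centers:
            self.centers.append(list(x))
            self.counts.append(1)
            return 0
        dists = [math.dist(x, c) for c in self.centers]
        j = min(range(len(dists)), key=dists.__getitem__)
        if dists[j] > self.radius:          # structural adaptation: new granule
            self.centers.append(list(x))
            self.counts.append(1)
            return len(self.centers) - 1
        self.counts[j] += 1                 # parametric adaptation: move center
        eta = 1.0 / self.counts[j]          # decaying incremental-mean step
        self.centers[j] = [c + eta * (xi - c)
                           for c, xi in zip(self.centers[j], x)]
        return j
```

Because each sample is seen exactly once and only prototypes are stored, there is no retraining when the data distribution drifts; a shifted regime simply spawns new granules.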

    Procesamiento digital de imágenes citogenéticas para su clasificación según la técnica ensayo cometa para la detección de daños en el ADN

    This work studies and analyzes different techniques for classification, segmentation, and contour extraction in digital images of biological origin. Attention is focused in particular on processes that use neuro-fuzzy algorithms and neural networks. Results achieved so far are presented through the development and implementation of a prototype applied to microscope images, specifically those parameterized according to the comet assay technique used by the Laboratorio de Citogenética General y Monitoreo Ambiental of UNaM-IBS-CONICET for the detection of DNA damage, with visual and numerical validation supporting the results. Track: Intelligent Agents and Systems. Red de Universidades con Carreras en Informática.

    Roll compaction of pharmaceutical excipients and prediction using intelligent software

    Roll compaction is a dry granulation method. In the pharmaceutical industry it helps bind tablet ingredients together to form a larger mass. It is used to ease subsequent processing, decrease dust, improve flowability and material distribution, suit moisture- and heat-sensitive materials better than wet granulation methods, minimise operating space, and support continuous manufacturing set-ups. In pharmaceutical roll compaction, various types of powder material mixtures are compacted into ribbons that are subsequently milled and tableted. The aim of this research is to investigate the use of intelligent software (the FormRules and INForm packages) for predicting the effects of the roll compaction process and formulation characteristics on final ribbon quality. First, the tablet formulations were characterised in terms of their particle size distribution, densities, compressibility, compactibility, effective angle of friction, and angle of wall friction. These tablet formulations were then roll compacted. The formulation characteristics and roll compaction results formed 64 datasets, which were then used to train the FormRules and INForm software. FormRules highlighted the key input variables (i.e. tablet formulations, characteristics, and roll compaction process parameters). These key input variables were then used as inputs in the model development training of INForm. The INForm software produced models that were successful in predicting experimental results. The predicted nip angle values of the INForm models were found to be within 5% of experiment, more accurate than those derived from Johanson's model. Johanson's model was not successful in predicting the nip angle above a roll speed of 1 rpm due to air entrainment.
It also over-predicted the experimental nip angle of DCPA and MCC by 200%, while the approximation using Johanson's pressure profile under-predicted the experimental nip angle of DCPA by 5-20% and of MCC by 20%.

    Soft computing for tool life prediction a manufacturing application of neural - fuzzy systems

    Tooling technology is recognised as an element of vital importance within the manufacturing industry. Critical tooling decisions related to tool selection, tool life management, optimal determination of cutting conditions, and on-line machining process monitoring and control rely on the existence of reliable, detailed process models. Among the decisive factors in process planning and control activities, tool wear and tool life considerations hold a dominant role. Yet both off-line tool life prediction and real-time tool wear identification and prediction are still open research issues. The main reason lies in the large number of factors influencing tool wear, some of them stochastic in nature. The inherent variability of workpiece materials, cutting tools, and machine characteristics further increases the uncertainty of the machining optimisation problem. In machining practice, tool life prediction is based on data provided by tool manufacturers, machining data handbooks, or the shop floor. This thesis recognises the need for a data-driven, flexible, and yet simple approach to predicting tool life. Model building from sample data depends on the availability of a sufficiently rich cutting data set. Flexibility requires a tool-life model with high adaptation capacity. Simplicity calls for a solution with low complexity that is easily interpretable by the user. A neural-fuzzy systems approach is adopted, which meets these targets and predicts tool life for a wide range of turning operations. A literature review has been carried out, covering areas such as tool wear and tool life, neural networks, fuzzy set theory, and neural-fuzzy systems integration. Various sources of tool life data have been examined. It is concluded that a combined use of simulated data from existing tool life models and real-life data is the best policy to follow.
The neuro-fuzzy tool life model developed is constructed by employing neural network-like learning algorithms. The trained model stores the learned knowledge in the form of fuzzy IF-THEN rules in its structure, thus featuring the desired transparency. Low model complexity is ensured by employing an algorithm that constructs a rule base of reduced size from the available data. In addition, the flexibility of the developed model is demonstrated by the ease, speed, and efficiency of its adaptation on the basis of new tool life data. The development of the neuro-fuzzy tool life model is based on the Fuzzy Logic Toolbox (v1.0) of MATLAB (v4.2c.1), a dedicated tool that facilitates the design and evaluation of fuzzy logic systems. Extensive results are presented that demonstrate the neuro-fuzzy model's predictive performance. The model can be directly employed within a process planning system, facilitating the optimisation of turning operations. Recommendations are made for further enhancements in this direction.
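The fuzzy IF-THEN rule inference behind such a model can be sketched as a zero-order Takagi-Sugeno system: each rule fires with a product of Gaussian memberships over the inputs, and the output is the firing-strength-weighted average of the rule consequents. The rules below (cutting speed and feed mapped to tool life) are hypothetical numbers for illustration only, not taken from the thesis.

```python
import math

def ts_predict(x, rules):
    """Zero-order Takagi-Sugeno inference sketch: each rule is a tuple
    (centers, widths, consequent); its firing strength is the product of
    Gaussian memberships, and the prediction is the weighted average of
    consequents."""
    num = den = 0.0
    for centers, widths, out in rules:
        w = 1.0
        for xi, c, s in zip(x, centers, widths):
            w *= math.exp(-((xi - c) ** 2) / (2 * s ** 2))
        num += w * out
        den += w
    return num / den if den else 0.0

# Hypothetical tool-life rules over (cutting speed m/min, feed mm/rev):
rules = [(( 80, 0.1), (30, 0.1), 60.0),   # low speed/feed  -> long life (min)
         ((250, 0.4), (30, 0.1), 10.0)]   # high speed/feed -> short life
```

A trained neuro-fuzzy model of this kind stays transparent because each rule reads directly as "IF speed is low AND feed is low THEN life is about 60 min".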

    Evolving fuzzy and neuro-fuzzy approaches in clustering, regression, identification, and classification: A Survey

    Major assumptions in computational intelligence and machine learning are that a historical dataset is available for model development and that the resulting model will, to some extent, handle similar instances during its online operation. However, in many real-world applications these assumptions may not hold: the amount of previously available data may be insufficient to represent the underlying system, and the environment and the system may change over time. As the amount of data increases, it is no longer feasible to process it efficiently using iterative algorithms, which typically require multiple passes over the same portions of data. Evolving modeling from data streams has emerged as a framework to address these issues through self-adaptation, single-pass learning steps, and evolution as well as contraction of model components on demand and on the fly. This survey focuses on evolving fuzzy rule-based models and neuro-fuzzy networks for clustering, classification, regression, and system identification in online, real-time environments where learning and model development should be performed incrementally. (C) 2019 Published by Elsevier Inc. Igor Škrjanc, Jose Antonio Iglesias, and Araceli Sanchis would like to thank the Chair of Excellence of Universidad Carlos III de Madrid and the Bank of Santander Program for their support. Igor Škrjanc is grateful to the Slovenian Research Agency for the research program P2-0219, "Modeling, simulation and control". Daniel Leite acknowledges the Minas Gerais Foundation for Research and Development (FAPEMIG), process APQ-03384-18. Igor Škrjanc and Edwin Lughofer acknowledge the support of the "LCM - K2 Center for Symbiotic Mechatronics" within the framework of the Austrian COMET-K2 program. Fernando Gomide is grateful to the Brazilian National Council for Scientific and Technological Development (CNPq) for grant 305906/2014-3.

    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Particle swarm optimization (PSO) is a heuristic global optimization method proposed originally by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On the one hand, we review advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, Tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we survey applications of PSO in eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, and chemistry and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.
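The canonical global-best PSO that the survey builds on can be stated compactly: each particle keeps a velocity updated from an inertia term, a pull toward its personal best, and a pull toward the swarm's global best. The sketch below uses common textbook coefficient choices (inertia 0.7, cognitive and social weights 1.5), which are assumptions, not values from the survey.

```python
import random

def pso(f, dim, n_particles=30, iters=100, bounds=(-5.0, 5.0), seed=0):
    """Minimal global-best particle swarm optimization (minimization)."""
    rng = random.Random(seed)
    lo, hi = bounds
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social coefficients
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal best positions
    pbest = [f(x) for x in X]                   # personal best values
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]                # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))  # clamp to bounds
            fx = f(X[i])
            if fx < pbest[i]:
                P[i], pbest[i] = X[i][:], fx
                if fx < gbest:
                    G, gbest = X[i][:], fx
    return G, gbest

# Usage: minimize the 2-D sphere function
best_x, best_f = pso(lambda x: sum(v * v for v in x), dim=2)
```

The surveyed variants mostly change one ingredient of this loop: the topology changes which particles contribute the social term, hybridizations replace or augment the velocity update, and the parallel implementations distribute the inner loop over particles.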

    Data-driven sensors and their applications

    Soft sensors are a gradually expanding technique in the field of industrial measurement. They are computer programs that, using previously acquired data, provide additional readings in a similar way to conventional hardware sensors. The additional data are obtained using predictive models based on machine learning methods such as neural networks or support vector machines. This work mainly comprises a review of the function, structure, and creation of soft sensors. It also describes machine learning and the classification of its algorithms, and introduces the methods commonly used in the field of soft sensors. Towards the end, the author describes possible future developments of soft sensors and directions for further applications.
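At its simplest, the data-driven idea amounts to fitting a predictive model on historical pairs of easy-to-measure inputs and the hard-to-measure target, then using the model in place of a physical sensor. The sketch below is a toy illustration assuming a linear relationship; real soft sensors typically use richer models such as the neural networks mentioned above.

```python
def fit_linear(X, y):
    """Ordinary least squares with intercept, via the normal equations
    (pure-Python Gaussian elimination; fine for a handful of inputs)."""
    A = [[1.0] + list(row) for row in X]        # design matrix with bias column
    n, m = len(A), len(A[0])
    # Normal equations G w = b, where G = A^T A and b = A^T y
    G = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(m)]
         for i in range(m)]
    b = [sum(A[k][i] * y[k] for k in range(n)) for i in range(m)]
    # Gaussian elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(G[r][col]))
        G[col], G[piv] = G[piv], G[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = G[r][col] / G[col][col]
            for c in range(col, m):
                G[r][c] -= f * G[col][c]
            b[r] -= f * b[col]
    w = [0.0] * m
    for i in reversed(range(m)):                # back substitution
        w[i] = (b[i] - sum(G[i][j] * w[j] for j in range(i + 1, m))) / G[i][i]
    return w

def predict(w, x):
    """Soft-sensor reading: intercept plus weighted inputs."""
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
```

Once trained on logged plant data, `predict` can be called at the sampling rate of the cheap inputs, providing an estimate of the expensive measurement between (or instead of) laboratory analyses.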

    New Fundamental Technologies in Data Mining

    The progress of data mining technology and its broad public popularity establish a need for a comprehensive text on the subject. The book series entitled "Data Mining" addresses this need by presenting in-depth descriptions of novel mining algorithms and many useful applications. Beyond a thorough treatment of each topic, the two books offer useful hints and strategies for solving the problems discussed in their chapters. The contributing authors have highlighted many future research directions that will foster multi-disciplinary collaborations and hence lead to significant development in the field of data mining.

    Fuzzy Logic

    This book introduces the capability of fuzzy logic in the development of emerging technologies. It consists of sixteen chapters showing various applications in the fields of bioinformatics, health, security, communications, transportation, financial management, energy, and environment systems. The book is a major reference source for all those concerned with applied intelligent systems. The intended readers are researchers, engineers, medical practitioners, and graduate students interested in fuzzy logic systems.

    Transformation of graphical models to support knowledge transfer

    Human experts are able to flexibly adjust their decision behaviour to the situation at hand. This capability pays off in situations with limited resources such as time restrictions. It is particularly advantageous to adapt the underlying knowledge representation and to make use of decision models at different levels of abstraction. Furthermore, human experts have the ability to include uncertain information and vague perceptions in decision making. Classical decision-theoretic models are based directly on the concept of rationality, whereby the decision behaviour is prescribed by the principle of maximum expected utility: for each observation, an optimal decision function prescribes the action that maximizes expected utility. Modern graph-based methods like Bayesian networks or influence diagrams make decision-theoretic methods attractive from a modelling point of view. One disadvantage of decision-theoretic methods is complexity: finding an optimal decision can be very expensive, and inference in decision networks is known to be NP-hard. This dissertation aims at combining the advantages of decision-theoretic models with rule-based systems by transforming a decision-theoretic model into a fuzzy rule-based system. Fuzzy rule bases are efficient to evaluate from a computational point of view, can approximate non-linear functional dependencies, and are intelligible. A new transformation process was therefore established to generate rule-based representations from decision models, providing an efficient implementation architecture and representing knowledge in an explicit, intelligible way.
First, an agent can apply the new parameterized structure learning algorithm to identify the structure of a Bayesian network. The use of learning approaches to determine preferences and the specification of probability information subsequently make it possible to model decision and utility nodes and to generate a consolidated decision-theoretic model. A transformation algorithm then compiles a rule base from this model, using the expected utility loss as an approximation measure of quality. The practicality of the concept is demonstrated with an example of condition monitoring of a rotation spindle.
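The maximum-expected-utility principle that the compiled rule base approximates can be stated in a few lines: weight each action's utility by the probability of each world state and pick the action with the highest expectation. The spindle-monitoring numbers below are hypothetical, chosen only to illustrate the computation.

```python
def best_action(actions, states, prob, utility):
    """Maximum-expected-utility decision: return the action whose
    probability-weighted utility over states is highest."""
    def expected_utility(a):
        return sum(prob[s] * utility[(a, s)] for s in states)
    return max(actions, key=expected_utility)

# Hypothetical spindle example: decide whether to stop for maintenance
# given the current belief that the spindle is worn.
states = ["worn", "ok"]
prob = {"worn": 0.3, "ok": 0.7}
utility = {("stop", "worn"): 0,      ("stop", "ok"): -10,
           ("continue", "worn"): -100, ("continue", "ok"): 5}
```

Here EU(stop) = 0.3*0 + 0.7*(-10) = -7 and EU(continue) = 0.3*(-100) + 0.7*5 = -26.5, so stopping is rational despite the 70% chance the spindle is fine. The dissertation's transformation replaces this exact computation with fuzzy rules that approximate it at much lower inference cost.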