1,742 research outputs found

    Fuzzy linear programming problems : models and solutions

    We investigate various types of fuzzy linear programming problems in terms of their models and solution methods. First, we review fuzzy linear programming problems with fuzzy decision variables and fuzzy linear programming problems with fuzzy parameters (fuzzy numbers in the definition of the objective function or constraints), along with the associated duality results. Then, we review fully fuzzy linear programming problems, in which all variables and parameters are allowed to be fuzzy. Most methods for solving such problems are based on ranking functions, alpha-cuts, duality results or penalty functions; in these methods, the authors work with crisp formulations of the fuzzy problems. Recently, some heuristic algorithms have also been proposed; with these, some authors solve the fuzzy problem directly, while others solve the crisp problems approximately.
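
    As a rough illustration of the ranking-function approach mentioned above, the sketch below maps triangular fuzzy cost coefficients to crisp values and solves the resulting crisp LP. The ranking formula (l + 2m + u)/4 and the toy data are assumptions made here for illustration, not taken from the paper.

```python
# Minimal sketch: defuzzify triangular fuzzy objective coefficients with a
# ranking function, then solve the resulting crisp LP.
import numpy as np
from scipy.optimize import linprog

def rank_triangular(l, m, u):
    """Crisp ranking of a triangular fuzzy number (l, m, u)."""
    return (l + 2 * m + u) / 4.0

# Fuzzy profit coefficients for two products (hypothetical data).
fuzzy_c = [(3, 4, 5), (1, 2, 4)]
c_crisp = np.array([rank_triangular(*t) for t in fuzzy_c])

# Crisp resource constraints (hypothetical data).
A_ub = [[2, 3], [4, 1]]
b_ub = [12, 10]

# linprog minimizes, so negate the ranked coefficients to maximize profit.
res = linprog(-c_crisp, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 2, method="highs")
print("optimal x:", res.x, "ranked objective:", -res.fun)
```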

    Fuzzy Knowledge Based Reliability Evaluation and Its Application to Power Generating System

    PhD thesis. The method of using Fuzzy Sets Theory (FST) and Fuzzy Reasoning (FR) to aid reliability evaluation in a complex and uncertain environment is studied, with special reference to electrical power generating system reliability evaluation. Device (component) reliability prediction contributes significantly to a system's reliability through its ability to identify the sources and causes of unreliability. The main factors affecting reliability are identified in the Reliability Prediction Process (RPP). However, the relation between reliability and each affecting factor is not a necessary and sufficient one, and it is difficult to express this kind of relation precisely in quantitative mathematical terms. It is acknowledged that human experts possess special characteristics that enable them to learn and reason in a vague and fuzzy environment based on their experience; reliability prediction can therefore be classified as a human-engineer-oriented decision process. A fuzzy knowledge based reliability prediction framework, in which speciality rather than generality is emphasised, is proposed in the first part of the thesis. For this purpose, various factors affecting device reliability are investigated and knowledge trees for predicting three reliability indices, i.e. failure rate, maintenance time and human error rate, are presented. Human experts' empirical and heuristic knowledge is represented by fuzzy linguistic rules, and the fuzzy compositional rule of inference is employed as the inference tool. Two approaches to system reliability evaluation are presented in the second part of the thesis. In the first approach, fuzzy arithmetic forms the foundation for system reliability evaluation under a fuzzy environment. The objective is to extend the underlying fuzzy concept into a strict mathematical framework in order to arrive at decisions on system adequacy based on imprecise and qualitative information. To achieve this, various reliability indices are modelled as Trapezoidal Fuzzy Numbers (TFN) and processed with extended fuzzy arithmetic operators. In the second approach, the knowledge of system reliability evaluation is modelled in the form of fuzzy combination production rules and a device combination sequence control algorithm, and system reliability is evaluated using a fuzzy inference system. The two approaches are compared through case studies. As an application, power generating system adequacy is studied. Under the assumption that both unit reliability data and load data are subjectively estimated, these fuzzy data are modelled as triangular fuzzy numbers, and a fuzzy capacity outage model and a fuzzy load model are developed using fuzzy arithmetic operations. Power generating system adequacy is evaluated by convoluting the fuzzy capacity outage model with the fuzzy load model. A fuzzy risk index named "Possibility Of Load Loss" (POLL) is defined based on the concept of fuzzy containment. The proposed index is tested on the IEEE Reliability Test System (RTS) and satisfactory results are obtained. Finally, the implementation of the Fuzzy Rule Based Expert System Shell (FRBESS) is reported, and its application to device reliability prediction and system reliability evaluation is discussed.
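
    A minimal sketch, assuming hypothetical component data, of the extended fuzzy arithmetic described in the second part of the thesis: reliabilities modelled as trapezoidal fuzzy numbers (TFNs) and combined for series and parallel structures using the usual componentwise approximation for positive TFNs. The function names and numbers are illustrative only.

```python
# Trapezoidal fuzzy numbers (a, b, c, d) with a <= b <= c <= d, combined with
# approximate extended arithmetic for series/parallel reliability structures.
from typing import Tuple

TFN = Tuple[float, float, float, float]

def tfn_mul(x: TFN, y: TFN) -> TFN:
    """Approximate product of two positive trapezoidal fuzzy numbers."""
    return tuple(xi * yi for xi, yi in zip(x, y))

def tfn_complement(x: TFN) -> TFN:
    """Fuzzy 1 - x, used for unreliability (assumes values lie in [0, 1])."""
    a, b, c, d = x
    return (1 - d, 1 - c, 1 - b, 1 - a)

def series(reliabilities):
    """Series system: product of component reliabilities."""
    r = (1.0, 1.0, 1.0, 1.0)
    for x in reliabilities:
        r = tfn_mul(r, x)
    return r

def parallel(reliabilities):
    """Parallel system: 1 - product of component unreliabilities."""
    q = (1.0, 1.0, 1.0, 1.0)
    for x in reliabilities:
        q = tfn_mul(q, tfn_complement(x))
    return tfn_complement(q)

# Two hypothetical generating units with imprecise reliability estimates.
unit_1: TFN = (0.90, 0.93, 0.95, 0.97)
unit_2: TFN = (0.85, 0.88, 0.90, 0.93)
print("series  :", series([unit_1, unit_2]))
print("parallel:", parallel([unit_1, unit_2]))
```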

    A Redundancy Detection Algorithm for Fuzzy Stochastic Multi-Objective Linear Fractional Programming Problems

    The computational complexity of linear and nonlinear programming problems depends on the number of objective functions and constraints involved, and solving a large problem often becomes a difficult task. Redundancy detection and elimination provides a suitable tool for reducing this complexity and simplifying a linear or nonlinear programming problem while maintaining the essential properties of the original system. Although a large number of redundancy detection methods have been proposed to simplify linear and nonlinear stochastic programming problems, very little work has addressed fuzzy stochastic (FS) fractional programming problems. We propose an algorithm that simultaneously detects both redundant objective function(s) and redundant constraint(s) in FS multi-objective linear fractional programming problems. More precisely, our algorithm reduces the number of linear fuzzy fractional objective functions by transforming them into probabilistic-possibilistic constraints characterized by predetermined confidence levels. We present two numerical examples to demonstrate the applicability of the proposed algorithm and exhibit its efficacy.
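
    The sketch below shows only the classical crisp building block behind constraint-redundancy detection: a linear constraint is redundant if its left-hand side cannot exceed its bound even when the constraint itself is dropped. The paper's algorithm extends this kind of test to fuzzy stochastic multi-objective fractional problems via probabilistic-possibilistic constraints; the data here is hypothetical.

```python
# Detect redundant constraints of a crisp system A x <= b by maximizing each
# row's left-hand side subject to the remaining constraints.
import numpy as np
from scipy.optimize import linprog

def redundant_constraints(A, b, bounds=None):
    A, b = np.asarray(A, float), np.asarray(b, float)
    redundant = []
    for i in range(len(b)):
        others = [j for j in range(len(b)) if j != i]
        # Maximize a_i . x subject to the remaining constraints.
        res = linprog(-A[i], A_ub=A[others], b_ub=b[others],
                      bounds=bounds, method="highs")
        if res.status == 0 and -res.fun <= b[i] + 1e-9:
            redundant.append(i)
    return redundant

A = [[1, 0], [0, 1], [1, 1]]   # x <= 4, y <= 4, x + y <= 10 (last is redundant)
b = [4, 4, 10]
print(redundant_constraints(A, b, bounds=[(0, None)] * 2))  # -> [2]
```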

    Acta Cybernetica : Volume 21. Number 4.


    Disease diagnosis in smart healthcare: Innovation, technologies and applications

    To promote sustainable development, the smart city implies a global vision that merges artificial intelligence, big data, decision making, information and communication technology (ICT), and the internet-of-things (IoT). The ageing issue is an aspect to which researchers, companies and governments should devote effort by developing innovative smart healthcare technology and applications. In this paper, the topic of disease diagnosis in smart healthcare is reviewed. Typical emerging optimization algorithms and machine learning algorithms are summarized; evolutionary optimization, stochastic optimization and combinatorial optimization are covered. Owing to the fact that there are many applications in healthcare, four applications in the field of disease diagnosis (which also appear among the top 10 causes of global death in 2015), namely cardiovascular diseases, diabetes mellitus, Alzheimer's disease and other forms of dementia, and tuberculosis, are considered. In addition, challenges in the deployment of disease diagnosis in healthcare are discussed.
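
    As a generic illustration of the machine-learning pipelines such reviews survey for disease diagnosis, the sketch below trains a classifier on synthetic stand-in data and evaluates it on held-out samples; nothing in it is taken from the paper itself, and the synthetic features merely stand in for clinical records (e.g. glucose, BMI, age in diabetes screening).

```python
# Train/evaluate a simple diagnostic classifier on synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=8, n_informative=5,
                           random_state=0)  # stand-in for patient records
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```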

    Entwicklung eines auf Fuzzy-Regeln basierten Expertensystems zur Hochwasservorhersage im mesoskaligen Einzugsgebiet des Oberen Mains (Development of a fuzzy rule based expert system for flood forecasting in the meso-scale catchment of the Upper Main)

    People worldwide are faced with flood events of different magnitudes, and a timely and reliable flood forecast is essential to save goods and, more importantly, lives. The main objective of this work is the development of a fuzzy rule based flood forecast system for meso-scale catchments that covers extreme flood events with return periods of 100 years and more and accounts for the associated uncertainties. Within a single river catchment such extreme events are rarely observed, yet their data are essential for a reliable set-up of warning systems. The database is therefore extended by simulations of possible flood events with the hydrological model WaSiM-ETH (Water balance Simulation model ETH), driven by precipitation fields generated by Bliefernicht et al. (2008). The required calibration of the hydrological model is performed with the genetic optimization algorithm SCE (Shuffled Complex Evolution); different SCE configurations are investigated and an optimization strategy for the Upper Main basin is developed in order to obtain reproducible and satisfactory calibration results while keeping the calibration effort low. The forecast system comprises different time horizons (3 days; 6, 12 and 48 hours) to ensure a reliable and continuous flood forecast at the three main gauges of the Upper Main river, with the individual fuzzy inference systems focusing on different discharge conditions: the 3-day forecast aims at a reliable reproduction of low and medium flows and a reliable, timely prediction of exceedances of a predefined warning level, and a predicted exceedance triggers the switch to the 6-, 12- and 48-hour forecasts, whose focus is the flood hydrograph itself. The performance of the two classical fuzzy inference systems, Mamdani and Takagi-Sugeno, is investigated for all four forecast horizons, considering a wide variety of input features, among others the Tukey data depth. The fuzzy inference systems are trained with the SA (Simulated Annealing) optimization algorithm, and a further performance comparison considers the 48-hour forecast behaviour of the two fuzzy inference systems and of the hydrological model WaSiM-ETH. Finally, the expert system ExpHo-HORIX (Expertensystem Hochwasser - HORIX) is developed to combine the individual trained fuzzy inference systems into one overall flood warning system. For every forecast, ExpHo-HORIX analyses and reports the precipitation uncertainty derived from ensemble forecasts; in a flood situation, the model uncertainties of the hydrological model used to generate the extreme-event database can additionally be reported for the hourly fuzzy rule systems, provided results of the SCEM analysis (Grundmann, 2009) are available. The system thus provides a fast forecast and a quantification of uncertainties within a manageable, user-friendly and transparent framework that can easily be implemented in an existing environment.
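
    A minimal sketch of a Mamdani-type inference step of the kind trained in this thesis: crisp inputs (here, recent precipitation and current discharge) are fuzzified, a small rule base is fired with min/max operators, and the output is defuzzified by the centroid method. The membership functions, rules and ranges are hypothetical; the thesis trains its rule systems with simulated annealing on WaSiM-ETH simulations.

```python
# Toy Mamdani fuzzy inference: fuzzify inputs, fire rules, centroid-defuzzify.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b (requires a < b < c)."""
    x = np.asarray(x, dtype=float)
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Output universe: forecast discharge in m^3/s (hypothetical range).
q_out = np.linspace(0, 700, 701)
out_sets = {"low": tri(q_out, -50, 50, 200),
            "medium": tri(q_out, 100, 300, 500),
            "high": tri(q_out, 400, 550, 700)}

def forecast(precip_mm, discharge_m3s):
    # Fuzzify the two inputs (hypothetical membership functions).
    p = {"low": float(tri(precip_mm, -5, 5, 20)),
         "high": float(tri(precip_mm, 10, 30, 50))}
    q = {"low": float(tri(discharge_m3s, -20, 50, 150)),
         "high": float(tri(discharge_m3s, 100, 250, 400))}
    # Rule base: antecedents combined with min, consequents clipped, max-aggregated.
    rules = [(min(p["low"], q["low"]), "low"),
             (min(p["high"], q["low"]), "medium"),
             (min(p["high"], q["high"]), "high")]
    agg = np.zeros_like(q_out)
    for strength, label in rules:
        agg = np.maximum(agg, np.minimum(strength, out_sets[label]))
    # Centroid defuzzification.
    return np.sum(agg * q_out) / np.sum(agg) if np.sum(agg) > 0 else float("nan")

print("forecast discharge:", forecast(precip_mm=25.0, discharge_m3s=180.0))
```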

    On-the-fly synthesizer programming with rule learning

    This manuscript explores automatic programming of sound synthesis algorithms within the context of the performative artistic practice known as live coding. Writing source code in an improvised way to create music or visuals became an instrument the moment affordable computers were able to perform real-time sound synthesis with languages that keep their interpreter running. Ever since, live coding has involved real-time programming of synthesis algorithms. For that purpose, one possibility is an algorithm that automatically creates variations out of a few presets selected by the user. However, the need for real-time feedback and the small size of the data sets (which may even be collected mid-performance) are constraints that make existing automatic sound synthesizer programmers and learning algorithms unfeasible. Moreover, such algorithms are not designed to create variations of a sound but rather to find the synthesizer parameters that match a given target sound. Other approaches create representations of the space of possible sounds, allowing the user to explore it by means of interactive evolution; even though these systems are exploration-oriented, they require longer run-times. This thesis investigates inductive rule learning for on-the-fly synthesizer programming, an approach that is conceptually different from those found in both the synthesizer programming and the live coding literature. Rule models offer interpretability and allow working directly with the parameter values of the synthesis algorithms (even with symbolic data), making preprocessing unnecessary. RuLer, the proposed learning algorithm, receives a dataset containing user-labelled combinations of parameter values of a synthesis algorithm. Among the combinations sharing the same label, it analyses patterns based on dissimilarity; these patterns are described as an IF-THEN rule model. The algorithm parameters control what is considered a pattern and, since patterns are the basis for inducing new parameter settings, they also control the degree of consistency of the induced settings with respect to the original input data. An algorithm (named FuzzyRuLer) able to extend IF-THEN rules to hyperrectangles, which in turn are used as the cores of membership functions, is then presented. The resulting fuzzy rule model covers the entire input feature space; to this end, the algorithm generalizes the logical rules, resolving contradictions by following a maximum-volume heuristic. Throughout the manuscript it is discussed how, when machine learning algorithms are used as creative tools, glitches, errors or inaccuracies produced by the resulting models are sometimes desirable, as they may offer novel, unpredictable results. The evaluation of the algorithms follows two paths. The first focuses on user tests. The second, reflecting the fact that this work was carried out within a computer science department, provides a broader, domain-independent evaluation of the algorithms' performance using extrinsic benchmarks (i.e. not belonging to a synthesizer's domain) for cross-validation and minority oversampling. In the oversampling tasks, using imbalanced datasets, the algorithm yields state-of-the-art results; moreover, the synthetic points produced are significantly different from those created by the other algorithms and perform (controlled) exploration of more distant regions. Finally, accompanying the research, various performances, concerts and an album were produced with the algorithms and examples of this thesis. The reviews received and the collections in which the album has been featured show a positive reception within the community. Together, these evaluations suggest that rule learning is both an effective method and a promising path for further research.
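
    A simplified illustration in the spirit of the rule induction described above, not the published RuLer algorithm: same-label parameter settings that differ in at most a chosen number of positions are merged into IF-THEN rules listing the admissible values per parameter, and new settings are induced by sampling from a rule. The parameter names and data are hypothetical.

```python
# Dissimilarity-based merging of labelled synth settings into IF-THEN rules,
# then induction of new settings by sampling from a rule.
import itertools
import random

def merge(example_a, example_b, max_diff):
    """Merge two same-label settings if they differ in few enough parameters."""
    diff = sum(1 for a, b in zip(example_a, example_b) if a != b)
    if diff > max_diff:
        return None
    return tuple(sorted({a, b}) for a, b in zip(example_a, example_b))

def induce_rules(examples, max_diff=1):
    """examples: list of (parameter_tuple, label). Returns IF-THEN rules."""
    rules = []
    for label in {lab for _, lab in examples}:
        same = [p for p, lab in examples if lab == label]
        merged = set()
        for a, b in itertools.combinations(same, 2):
            rule = merge(a, b, max_diff)
            if rule is not None:
                rules.append((rule, label))
                merged.update({a, b})
        # Keep unmerged examples as point rules.
        rules += [(tuple([v] for v in p), label) for p in same if p not in merged]
    return rules

def sample_setting(rule):
    """Induce a new parameter setting from a rule's admissible values."""
    return tuple(random.choice(values) for values in rule)

# Hypothetical labelled synth settings: (cutoff, resonance, waveform).
data = [((200, 0.2, "saw"), "dark"), ((300, 0.2, "saw"), "dark"),
        ((2000, 0.8, "square"), "bright")]
for rule, label in induce_rules(data, max_diff=1):
    print(label, "IF", rule, "-> e.g.", sample_setting(rule))
```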