50 research outputs found

    A Fuzzy Logic Programming Environment for Managing Similarity and Truth Degrees

    FASILL (an acronym of "Fuzzy Aggregators and Similarity Into a Logic Language") is a fuzzy logic programming language with implicit/explicit truth degree annotations, a great variety of connectives, and unification by similarity. FASILL integrates and extends features coming from MALP (Multi-Adjoint Logic Programming, a fuzzy logic language with explicitly annotated rules) and Bousi~Prolog (which uses a weak unification algorithm and is well suited for flexible query answering). Hence, it properly manages similarity and truth degrees in a single framework, combining the expressive benefits of both languages. This paper presents the main features and implementation details of FASILL. Throughout the paper we describe its syntax and operational semantics and give clues to the implementation of the lattice module and the similarity module, two of the main building blocks of the new programming environment, which enriches the FLOPER system developed in our research group. In Proceedings PROLE 2014, arXiv:1501.0169.
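
The weak (similarity-based) unification the abstract mentions can be sketched for ground atoms only; the similarity relation, the atoms, and the choice of the Gödel t-norm (min) below are illustrative assumptions, not FASILL's actual implementation:

```python
# Weak unification of ground atoms up to a similarity relation, in the
# spirit of Bousi~Prolog/FASILL. All names and degrees here are made up
# for illustration.

SIMILAR = {("vanguardist", "modern"): 0.9}  # symmetric similarity entries

def sim(f, g):
    """Similarity degree between two symbols (1.0 on identity)."""
    if f == g:
        return 1.0
    return SIMILAR.get((f, g), SIMILAR.get((g, f), 0.0))

def weak_unify(atom1, atom2, tnorm=min):
    """Approximation degree for unifying two ground atoms (functor, args),
    combining functor and argument similarities with the given t-norm."""
    (f, args1), (g, args2) = atom1, atom2
    if len(args1) != len(args2):
        return 0.0
    degree = sim(f, g)
    for a, b in zip(args1, args2):
        degree = tnorm(degree, sim(a, b))
    return degree

# "vanguardist(hydropolis)" unifies with "modern(hydropolis)" to degree 0.9
degree = weak_unify(("vanguardist", ("hydropolis",)),
                    ("modern", ("hydropolis",)))
```

A full implementation would additionally thread a substitution for non-ground terms; the point here is only how similarity degrees replace strict symbol equality.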

    An augmented Lagrangian fish swarm based method for global optimization

    This paper presents an augmented Lagrangian methodology with a stochastic population based algorithm for solving nonlinear constrained global optimization problems. The method approximately solves a sequence of simple bound global optimization subproblems using a fish swarm intelligent algorithm. A stochastic convergence analysis of the fish swarm iterative process is included. Numerical results on a benchmark set of problems are shown, including a comparison with other stochastic-type algorithms. Fundação para a Ciência e a Tecnologia (FCT).
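
The outer augmented Lagrangian loop described above can be sketched as follows; a plain grid search stands in for the fish swarm inner solver, and the test problem, bounds, and penalty schedule are illustrative assumptions:

```python
# Augmented Lagrangian outer loop over bound-constrained subproblems.
# The inner "global optimizer" is a deterministic grid search here; the
# paper uses a fish swarm population algorithm instead.

def grid(bounds, steps=40):
    """All points of a uniform grid over a 2-D box (2-D only, for brevity)."""
    (a, b), (c, d) = bounds
    xs = [a + i * (b - a) / steps for i in range(steps + 1)]
    ys = [c + j * (d - c) / steps for j in range(steps + 1)]
    return [(x, y) for x in xs for y in ys]

def solve(f, h, bounds, outer_iters=20):
    """Minimise f subject to the equality constraint h(x) = 0."""
    lam, mu = 0.0, 10.0
    pts = grid(bounds)
    x = pts[0]
    for _ in range(outer_iters):
        # inner step: approximately minimise the augmented Lagrangian
        x = min(pts, key=lambda p: f(p) + lam * h(p) + 0.5 * mu * h(p) ** 2)
        lam += mu * h(x)   # first-order multiplier update
        mu *= 2.0          # tighten the penalty parameter
    return x

# minimise x0^2 + x1^2  s.t.  x0 + x1 = 1  (optimum at (0.5, 0.5))
x = solve(lambda p: p[0] ** 2 + p[1] ** 2,
          lambda p: p[0] + p[1] - 1.0,
          [(-2.0, 2.0), (-2.0, 2.0)])
```

The structure mirrors the paper's methodology: each subproblem only has bound constraints, and the multiplier/penalty updates drive the iterates toward feasibility.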

    BENCHOP - The BENCHmarking project in Option Pricing

    The aim of the BENCHOP project is to provide the finance community with a common suite of benchmark problems for option pricing. We provide a detailed description of the six benchmark problems together with methods to compute reference solutions. We have implemented fifteen different numerical methods for these problems and compare their relative performance. All implementations are available online and can be used for future development and comparison.
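
Among the benchmark problems is the European call under Black-Scholes dynamics, for which a closed-form reference solution exists; a compact sketch, with illustrative parameter values (not BENCHOP's actual test parameters):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s, k, t, r, sigma):
    """Closed-form Black-Scholes price of a European call:
    spot s, strike k, maturity t, risk-free rate r, volatility sigma."""
    d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

# at-the-money call, one year to maturity (illustrative inputs)
price = bs_call(s=100.0, k=100.0, t=1.0, r=0.03, sigma=0.15)
```

Reference values like this are what the fifteen implemented numerical methods are measured against.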

    Intelligent techniques for the analysis of environmental conditions

    It is well known that air quality is an important and worrying issue nowadays, affecting not only human health but also many other aspects such as climate change and the survival of the biosphere. In recent years, many public institutions have adapted to the restrictive environmental pollution limits imposed by European regulations, Spain being one of the countries that must comply with them. Both in Spain and in other countries there are various networks of stations for the continuous monitoring of air quality and the acquisition of meteorological parameters. These networks are present not only in big cities but also in peripheral and industrial areas, as well as in places where the preservation of nature is a key issue. Furthermore, they are constantly rearranged to improve their function. In this PhD Thesis, different intelligent techniques (more specifically, Soft Computing techniques) have been applied to publicly available datasets with air quality and/or meteorological information. The applied techniques perform two fundamental tasks: dimensionality reduction and clustering. They have been applied both in isolation and in combination in order to improve the results of the analysis of environmental conditions.
The applied dimensionality reduction techniques are: Principal Component Analysis (PCA), applied first to obtain an initial approximation to the structure of the dataset; Locally Linear Embedding (LLE), a local non-linear technique; Maximum Likelihood Hebbian Learning (MLHL) and Cooperative Maximum Likelihood Hebbian Learning (CMLHL), neural models that implement Exploratory Projection Pursuit; Curvilinear Component Analysis (CCA), a non-linear model that tries to preserve interpoint distances in the output space; Multidimensional Scaling (MDS), a global non-linear technique operating on the distance matrix; Isometric Mapping (ISOMAP), a technique derived from MDS; and Self-Organizing Maps (SOM), an important neural model implementing competitive learning. The applied clustering techniques are, on the one hand, partitional: k-means, the first clustering method applied, which assigns samples to groups using distance metrics; SOM k-means, which uses the SOM algorithm for the weight-updating process; k-medoids, a technique derived from k-means that assigns the centroid of each cluster to one of its own samples; and fuzzy c-means, a fuzzy-logic-based clustering technique. On the other hand, hierarchical agglomerative clustering, in which groups are formed in a bottom-up fashion, has also been applied, together with several clustering evaluation indexes used to determine the likely number of groups in a dataset, and dendrograms for a tree-shaped graphical representation of the clustering. The case studies have been carefully selected and range from local and regional to national contexts. The selected periods of time have likewise been a priority.
In some of the studies the analyzed period is as short as one day, for the analysis of meteorology/air quality over a brief interval at a given location, while in others time windows close to a decade are used at the most climatologically representative locations in Spain. Starting from one or more public datasets with the most complete information available about environmental conditions (meteorological, air quality, or both), and always analyzing variables that are decisive in characterizing environmental conditions, the goal is to extract the essential information stored in the datasets by means of the intelligent techniques. This makes it possible to analyze the environmental conditions in the selected case studies. In each case study, the meteorological or air quality situation is analyzed for the selected locations and periods, searching for similarities and differences among the data samples, emphasizing the anomalous situations detected and trying to explain them. A comparative analysis of the results obtained with the different techniques is also performed, weighing the advantages and disadvantages of each of them in each case study. Dimensionality reduction techniques prove very useful for graphically analyzing multidimensional datasets, finding relationships in the data, and detecting anomalous situations. Complementarily, clustering techniques reveal the structure of a dataset by assigning the data samples to the different groups according to the distance and similarity measures applied. This is very useful in the present work for understanding the similarities and differences in the meteorology and/or air quality of the different locations selected in each case study.
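
Of the clustering methods listed, k-means is the one applied first; a minimal sketch on toy 2-D data (the dataset, the deterministic initialisation, and all parameters are illustrative, not taken from the thesis's case studies):

```python
# Minimal k-means: alternate assignment (nearest centroid by squared
# Euclidean distance) and update (centroid = cluster mean) steps.

def kmeans(points, k, iters=50):
    # deterministic initialisation for the sketch: the first k points;
    # real runs typically use random or k-means++-style seeding
    centroids = [tuple(p) for p in points[:k]]
    clusters = []
    for _ in range(iters):
        # assignment step: each sample joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda m: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[m])))
            clusters[j].append(p)
        # update step: move each centroid to the mean of its cluster
        for j, members in enumerate(clusters):
            if members:
                centroids[j] = tuple(sum(c) / len(members)
                                     for c in zip(*members))
    return centroids, clusters

# two well-separated blobs, interleaved so points[:2] seeds one per blob
data = [(0.0, 0.0), (5.0, 5.0), (0.1, 0.2),
        (5.2, 4.9), (-0.1, 0.1), (4.9, 5.1)]
centroids, clusters = kmeans(data, k=2)
```

The same assignment/update structure underlies SOM k-means (different update rule) and k-medoids (centroids restricted to actual samples).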

    Real-Time Implementation and Performance Optimization of Local Derivative Pattern Algorithm on GPUs

    Pattern-based texture descriptors are widely used in Content-Based Image Retrieval (CBIR) for efficient retrieval of matching images. The Local Derivative Pattern (LDP), a higher-order local pattern operator originally proposed for face recognition, encodes the distinctive spatial relationships contained in a local region of an image as the feature vector. LDP efficiently extracts finer details and provides efficient retrieval; however, it was proposed for images of limited resolution. Over time, developments in digital image sensors have paved the way for capturing images at very high resolution. The LDP algorithm, though very efficient in content-based image retrieval, does not scale well when extracting features from such high-resolution images, as it becomes computationally very expensive. This paper proposes how to efficiently extract parallelism from the LDP algorithm, together with strategies for implementing it optimally by exploiting some inherent General-Purpose Graphics Processing Unit (GPGPU) characteristics. By optimally configuring the GPGPU kernels, image retrieval was performed at a much faster rate. The LDP algorithm was ported onto a Compute Unified Device Architecture (CUDA) supported GPGPU, and a maximum speedup of around 240x was achieved compared to its sequential counterpart.
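
A simplified second-order LDP along the 0-degree direction shows the per-pixel, data-independent computation that makes the algorithm a good fit for one-thread-per-pixel GPU mapping; the sign convention and plain Python types below are illustrative simplifications of the published operator:

```python
# Simplified second-order Local Derivative Pattern, 0-degree direction.
# Each pixel's 8-bit code depends only on a fixed local neighbourhood,
# so all pixels can be processed independently (one CUDA thread each).

def ldp2_0deg(img, y, x):
    """8-bit LDP code of pixel (y, x); (y, x) needs a 1-pixel margin and
    every sampled pixel needs a right-hand neighbour."""
    # first-order derivative along 0 degrees: I'(p) = I(p) - I(right of p)
    d = lambda r, c: img[r][c] - img[r][c + 1]
    center = d(y, x)
    ring = [(y - 1, x - 1), (y - 1, x), (y - 1, x + 1), (y, x + 1),
            (y + 1, x + 1), (y + 1, x), (y + 1, x - 1), (y, x - 1)]
    code = 0
    for bit, (r, c) in enumerate(ring):
        # set the bit when the derivative changes sign between the
        # centre pixel and this neighbour
        if center * d(r, c) < 0:
            code |= 1 << bit
    return code

# on a checkerboard the derivative flips sign at the 4-connected neighbours
checker = [[(i + j) % 2 for j in range(5)] for i in range(5)]
code = ldp2_0deg(checker, 2, 2)
```

Because the per-pixel work is uniform and independent, the main GPU tuning questions are memory-access coalescing and kernel launch configuration rather than the arithmetic itself.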

    Parallel approaches to shortest-path problems for multilevel heterogeneous computing

    Several algorithms solve shortest-path computation problems. These problems are key in combinatorial optimization because of their many real-life applications. Lately, the scientific community's interest in them has grown significantly, not only because of the wide applicability of their solutions but also because of the efficient use of parallel computing. The emergence of new programming models, together with modern GPUs, has improved the performance of earlier parallel algorithms and enabled the creation of more efficient ones. The joint use of these devices together with CPUs forms the perfect tool for tackling the most costly shortest-path problems. This PhD Thesis addresses both contexts through: the development of new GPU approaches for shortest-path problems, together with the study of optimal configurations; and the design of solutions that combine sequential and parallel algorithms in heterogeneous environments. Departamento de Informática (Arquitectura y Tecnología de Computadores, Ciencias de la Computación e Inteligencia Artificial, Lenguajes y Sistemas Informáticos).
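
The usual sequential baseline that GPU and heterogeneous shortest-path solutions are measured against is Dijkstra's algorithm; a minimal single-source sketch on an illustrative graph:

```python
import heapq

# Single-source Dijkstra with a binary heap and lazy deletion of
# stale queue entries; graph maps node -> list of (neighbour, weight).

def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry: a shorter path to u was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"a": [("b", 1), ("c", 4)],
         "b": [("c", 2), ("d", 6)],
         "c": [("d", 3)]}
dist = dijkstra(graph, "a")  # shortest a->d path is a-b-c-d with cost 6
```

Parallel GPU formulations typically trade this strict priority-queue ordering for bulk edge relaxations (Bellman-Ford-style) to expose enough concurrency.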

    Front Propagation in Random Media

    This PhD thesis deals with the problem of the propagation of fronts under random circumstances. A statistical model representing the motion of fronts evolving in media characterized by microscopic randomness is discussed and expanded in order to cope with three distinct applications: wild-land fire simulation, turbulent premixed combustion, and biofilm modeling. In the studied formalism, the position of the average front is computed using a sharp-front evolution method, such as the level set method. The microscopic spread of particles taking place around the average front is given by the probability density function of the underlying diffusive process, which is assumed to be known in advance. The adopted statistical front propagation framework allowed a deeper understanding of each studied field of application. The application of this model eventually introduced parameters whose impact on the physical observables of the front spread has been studied with Uncertainty Quantification and Sensitivity Analysis tools. In particular, metamodels for the front propagation system have been constructed in a non-intrusive way, making use of generalized Polynomial Chaos expansions and Gaussian Processes. The Thesis received funding from the Basque Government through the BERC 2014-2017 program. It was also funded by the Spanish Ministry of Economy and Competitiveness MINECO via the BCAM Severo Ochoa SEV-2013-0323 accreditation. The PhD is funded by La Caixa Foundation through the PhD grant "La Caixa 2014". Funding from "Programma Operativo Nazionale Ricerca e Innovazione" (PONRI 2014-2020), "Innovative PhDs with Industrial Characterization", is kindly acknowledged for a research visit to the department of Mathematics and Applications "Renato Caccioppoli" of University "Federico II" of Naples.
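
The sharp-front (level set) evolution of the average front can be illustrated in one dimension, where the level-set equation phi_t + F |phi_x| = 0 with constant speed F simply translates the zero level set; grid, speed, and step sizes below are illustrative choices, not the thesis's setup:

```python
# First-order upwind scheme for phi_t + F |phi_x| = 0 in 1-D with F > 0:
# the front (the zero crossing of phi) advances at speed F.

def evolve(phi, speed, dx, dt, steps):
    phi = list(phi)
    for _ in range(steps):
        new = list(phi)  # boundary values are kept fixed
        for i in range(1, len(phi) - 1):
            # Godunov-style upwind gradient magnitude for F > 0
            dminus = (phi[i] - phi[i - 1]) / dx
            dplus = (phi[i + 1] - phi[i]) / dx
            grad = (max(dminus, 0.0) ** 2 + min(dplus, 0.0) ** 2) ** 0.5
            new[i] = phi[i] - dt * speed * grad
        phi = new
    return phi

n, dx = 101, 0.01
# signed distance to an initial front placed at x = 0.2
phi0 = [i * dx - 0.2 for i in range(n)]
# evolve for t = 40 * 0.005 = 0.2 at unit speed: front should reach x = 0.4
phi = evolve(phi0, speed=1.0, dx=dx, dt=0.005, steps=40)
```

In the thesis's framework this tracks only the average front; the random microscopic spread around it is layered on top via the assumed probability density function.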