
    Soft Computing Techniques for the Protein Folding Problem on High Performance Computing Architectures

    The protein-folding problem has been studied extensively during the last fifty years. Understanding the dynamics of a protein's global shape and its influence on biological function can help us discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed to predict the three-dimensional arrangement of a protein's atoms from its sequence. However, the computational complexity of this problem makes it mandatory to search for new models, novel algorithmic strategies and hardware platforms that provide solutions in a reasonable time frame. In this review we present past and current trends in protein folding simulation from both the hardware and the software perspectives. Of particular interest to us are the use of inexact solutions to this computationally hard problem and the hardware platforms on which such Soft Computing techniques have been run. This work is jointly supported by the Fundación Séneca (Agencia Regional de Ciencia y Tecnología, Región de Murcia) under grants 15290/PI/2010 and 18946/JLI/13, by the Spanish MEC and European Commission FEDER under grants TEC2012-37945-C02-02 and TIN2012-31345, and by the Nils Coordinated Mobility programme under grant 012-ABEL-CM-2014A, in part financed by the European Regional Development Fund (ERDF). We also thank NVIDIA for the hardware donation within the UCAM GPU educational and research centers.
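
    To make the idea of "inexact solutions" concrete, the sketch below is a minimal illustration, not code from the review: it applies simulated annealing to the 2D HP lattice model, a standard simplified formulation of protein folding. The sequence, the naive re-sampling move and the cooling schedule are all invented for the example.

```python
import math
import random

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def energy(coords, seq):
    """Count non-bonded H-H contacts; each contributes -1 to the energy."""
    pos = {c: i for i, c in enumerate(coords)}
    e = 0
    for i, c in enumerate(coords):
        if seq[i] != 'H':
            continue
        for dx, dy in MOVES:
            j = pos.get((c[0] + dx, c[1] + dy))
            if j is not None and j > i + 1 and seq[j] == 'H':
                e -= 1
    return e

def random_fold(n):
    """Grow a self-avoiding walk of length n on the square lattice."""
    while True:
        coords = [(0, 0)]
        for _ in range(n - 1):
            steps = [(coords[-1][0] + dx, coords[-1][1] + dy) for dx, dy in MOVES]
            steps = [s for s in steps if s not in coords]
            if not steps:
                break  # dead end: restart the walk
            coords.append(random.choice(steps))
        if len(coords) == n:
            return coords

def anneal(seq, iters=5000, t0=2.0):
    """Naive annealing: re-sample whole folds, accept by the Metropolis rule."""
    cur = random_fold(len(seq))
    best, best_e = cur, energy(cur, seq)
    for k in range(iters):
        t = max(t0 * (1 - k / iters), 1e-3)   # linear cooling schedule
        cand = random_fold(len(seq))
        d = energy(cand, seq) - energy(cur, seq)
        if d <= 0 or random.random() < math.exp(-d / t):
            cur = cand
            if energy(cur, seq) < best_e:
                best, best_e = cur, energy(cur, seq)
    return best, best_e

fold, e = anneal("HPHPPHHPHPPHPHHPPHPH")
print("best energy found:", e)
```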

    A hybrid multi-agent architecture and heuristics generation for solving the meeting scheduling problem

    Agent-based computing has attracted much attention as a promising technique for application domains that are distributed, complex and heterogeneous. Research on multi-agent systems (MAS) has become mature enough for the technology to be applied to an increasingly wide range of complex applications. The main formal architectures used to describe the relationships between agents in a MAS are centralised and distributed architectures. In computational complexity theory, problems are classified into the following categories: (i) P problems, (ii) NP problems, (iii) NP-complete problems, and (iv) NP-hard problems. No method is known for computing solutions to NP-hard problems in a reasonable time frame with the algorithms and computational power available today, and unfortunately many practical problems belong to this very class. Since these problems must nevertheless be solved, the only possibility is to use approximation techniques. Heuristic solution techniques are one such alternative: a heuristic is a strategy that is powerful in general, but not guaranteed to provide the best (i.e. optimal) solution, or even to find a solution at all. This motivates the adoption of optimisation techniques such as Evolutionary Algorithms (EA). This research investigates the feasibility of running computationally intensive algorithms on multi-agent architectures while preserving the ability of small agents to run on small devices, including mobile devices. To achieve this, the present work proposes a new Hybrid Multi-Agent Architecture (HMAA) that generates new heuristics for solving NP-hard problems. The architecture is hybrid because it is semi-distributed/semi-centralised: variables and constraints are distributed among small agents exactly as in distributed architectures, but when the small agents become stuck, centralised control takes over and the variables are transferred to a super agent, which has a central view of the whole system and possesses far more computational power, running intensive algorithms to generate new heuristics for the small agents, which then find optimal solutions for the specified problem. This research contributes the following: (1) the Hybrid Multi-Agent Architecture (HMAA), which generates new heuristics for solving many NP-hard problems; (2) two implemented HMAA frameworks, for search and for optimisation; (3) a new SMA meeting scheduling heuristic; (4) a new SMA repair strategy for the scheduling process; (5) a Small Agent (SMA) responsible for meeting scheduling; (6) "Local Search Programming" (LSP), a new concept for evolutionary approaches; (7) two types of super agent (LGP_SUA and LSP_SUA) implemented in the HMAA, with two SUAs (local and global optima) implemented for each type; and (8) a prototype of HMAA that employs the proposed meeting scheduling heuristic with the repair strategy on SMAs and the four intensive algorithms on SUAs. The results reveal that the architecture is applicable to many different application domains because of its simplicity and efficiency, its performance is better than that of many existing meeting scheduling architectures, and HMAA can be adapted to other types of evolutionary approaches.
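
    As a flavour of how a small agent's scheduling heuristic with a repair step might look, here is an illustrative sketch only: the thesis's actual SMA heuristic and repair strategy are not reproduced, and the greedy "hardest meeting first" ordering and bump-based repair below are our own assumptions.

```python
def schedule(meetings, slots, busy):
    """meetings: {name: set of participants}; busy: set of (person, slot)."""
    assignment = {}
    # greedy heuristic: place the most constrained meeting first
    for name, people in sorted(meetings.items(), key=lambda kv: -len(kv[1])):
        free = [s for s in slots
                if all((p, s) not in busy for p in people)
                and s not in assignment.values()]
        if free:
            assignment[name] = free[0]
            continue
        # repair when stuck: bump a smaller meeting out of a usable slot
        for victim, s in list(assignment.items()):
            if (len(meetings[victim]) < len(people)
                    and all((p, s) not in busy for p in people)):
                del assignment[victim]   # a real repair would reschedule it
                assignment[name] = s
                break
    return assignment

busy = {("ana", 9), ("bob", 10)}
meetings = {"kickoff": {"ana", "bob"}, "review": {"ana"}}
print(schedule(meetings, [9, 10, 11], busy))   # {'kickoff': 11, 'review': 10}
```

    In HMAA the interesting case is the one this toy repair cannot handle: when a small agent stays stuck, the variables are handed to a super agent that generates a new heuristic for it.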

    Optimization of bio-inspired algorithms on heterogeneous CPU-GPU systems

    The scientific challenges of the 21st century require the processing and analysis of huge amounts of information in what is known as the Big Data era. Future advances in different sectors of society, such as medicine, engineering or efficient energy production, to mention just a few examples, depend on the continued growth of the computational power of modern computers. However, this computational growth, traditionally guided by the well-known "Moore's Law", has been compromised in recent decades, mainly by the physical limitations of silicon. Computer architects have developed numerous contributions (multicore, manycore, heterogeneity, dark silicon, etc.) to try to mitigate this slowdown, relegating to the background other factors that are fundamental to problem solving, such as programmability, reliability and accuracy. Software development, however, has followed the opposite path, where ease of programming through abstraction models, automatic code debugging to avoid undesired effects, and fast deployment to production are key to the economic viability and efficiency of the digital business sector. This route often compromises the performance of the applications themselves, a consequence that is wholly unacceptable in a scientific context. The working hypothesis of this doctoral thesis is that narrowing the gap between the hardware and software fields can contribute to solving the scientific challenges of the 21st century. Hardware development is marked by the consolidation of processors oriented towards massive data parallelism, mainly GPUs (Graphics Processing Units) and vector processors, which are combined to build heterogeneous processors or computers (HSA). Specifically, we focus on the use of GPUs to accelerate scientific applications. GPUs have established themselves as one of the most promising platforms for implementing algorithms that simulate complex scientific problems. Since their origin, the trajectory and history of graphics cards have been driven by the videogame industry, reaching very high levels of popularity as ever more realism was achieved. An important milestone came in 2006, when NVIDIA, the leading manufacturer of graphics cards, carved out a place in high-performance computing and in research with the release of CUDA (Compute Unified Device Architecture), an architecture that makes the GPU usable for scientific applications in a versatile way. Despite the importance of the GPU on its own, notable further improvement can be obtained by using it jointly with the CPU, which leads us to the heterogeneous systems named in the title of this work. It is in heterogeneous CPU-GPU environments that performance peaks are reached, since it is not the GPU alone that supports the researchers' scientific computing, but a heterogeneous system combining different types of processors that achieves the highest performance. In such an environment the processors do not compete; on the contrary, each architecture specialises in the part of the computation where it can best exploit its capabilities.
The highest performance is reached in heterogeneous clusters, where multiple nodes are interconnected and may differ not only in their CPU-GPU architectures but also in the computational capabilities within those architectures. Such scenarios pose new challenges in getting the chosen software to run as efficiently as possible and to obtain the best achievable results. These new platforms make it necessary to redesign the software in order to make the most of the available computational resources. Existing algorithms must therefore be redesigned and optimised for the contributions in this field to be relevant, and we must find algorithms that, by their very nature, are candidates for optimal execution on such high-performance platforms. Here we find a family of algorithms called bio-inspired algorithms, which use collective intelligence as the core of their problem solving. Precisely this collective intelligence makes them perfect candidates for implementation on these platforms under the parallel computing paradigm, since solutions can be built from individuals that, through some form of communication, jointly construct a common solution. This thesis focuses in particular on one of these bio-inspired algorithms, a metaheuristic within the Soft Computing paradigm: Ant Colony Optimization (ACO). The algorithm is contextualised, studied and analysed; its most critical parts are identified and redesigned for optimisation and parallelisation, maintaining or improving the quality of its solutions. The resulting alternatives are then implemented and tested on several high-performance platforms. The knowledge acquired in this theoretical and practical study is applied to real cases, in particular to protein folding. In this work we bring together new high-performance hardware platforms and the software redesign and implementation of a bio-inspired algorithm applied to a scientific problem of great complexity, protein folding. When implementing a solution to a real problem, a prior study is needed to understand the problem in depth, since any newcomer to the field will encounter new terminology and issues; in this case, amino acids, molecules and simulation models that are unfamiliar to anyone without a biomedical background.
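
    As an orientation for readers unfamiliar with ACO, the following minimal NumPy sketch is not the thesis code: it runs on a small random travelling salesman instance rather than protein folding, and all parameter values are typical textbook choices. It shows the structure that makes the algorithm attractive for GPUs: each ant builds its tour independently, so ants map naturally onto parallel threads.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30                                        # cities
xy = rng.random((n, 2))
dist = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1) + np.eye(n)
eta = 1.0 / dist                              # heuristic visibility
tau = np.ones((n, n))                         # pheromone matrix

def build_tour(alpha=1.0, beta=2.0):
    """One ant constructs a tour; probabilistic choice by pheromone * visibility."""
    tour = [rng.integers(n)]
    todo = set(range(n)) - {tour[0]}
    while todo:
        i = tour[-1]
        cand = np.array(sorted(todo))
        w = tau[i, cand] ** alpha * eta[i, cand] ** beta
        tour.append(rng.choice(cand, p=w / w.sum()))
        todo.remove(tour[-1])
    return tour

def tour_len(t):
    return sum(dist[t[k], t[(k + 1) % n]] for k in range(n))

for it in range(50):                          # main ACO loop
    tours = [build_tour() for _ in range(20)]  # independent ants -> parallel
    tau *= 0.9                                 # pheromone evaporation
    best = min(tours, key=tour_len)
    for k in range(n):                         # reinforce the best tour
        a, b = best[k], best[(k + 1) % n]
        tau[a, b] += 1.0 / tour_len(best)
        tau[b, a] = tau[a, b]
print("best length:", tour_len(best))
```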

    Flexible provisioning of Web service workflows

    Web services promise to revolutionise the way computational resources and business processes are offered and invoked in open, distributed systems, such as the Internet. These services are described using machine-readable meta-data, which enables consumer applications to automatically discover and provision suitable services for their workflows at run-time. However, current approaches have typically assumed that service descriptions are accurate and deterministic, and so have neglected the fact that services in these open systems are inherently unreliable and uncertain. Specifically, network failures, software bugs and competition for services may regularly lead to execution delays or even service failures. To address this problem, the process of provisioning services needs to be performed more flexibly than has so far been considered, in order to deal proactively with failures and to recover workflows that have partially failed. To this end, we devise and present a heuristic strategy that varies the provisioning of services according to their predicted performance. Using simulation, we then benchmark our algorithm and show that it leads to a 700% improvement in average utility, while successfully completing up to eight times as many workflows as approaches that do not consider service failures.
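
    A hedged sketch of the general idea, not the paper's actual strategy or utility model (the tasks and per-task failure probabilities below are invented): provision redundant providers for each workflow task until the predicted probability that at least one succeeds reaches a target, so that less reliable services receive more redundancy.

```python
def providers_needed(p_fail, target=0.99, max_n=10):
    """Smallest n with 1 - p_fail**n >= target, assuming independent failures."""
    for n in range(1, max_n + 1):
        if 1 - p_fail ** n >= target:
            return n
    return max_n

# invented failure predictions for a three-task workflow
workflow = {"ocr": 0.30, "translate": 0.10, "summarise": 0.05}
plan = {task: providers_needed(p) for task, p in workflow.items()}
print(plan)   # {'ocr': 4, 'translate': 2, 'summarise': 2}
```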

    White Paper 11: Artificial intelligence, robotics & data science

    This CSIC white paper on Artificial Intelligence, Robotics and Data Science sketches a preliminary roadmap for addressing current R&D challenges associated with automated and autonomous machines. More than 50 research challenges investigated all over Spain by more than 150 experts within CSIC are presented in eight chapters. Chapter One introduces key concepts and tackles the issue of integrating knowledge (representation), reasoning and learning in the design of artificial entities. Chapter Two analyses challenges associated with the development of theories, and supporting technologies, for modelling the behaviour of autonomous agents; specifically, it pays attention to the interplay between elements at the micro level (individual autonomous agent interactions) and the macro world (the properties we seek in large and complex societies). Chapter Three discusses the variety of data science applications currently used in all fields of science, paying particular attention to Machine Learning (ML) techniques, while Chapter Four presents current developments in various areas of robotics. Chapter Five explores the challenges associated with computational cognitive models. Chapter Six pays attention to the ethical, legal, economic and social challenges that come alongside the development of smart systems. Chapter Seven engages with the problem of the environmental sustainability of deploying intelligent systems at large scale. Finally, Chapter Eight deals with the complexity of ensuring the security, safety, resilience and privacy protection of smart systems against cyber threats.
    Contents:
    - Executive Summary. Topic Coordinators: Sara Degli Esposti (IPP-CCHS, CSIC) and Carles Sierra (IIIA, CSIC)
    - Challenge 1: Integrating Knowledge, Reasoning and Learning. Coordinators: Felip Manyà (IIIA, CSIC) and Adrià Colomé (IRI, CSIC – UPC)
    - Challenge 2: Multiagent Systems. Coordinators: N. Osman (IIIA, CSIC) and D. López (IFS, CSIC)
    - Challenge 3: Machine Learning and Data Science. Coordinators: J. J. Ramasco Sukia (IFISC) and L. Lloret Iglesias (IFCA, CSIC)
    - Challenge 4: Intelligent Robotics. Coordinators: G. Alenyà (IRI, CSIC – UPC) and J. Villagra (CAR, CSIC)
    - Challenge 5: Computational Cognitive Models. Coordinators: M. D. del Castillo (CAR, CSIC) and M. Schorlemmer (IIIA, CSIC)
    - Challenge 6: Ethical, Legal, Economic, and Social Implications. Coordinators: P. Noriega (IIIA, CSIC) and T. Ausín (IFS, CSIC)
    - Challenge 7: Low-Power Sustainable Hardware for AI. Coordinators: T. Serrano (IMSE-CNM, CSIC – US) and A. Oyanguren (IFIC, CSIC - UV)
    - Challenge 8: Smart Cybersecurity. Coordinators: D. Arroyo Guardeño (ITEFI, CSIC) and P. Brox Jiménez (IMSE-CNM, CSIC – US)

    Organization based multiagent architecture for distributed environments

    Distributed environments represent a complex field in which applied solutions should be flexible and include significant adaptation capabilities. These environments are related to problems where multiple users and devices may interact, and where simple and local solutions could possibly generate good results, but may not be effective with regard to use and interaction. There are many techniques that can be employed to face this kind of problem, from CORBA to multi-agent systems, passing through web services and SOA, among others. All those methodologies have their advantages and disadvantages, which are properly analyzed in this document in order to finally explain the new architecture presented as a solution for distributed environment problems. The new architecture for solving complex problems in distributed environments presented here is called OBaMADE: Organization Based Multiagent Architecture for Distributed Environments. It is a multiagent architecture based on the organizations-of-agents paradigm, where the agents in the architecture are structured into organizations to improve their organizational capabilities. The reasoning power of the architecture is based on the Case-Based Reasoning methodology, implemented in an internal organization that uses agents to create services that answer the external requests made by the users. The OBaMADE architecture has been successfully applied to two different case studies where its prediction capabilities have been properly checked. Those case studies have shown promising results and, being complex systems, have demonstrated the abstraction and generalization capabilities of the architecture. Nevertheless, OBaMADE is intended to be able to solve many other kinds of problems in distributed environment scenarios. It should be applied to other varieties of situations and to other knowledge fields to fully develop its potential.
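
    For readers unfamiliar with the methodology, here is a minimal sketch of the retrieve-and-reuse steps of a Case-Based Reasoning cycle. It is illustrative only: the case features, the Euclidean distance measure and the averaging rule are invented for the example, not taken from OBaMADE.

```python
import math

case_base = [  # (problem features, known solution)
    ({"temp": 21.0, "load": 0.3}, 120.0),
    ({"temp": 25.0, "load": 0.8}, 310.0),
    ({"temp": 18.0, "load": 0.5}, 180.0),
]

def distance(a, b):
    """Euclidean distance over shared numeric features."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def retrieve_and_reuse(query, k=2):
    ranked = sorted(case_base, key=lambda c: distance(query, c[0]))
    nearest = ranked[:k]                    # retrieve the k closest cases
    return sum(s for _, s in nearest) / k   # reuse: average their solutions

print(retrieve_and_reuse({"temp": 22.0, "load": 0.6}))   # 215.0
```

    A full CBR cycle would also revise the proposed solution and retain the new case; in OBaMADE these steps are carried out by agents inside a dedicated internal organization.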

    Dagstuhl News January - December 2007

    "Dagstuhl News" is a publication edited especially for the members of the Foundation "Informatikzentrum Schloss Dagstuhl" to thank them for their support. The News give a summary of the scientific work being done in Dagstuhl. Each Dagstuhl Seminar is presented by a small abstract describing the contents and scientific highlights of the seminar as well as the perspectives or challenges of the research topic

    Agents in bioinformatics, computational and systems biology

    The adoption of agent technologies and multi-agent systems constitutes an emerging area in bioinformatics. In this article, we report on the activity of the Working Group on Agents in Bioinformatics (BIOAGENTS), founded during the first AgentLink III Technical Forum meeting on 2 July 2004 in Rome. The meeting provided an opportunity for seeding collaborations between the agent and bioinformatics communities to develop a different (agent-based) approach to computational frameworks, both for data analysis and management in bioinformatics and for systems modelling and simulation in computational and systems biology. The collaborations gave rise to applications and integrated tools that we summarize and discuss in the context of the state of the art in this area. We investigate future challenges and argue that the field should still be explored from many perspectives, ranging from bio-conceptual languages for agent-based simulation, to the definition of bio-ontology-based declarative languages to be used by information agents, to the adoption of agents for computational grids.

    Regionally distributed architecture for dynamic e-learning environment (RDADeLE)

    e-Learning is playing an influential role as an economical and flexible mode of study in institutions of higher education today, with a presence in an increasing number of college and university courses. e-Learning, as a system of systems, is a dynamic and scalable environment. Within this environment, e-learning is still searching for a permanent, comfortable and serviceable position in which it is controlled, managed, flexible, accessible and continually up to date within the wider university structure. As most academic and business institutions and training centres around the world have adopted the e-learning concept and technology in order to create, deliver and manage their learning materials through the web, it has become the focus of investigation. However, management, monitoring and collaboration between these institutions and centres are limited. Existing technologies such as grids, web services and agents promise better results. In this research a new architecture has been developed and adopted to make the e-learning environment more dynamic and scalable by dividing it into regional data grids which are managed and monitored by agents. Multi-agent technology has been applied to integrate each regional data grid with the others in order to produce an architecture which is more scalable, reliable and efficient. We refer to the result as the Regionally Distributed Architecture for Dynamic e-Learning Environment (RDADeLE). Our RDADeLE architecture is an agent-based grid environment composed of components such as learners, staff, nodes, regional grids, grid services and Learning Objects (LOs). These components are built and organised as a multi-agent system (MAS) using the Java Agent DEvelopment framework (JADE). The main role of the agents in our architecture is to control and monitor grid components in order to build an adaptable, extensible and flexible grid-based e-learning system. Two techniques have been developed and adopted in the architecture to build LOs' information and grid services. The first is the XML-based Registries Technique (XRT), in which LOs' information is built using XML registries, written in Dublin Core Metadata Initiative (DCMI) format, to be discovered by the learners. The second is the Registered-based Services Technique (RST), in which the services are grid services built using agents and registered with the Directory Facilitator (DF) of a JADE platform in order to be discovered by all other components. All components of the RDADeLE system, including grid services, are built as a multi-agent system (MAS). In the first technique each regional grid has only its own registry, whereas in the second technique the grid services of all regional grids have to be registered with the DF. We have evaluated the RDADeLE system under both techniques by building a simulation of the prototype. The prototype has a main interface which consists of the name of the system (RDADeLE) and a specification table comprising the number of regional grids, the number of nodes, the maximum number of learners connected to each node, and the number of grid services, to be filled in by the administrator of the RDADeLE system in order to create the prototype. Using the RST technique shows that the RDADeLE system can be built with more regional grids and less memory consumption. Moreover, with the RST technique more grid services can be registered in the RDADeLE system with a lower average search time, and search performance is improved compared with the XRT technique. Finally, using one or both techniques, the XRT or the RST, in the prototype does not affect the reliability of the RDADeLE system. Royal Commission for Jubail and Yanbu - Directorate General for Jubail Project, Kingdom of Saudi Arabia
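
    As an illustration of the XRT side, the snippet below builds a learning object's registry entry as Dublin Core XML, the kind of record learners would discover. This is a sketch under our own assumptions: the helper name lo_record and all field values are invented, and the thesis's actual registry schema is not reproduced.

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

def lo_record(identifier, title, subject, fmt):
    """Build one learning-object record with Dublin Core elements."""
    rec = ET.Element("record")
    for tag, value in [("identifier", identifier), ("title", title),
                       ("subject", subject), ("format", fmt)]:
        el = ET.SubElement(rec, f"{{{DC}}}{tag}")
        el.text = value
    return rec

rec = lo_record("lo-0042", "Intro to Multi-Agent Systems",
                "Computer Science", "text/html")
print(ET.tostring(rec, encoding="unicode"))
```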