
    Three Highly Parallel Computer Architectures and Their Suitability for Three Representative Artificial Intelligence Problems

    Virtually all current Artificial Intelligence (AI) applications are designed to run on sequential (von Neumann) computer architectures. As a result, current systems do not scale up: as knowledge is added to these systems, a point is reached where their performance degrades quickly. The performance of a von Neumann machine is limited by the bandwidth between memory and processor (the von Neumann bottleneck). The bottleneck can be avoided by distributing the processing power across the memory of the computer; in this scheme the memory effectively becomes the processor (a "smart memory"). This paper highlights the relationship between three representative AI application domains, namely knowledge representation, rule-based expert systems, and vision, and their parallel hardware realizations. Three machines, covering a wide range of fundamental properties of parallel processors, namely module granularity, concurrency control, and communication geometry, are reviewed: the Connection Machine (a fine-grained SIMD hypercube), DADO (a medium-grained MIMD/SIMD/MSIMD tree machine), and the Butterfly (a coarse-grained MIMD butterfly-switch machine).
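The hypercube communication geometry of a machine like the Connection Machine has a compact arithmetic description: two nodes are neighbours exactly when their binary addresses differ in a single bit. A minimal sketch (illustrative, not taken from the paper):

```python
def hypercube_neighbors(node: int, dim: int) -> list[int]:
    """Neighbors of `node` in a `dim`-dimensional hypercube:
    flip one address bit per dimension."""
    return [node ^ (1 << k) for k in range(dim)]

# In a 4-dimensional hypercube (16 nodes), node 0 has 4 neighbors.
print(hypercube_neighbors(0, 4))  # [1, 2, 4, 8]
```

Each node thus has exactly `dim` neighbours, and any two of the 2^dim nodes are at most `dim` hops apart, which is what makes the geometry attractive for fine-grained message routing.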

    Semantic In-Network Complex Event Processing for an Energy Efficient Wireless Sensor Network

    Wireless Sensor Networks (WSNs) consist of spatially distributed sensor nodes that perform monitoring tasks in a region and gateway nodes that provide the acquired sensor data to the end user. With advances in WSN technology, it has now become possible to have different types of sensor nodes within a region, which provides the flexibility to monitor the environment more extensively than before. Sensor nodes are severely constrained devices with very limited battery sources, and their resource scarcity remains a challenge. In traditional WSNs, the sensor nodes are used only for capturing data that is analysed later in the more powerful gateway nodes. This continuous communication of data between sensor nodes and gateway nodes wastes energy at the sensor nodes and, consequently, greatly reduces the overall network lifetime. Existing approaches that reduce energy consumption by processing at the sensor-node level only work for homogeneous networks. This thesis presents a sensor node architecture for heterogeneous WSNs, called SEPSen, where data is processed locally at the sensor-node level to reduce energy consumption. We use ontology fragments at the sensor nodes to enable data exchange between heterogeneous sensor nodes within the WSN. We employ a rule engine based on a pattern-matching algorithm for filtering events at the sensor-node level. Event routing towards the gateway nodes is performed using a context-aware routing scheme that takes both the energy consumption and the heterogeneity of the sensor nodes into account. As a proof of concept, we present a prototypical implementation of the SEPSen design in a simulation environment.
By providing semantic support, in-network data processing capabilities and context-aware routing in SEPSen, the sensor nodes (1) communicate with each other despite their different sensor types, (2) filter events at their own level to conserve the limited sensor node energy resources, and (3) share the nodes' knowledge bases for collaboration between the sensor nodes using node-centric context-awareness in changing conditions. The SEPSen prototype has been evaluated on a test case for water quality management. The results from the experiments show that processing events at the sensor-node level saves almost 50% of the energy, and the overall network lifetime is increased by at least a factor of two compared with the shortest-path-first (Min-Hop) routing approach.
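The in-network event filtering described above can be pictured as a small pattern-matching rule engine. The sketch below is purely illustrative: the rule format, attribute names and the water-quality threshold are invented, not taken from the SEPSen design.

```python
# Hypothetical in-network event filter: rules constrain event attributes
# with predicates; only matching events are forwarded toward the gateway.
def match(rule, event):
    """A rule fires when every constrained attribute satisfies its predicate."""
    return all(pred(event.get(attr)) for attr, pred in rule["conditions"].items())

def filter_events(rules, events):
    """Forward events that match at least one rule; drop the rest locally,
    saving the radio transmissions that dominate sensor-node energy use."""
    return [e for e in events if any(match(r, e) for r in rules)]

# Invented example rule: report pH readings outside a "safe" band.
rules = [{"conditions": {"type": lambda t: t == "pH",
                         "value": lambda v: v is not None and (v < 6.5 or v > 8.5)}}]
events = [{"type": "pH", "value": 9.1},   # out of range -> forwarded
          {"type": "pH", "value": 7.0},   # normal -> dropped locally
          {"type": "temp", "value": 40}]  # no matching rule -> dropped
print(filter_events(rules, events))  # only the pH=9.1 event survives
```

Dropping two of three readings at the node, as in this toy run, is the mechanism behind the energy savings reported above.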

    Architectural Model for Evaluating Space Communication Networks

    The space exploration endeavor started in 1957 with the launch and operation of the first man-made satellite, the USSR's Sputnik 1. Since then, multiple space programs have been developed, pushing the limits of technology and science and, above all, unveiling the mysteries of the universe. In all these cases, the need for flexible and reliable communication systems has been paramount, allowing the return of collected science data and, when necessary, ensuring the well-being and safety of astronauts. To that end, multiple space communication networks have been deployed globally, be it through geographically distributed ground assets or through space relay satellites. Until now, most of these systems have relied upon mature technology standards that have been adapted to the specific needs of particular missions and customers. Nevertheless, current trends in space programs suggest that a shift of paradigm is needed: an Internet-like space network would increase the capacity and reliability of an interplanetary network while dramatically reducing its overall costs. In this context, the System Architecting Paradigm can be a good starting point. Through its formal decomposition of the system, it can help determine the architecturally distinguishing decisions and identify potential areas of commonality and cost reduction. This thesis presents a general framework to evaluate space communication relay systems for the near-Earth domain. It indicates the sources of complexity in the modeling process and discusses the validity and appropriateness of past approaches to the problem. In particular, it proposes a discussion of current models vis-à-vis the System Architecting Paradigm and how they fit into tradespace exploration studies. Next, the thesis introduces a computational performance model for the analysis and fast simulation of space relay satellite systems.
The tool takes advantage of a purpose-built rule-based expert system for storing the constitutive elements of the architecture and performing logical interactions between them. Analogously, it uses numerical models to assess the network topology over a given timeframe, perform physical-layer computations and calculate plausible schedules for the overall system. In particular, it presents a newly developed heuristic scheduler that guarantees prioritization of specific missions and services while ensuring manageable computational times.
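A priority-guaranteeing heuristic scheduler of the kind described can be illustrated with a greedy sketch. The single-antenna assumption and the request format below are invented for illustration; this is not the thesis's algorithm.

```python
# Greedy contact scheduling sketch: higher-priority missions are placed
# first; a contact is accepted only if its time window does not overlap
# an already-booked one on the single (assumed) ground antenna.
def schedule(requests):
    """requests: list of (priority, start, end, mission); lower number = higher priority.
    Returns the accepted missions in chronological order."""
    booked = []
    for prio, start, end, mission in sorted(requests):
        if all(end <= s or start >= e for _, s, e, _ in booked):
            booked.append((prio, start, end, mission))
    return [m for _, _, _, m in sorted(booked, key=lambda c: c[1])]

reqs = [(2, 0, 10, "demo-sat"), (1, 5, 15, "crewed"), (3, 16, 20, "cubesat")]
print(schedule(reqs))  # ['crewed', 'cubesat']: 'crewed' wins the overlap
```

Because requests are consumed in priority order, a high-priority mission can never be displaced by a lower-priority one, which is the prioritization guarantee the abstract refers to, while the single greedy pass keeps the runtime near-linear in the number of requests.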

    Optimisation and Decision Support during the Conceptual Stage of Building Design

    Modern building design is complex and involves many different disciplines operating in a fragmented manner. Appropriate computer-based decision support (DS) tools are sought that can raise the level of integration of different activities at the conceptual stage, in order to help create better design solutions. This project investigates opportunities that exist for using techniques based upon the Genetic Algorithm (GA) to support critical activities of conceptual building design (CBD). Collective independent studies have shown that the GA is a powerful optimisation and exploratory search technique with widespread application. The GA is essentially very simple, yet it offers robustness and domain independence. The GA efficiently searches a domain to exploit highly suitable information. It maintains multiple solutions to problems simultaneously and is well suited to non-linear problems and those of a discontinuous nature found in engineering design. The literature search first examines traditional approaches to supporting conceptual design. Existing GA techniques and applications are discussed, including pioneering studies in the field of detailed structural design. Broader GA studies are also reported which have demonstrated possibilities for investigating geometrical, topological and member-size variation. The tasks and goals of conceptual design are studied. A rationale is introduced, aimed at enabling the GA to be applied in a manner that provides the most effective support to the designer. Numerical experiments with floor planning are presented. These studies provide a basic foundation for a subsequent design support system (DSS) capable of generating structural design concepts. A hierarchical Structured GA (SGA) created by Dasgupta et al [1] is investigated to support the generation of diverse structural design concepts.
The SGA supports variation in the size, shape and structural configuration of a building and in the choice of structural frame type and floor system. The benefits and limitations of the SGA approach are discussed. The creation of a prototype DSS, arbitrarily named Designer-Pro (DPRO), is described. A detailed building design model is introduced, which is required for design development and appraisal. Simplifications, design rationale and generic component modelling are mentioned. A cost-based single-criterion optimisation problem (SCOP) is created in which other constraints are represented as design parameters. The thesis describes the importance of the object-oriented programming (OOP) paradigm for creating a versatile design model and the need for complementary graphical user interface (GUI) tools to provide human-computer interaction (HCI) capabilities for control and intelligent design manipulation. Techniques that increase flexibility in the generation and appraisal of concepts are presented. Tools presented include a convergence plot of design solutions that supports cursor interrogation to reveal the details of individual concepts. The graph permits study of design progression, or the evolution of optimum design solutions. A visualisation tool is also presented. The DPRO system supports multiple operating modes, including single-design appraisal and enumerative search (ES). Case study examples are provided which demonstrate the applicability of the DPRO system to a range of different design scenarios. The DPRO system performs well in all tests. A parametric study demonstrates the potential of the system for DS. Limitations of the current approach and opportunities to broaden the study form part of the scope for further work. Some suggestions for further study are made, based upon newly emerging techniques.
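The GA machinery underlying such a system can be sketched in miniature. The encoding, cost function and parameters below are illustrative toys, not the thesis's building-design model or the SGA itself.

```python
import random

# Minimal real-coded GA sketch: truncation selection, uniform crossover,
# bounded gaussian mutation. All parameters are arbitrary illustrations.
def evolve(cost, bounds, pop_size=30, gens=60, mut=0.2, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [rng.choice(g) for g in zip(a, b)]    # uniform crossover
            for i, (lo, hi) in enumerate(bounds):         # gaussian mutation
                if rng.random() < mut:
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

# Toy "floor plan": choose width and depth minimising wall length for a
# required 100 m^2 area (the area constraint is a soft penalty term).
cost = lambda x: 2 * (x[0] + x[1]) + abs(x[0] * x[1] - 100)
best = evolve(cost, [(1, 30), (1, 30)])
print(best)  # should approach the square 10 m x 10 m optimum
```

Because the top half of each generation is carried over unchanged, the best solution never worsens, which mirrors the convergence-plot behaviour the abstract describes for DPRO.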

    "A KNOWLEDGE-BASED EXPERT SYSTEM IN BIOINFORMATICS: AN APPLICATION TO REVERSE ENGINEERING GENE REGULATORY NETWORK"

    The huge amount of biological data has spurred the development of a plethora of bioinformatics tools, databases and web services. A computational biology problem rarely admits a single approach; different methodologies and strategies, each with its own pros and cons, can be applied. In this PhD thesis I present a knowledge-based expert system that aims at helping a bioinformatics researcher choose the proper strategy and heuristic to resolve a bioinformatics issue. The Knowledge Base (KB) of the system is structured by means of an ontology and encodes the expertise about the application domain. The KB is organized into decision-making modules that introduce a set of metareasoning levels. The proposed expert system is the core reasoning component of the BORIS (Bioinformatics Organized Resources - an Intelligent System) framework, a research project of the High Performance Computing and Networking Institute of the National Research Council (ICAR-CNR). BORIS, based on a hybrid architecture, can be seen as a crossover between a Decision Support System and a Workflow Management System: it not only provides decision support, but also helps the user properly configure and run the algorithms, tools and services implementing the suggested strategies and, at the same time, builds a workflow that traces both the decision-making activity and the execution of tasks and tools. The whole system will be applied to an actual case study: the reverse engineering of a Gene Regulatory Network.
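The idea of a knowledge base organized into decision-making modules can be sketched as a first-match rule lookup. Module names, rule conditions and recommendations below are invented for illustration and do not reproduce the BORIS knowledge base.

```python
# Hypothetical KB: each module holds (condition, recommendation) rules,
# checked in order of specificity; the first match is the advice given.
KB = {
    "task": [
        (lambda q: q["goal"] == "network-inference" and q["samples"] < 50,
         "prefer information-theoretic methods (few samples available)"),
        (lambda q: q["goal"] == "network-inference",
         "consider ODE-based dynamical models"),
    ],
}

def advise(query, module="task"):
    """Return the first recommendation whose condition matches the query."""
    for condition, recommendation in KB[module]:
        if condition(query):
            return recommendation
    return "no applicable strategy found"

print(advise({"goal": "network-inference", "samples": 20}))
print(advise({"goal": "network-inference", "samples": 500}))
```

Stacking such modules, where one module's recommendation selects which module is consulted next, is one simple way to realise the "metareasoning levels" mentioned above.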

    Second CLIPS Conference Proceedings, volume 2

    Papers presented at the 2nd C Language Integrated Production System (CLIPS) Conference, held at the Lyndon B. Johnson Space Center (JSC) on 23-25 September 1991, are documented in these proceedings. CLIPS is an expert system tool developed by the Software Technology Branch at NASA JSC and is used at over 4000 sites by government, industry, and business. During the three days of the conference, over 40 papers were presented by experts from NASA, the Department of Defense, other government agencies, universities, and industry.

    Combined Networking and Control Strategies for Smart Micro Grids: Analysis, Co-simulation and Performance Assessment.

    The constantly increasing number of power generation devices based on renewables calls for a transition from the centralized control of electrical distribution grids to a distributed control scenario. In this context, distributed generators are exploited to achieve objectives beyond supporting loads, such as minimizing the power losses along the distribution lines, sustaining the electrical network when operated in islanded mode (i.e., when the energy flow from the main energy supplier is not available), and shaving power peaks. To fulfill these goals, optimized techniques are needed for managing the electrical behavior of the distributed generators (i.e., the amount of active and reactive power they inject into the grid at any given time). To dispatch information about the actual state of the network agents, these techniques rely on smart metering devices (measuring instantaneous electrical quantities such as active and reactive power, load impedance and load voltage) and on a communication infrastructure (e.g., Power Line Communication, PLC) that interconnects the smart-grid agents and allows the measured quantities to be exchanged. Moreover, suitable communication protocols supporting transmission channel access and data routing are needed. In this doctoral thesis, first, a full-fledged system that extends existing state-of-the-art algorithms for the distributed minimization of power losses in smart micro grids is presented. Practical aspects are taken into account, such as the design of a communication and coordination protocol that is resilient to link failures and manages channel access, message delivery and distributed-generator coordination. Design rules are provided for the networking strategies that best fit the selected optimization approaches.
Finally, in the presence of lossy communication links, the impact of communication and electrical grid features is assessed. Specifically, communication failures, the scheduling order for the distributed control, line-impedance estimation error, network size and the number of distributed generators are considered as major issues. Next, it is shown that the convergence rate of the optimization algorithms implemented in the aforementioned system can be improved by suitably scheduling the order in which the smart-grid agents are activated. For stability purposes, a token-ring approach is often implemented for the control, where at any given time a single node with communication and control capabilities (referred to as a "smart node") holds the token and is the only node in charge of implementing the control action entailed by the algorithms (i.e., power injection). It is shown that the token-ring approach does not always ensure the fastest convergence rate. To improve the convergence rate of the selected optimization techniques, optimality criteria are defined and a lightweight, distributed, heuristic (suboptimal) scheduling algorithm is designed. Another important aspect considered in this thesis is power demand peak shaving. Algorithms that exploit the distributed energy sources and rely on the smart-grid communication infrastructure to level out peaks in the electrical power demand can greatly reduce the workload of the main energy supplier, thus preventing unexpected hardware failures and blackouts. Leveling out the power demand peaks is even more important for smart grids operating in islanded mode, since avoiding power demand peaks can substantially improve the self-sustainability of the electrical grid.
To this end, a lightweight and effective approach is designed for managing prosumer communities through the synergistic control of the power electronic converters acting therein. An islanded operating mode is considered, and the control strategy aims at leveling peaks in the energy drained from, or injected into, the connection point with the main power supplier. All the aforementioned techniques rely on distributed generators (whose energy comes from renewable sources) to contribute to the overall electrical efficiency of the grid. In a real-world setting, such control actions will however depend on market models and on the revenue (monetary income) that the final users accrue through energy trading with other users and with the smart-grid operator. For this reason, an optimized market model is designed that accounts for electrical-efficiency constraints along with the supply-demand rule. Novel market rules are designed to provide economic benefits to all the smart-grid players (i.e., the users and the grid operator), while also driving the power grid toward a satisfactory solution in terms of electrical performance. To the best of the author's knowledge, a general framework for studying the interaction between power-grid optimization algorithms (electrical performance) and energy pricing and trading strategies (revenue) is not yet available in the related scientific literature.
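Peak shaving with a storage element, as discussed above, can be sketched as a simple threshold controller. The single-battery model, the threshold and all figures below are illustrative assumptions, not the thesis's converter control strategy.

```python
# Threshold-based peak-shaving sketch: storage discharges when demand
# exceeds the threshold and recharges below it, flattening the profile
# seen by the main energy supplier.
def peak_shave(demand, threshold, capacity):
    """Return the power profile drawn from the main supplier, per time step.
    Storage starts half full; units are arbitrary."""
    charge, supplied = capacity / 2, []
    for d in demand:
        if d > threshold:                       # shave the peak from storage
            delta = min(d - threshold, charge)
            charge -= delta
            supplied.append(d - delta)
        else:                                   # use slack demand to refill
            delta = min(threshold - d, capacity - charge)
            charge += delta
            supplied.append(d + delta)
    return supplied

profile = [3, 9, 10, 4, 2]
print(peak_shave(profile, threshold=6, capacity=8))  # [6, 6, 6, 6, 6]
```

In this toy run the supplier sees a flat profile at the threshold, which is exactly the workload reduction and blackout avoidance motivated above; with insufficient capacity the residual peaks would only be partially shaved.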