
    From raw data to agent perceptions for simulation, verification, and monitoring

    In this paper we present a practical solution to the problem of connecting “real world” data exchanged between sensors and actuators with the higher level of abstraction used in frameworks for multiagent systems. In particular, we show how to connect an industry-standard publish-subscribe communication protocol for embedded systems called MQTT with two Belief-Desire-Intention agent modelling and programming languages: Jason/AgentSpeak and Brahms. In the paper we describe the details of our Java implementation and we release all of the code as open source
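
    As an illustration of the kind of bridge described above, the following minimal Java sketch (assuming the Eclipse Paho MQTT client; the topic names, percept format, and bridge class are illustrative only, not the authors' released code) turns raw MQTT payloads into percept-style literals that a BDI environment such as Jason could take up via its addPercept mechanism:

        import org.eclipse.paho.client.mqttv3.MqttClient;
        import org.eclipse.paho.client.mqttv3.MqttException;

        // Subscribes to raw sensor topics and rewrites payloads as percept literals.
        public class MqttPerceptBridge {
            public static void main(String[] args) throws MqttException {
                MqttClient client = new MqttClient("tcp://localhost:1883", "percept-bridge");
                client.connect();
                // e.g. payload "23.5" published on topic "sensors/room1/temperature"
                client.subscribe("sensors/#", (topic, message) -> {
                    String value = new String(message.getPayload());
                    String source = topic.replace('/', '_');
                    // In Jason, a literal like this would be handed to Environment.addPercept(...)
                    String percept = "reading(" + source + "," + value + ")";
                    System.out.println("new percept: " + percept);
                });
            }
        }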

    Agoric computation: trust and cyber-physical systems

    In the past two decades advances in miniaturisation and economies of scale have led to the emergence of billions of connected components that have provided both a spur and a blueprint for the development of smart products acting in specialised environments which are uniquely identifiable, localisable, and capable of autonomy. Adopting the computational perspective of multi-agent systems (MAS) as a technological abstraction married with the engineering perspective of cyber-physical systems (CPS) has provided fertile ground for designing, developing and deploying software applications in smart automated contexts such as manufacturing, power grids, avionics, healthcare and logistics, capable of being decentralised, intelligent, reconfigurable, modular, flexible, robust, adaptive and responsive. Current agent technologies are, however, ill suited for information-based environments, making it difficult to formalise and implement multiagent systems based on inherently dynamical functional concepts such as trust and reliability, which present special challenges when scaling from small to large systems of agents. To overcome such challenges, it is useful to adopt a unified approach which we term agoric computation, integrating logical, mathematical and programming concepts towards the development of agent-based solutions based on recursive, compositional principles, where smaller systems feed via directed information flows into larger hierarchical systems that define their global environment. Considering information as an integral part of the environment naturally defines a web of operations where components of a system are wired together in some way and each set of inputs and outputs is allowed to carry some value. These operations are stateless abstractions and procedures that act on stateful cells that accumulate partial information, and it is possible to compose such abstractions into higher-level ones, using a publish-and-subscribe interaction model that keeps track of update messages between abstractions and values in the data. In this thesis we review the logical and mathematical basis of such abstractions and take steps towards the software implementation of agoric modelling as a framework for simulation and verification of the reliability of increasingly complex systems, and report on experimental results related to a few select applications, such as stigmergic interaction in mobile robotics, integrating raw data into agent perceptions, trust and trustworthiness in orchestrated open systems, computing the epistemic cost of trust when reasoning in networks of agents seeded with contradictory information, and trust models for distributed ledgers in the Internet of Things (IoT); and we provide a roadmap for future developments of our research
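
    The publish-and-subscribe picture of stateless abstractions acting on stateful cells can be rendered as a very small Java sketch. The merge-by-maximum policy and the min-composition rule below are chosen purely for illustration; they are not the thesis's actual formalism:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.function.BinaryOperator;

        // A stateful cell that accumulates partial information and publishes updates
        // to subscribed operations.
        class Cell<T> {
            private T value;
            private final BinaryOperator<T> merge;              // how partial info is combined
            private final List<Runnable> subscribers = new ArrayList<>();

            Cell(T initial, BinaryOperator<T> merge) { this.value = initial; this.merge = merge; }

            T get() { return value; }
            void subscribe(Runnable onUpdate) { subscribers.add(onUpdate); }

            void publish(T partial) {                           // merge new info, notify subscribers
                value = merge.apply(value, partial);
                subscribers.forEach(Runnable::run);
            }
        }

        public class AgoricSketch {
            public static void main(String[] args) {
                Cell<Double> trustA = new Cell<>(0.0, Math::max);
                Cell<Double> trustB = new Cell<>(0.0, Math::max);
                Cell<Double> systemTrust = new Cell<>(0.0, Math::max);
                // Stateless abstraction wiring two lower-level cells into a higher-level one.
                Runnable compose = () -> systemTrust.publish(Math.min(trustA.get(), trustB.get()));
                trustA.subscribe(compose);
                trustB.subscribe(compose);
                trustA.publish(0.8);
                trustB.publish(0.6);
                System.out.println("composed trust: " + systemTrust.get()); // 0.6
            }
        }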

    A Human-Centric Approach to Group-Based Context-Awareness

    The emerging need for qualitative approaches in context-aware information processing calls for proper modeling of context information and efficient handling of its inherent uncertainty resulting from human interpretation and usage. Many of the current approaches to context-awareness either lack a solid theoretical basis for modeling or ignore important requirements such as modularity, high-order uncertainty management and group-based context-awareness. Therefore, their real-world application and extendability remain limited. In this paper, we present f-Context, a service-based context-awareness framework based on language-action perspective (LAP) theory for modeling. We then identify some of the complex, informational parts of context which contain high-order uncertainties due to differences between members of the group in defining them. An agent-based perceptual computer architecture is proposed for implementing f-Context that uses computing with words (CWW) for handling uncertainty. The feasibility of f-Context is analyzed using a realistic scenario involving a group of mobile users. We believe that the proposed approach can open the door to future research on context-awareness by offering a theoretical foundation based on human communication, and a service-based layered architecture which exploits CWW for context-aware, group-based and platform-independent access to information systems
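
    A toy rendering of the computing-with-words step: each group member's word is encoded as a numeric interval, the intervals are aggregated, and the result is decoded back to the nearest word. The vocabulary, intervals, and nearest-centre decoding below are invented for illustration and are not f-Context's actual fuzzy-set machinery:

        import java.util.List;
        import java.util.Map;

        // Encoder + aggregation + decoder over a made-up vocabulary of context words.
        public class GroupCww {
            static final Map<String, double[]> VOCAB = Map.of(
                    "quiet",    new double[]{0.0, 0.3},
                    "moderate", new double[]{0.3, 0.7},
                    "noisy",    new double[]{0.7, 1.0});

            public static void main(String[] args) {
                List<String> groupOpinions = List.of("quiet", "moderate", "moderate");
                double lo = 0, hi = 0;
                for (String w : groupOpinions) {                 // encode and average intervals
                    lo += VOCAB.get(w)[0] / groupOpinions.size();
                    hi += VOCAB.get(w)[1] / groupOpinions.size();
                }
                double mid = (lo + hi) / 2;                      // decode: nearest word by interval centre
                String decoded = VOCAB.entrySet().stream()
                        .min((a, b) -> Double.compare(
                                Math.abs((a.getValue()[0] + a.getValue()[1]) / 2 - mid),
                                Math.abs((b.getValue()[0] + b.getValue()[1]) / 2 - mid)))
                        .get().getKey();
                System.out.println("group context: " + decoded);  // prints "moderate"
            }
        }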

    Multi-agent system for flood forecasting in Tropical River Basin

    It is well known that the problems related to the generation of floods, and to their control and management, have been treated with traditional hydrologic modeling tools focused on the study and analysis of the precipitation-runoff relationship, a physical process which is driven by the hydrological cycle and the climate regime and which is directly related to the generation of floodwaters. Within the hydrological discipline, these traditional modeling tools are classified into three principal groups: the first group comprises empirical models (the so-called "black-box models"); the second group comprises conceptual models, which are categorized into three main sub-groups, "lumped", "semi-lumped" and "semi-distributed", according to their spatial distribution; and the third group comprises models based on physical processes, known as "white-box models" or "distributed models". On the other hand, in engineering applications there are two types of models used in streamflow forecasting, classified according to the type of measurements and variables they require: "physically based models" and "data-driven models". Physically based models provide an in-depth account of the dynamics of the physical processes that occur internally among the different systems of a given hydrographic basin. However, aside from being laborious to implement, they rely heavily on mathematical algorithms, and an understanding of these interactions requires the abstraction of mathematical concepts and the conceptualization of the physical processes that are intertwined among these systems. Data-driven models, by contrast, do not require an a-priori understanding of the physical laws controlling the process within the system; they rely instead on empirical formulations, which require large amounts of numerical information and calibration against field data. The two families therefore differ markedly in their data requirements and in how they represent physical phenomena. Although there has been considerable progress in hydrologic modeling for flood forecasting, several significant setbacks remain unresolved: given the stochastic nature of hydrological phenomena, it is a challenge to implement user-friendly, re-usable, robust, and reliable forecasting systems, and such systems must cope with a large amount of uncertainty when trying to solve the flood forecasting problem. In the past decades, however, with the growth and development of the artificial intelligence (AI) field, some researchers have attempted to deal with the stochastic nature of hydrologic events through the application of these techniques. Given the setbacks to hydrologic flood forecasting described above, this thesis research aims to integrate physics-based hydrologic, hydraulic, and data-driven models under the paradigm of multi-agent systems for flood forecasting, by designing and developing a multi-agent system (MAS) framework for flood forecasting events within the scope of tropical watersheds. With the emergence of agent technologies, "agent-based modeling" and "multi-agent systems" simulation methods have been applied to several areas of water management, such as flood protection, planning, control, management, mitigation, and forecasting, to combat the shocks produced by floods on society; however, this work has focused mostly on evacuation drills, and none of it is aimed at the tropical river basin, whose hydrological regime is unique.
In this catchment modeling environment, the multi-agent systems approach is applied as a surrogate for the conventional hydrologic model, to build a system that operates at the catchment level over deployed hydrometric stations and uses the data from hydrometric sensor networks (e.g., rainfall, river stage, river flow), captured, stored and administered by an organization of interacting agents whose main aim is to perform flow forecasting and raise awareness, and in so doing enhance the policy-making process at the watershed level. Section one of this document surveys the status of current research in hydrologic modeling for the flood forecasting task. It is a journey through the background of concerns related to the hydrological process, flood ontologies, management, and forecasting. The section covers, to a certain extent, the techniques, methods, and theoretical aspects of hydrological modeling and its types, from conventional models to present-day artificial intelligence prototypes, with special emphasis on multi-agent systems as the most recent modeling methodology in the hydrological sciences. It is underlined, however, that the section is not an all-inclusive review; rather, its purpose is to serve as a framework for this sort of work and to underline the significant aspects of the works discussed. Section two details the conceptual framework of the proposed multi-agent system in support of flood forecasting. To accomplish this task, several pieces of work were carried out, such as the design and implementation of the system's framework with the Belief-Desire-Intention (BDI) architecture for flood forecasting events within the context of the tropical river basin. The contributions of this proposed architecture are the replacement of conventional hydrologic modeling with multi-agent systems, which speeds up the administration of hydrometric time-series data and the modeling of the precipitation-runoff process which leads to floods in a river course. Another advantage is the user-friendly environment provided by the graphical interface of the proposed multi-agent system platform: the real-time generation of graphs, charts, and monitors with information on the immediate event taking place in the catchment makes it easy for a viewer with little or no background in data analysis and interpretation to get a visual idea of the information at hand regarding flood awareness. The agents developed in this multi-agent system modeling framework for flood forecasting have been trained, tested, and validated in a series of experimental tasks, using the hydrometric series of rainfall, river stage, and streamflow data collected by the hydrometric sensor agents from the field sensors.
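
    A minimal sketch of the sensor-agent / forecasting-agent division of labour described above, with the BDI loop reduced to perceive and deliberate steps. Station names, the warning threshold, and the naive stage-rating rule are placeholders, not the thesis's calibrated models:

        import java.util.List;

        // Sensor agents hand hydrometric readings to a forecasting agent that updates
        // its beliefs and raises a flood warning when the believed stage exceeds a threshold.
        public class FloodForecastSketch {

            record Reading(String station, double rainfallMm, double riverStageM) {}

            static class ForecastingAgent {
                private double believedStage = 0.0;                 // belief: worst forecast stage so far
                private static final double FLOOD_STAGE_M = 4.5;    // assumed warning threshold

                void perceive(Reading r) {                          // belief revision (illustrative rule)
                    believedStage = Math.max(believedStage, r.riverStageM + 0.05 * r.rainfallMm);
                }

                void deliberate() {                                 // intention: warn if needed
                    if (believedStage >= FLOOD_STAGE_M)
                        System.out.println("WARNING: forecast stage " + believedStage + " m exceeds flood stage");
                    else
                        System.out.println("Forecast stage " + believedStage + " m: no action");
                }
            }

            public static void main(String[] args) {
                List<Reading> fromSensorAgents = List.of(
                        new Reading("upstream-01", 35.0, 2.1),
                        new Reading("gauge-02", 60.0, 1.8));
                ForecastingAgent agent = new ForecastingAgent();
                fromSensorAgents.forEach(agent::perceive);
                agent.deliberate();
            }
        }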

    Modeling a Chemical Battlefield and the Resulting Effects in a Theater-Level Combat Model

    This thesis describes the development of a methodology to model chemical weapons use in the Joint Staff's Joint Warfare Analysis Experimental Prototype (JWAEP) and to quantify the resulting effects. The methodology incorporates organic unit assets and theater-level chemical assets into JWAEP by using the three principles of nuclear, biological, and chemical (NBC) defense, which reflect joint and Army doctrine, and combines them with the basic concepts already used in existing theater-level models. Other aspects of the problem include representing chemical 'packages' on the battlefield, determining attrition and time effects, adjusting unit effectiveness, determining chemical package intelligence acquisition procedures, identifying solution techniques, verifying the results, and making recommendations. The proposed solution techniques provide a feasible methodology for integrating high-resolution modeling into a low-resolution model. The algorithms incorporate the chemical estimate process, Mission Oriented Protective Posture (MOPP) analysis, and employment of appropriate doctrinal unit tactics based on a perception of existing or potential chemical weapons use. Thus, the methodology provides accurate input into the JWAEP for approximating real-world results as well as a structured and quantifiable framework reflecting joint and Army doctrine that can be used for stand-alone chemical effects analysis
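
    As a toy illustration of the unit-effectiveness adjustment mentioned above (the abstract gives no code): the MOPP degradation factors and the unprotected-exposure penalty below are invented placeholders, not the doctrinal values used in the thesis:

        // Adjusts a unit's baseline effectiveness according to its protective posture.
        public class MoppAdjustment {
            // Index = MOPP level 0..4; fraction of baseline effectiveness retained (assumed values).
            private static final double[] RETAINED = {1.00, 0.95, 0.90, 0.80, 0.70};

            static double adjustedEffectiveness(double baseline, int moppLevel, boolean agentPresent) {
                double eff = baseline * RETAINED[moppLevel];
                if (agentPresent && moppLevel < 4) eff *= 0.5;   // unprotected exposure penalty (assumed)
                return eff;
            }

            public static void main(String[] args) {
                System.out.println(adjustedEffectiveness(0.9, 2, false)); // 0.81
                System.out.println(adjustedEffectiveness(0.9, 2, true));  // 0.405
            }
        }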

    Development of a Safety Performance Decision-Making Tool for Flight Training Organizations

    The purpose of the research was to create and validate a safety performance decision-making tool that transforms a reactive safety model into a predictive decision-making tool, specific to flight training organizations, to increase safety and aid in operational decision-making. Using Monte Carlo simulation, the study conducted simulation runs based on operational ranges to simulate operating conditions under varying levels of controllable resources in terms of personnel (Aviation Maintenance Technicians and Instructor Pilots) and expenditures (active flight students and available aircraft). Four what-if scenarios were conducted by manipulating the controllable inputs. Changes to the controllable inputs are reflected in variations of the outputs, demonstrating the utility and potential of the safety performance decision-making tool. The outputs could be utilized by safety personnel and administrators to make more informed safety-related decisions without expending unnecessary resources
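
    A minimal Monte Carlo sketch in the spirit of the tool: controllable inputs are sampled from operational ranges and pushed through a surrogate output measure. The ranges and the incident-rate formula are invented for illustration and are not the study's validated model:

        import java.util.Random;

        // Samples staffing and fleet levels from assumed operational ranges and
        // averages a surrogate incident-rate measure over many runs.
        public class SafetyMonteCarlo {
            public static void main(String[] args) {
                Random rng = new Random(42);
                int runs = 10_000;
                double totalIncidentRate = 0;
                for (int i = 0; i < runs; i++) {
                    int technicians = 5 + rng.nextInt(11);      // 5..15 maintenance technicians
                    int instructors = 10 + rng.nextInt(21);     // 10..30 instructor pilots
                    int students    = 50 + rng.nextInt(151);    // 50..200 active flight students
                    int aircraft    = 8 + rng.nextInt(13);      // 8..20 available aircraft
                    // Assumed surrogate: incidents rise with workload per instructor and aircraft,
                    // and fall with maintenance capacity.
                    double workload = (double) students / (instructors * aircraft);
                    totalIncidentRate += workload / (1.0 + 0.2 * technicians);
                }
                System.out.printf("mean simulated incident rate: %.4f%n", totalIncidentRate / runs);
            }
        }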

    An information assistant system for the prevention of tunnel vision in crisis management

    In the crisis management environment, tunnel vision is a set of biases in decision makers' cognitive processes which often leads to an incorrect understanding of the real crisis situation, biased perception of information, and improper decisions. The tunnel vision phenomenon is a consequence both of the challenges of the task and of natural limitations in human cognitive processing. An information assistant system is proposed with the purpose of preventing tunnel vision. The system serves as a platform for monitoring the on-going crisis event. All information goes through the system before it arrives at the user. The system enhances data quality, reduces data quantity and presents the crisis information in a manner that prevents or repairs the user's cognitive overload. While working with such a system, the users (crisis managers) are expected to be more likely to stay aware of the actual situation, stay open-minded to possibilities, and make proper decisions
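
    A minimal sketch of the "everything passes through the assistant" idea: incoming crisis reports are de-duplicated, ranked, and capped before reaching the decision maker. The relevance scores and the cap are placeholders rather than the system's actual filtering logic:

        import java.util.Comparator;
        import java.util.List;

        // De-duplicates, ranks, and caps incoming reports before presentation.
        public class CrisisInfoAssistant {
            record Report(String text, double relevance) {}

            static List<Report> filterForUser(List<Report> incoming, int maxItems) {
                return incoming.stream()
                        .distinct()                                        // improve quality: drop duplicates
                        .sorted(Comparator.comparingDouble(Report::relevance).reversed())
                        .limit(maxItems)                                   // reduce quantity: cap what the user sees
                        .toList();
            }

            public static void main(String[] args) {
                List<Report> reports = List.of(
                        new Report("Bridge A closed", 0.9),
                        new Report("Bridge A closed", 0.9),
                        new Report("Shelter B at capacity", 0.7),
                        new Report("Minor traffic delay", 0.2));
                filterForUser(reports, 2).forEach(r -> System.out.println(r.text()));
            }
        }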

    Application of a Blockchain Enabled Model in Disaster Aids Supply Network Resilience

    The disaster area is a dynamic environment. The bottleneck in distributing supplies may come from damaged infrastructure or from the unavailability of accurate information about the required amounts. The success of the disaster response network is based on collaboration, coordination, sovereignty, and equality in relief distribution. Therefore, a reliable dynamic communication system is required to facilitate interactions, enhance knowledge for the relief operation, prioritize, and coordinate the distribution of goods. One of the promising innovative technologies is blockchain, which enables transparent, secure, and real-time information exchange and automation through smart contracts. This study analyzes the application of blockchain technology to disaster management resilience. The influence of this promising application on disaster aid supply network resilience, combined with the Internet of Things (IoT) and a Dynamic Voltage Frequency Scaling (DVFS) algorithm, is explored through a network-based simulation. The theoretical analysis reveals an advancement in disaster-aid supply network strategies using smart contracts for collaboration. The simulation study indicates an enhancement in resilience through improved collaboration and communication, owing to more time-efficient processing of disaster supply management. From these investigations, insights have been derived for researchers in the field and for managers interested in practical implementation
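
    A simplified illustration of the transparency argument: aid-supply events are recorded in an append-only, hash-chained log that any relief partner could verify. This is a toy ledger, not the blockchain platform, smart-contract code, or DVFS scheduling used in the study:

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.util.ArrayList;
        import java.util.List;

        // Each appended event is chained to the previous one via a SHA-256 digest,
        // so tampering with any earlier entry invalidates all later hashes.
        public class AidSupplyLedger {
            private final List<String> blocks = new ArrayList<>();
            private String previousHash = "GENESIS";

            void append(String event) throws Exception {
                MessageDigest sha = MessageDigest.getInstance("SHA-256");
                byte[] digest = sha.digest((previousHash + event).getBytes(StandardCharsets.UTF_8));
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) hex.append(String.format("%02x", b));
                previousHash = hex.toString();
                blocks.add(event + " | " + previousHash);
            }

            public static void main(String[] args) throws Exception {
                AidSupplyLedger ledger = new AidSupplyLedger();
                ledger.append("Region-3 requests 500 water kits");
                ledger.append("Warehouse-1 dispatches 500 water kits to Region-3");
                ledger.blocks.forEach(System.out::println);
            }
        }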

    Traceability -- A Literature Review

    In light of recent food safety crises and international trade concerns associated with food- or animal-associated diseases, traceability has once again become important in the minds of public policymakers, business decision makers, consumers and special interest groups. This study reviews studies on traceability, government regulation and consumer behaviour, provides case studies of current traceability systems, and gives a rough breakdown of the various costs and benefits of traceability. This report aims to identify gaps that may currently exist in the literature on traceability in the domestic beef supply chain, as well as to provide possible directions for future research on the issue. Three main conclusions can be drawn from this study. First, there is a lack of a common definition of traceability. Hence, identifying similarities and differences across studies becomes difficult if not impossible. To this end, this study adopts CFIA's definition of traceability. This definition has been adopted by numerous other agencies, including in the EU's official definition of traceability; however, it may or may not be acceptable from the perspective of major Canadian beef and cattle trade partners. Second, the studies reviewed in this report address one or more of five key objectives: the impact of changing consumer behaviour on market participants, suppliers' incentives to adopt or participate in traceability, the impact of regulatory changes, supplier responses to crises, and technical descriptions of traceability systems. Drawing on the insights from the consumer studies, it seems that consumers do not value traceability per se; rather, traceability is a means for consumers to receive validation of another production or process attribute that they are interested in. Moreover, supply chain improvement, food safety control and access to foreign market segments are strong incentives for primary producers and processors to participate in programs with traceability features. However, the objectives addressed by the studies reviewed in this paper are not necessarily those of most immediate relevance to decision makers choosing which traceability standards to recommend, require, or subsidize. In many cases the research objectives of previous work have been extremely narrow, creating a body of literature that is incomplete in certain key areas. Third, case studies of existing traceability systems in Australia, the UK, Scotland, Brazil and Uruguay indicate that the pattern of development varies widely across sectors and regions. In summary, a traceability system by itself cannot provide value-added for all participants in the industry; it is merely a protocol for documenting and sharing information. Value is added for participants in the marketing chain through traceability in the form of reduced transaction costs in the case of a food safety incident and through the ability to shift liability. To ensure consumer benefit and to have premiums returned to primary producers, the type of information that consumers value is an important issue for future research. A successful program that piques consumer interest and can enhance the eating experience can generate economic benefits for all sectors of the beef industry. International market access will increasingly require traceability in the marketing system in order to satisfy trade restrictions in the case of animal diseases and country-of-origin labelling, to name only a few examples. 
Designing appropriate traceability protocols industry-wide is therefore becoming very important.
Keywords: traceability, institutions, Canada, consumer behaviour, producer behaviour, supply chain, Agricultural and Food Policy, Consumer/Household Economics, Food Consumption/Nutrition/Food Safety, Health Economics and Policy, International Relations/Trade, Livestock Production/Industries, Marketing, Production Economics, D020, D100, D200, Q100