
    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was to train students attending Architecture and Engineering courses in geospatial data acquisition and processing, in order to start up a team of "volunteer mappers". The project aims to document environmental and built heritage subject to disaster; the purpose is to improve the capabilities of the actors involved in geospatial data collection, integration and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, registered in the World Heritage List since 1997, which was affected by a flood on 25 October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatics methods and techniques such as terrestrial and aerial Lidar, close-range and aerial photogrammetry, and topographic and GNSS instruments, or by non-conventional systems and instruments such as UAVs and mobile mapping. The ultimate goal is to implement a WebGIS platform to share all the collected data with local authorities and the Civil Protection.

    Efficient Decision Support Systems

    This series is directed to diverse managerial professionals who are leading the transformation of individual domains by using expert information and domain knowledge to drive decision support systems (DSSs). The series offers a broad range of subjects addressed in specific areas such as health care, business management, banking, agriculture, environmental improvement, natural resource and spatial management, aviation administration, and hybrid applications of information technology aimed at interdisciplinary issues. The series is composed of three volumes: Volume 1 covers general concepts and methodology of DSSs; Volume 2 covers applications of DSSs in the biomedical domain; Volume 3 covers hybrid applications of DSSs in multidisciplinary domains. The series is shaped around decision support strategies in the new infrastructure, assisting readers in making full use of creative technology to manipulate input data and to transform information into useful decisions for decision makers.

    Advances in Robotics, Automation and Control

    The book presents an excellent overview of recent developments in the different areas of robotics, automation and control. Through its 24 chapters, it presents topics related to control and robot design, and it also introduces new mathematical tools and techniques devoted to improving system modelling and control. An important point is the use of rational agents and heuristic techniques to cope with the computational complexity required for controlling complex systems. The book also covers navigation and vision algorithms, automatic handwriting comprehension, and speech recognition systems that will be included in the next generation of production systems.


    Design and development of an ontology-based multi-agent virtual factory system.

    Major developments in computers and information technologies enable industrial and mechanical engineers to establish new net-based, virtual collaboration platforms for enterprises. By using such a platform, enterprises can combine their resources and capabilities in project-based collaborations while protecting their independent mainstream policies and securing their confidential information. This collaboration model between multiple business partners in a value chain is called a Virtual Enterprise (VE). The VE model is particularly feasible and appropriate for Small and Medium Enterprises (SMEs) and for industry parks containing multiple SMEs with different vertical competencies. One of the main targets of this research is to create an Ontology-based Multi-Agent Virtual Enterprise (OMAVE) system that provides a platform for collaboration between technology start-ups in techno-parks and SMEs in Organized Industrial Zones, in order to produce high-value-added, high-tech products. OMAVE aims to help SMEs shift from the classic trend of manufacturing part pieces towards high-tech, innovative, research-based products. To reach this goal, a new semantic data infrastructure was developed to enhance the re-configurability and flexibility of virtual enterprise systems. In order to support flexibility in VE business processes and enhance their integration with enterprises' existing manufacturing systems (e.g., MRP), an ontology-based domain model of the VE system was established: an OWL DL semantic data structure was developed by defining the concepts, axioms, rules and functions of the VE system; a TDB data store keeps VE data and information in the form of triples; and the SPARQL semantic query language for RDF is used to handle and manipulate data in the system data store.
This architecture supports structural flexibility of the developed VE infrastructure and improves the reusability of data and knowledge in the VE life cycle. To establish a multi-agent partner selection platform, different agent types have been developed. These agents collaborate and compete with each other to select the most appropriate partner for the forthcoming VE project consortium. The agent-based auctioning platform is coupled with a Fuzzy-AHP-TOPSIS multi-criteria decision-making algorithm to evaluate incoming bids from agents and rank proposals in each iteration. The semantics of agent interaction is provided by an agent ontology, which defines the concepts, properties and all message formats needed to settle a common language for interactions between agents. Concurrent engineering, collaborative design and Product Life Cycle Management (PLM) concepts were implemented by integrating Dassault Systèmes' web-based CATIA/ENOVIA V6 design and PLM tools into the OMAVE system. To test and verify these achievements, a case study producing a test product with the developed OMAVE tools was carried out. The test product was manufactured with contributions from SMEs in the OSTIM Organized Industrial Zone Aviation and Defense Cluster. Ph.D. - Doctoral Program
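    As an illustration of the bid-ranking step, the following sketch implements plain (non-fuzzy) TOPSIS over a hypothetical bid matrix; the thesis couples it with Fuzzy-AHP-derived criterion weights, which are assumed here as given constants.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with classic TOPSIS.
    matrix: rows = bids, cols = criteria; weights sum to 1;
    benefit[j] is True if higher is better for criterion j."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)          # vector normalization
    v = norm * np.asarray(weights)                # weighted normalized matrix
    best = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - best, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    # closeness coefficient in [0, 1]; higher is better
    return d_worst / (d_best + d_worst)

# three hypothetical bids scored on price (cost), delivery time (cost),
# and quality (benefit); names and numbers are invented for illustration
bids = [[100, 5, 0.9],
        [ 80, 7, 0.7],
        [ 90, 6, 0.8]]
scores = topsis(bids, [0.4, 0.2, 0.4], np.array([False, False, True]))
ranking = np.argsort(-scores)   # best bid first
```

    In each auction iteration the agents would recompute the matrix from incoming bids and re-rank; the fuzzy extension replaces the crisp entries and weights with fuzzy numbers before this step.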

    Towards semantics-driven modelling and simulation of context-aware manufacturing systems

    Systems modelling and simulation are two important facets of thoroughly and effectively analysing manufacturing processes. The ever-growing complexity of the latter, the increasing amount of knowledge, and the use of Semantic Web techniques that attach meaning to data have led researchers to explore and combine methodologies, exploiting their best features, with the purpose of supporting manufacturing system modelling and simulation applications. In the past two decades, the use of ontologies has proven highly effective for context modelling and knowledge management. Nevertheless, ontologies are not meant for model simulation. Simulation, instead, can be achieved with a well-known workflow-oriented mathematical modelling language such as the Petri Net (PN), which brings modelling and analytical features suitable for creating a digital copy of an industrial system (also known as a "digital twin"). The theoretical framework presented in this dissertation aims to exploit W3C standards, such as the Semantic Web Rule Language (SWRL) and the Web Ontology Language (OWL), to transform each piece of knowledge regarding a manufacturing system into Petri Net modelling primitives. In so doing, it supports the semantics-driven instantiation, analysis and simulation of what we call semantically-enriched PN-based manufacturing system digital twins. The approach proposed by this exploratory research is therefore based on exploiting the best features of state-of-the-art W3C standards for Linked Data, such as OWL and SWRL, together with Petri Nets, a multipurpose graphical and mathematical modelling tool. The former are used for gathering, classifying and properly storing industrial data, and therefore enhance our PN-based digital copy of an industrial system with advanced reasoning features.
This makes both the system modelling and analysis phases more effective and, above all, paves the way towards a completely new field, where semantically-enriched PN-based manufacturing system digital twins are one of the drivers of the digital transformation already in place in all companies facing the industrial revolution. As a result, it has been possible to outline a list of indications that will help future efforts in the application of complex digital-twin-oriented solutions based on semantically-enriched manufacturing information systems. Through the application cases, five key topics have been tackled, namely: (i) semantic enrichment of industrial data using the most recent ontological models, in order to enhance its value and enable new uses; (ii) context-awareness, or context-adaptiveness, aiming to enable the system to capture and use information about the context of operations; (iii) reusability, a core concept through which we emphasize the importance of reusing existing assets within the industrial modelling process, such as industrial process knowledge, process data, system modelling primitives, and the like; (iv) the ultimate goal of semantic interoperability, which can be accomplished by adding metadata that link each data element to a controlled, shared vocabulary; and finally, (v) the impact on modelling and simulation applications, showing how we could automate the translation of industrial knowledge into a digital manufacturing system and empower it with quantitative and qualitative analytical techniques.
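    To make the Petri Net side concrete, here is a minimal place/transition net with token-firing semantics; the small manufacturing net shown (one machine consuming raw parts into a done buffer) is a hypothetical example, not taken from the dissertation.

```python
# Minimal sketch of the place/transition primitives that OWL/SWRL knowledge
# would be mapped onto: places hold tokens, transitions consume and produce.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (input arcs, output arcs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# hypothetical net: a machine processes raw parts one at a time
net = PetriNet({"raw": 2, "machine_free": 1, "done": 0})
net.add_transition("process", {"raw": 1, "machine_free": 1},
                              {"done": 1, "machine_free": 1})
net.fire("process")
# marking now: raw=1, machine_free=1, done=1
```

    In the dissertation's framework, the places, transitions and initial marking would be instantiated automatically from OWL individuals and SWRL rules rather than written by hand as above.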

    A framework for context-aware sensor fusion

    International Mention in the doctoral degree. Sensor fusion is a mature but very active research field within the more general discipline of information fusion. It studies how to combine data coming from different sensors in such a way that the resulting information is better in some sense (more complete, accurate or stable) than any of the original sources used individually. Context is defined as everything that constrains or affects the process of solving a problem without being part of the problem or the solution itself. Over the last years, the scientific community has shown a remarkable interest in the potential of exploiting this context information for building smarter systems that can make better use of the available information. Traditional sensor fusion systems are based on fixed processing schemes over a predefined set of sensors, where both the employed algorithms and the domain are assumed to remain unchanged over time. Nowadays, affordable mobile and embedded systems have high sensory, computational and communication capabilities, making them a perfect base for building sensor fusion applications. This represents an opportunity to explore fusion systems that are bigger and more complex, but it poses the challenge of offering optimal performance under changing and unexpected circumstances. This thesis proposes a framework supporting the creation of sensor fusion systems with self-adaptive capabilities, in which context information plays a crucial role. These two aspects have never before been integrated in a common approach to the sensor fusion problem. The proposal includes a preliminary theoretical analysis of both aspects of the problem, the design of a generic architecture capable of hosting any type of centralized sensor fusion application, and a description of the process to be followed to apply the architecture to a given sensor fusion problem.
The experimental section shows how to apply this thesis' proposal, step by step, to create a context-aware sensor fusion system with self-adaptive capabilities. This process is illustrated for two different domains: a maritime/coastal surveillance application, and ground vehicle navigation in an urban environment. The results obtained demonstrate the viability and validity of the implemented prototypes, as well as the benefit of including context information to enhance sensor fusion processes. Programa Oficial de Doctorado en Ciencia y Tecnología Informática. Presidente: Javier Bajo Pérez.- Secretario: Antonio Berlanga de Jesús.- Vocal: Lauro Snidar
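    At its core, any such architecture repeatedly performs the basic fusion step of combining redundant measurements. A minimal sketch of one common choice, inverse-variance weighted fusion of independent scalar estimates, is:

```python
def fuse(estimates):
    """Inverse-variance weighted fusion of independent scalar estimates.
    estimates: list of (value, variance) pairs. Returns (value, variance)
    of the fused estimate; lower-variance sources get more weight."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for (v, _), w in zip(estimates, weights)) / total
    return value, 1.0 / total

# two hypothetical range sensors observing the same target
fused_value, fused_var = fuse([(10.2, 0.5), (9.8, 0.5)])
# equal variances -> fused value is the mean (10.0), variance halves twice (0.25)
```

    In a context-aware, self-adaptive system like the one the thesis proposes, the variances (and even the set of participating sensors) would be adjusted at runtime from context information instead of being fixed constants.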

    Data quality issues in electronic health records for large-scale databases

    Data Quality (DQ) in Electronic Health Records (EHRs) plays a decisive role in improving the quality of healthcare services. DQ issues in EHRs motivate the introduction of an adaptive framework for interoperability and standards in Large-Scale Database (LSDB) management systems. Large-scale data communication is challenging for traditional approaches to satisfy the needs of consumers, as data are often not captured into Database Management Systems (DBMSs) quickly enough to enable their subsequent uses. In addition, large datasets hold considerable value for all the fields represented in the DBMS. EHR technology provides portfolio management systems that allow HealthCare Organisations (HCOs) to deliver a higher quality of care to their patients than is possible with paper-based records. EHRs are in high demand among HCOs, as huge datasets accumulate in their daily services. Efficient EHR systems reduce data redundancy and application failures, and they make it possible to produce all necessary reports. However, one of the main challenges in developing efficient EHR systems is the inherent difficulty of coherently managing data from diverse heterogeneous sources: it is practically challenging to integrate diverse data into a global schema that satisfies users' needs, and managing EHR systems with an existing DBMS is complicated by the incompatibility, and sometimes inconsistency, of data structures. As a result, no common methodological approach currently exists that effectively solves every data integration problem. These DQ challenges raise the need for an efficient way to integrate large EHRs from diverse heterogeneous sources.
To handle and align large datasets efficiently, a hybrid method logically combining a Fuzzy-Ontology approach with a large-scale EHR analysis platform showed improved accuracy. This study investigated the DQ issues raised and the interventions needed to overcome these barriers and challenges, including the provision of EHRs as they pertain to DQ, and combined features to search, extract, filter, clean and integrate data so that users can coherently create new, consistent data sets. The study designed a hybrid method based on Fuzzy-Ontology, with mathematical simulations based on a Markov chain probability model. A similarity measurement based on the dynamic Hungarian algorithm, developed following the Design Science Research (DSR) methodology, increases the quality of service across HCOs in adaptive frameworks.
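    The field-alignment idea behind such a similarity measurement can be sketched as an assignment problem: match fields from two schemas so that total name similarity is maximal. The snippet below uses brute-force search over permutations instead of the Hungarian algorithm (fine for tiny schemas; the Hungarian algorithm solves the same problem in O(n^3)), and all field names are invented for illustration.

```python
from difflib import SequenceMatcher
from itertools import permutations

def similarity(a, b):
    """Crude string similarity in [0, 1] between two field names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def align_fields(source, target):
    """One-to-one field alignment maximizing total similarity,
    found by exhaustive search over all permutations."""
    best, best_score = None, -1.0
    for perm in permutations(range(len(target))):
        score = sum(similarity(source[i], target[j])
                    for i, j in enumerate(perm))
        if score > best_score:
            best, best_score = perm, score
    return {source[i]: target[j] for i, j in enumerate(best)}

# hypothetical field names from two EHR exports
mapping = align_fields(["patient_id", "birth_date", "sys_bp"],
                       ["DateOfBirth", "PatientID", "SystolicBP"])
# -> patient_id: PatientID, birth_date: DateOfBirth, sys_bp: SystolicBP
```

    A production system would replace the string heuristic with an ontology-backed, fuzzy similarity and the exhaustive search with the Hungarian algorithm.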

    Multi-agent system for flood forecasting in a tropical river basin

    As is well known, the problems related to the generation, control, and management of floods have been treated with traditional hydrologic modelling tools focused on the study and analysis of the precipitation-runoff relationship, a physical process driven by the hydrological cycle and the climate regime that is directly proportional to the generation of floodwaters. Within the hydrological discipline, these traditional modelling tools are classified into three principal groups: empirical, trial-and-error models ("black-box models"); conceptual models, subdivided into "lumped", "semi-lumped" and "semi-distributed" according to their spatial distribution; and models based on physical processes, the so-called "distributed models" or "white-box models". In engineering applications, on the other hand, two types of models are used in streamflow forecasting, classified with respect to the measurements and variables they require: "physically based models" and "data-driven models". Physically based models present an in-depth account of the dynamics of the physical processes occurring among the different systems of a given hydrographic basin. However, aside from being laborious to implement, they rely thoroughly on mathematical algorithms, and an understanding of these interactions requires the abstraction of mathematical concepts and the conceptualization of the physical processes intertwined among these systems. Data-driven models, by contrast, do not require a-priori knowledge of the physical laws governing the process; they rely solely on empirical equations, which need large amounts of numeric data and on-site calibration. 
The two types of models therefore differ markedly in their data requirements and in how they express physical phenomena. Although there has been considerable progress in hydrologic modelling for flood forecasting, several significant setbacks remain unresolved. Given the stochastic nature of hydrological phenomena, the challenge is to implement user-friendly, re-usable, robust, and reliable forecasting systems that can cope with the uncertainty involved in the flood forecasting problem. In the past decades, with the growth of the artificial intelligence (AI) field, some researchers have attempted to address the stochastic nature of hydrologic events by applying some of these techniques. Given the setbacks to hydrologic flood forecasting described above, this thesis aims to integrate physics-based hydrologic, hydraulic, and data-driven models under the paradigm of multi-agent systems by designing and developing a multi-agent system (MAS) framework for flood forecasting events within the scope of tropical watersheds. With the emergence of agent technologies, agent-based modelling and multi-agent system simulation methods have been applied to several areas of water management, such as flood protection, planning, control, management, mitigation, and forecasting, to combat the shocks produced by floods on society; however, these applications have focused on evacuation drills, and none has been aimed at tropical river basins, whose hydrological regime is extremely unique. 
In this catchment modelling environment, the multi-agent systems approach is applied as a surrogate for the conventional hydrologic model, building a system that operates at the catchment level with deployed hydrometric stations. Data from networks of hydrometric sensors (e.g., rainfall, river stage, river flow) are captured, stored and administered by an organization of interacting agents whose main aim is to perform flow forecasting and raise awareness, and in so doing enhance the policy-making process at the watershed level. Section one of this document surveys the status of current research in hydrologic modelling for the flood forecasting task. It is a journey through the background of concerns related to the hydrological process, flood ontologies, management, and forecasting. The section covers, to a certain extent, the techniques, methods, and theoretical aspects of hydrological modelling and its types, from conventional models to present-day artificial intelligence prototypes, with special emphasis on multi-agent systems as the most recent modelling methodology in the hydrological sciences. It is underlined, however, that the section is not an all-inclusive review; rather, its purpose is to serve as a framework for this sort of work and to highlight its significant aspects. Section two details the conceptual framework of the proposed multi-agent system in support of flood forecasting. To accomplish this, several tasks were carried out, such as the design and implementation of the system's framework with the Belief-Desire-Intention (BDI) architecture for flood forecasting events within the context of a tropical river basin. 
The contributions of this proposed architecture are the replacement of conventional hydrologic modelling with multi-agent systems, which speeds up the administration of hydrometric time-series data and the modelling of the precipitation-runoff process that leads to floods in a river course. Another advantage is the user-friendly environment provided by the graphical interface of the proposed multi-agent system platform: the real-time generation of graphs, charts, and monitors with information on the event taking place in the catchment makes it easy for viewers with little or no background in data analysis to get a visual picture of the flood situation at hand. The agents developed in this multi-agent modelling framework for flood forecasting have been trained, tested, and validated on a series of experimental tasks, using the hydrometric series of rainfall, river stage, and streamflow data collected by the hydrometric sensor agents. Programa de Doctorado en Ciencia y Tecnología Informática por la Universidad Carlos III de Madrid. Presidente: María Araceli Sanchis de Miguel.- Secretario: Juan Gómez Romero.- Vocal: Juan Carlos Corrale
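    As a sketch of the data-driven component such a forecasting agent could wrap, the following fits a minimal linear ARX model (flow regressed on lagged flow and rainfall) by least squares; the synthetic series, lag choice and coefficients are invented for illustration and are not the thesis' actual model.

```python
import numpy as np

def fit_arx(flow, rain, lags=2):
    """Least-squares ARX model: flow[t] ~ flow[t-lags..t-1] + rain[t-lags..t-1].
    A minimal stand-in for a data-driven streamflow forecaster."""
    rows, targets = [], []
    for t in range(lags, len(flow)):
        rows.append(np.concatenate([flow[t - lags:t], rain[t - lags:t]]))
        targets.append(flow[t])
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return coef

def forecast(coef, recent_flow, recent_rain):
    """One-step-ahead forecast from the most recent lagged values."""
    return float(np.concatenate([recent_flow, recent_rain]) @ coef)

# synthetic catchment: flow responds to rainfall with a one-step delay
rng = np.random.default_rng(0)
rain = rng.random(200)
flow = np.zeros(200)
for t in range(1, 200):
    flow[t] = 0.6 * flow[t - 1] + 2.0 * rain[t - 1]

coef = fit_arx(flow, rain, lags=1)          # recovers [0.6, 2.0]
pred = forecast(coef, flow[-1:], rain[-1:])  # next-step flow estimate
```

    In the MAS framework, a sensor agent would feed the latest rainfall and stage readings into such a model, and a forecasting agent would publish `pred` to the awareness and visualization agents.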