49 research outputs found

    Geometry of Radial Basis Neural Networks for Safety Biased Approximation of Unsafe Regions

    Barrier function-based inequality constraints are a means of enforcing safety specifications for control systems. When used in conjunction with a convex optimization program, they provide a computationally efficient way to enforce safety for the general class of control-affine systems. One of the main assumptions of this approach is a priori knowledge of the barrier function itself, i.e., knowledge of the safe set. In the context of navigation through unknown environments, where the locally safe set evolves with time, such knowledge does not exist. This manuscript focuses on the synthesis of a zeroing barrier function characterizing the safe set from safe and unsafe sample measurements, e.g., from perception data in navigation applications. Prior work formulated a supervised machine learning algorithm whose solution guaranteed the construction of a zeroing barrier function with specific level-set properties, but did not explore the geometry of the neural network design used for the synthesis process. This manuscript describes the specific geometry of the neural network used for zeroing barrier function synthesis, and shows how the network provides the representation necessary to split the state space into safe and unsafe regions.
    Comment: Accepted into American Control Conference (ACC) 202
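    As a hedged illustration of the geometry this abstract refers to (not the authors' exact construction), a zeroing barrier function h can be realized as a radial basis network whose sign splits the state space: h(x) > 0 on safe samples and h(x) < 0 on unsafe ones, so the zero level set approximates the safety boundary. The centers, width parameter, and least-squares fit below are assumptions made for the sketch.

```python
import numpy as np

def rbf_features(X, centers, gamma):
    """Gaussian RBF features phi_i(x) = exp(-gamma * ||x - c_i||^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (n, m)
    return np.exp(-gamma * d2)

def fit_zbf(X_safe, X_unsafe, centers, gamma=2.0, reg=1e-6):
    """Ridge-regularized least-squares fit of h(x) = w . phi(x) + b with
    targets +1 on safe and -1 on unsafe samples, so the zero level set
    {h = 0} separates the two regions."""
    X = np.vstack([X_safe, X_unsafe])
    y = np.hstack([np.ones(len(X_safe)), -np.ones(len(X_unsafe))])
    Phi = np.hstack([rbf_features(X, centers, gamma), np.ones((len(X), 1))])
    wb = np.linalg.solve(Phi.T @ Phi + reg * np.eye(Phi.shape[1]), Phi.T @ y)
    return wb[:-1], wb[-1]

# Toy 2-D example: an unsafe disc of radius 0.5 at the origin.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(400, 2))
unsafe = np.linalg.norm(X, axis=1) < 0.5
centers = X[::10]
w, b = fit_zbf(X[~unsafe], X[unsafe], centers)
h = rbf_features(X, centers, 2.0) @ w + b
print("fraction of samples correctly signed:", np.mean((h > 0) == ~unsafe))
```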

    A forecasting solution to the oil spill problem based on a hybrid intelligent system

    Oil spills are among the most destructive environmental disasters. Predicting the possibility of finding oil slicks in a given area after an oil spill can be critical to reducing environmental risks. The system presented here uses the Case-Based Reasoning (CBR) methodology to forecast the presence or absence of oil slicks in certain open-sea areas after an oil spill. CBR is a computational methodology designed to generate solutions to new problems by analysing the solutions given to previously solved problems. The proposed CBR system includes a novel network for data classification and retrieval. This type of network, constructed by an algorithm that summarizes the results of an ensemble of Self-Organizing Maps, is explained and analysed in the present study. The Weighted Voting Superposition (WeVoS) algorithm aims chiefly to obtain the most topographically ordered representation of a dataset in the map. This study shows how the proposed system, called WeVoS-CBR, uses information such as salinity, temperature, pressure, and the number and area of the slicks, obtained from various satellites, to accurately predict the presence of oil slicks off the north-west Galician coast using historical data.
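    As a rough sketch of the kind of fusion WeVoS performs, assuming each trained Self-Organizing Map in the ensemble exposes a codebook of unit weights and per-unit hit counts (the vote weighting below is a simplification, not the published algorithm):

```python
import numpy as np

def wevos_fuse(codebooks, hit_counts):
    """Fuse an ensemble of SOMs with identical topology into one map.

    codebooks:  list of (rows, cols, dim) arrays of unit weight vectors
    hit_counts: list of (rows, cols) arrays counting how many samples
                each unit won, used here as its vote strength

    For every map position, the fused unit is the hit-weighted average
    of the homologous units across the ensemble, so better-supported
    units dominate the final, topographically ordered map.
    """
    W = np.stack(codebooks)                  # (k, rows, cols, dim)
    H = np.stack(hit_counts).astype(float)   # (k, rows, cols)
    H = H / (H.sum(axis=0, keepdims=True) + 1e-12)
    return (W * H[..., None]).sum(axis=0)

# Toy usage: three random 5x5 maps over 4-dimensional data.
rng = np.random.default_rng(1)
maps = [rng.normal(size=(5, 5, 4)) for _ in range(3)]
hits = [rng.integers(0, 20, size=(5, 5)) for _ in range(3)]
print(wevos_fuse(maps, hits).shape)  # (5, 5, 4)
```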

    CROS: A Contingency Response multi-agent system for Oil Spill situations

    This paper presents CROS, a Contingency Response multi-agent system for Oil Spill situations. The system uses the Case-Based Reasoning (CBR) methodology to generate predictions of the probability of finding oil slicks in certain areas of the ocean; CBR uses past information to generate solutions to the current problem. The system employs a SOA-based multi-agent architecture so that its main components can be accessed remotely. All functionalities (applications and services) can therefore communicate in a distributed way, even from mobile devices. The core of the system is a group of deliberative agents acting as controllers and administrators for all applications and services. CROS manages information such as sea salinity, sea temperature, wind speed, ocean currents and atmospheric pressure, obtained from several sources including satellite images. The system has been trained using historical data gathered after the Prestige accident off the Galician coast in north-west Spain. Results demonstrate that the system can accurately predict the presence of oil slicks in given zones after an oil spill, and that the distributed multi-agent architecture enhances the overall performance of the system.
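    The abstract describes the architecture rather than code, but a minimal sketch of the pattern it names, deliberative controller agents routing remote requests to registered services, could look like the following; every class, service, and payload name here is hypothetical.

```python
from typing import Callable, Dict

class ControllerAgent:
    """Deliberative agent that registers services and routes requests,
    standing in for the SOA layer that lets applications and mobile
    devices invoke functionality remotely."""

    def __init__(self) -> None:
        self.services: Dict[str, Callable[[dict], dict]] = {}

    def register(self, name: str, service: Callable[[dict], dict]) -> None:
        self.services[name] = service

    def handle(self, request: dict) -> dict:
        service = self.services.get(request["service"])
        if service is None:
            return {"status": "error", "reason": "unknown service"}
        return {"status": "ok", "result": service(request["payload"])}

def slick_prediction(payload: dict) -> dict:
    # Placeholder for the CBR prediction service: in CROS this would
    # retrieve past cases (salinity, temperature, currents, ...) and
    # adapt them to the current situation.
    return {"probability_of_slick": 0.5, "area": payload["area"]}

controller = ControllerAgent()
controller.register("predict_slick", slick_prediction)
print(controller.handle({"service": "predict_slick",
                         "payload": {"area": "43.2N 9.1W"}}))
```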

    Forest Fires Prediction by an Organization Based System

    This study presents a new Organization Based System for Forest Fires Forecasting (OBSFFF). The core of the system is based on the Case-Based Reasoning (CBR) methodology, and it is able to generate predictions about the evolution of forest fires in certain areas; CBR uses historical data to create new solutions to current problems. The system employs a distributed multi-agent architecture so that its main components can be accessed remotely, and all the elements of the final system communicate in a distributed way from different types of interfaces and devices. OBSFFF has been applied to generate predictions in real forest fire situations, using historical data both to train the system and to check the results. Results demonstrate that the system accurately predicts the evolution of fires and that the distributed architecture enhances the overall performance of the system.
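    As a hedged sketch of the generic CBR cycle these forecasting systems build on (retrieve similar past cases, reuse them to form a prediction, retain new cases), assuming a simple nearest-neighbour retrieval over numeric case features; the fire-related features and values are invented for illustration.

```python
import numpy as np

class CaseBase:
    """Minimal retrieve-reuse-retain loop over numeric cases."""

    def __init__(self):
        self.features, self.outcomes = [], []

    def retain(self, features, outcome):
        self.features.append(np.asarray(features, dtype=float))
        self.outcomes.append(float(outcome))

    def predict(self, query, k=3):
        # Retrieve: find the k most similar past cases by distance ...
        X = np.stack(self.features)
        d = np.linalg.norm(X - np.asarray(query, dtype=float), axis=1)
        nearest = np.argsort(d)[:k]
        # ... then reuse them: here, a distance-weighted average.
        w = 1.0 / (d[nearest] + 1e-9)
        return float(np.average(np.array(self.outcomes)[nearest], weights=w))

# Hypothetical cases: [wind speed, humidity, temperature] -> burned area (ha).
cb = CaseBase()
cb.retain([30, 20, 35], 120.0)
cb.retain([10, 60, 22], 8.0)
cb.retain([25, 30, 30], 75.0)
print(cb.predict([28, 25, 33]))
```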

    Smart Bagged Tree-based Classifier optimized by Random Forests (SBT-RF) to Classify Brain-Machine Interface Data

    Brain-Computer Interface (BCI) is a technology that uses electrodes and sensors to connect machines and computers with the human brain in order to improve a person's mental performance. Human intentions and thoughts, captured as Electroencephalogram (EEG) signals, are analyzed and recognized using BCI. However, certain brain signals may contain redundant information, making classification ineffective; relevant characteristics are therefore essential for enhancing classification performance. Thus, feature selection has been employed to eliminate redundant data before classification, reducing computation time. BCI Competition III dataset IVa was used to investigate the efficacy of the proposed system. A Smart Bagged Tree-based Classifier (SBT-RF) technique is presented to determine the importance of the features for selecting and classifying the data. As a result, SBT-RF improves the mean accuracy on the dataset, decreases computation cost and training time, and increases prediction speed. Furthermore, fewer features mean fewer electrodes, lowering the risk of harm to the brain. The proposed algorithm achieves the greatest average accuracy, ~98%, compared to other relevant algorithms in the literature. SBT-RF is compared to state-of-the-art algorithms on the following performance metrics: confusion matrix, ROC-AUC, F1-score, training time, prediction speed, and accuracy.
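    A minimal sketch of the two-stage idea the abstract describes, ranking features with a random forest and then training a bagged tree classifier on the top-ranked ones, using scikit-learn; the synthetic data, feature count, and hyperparameters are placeholders, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for extracted EEG features (BCI Competition III
# dataset IVa is not bundled with scikit-learn).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 100))
y = (X[:, :5].sum(axis=1) > 0).astype(int)  # only 5 informative features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: random-forest importances rank the features.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
top = np.argsort(rf.feature_importances_)[::-1][:10]  # keep the 10 best

# Stage 2: bagged decision trees on the reduced feature set.
sbt = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                        random_state=0).fit(X_tr[:, top], y_tr)
print("held-out accuracy:", sbt.score(X_te[:, top], y_te))
```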

    (OBIFS) Isotropic image analysis for improving a predicting agent-based system

    In this interdisciplinary study a novel hybrid forecasting system is presented, in which an isotropic buffer operator is applied for case-base creation within the structure of an organization-based multi-agent system. Commonly used as an image analysis technique by commercial Geographic Information Systems (GIS), the buffer operator in this particular system calculates the area of a forest fire for prediction and visualization tasks. Its use improves the quality of the data handled by the system and, in consequence, the quality of the results obtained. The system has been successfully tested on real historical data on forest fire evolution, generating accurate predictions.
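    A minimal sketch of the buffer operation itself, using the shapely geometry library; the fire perimeter and buffer distance are invented for illustration.

```python
from shapely.geometry import Polygon

# Hypothetical fire perimeter as a polygon in metric coordinates (m).
fire = Polygon([(0, 0), (400, 0), (500, 300), (100, 350)])

# Isotropic buffer: expand the perimeter uniformly in all directions,
# e.g. to estimate the area the fire could reach in the next interval.
buffered = fire.buffer(100.0)

print("current area (m^2): ", round(fire.area))
print("buffered area (m^2):", round(buffered.area))
```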

    Real-time implementation of a sensor validation scheme for a heavy-duty diesel engine

    Under ultra-low exhaust emissions standards, heavy-duty diesel engines (HDDEs) depend on a myriad of sensors to optimize power output and exhaust emissions. Apart from acquiring and processing sensor signals, engine control modules should also be able to report and compensate for sensors that have failed. The global objective of this research was to develop strategies that enable HDDEs to maintain nominal in-use performance during periods of sensor failure. Specifically, the work explored the creation of a sensor validation scheme to detect, isolate, and accommodate sensor failures in HDDEs. The scheme not only offers onboard diagnostic (OBD) capabilities, but also control of engine performance in the event of sensor failures. The scheme, known as Sensor Failure Detection, Isolation and Accommodation (SFDIA), depends on mathematical models for its functionality; neural approximators served as the modeling tool, featuring online adaptive capabilities. The significance of the SFDIA is that it can enhance an engine management system's (EMS) capability to control performance under any operating conditions when sensors fail, since the scheme updates its models during the lifetime of an engine under real-world, in-use conditions. The central hypothesis of the work was that the SFDIA scheme would allow continuous normal operation of HDDEs under conditions of sensor failure. The SFDIA was evaluated using the boost pressure, coolant temperature, and fuel pressure sensors. The test engine was a 2004 Mack MP7-355E (11 L, 355 hp), and experimental work was conducted at the Engine and Emissions Research Laboratory (EERL) at West Virginia University (WVU). The failure modes modeled were abrupt, long-term drift, and intermittent failures. During the accommodation phase, the SFDIA restored engine power to within 0.64% of nominal, and oxides of nitrogen (NOx) emissions were maintained within 1.41% of nominal.
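    A minimal sketch of the detection-and-accommodation step in an SFDIA-style scheme, assuming some model (e.g. an online neural approximator) supplies an estimate of what a healthy sensor should read; the residual threshold and the boost-pressure values are illustrative, not the study's calibration.

```python
def sfdia_step(measured: float, estimated: float,
               threshold: float) -> tuple[float, bool]:
    """One validation step for a single sensor.

    If the residual between the measured value and the model estimate
    exceeds the threshold, the sensor is flagged as failed and the
    estimate is substituted so the engine controller keeps operating
    on a nominal value (accommodation).
    """
    residual = abs(measured - estimated)
    failed = residual > threshold
    return (estimated if failed else measured), failed

# Boost-pressure example (kPa):
value, failed = sfdia_step(measured=182.0, estimated=180.5, threshold=10.0)
print(value, failed)   # 182.0 False -> sensor trusted
value, failed = sfdia_step(measured=0.0, estimated=180.5, threshold=10.0)
print(value, failed)   # 180.5 True  -> abrupt failure accommodated
```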

    Organization based multiagent architecture for distributed environments

    Distributed environments represent a complex field in which applied solutions should be flexible and include significant adaptation capabilities. These environments are related to problems where multiple users and devices may interact, and where simple, local solutions may produce good results yet fall short in terms of usability and interaction. Many techniques can be employed to face this kind of problem, from CORBA to multi-agent systems, passing through web services and SOA, among others. The advantages and disadvantages of those methodologies are analyzed in this document to motivate the new architecture presented as a solution for distributed environment problems. The new architecture is called OBaMADE: Organization Based Multiagent Architecture for Distributed Environments. It is a multiagent architecture based on the agent-organization paradigm, where the agents in the architecture are structured into organizations to improve their organizational capabilities. The reasoning power of the architecture is based on the Case-Based Reasoning methodology, implemented in an internal organization that uses agents to create services that answer external requests made by the users. The OBaMADE architecture has been successfully applied to two different case studies where its prediction capabilities have been properly verified. Those case studies have shown promising results and, being complex systems, have demonstrated the abstraction and generalization capabilities of the architecture. Nevertheless, OBaMADE is intended to solve many other kinds of problems in distributed-environment scenarios. It should be applied to a wider variety of situations and to other knowledge fields to fully develop its potential.
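    A hedged sketch of the organization-of-agents idea at the core of OBaMADE: agents grouped into organizations, with an internal organization exposing the CBR reasoning as a service; the class, role, and service names are illustrative, not the thesis implementation.

```python
from typing import Callable, Dict, List

class Agent:
    def __init__(self, name: str, role: str):
        self.name, self.role = name, role

class Organization:
    """Group of agents offering named services, mirroring how OBaMADE
    structures its agents into organizations to improve coordination."""

    def __init__(self, name: str):
        self.name = name
        self.agents: List[Agent] = []
        self.services: Dict[str, Callable[[dict], dict]] = {}

    def offer(self, service: str, handler: Callable[[dict], dict]) -> None:
        self.services[service] = handler

def cbr_predict(request: dict) -> dict:
    # Stand-in for the internal CBR organization's prediction service,
    # which would retrieve and adapt past cases to answer the request.
    return {"prediction": "placeholder", "query": request}

reasoning = Organization("cbr-reasoning")
reasoning.agents += [Agent("retriever-1", "retrieve"),
                     Agent("adapter-1", "reuse")]
reasoning.offer("predict", cbr_predict)
print(reasoning.services["predict"]({"case": {"temperature": 18.0}}))
```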