
    06031 Abstracts Collection -- Organic Computing -- Controlled Emergence

    Organic Computing has emerged recently as a challenging vision for future information processing systems, based on the insight that we will soon be surrounded by large collections of autonomous systems equipped with sensors and actuators to be aware of their environment, to communicate freely, and to organize themselves in order to perform the actions and services required. Organic Computing systems will adapt dynamically to the current conditions of their environment; they will be self-organizing, self-configuring, self-healing, self-protecting, self-explaining, and context-aware. From 15.01.06 to 20.01.06, the Dagstuhl Seminar 06031 "Organic Computing -- Controlled Emergence" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. The seminar was characterized by a very constructive search for common ground between engineering and the natural sciences, between informatics on the one hand and biology, neuroscience, and chemistry on the other. The common denominator was the objective to build practically usable self-organizing and emergent systems or their components. An indicator of the practical orientation of the seminar was the large number of OC application systems, envisioned or already under implementation, such as the Internet, robotics, wireless sensor networks, traffic control, computer vision, organic systems on chip, an adaptive and self-organizing room with intelligent sensors, or reconfigurable guiding systems for smart office buildings.
The application orientation was also apparent from the large number of methods and tools presented during the seminar that might serve as building blocks for OC systems, such as an evolutionary design methodology; OC architectures, especially several implementations of observer/controller structures; measures and measurement tools for emergence and complexity; assertion-based methods to control self-organization; wrappings, a software methodology for building reflective systems; and components for OC middleware. Organic Computing is clearly oriented towards applications but is augmented at the same time by more theoretical bio-inspired and nature-inspired work, such as chemical computing, the theory of complex systems and non-linear dynamics, control mechanisms in insect swarms, homeostatic mechanisms in the brain, a quantitative approach to robustness, and abstraction and instantiation as a central metaphor for understanding complex systems. Compared to its beginnings, Organic Computing is coming of age. The OC vision is increasingly backed by meaningful applications and usable tools, but the path towards full OC systems is still complex. There is progress towards a more scientific understanding of emergent processes. In the future, we must understand more clearly how to open the configuration space of technical systems for on-line modification. Finally, we must make sure that the human user remains in full control while allowing the systems to optimize themselves.
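    The observer/controller structures mentioned above can be illustrated with a minimal sketch: an observer monitors an aggregate property of the system and a controller nudges the system back toward a target. All class and parameter names here are hypothetical, chosen for illustration; the seminar's actual architectures are far richer.

```python
import random

class Observer:
    """Monitors the system and reports an aggregate metric (here: mean load)."""
    def observe(self, system):
        return sum(system.loads) / len(system.loads)

class Controller:
    """Adjusts system parameters when the observed metric drifts off target."""
    def __init__(self, target, gain=0.5):
        self.target = target
        self.gain = gain

    def control(self, system, metric):
        # Nudge every node toward the target load (simple proportional control).
        error = self.target - metric
        system.loads = [load + self.gain * error for load in system.loads]

class System:
    """A toy self-organizing system: a handful of nodes with random loads."""
    def __init__(self, n=4):
        self.loads = [random.uniform(0.0, 1.0) for _ in range(n)]

system = System()
observer, controller = Observer(), Controller(target=0.5)
for _ in range(20):  # closed observer/controller loop
    controller.control(system, observer.observe(system))
print(round(observer.observe(system), 3))
```

    The point of the pattern is the separation of concerns: the observer only measures, the controller only acts, and the underlying system stays free to self-organize between control steps.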

    A model-based approach for automatic recovery from memory leaks in enterprise applications

    Large-scale distributed computing systems such as data centers are hosted on heterogeneous and networked servers that execute in a dynamic and uncertain operating environment, caused by factors such as time-varying user workload and various failures. Therefore, achieving stringent quality-of-service goals is a challenging task, requiring a comprehensive approach to performance control, fault diagnosis, and failure recovery. This work presents a model-based approach to fault management that integrates limited lookahead control (LLC), diagnosis, and fault-tolerance concepts to: (1) enable systems to adapt to environment variations, (2) maintain the availability and reliability of the system, and (3) facilitate system recovery from failures. This thesis focuses on memory leak errors. A characterization function is designed to detect memory leaks. An LLC scheme is then applied to enable the computing system to adapt efficiently to variations in the workload, recover from memory leaks, and maintain functionality.
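    A characterization function for memory leaks can be as simple as testing for a sustained upward trend in memory usage. The sketch below fits a least-squares line to recent memory samples and flags slopes above a threshold; the function name, threshold, and sampling scheme are assumptions for illustration, not the thesis's actual formulation.

```python
def leak_suspected(samples, slope_threshold=0.5):
    """Flag a possible memory leak from a window of memory-in-use readings.

    samples: memory usage (e.g., MB) sampled at regular intervals.
    Fits a least-squares trend line; a slope above the threshold suggests
    usage that grows without being reclaimed.
    """
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope > slope_threshold

print(leak_suspected([100, 103, 107, 112, 118, 125]))  # steadily growing usage
print(leak_suspected([100, 101, 99, 100, 102, 100]))   # stable usage
```

    In an LLC setting, such a detector would feed the lookahead controller, which then chooses a recovery action (e.g., restarting a leaking component) by predicting system behavior a few steps ahead.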

    Towards Self-Healing in Wireless Sensor Networks.

    Faults in WSNs are very common and appear at different levels of the system. For pervasive applications to be adopted by end-users, there is a need for autonomic self-healing. This paper discusses our initial approach to self-healing in WSNs and describes experiments with two case studies of body sensor deployment. We evaluate the impact of sensor faults on activity and gesture classification accuracy, respectively, and develop mechanisms that allow detection of those faults during system operation. © 2009 IEEE
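    One common body-sensor fault is a "stuck-at" channel that stops responding to movement. A minimal detector, assuming only that a healthy accelerometer channel shows some variance over a short window (the function name and threshold are illustrative, not from the paper):

```python
def stuck_sensor(window, eps=1e-3):
    """Detect a 'stuck-at' fault: readings that barely change over a window.

    window: recent readings from one sensor channel.
    Near-zero variance suggests the channel is frozen at a constant value.
    """
    mean = sum(window) / len(window)
    variance = sum((x - mean) ** 2 for x in window) / len(window)
    return variance < eps

accel_ok = [0.12, 0.31, -0.05, 0.22, 0.18]    # normal accelerometer channel
accel_stuck = [0.50, 0.50, 0.50, 0.50, 0.50]  # channel frozen at one value
print(stuck_sensor(accel_ok), stuck_sensor(accel_stuck))
```

    A self-healing layer could use such a flag to exclude the faulty channel from the classifier's feature set, limiting the accuracy loss the paper measures.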

    Sensor-based autonomous pipeline monitoring robotic system

    The field of robotics applications continues to advance. This dissertation addresses the computational challenges of robotic applications and the translation of actions using sensors. One of the most challenging fields for robotics applications is pipeline-based applications, which have become an indispensable part of life. Proactive monitoring and frequent inspections are critical to maintaining pipeline health. However, these tasks are highly expensive using traditional maintenance systems, given that pipeline systems are often deployed in inaccessible and hazardous environments. Thus, we propose a novel cost-effective, scalable, customizable, and autonomous sensor-based robotic system, called the SPRAM System (Sensor-based Autonomous Pipeline Monitoring Robotic System). It combines robot-agent-based technologies with sensing technologies for efficiently locating health-related events, and allows active and corrective monitoring and maintenance of the pipelines. The SPRAM System integrates RFID systems with mobile sensors and autonomous robots. While the mobile sensors' motion is driven by the fluid transported in the pipeline, the fixed sensors provide event and mobile-sensor location information and contribute efficiently to the study of the pipeline's health history. In addition, this permits good tracking of the mobile sensors. Using the output of event analysis, a robot agent receives commands from the controlling system and travels inside the pipelines for detailed inspection and repair of the reported incidents (e.g., damage, leakage, or corrosion). The key innovations of the proposed system are threefold: (a) the system applies to a large variety of pipeline systems; (b) the solution is cost-effective, since it uses low-cost, powerless fixed sensors that can be set up while the pipeline system is operating; and (c) the robot is autonomous, and the localization technique keeps errors controllable.
In this dissertation, simulation experiments are described along with prototyping activities that demonstrate the feasibility of the proposed system.
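    The localization idea (fixed readers tracking a flow-carried mobile sensor) can be sketched by interpolating between the timestamps at which two fixed RFID readers detected the sensor. This is a simplified stand-in under an assumed constant-flow-speed model; the dissertation's actual technique and its error bounds are more involved.

```python
def locate(event_time, rfid_a, rfid_b):
    """Estimate a mobile sensor's position along a pipeline segment.

    rfid_a, rfid_b: (position_m, time_s) at which two fixed RFID readers
    detected the passing mobile sensor. Assumes roughly constant flow
    speed between the readers.
    """
    (pos_a, t_a), (pos_b, t_b) = rfid_a, rfid_b
    speed = (pos_b - pos_a) / (t_b - t_a)      # m/s between the readers
    return pos_a + speed * (event_time - t_a)  # position at the event time

# Sensor passed reader A (0 m) at t=0 s and reader B (120 m) at t=60 s;
# it logged a leak signature at t=45 s, i.e., at roughly the 90 m mark.
print(locate(45.0, (0.0, 0.0), (120.0, 60.0)))
```

    The estimated position then tells the robot agent where to travel for detailed inspection.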

    Consensus-Based Data Management within Fog Computing For the Internet of Things

    The Internet of Things (IoT) infrastructure forms a gigantic network of interconnected and interacting devices. This infrastructure involves a new generation of service delivery models, more advanced data management and policy schemes, sophisticated data analytics tools, and effective decision making applications. IoT technology brings automation to a new level wherein nodes can communicate and make autonomous decisions in the absence of human intervention. IoT-enabled solutions generate and process enormous volumes of heterogeneous data exchanged among billions of nodes. This results in Big Data congestion, data management and storage issues, and various inefficiencies. Fog Computing aims to solve these data management issues, as it places intelligent computational components and storage closer to the data sources. Often, an IoT-enabled infrastructure is shared among many users with various requirements. Sharing resources, sharing operational costs, and collective decision making (consensus) among many stakeholders are frequently neglected. This research addresses an essential requirement for adaptive, autonomous, and consensus-based Fog computational solutions that can support distributed, in-network schemes and policies. These network schemes and policies need to meet the requirements of many users. In this work, innovative consensus-based computational solutions are investigated. The proposed solutions aim to correlate and organise data for effective management and decision making in the Fog. Instead of individual decision making, the algorithms aggregate several decisions into a consensus decision representing a collective agreement, benefiting from individuals' varied knowledge and meeting multiple stakeholders' requirements. To validate the proposed solutions, a hybrid research methodology is employed that includes the design of a test-bed and the execution of several experiments.
In order to investigate the effectiveness of the paradigm, three experiments were designed and validated. Real-life sensor data and synthetic statistical data were collected, processed, and analysed. Bayesian Machine Learning models and analytics were used to consolidate the design and evaluate the performance of the algorithms. In the Fog environment, the first scenario tests the Aggregation by Distribution algorithm. The solution achieves notable data-delivery efficiency with minimal loss in precision. The second scenario validates the merits of the approach in predicting the activities of high-mobility IoT applications. The third scenario tests applications related to the smart home IoT. All proposed consensus algorithms use statistical analysis to support effective decision making in the Fog and enable data aggregation for optimal storage, data transmission, processing, and analytics. The final results of all experiments showed that the implemented consensus approaches surpass the individual ones on several performance measures. Formal results also showed that the paradigm is a good fit for many IoT environments and can suit different scenarios when applying data analysis to correlate data with the design. Finally, the design demonstrates that Fog Computing can compete with Cloud Computing in terms of accuracy, with the added preference of locality.
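    The core idea of aggregating individual node decisions into a consensus decision can be sketched as a weighted-majority vote. This is an illustrative simplification, not the thesis's Bayesian formulation; labels and weights below are made up.

```python
from collections import defaultdict

def consensus(decisions, weights=None):
    """Aggregate individual node decisions into one consensus decision.

    decisions: per-node labels; weights: optional per-node confidence.
    Returns the label with the highest total weight (weighted majority).
    """
    weights = weights or [1.0] * len(decisions)
    score = defaultdict(float)
    for label, weight in zip(decisions, weights):
        score[label] += weight
    return max(score, key=score.get)

votes = ["walking", "walking", "sitting", "walking", "sitting"]
print(consensus(votes))                                     # plain majority
print(consensus(votes, weights=[0.2, 0.2, 0.9, 0.1, 0.9]))  # confidence-weighted
```

    The weighted variant shows why consensus can beat individual decisions: two confident nodes can outvote three uncertain ones, exploiting the individuals' varied knowledge.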

    Self-* distributed query region covering in sensor networks

    Wireless distributed sensor networks are used to monitor a multitude of environments for both civil and military applications. Sensors may be deployed in unreachable or inhospitable areas, so they cannot be replaced easily. However, due to various factors, sensors' internal memory, or the sensors themselves, can become corrupted. Hence, there is a need for more robust sensor networks. Sensors are most commonly densely deployed, but keeping all sensors continually active is not energy-efficient. Our aim is to select the minimum number of sensors that can entirely cover a particular monitored area while remaining strongly connected. This concept is called a Minimum Connected Cover of a query region in a sensor network. In this research, we have designed two fully distributed, robust, self-* solutions to the minimum connected cover of query regions that can cope with both transient faults and sensor crashes. We consider the most general case, in which every sensor has a different sensing and communication radius. We have also designed extended versions of the algorithms that use multi-hop information to obtain better results while preserving small atomicity (i.e., each sensor reads only one of its neighbors' variables at a time, instead of reading all neighbors' variables). With this, we have proven the self-* (self-configuration, self-stabilization, and self-healing) properties of our solutions, both analytically and experimentally. The simulation results show that our solutions provide better coverage performance than pre-existing self-stabilizing algorithms.

    Intelligent flight control systems

    The capabilities of flight control systems can be enhanced by designing them to emulate functions of natural intelligence. Intelligent control functions fall into three categories. Declarative actions involve decision making, providing models for system monitoring, goal planning, and system/scenario identification. Procedural actions concern skilled behavior and have parallels in guidance, navigation, and adaptation. Reflexive actions are spontaneous, inner-loop responses for control and estimation. Intelligent flight control systems learn knowledge of the aircraft and its mission and adapt to changes in the flight environment. Cognitive models form an efficient basis for integrating 'outer-loop/inner-loop' control functions and for developing robust parallel-processing algorithms.

    Emergency Response Information System Interoperability: Development of Chemical Incident Response Data Model

    Emergency response requires an efficient information supply chain for the smooth operation of intra- and inter-organizational emergency management processes. However, the breakdown of this information supply chain due to the lack of consistent data standards presents a significant problem. In this paper, we adopt a novel, theory-driven approach to develop an XML-based data model that prescribes a comprehensive set of data standards (semantics and internal structures) for emergency management, to better address the challenges of information interoperability. Actual documents currently used in mitigating chemical emergencies from a large number of incidents are used in the analysis stage. The data model development is guided by Activity Theory and is validated through an RFC-like process used in standards development. This paper applies the standards to the real case of a chemical incident scenario. Further, it complies with the leading national initiatives in emergency standards (National Information Exchange Model).
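    To make the idea of an XML-based incident data model concrete, the sketch below builds and serializes a tiny instance document. Every element name here is hypothetical, invented for illustration; the paper's actual schema (and its NIEM alignment) defines its own element semantics and internal structure.

```python
import xml.etree.ElementTree as ET

# Hypothetical chemical-incident instance document (illustrative names only).
incident = ET.Element("ChemicalIncident", id="demo-001")
ET.SubElement(incident, "Substance").text = "chlorine"
ET.SubElement(incident, "Location").text = "Rail yard, Track 7"
response = ET.SubElement(incident, "ResponseAction")
ET.SubElement(response, "Agency").text = "HazMat Unit 3"
ET.SubElement(response, "Action").text = "evacuate 800 m radius"

print(ET.tostring(incident, encoding="unicode"))
```

    The interoperability argument is that once every responding agency agrees on such shared element semantics, documents can flow across organizational boundaries without manual re-keying.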