26 research outputs found

    Multiset-Based Assessment of Resilience of Sociotechnological Systems to Natural Hazards

    Get PDF
    The chapter describes a multiset-based approach to assessing the resilience/vulnerability of distributed sociotechnological systems (DSTS) to natural hazards (NH). DSTS contain highly interconnected and intersecting consuming and producing segments, together with a resource base (RB) that provides for their existence and operation. NH impacts may destroy local elements of these segments, as well as parts of the RB, initiating multiple chain effects that lead to negative consequences far from the local NH strikes. To assess DSTS resilience to such impacts, a multigrammatical representation of DSTS is used. A criterion of DSTS sustainability under NH is proposed, generalizing a similar criterion known for industrial (producing) systems. The application of this criterion to critical infrastructures is considered, as well as the solution of the reverse problem of determining which subsystems of a DSTS may remain functional after an NH impact.
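
    The core idea lends itself to a small illustration. Below is a minimal sketch, assuming a toy resource model (the dictionary encoding, names, and the sustainability check are illustrative assumptions, not the chapter's multigrammatical formalism): each segment's demand and the resource base are multisets, and the system is sustainable if the surviving resource base still covers the aggregate demand of the surviving segments.

    ```python
    from collections import Counter

    def sustainable(segments, resource_base, destroyed):
        """Illustrative criterion: can the surviving resource base still supply
        every segment that survives the hazard impact?"""
        surviving = {name: need for name, need in segments.items()
                     if name not in destroyed["segments"]}
        base = Counter(resource_base)
        base.subtract(Counter(destroyed["resources"]))   # remove destroyed resources
        demand = Counter()
        for need in surviving.values():
            demand.update(need)
        return all(base[r] >= q for r, q in demand.items())

    segments = {"hospital": {"power": 2, "water": 1}, "factory": {"power": 3}}
    base = {"power": 6, "water": 2}
    hit = {"segments": set(), "resources": {"power": 1}}   # hazard destroys one power unit
    print(sustainable(segments, base, hit))   # True: 5 power and 2 water remain for a demand of 5 and 1
    ```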

    Multi-Agent Implementation of Filtering Multiset Grammars

    Get PDF
    The chapter is dedicated to the application of multi-agent technology to the generation of sets of terminal multisets (TMS) defined by filtering multiset grammars (FMG). The proposed approach is based on creating a multi-agent system (MAS) corresponding to a specific FMG in such a way that every rule of the FMG is represented by an independently acting agent. Such a MAS provides highly parallel generation of TMS and may be used effectively in any suitable hardware environment. Directions for further development of the proposed approach are discussed.
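
    A minimal sequential sketch of the generation idea follows (the rule encoding, filter, and function names are assumptions for illustration; in the chapter's multi-agent implementation each rule would act as an independent, concurrently running agent): rules rewrite multisets, and a multiset to which no rule applies is terminal and is kept only if it passes the filter.

    ```python
    from collections import Counter

    def apply_rule(ms, lhs, rhs):
        """Apply a multiset rule lhs -> rhs if lhs is contained in ms."""
        if all(ms[obj] >= n for obj, n in lhs.items()):
            out = Counter(ms)
            out.subtract(lhs)
            out.update(rhs)
            return +out                      # unary + drops zero counts
        return None

    def generate(start, rules, passes_filter, limit=1000):
        """Generate terminal multisets reachable from start that satisfy the filter."""
        frontier, terminal = [Counter(start)], set()
        while frontier and limit > 0:
            ms = frontier.pop()
            limit -= 1
            successors = [nxt for nxt in (apply_rule(ms, lhs, rhs) for lhs, rhs in rules)
                          if nxt is not None]
            if not successors:               # no rule applies: ms is a terminal multiset
                if passes_filter(ms):
                    terminal.add(frozenset(ms.items()))
            else:
                frontier.extend(successors)
        return terminal

    # toy grammar: one 'a' rewrites to two 'b'; keep only TMS with at most 4 copies of 'b'
    rules = [(Counter({"a": 1}), Counter({"b": 2}))]
    print(generate({"a": 2}, rules, lambda m: m["b"] <= 4))   # {frozenset({('b', 4)})}
    ```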

    Motion Tracking and Potentially Dangerous Situations Recognition in Complex Environment

    Get PDF
    In recent years, video surveillance systems have played an increasingly important role in human safety and security by monitoring public and private areas. In this chapter, we discuss the development of an intelligent surveillance system to detect, track, and identify potentially hazardous events that may occur at level crossings (LC). The system starts by detecting and tracking objects on the level crossing. A danger evaluation method is then built using a hidden Markov model to predict the trajectories of the detected objects. The trajectories are analyzed with a credibility model to evaluate dangerous situations at level crossings. Synthetic and real data are used to test the effectiveness and robustness of the proposed algorithms and of the whole approach across various scenarios and situations.
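
    As a small illustration of the prediction step, here is a minimal sketch of the forward algorithm for a two-state hidden Markov model (the states, observation alphabet, and probabilities below are illustrative assumptions; the chapter's actual model and parameters are not reproduced here).

    ```python
    import numpy as np

    # Illustrative two-state HMM: 0 = "safe behaviour", 1 = "dangerous behaviour"
    start = np.array([0.9, 0.1])               # initial state distribution
    trans = np.array([[0.8, 0.2],              # P(next state | current state)
                      [0.3, 0.7]])
    emit = np.array([[0.7, 0.2, 0.1],          # P(observation | state)
                     [0.1, 0.3, 0.6]])         # observations: 0=far, 1=near, 2=on the crossing

    def forward(obs):
        """Posterior over hidden states after observing a discretized trajectory."""
        alpha = start * emit[:, obs[0]]
        for o in obs[1:]:
            alpha = (alpha @ trans) * emit[:, o]
        return alpha / alpha.sum()

    trajectory = [0, 1, 1, 2, 2]               # object drifting onto the level crossing
    print("P(dangerous) =", round(forward(trajectory)[1], 3))
    ```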

    Randomized protocols for asynchronous consensus

    Full text link
    The famous Fischer, Lynch, and Paterson impossibility proof shows that it is impossible to solve the consensus problem in a natural model of an asynchronous distributed system if even a single process can fail. Since its publication, two decades of work on fault-tolerant asynchronous consensus algorithms have evaded this impossibility result by using extended models that provide (a) randomization, (b) additional timing assumptions, (c) failure detectors, or (d) stronger synchronization mechanisms than are available in the basic model. Concentrating on the first of these approaches, we illustrate the history and structure of randomized asynchronous consensus protocols by giving detailed descriptions of several such protocols. Comment: 29 pages; survey paper written for the PODC 20th anniversary issue of Distributed Computing.
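
    The best-known instance of approach (a) is Ben-Or-style randomized binary consensus. The sketch below shows only the per-round structure for one process; the broadcast and collect message-passing primitives, and the parameters n (processes) and f (tolerated faults), are simplified assumptions rather than a faithful asynchronous implementation.

    ```python
    import random

    def ben_or_round(rnd, preference, n, f, broadcast, collect):
        """One round of Ben-Or style binary consensus for a single process.
        Returns (new_preference, decided_value_or_None)."""
        # Phase 1: report the current preference and wait for n - f reports
        broadcast(("report", rnd, preference))
        reports = collect("report", rnd, n - f)
        proposal = next((v for v in (0, 1) if reports.count(v) > n // 2), None)

        # Phase 2: propose the majority value (or "no proposal") and wait for n - f proposals
        broadcast(("proposal", rnd, proposal))
        proposals = [p for p in collect("proposal", rnd, n - f) if p is not None]
        if proposals:
            v = proposals[0]                     # all real proposals carry the same value
            if proposals.count(v) >= f + 1:
                return v, v                      # decide v
            return v, None                       # adopt v and continue to the next round
        return random.randint(0, 1), None        # no proposal seen: flip a coin and retry
    ```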

    Fusion of the Power from Citations: Enhance your Influence by Integrating Information from References

    Full text link
    Influence prediction plays a crucial role in the academic community. The extent of scholars' influence determines whether their work will be accepted by others. Most existing research focuses on predicting a paper's citation count after a period of time or on identifying the most influential papers among massive candidates, without considering an individual paper's negative or positive impact on its authors. This study therefore formulates the prediction problem as identifying whether a paper can increase its authors' influence, which can provide feedback to the authors before they publish. First, we present the self-adapted ACC (Average Annual Citation Counts) metric to measure authors' impact yearly, based on their annual published papers, paper citation counts, and contribution to each paper. Then, we propose the RD-GAT (Reference-Depth Graph Attention Network) model to integrate heterogeneous graph information from references at different depths by assigning attention coefficients to them. Experiments on the AMiner dataset demonstrate that the proposed ACC metric represents authors' influence effectively, and that the RD-GAT model is more efficient on the academic citation network and more robust against overfitting than the baseline models. By applying the framework in this work, scholars can identify whether their papers will improve their influence in the future.
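
    A minimal sketch of the yearly ACC idea follows (the abstract does not specify how per-paper contributions are weighted, so the equal-split credit used below is an assumption; only the ACC name and intent come from the paper).

    ```python
    def acc(papers_by_year):
        """Average annual citation counts for one author.
        papers_by_year: {year: [(citation_count, n_authors), ...]}.
        Assumption: each paper's citations are split equally among its authors."""
        result = {}
        for year, papers in papers_by_year.items():
            if papers:
                credited = sum(cites / authors for cites, authors in papers)
                result[year] = credited / len(papers)    # average over that year's papers
        return result

    print(acc({2021: [(30, 3), (10, 2)], 2022: [(8, 1)]}))   # {2021: 7.5, 2022: 8.0}
    ```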

    A Graphical Approach to Prove the Semantic Preservation of UML/OCL Refactoring Rules

    Get PDF
    Refactoring is a powerful technique for improving the quality of software models, including implementation code. The software developer successively applies so-called refactoring rules to the current software model, transforming it into a new model. Ideally, the application of a refactoring rule preserves the semantics of the model to which it is applied. In this paper, we present a simple criterion and a proof technique for the semantic preservation of refactoring rules defined for UML class diagrams and OCL constraints. Our approach is based on a novel formalization of the OCL semantics in the form of graph transformation rules. We illustrate our approach using the refactoring rule MoveAttribute.
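
    To make the running example concrete, here is a small sketch of MoveAttribute on a toy class-model encoding (the dictionary representation and the single precondition checked below are illustrative assumptions; the paper works on UML class diagrams, OCL constraints, and graph transformation rules).

    ```python
    def move_attribute(model, attr, source, target):
        """Move attribute attr from class source to class target, provided each
        source object is linked to exactly one target object (illustrative
        precondition for semantics preservation)."""
        if (source, target, "1") not in model["associations"]:
            raise ValueError("precondition violated: no 1-multiplicity link from source to target")
        model["classes"][source]["attributes"].remove(attr)
        model["classes"][target]["attributes"].append(attr)
        return model

    model = {
        "classes": {"Employee": {"attributes": ["name", "officeNo"]},
                    "Office":   {"attributes": ["building"]}},
        "associations": [("Employee", "Office", "1")],   # every Employee has exactly one Office
    }
    move_attribute(model, "officeNo", "Employee", "Office")
    print(model["classes"]["Office"]["attributes"])      # ['building', 'officeNo']
    ```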

    Quantifying Success in Science: An Overview

    Get PDF
    Quantifying success in science plays a key role in guiding funding allocation, recruitment decisions, and rewards. Recently, significant progress has been made towards quantifying success in science, yet the lack of a detailed analysis and summary of this work remains a practical issue. The literature reports the factors influencing scholarly impact, as well as evaluation methods and indices aimed at overcoming this weakness. We focus on categorizing and reviewing current developments in evaluation indices of scholarly impact, including paper impact, scholar impact, and journal impact. In addition, we summarize the issues with existing evaluation methods and indices, investigate the open issues and challenges, and suggest possible solutions, including the pattern of collaboration impact, unified evaluation standards, implicit success factor mining, dynamic academic network embedding, and scholarly impact inflation. This paper should help researchers obtain a broader understanding of quantifying success in science and identify potential research directions.
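
    As one concrete example of the scholar-level indices such surveys categorize, here is a minimal h-index computation (the h-index itself is a standard measure; its use here is purely illustrative and is not claimed to be the paper's recommended index).

    ```python
    def h_index(citations):
        """h-index: the largest h such that the author has h papers with >= h citations each."""
        ranked = sorted(citations, reverse=True)
        return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

    print(h_index([10, 8, 5, 4, 3]))   # 4: four papers each have at least 4 citations
    ```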