17,348 research outputs found

    Event-Cloud Platform to Support Decision-Making in Emergency Management

    Full text link
    The aim of this paper is to highlight the capability of an Event-Cloud Platform to efficiently support an emergency situation. We chose to focus on a nuclear crisis use case. The proposed approach consists in modeling the business processes of crisis response on the one hand, and in supporting the orchestration and execution of these processes with an Event-Cloud Platform on the other hand. This paper shows how the use of Event-Cloud techniques can support crisis-management stakeholders by automating non-value-added tasks and by directing decision-makers to what really requires their capabilities of choice. Although Event-Cloud technology is an interesting and topical subject, very few research works have considered it to improve emergency management. This paper tries to fill this gap by considering and applying these technologies on a nuclear crisis use case.

    Redundancy, Deduction Schemes, and Minimum-Size Bases for Association Rules

    Full text link
    Association rules are among the most widely employed data analysis methods in the field of Data Mining. An association rule is a form of partial implication between two sets of binary variables. In the most common approach, association rules are parameterized by a lower bound on their confidence, which is the empirical conditional probability of their consequent given the antecedent, and/or by some other parameter bounds such as "support" or deviation from independence. We study here notions of redundancy among association rules from a fundamental perspective. We see each transaction in a dataset as an interpretation (or model) in the propositional logic sense, and consider existing notions of redundancy, that is, of logical entailment, among association rules, of the form "any dataset in which this first rule holds must also obey that second rule, therefore the second is redundant". We discuss several existing alternative definitions of redundancy between association rules and provide new characterizations and relationships among them. We show that the main alternatives we discuss actually correspond to just two variants, which differ in the treatment of full-confidence implications. For each of these two notions of redundancy, we provide a sound and complete deduction calculus, and we show how to construct complete bases (that is, axiomatizations) of absolutely minimum size in terms of the number of rules. Finally, we explore an approach to redundancy with respect to several association rules, and fully characterize its simplest case of two partial premises. Comment: LMCS accepted paper.
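
    As a small, hypothetical illustration of the parameters mentioned above, the following Python sketch computes the support of a rule X -> Y (the empirical probability of antecedent and consequent occurring together) and its confidence (the empirical conditional probability of the consequent given the antecedent) over a toy transaction dataset; the transactions and the rule are made up and not taken from the paper.

        # Minimal sketch: support and confidence of an association rule X -> Y
        # over a list of transactions (each transaction is a set of items).
        # Transactions and the rule below are illustrative, not from the paper.

        transactions = [
            {"a", "b", "c"},
            {"a", "b"},
            {"a", "c"},
            {"b", "c"},
            {"a", "b", "c"},
        ]

        X = {"a"}          # antecedent
        Y = {"b"}          # consequent

        n = len(transactions)
        count_X = sum(1 for t in transactions if X <= t)
        count_XY = sum(1 for t in transactions if (X | Y) <= t)

        support = count_XY / n              # empirical probability of X and Y together
        confidence = count_XY / count_X     # empirical conditional probability of Y given X

        print(f"support = {support:.2f}, confidence = {confidence:.2f}")

    A confidence lower bound then simply filters out rules whose ratio falls below the chosen threshold.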

    Minimal Decision Rules Based on the A Priori Algorithm

    Full text link
    Based on rough set theory, many algorithms for rule extraction from data have been proposed. Decision rules can be obtained directly from a database. Some condition values may be unnecessary in a decision rule produced directly from the database. Such values can then be eliminated to create a more comprehensible (minimal) rule. Most of the algorithms that have been proposed to calculate minimal rules are based on rough set theory or machine learning. In our approach, in a post-processing stage, we apply the Apriori algorithm to reduce the decision rules obtained through rough sets. The set of dependencies thus obtained will help us discover irrelevant attribute values.
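
    The following Python sketch illustrates only the general idea of pruning unnecessary condition values from a decision rule: a condition is dropped when the shortened rule still matches only records with the same decision. It is a minimal sketch under that assumption, not the authors' Apriori-based post-processing, and the toy decision table is hypothetical.

        # Minimal sketch of condition-value pruning for a decision rule.
        # A condition is dropped if the shortened rule still matches only
        # records with the same decision in the (toy, hypothetical) table.

        table = [
            {"outlook": "sunny", "wind": "weak",   "play": "no"},
            {"outlook": "sunny", "wind": "strong", "play": "no"},
            {"outlook": "rain",  "wind": "weak",   "play": "yes"},
            {"outlook": "rain",  "wind": "strong", "play": "no"},
        ]

        def consistent(conditions, decision):
            """True if every record matching `conditions` has the given decision."""
            matches = [r for r in table if all(r[a] == v for a, v in conditions.items())]
            return bool(matches) and all(r["play"] == decision for r in matches)

        def minimize(conditions, decision):
            """Greedily drop condition values that are not needed for consistency."""
            reduced = dict(conditions)
            for attr in list(reduced):
                trial = {a: v for a, v in reduced.items() if a != attr}
                if trial and consistent(trial, decision):
                    reduced = trial
            return reduced

        rule = {"outlook": "sunny", "wind": "weak"}
        print(minimize(rule, "no"))   # {'outlook': 'sunny'} -- wind is irrelevant here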

    Coordinating negotiations in data-intensive collaborative working environments using an agent-based model-driven platform

    Get PDF
    This paper tackles the interoperability problems of enterprise information systems by presenting a distributive model-driven platform for parallel coordination of multiple negotiations in data-intensive collaborative working environments. The proposed model was validated and verified by an industrial application scenario within the European research project H2020 C2NET (Cloud Collaborative Manufacturing Networks). This real scenario developed data-intensive, collaborative and cloud-enabled tools that allow the optimisation of the supply network of manufacturing SMEs, proposing a negotiation solution based on a model-driven interoperable decentralised architecture.

    Mining complete, precise and simple process models

    Get PDF
    Process discovery algorithms are generally used to discover the underlying process that has been followed to achieve an objective. In general, these algorithms do not take any domain knowledge into account to derive process models, which allows them to be applied in a general manner. However, depending on the selected approach, different kinds of process models can be discovered, as each technique has its strengths and weaknesses, e.g., the expressiveness of the notation used. Hence, it is important to take the requirements of the domain into account when deciding which algorithm to use, as the correct assumptions can lead to richer process models. For instance, among the different domains of application of process mining we can identify several fields that share an interesting requirement on the discovered process models. In security audits, discovered processes have to fulfill strict requisites. This means that the process model should reproduce as much behavior as possible; otherwise some violations may go undetected (replay fitness). On the other hand, in order to avoid false positives, process models should reproduce only the recorded behavior (precision). Finally, process models should be easily readable to better detect deviations (simplicity). Another clear example concerns the educational domain: in order to be of value for both teachers and learners, a discovered learning process should satisfy the aforementioned requirements. That is, to guarantee feasible and correct evaluations, teachers need access to all the activities performed by learners, so the learning process should be able to reproduce as much behavior as possible (replay fitness). Furthermore, the learning process should focus on the recorded behavior seen in the event log (precision), i.e., show only what the students did, and not what they might have done, while being easily interpretable by the teachers (simplicity).

    One of the previous requirements is related to the readability of process models: simplicity. In process mining, one of the identified challenges is the appropriate visualization of process models, i.e., presenting the results of process discovery in such a way that people actually gain insights about the process. Process models that are unnecessarily complex can obscure the real behavior of the process rather than provide an intuition of what is really happening in an organization. However, achieving a good level of readability is not always straightforward, for instance due to the representation used. Among the different approaches aimed at reducing the complexity of a process model, this PhD Thesis focuses on two techniques: on the one hand, improving the readability of an already discovered process model through the inclusion of duplicate labels; on the other hand, the hierarchization of a process model, i.e., providing a well-known structure to the process model. The latter technique, however, requires taking domain knowledge into account, as different domains may impose different requirements when improving the readability of the process model. In other words, in order to improve the interpretability and understandability of a process model, the hierarchization has to be driven by the domain.

    To sum up, we can identify two main topics of interest in this PhD Thesis. On the one hand, we are interested in retrieving process models that reproduce as much of the behavior recorded in the log as possible, without introducing unseen behavior. On the other hand, we try to reduce the complexity of the mined models in order to improve their readability. Hence, the aim of this PhD Thesis is to discover process models considering replay fitness, precision and simplicity, while paying special attention to retrieving highly interpretable process models.
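
    As a rough, hypothetical illustration of how the replay fitness and precision requirements above pull in opposite directions, the following Python sketch scores a candidate model at the trace level: fitness as the share of logged traces the model can replay, and precision as the share of modeled traces actually observed. Real process-mining metrics are alignment-based and more nuanced; the traces and the model below are made up for the example.

        # Toy illustration of the replay-fitness / precision trade-off at trace level.
        # Real process-mining metrics are alignment-based; this simplification and
        # the traces below are hypothetical.

        log_traces = [
            ("register", "check", "approve"),
            ("register", "check", "reject"),
            ("register", "check", "approve"),
        ]

        # Language of a candidate model, enumerated as the set of traces it allows.
        model_traces = {
            ("register", "check", "approve"),
            ("register", "check", "reject"),
            ("register", "approve"),          # behavior never seen in the log
        }

        replayable = sum(1 for t in log_traces if t in model_traces)
        fitness = replayable / len(log_traces)          # share of observed behavior reproduced

        observed = set(log_traces)
        precision = len(model_traces & observed) / len(model_traces)  # share of modeled behavior observed

        print(f"fitness = {fitness:.2f}, precision = {precision:.2f}")

    A model that allows far more traces than were recorded keeps fitness high but drives precision down, which is exactly the tension the thesis balances together with simplicity.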

    A hybrid algorithm for Bayesian network structure learning with application to multi-label learning

    Get PDF
    We present a novel hybrid algorithm for Bayesian network structure learning, called H2PC. It first reconstructs the skeleton of a Bayesian network and then performs a Bayesian-scoring greedy hill-climbing search to orient the edges. The algorithm is based on divide-and-conquer constraint-based subroutines to learn the local structure around a target variable. We conduct two series of experimental comparisons of H2PC against Max-Min Hill-Climbing (MMHC), which is currently the most powerful state-of-the-art algorithm for Bayesian network structure learning. First, we use eight well-known Bayesian network benchmarks with various data sizes to assess the quality of the learned structure returned by the algorithms. Our extensive experiments show that H2PC outperforms MMHC in terms of goodness of fit to new data and quality of the network structure with respect to the true dependence structure of the data. Second, we investigate H2PC's ability to solve the multi-label learning problem. We provide theoretical results to characterize and identify graphically the so-called minimal label powersets that appear as irreducible factors in the joint distribution under the faithfulness condition. The multi-label learning problem is then decomposed into a series of multi-class classification problems, where each multi-class variable encodes a label powerset. H2PC is shown to compare favorably to MMHC in terms of global classification accuracy over ten multi-label data sets covering different application domains. Overall, our experiments support the conclusion that local structural learning with H2PC in the form of local neighborhood induction is a theoretically well-motivated and empirically effective learning framework that is well suited to multi-label learning. The source code (in R) of H2PC as well as all data sets used for the empirical tests are publicly available. Comment: arXiv admin note: text overlap with arXiv:1101.5184 by other authors.
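
    As a hedged sketch of the label-powerset encoding mentioned above (not of H2PC itself), the following Python snippet turns a small, hypothetical multi-label matrix into a single multi-class target, where each distinct combination of labels becomes one class; the identification of minimal label powersets described in the abstract is not reproduced here.

        # Minimal sketch of the label-powerset encoding: each distinct combination
        # of labels becomes one class of a multi-class problem.  The tiny label
        # matrix is hypothetical; H2PC additionally partitions the labels into
        # minimal label powersets before this step.

        Y = [
            (1, 0, 1),   # labels L1 and L3
            (1, 0, 1),
            (0, 1, 0),   # label L2 only
            (1, 1, 0),
        ]

        # Map each observed label combination to a class id.
        powerset_to_class = {}
        classes = []
        for labels in Y:
            if labels not in powerset_to_class:
                powerset_to_class[labels] = len(powerset_to_class)
            classes.append(powerset_to_class[labels])

        print(powerset_to_class)   # {(1, 0, 1): 0, (0, 1, 0): 1, (1, 1, 0): 2}
        print(classes)             # [0, 0, 1, 2] -- targets for a standard multi-class learner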

    Spark solutions for discovering fuzzy association rules in Big Data

    Get PDF
    The research reported in this paper was partially supported by the COPKIT project from the 8th Framework Programme (H2020) research and innovation programme (grant agreement No 786687) and by the BIGDATAMED projects with references B-TIC-145-UGR18 and P18-RT-2947. The high computational cost of mining fuzzy association rules grows significantly when managing very large data sets, in many cases triggering a memory overflow error and causing the experiment to fail before its conclusion. It is in these cases that the application of Big Data techniques can help to complete the experiment. Therefore, in this paper several Spark algorithms are proposed to handle massive fuzzy data and discover interesting association rules. For that, we rely on a decomposition of interestingness measures in terms of α-cuts, and we experimentally demonstrate that it is sufficient to consider only 10 equidistributed α-cuts in order to mine all significant fuzzy association rules. Additionally, all the proposals are compared and analysed in terms of efficiency and speed-up on several datasets, including a real dataset comprised of sensor measurements from an office building.
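
    The α-cut decomposition mentioned above can be illustrated with a small, self-contained Python example: the fuzzy support of a fuzzy item is approximated by averaging the crisp supports obtained at K equidistributed α-cuts (here K = 10, the number the abstract reports as sufficient). The membership values are made up, and the paper's actual Spark algorithms, which would parallelize the per-cut counting, are not reproduced.

        # Hedged sketch of the alpha-cut decomposition: the fuzzy support of a
        # fuzzy item (the mean of its membership degrees) is approximated by
        # averaging crisp supports over K equidistributed alpha-cuts.
        # Membership values below are illustrative only.

        memberships = [0.9, 0.3, 0.7, 0.05, 1.0, 0.45]   # membership degree of each record
        n = len(memberships)

        exact_fuzzy_support = sum(memberships) / n

        K = 10
        alphas = [(k + 0.5) / K for k in range(K)]        # 10 equidistributed cut levels
        approx = sum(
            sum(1 for m in memberships if m >= a) / n     # crisp support at level alpha
            for a in alphas
        ) / K

        print(f"exact = {exact_fuzzy_support:.3f}, alpha-cut approximation = {approx:.3f}")

    Because each α-cut only requires crisp counting, the per-cut supports can be computed independently and in parallel, which is what makes the decomposition attractive for a Spark implementation.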