2,729 research outputs found

    Adaptations of nitrogen metabolism to oxygen deprivation in plants


    Kinetic Anomalies in Addition-Aggregation Processes

    We investigate irreversible aggregation in which monomer-monomer, monomer-cluster, and cluster-cluster reactions occur with constant but distinct rates K_{MM}, K_{MC}, and K_{CC}, respectively. The dynamics depends crucially on the ratio gamma=K_{CC}/K_{MC} and secondarily on epsilon=K_{MM}/K_{MC}. For epsilon=0 and gamma<2, there is conventional scaling in the long-time limit, with a single mass scale that grows linearly in time. For gamma >= 2, there is unusual behavior in which the concentration c_k of clusters of mass k decays as a stretched exponential in time within a boundary layer k<k* propto t^{1-2/gamma} (k* propto ln t for gamma=2), while c_k propto t^{-2} in the bulk region k>k*. When epsilon>0, analogous behaviors emerge for gamma<2 and gamma >= 2.
    Comment: 6 pages, 2-column revtex4 format, for submission to J. Phys.
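The crossover described above can be probed numerically. The following is a minimal, illustrative sketch (not the authors' analysis) that Euler-integrates a truncated version of the Smoluchowski rate equations with the three constant rates; the truncation mass n_max, time step, and step count are arbitrary choices for illustration.

```python
import numpy as np

def aggregation_kinetics(k_mm, k_mc, k_cc, n_max=50, dt=2e-3, steps=1000):
    """Euler-integrate truncated Smoluchowski equations in which
    monomer-monomer, monomer-cluster, and cluster-cluster reactions
    occur with constant rates k_mm, k_mc, and k_cc.  c[k-1] holds the
    concentration of clusters of mass k; masses above n_max are dropped."""
    c = np.zeros(n_max)
    c[0] = 1.0  # start from pure monomers

    def rate(i, j):  # reaction rate for cluster masses i and j (1-based)
        if i == 1 and j == 1:
            return k_mm
        if i == 1 or j == 1:
            return k_mc
        return k_cc

    K = np.array([[rate(i, j) for j in range(1, n_max + 1)]
                  for i in range(1, n_max + 1)])
    for _ in range(steps):
        gain = np.zeros(n_max)
        for k in range(2, n_max + 1):  # gain: mergers with i + j = k
            i = np.arange(1, k)
            gain[k - 1] = 0.5 * (K[i - 1, k - i - 1]
                                 * c[i - 1] * c[k - i - 1]).sum()
        loss = c * (K @ c)             # loss: mergers consuming mass k
        c = c + dt * (gain - loss)
    return c
```

With all three rates equal this reduces to the classic constant-kernel case, where total mass is conserved up to the (negligible) leakage past the truncation mass.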

    Prefix Imputation of Orphan Events in Event Stream Processing

    In the context of process mining, event logs consist of process instances called cases. Conformance checking is a process mining task that inspects whether a log file is conformant with an existing process model; the inspection additionally quantifies the conformance in an explainable manner. Online conformance checking processes streaming event logs by maintaining precise insights into the running cases and timely mitigating non-conformance, if any. State-of-the-art online conformance checking approaches bound the memory by either delimiting storage of the events per case or limiting the number of cases to a specific window width. The former technique still requires unbounded memory as the number of cases to store is unlimited, while the latter technique forgets running, not yet concluded, cases to conform to the limited window width. Consequently, the processing system may later encounter events that represent some intermediate activity as per the process model and for which the relevant case has been forgotten; we refer to these as orphan events. The naïve approach to cope with an orphan event is to either neglect its relevant case for conformance checking or treat it as an altogether new case. However, this might result in misleading process insights, for instance, overestimated non-conformance. In order to bound memory yet effectively incorporate the orphan events into processing, we propose an imputation of missing-prefix approach for such orphan events. Our approach utilizes the existing process model for imputing the missing prefix. Furthermore, we leverage the case storage management to increase the accuracy of the prefix prediction. We propose a systematic forgetting mechanism that distinguishes and forgets the cases that can be reliably regenerated as a prefix upon receipt of their future orphan event. We evaluate the efficacy of our proposed approach through multiple experiments with synthetic and three real event logs while simulating a streaming setting. Our approach achieves considerably more realistic conformance statistics than the state of the art while requiring the same storage.
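As a toy illustration of the core idea (all names are hypothetical; the paper's actual approach additionally leverages case storage management), imputing a missing prefix can be viewed as a shortest-path search over the process model's transition system for the shortest activity sequence that enables the orphan event:

```python
from collections import deque

def impute_prefix(model, start, orphan_activity):
    """BFS over a process model given as a transition system
    {state: [(activity, next_state), ...]} for the shortest activity
    sequence reaching a state that enables the orphan event's activity.
    That sequence serves as the imputed missing prefix."""
    queue = deque([(start, ())])
    seen = {start}
    while queue:
        state, prefix = queue.popleft()
        # if the orphan activity is enabled here, the prefix is complete
        if any(act == orphan_activity for act, _ in model.get(state, [])):
            return list(prefix)
        for act, nxt in model.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, prefix + (act,)))
    return None  # the activity is not reachable in the model
```

For example, in a model where activity "d" is only enabled after "a" then "b", an orphan "d" event would be imputed the prefix ["a", "b"].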

    Exact and Approximated Log Alignments for Processes with Inter-case Dependencies

    The execution of different cases of a process is often restricted by inter-case dependencies through, e.g., queueing or shared resources. Various high-level Petri net formalisms have been proposed that are able to model and analyze coevolving cases. In this paper, we focus on a formalism tailored to conformance checking through alignments, which introduces challenges related to the constraints the model should put on interacting process instances and on resource instances and their roles. We formulate requirements for modeling and analyzing resource-constrained processes and compare several Petri net extensions that allow for incorporating inter-case constraints. We argue that the Resource-Constrained ν-net is an appropriate formalism to be used in the context of conformance checking, which traditionally aligns cases individually and thereby fails to expose deviations on inter-case dependencies. We provide formal mathematical foundations of the globally aligned event log based on the theory of partially ordered sets and propose an approximation technique, based on the composition of individually aligned cases, that resolves inter-case violations locally.

    Conformance checking of process event streams with constraints on data retention

    Conformance checking (CC) techniques in process mining determine the conformity of cases, by means of their event sequences, with respect to a business process model. Online conformance checking (OCC) techniques perform such analysis for cases in event streams. Cases in streams may never be explicitly concluded. Therefore, OCC techniques usually neglect the memory limitation and store all the observed cases, whether seemingly concluded or unconcluded. Such indefinite storage of cases is inconsistent with the spirit of privacy regulations, such as the GDPR, which advocate the retention of minimal data for a definite period of time. Catering to the aforementioned constraints, we propose two classes of novel approaches that partially or fully forget cases but can still properly estimate the conformance of their future events. All our proposed approaches bound the number of cases in memory and forget those in excess of the defined limit on the basis of prudent forgetting criteria. One class of these proposed approaches retains a meaningful summary of the forgotten events in order to resume the CC of their cases in the future, while the other class leverages classification for this purpose. Through experiments using real-life as well as synthetic event data under a streaming setting, we highlight the effectiveness of all our proposed approaches compared to a state-of-the-art OCC technique lacking any forgetting mechanism. Our approaches substantially reduce the amount of data required to be retained while minimally impacting the accuracy of the conformance statistics.
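A minimal sketch of the bounded-memory idea, with hypothetical names and a deliberately crude summary (last event plus event count) standing in for the paper's meaningful summaries:

```python
from collections import OrderedDict

class BoundedCaseStore:
    """Toy bounded store: keeps at most `capacity` running cases.
    When the limit is exceeded, the least recently updated case is
    forgotten, but a compact summary (its last event and event count)
    is retained so processing can resume if the case reappears."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cases = OrderedDict()  # case_id -> list of stored events
        self.summaries = {}         # case_id -> (last_event, n_events)

    def observe(self, case_id, event):
        if case_id not in self.cases:
            self.cases[case_id] = []
            if case_id in self.summaries:         # resume a forgotten case
                _, n = self.summaries.pop(case_id)
                self.cases[case_id] = [None] * n  # placeholder prefix
        self.cases[case_id].append(event)
        self.cases.move_to_end(case_id)           # mark as most recent
        while len(self.cases) > self.capacity:    # forget the stalest case
            old_id, events = self.cases.popitem(last=False)
            self.summaries[old_id] = (events[-1], len(events))
```

In the paper's setting the placeholder prefix would instead be regenerated from the process model; here it merely records how many events were forgotten.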

    Alignment-based trace clustering

    A novel method to cluster event log traces is presented in this paper. In contrast to the approaches in the literature, the clustering approach of this paper assumes an additional input: a process model that describes the current process. The core idea of the algorithm is to use model traces as centroids of the detected clusters, computed from a generalization of the notion of alignment. This way, model explanations of observed behavior are the driving force behind the computed clusters, in contrast to current model-agnostic approaches, which, e.g., group log traces merely by their vector-space similarity. We believe alignment-based trace clustering provides results more useful for stakeholders. Moreover, in cases of log incompleteness, noisy logs, or concept drift, it can be more robust when dealing with highly deviating traces. The technique of this paper can be combined with any clustering technique to provide model explanations for the computed clusters. The proposed technique relies on encoding the individual alignment problems into the (pseudo-)Boolean domain, and has been implemented in our tool DarkSider, which uses an open-source solver.
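A stripped-down sketch of the centroid idea, assuming plain edit distance as a stand-in for the (pseudo-)Boolean alignment cost used by DarkSider; all names are hypothetical:

```python
def edit_distance(a, b):
    """Levenshtein distance between two activity sequences,
    computed with a single rolling row of the DP table."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,       # delete x
                                     dp[j - 1] + 1,   # insert y
                                     prev + (x != y)) # substitute
    return dp[-1]

def cluster_by_model_traces(log_traces, model_traces):
    """Assign each log trace to the model trace with the cheapest
    alignment, here approximated by plain edit distance; the model
    traces act as fixed cluster centroids."""
    clusters = {i: [] for i in range(len(model_traces))}
    for trace in log_traces:
        best = min(range(len(model_traces)),
                   key=lambda i: edit_distance(trace, model_traces[i]))
        clusters[best].append(trace)
    return clusters
```

Each cluster then comes with a built-in model explanation: the model trace serving as its centroid.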

    Nontrivial Polydispersity Exponents in Aggregation Models

    We consider the scaling solutions of Smoluchowski's equation of irreversible aggregation for a non-gelling collision kernel. The scaling mass distribution f(s) diverges as s^{-tau} when s->0. The exponent tau is nontrivial and could, until now, only be computed by numerical simulations. We develop here new general methods to obtain exact bounds and good approximations of tau. For the specific kernel KdD(x,y)=(x^{1/D}+y^{1/D})^d, describing a mean-field model of particles moving in d dimensions and aggregating with conservation of "mass" s=R^D (R is the particle radius), perturbative and nonperturbative expansions are derived. For a general kernel, we find exact inequalities for tau and develop a variational approximation which is used to carry out the first systematic study of tau(d,D) for KdD. The agreement is excellent both with the expansions we derived and with existing numerical values. Finally, we discuss a possible application to 2d decaying turbulence.
    Comment: 16 pages (multicol.sty), 6 eps figures (uses epsfig). Minor corrections; notations improved. As published in Phys. Rev. E 55, 546

    Communities of Local Optima as Funnels in Fitness Landscapes

    We conduct an analysis of local optima networks extracted from fitness landscapes of the Kauffman NK model under iterated local search. Applying the Markov Cluster Algorithm for community detection to the local optima networks, we find that the landscapes consist of multiple clusters. This result complements recent findings in the literature that landscapes often decompose into multiple funnels, which increases their difficulty for iterated local search. Our results suggest that the number of clusters, as well as the size of the cluster in which the global optimum is located, is correlated with the search difficulty of landscapes. We conclude that clusters found by community detection in local optima networks offer a new way to characterize the multi-funnel structure of fitness landscapes.
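The Markov Cluster Algorithm itself is compact. Below is a minimal NumPy sketch (the expansion and inflation parameters are illustrative defaults, not the settings used in the study):

```python
import numpy as np

def markov_cluster(adj, expansion=2, inflation=2.0, iters=50):
    """Minimal Markov Cluster Algorithm: alternately expand (matrix
    power of the column-stochastic transition matrix) and inflate
    (elementwise power plus renormalisation); clusters are read off
    the nonzero rows of the converged matrix."""
    m = adj.astype(float) + np.eye(len(adj))  # add self-loops
    m /= m.sum(axis=0)                        # make columns stochastic
    for _ in range(iters):
        m = np.linalg.matrix_power(m, expansion)  # expansion step
        m = m ** inflation                        # inflation step
        m /= m.sum(axis=0)
        m[m < 1e-10] = 0.0                        # prune tiny entries
    clusters = []
    for row in m:
        members = frozenset(np.nonzero(row)[0])
        if members and members not in clusters:
            clusters.append(members)
    return clusters
```

On the textbook example of two triangles joined by a single edge, the algorithm recovers the two triangles as separate communities, which is the behavior exploited above to detect funnel structure in local optima networks.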