Method for Repairing Process Models with Selection Structures Based on Token Replay
Enterprise information systems (EIS) play an important role in business process management. Process mining techniques that can analyze the large volumes of event logs generated by EIS have become a very active research topic. Deviations often exist between a process model of an EIS and its event logs, so the process model needs to be repaired. For process models with selection structures, the mining accuracy of existing methods is reduced by the additional self-loops and invisible transitions they introduce. In this paper, a method for repairing process models with selection structures based on logical Petri nets is proposed. Based on the relationship between the input and output places of a sub-model, the deviation position is determined by a token replay method. Then, algorithms are designed to repair the process models based on logical Petri nets. Finally, the effectiveness of the proposed method is illustrated by experiments: compared with its peers, the proposed method achieves relatively high fitness and precision.
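The deviation-detection step above relies on token replay. A minimal sketch of the general idea (not the paper's actual algorithm) is shown below; the net, place names, and trace are hypothetical, with a selection structure ("approve" or "reject") sharing the same input and output places:

```python
# Hedged sketch of token replay for deviation detection. A Petri net is
# modelled as transitions mapping input places to output places; replaying
# a trace counts missing and remaining tokens, which signal deviations.
from collections import Counter

# Transition name -> (input places, output places); illustrative net only.
NET = {
    "register": ({"start"}, {"p1"}),
    "approve":  ({"p1"}, {"p2"}),   # selection structure: approve OR reject
    "reject":   ({"p1"}, {"p2"}),
    "archive":  ({"p2"}, {"end"}),
}

def replay(trace, net, initial=("start",), final=("end",)):
    """Replay a trace; return (fits, missing, remaining) token counts."""
    marking = Counter(initial)
    missing = 0
    for event in trace:
        ins, outs = net[event]
        for p in ins:
            if marking[p] > 0:
                marking[p] -= 1
            else:
                missing += 1        # a token had to be created: a deviation
        for p in outs:
            marking[p] += 1
    for p in final:                 # consume the expected final marking
        if marking[p] > 0:
            marking[p] -= 1
        else:
            missing += 1
    remaining = sum(marking.values())
    return missing == 0 and remaining == 0, missing, remaining

print(replay(["register", "approve", "archive"], NET))  # fitting trace
print(replay(["register", "archive"], NET))             # deviates at "archive"
```

The places where tokens go missing or are left behind indicate the sub-model around which a repair (such as adding a logical transition) would be applied.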
Repairing Alignments of Process Models
Process mining represents a collection of data-driven techniques that support the analysis, understanding, and improvement of business processes. A core branch of process mining is conformance checking, i.e., assessing to what extent a business process model conforms to observed business process execution data. Alignments are the de facto standard instrument to compute such conformance statistics. However, computing alignments is a combinatorial problem and hence extremely costly. At the same time, many process models share a similar structure and/or a great deal of behavior. For collections of such models, computing alignments from scratch is inefficient, since large parts of the alignments are likely to be the same. This paper presents a technique that exploits process model similarity and repairs existing alignments by updating those parts that do not fit a given process model. The technique effectively reduces the size of the combinatorial alignment problem and hence decreases computation time significantly. Moreover, the potential loss of optimality is limited and stays within acceptable bounds.
A Combination of the Evolutionary Tree Miner and Simulated Annealing
In recent years, process mining has become important for discovering process models from event logs; however, existing methods have not achieved good overall fitness. In this context, this paper proposes a combination of the Evolutionary Tree Miner (ETM) and Simulated Annealing (SA). The ETM aims to reduce the randomness of the population and thereby improve the quality of individuals, while SA aims to increase the overall fitness of the population. A comparison with other approaches shows that the proposed method achieves better overall fitness and better quality of individuals.
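The SA component can be sketched generically: worse candidates are occasionally accepted with a temperature-dependent probability so the search can escape local optima. The fitness function and neighbourhood below are toy stand-ins (a bit-string "model"), not the ETM's actual process-tree operators:

```python
# Illustrative simulated annealing sketch; the candidate representation,
# fitness, and neighbour function are hypothetical, not from the paper.
import math
import random

def simulated_annealing(candidate, fitness, neighbour,
                        t0=1.0, cooling=0.95, steps=200, seed=42):
    rng = random.Random(seed)
    best = current = candidate
    t = t0
    for _ in range(steps):
        nxt = neighbour(current, rng)
        delta = fitness(nxt) - fitness(current)
        # Accept improvements always; accept worse moves with
        # Boltzmann probability exp(delta / t), which shrinks as t cools.
        if delta >= 0 or rng.random() < math.exp(delta / t):
            current = nxt
            if fitness(current) > fitness(best):
                best = current
        t *= cooling
    return best

# Toy example: maximise the fitness of a bit-string "model".
fitness = lambda bits: sum(bits)

def flip_one(bits, rng):
    i = rng.randrange(len(bits))
    return bits[:i] + (1 - bits[i],) + bits[i + 1:]

start = (0,) * 16
result = simulated_annealing(start, fitness, flip_one)
print(fitness(result))
```

In the paper's setting, the candidates would be the process trees evolved by the ETM and the fitness would be replay fitness against the event log.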
Innovations in the Analysis of Chandra-ACIS Observations
As members of the instrument team for the Advanced CCD Imaging Spectrometer
(ACIS) on NASA's Chandra X-ray Observatory and as Chandra General Observers, we
have developed a wide variety of data analysis methods that we believe are
useful to the Chandra community, and have constructed a significant body of
publicly-available software (the ACIS Extract package) addressing important
ACIS data and science analysis tasks. This paper seeks to describe these data
analysis methods for two purposes: to document the data analysis work performed
in our own science projects, and to help other ACIS observers judge whether
these methods may be useful in their own projects (regardless of what tools and
procedures they choose to implement those methods).
The ACIS data analysis recommendations we offer here address much of the
workflow in a typical ACIS project, including data preparation, point source
detection via both wavelet decomposition and image reconstruction, masking
point sources, identification of diffuse structures, event extraction for both
point and diffuse sources, merging extractions from multiple observations,
nonparametric broad-band photometry, analysis of low-count spectra, and
automation of these tasks. Many of the innovations presented here arise from
several, often interwoven, complications that are found in many Chandra
projects: large numbers of point sources (hundreds to several thousand), faint
point sources, misaligned multiple observations of an astronomical field, point
source crowding, and scientifically relevant diffuse emission.
Comment: Accepted by the ApJ, 2010 Mar 10 (#343576); 39 pages, 16 figures.
A Survey on Economic-driven Evaluations of Information Technology
The economic-driven evaluation of information technology (IT) has become an important instrument in the management of IT projects. Numerous approaches have been developed to quantify the costs of an IT investment and its assumed profit, to evaluate its impact on business process performance, and to analyze the role of IT regarding the achievement of enterprise objectives. This paper discusses approaches for evaluating IT from an economic-driven perspective. Our comparison is based on a framework distinguishing between classification criteria and evaluation criteria. The former allow for the categorization of evaluation approaches based on their similarities and differences. The latter, by contrast, represent attributes that allow the discussed approaches to be evaluated. Finally, we give an example of a typical economic-driven IT evaluation.
Compliance flow: an intelligent workflow management system to support engineering processes
This work is about extending the scope of current workflow management systems to support
engineering processes. On the one hand engineering processes are relatively dynamic, and on the
other their specification and performance are constrained by industry standards and guidelines
for the sake of product acceptability, such as IEC 61508 for safety and ISO 9001 for quality.
A number of technologies have been proposed to increase the adaptability of current workflow
systems to deal with dynamic situations. A primary concern is how to support open-ended
processes that cannot be completely specified in detail prior to their execution. A survey of
adaptive workflow systems is given and the enabling technologies are discussed.
Engineering processes are studied and their characteristics are identified and discussed. Current
workflow systems have been successfully used in managing "administrative" processes for some
time, but they lack the flexibility to support dynamic, unpredictable, collaborative, and highly
interdependent engineering processes. [Continues.]
Enabling Resilience in Cyber-Physical-Human Water Infrastructures
Rapid urbanization and growth in urban populations have forced community-scale infrastructures (e.g., water, power and natural gas distribution systems, and transportation networks) to operate at their limits. Aging (and failing) infrastructures around the world are becoming increasingly vulnerable to operational degradation, extreme weather, natural disasters, and cyber attacks/failures. These trends have wide-ranging socioeconomic consequences and raise public safety concerns.
In this thesis, we introduce the notion of cyber-physical-human infrastructures (CPHIs): smart community-scale infrastructures that bridge technologies with physical infrastructures and people. CPHIs are highly dynamic stochastic systems characterized by complex physical models that exhibit region-wide variability and uncertainty under disruptions. Failures in these distributed settings tend to be difficult to predict and estimate, and expensive to repair. Real-time fault identification is crucial to ensure continuity of lifeline services to customers at adequate levels of quality. Emerging smart community technologies have the potential to transform our failing infrastructures into robust and resilient future CPHIs.
We explore one such CPHI: community water infrastructures. Current urban water infrastructures, which are decades (sometimes over 100 years) old, encompass diverse geophysical regimes. Water stress concerns include the scarcity of supply and an increase in demand due to urbanization. Deterioration and damage to the infrastructure can disrupt water service; contamination events can result in economic and public health consequences. Unfortunately, little investment has gone into modernizing this key lifeline.
To enhance the resilience of water systems, we propose an integrated middleware framework for the quick and accurate identification of failures in complex water networks that exhibit uncertain behavior. Our proposed approach integrates IoT-based sensing, domain-specific models, and simulations with machine learning methods to identify failures (pipe breaks, contamination events). The composition of techniques results in cost-accuracy-latency tradeoffs in fault identification, inherent in CPHIs due to the constraints imposed by cyber components, physical mechanics, and human operators. Three key resilience problems are addressed in this thesis: isolation of multiple faults under a small number of failures, state estimation of water systems under extreme events such as earthquakes, and contaminant source identification in water networks using human-in-the-loop sensing.
By working with real-world water agencies (WSSC, DC and LADWP, LA), we first develop an understanding of the operations of water CPHI systems. We design and implement a sensor-simulation-data integration framework, AquaSCALE, and apply it to localize multiple concurrent pipe failures. We use a mixture of infrastructure measurements (i.e., historical and live water pressure/flow), environmental data (i.e., weather), and human inputs (i.e., Twitter feeds), combined and enhanced with the domain model and supervised learning techniques, to locate multiple failures at fine levels of granularity (individual pipeline level) with detection time reduced by orders of magnitude (from hours/days to minutes).
We next consider the resilience of water infrastructures under extreme events (i.e., earthquakes); the challenge here is the lack of a priori knowledge and the increased number and severity of damages to infrastructures. We present a graphical-model-based approach for efficient online state estimation, where an offline graph factorization partitions a given network into disjoint subgraphs and belief-propagation-based inference is executed on-the-fly in a distributed manner on those subgraphs. Our proposed approach can isolate 80% of broken pipes and 99% of loss-of-service to end-users during an earthquake.
Finally, we address issues of water quality. Today this is a human-in-the-loop process where operators need to gather water samples for lab tests. We incorporate the necessary abstractions, together with event processing methods, into a workflow that iteratively selects and refines the set of potential failure points via human-driven grab sampling. Our approach utilizes Hidden Markov Model based representations for event inference, along with reinforcement learning methods for further refining event locations and reducing the cost of human effort.
The proposed techniques are integrated into a middleware architecture, which enables components to communicate and collaborate with one another. We validate our approaches through a prototype implementation with multiple real-world water networks, supply-demand patterns from water utilities, and policies set by the U.S. EPA. While our focus here is on water infrastructures in a community, the developed end-to-end solution is applicable to other infrastructures and community services that operate in disruptive and resource-constrained environments.
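The Hidden-Markov-Model-based event inference mentioned above can be illustrated with the standard forward algorithm, which filters a belief over hidden states from a sequence of observations. The two-site network and all probabilities below are hypothetical toy values, not taken from the thesis:

```python
# Hedged sketch: HMM forward algorithm for event inference, analogous in
# spirit to the thesis' contamination-localisation workflow. Hidden states
# are candidate contamination sites; observations are grab-sample results.
def forward(obs, states, start_p, trans_p, emit_p):
    """Return P(observation sequence) and the filtered state distribution."""
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit_p[s][o] * sum(alpha[q] * trans_p[q][s]
                                       for q in states)
                 for s in states}
    total = sum(alpha.values())
    return total, {s: a / total for s, a in alpha.items()}

# Toy two-site example with hypothetical probabilities.
states = ("siteA", "siteB")
start_p = {"siteA": 0.5, "siteB": 0.5}
trans_p = {"siteA": {"siteA": 0.9, "siteB": 0.1},
           "siteB": {"siteA": 0.1, "siteB": 0.9}}
emit_p = {"siteA": {"clean": 0.2, "contaminated": 0.8},
          "siteB": {"clean": 0.7, "contaminated": 0.3}}

likelihood, belief = forward(("contaminated", "contaminated"),
                             states, start_p, trans_p, emit_p)
print(belief)   # posterior mass should concentrate on siteA
```

A reinforcement learning component, as in the thesis, would then use such filtered beliefs to decide where to direct the next round of human-driven grab sampling.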
- …