Mining Event Logs to Support Workflow Resource Allocation
Workflow technology is widely used to facilitate the business process in
enterprise information systems (EIS), and it has the potential to reduce design
time, enhance product quality, and decrease product cost. However, significant
limitations remain: resource allocation, an important task in workflow
management, is still largely performed manually, which is time-consuming. This
paper presents a data mining approach to address the
resource allocation problem (RAP) and improve the productivity of workflow
resource management. Specifically, an Apriori-like algorithm is used to find
the frequent patterns from the event log, and association rules are generated
according to predefined resource allocation constraints. Subsequently, a
correlation measure named lift is utilized to annotate the negatively
correlated resource allocation rules for resource reservation. Finally, the
rules are ranked using the confidence measures as resource allocation rules.
Comparative experiments are performed using C4.5, SVM, ID3, Naïve Bayes and
the presented approach, and the results show that the presented approach is
effective in both accuracy and candidate resource recommendations.
Comment: T. Liu et al., Mining event logs to support workflow resource
allocation, Knowl. Based Syst. (2012), http://dx.doi.org/10.1016/j.knosys.2012.05.01
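The pipeline in this abstract hinges on three standard association-rule measures: support, confidence, and lift, with lift below 1 marking a negatively correlated rule. A minimal sketch over a hypothetical (activity, resource) event log, not the authors' implementation:

```python
# Hypothetical event log: each trace records (activity, resource) pairs.
log = [
    [("review", "alice"), ("approve", "bob")],
    [("review", "alice"), ("approve", "bob")],
    [("review", "carol"), ("approve", "bob")],
    [("review", "alice"), ("approve", "dave")],
]

def support(itemset):
    """Fraction of traces containing every (activity, resource) pair."""
    return sum(all(i in t for i in itemset) for t in log) / len(log)

def confidence(lhs, rhs):
    """Of the traces matching lhs, how many also match rhs."""
    return support(lhs + rhs) / support(lhs)

def lift(lhs, rhs):
    """Lift < 1 flags a negatively correlated rule (used for reservation)."""
    return confidence(lhs, rhs) / support(rhs)

rule_lhs = [("review", "alice")]
rule_rhs = [("approve", "bob")]
print(confidence(rule_lhs, rule_rhs))  # ≈ 0.667
print(lift(rule_lhs, rule_rhs))        # ≈ 0.889, i.e. negatively correlated
```

In the paper's terms, rules are ranked by the confidence value, while the lift annotation separates positively from negatively correlated resource assignments.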
Process mining: A recent framework for extracting a model from event logs
Business Process Management (BPM) is a well-known discipline, with roots in earlier theories on optimizing management and improving business results. BPM can be traced back to the beginning of this century, although it has gained particular attention only in more recent years. Traditional BPM approaches usually start from the top and analyse the organization according to known rules derived from its structure or type of business. Process Mining (PM) is a completely different approach, since it aims to extract knowledge from event logs, which are widely available in many of today's organizations. PM uses specialized data-mining algorithms to uncover patterns and trends in these logs, and it is an alternative approach when a formal process specification is not easily obtainable or not cost-effective. This paper presents a literature review of major works on this theme.
Predicting deadline transgressions using event logs
Effective risk management is crucial for any organisation. One of its key steps is risk identification, but few tools exist to support this process. Here we present a method for the automatic discovery of a particular type of process-related risk, the danger of deadline transgressions or overruns, based on the analysis of event logs. We define a set of time-related process risk indicators, i.e., patterns observable in event logs that highlight the likelihood of an overrun, and then show how instances of these patterns can be identified automatically using statistical principles. To demonstrate its feasibility, the approach has been implemented as a plug-in module to the process mining framework ProM and tested using an event log from a Dutch financial institution.
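The abstract does not spell out its statistical principles; one plausible time-related risk indicator, sketched here purely as an assumption (the paper defines its own indicator set), flags a running case once its elapsed time exceeds the historical mean by k standard deviations:

```python
import statistics

# Hypothetical historical case durations (hours) mined from an event log.
completed = [10.0, 12.0, 11.5, 9.0, 13.0, 10.5]

mu = statistics.mean(completed)
sigma = statistics.stdev(completed)

def at_risk(elapsed_hours, k=2.0):
    """Flag a running case as a likely overrun if its elapsed time
    already exceeds mean + k standard deviations of past durations."""
    return elapsed_hours > mu + k * sigma

print(at_risk(11.0))   # False: within normal range
print(at_risk(16.0))   # True: well beyond mean + 2 sigma
```

A threshold of this kind is cheap to evaluate on-line against a live event stream, which is what makes such indicators usable for early warning rather than post-hoc analysis.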
Interestingness of traces in declarative process mining: The Janus LTLpf approach
Declarative process mining is the set of techniques aimed at extracting behavioural constraints from event logs. These constraints are inherently of a reactive nature, in that their activation restricts the occurrence of other activities. In this way, they are prone to the principle of ex falso quodlibet: they can be satisfied even when not activated. As a consequence, constraints can be mined that are hardly interesting to users, or even potentially misleading. In this paper, we build on the observation that users typically read and write temporal constraints as if-statements with an explicit indication of the activation condition. Our approach is called Janus, because it permits the specification and verification of reactive constraints that, upon activation, look forward into the future and backwards into the past of a trace. Reactive constraints are expressed using Linear-time Temporal Logic with Past on Finite Traces (LTLpf). To mine them out of event logs, we devise a time bi-directional valuation technique based on triplets of automata operating in an on-line fashion. Our solution proves efficient, being at most quadratic w.r.t. trace length, and effective in recognising the interestingness of discovered constraints.
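To make the vacuity problem concrete, here is a toy check of a single response-style reactive constraint in plain Python (not the Janus automata; the activity names are illustrative):

```python
# A reactive constraint "if a occurs, b must occur later" is vacuously
# satisfied when never activated (ex falso quodlibet). This sketch
# separates interesting from vacuous satisfaction by counting activations.

def check_response(trace, activation, target):
    """Return (satisfied, activations) for 'activation => eventually target'."""
    activations = 0
    satisfied = True
    for i, event in enumerate(trace):
        if event == activation:
            activations += 1
            if target not in trace[i + 1:]:
                satisfied = False
    return satisfied, activations

# Vacuous: the constraint holds, but only because 'a' never happens.
print(check_response(["c", "b"], "a", "b"))       # (True, 0)
# Interesting: activated once and fulfilled.
print(check_response(["a", "c", "b"], "a", "b"))  # (True, 1)
```

A mined constraint whose satisfaction on the log is almost entirely of the first kind is exactly the "hardly interesting" case the abstract warns about.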
Integrating E-Commerce and Data Mining: Architecture and Challenges
We show that the e-commerce domain can provide all the right ingredients for
successful data mining and claim that it is a killer domain for data mining. We
describe an integrated architecture, based on our experience at Blue Martini
Software, for supporting this integration. The architecture can dramatically
reduce the pre-processing, cleaning, and data understanding effort often
documented to take 80% of the time in knowledge discovery projects. We
emphasize the need for data collection at the application server layer (not the
web server) in order to support logging of data and metadata that is essential
to the discovery process. We describe the data transformation bridges required
from the transaction processing systems and customer event streams (e.g.,
clickstreams) to the data warehouse. We detail the mining workbench, which
needs to provide multiple views of the data through reporting, data mining
algorithms, visualization, and OLAP. We conclude with a set of challenges.
Comment: KDD workshop: WebKDD 200
A MapReduce architecture for web site user behaviour monitoring in real time
Monitoring the behaviour of large numbers of web site users in real time poses significant performance challenges, due to the decentralised location and volume of generated data. This paper proposes a MapReduce-style architecture where the processing of event series from the Web users is performed by a number of cascading mappers, reducers and rereducers, local to the event origin. With the use of static analysis and a prototype implementation, we show how this architecture is capable of carrying out time series analysis in real time for very large web data sets, based on the actual events, instead of resorting to sampling or other extrapolation techniques.
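A minimal map/reduce sketch of the cascading idea, with hypothetical event data (in the actual architecture each step runs distributed, local to the event origin, and rereducers repeat the merge up the cascade):

```python
from collections import Counter
from functools import reduce

# Hypothetical clickstream events, partitioned by origin (e.g. edge node).
partitions = [
    [("page_view", "/home"), ("click", "/buy")],
    [("page_view", "/home"), ("page_view", "/faq")],
]

def mapper(events):
    """Local map step: count event types close to where they originate."""
    return Counter(kind for kind, _ in events)

def reducer(a, b):
    """Merge two partial counts; a rereducer applies the same merge again."""
    return a + b

partials = [mapper(p) for p in partitions]     # one local result per origin
totals = reduce(reducer, partials, Counter())  # cascading reduce/rereduce step
print(totals["page_view"])  # 3
```

Because the merge is associative, partial results can be combined in any order and at any level of the cascade, which is what lets the aggregation track the live event stream instead of sampling it.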