31,863 research outputs found
Service Querying to Support Process Variant Development
Developing process variants enables enterprises to effectively adapt their business models to different markets. Existing approaches focus on business process models to support variant development. The assignment of services in a business process, which underpins process variability, has not been widely examined. In this paper, we present an innovative approach that focuses on component services instead of process models. We aim to recommend services for a selected position in a business process. We define the service composition context as the relationships between a service and its neighbors, and we compute the similarity between services by matching their composition contexts. We then propose a query language that takes composition-context matching into account for service querying. We developed an application to demonstrate our approach and performed experiments on a public dataset of real process models. Experimental results show that our approach is feasible and efficient.
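The paper's query language and exact matching scheme are not given in the abstract; the neighbor-context idea can be sketched roughly as follows, assuming a composition context is simply the pair of predecessor and successor service sets and that contexts are matched with Jaccard overlap (all function names and process graphs below are illustrative, not the authors' API):

```python
def composition_context(process, service):
    """A service's neighbors in a process graph, where `process` maps
    each service to the set of services it connects to."""
    predecessors = {s for s, succs in process.items() if service in succs}
    successors = set(process.get(service, set()))
    return predecessors, successors

def context_similarity(proc_a, svc_a, proc_b, svc_b):
    """Jaccard similarity of the two services' composition contexts."""
    pa, sa = composition_context(proc_a, svc_a)
    pb, sb = composition_context(proc_b, svc_b)
    union = len(pa | pb) + len(sa | sb)
    if union == 0:
        return 1.0
    return (len(pa & pb) + len(sa & sb)) / union

# Two small process variants that share a payment step.
variant1 = {"order": {"pay"}, "pay": {"ship"}, "ship": set()}
variant2 = {"order": {"pay"}, "pay": {"invoice"}, "invoice": set()}
score = context_similarity(variant1, "pay", variant2, "pay")
```

A querying step could then rank candidate services for a selected position by this score; the paper's actual language presumably expresses richer constraints than this sketch.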
Comparison of Simple Graphical Process Models
Comparing the structure of graphical process models can reveal a number of process variations. Since most contemporary norms for process modelling rely on directed connectivity of objects in the model, connections between objects form sequences that can be translated into execution scenarios. Whereas sequences can be tested for completeness in performing process activities using simulation methods, the similarity or difference of the static characteristics of sequences across model variants is difficult to explore. The goal of the paper is to test the application of a method for comparison of graphical models by analyzing and comparing static characteristics of process models. Consequently, a metamodel for process models is developed, followed by a comparison procedure conducted using a graphical model comparison algorithm.
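The abstract does not specify which static characteristics the comparison algorithm uses; a minimal sketch, assuming they are simple structural counts (nodes, edges, start and end objects) over a directed edge list, and that models are compared by summing differences over those counts:

```python
def static_profile(model):
    """Static characteristics of a process model given as an edge list."""
    nodes = {n for edge in model for n in edge}
    sources = nodes - {dst for _, dst in model}   # objects with no inflow
    sinks = nodes - {src for src, _ in model}     # objects with no outflow
    return {"nodes": len(nodes), "edges": len(model),
            "sources": len(sources), "sinks": len(sinks)}

def profile_distance(a, b):
    """Sum of absolute differences over the shared characteristics."""
    return sum(abs(a[k] - b[k]) for k in a)

# Two variants of a review process; the second adds a rejection branch.
m1 = [("start", "review"), ("review", "approve"), ("approve", "end")]
m2 = [("start", "review"), ("review", "reject"), ("review", "approve"),
      ("approve", "end")]
d = profile_distance(static_profile(m1), static_profile(m2))
```

A real metamodel-based procedure would compare richer characteristics (object types, gateway kinds, sequence sets) than these four counts.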
On the Reverse Engineering of the Citadel Botnet
Citadel is an advanced information-stealing malware which targets financial
information. This malware poses a real threat against the confidentiality and
integrity of personal and business data. A joint operation was recently
conducted by the FBI and the Microsoft Digital Crimes Unit in order to take
down Citadel command-and-control servers. The operation caused some disruption
in the botnet but has not stopped it completely. Due to the complex structure
and advanced anti-reverse engineering techniques, the Citadel malware analysis
process is both challenging and time-consuming. This allows cyber criminals to
carry on with their attacks while the analysis is still in progress. In this
paper, we present the results of the Citadel reverse engineering and provide
additional insight into the functionality, inner workings, and open source
components of the malware. In order to accelerate the reverse engineering
process, we propose a clone-based analysis methodology. Citadel is an offspring
of a previously analyzed malware called Zeus; thus, using the former as a
reference, we can measure and quantify the similarities and differences of the
new variant. Two types of code analysis techniques are provided in the
methodology, namely assembly to source code matching and binary clone
detection. The methodology can help reduce the number of functions requiring
manual analysis. The analysis results prove that the approach is promising in
Citadel malware analysis. Furthermore, the same approach is applicable to
similar malware analysis scenarios.

Comment: 10 pages, 17 figures. This is an updated and edited version of a paper
that appeared in FPS 201
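The paper's binary clone detection operates on real disassembly; the core idea of matching a new variant's functions against a previously analyzed reference can be sketched as fingerprinting normalized instruction sequences, so that functions differing only in registers or constants hash identically (the toy instruction tuples and the normalization rule below are illustrative assumptions, not the authors' tooling):

```python
import hashlib

def normalize(instructions):
    """Abstract away operand values, keeping only each mnemonic and its
    arity, so structurally identical code normalizes to the same form."""
    return tuple((mnemonic, len(operands))
                 for mnemonic, *operands in instructions)

def fingerprint(instructions):
    """Hash of the normalized instruction sequence."""
    return hashlib.sha256(repr(normalize(instructions)).encode()).hexdigest()

# A reference function and a variant differing only in registers/immediates,
# mimicking a Zeus function reused in Citadel.
reference_fn = [("mov", "eax", "1"), ("add", "eax", "ebx"), ("ret",)]
variant_fn = [("mov", "ecx", "5"), ("add", "ecx", "edx"), ("ret",)]
is_clone = fingerprint(reference_fn) == fingerprint(variant_fn)
```

Matching fingerprints mark a function as a likely clone of already-analyzed code, which is how such a methodology reduces the set of functions requiring manual analysis.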
Process model comparison based on cophenetic distance
The automated comparison of process models has received increasing attention in the last decade, due to the growing number of process models and repositories, and the consequent need to assess similarities between the underlying processes. Current techniques for process model comparison are either structural (based on graph edit distances) or behavioural (through activity profiles or analysis of the execution semantics). Accordingly, there is a gap between the quality of the information provided by these two families: structural techniques may be fast but inaccurate, whilst behavioural techniques are accurate but complex. In this paper we present a novel technique based on a well-known method for comparing labeled trees through the notion of cophenetic distance. The technique lies between the two families of methods for comparing process models: it has a structural nature, but can provide accurate information on the differences and similarities of two process models. An experimental evaluation on various benchmark sets is reported, positioning the proposed technique as a valuable tool for process model comparison.
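The cophenetic comparison of labeled trees can be sketched as follows: record, for each pair of leaves, the depth of their lowest common ancestor, and compare two trees by the difference of those vectors. This is a simplified sketch (distinct leaf pairs only, nested tuples as trees, an L1 norm); the paper's encoding of process models as trees and its exact distance are not reproduced here:

```python
from itertools import combinations

def leaves(tree):
    """Leaf labels of a nested-tuple tree; strings are leaves."""
    if isinstance(tree, str):
        return [tree]
    return [leaf for child in tree for leaf in leaves(child)]

def cophenetic_vector(tree, depth=0):
    """Map each pair of leaf labels to the depth of their lowest
    common ancestor."""
    values = {}
    if isinstance(tree, str):
        return values
    for child in tree:
        values.update(cophenetic_vector(child, depth + 1))
    for a, b in combinations(tree, 2):
        for x in leaves(a):
            for y in leaves(b):
                values[frozenset((x, y))] = depth  # LCA is this node
    return values

def cophenetic_distance(t1, t2):
    """L1 difference of the two cophenetic vectors."""
    v1, v2 = cophenetic_vector(t1), cophenetic_vector(t2)
    return sum(abs(v1.get(k, 0) - v2.get(k, 0)) for k in set(v1) | set(v2))

# Two trees over the same leaves with different groupings.
t_a = (("a", "b"), "c")
t_b = ("a", ("b", "c"))
d = cophenetic_distance(t_a, t_b)
```

Because the vector is computed in one traversal, the comparison stays structural in cost while still reflecting where the two trees group activities differently.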
Recurrent Poisson Factorization for Temporal Recommendation
Poisson factorization is a probabilistic model of users and items for recommendation systems, in which so-called implicit consumer data is modeled by a factorized Poisson distribution. Many variants of Poisson factorization show state-of-the-art performance on real-world recommendation tasks. However, most of them do not explicitly take into account the temporal behavior and recurrent activities of users, which are essential for recommending the right item to the right user at the right time. In this paper, we introduce the Recurrent Poisson Factorization (RPF) framework, which generalizes classical PF methods by utilizing a Poisson process for modeling the implicit feedback. RPF treats time as a natural constituent of the model and brings to the table a rich family of time-sensitive factorization models. To elaborate, we instantiate several variants of RPF that are capable of handling dynamic user preferences and item specifications (DRPF), modeling the social aspect of product adoption (SRPF), and capturing consumption heterogeneity among users and items (HRPF). We also develop a variational algorithm for approximate posterior inference that scales up to massive data sets. Furthermore, we demonstrate RPF's superior performance over many state-of-the-art methods on a synthetic dataset and on large-scale real-world datasets of music streaming logs and user-item interactions in M-Commerce platforms.

Comment: Submitted to KDD 2017 | Halifax, Nova Scotia - Canada - sigkdd. Code is available at https://github.com/AHosseini/RP
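The classical (non-recurrent) Poisson factorization that RPF generalizes can be sketched generatively: gamma-distributed user and item factors, with implicit counts drawn from a Poisson whose rate is their inner product. The prior shapes, sizes, and recommendation rule below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 4, 5, 2

# Gamma priors over latent user preferences and item attributes.
theta = rng.gamma(shape=0.3, scale=1.0, size=(n_users, k))
beta = rng.gamma(shape=0.3, scale=1.0, size=(n_items, k))

# Implicit counts: y_ui ~ Poisson(theta_u . beta_i)
rates = theta @ beta.T
counts = rng.poisson(rates)

def recommend(user):
    """Unconsumed item with the highest predicted rate for this user."""
    scores = np.where(counts[user] == 0, rates[user], -np.inf)
    return int(np.argmax(scores))
```

RPF's contribution is to replace the static count model with a Poisson process over event times, so the rate itself varies with time; that temporal machinery is beyond this sketch.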
MultiFarm: A benchmark for multilingual ontology matching
In this paper we present the MultiFarm dataset, which has been designed as a benchmark for multilingual ontology matching. The MultiFarm dataset is composed of a set of ontologies translated into different languages and the corresponding alignments between these ontologies. It is based on the OntoFarm dataset, which has been used successfully for several years in the Ontology Alignment Evaluation Initiative (OAEI). By translating the ontologies of the OntoFarm dataset into eight different languages (Chinese, Czech, Dutch, French, German, Portuguese, Russian, and Spanish), we created a comprehensive set of realistic test cases. Based on these test cases, it is possible to evaluate and compare the performance of matching approaches with a special focus on multilingualism.
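Evaluating a matcher against such reference alignments is typically done with precision, recall, and F1 over correspondence pairs; a minimal sketch, with hypothetical English-German correspondences standing in for real MultiFarm alignments:

```python
def evaluate(found, reference):
    """Precision, recall, and F1 of a computed alignment against a
    reference alignment, both given as sets of correspondence pairs."""
    found, reference = set(found), set(reference)
    tp = len(found & reference)
    precision = tp / len(found) if found else 0.0
    recall = tp / len(reference) if reference else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1

# Illustrative correspondences between an English and a German ontology.
reference = {("Conference", "Konferenz"), ("Paper", "Beitrag")}
found = {("Conference", "Konferenz"), ("Paper", "Artikel")}
p, r, f = evaluate(found, reference)
```

Running every matcher over every language pair of the benchmark with such a routine yields the cross-lingual comparison the dataset was designed for.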