Data-Flow Modeling: A Survey of Issues and Approaches
This paper presents a survey of previous research on modeling the data-flow perspective of business processes. Current research on modeling and analyzing business process models focuses on control flow (i.e., the activities of the process), and very little attention is paid to the data-flow perspective. Yet data is essential in a process: to execute a workflow, the tasks need data, and without data, or without data available on time, the control flow cannot be executed. For some time, various researchers have tried to investigate the data-flow perspective of process models or to combine control and data flow in one model. This paper surveys those approaches. We conclude that there is no model showing a clear data-flow perspective that focuses on how data changes during process execution. The literature offers some related approaches, ranging from data modeling using elements from the relational database domain, through process model verification, to elements related to Web Services.
Mining Product Data Models: A Case Study
This paper presents two case studies used to validate several data-flow mining algorithms. We proposed the data-flow mining algorithms because most mining algorithms focus on the control-flow perspective. The first case study uses event logs generated by an ERP system (Navision) after we set several trackers on the data elements needed in the analyzed process, while the second case study uses event logs generated by the YAWL system. We offer a general solution for extracting data-flow models from different data sources. To apply the data-flow mining algorithms, the event logs must comply with a certain format (using the InputOutput extension), and to respect this format, a set of conversion tools is needed. We describe the conversion tools used and how we obtained the data-flow models. Moreover, the data-flow model is compared to the control-flow model.
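The conversion step the abstract describes can be pictured as mapping raw log events into entries that record which data elements each task read and wrote. The sketch below is a minimal illustration of that idea; the field names are invented assumptions, not the actual InputOutput extension schema.

```python
# Hypothetical sketch: convert a raw event-log row into an entry that
# records the data elements an activity consumed and produced. The keys
# ("task", "read", "written", ...) are illustrative assumptions only.

def convert_event(raw_event):
    """Map a raw log event to an input/output-annotated event."""
    return {
        "activity": raw_event["task"],
        "timestamp": raw_event["time"],
        # data elements the task consumed and produced
        "inputs": sorted(raw_event.get("read", [])),
        "outputs": sorted(raw_event.get("written", [])),
    }

log = [
    {"task": "Create order", "time": "2017-01-01T09:00",
     "written": ["order_id"]},
    {"task": "Approve order", "time": "2017-01-01T10:00",
     "read": ["order_id"], "written": ["approval"]},
]

converted = [convert_event(e) for e in log]
# converted[1]["inputs"] now lists the data the second task depended on
```

A real conversion tool would additionally validate the log against the target schema, but the dependency-recording step is the essential part.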
ToScA North America (6 – 8 June 2017, The University of Texas, Austin, TX) Program
ToScA North America will address key areas of science, including Multi-modal Imaging, Geosciences, Forensics, Increasing Contrast, Educational Outreach, Data, Materials Science, and Medical and Biological Science.
University of Texas High-Resolution X-ray CT Facility (UTCT); Jackson School of Geosciences, The University of Texas at Austin; Natural History Museum (London); Royal Microscopical Society (Oxford, UK)
Geological Science
Definition and Representation of Requirement Engineering/Management : A Process-Oriented Approach
Requirements are important in software development, product development, projects, processes, and systems. However, a review of the requirements literature indicates several problems. First, there is confusion between the terms "requirements engineering" and "requirements management." Similarities and differences between the two terms are resolved through a literature review, resulting in comprehensive definitions of each term. Second, the current literature recognizes the importance of requirements but offers few methodologies or solutions for defining and managing requirements. Hence, a flexible methodology, or framework, is provided for defining and managing requirements. Third, requirements methodologies are represented in various ways, each with its respective strengths and weaknesses. A tabular view and a hybrid graphical view for representing the requirements process are provided.
DALiuGE: A Graph Execution Framework for Harnessing the Astronomical Data Deluge
The Data Activated Liu Graph Engine - DALiuGE - is an execution framework for
processing large astronomical datasets at a scale required by the Square
Kilometre Array Phase 1 (SKA1). It includes an interface for expressing complex
data reduction pipelines consisting of both data sets and algorithmic
components and an implementation run-time to execute such pipelines on
distributed resources. By mapping the logical view of a pipeline to its
physical realisation, DALiuGE separates the concerns of multiple stakeholders,
allowing them to collectively optimise large-scale data processing solutions in
a coherent manner. The execution in DALiuGE is data-activated, where each
individual data item autonomously triggers the processing on itself. Such
decentralisation also makes the execution framework very scalable and flexible,
supporting pipeline sizes ranging from less than ten tasks running on a laptop
to tens of millions of concurrent tasks on the second fastest supercomputer in
the world. DALiuGE has been used in production for reducing interferometry data
sets from the Karl E. Jansky Very Large Array and the Mingantu Ultrawide
Spectral Radioheliograph; and is being developed as the execution framework
prototype for the Science Data Processor (SDP) consortium of the Square
Kilometre Array (SKA) telescope. This paper presents a technical overview of
DALiuGE and discusses case studies from the CHILES and MUSER projects that use
DALiuGE to execute production pipelines. In a companion paper, we provide
in-depth analysis of DALiuGE's scalability to very large numbers of tasks on
two supercomputing facilities.
Comment: 31 pages, 12 figures, currently under review by Astronomy and Computing
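The "data-activated" principle described above, where each individual data item autonomously triggers the processing on itself, can be sketched in a few lines. This is an illustrative toy model of the concept, not the DALiuGE API; all class and method names here are invented.

```python
# Toy model of data-activated execution: a data item counts the inputs
# it is waiting for and, once all have arrived, fires its attached
# consumer itself -- no central scheduler dispatches the work.

class DataItem:
    def __init__(self, name, n_inputs, consumer=None):
        self.name = name
        self.n_inputs = n_inputs      # upstream writes to await
        self.received = []
        self.consumer = consumer      # callable fired when complete
        self.value = None

    def write(self, payload):
        self.received.append(payload)
        if len(self.received) == self.n_inputs and self.consumer:
            # the data item itself triggers the next processing step
            self.value = self.consumer(self.received)

# a tiny two-stage pipeline: double the input, then sum with another value
result = DataItem("sum", n_inputs=2, consumer=sum)
a = DataItem("a", n_inputs=1, consumer=lambda xs: result.write(xs[0] * 2))
a.write(3)          # arrival of data activates the doubling step
result.write(4)     # second input arrives; result fires autonomously
# result.value == 10
```

Because each item only knows its local consumers, there is no global coordinator to become a bottleneck, which is the property the abstract credits for the framework's scalability.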
A Semantic Framework for Declarative and Procedural Knowledge
In any scientific domain, the full set of data and programs has reached an "-ome" status, i.e. it has grown massively. The original article on the Semantic Web describes the evolution of a Web of actionable information, i.e. information derived from data through a semantic theory for interpreting the symbols. In a Semantic Web, methodologies are studied for describing, managing and analyzing both resources (domain knowledge) and applications (operational knowledge) - without any restriction on what and where they are respectively suitable and available in the Web - as well as for realizing automatic and semantic-driven workflows of Web applications elaborating Web resources.
This thesis attempts to provide a synthesis among Semantic Web technologies, Ontology Research, and Knowledge and Workflow Management. Such a synthesis is represented by Resourceome, a Web-based framework consisting of two components which strictly interact with each other: an ontology-based and domain-independent knowledge management system (Resourceome KMS) - relying on a knowledge model where resource and operational knowledge are contextualized in any domain - and a semantic-driven workflow editor, manager and agent-based execution system (Resourceome WMS).
The Resourceome KMS and the Resourceome WMS are exploited in order to realize semantic-driven formulations of workflows, where activities are semantically linked to any involved resource. On the whole, by combining the use of domain ontologies and workflow techniques, Resourceome provides a flexible domain and operational knowledge organization, a powerful engine for semantic-driven workflow composition, and a distributed, automatic and transparent environment for workflow execution.
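The core idea of semantically linking workflow activities to ontology resources can be illustrated with a small validity check. The class names, property names, and structure below are invented for illustration; they are not the actual Resourceome knowledge model.

```python
# Hypothetical sketch: an ontology holds domain resources and
# operational resources; a workflow step is valid only if it links to a
# known operational resource whose declared input matches the supplied
# domain resource. All names here are illustrative assumptions.

ontology = {
    "BlastSearch":     {"kind": "operational", "consumes": "ProteinSequence"},
    "ProteinSequence": {"kind": "domain"},
}

workflow = [
    {"activity": "align", "uses": "BlastSearch", "input": "ProteinSequence"},
]

def validate(workflow, ontology):
    """Check every activity against the ontology's semantic links."""
    for step in workflow:
        op = ontology.get(step["uses"])
        if op is None or op["kind"] != "operational":
            return False                      # unknown or non-operational
        if op["consumes"] != step["input"]:
            return False                      # type mismatch on input
    return True
```

A semantic-driven composition engine would go further and suggest compatible resources automatically, but the match between an activity's declared inputs and the domain ontology is the basic mechanism.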
Implementação de Processos RH em SAP (Implementation of HR Processes in SAP)
Nowadays, HR management is a vital element for any business area in the world. For large scale companies, having a central tool to execute and approve all personnel data changes (including hiring and transferring employees) allows them to easily keep data up to date and reduce costs; all of this by using a global standard process.
The purpose of this dissertation is to show how SAP HCM Processes & Forms (HCM P&F) can be used to create an interactive web application that manages all HR processes within a company (supporting any front-end technology), and how it compares to legacy applications that hold their own business logic instead of leveraging SAP ERP rules and functionality.
Consequently, this document describes in detail an implementation of HCM P&F developed by Konkconsulting, to show how it stands out in comparison to other application models. This implementation is currently used by a multinational corporation and supports over 10 types of HR processes with 200 fields.
Nowadays, HR management is a fundamental element for any business in the world. Medium-to-large companies need central tools to manage numerous processes (for example: hirings, transfers, and promotions), as well as to facilitate data changes for their employees. Having these processes well defined enables fast communication between employees, less bureaucracy and, consequently, reduced costs.
This document describes an SAP-oriented development model for creating an MSS/ESS application that achieves the objectives stated above.
To this end, the SAP HCM Processes & Forms (HCM P&F) framework is presented, and it is demonstrated how it can be used to create an interactive application that takes advantage of all the business rules of the SAP ERP - a clear advantage over other development models. The presented solution is also prepared to support any presentation technology.
The implementation of this model was carried out by Konkconsulting for a multinational with a complex Human Resources model. The application is currently used by thousands of users and supports more than 10 types of processes.
Business process model customisation using domain-driven controlled variability management and rule generation
Business process models are abstract descriptions and as such should be applicable in different situations. For a single process model to be reused, we need support for configuration and customisation. Process objects and activities are often domain-specific. We exploit this observation by allowing domain models to drive the customisation. Process variability models, known from product line modelling and manufacturing, can control this customisation by taking the domain models into account. While activities and objects have already been studied, we investigate here the constraints that govern a process execution. To integrate these constraints into a process model, we use a rule-based constraints language for workflow and process models. A modelling framework is presented as a development approach for customised rules through a feature model. Our use case is content processing, represented by an abstract ontology-based domain model in the framework and implemented by a customisation engine. The key contribution is a conceptual definition of a domain-specific rule variability language.
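Feature-model-driven rule generation, as described above, amounts to turning a selected feature configuration into concrete process constraints. The sketch below illustrates the idea with invented feature names and a simple "X must precede Y" rule form; it is not the paper's rule language.

```python
# Hypothetical sketch: each selected feature contributes a precedence
# constraint over process activities. Feature and activity names are
# illustrative assumptions, not the paper's actual vocabulary.

FEATURE_RULES = {
    "encryption": ("encrypt", "publish"),  # encrypt must precede publish
    "review":     ("review", "publish"),   # review must precede publish
}

def generate_rules(selected_features):
    """Instantiate constraint rules for the chosen feature configuration."""
    return [FEATURE_RULES[f] for f in selected_features if f in FEATURE_RULES]

def satisfied(trace, rules):
    """Check an execution trace against 'before must precede after' rules."""
    for before, after in rules:
        if after in trace:
            if before not in trace or trace.index(before) > trace.index(after):
                return False
    return True

rules = generate_rules(["review"])
# satisfied(["review", "edit", "publish"], rules) -> True
# satisfied(["edit", "publish"], rules)           -> False
```

A customisation engine of the kind the abstract mentions would generate such rules from the feature model at configuration time and enforce them at execution time.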