Basic Principles of Financial Process Mining A Journey through Financial Data in Accounting Information Systems
Auditors and process managers often face huge numbers of financial entries in accounting information systems. For many purposes, such as auditing the internal control system, a process-oriented view would be more helpful for understanding how a set of transactions produced the financial entries. We therefore present an algorithm capable of mining financial entries and open items to reconstruct the process instances that produced them. In this way, auditors can trace how balance sheet items were produced in the system. Traditional process mining techniques only reconstruct processes and pay no regard to the financial dimension. This paper aims to close this gap and to integrate the process view with the accounting view.
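The reconstruction idea described above can be illustrated with a minimal sketch: group financial entries into process instances by following clearing references between documents (here via union-find). The field names and document IDs are illustrative assumptions, not the paper's actual data model or algorithm.

```python
# Hedged sketch: reconstruct process instances from financial entries by
# uniting entries that reference each other (e.g. a payment clearing an
# invoice's open item). Field names are illustrative assumptions.
from collections import defaultdict

entries = [
    {"doc": "INV-1", "clears": None},     # invoice posting
    {"doc": "PAY-1", "clears": "INV-1"},  # payment clearing the invoice
    {"doc": "INV-2", "clears": None},
    {"doc": "PAY-2", "clears": "INV-2"},
]

def reconstruct_instances(entries):
    """Union entries linked by clearing references into process instances."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:          # walk to the root, compressing paths
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for e in entries:
        find(e["doc"])
        if e["clears"]:
            union(e["doc"], e["clears"])

    groups = defaultdict(list)
    for e in entries:
        groups[find(e["doc"])].append(e["doc"])
    return sorted(sorted(g) for g in groups.values())

print(reconstruct_instances(entries))
# [['INV-1', 'PAY-1'], ['INV-2', 'PAY-2']]
```

Each resulting group is one candidate process instance whose entries an auditor could trace back to a balance sheet item.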
CONTINUOUS COMPLIANCE MONITORING IN ERP SYSTEMS - A METHOD FOR IDENTIFYING SEGREGATION OF DUTIES CONFLICTS
Segregation of Duties (SOD) can be seen as one major class of control activities within a company's internal control framework, contributing to the reliability of financial reporting. In recent years, SOD controls in terms of user access rights have attracted particular attention, mostly due to the growing reliance of business processes on ERP systems. This paper presents a method for automatically identifying SOD conflicts in user access rights as one component of a continuous compliance monitoring framework, and further demonstrates the application of the proposed method in a real-world project.
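At its core, identifying SOD conflicts in access rights means checking each user's granted duties against a matrix of incompatible duty pairs. The following sketch shows this idea; the duty names and conflict pairs are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of SOD conflict detection in user access rights.
# Duty names and conflict pairs are illustrative assumptions.

# Pairs of duties that must not be held by the same user.
CONFLICT_MATRIX = {
    frozenset({"create_vendor", "approve_payment"}),
    frozenset({"post_invoice", "approve_payment"}),
}

# User -> set of duties granted via ERP access rights.
user_rights = {
    "alice": {"create_vendor", "post_invoice"},
    "bob": {"create_vendor", "approve_payment"},
    "carol": {"approve_payment"},
}

def find_sod_conflicts(rights, conflicts):
    """Report every user whose rights cover a conflicting pair of duties."""
    findings = []
    for user, duties in rights.items():
        for pair in conflicts:
            if pair <= duties:  # user holds both conflicting duties
                findings.append((user, tuple(sorted(pair))))
    return findings

print(find_sod_conflicts(user_rights, CONFLICT_MATRIX))
# bob holds both create_vendor and approve_payment
```

In a continuous monitoring setting, such a check would be re-run whenever access rights change, with findings fed into the compliance reporting process.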
Towards Automated Analysis of Business Processes for Financial Audits
Financial audits play a significant role in the economy by safeguarding the correctness of published financial information. Public auditors face the challenge of auditing financial statements that are created by increasingly integrated and complex information systems. This paper addresses a specific problem in the auditing process: the analysis and audit of the business processes that produce financial entries. We illustrate results from applying business process mining techniques to extensive test and real-life data, and discuss the insights gained from this application for the development of automated business process analysis methods in the context of financial audits.
Statistical Basics of a Reliable World Wide Web Peer to Peer Storage System
Peer-to-peer networks are highly distributed and unreliable: peers log on and off the network as they see fit, without any overall plan. In a true peer-to-peer setting there are no central nodes that plan the resources of the network or maintain an overview of its state. This paper describes and mathematically analyzes a storage algorithm that allows information to be stored within the network without requiring the originator of the information to stay online. Information is optimally “blurred” within the network, meaning that it remains reconstructable with high probability over a long time interval, yet is stored with as little redundancy as possible. The main focus is to analyze the mathematical and statistical properties of the presented peer-to-peer storage algorithm. Technical procedures are described at a high level and need further refinement; at this stage of research, the paper is therefore primarily statistical peer-to-peer theory.
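The statistical trade-off sketched above, reconstructability with high probability versus minimal redundancy, can be illustrated with a simple model: suppose the information is split into n fragments, any k of which suffice to reconstruct it, and each hosting peer is independently online with probability p. This k-of-n model is an assumption for illustration, not the paper's actual algorithm.

```python
# Hedged sketch: probability that stored information is reconstructable
# under a k-of-n fragment model with independent peer availability p.
from math import comb

def reconstruction_probability(n, k, p):
    """P(at least k of n fragments are on online peers), peers online
    independently with probability p (binomial tail sum)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# With 10 fragments, any 4 sufficing, and peers online 50% of the time:
print(round(reconstruction_probability(10, 4, 0.5), 4))  # 0.8281
```

Raising n relative to k increases availability but also the storage redundancy, which is exactly the tension the abstract describes.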
Participatory Design of Web 2.0 Applications in SME Networks
In increasingly complex and dynamic markets, small and medium-sized enterprises (SMEs) face new challenges, among them innovativeness and technological expertise. To counteract these challenges, SMEs cooperate in corporate networks, in which information and communication technologies are the main drivers; Web 2.0 technologies are particularly important here. Until now, the development and implementation of Web 2.0 applications in SMEs has proceeded independently of the future users. We aim to bridge this gap by developing a participatory procedural model that includes future users from the beginning of the development process and respects SME-specific characteristics.
Tackling Complexity: Process Reconstruction and Graph Transformation for Financial Audits
A key objective of implementing business intelligence tools and methods is to analyze voluminous data and to derive information that would otherwise not be available. Although the overall significance of business intelligence has increased with the general growth of processed and available data, it is almost absent in the auditing industry. Public accountants face the challenge of providing an opinion on financial statements that are based on data produced by the automated processing of countless business transactions in ERP systems. Methods for mining and reconstructing financially relevant process instances can be used as a data analysis tool in the specific context of auditing. In this article we introduce and evaluate an algorithm that effectively reduces the complexity of mined process instances. The presented methods provide part of the foundation for implementing automated analysis and audit procedures that can assist auditors in performing more efficient and effective audits.
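One elementary way to reduce the complexity of mined process instances is to collapse instances with identical activity sequences (variants) into a single representative with a frequency count, so the auditor reviews a handful of variants instead of thousands of instances. This is a generic illustration of the idea, not the paper's specific graph transformation algorithm; the activity names are made up.

```python
# Hedged sketch: collapse mined process instances with identical
# activity sequences into variants with frequencies. Activity names
# are illustrative assumptions.
from collections import Counter

instances = [
    ("invoice", "approve", "pay"),
    ("invoice", "approve", "pay"),
    ("invoice", "pay"),            # deviating variant skipping approval
]

variant_counts = Counter(instances)
print(variant_counts.most_common())
# [(('invoice', 'approve', 'pay'), 2), (('invoice', 'pay'), 1)]
```

Rare variants, such as the payment without approval above, are often exactly the cases an audit should examine more closely.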
Finitely generated free Heyting algebras via Birkhoff duality and coalgebra
Algebras axiomatized entirely by rank 1 axioms are algebras for a functor, and thus the free algebras can be obtained by a direct limit process. Dually, the final coalgebras can be obtained by an inverse limit process. In order to explore the limits of this method we look at Heyting algebras, which have mixed rank 0-1 axiomatizations. We will see that Heyting algebras are special in that they are almost rank 1 axiomatized and can be handled by a slight variant of the rank 1 coalgebraic methods.
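As a brief illustration of the rank terminology, standard in coalgebraic logic though the concrete examples below are not taken from this paper: an equation is rank 1 when every variable occurs under exactly one application of the signature's operations, while rank 0-1 also allows variables that occur under none.

```latex
% Rank 1: every variable sits under exactly one modal operator.
\Box(a \wedge b) = \Box a \wedge \Box b
% Rank 0-1: the variable a occurs both under \Box and outside it.
\Box a \wedge a = \Box a \qquad\text{(i.e. } \Box a \le a\text{)}
```

Heyting algebras are of the second kind, which is why the direct functorial construction of free algebras needs the variant developed in the paper.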
DASC-PM v1.0: A Process Model for Data Science Projects
The topic of data science has attracted considerable attention in many organizations in recent years. However, there is often still great uncertainty about how this discipline is to be delimited from others, what is particular about the course of a data science project, and which competencies must be present to carry out such a project. In the hope of making a small contribution to removing these uncertainties, we developed the present document between April 2019 and February 2020 in an open, virtual working group with representatives from theory and practice; it describes a process model for data science projects, the Data Science Process Model (DASC-PM). The goal was not to develop new approaches, but rather to collect existing knowledge and structure it in a suitable form. The document should be understood as a synthesis of the experience of all participants in this working group.