
    Täpne ja tõhus protsessimudelite automaatne koostamine sündmuslogidest

    Every day, companies' employees perform activities with the goal of providing services (or products) to their customers. A sequence of such activities is known as a business process.
The quality and the efficiency of a business process directly influence the customer experience. In a competitive business environment, achieving a great customer experience is fundamental to being a successful company. For this reason, companies are interested in identifying their business processes in order to analyse and improve them. To analyse and improve a business process, it is generally useful to first write it down in the form of a graphical representation, namely a business process model. Drawing such process models manually is time-consuming because of the time it takes to collect detailed information about the execution of the process. Also, manually drawn process models are often incomplete, because it is difficult to uncover every possible execution path in the process via manual data collection. Automated process discovery allows business analysts to exploit process execution data to automatically discover process models. Discovering high-quality process models is extremely important to reduce the time spent enhancing them and to avoid mistakes during process analysis. The quality of an automatically discovered process model depends on both the input data and the automated process discovery application that is used. In this thesis, we provide an overview of the available algorithms to perform automated process discovery. We identify deficiencies in existing algorithms, and we propose a new algorithm, called Split Miner, which is faster and consistently discovers more accurate process models than existing algorithms. We also propose a new approach to measure the accuracy of automatically discovered process models in a fine-grained manner, and we use this new measurement approach to optimize the accuracy of automatically discovered process models.
    https://www.ester.ee/record=b530061
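The starting point of most automated discovery algorithms, Split Miner among them, is a directly-follows graph: a tally of how often each activity is immediately followed by another across the traces of the event log. A minimal sketch of that counting step, using a made-up toy log (the function name and log are illustrative, not the thesis's code):

```python
from collections import Counter

def directly_follows(log):
    """Count directly-follows pairs (a, b) over all traces.

    `log` is a list of traces; each trace is a list of activity labels
    ordered by occurrence time.
    """
    dfg = Counter()
    for trace in log:
        # zip pairs each event with its immediate successor
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

# A toy event log with two execution paths of the same process.
log = [
    ["register", "check", "approve", "notify"],
    ["register", "check", "reject", "notify"],
    ["register", "check", "approve", "notify"],
]

dfg = directly_follows(log)
print(dfg[("register", "check")])  # 3: every trace starts this way
print(dfg[("check", "reject")])    # 1: the rare exception path
```

Low-frequency arcs such as `("check", "reject")` are exactly the exceptional behaviour that manual modelling tends to miss, which is the motivation the abstract gives for discovering models from data.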

    Agenda-driven case management

    Unlike routine work, knowledge-intensive business processes, i.e. processes with a large share of knowledge-intensive activities carried out by so-called knowledge workers, are difficult to support with IT. This is mainly because little or nothing is known in advance about the concrete solution path and the data it requires. Two main reasons for this are that, first, the course of the process depends on a great many parameters and that, second, these parameters can also change over time. Such processes can be observed, among other places, at providers of social benefits and in the private insurance industry. There, knowledge workers known as case managers steer complicated benefit cases and coordinate the required measures so that the benefits are delivered economically and according to need. Thanks to their experience, their broad expertise, and their strong network with other experts, case managers are able to identify the essential parameters of the processes, continuously track changes to them, and adapt the course of the process accordingly. As the dissertation shows, knowledge-intensive processes cannot be analysed with conventional process mining methods or supported by workflow management systems. New concepts and alternative approaches are therefore presented and evaluated in order to make such processes analysable and to support case managers in executing them.
The central contributions of the dissertation are a metamodel with the basic adCM concepts, a concept for cross-application logging of a case manager's activities based on this metamodel (monitoring), a method for measuring event-log complexity, a method for extracting knowledge about the process from the event logs (discovery), and a tool architecture for the operational support of knowledge workers that provides knowledge about the process in a context-sensitive way.
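The abstract does not spell out the dissertation's event-log complexity measure, but one simple proxy for how unstructured (knowledge-intensive) a log is counts how evenly executions spread over distinct trace variants. The sketch below uses Shannon entropy over variants; this metric and the toy logs are assumptions for illustration, not the thesis's actual method:

```python
from collections import Counter
from math import log2

def variant_entropy(traces):
    """Shannon entropy over trace variants: 0 for a fully routine log
    (every execution identical), higher when executions rarely repeat.
    Illustrative proxy only, not the dissertation's metric.
    """
    counts = Counter(tuple(t) for t in traces)
    n = len(traces)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A routine log: eight identical executions.
routine = [["a", "b", "c"]] * 8

# A knowledge-work-like log: eight executions, all different.
knowledge_work = [["a", "b"], ["b", "a"], ["a", "c"], ["c", "b"],
                  ["b", "c"], ["a", "b", "c"], ["c"], ["b"]]

print(variant_entropy(routine))
print(variant_entropy(knowledge_work))
```

With eight equally likely distinct variants the entropy is 3 bits, against 0 for the routine log, which matches the intuition that a case manager's process offers far less repeatable structure for conventional process mining to exploit.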

    An experimental evaluation of passage-based process discovery

    In the area of process mining, the ILP Miner is known for the fact that it always returns a Petri net that perfectly fits a given event log. However, the downside of the ILP Miner is that its complexity is exponential in the number of event classes in that event log. As a result, the ILP Miner may take a very long time to return a Petri net. Partitioning the traces in the event log over multiple event logs does not really alleviate this problem: like for most process discovery algorithms, the complexity is linear in the size of the event log and exponential in the number of event classes (i.e., distinct activities). Hence, the potential gain from partitioning the event classes is much higher. This paper proposes to use so-called passages to split up the event classes over multiple event logs, and shows the results for seven large event logs. The results show that the use of passages does indeed alleviate the complexity, but that much hinges on the size of the largest passage detected: the smaller this passage, the better the run time.
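The asymmetry the abstract describes, i.e. that splitting event classes pays off where splitting traces does not, follows directly from a cost of the form (log size) x 2^(event classes). A toy cost model makes the arithmetic concrete (the function and constants are illustrative, not measured from the ILP Miner):

```python
def ilp_cost(num_traces, num_classes):
    """Toy cost model matching the stated complexity claim:
    linear in log size, exponential in the number of event classes.
    Constants are illustrative, not measured."""
    return num_traces * 2 ** num_classes

traces, classes = 1000, 30
baseline = ilp_cost(traces, classes)

# Splitting the traces over two logs only halves the linear factor,
# so total work is unchanged:
split_traces = 2 * ilp_cost(traces // 2, classes)

# Splitting the event classes over two logs (as passages aim to do)
# halves the exponent instead:
split_classes = 2 * ilp_cost(traces, classes // 2)

print(split_traces / baseline)   # 1.0 -> no gain
print(split_classes / baseline)  # ~6e-05 -> huge gain
```

This also shows why everything hinges on the largest passage: if one passage still contains most of the event classes, the exponent barely shrinks and the gain evaporates.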

    An experimental evaluation of passage-based process discovery

    In the area of process mining, the ILP Miner is known for the fact that it always returns a Petri net that perfectly fits a given event log. Like for most process discovery algorithms, its complexity is linear in the size of the event log and exponential in the number of event classes (i.e., distinct activities). As a result, the potential gain from partitioning the event classes is much higher than the potential gain from partitioning the traces in the event log over multiple event logs. This paper proposes to use so-called passages to split up the event classes over multiple event logs, and shows the results for seven large real-life event logs and one artificial event log: the use of passages does indeed alleviate the complexity, but much hinges on the size of the largest passage detected.
