439 research outputs found
Tackling Different Business Process Perspectives
Business Process Management (BPM) has emerged as a discipline to design, control, analyze, and optimize business operations. Conceptual models lie at the core of BPM. In particular, business process models have been taken up by organizations as a means to describe the main activities that are performed to achieve a specific business goal. Process models generally cover different perspectives that underlie separate yet interrelated representations for analyzing and presenting process information. Being primarily driven by process improvement objectives, traditional business process modeling languages focus on capturing the control flow perspective of business processes, that is, the temporal and logical coordination of activities. Such approaches are usually characterized as “activity-centric”. Nowadays, activity-centric process modeling languages, such as the Business Process Model and Notation (BPMN) standard, are still the most widely used in practice and benefit from industrial tool support. Nevertheless, evidence shows that such process modeling languages still lack support for modeling non-control-flow perspectives, such as the temporal, informational, and decision perspectives, among others. This thesis centres on the BPMN standard and addresses the modeling of the temporal, informational, and decision perspectives of process models, with particular attention to processes enacted in healthcare domains. Despite being partially interrelated, the main contributions of this thesis may be partitioned according to the modeling perspective they concern. The temporal perspective deals with the specification, management, and formal verification of temporal constraints. In this thesis, we address the specification and run-time management of temporal constraints in BPMN, by taking advantage of process modularity and of the event handling mechanisms included in the standard.
Then, we propose three different mappings from BPMN to formal models, to validate the behavior of the proposed process models and to check whether they are dynamically controllable. The informational perspective represents the information entities consumed, produced, or manipulated by a process. This thesis focuses on the conceptual connection between processes and data, borrowing concepts from the database domain to enable the representation of which part of a database schema is accessed by a certain process activity. This novel conceptual view is then employed to detect potential data inconsistencies arising when the same data are accessed erroneously by different process activities. The decision perspective encompasses the modeling of the decision-making related to a process, considering where decisions are made in the process and how decision outcomes affect process execution. In this thesis, we investigate the use of the Decision Model and Notation (DMN) standard in conjunction with BPMN, starting from a pattern-based approach to ease the derivation of DMN decision models from the data represented in BPMN processes. In addition, we propose a methodology that focuses on the integrated use of BPMN and DMN for modeling decision-intensive care pathways in a real-world application domain.
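The idea of detecting data inconsistencies from activity-level schema access can be illustrated with a small sketch. The activities, schema attributes, and conflict rule below are invented for demonstration and are not taken from the thesis; they only show how declared read/write sets could be cross-checked pairwise.

```python
# Hypothetical sketch: flagging potential data conflicts when several
# process activities access the same part of a database schema.
# Activity names, attributes, and access modes are illustrative only.

from itertools import combinations

# Each activity declares which schema attributes it reads or writes.
accesses = {
    "Register Patient": {("Patient.name", "write"), ("Patient.id", "write")},
    "Schedule Visit":   {("Patient.id", "read"), ("Visit.date", "write")},
    "Update Record":    {("Patient.name", "write"), ("Visit.date", "read")},
}

def conflicts(accesses):
    """Report attribute-level write/write and write/read overlaps between activity pairs."""
    found = []
    for (a1, s1), (a2, s2) in combinations(accesses.items(), 2):
        for attr1, mode1 in s1:
            for attr2, mode2 in s2:
                if attr1 == attr2 and "write" in (mode1, mode2):
                    found.append((a1, a2, attr1, f"{mode1}/{mode2}"))
    return found

for a1, a2, attr, kind in conflicts(accesses):
    print(f"{a1} <-> {a2} on {attr} ({kind})")
```

A real analysis would of course work against the actual schema annotations of the process model rather than hand-written sets.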
Dynamic Workflow-Engine
We present and assess the novel thesis that a language commonly accepted for requirements elicitation is worth using for the configuration of business process automation systems. We suggest that Cockburn's well accepted requirements elicitation language - the written use case language, with a few extensions - ought to be used as a workflow modelling language. We evaluate our thesis by studying in detail an industrial implementation of a workflow engine whose workflow modelling language is our extended written use case language; by surveying the variety of business processes that can be expressed by our extended written use case language; and by empirically assessing the readability of our extended written use case language. Our contribution is sixfold: (i) an architecture with which a workflow engine whose workflow modelling language is an extended written use case language can be built, configured, used and monitored; (ii) a detailed study of an industrial implementation of a use case oriented workflow engine; (iii) an assessment of the expressive power of the extended written use case language, based on a known pattern catalogue; (iv) another assessment of the expressive power of the extended written use case language, based on an equivalence to a formal model that is known to be expressive; (v) an empirical evaluation, in an industrial context, of the readability of our extended written use case language in comparison to the readability of the incumbent graphical languages; and (vi) reflections upon the state of the art, methodologies, our results, and opportunities for further research.
Our conclusions are that a workflow engine whose workflow modelling language is an extended written use case language can be built, configured, used and monitored; that in an environment that calls upon an extended written use case language as a workflow modelling language, the transition between the modelling and verification state, enactment state, and monitoring state is dynamic; that a use case oriented workflow engine was implemented in industrial settings and that the approach was well accepted by management, workflow configuration officers and workflow participants alike; that the extended written use case language is quite expressive, as much as the incumbent graphical languages; and that in an industrial context an extended written use case language is an efficient communication device amongst stakeholders.
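The core idea of enacting a written use case directly can be sketched in a few lines. This is not the industrial engine from the thesis: the step format, role names, and the trivial "work item" enactment below are assumptions chosen to show how numbered use case steps could double as a workflow definition.

```python
# Illustrative sketch: a Cockburn-style written use case treated as an
# executable workflow. Format and roles are invented for demonstration.

import re

USE_CASE = """
1. Clerk enters the order
2. System validates the order
3. Manager approves the order
"""

STEP = re.compile(r"(\d+)\.\s+(\w+)\s+(.*)")

def parse(text):
    """Turn numbered written-use-case steps into (number, actor, action) tuples."""
    return [(int(n), actor, action)
            for n, actor, action in
            (m.groups() for m in map(STEP.match, text.strip().splitlines()) if m)]

def enact(steps):
    """A trivial enactment loop: each step becomes a work item for its actor."""
    return [f"work item for {actor}: {action}" for _, actor, action in steps]

for item in enact(parse(USE_CASE)):
    print(item)
```

A real engine would also need the thesis's extensions for alternatives and failure handling, which written use cases express as extension sections.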
Performance Problem Diagnostics by Systematic Experimentation
Diagnostics of performance problems requires deep expertise in performance engineering and entails a high manual effort. As a consequence, performance evaluations are postponed to the last minute of the development process. In this thesis, we introduce an automatic, experiment-based approach for performance problem diagnostics in enterprise software systems. With this approach, performance engineers can concentrate on their core competences instead of conducting repetitive tasks.
Performance Problem Diagnostics by Systematic Experimentation
In this book, we introduce an automatic, experiment-based approach for performance problem diagnostics in enterprise software systems. The proposed approach systematically searches for the root causes of detected performance problems by executing a series of systematic performance tests. The approach is evaluated in various case studies showing that it is applicable to a wide range of contexts.
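The systematic-search idea can be sketched as a drill-down over a taxonomy of performance problems. The taxonomy, the node names, and the stand-in "experiment results" below are invented for illustration; the actual approach runs real performance tests at each step.

```python
# Toy sketch of systematic root-cause search: problems form a taxonomy, a
# (hypothetical) detection experiment runs per node, and the search descends
# only into subtrees whose symptom was detected. All data here is invented.

TAXONOMY = {
    "performance problem": ["high response time", "low throughput"],
    "high response time": ["database bottleneck", "excessive logging"],
    "low throughput": ["thread pool exhaustion"],
}

# Stand-in for real measurements: which symptoms the experiments would detect.
DETECTED = {"performance problem", "high response time", "database bottleneck"}

def diagnose(node="performance problem"):
    """Depth-first drill-down: run the experiment for a node; recurse on hits."""
    if node not in DETECTED:
        return []
    children = TAXONOMY.get(node, [])
    causes = [c for child in children for c in diagnose(child)]
    # A detected node with no detected children is reported as a root cause.
    return causes or [node]

print(diagnose())
```

The benefit mirrors the book's motivation: the expensive experiments run only where a higher-level symptom was already confirmed, rather than exhaustively.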
Software agents & human behavior
People make important decisions in emergencies. Often these decisions involve high stakes in terms of lives and property. The Bhopal disaster (1984), the Piper Alpha disaster (1988), the Montara blowout (2009), and the explosion on Deepwater Horizon (2010) are a few examples among many industrial incidents. In these incidents, those who were in charge made critical decisions under various mental stressors such as time, fatigue, and panic. This thesis presents an application of naturalistic decision-making (NDM), a recent decision-making theory inspired by experts making decisions in real emergencies.
This study develops an intelligent agent model that can be programed to make human-like decisions in emergencies. The agent model has three major components: (1) A spatial learning module, which the agent uses to learn escape routes that are designated routes in a facility for emergency evacuation, (2) a situation recognition module, which is used to recognize or distinguish among evolving emergency situations, and (3) a decision-support module, which exploits modules in (1) and (2), and implements an NDM based decision-logic for producing human-like decisions in emergencies.
The spatial learning module comprises a generalized stochastic Petri net-based model of spatial learning. The model classifies routes into five classes based on landmarks, which are objects with salient spatial features. These classes deal with the question of how difficult a landmark turns out to be when an agent observes it the first time during a route traversal. An extension to the spatial learning model is also proposed where the question of how successive route traversals may impact retention of a route in the agent’s memory is investigated.
The situation recognition module uses a Markov logic network (MLN) to define different offshore emergency situations using first-order logic (FOL) rules. The purpose of this module is to give the agent the necessary experience of dealing with emergencies. The potential of this module lies in the fact that different training samples can be used to produce agents having different experience or capability to deal with an emergency situation. To demonstrate this fact, two agents were developed and trained using two different sets of empirical observations. The two agents are found to differ in recognizing the prepare-to-abandon-platform alarm (PAPA), and to be similar to each other in recognizing an emergency using other cues.
Finally, the decision-support module is proposed as a union of the spatial learning module, the situation recognition module, and NDM-based decision-logic. The NDM-based decision-logic is inspired by Klein’s (1998) recognition-primed decision-making (RPDM) model. The agent’s attitudes related to decision-making as per the RPDM are represented in the form of belief, desire, and intention (BDI). The decision-logic involves recognition of situations based on experience (as proposed in the situation recognition module), and recognition of situations based on classification, where ontological classification is used to guide the agent in cases where the agent’s experience of confronting a situation is inadequate. At the planning stage, the decision-logic exploits the agent’s spatial knowledge (as proposed in the spatial learning module) about the layout of the environment to make adjustments in the course of actions relevant to a decision that has already been made as a by-product of situation recognition.
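A deliberately simplified stand-in can convey what cue-based situation recognition looks like. Real Markov logic network inference is probabilistic and far richer than this; the weighted-rule scorer, rule weights, and cue names below are all invented for illustration.

```python
# Naive stand-in for MLN-style situation recognition: each candidate
# situation has weighted rules over observed cues; a rule contributes its
# weight when all of its cues are present. All rules and weights are invented.

RULES = {
    "fire":    [({"smoke"}, 1.5), ({"alarm_general"}, 1.0)],
    "abandon": [({"alarm_papa"}, 2.0), ({"muster_order", "alarm_general"}, 1.2)],
}

def recognise(cues, rules=RULES):
    """Score each candidate situation and return the best match with all scores."""
    scores = {s: sum(w for cond, w in rs if cond <= cues) for s, rs in rules.items()}
    return max(scores, key=scores.get), scores

situation, scores = recognise({"smoke", "alarm_general"})
print(situation, scores)
```

Training different agents, as the thesis describes, would correspond here to learning different rule weights from different sets of empirical observations.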
The proposed agent model has the potential to improve the fidelity of virtual training environments by adding agents that exhibit human-like intelligence in performing tasks related to emergency evacuation. Moreover, the basis provided here, an agent representing human fallibility, should not be overlooked in fields such as human reliability analysis.
USSR Space Life Sciences Digest, issue 32
This is the thirty-second issue of NASA's USSR Space Life Sciences Digest. It contains abstracts of 34 journal or conference papers published in Russian and of 4 Soviet monographs. Selected abstracts are illustrated with figures and tables from the original. The abstracts in this issue have been identified as relevant to 18 areas of space biology and medicine. These areas include: adaptation, aviation medicine, biological rhythms, biospherics, cardiovascular and respiratory systems, developmental biology, exobiology, habitability and environmental effects, human performance, hematology, mathematical models, metabolism, microbiology, musculoskeletal system, neurophysiology, operational medicine, and reproductive system.
Colored model based testing for software product lines (CMBT-SWPL)
Over the last decade, the software product line domain has emerged as one of the most promising software development paradigms. The main benefits of a software product line approach are improvements in productivity, time to market, product quality, and customer satisfaction. Therefore, one topic that needs greater emphasis is testing of software product lines to achieve the required software quality assurance. Our concern is how to test a software product line as early as possible in order to detect errors, because the cost of errors detected in early phases is much lower than the cost of errors detected later. The method suggested in this thesis is a model-based, reuse-oriented test technique called Colored Model Based Testing for Software Product Lines (CMBT-SWPL). CMBT-SWPL is a requirements-based approach for efficiently generating tests for products in a software product line. This testing approach is used for validation and verification of product lines. It is a novel approach to test product lines using a Colored State Chart (CSC), which considers variability early in the product line development process. More precisely, the variability will be introduced in the main components of the CSC. Accordingly, the variability is preserved in test cases, as they are generated from colored test models automatically. During domain engineering, the CSC is derived from the feature model. By coloring the State Chart, the behavior of several product line variants can be modeled simultaneously in a single diagram, thus addressing product line variability early. The CSC represents the test model, from which test cases are derived using statistical testing. During application engineering, these colored test models are customized for a specific application of the product line. At the end of this test process, the test cases are generated again using statistical testing, executed, and the test results are ready for evaluation. In addition, the CSC will be transformed to a Colored Petri Net (CPN) for verification and simulation purposes. The main gains of applying the CMBT-SWPL method are early detection of defects in requirements, such as ambiguities, incompleteness, and redundancy, which is then reflected in savings in test effort, time, and development and maintenance costs.
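The "coloring" idea can be sketched concretely: if each transition of a shared state chart carries the set of product variants (colors) it belongs to, one model covers several products, and projecting by color yields per-product test models. The states, actions, and colors below are invented; the thesis's CSC formalism is considerably richer.

```python
# Hypothetical sketch of a colored state chart: transitions are tagged with
# the product variants (colors) they belong to. Projecting by color gives a
# per-product model from which test sequences can be enumerated.
# All states, actions, and colors here are invented for illustration.

TRANSITIONS = [
    # (source, target, action, colors)
    ("Idle", "Auth", "login",    {"basic", "pro"}),
    ("Auth", "Shop", "browse",   {"basic", "pro"}),
    ("Shop", "Pay",  "checkout", {"basic", "pro"}),
    ("Shop", "Stats", "report",  {"pro"}),  # pro-only feature
]

def project(transitions, color):
    """Keep only the transitions active for one product variant."""
    return [t for t in transitions if color in t[3]]

def paths(transitions, state, path=()):
    """Enumerate simple action sequences from a start state (test-case skeletons)."""
    out = [path] if path else []
    for src, dst, act, _ in transitions:
        if src == state and act not in path:
            out += paths(transitions, dst, path + (act,))
    return out

for p in paths(project(TRANSITIONS, "pro"), "Idle"):
    print(p)
```

Statistical testing, as used in the thesis, would then sample from such paths according to a usage profile rather than enumerating them exhaustively.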
A framework for knowledge-based team training
Teamwork is crucial to many disciplines, from activities such as organized sports to
economic and military organizations. Team training is difficult and as yet there are few
automated tools to assist in the training task. As with the training of individuals,
effective training depends upon practice and proper training protocols.
In this research, we defined a team training framework for constructing team
training systems in domains involving command and control teams. This team training
framework provides an underlying model of teamwork and programming interfaces to
provide services that ease the construction of team training systems. Also, the
framework enables experimentation with training protocols and coaching to be
conducted more readily, as team training systems incorporating new protocols or
coaching capabilities can be more easily built.
For this framework (called CAST-ITT) we developed an underlying intelligent
agent architecture known as CAST (Collaborative Agents Simulating Teamwork).
CAST provides the underlying model of teamwork and agents to simulate virtual team
members. CAST-ITT (the Intelligent Team Trainer) uses CAST to monitor trainees and to support performance assessment and coaching of each trainee as a member of a team. CAST includes a language for
describing teamwork called MALLET (Multi-Agent Logic Language for Encoding
Teamwork). MALLET allows us to codify the behaviors of team members (both as
virtual agents and as trainees) for use by CAST.
In demonstrating CAST-ITT through an implemented team training system
called TWP-DDD, we have shown that a team training system built on the
CAST-ITT framework performs well and can be used to achieve real-world
training objectives.
Research Paper: Process Mining and Synthetic Health Data: Reflections and Lessons Learnt
Analysing the treatment pathways in real-world health data can provide valuable insight for clinicians and decision-makers. However, the procedures for acquiring real-world data for research can be restrictive and time-consuming, and risk disclosing identifiable information. Synthetic data might enable representative analysis without direct access to sensitive data. In the first part of our paper, we propose an approach for grading synthetic data for process analysis based on its fidelity to relationships found in real-world data. In the second part, we apply our grading approach by assessing cancer patient pathways in a synthetic healthcare dataset (the Simulacrum, provided by the English National Cancer Registration and Analysis Service) using process mining. Visualisations of the patient pathways within the synthetic data appear plausible, showing relationships between events confirmed in the underlying non-synthetic data. Data quality issues are also present within the synthetic data, reflecting real-world problems and artefacts from the synthetic dataset’s creation. Process mining of synthetic data in healthcare is an emerging field with novel challenges. We conclude that researchers should be aware of the risks when extrapolating results produced from research on synthetic data to real-world scenarios, and should assess findings with analysts who are able to view the underlying data.
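A minimal process-mining pass over an event log can show the kind of first analysis one might run on such a dataset: building a directly-follows graph of patient pathways. The toy log below is invented and far simpler than the Simulacrum data.

```python
# Minimal process-mining sketch: a directly-follows graph counts how often
# one activity immediately follows another across traces. The toy log of
# patient pathways below is invented for illustration.

from collections import Counter

# Each trace is one patient pathway: an ordered list of events.
log = [
    ["diagnosis", "surgery", "chemo", "follow-up"],
    ["diagnosis", "chemo", "follow-up"],
    ["diagnosis", "surgery", "follow-up"],
]

def directly_follows(log):
    """Count how often activity b immediately follows activity a across traces."""
    df = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            df[(a, b)] += 1
    return df

for (a, b), n in sorted(directly_follows(log).items()):
    print(f"{a} -> {b}: {n}")
```

Grading synthetic data, as the paper proposes, could then compare such relationship counts between the synthetic and the underlying real-world log.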