Soil erosion and sediment yield in the upper Yangtze, China
Soil erosion and sedimentation are key environmental problems in the Upper Yangtze because of the ongoing Three Gorges Project (TGP), the largest hydro-power project in the world. There is growing concern about the rapid increase of soil erosion over the last few decades and its consequences for potential sedimentation in the reservoir. The study aims to examine controls on the spatial and temporal distributions of sediment transfer within the Upper Yangtze and the hydrological consequences of land use changes, using varied approaches at different catchment scales. First, soil erosion and sedimentation are examined using the radionuclide Cs-137 as a tracer within a small reservoir catchment in the Three Gorges Area. The results indicate that soil erosion on sloping arable land and the rates of reservoir sedimentation have been severe during the past 40 years, mainly due to cultivation on steep slopes. Changes in reservoir sedimentation rates are mainly attributed to land use changes. The suitability of the Cs-137 technique for investigating soil erosion and sedimentation in intensely cultivated subtropical environments is also considered. The use of the technique for erosion investigation may be limited by the abundance of coarse soil textures, uncertainty about fallout deposition rates and the high incidence of human disturbance, but the technique shows promise for sedimentation investigation, since a few dating horizons can be identified. Second, sediment and runoff measurement data spanning around 30 years from over 250 hydrological stations within the Upper Yangtze have been examined within a GIS framework. The dataset has been integrated with catchment characteristics derived from a variety of environmental datasets and manipulated with Arc/Info GIS.
The analysis of the sediment load data has permitted identification of the most important locations of sediment sources, the shifting pattern of source areas in relation to land use change, and sub-catchments exhibiting trends in sediment yield after correction for hydrological variability. The study demonstrates the scale dependency of sediment yield in both the identification of temporal change and the modelling of relationships between sediment yield and environmental variables, suggesting that treatment of the scale problem is crucial for spatio-temporal studies of sediment yield.
Folders: A Visual Organization System for MIT App Inventor
In blocks programming languages, such as MIT App Inventor, programs are built by composing puzzle-shaped fragments on a 2D workspace. Their visual nature makes programming more accessible to novices, but it also has numerous drawbacks. Users must decide where to place blocks on the workspace, and these placements may require the reorganization of other blocks. Block representations are less space efficient than their textual equivalents. Finally, the fundamental 2D nature of the blocks workspace makes it more challenging to search and navigate than the traditional linear workflow. Because of these barriers, users have difficulty creating and navigating complex programs.
In order to address these drawbacks, I have developed Folders, a visual organization system for App Inventor. Folders, which are modeled after the folders of the hierarchical desktop metaphor, allow users to nest blocks within them and solve many of the aforementioned problems. First, users can use Folders, rather than spatial closeness, to place and organize blocks, thereby explicitly indicating a relationship between them. Second, Folders allow users to selectively hide and show particular groups of blocks, addressing the issue of limited visible space. Lastly, users are already familiar with the folder metaphor from other applications, so their introduction does not complicate App Inventor.
Unfortunately, Folders also introduce new obstacles. Users might expect that putting blocks into Folders removes them from the main workspace semantically. However, Folders are only for organizing blocks and decluttering the workspace, and their contained blocks are still considered part of the main workspace. Furthermore, Folders exacerbate the search and navigation problem. Since blocks can now be hidden in collapsed Folders, finding a usage or declaration of a variable, procedure, or component can be more difficult. I have received preliminary feedback on my initial implementation of Folders and am designing a user study to evaluate my Folders system.
Identifying Patient Groups based on Frequent Patterns of Patient Samples
Grouping patients meaningfully can give insights into the different types of
patients, their needs, and their priorities. Finding meaningful groups is,
however, very challenging, as background knowledge is often required to
determine what a useful grouping is. In this paper we propose an approach that
finds groups of patients based on a small sample of positive examples given by
a domain expert. Because of this, the approach requires very limited effort
from the domain experts. The approach groups patients based on the activities
and diagnostic/billing codes within their health pathways. To define such a
grouping from the sample of patients efficiently, frequent patterns of
activities are discovered and used to measure the similarity between the care
pathways of other patients and those of the patients in the sample group. This
results in an insightful definition of the group. The proposed approach is
evaluated using several datasets obtained from a large university medical
center. The evaluation shows F1-scores of around 0.7 for grouping kidney
injury and around 0.6 for diabetes.
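The similarity computation described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual method: it assumes, hypothetically, that "frequent patterns" are activity n-grams occurring in at least a given share of the sample pathways, and all activity names are invented.

```python
def frequent_patterns(sample_pathways, n=2, min_support=0.5):
    """Collect activity n-grams that appear in at least `min_support`
    (as a fraction) of the sample patients' pathways."""
    counts = {}
    for pathway in sample_pathways:
        grams = {tuple(pathway[i:i + n]) for i in range(len(pathway) - n + 1)}
        for g in grams:
            counts[g] = counts.get(g, 0) + 1
    threshold = min_support * len(sample_pathways)
    return {g for g, c in counts.items() if c >= threshold}

def similarity(pathway, patterns, n=2):
    """Fraction of the frequent patterns that also occur in `pathway`."""
    if not patterns:
        return 0.0
    grams = {tuple(pathway[i:i + n]) for i in range(len(pathway) - n + 1)}
    return len(patterns & grams) / len(patterns)

# Hypothetical care pathways (sequences of activity/billing codes).
sample = [["reg", "lab", "dialysis", "discharge"],
          ["reg", "lab", "dialysis", "icu", "discharge"]]
patterns = frequent_patterns(sample, n=2, min_support=1.0)
print(similarity(["reg", "lab", "dialysis", "discharge"], patterns))  # 1.0
```

Other patients could then be assigned to the group whenever their similarity exceeds a chosen cutoff.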
The Effects of Transformational Leadership on Employee’s Pro-social Rule Breaking
The construct of pro-social rule breaking occupies an important, but largely neglected, position within existing frameworks of organizational deviance. Pro-social rule breaking (PSRB) is a form of constructive deviance characterized by volitional rule breaking in the interest of the organization or its stakeholders. Using survey data collected from 252 employees in different organizations in China, the researchers empirically examine the relationship between transformational leadership and employees' pro-social rule breaking and the mediating role of job autonomy. Results indicate that transformational leadership is positively related to pro-social rule breaking, and that job autonomy fully mediates the relationship between transformational leadership and employees' pro-social rule breaking. Theoretical and practical implications are discussed, and a set of future research directions is offered.
Turning Logs into Lumber: Preprocessing Tasks in Process Mining
Event logs are invaluable for conducting process mining projects, offering
insights into process improvement and data-driven decision-making. However,
data quality issues affect the correctness and trustworthiness of these
insights, making preprocessing tasks a necessity. Despite their recognized
importance, the execution of preprocessing tasks remains ad hoc and lacks
structured support. This paper presents a systematic literature review that establishes a
comprehensive repository of preprocessing tasks and their usage in case
studies. We identify six high-level and 20 low-level preprocessing tasks in
case studies. Log filtering, transformation, and abstraction are commonly used,
while log enriching, integration, and reduction are less frequent. These
results can be considered a first step in contributing to more structured,
transparent event log preprocessing, enhancing process mining reliability. Comment: Accepted by EdbA'23 workshop, co-located with ICPM 202
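As a hedged illustration of one of the common low-level tasks, log filtering, the sketch below drops cases that never reach a designated end activity. The event-log shape and all activity names are hypothetical, not taken from the reviewed case studies.

```python
# Each event is a (case_id, activity) pair; a hypothetical toy log.
log = [
    ("c1", "register"), ("c1", "review"), ("c1", "close"),
    ("c2", "register"), ("c2", "review"),          # case never closed
    ("c3", "register"), ("c3", "close"),
]

def filter_complete_cases(events, end_activity="close"):
    """Log filtering: keep only events of cases that reach `end_activity`."""
    complete = {cid for cid, act in events if act == end_activity}
    return [(cid, act) for cid, act in events if cid in complete]

filtered = filter_complete_cases(log)
print(sorted({cid for cid, _ in filtered}))  # ['c1', 'c3'] — c2 is dropped
```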
CREATED: Generating Viable Counterfactual Sequences for Predictive Process Analytics
Predictive process analytics focuses on predicting future states, such as the
outcome of running process instances. These techniques often use machine
learning models or deep learning models (such as LSTM) to make such
predictions. However, these deep models are complex and difficult for users to
understand. Counterfactuals answer "what-if" questions, which are used to
understand the reasoning behind predictions. For example: what if, instead
of emailing the customers, they had been called? Would this alternative lead
to a different outcome? Current methods to generate counterfactual sequences
either do not take the process behavior into account, generating invalid or
infeasible counterfactual process instances, or rely heavily on
domain knowledge. In this work, we propose a general framework that uses
evolutionary methods to generate counterfactual sequences. Our framework does
not require domain knowledge. Instead, we propose to train a Markov model to
compute the feasibility of generated counterfactual sequences and adapt three
other measures (delta in outcome prediction, similarity, and sparsity) to
ensure their overall viability. The evaluation shows that we generate viable
counterfactual sequences, outperform baseline methods in viability, and yield
similar results when compared to the state-of-the-art method that requires
domain knowledge.
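The feasibility component described above, a Markov model trained on the log, can be sketched as follows. This is a minimal first-order illustration under assumed details; the log, the activity names, and the exact probability estimation are not specified in the abstract.

```python
from collections import defaultdict

def train_markov(sequences):
    """Estimate first-order transition probabilities from an event log."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        padded = ["<start>"] + list(seq) + ["<end>"]
        for a, b in zip(padded, padded[1:]):
            counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

def feasibility(model, seq):
    """Probability of a candidate counterfactual sequence under the model
    (0.0 as soon as it uses a transition never seen in the log)."""
    padded = ["<start>"] + list(seq) + ["<end>"]
    p = 1.0
    for a, b in zip(padded, padded[1:]):
        p *= model.get(a, {}).get(b, 0.0)
    return p

# Hypothetical process instances.
log = [["register", "email", "pay"], ["register", "call", "pay"]]
model = train_markov(log)
print(feasibility(model, ["register", "call", "pay"]))  # 0.5 — seen behaviour
print(feasibility(model, ["pay", "register"]))          # 0.0 — infeasible
```

In the framework, such a feasibility score would be combined with the other viability measures (outcome delta, similarity, sparsity) inside the evolutionary search.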
Measuring the Stability of Process Outcome Predictions in Online Settings
Predictive Process Monitoring aims to forecast the future progress of process
instances using historical event data. As predictive process monitoring is
increasingly applied in online settings to enable timely interventions,
evaluating the performance of the underlying models becomes crucial for
ensuring their consistency and reliability over time. This is especially
important in high-risk business scenarios where incorrect predictions may have
severe consequences. However, predictive models are currently evaluated
using a single, aggregated value or a time-series visualization, which makes it
challenging to assess their performance and, specifically, their stability over
time. This paper proposes an evaluation framework for assessing the stability
of models for online predictive process monitoring. The framework introduces
four performance meta-measures: the frequency of significant performance drops,
the magnitude of such drops, the recovery rate, and the volatility of
performance. To validate this framework, we applied it to two artificial and
two real-world event logs. The results demonstrate that these meta-measures
facilitate the comparison and selection of predictive models for different
risk-taking scenarios. Such insights are of particular value to enhance
decision-making in dynamic business environments. Comment: 8 pages, 3 figures, Proceedings of the 5th International Conference
on Process Mining (ICPM 2023)
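The four meta-measures can be illustrated with a small sketch over a time series of performance scores. The definitions below are assumptions for illustration only, e.g. a "significant drop" is taken as a decrease beyond a fixed threshold between consecutive evaluation points; they are not the framework's exact formulations.

```python
import statistics

def stability_measures(scores, drop_threshold=0.05):
    """Four meta-measures over a series of performance scores (illustrative)."""
    diffs = [b - a for a, b in zip(scores, scores[1:])]
    # A "drop" is a point-to-point decrease larger than the threshold.
    drops = [(i, -d) for i, d in enumerate(diffs) if d < -drop_threshold]
    # A drop counts as recovered if the score later regains its pre-drop level.
    recovered = sum(
        1 for i, _ in drops if any(s >= scores[i] for s in scores[i + 2:])
    )
    return {
        "drop_frequency": len(drops) / max(len(diffs), 1),
        "drop_magnitude": statistics.mean(m for _, m in drops) if drops else 0.0,
        "recovery_rate": recovered / len(drops) if drops else 1.0,
        "volatility": statistics.pstdev(diffs) if diffs else 0.0,
    }

# Hypothetical online AUC scores of an outcome-prediction model over time.
scores = [0.80, 0.82, 0.70, 0.81, 0.79]
m = stability_measures(scores)
print(m["drop_frequency"])  # 0.25 — one drop over four intervals
```

Comparing these four numbers across candidate models is what would support the risk-dependent model selection mentioned above.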
Seeing the Signs of Workarounds: A Mixed-Methods Approach to the Detection of Nurses’ Process Deviations
Workarounds are intentional deviations from prescribed processes. They are most commonly studied in healthcare settings, where nurses are known for frequently deviating from the intended way of using health information systems. However, workarounds in healthcare have so far been studied only using qualitative methods, such as observations and interviews. We conduct a case study in a Dutch hospital and use a mixed-methods approach that draws not only on interviews and observations, but also on process mining, to detect and analyse eight workarounds that occur in a clinical care process. We contribute to theory by demonstrating that it is possible to use data to determine the occurrence of a rich variety of workarounds found using qualitative methods. Practically, this implies that workarounds that are identified qualitatively can be further analysed and monitored using quantitative methods. Once identified, workarounds also provide an attractive starting point for organisational learning and improvement.
Uncovering Complex Relations in Patient Pathways based on Statistics: the Impact of Clinical Actions
Process mining is a family of techniques that can aid healthcare organizations in improving their processes. Most existing process mining techniques do not provide insights into the impact that activities can have on the process. Some novel techniques try to address this issue, but they are either not generic in their approach or cannot provide insights into complex relations in organizational processes. We propose a novel and generic approach with the goal of producing insights into statistical relations within healthcare processes. We apply the approach to a public data set on sepsis in an emergency room. We find that the hospital might optimize its process in two respects: (1) the cost-benefit balance of patient care, by reconsidering its activities in terms of continuous monitoring and substance administration, and (2) its policies on discharging patients, to ensure patients are not discharged too early and do not return to the emergency room.
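One simple kind of statistical relation between a clinical action and an outcome, comparing outcome rates for cases with and without a given activity, can be sketched as below. The data and activity names are hypothetical, and this is only a toy stand-in for the paper's more general approach.

```python
# Hypothetical sepsis cases: the activities performed, and whether the
# patient returned to the emergency room after discharge.
cases = [
    {"activities": {"triage", "iv_antibiotics", "monitoring"}, "returned": False},
    {"activities": {"triage", "iv_antibiotics"}, "returned": True},
    {"activities": {"triage", "monitoring"}, "returned": False},
    {"activities": {"triage"}, "returned": True},
]

def outcome_rate_by_activity(cases, activity, outcome="returned"):
    """Outcome rate for cases with vs. without the given activity."""
    with_act = [c[outcome] for c in cases if activity in c["activities"]]
    without = [c[outcome] for c in cases if activity not in c["activities"]]
    rate = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return rate(with_act), rate(without)

print(outcome_rate_by_activity(cases, "monitoring"))  # (0.0, 1.0)
```

A large gap between the two rates flags the activity for closer (and properly controlled) statistical analysis; on its own it shows association, not causation.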
Continuous Performance Evaluation for Business Process Outcome Monitoring
While a few approaches to online predictive monitoring have focused on concept drift model adaptation, none have considered in depth the issue of performance evaluation for online process outcome prediction. Without such continuous evaluation, users may be unaware of the performance of predictive models, resulting in inaccurate and misleading predictions. This paper fills this gap by proposing a framework for evaluating online process outcome predictions, comprising two different evaluation methods. These methods are partly inspired by the literature on streaming classification with delayed labels and complement each other to provide a comprehensive evaluation of process monitoring techniques: one focuses on real-time performance evaluation, i.e., evaluating the performance of the most recent predictions, whereas the other focuses on progress-based evaluation, i.e., evaluating the ability of a model to output correct predictions at different prefix lengths. We present an evaluation involving three publicly available event logs, including a log characterised by concept drift.
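The progress-based evaluation idea can be sketched minimally: group predictions by the prefix length at which they were made and compute accuracy per group. The data shape and labels below are hypothetical assumptions, not the framework's actual interface.

```python
from collections import defaultdict

def progress_based_accuracy(predictions):
    """Accuracy per prefix length.
    `predictions` is a list of (prefix_length, predicted, actual) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for k, pred, actual in predictions:
        totals[k] += 1
        hits[k] += int(pred == actual)
    return {k: hits[k] / totals[k] for k in sorted(totals)}

# Hypothetical outcome predictions made after 1, 2 and 3 events of each case.
preds = [(1, "accept", "reject"), (1, "reject", "reject"),
         (2, "reject", "reject"), (2, "reject", "reject"),
         (3, "reject", "reject")]
print(progress_based_accuracy(preds))  # {1: 0.5, 2: 1.0, 3: 1.0}
```

Such a per-prefix breakdown shows how early in a case a model becomes reliable, which an aggregate score hides.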