11 research outputs found
How to Cope with Change? - Preserving Validity of Predictive Services over Time
Companies increasingly rely on predictive services that constantly monitor and analyze the available data streams to improve their service offerings. However, sudden or incremental changes in those streams challenge the validity and proper functioning of a predictive service over time. We develop a framework that allows us to characterize and differentiate predictive services with regard to their ongoing validity. Furthermore, this work proposes a research agenda of worthwhile topics to improve the long-term validity of predictive services. In our work, we focus in particular on different scenarios of true label availability for predictive services as well as on the integration of expert knowledge. With these insights at hand, we lay an important foundation for future research in the field of valid predictive services.
Handling Concept Drift for Predictions in Business Process Mining
Predictive services nowadays play an important role across all business sectors. However, deployed machine learning models are challenged by data streams that change over time, a phenomenon described as concept drift. This phenomenon can strongly affect the prediction quality of a model, so concept drift is usually handled by retraining the model. However, current research lacks a recommendation as to which data should be selected for retraining the machine learning model. Therefore, we systematically analyze different data selection strategies in this work. Subsequently, we instantiate our findings on a use case in process mining that is strongly affected by concept drift. We show that concept drift handling improves accuracy from 0.5400 to 0.7010. Furthermore, we depict the effects of the different data selection strategies.
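The core question of the abstract above, which data to retrain on after a drift, can be illustrated with a minimal sketch. The strategy names, function signature, and toy data stream below are illustrative assumptions for exposition, not the selection strategies or implementation from the paper.

```python
# Minimal sketch of data selection strategies for retraining a model after a
# detected concept drift. The strategy names and the toy stream are assumptions.

def select_training_data(history, strategy, window=100, drift_index=None):
    """Return the subset of `history` used to retrain the model."""
    if strategy == "all":            # retrain on the complete history
        return history
    if strategy == "window":         # keep only the most recent observations
        return history[-window:]
    if strategy == "post_drift":     # keep only data observed after the drift
        return history[drift_index:]
    raise ValueError(f"unknown strategy: {strategy}")

history = list(range(250))           # toy data stream of 250 observations
print(len(select_training_data(history, "all")))                          # 250
print(len(select_training_data(history, "window", window=100)))           # 100
print(len(select_training_data(history, "post_drift", drift_index=200)))  # 50
```

The trade-off the strategies encode: more data generally stabilizes the retrained model, but pre-drift data may no longer reflect the current concept.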
Handling Concept Drifts in Regression Problems -- the Error Intersection Approach
Machine learning models are omnipresent for predictions on big data. One challenge of deployed models is that the data changes over time, a phenomenon called concept drift. If not handled correctly, concept drift can lead to significant mispredictions. We explore a novel approach to concept drift handling: a strategy that switches between the application of simple and complex machine learning models for regression tasks. We assume that the approach plays to the individual strengths of each model, switching to the simpler model when a drift occurs and switching back to the complex model in typical situations. We instantiate the approach on a real-world data set of taxi demand in New York City, which is prone to multiple drifts, e.g. weather phenomena such as blizzards that cause a sudden decrease in taxi demand. We are able to show that our suggested approach significantly outperforms all considered baselines.
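The switching idea described above can be sketched in a few lines: track the recent errors of both a simple and a complex regressor and route each prediction to whichever model currently errs less. The class name, window size, and toy models are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of error-based model switching for concept drift handling:
# compare recent prediction errors of a simple and a complex model and use
# whichever is currently more accurate. All names here are illustrative.
from collections import deque

class ErrorIntersectionSwitch:
    def __init__(self, simple, complex_, window=10):
        self.simple, self.complex_ = simple, complex_
        self.err_simple = deque(maxlen=window)   # recent errors, simple model
        self.err_complex = deque(maxlen=window)  # recent errors, complex model

    @staticmethod
    def _mean(errors):
        return sum(errors) / len(errors) if errors else 0.0

    def predict(self, x):
        # choose the model with the lower mean error over the recent window
        if self._mean(self.err_simple) <= self._mean(self.err_complex):
            return self.simple(x)
        return self.complex_(x)

    def update(self, x, y_true):
        # track absolute errors of BOTH models on every new observation
        self.err_simple.append(abs(self.simple(x) - y_true))
        self.err_complex.append(abs(self.complex_(x) - y_true))

# Toy demo: the "complex" model fits the pre-drift concept (y = 2x); after the
# drift, demand collapses and the constant "simple" baseline becomes better.
simple = lambda x: 1.0
complex_ = lambda x: 2 * x
switch = ErrorIntersectionSwitch(simple, complex_, window=3)
for x, y in [(1, 2), (2, 4), (3, 1), (4, 1), (5, 1)]:  # drift after t=2
    switch.update(x, y)
print(switch.predict(6))  # after the drift, the simple model is selected: 1.0
```

The intuition matches the abstract: during a sudden drift (e.g. a blizzard collapsing taxi demand), a simple baseline degrades less than a complex model trained on the old concept, so error-based switching exploits the strengths of each.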
Supporting Knowledge Work with Artificial Intelligence - Requirements for the Design of Machine Learning
Developments in information technology increasingly provide opportunities to support the human "knowledge worker" in their decisions and actions through cognitive assistance systems based on Artificial Intelligence (AI). In this contribution, we examine which requirements the underlying machine learning methods must meet in order to ensure the individual as well as societal acceptance of such "augmented intelligence".
Artificial intelligence and machine learning
Within the last decade, the application of "artificial intelligence" and "machine learning" has become popular across multiple disciplines, especially in information systems. The two terms are still used inconsistently in academia and industry, sometimes as synonyms, sometimes with different meanings. With this work, we try to clarify the relationship between these concepts. We review the relevant literature and develop a conceptual framework to specify the role of machine learning in building (artificial) intelligent agents. Additionally, we propose a consistent typology for AI-based information systems. We contribute to a deeper understanding of the nature of both concepts and to more terminological clarity and guidance, as a starting point for interdisciplinary discussions and future research.
Organizational Learning in the Rise of Machine Learning
Organizational learning (OL) is associated with experience and knowledge in an organization. Information Technology (IT) enables the creation, dissemination, and use of knowledge, and as such, plays an important role in an organization's learning process. This role has inspired a large body of literature studying the link between OL and IT and the relation between IT and knowledge exploration and exploitation. The recent rise of Machine Learning (ML) with its Deep Learning (DL) capabilities has nevertheless brought about new ways of creating, retaining, and transferring knowledge. I argue that the learning occurring within the machine plays a role in the learning occurring within the organization, calling for revisiting OL in light of this disruptive IT. In this paper, I focus on three different ways in which the machine achieves its learning, namely supervised, unsupervised, and reinforcement learning, and advance propositions on how each impacts OL differently.
How to Conduct Rigorous Supervised Machine Learning in Information Systems Research: The Supervised Machine Learning Reportcard [in press]
Within the last decade, the application of supervised machine learning (SML) has become increasingly popular in the field of information systems (IS) research. Although the choices among different data preprocessing techniques, as well as different algorithms and their individual implementations, are fundamental building blocks of SML results, their documentation—and therefore reproducibility—is inconsistent across published IS research papers.
This may be quite understandable, since the goals and motivations for SML applications vary and since the field has been rapidly evolving within IS. For the IS research community, however, this poses a big challenge, because even with full access to the data neither a complete evaluation of the SML approaches nor a replication of the research results is possible.
Therefore, this article aims to provide the IS community with guidelines for comprehensively and rigorously conducting, as well as documenting, SML research: First, we review the literature concerning steps and SML process frameworks to extract relevant problem characteristics and relevant choices to be made in the application of SML. Second, we integrate these into a comprehensive "Supervised Machine Learning Reportcard (SMLR)" as an artifact to be used in future SML endeavors. Third, we apply this reportcard to a set of 121 relevant articles published in renowned IS outlets between 2010 and 2018 and demonstrate how and where the documentation of current IS research articles can be improved. Thus, this work should contribute to a more complete and rigorous application and documentation of SML approaches, thereby enabling a deeper evaluation and reproducibility/replication of results in IS research.