
    Research 2.0: Improving participation in online research communities

    When designing and adapting (data-driven) decision support systems and data warehouses, it is still an open issue how to determine relevant content and, in particular, (performance) measures. In fact, some classic approaches to information requirements determination, such as Rockart’s critical success factors method, help with structuring decision makers’ information requirements and identifying thematically appropriate measures. In many cases, however, it remains unclear which and how many measures should eventually be used. Therefore, an optimization model is presented that integrates informational and economic objectives. The model incorporates (statistical) interdependencies among measures – i.e., the information they provide about one another – decision makers’ and reporting tools’ ability to cope with information complexity, as well as negative economic effects due to measure selection and usage. We show that, in general, the selection policies of all-or-none or the-more-the-better are not reasonable, although they are often applied in business practice. Finally, the model’s application is illustrated by the German business-to-business sales organization of a global electronics and electrical engineering company as an example.
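
    As a hedged illustration of the selection trade-off described above, the following Python sketch implements a greedy measure-selection heuristic: each candidate measure’s informational value is discounted by its redundancy with measures already selected and reduced by its cost, subject to a complexity cap. All identifiers and the scoring rule are illustrative assumptions, not the paper’s actual optimization model.

```python
# Minimal sketch of a measure-selection heuristic in the spirit of the
# abstract: balance the information a measure adds against redundancy
# with already-selected measures and the economic cost of reporting it.
# Names and the scoring rule are illustrative assumptions only.

def select_measures(measures, value, cost, redundancy, capacity):
    """Greedily pick measures while the marginal net benefit is positive.

    measures   -- list of measure identifiers
    value      -- dict: standalone informational value of each measure
    cost       -- dict: economic cost of selecting and using each measure
    redundancy -- dict of dicts: redundancy[a][b] in [0, 1], the share of
                  a's information already provided by b (interdependency)
    capacity   -- max number of measures a decision maker can cope with
    """
    selected = []
    while len(selected) < capacity:
        best, best_net = None, 0.0
        for m in measures:
            if m in selected:
                continue
            # Discount the measure's value by its overlap with picks so far.
            overlap = max((redundancy[m][s] for s in selected), default=0.0)
            net = value[m] * (1.0 - overlap) - cost[m]
            if net > best_net:
                best, best_net = m, net
        if best is None:  # no remaining measure adds positive net value
            break
        selected.append(best)
    return selected
```

    The sketch also makes plain why all-or-none and the-more-the-better policies fail: the marginal informational value of each further measure shrinks with redundancy while its costs keep accruing.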

    Verification of Web Service Compositions – An Operationalization of Correctness and a Requirements Framework for Service-oriented Modeling Techniques

    Web service compositions coordinate Web services of different enterprises. They are expected to constitute the foundation of service-oriented architectures, to improve business processes, and to foster intra- and inter-organizational integration. Especially in inter-organizational contexts, quality of service referring to non-functional requirements and conformance to functional requirements are becoming vital properties. With Web service compositions being asynchronous and distributed systems, the latter property – which is also called correctness – can best be shown by verification. This paper examines from a system-theoretic perspective how correctness can be operationalized for Web service compositions. It also proposes a requirements framework for service-oriented modeling techniques so that correctness can be shown by verification and Web service compositions can be modeled intuitively. In order to demonstrate the framework’s applicability in principle, an example approach is analyzed with respect to the corresponding requirements.
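
    As a hedged illustration of what showing correctness by verification means for asynchronous systems, the following Python sketch exhaustively explores the composed state space of two toy services and checks deadlock freedom. The two services, the simplified synchronous handshake semantics, and the chosen property are assumptions for illustration, not the paper’s framework.

```python
# Minimal sketch of verifying a correctness property by exhaustive
# state-space exploration of a two-service composition. Toy example only.

from collections import deque

# Each service is a small state machine: state -> {message: next_state}.
# "!" marks sending a message, "?" marks receiving it.
BUYER = {"s0": {"!order": "s1"}, "s1": {"?invoice": "s2"}, "s2": {}}
SELLER = {"t0": {"?order": "t1"}, "t1": {"!invoice": "t2"}, "t2": {}}

def find_deadlock(a, b, start):
    """Breadth-first search over the composed state space; returns a
    non-final global state with no enabled transition, or None."""
    seen, queue = {start}, deque([start])
    while queue:
        sa, sb = queue.popleft()
        successors = []
        # A send in one service synchronizes with the matching receive
        # in the other (simplified handshake semantics).
        for msg, na in a[sa].items():
            if msg.startswith("!") and "?" + msg[1:] in b[sb]:
                successors.append((na, b[sb]["?" + msg[1:]]))
        for msg, nb in b[sb].items():
            if msg.startswith("!") and "?" + msg[1:] in a[sa]:
                successors.append((a[sa]["?" + msg[1:]], nb))
        if not successors and (a[sa] or b[sb]):
            return (sa, sb)  # stuck before both services terminated
        for nxt in successors:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

print(find_deadlock(BUYER, SELLER, ("s0", "t0")))  # None: no deadlock found
```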

    WHAT MAKES A USEFUL MATURITY MODEL? A FRAMEWORK OF GENERAL DESIGN PRINCIPLES FOR MATURITY MODELS AND ITS DEMONSTRATION IN BUSINESS PROCESS MANAGEMENT

    Since the Software Engineering Institute launched the Capability Maturity Model almost twenty years ago, hundreds of maturity models have been proposed by researchers and practitioners across multiple application domains. With process orientation being a central paradigm of organizational design and continuous process improvement taking top positions on CIO agendas, maturity models are also prospering in business process management. Although the application of maturity models is increasing in quantity and breadth, the concept of maturity models is frequently subject to criticism. Indeed, numerous shortcomings have been identified, referring both to maturity models as design products and to the process of maturity model design. Whereas research has already substantiated the design process, there is no holistic understanding of the principles of form and function – that is, the design principles – that maturity models should meet. We therefore propose a pragmatic, yet well-founded framework of general design principles justified by existing literature and grouped according to typical purposes of use. The framework is demonstrated using an exemplary set of maturity models related to business process management. We finally give a brief outlook on implications and topics for further research.

    Prioritization of Interconnected Processes

    Deciding which business processes to improve is a challenge for all organizations. The literature on business process management (BPM) offers several approaches that support process prioritization. As many approaches share the individual process as the unit of analysis, they determine the processes’ need for improvement mostly based on performance indicators but neglect how processes are interconnected. So far, the interconnections of processes are only captured for descriptive purposes in process model repositories or business process architectures (BPAs). Prioritizing processes without catering for their interconnectedness, however, biases prioritization decisions and causes a misallocation of corporate funds. What is missing are process prioritization approaches that consider both the processes’ individual need for improvement and their interconnectedness. To address this research problem, the authors propose the ProcessPageRank (PPR) as their main contribution. The PPR prioritizes the processes of a given BPA by ranking them according to their network-adjusted need for improvement. The PPR builds on knowledge from process performance management, BPAs, and network analysis – particularly the Google PageRank. As for evaluation, the authors validated the PPR’s design specification against empirically validated and theory-backed design propositions. They also instantiated the PPR’s design specification as a software prototype and applied the prototype to a real-world BPA.
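
    To illustrate the underlying idea of a network-adjusted need for improvement, here is a minimal PageRank-style sketch in Python: each process’s standalone need for improvement serves as the teleport (personalization) vector, and importance flows along the BPA’s dependency links. The damping factor, link semantics, and scoring are illustrative assumptions rather than the PPR’s actual design specification.

```python
# Minimal sketch of a PageRank-style ranking over a business process
# architecture (BPA): processes that many other processes depend on, and
# that themselves need improvement, rise to the top. Toy assumptions only.

def rank_processes(links, need, damping=0.85, iterations=100):
    """links -- dict: process -> list of processes it depends on
    need  -- dict: process -> standalone need-for-improvement score"""
    procs = list(need)
    total_need = sum(need.values())
    # Normalized need-for-improvement plays the role of the teleport vector.
    base = {p: need[p] / total_need for p in procs}
    rank = dict(base)
    for _ in range(iterations):
        nxt = {p: (1.0 - damping) * base[p] for p in procs}
        for p in procs:
            targets = links.get(p, [])
            if not targets:
                continue
            share = rank[p] / len(targets)
            for t in targets:
                nxt[t] += damping * share  # pass importance along dependencies
        rank = nxt
    return sorted(procs, key=rank.get, reverse=True)

# Example: "billing" depends on "ordering", so ordering's rank rises even
# if its standalone need-for-improvement score is moderate.
print(rank_processes({"billing": ["ordering"]},
                     {"billing": 0.4, "ordering": 0.3, "shipping": 0.3}))
```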

    Improving customer satisfaction in proactive service design: a Kano model approach


    THE FUTURE OF BUSINESS PROCESS MANAGEMENT IN THE FUTURE OF WORK

    Business process management (BPM) is a corporate capability that strives for efficient and effective work. As a matter of fact, work is rapidly changing due to technological, economic, and demographic developments. New digital affordances, work attitudes, and collaboration models are revolutionizing how work is performed. These changes are referred to as the future of work. Despite the obvious connection between the future of work and BPM, neither current initiatives on the future of BPM nor existing BPM capability frameworks account for the characteristics of the future of work. Hence, there is a need for evolving BPM as a corporate capability in light of the future of work. As a first step to triggering a community-wide discussion, we compiled propositions that capture constitutive characteristics of the future of work. We then let a panel of BPM experts map these propositions to the six factors of Rosemann and vom Brocke’s BPM capability framework, which captures how BPM is conceptualized today. On this foundation, we discussed how BPM should evolve in light of the future of work and distilled overarching topics which we think will reshape BPM as a corporate capability.

    Machine Learning in Business Process Monitoring: A Comparison of Deep Learning and Classical Approaches Used for Outcome Prediction

    Predictive process monitoring aims at forecasting the behavior, performance, and outcomes of business processes at runtime. It helps identify problems before they occur and re-allocate resources before they are wasted. Although deep learning (DL) has yielded breakthroughs, most existing approaches build on classical machine learning (ML) techniques, particularly when it comes to outcome-oriented predictive process monitoring. This circumstance reflects a lack of understanding about which event log properties facilitate the use of DL techniques. To address this gap, the authors compared the performance of DL techniques (i.e., simple feedforward deep neural networks and long short-term memory networks) and ML techniques (i.e., random forests and support vector machines) based on five publicly available event logs. It could be observed that DL generally outperforms classical ML techniques. Moreover, three specific propositions could be inferred from further observations: First, the outperformance of DL techniques is particularly strong for logs with a high variant-to-instance ratio (i.e., many non-standard cases). Second, DL techniques perform more stably in the case of imbalanced target variables, especially for logs with a high event-to-activity ratio (i.e., many loops in the control flow). Third, logs with a high activity-to-instance payload ratio (i.e., input data is predominantly generated at runtime) call for the application of long short-term memory networks. Due to the purposive sampling of event logs and techniques, these findings also hold for logs outside this study.
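
    For readers unfamiliar with outcome-oriented predictive process monitoring, the following Python sketch shows a classical ML baseline: traces are encoded as bag-of-activities vectors and a random forest predicts the case outcome (an LSTM would instead consume the ordered event sequence). The toy log, the encoding, and the hyperparameters are assumptions for illustration and do not reproduce the study’s pipelines.

```python
# Minimal sketch of outcome-oriented predictive process monitoring with a
# classical ML baseline: each trace (sequence of activities) becomes a
# bag-of-activities vector, and a random forest predicts the case outcome.

from collections import Counter
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy event log: (trace of activities, binary outcome of the case).
log = [
    (["register", "check", "approve"], 1),
    (["register", "check", "reject"], 0),
    (["register", "check", "check", "approve"], 1),  # loop in control flow
    (["register", "reject"], 0),
] * 25  # replicate to have enough samples for a train/test split

activities = sorted({a for trace, _ in log for a in trace})

def encode(trace):
    # Frequency of each activity; sequence order is deliberately dropped
    # (an LSTM would instead consume the ordered sequence directly).
    counts = Counter(trace)
    return [counts[a] for a in activities]

X = [encode(trace) for trace, _ in log]
y = [outcome for _, outcome in log]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```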

    How to Structure Business Transformation Projects: The Case of Infineon’s Finance IT Roadmap

    Although project management, benefits management, change management, and transformation management are everyday terms in many organizations, projects still experience high failure rates. Business transformation projects in particular are prone to fail because they affect multiple enterprise architecture layers, involve many stakeholders, last several years, and tie up considerable amounts of corporate capital. To handle their complexity, scholars recommend structuring business transformation projects into portfolios of interdependent, yet smaller and thus more manageable projects. So far, little guidance on how to do so exists. To share first-hand experience and stimulate research, we present and reflect on a project conducted with Infineon Technologies in which we co-developed Infineon’s finance IT roadmap. The finance IT roadmap served as the foundation for transforming Infineon’s finance IT setup to tackle future challenges of financial management in the semiconductor industry from an integrated business, process, and IT perspective.