
    Service-Oriented Cognitive Analytics for Smart Service Systems: A Research Agenda

    The development of analytical solutions for smart service systems relies on data. Typically, this data is distributed across the various entities of the system. Cognitive learning makes it possible to find patterns and make predictions across these distributed data sources, yet its potential is not fully explored. The challenges that impede cross-entity data analysis are organizational (e.g., confidentiality), algorithmic (e.g., robustness), and technical (e.g., data processing). So far, there is no comprehensive approach for building cognitive analytics solutions when data is distributed across different entities of a smart service system. This work proposes a research agenda for the development of a service-oriented cognitive analytics framework. The framework uses a centralized cognitive aggregation model to combine the predictions made by each entity of the service system. Based on this research agenda, we plan to develop and evaluate the cognitive analytics framework in future research.
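    The core idea of a centralized aggregation model can be sketched as follows. This is a minimal illustration of combining locally made predictions, assuming a simple (optionally weighted) averaging rule; the paper's actual aggregation model is not specified here, and all names and numbers are illustrative.

    ```python
    # Minimal sketch: a central aggregator combines predictions made locally
    # by each entity of a smart service system. Raw data never leaves an
    # entity; only its prediction is shared. The weighting scheme is an
    # illustrative assumption, not the framework proposed in the paper.

    def aggregate_predictions(entity_predictions, weights=None):
        """Combine per-entity predictions into one system-level prediction.

        entity_predictions: dict mapping entity name -> local prediction (float)
        weights: optional dict mapping entity name -> confidence weight
        """
        if weights is None:
            weights = {name: 1.0 for name in entity_predictions}
        total_weight = sum(weights[name] for name in entity_predictions)
        return sum(pred * weights[name] / total_weight
                   for name, pred in entity_predictions.items())

    # Hypothetical entities, each contributing its own local prediction:
    local = {"machine_a": 0.82, "machine_b": 0.74, "gateway": 0.90}
    print(aggregate_predictions(local))  # unweighted mean of the three
    ```

    This sidesteps the organizational challenge (confidentiality) mentioned above, since only predictions, not the underlying data, cross entity boundaries.
    
    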

    Towards a Technician Marketplace using Capacity-Based Pricing

    Today, industrial maintenance is organized as an on-call business: upon a customer's service request, the maintenance provider schedules a service technician to perform the demanded service at a suitable time. In this work, we address two drawbacks of this scheduling approach: First, the provider typically prioritizes service demand based on a subjective perception of urgency. Second, the pricing of technician services is inefficient, since services are priced on a time-and-material basis without accounting for additional service quality (e.g., shorter response time). We propose the implementation of a technician marketplace that allows customers to book technician capacity for fixed time slots. The price per time slot depends on the remaining capacity and therefore incentivizes customers to claim slots that match their objective task urgency. The approach is evaluated in a simulation study. The results show the capability of capacity-based pricing mechanisms to prioritize service demand according to customers' opportunity costs.
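    The pricing mechanism described above can be sketched with a simple rule in which the slot price rises as remaining capacity shrinks. The linear markup and its parameters are assumptions for illustration only; the abstract does not specify the marketplace's actual pricing function.

    ```python
    # Illustrative sketch of capacity-based slot pricing: the fewer
    # technician slots remain for a time window, the higher the price.
    # The linear markup rule and its parameters are assumed, not taken
    # from the paper.

    def slot_price(base_price, capacity_total, capacity_remaining, max_markup=2.0):
        """Price for one technician time slot.

        The price rises linearly from base_price (all capacity free) toward
        base_price * (1 + max_markup) as the slot fills up.
        """
        if capacity_remaining <= 0:
            raise ValueError("no capacity left in this slot")
        utilization = 1.0 - capacity_remaining / capacity_total
        return base_price * (1.0 + max_markup * utilization)

    # A customer with an urgent task accepts the higher price of a scarce
    # slot; a customer with a routine task books a cheaper, emptier slot.
    print(slot_price(100.0, 10, 10))  # all capacity free: base price
    print(slot_price(100.0, 10, 1))   # nearly full: close to triple price
    ```

    The incentive effect comes from the monotonic link between scarcity and price: only customers whose opportunity costs exceed the markup will claim the scarce slots.
    
    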

    Incorporating Business Impact into Service Offers – A Procedure to Select Cost-Optimal Service Contracts

    In this work, we address an IT service customer's challenge of selecting the cost-optimal service among different offers by external providers. We model the customer's optimization problem by considering the potential negative monetary impact of different combinations of sequential service incidents on a customer's business process – reflected via "business cost". First, we describe which information a customer typically bases service level agreement decisions upon and analyze which additional information is needed to make a well-founded decision. Second, we define a set of constructs that supports customers and providers in selecting or defining service offers that address the required service criticality. Third, we develop a procedure that enables customers to solve their optimization problem – given different service offers by risk-neutral providers – using a procurement auction. With this approach, we suggest that customers and providers collaborate to define "business cost measures" that allow providers to better tailor service offers to customers' business requirements.
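    The selection problem sketched above reduces, in its simplest form, to minimizing contract price plus expected business cost of residual incidents. The cost model and all numbers below are illustrative assumptions, not the paper's formal optimization model or its auction procedure.

    ```python
    # Hedged sketch of the customer's selection problem: for each offer,
    # total cost = contract price + expected business cost of incidents
    # the contract does not prevent. Figures are illustrative only.

    def total_expected_cost(offer):
        """offer: dict with 'price' (contract price per period),
        'expected_incidents' per period, and 'business_cost_per_incident'
        (monetary impact of one incident on the business process)."""
        return (offer["price"]
                + offer["expected_incidents"] * offer["business_cost_per_incident"])

    offers = {
        "basic":   {"price": 1000, "expected_incidents": 4, "business_cost_per_incident": 500},
        "premium": {"price": 2200, "expected_incidents": 1, "business_cost_per_incident": 500},
    }

    # The contract with the lower price is not cost-optimal once the
    # business cost of incidents is included.
    best = min(offers, key=lambda name: total_expected_cost(offers[name]))
    print(best, total_expected_cost(offers[best]))
    ```

    In this toy instance the "basic" offer costs 1000 + 4 × 500 = 3000 in expectation, while "premium" costs 2200 + 1 × 500 = 2700, so the pricier contract wins, which is exactly the effect of incorporating business impact into the decision.
    
    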

    A Data Quality Metrics Hierarchy for Reliability Data

    In this paper, we describe an approach to understanding data quality issues in field data used for the calculation of reliability metrics such as availability, reliability over time, or MTBF. The focus lies on data from sources such as maintenance management systems or warranty databases, which contain information on failure times and failure modes for all units. We propose a hierarchy of data quality metrics that identify and assess key problems in the input data. The metrics are organized so that they guide the data analyst to the problems with the greatest impact on the calculation and provide a prioritized action plan for improving data quality. The metrics cover issues such as missing, wrong, implausible, and inaccurate data. We use examples with real-world data to showcase our software prototype and to illustrate how the metrics have helped with data preparation. In this way, analysts can reduce the number of wrong conclusions drawn from the data due to mistakes in the input values.
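    Two of the issue classes named above, missing and implausible values, can be turned into simple rates, which then drive prioritization. The metric names, the plausibility rule, and the records below are assumptions for illustration; the paper's actual metrics hierarchy is richer.

    ```python
    # Illustrative sketch of basic data quality metrics on reliability
    # field data: the share of missing failure times and the share of
    # implausible (non-positive) ones. Names and rules are assumed, not
    # the hierarchy defined in the paper.

    def quality_metrics(records):
        """records: list of dicts with an optional 'failure_time' (hours)."""
        n = len(records)
        missing = sum(1 for r in records if r.get("failure_time") is None)
        implausible = sum(1 for r in records
                          if r.get("failure_time") is not None
                          and r["failure_time"] <= 0)
        return {
            "missing_rate": missing / n,
            "implausible_rate": implausible / n,
        }

    records = [
        {"failure_time": 120.0},
        {"failure_time": None},   # missing: cannot be placed on the timeline
        {"failure_time": -5.0},   # implausible: negative operating hours
        {"failure_time": 300.0},
    ]
    metrics = quality_metrics(records)

    # The metric with the highest rate is addressed first, mirroring the
    # idea of a prioritized action plan for data cleaning.
    print(sorted(metrics.items(), key=lambda kv: kv[1], reverse=True))
    ```

    Ranking the metrics by rate is a stand-in for the paper's impact-based ordering; a fuller implementation would weight each issue by its effect on the downstream reliability calculation.
    
    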

    How to Conduct Rigorous Supervised Machine Learning in Information Systems Research: The Supervised Machine Learning Reportcard [in press]

    Within the last decade, the application of supervised machine learning (SML) has become increasingly popular in the field of information systems (IS) research. Although the choices among different data preprocessing techniques, as well as different algorithms and their individual implementations, are fundamental building blocks of SML results, their documentation—and therefore reproducibility—is inconsistent across published IS research papers. This may be quite understandable, since the goals and motivations for SML applications vary and since the field has been rapidly evolving within IS. For the IS research community, however, this poses a major challenge, because even with full access to the data, neither a complete evaluation of the SML approaches nor a replication of the research results is possible. Therefore, this article aims to provide the IS community with guidelines for comprehensively and rigorously conducting, as well as documenting, SML research: First, we review the literature concerning steps and SML process frameworks to extract relevant problem characteristics and relevant choices to be made in the application of SML. Second, we integrate these into a comprehensive "Supervised Machine Learning Reportcard (SMLR)" as an artifact to be used in future SML endeavors. Third, we apply this reportcard to a set of 121 relevant articles published in renowned IS outlets between 2010 and 2018 and demonstrate how and where the documentation of current IS research articles can be improved. Thus, this work should contribute to a more complete and rigorous application and documentation of SML approaches, thereby enabling a deeper evaluation and reproducibility/replication of results in IS research.
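    A reportcard of this kind lends itself to a simple completeness check: given the items an article documents, compute how fully it covers the checklist. The checklist items below are generic SML reporting aspects chosen for illustration; they are not the SMLR's actual items.

    ```python
    # Minimal sketch of a reportcard-style completeness check. The
    # checklist items are generic illustrations, not the items of the
    # Supervised Machine Learning Reportcard (SMLR) itself.

    REPORT_ITEMS = [
        "data_source", "preprocessing", "feature_engineering",
        "algorithm_and_implementation", "hyperparameters",
        "train_test_split", "evaluation_metric", "code_availability",
    ]

    def reportcard_coverage(documented):
        """Return (coverage_ratio, missing items) for one article.

        documented: set of checklist items the article actually reports.
        """
        missing = [item for item in REPORT_ITEMS if item not in documented]
        coverage = (len(REPORT_ITEMS) - len(missing)) / len(REPORT_ITEMS)
        return coverage, missing

    # A hypothetical article that documents only four of the eight items:
    coverage, missing = reportcard_coverage(
        {"data_source", "algorithm_and_implementation",
         "evaluation_metric", "train_test_split"}
    )
    print(f"coverage: {coverage:.0%}, missing: {missing}")
    ```

    Applied across a corpus of articles, such a check makes gaps in documentation comparable, which is the spirit of the survey of 121 articles described above.
    
    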