85 research outputs found

    Using Analytical Information for Digital Business Transformation through DataOps: A Review and Conceptual Framework

    Organisations are increasingly practising business analytics to generate actionable insights that can guide their digital business transformation. Transforming business digitally using business analytics is an ongoing process that requires an integrated and disciplined approach to leveraging analytics and promoting collaboration. An emerging business analytics practice, Data Operations (DataOps), provides a disciplined approach for organisations to collaborate using analytical information for digital business transformation. We propose a conceptual framework by reviewing the literature on business analytics, DataOps and organisational information processing theory (OIPT). This conceptual framework explains how organisations can employ DataOps as an integrated and disciplined approach for developing the analytical information processing capability and facilitating the boundary-spanning activities required for digital business transformation. This research (a) extends current knowledge on digital transformation by linking it with business analytics from the perspective of OIPT and boundary-spanning activities, and (b) presents DataOps as a novel approach for using analytical information for digital business transformation.

    From Ad-Hoc Data Analytics to DataOps

    The collection of high-quality data provides a key competitive advantage to companies in their decision-making process. It helps to understand customer behavior and enables the usage and deployment of new technologies based on machine learning. However, the process of collecting data, cleaning it, and processing it for use by data scientists and applications is often manual, non-optimized, and error-prone. This increases the time the data takes to deliver value for the business. To reduce this time, companies are looking into automation and validation of their data processes. Data processes are the operational side of the data analytics workflow. DataOps, a term recently coined by data scientists, data analysts, and data engineers, refers to a general process aimed at shortening the end-to-end data analytics life-cycle time by introducing automation into the data collection, validation, and verification process. Despite its increasing popularity among practitioners, research on this topic has been limited and does not provide a clear definition of the term or of how a data analytics process evolves from ad-hoc data collection to the fully automated data analytics envisioned by DataOps. This research provides three main contributions. First, utilizing multi-vocal literature, we provide a definition and a scope for the general process referred to as DataOps. Second, based on a case study with a large mobile telecommunication organization, we analyze how multiple data analytics teams evolve their infrastructure and processes towards DataOps. We also provide a stairway showing the different stages of the evolution process. With this evolution model, companies can identify the stage they currently belong to and try to move to the next stage by overcoming the challenges they encounter in their current stage.
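
    The evolution towards DataOps described above hinges on automating data collection, validation, and verification. The following is a minimal, hypothetical sketch of what such an automated validation gate might look like; the rule names, record fields, and function signatures are assumptions for illustration, not artifacts from the study.

```python
# Hypothetical sketch of an automated validation step in a DataOps-style
# pipeline: each incoming batch is checked against explicit rules before it
# is handed to downstream analytics. Rule names, record fields, and thresholds
# are illustrative assumptions, not taken from the study above.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class ValidationRule:
    name: str
    check: Callable[[List[Dict]], bool]


def validate_batch(batch: List[Dict], rules: List[ValidationRule]) -> List[str]:
    """Return the names of the rules the batch violates (empty list = accept)."""
    return [rule.name for rule in rules if not rule.check(batch)]


# Example rules: the batch must be non-empty and every record must carry an ID.
RULES = [
    ValidationRule("non_empty", lambda batch: len(batch) > 0),
    ValidationRule("has_customer_id", lambda batch: all("customer_id" in r for r in batch)),
]

if __name__ == "__main__":
    incoming = [{"customer_id": 42, "event": "login"}, {"event": "logout"}]
    failures = validate_batch(incoming, RULES)
    # In a DataOps setup this decision would gate the automated pipeline run.
    if failures:
        print("batch rejected, failed rules:", failures)
    else:
        print("batch accepted")
```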

    DataOps as a Prerequisite for the Next Level of Self-Service Analytics – Balancing User Agency and Central Control

    The area of Business Intelligence and Analytics (BIA) has repeatedly oscillated between more central, efficiency-oriented, professionalized approaches and decentral, agility-oriented, user-driven ones. We investigate whether and how to alleviate that tradeoff by combining an agility-oriented self-service BIA approach with the professionalization-driven DataOps concept: DataOps aims at transferring ideas from DevOps to the realm of analytics, namely a mutual integration of Development and Operations and a high degree of professionalization and automation. From a case study with a series of interviews and a workshop, we generate insights into the viability of such a combination. Our results inspire a theoretical concept for capturing the economics behind the approaches that considers the (opportunity) costs of the components “user agency” and “central control”. The concept has been evaluated with representatives from the case study. Based on our results, we argue that the discussed combination can push BIA solutions towards fine-tuned federated environments.
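
    The economics sketched above can be read as a cost-minimisation problem over the two components. The snippet below is a purely illustrative rendering of that idea under assumed quadratic cost functions and weights; it is not the theoretical concept developed in the paper.

```python
# Purely illustrative model of the tradeoff discussed above: total cost as the
# sum of the opportunity cost of reduced user agency and the opportunity cost
# of reduced central control. The quadratic forms and weights are assumptions,
# not the concept developed in the study.
def total_cost(central_control: float,
               agency_weight: float = 1.0,
               control_weight: float = 1.0) -> float:
    """central_control in [0, 1]; user agency is treated as its complement."""
    lost_agency_cost = agency_weight * central_control ** 2            # users blocked by governance
    lost_control_cost = control_weight * (1.0 - central_control) ** 2  # risk of uncontrolled sprawl
    return lost_agency_cost + lost_control_cost


if __name__ == "__main__":
    # With non-trivial weights the minimum sits strictly between the extremes,
    # echoing the idea of a fine-tuned federated environment.
    levels = [i / 100 for i in range(101)]
    best = min(levels, key=lambda c: total_cost(c, agency_weight=0.6, control_weight=1.4))
    print(f"lowest-cost central-control level (toy weights): {best:.2f}")
```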

    Study of DataOps as a concept for Aker BP to enable data-driven assets.

    The oil and gas industry has faced many obstacles over the decades, and in recent years it has had to endure increased pressure from the market, a global crisis caused by the coronavirus, and oil prices reaching an all-time low. Low oil prices push the industry to do more with less, to find new ways of working, and to explore previously untapped opportunities for improvement through digitalization. Aker BP aims to be at the forefront of digitizing the exploration and production (E&P) industry, and its digital transformation is about more than just technology. Aker BP aims to build digital capabilities, develop digital mindsets and implement new ways of working where decisions are driven by data. The company shows a willingness to experiment with and develop new technologies; however, data management efforts are increasing as digitalization projects are realised from Aker BP’s digital lab. An investment in data management is therefore required to allow for automation over time and to ensure the right competence within Aker BP. In collaboration with Aker BP, this thesis investigates the importance of an organization-wide data management and data governance strategy aligned with business objectives, and how the emerging concept of DataOps can enable Aker BP’s ambitions of becoming a data-driven company. DataOps is an emerging approach advocated by data practitioners to address the challenges in data analytics projects. The thesis examines how Aker BP works with data today through interviews and discussions with different parts of the organization and assesses how the principles and concepts of DataOps can be applied. To answer this, a DataOps maturity model has been developed, and Aker BP’s ways of working have been evaluated using the model. The principles and concepts of DataOps are then considered, and an implementation plan and critical success factors for succeeding with DataOps, or for maturing data management and data governance efforts, are presented. The thesis focuses on the principle that DataOps’ goal is to liberate data from its sources to its consumers, and proposes the necessary steps to embark on the DataOps journey and mature data management within the company. The research shows that data management efforts are limited and based on ad-hoc needs and fast-changing priorities. The pressure on the data management function to verify, clean, liberate, analyse and advise on data is growing, and the participants in the research emphasize the need for change. Based on the DataOps maturity model developed, the steps required to increase Aker BP’s DataOps maturity are identified. Further research should include implementation of the identified steps and investigate the efforts to further automate business processes and mature Aker BP’s ability to work with data.
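
    The thesis evaluates Aker BP’s ways of working against a DataOps maturity model. As a minimal, hypothetical sketch of how such an assessment could be encoded, the snippet below uses generic stage names and scored dimensions that are assumptions, not the model developed in the thesis.

```python
# Hypothetical sketch of a DataOps maturity assessment. The stage names and
# scored dimensions are generic CMM-style assumptions, not the maturity model
# actually developed in the thesis above.
from typing import Dict

STAGES = ["ad-hoc", "repeatable", "defined", "managed", "optimised"]


def maturity_stage(dimension_scores: Dict[str, int]) -> str:
    """Map per-dimension scores (1-5) to an overall stage via the weakest dimension."""
    weakest = min(dimension_scores.values())
    return STAGES[max(0, min(weakest, len(STAGES)) - 1)]


if __name__ == "__main__":
    assessment = {
        "data governance": 2,
        "automation": 1,
        "collaboration": 3,
        "data quality monitoring": 2,
    }
    # Using the weakest dimension reflects the idea that overall maturity is
    # limited by the least developed capability.
    print("overall stage:", maturity_stage(assessment))
```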

    Leveraging Data and Analytics for Digital Business Transformation through DataOps: An Information Processing Perspective

    Digital business transformation has become increasingly important for organisations. Since transforming business digitally is an ongoing process, it requires an integrated and disciplined approach. Data Operations (DataOps), emerging in practice, can provide organisations with such an approach to leverage data and analytics for digital business transformation. This paper proposes a framework that integrates digital business transformation, data analytics, and DataOps through the lens of information processing theory (IPT). The details of this framework explain how organisations can employ DataOps as an integrated and disciplined approach to understand their analytical information needs and develop the analytical information processing capability required for digital business transformation. DataOps-enabled digital business transformation, in turn, improves organisational performance by improving operational efficiency and creating new business models. This research extends current knowledge on digital transformation by bringing in DataOps and analytics through IPT, and thereby provides organisations with a novel approach for their digital business transformations.

    Managing and Making Sense of Data to Drive Digital Transformation: A Case Study

    We explore how organizations manage and make sense of data collaboratively to drive digital transformation. We present the results of an in-depth case study of a financial organization that used Data Operations (DataOps), a collaborative data management practice, to transform its digital-first offering initiative and thereby redefine its value proposition. Drawing on sensemaking theory, we develop a process model that explains how organizations use DataOps to perceive cues through data democratization, extract plausible and comprehensive insights from data through data storytelling to make interpretations, and leverage data products to take actions that drive data-driven digital transformation. Our findings have implications for data-driven digital transformation, as we show how DataOps constitutes a new class of data management practices that enable collaboration between data managers and data consumers and allow organizations to make evidence-based decisions to drive their digital transformation.

    After the success of DevOps introduce DataOps in enterprise culture

    Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management. Many organizations have implemented DevOps processes successfully, allowing different areas such as development, operations, security, and quality to work together. This cooperation, and the processes associated with working across these areas, is producing excellent results. Organizations are developing many applications that support operations and produce large amounts of data. This data has significant value for organizations because it must be used in analysis, reporting and, more recently, data science projects to support decisions. It is time to make decisions supported by data; this requires transforming organizations into data-driven organizations, and that in turn requires processes to deal with this data across all teams. This dissertation follows a design science research approach to apply multiple analytical methods and perspectives to create an artifact. The type of evidence within this methodology is a systematic literature review, with the goal of attaining insights into the current state-of-the-art research on DataOps implementation. Additionally, proven best practices from industry are examined in depth to further strengthen credibility. Thereby, the systematic literature review is used to pinpoint, analyze, and comprehend the available empirical studies and research questions. This methodology supports the main goal of this dissertation: to develop and propose evidence-based practice guidelines for DataOps implementation that can be followed by organizations.

    Data management and Data Pipelines: An empirical investigation in the embedded systems domain

    Context: Companies are increasingly collecting data from all possible sources to extract insights that help in data-driven decision-making. Increased data volume, variety, and velocity, together with the impact of poor-quality data on the development of data products, are leading companies to look for an improved data management approach that can accelerate the development of high-quality data products. Further, AI is being applied in a growing number of fields and is thus evolving into a horizontal technology. Consequently, AI components are increasingly being integrated into embedded systems along with electronics and software. We refer to these systems as AI-enhanced embedded systems. Given the strong dependence of AI on data, this expansion also creates a new space for applying data management techniques. Objective: The overall goal of this thesis is to empirically identify the data management challenges encountered during the development and maintenance of AI-enhanced embedded systems, propose an improved data management approach, and empirically validate the proposed approach. Method: To achieve this goal, the research was conducted in close collaboration with Software Center companies using a combination of empirical research methods: case studies, literature reviews, and action research. Results and conclusions: This research provides five main results. First, it identifies key data management challenges specific to deep learning models developed at embedded system companies. Second, it examines practices such as DataOps and data pipelines that help to address data management challenges. We observed that DataOps is the data management practice that best improves data quality and reduces the time to develop data products. The data pipeline is the critical component of DataOps that manages the data life-cycle activities. The study also identifies the potential faults at each step of the data pipeline and the corresponding mitigation strategies. Finally, the data pipeline model was realized as a small data pipeline, and the percentage of saved data dumps achieved through the implementation was calculated. Future work: We plan to realize the conceptual data pipeline model so that companies can build customized, robust data pipelines, to analyze the impact and value of data pipelines in cross-domain AI systems and data applications, and to develop an AI-based fault detection and mitigation system suitable for data pipelines.
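
    The thesis maps potential faults at each data pipeline step to mitigation strategies. The snippet below is a minimal, hypothetical sketch of one such step; the ingestion format, fault type, and quarantine mitigation are illustrative assumptions rather than the thesis's fault taxonomy.

```python
# Hypothetical sketch of one data pipeline step with fault detection and a
# mitigation strategy: records that fail ingestion are quarantined for later
# inspection instead of being silently dropped or failing the whole run. The
# step, fault type, and mitigation are illustrative assumptions, not the fault
# taxonomy from the thesis above.
from typing import Any, Dict, List, Tuple


def parse_record(raw: str) -> Dict[str, Any]:
    """Ingestion step: raises ValueError on malformed input (a typical pipeline fault)."""
    key, value = raw.split("=", maxsplit=1)
    return {key.strip(): value.strip()}


def run_ingestion(raw_records: List[str]) -> Tuple[List[Dict[str, Any]], List[str]]:
    parsed, quarantined = [], []
    for raw in raw_records:
        try:
            parsed.append(parse_record(raw))
        except ValueError:
            # Mitigation: quarantine the faulty record so the pipeline run
            # completes and the fault can be analysed afterwards.
            quarantined.append(raw)
    return parsed, quarantined


if __name__ == "__main__":
    ok, bad = run_ingestion(["temperature=21.5", "malformed-row", "pressure=1.3"])
    print(f"ingested {len(ok)} records, quarantined {len(bad)}")
```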