4 research outputs found

    Assessment of process capabilities in transition to a data-driven organisation: A multidisciplinary approach

    The ability to leverage data science can generate valuable insights and actions in organisations by enhancing data-driven decision-making to find optimal solutions based on complex business parameters and data. However, only a small percentage of organisations successfully obtain business value from their investments, owing to shortcomings in organisational management, alignment, and culture. Becoming a data-driven organisation requires an organisational change that should be managed and fostered from a holistic, multidisciplinary perspective. Accordingly, this study addresses these problems by developing the Data Drivenness Process Capability Determination Model (DDPCDM), based on the ISO/IEC 330xx family of standards. The proposed model enables organisations to determine their current management capabilities, derive a gap analysis, and create a comprehensive roadmap for improvement in a structured and standardised way. DDPCDM comprises two main dimensions: process and capability. The process dimension consists of five organisational management processes: change management, skill and talent management, strategic alignment, organisational learning, and sponsorship and portfolio management. The capability dimension embraces six levels, from incomplete to innovating. The applicability and usability of DDPCDM are evaluated through a multiple-case study in two organisations. The results reveal that the proposed model can evaluate an organisation's strengths and weaknesses in adopting, managing, and fostering the transition to a data-driven organisation, and can provide a roadmap for continuously improving its data-drivenness.

    An Empirical Study of Refactorings and Technical Debt in Machine Learning Systems

    Machine Learning (ML) systems, including those with Deep Learning (DL) capabilities, are pervasive in today's data-driven society. Such systems are complex; they comprise ML models and many subsystems that support learning processes. As with other complex systems, ML systems are prone to classic technical debt issues, especially when they are long-lived, but they also exhibit debt specific to ML. Unfortunately, there is a gap in knowledge of how ML systems actually evolve and are maintained. In this paper, we fill this gap by studying refactorings, i.e., source-to-source semantics-preserving program transformations, performed in real-world, open-source software, and the technical debt issues they alleviate. We analyzed 26 projects, consisting of 4.2 MLOC, along with 327 manually examined code patches. The results indicate that developers refactor these systems for a variety of reasons, both specific and tangential to ML; that some refactorings correspond to established technical debt categories while others do not; and that code duplication is a major cross-cutting theme, particularly involving ML configuration and model code, which was also the most refactored. We also introduce 14 new ML-specific refactorings and 7 new technical debt categories, and put forth several recommendations, best practices, and anti-patterns. The results can assist practitioners, tool developers, and educators in facilitating long-term ML system usefulness.
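    The duplication theme the abstract highlights can be sketched with a minimal, hypothetical example (the names and settings below are illustrative, not drawn from the studied projects): experiment code that repeats the same model-configuration literals is a classic extract-method refactoring target, since a hyperparameter change then needs to be made in only one place.

    ```java
    import java.util.Map;

    public class ConfigRefactoring {
        // Before the refactoring, each experiment repeated the same literal
        // configuration map. After extracting this factory method, the shared
        // settings live in one place and call sites override only what differs.
        static Map<String, Object> baseConfig(double learningRate) {
            return Map.of(
                "optimizer", "adam",
                "batchSize", 32,
                "epochs", 10,
                "learningRate", learningRate
            );
        }

        public static void main(String[] args) {
            // Two experiment runs that differ only in learning rate.
            Map<String, Object> trainRun = baseConfig(0.001);
            Map<String, Object> tuneRun = baseConfig(0.0001);
            System.out.println(trainRun.get("optimizer")); // prints "adam"
        }
    }
    ```

    The behaviour is unchanged by the refactoring; only the duplication is removed, which is what makes it semantics-preserving in the paper's sense.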

    Towards Automated Software Evolution of Data-Intensive Applications

    Recent years have witnessed an explosion of work on Big Data. Data-intensive applications analyze and produce large volumes of data, typically terabytes to petabytes in size. Many techniques for facilitating data processing are integrated into data-intensive applications. An API is a software interface that allows two applications to communicate with each other. Streaming APIs, which can support parallel processing, are widely used in today's object-oriented programming. In this dissertation, an approach is proposed that automatically suggests whether stream code should run in parallel or sequentially. However, using streams efficiently and correctly requires many subtle considerations, so use and misuse patterns for stream code are also proposed. Modern software, especially highly transactional software systems, generates vast amounts of logging information every day, and this volume prevents developers from effectively extracting useful information. Log levels can be used to filter run-time information. This dissertation proposes an automated evolution approach that alleviates logging-information overload by rejuvenating log levels according to developers' interests. Machine Learning (ML) systems are pervasive in today's software society; they are complex and can process large volumes of data. Due to this complexity, ML systems are prone to classic technical debt issues, but how ML systems evolve remains a puzzling problem. This dissertation introduces ML-specific refactorings and technical debt categories to address it.
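    As a minimal illustration (a sketch, not the dissertation's tooling), a Java 8+ stream pipeline can be switched between sequential and parallel execution with a one-token change; deciding which form is appropriate is the kind of judgement the proposed approach automates:

    ```java
    import java.util.List;

    public class StreamDemo {
        public static void main(String[] args) {
            List<Integer> nums = List.of(1, 2, 3, 4, 5);

            // Sequential stream: elements flow through the pipeline one at a
            // time on the calling thread.
            int seqSum = nums.stream().mapToInt(n -> n * n).sum();

            // Parallel stream: the same pipeline, but the runtime may split the
            // work across threads. This is only safe when the operations are
            // stateless and non-interfering, which is why suggesting the right
            // execution mode requires careful analysis.
            int parSum = nums.parallelStream().mapToInt(n -> n * n).sum();

            System.out.println(seqSum + " " + parSum); // prints "55 55"
        }
    }
    ```

    Both variants reduce to the same result here; the performance and safety trade-offs depend on the workload and on whether the pipeline's operations are side-effect-free.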