8,633 research outputs found

    ACon: A learning-based approach to deal with uncertainty in contextual requirements at runtime

    Context: Runtime uncertainty, such as an unpredictable operational environment and the failure of sensors that gather environmental data, is a well-known challenge for adaptive systems. Objective: To correctly execute requirements that depend on context, the system needs up-to-date knowledge about the context relevant to such requirements. Techniques to cope with uncertainty in contextual requirements are currently underrepresented. In this paper we present ACon (Adaptation of Contextual requirements), a data-mining approach to deal with runtime uncertainty affecting contextual requirements. Method: ACon uses feedback loops to maintain up-to-date knowledge about contextual requirements, based on the current context information in which contextual requirements are valid at runtime. Upon detecting that contextual requirements are affected by runtime uncertainty, ACon analyses and mines contextual data to (re-)operationalize context and thereby update the information about contextual requirements. Results: We evaluate ACon in an empirical study of an activity scheduling system used by a crew of four rowers in a wild and unpredictable environment, using a complex monitoring infrastructure. Our study focused on evaluating the data-mining part of ACon and analysed the sensor data collected on board, from 46 sensors with 90,748 measurements per sensor. Conclusion: ACon is an important step in dealing with uncertainty affecting contextual requirements at runtime while considering end-user interaction. ACon supports systems in analysing the environment to adapt contextual requirements, and complements existing requirements monitoring approaches by keeping the requirements monitoring specification up to date. Consequently, it avoids the manual analysis that is usually costly in today's complex system environments. Peer reviewed. Postprint (author's final draft).
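
    The abstract does not fix a particular mining technique, so the following is only a minimal sketch of the feedback-loop idea: when monitored data suggests a contextual requirement no longer holds, a classifier is re-trained on recent sensor measurements to re-operationalize the context condition. The decision-tree choice and all names below are illustrative assumptions, not ACon's actual design.

```python
# Illustrative sketch only: re-operationalizing a context condition from
# sensor data when runtime uncertainty is detected. The decision-tree
# learner and every name here are assumptions, not ACon's actual design.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def mine_context_condition(measurements, labels):
    """Learn a fresh operationalization of a context from recent data.

    measurements: (n_samples, n_sensors) array of raw sensor readings.
    labels: 1 where the contextual requirement was observed to hold, else 0.
    """
    model = DecisionTreeClassifier(max_depth=3)  # a small tree stays readable
    model.fit(measurements, labels)
    return model

# One iteration of the feedback loop: uncertainty detected, so re-mine.
rng = np.random.default_rng(0)
window = rng.normal(size=(200, 5))         # stand-in: 5 sensors, 200 readings
holds = (window[:, 0] > 0).astype(int)     # stand-in ground truth
context_model = mine_context_condition(window, holds)
print("requirement holds now?", bool(context_model.predict(window[-1:])[0]))
```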

    Runtime Requirements Monitoring Framework for Adaptive e-Learning Systems

    As academic learners and companies turn to e-learning courses to achieve their personal and professional goals, it becomes more and more important to handle service quality in this sector. Despite scientific research conducted to personalize the learning process and meet learners' requirements in adaptive e-learning systems, the specification and management of quality attributes remain particularly challenging, due to problems arising from environmental variability. In our view, a detailed, high-level specification of requirements, supported through the whole system lifecycle, is needed for a comprehensive management of adaptive e-learning systems, especially under continuously changing environmental conditions. In this paper, we propose a runtime requirements monitoring framework to check the conformity of adaptive e-learning systems to their requirements and to ensure that the activities offered by these learning environments can achieve the desired learning outcomes. As a result, when deviations (i.e., unsatisfied requirements) occur, they are identified and notified during system operation. With our approach, the requirements are supported during the whole system lifecycle. First, we specify the system's requirements in the form of a dynamic software product line. This specification applies a novel requirements engineering language that combines goal-driven requirements with features and claims, and avoids enumerating all desired adaptation strategies (i.e., when an adaptation should be applied) at design time. Second, the specification is automatically transformed into a constraint satisfaction problem, reducing requirements monitoring to a constraint program at runtime.
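
    The abstract gives no concrete encoding, so here is a minimal sketch of how such a specification might reduce to a constraint program, using the python-constraint library; the feature names, the claim, and the monitored values are all hypothetical, not the paper's language or model.

```python
# Hypothetical sketch: checking a running configuration against
# requirements encoded as a constraint satisfaction problem.
# The features and constraints below are invented for illustration.
from constraint import Problem

problem = Problem()
# Boolean feature selections of the dynamic software product line.
problem.addVariables(["adaptive_quiz", "video_lectures", "low_bandwidth"], [0, 1])

# Goal: at least one learning activity must be active.
problem.addConstraint(lambda q, v: q + v >= 1, ("adaptive_quiz", "video_lectures"))
# Claim: video lectures are not viable under low bandwidth.
problem.addConstraint(lambda v, lb: not (v and lb), ("video_lectures", "low_bandwidth"))

# Monitored context at runtime (stand-in values), pinned as constraints.
observed = {"adaptive_quiz": 0, "video_lectures": 1, "low_bandwidth": 1}
for name, value in observed.items():
    problem.addConstraint(lambda x, v=value: x == v, (name,))

# An empty solution set signals a deviation (unsatisfied requirement).
print("requirements satisfied?", bool(problem.getSolutions()))
```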

    Towards the integration of enterprise software: The business manufacturing intelligence

    Nowadays, information and communication technology has pervaded companies. A huge amount of information circulates in a company, but too much information does not provide any added value: the overload exceeds individual processing capacity and slows down decision making. We must transform this enormous quantity of information into useful knowledge, taking into consideration that information quickly becomes obsolete in a dynamic market. Companies process this information with specific software for managing business processes efficiently and effectively. In this paper we analyse the myriad of acronyms for the software used in enterprises and the changes that have occurred over time, from production to decision making, up to the convergence into an intelligent, modular enterprise software that we name Business Manufacturing Intelligence (BMI), which will manage and support the enterprise in the future.
    Keywords: business manufacturing intelligence; enterprise resource planning; business intelligence; management software; automation software; decision making software

    Non-Technical Individual Skills are Weakly Connected to the Maturity of Agile Practices

    Context: Existing knowledge in agile software development suggests that individual competency (e.g. skills) is a critical success factor for agile projects. While technical skills are assumed to be important for every kind of software development project, many researchers suggest that non-technical individual skills are especially important in agile software development. Objective: In this paper, we investigate whether non-technical individual skills can predict the use of agile practices. Method: By creating a set of multiple linear regression models using a total of 113 participants from agile teams in six software development organizations in the Netherlands and Brazil, we analyzed the predictive power of non-technical individual skills in relation to agile practices. Results: The results show that non-technical individual skills have surprisingly low power to predict (i.e. explain variance in) the mature use of agile practices in software development. Conclusions: We therefore conclude that non-technical individual skills are not the optimal level of analysis when trying to understand and explain the mature use of agile practices in the software development context. We argue that, when seeking to understand the use of agile practices, it is more important to treat non-technical skills as a team-level capacity than to assure that all individuals possess such skills. Comment: 18 pages, 1 figure
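
    As a rough illustration of the method described, and nothing more (the skill and practice variables below are synthetic stand-ins, not the study's instrument or data), a multiple linear regression of practice maturity on skill ratings might look like this, with the model's R² standing in for "predictive power":

```python
# Illustrative sketch: regressing agile-practice maturity on non-technical
# skill ratings. Variables and data are synthetic, not the study's dataset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 113  # matches the study's sample size; the data itself is invented
skills = rng.uniform(1, 7, size=(n, 3))        # e.g. communication, teamwork, adaptability
practice_maturity = rng.uniform(1, 5, size=n)  # e.g. maturity of daily stand-ups

X = sm.add_constant(skills)                    # intercept + predictors
model = sm.OLS(practice_maturity, X).fit()
print(f"R^2 = {model.rsquared:.3f}")           # low R^2 ~ weak predictive power
```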

    The Explainable Business Process (XBP) - An Exploratory Research

    Providing explanations for a business process, its decisions, and its activities is a key factor in achieving the business objectives of the process, in minimizing and dealing with the ambiguity that causes multiple interpretations of the process, and in engendering appropriate user trust in it. As a first step towards adding explanations to business processes, we present an exploratory study that brings the concept of explainability into business processes. We propose a conceptual framework for using explainability with a business process, in a model that we call the Explainable Business Process (XBP). Furthermore, we propose an XBP lifecycle based on the Model-based and Incremental Knowledge Engineering (MIKE) approach, in order to show in detail the phases where explainability can take place in the business process lifecycle. Note that we focus on explaining the decisions and activities of the process in its as-is model, without transforming it into a to-be model.
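
    The paper is conceptual and does not prescribe a representation, but one way to read the idea is as explanation records attached to the elements of an as-is process model. The sketch below is purely a hypothetical reading, with every name invented.

```python
# Purely hypothetical sketch of attaching explanations to the decisions and
# activities of an as-is process model; not the paper's actual framework.
from dataclasses import dataclass, field

@dataclass
class Explanation:
    question: str   # what the explanation answers, e.g. "why this decision?"
    rationale: str  # human-readable justification

@dataclass
class ProcessElement:
    name: str
    kind: str  # "activity" or "decision"
    explanations: list[Explanation] = field(default_factory=list)

approve = ProcessElement("Approve loan", "decision")
approve.explanations.append(
    Explanation("why this decision?",
                "Applicant's credit score exceeds the bank's threshold.")
)
for e in approve.explanations:
    print(f"{approve.name}: {e.question} -> {e.rationale}")
```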

    Tackling Version Management and Reproducibility in MLOps

    The growing adoption of machine learning solutions requires advancements in applying best practices to maintain artificial intelligence systems in production. Machine Learning Operations (MLOps) incorporates DevOps principles into machine learning development, promoting automation, continuous delivery, monitoring, and training capabilities. Due to multiple factors, such as the experimental nature of the machine learning process or the need for model optimizations derived from changes in business needs, data scientists are expected to create multiple experiments to develop a model or predictor that satisfactorily addresses the main challenges of a given problem. Since the re-evaluation of models is a constant need, metadata is constantly produced by multiple experiment runs. This metadata is known as ML artifacts or assets. Proper lineage between these artifacts enables environment recreation, facilitating model reproducibility. Linking information from experiments, models, datasets, configurations, and code changes requires proper organization, tracking, maintenance, and version control of these artifacts. This work investigates the best practices, current issues, and open challenges related to artifact versioning and management, and applies this knowledge to develop an ML workflow that supports ML engineering and operationalization, applying MLOps principles that facilitate model reproducibility. Scenarios covering data preparation, model generation, comparison between model versions, deployment, monitoring, debugging, and retraining demonstrate how the selected frameworks and tools can be integrated to achieve that goal.
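
    The abstract does not name the selected tools, so as one possible illustration only, experiment tracking with MLflow (an assumption, not necessarily this work's choice) can link a run's parameters, metrics, and data/code versions so that the environment can later be recreated:

```python
# Illustrative only: MLflow is one common experiment tracker; the abstract
# does not say which frameworks the work actually selected.
import mlflow

mlflow.set_experiment("churn-model")   # hypothetical experiment name
with mlflow.start_run():
    # Version-relevant metadata: hyperparameters plus data and code versions.
    mlflow.log_param("learning_rate", 0.01)
    mlflow.set_tag("dataset_version", "v1.3")  # hypothetical dataset tag
    mlflow.set_tag("git_commit", "abc1234")    # hypothetical code version
    # Outcome of this run, comparable across model versions.
    mlflow.log_metric("rmse", 0.31)
```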

    CloudOps: Towards the Operationalization of the Cloud Continuum: Concepts, Challenges and a Reference Framework

    The current trend of developing highly distributed, context-aware, compute-intensive, and data-sensitive applications is changing the boundaries of cloud computing. Encouraged by the growing IoT paradigm and the availability of flexible edge devices, an ecosystem of resources, ranging from high-density compute and storage to very lightweight embedded computers running on batteries or solar power, is available to DevOps teams in what is known as the Cloud Continuum. In this dynamic context, manageability is key, as are controlled operations and resource monitoring for handling anomalies. Unfortunately, the operation and management of such heterogeneous computing environments (including edge, cloud and network services) is complex, and operators face challenges such as the continuous optimization and autonomous (re-)deployment of context-aware stateless and stateful applications, while having to ensure service continuity and anticipate potential failures in the underlying infrastructure. In this paper, we propose a novel CloudOps workflow that extends the traditional DevOps pipeline, proposing techniques and methods for application operators to fully embrace the possibilities of the Cloud Continuum. Our approach will support DevOps teams in the operationalization of the Cloud Continuum. We also provide an extensive explanation of the scope, possibilities and future of CloudOps. This research was funded by the European project PIACERE (Horizon 2020 Research and Innovation Programme, under grant agreement No. 101000162).
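
    The abstract describes the workflow only at a conceptual level. As a hedged illustration of the "monitor, detect anomaly, redeploy" idea (every service name, function, and threshold below is invented, not part of the PIACERE workflow), a minimal control loop might look like:

```python
# Invented sketch of a monitor-and-redeploy loop across the Cloud Continuum;
# the services, metric, and threshold are hypothetical placeholders.
import random
import time

SERVICES = ["edge-gateway", "cloud-api"]  # hypothetical deployments
LATENCY_SLO_MS = 200                      # hypothetical service-level objective

def probe_latency_ms(service: str) -> float:
    """Stand-in for real telemetry collection."""
    return random.uniform(50, 400)

def redeploy(service: str) -> None:
    """Stand-in for an autonomous (re-)deployment action."""
    print(f"redeploying {service} to a healthier node...")

for _ in range(3):                        # a few monitoring cycles
    for service in SERVICES:
        if probe_latency_ms(service) > LATENCY_SLO_MS:  # anomaly: SLO violated
            redeploy(service)
    time.sleep(0.1)                       # short interval for the sketch
```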
