
    Adding debugging support to the Prometheus methodology

    This paper describes a debugger that uses the design artifacts of the Prometheus agent-oriented software engineering methodology to alert the developer testing the system that a specification has been violated. Detailed information is provided about the error, which can help the developer locate its source. Interaction protocols specified during design are converted to executable Petri net representations; the system can then be monitored at run time to identify situations that do not conform to the specified protocols. A process for monitoring aspects of plan selection is also described. The paper then describes the Prometheus Design Tool, developed to support the Prometheus methodology, and presents a vision of an integrated development environment providing full life-cycle support for the development of agent systems. The initial part of the paper provides a detailed summary of the Prometheus methodology and the artifacts on which the debugger is based.
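
    The abstract does not give the details of the protocol-to-Petri-net translation. As a minimal illustrative sketch of the underlying idea, the code below models a protocol as a Petri net whose transitions correspond to message types: an observed message that cannot fire any enabled transition is flagged as a protocol violation. All class, place and message names are hypothetical.

    ```python
    # Sketch: monitoring observed messages against a Petri-net protocol model.
    # Class, place and message names are hypothetical, not from the paper.

    class Transition:
        def __init__(self, message, inputs, outputs):
            self.message = message    # message type that fires this transition
            self.inputs = inputs      # set of places that must hold tokens
            self.outputs = outputs    # set of places that receive tokens

    class ProtocolMonitor:
        def __init__(self, transitions, initial_marking):
            self.transitions = transitions
            self.marking = set(initial_marking)  # places currently marked

        def observe(self, message):
            """Fire a matching enabled transition, or report a violation."""
            for t in self.transitions:
                if t.message == message and t.inputs <= self.marking:
                    self.marking = (self.marking - t.inputs) | t.outputs
                    return
            raise RuntimeError(f"Protocol violation: unexpected '{message}' "
                               f"in marking {sorted(self.marking)}")

    # A two-step request/reply protocol.
    monitor = ProtocolMonitor(
        [Transition("request", {"start"}, {"awaiting_reply"}),
         Transition("reply", {"awaiting_reply"}, {"done"})],
        initial_marking={"start"},
    )
    monitor.observe("request")
    monitor.observe("reply")    # conforms to the protocol
    # monitor.observe("reply")  # would raise: no enabled 'reply' transition
    ```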

    AUML protocols and code generation in the Prometheus design tool

    Prometheus is an agent-oriented software engineering methodology. The Prometheus Design Tool (PDT) is a software tool that supports a designer using the Prometheus methodology. PDT has recently been extended with two significant new features: support for Agent UML (AUML) interaction protocols, and code generation.

    Automated unit testing intelligent agents in PDT

    The Prometheus Design Tool (PDT) is an agent development tool that supports the Prometheus design methodology and includes features such as automated code generation. We enhance the tool with a feature that allows automated unit testing of agents built within PDT.

    Prometheus design tool

    The Prometheus Design Tool (PDT) supports the structured design of intelligent agent systems. It supports the Prometheus methodology but can also be used more generally. This paper outlines the tool and some of its many features.

    Tackling Version Management and Reproducibility in MLOps

    The growing adoption of machine learning (ML) solutions requires advancements in applying best practices to maintain artificial intelligence systems in production. Machine Learning Operations (MLOps) incorporates DevOps principles into machine learning development, promoting automation, continuous delivery, monitoring, and training capabilities. Due to multiple factors, such as the experimental nature of the machine learning process or the need for model optimizations derived from changes in business needs, data scientists are expected to create multiple experiments to develop a model or predictor that satisfactorily addresses the main challenges of a given problem. Since the re-evaluation of models is a constant need, metadata is constantly produced by multiple experiment runs. This metadata is known as ML artifacts or assets. Proper lineage between these artifacts enables recreation of the environment in which they were developed, facilitating model reproducibility. Linking information from experiments, models, datasets, configurations, and code changes requires proper organization, tracking, maintenance, and version control of these artifacts. This work will investigate the best practices, current issues, and open challenges related to artifact versioning and management, and apply this knowledge to develop an ML workflow that supports ML engineering and operationalization, applying MLOps principles that facilitate model reproducibility. Scenarios covering data preparation, model generation, comparison between model versions, deployment, monitoring, debugging, and retraining demonstrated how the selected frameworks and tools could be integrated to achieve that goal.
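
    The abstract does not commit to particular frameworks. As one hedged illustration of the artifact-lineage practice it describes, the sketch below uses MLflow's tracking API to tie a run's parameters, metrics, dataset version and code revision together so the experiment can be reproduced later; the experiment name, dataset URI and tag names are assumptions for the example.

    ```python
    # Sketch: recording one training run's lineage with MLflow's tracking API.
    # Experiment name, dataset URI and tag names are assumptions for the example.
    import subprocess
    import mlflow

    def current_git_commit():
        """Capture the code revision so the run can be traced to exact sources."""
        return subprocess.check_output(["git", "rev-parse", "HEAD"]).decode().strip()

    mlflow.set_experiment("churn-predictor")

    with mlflow.start_run():
        # Link the code, data and configuration versions to this run's metadata.
        mlflow.set_tag("git_commit", current_git_commit())
        mlflow.log_param("dataset_version", "s3://datasets/churn/v3")  # hypothetical URI
        mlflow.log_params({"learning_rate": 0.01, "n_estimators": 200})

        # ... train and evaluate the model here ...
        validation_auc = 0.91  # placeholder for a real evaluation result

        mlflow.log_metric("val_auc", validation_auc)
        # Logged artifacts (model files, plots, configs) travel with the run,
        # so a later run can recreate the same inputs and environment.
    ```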

    Early detection of design faults relative to requirement specifications in agent-based models

    Agent systems are used for a wide range of applications, and techniques to detect and avoid defects in such systems are valuable. In particular, it is desirable to detect issues as early as possible in the software development lifecycle. We describe a technique for checking the plan structures of a BDI agent design against the requirements models, specified in terms of scenarios and goals. The approach is applicable at design time and does not require source code. A lightweight evaluation demonstrates that a range of defects can be found using this technique.
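
    The checking algorithm itself is not given in the abstract. A minimal sketch of one such design-time check, under the assumption that the design records which goal each plan achieves, is to flag requirements goals that no plan covers; all names below are illustrative.

    ```python
    # Sketch: flag goals in the requirements model that no design plan achieves.
    # Data shapes and names are illustrative, not the paper's actual models.

    requirements_goals = {"book_flight", "notify_user", "handle_payment"}

    design_plans = {
        "BookingPlan": {"achieves": "book_flight"},
        "PaymentPlan": {"achieves": "handle_payment"},
    }

    covered = {plan["achieves"] for plan in design_plans.values()}
    for goal in sorted(requirements_goals - covered):
        print(f"Design fault: goal '{goal}' has no plan that achieves it")
    # -> Design fault: goal 'notify_user' has no plan that achieves it
    ```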

    A model driven component agent framework for domain experts

    Industrial software systems are becoming more complex, with a large number of interacting parts distributed over networks. Due to the inherent complexity of the problem domains, most such systems are modified over time to incorporate emerging requirements, making incremental development a suitable approach for building complex systems. In domain-specific systems it is the domain experts, as end users, who identify improvements that better suit their needs. Examples include meteorologists who use weather modeling software, engineers who use control systems, and business analysts in business process modeling. Most domain experts are not fluent in systems programming, and changes are realised through software engineers. This process hinders the evolution of the system, making it time consuming and costly. We hypothesise that if domain experts were empowered to make some of the system changes themselves, it would greatly ease the evolutionary process, thereby making the systems more effective. Agent Oriented Software Engineering (AOSE) is seen as a natural fit for modeling and implementing distributed complex systems. With concepts such as goals and plans, agent systems support easy extension of functionality, which facilitates incremental development. Further, agents provide an intuitive metaphor that works at a higher level of abstraction than the object-oriented model. However, agent programming is not at a level accessible to domain experts, who therefore cannot capitalise on its intuitiveness and appropriateness for building complex systems. We propose a model-driven development approach for domain experts that uses visual modeling and automated code generation to simplify the development and evolution of agent systems. Our approach is called the Component Agent Framework for domain-Experts (CAFnE), which builds upon concepts from Model Driven Development and the Prometheus agent software engineering methodology. CAFnE enables domain experts to work with a graphical representation of the system, which is easier to understand and work with than textual code. The model of the system, updated by domain experts, is then transformed into executable code using a transformation function. CAFnE is supported by a proof-of-concept toolkit that implements the visual modeling, model-driven development and code generation. We used the CAFnE toolkit in a user study where five domain experts (weather forecasters) with no prior experience in agent programming were asked to make changes to an existing weather alerting system. Participants were able to rapidly become familiar with CAFnE concepts, comprehend the system's design, make design changes and implement them using the CAFnE toolkit.
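
    CAFnE's transformation function is not specified in the abstract. As a toy sketch of the general model-driven idea, the code below turns a small declarative agent model into an executable class skeleton; the model format and generated code are assumptions, not CAFnE's actual transformation.

    ```python
    # Toy sketch: a transformation function from a declarative agent model to
    # executable code. The model format and output are assumptions, not CAFnE's.

    model = {
        "agent": "WeatherAlerter",
        "plans": [
            {"name": "issue_storm_warning", "trigger": "storm_detected"},
            {"name": "send_daily_summary", "trigger": "end_of_day"},
        ],
    }

    def transform(model):
        """Generate a Python class skeleton with one method per plan."""
        lines = [f"class {model['agent']}:"]
        for plan in model["plans"]:
            lines.append(f"    def {plan['name']}(self, event):")
            lines.append(f"        # handles trigger: {plan['trigger']}")
            lines.append("        pass")
        return "\n".join(lines)

    print(transform(model))  # the generated source for the modelled agent
    ```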

    Prioritisation mechanisms to support incremental development of agent systems

    It is often necessary to partition a project into different priority levels and to develop incrementally. This paper presents a mechanism whereby a developer can prioritise scenarios on a five-point scale, leading to automated, coherent partitioning of all required design entities according to the three IEEE-defined priority levels of essential, conditional and optional, which are used in many companies. This provides automated support to guide the developer as to which design artefacts need to be developed at each phase. The developer can indicate the relative sizes desired for the three partitions, and the algorithm described will attempt to get as close to this as possible. It is also possible to move items manually to achieve better-sized partitions, as long as priority orderings are not violated. The approach is fast and easy to apply at various times during development, as needed.
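
    The partitioning algorithm itself is not detailed in the abstract. One simple way to respect the priority ordering while approximating the requested partition sizes is to sort scenarios by priority and cut the sorted list at the requested sizes, as in the sketch below; this greedy cut is an illustration, not necessarily the paper's method.

    ```python
    # Sketch: partition scenarios (priority 1 = highest .. 5 = lowest) into the
    # three IEEE levels while approximating requested sizes. Sorting first means
    # no lower-priority scenario can land in a higher partition. This greedy cut
    # is an illustration, not the paper's algorithm.

    def partition(scenarios, target_sizes):
        ordered = sorted(scenarios, key=lambda s: s[1])   # highest priority first
        cut1 = target_sizes[0]
        cut2 = cut1 + target_sizes[1]
        return {
            "essential":   ordered[:cut1],
            "conditional": ordered[cut1:cut2],
            "optional":    ordered[cut2:],   # absorbs any remainder
        }

    scenarios = [("login", 1), ("search", 2), ("export", 4),
                 ("reporting", 3), ("theming", 5)]
    print(partition(scenarios, (2, 2, 1)))
    # {'essential': [('login', 1), ('search', 2)],
    #  'conditional': [('reporting', 3), ('export', 4)],
    #  'optional': [('theming', 5)]}
    ```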

    Model based testing for agent systems

    Although agent technology is gaining worldwide popularity, a hindrance to its uptake is the lack of proper testing mechanisms for agent-based systems. While many traditional software testing methods can be generalized to agent systems, there are many aspects that are different and which require an understanding of the underlying agent paradigm. In this paper we present certain aspects of a testing framework that we have developed for agent-based systems. The testing framework is a model-based approach that uses the design models of the Prometheus agent development methodology. We focus on unit testing: we identify the appropriate units, present mechanisms for generating suitable test cases and for determining the order in which the units are to be tested, and give a brief overview of the unit testing process with an example. Although we use the design artefacts from Prometheus, the approach is suitable for any plan- and event-based agent system.
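
    The abstract mentions determining the order in which units are tested. A common realisation of that step, not necessarily the paper's exact mechanism, is to test each unit before the units that depend on it, i.e. a topological order over the unit dependency graph; the plan and event names below are made up.

    ```python
    # Sketch: order agent units so each is tested before the units that depend
    # on it; a plain topological sort over a made-up dependency graph.
    from graphlib import TopologicalSorter  # Python 3.9+

    # unit -> units it depends on (e.g. the plans that handle its subgoal events)
    dependencies = {
        "TripRequestEvent":  {"PlanFlightPlan", "PlanHotelPlan"},
        "PlanFlightPlan":    {"QueryAirlineEvent"},
        "PlanHotelPlan":     set(),
        "QueryAirlineEvent": set(),
    }

    print(list(TopologicalSorter(dependencies).static_order()))
    # e.g. ['PlanHotelPlan', 'QueryAirlineEvent', 'PlanFlightPlan', 'TripRequestEvent']
    ```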

    Dynamic Monitoring in PANGEA Platform Using Event-Tracing Mechanisms

    The use of distributed multi-agent systems (MAS) has increased in recent years, with growing potential to handle large volumes of data and coordinate the operations of many organizations. In these systems, each agent independently handles a set of specialized tasks and cooperates to achieve the goals of the system with a high degree of flexibility. Multi-agent systems have become the most effective and widely used way of developing this type of application, in which communication among various devices must be both reliable and efficient. One of the problems in distributed computing is message passing, which underlies the interaction and coordination among intelligent agents. Consequently, a multi-agent architecture must provide a robust communication platform and control mechanisms. This paper presents the integration of an event-tracing model in an agent platform called PANGEA. With this new capability, the platform improves the monitoring and analysis of the information that agents send and receive in order to fulfil their goals more efficiently.
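
    PANGEA's actual tracing API is not shown in the abstract. As an illustrative sketch of event tracing over agent message passing, the code below lets a monitor subscribe to send events on a message bus; the class and agent names are hypothetical.

    ```python
    # Sketch: a tracing hook on agent message passing so a monitor can observe
    # every send event. Illustrative only; not PANGEA's actual API.
    import time

    class TracingBus:
        def __init__(self):
            self.subscribers = []  # callables notified of each trace event

        def subscribe(self, callback):
            self.subscribers.append(callback)

        def send(self, sender, receiver, content):
            event = {"time": time.time(), "from": sender,
                     "to": receiver, "content": content}
            for notify in self.subscribers:  # trace before delivery
                notify(event)
            # ... actual delivery to the receiver would happen here ...

    bus = TracingBus()
    bus.subscribe(lambda e: print(f"TRACE {e['from']} -> {e['to']}: {e['content']}"))
    bus.send("SensorAgent", "AlertAgent", "temperature=41C")
    ```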