9 research outputs found

    Webprocesspair: Recommendation System of Improvement Actions

    To support software development, the use of appropriate tools such as the PSP (Personal Software Process) is recommended. The PSP is a high-maturity software development process used to improve the estimation and planning of projects, manage their quality, and reduce their problems. Tools such as those supporting the PSP generate large amounts of data about the user's performance, which can be periodically analyzed to identify performance problems (for example, a high defect density in delivered products), determine the causes of those problems (for example, a high number of defects reaching the system test phase), and devise improvement actions (for example, introducing unit tests and code reviews). However, analyzing the data generated by these processes is a slow and tiring task, due to the large amounts of data involved and the time and specialized knowledge required to perform the analysis. To address this problem of manual analysis, a Java application, ProcessPAIR, was developed in a doctoral project; it identifies performance problems, determines their causes, and ranks those causes. The main objective of this dissertation is to extend the ProcessPAIR approach and tool to the web, as WebProcessPAIR, with a new feature: devising improvement actions. This feature is based on building a catalogue of possible improvement actions to address the causes of performance problems. To that end, the aim is to draw not only on the existing literature but also on methods such as crowdsourcing, that is, on the knowledge of a community of users and experts. Each performance problem and respective cause reported by the application will be assigned a ranked list of improvement actions suggested by the contributors and users of the website (WebProcessPAIR).
The community will be able to vote on these suggestions (for example, through a system of likes or of upvotes/downvotes). For each list of improvement actions suggested to a user, the user will be able to leave feedback on the list, saying whether or not they agree with it. With this feedback, the system will use machine learning, updating the catalogue of improvement actions according to the suggestions given by contributors and the feedback left by users; in other words, it will learn automatically as it is used.
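    The catalogue-and-feedback mechanism described above can be sketched as a score-ranked list of actions. This is an illustrative assumption, not the actual WebProcessPAIR design: the class name, the vote/feedback fields, and the weighting of delivered-list feedback over casual votes are all invented for the example.

```python
# Illustrative sketch (not the WebProcessPAIR implementation): each catalogue
# entry accumulates community votes and user feedback, and the suggested
# actions for a given cause are ranked by the resulting score.
from dataclasses import dataclass


@dataclass
class ImprovementAction:
    description: str
    upvotes: int = 0
    downvotes: int = 0
    agree_feedback: int = 0       # users who agreed with a suggested list
    disagree_feedback: int = 0

    def score(self) -> float:
        # Additive score; feedback on delivered lists is weighted higher
        # than casual votes (the 2.0 weight is an assumption).
        return (self.upvotes - self.downvotes) + 2.0 * (
            self.agree_feedback - self.disagree_feedback
        )


def rank_actions(actions):
    """Return the catalogue's actions ordered best-first by score."""
    return sorted(actions, key=lambda a: a.score(), reverse=True)


catalog = [
    ImprovementAction("Introduce unit testing", upvotes=10, downvotes=1,
                      agree_feedback=4),
    ImprovementAction("Add code reviews", upvotes=7, downvotes=2,
                      agree_feedback=6, disagree_feedback=1),
]
ranked = rank_actions(catalog)
print([a.description for a in ranked])
```

    As users vote and leave feedback, the scores shift and the ranked lists served for each cause change accordingly, which is the "learning as it is used" behaviour the abstract describes in its simplest form.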

    Artificial intelligence and machine learning for maturity evaluation and model validation

    In this paper, we discuss the possibility of using machine learning (ML) to specify and validate maturity models, in particular maturity models related to the assessment of the digital capabilities of an organization. Over the last decade, a rather large number of maturity models have been suggested for different aspects (such as type of technology or considered processes) and in relation to different industries. Usually, these models are based on a number of assumptions, such as the data used for the assessment, the mathematical formulation of the model, and various parameters such as weights or importance indicators. Empirical evidence for such assumptions is usually lacking. We investigate the potential of using data from assessments over time and across similar institutions for the machine learning of the respective models. Related concepts are worked out in some detail, and for some types of maturity assessment models a possible application of the concept is discussed.
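    The core idea of the paper, replacing expert-set parameters of a maturity model with parameters learned from past assessments, can be illustrated with a small regression sketch. The weighted-sum model form, the three dimensions, and the data are assumptions made for the example; the paper itself treats the concept more generally.

```python
# Hedged sketch: fit the weights of a weighted-sum maturity model to pairs of
# (dimension scores -> overall rating) with plain stochastic gradient descent,
# instead of fixing the weights by expert judgement.
def fit_weights(X, y, lr=0.01, epochs=2000):
    """Least-squares fit of: overall maturity = sum(w[j] * dimension_score[j])."""
    n = len(X[0])
    w = [0.0] * n
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) - yi
            for j in range(n):
                w[j] -= lr * err * xi[j]  # stochastic gradient step
    return w


# Past assessments: (technology, process, people) dimension scores per
# organization, and the overall rating each assessment produced (generated
# here from known weights so the fit can be checked).
X = [(3.0, 2.0, 1.0), (4.0, 4.0, 3.0), (1.0, 2.0, 2.0), (5.0, 3.0, 4.0)]
y = [0.5 * a + 0.3 * b + 0.2 * c for a, b, c in X]
w = fit_weights(X, y)
print([round(wj, 3) for wj in w])
```

    With consistent data, the learned weights recover the generating ones; with real assessment data they would instead provide the empirical evidence for the model's parameters that the abstract notes is usually lacking.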

    Swarm intelligence-based model for improving prediction performance of low-expectation teams in educational software engineering projects

    Software engineering is one of the most significant areas of computing, extensively used in both educational and industrial fields. Software engineering education plays an essential role in keeping students up to date with the software technologies, products, and processes commonly applied in the software industry. The software development project is one of the most important parts of a software engineering course, because it covers the practical side of the course. This type of project helps strengthen students' skills in collaborating, in a team spirit, on software projects. A software project involves the composition of software product and process parts. The software product part represents the software deliverables at each phase of the Software Development Life Cycle (SDLC), while the software process part captures team activities and behaviors during the SDLC. Low-expectation teams face challenges during the different stages of a software project. Consequently, predicting the performance of such teams is one of the most important tasks for the learning process in software engineering education. Early prediction of the performance of low-expectation teams would help instructors address the difficulties and challenges related to such teams at the earliest possible phases of the software project, to avoid project failure. Several studies have attempted to predict the performance of low-expectation teams early, at different phases of the SDLC. This study introduces a swarm intelligence-based model that aims to improve prediction performance for low-expectation teams at the earliest possible phases of the SDLC by implementing Particle Swarm Optimization-K Nearest Neighbours (PSO-KNN); it also attempts to reduce the number of selected software product and process features, reaching higher accuracy while identifying fewer than 40 relevant features. Experiments were conducted on the Software Engineering Team Assessment and Prediction (SETAP) project dataset.
The proposed model was compared with related studies and with state-of-the-art Machine Learning (ML) classifiers: Sequential Minimal Optimization (SMO), Simple Linear Regression (SLR), Naïve Bayes (NB), Multilayer Perceptron (MLP), standard KNN, and J48. The proposed model provides superior results compared to the traditional ML classifiers and to the state-of-the-art studies in the investigated phases of software product and process development.
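    As a rough illustration of the PSO-KNN idea, which the abstract names but does not detail, the sketch below wraps a binary particle-swarm search for a feature mask around a leave-one-out 1-NN classifier. The toy dataset, the move rules, and all parameters are assumptions made for the example; the actual study uses the SETAP dataset and a proper PSO formulation.

```python
# Hedged sketch of PSO-style feature selection around a 1-NN classifier.
import random

random.seed(0)


def knn_accuracy(data, labels, mask):
    """Leave-one-out 1-NN accuracy using only features where mask[j] == 1."""
    feats = [j for j, m in enumerate(mask) if m]
    if not feats:
        return 0.0
    correct = 0
    for i, x in enumerate(data):
        best_k, best_d = None, float("inf")
        for k, y in enumerate(data):
            if k == i:
                continue
            d = sum((x[j] - y[j]) ** 2 for j in feats)
            if d < best_d:
                best_k, best_d = k, d
        correct += labels[best_k] == labels[i]
    return correct / len(data)


def pso_select(data, labels, n_particles=8, iters=20):
    """Search for the feature mask that maximises 1-NN accuracy."""
    n = len(data[0])
    swarm = [[random.randint(0, 1) for _ in range(n)] for _ in range(n_particles)]
    best = list(max(swarm, key=lambda m: knn_accuracy(data, labels, m)))
    for _ in range(iters):
        for p in swarm:
            for j in range(n):
                if random.random() < 0.3:
                    p[j] = best[j]        # drift each bit toward the global best
                elif random.random() < 0.1:
                    p[j] = 1 - p[j]       # random exploration flip
            if knn_accuracy(data, labels, p) > knn_accuracy(data, labels, best):
                best = list(p)
    return best


# Toy data: feature 0 separates the classes, feature 1 is noise.
data = [(0.0, 5.2), (0.1, 1.1), (0.2, 4.8), (1.0, 5.0), (1.1, 0.9), (1.2, 4.9)]
labels = [0, 0, 0, 1, 1, 1]
mask = pso_select(data, labels)
print(mask, knn_accuracy(data, labels, mask))
```

    The point of the wrapper search is the one the abstract makes: dropping irrelevant features can raise the classifier's accuracy while shrinking the feature set.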

    Methods, Techniques and Tools to Support Software Project Management in High Maturity

    High maturity in software development is associated with the statistical control of the performance of critical subprocesses and with the use of the gained predictability to manage projects with better planning precision and monitoring control. Although maturity models such as CMMI mention some statistical and other quantitative approaches, methods and techniques that can support project management at high maturity, they neither provide details about them nor present the available types. Therefore, there is a lack of knowledge on how to support software process improvement initiatives in choosing and using statistical and other quantitative approaches, methods and techniques in that context. The objective of this study is to identify the different approaches, methods and techniques that can assist in managing projects at high maturity. By conducting a systematic literature mapping on major data sources, we identified 75 papers describing 101 contributions. We briefly describe the identified approaches, methods and techniques, grouped by similar types, and provide some analysis regarding their technological maturity stage and evaluation method, the development methods and characteristics they support, and the process/indicator areas in which they were applied. We hope this information can fill some of the knowledge gap about the actual types of statistical and quantitative approaches, methods and techniques being proposed, evaluated, experimented with, and adopted by organizations to assist quantitative project management at high maturity.
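    One concrete example of the statistical techniques such a mapping covers is the individuals-and-moving-range (XmR) control chart, commonly used to put a subprocess indicator under statistical control. The defect-density numbers below are invented for illustration.

```python
# Illustrative XmR (individuals-and-moving-range) chart computation for a
# subprocess indicator such as defect density per KLOC.
def xmr_limits(samples):
    """Return (center, lower, upper) natural process limits for an XmR chart."""
    n = len(samples)
    center = sum(samples) / n
    moving_ranges = [abs(samples[i] - samples[i - 1]) for i in range(1, n)]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    # 2.66 is the standard XmR constant (3 / d2, with d2 = 1.128 for n = 2).
    return center, center - 2.66 * mr_bar, center + 2.66 * mr_bar


defect_density = [4.2, 3.8, 5.1, 4.6, 4.0, 3.9, 4.4]  # made-up observations
center, lcl, ucl = xmr_limits(defect_density)
out_of_control = [x for x in defect_density if not (lcl <= x <= ucl)]
print(round(center, 3), round(lcl, 3), round(ucl, 3), out_of_control)
```

    Points outside the computed limits signal special-cause variation in the subprocess, which is exactly the predictability-based control that high maturity relies on.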

    Copyright Policies of Scientific Publications in Institutional Repositories: The Case of INESC TEC

    The progressive transformation of scientific practices, driven by the development of new Information and Communication Technologies (ICT), has made it possible to increase access to information, gradually moving towards an opening of the research cycle. In the long term, this opening can resolve an adversity that researchers have faced: the existence of barriers, whether geographical or financial, that limit the conditions of access. Although scientific production is dominated mostly by large commercial publishers and is subject to the rules they impose, the Open Access movement, whose first public declaration, the Budapest Declaration (BOAI), dates from 2002, proposes significant changes that benefit both authors and readers. The movement has gained importance in Portugal since 2003, with the creation of the first institutional repository at the national level. Institutional repositories emerged as a tool for disseminating the scientific production of an institution, with the aim of opening up research results both before publication and peer review (preprint) and after it (postprint), and consequently increasing the visibility of the work developed by a researcher and their institution. The present study, an analysis of the copyright policies of the most relevant scientific publications of INESC TEC, showed not only that publishers increasingly adopt policies that allow the self-archiving of publications in institutional repositories, but also that a whole effort of awareness-raising remains to be done, not only among researchers but also within the institution and society as a whole.
The production of a set of recommendations, including the implementation of an institutional policy that encourages the self-archiving in the repository of publications developed in the institutional scope, serves as a starting point for a greater appreciation of the scientific production of INESC TEC.

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access two-volume set constitutes the proceedings of the 27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2021, which was held during March 27 – April 1, 2021, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg and changed to an online format due to the COVID-19 pandemic. The 41 full papers presented in the proceedings were carefully reviewed and selected from 141 submissions. The volumes also contain 7 tool papers, 6 tool demo papers, and 9 SV-Comp competition papers. The papers are organized in topical sections as follows: Part I: Game Theory; SMT Verification; Probabilities; Timed Systems; Neural Networks; Analysis of Network Communication. Part II: Verification Techniques (not SMT); Case Studies; Proof Generation/Validation; Tool Papers; Tool Demo Papers; SV-Comp Tool Competition Papers.

    Knowledge derivation and data mining strategies for probabilistic functional integrated networks

    One of the fundamental goals of systems biology is the experimental verification of the interactome: the entire complement of molecular interactions occurring in the cell. Vast amounts of high-throughput data have been produced to aid this effort. However, these data are incomplete and contain high levels of both false positives and false negatives. To combat these limitations in data quality, computational techniques have been developed to evaluate the datasets and integrate them in a systematic fashion using graph theory. The result is an integrated network that can be analysed using a variety of network analysis techniques to draw new inferences about biological questions and to guide laboratory experiments. Individual research groups are interested in specific biological problems and, consequently, network analyses are normally performed with regard to a specific question. However, the majority of existing data integration techniques are global and do not focus on specific areas of biology. Currently this issue is addressed by using known annotation data (such as that from the Gene Ontology) to produce process-specific subnetworks. However, this approach discards useful information and is of limited use in poorly annotated areas of the interactome. Therefore, there is a need for network integration techniques that produce process-specific networks without loss of data. The work described here addresses this requirement by extending one of the most powerful integration techniques, probabilistic functional integrated networks (PFINs), to incorporate a concept of biological relevance. Initially, the available functional data for the baker's yeast Saccharomyces cerevisiae were evaluated to identify areas of bias and specificity that could be exploited during network integration. This information was used to develop an integration technique that emphasises interactions relevant to specific biological questions, using yeast ageing as an exemplar.
The integration method improves performance in network-based protein function prediction for this process. Further, the process-relevant networks complement classical network integration techniques and significantly improve network analysis in a wide range of biological processes. The method developed has been used to produce novel predictions for 505 Gene Ontology biological processes. Of these predictions, 41,610 are consistent with existing computational annotations, and 906 are consistent with known expert-curated annotations. The approach significantly reduces the hypothesis space for experimental validation of genes hypothesised to be involved in the oxidative stress response. Therefore, incorporating biological relevance into network integration can significantly improve network analysis with regard to individual biological questions.
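    The scoring idea behind probabilistic functional integrated networks can be sketched in a few lines: each dataset receives a log-likelihood score against a gold standard, and the scores of all datasets reporting an interaction are summed naive-Bayes style. The dataset names and true/false-positive rates below are invented for illustration and are not the thesis's actual values.

```python
# Minimal sketch of PFIN-style evidence integration.
import math


def dataset_lls(tp_rate, fp_rate):
    """Log-likelihood score of a dataset, estimated against a gold standard."""
    return math.log(tp_rate / fp_rate)


def edge_confidence(evidence, lls_by_dataset):
    """Sum the LLS of every dataset reporting this interaction."""
    return sum(lls_by_dataset[d] for d in evidence)


# Invented reliabilities: rates would really come from a curated gold standard.
lls = {
    "yeast_two_hybrid": dataset_lls(0.30, 0.10),   # noisy: low weight
    "coexpression": dataset_lls(0.20, 0.05),
    "tandem_affinity": dataset_lls(0.60, 0.02),    # reliable: high weight
}
score = edge_confidence({"coexpression", "tandem_affinity"}, lls)
print(round(score, 3))
```

    An interaction supported by several reliable datasets thus ends up with a high confidence score, and a relevance-aware variant, as developed in this work, would additionally reweight datasets by their pertinence to the biological question at hand.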

    Empirical Evaluation of the ProcessPAIR Tool for Automated Performance Analysis

    Software development processes can generate significant amounts of data that can be periodically analyzed to identify performance problems, determine their root causes, and devise improvement actions. However, conducting that analysis manually is challenging because of the potentially large amount of data to analyze and the effort and expertise required. ProcessPAIR is a novel tool designed to help developers analyze their performance data with less effort, by automatically identifying and ranking performance problems and potential root causes. The analysis is based on performance models derived from the performance data of a large community of developers. In this paper, we present the results of an experiment conducted in the context of Personal Software Process (PSP) training, showing that ProcessPAIR can accurately identify and rank the performance problems and potential root causes of individual developers, so that subsequent manual analysis for the identification of deeper causes and improvement actions can be properly focused.
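    A minimal sketch of the kind of automated analysis evaluated here: compare a developer's indicator against the distribution of a community of developers and flag, then rank, indicators that fall in a poor-performing percentile. The indicator names, data, and 80th-percentile threshold are assumptions for illustration, not ProcessPAIR's actual performance models.

```python
# Illustrative percentile-based flagging and ranking of performance problems.
def percentile_rank(community, value):
    """Fraction of community values strictly below the given value."""
    return sum(v < value for v in community) / len(community)


def flag_problems(community_data, developer_data, threshold=0.8):
    """Flag indicators (higher = worse) where the developer is in the worst 20%."""
    flags = {}
    for indicator, value in developer_data.items():
        rank = percentile_rank(community_data[indicator], value)
        if rank >= threshold:
            flags[indicator] = rank
    # Order the flagged problems worst-first, mirroring a ranked-findings report.
    return sorted(flags.items(), key=lambda kv: kv[1], reverse=True)


# Invented community distributions and one developer's values.
community = {
    "defect_density": [10, 12, 15, 18, 20, 22, 25, 30, 35, 40],
    "estimation_error": [5, 8, 10, 12, 15, 18, 20, 25, 30, 50],
}
developer = {"defect_density": 38, "estimation_error": 11}
problems = flag_problems(community, developer)
print(problems)
```

    The ranked output tells the developer which indicators most deserve the deeper manual root-cause analysis that the experiment in the paper focuses on.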