    A Software Safety Risk Taxonomy for Use in Retrospective Safety Cases

    Safety standards contain technical and process-oriented safety requirements. The best time to include these requirements is early in the development lifecycle of the system. When software safety requirements are levied on a legacy system after the fact, a retrospective safety case will need to be constructed for the software in the system. This can be a difficult task because there may be few to no artifacts available to show compliance with the software safety requirements. The risks associated with not meeting safety requirements in a legacy safety-critical computer system must be addressed to give confidence for reuse. This paper introduces a proposal for a software safety risk taxonomy for legacy safety-critical computer systems, built by specializing the Software Engineering Institute's Software Development Risk Taxonomy with safety elements and attributes.

    Product Engineering Class in the Software Safety Risk Taxonomy for Building Safety-Critical Systems

    When software safety requirements are imposed on legacy safety-critical systems, retrospective safety cases need to be formulated as part of recertifying the systems for further use, and risks must be documented and managed to give confidence for reusing the systems. The SEI Software Development Risk Taxonomy [4] focuses on general software development issues; it does not, however, cover all the safety risks. The Software Safety Risk Taxonomy [8] was therefore developed to provide a construct for eliciting and categorizing software safety risks in a straightforward manner. In this paper, we present extended work on the taxonomy that incorporates the additional issues inherent in the development and maintenance of safety-critical systems with software. An instrument called a Software Safety Risk Taxonomy-Based Questionnaire (TBQ) is generated, containing questions addressing each safety attribute in the Software Safety Risk Taxonomy. Software safety risks are surfaced using the new TBQ and then analyzed. In this paper we give the definitions for the specialized Product Engineering class within the Software Safety Risk Taxonomy. At the end of the paper, we present the tool known as the Legacy Systems Risk Database Tool, which is used to collect and analyze the data required to show traceability to a particular safety standard.
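The taxonomy-to-questionnaire idea described above can be sketched as a small data structure: each taxonomy class groups attributes, and each attribute carries elicitation questions that are flattened into the TBQ. A minimal illustration in Python; the class, attribute, and question wording below is hypothetical, not the published taxonomy's actual content:

```python
from dataclasses import dataclass, field

@dataclass
class Attribute:
    """A leaf attribute of the taxonomy, with its elicitation questions."""
    name: str
    questions: list[str] = field(default_factory=list)

@dataclass
class TaxonomyClass:
    """A top-level class (e.g., Product Engineering) grouping attributes."""
    name: str
    attributes: list[Attribute] = field(default_factory=list)

def build_tbq(classes: list[TaxonomyClass]) -> list[tuple[str, str, str]]:
    """Flatten the taxonomy into (class, attribute, question) rows."""
    return [(c.name, a.name, q)
            for c in classes
            for a in c.attributes
            for q in a.questions]

# Illustrative content only -- not the published taxonomy's wording.
product_eng = TaxonomyClass("Product Engineering", [
    Attribute("Requirements",
              ["Are software safety requirements traceable to hazards?"]),
    Attribute("Code",
              ["Is defensive coding applied at safety-critical interfaces?"]),
])
tbq = build_tbq([product_eng])
for cls, attr, question in tbq:
    print(f"[{cls} / {attr}] {question}")
```

Each row of the flattened TBQ can then be posed to project staff and the answers stored against the originating class and attribute, which is what makes risks traceable back to the taxonomy.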

    Towards Knowledge Based Risk Management Approach in Software Projects

    All projects involve risk; a zero-risk project is not worth pursuing. Furthermore, due to the uniqueness of software projects, uncertainty about final results will always accompany software development. While risks cannot be removed from software development, software engineers should instead learn to manage them better (Arshad et al., 2009; Batista Webster et al., 2005; Gilliam, 2004). Risk management and planning require organizational experience, as they are strongly centred on experience and knowledge acquired in former projects. The more experienced the project manager, the better their ability to identify risks, estimate their likelihood and impact, and define an appropriate risk response plan. Risk knowledge therefore cannot remain an individual asset; it must be made available to the organization, which needs it to learn and to improve its performance in facing risks. If this does not occur, project managers can inadvertently repeat past mistakes simply because they do not know or do not remember the mitigation actions successfully applied in the past, or because they are unable to foresee the risks caused by certain project constraints and characteristics. Risk knowledge has to be packaged and stored throughout project execution for future reuse. Risk management methodologies are usually based on questionnaires for risk identification and templates for investigating critical issues. These artefacts are seldom related to each other, so there is usually no documented cause-effect relation between issues, risks, and mitigation actions. Furthermore, today's methodologies do not explicitly take into account the need to collect experience systematically in order to reuse it in future projects.
To address these problems, this work proposes a framework based on the Experience Factory Organization (EFO) model (Basili et al., 1994; Basili et al., 2007; Schneider & Hunnius, 2003) and the use of the Quality Improvement Paradigm (QIP) (Basili, 1989). The framework is also specialized within one of the largest firms in the current Italian software market; for privacy reasons, we will refer to it from here on as "FIRM". Finally, to evaluate the proposal quantitatively, two empirical investigations were carried out: a post-mortem analysis and a case study. Both were conducted in the FIRM context and involve legacy system transformation projects. The first involved 7 already executed projects, the second 5 ongoing projects. The research questions we ask are: Does the proposed knowledge-based framework lead to more effective risk management than is obtained without it? Does it lead to more precise risk management? The rest of the paper is organized as follows: section 2 provides a brief overview of the main research activities in the literature dealing with the same topics; section 3 presents the proposed framework, and section 4 its specialization in the FIRM context; section 5 describes the empirical studies we executed; results and discussions are presented in section 6. Finally, conclusions are drawn in section 7.
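The missing cause-effect link between issues, risks, and mitigation actions that the abstract criticizes can be made explicit by storing all three in a single reusable record. A minimal sketch, with hypothetical field names and an invented example entry; the exposure formula is the classic likelihood-times-impact product, not necessarily the one the paper uses:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskRecord:
    """Links an observed issue to the risk it signals and the mitigation
    applied, so the cause-effect chain can be reused in later projects."""
    issue: str          # what was observed (e.g., a questionnaire finding)
    risk: str           # the risk the issue indicates
    likelihood: float   # estimated probability, 0..1
    impact: int         # severity on an ordinal scale, e.g., 1..5
    mitigation: str     # action taken, packaged for reuse
    effective: bool     # outcome recorded post-mortem

def exposure(r: RiskRecord) -> float:
    """Classic risk exposure = likelihood * impact."""
    return r.likelihood * r.impact

# Hypothetical entry in an organization-wide experience base.
rec = RiskRecord(
    issue="Legacy module has no regression test suite",
    risk="Transformation introduces undetected behavioral changes",
    likelihood=0.6, impact=4,
    mitigation="Write characterization tests before refactoring",
    effective=True,
)
print(f"exposure={exposure(rec):.1f}")
```

Ranking stored records by exposure, and filtering on `effective`, is one simple way an experience base could surface past mitigations for a new project with similar issues.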

    MODELO BIDIMENSIONAL DE RIESGOS DEL MANTENIMIENTO DE SISTEMAS INTEGRADOS DE GESTIÓN (ERP)

    The adoption and spread of information and communication technologies (ICT) in the business world is proceeding at great speed. Alongside the most innovative ICT and computer systems, ERP systems have emerged and developed, and they have been implemented by companies all over the world. After implementation comes maintenance. For these projects to succeed, the risks affecting them must be managed: poor management of these risks frequently causes system failures, forcing companies to absorb heavy losses. To manage risks properly, practitioners must begin by identifying and classifying them. To support this work, we carried out a formal study of the risks affecting ERP maintenance. The research concludes with a two-dimensional model composed of the risks identified in the literature.

    Processo de inventariação de software para um órgão público federal brasileiro

    Undergraduate monograph, Universidade de Brasília, Faculdade UnB Gama, 2015. The difficulty of maintaining software is related, among other factors, to missing, incomplete, or outdated documentation that could help the maintainer understand the software being maintained. In the Brazilian federal public sector, many legacy systems need maintenance while their documentation is nonexistent, outdated, or incomplete. Taking an inventory of existing software is one of the first procedures prescribed by the ISO/IEC 14764 standard on software maintenance. The goal of this work was to propose an inventory process for in-development and legacy software at a Brazilian federal public agency. The methodology was descriptive, applying the case-study technique: a Brazilian federal agency was selected for data collection and analysis. As results, three inventory processes for configuration items were proposed and modeled, including the updating of those items and the auditing of the updates, within the agency's IT service management context. The work showed that inventorying must be planned for in the development phase as well as in the maintenance phase, and that mechanisms are needed to ensure that documentation is both produced and kept up to date.

    Uso do Kanban em um processo de gestão de demandas de manutenção de software por terceiros para um órgão público federal brasileiro

    Undergraduate monograph, Universidade de Brasília, Faculdade UnB Gama, 2015. The publication of the Agile Manifesto in 2001 enabled the emergence of new management and development methodologies that are less rigid and more dynamic, such as the Kanban and Scrum frameworks. Both take as principles the continuous delivery of product and the client's constant participation in the process. In need of this change, the software development industry, including federal government agencies, was quick to evaluate the new proposals. Kanban and Scrum can and should be adapted to an organization's specific needs. The software maintenance management process differs from the development process; among its particularities is the need for greater flexibility to absorb frequent changes, both in the scope of a request and in the order of implementation. The Kanban framework has been used to support the management process, optimizing the workflow and allowing iterations without fixed time limits, so that demands can change according to the institution's acceptance. The main objective of this work was to support the definition of a process for managing third-party software maintenance demands at a Brazilian federal government agency, using the Kanban framework. The research was descriptive. As for procedures, the literature on companies that have implemented Kanban was surveyed, and two public institutions (TCU and INEP) that use the framework were studied. To apply Kanban at Ministry X, it was first necessary to characterize the ministry and only then carry out the proposed implementation activities. In the end, it was possible to propose a model of the maintenance process and a kanban board adapted to the Ministry's needs. Validating these and completing the adaptation of the framework are left to future work.
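The core Kanban mechanics the abstract relies on, pull-based columns constrained by work-in-progress (WIP) limits, can be sketched as follows. Column names and limits here are illustrative, not the Ministry's actual board:

```python
class KanbanBoard:
    """Minimal kanban board with per-column work-in-progress (WIP) limits."""

    def __init__(self, columns: dict[str, int]):
        self.limits = columns                        # column -> WIP limit
        self.cards = {name: [] for name in columns}  # column -> demand IDs

    def add(self, column: str, demand: str) -> bool:
        """Pull a demand into a column; refuse if the WIP limit is reached."""
        if len(self.cards[column]) >= self.limits[column]:
            return False
        self.cards[column].append(demand)
        return True

    def move(self, demand: str, src: str, dst: str) -> bool:
        """Move a demand downstream, respecting the target's WIP limit."""
        if demand not in self.cards[src]:
            return False
        if self.add(dst, demand):
            self.cards[src].remove(demand)
            return True
        return False

# Hypothetical columns for a maintenance-demand workflow.
board = KanbanBoard({"Backlog": 10, "Analysis": 3, "Development": 2, "Done": 99})
board.add("Backlog", "DEM-001")
board.add("Backlog", "DEM-002")
assert board.move("DEM-001", "Backlog", "Analysis")
```

The WIP limit is what distinguishes this from a plain to-do list: a blocked `move` signals that downstream capacity is exhausted, which is exactly the flow problem the abstract says the maintenance process needs to expose.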

    A conceptual framework and a risk management approach for interoperability between geospatial datacubes

    Today, we observe wide use of geospatial databases implemented in many forms (e.g., transactional centralized systems, distributed databases, multidimensional datacubes). Among these, the multidimensional datacube is the most appropriate for supporting interactive analysis and guiding an organization's strategic decisions, especially when different epochs and levels of information granularity are involved. However, one may need to use several geospatial multidimensional datacubes, which may be semantically heterogeneous and may differ in how appropriate they are to the context of use. Overcoming the semantic problems related to this heterogeneity and to differences in appropriateness to the context of use, in a manner transparent to users, has been the principal aim of interoperability for the last fifteen years. In spite of successful initiatives, however, today's solutions have evolved in a non-systematic way, and no solution has been found that addresses the semantic problems specific to interoperability between geospatial datacubes. In this thesis, we suppose that it is possible to define an approach that addresses these semantic problems to support interoperability between geospatial datacubes. We first describe interoperability between geospatial datacubes. Then, we define and categorize the semantic heterogeneity problems that may occur during the interoperability process of different geospatial datacubes. To resolve this heterogeneity, we propose a conceptual framework essentially based on human communication: software agents representing the geospatial datacubes involved in the interoperability process communicate with each other, exchanging information about the content of the datacubes. Then, to help the agents make appropriate decisions during the interoperability process, we evaluate a set of indicators of the external quality (fitness for use) of geospatial datacube schemas and of the production context (e.g., metadata). Finally, we implement the proposed approach to show its feasibility.
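One way the fitness-for-use indicators could support an agent's decision is by aggregating them into a single weighted score per candidate datacube. A sketch under assumed indicator names and weights (the thesis's actual indicators and decision procedure may differ):

```python
def fitness_for_use(indicators: dict[str, float],
                    weights: dict[str, float]) -> float:
    """Aggregate external-quality indicators (each normalized to 0..1) into
    a weighted score in 0..1 that an agent could use to judge whether a
    datacube fits the current context of use."""
    total = sum(weights.values())
    return sum(indicators[k] * w for k, w in weights.items()) / total

# Hypothetical indicators derived from schema and production metadata.
cube_a = {"completeness": 0.9, "temporal_coverage": 0.7, "semantic_match": 0.8}
weights = {"completeness": 2.0, "temporal_coverage": 1.0, "semantic_match": 3.0}
score = fitness_for_use(cube_a, weights)
print(f"{score:.3f}")
```

Comparing such scores across candidate datacubes gives the agents a simple, explainable basis for choosing the source most appropriate to the context of use.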

    Aide à l'Évolution Logicielle dans les Organisations

    Software systems are now so intrinsically part of our lives that we no longer see them. They run our phones, our cars, our leisure, our banks, our shops, our cities. This places a significant burden on the software industry: all these systems need to be updated, corrected, and enhanced as users and consumers express new needs. As a result, most software engineering activity can be classified as software maintenance, "the totality of activities required to provide cost-effective support to a software system". In an ecosystem where the processing power of computers, and many other relevant metrics such as disk capacity or network bandwidth, doubles every 18 months ("Moore's Law"), technologies evolve at a fast pace. In this ecosystem, software maintenance suffers from having to address the past (past languages, existing systems, old technologies). It is often ill-perceived and treated as a punishment. Because of this, solutions and tools for software maintenance have long lagged far behind those for new software development. For example, the antique approach of manually inserting traces in the source code to understand the execution path is still a very valid one.
All my research activity has focused on helping people do software maintenance in better conditions or more efficiently. A holistic approach to the problem must consider the software that has to be maintained, the people doing it, and the organization in which and for which it is done. I therefore studied different facets of the problem, presented in three parts in this document. Software: the source code is the centerpiece of the maintenance activity. Whatever the task (e.g., enhancement or bug correction), it typically comes down to understanding the current source code and finding out what to change and/or add to make it behave as expected. I studied how to monitor the evolution of the source code, how to prevent its decay, and how to remedy bad situations. People: one of the fundamental assets of people dealing with maintenance is the knowledge they have of computer science (programming techniques), of the application domain, and of the software itself. It is highly significant that 40% to 60% of software maintenance time is spent reading the code to understand what it does, how it does it, and how it can be changed. Organization: organizations may have a strong impact on the way activities such as software maintenance are performed by their individual members. The support offered within the organization, the constraints it imposes, and the cultural environment all affect how easy or difficult the tasks are and therefore how well or badly they can be done. I studied some of the software maintenance processes that organizations use.
In this document, the various research topics I addressed are organized in a logical way that does not always respect the chronological order of events. I wished to highlight not only the results of the research, through the publications that attest to them, but also the collaborations that made them possible, with students or fellow researchers. For each result presented here, I tried to summarize the discussion of the previous state of the art and the result itself as much as possible: first because more details can easily be found in the referenced publications, but also because some of this research is quite old and has sometimes fallen into the realm of "common sense".
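The "antique approach of manually inserting traces" mentioned above is still recognizable in everyday practice. A minimal Python illustration on a hypothetical function, using the standard library's logging module rather than bare prints so the traces can be switched off without editing the code again:

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("trace")

def apply_discount(price: float, customer_type: str) -> float:
    # Hand-inserted traces: the technique the text describes, used to
    # reconstruct which execution path was actually taken.
    log.debug("apply_discount(price=%s, customer_type=%s)", price, customer_type)
    if customer_type == "vip":
        log.debug("taking VIP branch")
        return price * 0.8
    log.debug("taking default branch")
    return price

result = apply_discount(100.0, "vip")
```

Setting the logger level to `WARNING` silences all the traces at once, which is the main practical advantage over scattered `print` statements.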