
    Architectural mismatch tolerance

    The integrity of complex software systems built from existing components is becoming more dependent on the integrity of the mechanisms used to interconnect these components and, in particular, on the ability of these mechanisms to cope with architectural mismatches that might exist between components. There is a need to detect and handle (i.e. to tolerate) architectural mismatches at runtime, because in the majority of practical situations it is impossible to localize and correct all such mismatches during development. When developing complex software systems, the problem is not only to identify the appropriate components, but also to make sure that these components are interconnected in a way that allows mismatches to be tolerated. The resulting architectural solution should be a system based on the existing components, which are independent in their nature but able to interact in well-understood ways. To find such a solution we apply general principles of fault tolerance to dealing with architectural mismatches.
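    To make the idea concrete, here is a minimal sketch (ours, not the paper's) of a protective connector that detects and tolerates a mismatch at runtime; the components, the unit mismatch, and the plausibility threshold are all invented for illustration.

        # Hypothetical sketch: a protective wrapper ("connector") that tolerates an
        # architectural mismatch at runtime, in the spirit of classic fault-tolerance
        # patterns: detection, containment, recovery. Everything here is invented.

        class LegacyAltimeter:
            """Existing component: reports altitude in feet (undocumented assumption)."""
            def read(self) -> float:
                return 32808.4  # feet

        class AltimeterConnector:
            """Connector that detects and tolerates the unit mismatch at runtime."""
            FEET_PER_METRE = 3.28084
            PLAUSIBLE_MAX_METRES = 20_000  # detection threshold: nothing flies higher

            def __init__(self, component: LegacyAltimeter):
                self._component = component

            def altitude_metres(self) -> float:
                value = self._component.read()
                if value > self.PLAUSIBLE_MAX_METRES:
                    # Detection: value is implausible in metres, so assume feet and
                    # recover by converting, instead of propagating the mismatch.
                    return value / self.FEET_PER_METRE
                return value

        altimeter = AltimeterConnector(LegacyAltimeter())
        print(f"altitude: {altimeter.altitude_metres():.1f} m")  # -> 10000.0 m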

    Introducing the STAMP method in road tunnel safety assessment

    After the severe accidents in European road tunnels over the past decade, many risk assessment methods have been proposed worldwide, most of them based on Quantitative Risk Assessment (QRA). Although QRAs are helpful for addressing the physical aspects and facilities of tunnels, current approaches in the road tunnel field have limited ability to model organizational aspects, software behavior, and the adaptation of the tunnel system over time. This paper reviews these limitations and highlights the need to enhance the safety assessment process of these critical infrastructures with a complementary approach that links the organizational factors to the operational and technical issues, analyzes software behavior, and models the dynamics of the tunnel system. To achieve this objective, the paper examines the scope for introducing a safety assessment method that is based on the systems thinking paradigm and draws upon the STAMP model. The proposed method is demonstrated through a case study of a tunnel ventilation system, and the results show that it has the potential to identify scenarios that encompass both the technical system and the organizational structure. However, since the method does not provide quantitative estimations of risk, it is recommended as a complement to traditional risk assessments rather than as an alternative.
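    As a rough illustration of the STAMP/STPA style of analysis the paper builds on, the sketch below enumerates candidate unsafe control actions for a ventilation controller. The four categories are the standard STPA template; the control actions are invented and do not reproduce the paper's case study.

        # Illustrative sketch (not from the paper): enumerating unsafe control
        # actions (UCAs) for a tunnel ventilation controller, using the four
        # standard STPA/STAMP categories. The control actions are invented.

        from itertools import product

        control_actions = ["start extraction fans", "reverse airflow direction"]
        uca_categories = [
            "not provided when needed",
            "provided when unsafe",
            "provided too early / too late",
            "stopped too soon / applied too long",
        ]

        # Each (action, category) pair is a candidate scenario to assess against
        # system hazards such as "smoke spreads toward an occupied tunnel section".
        for action, category in product(control_actions, uca_categories):
            print(f"UCA candidate: '{action}' {category}")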

    Software reliability and dependability: a roadmap

    This roadmap identifies several priorities: shifting the focus from software reliability to user-centred measures of dependability in complete software-based systems; influencing design practice to facilitate dependability assessment; propagating awareness of dependability issues and of existing, useful methods; injecting some rigour into the use of process-related evidence for dependability assessment; and better understanding issues of diversity and variation as drivers of dependability. Bev Littlewood is founder-Director of the Centre for Software Reliability and Professor of Software Engineering at City University, London. Prof Littlewood has worked for many years on problems associated with the modelling and evaluation of the dependability of software-based systems; he has published many papers in international journals and conference proceedings and has edited several books. Much of this work has been carried out in collaborative projects, including the successful EC-funded projects SHIP, PDCS, PDCS2 and DeVa.
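    As a minimal illustration of the kind of quantitative dependability assessment at issue here, the sketch below fits a deliberately naive constant-failure-rate model to invented inter-failure data; real reliability-growth models of the sort Littlewood pioneered relax the constant-rate assumption as faults are fixed.

        # Minimal sketch, not from the roadmap: estimating a failure rate from
        # observed inter-failure times under a constant-rate (exponential)
        # assumption. The data are invented.

        import math

        inter_failure_hours = [12.0, 30.0, 45.0, 80.0, 160.0]  # hypothetical test data

        mttf = sum(inter_failure_hours) / len(inter_failure_hours)  # MLE for exponential
        rate = 1.0 / mttf

        def reliability(t_hours: float) -> float:
            """Probability of surviving t hours without failure: R(t) = exp(-lambda*t)."""
            return math.exp(-rate * t_hours)

        print(f"estimated MTTF: {mttf:.1f} h, failure rate: {rate:.4f} /h")
        print(f"P(no failure in 24 h): {reliability(24.0):.3f}")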

    The role of IT/IS in combating fraud in the payment card industry

    The vast growth of the payment card industry (PCI) in the last 50 years has placed the industry at the centre of attention, not only because of this growth, but also because of the increase in fraudulent transactions. Research in this domain has produced statistical reports on fraud detection and ways of protection. However, the relevant body of research is quite partial and covers only specific topics. For instance, reports on losses due to fraudulent card usage usually neither present the measures taken to combat fraud nor explain how fraud happens. This can be confusing and may suggest that card usage is more negative than positive. This paper is intended to provide consolidated and organized information on the efforts made to protect businesses from fraud. We assess the effectiveness and efficiency of current fraud-combating techniques and show that organized worldwide efforts are needed to address the larger part of the problem. The research questions addressed in the paper are: (1) how can IT/IS help combat fraud in the PCI, and (2) is the implemented IT/IS effective and efficient enough to bring progress in combating fraud? Our research methodology is based on a case study conducted in a Macedonian bank. The research is exploratory and mostly qualitative in nature, although some quantitative aspects are included. The findings indicate that fraud can take many forms, and the different forms of data theft were classified according to the fraudulent forms in which they appear. We show that implementing fraud reduction efforts yields multiple benefits. Results show that only a very small bank would experience losses from the fixed expenditures of implementing fraud reduction IT/IS; medium-sized and large banks should see no problems arising from those expenditures. Based on the empirical data and the presented facts, we conclude that fraud reduction IT/IS has a positive effect on all sides of the payment process and fulfills the expectations of all stakeholders.
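    As a simplified illustration of how IT/IS can flag suspicious card transactions, the sketch below applies two invented screening rules (an amount-outlier test and a velocity check); production systems are far more elaborate, and none of the fields, thresholds, or rules here come from the paper.

        # Hypothetical sketch of a rule-based fraud screen of the kind banks layer
        # into payment authorization. All data, fields, and thresholds are invented.

        from statistics import mean, stdev

        history = [42.0, 18.5, 60.0, 35.0, 51.0]  # past amounts for one card (EUR)

        def flag_transaction(amount: float, country: str, last_tx_minutes_ago: float) -> list[str]:
            reasons = []
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and (amount - mu) / sigma > 3.0:
                reasons.append("amount is a >3-sigma outlier for this card")
            if country != "MK" and last_tx_minutes_ago < 30:
                reasons.append("implausible velocity: foreign use minutes after domestic use")
            return reasons

        # A large foreign charge minutes after a domestic one trips both rules.
        print(flag_transaction(980.0, "BR", 12.0))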

    Prognostic Launch Vehicle Probability of Failure Assessment Methodology for Conceptual Systems Predicated on Human Causal Factors

    Lessons learned from past failures of launch vehicle developments and operations were used to create a new method to predict the probability of failure of conceptual systems. Existing methods such as Probabilistic Risk Assessments and Human Risk Assessments were considered but found to be too cumbersome for this type of system-wide application to yet-to-be-flown vehicles. The basis for this methodology was historical databases of past failures, from which it was determined that various faulty human interactions, rather than deficient component reliabilities evaluated through statistical analysis, were the predominant root causes of failure. The methodology contains an expert-scoring part that can be used in either a qualitative or a quantitative mode, and depending on the mode it produces one of two products: a numerical score of the probability of failure, or guidance to program management on critical areas in need of increased focus to improve the probability of success. To evaluate the effectiveness of this new method, data from a concluded vehicle program (the USAF's Titan IV with the Centaur G-Prime upper stage) were used as a test case. Although the predicted and actual probabilities of failure were found to be in reasonable agreement (4.46% vs. 6.67%, respectively), the underlying sub-root-cause scoring had significant disparities, attributable to significant organizational changes and acquisitions. Recommendations are made for future applications of this method to ongoing launch vehicle development programs.
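    As a rough illustration of the expert-scoring idea in its quantitative mode, the sketch below maps weighted expert ratings of human-causal-factor categories onto an assumed historical base rate; the categories, weights, and base rate are all invented and do not reproduce the paper's actual scoring model.

        # Hypothetical sketch: aggregating expert scores on human-causal-factor
        # categories into a probability-of-failure estimate. All numbers invented.

        base_rate = 0.06  # assumed historical fraction of launches lost to human causes

        # weight = assumed relative contribution of each category to past failures;
        # score  = expert rating of the program, 0.0 (excellent) .. 1.0 (deficient)
        categories = {
            "design process discipline":       (0.30, 0.2),
            "test and verification coverage":  (0.25, 0.4),
            "organizational communication":    (0.25, 0.7),
            "operations and procedures":       (0.20, 0.3),
        }

        weighted = sum(w * s for w, s in categories.values())
        # Scale the base rate by how the program compares to an assumed
        # industry-average score of 0.5.
        p_failure = base_rate * (weighted / 0.5)
        print(f"estimated P(failure from human causes): {p_failure:.2%}")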