    Evaluating Trustworthiness of Software Component

    This paper discusses the concept of software component trustworthiness, which is a primary consideration for software developers implementing component-based software development methods. The opening section explains the concept of software reuse and its relation to software component trustworthiness. The core of the paper then discusses software component testing methods in detail, along with four methods that can be used to evaluate the trustworthiness level of software components. The paper closes with an overview of the software component selection process in the industrial domain.

    How to Avoid Mistakes in Software Development for Unmanned Vehicles

    The purpose of this paper is to propose a design and development methodology for robust unmanned vehicle (UV) software that minimizes the risk of software failure in both experimental and final solutions. The most common dangers in UV software development were determined, classified, and analyzed based on literature studies and the authors' own experience in software development and analysis of open-source code. In conclusion, "good practices" and failure countermeasures are proposed.

    A controlled experiment for the empirical evaluation of safety analysis techniques for safety-critical software

    Context: Today's safety-critical systems are increasingly reliant on software, which has become responsible for most of the critical functions of these systems. Many different safety analysis techniques have been developed to identify system hazards; FTA and FMEA are the most commonly used by safety analysts. Recently, STPA has been proposed with the goal of better coping with complex systems, including software. Objective: This research aimed at comparing these three safety analysis techniques quantitatively with regard to their effectiveness, applicability, understandability, ease of use, and efficiency in identifying software safety requirements at the system level. Method: We conducted a controlled experiment with 21 master's and bachelor's students applying the three techniques to three safety-critical systems: train door control, anti-lock braking, and traffic collision avoidance. Results: The results showed no statistically significant difference between the techniques in terms of applicability, understandability, and ease of use, but a significant difference in terms of effectiveness and efficiency. Conclusion: We conclude that STPA seems to be an effective method to identify software safety requirements at the system level. In particular, STPA covers more distinct software safety requirements than the traditional techniques FTA and FMEA, but it requires more time when carried out by safety analysts with little or no prior experience. Comment: 10 pages, 1 figure. In Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering (EASE '15). ACM, 201
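    The per-technique effectiveness comparison described in this abstract can be illustrated with a minimal Python sketch. The participant counts and the reference-set size below are hypothetical, invented for illustration only; the study's actual data and metric definitions are not reproduced here. Effectiveness is taken, as an assumption, to be the fraction of a reference set of safety requirements that a participant identified.

```python
# Hypothetical per-participant counts of correctly identified software safety
# requirements for each technique, out of a reference set of 20 requirements.
# These numbers are illustrative only, NOT the study's data.
results = {
    "FTA":  [8, 9, 7, 10, 8, 9, 8],
    "FMEA": [9, 8, 10, 9, 7, 9, 8],
    "STPA": [13, 14, 12, 15, 13, 14, 13],
}

REFERENCE_TOTAL = 20  # size of the (hypothetical) reference requirement set


def mean_effectiveness(scores, total=REFERENCE_TOTAL):
    """Average fraction of the reference requirements identified."""
    return sum(scores) / len(scores) / total


for technique, scores in results.items():
    print(f"{technique}: {mean_effectiveness(scores):.2%}")
```

A real replication would follow this descriptive comparison with a significance test across the three groups, as the study does; the sketch stops at the mean effectiveness per technique.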

    A Domestic Case Studies Probability to Overcome Software Failures

    Computers are the pervasive technology of our time. As computers become critically tied to human life, it also becomes more important that our interactions with them are under control. They are no longer a novelty but are integrated into the fabric of our world, performing both high- and low-level tasks. Computers may be used to eliminate heavy, redundant work and much more: sophisticated machines have been deployed to perform remote surgery or to detect subterranean landmines in repopulated civilian areas. The increasing importance of computers in our lives makes it essential that the design of computer systems incorporates techniques that can ensure reliability, safety, and security. This paper examines technological mishaps involving the use of computers. The review includes notorious software bugs that have affected finance, communication, transit, defense, health and medicine, and other systems or industries. The sequence and etiology of these accidents are discussed, as well as how catastrophes may be avoided in the future through lessons and practices based on research.

    Case Studies of Most Common and Severe Types of Software System Failure

    Abstract: Today software systems are an integral part of every business model, be it core product manufacturing, banking, healthcare, insurance, aviation, hospitality, social networking, shopping, e-commerce, education, or any other domain. If any business is to be leveraged and simplified, software has to be integrated with the mainstream business of the organization. Designing and developing any software system requires substantial capital, time, intellectual effort, domain expertise, tools, and infrastructure. Although the software industry has matured quite a lot in the past decade, the percentage of software failures has also increased, leading to losses of capital, time, goodwill, and information; in some cases, severe failures of critical applications have also led to the loss of lives. Software can fail due to faults injected at various stages of the software or product development life cycle, from project initiation to deployment. This paper describes case studies of the most common and severe types of software system failures in the software industry.

    Accountability in Computer Systems

    The article of record as published may be found at http://dx.doi.org/10.1093/oxfordhb/9780190067397.013.10
    This chapter addresses the relationship between AI systems and the concept of accountability. To understand accountability in the context of AI systems, one must begin by examining the various ways the term is used and the variety of concepts to which it is meant to refer. Accountability is often associated with transparency, the principle that systems and processes should be accessible to those affected through an understanding of their structure or function. For a computer system, this often means disclosure about the system's existence, nature, and scope; scrutiny of its underlying data and reasoning approaches; and connection of the operative rules implemented by the system to the governing norms of its context. Transparency is a useful tool in the governance of computer systems, but only insofar as it serves accountability. There are other mechanisms available for building computer systems that support accountability of their creators and operators. Ultimately, accountability requires establishing answerability relationships that serve the interests of those affected by AI systems.

    Prognostic Launch Vehicle Probability of Failure Assessment Methodology for Conceptual Systems Predicated on Human Causal Factors

    Lessons learned from past failures of launch vehicle developments and operations were used to create a new method to predict the probability of failure of conceptual systems. Existing methods such as Probabilistic Risk Assessments and Human Risk Assessments were considered but found too cumbersome for this type of system-wide application to yet-to-be-flown vehicles. The basis for this methodology was historic databases of past failures, from which it was determined that various faulty human interactions, rather than deficient component reliabilities evaluated through statistical analysis, were the predominant root causes of failure. The methodology contains an expert-scoring part that can be used in either a qualitative or a quantitative mode, and it produces two products: a numerical score of the probability of failure, and guidance to program management on critical areas in need of increased focus to improve the probability of success. To evaluate the effectiveness of this new method, data from a concluded vehicle program (USAF's Titan IV with the Centaur G-Prime upper stage) was used as a test case. Although the theoretical vs. actual probability of failure was found to be in reasonable agreement (4.46% vs. 6.67%, respectively), the underlying sub-root-cause scoring had significant disparities attributable to major organizational changes and acquisitions. Recommendations are made for future applications of this method to ongoing launch vehicle development programs.
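    The expert-scoring idea in this abstract, i.e. aggregating scores on human causal factors into a probability-of-failure estimate, can be sketched as follows. The factor names, weights, and the linear score-to-probability mapping below are all assumptions made for illustration; the paper's actual factor taxonomy and calibration are not given here.

```python
# Hypothetical human-causal-factor categories with illustrative weights.
# Both the categories and the weights are assumptions, not the method's
# actual factors or calibration.
FACTOR_WEIGHTS = {
    "design_process_maturity": 0.30,
    "organizational_stability": 0.25,
    "test_program_rigor": 0.25,
    "operations_discipline": 0.20,
}


def probability_of_failure(scores, baseline=0.02, scale=0.10):
    """Map weighted expert scores (0 = best practice, 1 = worst) onto a
    failure probability.

    The linear mapping baseline + scale * weighted_score is a stand-in for
    whatever calibration a real assessment would derive from historic
    failure databases.
    """
    weighted = sum(FACTOR_WEIGHTS[f] * s for f, s in scores.items())
    return baseline + scale * weighted


# Illustrative scoring of a program with recent organizational churn.
scores = {
    "design_process_maturity": 0.3,
    "organizational_stability": 0.6,  # e.g. recent acquisitions/reorganizations
    "test_program_rigor": 0.2,
    "operations_discipline": 0.4,
}
print(f"Estimated probability of failure: {probability_of_failure(scores):.2%}")
```

In the qualitative mode described in the abstract, the same per-factor scores would be read directly as guidance on where management attention is most needed, rather than being collapsed into a single number.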