
    Virtual Machining

    Virtual machining systems apply computers and various types of software to manufacturing and production in order to simulate and model the behavior and errors of a real machining environment in a virtual reality system. This provides a useful means of manufacturing products without the need for physical testing on the shop floor, which decreases the time and cost of part production.

    Review of Software Fault-Tolerance Methods for Reliability Enhancement of Real-Time Software Systems

    Real-time systems are systems that must guarantee a correct response within a strict time constraint, or deadline. Failures can arise from functional errors as well as from timing bugs; hence it is necessary to ensure the temporal correctness of programs used in real-time applications in addition to their functional correctness. Although several studies have addressed fault tolerance in the presence of various functional and operational errors, many do not address timing bugs, an important issue in real-time systems, where it is often necessary for a given service to be delivered within a specified deadline. This paper therefore reviews the existing approaches from the perspective of real-time systems and analyses their shortcomings, in order to present a versatile and cost-effective approach to providing fault tolerance in the presence of timing bugs and thereby enhance the reliability of real-time software applications.
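
    To illustrate the timing-fault side of this problem, the sketch below shows one common building block, deadline monitoring with a fallback result. It is a minimal Python sketch under assumed names (run_with_deadline, the 5 ms budget, and the fallback policy are illustrative, not taken from the paper); a real real-time system would preempt the late task rather than merely detect the miss after the fact.

```python
import time

def run_with_deadline(task, deadline_s, fallback):
    """Run task(); if it overruns its deadline, treat the (possibly
    functionally correct) result as a timing fault and fall back."""
    start = time.monotonic()
    result = task()
    elapsed = time.monotonic() - start
    if elapsed > deadline_s:
        # Timing bug manifested: the value arrived too late to be useful.
        return fallback()
    return result

# Illustrative use: a control step with a 5 ms budget that reuses the
# last known-good value when the fresh computation misses its deadline.
last_good = 0.0
value = run_with_deadline(lambda: sum(x * x for x in range(100_000)),
                          deadline_s=0.005,
                          fallback=lambda: last_good)
```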

    Efficient approaches to agile cost estimation in software industries: a project-based case study

    Agile was invented to improve on and overcome the deficiencies of traditional software development. The agile model is now used very widely in software development because of the support it gives developers and clients: agile methodology increases developer-client interaction and helps keep software products free of defects. The study shows that agile software development is an efficient and effective strategy that easily accommodates user changes, but that it is not free of errors or shortcomings. It also shows that COCOMO and Planning Poker are well-known cost estimation procedures, but that neither is well suited to agile development. We conduct a study on real projects from multinational software companies, using different estimation approaches to estimate each project's cost and time, and we explain these projects thoroughly together with the limitations of the techniques. The study demonstrates that traditional and modern estimation approaches alike still fall short of accurate project estimation.
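
    For concreteness, the basic COCOMO model named in the abstract estimates effort as a power law of code size, effort = a * KLOC^b. A minimal sketch follows, using Boehm's published coefficients for an "organic" (small, familiar) project; the 32 KLOC figure is purely illustrative and not from the study.

```python
def cocomo_effort(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Basic COCOMO effort in person-months: a * KLOC**b.

    Defaults are Boehm's coefficients for an organic-mode project.
    Agile teams would instead size work in story points, e.g. via
    Planning Poker, which this size-based formula does not capture.
    """
    return a * kloc ** b

print(f"{cocomo_effort(32.0):.1f} person-months")  # ~91 person-months
```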

    Increasing Software Reliability using Mutation Testing and Machine Learning

    Mutation testing is a type of software testing, proposed in the 1970s, in which program statements are deliberately changed to introduce simple errors so that test cases can be validated by checking whether they detect those errors. The goal of mutation testing is to reduce complex program errors by preventing the related simple errors. Test cases are executed against the mutant code; if at least one test fails, the mutant is detected, giving confidence that the program is correct. A major issue with this type of testing is that generating and testing all possible mutations of a complex program is computationally intensive. This dissertation used machine learning to select mutation operators, reducing the computational cost of testing and improving test suite effectiveness. The goals were to produce mutations that were more resistant to test cases, improve test case evaluation, validate and then improve the test suite's effectiveness, realize cost reductions by generating fewer mutations for testing, and improve software reliability by detecting more errors. To accomplish these goals, experiments were conducted on sample programs to determine how well the reinforcement learning based algorithm performed with one live mutation, multiple live mutations, and no live mutations. The experiments, measured by mutation score, were used to update the algorithm and improve the accuracy of its predictions, and its performance was then evaluated on multi-processor computers. One key result of this research was the development of a reinforcement learning algorithm that identifies mutation operator combinations resulting in live mutants. During experimentation, the algorithm identified the optimal mutation operator selections for various programs and test suite scenarios, and demonstrated that, with parallel processing across multiple cores, reinforcement learning for mutation operator selection is practical. With reinforcement learning, the number of mutation operators used was reduced by 50-100%. In conclusion, these improvements created a 'live' mutation testing process that evaluates mutation operators and generates mutants for real-time mutation testing while dynamically prioritizing mutation operator recommendations, enhancing the software developer's ability to improve testing processes. The contributions of this research support the shift-left testing approach, in which testing is performed earlier in the software development cycle, when error resolution is less costly.
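
    The dissertation's exact algorithm is not given in the abstract, but the idea of learning which operators yield live (undetected) mutants can be sketched as a simple epsilon-greedy bandit. The operator labels and reward scheme below are illustrative assumptions, not the dissertation's implementation.

```python
import random

# Hypothetical operator labels (arithmetic, relational, logical,
# statement deletion); each would transform the program under test.
OPERATORS = ["AOR", "ROR", "LCR", "SDL"]

q = {op: 0.0 for op in OPERATORS}  # estimated survival rate per operator
n = {op: 0 for op in OPERATORS}    # trial counts

def pick_operator(eps: float = 0.1) -> str:
    """Mostly exploit the operator most likely to produce a live mutant,
    occasionally explore the others."""
    if random.random() < eps:
        return random.choice(OPERATORS)
    return max(OPERATORS, key=q.get)

def record_outcome(op: str, mutant_survived: bool) -> None:
    """Incremental-mean update: reward 1 when the test suite missed
    the mutant (a live mutant), 0 when the mutant was killed."""
    n[op] += 1
    reward = 1.0 if mutant_survived else 0.0
    q[op] += (reward - q[op]) / n[op]
```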

    Accelerated artificial neural networks on FPGA for fault detection in automotive systems

    Modern vehicles are complex distributed systems with critical real-time electronic controls that have progressively replaced their mechanical/hydraulic counterparts for performance and cost benefits. The harsh and varying vehicular environment can induce multiple errors in the computational/communication path, with temporary or permanent effects, demanding the use of fault-tolerant schemes. Constraints on location, weight, and cost often prevent the use of physical redundancy for critical systems, such as within an internal combustion engine. Alternatively, algorithmic techniques like artificial neural networks (ANNs) can be used to detect errors and apply corrective measures in computation. Though the adaptability of ANNs is an advantage for fault-detection and fault-tolerance measures for critical sensors, implementations on automotive-grade processors may not meet the required hard deadlines and accuracy simultaneously. In this work, we present an ANN-based fault-tolerance system built on hybrid FPGAs and evaluate it using a diesel engine case study. We show that the hybrid platform outperforms an optimised software implementation on an automotive-grade ARM Cortex M4 processor in terms of latency and power consumption, while also providing better consolidation.
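
    One common shape for such a scheme (a guess at the general approach, not the paper's actual network) is model-based detection: a small ANN predicts a sensor's value from correlated sensors, and a fault is flagged when the measurement deviates from the prediction by more than a bound. A toy NumPy forward pass, with illustrative random weights standing in for trained ones:

```python
import numpy as np

# Toy 3-4-1 feedforward network; in practice the weights would come
# from offline training on healthy engine data, not random values.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def predict(x: np.ndarray) -> float:
    h = np.tanh(W1 @ x + b1)       # hidden layer
    return (W2 @ h + b2).item()    # predicted sensor value

def is_faulty(x: np.ndarray, measured: float, threshold: float = 0.5) -> bool:
    # Residual check: flag a fault when model and measurement disagree.
    return abs(predict(x) - measured) > threshold
```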

    Groundwork for the Development of Testing Plans for Concurrent Software

    While multi-threading has become commonplace in many application domains (e.g., embedded systems, digital signal processing (DSP), networks, IP services, and graphics), multi-threaded code often requires complex coordination of threads. As a result, multi-threaded implementations are prone to subtle bugs that are difficult and time-consuming to locate. Moreover, current testing techniques that address multi-threading are generally costly, while their effectiveness is unknown. The development of cost-effective testing plans requires an in-depth study of the nature, frequency, and cost of concurrency errors in the context of real-world applications. The full paper will lay the groundwork for such a study, with the purpose of informing the creation of a parametric cost model for testing multi-threaded software; the current version provides motivation for the study, an outline of the full paper, and a bibliography of related work.
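
    A concrete instance of the kind of subtle defect at issue is the lost-update race below: the read-modify-write on the shared counter is not atomic, so whether the bug manifests depends on the thread interleaving, which is precisely what makes such errors costly to detect with conventional testing. (A minimal illustrative sketch, not drawn from the paper.)

```python
import threading

counter = 0

def worker(iterations: int = 100_000) -> None:
    global counter
    for _ in range(iterations):
        counter += 1  # load, add, store: a thread switch between the
                      # load and the store silently discards updates

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 400000; a racy run can print less, yet many runs pass,
# which is exactly why such bugs evade ordinary test suites.
print(counter)
```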

    High Throughput Automated Allele Frequency Estimation by Pyrosequencing

    Pyrosequencing is a DNA sequencing method based on the principle of sequencing-by-synthesis, with pyrophosphate detected through a series of enzymatic reactions. This bioluminometric, real-time DNA sequencing technique offers unique applications that are cost-effective and user-friendly. In this study, we combined a number of methods to develop an accurate, robust, and cost-efficient method for determining allele frequencies in large populations for association studies. The assay offers the advantage of minimal systematic sampling error, uses a general biotin amplification approach, and replaces dTTP with dATP-alpha-thio to avoid non-uniformly higher peaks, increasing accuracy. We demonstrate that this newly developed assay is a robust, cost-effective, accurate, and reproducible approach for large-scale genotyping of DNA pools. We also discuss potential improvements to the software for more accurate allele frequency analysis.
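
    In pooled-DNA assays of this kind, the allele frequency is typically estimated from the relative peak heights of the two allele-specific signals, often with an empirically calibrated correction factor. A hypothetical sketch (the function name, correction factor, and peak values are illustrative, not from the study):

```python
def allele_frequency(peak_a: float, peak_b: float, k: float = 1.0) -> float:
    """Estimate allele A's frequency in a DNA pool from pyrosequencing
    peak heights; k corrects for unequal incorporation efficiency and
    would be calibrated per assay against samples of known genotype."""
    return peak_a / (peak_a + k * peak_b)

# Illustrative peak heights from a pooled sample:
print(f"f(A) = {allele_frequency(42.0, 58.0):.2f}")  # 0.42
```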

    Automated Dynamic Error Analysis Methods for Optimization of Computer Arithmetic Systems

    Computer arithmetic is one of the more important topics within computer science and engineering. The earliest computer systems were designed to perform arithmetic operations, and most if not all digital systems are required to perform some sort of arithmetic as part of their normal operation. This reliance on arithmetic means that the accurate representation of real numbers within digital systems is vital, and an understanding of how these systems are implemented, and of their possible drawbacks, is essential for designing and implementing modern high-performance systems. At present the most widely implemented system for computer arithmetic is IEEE 754 floating point; while it is deemed the best available implementation, it has several features that can result in serious errors of computation if not handled correctly, and a lack of understanding of these errors and their effects has led to real-world disasters on several occasions. Systems for detecting these errors are therefore highly important, and fast, efficient, easy-to-use implementations of such detection systems are a high priority. Detecting floating-point rounding errors normally requires run-time analysis to be effective. Several systems have been proposed for the analysis of floating-point arithmetic, including Interval Arithmetic, Affine Arithmetic, and Monte Carlo Arithmetic. While these systems have been well studied through theoretical and software-based approaches, implementations applicable to real-world situations have been limited by issues of implementation, performance, and scalability. The majority of implementations have been software based and have not taken advantage of the performance gains associated with hardware-accelerated computer arithmetic; this is especially problematic given that systems requiring high accuracy will often also require high performance. The aim of this thesis and the associated research is to increase understanding of error and error analysis methods through the development of easy-to-use and easy-to-understand implementations of these techniques.
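
    The flavour of error these analysis systems target can be shown in a few lines: the subtraction below cancels nearly all significant digits, and a crude Monte Carlo Arithmetic-style probe (randomly perturbing each operation's result by about one unit of relative rounding noise) makes the instability visible as spread across repeated runs. This is an illustrative sketch of the idea only, not an implementation of any of the named systems.

```python
import random

def mc_sub(a: float, b: float, eps: float = 2**-52) -> float:
    # Perturb the rounded result by up to ~1 ulp of relative noise,
    # mimicking how Monte Carlo Arithmetic randomises rounding.
    return (a - b) * (1 + random.uniform(-eps, eps))

x = 1e-8
exact = x  # algebraically, 1 - (1 - x) == x
samples = [mc_sub(1.0, mc_sub(1.0, x)) for _ in range(1000)]
spread = (max(samples) - min(samples)) / exact

# The relative spread comes out on the order of 1e-8: roughly half of
# a double's ~16 significant digits were destroyed by the cancellation.
print(spread)
```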
