
    Multiple bit error correcting architectures over finite fields

    This thesis proposes techniques to mitigate multiple bit errors in Galois field (GF) arithmetic circuits. Because GF arithmetic circuits such as multipliers form complex and critical functional units of crypto-processors, making them fault tolerant improves the reliability of circuits deployed in safety-critical applications, where unmitigated errors can be catastrophic. First, a thorough literature review was carried out, and the merits of existing schemes were analysed to identify room for improvement in error correction capability, area and power consumption. The proposed error correction schemes include bit-parallel ones based on optimised BCH codes, suited to applications where power and area are not the prime concerns; this scheme is further extended to a dynamically correcting variant that reduces decoder delay. Another method, based on cross-parity codes, is proposed for low-power, low-area applications such as RFIDs and smart cards. Experimental evaluation shows that the proposed techniques mitigate single and multiple bit errors with wider error coverage than existing methods, at lower area and power cost. The schemes mask errors appearing at the circuit output irrespective of their cause. The thesis also investigates error mitigation schemes in emerging technologies (QCA, CNTFET) and compares their area, power and delay with CMOS equivalents. Although the proposed multiple error correcting techniques cannot guarantee 100% error mitigation, incorporating them into a design improves circuit reliability and increases the difficulty of attacking crypto-devices. The proposed schemes can also be extended to non-GF digital circuits.
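
    As a hedged illustration of the arithmetic these architectures protect (not the thesis's design), the sketch below multiplies two elements of GF(2^8) under the AES reduction polynomial and attaches a single parity bit, showing why plain parity catches single-bit but not multiple-bit output errors, and hence why stronger codes such as BCH or cross-parity are needed. The field, polynomial and operand values are assumptions chosen for illustration.

```python
# Minimal sketch (not from the thesis): multiplication in GF(2^8) with the AES
# reduction polynomial x^8 + x^4 + x^3 + x + 1, plus a single parity bit, to show
# why simple parity cannot catch the multiple-bit errors the thesis targets with
# BCH and cross-parity codes.

def gf256_mul(a: int, b: int, poly: int = 0x11B) -> int:
    """Carry-less multiply of a and b, reduced modulo the field polynomial."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:          # degree reached 8: reduce by the field polynomial
            a ^= poly
    return result

def parity(x: int) -> int:
    """XOR of all bits of x (even/odd parity)."""
    return bin(x).count("1") & 1

a, b = 0x57, 0x83
product = gf256_mul(a, b)
check = parity(product)                 # parity bit stored alongside the result

faulty_single = product ^ 0x01          # one flipped output bit
faulty_double = product ^ 0x03          # two flipped output bits
print(parity(faulty_single) != check)   # True: single-bit error detected
print(parity(faulty_double) != check)   # False: double-bit error slips through
```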

    Automated Functional Testing based on the Navigation of Web Applications

    Web applications are becoming increasingly complex, and testing them is an intricate, hard and time-consuming activity; as a result, testing is often poorly performed or skipped by practitioners. Test automation can help to avoid this situation. Hence, this paper presents a novel approach to automated software testing of web applications based on their navigation. Web navigation is the process of traversing a web application with a browser, while functional requirements are the actions the application must perform; evaluating the correct navigation of a web application therefore assesses its specified functional requirements. The proposed automation method operates at four levels: test case generation, test data derivation, test case execution, and test case reporting. It is driven by three kinds of inputs: i) UML models; ii) Selenium scripts; iii) XML files. We have implemented our approach in an open-source testing framework named Automatic Testing Platform. The work is validated by a case study whose target is a real invoice management system developed using a model-driven approach. (Comment: In Proceedings WWV 2011, arXiv:1108.208)
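
    As an illustration of navigation-driven functional testing (a sketch, not the paper's Automatic Testing Platform), the snippet below drives a hypothetical login-to-invoice-list navigation with the Selenium WebDriver Python bindings; the URL, element IDs and expected title are assumed for illustration, whereas the paper derives such steps from UML models and XML data.

```python
# Minimal navigation-based functional test sketch using the Selenium WebDriver
# Python bindings. The URL, element IDs and expected page title are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    # Step 1: navigate to the (assumed) login page.
    driver.get("http://localhost:8080/invoices/login")

    # Step 2: exercise the functional requirement "a registered user can log in".
    driver.find_element(By.ID, "username").send_keys("demo")
    driver.find_element(By.ID, "password").send_keys("demo")
    driver.find_element(By.ID, "loginButton").click()

    # Step 3: the navigation outcome doubles as the functional assertion:
    # reaching the invoice list means the requirement is satisfied.
    assert "Invoice list" in driver.title, "navigation did not reach the invoice list"
finally:
    driver.quit()
```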

    Protection Challenges of Distributed Energy Resources Integration In Power Systems

    For over a century, electrical power systems have been the main source of energy for societies and industries. Much of this infrastructure was built long ago: a great deal of high-rating, high-voltage equipment designed and manufactured in the mid-20th century is still operating in the United States' power network. These assets remain capable of doing what they do today. The issue arises with the recent trend of DER integration, which causes fundamental changes in electrical power systems and violates traditional network design assumptions in various ways. Demand for integrating Distributed Energy Resources (DERs) has risen steeply, driven by various incentives for distributed and renewable energy. However, DER integration violates the most fundamental assumption of traditional power system design: that power flows from generation (upstream) toward load locations (downstream). Currently operating power systems, and consequently their equipment ratings, operational practices, protection schemes and protection settings, are designed on this assumption. Violating these designs and settings reduces reliability and increases outages, the opposite of the goals of DER integration. DER integration and its consequences appear at both transmission and distribution levels, and both are discussed in this dissertation: the transmission-level issues are explained briefly and analytically, while the distribution-network challenges are presented in detail using both field data and simulation results. DER integration is aligned with the goal of moving toward a smart grid, which can be considered the most fundamental network reconfiguration ever undertaken and requires extensive preparation. Both long-term and short-term solutions are proposed for the identified challenges, and results are provided to illustrate their effectiveness. The author believes that developing short-term solutions can make the transition period toward the smart grid feasible, while long-term approaches should be planned for the final smart grid design and operation.
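
    To make the violated assumption concrete, the toy sketch below (not from the dissertation, with purely illustrative numbers) flags hours in which DER output exceeds feeder load, so that net power at the feeder head reverses and flows upstream, the condition that traditional unidirectional protection settings do not anticipate.

```python
# Toy illustration (not from the dissertation): hourly load and DER output on a
# distribution feeder, in MW. When DER generation exceeds load, net power at the
# feeder head becomes negative, i.e. it flows "upstream", violating the
# unidirectional-flow assumption behind traditional protection settings.
load_mw = [4.0, 3.5, 3.2, 3.8, 4.5, 5.0, 4.2, 3.6]   # illustrative feeder load
der_mw  = [0.0, 0.5, 2.0, 4.1, 5.2, 4.8, 2.5, 0.2]   # e.g. rooftop PV output

for hour, (load, der) in enumerate(zip(load_mw, der_mw)):
    net = load - der          # power drawn from the upstream grid
    if net < 0:
        print(f"hour {hour}: reverse flow of {-net:.1f} MW at the feeder head")
```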

    Expected Coverage (ExCov): A Proposal to Compare Fuzz Test Coverage within an Infinite Input Space

    A fuzz test is an approach used to discover vulnerabilities by intentionally sending invalid inputs to a system in order to trigger a fault or unintended effect that renders the system vulnerable to exploitation. Fuzz testing is an important cyber-testing technique used to find and fix vulnerabilities before they are exploited. Fuzzing military data links presents a particular challenge because existing fuzzing tools cannot be easily applied to these systems; as a result, the tools and techniques used to fuzz these links vary widely in sophistication and effectiveness. Because of the infinite, or nearly infinite, number of possible fuzzed messages that can be sent on a military data link, measuring the coverage of a fuzz test is not straightforward. This thesis proposes an understandable and meaningful metric for protocol fuzz testing called ExCov. The metric computes the coverage of a fuzz test set from a probabilistic model of vulnerability occurrence and defines coverage as the expected percentage of existing vulnerabilities discovered by a set of test cases. This metric enables the acquisitions community to write weapons system cyber security requirements more succinctly. Furthermore, it quantifies the number of faults and vulnerabilities expected to be found by a set of test cases, giving decision makers valuable information for deciding whether or not to perform additional testing. As a result, industry will be better equipped to estimate the cost and effort of cyber vulnerability testing and to represent its results more concretely. ExCov was implemented in a suite of tools called ExFuzz, which were used to compare and contrast military data link fuzz testing techniques in use today. By assessing these current methods with the ExCov metric, optimal bit-flip probabilities were found for the mutative fuzzing of three custom protocols. A generative fuzzer built on the metric was also shown to outperform mutative and manual generation strategies in nearly every case.
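
    The sketch below shows an expected-coverage style calculation under one plausible reading of the metric (the thesis's exact probabilistic model is not reproduced here): each region of the input space is assigned a probability of harbouring a vulnerability, and a test set's coverage is the expected fraction of vulnerabilities falling in the regions it exercises. The regions and weights are invented for illustration.

```python
# Rough sketch of an expected-coverage calculation under one plausible reading of
# ExCov (the thesis's exact probabilistic model is not reproduced here). Each
# input-space region carries a probability that a vulnerability lives there; the
# expected coverage of a test set is the share of that probability mass falling in
# the regions the tests actually exercise.

# Hypothetical regions of a protocol's input space with vulnerability weights.
vuln_probability = {
    "header/length_field": 0.30,
    "header/flags":        0.10,
    "payload/ascii":       0.05,
    "payload/binary":      0.40,
    "checksum":            0.15,
}

def expected_coverage(tested_regions, model):
    """Expected fraction of existing vulnerabilities hit by the tested regions."""
    total = sum(model.values())
    hit = sum(p for region, p in model.items() if region in tested_regions)
    return hit / total

suite_a = {"header/length_field", "header/flags", "checksum"}
suite_b = {"payload/binary", "header/length_field"}
print(f"suite A ExCov ~ {expected_coverage(suite_a, vuln_probability):.0%}")  # ~55%
print(f"suite B ExCov ~ {expected_coverage(suite_b, vuln_probability):.0%}")  # ~70%
```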

    Reducing Software Testing Time with Combinatorial Testing and Test Automation

    The development of large software systems is a complex and error-prone process, and errors may occur at any stage of software development. These errors, sometimes referred to as bugs, can cause great losses of both time and money if not identified and removed as early as possible. Testing a software product is costly, since it takes considerable time and requires exercising many combinations of its functions, integrity, performance and so on, which are captured as test cases. The company's goal is to reduce testing time so that it can save money and deliver the product to the customer faster. Testing time can be reduced in two main ways: first by reducing the number of test cases, and second by automating repeatedly tested areas. This paper discusses fundamentals of testing such as the importance of, and difference between, verification and validation, testing throughout the software development life cycle, and testing methods, levels and types. It then discusses reducing the time spent on testing by cutting the number of test cases with combinatorial testing and by automating repeatedly tested areas with test automation using the Selenium tool. Finally, it sheds some light on a real-world test automation project with Selenium and two integrated development environments.
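
    As a hedged illustration of how combinatorial (here 2-way, or pairwise) testing shrinks a test suite, the sketch below greedily builds a suite that covers every pair of parameter values at least once; the parameter model is hypothetical and this is not the specific tool or algorithm used in the paper.

```python
# Greedy construction of a 2-way (pairwise) test suite: every pair of values across
# every pair of parameters is covered by at least one test. A small sketch for a
# hypothetical parameter model, not the tool used in the paper.
from itertools import combinations, product

def pairwise_suite(parameters):
    names = list(parameters)
    # All (param_i, value_i, param_j, value_j) pairs that must be covered.
    uncovered = set()
    for (a, b) in combinations(names, 2):
        for va, vb in product(parameters[a], parameters[b]):
            uncovered.add((a, va, b, vb))

    suite = []
    while uncovered:
        # Pick the full combination covering the most still-uncovered pairs;
        # an exhaustive scan is fine for small models.
        best, best_gain = None, -1
        for combo in product(*(parameters[n] for n in names)):
            case = dict(zip(names, combo))
            gain = sum(1 for (a, va, b, vb) in uncovered
                       if case[a] == va and case[b] == vb)
            if gain > best_gain:
                best, best_gain = case, gain
        suite.append(best)
        uncovered = {(a, va, b, vb) for (a, va, b, vb) in uncovered
                     if not (best[a] == va and best[b] == vb)}
    return suite

if __name__ == "__main__":
    model = {"browser": ["Chrome", "Firefox", "Edge"],
             "os": ["Windows", "Linux", "macOS"],
             "locale": ["en", "de", "fr"]}
    tests = pairwise_suite(model)
    # Greedy pairwise typically needs ~9-10 tests here instead of 27 exhaustive.
    print(len(tests), "tests instead of", 3 * 3 * 3)
    for t in tests:
        print(t)
```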

    Precise vehicle location as a fundamental parameter for intelligent self-aware rail-track maintenance systems

    The rail industry in the UK is undergoing substantial changes in response to a modernisation vision for 2040, whose development and implementation will lead to a highly automated and safe railway. Real-time regulation of traffic will optimise the performance of the network, with trains running in succession within adjacent movable safety zones. Critically, maintenance will use intelligent trainborne and track-based systems. These will provide accurate and timely information for condition-based intervention at precise track locations, reducing possession downtime and minimising the presence of workers on operating railways. Clearly, precise knowledge of trains' real-time location is of paramount importance. The positional accuracy demanded by the future railway is better than 2 m; a critical aspect of this requirement is the ability to resolve train occupancy of adjacent tracks with the highest degree of confidence, and a finer resolution still is required to locate faults such as damaged or missing parts precisely. Train location currently relies on track signalling technology, but these systems mostly indicate only the presence of trains within discrete track sections. Standard Global Navigation Satellite Systems (GNSS) cannot precisely and reliably resolve location as required either. Within the context of the needs of the future railway, state-of-the-art location technologies and systems were reviewed and critiqued. It was found that no current technology can resolve location as required; uncertainty is a significant factor. A new integrated approach employing complementary technologies and a more efficient data fusion process can potentially offer a more accurate and robust solution. Data fusion architectures enabling intelligent self-aware rail-track maintenance systems are proposed.
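
    As a toy example of the kind of data fusion such architectures rely on (not the thesis's proposed design), the sketch below fuses noisy GNSS position fixes with wheel-odometry dead reckoning along the track using a one-dimensional Kalman filter; all noise figures and measurements are assumptions.

```python
# Toy 1-D data fusion sketch (not the thesis's architecture): combine wheel-odometry
# dead reckoning with noisy GNSS fixes along the track using a scalar Kalman filter.
# All noise figures and measurements below are illustrative assumptions.
import random

Q = 0.5 ** 2      # odometry (process) noise variance per step, metres^2
R = 5.0 ** 2      # GNSS measurement noise variance, metres^2

x, P = 0.0, 100.0          # initial along-track position estimate and its variance
true_pos = 0.0

random.seed(1)
for step in range(10):
    # True motion and odometry-based prediction.
    true_pos += 10.0                               # train advances 10 m
    odo = 10.0 + random.gauss(0.0, 0.5)            # odometry-reported displacement
    x, P = x + odo, P + Q                          # predict

    # GNSS fix and Kalman update.
    gnss = true_pos + random.gauss(0.0, 5.0)       # noisy GNSS along-track position
    K = P / (P + R)                                # Kalman gain
    x, P = x + K * (gnss - x), (1.0 - K) * P       # update

    print(f"step {step}: true {true_pos:6.1f} m, fused {x:6.1f} m, sigma {P ** 0.5:.2f} m")
```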

    Software component testing: a standard and the effectiveness of techniques

    This portfolio comprises two projects linked by the theme of software component testing, which is also often referred to as module or unit testing. One project covers its standardisation, while the other considers the analysis and evaluation of the application of selected testing techniques to an existing avionics system; the evaluation is based on empirical data obtained from fault reports relating to that system. The standardisation project is based on the development of the BCS/BSI Software Component Testing Standard and the BCS/BSI Glossary of terms used in software testing, both of which are included in the portfolio. The papers included for this project address the adopted development process as well as the resolution of technical matters concerning the definition of the testing techniques and their associated measures. The test effectiveness project documents a retrospective analysis of an operational avionics system to determine the relative effectiveness of several software component testing techniques. The methodology differs from that used in other test effectiveness experiments in that it considers every possible set of inputs required to satisfy a testing technique, rather than arbitrarily chosen values from within that set. The three papers present the experimental methodology, intermediate results from a failure analysis of the studied system, and the test effectiveness results for ten testing techniques, whose definitions were taken from the BCS/BSI Software Component Testing Standard. The creation of the two standards has filled a gap in both the national and international software testing standards arenas. Their production required in-depth knowledge of software component testing techniques, the identification and use of a development process, and the negotiation of the standardisation process at a national level; the knowledge gained has been disseminated by the author in the papers included in this portfolio. The investigation of test effectiveness introduces a new methodology for determining the effectiveness of software component testing techniques by means of retrospective analysis, providing a new set of data that can be added to the body of empirical evidence on software component testing effectiveness.
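
    To illustrate the exhaustive flavour of that methodology with a hypothetical toy (not the avionics study), the sketch below enumerates every minimal test set that satisfies branch coverage for a tiny function over a small input domain and reports what fraction of those sets would expose a seeded off-by-one fault.

```python
# Hypothetical toy (not the avionics study): measure a technique's effectiveness as
# the fraction of ALL test sets satisfying it that expose a seeded fault, rather
# than judging one arbitrarily chosen test set.
from itertools import product

def classify(x):            # specified behaviour
    return "high" if x >= 10 else "low"

def classify_faulty(x):     # seeded fault: boundary shifted off by one
    return "high" if x > 10 else "low"

domain = range(0, 21)
true_branch  = [x for x in domain if x >= 10]   # inputs taking the 'high' branch
false_branch = [x for x in domain if x < 10]    # inputs taking the 'low' branch

# Every minimal test set achieving branch coverage: one input per branch.
satisfying_sets = list(product(true_branch, false_branch))

detecting = sum(
    1 for test_set in satisfying_sets
    if any(classify(x) != classify_faulty(x) for x in test_set)
)
print(f"{detecting}/{len(satisfying_sets)} branch-coverage test sets "
      f"expose the fault ({detecting / len(satisfying_sets):.1%})")
# -> 10/110 expose the fault (9.1%); only sets containing x == 10 detect it.
```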

    Advanced space system concepts and their orbital support needs (1980 - 2000). Volume 3: Detailed data. Part 1: Catalog of initiatives, functional options, and future environments and goals

    The following areas are discussed in relation to a study of the commonality of space vehicle applications to future national needs: (1) an index of initiatives (civilian observation, communication, support), a brief illustrated description of each initiative, and time periods (from 1980 to 2000+) for their implementation; (2) a data bank of functional system options, presented as data sheets, one for each major function, giving the system options for near-term, mid-term, and far-term space projects applicable to each subcategory of functions to be fulfilled; (3) a table relating initiatives to desired goals (public service and humanistic, materialistic, scientific and intellectual); and (4) data on size, weight and cost estimates.