6,813 research outputs found

    Integrated testing and verification system for research flight software design document

    Get PDF
    The NASA Langley Research Center is developing the MUST (Multipurpose User-oriented Software Technology) program to cut the cost of producing research flight software through a system of software support tools. The HAL/S language is the primary subject of the design. Boeing Computer Services Company (BCS) has designed an integrated verification and testing capability as part of MUST. Documentation, verification, and test options are provided, with special attention to real-time, multiprocessing issues. The needs of the entire software production cycle have been considered, with effective management and reduced lifecycle costs as foremost goals. Capabilities have been included in the design for static detection of data-flow anomalies involving communicating concurrent processes. Some types of ill-formed process synchronization and deadlock are also detected statically.
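
    As a rough illustration of the kind of static deadlock detection the abstract mentions, here is a minimal lock-order-graph check: if one process acquires lock B while holding lock A and another acquires them in the opposite order, the graph has a cycle and a deadlock is possible. This is a generic sketch of the technique, not the MUST design; the process and lock names are invented.

    ```python
    # Static deadlock detection via a lock-order graph (illustrative sketch).
    from collections import defaultdict

    def lock_order_edges(processes):
        """Yield (held, wanted) pairs: lock `wanted` acquired while `held` is held."""
        for acquisitions in processes.values():
            for i, held in enumerate(acquisitions):
                for wanted in acquisitions[i + 1:]:
                    yield held, wanted

    def has_cycle(edges):
        """Detect a cycle in the lock-order graph with depth-first search."""
        graph = defaultdict(list)
        for a, b in edges:
            graph[a].append(b)
        WHITE, GRAY, BLACK = 0, 1, 2
        color = defaultdict(int)
        def dfs(node):
            color[node] = GRAY
            for nxt in graph[node]:
                if color[nxt] == GRAY:
                    return True           # back edge: potential deadlock
                if color[nxt] == WHITE and dfs(nxt):
                    return True
            color[node] = BLACK
            return False
        return any(color[n] == WHITE and dfs(n) for n in list(graph))

    # Two processes taking the same two locks in opposite order: classic deadlock.
    procs = {"P1": ["lockA", "lockB"], "P2": ["lockB", "lockA"]}
    print(has_cycle(lock_order_edges(procs)))  # True
    ```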

    Radiation-Induced Error Criticality in Modern HPC Parallel Accelerators

    Get PDF
    In this paper, we evaluate the error criticality of radiation-induced errors on modern High-Performance Computing (HPC) accelerators (Intel Xeon Phi and NVIDIA K40) through a dedicated set of metrics. We show that, as far as imprecise computing is concerned, simple mismatch detection is not sufficient to evaluate and compare the radiation sensitivity of HPC devices and algorithms. Our analysis quantifies and qualifies radiation effects on applications' output, correlating the number of corrupted elements with their spatial locality. We also provide the mean relative error (dataset-wise) to evaluate radiation-induced error magnitude. We apply the selected metrics to experimental results obtained in various radiation test campaigns, for a total of more than 400 hours of beam time per device. The amount of data we gathered allows us to evaluate the error criticality of a representative set of algorithms from HPC suites. Additionally, based on the characteristics of the tested algorithms, we draw generic reliability conclusions for broader classes of codes. We show that arithmetic operations are less critical for the K40, while the Xeon Phi is more reliable when executing particle interactions solved through Finite Difference Methods. Finally, iterative stencil operations seem the most reliable on both architectures. This work was supported by the STIC-AmSud/CAPES scientific cooperation program under the EnergySFE research project grant 99999.007556/2015-02, the EU H2020 Programme, and MCTI/RNP-Brazil under the HPC4E Project, grant agreement no. 689772. Tested K40 boards were donated thanks to Steve Keckler, Timothy Tsai, and Siva Hari from NVIDIA. Postprint (author's final draft).
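
    To make the metrics concrete, here is a hedged sketch of the three quantities the abstract names: the corrupted-element count, the dataset-wise mean relative error, and a spatial-locality measure over the corrupted cells. The array names and the 4-neighbour locality heuristic are illustrative assumptions, not the authors' exact definitions.

    ```python
    # Illustrative error-criticality metrics for a golden-vs-observed 2D output.
    import numpy as np

    def error_metrics(golden, observed, eps=1e-12):
        corrupted = golden != observed
        n_corrupted = int(corrupted.sum())
        # Mean relative error over the whole dataset (dataset-wise).
        rel_err = np.abs(observed - golden) / (np.abs(golden) + eps)
        mean_rel_err = float(rel_err.mean())
        # Toy locality measure (assumes 2D data): fraction of corrupted cells
        # with a corrupted 4-neighbour, separating single events from bursts.
        if n_corrupted:
            padded = np.pad(corrupted, 1, constant_values=False)
            neigh = (padded[:-2, 1:-1] | padded[2:, 1:-1] |
                     padded[1:-1, :-2] | padded[1:-1, 2:])
            clustered = float((corrupted & neigh).sum() / n_corrupted)
        else:
            clustered = 0.0
        return n_corrupted, mean_rel_err, clustered

    golden = np.ones((4, 4))
    observed = golden.copy()
    observed[1, 1] = observed[1, 2] = 1.5   # a small corrupted cluster
    print(error_metrics(golden, observed))  # (2, 0.0625, 1.0)
    ```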

    Well Performance Tracking in a Mature Waterflood Asset

    Get PDF
    Imperial Users only.

    A formal verification framework and associated tools for enterprise modeling : application to UEML

    Get PDF
    The aim of this paper is to propose and apply a verification and validation approach to Enterprise Modeling that enables the user to improve the relevance, correctness, suitability, and coherence of a model through property specification and formal proof of properties.
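
    As a toy analogue of checking a specified property against an enterprise model, the sketch below verifies a reachability-style coherence property on a small process graph by exhaustive search. The activities, flows, and property are hypothetical; the paper's framework rests on formal proof, which this illustration only gestures at.

    ```python
    # Check "no path reaches `target` without passing `must_pass`" on an
    # acyclic activity graph (a real prover handles cycles and richer logics).
    flows = {"receive_order": ["check_credit"],
             "check_credit":  ["approve", "reject"],
             "approve":       ["ship"],
             "reject":        [],
             "ship":          []}

    def all_paths_hit(start, must_pass, target, flows):
        stack = [(start, start == must_pass)]
        while stack:
            node, passed = stack.pop()
            if node == target and not passed:
                return False                      # counterexample path found
            for nxt in flows.get(node, []):
                stack.append((nxt, passed or nxt == must_pass))
        return True

    print(all_paths_hit("receive_order", "approve", "ship", flows))  # True
    ```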

    How robust are distributed systems?

    Get PDF
    A distributed system is made up of large numbers of components operating asynchronously from one another, and hence with incomplete and inaccurate views of one another's state. Load fluctuations are common as new tasks arrive and active tasks terminate. Jointly, these aspects make it nearly impossible to arrive at detailed predictions of a system's behavior. For distributed systems to be used successfully in situations where humans cannot provide the sort of predictable, real-time responsiveness of a computer, it is important that they be robust. The technology of today can too easily be affected by worm programs or by seemingly trivial mechanisms that, for example, can trigger stock market disasters. Inventors of a technology have an obligation to overcome flaws that can exact a human cost. A set of principles for guiding solutions to distributed computing problems is presented.

    Implementation of an XML-based user interface with applications in ice sheet modeling

    Get PDF
    The scientific domain presents unique challenges to software developers. This thesis describes the application of design patterns to the problem of dynamically changing interfaces to scientific application software (GLIMMER, which performs ice sheet modeling). In its present form, GLIMMER uses a text configuration file to define model behavior, set parameters, and structure model input/output (I/O). The creation of the configuration file presents a significant problem to users due to its format and complexity. GLIMMER is still under development, and the number of changes to configuration parameters, parameter types, and parameter dependencies makes development of any single interface useful only in the short term. The application of design patterns described here resulted in an interface specification tool that generates multiple versions of a user interface, usable across a wide variety of configuration parameter types, values, and dependencies. The resulting products have leveraged design patterns and solved problems associated with design pattern usage not found in the specialized software engineering literature.
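
    The core idea, generating the interface from a parameter specification so that configuration changes need no interface rewrite, can be sketched as follows. The XML schema, parameter names, and defaults below are invented for illustration; GLIMMER's actual specification format is not reproduced here.

    ```python
    # Build typed form fields from an XML parameter specification.
    import xml.etree.ElementTree as ET

    SPEC = """
    <interface>
      <param name="ice_density" type="float" default="910.0"/>
      <param name="timestep"    type="int"   default="1"/>
      <param name="output_file" type="str"   default="out.nc"/>
    </interface>
    """

    CASTS = {"float": float, "int": int, "str": str}

    def build_form(spec_xml):
        """Turn each <param> element into a (name, cast, default) field."""
        root = ET.fromstring(spec_xml)
        return [(p.get("name"), CASTS[p.get("type")], p.get("default"))
                for p in root.findall("param")]

    def apply_answers(fields, answers):
        """Apply user answers (or defaults) with type validation."""
        return {name: cast(answers.get(name, default))
                for name, cast, default in fields}

    fields = build_form(SPEC)
    print(apply_answers(fields, {"timestep": "5"}))
    # {'ice_density': 910.0, 'timestep': 5, 'output_file': 'out.nc'}
    ```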

    Reusing RTL assertion checkers for verification of SystemC TLM models

    Get PDF
    The recent trend towards system-level design gives rise to new challenges for reusing existing RTL intellectual properties (IPs) and their verification environments in TLM. While techniques and tools to abstract RTL IPs into TLM models have begun to appear, the problem of reusing, at TLM, a verification environment originally developed for an RTL IP is still under-explored, particularly when assertion-based verification (ABV) is adopted. Some frameworks have been proposed to deal with ABV at TLM, but they assume a top-down design and verification flow, in which assertions are defined ex novo at the TLM level. In contrast, the reuse of existing assertions in an RTL-to-TLM bottom-up design flow has not yet been analyzed, except by using transactors to create a mixed simulation between the TLM design and the RTL checkers corresponding to the assertions. However, the use of transactors may lead to longer verification time due to the need to develop and verify the transactors themselves. Moreover, simulation time is negatively affected by their presence, since they slow the simulation down to the speed of its slowest parts (i.e., the RTL checkers). This article proposes an alternative methodology that does not require transactors for reusing assertions, originally defined for a given RTL IP, to verify the corresponding TLM model. Experiments have been conducted on benchmarks with different characteristics and complexity to show the applicability and efficacy of the proposed methodology.
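
    To fix intuition, here is a minimal sketch of an assertion conceived at RTL ("every request is acknowledged before the next request") evaluated directly over a TLM transaction stream, with no RTL checker or transactor in the loop. The transaction names are invented; this is the general idea, not the article's checker-generation flow.

    ```python
    # Evaluate a req/ack ordering assertion over a transaction trace.
    def check_req_ack(trace):
        """Fail if a second 'req' arrives while one is still outstanding."""
        outstanding = False
        for i, txn in enumerate(trace):
            if txn == "req":
                if outstanding:
                    return f"violation at transaction {i}: req before ack"
                outstanding = True
            elif txn == "ack":
                outstanding = False
        return "pass" if not outstanding else "violation: req never acked"

    print(check_req_ack(["req", "ack", "req", "ack"]))   # pass
    print(check_req_ack(["req", "req", "ack"]))          # violation at transaction 1
    ```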

    Process-mining-enabled audit of information systems: Methodology and an application

    Get PDF
    Current methodologies for Information Systems (IS) audits suffer from limitations that could question the effectiveness of such procedures in detecting deviations, fraud, or abuse. Process Mining (PM), a set of business-process-related diagnostic and improvement techniques, can tackle these weaknesses, but the literature lacks contributions that address this possibility concretely. Thus, by framing PM as an Expert System (ES) engine, this paper presents a five-step PM-based methodology for IS audits and validates it through a case in a freight export port process managed by a Port Community System (PCS), an open electronic platform enabling information exchange among port stakeholders. The validation pointed out some advantages (e.g. depth of analysis, easier automation, less invasiveness) of our PM-enabled methodology over extant ESs and tools for IS audit. The substantive test and the checks on the PCS processing and output controls made it possible to identify four major non-conformances, likely implying both legal and operational risks, and two unforeseen process deviations that were not known to the port authority but that could improve the flexibility of the process. These outcomes set the stage for reengineering the export process and for revising the boundaries in the process flow of the PCS.
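
    A toy conformance check of the kind such a PM-enabled audit builds on: replay each logged case against the allowed transitions of a reference export process and flag deviations. The process steps and the event log are invented; the paper's five-step methodology is far richer than this sketch.

    ```python
    # Flag event-log transitions that the reference process model forbids.
    ALLOWED = {"declare": {"inspect", "load"},
               "inspect": {"load"},
               "load":    {"depart"},
               "depart":  set()}

    def deviations(event_log):
        """Return (case_id, step_pair) for every transition not in the model."""
        found = []
        for case_id, trace in event_log.items():
            for a, b in zip(trace, trace[1:]):
                if b not in ALLOWED.get(a, set()):
                    found.append((case_id, (a, b)))
        return found

    log = {"c1": ["declare", "inspect", "load", "depart"],
           "c2": ["declare", "load", "depart"],   # inspection skipped: allowed
           "c3": ["declare", "depart"]}           # loading skipped: deviation
    print(deviations(log))   # [('c3', ('declare', 'depart'))]
    ```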

    Web Enabling a Bibliographic Database of Indian Biomedical Journals: IndMED

    Get PDF
    A dissertation for the partial fulfillment of the M.S. (Software Systems) degree from the Birla Institute of Technology and Science, Pilani (Rajasthan); 2001. The project aims to provide web access to a bibliographic database called IndMED. The database is a product of the Indian MEDLARS Centre (IMC, a division of the National Informatics Centre) and holds references to articles published in India's learned biomedical journals. The scope of this project is limited to providing a web-based user interface for database searching and developing the CGI for web-database connectivity. The project was planned to ensure its scheduled completion. A survey of relevant research and technologies pertaining to the web interface, web-database connectivity, and DBMS has been made and is briefly described in the dissertation. Interviews, questionnaires, and observation of similar systems were used to determine the requirements of the proposed system, and the specification of the proposed system was then based on those requirements. After the requirements and the specification were finalized, an analysis model consisting of Data Flow Diagrams (DFDs) and an Entity-Relationship Diagram (ERD) was developed. The design of the system was then developed for the sub-systems (web interface, web-database connectivity, and DBMS) so that the requirements and specifications are met. The web interface mainly consists of HTML pages, i.e. the Simple Search Form, the Minimal Search Form, and the Advanced Search Form, plus the Search Result Screen generated dynamically by the CGI. The web-database connectivity sub-system consists of Perl scripts dealing with processing of data submitted from the HTML forms, variable assignment, error trapping, formulation of a query string that can be passed on to the DBMS sub-system, and HTML patching of the database output in response to the query. The implementation of the system includes the screens, along with the HTML code written for the various search forms and generated by the CGI for the web interface; the Perl scripts (IMSS.PL, IMAS.PL, IMRS.PL) for the CGI; and the database structure, the fields included in the inverted-file index, and the display formats for the DBMS sub-system. The system thus developed was tested against the tests prescribed by the system specification. The final product has been integrated into the home page of IMC at the URL http://indmed.nic.in. A user help document for searching is also included. The system has scope for future research and refinement with regard to relevancy ranking, federated searching, and links to full-text articles and other material relevant to the user query available on the Internet along with the results from IndMED.
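
    The dissertation's CGI layer is written in Perl (IMSS.PL and companions); purely to illustrate the pipeline it describes (read form fields, trap errors, formulate a query string for the DBMS, and HTML-patch the results), here is a hypothetical reimplementation of that flow in Python. Field names and the stand-in DBMS call are invented.

    ```python
    # Sketch of a CGI-style form-to-DBMS-to-HTML pipeline.
    from urllib.parse import parse_qs
    import html

    def run_dbms_query(q):
        """Stand-in for the inverted-file DBMS lookup (hypothetical)."""
        return [f"IndMED record matching '{q}'"]

    def handle_search(query_string):
        form = {k: v[0] for k, v in parse_qs(query_string).items()}
        term = form.get("term", "").strip()
        field = form.get("field", "all")
        if not term:                                   # error trap
            return "<p>Error: no search term supplied.</p>"
        dbms_query = f"{field}={term}"                 # formulate query string
        hits = run_dbms_query(dbms_query)              # hand off to the DBMS
        rows = "".join(f"<li>{html.escape(h)}</li>" for h in hits)
        return f"<ul>{rows}</ul>"                      # HTML-patch the output

    print(handle_search("term=malaria&field=title"))
    ```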

    Segment Routing: a Comprehensive Survey of Research Activities, Standardization Efforts and Implementation Results

    Full text link
    Fixed and mobile telecom operators, enterprise network operators and cloud providers strive to face the challenging demands coming from the evolution of IP networks (e.g. huge bandwidth requirements, integration of billions of devices and millions of services in the cloud). Proposed in the early 2010s, the Segment Routing (SR) architecture helps face these challenging demands, and it is currently being adopted and deployed. The SR architecture is based on the concept of source routing and has interesting scalability properties, as it dramatically reduces the amount of state information to be configured in the core nodes to support complex services. SR was first implemented with the MPLS dataplane and then, quite recently, with the IPv6 dataplane (SRv6). SRv6 has been extended from the simple steering of packets across nodes to a general network programming approach, making it very suitable for use cases such as Service Function Chaining and Network Function Virtualization. In this paper we present a tutorial and a comprehensive survey on SR technology, analyzing standardization efforts, patents, research activities and implementation results. We start with an introduction on the motivations for Segment Routing and an overview of its evolution and standardization. Then, we provide a tutorial on Segment Routing technology, with a focus on the novel SRv6 solution. We discuss the standardization efforts and the patents, providing details on the most important documents and mentioning other ongoing activities. We then thoroughly analyze research activities according to a taxonomy. We have identified 8 main categories during our analysis of the current state of play: Monitoring, Traffic Engineering, Failure Recovery, Centrally Controlled Architectures, Path Encoding, Network Programming, Performance Evaluation and Miscellaneous… Comment: Submitted to IEEE Communications Surveys & Tutorials.
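
    The source-routing idea at the heart of SR can be sketched in a few lines: the source encodes the path as a segment list, and each segment endpoint activates the next segment. The model below mimics SRv6's Segments Left handling (RFC 8754) in a greatly simplified form; the addresses are illustrative and no real SRH encoding is performed.

    ```python
    # Toy model of SRv6 segment-list processing at segment endpoints.
    from dataclasses import dataclass, field

    @dataclass
    class SRv6Packet:
        segment_list: list          # segments in reverse order, as in the SRH
        segments_left: int = field(init=False)

        def __post_init__(self):
            self.segments_left = len(self.segment_list) - 1

        @property
        def active_segment(self):
            return self.segment_list[self.segments_left]

        def endpoint_step(self):
            """At an endpoint: decrement Segments Left, activate the next SID."""
            if self.segments_left == 0:
                return None                   # final destination reached
            self.segments_left -= 1
            return self.active_segment

    pkt = SRv6Packet(["fc00::d", "fc00::b", "fc00::a"])   # path a -> b -> d
    print(pkt.active_segment)    # fc00::a
    print(pkt.endpoint_step())   # fc00::b
    print(pkt.endpoint_step())   # fc00::d
    ```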