    Monte Carlo Simulation With The GATE Software Using Grid Computing

    Monte Carlo simulations that need many replicates to obtain good statistical results can easily be executed in parallel using the "Multiple Replications In Parallel" approach. However, several precautions have to be taken in generating the parallel streams of pseudo-random numbers. In this paper, we present the distribution of Monte Carlo simulations performed with the GATE software using local clusters and grid computing. We obtained very convincing results with this large medical application thanks to the EGEE Grid (Enabling Grids for E-sciencE), achieving in one week computations that would have taken more than 3 years of processing on a single computer. This work was made possible by a generic object-oriented toolbox called DistMe, which we designed to automate this kind of parallelization for Monte Carlo simulations. The toolbox, written in Java, is freely available on SourceForge and helped to ensure a rigorous distribution of pseudo-random number streams. It is based on the use of a documented XML format for random number generator statuses.
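
    The central precaution named above, giving every replicate its own independent pseudo-random stream, can be illustrated with standard JDK classes alone. The following is a minimal sketch, not the DistMe API: SplittableRandom.split() stands in for the documented generator statuses that DistMe distributes across grid jobs, and the toy pi-estimation kernel is a placeholder for a real GATE simulation.

    import java.util.Arrays;
    import java.util.SplittableRandom;
    import java.util.stream.IntStream;

    // "Multiple Replications In Parallel": each replicate consumes its own
    // statistically independent stream, so per-replicate results can be
    // merged without inter-stream correlation.
    public class MrpDemo {
        public static void main(String[] args) {
            int replicates = 8;             // one independent simulation per stream
            long samplesPerRun = 1_000_000;
            SplittableRandom root = new SplittableRandom(42);

            double[] estimates = IntStream.range(0, replicates)
                .mapToObj(i -> root.split()) // split() yields a generator independent of its parent
                .mapToDouble(rng -> {
                    long inside = 0;         // toy kernel: Monte Carlo estimate of pi
                    for (long n = 0; n < samplesPerRun; n++) {
                        double x = rng.nextDouble(), y = rng.nextDouble();
                        if (x * x + y * y <= 1.0) inside++;
                    }
                    return 4.0 * inside / samplesPerRun;
                })
                .toArray();

            System.out.printf("pi over %d replicates: %.6f%n",
                    replicates, Arrays.stream(estimates).average().orElse(Double.NaN));
        }
    }

    On a grid, each split generator and its replicate would be shipped to a separate worker node, and only the per-replicate estimates would be gathered for averaging.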

    Goddard Conference on Mass Storage Systems and Technologies, volume 2

    Papers and viewgraphs from the conference are presented. Discussion topics include the IEEE Mass Storage System Reference Model, data archiving standards, high-performance storage devices, magnetic and magneto-optic storage systems, magnetic and optical recording technologies, high-performance helical-scan recording systems, and low-end helical-scan tape drives. Additional topics include the evolution of the identifiable unit for processing (file, granule, data set, or some similar object) as data ingestion rates increase dramatically, and the present state of the art in mass storage technology.

    Lessons learned applying CASE methods/tools to Ada software development projects

    This paper describes the lessons learned from introducing CASE methods/tools into organizations and applying them to actual Ada software development projects. It will be useful to any organization planning to introduce a software engineering environment (SEE) or evolve an existing one. It contains management-level lessons learned, as well as lessons learned in using specific SEE tools/methods. The experiences presented are from Alpha Test projects established under the STARS (Software Technology for Adaptable and Reliable Systems) project. They reflect the front-end efforts by those projects to understand the tools/methods, initial experiences in their introduction and use, and later experiences in the use of specific tools/methods and the introduction of new ones.

    The Impact of Domain Knowledge on the Effectiveness of Requirements Engineering Activities

    One of the factors that seems to influence an individual's effectiveness in requirements engineering activities is his or her knowledge of the problem being solved, i.e., domain knowledge. While in-depth domain knowledge enables a requirements engineer to understand the problem more easily, he or she can also fall prey to tacit assumptions of the domain and overlook issues that seem obvious to domain experts and therefore remain unmentioned. The purpose of this thesis is to investigate the impact of domain knowledge on different requirements engineering activities. The main research question this thesis attempts to answer is: "How does one form the most effective team, consisting of some mix of domain ignorants and domain awares, for a requirements engineering activity involving knowledge about the domain of the computer-based system whose requirements are being determined by the team?" This thesis presents two controlled experiments and an industrial case study to test a number of hypotheses. The main hypothesis states that a requirements engineering team for a computer-based system in a particular domain, consisting of a mix of requirements analysts who are ignorant of the domain and requirements analysts who are aware of the domain, is more effective at requirement idea generation than a team consisting only of requirements analysts who are aware of the domain. The results of the controlled experiments, although not conclusive, provided some support for the positive effect of the mix on the effectiveness of a requirements engineering team. The results also showed a significant effect of other independent variables, especially educational background. The data from the case study corroborated the results of the controlled experiments. The main conclusion that can be drawn from the findings of this thesis is that the presence in a requirements engineering team of a domain ignorant with a computer science or software engineering background improves the effectiveness of the team.

    Glosarium Pendidikan (Educational Glossary)


    The role of the host in a cooperating mainframe and workstation environment, volumes 1 and 2

    In recent years, advances in computer systems have prompted a move from centralized computing, based on timesharing a large mainframe computer, to distributed computing, based on a connected set of engineering workstations. A major factor in this shift is the increased performance and lower cost of engineering workstations. The move from centralized to distributed computing has raised challenges concerning where application programs should reside within the system. In a combined system of multiple engineering workstations attached to a mainframe host, the question arises of how a system designer should assign applications between the larger mainframe host and the smaller, yet powerful, workstations. The concepts related to real-time data processing are analyzed, and systems are presented that use a host mainframe and a number of engineering workstations interconnected by a local area network. In most cases, distributed systems can be classified as having a single function or multiple functions and as executing programs in real time or non-real time. In a system of multiple computers, the degree of autonomy of the computers is important; a system with one master control computer generally differs in reliability, performance, and complexity from a system in which all computers share control. This research is concerned with generating general criteria and principles for software residency decisions (host or workstation) for a diverse yet coupled group of users (the clustered workstations) who may need a shared resource (the mainframe) to perform their functions.
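
    To make the residency question concrete, the following small Java sketch shows the kind of decision rule such criteria might generalize. The attributes and the rule itself are illustrative assumptions, not criteria derived by this research.

    // Hypothetical residency rule; attributes and thresholds are
    // illustrative assumptions, not findings of this work.
    enum Residency { HOST, WORKSTATION }

    class ApplicationProfile {
        final boolean needsSharedData; // must access mainframe-resident data
        final boolean interactive;     // latency-sensitive user interaction
        final boolean realTime;        // hard deadlines on processing

        ApplicationProfile(boolean needsSharedData, boolean interactive, boolean realTime) {
            this.needsSharedData = needsSharedData;
            this.interactive = interactive;
            this.realTime = realTime;
        }

        Residency residency() {
            if (needsSharedData && !interactive) return Residency.HOST; // data gravity
            if (interactive || realTime) return Residency.WORKSTATION;  // keep latency local
            return Residency.HOST; // default: batch work stays on the shared resource
        }
    }

    A real criteria set would also weigh local area network bandwidth, workstation autonomy, and the reliability and complexity trade-offs noted above between master-controlled and shared-control configurations.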

    Performance studies of file system design choices for two concurrent processing paradigms


    Active FPGA Security through Decoy Circuits

    Field Programmable Gate Arrays (FPGAs) based on Static Random Access Memory (SRAM) are vulnerable to tampering attacks such as readback and cloning attacks. Such attacks enable the reverse engineering of the design programmed into an FPGA. To counter such attacks, measures that protect the design with low performance penalties should be employed. This research proposes a method that employs the addition of active decoy circuits to protect SRAM FPGAs from reverse engineering. The effects of the protection method on security, execution time, power consumption, and FPGA resource usage are quantified. The method significantly increases the security of the design with only minor increases in execution time, power consumption, and resource usage. For the circuits used to characterize the method, security increased to more than one million times the original value, while execution time increased to at most 1.2 times, dynamic power consumption increased to at most two times, and look-up table usage increased to at most seven times the original values. These are reasonable penalties given the size and security of the modified circuits. The proposed design protection method also extends to FPGAs based on other technologies and to Application-Specific Integrated Circuits (ASICs). In addition to the design methodology proposed, a new classification of tampering attacks and countermeasures is presented.
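
    The abstract does not define the security metric, but one plausible way to see how a modest number of decoys can yield a million-fold increase is to count the attacker's candidate sub-circuits; this combinatorial model is an illustrative assumption, not necessarily the paper's measure. With n genuine modules hidden among d indistinguishable decoys, the search space is

    \[
      S(n, d) = \binom{n + d}{n},
      \qquad\text{e.g.}\quad
      S(10, 20) = \binom{30}{10} = 30{,}045{,}015 \approx 3 \times 10^{7},
    \]

    so roughly twenty decoys around ten genuine modules already push the search space well past the million-fold mark.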

    Integrating legacy mainframe systems: architectural issues and solutions

    For more than 30 years, mainframe computers have been the backbone of computing systems throughout the world. Even today it is estimated that some 80% of the world's data is held on such machines. However, new business requirements and pressure from evolving technologies, such as the Internet, are pushing these existing systems to their limits, and they are reaching breaking point. The banking and financial sectors in particular have relied on mainframes the longest to do their business, and as a result it is they that feel these pressures the most. In recent years there have been various solutions for enabling a re-engineering of these legacy systems. It quickly became clear that completely rewriting them was not feasible, so various integration strategies emerged. Among these, the CORBA standard from the Object Management Group emerged as the strongest, providing a standards-based solution that enabled mainframe applications to become peers in a distributed computing environment. However, the requirements did not stop there. The mainframe systems were reliable, secure, scalable, and fast, so any integration strategy had to ensure that the new distributed systems did not lose any of these benefits. Various patterns, or general solutions, for meeting these requirements have arisen, and this research looks at applying some of these patterns to mainframe-based CORBA applications. The purpose of this research is to examine some of the issues involved in making mainframe-based legacy applications interoperate with newer object-oriented technologies.
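
    As a concrete illustration of the peer model described above, the sketch below pairs a hypothetical IDL interface for a mainframe-hosted service with the corresponding Java client. The interface, file name, and operation are invented for illustration; only the org.omg.CORBA calls are real API (available in the JDK through Java 8).

    // Hypothetical IDL, compiled with idlj to generate the bank.Account stub:
    //
    //   module bank {
    //     interface Account {
    //       double balance(in string accountId);
    //     };
    //   };

    import org.omg.CORBA.ORB;

    public class MainframeClient {
        public static void main(String[] args) throws Exception {
            // The ORB makes the mainframe application an ordinary peer: the
            // client neither knows nor cares that the servant runs on the host.
            ORB orb = ORB.init(args, null);

            // Stringified IOR published by the mainframe-side server; in
            // practice this would come from a Naming Service lookup instead.
            String ior = new String(java.nio.file.Files.readAllBytes(
                    java.nio.file.Paths.get("account.ior")));

            org.omg.CORBA.Object obj = orb.string_to_object(ior);
            bank.Account account = bank.AccountHelper.narrow(obj); // idlj-generated helper

            System.out.println("balance = " + account.balance("12345"));
            orb.shutdown(true);
        }
    }

    Patterns for preserving the mainframe's reliability and scalability, such as load-balanced server replicas behind the ORB, can then be applied without changing this client code.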

    Finite elements software and applications

    This thesis is a detailed study of software for the finite element method. In the text, the finite element method is introduced from both the engineering and mathematical points of view. The computer implementation of the method is explained with samples of mainframe, mini-, and micro-computer implementations. A solution is presented for the problem of limited stack size on mini- and micro-computers that possess a stack architecture. Several finite element programs are presented. Special-purpose programs to solve problems in structural analysis and groundwater flow are discussed, and an efficient, easy-to-use finite element program for general two-dimensional problems is presented. Several problems in groundwater flow are considered, including steady and unsteady flows in different types of aquifers. Different cases of sinks and sources in the flow domain are also considered. The performance of finite element methods is studied for the chosen problems by comparing the numerical solutions of test problems with analytical solutions (where they exist) or with solutions obtained by other numerical methods. The polynomial refinement of the finite elements is studied for the presented problems in order to offer some evidence as to which finite element simulation is best to use under a variety of circumstances.
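
    For reference, the steady groundwater-flow problems mentioned above fit the standard Galerkin finite element formulation; the symbols below are the usual textbook ones (h hydraulic head, T transmissivity, Q source/sink density), not notation taken from the thesis. The governing equation

    \[
      -\nabla \cdot \left( T \, \nabla h \right) = Q \quad \text{in } \Omega,
    \]

    approximated with nodal basis functions, h \approx \sum_j h_j \phi_j, and weighted by the same \phi_i, yields the linear system K\mathbf{h} = \mathbf{f} with

    \[
      K_{ij} = \int_{\Omega} T \, \nabla \phi_i \cdot \nabla \phi_j \, d\Omega,
      \qquad
      f_i = \int_{\Omega} Q \, \phi_i \, d\Omega .
    \]

    Polynomial refinement, as studied in the thesis, raises the degree of the \phi_j on a fixed mesh rather than subdividing the elements.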