
    Experimental Test bed to De-Risk the Navy Advanced Development Model

    This paper presents a reduced-scale demonstration test bed at the University of Texas' Center for Electromechanics (UT-CEM), which is well equipped to support the development and assessment of the anticipated Navy Advanced Development Model (ADM). The subscale ADM test bed builds on collaborative power management experiments conducted as part of the Swampworks Program under the US/UK Project Arrangement, as well as on non-military applications. The system includes the required variety of sources, loads, and controllers, as well as an Opal-RT digital simulator. The test bed architecture is described and the range of investigations that can be carried out on it is highlighted; results of preliminary system simulations and some initial tests are also provided. Subscale ADM experiments conducted on the UT-CEM microgrid can be an important step toward the realization of a full-voltage, full-power ADM three-zone demonstrator, providing a test bed for components, subsystems, controls, and the overall performance of the Medium Voltage Direct Current (MVDC) ship architecture.

    Advanced information processing system: Fault injection study and results

    The objective of the AIPS program is to achieve a validated fault-tolerant distributed computer system. The goals of the AIPS fault injection study were: (1) to present the fault injection study components addressing the AIPS validation objective; (2) to obtain feedback for fault removal from the design implementation; (3) to obtain statistical data regarding fault detection, isolation, and reconfiguration responses; and (4) to obtain data regarding the effects of faults on system performance. The parameters that must be varied to create a comprehensive set of fault injection tests are described, along with the subset of test cases selected, the test case measurements, and the test case execution. Both pin-level hardware faults, applied with a hardware fault injector, and software-injected memory mutations were used to test the system. An overview is provided of the hardware fault injector and the associated software used to carry out the experiments. Detailed specifications of the faults and test results are given for the I/O Network and the AIPS Fault Tolerant Processor, respectively. The results are summarized and conclusions are given.
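    To make the statistical side of such a campaign concrete, the sketch below shows a hypothetical software memory-mutation loop that tallies detection coverage. The fault model, function names, and measurement are illustrative assumptions, not the AIPS tooling.

```python
import random

def inject_bit_flip(memory: bytearray) -> int:
    """Flip one random bit in a simulated memory image; return the byte offset."""
    offset = random.randrange(len(memory))
    memory[offset] ^= 1 << random.randrange(8)
    return offset

def run_detection(memory: bytearray, golden: bytes) -> bool:
    """Stand-in for the system's fault detection: compare against a golden copy."""
    return bytes(memory) != golden

def campaign(n_trials: int = 1000) -> None:
    golden = bytes(1024)              # pristine memory image (all zeros here)
    detected = 0
    for _ in range(n_trials):
        memory = bytearray(golden)    # restore the image before each injection
        inject_bit_flip(memory)
        if run_detection(memory, golden):
            detected += 1
    print(f"detection coverage: {detected / n_trials:.2%}")

if __name__ == "__main__":
    campaign()
```

    In a real study the detection routine, isolation decision, and reconfiguration time would be recorded per trial to build the response statistics described above.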

    Centrifuge modeling of rocking-isolated inelastic RC bridge piers

    Experimental proof is provided of an unconventional seismic design concept, which is based on deliberately underdesigning shallow foundations to promote intense rocking oscillations and thereby dramatically improve the seismic resilience of structures. Termed rocking isolation, this new seismic design philosophy is investigated through a series of dynamic centrifuge experiments on properly scaled models of a modern reinforced concrete (RC) bridge pier. The experimental method reproduces the nonlinear and inelastic response of both the soil-footing interface and the structure. To this end, a novel scale-model RC (1:50 scale) that simulates reasonably well the elastic response and the failure of prototype RC elements is utilized, along with realistic representation of the soil behavior in a geotechnical centrifuge. A variety of seismic ground motions are considered as excitations. They consistently result in demonstrably beneficial performance of the rocking-isolated pier in comparison with the conventionally designed one. Seismic demand is reduced in terms of both inertial load and deck drift. Furthermore, foundation uplifting has a self-centering potential, whereas soil yielding is shown to provide a particularly effective energy dissipation mechanism, exhibiting significant resistance to cumulative damage. Thanks to such mechanisms, the rocking pier survived, with no signs of structural distress, a deleterious sequence of seismic motions that caused collapse of the conventionally designed pier. © 2014 The Authors. Earthquake Engineering & Structural Dynamics published by John Wiley & Sons Ltd.
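    For background only (this is the classical rigid-block idealization, not the soil-structure model used in the paper): for a block whose centre of mass sits at height h above a rigid base and whose footing half-width is b, uplift and rocking initiate roughly when the base acceleration demand exceeds

```latex
a_c \;\approx\; g\,\frac{b}{h}
```

    Deliberately underdesigning the footing lowers its moment capacity so that this threshold is reached early; thereafter uplift provides self-centering and soil yielding dissipates energy, which is consistent with the reduced inertial load and deck drift reported above.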

    Advanced information processing system: Inter-computer communication services

    The purpose is to document the functional requirements and detailed specifications for the Inter-Computer Communications Services (ICCS) of the Advanced Information Processing System (AIPS). An introductory section outlines the overall architecture and functional requirements of the AIPS and presents an overview of the ICCS. An overview of the AIPS architecture, as well as a brief description of the AIPS software, is given. The guarantees of the ICCS are provided, and the ICCS is described in terms of a seven-layered International Standards Organization (ISO) model. The ICCS functional requirements, functional design, and detailed specifications, as well as each layer of the ICCS, are also described. A summary of results and suggestions for future work are presented.
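    Since the ICCS is organized as a seven-layer ISO-style stack, a toy sketch may help visualize layered services. The layer names and framing below follow the generic ISO/OSI model, not the AIPS specification.

```python
# Toy layered stack: each layer wraps the payload with its own header on send
# and strips it on receive (generic ISO/OSI illustration, not the AIPS ICCS).
LAYERS = ["physical", "data-link", "network", "transport",
          "session", "presentation", "application"]

def send(payload: str) -> str:
    # Application data descends the stack, gaining one header per layer.
    for layer in reversed(LAYERS):
        payload = f"[{layer}]{payload}"
    return payload

def receive(frame: str) -> str:
    # The frame ascends the stack, shedding one header per layer.
    for layer in LAYERS:
        prefix = f"[{layer}]"
        assert frame.startswith(prefix), f"malformed frame at {layer} layer"
        frame = frame[len(prefix):]
    return frame

wire = send("hello")
print(wire)           # [physical][data-link]...[application]hello
print(receive(wire))  # hello
```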

    Stacco: Differentially Analyzing Side-Channel Traces for Detecting SSL/TLS Vulnerabilities in Secure Enclaves

    Intel Software Guard Extensions (SGX) offers software applications enclaves to protect their confidentiality and integrity from malicious operating systems. The SSL/TLS protocol, the de facto standard for protecting transport-layer network communications, has been broadly deployed to establish secure communication channels. However, in this paper, we show that the marriage between SGX and SSL may not be smooth sailing. Particularly, we consider a category of side-channel attacks against SSL/TLS implementations in secure enclaves, which we call control-flow inference attacks. In these attacks, the malicious operating system kernel may perform a powerful man-in-the-kernel attack to collect execution traces of the enclave programs at the page, cacheline, or branch level, while positioning itself in the middle of the two communicating parties. At the center of our work is a differential analysis framework, dubbed Stacco, to dynamically analyze SSL/TLS implementations and detect vulnerabilities that can be exploited as decryption oracles. Surprisingly, we found exploitable vulnerabilities in the latest versions of all the SSL/TLS libraries we examined. To validate the detected vulnerabilities, we developed a man-in-the-kernel adversary to demonstrate Bleichenbacher attacks against the latest OpenSSL library running in an SGX enclave (with the help of Graphene) and completely broke the PreMasterSecret encrypted by a 4096-bit RSA public key with only 57,286 queries. We also conducted CBC padding oracle attacks against the latest GnuTLS running in Graphene-SGX and an open-source SGX implementation of mbedTLS (i.e., mbedTLS-SGX) that runs directly inside the enclave, and showed that only 48,388 and 25,717 queries, respectively, are needed to break one block of AES ciphertext. Empirical evaluation suggests these man-in-the-kernel attacks can be completed within one to two hours. (Comment: CCS '17, October 30 - November 3, 2017, Dallas, TX, USA.)
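    The differential-analysis idea can be sketched as follows. This is a hypothetical outline, not the Stacco tool itself; trace collection is stubbed out, and the function names are assumptions. It compares page-level traces recorded while the target processes well-formed versus malformed padded messages; any set difference marks a padding-dependent control-flow point that could serve as a decryption oracle.

```python
from typing import Iterable, Set

def collect_page_trace(input_blob: bytes) -> Set[int]:
    """Placeholder for a page-level tracer; in the paper's setting the malicious
    kernel records which code pages the enclave touches while handling the input."""
    raise NotImplementedError("attach a tracer (e.g., page-fault logging) here")

def padding_oracle_candidates(valid: Iterable[bytes],
                              invalid: Iterable[bytes]) -> Set[int]:
    """Pages touched for only one padding class reveal a padding-dependent branch."""
    pages_valid = set().union(*(collect_page_trace(m) for m in valid))
    pages_invalid = set().union(*(collect_page_trace(m) for m in invalid))
    return pages_valid ^ pages_invalid  # symmetric difference = divergent control flow

# Usage idea: feed many well-formed and malformed RSA PKCS#1 v1.5 (or CBC-padded)
# messages; a non-empty result means the library's control flow leaks padding validity.
```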

    Filter for detecting and isolating faults for a nonlinear system

    This paper considers the problem of detecting and isolating multiple faults in nonlinear systems. A state-filtering strategy is derived in order to detect and isolate multiple faults that appear simultaneously or sequentially in discrete-time nonlinear systems with unknown inputs. For the considered system, provided a fault isolation condition is fulfilled, the proposed method can isolate p simultaneous faults from at least p+q output measurements, where q is the number of unknown inputs or disturbances. A reduced output residual vector of dimension p+q is generated, and its elements are decoupled so that each element of the vector is associated with only one fault or unmeasured input.
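    A plausible formulation of the setting described above (the notation is assumed, not taken from the paper) is a discrete-time nonlinear system with unknown inputs d_k and faults f_k:

```latex
x_{k+1} = g(x_k, u_k) + E\,d_k + F\,f_k, \qquad y_k = C\,x_k,
\qquad d_k \in \mathbb{R}^{q},\; f_k \in \mathbb{R}^{p},\; y_k \in \mathbb{R}^{m},\; m \ge p+q .
```

    Under this reading, the filter produces a reduced residual r_k of dimension p+q whose i-th component responds to exactly one fault (or one unknown input), so a persistent deviation of that component from zero flags the corresponding fault.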

    Grid Infrastructure for Domain Decomposition Methods in Computational ElectroMagnetics

    The accurate and efficient solution of Maxwell's equations is the problem addressed by the scientific discipline called Computational ElectroMagnetics (CEM). Many macroscopic phenomena in a great number of fields are governed by this set of differential equations: electronics, geophysics, medical and biomedical technologies, and virtual EM prototyping, besides the traditional antenna and propagation applications. Therefore, many efforts are focused on developing new and more efficient approaches to solving Maxwell's equations, and interest in CEM applications keeps growing. Several problems that were hard to tackle a few years ago can now be addressed easily thanks to the reliability and flexibility of new technologies, together with the increased computational power. This technological evolution opens the possibility of addressing large and complex tasks.

    Many of these applications aim to simulate electromagnetic behavior, for example in terms of input impedance and radiation pattern in antenna problems, or Radar Cross Section in scattering applications. Problems whose solution requires high accuracy need full-wave analysis techniques, e.g., in the virtual prototyping context, where the objective is to obtain reliable simulations in order to minimize the number of measurements and, as a consequence, their cost. Other tasks require the analysis of complete structures (including a high number of details) by directly simulating a CAD model. This approach relieves the researcher of the burden of removing useless details, while maintaining the original complexity and taking all details into account. Unfortunately, it implies (a) a high computational effort, due to the increased number of degrees of freedom, and (b) a worsening of the spectral properties of the linear system during complex analyses.

    The above considerations underline the need to identify appropriate information technologies that ease the computation of a solution and speed up the required processing. The authors' analysis and expertise suggest that Grid Computing techniques can be very useful for these purposes. Grids appear mainly in high-performance computing environments. In this context, hundreds of off-the-shelf nodes are linked together and work in parallel to solve problems that previously could be addressed only sequentially or by using supercomputers. Grid Computing is a technique developed to process enormous amounts of data; it enables large-scale resource sharing to solve problems by exploiting distributed scenarios. The main advantage of the Grid stems from parallel computing: if a problem can be split into smaller tasks that can be executed independently, its solution is computed considerably faster. To exploit this advantage, it is necessary to identify a technique able to split the original electromagnetic task into a set of smaller subproblems. The Domain Decomposition (DD) technique, based on the block generation algorithm introduced in Matekovits et al. (2007) and Francavilla et al. (2011), perfectly addresses our requirements (see Section 3.4 for details).

    In this chapter, a Grid Computing infrastructure is presented. This architecture allows parallel block execution by distributing tasks to nodes that belong to the Grid. The set of nodes is composed of physical machines and virtualized ones. This feature enables great flexibility and increases the available computational power. Furthermore, the presence of virtual nodes allows full and efficient usage of the Grid; indeed, the presented architecture can be used by different users running different applications.
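    At its simplest, the parallelism argued for above amounts to farming out independent block (subdomain) solves to the nodes of the Grid. Below is a minimal sketch using Python's standard multiprocessing pool as a stand-in for the actual Grid middleware; the per-block solver is a placeholder, not the DD solver of the cited works.

```python
from multiprocessing import Pool

def solve_block(block_id: int):
    """Placeholder for the per-block electromagnetic solve produced by the
    domain-decomposition step; returns (block id, a dummy scalar result)."""
    # ... assemble and solve the local subproblem here ...
    return block_id, 0.0

def solve_all(num_blocks: int, workers: int = 4):
    # Blocks generated by the DD algorithm are independent, so they can be
    # dispatched to separate processes/nodes and solved concurrently.
    with Pool(processes=workers) as pool:
        return dict(pool.map(solve_block, range(num_blocks)))

if __name__ == "__main__":
    partial_results = solve_all(num_blocks=16)
    # A final step would recombine the per-block solutions into the global solution.
```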

    Challenges Using Linux as a Real-Time Operating System

    Human-in-the-loop (HITL) simulation groups at NASA and the Air Force Research Lab have been using Linux as a real-time operating system (RTOS) for over a decade. More recently, SpaceX has revealed that it is using Linux as an RTOS for its Falcon launch vehicles and Dragon capsules. As Linux makes its way from ground facilities to flight-critical systems, it is necessary to recognize that the real-time capabilities in Linux are cobbled onto a kernel architecture designed for general-purpose computing. The Linux kernel contains numerous design decisions that favor throughput over determinism and latency. These decisions often require workarounds in the application or customization of the kernel to restore a high probability that Linux will meet deadlines.
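    A common class of application-level workarounds can be sketched as follows: request a real-time scheduling class and then measure wake-up jitter for a periodic task. This is a minimal, Linux-only illustration (requiring appropriate privileges), not drawn from the cited facilities.

```python
import os
import time

# Workaround: request the SCHED_FIFO real-time class (Linux-only; needs
# CAP_SYS_NICE or root). Locking memory with mlockall() is another common
# step, omitted here for brevity.
try:
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))
except PermissionError:
    print("warning: insufficient privileges for SCHED_FIFO, running best-effort")

# Measure wake-up jitter for a nominal 1 ms periodic task.
period_ns = 1_000_000
worst_ns = 0
deadline = time.monotonic_ns() + period_ns
for _ in range(1000):
    time.sleep(max(0, deadline - time.monotonic_ns()) / 1e9)
    worst_ns = max(worst_ns, time.monotonic_ns() - deadline)
    deadline += period_ns

print(f"worst-case wake-up latency: {worst_ns / 1000:.1f} us")
```

    On a stock kernel under load the worst-case latency can be orders of magnitude larger than on a kernel tuned for determinism, which is the trade-off the abstract describes.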

    Logic-controlled solid state switchgear for 270 volts dc

    A feasibility study to design and demonstrate solid-state switchgear in the form of circuit breakers and a power transfer switch is described. The switchgear operates on a nominal 270 V dc circuit and controls power to a load of up to 15 amperes. One circuit breaker may be interconnected with a second breaker to form a power transfer switch. On-off and transfer functions of the breakers or the transfer switch are remotely controlled. A number of reclosures, with variable time delay between tripout and reclosure, are programmed and controlled by integrated analog and COSMOS logic circuits. A unique commutation circuit, which generates only minimal transient disturbance to either source or load, was developed to interrupt current flow through the main SCR switching element. Laboratory tests demonstrated performance of the solid-state circuit breakers over the specified voltage and temperature ranges.
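    The programmed reclosing behaviour described above can be pictured as a small state machine: after a tripout the breaker waits a configurable delay, recloses, and locks out after a fixed number of attempts. The sketch below is purely illustrative; the delays, attempt count, and interfaces are assumptions, not the hardware's actual logic.

```python
import time

def reclosure_cycle(fault_present, open_breaker, close_breaker,
                    delays_s=(0.1, 0.5, 2.0)):
    """Trip, then retry closing after each programmed delay; lock out if the
    fault persists after the last attempt. Returns True if service is restored."""
    open_breaker()                      # tripout on detected fault
    for delay in delays_s:
        time.sleep(delay)               # programmed delay between trip and reclosure
        close_breaker()
        if not fault_present():
            return True                 # fault cleared; stay closed
        open_breaker()                  # fault still present; trip again
    return False                        # lockout after exhausting reclosure attempts
```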

    Simulation verification techniques study

    Results are summarized of the simulation verification techniques study, which consisted of two tasks: to develop techniques for simulator hardware checkout and to develop techniques for simulation performance verification (validation). The hardware verification task involved definition of simulation hardware (hardware units and integrated simulator configurations), a survey of current hardware self-test techniques, and definition of hardware and software techniques for checkout of simulator subsystems. The performance verification task included definition of simulation performance parameters (and critical performance parameters), definition of methods for establishing standards of performance (sources of reference data or validation), and definition of methods for validating performance. Both major tasks included definition of verification software and assessment of the verification database impact. An annotated bibliography of all documents generated during this study is provided.
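    Performance validation of the kind outlined above typically reduces to comparing simulator outputs against reference data within stated tolerances. The sketch below is a minimal illustration; the parameter names and tolerances are invented for the example.

```python
def validate(sim, reference, tolerances):
    """Flag each performance parameter as pass/fail against its reference value."""
    return {name: abs(sim[name] - reference[name]) <= tolerances[name]
            for name in reference}

# Example: compare a handful of (hypothetical) critical performance parameters.
result = validate(
    sim={"altitude_m": 10010.0, "mach": 0.801},
    reference={"altitude_m": 10000.0, "mach": 0.800},
    tolerances={"altitude_m": 50.0, "mach": 0.005},
)
print(result)   # {'altitude_m': True, 'mach': True}
```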