
    Quantifying Information Leaks Using Reliability Analysis

    Get PDF
    acmid: 2632367 keywords: Model Counting, Quantitative Information Flow, Reliability Analysis, Symbolic Execution location: San Jose, CA, USA numpages: 4
    We report on our work in progress on the use of reliability analysis to quantify information leaks. In recent work we proposed a software reliability analysis technique that uses symbolic execution and model counting to quantify the probability of reaching designated program states, e.g. assertion violations, under uncertainty in the environment. The technique has many applications beyond reliability analysis, ranging from program understanding and debugging to the analysis of cyber-physical systems. In this paper we report on a novel application of the technique: Quantitative Information Flow (QIF) analysis. The goal of QIF is to measure the information leakage of a program using information-theoretic metrics such as Shannon entropy or Rényi entropy. We exploit the model counting engine of the reliability analyzer over symbolic program paths to compute an upper bound on the maximum leakage over all possible distributions of the confidential data. We have implemented our approach in a prototype tool, called QILURA, and explore its effectiveness on a number of case studies.
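The capacity-style bound the abstract describes (maximum leakage over all priors is at most the log of the number of feasible, distinguishable outputs) can be sketched as follows. This is an illustrative sketch, not QILURA's actual API; the counts stand in for model counts over symbolic path conditions:

```python
from math import log2

def max_leakage_upper_bound(output_counts):
    """Channel-capacity bound: over all prior distributions on the secret,
    leakage is at most log2 of the number of feasible (distinguishable)
    outputs. Each count would come from model counting the symbolic path
    conditions that lead to that output."""
    feasible = [c for c in output_counts.values() if c > 0]
    return log2(len(feasible))

# Hypothetical model counts: number of secret values driving each output
counts = {"out=0": 12, "out=1": 4, "out=2": 0}
print(max_leakage_upper_bound(counts))  # 1.0 bit: only two feasible outputs
```

An output with a zero model count is infeasible, so it cannot contribute to the adversary's distinctions; this is why the bound depends only on which outputs are reachable, not on how the secret is distributed.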

    A GPU-based survey for millisecond radio transients using ARTEMIS

    Get PDF
    Astrophysical radio transients are excellent probes of extreme physical processes originating from compact sources within our Galaxy and beyond. Radio-frequency signals emitted from these objects provide a means to study the intervening medium through which they travel. Next-generation radio telescopes are designed to explore the vast unexplored parameter space of high-time-resolution astronomy, but require High Performance Computing (HPC) solutions to process the enormous volumes of data they produce. We have developed a combined software/hardware solution (code-named ARTEMIS) for real-time searches for millisecond radio transients, which uses GPU technology to remove interstellar dispersion and detect millisecond radio bursts from astronomical sources in real time. Here we present an introduction to ARTEMIS. We give a brief overview of the software pipeline, then focus specifically on the intricacies of performing incoherent de-dispersion. We present results from two brute-force algorithms. The first is a GPU-based algorithm designed to exploit the L1 cache of the NVIDIA Fermi GPU. Our second algorithm is CPU-based and exploits the new AVX units in Intel Sandy Bridge CPUs. Comment: 4 pages, 7 figures. To appear in the proceedings of ADASS XXI, ed. P. Ballester and D. Egret, ASP Conf. Ser.
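The incoherent de-dispersion the abstract refers to can be sketched in a few lines: for a trial dispersion measure (DM), each frequency channel is shifted by the cold-plasma dispersion delay relative to a reference frequency and the channels are summed. This is a minimal NumPy sketch of the brute-force idea, not the ARTEMIS GPU/AVX implementation:

```python
import numpy as np

K_DM = 4.148808e3  # dispersion constant, s * MHz^2 * (pc cm^-3)^-1

def dedisperse(data, freqs_mhz, dm, dt):
    """Brute-force incoherent de-dispersion for one trial DM.
    data: (n_chan, n_samp) dynamic spectrum; freqs_mhz: channel
    frequencies in MHz; dt: sample time in seconds. Each channel is
    shifted to align with the highest frequency, then summed.
    np.roll wraps samples around the ends, acceptable for a sketch."""
    f_ref = freqs_mhz.max()
    out = np.zeros(data.shape[1])
    for ch, f in enumerate(freqs_mhz):
        delay = K_DM * dm * (f**-2 - f_ref**-2)  # seconds, >= 0 for f < f_ref
        shift = int(round(delay / dt))           # delay in samples
        out += np.roll(data[ch], -shift)         # undo the dispersion sweep
    return out
```

A real pipeline repeats this over many trial DMs; the brute-force cost is O(n_chan * n_samp * n_DM), which is what motivates the cache-aware GPU and AVX variants described above.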

    Management Guide for the OdA Digital Object Repository

    Get PDF
    This guide details the various menus and actions for creating and managing a collection of Digital Objects in the OdA container (hereafter OdA). The OdA 2.0 container allows the creation of a website to store, manage and publish collections of Digital Objects. Notable applications built with OdA include Learning Object Repositories and Academic Virtual Museums.

    Correlating low energy impact damage with changes in modal parameters: diagnosis tools and FE validation

    Get PDF
    This paper presents a basic experimental technique and simplified FE-based models for the detection, localization and quantification of impact damage in composite beams around the barely visible impact damage (BVID) level. Detection of damage is carried out via shifts in modal parameters. Localization of damage is done with a topology optimization tool, which showed that correct damage locations can be found rather efficiently for low-level damage. The novelty of this paper is that we develop an All In One (AIO) package dedicated to impact identification by modal analysis. The damaged zones in the FE models are updated by reducing the most sensitive material property in order to improve the experimental/numerical correlation of the frequency response functions. These approximate damage models (in terms of equivalent rigidity) give us a simple degradation factor that can serve as a warning regarding structural safety.
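Shift-based detection of the kind described above can be illustrated with a minimal sketch: compare measured natural frequencies against a healthy baseline and flag modes whose relative drop exceeds a threshold. The threshold and values are illustrative, not taken from the paper:

```python
def detect_damage(baseline_hz, measured_hz, threshold=0.01):
    """Flag modes whose natural frequency dropped by more than `threshold`
    (relative). A frequency drop is the classic indicator that local
    stiffness has been reduced by damage. Returns one tuple per mode:
    (mode number, relative shift, flagged?)."""
    flags = []
    for i, (f0, f) in enumerate(zip(baseline_hz, measured_hz), start=1):
        shift = (f0 - f) / f0
        flags.append((i, shift, shift > threshold))
    return flags

# Illustrative: mode 2 has dropped 4%, well beyond a 1% threshold
print(detect_damage([100.0, 250.0], [99.9, 240.0]))
```

In practice the interesting part, as the abstract notes, is that detection alone does not localize damage; that requires the inverse step (here, topology optimization and FE model updating) that maps the observed shifts back to a damaged zone.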

    Mechanical Properties of Nanostructured Materials Determined Through Molecular Modeling Techniques

    Get PDF
    The potential for gains in material properties over conventional materials has motivated an effort to develop novel nanostructured materials for aerospace applications. These novel materials typically consist of a polymer matrix reinforced with particles on the nanometer length scale. In this study, molecular modeling is used to construct fully atomistic models of a carbon nanotube embedded in an epoxy polymer matrix. Functionalization of the nanotube, which consists of introducing direct chemical bonding between the polymer matrix and the nanotube and hence provides a load-transfer mechanism, is systematically varied. The relative effectiveness of functionalization in a nanostructured material may depend on a variety of factors related to the details of the chemical bonding and the polymer structure at the nanotube-polymer interface. The objective of this modeling is to determine what influence the details of functionalization of the carbon nanotube with the polymer matrix have on the resulting mechanical properties. By considering a range of degrees of functionalization, the structure-property relationships of these materials are examined and the mechanical properties of these models are calculated using standard techniques.
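One of the "standard techniques" for extracting an elastic modulus from atomistic simulation data is simply fitting the slope of the (assumed linear) stress-strain response. A minimal sketch, with illustrative numbers rather than results from the study:

```python
import numpy as np

def youngs_modulus(strain, stress):
    """Least-squares slope of the stress-strain curve in its linear
    (small-strain) regime; the slope is the Young's modulus in the
    same units as the stress values."""
    slope, _intercept = np.polyfit(strain, stress, 1)
    return slope

# Illustrative data: stress in GPa at three strain levels
strain = [0.0, 0.01, 0.02]
stress = [0.0, 1.0, 2.0]
print(youngs_modulus(strain, stress))  # 100 GPa for this synthetic data
```

Repeating this fit for models at each degree of functionalization is what turns the individual simulations into the structure-property relationship the abstract describes.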

    A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Get PDF
    The engineering development of the new Space Launch System (SLS) launch vehicle requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit operations. The characteristics of these spacecraft systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission conditions and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in specialized Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through Flight Software certification are an important focus of this development effort, to further ensure reliable detection of and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. NASA formed a dedicated M&FM team to address fault management early in the development lifecycle for the SLS initiative.
As part of the development of the M&FM capabilities, this team has developed a dedicated testbed that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. Additionally, the team has developed processes for implementing and validating these algorithms for concept validation and risk reduction for the SLS program. The flexibility of the Vehicle Management End-to-end Testbed (VMET) enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS. The intent of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software development infrastructure and its related testing entities. In any software development process there is inherent risk in translating concepts into software, through requirements and test cases into flight software, compounded by potential human error throughout the development lifecycle. Risk reduction is addressed by the M&FM analysis group working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detections and responses that can be tested in VMET to ensure that failures can be detected, and to confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems.
VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation, free of inherent hindrances such as meeting FSW processor scheduling constraints on the target platform (an ARINC 653 partitioned OS), resource limitations, and other factors related to integration with subsystems not directly involved with M&FM, such as telemetry packing and processing. The baseline plan for use of VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as those used by Flight Software. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure the performance of the M&FM algorithms in the FSW development and test processes.
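The detect-confirm-respond pattern behind the state machine modeling described above can be sketched in a few lines. This is a toy illustration of the concept only; the state names, persistence threshold, and logic are hypothetical, not SLS flight logic:

```python
from enum import Enum, auto

class Mode(Enum):
    NOMINAL = auto()
    FAULT_DETECTED = auto()
    SAFING = auto()
    ABORT = auto()

class FaultManager:
    """Toy mission-and-fault-management state machine: a fault is only
    confirmed after `persistence` consecutive bad sensor samples (to
    reject transients), then the response is chosen by whether the
    condition is recoverable."""
    def __init__(self, persistence=3):
        self.mode = Mode.NOMINAL
        self.persistence = persistence
        self._count = 0  # consecutive off-nominal samples seen

    def step(self, sensor_ok, recoverable=True):
        if self.mode is Mode.NOMINAL:
            self._count = 0 if sensor_ok else self._count + 1
            if self._count >= self.persistence:
                self.mode = Mode.FAULT_DETECTED
        elif self.mode is Mode.FAULT_DETECTED:
            self.mode = Mode.SAFING if recoverable else Mode.ABORT
        return self.mode
```

Writing the logic this way is what makes the VMET approach tractable: each transition can be driven by configurable nominal and off-nominal test cases, and undesired interactions (e.g. a response that itself triggers another detection) show up as reachable bad states.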

    Junctions and thin shells in general relativity using computer algebra I: The Darmois-Israel Formalism

    Full text link
    We present the GRjunction package, which allows boundary surfaces and thin shells in general relativity to be studied with a computer algebra system. Implementing the Darmois-Israel thin-shell formalism requires a careful selection of definitions and algorithms to ensure that results are generated in a straightforward way. We have used the package to correctly reproduce a wide variety of examples from the literature. We present several of these verifications as a means of demonstrating the package's capabilities. We then use GRjunction to perform a new calculation: joining two Kerr solutions with differing masses and angular momenta along a thin shell in the slow-rotation limit. Comment: Minor LaTeX error corrected. GRjunction for GRTensorII is available from http://astro.queensu.ca/~grtensor/GRjunction.htm
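For reference, the core relations the Darmois-Israel formalism rests on (in geometric units, $G = c = 1$) are the continuity of the induced metric across the shell and the Lanczos equation relating the jump in extrinsic curvature to the surface stress-energy:

```latex
\left[h_{ab}\right] \equiv h_{ab}^{+} - h_{ab}^{-} = 0,
\qquad
S_{ab} = -\frac{1}{8\pi}\left(\left[K_{ab}\right] - h_{ab}\left[K\right]\right),
```

where $h_{ab}$ is the induced metric on the shell, $K_{ab}$ its extrinsic curvature as computed from either side, $[X] = X^{+} - X^{-}$ denotes the jump across the shell, $K = h^{ab}K_{ab}$, and $S_{ab}$ is the surface stress-energy tensor. A vanishing $S_{ab}$ recovers a mere boundary surface; a nonzero jump in $K_{ab}$ is what makes the shell "thin" matter.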

    FLOSS

    Get PDF
    The main objective of the project is to spread, among the participating companies, an awareness of what FLOSS (Free/Libre/Open Source Software) can mean for their business, and to provide the knowledge needed to take full advantage of the opportunities it offers, so as to put into practice new business models based on non-proprietary software. The cluster project involved the participation of 17 companies from the ICT (Information and Communication Technology) sector. Funding: Piano del Lavoro - RA

    Gravitational waves from spinning eccentric binaries

    Full text link
    This paper introduces a new software package called CBwaves, which provides a fast and accurate computational tool to determine the gravitational waveforms produced by generic spinning binaries of neutron stars and/or black holes on eccentric orbits. This is done within the post-Newtonian (PN) framework by integrating the equations of motion and the spin precession equations, while the radiation field is determined by a simultaneous evaluation of the analytic waveforms. In applying CBwaves, various physically interesting scenarios have been investigated. In particular, we have studied the appropriateness of the adiabatic approximation, and justified that the energy balance relation is indeed insensitive to the specific form of the applied radiation reaction term. By studying eccentric binary systems it is demonstrated that circular template banks are very ineffective in identifying binaries even if they possess tiny residual orbital eccentricity. In addition, by investigating the validity of the energy balance relation we show that, contrary to general expectations, the post-Newtonian approximation should not be applied once the post-Newtonian parameter exceeds the critical value of ~0.08-0.1. Finally, by studying the early phase of the gravitational waves emitted by strongly eccentric binary systems, which could be formed e.g. in various many-body interactions in the galactic halo, we have found that they possess very specific characteristics which may be used to identify these types of binary systems. Comment: 37 pages, 18 figures, submitted to Class. Quantum Grav.
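The post-Newtonian parameter quoted above is conventionally $x = (G M \omega / c^3)^{2/3}$ for a binary of total mass $M$ and orbital angular frequency $\omega$; a quick sketch shows how to evaluate it against the ~0.08-0.1 breakdown criterion (the example masses and frequencies are illustrative, not from the paper):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def pn_parameter(total_mass_msun, f_orb_hz):
    """Post-Newtonian expansion parameter x = (G*M*omega/c^3)^(2/3)
    for total mass M (solar masses) and orbital frequency f_orb (Hz).
    The abstract's criterion says the PN approximation becomes
    unreliable once x exceeds roughly 0.08-0.1."""
    omega = 2.0 * math.pi * f_orb_hz
    return (G * total_mass_msun * M_SUN * omega / C**3) ** (2.0 / 3.0)

# Illustrative: a 2.8 solar-mass binary at 100 Hz orbital frequency
# is still below the quoted critical range.
print(pn_parameter(2.8, 100.0))
```

Since $x$ grows monotonically with frequency during inspiral, the criterion translates into a highest frequency (closest separation) up to which a PN waveform such as CBwaves' output can be trusted.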

    "She is my teacher and if it was not for her I would be dead": Exploration of rural South African community health workers' information, education and communication activities

    Get PDF
    Community health workers (CHWs) are important resources in health systems affected by the HIV/AIDS pandemic. International guidelines on task-shifting recommend that CHWs can provide diverse HIV services, ranging from HIV prevention to counselling patients for lifelong antiretroviral therapy. There is, however, little evidence on experiences with CHW delivery of these services in Africa. This qualitative study included 102 interviews that explored experiences with information, education and communication (IEC) activities provided by CHWs in rural South Africa. Semi-structured interviews were conducted with CHWs (n = 17), their clients (n = 33) and the primary caregivers of these clients (n = 30), allowing for data source triangulation. Twenty-two follow-up interviews explored emergent themes from the preliminary interviews. Despite limited formal education and training, CHWs in this study were significant providers of IEC, including the provision of generic health talks, HIV-specific information, and facilitation to support clients' entry into and maintenance in the formal health system. They often incorporated local knowledge and understanding of illness in their communication. CHWs in this study were able to bridge the lifeworlds of the community and the formal services to expedite access and adherence to local clinics and other services. As mediators between the two worlds, CHWs reinterpreted health information to make it comprehensible in their communities. With the growing formalisation of CHW programmes in South Africa and elsewhere, CHWs' important role in health service access, health promotion and health maintenance must be recognised and supported in order to maximise impact.