
    EPiK-a Workflow for Electron Tomography in Kepler.

    Scientific workflows integrate data and computing interfaces as configurable, semi-automatic graphs to solve a scientific problem. Kepler is such a software system for designing, executing, reusing, evolving, archiving and sharing scientific workflows. Electron tomography (ET) enables high-resolution views of complex cellular structures, such as cytoskeletons, organelles, viruses and chromosomes. Imaging investigations produce large datasets. For instance, in electron tomography, a 16-fold image tilt series occupies about 65 gigabytes, with each projection image comprising 4096 by 4096 pixels. When serial sections or montage techniques are used for large-field ET, the dataset is even larger, and for higher-resolution images with multiple tilt series the data size may reach the terabyte range. The demands of mass data processing and complex algorithms require the integration of diverse codes into flexible software structures. This paper describes a workflow for Electron Tomography Programs in Kepler (EPiK). The EPiK workflow embeds the tracking process of IMOD and realizes the main reconstruction algorithms, including filtered backprojection (FBP) from TxBR and iterative reconstruction methods. We have tested the three-dimensional (3D) reconstruction process using EPiK on ET data. EPiK is a potential toolkit for biology researchers, offering logical viewing, easy handling, convenient sharing and future extensibility.
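    To make the FBP step concrete, a generic 2D parallel-beam filtered backprojection can be sketched as below. This is a textbook sketch, not the EPiK/TxBR implementation; the function name and the nearest-neighbor backprojection are illustrative choices.

```python
import numpy as np

def fbp_reconstruct(sinogram, angles_deg):
    """Minimal 2D parallel-beam filtered backprojection.

    sinogram: array of shape (n_angles, n_det), one row per projection.
    Returns an n_det x n_det reconstruction.
    """
    n_angles, n_det = sinogram.shape

    # Step 1: ramp-filter each projection in the Fourier domain.
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    # Step 2: backproject, i.e. smear each filtered projection back
    # across the image grid along its viewing angle.
    mid = n_det // 2
    xs = np.arange(n_det) - mid
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))
    for row, theta in zip(filtered, np.deg2rad(angles_deg)):
        # Detector coordinate of each pixel for this viewing angle
        # (nearest-neighbor interpolation for brevity).
        t = np.round(X * np.cos(theta) + Y * np.sin(theta)).astype(int) + mid
        valid = (t >= 0) & (t < n_det)
        recon[valid] += row[t[valid]]
    return recon * np.pi / (2 * n_angles)
```

    Production codes such as TxBR additionally model curvilinear projection geometry and use better interpolation, but the filter-then-backproject structure is the same.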

    3D reconstruction of biological structures: automated procedures for alignment and reconstruction of multiple tilt series in electron tomography

    Transmission electron microscopy allows the collection of multiple views of specimens and their computerized three-dimensional reconstruction and analysis with electron tomography. Here we describe the development of methods for automated multi-tilt data acquisition, tilt-series processing and alignment, which allow assembly of electron tomographic data from a greater number of tilt series, yielding enhanced data quality and increased contrast for weakly stained structures. This scheme facilitates visualization of nanometer-scale details of fine structure in all dimensions in volumes taken from plastic-embedded biological specimens. As heavy-metal-contrasted, plastic-embedded samples are sensitive to the overall electron dose rather than to the dose rate, an optimal resampling of the reconstruction space can be achieved by accumulating lower-dose electron micrographs of the same area over a wider range of specimen orientations. The computerized multiple-tilt-series collection scheme is implemented together with automated procedures that make collection, image alignment and processing of multi-tilt tomography data a seamless process. We demonstrate high-quality reconstructions from samples of well-described biological structures, including the giant Mimivirus and clathrin-coated vesicles imaged in situ in their normal intracellular contexts. Examples are provided from cultured cells prepared by high-pressure freezing and freeze-substitution as well as by chemical fixation before epoxy resin embedding.

    SMART SECURITY MANAGEMENT IN SECURE DEVICES

    Among other threats, secure components are subjected to physical attacks whose aim is to recover the secret information they store. Most of the work carried out to protect these components generally consists in developing protections (or countermeasures) taken one by one. But this "countermeasure-centered" approach drastically decreases the performance of the chip in terms of power, speed and availability. In order to overcome this limitation, we propose a complementary approach: smart dynamic management of the whole set of countermeasures embedded in the component. Three main specifications for such management are required in a real-world application (for example, a conditional access system for Pay-TV): it has to provide capabilities for the chip to distinguish between attacks and normal use cases (without the help of a human being and in a robust but versatile way); it also has to be based on mechanisms which dynamically find a trade-off between security and performance; and all these mechanisms have to be formalized in a way which is clearly understandable by the designer. In this article, a prototype which enables such security management is described. The solution is based on a dual-processor architecture: one processor embeds a representative set of countermeasures (and mechanisms to define their parameters) and executes the application code. The second processor, on the same chip, applies a given security strategy, without requesting sensitive data from the first processor. The chosen strategy is based on fuzzy-logic reasoning to enable the designer to describe, using a fairly simple formalism, both the attack paths and the normal use cases. A proof of concept has been developed for the smart-card part of a conditional access system for Pay-TV, but it could easily be fine-tuned for other applications.
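    The fuzzy-logic trade-off between security and performance can be illustrated with a toy two-rule controller. This is a generic sketch of the reasoning style, not the article's actual rule base; the membership-function shapes, the fault-rate input and the intensity levels are all illustrative assumptions.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def countermeasure_level(fault_rate):
    """Map an observed fault rate (faults per 1000 operations) to a
    countermeasure intensity in [0, 1] using two fuzzy rules:
      IF rate is 'normal'     THEN intensity is low  (0.1)
      IF rate is 'suspicious' THEN intensity is high (0.9)
    Defuzzification: weighted average of the rule outputs.
    """
    normal = triangular(fault_rate, -5.0, 0.0, 5.0)
    suspicious = triangular(fault_rate, 2.0, 10.0, 1e9)
    total = normal + suspicious
    if total == 0.0:
        return 0.9  # far outside 'normal': treat as an attack
    return (0.1 * normal + 0.9 * suspicious) / total
```

    The point of the fuzzy formalism is that an occasional fault (a normal use case) keeps the chip near full performance, while a burst of faults smoothly raises the countermeasure intensity, with no hand-written threshold cliff.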

    Individual variation of gait parameters over a 500 m distance in people with multiple sclerosis and in healthy volunteers

    Background: we previously demonstrated the usefulness of the Deceleration Index (DI, the ratio between the walking speed – WS – over the last 100 m of the Timed 500-Meter Walk test – T500MW – and the WS of the Timed 25-Foot Walk test with a propelled start – T25FW+) to evaluate motor fatigue over a long walking distance in people with multiple sclerosis (pwMS). We also recently designed and internally validated a new gait analysis tool for pwMS (GAIMS) that can measure relevant gait characteristics beyond the sole WS, such as ataxia, asymmetry and perhaps spasticity. Aims: (i) to compare various gait characteristics between the last and the first 100 m of the T500MW in a population of pwMS and healthy volunteers (HV); (ii) to compare the ratio between the last and the first 100 m of the T500MW with the DI; and (iii) to examine their relationship with the EDSS. Methods: subjects were asked to perform the T25FW+ and the T500MW as part of a multimodal evaluation at the MS Clinic of the CHU of Liège. Their gait characteristics were measured using GAIMS. (i) Paired Student's t-tests were performed on various gait characteristics extracted during the last and first 100 m of the T500MW, with .05 as the level of significance; Spearman correlation coefficients (ρ) were calculated (ii) between these ratios and (iii) with subjects' EDSS. Results: seventy-one pwMS and 129 HV were enrolled in our study. (i) Significant differences were observed for speed-related gait characteristics between the last and first 100 m of the T500MW, but also for gait characteristics related to ataxia and precision of foot placement. (ii) A moderate positive correlation was observed between the WS ratio of the last and first 100 m of the T500MW and the DI. (iii) The correlation between the DI and the EDSS was weakly negative, while the one between the last/first 100 m ratio of the T500MW and the EDSS was moderately negative.
Conclusion: (i) as previously demonstrated, we confirm here that alongside WS, other gait features are affected by locomotor fatigue over a long walking distance; (ii) the moderate positive correlation between the DI and the last/first 100 m ratio of the T500MW indicates that these measures are not equivalent and that, alongside a long-distance walking test such as the T500MW, a short one such as the T25FW+ remains useful; (iii) the last/first 100 m ratio of the T500MW is better correlated with the EDSS and might be a better predictor of pwMS' neurologic state than the DI.
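    The two ratio measures being compared are simple quotients of walking speeds; stating them explicitly removes any ambiguity (function and parameter names are illustrative, with WS given in any consistent unit, e.g. m/s):

```python
def deceleration_index(ws_last_100m, ws_t25fw_plus):
    """DI: walking speed over the last 100 m of the T500MW divided by
    the short-distance walking speed from the T25FW+."""
    return ws_last_100m / ws_t25fw_plus

def t500mw_ratio(ws_last_100m, ws_first_100m):
    """Last/first 100 m ratio of the T500MW; values below 1 indicate
    slowing (motor fatigue) over the long walking distance."""
    return ws_last_100m / ws_first_100m
```

    The DI compares the end of the long walk against a separate short test, whereas the T500MW ratio is internal to the long walk itself, which is one reason the two measures need not agree.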

    Computational Differential Privacy for Encrypted Databases Supporting Linear Queries

    Differential privacy is a fundamental concept for protecting individual privacy in databases while enabling data analysis. Conceptually, the adversary is assumed to have no direct access to the database, and therefore encryption is not necessary. However, with the emergence of cloud computing and the on-cloud storage of vast databases potentially contributed by multiple parties, it is becoming increasingly necessary to consider the possibility of the adversary having (at least partial) access to sensitive databases. A consequence is that, to protect the online database, it is now necessary to employ encryption. The notion of differential privacy was first considered for encrypted databases at PoPETs'19, but only for a limited type of query, namely histograms. Subsequently, a new type of query, summation, was considered at CODASPY'22. These works achieve statistical differential privacy by still assuming that the adversary has no access to the encrypted database. In this paper, we argue that it is essential to assume that the adversary may eventually access the encrypted data, rendering statistical differential privacy inadequate. Therefore, the appropriate privacy notion for encrypted databases, and the one we use, is computational differential privacy, introduced by Beimel et al. at CRYPTO'08. In our work, we focus on functional encryption, an extensively studied primitive permitting some authorized computation over encrypted data. Technically, we show that any randomized functional encryption scheme that satisfies simulation-based security and differential privacy of the output can achieve computational differential privacy for multiple queries to one database. Our work also extends the summation query to a much broader range of queries, namely linear queries, by utilizing inner-product functional encryption.
Hence, we provide an instantiation for inner-product functionalities by proving its simulation soundness, and present a concrete randomized inner-product functional encryption scheme with computational differential privacy against multiple queries. In terms of efficiency, our protocol is almost as practical as the underlying inner-product functional encryption scheme. As evidence, we provide a full benchmark based on our concrete implementation for databases with up to 1,000,000 entries. Our work can be considered a step towards privacy-preserving encrypted databases supporting a wide range of query types and the involvement of multiple database owners.
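    The linear-query setting can be illustrated without the encryption layer: a linear query over a database vector x is the inner product ⟨x, q⟩, and differential privacy of the output is obtained by adding noise calibrated to the query's sensitivity. The following is a generic Laplace-mechanism sketch of that idea, not the paper's functional-encryption construction; the function name and the neighboring-database convention are assumptions.

```python
import numpy as np

def answer_linear_query(db, query, epsilon, rng=None):
    """Answer the linear query <db, query> with epsilon-DP Laplace noise.

    Assumes neighboring databases differ in one entry by at most 1,
    so the L1 sensitivity of the query is max_i |query_i|.
    """
    rng = rng or np.random.default_rng()
    sensitivity = np.max(np.abs(query))
    true_answer = float(np.dot(db, query))
    # Laplace noise with scale sensitivity / epsilon yields epsilon-DP.
    return true_answer + rng.laplace(0.0, sensitivity / epsilon)
```

    In the paper's setting the analogous noisy answer is produced under functional encryption, so that even an adversary holding the ciphertexts learns nothing beyond the (computationally) differentially private outputs.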

    SOS: An innovative secure system architecture

    'Smart On Smart' (SOS) is a project launched in 2008 and funded by the Agence Nationale pour la Recherche. This project aims at helping the partnership formed by Trusted Logic, Viaccess, LIP6 and the CEA-LETI to study an innovative secure system architecture. This architecture is based on two parts. The first one, called the host system, is in charge of the main application and processes the sensitive data. Connected to the host system, the second part, called the audit system, is strictly dedicated to the security-strategy response of the whole system. The underlying idea is that such an architecture will help to build a more secure system. For instance, it becomes possible to improve the intelligence of the security policy so as to differentiate a normal error behaviour from a suspect one. Since the whole system's security reliability is adjustable, it also becomes possible to maximise performance when nonsensitive data are processed. Non-repeatable behaviour and responses of the system under attack could also be programmed in order to counter the attacker. One of the main tasks is to define a hardware architecture corresponding to this concept, for which a safe boundary has to be established between the two systems. The software part, and in particular the way the security policy is implemented on the audit system, is also a non-obvious task, especially because the overhead due to this additional audit system has to be as low as possible in order to be cost-effective. We are building a proof of concept around a Pay-TV application. Additional hardware features are used to emulate fault-injection attacks for the demo. We are building the hardware model on an FPGA board, proving also that such an architecture fits an FPGA-based system.