20 research outputs found

    FPGA Based Random Number Generation for Cryptographic Applications

    Get PDF
    Random numbers are useful for a variety of purposes, such as generating data encryption keys, simulating and modeling complex phenomena, and selecting random samples from larger data sets. They have also been used aesthetically, for example in literature and music, and are of course ever popular for games and gambling. When discussing single numbers, a random number is one that is drawn from a set of possible values, each of which is equally probable, i.e., a uniform distribution. When discussing a sequence of random numbers, each number drawn must be statistically independent of the others. Random numbers are generated by various methods. The two types of generators used for random number generation are the pseudo-random number generator (PRNG) and the true random number generator (TRNG). The numbers generated are random because no polynomial-time algorithm can describe the relation among the different numbers of the sequence. Cryptographic applications require numbers produced by either a true random number generator (TRNG) or a cryptographically secure pseudo-random number generator (CSPRNG). The sources of randomness in a TRNG are physical phenomena such as lightning, radioactive decay, and thermal noise. The source of randomness in a CSPRNG is the algorithm on which it is based. In this project, the random numbers for cryptographic applications were generated using the Blum Blum Shub generator, a CSPRNG. It was implemented on an FPGA platform using the VHDL hardware description language, and the simulation was done and tested on Xilinx ISE 10.1i.
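The Blum Blum Shub recurrence, x_{i+1} = x_i² mod n with output taken from the low-order bit, can be sketched in software as follows. The primes and seed here are toy values for illustration only; they are not the project's parameters, and this is a Python sketch rather than the VHDL implementation:

```python
# Minimal Blum Blum Shub (BBS) sketch. Toy primes for illustration only:
# a real CSPRNG needs large primes p, q with p ≡ q ≡ 3 (mod 4) and a
# seed coprime to n = p*q.
def bbs_bits(seed, p=499, q=547, count=16):
    assert p % 4 == 3 and q % 4 == 3
    n = p * q
    x = (seed * seed) % n          # x0 = seed^2 mod n
    bits = []
    for _ in range(count):
        x = (x * x) % n            # x_{i+1} = x_i^2 mod n
        bits.append(x & 1)         # output the least-significant bit
    return bits

print(bbs_bits(2020))
```

The security of BBS rests on the hardness of factoring n; with toy primes like these the output is trivially predictable.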

    Faster 64-bit universal hashing using carry-less multiplications

    Get PDF
    Intel and AMD support the Carry-less Multiplication (CLMUL) instruction set in their x64 processors. We use CLMUL to implement an almost universal 64-bit hash family (CLHASH). We compare this new family with what might be the fastest almost universal family on x64 processors (VHASH). We find that CLHASH is at least 60% faster. We also compare CLHASH with a popular hash function designed for speed (Google's CityHash). We find that CLHASH is 40% faster than CityHash on inputs larger than 64 bytes and just as fast otherwise.
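For readers unfamiliar with CLMUL, the operation it accelerates is multiplication of polynomials over GF(2): bit i of each operand is the coefficient of x^i, and XOR replaces addition so no carries propagate. A minimal software sketch of that operation (not the CLHASH implementation itself) is:

```python
# Carry-less (GF(2) polynomial) multiplication, the operation the CLMUL
# instruction computes in hardware.
def clmul(a, b):
    result = 0
    while b:
        if b & 1:
            result ^= a     # add (XOR) the shifted copy of a
        a <<= 1
        b >>= 1
    return result

# Example: (x^2 + 1)(x + 1) = x^3 + x^2 + x + 1
print(bin(clmul(0b101, 0b11)))  # → 0b1111
```

In hardware a 64×64-bit carry-less product is a single instruction; the loop above is purely didactic.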

    Peak Transmission Rate Resilient Crosslayer Broadcast for Body Area Networks

    Get PDF
    International audience

    Streaming Data through the IoT via Actor-Based Semantic Routing Trees

    Get PDF
    The Internet of Things (IoT) enables the usage of resources at the edge of the network for various data management tasks that are traditionally executed in the cloud. However, the heterogeneity of devices and communication methods in a multi-tiered IoT environment (cloud/fog/edge) exacerbates the problem of deciding which nodes to use for processing and how to route data. In addition, neither decision can be made statically for the entire lifetime of an application, as an IoT environment is highly dynamic and nodes in the same topology can be both stationary and mobile as well as reliable and volatile. As a result of these different characteristics, an IoT data management system that spans all tiers of an IoT network cannot make the same availability assumptions for all its nodes. To address the problem of choosing ad hoc which nodes to use and include in a processing workload, we propose a networking component that uses a priori as well as ad hoc routing information from the network. Our approach, called Rime, relies on keeping track of nodes at the gateway level and exchanging routing information with other nodes in the network. By tracking nodes while the topology evolves in a geo-distributed manner, we enable efficient communication even in the case of frequent node failures. Our evaluation shows that Rime keeps communication costs and message transmissions in check, reducing unnecessary message exchange by up to 82.65%.
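As a rough illustration of gateway-level node tracking of the kind the abstract describes — recording when each node was last heard from and dropping routes to nodes that have gone silent — the following sketch may help. All names and the timeout policy are hypothetical, not Rime's actual design:

```python
import time

# Hypothetical gateway registry: tracks liveness of edge nodes so that
# volatile nodes can be excluded from routing decisions.
class GatewayRegistry:
    def __init__(self, timeout_s=30.0):
        self.timeout_s = timeout_s
        self.last_seen = {}            # node id -> last-heard timestamp

    def heard_from(self, node_id, now=None):
        self.last_seen[node_id] = time.monotonic() if now is None else now

    def live_nodes(self, now=None):
        now = time.monotonic() if now is None else now
        return [n for n, t in self.last_seen.items()
                if now - t <= self.timeout_s]
```

A real system would also propagate this liveness information to peer gateways, which is where the routing-information exchange described above comes in.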

    Monte Carlo Energy-Transport Calculations for Radiotherapy Dose Computation on a Highly Parallel Graphics Platform

    Get PDF
    Dose calculation is a central part of treatment planning. It must be 1) accurate enough that medical physicists and radiation oncologists can make decisions based on results close to reality, and 2) fast enough to allow routine use. The compromise between these two opposing requirements has given rise to several dose calculation algorithms, ranging from the most approximate and fast to the most accurate and slow. The most accurate of these algorithms is the Monte Carlo method, since it is based on fundamental physical principles. Since 2007, a new computing platform has been gaining popularity in the scientific computing community: the graphics processing unit (GPU). The hardware platform existed before 2007, and certain scientific computations were already carried out on the GPU; 2007, however, marks the arrival of the CUDA programming language, which makes it possible to disregard the graphics context when programming the GPU.
The GPU is a massively parallel computing platform adapted to data-parallel algorithms. This thesis aims to determine how to maximize the use of a GPU to speed up the execution of Monte Carlo energy-transport simulations for radiotherapy dose calculation. To answer this question, the GPUMCD platform was developed. GPUMCD implements a coupled photon-electron Monte Carlo simulation and runs entirely on the GPU. The first objective of this thesis is to evaluate this method for external radiotherapy calculations. Simple monoenergetic sources and layered phantoms are used. A comparison with the EGSnrc and DPM platforms is carried out. GPUMCD stays within a 2%-2mm gamma criterion of EGSnrc while being at least 1200× faster than EGSnrc and 250× faster than DPM. The second objective is the evaluation of the platform for brachytherapy calculations. Complex sources based on the geometry and energy spectrum of real sources are used inside a TG-43 reference geometry. Differences of less than 4% are found in comparisons against the BrachyDose platform as well as the TG-43 consensus data. The third objective targets the use of GPUMCD as the dose calculation engine for an MRI-Linac. To this end, the effect of the magnetic field on charged particles has been added to the simulation. GPUMCD was shown to stay within a 2%-2mm gamma criterion of two experiments designed to highlight the influence of the magnetic field on the dose distribution. The results suggest that the GPU is an interesting hardware platform for dose calculation through Monte Carlo simulation and that the GPUMCD software platform makes fast and accurate calculations possible.
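The 2%-2mm gamma criterion used throughout these comparisons is the standard gamma-index test for agreement between dose distributions: each reference point passes if some nearby evaluated point is close in both dose (2%) and distance (2 mm). A simplified 1D, global-normalization sketch (illustrative only, not the code used in the thesis) is:

```python
import math

# 1D gamma index with a 2%-2mm criterion. ref and ev are dose samples
# on a common grid with spacing dx (mm). Global normalization to the
# maximum reference dose; gamma <= 1 at a point means "pass".
def gamma_1d(ref, ev, dx, dose_tol=0.02, dist_tol=2.0):
    d_max = max(ref)                       # global normalization dose
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, de in enumerate(ev):
            dist = (i - j) * dx            # spatial offset in mm
            ddose = de - dr                # dose difference
            g = math.sqrt((dist / dist_tol) ** 2
                          + (ddose / (dose_tol * d_max)) ** 2)
            best = min(best, g)
        gammas.append(best)
    return gammas
```

Identical distributions yield gamma = 0 everywhere; production implementations interpolate between grid points and run in 3D, but the pass/fail geometry is the same.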

    Reliable chip design from low powered unreliable components

    Get PDF
    The pace of technological improvement in the semiconductor market is driven by Moore's Law, which enables chip transistor density to double every two years. Transistors continue to decline in cost and size, but their power density keeps rising. Continuous transistor scaling under extremely tight power constraints in modern Very Large Scale Integration (VLSI) chips can potentially negate the benefits of technology shrinking because of reliability issues. As VLSI technology scales into the nanoscale regime, fundamental physical limits are approached, and higher levels of variability, performance degradation, and higher rates of manufacturing defects are experienced. Soft errors, which traditionally affected only memories, now also degrade logic circuit reliability. A solution to these limitations is to integrate reliability assessment techniques into the Integrated Circuit (IC) design flow. This thesis investigates four aspects of reliability-driven circuit design: a) reliability estimation; b) reliability optimization; c) fault-tolerant techniques; and d) delay degradation analysis. To guide reliability-driven synthesis and optimization of combinational circuits, a highly accurate probability-based reliability estimation methodology, christened the Conditional Probabilistic Error Propagation (CPEP) algorithm, is developed to compute the impact of gate failures on the circuit output. CPEP guides the proposed rewriting-based logic optimization algorithm, which employs local transformations. The main idea behind this methodology is to replace parts of the circuit with functionally equivalent but more reliable counterparts chosen from a precomputed subset of Negation-Permutation-Negation (NPN) classes of 4-variable functions. Cut enumeration and Boolean matching, driven by the reliability-aware optimization algorithm, are used to identify the best possible replacement candidates.
Experiments on a set of MCNC benchmark circuits and 8051 microcontroller functional units indicate that the proposed framework can achieve up to 75% reduction in output error probability. On average, about 14% SER reduction is obtained at the expense of a very low area overhead of 6.57%, which results in 13.52% higher power consumption. The next contribution of the research is a novel methodology for designing fault-tolerant circuitry using error correction codes, known as the Codeword Prediction Encoder (CPE). Traditional fault-tolerant techniques analyze circuit reliability from a static point of view, neglecting dynamic errors. In the context of communication and storage, the study of novel methods for reliable data transmission over unreliable hardware is an increasing priority. The idea of the CPE is adapted from the field of forward error correction for telecommunications, focusing on both encoding aspects and error correction capabilities. The proposed Augmented Encoding solution consists of computing an augmented codeword that contains both the codeword to be transmitted on the channel and extra parity bits. A Computer Aided Design (CAD) framework known as the CPE simulator is developed, providing a unified platform that comprises a novel encoder and fault-tolerant LDPC decoders. Experiments on a set of encoders with different coding rates and different decoders indicate that the proposed framework can correct all errors under specific scenarios, achieving on average about a 1000× reduction in Soft Error Rate (SER). The last part of the research is an Inverse Gaussian Distribution (IGD) based delay model applicable to both combinational and sequential elements in sub-powered circuits. The Probability Density Function (PDF) based delay model accurately captures the delay behavior of all the basic gates in the library database.
The IGD model employs these necessary parameters, and its delay estimation accuracy is demonstrated by evaluating multiple circuits. Experimental results indicate that the IGD-based approach matches HSPICE Monte Carlo simulation results closely, with average errors of less than 1.9% for the 8-bit Ripple Carry Adder (RCA) and 1.2% for the 8-bit De-Multiplexer (DEMUX) and Multiplexer (MUX), respectively.
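The density underlying the IGD delay model is the standard inverse Gaussian (Wald) PDF, whose long right tail matches the delay behavior of gates under reduced supply voltage. The parameter names below (mean mu, shape lam) are the conventional ones, assumed here for illustration rather than taken from the thesis:

```python
import math

# Inverse Gaussian (Wald) probability density:
#   f(t) = sqrt(lam / (2*pi*t^3)) * exp(-lam*(t - mu)^2 / (2*mu^2*t))
# defined for t > 0; mu is the mean, lam the shape parameter.
def inv_gauss_pdf(t, mu, lam):
    if t <= 0:
        return 0.0
    coef = math.sqrt(lam / (2.0 * math.pi * t ** 3))
    return coef * math.exp(-lam * (t - mu) ** 2 / (2.0 * mu ** 2 * t))
```

Fitting mu and lam per gate from a handful of characterization runs is what makes a PDF-based model far cheaper than full HSPICE Monte Carlo while retaining the distribution shape.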

    Modeling and Experimental Techniques to Demonstrate Nanomanipulation With Optical Tweezers

    Get PDF
    The development of truly three-dimensional nanodevices is currently impeded by the absence of effective prototyping tools at the nanoscale. Optical trapping is well established for flexible three-dimensional manipulation of components at the microscale. However, it has so far not been demonstrated to confine nanoparticles for long enough to be useful in nanoassembly applications. Therefore, as part of this work we demonstrate new techniques that successfully extend optical trapping to nanoscale manipulation. In order to extend optical trapping to the nanoscale, we must overcome certain challenges. For the same incident beam power, the optical trapping forces acting on a nanoparticle are very weak compared with the forces acting on microscale particles. Consequently, due to Brownian motion, the nanoparticle often exits the trap in a very short period of time. We improve the performance of optical traps at the nanoscale by using closed-loop control. Furthermore, we show through laboratory experiments that control systems can localize nanoparticles to the trap for sufficient time to be useful in nanoassembly applications, under conditions where a static trap set to the same power as the controller is unable to confine a same-sized particle. Before controlled optical trapping can be demonstrated in the laboratory, key tools must first be developed. We implement Langevin dynamics simulations to model the interaction of nanoparticles with an optical trap. Physically accurate simulations provide a robust platform for testing new methods to characterize and improve the performance of optical tweezers at the nanoscale, but they depend on accurate trapping force models. Therefore, we have also developed two new laboratory-based force measurement techniques that overcome the drawbacks of conventional force measurements, which do not accurately account for the weak interaction of nanoparticles with an optical trap.
Finally, we use numerical simulations to develop new control algorithms that demonstrate significantly enhanced trapping of nanoparticles, and we implement these techniques in the laboratory. The algorithms and characterization tools developed as part of this work will allow the development of optical trapping instruments that can confine nanoparticles for longer periods of time than is currently possible for a given beam power. Furthermore, the low average power achieved by the controller makes this technique especially suitable for manipulating biological specimens, and it is also generally beneficial to nanoscale prototyping applications. Therefore, the capabilities developed as part of this work, and the technology that results from it, may enable the prototyping of three-dimensional nanodevices critically required in many applications.
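In the overdamped regime typical of particles in fluid, the Langevin dynamics such simulations are built on reduce to a simple stochastic update: drift toward the trap center plus a thermal kick each timestep. The sketch below uses illustrative parameter values (trap stiffness, particle radius, water viscosity), not those of the instrument described:

```python
import math, random

# Overdamped Langevin sketch of a nanoparticle in a harmonic optical trap:
#   dx = -(k/gamma) * x * dt + sqrt(2*D*dt) * N(0, 1),   D = kB*T/gamma.
# Parameter values are illustrative, not taken from the dissertation.
def simulate_trap(k=1e-7, radius=50e-9, steps=10000, dt=1e-5, seed=1):
    kB_T = 1.38e-23 * 300                     # thermal energy at 300 K (J)
    gamma = 6 * math.pi * 8.9e-4 * radius     # Stokes drag in water (kg/s)
    D = kB_T / gamma                          # diffusion coefficient (m^2/s)
    rng = random.Random(seed)
    x = 0.0                                   # start at the trap center
    xs = []
    for _ in range(steps):
        x += -(k / gamma) * x * dt + math.sqrt(2 * D * dt) * rng.gauss(0, 1)
        xs.append(x)
    return xs
```

At equilibrium the position variance approaches kB·T/k, so a weaker trap (smaller k) lets Brownian motion carry the particle farther — the core difficulty that closed-loop control addresses.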