
    ARM2GC: Succinct Garbled Processor for Secure Computation

    We present ARM2GC, a novel secure computation framework based on Yao's Garbled Circuit (GC) protocol and the ARM processor. It allows users to develop privacy-preserving applications in standard high-level programming languages (e.g., C) and to compile them with off-the-shelf ARM compilers (e.g., gcc-arm). The main enabler of this framework is SkipGate, an algorithm that dynamically omits the communication and encryption cost of gates whose outputs are independent of the private data. SkipGate greatly enhances the performance of ARM2GC by omitting the cost of the gates associated with the instructions of the compiled binary, which is known to both parties involved in the computation. Our evaluation on benchmark functions demonstrates that ARM2GC not only outperforms current GC frameworks that support high-level languages but also achieves efficiency comparable to the best prior solutions based on hardware description languages. Moreover, in contrast to previous high-level frameworks with domain-specific languages and customized compilers, ARM2GC relies on a standard ARM compiler, which is rigorously verified and supports programs written in standard syntax.
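    The idea behind SkipGate can be illustrated with a small sketch: propagate publicly known wire values through the circuit and garble only the gates whose outputs actually depend on private data. The following Python sketch is a minimal illustration under our own simplifying assumptions (an AND/XOR-only gate set, a toy circuit encoding, and made-up wire names); it is not the paper's implementation.

        # Illustrative sketch: wire values are True/False when publicly known,
        # or absent from `public` when they depend on private inputs.
        def skip_gate_pass(gates, public):
            """Count the gates that must actually be garbled.

            gates:  list of (out_wire, op, in_wire_a, in_wire_b) tuples
            public: dict mapping wire -> known bool (private wires absent)
            """
            garbled = 0
            for out, op, a, b in gates:
                va, vb = public.get(a), public.get(b)
                if va is not None and vb is not None:
                    # Both inputs public: evaluate locally, no garbling needed.
                    public[out] = (va and vb) if op == "AND" else (va != vb)
                elif op == "AND" and (va is False or vb is False):
                    # AND with a public 0 is publicly 0 whatever the secret input.
                    public[out] = False
                else:
                    garbled += 1  # output depends on private data: garble it
            return garbled

        # A public 0 on wire "x" lets both AND gates be skipped entirely.
        circuit = [("t", "AND", "x", "s1"),
                   ("u", "AND", "t", "s2"),
                   ("v", "XOR", "s1", "s2")]
        print(skip_gate_pass(circuit, {"x": False}))  # -> 1 (only the XOR remains)

    In ARM2GC the same effect applies at scale: the instructions of the compiled binary are public to both parties, so large parts of the emulated processor circuit can be resolved without any garbling or communication cost.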

    Design of multimedia processor based on metric computation

    Media-processing applications, such as signal processing, 2D and 3D graphics rendering, and image compression, are the dominant workloads in many embedded systems today. The real-time constraints of these media applications place taxing demands on today's processors, which must deliver performance at low cost and low power with reduced design time. To meet these challenges, a fast and efficient strategy consists of upgrading a low-cost general-purpose processor core. This approach is based on the customization of a general RISC processor core according to the requirements of the target multimedia application. Thus, if the extra cost is justified, the general-purpose processor (GPP) core can be reinforced with instruction-level coprocessors, coarse-grain dedicated hardware, ad hoc memories, or additional GPP cores. In this way the final design is tailored to the application requirements. The proposed approach consists of three main steps: the first is the analysis of the target application using efficient metrics; the second is the selection of the appropriate architecture template according to the results and recommendations of the first step; the third is the generation of the architecture. The approach has been demonstrated on various image and video algorithms, showing its feasibility.
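    As a concrete illustration of the metric-analysis step, the hypothetical Python sketch below derives a simple instruction-mix metric from a dynamic trace and maps it to an architecture template. The metric, the thresholds, and the template names are our own illustrative assumptions; the paper defines its own metrics and templates.

        # Hypothetical metric-driven template selection; all numbers are made up.
        from collections import Counter

        def recommend_template(trace):
            """Pick an architecture template from a dynamic instruction mix."""
            mix = Counter(trace)
            total = sum(mix.values())
            mac_ratio = (mix["mul"] + mix["mac"]) / total     # arithmetic intensity
            mem_ratio = (mix["load"] + mix["store"]) / total  # memory pressure
            if mac_ratio > 0.30:
                return "RISC core + instruction-level MAC coprocessor"
            if mem_ratio > 0.40:
                return "RISC core + ad hoc local memories"
            return "plain RISC core"

        trace = ["load", "mul", "mac", "mac", "add", "store", "mac", "load"]
        print(recommend_template(trace))  # -> RISC core + instruction-level MAC coprocessor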

    Optimizing the flash-RAM energy trade-off in deeply embedded systems

    Deeply embedded systems often have the tightest constraints on energy consumption, requiring that they consume tiny amounts of current and run on batteries for years. However, they typically execute code directly from flash instead of from the more energy-efficient RAM. We implement a novel compiler optimization that exploits the relative efficiency of RAM by statically moving carefully selected basic blocks from flash to RAM. Our technique uses integer linear programming with an energy cost model to select a good set of basic blocks to place into RAM, without impacting stack or data storage. We evaluate our optimization on a common ARM microcontroller and reduce average power consumption by up to 41% and energy consumption by up to 22%, at the cost of increased execution time. A case study is presented in which an application executes code and then sleeps for a period of time. For this example we show that our optimization can allow the application to run on battery for up to 32% longer. We also show that in this scenario the total application energy can be reduced even when the optimization increases the execution time of the code.
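    The block-selection problem described above can be sketched as a 0/1 knapsack: each basic block has an estimated energy saving if copied to RAM and a size cost, subject to the spare RAM budget. The Python sketch below solves this toy version by dynamic programming with made-up block data; the paper's actual formulation is an integer linear program driven by an energy cost model.

        # Toy 0/1 knapsack stand-in for the ILP: pick basic blocks to place in
        # RAM so the estimated energy saving is maximal within the RAM budget.
        def select_blocks(blocks, ram_budget):
            """blocks: list of (name, size_bytes, saving); returns (saving, names)."""
            best = {0: (0.0, [])}  # bytes used -> (total saving, chosen names)
            for name, size, saving in blocks:
                # Iterate over a snapshot so each block is placed at most once.
                for used, (total, chosen) in list(best.items()):
                    new_used = used + size
                    if new_used > ram_budget:
                        continue
                    if total + saving > best.get(new_used, (-1.0, []))[0]:
                        best[new_used] = (total + saving, chosen + [name])
            return max(best.values(), key=lambda entry: entry[0])

        blocks = [("bb3", 120, 9.5), ("bb7", 64, 6.1), ("bb9", 200, 11.0)]
        print(select_blocks(blocks, ram_budget=256))  # -> (15.6, ['bb3', 'bb7'])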

    Micro-Anthropic Principle for Quantum theory

    Probabilistic models (developed by workers such as Boltzmann, on foundations due to pioneers such as Bayes) were commonly regarded merely as approximations to a deterministic reality before the roles were reversed by the quantum revolution (under the leadership of Heisenberg and Dirac), whereby it was the deterministic description that was reduced to the status of an approximation, while the role of the observer became particularly prominent. The concomitant problem of the lack of objectivity in the original Copenhagen interpretation has not been satisfactorily resolved in newer approaches of the kind pioneered by Everett. The deficiency of such interpretations is attributable to a failure to allow for the anthropic aspect of the problem, meaning a priori uncertainty about the identity of the observer. The required reconciliation of subjectivity with objectivity is achieved here by distinguishing the concept of an observer from that of a perceptor, whose chances of identification with a particular observer need to be prescribed by a suitable anthropic principle. It is proposed that this should be done by an entropy ansatz, according to which the relevant micro-anthropic weighting is taken to be proportional to the logarithm of the relevant number of Everett-type branch-channels. (29 pages, LaTeX, 1 figure. Contribution to 'Universe or Multiverse?', ed. B. J. Carr, for Cambridge U.)
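    The closing entropy ansatz can be stated compactly. In the LaTeX sketch below, the weight symbol P_i and the branch-channel count N_i are our own labels for the quantities named in the abstract, not the paper's notation:

        P_i \;\propto\; \ln N_i ,

    where P_i is the micro-anthropic weight attached to the chance that the perceptor is identified with observer i, and N_i is the number of Everett-type branch-channels available to that observer.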