19 research outputs found

    Teaching Your Wireless Card New Tricks: Smartphone Performance and Security Enhancements Through Wi-Fi Firmware Modifications

    Smartphones come with a variety of sensors and communication interfaces, which make them perfect candidates for mobile communication testbeds. Nevertheless, proprietary firmware prevents us from accessing the full capabilities of the underlying hardware platform, which impedes innovation. Focusing on FullMAC Wi-Fi chips, we present Nexmon, a C-based firmware modification framework. It gives access to raw Wi-Fi frames and advanced capabilities that we found by reverse engineering chips and their firmware. As firmware modifications pose security risks, we discuss how to secure firmware handling without impeding experimentation on Wi-Fi chips. To present and evaluate our findings in the field, we developed the following applications. We start by presenting a ping-offloading application that handles ping requests in the firmware instead of the operating system. It significantly reduces energy consumption and processing delays. Then, we present a software-defined wireless networking application that enhances scalable video streaming by setting flow-based requirements on physical-layer parameters. As a security application, we present a reactive Wi-Fi jammer that analyses incoming frames during reception and transmits arbitrary jamming waveforms by operating Wi-Fi chips as software-defined radios (SDRs). We further introduce an acknowledging jammer to ensure the flow of non-targeted frames and an adaptive power-control jammer to adjust transmission powers based on measured jamming success. Additionally, we discovered how to extract channel state information (CSI) on a per-frame basis. Using both SDR and CSI-extraction capabilities, we present a physical-layer covert channel. It hides covert symbols in phase changes of selected OFDM subcarriers. These manipulations can be extracted from CSI measurements at a receiver. To ease the analysis of firmware binaries, we created a debugging application that supports single stepping and runs as a firmware patch on the Wi-Fi chip. We published the source code of our framework and our applications to ensure reproducibility of our results and to enable other researchers to extend our work. Our framework and the applications emphasize the need for freely modifiable firmware and detailed hardware documentation to create novel and exciting applications on commercial off-the-shelf devices.
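
The covert-channel idea lends itself to a compact illustration. The following numpy sketch shows only the core mechanism; the subcarrier indices, phase shift, and flat-channel assumption are illustrative, not Nexmon's actual parameters or firmware code.

```python
import numpy as np

# Minimal sketch: hide one covert bit per selected OFDM subcarrier by
# rotating its phase; the receiver recovers the bits by comparing the CSI
# measured for the manipulated frame against a reference CSI estimate.

N_SUBCARRIERS = 64
COVERT_SC = [7, 21, 43, 57]      # subcarriers chosen to carry covert bits (assumed)
PHASE_SHIFT = np.pi / 4          # rotation encoding a '1' bit (assumed)

def embed(symbols: np.ndarray, bits: list[int]) -> np.ndarray:
    """Rotate the phase of selected subcarriers to encode covert bits."""
    out = symbols.copy()
    for sc, bit in zip(COVERT_SC, bits):
        if bit:
            out[sc] *= np.exp(1j * PHASE_SHIFT)
    return out

def extract(csi_ref: np.ndarray, csi_rx: np.ndarray) -> list[int]:
    """Decide each covert bit from the per-subcarrier phase difference."""
    dphi = np.angle(csi_rx[COVERT_SC] * np.conj(csi_ref[COVERT_SC]))
    return [int(abs(d) > PHASE_SHIFT / 2) for d in dphi]

# Toy end-to-end check over an ideal (flat) channel.
frame = np.ones(N_SUBCARRIERS, dtype=complex)   # reference OFDM symbol
bits = [1, 0, 1, 1]
rx = embed(frame, bits)                          # "CSI" seen at the receiver
assert extract(frame, rx) == bits
```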

    Validation and evaluation of the DNDC model to simulate soil water content, mineral N and N2O emission in the North China Plain

    Using measured datasets (various soil properties, the soil water content, daily N2O emissions, and different crop parameters) from a multi-factorial field experiment (N fertilisation, irrigation, and straw removal) in the years 1999-2002 on the experimental site Dong Bei Wang (DBW) in the North China Plain (NCP), the ability of the process-oriented model DNDC (DeNitrification-DeComposition) to simulate soil processes, and especially N2O trace gas emissions, was tested. The soil is classified as "calcaric cambisol" (16 % clay content), while the site itself is further characterised by a continental monsoon climate. The central hypothesis of this work was that thorough testing of the model (using a considerable range of different datasets) would allow the identification of shortcomings or discrepancies in the model, and that, given the linear succession of model calculation steps, the model calculation could be improved step by step, starting with improvements of the initial calculation steps before continuing with the following ones. Owing to the increase in its atmospheric concentration, a lifetime of 100 to 150 years per molecule, and a global warming potential 32 times that of a CO2 molecule, N2O is estimated to account for 7.9 % of the global warming potential. 70-90 % of anthropogenic N2O emissions are thought to originate from agriculture. The formation of nitrous oxide depends on the availability of reactive nitrogen and is therefore mainly influenced by the N fertilisation rate, fertiliser type, and application timing and method. China, and in particular its main cropping area, the NCP, is expected to contribute considerably to anthropogenic N2O emissions. The DNDC model consists of two compartments: the first calculates soil temperature, moisture, pH, redox potential, and substrate concentration profiles from climate, soil, vegetation, and anthropogenic activity datasets; the second calculates NO, N2O, CH4, and NH3 fluxes. In accordance with data availability, the simulations of the soil water content, the mineral nitrogen concentration, and the N2O fluxes were investigated. An automated parameter optimisation (using the software UCODE_2005) and programmed changes to the source code were conducted to improve the model simulations. As a result, neither the automated parameter optimisations nor the programmed changes were able to improve the unsatisfactory default simulations of the DNDC model. The results of the cascade model, employed by the DNDC model to simulate soil water dynamics, suggest that conceptual errors exist in the model calculation. The results of the mineral nitrogen and N2O emission simulations likewise suggest shortcomings in the model calculation. The best agreement between measured and simulated total cumulative N2O fluxes was achieved using an adapted (90 cm soil depth, adjusted SOC fractioning, and added atmospheric N deposition) default model version, despite unsatisfactory simulations of soil water content, mineral nitrogen, and daily N2O fluxes. Thus, in conclusion, the investigated DNDC model version appears to be able to give an approximation of seasonal N2O fluxes, without being able to simulate the underlying processes accurately in detail.
Therefore, caution is suggested when modelling sites on the process level. Nevertheless, the DNDC model may remain useful for approximating seasonal N2O emission rates in hypothetical scenarios and for estimating regional N2O emissions.
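
For readers unfamiliar with the cascade approach criticised above, the following sketch illustrates the general tipping-bucket concept that such soil-water routines use. All layer parameters are illustrative, and the code is an assumed simplification, not DNDC's actual source.

```python
# Minimal sketch of a tipping-bucket cascade soil-water routine: each layer
# fills to field capacity and passes the surplus to the layer below.

def cascade_step(theta, rain_mm, et_mm, layers_mm, fc, wp):
    """Advance volumetric water content theta (one value per layer) by one
    day: fill each layer up to field capacity fc, drain the excess downward,
    then remove evapotranspiration from the top layer (limited by wilting
    point wp)."""
    water = rain_mm
    for i, depth in enumerate(layers_mm):
        capacity = (fc[i] - theta[i]) * depth      # mm this layer can absorb
        absorbed = min(water, max(capacity, 0.0))
        theta[i] += absorbed / depth
        water -= absorbed                          # surplus cascades downward
    theta[0] = max(theta[0] - et_mm / layers_mm[0], wp[0])
    return theta, water                            # water = deep drainage (mm)

theta = [0.25, 0.28, 0.30]                         # initial water content
theta, drainage = cascade_step(theta, rain_mm=12.0, et_mm=3.5,
                               layers_mm=[100, 200, 300],
                               fc=[0.32, 0.34, 0.35], wp=[0.12, 0.13, 0.14])
```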

    Quantum Advantage for All

    We show that the algorithmic complexity of any classical algorithm written in a Turing-complete programming language polynomially bounds the number of quantum bits that are required to run and even symbolically execute the algorithm on a quantum computer. In particular, we show that any classical algorithm A that runs in O(f(n)) time and O(g(n)) space requires no more than O(f(n)·g(n)) quantum bits to execute, even symbolically, on a quantum computer. With O(1) ≤ O(g(n)) ≤ O(f(n)) for all n, the quantum bits required to execute A may therefore not exceed O(f(n)^2) and may come down to O(f(n)) if memory consumption by A is bounded by a constant. Our construction works by encoding symbolic execution of machine code in a finite state machine (FSM) over the satisfiability-modulo-theories (SMT) theory of bitvectors, for modeling CPU registers, and arrays of bitvectors, for modeling main memory. The FSM is linear in the size of the code, independent of execution time and space, and represents the reachable machine states for any given input. The FSM may be explored by bounded model checkers using SMT and SAT solvers as backends. However, for the purpose of this paper, we focus on quantum computing by unrolling and bit-blasting the FSM into (1) satisfiability-preserving quadratic unconstrained binary optimization (QUBO) models targeting adiabatic forms of quantum computing such as quantum annealing, and (2) semantics-preserving quantum circuits (QCs) targeting gate-model quantum computers. With our compact QUBOs, real quantum annealers can now execute simple but real code even symbolically, yet only with potential but no guarantee for exponential speedup, and with our QCs as oracles, Grover's algorithm applies to symbolic execution of arbitrary code, guaranteeing at least in theory a quadratic speedup.
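
To make the bit-blasting step concrete, the following toy sketch encodes a single logic constraint, z = x AND y, as a satisfiability-preserving QUBO penalty and brute-force checks that its ground states are exactly the gate's truth table. This is a standard textbook gadget, not the paper's FSM-to-QUBO compiler.

```python
from itertools import product

# QUBO penalty for z = x AND y: x*y - 2*x*z - 2*y*z + 3*z is zero exactly
# on satisfying assignments and positive otherwise, so a quantum annealer
# minimizing it "executes" the gate.

Q = {('x', 'y'): 1, ('x', 'z'): -2, ('y', 'z'): -2, ('z', 'z'): 3}

def energy(assign):
    """Evaluate the QUBO energy for a 0/1 variable assignment."""
    return sum(c * assign[a] * assign[b] for (a, b), c in Q.items())

for x, y, z in product((0, 1), repeat=3):
    e = energy({'x': x, 'y': y, 'z': z})
    # Ground states (energy 0) coincide exactly with AND's truth table.
    assert (e == 0) == (z == (x & y))
```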

    Customizing the Computation Capabilities of Microprocessors.

    Designers of microprocessor-based systems must constantly improve performance and increase computational efficiency in their designs to create value. To this end, it is increasingly common to see computation accelerators in general-purpose processor designs. Computation accelerators collapse portions of an application's dataflow graph, reducing the critical path of computations, easing the burden on processor resources, and reducing energy consumption in systems. There are many problems associated with adding accelerators to microprocessors, though. Design of accelerators, architectural integration, and software support all present major challenges. This dissertation tackles these challenges in the context of accelerators targeting acyclic and cyclic patterns of computation. First, a technique to identify critical computation subgraphs within an application set is presented. This technique is hardware-cognizant and effectively generates a set of instruction set extensions given a domain of target applications. Next, several general-purpose accelerator structures are quantitatively designed using critical subgraph analysis for a broad application set. The next challenge is architectural integration of accelerators. Traditionally, software invokes accelerators by statically encoding new instructions into the application binary. This is incredibly costly, though, requiring many portions of hardware and software to be redesigned. This dissertation develops strategies to utilize accelerators without changing the instruction set. In the proposed approach, the microarchitecture translates applications at run-time, replacing computation subgraphs with microcode to utilize accelerators. We explore the tradeoffs in performing difficult aspects of the translation at compile-time, while retaining run-time replacement. This culminates in a simple microarchitectural interface that supports a plug-and-play model for integrating accelerators into a pre-designed microprocessor. Software support is the last challenge in dealing with computation accelerators. The primary issue is difficulty in generating high-quality code utilizing accelerators. Hand-written assembly code is standard in industry, and if compiler support does exist, simple greedy algorithms are common. In this work, we investigate more thorough techniques for compiling for computation accelerators. Where greedy heuristics only explore one possible solution, the techniques in this dissertation explore the entire design space, when possible. Intelligent pruning methods ensure that compilation is both tractable and scalable.
    Ph.D., Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/57633/2/ntclark_1.pd
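
As an illustration of the first step, identifying candidate computation subgraphs, the following toy sketch enumerates small connected operation groups in a dataflow graph and greedily picks the largest. It is a simplification under assumed constraints: real instruction-set-extension identification must also check convexity and the accelerator's input/output port limits, and the graph here is invented.

```python
from itertools import combinations

# Toy candidate-subgraph identification on a dataflow DAG: enumerate small
# connected operation groups that an accelerator could collapse into one
# custom instruction, then greedily pick the largest one.

edges = {('a', 'b'), ('b', 'd'), ('c', 'd'), ('d', 'e')}   # invented DAG
nodes = {n for e in edges for n in e}

def connected(group):
    """Check that the node group forms one connected region of the DAG
    (traversing edges in both directions)."""
    seen, todo = set(), [next(iter(group))]
    while todo:
        n = todo.pop()
        if n in seen:
            continue
        seen.add(n)
        todo += [v for u, v in edges if u == n and v in group]
        todo += [u for u, v in edges if v == n and u in group]
    return seen == set(group)

candidates = [set(g) for k in (2, 3)
              for g in combinations(sorted(nodes), k) if connected(set(g))]
best = max(candidates, key=len)   # greedy pick: collapse the largest group
print(best)
```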

    Object tracking using a many-core embedded system

    Object localization and tracking is essential for many practical applications, such as human-computer interaction, security and surveillance, robot competitions, and Industry 4.0. Because of the large amount of data present in an image, and the algorithmic complexity involved, this task can be computationally demanding, especially for traditional embedded systems, given their processing and storage limitations. This calls for investigation and experimentation with new approaches, such as emergent heterogeneous embedded systems, which promise higher performance without compromising energy efficiency. This work explores several real-time color-based object tracking techniques, applied to images supplied by an RGB-D sensor attached to different embedded platforms. The main motivation was to explore a heterogeneous Parallella board with a 16-core Epiphany coprocessor to reduce image processing time. Another goal was to confront this platform with more conventional embedded systems, namely the popular Raspberry Pi family. In this regard, several processing options were pursued, from low-level implementations specially tailored to the Parallella, to higher-level multi-platform approaches. The results achieved allow us to conclude that the programming effort required to efficiently use the Epiphany coprocessor is considerable. Also, for the selected case study, the performance attained was below that offered by simpler approaches running on quad-core Raspberry Pi boards.
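
As a flavor of the color-based tracking techniques explored, here is a minimal sketch using OpenCV: threshold a frame in HSV space and take the centroid of the matching pixels as the object position. The HSV bounds are illustrative values for a red object, not the thesis's tuned parameters or its Epiphany implementation.

```python
import cv2
import numpy as np

# Illustrative HSV range for a red object (assumed, not from the thesis).
LOWER = np.array([0, 120, 70])
UPPER = np.array([10, 255, 255])

def track(frame_bgr):
    """Return the (x, y) centroid of pixels inside the HSV color range,
    or None if the target is not visible in this frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    m = cv2.moments(mask)
    if m['m00'] == 0:
        return None
    return m['m10'] / m['m00'], m['m01'] / m['m00']

cap = cv2.VideoCapture(0)          # any BGR frame source works here
ok, frame = cap.read()
if ok:
    print(track(frame))
cap.release()
```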

    Cluster Framework for Internet of People, Things and Services


    Graphics Programming in Icon

    This book was originally published by Peer-to-Peer Communications. It is out of print and the rights have reverted to the authors, who hereby place it in the public domain.

    POCO. Ein portables System zur Generierung portabler Compiler [POCO: A Portable System for Generating Portable Compilers]


    Simulation assisted performance optimization of large-scale multiparameter technical systems

    During the past two decades, the role of dynamic process simulation in the research and development of process and control solutions has grown tremendously. As simulation-assisted working practices have become more and more popular, the accuracy requirements placed on simulation results have also tightened. Improving the accuracy of complex, plant-wide models via parameter tuning necessitates practical, scalable methods and tools operating on the correct level of abstraction. In modern integrated process plants, it is not only the performance of individual controllers but also their interactions that determine the overall performance of large-scale control systems. In practice, however, it has become customary to split large-scale problems into smaller pieces and to use traditional analytical control engineering approaches, which inevitably ends in suboptimal solutions. The performance optimization problems related to large control systems and to plant-wide process models are essentially connected in the context of new simulation-assisted process and control design practices: the accuracy of the model obtained with data-based parameter tuning determines the quality of the simulation-assisted controller tuning results. In this doctoral thesis, both problems are formulated in the same framework, depicted in the title of the thesis. To solve the optimization problem, a novel method called Iterative Regression Tuning (IRT), applying numerical optimization and multivariate regression, is presented. The IRT method has been designed especially for large-scale systems and allows the incorporation of domain expertise into the optimization goals. The thesis introduces different variations on the IRT method, technical details related to their application, and various use cases of the algorithm. The simulation-assisted use case is presented through a number of application examples of control performance and model accuracy optimization.
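
The IRT loop can be illustrated compactly. The following sketch is a minimal, assumed rendering of the idea (perturb the tunable parameters around the current operating point, simulate, fit a local multivariate regression, and step toward the goal); the quadratic `simulate` function is a stand-in for a plant-wide simulation model, not part of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(p):
    """Stand-in for the simulator: a quadratic cost with a known optimum."""
    return np.sum((p - np.array([2.0, -1.0, 0.5])) ** 2)

p = np.zeros(3)                                   # initial parameter vector
for _ in range(40):
    dP = rng.normal(scale=0.1, size=(16, p.size))  # local perturbations
    y = np.array([simulate(p + d) for d in dP])    # simulate each variant
    # Multivariate regression: fit the local slope of cost vs. parameters.
    grad, *_ = np.linalg.lstsq(dP, y - simulate(p), rcond=None)
    p -= 0.2 * grad                                # step against the slope
print(np.round(p, 2))                              # approaches [2, -1, 0.5]
```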