
    Evaluation of Anonymized ONS Queries

    The Electronic Product Code (EPC) is the basis of a pervasive infrastructure for the automatic identification of objects in supply chain applications (e.g., pharmaceutical or military applications). This infrastructure relies on (1) Radio Frequency Identification (RFID) technology to tag objects in motion and (2) distributed services providing information about objects via the Internet. A lookup service, called the Object Name Service (ONS) and based on the Domain Name System (DNS), can be publicly accessed by EPC applications looking for information associated with tagged objects. Privacy issues may affect corporate infrastructures based on EPC technologies if their lookup service is not properly protected. A possible solution to mitigate these issues is the use of online anonymity. We present an evaluation experiment that compares the use of Tor (The second-generation Onion Router) on a global ONS/DNS setup with respect to benefits, limitations, and latency.
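    As a hedged illustration of the lookup being anonymized, the sketch below shows the usual ONS step of mapping an EPC SGTIN URN to the DNS domain name that is then queried; routing that DNS query through a Tor SOCKS proxy is what the evaluation measures. The onsepc.com root and field ordering follow the ONS 1.0 convention and are assumptions here, not code from the paper.

    # Minimal sketch: convert an EPC SGTIN URN into the ONS domain name that
    # would be resolved via DNS (e.g., with a NAPTR query), possibly through
    # Tor. The onsepc.com root and field order follow the ONS 1.0 convention;
    # both are assumptions, not normative details from the paper.

    def epc_to_ons_domain(epc_urn: str, ons_root: str = "onsepc.com") -> str:
        parts = epc_urn.split(":")            # ["urn", "epc", "id", "sgtin", "0614141.000024.400"]
        if parts[:3] != ["urn", "epc", "id"]:
            raise ValueError("not an EPC identity URN")
        scheme = parts[3]                     # e.g. "sgtin"
        fields = parts[4].split(".")[:-1]     # drop the item's serial number
        fields.reverse()                      # least significant field first
        return ".".join(fields + [scheme, "id", ons_root])

    if __name__ == "__main__":
        print(epc_to_ons_domain("urn:epc:id:sgtin:0614141.000024.400"))
        # -> 000024.0614141.sgtin.id.onsepc.com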

    A Scalable and Adaptive Network on Chip for Many-Core Architectures

    In this work, a scalable network on chip (NoC) for future many-core architectures is proposed and investigated. It supports different QoS mechanisms to ensure predictable communication. Self-optimization is introduced to adapt the energy footprint and the performance of the network to the communication requirements. A fault-tolerance concept makes it possible to deal with permanent errors. Moreover, a template-based automated evaluation and design methodology and a synthesis flow for NoCs are introduced.
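    As a point of reference for the routing layer that such QoS and fault-tolerance mechanisms build on, the sketch below shows deterministic XY routing on a 2D mesh, a common NoC baseline; it is purely illustrative and does not reproduce the adaptive or fault-tolerant routing proposed in this work.

    # Minimal sketch: deterministic XY routing on a 2D mesh NoC, a common
    # baseline underneath QoS and fault-tolerance mechanisms. Illustrative
    # only; not the routing scheme proposed in the work above.

    def xy_route(src, dst):
        """Return the (x, y) hops from src to dst, traversing X first, then Y."""
        x, y = src
        dx, dy = dst
        path = [(x, y)]
        while x != dx:                 # move along the X dimension until aligned
            x += 1 if dx > x else -1
            path.append((x, y))
        while y != dy:                 # then move along the Y dimension
            y += 1 if dy > y else -1
            path.append((x, y))
        return path

    if __name__ == "__main__":
        print(xy_route((0, 0), (2, 3)))   # hops of a packet from router (0,0) to (2,3)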

    Modeling and Implementation of 5G Edge Caching over Satellite

    The fifth generation (5G) wireless networks have to deal with high data rate and stringent latency requirements due to the massive influx of connected devices and data-hungry applications. Edge caching is a promising technique to overcome these challenges by prefetching content closer to the end users in the edge node's local storage. In this paper, we analyze the performance of edge caching in 5G networks with the aid of satellite communication systems. Firstly, we investigate satellite-aided edge caching systems in two promising use cases: a) dense urban areas, and b) sparsely populated regions, e.g., rural areas. Secondly, we study the effectiveness of satellite systems via the proposed satellite-aided caching algorithm, which can be used in three configurations: i) mono-beam satellite, ii) multi-beam satellite, and iii) hybrid mode. Thirdly, the proposed caching algorithm is evaluated using both empirical Zipf-distributed data and the more realistic MovieLens dataset. Finally, the proposed caching scheme is implemented and tested in our demonstrators, which allow real-time analysis of the cache hit ratio and the associated costs.
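    As a hedged sketch of the Zipf-based part of such an evaluation, the snippet below estimates the cache hit ratio when an edge node prefetches the most popular items and requests follow a Zipf popularity law; catalogue size, cache size and the Zipf exponent are illustrative assumptions, not values from the paper.

    # Minimal sketch: expected cache hit ratio when the cache_size most popular
    # items are prefetched at the edge and requests follow a Zipf distribution.
    # All parameters below are illustrative assumptions.
    import numpy as np

    def zipf_hit_ratio(num_contents: int, cache_size: int, alpha: float) -> float:
        ranks = np.arange(1, num_contents + 1)
        popularity = ranks ** (-alpha)
        popularity /= popularity.sum()       # normalized Zipf popularity
        # Caching the head of the popularity distribution maximizes the hit
        # ratio under independent requests.
        return float(popularity[:cache_size].sum())

    if __name__ == "__main__":
        print(f"hit ratio: {zipf_hit_ratio(10_000, 500, 0.8):.3f}")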

    Energy and Load Based Routing Protocol for Mobile Ad-Hoc Networks

    A Mobile Ad-hoc Network (MANET) is an infrastructure-less network of wireless mobile nodes. MANETs consist of quickly deployable, independent, self-configuring nodes with no centralized administration; they have no fixed topology and only limited energy and computing resources. Jitter, a small random variation in timing, is widely used between periodic transmissions of control messages in wireless communication protocols. It is especially important during route discovery, where adjacent nodes may otherwise have to broadcast concurrently; adding jitter allows a protocol to avoid simultaneous packet transmissions over the same channel by neighbouring nodes. In AODV, jitter, i.e., a small delay applied while flooding a control message, is used during route discovery to avoid simultaneous transmissions by neighbouring nodes, which could result in collisions between these packets. The proposed energy- and load-based protocol (ENL-AODV) introduces energy and load factors into the calculation of this jitter when forwarding route requests (RREQ), so that the selected path has enough energy to transfer the data packets. As the simulation results show, ENL-AODV improves the efficiency of ad-hoc networks, increasing packet delivery ratio, throughput and network lifetime while decreasing average end-to-end delay.
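    To make the jitter modification concrete, the sketch below shows one way an energy and load factor can be folded into the RREQ forwarding delay so that energy-rich, lightly loaded nodes rebroadcast first; the weighting is an illustrative assumption, not the exact ENL-AODV formula.

    # Minimal sketch: energy- and load-weighted RREQ forwarding jitter. Plain
    # AODV draws the delay uniformly from [0, MAX_JITTER]; this variant biases
    # it so nodes with more residual energy and a lighter queue forward first.
    # The 0.5/0.5 weighting and the 0.9 scaling are illustrative assumptions.
    import random

    MAX_JITTER = 0.010   # 10 ms upper bound on the broadcast jitter

    def enl_jitter(residual_energy: float, queue_load: float) -> float:
        """Both inputs are normalized to [0, 1]."""
        fitness = 0.5 * residual_energy + 0.5 * (1.0 - queue_load)
        # Higher fitness -> smaller expected delay -> the node rebroadcasts the
        # RREQ earlier, so the discovered route favours such nodes.
        return random.uniform(0.0, MAX_JITTER) * (1.0 - 0.9 * fitness)

    if __name__ == "__main__":
        print(f"{enl_jitter(residual_energy=0.9, queue_load=0.2) * 1000:.2f} ms")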

    Performance tuning and cost discovery of mobile web-based applications

    When considering the addition of a mobile presentation channel to an existing web-based application, project managers should know how the mobile channel's characteristics will impact the user experience and the cost of using the application, even before development begins. The PETTICOAT (Performance Tuning and Cost Discovery of Mobile Web-based Applications) approach presented here provides decision-makers with indicators of the economic feasibility of mobile channel development. In a nutshell, it involves analysing interaction patterns on the existing stationary channel, identifying key business processes among them, measuring the time and data volume incurred in their execution, and then simulating how the same interaction patterns would run when subjected to the frame conditions of a mobile channel. The simulation yields time and volume projections for those interaction patterns, which allow us to estimate the costs incurred by executing certain business processes on different mobile channels.
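    The core of that simulation step can be pictured with the small sketch below, which projects execution time and data cost for a measured interaction pattern under the frame conditions of a mobile channel (bandwidth, round-trip latency, per-megabyte tariff); all figures and field names are placeholder assumptions, not values from the PETTICOAT case study.

    # Minimal sketch: project time and cost of an interaction pattern measured
    # on the stationary channel when replayed under mobile frame conditions.
    # All names and numbers are placeholder assumptions.
    from dataclasses import dataclass

    @dataclass
    class Interaction:
        requests: int         # HTTP round trips in the pattern
        payload_bytes: int    # total transferred data volume

    @dataclass
    class MobileChannel:
        bandwidth_bps: float  # usable bandwidth in bit/s
        rtt_s: float          # round-trip latency per request in seconds
        cost_per_mb: float    # tariff in currency units per megabyte

    def project(pattern: Interaction, channel: MobileChannel):
        transfer_time = pattern.payload_bytes * 8 / channel.bandwidth_bps
        latency_time = pattern.requests * channel.rtt_s
        cost = pattern.payload_bytes / 1e6 * channel.cost_per_mb
        return transfer_time + latency_time, cost

    if __name__ == "__main__":
        t, c = project(Interaction(requests=12, payload_bytes=450_000),
                       MobileChannel(bandwidth_bps=2e6, rtt_s=0.15, cost_per_mb=0.02))
        print(f"projected time {t:.2f} s, data cost {c:.3f} per execution")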

    Resource-aware Programming in a High-level Language - Improved performance with manageable effort on clustered MPSoCs

    Until about 2001, Moore's and Dennard's laws meant that improved CPUs roughly doubled performance every 18 months. Today, concurrency is the dominant means of acceleration, from supercomputers down to mobile devices. However, newer phenomena such as "dark silicon" increasingly prevent further speedups through hardware alone. To gain additional performance, software must become more aware of its hardware resources. Connected to this phenomenon is increasingly heterogeneous hardware: supercomputers integrate accelerators such as GPUs, and mobile SoCs (e.g., smartphones) integrate ever more capabilities. Exploiting specialized hardware is a well-known way to reduce energy consumption, another important aspect that must be weighed against raw speed; supercomputers, for example, are also ranked by performance per watt. Today, low-level systems programmers are used to reasoning about hardware, whereas the typical high-level programmer prefers to abstract from the platform as far as possible (e.g., the cloud). "High-level" does not mean that hardware is irrelevant, only that it can be abstracted: if you develop a Java application for Android, the battery may still be an important concern. At some point, however, high-level languages must also become resource-aware in order to improve performance or energy consumption. I have worked on these problems within the Transregio "Invasive Computing". In this dissertation, I present a framework that makes high-level-language applications resource-aware in order to improve performance, for example through higher efficiency or faster execution for the system as a whole. A central idea is that applications do not optimize themselves; instead, they pass all relevant information to the operating system, which has a global view and makes the decisions about resources. We call this process "invasion". The task of the application is to adapt to these decisions, not to make them itself. The challenge is to define a language in which applications communicate resource constraints and performance information. Such a language must be expressive enough for complex information, extensible to new resource types, and convenient for the programmer. The central contributions of this dissertation are: a theoretical model of resource management that captures the essence of the resource-aware framework, justifies the correctness of the operating system's decisions with respect to an application's constraints, and proves my theses on efficiency and speedup in theory; a framework and compilation path for resource-aware programming in the high-level language X10, evaluated with applications from high-performance computing, for which a speedup of 5x was measured; and a memory consistency model for the X10 programming language, a necessary step towards a formal semantics linking the theoretical model with the concrete implementation. In summary, I show that resource-aware programming in high-level languages on future many-core architectures is feasible with manageable effort and improves performance.
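    The division of labour described above, where the application states constraints and adapts while the system decides, can be sketched as follows; the ResourceManager class and its policy are hypothetical stand-ins written for illustration, whereas the dissertation's framework targets X10.

    # Minimal sketch of the "invasion" pattern: the application only states its
    # resource constraints and adapts its parallelism to whatever it is granted;
    # a system-wide manager makes the actual decision. ResourceManager and its
    # policy are hypothetical; the real framework targets X10, not Python.
    from concurrent.futures import ThreadPoolExecutor

    class ResourceManager:
        """Toy global manager that grants cores within the requested bounds."""
        def __init__(self, total_cores: int):
            self.free_cores = total_cores

        def invade(self, min_cores: int, max_cores: int) -> int:
            granted = max(min_cores, min(max_cores, self.free_cores))
            self.free_cores -= granted
            return granted

        def retreat(self, cores: int) -> None:
            self.free_cores += cores

    def run_application(manager: ResourceManager, work_items):
        cores = manager.invade(min_cores=1, max_cores=8)   # state constraints only
        try:
            # Adapt the degree of parallelism to the grant instead of choosing it.
            with ThreadPoolExecutor(max_workers=cores) as pool:
                return list(pool.map(lambda x: x * x, work_items))
        finally:
            manager.retreat(cores)

    if __name__ == "__main__":
        print(run_application(ResourceManager(total_cores=16), range(10)))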

    Invasive compute balancing for applications with shared and hybrid parallelization

    Achieving high scalability with dynamically adaptive algorithms in high-performance computing (HPC) is a non-trivial task. The invasive paradigm, using compute migration, represents an efficient alternative to classical data migration approaches for such algorithms in HPC. We present a core-distribution scheduler which realizes the migration of computational power by distributing cores depending on the requirements specified by one or more parallel program instances. We validate our approach with different benchmark suites for simulations with artificial workload as well as with applications based on dynamically adaptive shallow water simulations, and investigate concurrently executed adaptivity parameter studies on realistic tsunami simulations. The invasive approach results in significantly faster overall execution times and higher hardware utilization than alternative approaches. Dynamic resource management is therefore mandatory for a more efficient execution of scenarios similar to our simulations, e.g. several tsunami simulations in urgent computing, to overcome strong scalability challenges in HPC. The optimizations obtained by invasive migration of cores can be generalized to similar classes of algorithms with dynamic resource requirements. This work was supported by the German Research Foundation (DFG) as part of the Transregional Collaborative Research Centre "Invasive Computing" (SFB/TR 89).
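    A hedged sketch of what a single core-distribution step might look like is given below: a fixed core budget is split across concurrently running program instances in proportion to the requirements they report, with rounding losses handed to the most demanding instances. This is an illustrative baseline, not the scheduler evaluated in the paper.

    # Minimal sketch: split a fixed core budget across parallel program
    # instances in proportion to their reported requirements, keeping at
    # least one core per instance. Illustrative baseline only.

    def distribute_cores(requirements: dict[str, float], total_cores: int) -> dict[str, int]:
        """requirements maps instance id -> reported workload (arbitrary units)."""
        total_req = sum(requirements.values())
        shares = {name: max(1, int(total_cores * req / total_req))
                  for name, req in requirements.items()}
        # Hand any cores lost to rounding to the most demanding instances.
        leftover = total_cores - sum(shares.values())
        for name in sorted(requirements, key=requirements.get, reverse=True):
            if leftover <= 0:
                break
            shares[name] += 1
            leftover -= 1
        return shares

    if __name__ == "__main__":
        print(distribute_cores({"tsunami_a": 3.0, "tsunami_b": 1.0, "study": 0.5}, 16))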