438 research outputs found

    Doctor of Philosophy

    Portable electronic devices will be limited by the available energy of existing battery chemistries for the foreseeable future. However, the system-on-chips (SoCs) used in these devices are expected to offer more functionality and longer battery life. A difficult problem in SoC design is providing energy-efficient communication between its components while maintaining the required performance. This dissertation introduces a novel energy-efficient network-on-chip (NoC) communication architecture. A NoC is used within complex SoCs due to its superior performance, energy usage, modularity, and scalability over traditional bus and point-to-point methods of connecting SoC components. This is the first academic research that combines asynchronous NoC circuits, a focus on energy-efficient design, and a software framework to customize a NoC for a particular SoC. Its key contribution is demonstrating that a simple, asynchronous NoC concept is a good match for low-power devices and a fruitful area for further investigation. The proposed NoC is energy-efficient in several ways: simple switch and arbitration logic, low port radix, latch-based router buffering, a topology with the minimum number of 3-port routers, and the asynchronous advantages of zero dynamic power consumption while idle and the absence of a clock tree. The tool framework developed for this work uses novel methods based on simulated annealing and force-directed movement to optimize the topology and router floorplan. It studies link pipelining techniques that improve throughput in an energy-efficient manner. A simulator is automatically generated for each customized NoC, and its traffic generators use a self-similar message distribution, as opposed to Poisson, to better match application behavior. Compared to a conventional synchronous NoC, this design achieves comparable message latency with half the energy.
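    As a rough illustration of the traffic-generation point above (a minimal Python sketch, not the dissertation's actual generator; all names and parameters are assumptions), self-similar message arrivals can be produced by aggregating on/off sources whose on- and off-period lengths are heavy-tailed Pareto variates, whereas independent exponential gaps would yield Poisson traffic:

    import random

    def pareto_on_off_source(alpha=1.5, mean_on=10.0, mean_off=50.0,
                             horizon=10_000.0):
        # One on/off source: burst ("on") and silence ("off") durations are
        # Pareto distributed. With 1 < alpha < 2 the variance is infinite,
        # which drives long-range dependence (Hurst H = (3 - alpha) / 2).
        t = 0.0
        while t < horizon:
            on = mean_on * (alpha - 1) / alpha * random.paretovariate(alpha)
            end = min(t + on, horizon)
            while t < end:
                yield t          # one message per cycle while the source is on
                t += 1.0
            t += mean_off * (alpha - 1) / alpha * random.paretovariate(alpha)

    def aggregate(n_sources=32, **kwargs):
        # Merging many independent heavy-tailed on/off sources yields an
        # approximately self-similar arrival process (unlike Poisson).
        times = [t for _ in range(n_sources) for t in pareto_on_off_source(**kwargs)]
        return sorted(times)

    trace = aggregate()
    print(len(trace), "messages in the aggregated trace")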

    Reducing Internet Latency: A Survey of Techniques and their Merits

    Bob Briscoe, Anna Brunstrom, Andreas Petlund, David Hayes, David Ros, Ing-Jyh Tsang, Stein Gjessing, Gorry Fairhurst, Carsten Griwodz, Michael Welzl. Peer reviewed. Preprint

    Quality of service and dependability of cellular vehicular communication networks

    Improving the dependability of mobile network applications is a complicated task for many reasons. Especially in Germany, the development of cellular infrastructure has not always kept pace with growing demand, leaving many blind spots that cause communication outages. However, even where infrastructure is available, the mobility of the users still poses a major challenge for the dependability of applications: as the user moves, the capacity of the channel can change substantially. Applications such as adjustable-bitrate video streaming therefore cannot infer future performance from past download rates, as those rates only reflect stale information measured at a different location.

    In this work, we explore the use of 4G LTE for dependable communication in mobile vehicular scenarios. For this, we first look at the performance of LTE, especially in mobile environments, and how it has developed over time. We compare measurements performed several years apart and look at performance differences between urban and rural areas. We find that even though the continued development of the 4G standard has enabled better performance in theory, this has not always been reflected in real-life performance due to the slow development of infrastructure, especially along highways.

    We also explore the possibility of performance prediction in LTE networks without the need to perform active measurements. For this, we look at the relationship between the measured signal quality and the achievable data rates and latencies. We find that while there is a strong correlation between some of the signal quality indicators and the achievable data rates, the relationship between them is stochastic: a higher signal quality makes better performance more probable but does not guarantee it. We then use our empirical measurement results as the basis for a model that uses signal quality measurements to predict a throughput distribution. The resulting estimate of the obtainable throughput can then be used by adjustable-bitrate applications such as video streaming to improve their dependability.
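    A minimal sketch of how such a stochastic mapping from signal quality to a throughput distribution might be built from past measurements (the binning scheme and all names are assumptions, not the thesis's actual model):

    from collections import defaultdict

    def build_model(history, bin_width=2.0):
        # Map a binned signal-quality indicator (here SINR in dB) to the
        # empirical distribution of throughputs observed in that bin. The
        # result is deliberately a distribution, not a single value: higher
        # signal quality makes good throughput more likely, not certain.
        model = defaultdict(list)
        for sinr, throughput in history:
            model[int(sinr // bin_width)].append(throughput)
        return dict(model)

    def predict_distribution(model, sinr, bin_width=2.0):
        # Predicted throughput sample for a new SINR reading.
        return model.get(int(sinr // bin_width), [])

    # Hypothetical (SINR dB, throughput Mbit/s) measurement pairs:
    history = [(10.0, 20.5), (10.5, 31.0), (11.0, 24.8), (2.0, 3.1), (2.5, 7.4)]
    model = build_model(history)
    print(predict_distribution(model, 10.4))  # -> [20.5, 31.0, 24.8]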
    Mobile networks also confront TCP congestion control algorithms with a new challenge. Senders normally use TCP congestion control to avoid overloading the network with too many packets and to divide the network bandwidth fairly. This can be a challenging task, since it is not known how many senders are in the network, and the network load can change at any time. In mobile vehicular networks, TCP congestion control faces the additional problem of a constantly changing capacity: as users change their location, the quality of the channel also changes, and the capacity of the channel can drop drastically even when the change in location is very small. Additionally, in our measurements we observed that packet losses occur only rarely (packets are instead delayed and retransmitted), which can put loss-based algorithms such as Reno or CUBIC at a significant disadvantage. In this thesis, we compare several popular congestion control algorithms in both stationary and mobile scenarios. We find that many loss-based algorithms tend to cause bufferbloat and thus inflate delays, while many delay-based algorithms tend to underestimate the network capacity and thus achieve data rates that are too low. The algorithm that performed best in our measurements was TCP BBR: it was able to utilize the full capacity of the channel without causing bufferbloat and reacted to changes in capacity by adjusting its congestion window. However, since TCP BBR can be unfair towards other algorithms in wired networks, its use could be problematic.

    Finally, we propose how our data rate prediction model can be used to improve the dependability of mobile video streaming. For this, we develop an adaptive bitrate streaming algorithm that guarantees that the video freeze probability does not exceed a pre-selected upper threshold. For the algorithm to work, it needs to know the distribution of obtainable throughput. We verify the algorithm in a simulation, using a distribution obtained through the previously proposed data rate prediction approach. In our simulation, the algorithm limited the video freeze probability as intended. However, it did so at the cost of frequent switches of video bitrate, which can diminish the quality of user experience. In future work, we want to explore algorithms that offer a trade-off between the video freeze probability and the frequency of bitrate switches.
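    A minimal sketch of the bitrate-selection rule such a guarantee implies (illustrative only: names are assumptions, the thesis's algorithm is not reproduced here, and a real player would also track its playback buffer): given a predicted throughput distribution, pick the highest bitrate whose predicted freeze probability stays below the threshold.

    def select_bitrate(throughput_samples, bitrates, max_freeze_prob=0.05):
        # throughput_samples: empirical sample of the predicted throughput
        # distribution (e.g. from a signal-quality model), in Mbit/s.
        # The freeze probability of a candidate bitrate is estimated as
        # P(throughput < bitrate) under that distribution.
        n = len(throughput_samples)
        for rate in sorted(bitrates, reverse=True):
            freeze_prob = sum(s < rate for s in throughput_samples) / n
            if freeze_prob <= max_freeze_prob:
                return rate
        return min(bitrates)  # no bitrate meets the bound: degrade gracefully

    predicted = [3.1, 4.8, 6.2, 7.0, 2.5, 5.4, 8.9, 6.6, 4.1, 7.7]
    print(select_bitrate(predicted, [1.0, 2.5, 5.0, 8.0]))  # -> 2.5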

    On the Distribution of Control in Asynchronous Processor Architectures

    Institute for Computing Systems Architecture
    The effective performance of computer systems is to a large measure determined by the synergy between the processor architecture, the instruction set and the compiler. In the past, the sequencing of information within processor architectures has normally been synchronous: controlled centrally by a clock. However, this global signal may limit the future gains in performance that could otherwise be achieved through improvements in implementation technology. This thesis investigates the effects of relaxing this strict synchrony by distributing control within processor architectures through the use of a novel asynchronous design model known as a micronet. The impact of asynchronous control on the performance of a RISC-style processor is explored at different levels. Firstly, improvements in the performance of individual instructions obtained by exploiting actual run-time behaviours are demonstrated. Secondly, it is shown that micronets are able to exploit further instruction-level parallelism (ILP), both spatial and temporal, efficiently through the distribution of control to datapath resources. Finally, exposing fine-grain concurrency within a datapath can only benefit a computer system if it can easily be exploited by the compiler. Although compilers for micronet-based asynchronous processors may be considered more complex than their synchronous counterparts, it is shown that the variable execution time of an instruction does not adversely affect the compiler's ability to schedule code efficiently. In conclusion, modelling a processor's datapath as a micronet permits the exploitation of both fine-grain ILP and actual run-time delays, leading to the efficient utilisation of functional units and in turn an improvement in overall system performance.
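    The scheduling claim can be made concrete with a hedged sketch (not the thesis's compiler; names and latencies are illustrative): a greedy list scheduler needs only dependences and expected latencies to produce a sensible issue order, while the micronet's distributed control absorbs the actual run-time variation.

    import heapq

    def list_schedule(deps, expected_latency):
        # deps: instruction -> set of instructions it depends on.
        # expected_latency: instruction -> mean execution time; with
        # asynchronous execution these are estimates, not fixed cycle counts.
        indeg = {i: len(d) for i, d in deps.items()}
        succs = {i: [] for i in deps}
        for i, d in deps.items():
            for p in d:
                succs[p].append(i)
        # Ready list ordered by longest expected latency first (a simple
        # priority; production schedulers would use critical-path length).
        ready = [(-expected_latency[i], i) for i in deps if indeg[i] == 0]
        heapq.heapify(ready)
        order = []
        while ready:
            _, i = heapq.heappop(ready)
            order.append(i)
            for s in succs[i]:
                indeg[s] -= 1
                if indeg[s] == 0:
                    heapq.heappush(ready, (-expected_latency[s], s))
        return order

    deps = {"load": set(), "mul": {"load"}, "add": set(), "store": {"mul", "add"}}
    lat = {"load": 3.0, "mul": 2.0, "add": 1.0, "store": 1.0}
    print(list_schedule(deps, lat))  # -> ['load', 'mul', 'add', 'store']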

    Cloud-aided wireless systems: communications and radar applications

    This dissertation focuses on cloud-assisted radio technologies for communication, including mobile cloud computing and the Cloud Radio Access Network (C-RAN), and for radar systems. It first concentrates on cloud-aided communications. Mobile cloud computing, which allows mobile users to run computationally heavy applications on battery-limited devices such as cell phones, is considered initially. Mobile cloud computing enables the offloading of computation-intensive applications from a mobile device to a cloud processor via a wireless interface. The interplay between offloading decisions at the application layer and physical-layer parameters, which determine the energy and latency of the mobile-cloud communication, motivates the inter-layer optimization of fine-grained task offloading across both layers. This problem is modeled using application call graphs, and the joint optimization of application-layer and physical-layer parameters is carried out via a message passing algorithm that minimizes the total energy expenditure of the mobile user.

    The cloud radio concept is also considered in the development of two cellular architectures known as Distributed RAN (D-RAN) and C-RAN, in which the baseband processing of base stations is carried out in a remote Baseband Processing Unit (BBU). These architectures can reduce the capital and operating expenses of dense deployments at the cost of increased communication latency. The effect of this latency, which is due to the fronthaul transmission between the Remote Radio Head (RRH) and the BBU, is then studied for the implementation of Hybrid Automatic Repeat Request (HARQ) protocols. Specifically, two novel solutions based on the control-data separation architecture are proposed. The trade-offs among resources such as the number of transmitting and receiving antennas, the transmission power, and the blocklength of the transmitted codeword, as well as the performance of the proposed solutions, are investigated through analysis and numerical results.

    The detection of a target in radar systems requires processing of the signal received by the sensors. Similarly to cloud radio access networks in communications, this processing can be carried out in a remote Fusion Center (FC) connected to all sensors via limited-capacity fronthaul links. The last part of this dissertation explores the application of cloud radio to radar systems. In particular, the problem of maximizing the detection performance at the FC, jointly over the code vector used by the transmitting antenna and over the statistics of the quantization noise introduced at the sensors for fronthaul transmission, is investigated by adopting the information-theoretic criterion of the Bhattacharyya distance and information-theoretic bounds on the quantization rate.
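    As a toy illustration of the offloading optimization (a sketch under strong assumptions: a call tree rather than a general call graph, fixed per-task energies, and hypothetical names and numbers; the dissertation's message passing algorithm is not reproduced here), the following chooses local versus cloud execution per task so the mobile user's total energy is minimal, paying a transfer cost whenever a call crosses the mobile/cloud boundary:

    def min_energy(tree, root, e_local, e_cloud, e_xfer):
        # tree: task -> list of callee tasks; e_local/e_cloud: per-task
        # execution energy as seen by the mobile (cloud entries would mostly
        # be radio energy); e_xfer[(u, v)]: transfer energy on edge (u, v)
        # when u and v run in different places.
        PLACES = ("local", "cloud")

        def solve(v):
            # cost[p]: min energy of v's subtree with v placed at p;
            # pick[p]: placements chosen for v's descendants in that case.
            cost = {"local": e_local[v], "cloud": e_cloud[v]}
            pick = {"local": {}, "cloud": {}}
            for child in tree.get(v, ()):
                c_cost, c_pick = solve(child)
                for p in PLACES:
                    q = min(PLACES, key=lambda q: c_cost[q]
                            + (e_xfer[(v, child)] if q != p else 0.0))
                    cost[p] += c_cost[q] + (e_xfer[(v, child)] if q != p else 0.0)
                    pick[p][child] = q
                    pick[p].update(c_pick[q])
            return cost, pick

        cost, pick = solve(root)
        best = min(PLACES, key=lambda p: cost[p])
        return cost[best], {root: best, **pick[best]}

    # Hypothetical call tree and energy numbers:
    tree = {"main": ["parse", "render"], "parse": ["tokenize"]}
    e_local = {"main": 5, "parse": 9, "tokenize": 4, "render": 2}
    e_cloud = {"main": 8, "parse": 1, "tokenize": 1, "render": 6}
    e_xfer = {("main", "parse"): 2, ("main", "render"): 3, ("parse", "tokenize"): 2}
    print(min_energy(tree, "main", e_local, e_cloud, e_xfer))
    # -> (11, {'main': 'local', 'parse': 'cloud', 'tokenize': 'cloud', 'render': 'local'})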

    Service-oriented models for audiovisual content storage

    What are the important topics to understand when providing storage services for digital audiovisual content? This report looks at how content is created and moves into and out of storage; the storage service value networks and architectures found now and expected in the future; what sort of data transfer is expected to and from an audiovisual archive; which transfer protocols to use; and a summary of security and interface issues.
    • 
