6 research outputs found

    Optimizing energy-efficiency for multi-core packet processing systems in a compiler framework

    Network applications are becoming increasingly computation-intensive, and the volume of traffic is soaring at an unprecedented rate. Multi-core and multi-threaded techniques are therefore widely employed in packet processing systems to meet these changing requirements. However, the processing power cannot be fully utilized without a suitable programming environment. The compilation procedure is decisive for the quality of the code: it largely determines overall system performance in terms of packet throughput, individual packet latency, core utilization and energy efficiency. The thesis first investigates compilation issues in the networking domain, with a particular focus on energy consumption. As a cornerstone for any compiler optimization, a code analysis module that collects program dependency information is presented and incorporated into a compiler framework. Using that dependency information, a strategy based on graph bi-partitioning and mapping is proposed to search for an optimal configuration in a parallel-pipeline fashion. The energy-aware extension is particularly effective in enhancing the energy efficiency of the whole system. Finally, a generic evaluation framework for simulating the performance and energy consumption of a packet processing system is given. It accepts flexible architectural configurations and is capable of performing arbitrary code mappings. The simulation time is extremely short compared to full-fledged simulators. A set of our optimization results is gathered using this framework.
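The bi-partitioning step can be illustrated with a minimal sketch. The task graph, stage names, and exhaustive search below are invented for illustration and do not reproduce the thesis' actual algorithm or cost model; the point is only that dependency edges crossing the cut become inter-stage traffic in a parallel pipeline, so the mapper prefers balanced splits with a small edge cut.

```python
from itertools import combinations

def cut_size(edges, part_a):
    """Number of dependency edges crossing the partition boundary."""
    return sum((u in part_a) != (v in part_a) for u, v in edges)

def best_bipartition(nodes, edges):
    """Exhaustively find a balanced 2-way split with minimal edge cut.

    Fine for the tiny illustrative graph here; a real compiler would use a
    heuristic (e.g. Kernighan-Lin style) on much larger dependency graphs.
    """
    half = len(nodes) // 2
    best = None
    for group in combinations(nodes, half):
        part_a = set(group)
        cut = cut_size(edges, part_a)
        if best is None or cut < best[0]:
            best = (cut, part_a, set(nodes) - part_a)
    return best

# Hypothetical packet-processing task graph: parse -> classify -> {nat, acl} -> queue
nodes = ["parse", "classify", "nat", "acl", "queue"]
edges = [("parse", "classify"), ("classify", "nat"),
         ("classify", "acl"), ("nat", "queue"), ("acl", "queue")]
cut, stage1, stage2 = best_bipartition(nodes, edges)
print(cut, sorted(stage1), sorted(stage2))
```

Each of the two resulting stages can then be bi-partitioned again, yielding the parallel-pipeline configurations the thesis searches over.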

    Static Web content distribution and request routing in a P2P overlay

    The significance of collaboration over the Internet has become a cornerstone of modern computing, as the essence of information processing and content management has shifted to networked and Web-based systems. As a result, effective and reliable access to networked resources has become a critical commodity in any modern infrastructure. To cope with the limitations of the traditional client-server networking model, most popular Web-based services employ separate Content Delivery Networks (CDNs) to distribute the server-side resource consumption. Since Web applications are often latency-critical, CDNs are additionally adopted to optimize the content-delivery latencies perceived by Web clients. Because of this prevalent connection model, Web content delivery has grown into a notable industry. The rapid growth in the number of mobile devices further increases the resources required from the originating server, as content is also accessed on the go. While the Web has become one of the most utilized sources of information and digital content, the openness of the Internet is simultaneously being reduced by organizations and governments that prevent access to undesired resources. Access to information may be regulated or altered to suit political interests or organizational benefits, conflicting with the initial design principle of an unrestricted and independent information network. This thesis contributes to the development of a more efficient and open Internet by combining a feasibility study with a preliminary design of a peer-to-peer Web content distribution and request routing mechanism. The suggested design addresses both the effectiveness limitations of the current client-server networking model and the openness of information distributed over the Internet.
Based on the properties of existing peer-to-peer implementations, the suggested overlay design is intended to provide low-latency access to any Web content without sacrificing end-user privacy. The overlay is additionally designed to increase the cost of censorship by forcing a successful blockade to isolate the censored network from the rest of the Internet.
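A common building block for decentralized request routing of this kind is a consistent-hash ring. The sketch below is illustrative only (the peer names and URL are invented, and this is not necessarily the mechanism the thesis proposes): it shows how a URL can be mapped to a peer without any central directory, and why a peer leaving only remaps the keys on its own arc of the ring.

```python
import hashlib
from bisect import bisect_right

def _hash(key: str) -> int:
    # SHA-1 gives a stable position on the ring for both peers and URLs.
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class ConsistentRing:
    """Minimal consistent-hash ring: each peer owns the arc of the key space
    preceding its own hash, so lookups route deterministically to one peer."""

    def __init__(self, peers):
        self.ring = sorted((_hash(p), p) for p in peers)

    def route(self, url: str) -> str:
        h = _hash(url)
        # First peer clockwise from the URL's position (wrapping around).
        idx = bisect_right(self.ring, (h, chr(0x10FFFF)))
        return self.ring[idx % len(self.ring)][1]

ring = ConsistentRing(["peer-a", "peer-b", "peer-c"])
target = ring.route("http://example.org/index.html")
print(target)
```

Because only the departing peer's arc is remapped, churn in the overlay disturbs few existing content-to-peer assignments, which matters for latency and cache locality.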

    Metoda projektovanja namenskih programabilnih hardverskih akceleratora / A Method for Designing Application-Specific Programmable Hardware Accelerators

    Application-specific computer systems are usually designed to support the execution of a set of target applications. To achieve the highest possible efficiency, the use of specialized processors (Application Specific Instruction Set Processors, ASIPs) is recommended, in which program instructions are executed in dedicated, independent hardware blocks (accelerators). The main reason for having independent accelerators is to maximize the speedup of instruction execution. However, this approach implies that a dedicated integrated circuit (ASIC) must be designed for each block, which significantly increases the total processor area. One method for reducing the total area is to apply the Datapath Merging technique to the dataflow graphs of the input applications. The result is a single programmable hardware accelerator capable of executing all target instructions, although with negative consequences for system efficiency. It is often overlooked that, owing to the very limited flexibility of ASIC hardware accelerators, specialized processors have other drawbacks as well: if the processor specification is changed, or simply upgraded, in the final phases of the design, large delays and additional redesign costs are inevitable. This thesis shows that the demands for flexibility and efficiency need not be mutually exclusive. It demonstrates that a limited degree of hardware flexibility can be introduced during the design process, so that the resulting hardware accelerator can execute not only the applications defined at the very start of the design, but also other applications, provided that they belong to the same domain. In other words, the thesis presents a method for designing flexible application-specific hardware accelerators. Experimental evaluation shows that the resulting accelerators are in most cases only up to 2× larger in area or 2× slower than accelerators obtained with the Datapath Merging method, which itself provides no additional flexibility.
Typically, embedded systems are designed to support a limited set of target applications. To efficiently execute those applications, they may employ Application Specific Instruction Set Processors (ASIPs) enriched with carefully designed Instruction Set Extensions (ISEs) implemented in dedicated hardware blocks. The primary goal when designing ISEs is efficiency, i.e. the highest possible speedup, which implies synthesizing all critical computational kernels of the application dataflow graphs as Application Specific Integrated Circuits (ASICs). Yet, this can lead to a high on-chip area dedicated solely to ISEs. One existing approach to decreasing this area, at the reasonable price of decreased efficiency, is to perform datapath merging on the input dataflow graphs (DFGs) prior to generating the ASIC. It is often neglected that even higher costs can be accidentally incurred due to the lack of flexibility of such ISEs. Namely, if late design changes or specification upgrades happen, significant time-to-market delays and non-recurring costs for redesigning the ISEs and the corresponding ASIPs become inevitable. This thesis shows that flexibility and efficiency are not mutually exclusive. It demonstrates that it is possible to introduce a limited amount of hardware flexibility during the design process, such that the resulting datapath is in fact reconfigurable and thus can execute not only the applications known at design time, but also other applications belonging to the same application domain. In other words, it proposes a methodology for designing domain-specific reconfigurable arrays out of a limited set of input applications.
The experimental results show that the resulting arrays are usually around 2× larger and 2× slower than ISEs synthesized using datapath merging, which has practically no flexibility beyond the design set of DFGs.
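The area saving that datapath merging buys can be sketched at the operator level. The operator mix, area numbers, and the max-sharing rule below are invented for illustration; real ISE synthesis merges full dataflow graphs and must also account for the multiplexer and interconnect overhead that this toy model ignores.

```python
from collections import Counter

# Hypothetical per-operator area costs (arbitrary units); a real flow would
# use gate-level estimates from synthesis.
AREA = {"add": 1.0, "mul": 4.0, "shift": 0.5, "and": 0.3}

def merged_area(dfg_a, dfg_b):
    """Operator-level view of datapath merging: only one kernel runs at a
    time, so operators of the same type can be shared, and the merged
    datapath needs max(count_a, count_b) instances of each type."""
    ca, cb = Counter(dfg_a), Counter(dfg_b)
    return sum(AREA[op] * max(ca[op], cb[op]) for op in ca | cb)

dfg_a = ["add", "add", "mul", "shift"]   # operators of kernel 1
dfg_b = ["add", "mul", "mul", "and"]     # operators of kernel 2
separate = sum(AREA[op] for op in dfg_a + dfg_b)  # two dedicated ASIC blocks
merged = merged_area(dfg_a, dfg_b)
print(separate, merged)
```

The merged datapath is smaller than the two dedicated blocks combined, which is the trade the thesis then extends with a limited, domain-targeted amount of reconfigurability.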

    Reductie van het geheugengebruik van besturingssysteemkernen / Memory Footprint Reduction for Operating System Kernels

    In embedded systems, only a limited amount of memory is typically available. Much attention is therefore devoted to producing compact programs for these systems, and a variety of techniques have been developed to automatically reduce the memory footprint of programs. Until now, these techniques have focused mainly on the application software running on the system, while the operating system has been overlooked. This dissertation describes a number of techniques that make it possible to significantly reduce the memory footprint of an operating system kernel in an automated way. First, compaction transformations are applied at link time. If the hardware and software the system is composed of are known, further reductions can be achieved: the kernel is specialized for a particular hardware-software combination. Superfluous functionality is identified and removed from the kernel, while the remaining functionality is adapted to the specific usage patterns that can be derived from the hardware and software. Finally, techniques are presented that make it possible to remove rarely or never executed code (for example, code for handling error conditions that occur only rarely) from memory; this code is then loaded only at the moment it is actually needed. For our test system, the combined techniques reduce the memory footprint of a Linux 2.4 kernel by more than 48%.
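The specialization step, removing functionality that a given hardware-software combination can never exercise, can be sketched as a reachability pass over a call graph. The kernel call graph and function names below are invented for illustration; the dissertation's actual link-time compaction works on object code and relocation information, not on a Python dictionary.

```python
# Hypothetical kernel call graph: function -> functions it may call.
CALL_GRAPH = {
    "main": ["sched_init", "vfs_init"],
    "sched_init": ["alloc"],
    "vfs_init": ["alloc", "mount_root"],
    "mount_root": [],
    "alloc": [],
    "scsi_init": ["alloc"],  # driver absent from this HW/SW configuration
    "debug_dump": [],        # diagnostic code never invoked in the field
}

def reachable(root="main"):
    """Functions transitively callable from the entry point; anything else
    can be dropped from the kernel image at link time."""
    seen, stack = set(), [root]
    while stack:
        fn = stack.pop()
        if fn not in seen:
            seen.add(fn)
            stack.extend(CALL_GRAPH.get(fn, []))
    return seen

kept = reachable()
removed = set(CALL_GRAPH) - kept
print(sorted(removed))
```

Rarely executed but still reachable code (like an error handler) cannot be removed this way; as the abstract notes, it is instead evicted from the image and loaded on demand.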

    Cyber Analogies

    This anthology of cyber analogies will resonate with readers whose duties call for them to set strategies to protect the virtual domain and determine the policies that govern it. Our belief is that learning is most effective when the concepts under consideration can be aligned with already-existing understanding or knowledge. Cyber issues are inherently tough to explain in layman's terms. The future is always open and undetermined, and the number of actors and the complexity of their relations are too great to give definitive guidance about future developments. In this respect, historical analogies, carefully developed and properly applied, help indicate a direction for action by reducing complexity and making the future at least cognitively manageable. (US Cyber Command)
    Contents: Introduction (Emily O. Goldman & John Arquilla); The Cyber Pearl Harbor (James J. Wirtz); Applying the Historical Lessons of Surprise Attack to the Cyber Domain: The Example of the United Kingdom (Dr. Michael S. Goodman); The Cyber Pearl Harbor Analogy: An Attacker's Perspective (Emily O. Goldman, John Surdu, & Michael Warner); "When the Urgency of Time and Circumstances Clearly Does Not Permit...": Redelegation in Nuclear and Cyber Scenarios (Peter Feaver & Kenneth Geers); Comparing Airpower and Cyberpower (Dr. Gregory Rattray); Active Cyber Defense: Applying Air Defense to the Cyber Domain (Dorothy E. Denning & Bradley J. Strawser); The Strategy of Economic Warfare: A Historical Case Study and Possible Analogy to Contemporary Cyber Warfare (Nicholas A. Lambert); Silicon Valley: Metaphor for Cybersecurity, Key to Understanding Innovation War (John Kao); The Offense-Defense Balance and Cyber Warfare (Keir Lieber); A Repertory of Cyber Analogies (Robert Axelrod)

    Building Sustainability: Definitions, Process and Case

    This thesis is an exploration of how to do sustainable development for buildings, especially during the earliest stages of such development. The thesis starts by considering clear definitions of sustainability, development and sustainable development as these concepts apply to organizations in general and as they apply specifically to the charity All Our Relations (AOR) and its community in the Region of Waterloo in Ontario, Canada. Three critical challenges to the process of development are also discussed in these early chapters, namely assessment, vision and feedback. In the third chapter, these same challenges are put under the lens of sustainable development, and three new, but related, challenges of connection complexity, shared futures and resilience are examined to better understand the problems and solutions that surround them. At the end of this broad introductory section, AOR's relationships with the community are explored as part of its efforts to draft an organization-wide sustainability plan. The second part of the thesis is an attempt to apply and expand on the general ideas from the first half through a focus on buildings, and specifically the building of AOR's planned Hospice and Retreat Centre in Bloomingdale, Ontario. As part of the focus on sustainable buildings, the Leadership in Energy and Environmental Design (LEED™) system of assessing building impacts is presented and critiqued. As part of the focus on building developments, the earlier challenges of assessment, vision and feedback are revisited as they apply to the concept-design phase of a typical building project. The final three chapters of the thesis are a synthesis of all the previous chapters and the formal presentation of the case-study concept development for the AOR building.
A full summary of all previous definitions is presented, and the final definition of sustainable building development is expressed as a culmination and extension of its parts: sustainable building development is a process of creating space-for-use which recognizes both the importance of space in our lives and the impact that developing that space has on our greater goal to pursue sustainability. Potential critiques of this definition are discussed, and two methods of engaging the difficult challenges of sustainable building development are presented: the decider's dilemma and the life-cycle-service-network model of connection complexity. Finally, the case-study use of LEED as a guide for doing sustainable development in buildings is contrasted against the author's proposed approaches. Through a series of qualitative and quantitative observations based on the results of the case-study design, LEED is revealed to be effective mostly as an early guide, but lacking the rigor and complexity needed to properly address the challenges of building sustainability.