
    LIPIcs, Volume 261, ICALP 2023, Complete Volume


    Computing on the Edge of the Network

    To enable fifth-generation cellular communication network (5G) systems, energy-efficient architectures are required that can provide a reliable service platform for the delivery of 5G services and beyond. Device Enhanced Edge Computing is a derivative of Multi-Access Edge Computing (MEC) that provides computing and storage resources directly on the end devices. The importance of this concept is evidenced by the increasing demands of ultra-low-latency, computationally intensive applications, which overwhelm the MEC server alone and the wireless channel. This dissertation presents a computation offloading framework that considers energy, mobility and incentives in a multi-user, multi-task device-enhanced MEC system, taking into account the interdependence of tasks as well as the latency requirements of the applications.
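
    As a rough illustration of the trade-off such a framework resolves, the sketch below chooses, for a single task, between local execution and offloading to the MEC server based on device energy, subject to a latency deadline. It is a minimal sketch, not the dissertation's algorithm; the cost models, parameter names and numbers are assumptions.

        # Minimal sketch of an energy- and latency-aware offloading decision.
        # All models and constants are illustrative, not taken from the dissertation.

        from dataclasses import dataclass

        @dataclass
        class Task:
            cycles: float        # CPU cycles required
            input_bits: float    # data to upload if offloaded
            deadline_s: float    # latency requirement

        def offload_decision(task, f_local_hz, f_edge_hz, uplink_bps,
                             energy_per_cycle_j, tx_power_w):
            """Return 'local' or 'offload' for one task, or None if neither meets the deadline."""
            # Local execution: time and energy spent on the device
            t_local = task.cycles / f_local_hz
            e_local = energy_per_cycle_j * task.cycles

            # Offloading: upload time plus edge execution; the device pays only transmission energy
            t_tx = task.input_bits / uplink_bps
            t_offload = t_tx + task.cycles / f_edge_hz
            e_offload = tx_power_w * t_tx

            feasible = [(e, name) for e, t, name in
                        [(e_local, t_local, "local"), (e_offload, t_offload, "offload")]
                        if t <= task.deadline_s]
            return min(feasible)[1] if feasible else None

        # Example: a 0.5-gigacycle task with 1 MB of input and a 300 ms deadline
        task = Task(cycles=5e8, input_bits=8e6, deadline_s=0.3)
        print(offload_decision(task, f_local_hz=1e9, f_edge_hz=10e9,
                               uplink_bps=50e6, energy_per_cycle_j=1e-9, tx_power_w=0.5))
        # -> offload (local execution would miss the deadline)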

    A High Performance Payment Processing System Designed for Central Bank Digital Currencies

    In light of continued innovation in money and payments, many central banks are exploring the creation of a central bank digital currency (CBDC), a new form of central bank money that supplements existing central bank reserve account balances and physical currency. This paper presents Hamilton, a flexible transaction processor design that supports a range of models for a CBDC and minimizes data storage in the core transaction processor by storing unspent funds as opaque hashes. Hamilton supports users custodying their own funds as well as custody provided by financial intermediaries. We describe and evaluate two implementations: the atomizer architecture, which provides a globally ordered history of transactions but is limited in throughput (170,000 transactions per second), and the 2PC architecture, which scales peak throughput almost linearly with resources (up to a measured throughput of 1.7M transactions per second) but does not provide a globally ordered list of transactions. We released both architectures under the MIT open source license at https://github.com/mit-dci/opencbdc-tx.
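
    The "unspent funds as opaque hashes" idea from the abstract can be sketched in a few lines: the core keeps only a set of commitments to outputs, and a spend atomically checks and removes the input hashes while inserting the output hashes. This is a simplified illustration under assumed field names and hash layout, not the OpenCBDC data model.

        # Simplified sketch of an unspent-funds set stored as opaque hashes.
        # Not the OpenCBDC implementation; encoding and fields are illustrative.

        import hashlib

        def output_hash(tx_id: bytes, index: int, value: int, owner: bytes) -> bytes:
            """Commit to one output; the core later stores only this digest."""
            preimage = tx_id + index.to_bytes(4, "big") + value.to_bytes(8, "big") + owner
            return hashlib.sha256(preimage).digest()

        class UnspentHashSet:
            def __init__(self):
                self._unspent = set()

            def mint(self, h: bytes):
                self._unspent.add(h)

            def apply_transaction(self, input_hashes, output_hashes) -> bool:
                """Atomically spend inputs and create outputs; reject double spends."""
                if not all(h in self._unspent for h in input_hashes):
                    return False                     # some input already spent or unknown
                self._unspent.difference_update(input_hashes)
                self._unspent.update(output_hashes)
                return True

        # Usage: mint one output, then spend it into a new output
        core = UnspentHashSet()
        h0 = output_hash(b"tx0", 0, 100, b"alice-pubkey")
        core.mint(h0)
        h1 = output_hash(b"tx1", 0, 100, b"bob-pubkey")
        assert core.apply_transaction([h0], [h1])        # first spend succeeds
        assert not core.apply_transaction([h0], [h1])    # replay is rejected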

    A Methodology and Simulation-Based Toolchain for Estimating Deployment Performance of Smart Collective Services at the Edge

    Research trends are pushing artificial intelligence (AI) across the Internet of Things (IoT)-edge-fog-cloud continuum to enable effective data analytics, decision making, and efficient use of resources for QoS targets. Approaches for engineering collective adaptive systems (CASs), such as aggregate computing, provide declarative programming models and tools for dealing with the uncertainty and complexity that may arise from scale, heterogeneity, and dynamicity. Crucially, the aggregate computing architecture allows for 'pulverization': applications can be decomposed into many deployable micromodules that can be spread across the ICT infrastructure, thus allowing multiple potential deployment configurations for the same application logic. This article studies the deployment architecture of aggregate-based edge services and its implications in terms of performance and cost. The goal is to provide methodological guidelines and a model-based toolchain for the generation and simulation-based evaluation of potential deployments. First, we address this subject methodologically by proposing an approach based on deployment code generators and a simulation phase in which the obtained solutions are assessed with respect to their performance and costs. We then tailor this approach to aggregate computing applications deployed onto an IoT-edge-fog-cloud infrastructure, and we develop a corresponding toolchain based on Protelis and EdgeCloudSim. Finally, we evaluate the approach and tools through a case study of edge multimedia streaming, where the edge ecosystem exhibits intelligence by self-organizing into clusters to promote load balancing in large-scale dynamic settings.
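
    To make the 'pulverization' idea concrete, the sketch below splits one application-logic instance into micromodules and enumerates candidate placements across the IoT-edge-fog-cloud tiers; this is the kind of configuration space a deployment generator would hand to a simulator. It is a minimal sketch with hypothetical component and tier names, not the article's Protelis/EdgeCloudSim toolchain.

        # Sketch of enumerating deployment configurations for a "pulverized" application.
        # Component names, tiers and constraints are illustrative assumptions.

        from itertools import product

        # Micromodules of one logical device (after pulverization)
        COMPONENTS = ["sensor", "actuator", "state", "communication", "behaviour"]

        # Where each kind of component may legally run in the continuum
        ALLOWED_TIERS = {
            "sensor": ["device"],                      # bound to the physical thing
            "actuator": ["device"],
            "state": ["device", "edge", "cloud"],
            "communication": ["device", "edge"],
            "behaviour": ["device", "edge", "fog", "cloud"],
        }

        def candidate_deployments():
            """Yield every mapping component -> tier that respects the placement constraints."""
            for choice in product(*(ALLOWED_TIERS[c] for c in COMPONENTS)):
                yield dict(zip(COMPONENTS, choice))

        deployments = list(candidate_deployments())
        print(len(deployments), "candidate deployments to evaluate in simulation")
        print(deployments[0])

    A simulation phase, as the article proposes, would then score each candidate deployment against performance and cost targets.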

    Protein Structure, Dynamics, and Function: A Philosophical Account of Representation and Explanation in Structural Biology

    Most philosophical work in molecular biology has historically centered on DNA, genetics, and questions of reduction. My dissertation breaks from this tradition to make proteins the object of philosophical and historical analysis. The recent history of structural biology and protein science offers untapped potential for the history and philosophy of science. My ultimate goal for this dissertation, therefore, is to identify and analyze some of the key historical and philosophical puzzles that arise in these fields. I focus primarily on the shift from the static to the dynamic view of proteins in the late twentieth century. The static view treated proteins as stable, rigid structures, whereas the dynamic view considers proteins to be dynamic molecules in constant motion. In the first half of the dissertation, I develop a historical account of the origins of the static view of proteins. I show how this view led molecular biologists to adopt mechanistic explanation as their preferred strategy for explaining protein function. I then develop an account of the emergence of the dynamic view of proteins, arguing that thermodynamic theory and the theoretical commitments of scientists played an important and often overlooked role in driving this change. In the second half of the dissertation, I analyze the epistemological relationship between the static and dynamic concepts of the protein and argue that conceptual replacement is occurring. I then develop an account of ensemble explanation, a new type of explanation introduced to highlight the role of dynamics in protein function. I show that these explanations fail to fit existing philosophical accounts of explanation, ultimately concluding that my account is required to capture their epistemic structure.

    Engineering Systems Integration

    Dreamers may envision our future, but it is the pragmatists who build it. Solve the right problem in the right way, and mankind moves forward. Solve the right problem in the wrong way or the wrong problem in the right way, and, however clever or ingenious the solution, neither credits mankind; instead, the misfire demonstrates a failure to appreciate a crucial step in pragmatic problem solving: systems integration. The first book to address the underlying premises of systems integration and how to exposit them in a practical and productive manner, Engineering Systems Integration: Theory, Metrics, and Methods looks at the fundamental nature of integration, exposes the subtle premises needed to achieve integration, and posits a substantial theoretical framework that is both simple and clear. Offering systems managers and systems engineers a framework from which to consider their decisions in light of systems integration metrics, the book isolates two basic questions: 1) Is there a way to express the interplay of human actions and the result of system interactions of a product with its environment? and 2) Are there methods that combine to improve the integration of systems? The author applies the four axioms of General Systems Theory (holism, decomposition, isomorphism, and models) and explores the domains of history and interpretation to devise a theory of systems integration, develop practical guidance for applying the three frameworks, and formulate the mathematical constructs needed for systems integration. The practicalities of integrating parts when we build or analyze systems mandate an analysis and evaluation of existing integrative frameworks of causality and knowledge. Integration is not just a word that describes a best practice, an art, or a single discipline. The act of integrating is an approach, operative in all disciplines, in all we see, and in all we do.

    The internet of ontological things: On symmetries between ubiquitous problems and their computational solutions in the age of smart objects

    This dissertation is about an abstract form of computer network that has recently earned a new physical incarnation called “the Internet of Things.” It surveys the ontological transformations that have occurred over recent decades to the computational components of this network: objects, initially designed as abstract algorithmic agents in the source code of computer programming but now transplanted into real-world objects. Embodying the ideal of modularity, objects have provided computer programmers with more intuitive means to construct a software application from many simple and reusable functional building blocks. Their capability of being reassembled into many different networks for a variety of applications has also embodied another ideal of computing machines, namely general-purposiveness. In the algorithmic cultures of the past century, these objects existed as mere abstractions to help humans understand the electromagnetic signals that had infiltrated every corner of automatized spaces, from private to public. As an instrumental means of domesticating these elusive signals into programmable architectures according to the goals imposed by professional programmers and amateur end-users, objects promised a universal language for any computable human activity. This utopian vision of the object-oriented domestication of the digital has had enough traction to drive the growth of the software industry, as it has provided an alibi to hide another process of colonization occurring on the flip side of their interfacing between humans and machines: making programmable the highest possible number of online and offline human activities. A more recent media age, which this dissertation calls the age of the Internet of Things, refers to the second phase of this colonization of human cultures by algorithmic objects, no longer trapped in the hard-wired circuit boards of personal computers but now residing in real-life objects with new wireless communicability. The chapters of this dissertation examine different computer applications (a navigation system in a smart car, the smart home, open-world video games, and neuro-prosthetics), each a particular case of this object-oriented redefinition of human cultures.

    Improving scalability of large-scale distributed Spiking Neural Network simulations on High Performance Computing systems using novel architecture-aware streaming hypergraph partitioning

    After theory and experimentation, modelling and simulation is regarded as the third pillar of science, helping scientists to further their understanding of complex systems. In recent years there has been a growing scientific focus on computational neuroscience as a means to understand the brain and its functions, with large international projects (the Human Brain Project, the Brain Activity Map, MindScope and the China Brain Project) aiming to further our knowledge of high-level cognitive functions. They are a testament to the enormous interest, difficulty and importance of solving the mysteries of the brain. Spiking Neural Network (SNN) simulations are widely used in the domain to facilitate experimentation. Scaling SNN simulations to large networks usually results in a more-than-linear increase in computational complexity. The computing resources required for brain-scale simulation far surpass the capabilities of personal computers today. If those demands are to be met, distributed computation models need to be adopted, since improvements in individual processor speed have slowed due to physical limits on heat dissipation. This is a significant change that requires careful management of the workload at many levels: partitioning of work, communication and workload balancing, efficient inter-process communication, and efficient use of available memory. If large-scale neuronal network models are to be run successfully, simulators must consider these factors and offer a viable solution to the challenges they pose. Large-scale SNN simulations exhibit most of the issues found in large distributed computations on general HPC systems. Commonly used workload-distribution algorithms (round-robin, random and manual allocation) do not take into account connectivity locality, which is natural in biological networks; this can lead to increased communication requirements when the simulation is distributed across multiple computing nodes. State-of-the-art SNN simulations use dense communication collectives to distribute spike data, since the common method of point-to-point communication in distributed computation is through dense patterns. Sparse communication collectives have been suggested to incur lower overheads when the application's communication pattern is sparse. In this work we characterise the bottlenecks of communication-bound SNN simulations and identify communication balance and sparsity as the main contributors to scalability. We propose hypergraph partitioning to distribute neurons across computing nodes so as to minimise communication (increasing sparsity). A hypergraph is a generalisation of a graph in which a (hyper)edge can link two or more vertices at once. Coupled with a novel use of a sparse-aware communication collective, computational efficiency increases by up to 40.8 percentage points and simulation time is reduced by up to 73%, compared to the common round-robin allocation in neuronal simulators. HPC systems have, by design, highly hierarchical communication network links, with qualitative differences in communication speed and latency between computing nodes. This can create a mismatch between the communication patterns of a distributed simulation and the physical capabilities of the hardware. If large distributed simulations are to take full advantage of these systems, the communication properties of the HPC system need to be taken into consideration when allocating workload, so that frequent, heavy communication is routed through fast network links.
    Strategies that consider the heterogeneous physical communication capabilities of the hardware are called architecture-aware. After demonstrating that hypergraph partitioning leads to more efficient workload allocation in SNN simulations, this thesis proposes a novel sequential hypergraph partitioning algorithm that incorporates network bandwidth via profiling. This leads to a significant reduction in execution time (up to 14x speedup on synthetic benchmark simulations compared to architecture-agnostic partitioners). The motivating context of this work is large-scale brain simulation; however, in the era of social media, large graphs and hypergraphs are increasingly relevant in many other scientific applications. A common feature of such graphs is that they are too big for a single machine to handle, in terms of both performance and memory requirements. State-of-the-art multilevel partitioners have been shown to struggle to scale to large graphs in distributed memory, not just because they take a long time to process, but also because they require full knowledge of the graph (not possible for dynamic graphs) and must fit the graph entirely in memory (not possible for very large graphs). To address these limitations we propose a parallel implementation of our architecture-aware streaming hypergraph partitioning algorithm (HyperPRAW) to model distributed applications. Results demonstrate that HyperPRAW produces consistent speedups over previous streaming approaches that only consider hyperedge overlap (up to 5.2x). Compared to a multilevel global partitioner on dense hypergraphs (those with high average cardinality), HyperPRAW produces workload allocations that speed up runtime in a synthetic simulation benchmark by up to 4.3x. HyperPRAW has the potential to scale to very large hypergraphs because it only requires local information to make allocation decisions, with a memory footprint an order of magnitude smaller than that of global partitioners. The combined contributions of this thesis lead to a novel, parallel, scalable, streaming hypergraph partitioning algorithm (HyperPRAW) that can be used to help scale large distributed simulations on HPC systems. HyperPRAW helps tackle three of the main scalability challenges: it produces highly balanced distributed computation and communication, minimising idle time between computing nodes; it reduces communication overhead by placing frequently communicating simulation elements close to each other (where the communication cost is minimal); and it provides a solution with a reasonable memory footprint that allows larger problems to be tackled than state-of-the-art alternatives such as global multilevel partitioning.
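
    As a rough illustration of the streaming, architecture-aware idea, the sketch below assigns each vertex on arrival to the partition that maximises hyperedge overlap while respecting a balance constraint and penalising slow links to the partitions it already communicates with. It is a simplified greedy heuristic under assumed scoring and bandwidth models, not HyperPRAW itself.

        # Simplified greedy streaming hypergraph partitioner with an architecture-aware term.
        # Not HyperPRAW; the scoring, capacity rule and bandwidth model are illustrative assumptions.

        import math

        def stream_partition(vertices, hyperedges, num_parts, bandwidth, gamma=1.0):
            """Assign each vertex, in streaming order, to a partition.

            vertices:   sequence of vertex ids (seen once, in order)
            hyperedges: dict edge_id -> set of vertex ids
            bandwidth:  num_parts x num_parts matrix; higher means a faster link
            gamma:      weight of the architecture-aware (slow-link) penalty
            """
            vertices = list(vertices)
            capacity = math.ceil(len(vertices) / num_parts)   # hard balance constraint
            incident = {}                                      # vertex -> hyperedges containing it
            for e, verts in hyperedges.items():
                for v in verts:
                    incident.setdefault(v, []).append(e)

            assignment, load = {}, [0] * num_parts
            edge_parts = {e: set() for e in hyperedges}        # partitions each hyperedge already touches

            for v in vertices:
                best, best_score = None, float("-inf")
                for p in range(num_parts):
                    if load[p] >= capacity:                    # keep computation balanced
                        continue
                    edges = incident.get(v, [])
                    overlap = sum(1 for e in edges if p in edge_parts[e])   # co-location reward
                    remote = {q for e in edges for q in edge_parts[e] if q != p}
                    link_cost = sum(1.0 / bandwidth[p][q] for q in remote)  # penalise slow links
                    score = overlap - gamma * link_cost
                    if score > best_score:
                        best, best_score = p, score
                assignment[v] = best
                load[best] += 1
                for e in incident.get(v, []):
                    edge_parts[e].add(best)
            return assignment

        # Toy example: 6 neurons, 2 computing nodes, intra-node links 10x faster than inter-node
        bw = [[10.0, 1.0], [1.0, 10.0]]
        edges = {"e1": {0, 1, 2}, "e2": {2, 3}, "e3": {3, 4, 5}}
        print(stream_partition(range(6), edges, num_parts=2, bandwidth=bw))
        # -> {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}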