
    Best effort measurement based congestion control


    Practical Strategic Reasoning with Applications in Market Games.

    Strategic reasoning is part of our everyday lives: we negotiate prices, bid in auctions, write contracts, and play games. We choose actions in these scenarios based on our preferences and our beliefs about the preferences of the other participants. Game theory provides a rich mathematical framework through which we can reason about the influence of these preferences. Clever abstractions allow us to predict the outcome of complex agent interactions; however, as the scenarios we model increase in complexity, the abstractions we use to enable classical game-theoretic analysis lose fidelity. In empirical game-theoretic analysis, we construct game models using empirical sources of knowledge, such as high-fidelity simulation. However, utilizing empirical knowledge introduces a host of different computational and statistical problems. I investigate five main research problems that focus on efficient selection, estimation, and analysis of empirical game models. I introduce a flexible modeling approach, where we may construct multiple game-theoretic models from the same set of observations. I propose a principled methodology for comparing empirical game models and a family of algorithms that select a model from a set of candidates. I develop algorithms for normal-form games that efficiently identify formations: sets of strategies that are closed under a (correlated) best-response correspondence. This aids in problems, such as finding Nash equilibria, that are key to analysis but hard to solve. I investigate policies for sequentially determining which profiles to simulate when constrained by a simulation budget. Efficient policies allow modelers to analyze complex scenarios by evaluating only a subset of the profiles, and the policies I introduce outperform existing policies in experiments. I establish a principled methodology for evaluating strategies given an empirical game model. I employ this methodology in two case studies of market scenarios: first, a case study in supply chain management from the perspective of a strategy designer; then, a case study in Internet ad auctions from the perspective of a mechanism designer. As part of the latter analysis, I develop an ad-auctions scenario that captures, for the first time, several key strategic issues in this domain.
    Ph.D. Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/75848/1/prjordan_1.pd
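    The notion of a formation admits a compact check in small games. Below is a minimal, hypothetical Python sketch that tests whether a set of pure strategies in a two-player symmetric normal-form game is closed under the pure-strategy best-response correspondence; the dissertation's algorithms also handle correlated best responses and are far more efficient than this brute-force illustration.

```python
# Toy check of the "formation" idea: a set of pure strategies that is
# closed under best response in a 2-player symmetric normal-form game.
# Purely illustrative; it brute-forces pure-strategy best responses only
# and does not treat correlated best-response correspondences.

def best_responses(payoff, opponent_strategy):
    """Pure-strategy best responses to one opponent pure strategy.
    payoff[i][j] is the row player's payoff for playing i against j."""
    column = [payoff[i][opponent_strategy] for i in range(len(payoff))]
    best = max(column)
    return {i for i, u in enumerate(column) if u == best}

def is_closed_under_best_response(payoff, strategy_set):
    """Closed means: every best response to a member stays in the set."""
    return all(best_responses(payoff, j) <= strategy_set
               for j in strategy_set)

# Hypothetical 3-strategy game: {0, 1} is closed, {2} is not (0 beats 2).
payoff = [
    [3, 1, 4],
    [1, 3, 0],
    [0, 0, 2],
]
print(is_closed_under_best_response(payoff, {0, 1}))  # True
print(is_closed_under_best_response(payoff, {2}))     # False
```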

    Online Modeling and Tuning of Parallel Stream Processing Systems

    Writing performant computer programs is hard. Code for high-performance applications is profiled, tweaked, and re-factored for months, specifically for the hardware on which it is to run. Consumer application code doesn't get the endless massaging that benefits high-performance code, even though heterogeneous processor environments are beginning to resemble those in more performance-oriented arenas. This thesis offers a path to performant, parallel code (through stream processing) which is tuned online and automatically adapts to the environment it is given. This approach has the potential to reduce the tuning costs associated with high-performance code and brings the benefit of performance tuning to consumer applications where it would otherwise be cost-prohibitive. This thesis introduces a stream processing library and multiple techniques to enable its online modeling and tuning.
    Stream processing (also termed data-flow programming) is a compute paradigm that views an application as a set of logical kernels connected via communication links, or streams. Stream processing is increasingly used by computational-x and x-informatics fields (e.g., biology, astrophysics) where the focus is on safe and fast parallelization of specific big-data applications. A major advantage of stream processing is that it enables parallelization without requiring manual end-user management of the non-deterministic behavior often characteristic of more traditional parallel processing methods. Many big-data and high-performance applications involve high-throughput processing, necessitating the use of many parallel compute kernels on several compute cores. Optimizing the orchestration of kernels has been the focus of much theoretical and empirical modeling work. Purely theoretical parallel programming models can fail when the assumptions implicit within the model are mismatched with reality (i.e., the model is incorrectly applied). Often it is unclear whether the assumptions are actually being met, even when verified under controlled conditions. Full empirical optimization solves this problem by extensively searching the range of likely configurations under native operating conditions. This, however, is expensive in both time and energy. For large, massively parallel systems, even deciding which modeling paradigm to use is often prohibitively expensive, and the answer is unfortunately transient (changing with workload and hardware). In an ideal world, a parallel run-time would re-optimize an application continuously to match its environment, with little additional overhead. This work presents methods aimed at doing just that, through low-overhead instrumentation, modeling, and optimization. Online optimization provides a good trade-off between static optimization and online heuristics. To enable online optimization, modeling decisions must be fast and relatively accurate.
    Online modeling and optimization of a stream processing system first requires a stream processing framework that is amenable to the intended type of dynamic manipulation. To fill this void, we developed the RaftLib C++ template library, which enables use of the stream processing paradigm for C++ applications (this run-time is the basis of almost all the work within this dissertation). An application topology is specified by the user; however, almost everything else is optimizable by the run-time. RaftLib takes advantage of the knowledge gained during the design of several prior streaming languages (notably Auto-Pipe). The resultant framework enables online migration of tasks, auto-parallelization, online buffer reallocation, and other useful dynamic behaviors that were not available in many previous stream processing systems. Several benchmark applications have been designed to assess the performance gains of our approaches and to compare performance with other leading stream processing frameworks. Information is essential to any modeling task; to that end, a low-overhead instrumentation framework has been developed which is both dynamic and adaptive. Discovering a fast and relatively optimal configuration for a stream processing application often necessitates solving for buffer sizes within a finite-capacity queueing network. We show that a generalized gain/loss network flow model can bootstrap the process under certain conditions. Any modeling effort requires that a model be selected, often a highly manual task involving many expensive operations. This dissertation demonstrates that machine learning methods (such as a support vector machine) can successfully select models at run-time for a streaming application. The full set of approaches is incorporated into the open-source RaftLib framework.
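    To make the paradigm concrete, here is a minimal data-flow sketch in Python. It is not the RaftLib API (RaftLib is a C++ template library); it only illustrates the core idea of independent kernels connected by bounded streams, whose capacities are exactly the kind of parameter an online tuner would resize at run time.

```python
# Hedged sketch of the stream-processing (data-flow) paradigm in miniature:
# independent kernels linked by bounded streams. Not the RaftLib API.

import threading
import queue

SENTINEL = object()  # marks end-of-stream

def source(out_stream):
    """Producer kernel: emits work items into its output stream."""
    for i in range(10):
        out_stream.put(i)
    out_stream.put(SENTINEL)

def transform(in_stream, out_stream):
    """Compute kernel: reads items, applies a function, forwards results."""
    while (item := in_stream.get()) is not SENTINEL:
        out_stream.put(item * item)
    out_stream.put(SENTINEL)

def sink(in_stream, results):
    """Consumer kernel: drains its input stream."""
    while (item := in_stream.get()) is not SENTINEL:
        results.append(item)

# Bounded queues play the role of streams; their capacity is the kind of
# knob an online tuner would adjust (here fixed at 4 for illustration).
a, b = queue.Queue(maxsize=4), queue.Queue(maxsize=4)
results = []
threads = [threading.Thread(target=source, args=(a,)),
           threading.Thread(target=transform, args=(a, b)),
           threading.Thread(target=sink, args=(b, results))]
for t in threads: t.start()
for t in threads: t.join()
print(results)  # [0, 1, 4, 9, ..., 81]
```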

    Dynamic power allocation and routing for satellite and wireless networks with time varying channels

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2004. Includes bibliographical references (p. 283-295). This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections.
    Satellite and wireless networks operate over time-varying channels that depend on attenuation conditions, power allocation decisions, and inter-channel interference. In order to reliably integrate these systems into a high-speed data network and meet the increasing demand for high throughput and low delay, it is necessary to develop efficient network-layer strategies that fully utilize the physical-layer capabilities of each network element. In this thesis, we develop the notion of network-layer capacity and describe capacity-achieving power allocation and routing algorithms for general networks with wireless links and adaptive transmission rates. Fundamental issues of delay, throughput optimality, fairness, implementation complexity, and robustness to time-varying channel conditions and changing user demands are discussed. Analysis is performed at the packet level and fully considers the queueing dynamics in systems with arbitrary, potentially bursty, arrival processes. Applications of this research are examined for the specific cases of satellite networks and ad-hoc wireless networks.
    In Chapter 3, we consider a multi-beam satellite downlink and develop a dynamic power allocation algorithm that allocates power to each link in reaction to queue backlog and current channel conditions. The algorithm operates without knowledge of the arriving traffic or channel statistics, and is shown to achieve maximum throughput while maintaining average delay guarantees. At the end of Chapter 4, a crosslinked collection of such satellites is considered and a satellite separation principle is developed, demonstrating that joint optimal control can be implemented with separate algorithms for the downlinks and crosslinks.
    Ad-hoc wireless networks are given special attention in Chapter 6. A simple cell-partitioned model for a mobile ad-hoc network with N users is constructed, and exact expressions for capacity and delay are derived. End-to-end delay is shown to be O(N), and hence grows large as the size of the network is increased. To reduce delay, a transmission protocol which sends redundant packet information over multiple paths is developed and shown to provide O(√N) delay at the cost of reduced throughput. A fundamental rate-delay tradeoff curve is established, and the given protocols for achieving O(N) and O(√N) delay are shown to operate on distinct boundary points of this curve.
    In Chapters 4 and 5, we consider optimal control for a general time-varying network. A cross-layer strategy is developed that stabilizes the network whenever possible and makes fair decisions about which data to serve when inputs exceed capacity. The strategy is decoupled into separate algorithms for dynamic flow control, power allocation, and routing, and allows each user to make greedy decisions independent of the actions of others. The combined strategy is shown to yield data rates that are arbitrarily close to the optimally fair operating point that is achieved when all network controllers are coordinated and have perfect knowledge of future events. The cost of approaching this fair operating point is an end-to-end delay increase for data that is served by the network.
    by Michael J. Neely. Ph.D.
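    The flavor of backlog-driven allocation can be illustrated with a toy max-weight-style scheduler in Python. This simplified sketch (all arrival and rate distributions are made up) serves, each slot, the link maximizing the product of queue backlog and current rate, with no knowledge of traffic or channel statistics; the thesis's algorithms generalize this idea to power allocation, multi-hop routing, and flow control.

```python
# Hedged sketch: backlog-weighted ("max-weight"-style) scheduling over
# time-varying channels. One link is served per slot; the rule reacts only
# to current backlogs and rates, never to arrival or channel statistics.

import random

def max_weight_schedule(backlogs, rates):
    """Pick the link with the largest backlog-rate product this slot."""
    return max(range(len(backlogs)), key=lambda l: backlogs[l] * rates[l])

random.seed(0)
num_links, backlogs = 3, [0, 0, 0]
for slot in range(1000):
    # Hypothetical random arrivals (mean 0.5 packets/slot per link) and
    # time-varying channel rates; both distributions are made up.
    for l in range(num_links):
        backlogs[l] += random.randint(0, 1)
    rates = [random.randint(1, 5) for _ in range(num_links)]
    served = max_weight_schedule(backlogs, rates)
    backlogs[served] = max(0, backlogs[served] - rates[served])

# Backlogs stay bounded when arrivals lie inside the capacity region.
print(backlogs)
```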

    Comparing order-picking solutions for a warehouse


    To Wave Or Not To Wave? Order Release Policies for Warehouses with an Automated Sorter

    Wave-based release policies are prevalent in warehouses with an automated sorter, and they take different forms depending on how much the waves overlap and whether the sorter is split for operating purposes. Waveless release is emerging as an alternative policy adopted by an increasing number of firms. While that newer policy presents several advantages relative to waves, it also involves the possibility of gridlock at the sorter. In collaboration with a large US online retailer, and using an extensive dataset of detailed flow information, we first develop a model with validated predictive accuracy for its warehouses operating under a waveless release policy. We then use that model to compute operational guidelines for dynamically controlling the main parameter of its waveless policy, with the goal of maximizing throughput while keeping the risk of gridlock under a specified threshold. Second, we leverage that model and dataset to perform a simulation-based comparison of wave-based and waveless policies in this context. Our waveless policy yields throughput greater than or equal to that of the best-performing wave-based policy, with a lower gridlock probability, in all scenarios considered. Waveless release policies thus appear to merit very serious consideration by practitioners. Facilities using a non-overlapping wave policy should also consider overlapping waves or a split-sorter policy.
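    A rough sketch of the control idea, in Python: given any predictor of gridlock probability as a function of the main waveless parameter (here, a cap on released orders), pick the largest cap whose predicted risk stays under the threshold. The logistic risk curve below is purely hypothetical and merely stands in for the paper's empirically validated model.

```python
# Hedged sketch: tune the main waveless parameter (a release cap) so that
# estimated gridlock risk stays under a threshold. The risk model here is a
# made-up logistic curve, not the paper's validated predictor.

import math

def gridlock_probability(release_cap):
    """Stand-in risk model: risk rises steeply with the cap (hypothetical)."""
    return 1.0 / (1.0 + math.exp(-(release_cap - 120) / 8.0))

def max_safe_cap(risk_threshold, cap_range):
    """Largest cap whose predicted gridlock risk stays below threshold."""
    safe = min(cap_range)
    for cap in cap_range:
        if gridlock_probability(cap) < risk_threshold:
            safe = max(safe, cap)
    return safe

# Throughput grows with the cap, so take the largest cap that respects the
# risk threshold; re-run as conditions (and hence the risk model) change.
print(max_safe_cap(0.05, range(50, 201)))  # 96 with this toy curve
```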

    Scalable visual analytics over voluminous spatiotemporal data

    Fall 2018. Includes bibliographical references.
    Visualization is a critical part of modern data analytics. This is especially true of interactive and exploratory visual analytics, which encourages speedy discovery of trends, patterns, and connections in data by allowing analysts to rapidly change what data is displayed and how it is displayed. Unfortunately, the explosion of data production in recent years has led to problems of scale, as storage, processing, querying, and visualization have struggled to keep pace with data volumes. Visualization of spatiotemporal data poses unique challenges, thanks in part to high dimensionality in the input feature space, interactions between features, and the production of voluminous, high-resolution outputs. In this dissertation, we address challenges associated with supporting interactive, exploratory visualization of voluminous spatiotemporal datasets and underlying phenomena. This requires the visualization of millions of entities, and of changes to these entities as the spatiotemporal phenomena unfold. The rendering and propagation of spatiotemporal phenomena must be both accurate and timely. Key contributions of this dissertation include: 1) the temporal and spatial coupling of spatially localized models to enable the visualization of phenomena at far greater geospatial scales; 2) the ability to directly compare and contrast diverging spatiotemporal outcomes that arise from multiple exploratory "what-if" queries; and 3) the computational framework required to support an interactive user experience in a heavily resource-constrained environment. We additionally provide support for collaborative and competitive exploration with multiple synchronized clients.

    Engineering Aggregation Operators for Relational In-Memory Database Systems

    In this thesis we study the design and implementation of aggregation operators in the context of relational in-memory database systems. In particular, we identify and address the following challenges: cache efficiency, CPU friendliness, parallelism within and across processors, robust handling of skewed data, adaptive processing, processing with constrained memory, and integration with modern database architectures. Our resulting algorithm outperforms the state of the art by up to 3.7x.
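    As a hedged illustration of the general shape of such an operator (not the thesis's algorithm), the Python sketch below shows the common two-phase pattern for parallel in-memory aggregation: each worker pre-aggregates its partition into a thread-local hash table, and the partial tables are then merged. The thesis engineers far more than this (cache-conscious layouts, skew handling, adaptivity); this only conveys the basic structure.

```python
# Hedged sketch: two-phase parallel hash aggregation, akin to
# SUM(value) GROUP BY key over a partitioned in-memory input.

from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def local_aggregate(rows):
    """Phase 1: per-partition aggregation into a local table, no sharing."""
    table = defaultdict(int)
    for key, value in rows:
        table[key] += value
    return table

def merge(tables):
    """Phase 2: combine the partial aggregates into the final result."""
    result = defaultdict(int)
    for table in tables:
        for key, value in table.items():
            result[key] += value
    return result

rows = [("a", 1), ("b", 2), ("a", 3), ("c", 4)] * 1000
partitions = [rows[i::4] for i in range(4)]  # 4-way split of the input
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(local_aggregate, partitions))
print(dict(merge(partials)))  # {'a': 4000, 'b': 2000, 'c': 4000}
```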

    IoT and Sensor Networks in Industry and Society

    The exponential progress of Information and Communication Technology (ICT) is one of the main elements that fueled the acceleration of globalization. The Internet of Things (IoT), Artificial Intelligence (AI), and big data analytics are some of the key players in the digital transformation that is affecting every aspect of daily life, from environmental monitoring to healthcare systems, from production processes to social interactions. In less than 20 years, people's everyday lives have been revolutionized, and concepts such as Smart Home, Smart Grid, and Smart City have become familiar even to non-technical users.
    The integration of embedded systems, ubiquitous Internet access, and Machine-to-Machine (M2M) communications has paved the way for paradigms such as IoT and Cyber-Physical Systems (CPS) to be introduced also in high-requirement environments such as those related to industrial processes, in the forms of the Industrial Internet of Things (IIoT or I2oT) and Cyber-Physical Production Systems (CPPS). As a consequence, in 2011 Germany's High-Tech Strategy 2020 Action Plan first envisioned the concept of Industry 4.0, which is rapidly reshaping traditional industrial processes. The term reflects the promise of a fourth industrial revolution. Indeed, the first industrial revolution was triggered by water and steam power; electricity and assembly lines enabled mass production in the second; and in the third, the introduction of control automation and Programmable Logic Controllers (PLCs) gave a boost to factory production. As opposed to the previous revolutions, Industry 4.0 takes advantage of Internet access, M2M communications, and deep learning not only to improve production efficiency but also to enable so-called mass customization, i.e., the mass production of personalized products by means of modularized product design and flexible processes.
    Less than five years later, in January 2016, the Japanese 5th Science and Technology Basic Plan took a further step by introducing the concept of the Super Smart Society, or Society 5.0. According to this vision, scientific and technological innovation will guide our society into the next social revolution after the hunter-gatherer, agrarian, industrial, and information eras. Society 5.0 is a human-centered society that fosters the simultaneous achievement of economic, environmental, and social objectives, to ensure a high quality of life for all citizens. This information-enabled revolution aims to tackle today's major challenges such as an ageing population, social inequalities, depopulation, and constraints related to energy and the environment. Accordingly, citizens will experience impressive transformations in every aspect of their daily lives.
    This book offers an insight into the key technologies that are going to shape the future of industry and society. It is subdivided into five parts: Part I presents a horizontal view of the main enabling technologies, whereas Parts II-V offer a vertical perspective on four different environments.
    Part I, dedicated to IoT and sensor network architectures, encompasses three Chapters. In Chapter 1, Peruzzi and Pozzebon analyse the literature on energy harvesting solutions for IoT monitoring systems and architectures based on Low-Power Wide-Area Networks (LPWAN). The Chapter does not limit the discussion to the Long Range Wide Area Network (LoRaWAN), SigFox, and Narrowband-IoT (NB-IoT) communication protocols, but also includes other relevant solutions such as DASH7 and Long Term Evolution Machine-Type Communication (LTE-M). In Chapter 2, Hussein et al. discuss the development of an Internet of Things message protocol that supports multi-topic messaging. The Chapter further presents the implementation of a platform, based on a real-time operating system, which integrates the proposed communication protocol. In Chapter 3, Li et al. investigate the heterogeneous task scheduling problem for data-intensive scenarios, aiming to reduce the global task execution time and, consequently, data centers' energy consumption. The proposed approach maximizes efficiency by comparing the costs of remote task execution and data migration.
    Part II is dedicated to Industry 4.0 and includes two Chapters. In Chapter 4, Grecuccio et al. propose a solution to integrate IoT devices by leveraging a blockchain-enabled gateway based on Ethereum, so that they do not need to rely on centralized intermediaries and third-party services. As explained in the Chapter, where performance is evaluated in a food-chain traceability application, this solution is particularly beneficial in Industry 4.0 domains. Chapter 5, by De Fazio et al., addresses the issue of safety in workplaces by presenting a smart garment that integrates several low-power sensors to monitor environmental and biophysical parameters. This enables the detection of dangerous situations, so as to prevent, or at least reduce the consequences of, workers' accidents.
    Part III comprises two Chapters on the topic of Smart Buildings. In Chapter 6, Petroșanu et al. review the literature on recent developments in the smart building sector related to the use of supervised and unsupervised machine learning models on sensory data. The Chapter pays particular attention to enhanced sensing, energy efficiency, and optimal building management. In Chapter 7, Oh examines how educating prosumers about their energy consumption habits affects power consumption reduction and encourages energy conservation, sustainable living, and behavioral change in residential environments. In this Chapter, energy consumption monitoring is made possible through the use of smart plugs.
    Smart Transport is the subject of Part IV, which includes three Chapters. In Chapter 8, Roveri et al. propose an approach that leverages small-world theory to control swarms of vehicles connected through Vehicle-to-Vehicle (V2V) communication protocols. Considering a queue dominated by short-range car-following dynamics, the Chapter demonstrates that safety and security are increased by the introduction of a few selected random long-range communications. In Chapter 9, Nitti et al. present a real-time system to observe and analyze the mobility of public transport passengers by tracking them throughout their journeys on public transport vehicles. The system is based on detecting active Wi-Fi interfaces through the analysis of Wi-Fi probe requests. In Chapter 10, Miler et al. discuss the development of a tool for analyzing and comparing the efficiency of the integrated IT systems supporting the operational activities of Road Transport Enterprises (RTEs). The authors further provide a holistic evaluation of the efficiency of telematics systems in RTE operational management.
    The book ends with the two Chapters of Part V, on Smart Environmental Monitoring. In Chapter 11, He et al. propose a Sea Surface Temperature Prediction (SSTP) model based on time-series similarity measures, multiple pattern learning, and parameter optimization. In this strategy, the optimal parameters are determined by means of an improved Particle Swarm Optimization method. In Chapter 12, Tsipis et al. present a low-cost, WSN-based IoT system that seamlessly embeds a three-layered cloud/fog computing architecture, suitable for facilitating smart agricultural applications, especially those related to wildfire monitoring.
    We wish to thank all the authors who contributed to this book for their efforts, and we express our gratitude to all the reviewers for their volunteer support and precious feedback during the review process. We hope that this book provides valuable information and spurs meaningful discussion among researchers, engineers, businesspeople, and other experts about the role of new technologies in industry and society.