
    Complex networks analysis in socioeconomic models

    This chapter reviews complex-network models and methods that were either developed for or applied to socioeconomic issues and that are pertinent to the theme of New Economic Geography. After an introduction to the foundations of the field of complex networks, the summary adds insights on the statistical mechanical approach and on the computational aspects most relevant to the treatment of these systems. As the most frequently used model for interacting agent-based systems, a brief description of the statistical mechanics of the classical Ising model on regular lattices is included, together with recent extensions of the same model to small-world Watts-Strogatz and scale-free Barabási-Albert complex networks. Other sections of the chapter are devoted to applications of complex networks to economics, finance, the spreading of innovations, and regional trade and development. The chapter also reviews results involving applications of complex networks to other relevant socioeconomic issues, including results for opinion and citation networks. Finally, some avenues for future research are introduced before the main conclusions of the chapter are summarized.
    Comment: 39 pages, 185 references; (not final version of) a chapter prepared for Complexity and Geographical Economics - Topics and Tools, P. Commendatore, S.S. Kayam and I. Kubin, Eds. (Springer, to be published).
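    As a concrete illustration of the kind of model the chapter surveys, the sketch below runs single-spin-flip Metropolis dynamics for the Ising model on a Watts-Strogatz small-world graph. It is a minimal example using networkx; the graph size, rewiring probability, coupling, temperature and step count are arbitrary choices for illustration, not values taken from the chapter.

```python
# Minimal Metropolis sketch of the Ising model on a Watts-Strogatz network.
# Illustrative only; parameters (n, k, p, T, steps) are assumptions.
import math
import random
import networkx as nx

def ising_metropolis(G, T=2.0, steps=50_000, J=1.0, seed=0):
    """Run single-spin-flip Metropolis dynamics; return mean |magnetization|."""
    rng = random.Random(seed)
    spins = {v: rng.choice((-1, 1)) for v in G.nodes}
    nodes = list(G.nodes)
    mags = []
    for t in range(steps):
        v = rng.choice(nodes)
        # Energy change of flipping spin v: dE = 2 * J * s_v * sum of neighbor spins
        dE = 2.0 * J * spins[v] * sum(spins[u] for u in G.neighbors(v))
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[v] = -spins[v]
        if t > steps // 2:  # discard the first half as burn-in before measuring
            mags.append(abs(sum(spins.values())) / len(nodes))
    return sum(mags) / len(mags)

if __name__ == "__main__":
    # Small-world substrate: n nodes, each wired to k neighbors, rewiring prob. p
    G = nx.watts_strogatz_graph(n=500, k=4, p=0.1, seed=1)
    print("mean |m| at T=2.0:", ising_metropolis(G, T=2.0))
```

    Swapping in nx.barabasi_albert_graph would give the scale-free counterpart discussed in the chapter; only the substrate changes, not the dynamics.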

    Fastpass: A Centralized “Zero-Queue” Datacenter Network

    An ideal datacenter network should provide several properties, including low median and tail latency, high utilization (throughput), fair allocation of network resources between users or applications, deadline-aware scheduling, and congestion (loss) avoidance. Current datacenter networks inherit the principles that went into the design of the Internet, where packet transmission and path selection decisions are distributed among the endpoints and routers. Instead, we propose that each sender should delegate to a centralized arbiter control of when each packet should be transmitted and what path it should follow. This paper describes Fastpass, a datacenter network architecture built using this principle. Fastpass incorporates two fast algorithms: the first determines the time at which each packet should be transmitted, while the second determines the path to use for that packet. In addition, Fastpass uses an efficient protocol between the endpoints and the arbiter, and an arbiter replication strategy for fault-tolerant failover. We deployed and evaluated Fastpass in a portion of Facebook's datacenter network. Our results show that Fastpass achieves throughput comparable to current networks at a 240× reduction in queue lengths (4.35 Mbytes reduced to 18 Kbytes), achieves much fairer and more consistent flow throughputs than the baseline TCP (a 5200× reduction in the standard deviation of per-flow throughput with five concurrent connections), scales from 1 to 8 cores in the arbiter implementation with the ability to schedule 2.21 Terabits/s of traffic in software on eight cores, and achieves a 2.5× reduction in the number of TCP retransmissions in a latency-sensitive service at Facebook.
    Funding: National Science Foundation (U.S.) (grant IIS-1065219); Irwin Mark Jacobs and Joan Klein Jacobs Presidential Fellowship; Hertz Foundation (Fellowship).
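    To make the centralized-arbiter idea concrete, the sketch below grants, per timeslot, at most one packet per source and per destination by greedily matching outstanding demands. This is only an illustration of the timeslot-allocation constraint the abstract describes, not the actual Fastpass algorithm, and it omits the path-selection step entirely.

```python
# Illustrative sketch of centralized per-timeslot allocation in the spirit of
# Fastpass: in each timeslot, each source sends to at most one destination and
# each destination receives from at most one source (a greedy maximal matching).
# A simplification for illustration, not the paper's timeslot-allocation algorithm.
from collections import deque

def allocate_timeslot(demands):
    """demands: deque of (src, dst) packet requests; returns the pairs granted this slot."""
    busy_src, busy_dst, granted, deferred = set(), set(), [], deque()
    while demands:
        src, dst = demands.popleft()
        if src not in busy_src and dst not in busy_dst:
            busy_src.add(src)
            busy_dst.add(dst)
            granted.append((src, dst))
        else:
            deferred.append((src, dst))  # conflicts wait for a later timeslot
    demands.extend(deferred)
    return granted

if __name__ == "__main__":
    demands = deque([("A", "X"), ("B", "X"), ("A", "Y"), ("C", "Z")])
    slot = 0
    while demands:
        print("slot", slot, "->", allocate_timeslot(demands))
        slot += 1
```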

    Joint buffer management and scheduling for input queued switches

    Input queued (IQ) switches are highly scalable and have been the focus of many studies in academia and industry. Many scheduling algorithms have been proposed for IQ switches; however, they do not consider the buffer space requirement inside an IQ switch, which may render the scheduling algorithms inefficient in practical applications. In this dissertation, the Queue Length Proportional (QLP) algorithm is proposed for IQ switches. QLP considers both buffer management and the scheduling mechanism to obtain the optimal allocation region for both bandwidth and buffer space according to the real traffic load. In addition, this dissertation introduces the Queue Proportional Fairness (QPF) criterion, which employs the cell loss ratio as the fairness metric. The research in this dissertation shows that the utilization of network resources improves significantly with QPF. Furthermore, to support the diverse Quality of Service (QoS) requirements of heterogeneous and bursty traffic, the Weighted Minmax algorithm (WMinmax) is proposed to efficiently and dynamically allocate network resources. Lastly, to support traffic with multiple priorities and to handle the decoupling problem in practice, this dissertation introduces a multi-dimensional scheduling algorithm that aims to find the optimal scheduling region in multi-dimensional Euclidean space.
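    The proportionality idea behind QLP can be illustrated with a toy allocation that divides a shared resource across virtual output queues according to their current backlogs. This is a hypothetical sketch of that single idea, not the dissertation's QLP algorithm, which also couples buffer management with the scheduling decision.

```python
# Hypothetical sketch of queue-length-proportional (QLP-style) allocation:
# shares of a resource (bandwidth or buffer) are assigned in proportion to
# current virtual-output-queue backlogs. Illustration only.

def proportional_shares(queue_lengths, capacity):
    """Split `capacity` across queues in proportion to their backlogs."""
    total = sum(queue_lengths)
    if total == 0:
        return [0.0] * len(queue_lengths)
    return [capacity * q / total for q in queue_lengths]

if __name__ == "__main__":
    # Three VOQs with backlogs of 10, 30 and 60 cells sharing 100 units of buffer
    print(proportional_shares([10, 30, 60], capacity=100))  # [10.0, 30.0, 60.0]
```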

    An Introduction to Systems Biology for Mathematical Programmers

    Many recent advances in biology, medicine and health care are due to computational efforts that rely on new mathematical results. These mathematical tools lie in discrete mathematics, statistics and probability, and optimization, and when combined with savvy computational tools and an understanding of cellular biology they are capable of remarkable results. One of the most significant areas of growth is the field of systems biology, where detailed biological information is used to construct models that describe larger entities. This chapter is designed to be an introduction to systems biology for individuals in Operations Research (OR) and mathematical programming who already know the supporting mathematics but are unaware of current research in this field.

    Mapper on Graphs for Network Visualization

    Networks are an exceedingly popular type of data for representing relationships between individuals, businesses, proteins, brain regions, telecommunication endpoints, etc. Network or graph visualization provides an intuitive way to explore the node-link structures of network data for instant sense-making. However, naive node-link diagrams can fail to convey insights regarding network structures, even for moderately sized data of a few hundred nodes. We propose to apply the mapper construction, a popular tool in topological data analysis, to graph visualization, which provides a strong theoretical basis for summarizing network data while preserving their core structures. We develop a variation of the mapper construction targeting weighted, undirected graphs, called mapper on graphs, which generates property-preserving summaries of graphs. We provide a software tool that enables interactive exploration of such summaries and demonstrates the effectiveness of our method on synthetic and real-world data. The mapper on graphs approach we propose represents a new class of techniques that leverages tools from topological data analysis to address challenges in graph visualization.
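    The following sketch gives a rough, simplified version of a mapper-style summary on a graph: filter the nodes by a scalar function (node degree here, an arbitrary choice), cover the filter range with overlapping intervals, take the connected components of the induced subgraphs as summary nodes, and link components that share vertices. The paper's mapper on graphs construction for weighted, undirected graphs is more refined; this only conveys the general idea.

```python
# Rough sketch of a mapper-style graph summary. Illustration only; the filter
# function, interval count and overlap are assumptions, not the paper's choices.
import networkx as nx

def mapper_on_graph(G, n_intervals=4, overlap=0.25):
    f = {v: G.degree(v) for v in G.nodes}            # filter function (assumed: degree)
    lo, hi = min(f.values()), max(f.values())
    width = (hi - lo) / n_intervals or 1.0
    clusters = []                                     # each cluster becomes a summary node
    for i in range(n_intervals):
        a = lo + i * width - overlap * width
        b = lo + (i + 1) * width + overlap * width
        pre = [v for v in G.nodes if a <= f[v] <= b]  # preimage of the interval
        for comp in nx.connected_components(G.subgraph(pre)):
            clusters.append(set(comp))
    summary = nx.Graph()
    summary.add_nodes_from(range(len(clusters)))
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            if clusters[i] & clusters[j]:             # overlapping clusters share an edge
                summary.add_edge(i, j)
    return summary, clusters

if __name__ == "__main__":
    G = nx.karate_club_graph()
    S, C = mapper_on_graph(G)
    print(S.number_of_nodes(), "summary nodes,", S.number_of_edges(), "edges")
```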

    Computational Approaches for Estimating Life Cycle Inventory Data

    Data gaps in life cycle inventory (LCI) are stumbling blocks for investigating the life cycle performance and impact of emerging technologies. It can be tedious, expensive and time-consuming for LCI practitioners to collect LCI data or to wait for experimental data to become available. I propose a computational approach to estimate missing LCI data using link prediction techniques from network science. LCI data in Ecoinvent 3.1 are used to test the method. The proposed approach is based on the similarities between different processes or environmental interventions in the LCI database. By comparing two processes' material inputs and emission outputs, I measure the similarity of these processes. I hypothesize that similar processes tend to have similar material inputs and emission outputs, which are the life cycle inventory data I want to estimate. In particular, I measure similarity using four metrics: average difference, Pearson correlation coefficient, Euclidean distance, and SimRank, with or without data normalization. I test these four metrics and the normalization method for their performance in estimating missing LCI data. The results show that processes in the same industrial classification have higher similarities, which validates the approach of measuring the similarity between unit processes. I remove a small set of data (from one data point to 50) for each process and then use the rest of the LCI data to train the model for estimating the removed data. It is found that approximately 80% of the removed data can be successfully estimated with less than 10% error. This study is the first attempt in the search for an effective computational method for estimating missing LCI data. It is anticipated that this approach will significantly transform LCI compilation and LCA studies in the future.
    Master of Science, Natural Resources and Environment, University of Michigan. http://deepblue.lib.umich.edu/bitstream/2027.42/134693/3/Cai_Jiarui_Document.pd
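    The similarity-based estimation idea can be sketched as follows: represent each unit process as a vector of material inputs and emission outputs, score pairwise similarity (Pearson correlation in this toy example), and fill a missing entry from the most similar processes. The flow layout and numbers are invented for illustration; the thesis evaluates four similarity metrics and a normalization step on Ecoinvent 3.1 data.

```python
# Toy sketch of similarity-based estimation of a missing LCI entry.
# Process vectors and flow indices are made up for illustration only.
import numpy as np

def pearson(a, b):
    mask = ~np.isnan(a) & ~np.isnan(b)      # compare only flows both processes report
    if mask.sum() < 2:
        return 0.0
    return float(np.corrcoef(a[mask], b[mask])[0, 1])

def estimate_missing(target, others, flow_idx, k=2):
    """Estimate target[flow_idx] as the mean of that flow over the k most similar processes."""
    scored = sorted(others, key=lambda p: pearson(target, p), reverse=True)[:k]
    vals = [p[flow_idx] for p in scored if not np.isnan(p[flow_idx])]
    return float(np.mean(vals)) if vals else np.nan

if __name__ == "__main__":
    nan = np.nan
    # rows: unit processes, columns: flows (e.g. steel in, electricity in, CO2 out, ...)
    target = np.array([1.0, 2.0, nan, 0.5])          # third flow is the data gap
    others = [np.array([1.1, 2.1, 3.0, 0.6]),
              np.array([0.9, 1.8, 2.8, 0.4]),
              np.array([5.0, 0.1, 9.0, 7.0])]
    print("estimated missing flow:", estimate_missing(target, others, flow_idx=2))
```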