
    Instruction fetch architectures and code layout optimizations

    The design of higher performance processors has been following two major trends: increasing the pipeline depth to allow faster clock rates, and widening the pipeline to allow parallel execution of more instructions. Designing a higher performance processor implies balancing all the pipeline stages to ensure that overall performance is not dominated by any of them. This means that a faster execution engine also requires a faster fetch engine, to ensure that it is possible to read and decode enough instructions to keep the pipeline full and the functional units busy. This paper explores the challenges faced by the instruction fetch stage for a variety of processor designs, from early pipelined processors to the more aggressive wide-issue superscalars. We describe the different fetch engines proposed in the literature, the performance issues involved, and some of the proposed improvements. We also show how compiler techniques that optimize the layout of the code in memory can be used to improve the fetch performance of the different engines described. Overall, we show how instruction fetch has evolved from fetching one instruction every few cycles, to fetching one instruction per cycle, to fetching a full basic block per cycle, to fetching several basic blocks per cycle, tracing both the evolution of the mechanisms surrounding the instruction cache and the compiler optimizations used to better exploit them.
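
    The survey's code-layout theme can be made concrete with a small example. The sketch below is a greedy basic-block chaining pass in the spirit of Pettis and Hansen, a classic layout optimization: blocks joined by hot branches are placed adjacently so the fetch engine sees longer straight-line runs. The abstract does not name a specific algorithm, so the profile format and the merging rule here are illustrative assumptions.

```python
# Greedy basic-block chaining (Pettis-Hansen style) -- an illustrative sketch,
# not an algorithm taken from the paper. Input is an edge profile mapping
# (src_block, dst_block) -> execution count.

def chain_blocks(edges):
    """Return a code layout (ordered list of blocks) favouring hot edges."""
    chains = {}  # block -> the chain (list) it currently belongs to
    # Consider the hottest control-flow edges first.
    for (src, dst), _count in sorted(edges.items(), key=lambda e: -e[1]):
        a = chains.setdefault(src, [src])
        b = chains.setdefault(dst, [dst])
        # Merge only if src ends its chain and dst begins a different chain,
        # so the hot branch becomes a fall-through in the final layout.
        if a is not b and a[-1] == src and b[0] == dst:
            a.extend(b)
            for blk in b:
                chains[blk] = a
    # Emit each chain once, in first-seen order.
    layout, seen = [], set()
    for chain in chains.values():
        if id(chain) not in seen:
            seen.add(id(chain))
            layout.extend(chain)
    return layout

# Example: the loop branch B->C dominates the profile, so B and C end up
# adjacent; the cold successor D is pushed out of the hot path.
print(chain_blocks({("A", "B"): 10, ("B", "C"): 90,
                    ("B", "D"): 10, ("C", "B"): 85}))  # ['A', 'B', 'C', 'D']
```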

    Building Damage-Resilient Dominating Sets in Complex Networks against Random and Targeted Attacks

    We study the vulnerability of dominating sets against random and targeted node removals in complex networks. While small, cost-efficient dominating sets play a significant role in the controllability and observability of these networks, a fixed and intact network structure is always implicitly assumed. We find that the cost-efficiency of dominating sets optimized for small size alone comes at the price of vulnerability to damage: domination in the remaining network can be severely disrupted even if only a small fraction of dominator nodes is lost. We develop two new methods for finding flexible dominating sets, allowing either the overall resilience or the dominating set size to be adjusted while maximizing the dominated fraction of the remaining network after the attack. We analyze the efficiency of each method on synthetic scale-free networks, as well as on real complex networks.
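
    To make the attacked object concrete, the sketch below computes a small dominating set with the standard greedy heuristic; it is not the paper's resilient construction, whose details the abstract does not give. It also hints at the fragility the paper measures: on scale-free graphs the greedy choice concentrates on hubs, exactly the nodes a targeted attack removes first.

```python
# Standard greedy minimum-dominating-set heuristic -- a baseline sketch, not
# the paper's flexible/resilient method.

import networkx as nx

def greedy_dominating_set(G):
    """Repeatedly pick the node that newly dominates the most nodes."""
    dominated, dom_set = set(), set()
    while len(dominated) < G.number_of_nodes():
        # Gain of v = not-yet-dominated nodes in v's closed neighbourhood.
        v = max(G.nodes, key=lambda u: len(({u} | set(G[u])) - dominated))
        dom_set.add(v)
        dominated |= {v} | set(G[v])
    return dom_set

# On a synthetic scale-free graph the set is small but hub-heavy, which is
# what makes it vulnerable to targeted hub removal.
G = nx.barabasi_albert_graph(200, 2, seed=1)
D = greedy_dominating_set(G)
print(len(D), nx.is_dominating_set(G, D))
```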

    Generating Representative ISP Topologies From First-Principles

    Understanding and modeling the factors that underlie the growth and evolution of network topologies are basic questions that impact capacity planning, forecasting, and protocol research. Early topology generation work focused on generating network-wide connectivity maps, either at the AS level or the router level, typically with an eye towards reproducing abstract properties of observed topologies. But recently, advocates of an alternative "first-principles" approach have questioned the feasibility of realizing representative topologies with simple generative models that do not explicitly incorporate real-world constraints, such as the relative costs of router configurations, into the model. Our work synthesizes these two lines of research by designing a topology generation mechanism that incorporates first-principles constraints. Our goal is more modest than that of constructing an Internet-wide topology: we aim to generate representative topologies for single ISPs. However, our methods also go well beyond previous work, as we annotate these topologies with representative capacity and latency information. Taking only the demand for network services over a given region as input, we propose a natural cost model for building and interconnecting PoPs and formulate the resulting optimization problem faced by an ISP. We devise hill-climbing heuristics for this problem and demonstrate that the solutions we obtain are quantitatively similar to those in measured router-level ISP topologies, with respect to both topological properties and fault tolerance.
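
    As a rough illustration of the search procedure, the sketch below hill-climbs on a toy version of the problem: it starts from a fully meshed set of PoPs and greedily prunes links while preserving connectivity. The cost model (a fixed per-link cost plus a distance term) and the single-move neighbourhood are assumptions made for illustration; the paper's actual cost model, demand input, and capacity/latency annotations are not reproduced.

```python
# Toy hill-climbing for PoP interconnection -- an illustrative sketch only.
# Assumed cost model: fixed equipment cost per link + distance-dependent cost.

import math
import random
import networkx as nx

def link_cost(p, q, fixed=10.0, per_km=0.05):
    return fixed + per_km * math.dist(p, q)

def hill_climb(pops, steps=500, seed=0):
    """pops: {name: (x, y)}. Greedily remove links while staying connected."""
    rng = random.Random(seed)
    G = nx.complete_graph(list(pops))   # trivially feasible starting point
    for _ in range(steps):
        u, v = rng.sample(list(pops), 2)
        if G.has_edge(u, v):
            G.remove_edge(u, v)         # removing a link always cuts cost...
            if not nx.is_connected(G):  # ...but must not break connectivity
                G.add_edge(u, v)
    # (In this toy objective, adding links never helps; the paper's real
    # problem also has capacity and latency targets that would change that.)
    return G, sum(link_cost(pops[a], pops[b]) for a, b in G.edges)

rng = random.Random(42)
pops = {i: (rng.uniform(0, 500), rng.uniform(0, 500)) for i in range(8)}
G, cost = hill_climb(pops)
print(G.number_of_edges(), round(cost, 1))
```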

    Automatic frequency assignment for cellular telephones using constraint satisfaction techniques

    We study the problem of automatic frequency assignment for cellular telephone systems. The frequency assignment problem is viewed as the problem of minimizing the number of unsatisfied soft constraints in a constraint satisfaction problem (CSP) over a finite domain of frequencies, involving co-channel, adjacent-channel, and co-site constraints. The soft constraints are automatically derived from signal strength prediction data. The CSP is solved using a generalized graph coloring algorithm. Graph-theoretical results play a crucial role in making the problem tractable. Performance results from a real-world frequency assignment problem are presented. We develop the generalized graph coloring algorithm by stepwise refinement, starting from DSATUR and augmenting it with local propagation, constraint lifting, intelligent backtracking, redundancy avoidance, and iterative deepening.
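
    Since the abstract names DSATUR as the starting point, a minimal version is sketched below: at each step, the unassigned cell with the highest saturation degree (the most distinct frequencies among its already-assigned neighbours) receives the lowest feasible frequency. The refinements listed above (local propagation, constraint lifting, intelligent backtracking, and so on) and the soft-constraint scoring are omitted; the conflict-graph input is an illustrative simplification.

```python
# Minimal DSATUR sketch for frequency assignment. Only hard co-channel
# conflicts are modelled as graph edges; soft constraints are omitted.

def dsatur(adj, frequencies):
    """adj: {cell: set of conflicting cells}; frequencies: ordered candidates."""
    assignment = {}
    while len(assignment) < len(adj):
        # Saturation degree = distinct frequencies among assigned neighbours;
        # ties are broken by plain degree.
        cell = max((c for c in adj if c not in assignment),
                   key=lambda c: (len({assignment[n] for n in adj[c]
                                       if n in assignment}),
                                  len(adj[c])))
        used = {assignment[n] for n in adj[cell] if n in assignment}
        # Raises StopIteration if the available spectrum is exhausted.
        assignment[cell] = next(f for f in frequencies if f not in used)
    return assignment

# Four cells; edges are co-channel conflicts (e.g. derived from signal
# strength prediction data).
adj = {"A": {"B", "C", "D"}, "B": {"A", "C"}, "C": {"A", "B"}, "D": {"A"}}
print(dsatur(adj, [1, 2, 3, 4]))  # {'A': 1, 'B': 2, 'C': 3, 'D': 2}
```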

    Resilient Backhaul Network Design Using Hybrid Radio/Free-Space Optical Technology

    Radio-frequency (RF) technology is a scalable solution for backhaul planning. However, its performance is limited in terms of data rate and latency. Free-space optical (FSO) backhaul, on the other hand, offers a higher data rate but is sensitive to weather conditions. To combine the advantages of RF and FSO backhauls, this paper proposes a cost-efficient backhaul network using the hybrid RF/FSO technology. To ensure a resilient backhaul, the paper imposes a given degree of redundancy by connecting each node through K link-disjoint paths so as to cope with potential link failures. Hence, the network planning problem considered in this paper is that of minimizing the total deployment cost by choosing the appropriate link type, i.e., either hybrid RF/FSO or optical fiber (OF), between each pair of base stations while guaranteeing K link-disjoint connections, a data rate target, and a reliability threshold. The paper solves the problem using graph theory techniques. It reformulates the problem as a maximum weight clique problem in the planning graph, under a specified realistic assumption about the cost of OF and hybrid RF/FSO links. Simulation results show the deployment cost of the different planning strategies and suggest that the proposed heuristic solution has close-to-optimal performance with a significant gain in computational complexity.
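
    The K link-disjoint requirement has a convenient check: by Menger's theorem, a pair of nodes survives any K-1 link failures exactly when K edge-disjoint paths connect it, which reduces to a max-flow computation. The sketch below verifies that property for a candidate plan; it is only a feasibility check under that reading, not the paper's maximum-weight-clique planning heuristic.

```python
# Verify the K link-disjoint connectivity requirement of a candidate backhaul
# plan -- a feasibility check, not the paper's planning algorithm.

import networkx as nx

def meets_redundancy(G, K):
    """True iff every node pair has >= K link-disjoint paths, i.e. pairwise
    edge connectivity >= K (Menger's theorem, computed via max-flow)."""
    nodes = list(G)
    return all(nx.edge_connectivity(G, s, t) >= K
               for i, s in enumerate(nodes) for t in nodes[i + 1:])

# A ring of six base stations tolerates any single link failure (K = 2)
# but not two simultaneous failures on a path (K = 3).
ring = nx.cycle_graph(6)
print(meets_redundancy(ring, 2), meets_redundancy(ring, 3))  # True False
```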