
    Path computation in multi-layer networks: Complexity and algorithms

    Carrier-grade networks comprise several layers where different protocols coexist. Nowadays, most of these networks have different control planes to manage routing on different layers, leading to a suboptimal use of the network resources and additional operational costs. However, some routers are able to encapsulate, decapsulate and convert protocols and act as a liaison between these layers. A unified control plane would be useful to optimize the use of the network resources and automate the routing configurations. Software-Defined Networking (SDN) based architectures, such as OpenFlow, offer a chance to design such a control plane. One of the most important problems to deal with in this design is the path computation process. Classical path computation algorithms cannot resolve the problem as they do not take into account encapsulations and conversions of protocols. In this paper, we propose algorithms to solve this problem and study several cases: path computation without bandwidth constraint, under bandwidth constraint, and under other Quality of Service constraints. We study the complexity and the scalability of our algorithms and evaluate their performance on real topologies. The results show that they outperform the previous ones proposed in the literature. Comment: IEEE INFOCOM 2016, Apr 2016, San Francisco, United States. To be published in IEEE INFOCOM 2016, <http://infocom2016.ieee-infocom.org/>
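    The core difficulty the abstract points at is that a link can only carry a packet whose outermost protocol matches the link's protocol, so the search state must track the current encapsulation stack rather than just the current node. The sketch below is a minimal illustration of that idea, not the paper's algorithms: it bounds the stack depth to keep the toy search finite, and all names (graph, adaptations, multilayer_path) are hypothetical.

```python
from collections import deque

def multilayer_path(graph, adaptations, src, dst, start_proto, max_depth=3):
    """Breadth-first search over (node, protocol-stack) states.

    graph: dict node -> list of (next_node, protocol) links.
    adaptations: dict node -> set of ('push', p) / ('pop', p) capabilities,
        i.e. which encapsulations / decapsulations a router can perform.
    A link labelled p is usable only when p is the outermost protocol,
    which is why a classical shortest-path search is not enough.
    """
    start = (src, (start_proto,))
    parents = {start: None}
    queue = deque([start])
    while queue:
        node, stack = queue.popleft()
        if node == dst and stack == (start_proto,):
            path, state = [], (node, stack)
            while state is not None:          # reconstruct the state sequence
                path.append(state)
                state = parents[state]
            return list(reversed(path))
        successors = []
        for nxt, proto in graph.get(node, []):        # traverse matching links
            if proto == stack[-1]:
                successors.append((nxt, stack))
        for kind, proto in adaptations.get(node, set()):   # adapt in place
            if kind == 'push' and len(stack) < max_depth:
                successors.append((node, stack + (proto,)))
            elif kind == 'pop' and len(stack) > 1 and stack[-1] == proto:
                successors.append((node, stack[:-1]))
        for s in successors:
            if s not in parents:
                parents[s] = (node, stack)
                queue.append(s)
    return None

# A reaches C only by tunnelling IP inside MPLS through B.
graph = {'A': [('B', 'mpls')], 'B': [('C', 'mpls')], 'C': []}
adaptations = {'A': {('push', 'mpls')}, 'C': {('pop', 'mpls')}}
print(multilayer_path(graph, adaptations, 'A', 'C', 'ip'))
```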

    Taking Primitive Optimality Theory Beyond the Finite State

    Primitive Optimality Theory (OTP) (Eisner, 1997a; Albro, 1998), a computational model of Optimality Theory (Prince and Smolensky, 1993), employs a finite state machine to represent the set of active candidates at each stage of an Optimality Theoretic derivation, as well as weighted finite state machines to represent the constraints themselves. For some purposes, however, it would be convenient if the set of candidates were limited by some set of criteria capable of being described only in a higher-level grammar formalism, such as a Context Free Grammar, a Context Sensitive Grammar, or a Multiple Context Free Grammar (Seki et al., 1991). Examples include reduplication and phrasal stress models. Here we introduce a mechanism for OTP-like Optimality Theory in which the constraints remain weighted finite state machines, but sets of candidates are represented by higher-level grammars. In particular, we use multiple context-free grammars to model reduplication in the manner of Correspondence Theory (McCarthy and Prince, 1995), and develop an extended version of the Earley Algorithm (Earley, 1970) to apply the constraints to a reduplicating candidate set. Comment: 11 pages, 5 figures, workshop
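    Since the abstract hinges on extending Earley's chart parser, a minimal sketch of the plain CFG Earley recognizer (Earley, 1970) may help fix the moving parts (predict, scan, complete). The paper's extension, which applies weighted finite-state constraints over MCFG-defined candidate sets, is not reproduced here, and the grammar encoding below is an assumption.

```python
def earley_recognize(grammar, start, tokens):
    """Minimal Earley recognizer for a CFG.

    grammar: dict nonterminal -> list of right-hand sides (tuples of symbols);
    any symbol that is not a key of the dict is treated as a terminal.
    An item is a tuple (lhs, rhs, dot, origin).
    """
    n = len(tokens)
    chart = [set() for _ in range(n + 1)]
    for rhs in grammar[start]:
        chart[0].add((start, rhs, 0, 0))
    for i in range(n + 1):
        changed = True
        while changed:                       # fixpoint of predict + complete
            changed = False
            for lhs, rhs, dot, origin in list(chart[i]):
                if dot < len(rhs) and rhs[dot] in grammar:        # predict
                    for prod in grammar[rhs[dot]]:
                        item = (rhs[dot], prod, 0, i)
                        if item not in chart[i]:
                            chart[i].add(item); changed = True
                elif dot == len(rhs):                              # complete
                    for l2, r2, d2, o2 in list(chart[origin]):
                        if d2 < len(r2) and r2[d2] == lhs:
                            item = (l2, r2, d2 + 1, o2)
                            if item not in chart[i]:
                                chart[i].add(item); changed = True
        if i < n:                                                  # scan
            for lhs, rhs, dot, origin in chart[i]:
                if dot < len(rhs) and rhs[dot] not in grammar and rhs[dot] == tokens[i]:
                    chart[i + 1].add((lhs, rhs, dot + 1, origin))
    return any(lhs == start and dot == len(rhs) and origin == 0
               for lhs, rhs, dot, origin in chart[n])

grammar = {'S': [('S', 'S'), ('a',)]}
print(earley_recognize(grammar, 'S', ['a', 'a', 'a']))   # True
```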

    Survey on Combinatorial Register Allocation and Instruction Scheduling

    Register allocation (mapping variables to processor registers or memory) and instruction scheduling (reordering instructions to increase instruction-level parallelism) are essential tasks for generating efficient assembly code in a compiler. In the last three decades, combinatorial optimization has emerged as an alternative to traditional, heuristic algorithms for these two tasks. Combinatorial optimization approaches can deliver optimal solutions according to a model, can precisely capture trade-offs between conflicting decisions, and are more flexible at the expense of increased compilation time. This paper provides an exhaustive literature review and a classification of combinatorial optimization approaches to register allocation and instruction scheduling, with a focus on the techniques that are most applied in this context: integer programming, constraint programming, partitioned Boolean quadratic programming, and enumeration. Researchers in compilers and combinatorial optimization can benefit from identifying developments, trends, and challenges in the area; compiler practitioners may discern opportunities and grasp the potential benefit of applying combinatorial optimization.
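    As a toy illustration of the combinatorial view of register allocation (not any of the surveyed integer-programming, constraint-programming, PBQP, or enumeration formulations), the sketch below enumerates assignments of variables to registers or spill slots subject to interference constraints and keeps the one with fewest spills. All names are hypothetical, and the exhaustive search is only feasible for tiny instances; the surveyed approaches encode the same constraints for dedicated solvers instead.

```python
from itertools import product

def allocate(variables, interference, registers):
    """Toy combinatorial register allocation by exhaustive search.

    Each variable is assigned either a physical register or 'spill';
    interfering variables (live at the same time) may not share a register.
    The objective here is simply to minimise the number of spilled variables.
    """
    best, best_spills = None, float('inf')
    for assignment in product(list(registers) + ['spill'], repeat=len(variables)):
        alloc = dict(zip(variables, assignment))
        ok = all(alloc[a] == 'spill' or alloc[a] != alloc[b]
                 for a, b in interference)
        if ok:
            spills = sum(1 for r in alloc.values() if r == 'spill')
            if spills < best_spills:
                best, best_spills = alloc, spills
    return best

# Two registers and a triangle of interferences: one variable must spill.
print(allocate(['v1', 'v2', 'v3'],
               [('v1', 'v2'), ('v2', 'v3'), ('v1', 'v3')],
               ['r0', 'r1']))
# -> {'v1': 'r0', 'v2': 'r1', 'v3': 'spill'}
```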

    Size-Change Termination, Monotonicity Constraints and Ranking Functions

    Size-Change Termination (SCT) is a method of proving program termination based on the impossibility of infinite descent. To this end we may use a program abstraction in which transitions are described by monotonicity constraints over (abstract) variables. When only constraints of the form x > y' and x >= y' are allowed, we have size-change graphs. Both theory and practice are now more evolved in this restricted framework than in the general framework of monotonicity constraints. This paper shows that it is possible to extend and adapt some theory from the domain of size-change graphs to the general case, thus complementing previous work on monotonicity constraints. In particular, we present precise decision procedures for termination; and we provide a procedure to construct explicit global ranking functions from monotonicity constraints in singly-exponential time, which is better than what has been published so far even for size-change graphs. Comment: revised version of September 2
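    For readers unfamiliar with the SCT criterion that the paper generalises, the sketch below implements the classic closure-based test of Lee, Jones and Ben-Amram (2001) for size-change graphs, simplified to a single recursive function so that any two graphs may be composed: build the closure under composition and require every idempotent graph to carry a strictly decreasing self-arc. The paper's procedures for general monotonicity constraints and its ranking-function construction are not reproduced.

```python
def compose(g1, g2):
    """Compose size-change graphs given as sets of arcs (src, dst, strict)."""
    arcs = {(x, z, s1 or s2)
            for x, y1, s1 in g1
            for y2, z, s2 in g2 if y1 == y2}
    # keep only the strongest arc between each pair of variables
    return frozenset((x, z, s) for x, z, s in arcs
                     if s or (x, z, True) not in arcs)

def sct_terminates(graphs):
    """Closure-based SCT test: every idempotent graph in the closure of the
    given size-change graphs under composition must have a strict self-arc."""
    closure = {frozenset(g) for g in graphs}
    changed = True
    while changed:
        changed = False
        for g1 in list(closure):
            for g2 in list(closure):
                c = compose(g1, g2)
                if c not in closure:
                    closure.add(c)
                    changed = True
    return all(any(x == y and s for x, y, s in g)
               for g in closure if compose(g, g) == g)

# f(x, y) -> f(y, x - 1): arcs y >= x' (non-strict) and x > y' (strict)
print(sct_terminates([{('y', 'x', False), ('x', 'y', True)}]))   # True
```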

    Recovering non-local dependencies for Chinese

    To date, work on Non-Local Dependencies (NLDs) has focused almost exclusively on English and it is an open research question how well these approaches migrate to other languages. This paper surveys non-local dependency constructions in Chinese as represented in the Penn Chinese Treebank (CTB) and provides an approach for generating proper predicate-argument-modifier structures including NLDs from surface context-free phrase structure trees. Our approach recovers non-local dependencies at the level of Lexical-Functional Grammar f-structures, using automatically acquired subcategorisation frames and f-structure paths linking antecedents and traces in NLDs. Currently our algorithm achieves 92.2% f-score for trace insertion and 84.3% for antecedent recovery evaluating on gold-standard CTB trees, and 64.7% and 54.7%, respectively, on CTB-trained state-of-the-art parser output trees.
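    A very small illustration of the path-based idea (f-structures as nested attribute-value structures, and antecedent recovery as following a learned path from the trace site) is sketched below. The structures, paths, and names are hypothetical; the paper's actual method additionally conditions on automatically acquired subcategorisation frames and path frequencies from the CTB.

```python
def resolve_antecedent(f_structure, trace_path, candidate_paths):
    """Toy path-based antecedent recovery.

    f_structure: nested dicts standing in for an LFG f-structure;
    trace_path: attribute path from the root to the trace, e.g. ('COMP', 'OBJ');
    candidate_paths: antecedent paths, assumed ordered by preference.
    """
    def follow(fs, path):
        for attr in path:
            if not isinstance(fs, dict) or attr not in fs:
                return None
            fs = fs[attr]
        return fs

    trace_parent = follow(f_structure, trace_path[:-1])
    for path in candidate_paths:
        antecedent = follow(f_structure, path)
        if antecedent is not None and antecedent is not trace_parent:
            return antecedent
    return None

# Topicalisation-like example: the OBJ trace inside COMP is bound by TOPIC.
fs = {'TOPIC': {'PRED': 'book'},
      'COMP': {'PRED': 'read', 'SUBJ': {'PRED': 'Zhangsan'}, 'OBJ': {}}}
print(resolve_antecedent(fs, ('COMP', 'OBJ'), [('TOPIC',)]))  # {'PRED': 'book'}
```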

    Nonparametric estimation of multivariate extreme-value copulas

    Extreme-value copulas arise in the asymptotic theory for componentwise maxima of independent random samples. An extreme-value copula is determined by its Pickands dependence function, which is a function on the unit simplex subject to certain shape constraints that arise from an integral transform of an underlying measure called the spectral measure. Multivariate extensions are provided of certain rank-based nonparametric estimators of the Pickands dependence function. The shape constraint that the estimator should itself be a Pickands dependence function is enforced by replacing an initial estimator by its best least-squares approximation in the set of Pickands dependence functions having a discrete spectral measure supported on a sufficiently fine grid. Weak convergence of the standardized estimators is demonstrated and the finite-sample performance of the estimators is investigated by means of a simulation experiment. Comment: 26 pages; submitted; Université catholique de Louvain, Institut de statistique, biostatistique et sciences actuarielles
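    For orientation, the classical bivariate, rank-based Pickands-type estimator that such multivariate extensions build on can be written in a few lines. This sketch (with assumed variable names) does not include the paper's shape-constrained least-squares projection onto dependence functions with a discrete spectral measure.

```python
import numpy as np

def pickands_estimator(u, v, t):
    """Classical rank-based Pickands estimator of A(t), 0 < t < 1,
    for a bivariate extreme-value copula.

    u, v: pseudo-observations in (0, 1), e.g. ranks / (n + 1).
    """
    u, v = np.asarray(u), np.asarray(v)
    xi, eta = -np.log(u), -np.log(v)          # unit-exponential margins
    # under an EV copula, min(xi/(1-t), eta/t) is exponential with rate A(t)
    return len(u) / np.sum(np.minimum(xi / (1.0 - t), eta / t))

# pseudo-observations from the ranks of an (independent) sample
rng = np.random.default_rng(0)
x, y = rng.standard_normal(500), rng.standard_normal(500)
n = len(x)
u = (np.argsort(np.argsort(x)) + 1) / (n + 1)
v = (np.argsort(np.argsort(y)) + 1) / (n + 1)
print(pickands_estimator(u, v, 0.5))   # close to 1 for independent data
```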

    Memory and Parallelism Analysis Using a Platform-Independent Approach

    Emerging computing architectures such as near-memory computing (NMC) promise improved performance for applications by reducing the data movement between CPU and memory. However, detecting such applications is not a trivial task. In this ongoing work, we extend the state-of-the-art platform-independent software analysis tool with NMC-related metrics such as memory entropy, spatial locality, data-level, and basic-block-level parallelism. These metrics help to identify the applications more suitable for NMC architectures. Comment: 22nd ACM International Workshop on Software and Compilers for Embedded Systems (SCOPES '19), May 2019
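    As a rough illustration of the kind of metric involved (the exact definitions used by the tool follow the cited work and are not reproduced here), memory entropy can be computed as the Shannon entropy of a block-masked address trace, with coarser maskings hinting at spatial locality:

```python
import math
from collections import Counter

def memory_entropy(addresses, block_bits=0):
    """Shannon entropy (in bits) of a memory-address trace.

    Masking the low block_bits bits gives coarser-grained entropies: if the
    entropy drops sharply under masking, accesses cluster in few blocks,
    which is one way to reason about spatial locality.
    """
    blocks = Counter(addr >> block_bits for addr in addresses)
    total = sum(blocks.values())
    return -sum((c / total) * math.log2(c / total) for c in blocks.values())

# A strided trace touches many addresses but few 64-byte cache lines.
trace = [0x1000 + 8 * i for i in range(64)]
print(memory_entropy(trace))                 # 6.0 bits at word granularity
print(memory_entropy(trace, block_bits=6))   # 3.0 bits at 64-byte granularity
```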

    Wide-coverage deep statistical parsing using automatic dependency structure annotation

    A number of researchers (Lin 1995; Carroll, Briscoe, and Sanfilippo 1998; Carroll et al. 2002; Clark and Hockenmaier 2002; King et al. 2003; Preiss 2003; Kaplan et al. 2004; Miyao and Tsujii 2004) have convincingly argued for the use of dependency (rather than CFG-tree) representations for parser evaluation. Preiss (2003) and Kaplan et al. (2004) conducted a number of experiments comparing “deep” hand-crafted wide-coverage parsers with “shallow” treebank- and machine-learning-based parsers at the level of dependencies, using simple and automatic methods to convert tree output generated by the shallow parsers into dependencies. In this article, we revisit the experiments in Preiss (2003) and Kaplan et al. (2004), this time using the sophisticated automatic LFG f-structure annotation methodologies of Cahill et al. (2002b, 2004) and Burke (2006), with surprising results. We compare various PCFG and history-based parsers (based on Collins, 1999; Charniak, 2000; Bikel, 2002) to find a baseline parsing system that fits best into our automatic dependency structure annotation technique. This combined system of syntactic parser and dependency structure annotation is compared to two hand-crafted, deep constraint-based parsers (Carroll and Briscoe 2002; Riezler et al. 2002). We evaluate using dependency-based gold standards (DCU 105, PARC 700, CBS 500 and dependencies for WSJ Section 22) and use the Approximate Randomization Test (Noreen 1989) to test the statistical significance of the results. Our experiments show that machine-learning-based shallow grammars augmented with sophisticated automatic dependency annotation technology outperform hand-crafted, deep, wide-coverage constraint grammars. Currently our best system achieves an f-score of 82.73% against the PARC 700 Dependency Bank (King et al. 2003), a statistically significant improvement of 2.18% over the most recent results of 80.55% for the hand-crafted LFG grammar and XLE parsing system of Riezler et al. (2002), and an f-score of 80.23% against the CBS 500 Dependency Bank (Carroll, Briscoe, and Sanfilippo 1998), a statistically significant 3.66% improvement over the 76.57% achieved by the hand-crafted RASP grammar and parsing system of Carroll and Briscoe (2002).
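    For concreteness, a dependency-based evaluation of this kind can be sketched as an f-score over sets of (head, relation, dependent) triples plus the Approximate Randomization Test (Noreen 1989) over paired per-sentence scores. The code below is a generic illustration with assumed input formats, not the DCU evaluation software.

```python
import random

def f_score(gold, test):
    """F-score over sets of dependency triples (head, relation, dependent)."""
    tp = len(gold & test)
    p = tp / len(test) if test else 0.0
    r = tp / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def approx_randomization(scores_a, scores_b, trials=10000, seed=0):
    """Approximate Randomization Test on paired per-sentence scores:
    randomly swap which system each sentence's score is attributed to and
    count how often the mean difference is at least the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(scores_a) - sum(scores_b)) / len(scores_a)
    hits = 0
    for _ in range(trials):
        a, b = [], []
        for x, y in zip(scores_a, scores_b):
            if rng.random() < 0.5:
                x, y = y, x
            a.append(x)
            b.append(y)
        if abs(sum(a) - sum(b)) / len(a) >= observed:
            hits += 1
    return (hits + 1) / (trials + 1)   # estimated p-value
```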