
    Towards explainable evaluation of language models on the semantic similarity of visual concepts

    Recent breakthroughs in NLP research, such as the advent of Transformer models, have indisputably contributed to major advancements in several tasks. However, few works investigate the robustness and explainability of their evaluation strategies. In this work, we examine the behavior of high-performing pre-trained language models, focusing on the task of semantic similarity for visual vocabularies. First, we address the need for explainable evaluation metrics, which are necessary for understanding the conceptual quality of retrieved instances. Our proposed metrics provide valuable insights at both the local and global level, exposing the shortcomings of widely used approaches. Second, adversarial interventions on salient query semantics expose vulnerabilities of opaque metrics and highlight patterns in the learned linguistic representations.
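
    A minimal, hypothetical sketch of the underlying retrieval task (not the paper's proposed metrics): ranking candidate visual-vocabulary concepts against a query by cosine similarity of pre-trained embeddings. The toy vectors and concept names below are placeholders.

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity between two embedding vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical embeddings for a query concept and candidate visual concepts.
    embeddings = {
        "dog":   np.array([0.9, 0.1, 0.3]),
        "puppy": np.array([0.8, 0.2, 0.35]),
        "car":   np.array([0.1, 0.9, 0.2]),
    }

    query = "dog"
    ranked = sorted(
        (c for c in embeddings if c != query),
        key=lambda c: cosine_similarity(embeddings[query], embeddings[c]),
        reverse=True,
    )
    print(ranked)  # candidates ordered by similarity to the query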

    Exploiting locality in distributed SDN control

    Large SDN networks will be partitioned into multiple controller domains; each controller is responsible for one domain, and the controllers of adjacent domains may need to communicate to enforce global policies. This paper studies the implications of the controllers' local network view. In particular, we establish a connection to the field of local algorithms and distributed computing, and discuss lessons for the design of a distributed control plane. We show that existing local algorithms can be used to develop efficient coordination protocols in which each controller only needs to respond to events that take place in its local neighborhood. However, while existing algorithms can be used, SDN networks also suggest a new approach to the study of locality in distributed computing. We introduce the so-called supported locality model of distributed computing. The new model is more expressive than the classical models, and it is a better match for the features of SDN networks. Peer reviewed.
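
    An illustrative sketch only (not the paper's protocols or the supported locality model itself): each controller owns a domain of switches and is notified only of events inside a k-hop neighborhood of the event's location. The topology, domain assignment, and k = 1 radius are assumptions made for the example.

    # Toy topology: switch -> neighboring switches.
    topology = {
        "s1": ["s2"],
        "s2": ["s1", "s3"],
        "s3": ["s2", "s4"],
        "s4": ["s3"],
    }
    # Controller domains (a partition of the switches).
    domains = {"c1": {"s1", "s2"}, "c2": {"s3", "s4"}}

    def k_hop_neighborhood(switch, k=1):
        """Return the set of switches within k hops of the given switch."""
        frontier, seen = {switch}, {switch}
        for _ in range(k):
            frontier = {n for s in frontier for n in topology[s]} - seen
            seen |= frontier
        return seen

    def controllers_to_notify(event_switch, k=1):
        """Controllers whose domain intersects the event's k-hop neighborhood."""
        hood = k_hop_neighborhood(event_switch, k)
        return [c for c, dom in domains.items() if dom & hood]

    print(controllers_to_notify("s1"))  # only the nearby controller ('c1') reacts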

    Fastpass: A Centralized “Zero-Queue” Datacenter Network

    An ideal datacenter network should provide several properties, including low median and tail latency, high utilization (throughput), fair allocation of network resources between users or applications, deadline-aware scheduling, and congestion (loss) avoidance. Current datacenter networks inherit the principles that went into the design of the Internet, where packet transmission and path selection decisions are distributed among the endpoints and routers. Instead, we propose that each sender should delegate control to a centralized arbiter, which decides when each packet should be transmitted and what path it should follow. This paper describes Fastpass, a datacenter network architecture built using this principle. Fastpass incorporates two fast algorithms: the first determines the time at which each packet should be transmitted, while the second determines the path to use for that packet. In addition, Fastpass uses an efficient protocol between the endpoints and the arbiter and an arbiter replication strategy for fault-tolerant failover. We deployed and evaluated Fastpass in a portion of Facebook's datacenter network. Our results show that Fastpass achieves high throughput comparable to current networks at a 240× reduction in queue lengths (4.35 Mbytes reduced to 18 Kbytes), achieves much fairer and more consistent flow throughputs than the baseline TCP (a 5200× reduction in the standard deviation of per-flow throughput with five concurrent connections), scales from 1 to 8 cores in the arbiter implementation with the ability to schedule 2.21 Terabits/s of traffic in software on eight cores, and achieves a 2.5× reduction in the number of TCP retransmissions in a latency-sensitive service at Facebook. National Science Foundation (U.S.) (grant IIS-1065219); Irwin Mark Jacobs and Joan Klein Jacobs Presidential Fellowship; Hertz Foundation (Fellowship).
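
    A highly simplified illustration of the arbiter idea (not Fastpass's actual pipelined timeslot-allocation or path-selection algorithms): for each packet request, pick the earliest timeslot in which both the source and destination endpoints are still idle.

    from collections import defaultdict

    class Arbiter:
        def __init__(self):
            # For each timeslot, the set of endpoints already scheduled in it.
            self.busy = defaultdict(set)

        def allocate(self, src, dst, earliest=0):
            """Return the first timeslot >= earliest in which src and dst are both idle."""
            t = earliest
            while src in self.busy[t] or dst in self.busy[t]:
                t += 1
            self.busy[t].update((src, dst))
            return t

    arbiter = Arbiter()
    print(arbiter.allocate("A", "B"))  # slot 0
    print(arbiter.allocate("A", "C"))  # slot 1: endpoint A is busy in slot 0
    print(arbiter.allocate("B", "C"))  # slot 2: B is busy in slot 0, C in slot 1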

    Large-Scale Structure of Multi-Optimised Networks


    Quantum-accelerated constraint programming

    Constraint programming (CP) is a paradigm used to model and solve constraint satisfaction and combinatorial optimization problems. In CP, problems are modeled with constraints that describe acceptable solutions and solved with backtracking tree search augmented with logical inference. In this paper, we show how quantum algorithms can accelerate CP at both the level of inference and the level of search. Leveraging existing quantum algorithms, we introduce a quantum-accelerated filtering algorithm for the alldifferent global constraint and discuss its applicability to a broader family of global constraints with similar structure. We propose frameworks for the integration of quantum filtering algorithms within both classical and quantum backtracking search schemes, including a novel hybrid classical-quantum backtracking search method. This work suggests that CP is a promising candidate application for early fault-tolerant quantum computers and beyond. Published in Quantum.
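
    A classical baseline sketch of the two components the paper accelerates, filtering and search (no quantum steps, and only forward-checking-style pruning rather than full alldifferent propagation; the toy domains are assumptions): backtracking search interleaved with a filtering pass that removes already-assigned values from the domains of unassigned variables.

    def filter_alldifferent(domains, assignment):
        """Prune assigned values from unassigned domains; return None on a wipeout."""
        used = set(assignment.values())
        pruned = {}
        for var, dom in domains.items():
            if var in assignment:
                pruned[var] = {assignment[var]}
                continue
            new_dom = dom - used
            if not new_dom:
                return None  # a domain became empty: dead end
            pruned[var] = new_dom
        return pruned

    def backtrack(domains, assignment=None):
        assignment = assignment or {}
        domains = filter_alldifferent(domains, assignment)
        if domains is None:
            return None
        if len(assignment) == len(domains):
            return assignment
        var = next(v for v in domains if v not in assignment)
        for value in sorted(domains[var]):
            result = backtrack(domains, {**assignment, var: value})
            if result is not None:
                return result
        return None

    # Toy instance: three variables whose values must all differ.
    print(backtrack({"x": {1, 2}, "y": {1, 2}, "z": {1, 2, 3}}))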