1,271 research outputs found

    This House Proves that Debating is Harder than Soccer

    During the last twenty years, a lot of research has been conducted on the sports elimination problem: given a sports league and its remaining matches, decide whether a given team can still win the competition, i.e., place first in the league at the end. Previously, the computational complexity of this problem was investigated only for games with two participating teams per game. In this paper we consider Debating Tournaments and Debating Leagues in the British Parliamentary format, where four teams participate in each game. We prove that it is NP-hard to decide whether a given team can win a Debating League, even if at most two matches remain for each team. This contrasts with settings like football, where two teams play in each game and this case is still polynomial-time solvable. We prove our result even for a fictitious restricted setting with only three teams per game. On the other hand, for the common setting of Debating Tournaments we show that this problem is fixed-parameter tractable when the parameter is the number of remaining rounds k. This also holds for the practically very important question of whether a team can still qualify for the knock-out phase of the tournament, and for the combined parameter k + b, where b denotes the threshold rank for qualifying. Finally, we show that the latter problem is polynomial-time solvable for any constant k and arbitrary values b that are part of the input.
    Comment: 18 pages, to appear at FUN 201
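    The polynomial-time case the abstract contrasts with (two teams per game) is usually handled with the classical elimination test via maximum flow. Below is a minimal sketch of that construction, assuming a simple most-wins league with no draws; the team names, standings, and the can_still_win helper are illustrative and not taken from the paper.

```python
# A minimal sketch of the classical two-team elimination test via max flow,
# assuming a "most wins" league with no draws (ties for first place allowed).
import networkx as nx

def can_still_win(team, wins, remaining):
    """wins: dict team -> current wins.
    remaining: dict frozenset({a, b}) -> games left between a and b."""
    best = wins[team] + sum(g for pair, g in remaining.items() if team in pair)
    if any(wins[t] > best for t in wins):
        return False  # some rival already has more wins than we can ever reach

    G = nx.DiGraph()
    others = [t for t in wins if t != team]
    total_games = 0
    for pair, g in remaining.items():
        if team in pair or g == 0:
            continue
        a, b = sorted(pair)
        total_games += g
        G.add_edge("s", ("game", a, b), capacity=g)    # games still to be played
        G.add_edge(("game", a, b), a, capacity=g)      # each win goes to one of the two teams
        G.add_edge(("game", a, b), b, capacity=g)
    for t in others:
        G.add_edge(t, "t", capacity=best - wins[t])    # a rival may at most tie our best total
    if total_games == 0:
        return True
    value, _ = nx.maximum_flow(G, "s", "t")
    return value == total_games  # all remaining games can be distributed without overtaking us

# Tiny illustrative example: can team "C" still finish first?
wins = {"A": 4, "B": 3, "C": 1}
remaining = {frozenset({"A", "B"}): 2, frozenset({"B", "C"}): 3}
print(can_still_win("C", wins, remaining))  # False: A and B still play each other twice
```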

    Highly parallel sparse Cholesky factorization

    Several fine-grained parallel algorithms were developed and compared for computing the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed-memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special-purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data-parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and used to analyze the algorithms.
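    As a point of reference for the "key subroutine" mentioned above, here is a minimal serial sketch of the dense right-looking Cholesky kernel; in the data-parallel setting the column scaling and the rank-1 trailing update would be mapped over a 2-D processor grid. The function name and the test matrix are illustrative.

```python
# Dense right-looking Cholesky: A = L @ L.T for a symmetric positive definite A.
import numpy as np

def dense_cholesky(A):
    A = np.array(A, dtype=float)          # work on a copy
    n = A.shape[0]
    for k in range(n):
        A[k, k] = np.sqrt(A[k, k])                            # pivot
        A[k+1:, k] /= A[k, k]                                 # scale the column below the pivot
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k+1:, k])     # rank-1 update of the trailing block
    return np.tril(A)

# Quick check on a small SPD matrix
M = np.array([[4., 2., 2.], [2., 5., 3.], [2., 3., 6.]])
L = dense_cholesky(M)
print(np.allclose(L @ L.T, M))  # True
```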

    A Connection Between Sports and Matroids: How Many Teams Can We Beat?


    Optimal web-scale tiering as a flow problem

    We present a fast online solver for large-scale parametric max-flow problems as they occur in portfolio optimization, inventory management, computer vision, and logistics. Our algorithm solves an integer linear program in an online fashion. It exploits total unimodularity of the constraint matrix and a Lagrangian relaxation to solve the problem as a convex online game. The algorithm generates approximate solutions of max-flow problems by performing stochastic gradient descent on a set of flows. We apply the algorithm to optimize the tier arrangement of over 84 million web pages on a layered set of caches to serve an incoming query stream optimally.
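    The general pattern the abstract describes, relaxing a hard constraint and running stochastic gradient steps, can be illustrated on a toy capacity-constrained tiering problem. The sketch below is not the authors' solver: it uses a single Lagrange multiplier for a fast-tier capacity constraint and stochastic dual subgradient steps, and the page-value distribution, capacity K, step size, and batch size are all illustrative assumptions.

```python
# Toy tiering via Lagrangian relaxation: each page enters the fast tier if its
# value exceeds the current "price" (multiplier); the price is adjusted by
# stochastic dual subgradient steps until the estimated occupancy matches K.
import numpy as np

rng = np.random.default_rng(0)
values = rng.pareto(2.0, size=100_000)   # stand-in for per-page query traffic
K = 10_000                               # fast-tier capacity (number of pages)

lam, step, batch = 0.0, 0.05, 4096
for _ in range(2000):
    idx = rng.integers(0, len(values), size=batch)
    in_tier = values[idx] > lam                               # primal response to the current price
    est_load = in_tier.mean() * len(values)                   # estimated fast-tier occupancy
    lam = max(0.0, lam + step * (est_load - K) / len(values)) # raise the price if over capacity

chosen = values > lam
print(f"price={lam:.3f}, pages in fast tier={chosen.sum()}")
```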

    Applications of network optimization

    Ravindra K. Ahuja ... [et al.]. Includes bibliographical references (p. 41-48).

    Multimodal content-based video retrieval


    The Data Science Design Manual


    κ°μ²΄κ²€μΆœ μ•Œκ³ λ¦¬μ¦˜μ„ μœ„ν•œ 일관성과 보간법 기반의 쀀지도 ν•™μŠ΅

    Thesis (Ph.D.) -- Seoul National University Graduate School: Graduate School of Convergence Science and Technology, Department of Convergence Science (Intelligent Convergence Systems Major), 2021.8. Advisor: Nojun Kwak.

    Object detection, one of the main areas of computer vision research, is the task of predicting where objects are and what they are in an RGB image. While the object detection task requires a massive number of annotated samples to guarantee its performance, placing bounding boxes around every object in each sample is costly and time-consuming. To alleviate this problem, Weakly-Supervised Learning and Semi-Supervised Learning methods have been proposed. However, they still show large gaps from supervised learning in efficiency and require further research. In Semi-Supervised Learning especially, the latest deep learning-based methods had not yet been applied to object detection. In this dissertation, we apply the latest deep learning-based Semi-Supervised Learning methods to object detection, and we identify and solve the problems that arise when the established Semi-Supervised Learning algorithms are applied directly. Specifically, we adopt Consistency Regularization (CR) and Interpolation Regularization (IR) Semi-Supervised Learning methods for object detection individually and then combine them for further performance improvement. This is the first attempt to extend CR and IR, which were previously used only in conventional semi-supervised classification problems, to the object detection problem.

    First, we propose a novel Consistency-based Semi-Supervised learning method for object Detection (CSD), which uses consistency constraints to enhance detection performance by making full use of available unlabeled data. To be specific, the consistency constraint is applied not only to object classification but also to localization. We also propose Background Elimination (BE) to avoid the negative effect of the predominant background regions on detection performance. We evaluated the proposed CSD on both single-stage and two-stage detectors, and the results show the effectiveness of our method.

    Second, we present a novel Interpolation-based Semi-supervised learning method for object Detection (ISD), which identifies and solves the problems caused by applying conventional Interpolation Regularization (IR) directly to object detection. We divide the output of the model into two types according to the objectness scores of the two original patches that are mixed in IR, and we apply a separate loss suitable for each type in an unsupervised manner. The proposed losses dramatically improve the performance of Semi-Supervised Learning as well as supervised learning.

    Third, we introduce a method for combining CSD and ISD. CSD requires one additional prediction to apply consistency regularization and therefore allocates twice (x2) as much memory as conventional supervised learning; ISD computes two supplementary predictions to apply interpolation regularization and takes three times (x3) as much memory as conventional training. Combining CSD and ISD naively would therefore require three extra predictions. In our method, by shuffling the samples within a mini-batch in CSD, we reduce the additional predictions from three to two, which cuts the memory requirement. Furthermore, combining the two algorithms shows a performance improvement.

    Korean abstract (translated): Object detection algorithms detect where and which objects appear in an RGB image, and object detection is one of the most important research areas in computer vision. However, such algorithms require large, well-labeled datasets, and this labeling takes a great deal of cost and time. To address this, Weakly Supervised Learning and Semi-Supervised Learning methods have been studied, but such research is still limited, and in Semi-Supervised Learning the latest deep learning-based training methods had not yet been applied. In this dissertation, we apply the latest deep learning-based Semi-Supervised Learning methods to object detection algorithms and identify and solve the problems that arise in doing so. Specifically, we present Semi-Supervised Learning methods based on Consistency Regularization and Interpolation Regularization, and finally a method that combines the two. This is the first time CR and IR, previously used in classification problems, have been extended to the object detection problem. First, we propose a Consistency Regularization-based Semi-Supervised Learning method for object detection (CSD), which improves detection performance by exploiting all unlabeled data through consistency constraints. Specifically, we apply the consistency constraint not only to classification but also to regression. In addition, we apply Background Elimination to reduce the influence of the background, which occupies most of the area in an image. We evaluated the proposed CSD on both single-stage and two-stage detectors, and the results demonstrate the effectiveness of our algorithm. Second, we propose an Interpolation Regularization (IR)-based Semi-Supervised Learning method for object detection (ISD). We identify and solve the problems that arise when interpolation regularization is applied directly to object detection. We divide the model's output into two types according to the object probabilities of the two original patches, and define a suitable loss function for each type. The proposed algorithm shows large performance gains in both supervised and semi-supervised learning. Finally, we introduce a method for combining CSD and ISD. CSD requires one additional computation to apply consistency regularization, which takes twice the memory of conventional supervised learning; ISD requires two additional computations and three times the memory. Combining the two algorithms therefore requires three additional predictions. We apply a method that shuffles the samples in the CSD mini-batch, reducing the additional computations from three to two and thus reducing memory consumption. We also show that combining the two algorithms improves the model's performance.

    Contents:
    1 Introduction (p. 1)
    2 Related Works (p. 12)
    3 Consistency-based Semi-supervised Learning for Object Detection (CSD) (p. 42)
    4 Interpolation-based Semi-supervised Learning for Object Detection (ISD) (p. 65)
    5 Combination of CSD and ISD (p. 82)
    6 Conclusion (p. 98)
    Abstract (In Korean) (p. 116)
    Acknowledgements (p. 118)
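    As an illustration of the consistency-regularization idea described in the abstract, here is a minimal PyTorch-style sketch of a flip-consistency loss with a simple Background Elimination mask; the detector interface, the background-score threshold, and the equal weighting of the two terms are assumptions for illustration, not the dissertation's exact CSD implementation.

```python
# Consistency loss between detector outputs on an image and its horizontally
# flipped copy, with a crude background-elimination mask.
import torch
import torch.nn.functional as F

def consistency_loss(cls_a, loc_a, cls_b_flipped, loc_b_flipped, bg_thresh=0.5):
    """cls_*: (N, C) class probabilities, loc_*: (N, 4) box offsets for the same
    anchors of an image and of its horizontally flipped copy (assumed already
    re-aligned to the original anchor order, with x-offsets negated)."""
    # Background Elimination: keep only locations where either view assigns a
    # low background score (class 0 is taken to be background here).
    keep = (cls_a[:, 0] < bg_thresh) | (cls_b_flipped[:, 0] < bg_thresh)
    if keep.sum() == 0:
        return cls_a.new_zeros(())
    p, q = cls_a[keep], cls_b_flipped[keep]
    m = 0.5 * (p + q)
    log_m = m.clamp_min(1e-12).log()                              # numerical safety
    jsd = 0.5 * (F.kl_div(log_m, p, reduction="batchmean")
                 + F.kl_div(log_m, q, reduction="batchmean"))     # classification consistency
    loc = F.mse_loss(loc_a[keep], loc_b_flipped[keep])            # localization consistency
    return jsd + loc
```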

    ImageNet Large Scale Visual Recognition Challenge

    The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements.
    Comment: 43 pages, 16 figures. v3 includes additional comparisons with PASCAL VOC (per-category comparisons in Table 3, distribution of localization difficulty in Fig 16), a list of queries used for obtaining object detection images (Appendix C), and some additional reference