1,271 research outputs found
This House Proves that Debating is Harder than Soccer
Over the last twenty years, a great deal of research has been conducted on the sports elimination problem: given a sports league and its remaining matches, decide whether a given team can still possibly win the competition, i.e., place first in the league at the end. Previously, the computational complexity of this problem was investigated only for games with two participating teams per game. In this paper we consider Debating Tournaments and Debating Leagues in the British Parliamentary format, where four teams participate in each game. We prove that it is NP-hard to decide whether a given team can win a Debating League, even if at most two matches remain for each team. This contrasts with settings like football, where two teams play in each game, since there this case is still polynomial-time solvable. We prove our result even for a fictitious restricted setting with only three teams per game. On the other hand, for the common setting of Debating Tournaments we show that this problem is fixed-parameter tractable when the parameter is the number of remaining rounds. This also holds for the practically very important question of whether a team can still qualify for the knock-out phase of the tournament, and for the combined parameter that additionally includes the threshold rank for qualifying. Finally, we show that the latter problem is polynomial-time solvable for any constant threshold rank when the number of remaining rounds is part of the input. Comment: 18 pages, to appear at FUN 201
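The two-team case the abstract contrasts with can be solved in polynomial time via max-flow; for a tiny hypothetical league, the decision can also be illustrated by brute-force enumeration of the remaining match outcomes. This is only a sketch, not the paper's construction; the team names and the 3-points-per-win, no-draws scoring rule are illustrative assumptions:

```python
from itertools import product

def can_still_win(points, remaining, team):
    """Brute-force check: can `team` still finish (at least tied) first?

    points:    current points per team, e.g. {"A": 3, "B": 6}
    remaining: matches still to play, as (home, away) pairs; the winner
               of each match is assumed to get 3 points (no draws here).
    """
    for outcome in product((0, 1), repeat=len(remaining)):
        tally = dict(points)
        for (a, b), w in zip(remaining, outcome):
            tally[(a, b)[w]] += 3  # index 0 -> a wins, 1 -> b wins
        if all(tally[team] >= pts for t, pts in tally.items() if t != team):
            return True
    return False
```

Enumerating outcomes is exponential in the number of remaining matches; the point of the elimination literature is that, with two teams per game, a max-flow argument avoids this blow-up, while the paper shows the four-team debating variant is NP-hard.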
Highly parallel sparse Cholesky factorization
Several fine-grained parallel algorithms were developed and compared for computing the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed-memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special-purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data-parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and used to analyze the algorithms.
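The "key subroutine" above is a dense factorization. As a point of reference, here is a minimal serial dense Cholesky sketch in plain Python, with no parallelism; the Connection Machine version distributes exactly this computation over a 2-D processor grid:

```python
def dense_cholesky(a):
    """Return the lower-triangular L with A = L * L^T for a symmetric
    positive-definite matrix `a` (given as a list of row lists)."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # Diagonal entry: subtract the squares of the already-computed row.
        d = a[j][j] - sum(l[j][k] ** 2 for k in range(j))
        l[j][j] = d ** 0.5
        # Fill in the column below the diagonal.
        for i in range(j + 1, n):
            l[i][j] = (a[i][j] - sum(l[i][k] * l[j][k] for k in range(j))) / l[j][j]
    return l
```

In production one would call an optimized routine (e.g. LAPACK's dpotrf); the loop structure above is only meant to show the data dependencies that a parallel schedule must respect.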
Optimal web-scale tiering as a flow problem
We present a fast online solver for large-scale parametric max-flow problems as they occur in portfolio optimization, inventory management, computer vision, and logistics. Our algorithm solves an integer linear program in an online fashion. It exploits total unimodularity of the constraint matrix and a Lagrangian relaxation to solve the problem as a convex online game. The algorithm generates approximate solutions of max-flow problems by performing stochastic gradient descent on a set of flows. We apply the algorithm to optimize the tier arrangement of over 84 million web pages on a layered set of caches to serve an incoming query stream optimally.
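As a hedged illustration of the tiering objective itself (not the paper's online max-flow solver): if every page had a fixed, known query frequency, the static problem would reduce to filling the fastest tiers with the hottest pages. The greedy baseline below makes that concrete; the page names and tier sizes are made up:

```python
def greedy_tiering(page_hits, tier_sizes):
    """Assign pages to cache tiers by hit count: hottest pages go to
    tier 0 (fastest), then tier 1, and so on; pages that do not fit
    in any tier are left unassigned (served from the backing store)."""
    order = sorted(page_hits, key=page_hits.get, reverse=True)
    assignment, start = {}, 0
    for tier, size in enumerate(tier_sizes):
        for page in order[start:start + size]:
            assignment[page] = tier
        start += size
    return assignment
```

The parametric max-flow formulation generalizes this static picture: it copes with a changing query stream online, and the stochastic gradient steps on flows replace the global sort.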
Applications of network optimization
Includes bibliographical references (p. 41-48). Ravindra K. Ahuja ... [et al.]
Consistency- and Interpolation-based Semi-Supervised Learning for Object Detection
Thesis (Ph.D.) -- Seoul National University Graduate School: Graduate School of Convergence Science and Technology, Department of Convergence Science (Intelligent Convergence Systems major), 2021.8. Advisor: Nojun Kwak.
Object detection, one of the main areas of computer vision research, is the task of predicting where objects are in an RGB image and what they are. While the object detection task requires a massive number of annotated samples to guarantee its performance, placing bounding boxes around every object in each sample is costly and time-consuming. To alleviate this problem, Weakly-Supervised Learning and Semi-Supervised Learning methods have been proposed. However, they still show large efficiency gaps compared to supervised learning and require further research. In Semi-Supervised Learning in particular, the latest deep learning-based methods had not yet been applied to object detection.
In this dissertation, we apply the latest deep learning-based Semi-Supervised Learning methods to object detection, identifying and solving the problems that arise from applying the established Semi-Supervised Learning algorithms. Specifically, we adopt Consistency Regularization (CR) and Interpolation Regularization (IR) Semi-Supervised Learning methods for object detection individually, and then combine them for further performance improvement. This is the first attempt to extend CR and IR, previously used only in conventional semi-supervised classification, to the object detection problem.
First, we propose a novel Consistency-based Semi-Supervised Learning method for object Detection (CSD), which uses consistency constraints to enhance detection performance by making full use of available unlabeled data. To be specific, the consistency constraint is applied not only to object classification but also to localization. We also propose Background Elimination (BE) to avoid the negative effect of the predominant background on detection performance. We evaluated the proposed CSD on both single-stage and two-stage detectors, and the results show the effectiveness of our method.
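The consistency constraint for classification can be made concrete as a divergence between the class distributions a detector predicts for an image and for its horizontally flipped copy. The sketch below uses the Jensen-Shannon divergence as one symmetric choice; the exact loss used in the dissertation may differ:

```python
import math

def js_consistency(p, q):
    """Jensen-Shannon divergence between two class-probability lists
    (e.g. predictions for an image and for its horizontal flip).
    It is zero iff the two predictions agree, which is the state the
    consistency constraint pushes toward on unlabeled data."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    def kl(x, y):
        # KL divergence, skipping zero-probability terms.
        return sum(a * math.log(a / b) for a, b in zip(x, y) if a > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

For localization, an analogous penalty (e.g. an L2 distance between the predicted box offsets of the image and its flip, after un-flipping) plays the same role.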
Second, we present a novel Interpolation-based Semi-Supervised Learning method for object Detection (ISD), which considers and solves the problems caused by applying conventional Interpolation Regularization (IR) directly to object detection. We divide the model's output into two types according to the objectness scores of the two original patches mixed by IR. Then we apply a separate loss suitable for each type in an unsupervised manner. The proposed losses dramatically improve the performance of both supervised and Semi-Supervised Learning.
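Interpolation Regularization mixes two inputs with a random Beta-distributed weight (mixup-style); ISD then routes the mixed prediction to a per-type loss depending on the objectness of the two source patches. A minimal sketch of the mixing step follows; the Beta parameter and the flat-vector inputs are simplifying assumptions:

```python
import random

def mix_inputs(x1, x2, alpha=1.0):
    """Mixup-style interpolation: draw lam ~ Beta(alpha, alpha) and
    return lam * x1 + (1 - lam) * x2 elementwise, plus lam itself so
    the caller can weight the per-type losses accordingly."""
    lam = random.betavariate(alpha, alpha)
    mixed = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    return mixed, lam
```

In the detection setting the inputs are image tensors rather than flat lists, and the objectness scores of the two patches decide which unsupervised loss the mixed prediction feeds.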
Third, we introduce a method for combining CSD and ISD. CSD requires one additional prediction to apply consistency regularization, allocating twice (x2) as much memory as conventional supervised learning. ISD, in addition, computes two supplementary predictions to apply interpolation regularization, taking three times (x3) as much memory as conventional training. A naive combination of CSD and ISD would therefore require three extra predictions. In our method, by shuffling the samples within a mini-batch in CSD, we reduce the additional predictions from three to two, which cuts back on memory. Furthermore, combining the two algorithms yields a performance improvement.
1 Introduction 1
2 Related works 12
3 Consistency-based Semi-supervised learning for object Detection (CSD) 42
4 Interpolation-based Semi-supervised learning for object Detection (ISD) 65
5 Combination of CSD and ISD 82
6 Conclusion 98
Abstract (In Korean) 116
Acknowledgements (In Korean) 118
ImageNet Large Scale Visual Recognition Challenge
The ImageNet Large Scale Visual Recognition Challenge is a benchmark in
object category classification and detection on hundreds of object categories
and millions of images. The challenge has been run annually from 2010 to
present, attracting participation from more than fifty institutions.
This paper describes the creation of this benchmark dataset and the advances
in object recognition that have been possible as a result. We discuss the
challenges of collecting large-scale ground truth annotation, highlight key
breakthroughs in categorical object recognition, provide a detailed analysis of
the current state of the field of large-scale image classification and object
detection, and compare the state-of-the-art computer vision accuracy with human
accuracy. We conclude with lessons learned in the five years of the challenge,
and propose future directions and improvements. Comment: 43 pages, 16 figures. v3 includes additional comparisons with PASCAL VOC (per-category comparisons in Table 3, distribution of localization difficulty in Fig 16), a list of queries used for obtaining object detection images (Appendix C), and some additional references.