
    Suggestive Annotation: A Deep Active Learning Framework for Biomedical Image Segmentation

    Image segmentation is a fundamental problem in biomedical image analysis. Recent advances in deep learning have achieved promising results on many biomedical image segmentation benchmarks. However, due to large variations in biomedical images (different modalities, image settings, objects, noise, etc.), applying deep learning to a new application usually requires a new set of training data. This can incur a great deal of annotation effort and cost, because only biomedical experts can annotate effectively, and often there are too many instances in images (e.g., cells) to annotate. In this paper, we aim to address the following question: with limited effort (e.g., time) for annotation, which instances should be annotated in order to attain the best performance? We present a deep active learning framework that combines fully convolutional networks (FCNs) and active learning to significantly reduce annotation effort by making judicious suggestions on the most effective annotation areas. We utilize uncertainty and similarity information provided by the FCNs and formulate a generalized version of the maximum set cover problem to determine the most representative and uncertain areas for annotation. Extensive experiments using the 2015 MICCAI Gland Challenge dataset and a lymph node ultrasound image segmentation dataset show that, using annotation suggestions produced by our method, state-of-the-art segmentation performance can be achieved with only 50% of the training data. Comment: Accepted at MICCAI 2017.
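
    A minimal sketch of the suggestion step as the abstract describes it: filter to the most uncertain candidate regions, then greedily pick a representative subset by maximizing a coverage-style objective (a standard greedy for the generalized maximum set cover formulation mentioned above). The names, array shapes, and the `top_u` cutoff below are illustrative assumptions, not the paper's exact formulation.

    ```python
    import numpy as np

    def suggest_regions(uncertainty, similarity, k, top_u=40):
        """Pick k annotation regions that are both uncertain and representative.

        uncertainty[i]   -- per-region uncertainty score from the FCNs (1-D array)
        similarity[i, j] -- similarity between regions i and j (2-D array)
        """
        # Keep only the most uncertain regions as candidates.
        candidates = np.argsort(uncertainty)[-top_u:]
        chosen = []
        # covered[j] = best similarity of region j to any chosen region so far.
        covered = np.zeros(similarity.shape[0])
        for _ in range(min(k, top_u)):
            # Greedy max-cover step: add the candidate that most increases the
            # total similarity of all regions to the chosen set.
            gains = [np.maximum(covered, similarity[c]).sum() - covered.sum()
                     for c in candidates]
            best = candidates[int(np.argmax(gains))]
            chosen.append(int(best))
            covered = np.maximum(covered, similarity[best])
            candidates = candidates[candidates != best]
        return chosen
    ```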

    Finding Connected Dense $k$-Subgraphs

    Given a connected graph $G$ on $n$ vertices and a positive integer $k \le n$, a subgraph of $G$ on $k$ vertices is called a $k$-subgraph in $G$. We design combinatorial approximation algorithms for finding a connected $k$-subgraph in $G$ whose density is at least a factor $\Omega(\max\{n^{-2/5}, k^2/n^2\})$ of the density of the densest $k$-subgraph in $G$ (which is not necessarily connected). These provide the first non-trivial approximations for the densest connected $k$-subgraph problem on general graphs.
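
    In this line of work, the density of a vertex set $S$ is the edges-to-vertices ratio (stated here with the usual definition; the paper may normalize differently), so for fixed $k$, maximizing density and maximizing the number of induced edges coincide, and the guarantee above reads:

    \[
    \rho(S) = \frac{|E(G[S])|}{|S|}, \qquad
    \rho(S_{\mathrm{alg}}) \;\ge\; \Omega\!\left(\max\{n^{-2/5},\, k^{2}/n^{2}\}\right) \cdot \max_{|S| = k} \rho(S).
    \]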

    Budget-restricted utility games with ordered strategic decisions

    We introduce the concept of budget games. Players choose a set of tasks, and each task has a certain demand on every resource in the game. Each resource has a budget; if the budget does not suffice to satisfy the sum of all demands, it has to be shared between the tasks. We study strategic budget games, where the budget is shared proportionally. We also consider a variant in which the order of the strategic decisions influences the distribution of the budgets. We analyze the complexity of the optimal solution as well as the existence, complexity, and quality of equilibria. Finally, we show that the time an ordered budget game needs to converge to an equilibrium may be exponential.
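
    A minimal sketch of the proportional sharing rule on a single resource, assuming (as one natural reading) that when the budget suffices every task simply receives its full demand; the function name and interface are illustrative, not from the paper:

    ```python
    def proportional_share(budget, demands):
        """Proportional budget sharing on one resource: if total demand
        exceeds the budget, task i receives budget * demands[i] / total;
        otherwise every demand is satisfied in full."""
        total = sum(demands)
        if total <= budget:
            return list(demands)
        return [budget * d / total for d in demands]

    # Example: budget 10, demands 4 + 8 = 12 > 10, so the tasks receive
    # 10*4/12 = 3.33... and 10*8/12 = 6.66...
    print(proportional_share(10, [4, 8]))
    ```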

    Computing Stable Coalitions: Approximation Algorithms for Reward Sharing

    Consider a setting where selfish agents are to be assigned to coalitions or projects from a fixed set $P$. Each project $k$ is characterized by a valuation function; $v_k(S)$ is the value generated by a set $S$ of agents working on project $k$. We study the following classic problem in this setting: "how should the agents divide the value that they collectively create?". One traditional approach in cooperative game theory is to study core stability, with the implicit assumption that there are infinitely many copies of one project and agents can partition themselves into any number of coalitions. In contrast, we consider a model with a finite number of non-identical projects; this makes computing both high-welfare solutions and core payments highly non-trivial. The main contribution of this paper is a black-box mechanism that reduces the problem of computing a near-optimal core stable solution to the purely algorithmic problem of welfare maximization; we apply this to compute an approximately core stable solution that extracts one-fourth of the optimal social welfare for the class of subadditive valuations. We also show much stronger results for several popular sub-classes: anonymous, fractionally subadditive, and submodular valuations, as well as provide new approximation algorithms for welfare maximization with anonymous functions. Finally, we establish a connection between our setting and the well-studied simultaneous auctions with item bidding; we adapt our results to compute approximate pure Nash equilibria for these auctions. Comment: Under review.
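
    For reference, the classical core condition that the paper's finite-projects model refines: a payoff vector $x$ is core stable if no coalition of agents could move to a project and create more value than its members are currently paid, i.e. (roughly, ignoring which projects are actually available to a deviating coalition):

    \[
    \sum_{i \in S} x_i \;\ge\; v_k(S) \qquad \text{for all coalitions } S \text{ and projects } k \in P.
    \]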

    Limitations to Frechet's Metric Embedding Method

    Frechet's classical isometric embedding argument has evolved into a major tool in the study of metric spaces. An important example of a Frechet embedding is Bourgain's embedding. The authors have recently shown that for every $\varepsilon > 0$, any $n$-point metric space contains a subset of size at least $n^{1-\varepsilon}$ which embeds into $\ell_2$ with distortion $O(\log(2/\varepsilon)/\varepsilon)$. The embedding we used is non-Frechet, and the purpose of this note is to show that this is not coincidental. Specifically, for every $\varepsilon > 0$, we construct arbitrarily large $n$-point metric spaces such that the distortion of any Frechet embedding into $\ell_p$ on subsets of size at least $n^{1/2+\varepsilon}$ is $\Omega((\log n)^{1/p})$. Comment: 10 pages, 1 figure.
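
    For context, a Frechet embedding maps each point to its vector of distances to a family of anchor sets, with Bourgain's embedding choosing those sets randomly at geometric scales. A minimal sketch with an illustrative interface:

    ```python
    def frechet_embedding(points, dist, anchor_sets):
        """Map each point x to its vector of distances to anchor sets:
        f(x)_j = min over a in A_j of dist(x, a)."""
        return [[min(dist(x, a) for a in A) for A in anchor_sets]
                for x in points]
    ```

    The note's lower bound says that no choice of anchor sets can beat $\Omega((\log n)^{1/p})$ distortion on the large subsets described above, whereas non-Frechet maps can.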

    Approximating k-Forest with Resource Augmentation: A Primal-Dual Approach

    In this paper, we study the $k$-forest problem in the model of resource augmentation. In the $k$-forest problem, given an edge-weighted graph $G(V,E)$, a parameter $k$, and a set of $m$ demand pairs $\subseteq V \times V$, the objective is to construct a minimum-cost subgraph that connects at least $k$ demands. The problem is hard to approximate: the best-known approximation ratio is $O(\min\{\sqrt{n}, \sqrt{k}\})$, and $k$-forest is as hard to approximate as the notoriously hard densest $k$-subgraph problem. While the $k$-forest problem is hard to approximate in the worst case, we show that with the use of resource augmentation, we can efficiently approximate it up to a constant factor. First, we restate the problem in terms of the number of demands that are not connected: the objective of the $k$-forest problem can be viewed as removing at most $m-k$ demands and finding a minimum-cost subgraph that connects the remaining demands. We use this perspective of the problem to explain the performance of our algorithm (in terms of the augmentation) in a more intuitive way. Specifically, we present a polynomial-time algorithm for the $k$-forest problem that, for every $\epsilon > 0$, removes at most $m-k$ demands and has cost no more than $O(1/\epsilon^{2})$ times the cost of an optimal algorithm that removes at most $(1-\epsilon)(m-k)$ demands.
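
    Written out, the bicriteria guarantee compares against an optimum that must connect more demands. With $\mathrm{OPT}(r)$ denoting the minimum cost of a subgraph obtained after removing at most $r$ demands (notation introduced here for readability), the algorithm removes at most $m-k$ demands and satisfies:

    \[
    \mathrm{cost}(\mathrm{ALG}) \;\le\; O\!\left(1/\epsilon^{2}\right) \cdot \mathrm{OPT}\big((1-\epsilon)(m-k)\big).
    \]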

    Line-distortion, Bandwidth and Path-length of a graph

    We investigate the minimum line-distortion and the minimum bandwidth problems on unweighted graphs and their relations with the minimum length of a Robertson-Seymour path-decomposition. The length of a path-decomposition of a graph is the largest diameter of a bag in the decomposition, and the path-length of a graph is the minimum length over all its path-decompositions. In particular, we show:
    - if a graph $G$ can be embedded into the line with distortion $k$, then $G$ admits a Robertson-Seymour path-decomposition with bags of diameter at most $k$ in $G$;
    - for every class of graphs with path-length bounded by a constant, there exist an efficient constant-factor approximation algorithm for the minimum line-distortion problem and an efficient constant-factor approximation algorithm for the minimum bandwidth problem;
    - there is an efficient 2-approximation algorithm for computing the path-length of an arbitrary graph;
    - AT-free graphs and some intersection families of graphs have path-length at most 2;
    - for AT-free graphs, there exist a linear-time 8-approximation algorithm for the minimum line-distortion problem and a linear-time 4-approximation algorithm for the minimum bandwidth problem.
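
    To make the central definition concrete, the helper below evaluates the length of one given path-decomposition (illustrative interface; minimizing this quantity over all decompositions is the hard part that the 2-approximation above addresses):

    ```python
    from itertools import combinations

    def decomposition_length(bags, dist):
        """Length of a given path-decomposition: the largest diameter of any
        bag, where the diameter of a bag is the maximum distance in G (via
        dist(u, v)) between two of its vertices. `bags` is the ordered list
        of vertex sets of the decomposition."""
        return max(
            (max((dist(u, v) for u, v in combinations(bag, 2)), default=0)
             for bag in bags),
            default=0,
        )
    ```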

    Smoothed Complexity Theory

    Smoothed analysis is a new way of analyzing algorithms introduced by Spielman and Teng (J. ACM, 2004). Classical methods like worst-case or average-case analysis have accompanying complexity classes, such as P and AvgP, respectively. While worst-case or average-case analysis gives us a means to talk about the running time of a particular algorithm, complexity classes allow us to talk about the inherent difficulty of problems. Smoothed analysis is a hybrid of worst-case and average-case analysis and compensates for some of their drawbacks. Despite its success for the analysis of single algorithms and problems, there is no embedding of smoothed analysis into computational complexity theory, which is necessary to classify problems according to their intrinsic difficulty. We propose a framework for smoothed complexity theory, define the relevant classes, and prove some first hardness results (for bounded halting and tiling) and tractability results (for binary optimization problems, graph coloring, and satisfiability). Furthermore, we discuss extensions and shortcomings of our model and relate it to semi-random models. Comment: To be presented at MFCS 2012.
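
    For orientation, Spielman and Teng's original notion of polynomial smoothed complexity (stated here informally for Gaussian perturbations; the framework in the paper handles other perturbation models) requires that the expected running time on any adversarial instance, after a small random perturbation, be polynomial in the input size and the inverse perturbation magnitude:

    \[
    \max_{x \,:\, |x| = n} \ \mathbb{E}_{g}\!\left[\, T_A(x + \sigma \|x\|\, g) \,\right] \;\le\; \mathrm{poly}(n, 1/\sigma),
    \]

    where $g$ is a vector of independent standard Gaussians and $T_A$ is the running time of algorithm $A$.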

    The Discovery of a Gravitationally Lensed Quasar at z = 6.51

    Strong gravitational lensing provides a powerful probe of the physical properties of quasars and their host galaxies. A high fraction of the most luminous high-redshift quasars was predicted to be lensed due to magnification bias; however, no multiply imaged quasar had been found at $z > 5$ in previous surveys. We report the discovery of J043947.08+163415.7, a strongly lensed quasar at $z = 6.51$, the first such object detected at the epoch of reionization and the brightest quasar yet known at $z > 5$. High-resolution HST imaging reveals a multiply imaged system with a maximum image separation $\theta \sim 0.2''$, best explained by a model of three quasar images lensed by a low-luminosity galaxy at $z \sim 0.7$, with a magnification factor of $\sim 50$. The existence of this source suggests that a significant population of strongly lensed, high-redshift quasars could have been missed by previous surveys, as standard color selection techniques would fail when the quasar color is contaminated by the lensing galaxy. Comment: 8 pages, 4 figures, submitted to ApJ.

    Fast Distributed Approximation for Max-Cut

    Finding a maximum cut is a fundamental task in many computational settings. Surprisingly, it has been insufficiently studied in the classic distributed settings, where vertices communicate by synchronously sending messages to their neighbors according to the underlying graph, known as the $\mathcal{LOCAL}$ or $\mathcal{CONGEST}$ models. We amend this by obtaining almost optimal algorithms for Max-Cut on a wide class of graphs in these models. In particular, for any $\epsilon > 0$, we develop randomized approximation algorithms achieving a ratio of $(1-\epsilon)$ to the optimum for Max-Cut on bipartite graphs in the $\mathcal{CONGEST}$ model, and on general graphs in the $\mathcal{LOCAL}$ model. We further present efficient deterministic algorithms, including a $1/3$-approximation for Max-Dicut in our models, thus improving the best known (randomized) ratio of $1/4$. Our algorithms make non-trivial use of the greedy approach of Buchbinder et al. (SIAM Journal on Computing, 2015) for maximizing an unconstrained (non-monotone) submodular function, which may be of independent interest.
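
    For contrast with these guarantees, the trivial zero-communication baseline has each vertex pick a side uniformly at random, which cuts every edge with probability $1/2$ and is therefore a $1/2$-approximation for Max-Cut in expectation (and a $1/4$-approximation for Max-Dicut, the randomized ratio that the deterministic $1/3$ above improves). A centralized sketch:

    ```python
    import random

    def random_cut(adj):
        """Baseline: each vertex independently picks side 0 or 1, so every
        edge is cut with probability 1/2 and the expected cut is |E|/2.
        `adj` maps each vertex to the set of its neighbours (undirected)."""
        side = {v: random.randrange(2) for v in adj}
        cut_size = sum(
            1
            for u in adj
            for v in adj[u]
            if u < v and side[u] != side[v]  # count each undirected edge once
        )
        return side, cut_size
    ```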