
    Efficient Inference of Gaussian Process Modulated Renewal Processes with Application to Medical Event Data

    Full text link
    The episodic, irregular and asynchronous nature of medical data renders it a difficult substrate for standard machine learning algorithms. We would like to abstract away this difficulty for the class of time-stamped categorical variables (or events) by modeling them as a renewal process and inferring a probability density over continuous, longitudinal, nonparametric intensity functions modulating that process. Several methods exist for inferring such a density over intensity functions, but either their constraints and assumptions prevent their use with our potentially bursty event streams, or their time complexity renders their use intractable on our long-duration observations of high-resolution events, or both. In this paper we present a new and efficient method for inferring a distribution over intensity functions that uses direct numeric integration and smooth interpolation over Gaussian processes. We demonstrate that our direct method is up to twice as accurate and two orders of magnitude more efficient than the best existing method (thinning). Importantly, the direct method can infer intensity functions over the full range of bursty to memoryless to regular events, which thinning and many other methods cannot. Finally, we apply the method to clinical event data and demonstrate the face validity of the abstraction, which is now amenable to standard learning algorithms. Comment: 8 pages, 4 figures
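As a rough illustration of the direct approach in its memoryless (Poisson) special case, the log-likelihood of an event stream is the sum of log-intensities at the event times minus the integrated intensity, with the intensity interpolated between grid points and the integral computed numerically. The function name, the gridding, and the Poisson simplification are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def poisson_log_likelihood(event_times, grid, intensity_grid):
    """Log-likelihood of an inhomogeneous Poisson process (the memoryless
    special case of a renewal process) given intensity values on a grid.

    The intensity at event times comes from linear interpolation, and the
    compensator (the integral of the intensity over the observation
    window) from the trapezoidal rule -- the 'direct numeric integration
    and smooth interpolation' idea, in sketch form.
    """
    # intensity at each observed event, by interpolating the grid values
    lam_events = np.interp(event_times, grid, intensity_grid)
    # trapezoidal-rule integral of the intensity over the full window
    compensator = np.sum(0.5 * (intensity_grid[1:] + intensity_grid[:-1])
                         * np.diff(grid))
    return np.sum(np.log(lam_events)) - compensator
```

With a constant intensity of 2 on [0, 10] and three events, this reduces to the closed-form Poisson log-likelihood 3 log 2 - 20, which makes the numerics easy to check.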

    Online Algorithms with Randomly Infused Advice

    Get PDF
    We introduce a novel method for the rigorous quantitative evaluation of online algorithms that relaxes the "radical worst-case" perspective of classic competitive analysis. In contrast to prior work, our method, referred to as randomly infused advice (RIA), does not make any assumptions about the input sequence and does not rely on the development of designated online algorithms. Rather, it can be applied to existing randomized online algorithms, introducing a means to evaluate their performance in scenarios that lie outside the radical worst-case regime. More concretely, an online algorithm ALG with RIA benefits from pieces of advice generated by an omniscient but not entirely reliable oracle. The crux of the new method is that the advice is provided to ALG by writing it into the buffer from which ALG normally reads its random bits, allowing us to augment it through a very simple and non-intrusive interface. The (un)reliability of the oracle is captured via a parameter 0 ≤ α ≤ 1 that determines the probability (per round) that the advice is successfully infused by the oracle; if the advice is not infused, which occurs with probability 1 − α, then the buffer contains fresh random bits (as in the classic online setting). The applicability of the new RIA method is demonstrated by applying it to three extensively studied online problems: paging, uniform metrical task systems, and online set cover. For these problems, we establish new upper bounds on the competitive ratio of classic online algorithms that improve as the infusion parameter α increases. These are complemented with (often tight) lower bounds on the competitive ratio of online algorithms with RIA for the three problems.
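The infusion mechanism itself fits in a few lines: per round, with probability α the algorithm's random-bit buffer holds the oracle's advice, and otherwise it holds fresh random bits. The function name and the bit representation below are hypothetical, chosen only to make the interface concrete:

```python
import random

def ria_bits(oracle_advice, alpha, n_bits, rng=random.Random(0)):
    """Fill the algorithm's random-bit buffer for one round.

    With probability alpha the oracle successfully infuses its advice
    into the buffer; otherwise the buffer holds fresh random bits,
    exactly as in the classic online setting.
    """
    if rng.random() < alpha:
        return list(oracle_advice)                      # infusion succeeds
    return [rng.randint(0, 1) for _ in range(n_bits)]   # fresh randomness
```

The point of the interface is that ALG is unchanged: it keeps reading its randomness from the same buffer, oblivious to whether the bits were infused.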

    Query processing of spatial objects: Complexity versus Redundancy

    Get PDF
    The management of complex spatial objects in applications such as geography and cartography imposes stringent new requirements on spatial database systems, in particular on efficient query processing. As shown before, the performance of spatial query processing can be improved by decomposing complex spatial objects into simple components. Up to now, only decomposition techniques generating a linear number of very simple components, e.g., triangles or trapezoids, have been considered. In this paper, we investigate the natural trade-off between the complexity of the components and the redundancy, i.e., the number of components, and its effect on efficient query processing. In particular, we present two new decomposition methods that strike a better balance between the complexity and the number of components than previously known techniques. We compare these new decomposition methods to the traditional undecomposed representation as well as to the well-known decomposition into convex polygons with respect to their performance in spatial query processing. This comparison points out that for a wide range of query selectivities the new decomposition techniques clearly outperform both the undecomposed representation and the convex decomposition method. More important than the absolute gain in performance, by a factor of up to an order of magnitude, is the robust performance of our new decomposition techniques over the whole range of query selectivity.
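The simplest end of the trade-off, a linear number of very simple components, can be illustrated by fan-triangulating a polygon and answering a point query component by component. The helper names are illustrative, and restricting the input to convex polygons is an assumption made for brevity:

```python
def fan_triangulate(polygon):
    """Decompose a convex polygon (CCW vertex list) into n-2 triangles.

    A fan from vertex 0 yields many very simple components: high
    redundancy, minimal per-component complexity.
    """
    v0 = polygon[0]
    return [(v0, polygon[i], polygon[i + 1])
            for i in range(1, len(polygon) - 1)]

def point_in_triangle(p, tri):
    """Point query against one simple component via three sign tests."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d = [cross(tri[i], tri[(i + 1) % 3], p) for i in range(3)]
    return all(x >= 0 for x in d) or all(x <= 0 for x in d)

def point_in_decomposition(p, components):
    # a hit in any component answers the query for the whole object
    return any(point_in_triangle(p, t) for t in components)
```

Per-component tests are cheap, but the query must touch more components; the paper's methods sit between this extreme and the undecomposed representation.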

    Depth Estimation via Affinity Learned with Convolutional Spatial Propagation Network

    Full text link
    Depth estimation from a single image is a fundamental problem in computer vision. In this paper, we propose a simple yet effective convolutional spatial propagation network (CSPN) to learn the affinity matrix for depth prediction. Specifically, we adopt an efficient linear propagation model, where the propagation is performed in the manner of a recurrent convolutional operation, and the affinity among neighboring pixels is learned through a deep convolutional neural network (CNN). We apply the designed CSPN to two depth estimation tasks given a single image: (1) refining the depth output of existing state-of-the-art (SOTA) methods; and (2) converting sparse depth samples to a dense depth map by embedding the depth samples within the propagation procedure. The second task is inspired by the availability of LiDAR sensors, which provide sparse but accurate depth measurements. We evaluated the proposed CSPN on two popular depth estimation benchmarks, NYU v2 and KITTI, where we show that our proposed approach improves over prior SOTA methods in both quality (e.g., 30% further reduction in depth error) and speed (e.g., 2 to 5 times faster). Comment: 14 pages, 8 figures, ECCV 2018
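A single linear propagation step of the kind described can be sketched in NumPy: each pixel's refined depth is a convex combination of its eight neighbours, weighted by an externally supplied affinity, with the centre weight chosen so that each kernel sums to one. In the actual CSPN the affinity comes from a trained CNN and the step is iterated; the shapes and normalization here are simplifying assumptions:

```python
import numpy as np

def cspn_step(depth, affinity):
    """One linear propagation step with a per-pixel 3x3 affinity.

    depth    : (H, W) array.
    affinity : (8, H, W) array, one weight per non-centre neighbour.
    The centre weight is set so each pixel's kernel sums to one,
    keeping the update a convex combination (stable under recurrence).
    """
    H, W = depth.shape
    pad = np.pad(depth, 1, mode="edge")
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    out = np.zeros_like(depth)
    for k, (dy, dx) in enumerate(offsets):
        # shifted view of the padded image = the k-th neighbour plane
        out += affinity[k] * pad[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
    centre = 1.0 - affinity.sum(axis=0)   # normalisation of the kernel
    return out + centre * depth
```

With all-zero affinity the step is the identity, and with uniform weights it reduces to a box blur, which makes the recurrence easy to sanity-check.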

    Online Primal-Dual Algorithms with Configuration Linear Programs

    Get PDF
    In this paper, we present primal-dual algorithms for online problems with non-convex objectives. Problems with convex objectives have been extensively studied in recent years, with analyses relying crucially on convexity and Fenchel duality. Problems with non-convex objectives, however, resist current approaches, and non-convexity represents a strong barrier in optimization in general and in the design of online algorithms in particular. In our approach, we consider configuration linear programs with the multilinear extension of the objectives. We follow the multiplicative weight update framework, the novelty being that the primal update is defined based on the gradient of the multilinear extension. We introduce new notions, namely (local) smoothness, in order to characterize the competitive ratios of our algorithms. The approach leads to competitive algorithms for several problems with convex and non-convex objectives.
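The gradient that drives the primal update can be estimated by sampling: the multilinear extension F(x) is the expectation of f over random sets that include each element j independently with probability x[j], and its partial derivatives are expected marginal gains. The estimator and the bare multiplicative update below are an illustrative sketch, not the paper's configuration-LP machinery:

```python
import random

def multilinear_grad(f, x, i, samples=2000, rng=random.Random(0)):
    """Sampled partial derivative of the multilinear extension F at x:
    dF/dx_i = E[f(R + i) - f(R - i)], where R contains each element j
    independently with probability x[j]."""
    total = 0.0
    for _ in range(samples):
        R = {j for j, p in enumerate(x) if rng.random() < p}
        total += f(R | {i}) - f(R - {i})
    return total / samples

def mwu_step(f, x, i, eta=0.1):
    """Multiplicative-weight primal update driven by the sampled gradient."""
    g = multilinear_grad(f, x, i)
    x = list(x)
    x[i] *= (1.0 + eta * g)
    return x
```

For the modular function f(R) = |R| every sampled marginal gain is exactly 1, so the estimator is exact there, a convenient correctness check before trying genuinely non-convex objectives.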

    Makespan Minimization via Posted Prices

    Full text link
    We consider job scheduling settings, with multiple machines, where jobs arrive online and choose a machine selfishly so as to minimize their cost. Our objective is the classic makespan minimization objective, which corresponds to the completion time of the last job to complete. The incentives of the selfish jobs may lead to poor performance. To reconcile the differing objectives, we introduce posted machine prices. The selfish job seeks to minimize the sum of its completion time on the machine and the posted price for the machine. Prices may be static (i.e., set once and for all before any arrival) or dynamic (i.e., change over time), but they are determined only by the past, assuming nothing about upcoming events. Obviously, such schemes are inherently truthful. We consider the competitive ratio: the ratio between the makespan achievable by the pricing scheme and that of the optimal algorithm. We give tight bounds on the competitive ratio for both dynamic and static pricing schemes for identical, restricted, related, and unrelated machine settings. Our main result is a dynamic pricing scheme for related machines that gives a constant competitive ratio, essentially matching the competitive ratio of online algorithms for this setting. In contrast, dynamic pricing gives poor performance for unrelated machines. This lower bound also exhibits a gap between what can be achieved by pricing versus what can be achieved by online algorithms.
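The selfish-choice dynamic is easy to state in code: each arriving job picks the machine minimizing its own completion time plus that machine's posted price. The sketch below uses static prices on related machines (machines with speeds); a dynamic scheme would recompute the prices from the history before each arrival. All names are illustrative:

```python
def assign_selfishly(jobs, speeds, prices):
    """Simulate selfish online arrivals under posted machine prices.

    Each job picks the machine minimising (its completion time on that
    machine) + (the machine's posted price); ties break to the lowest
    index. Returns the resulting load on each machine.
    """
    loads = [0.0] * len(speeds)
    for size in jobs:
        costs = [(loads[m] + size) / speeds[m] + prices[m]
                 for m in range(len(speeds))]
        best = min(range(len(speeds)), key=lambda m: costs[m])
        loads[best] += size
    return loads

def makespan(loads, speeds):
    return max(load / speed for load, speed in zip(loads, speeds))
```

Even this toy shows the lever: with zero prices two unit jobs spread over two identical machines, while a prohibitive price on one machine forces both jobs onto the other and doubles the makespan.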

    Online Learning-Augmented Algorithms

    Get PDF
    In this thesis we study the emerging field of Online Learning-Augmented Algorithms: online algorithms which, in addition to the regular input, also receive predictions about that input beforehand. First, a few classic online problems and their analyses are presented to introduce the premise. Then, we survey some of the most important and representative recent results in the area of learning-augmented algorithms, with the goal of explaining the techniques used in their design. Finally, we give some original theoretical and experimental results for the Online Travelling Salesman Problem on the Line in such a setting. In the classical TSP on the Line, a stream of requests is released over time along the real line. The goal is to minimize the makespan, i.e., the time the algorithm needs to serve all requests. We distinguish between the open variant and the closed one, in which we additionally require the algorithm to return to the origin after serving all requests. The state of the art is a 1.64-competitive algorithm for the closed variant and a 2.04-competitive algorithm for the open variant [15]; in both cases, a tight lower bound is known [8,15]. In both variants, our primary prediction model involves predicted positions of the requests.
    We introduce algorithms that (i) are consistent, obtaining a 1.5 competitive ratio for the closed variant and a 1.66 competitive ratio for the open variant in the case of perfect predictions, (ii) are robust against unbounded prediction error, and (iii) are smooth, i.e., their performance degrades gracefully as the prediction error increases. Moreover, we further investigate the learning-augmented setting in the open variant by additionally considering a prediction for the last request served by the optimal offline algorithm. Our algorithm for this enhanced setting obtains a 1.33 competitive ratio with perfect predictions while remaining smooth and robust, beating the lower bound of 1.44 we show for our original prediction setting in the open variant. Finally, we also provide a lower bound of 1.25 for the enhanced setting.
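Ignoring release times, a natural baseline plan for the closed variant under the position-prediction model visits the nearer extreme of the predicted positions first, then sweeps to the other extreme, and returns to the origin. This is only an offline-style baseline for intuition, not the 1.5-consistent algorithm, which must also interleave waiting for releases and cope with prediction error:

```python
def planned_closed_tour(predicted_positions):
    """Closed-tour plan over predicted request positions on the line.

    Visit the nearer extreme first, then sweep to the farther one, then
    return to the origin; every predicted position lies on the sweep.
    Returns the route and its length (a baseline when release times are
    ignored).
    """
    right = max([p for p in predicted_positions if p > 0], default=0.0)
    left = min([p for p in predicted_positions if p < 0], default=0.0)
    if right <= -left:                       # right extreme is nearer
        route = [0.0, right, left, 0.0]
    else:                                    # left extreme is nearer
        route = [0.0, left, right, 0.0]
    length = sum(abs(b - a) for a, b in zip(route, route[1:]))
    return route, length
```

For predicted positions {2, -1} the plan is 0 → -1 → 2 → 0 with length 6, which matches the obvious lower bound of twice the total extent, 2·(2 + 1).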

    Online algorithms for covering and packing problems with convex objectives

    Get PDF
    We present online algorithms for covering and packing problems with (non-linear) convex objectives. The convex covering problem is defined as ...