13 research outputs found

    A Match in Time Saves Nine: Deterministic Online Matching With Delays

    We consider the problem of online Min-cost Perfect Matching with Delays (MPMD) introduced by Emek et al. (STOC 2016). In this problem, an even number of requests appear in a metric space at different times, and the goal of an online algorithm is to match them in pairs. In contrast to traditional online matching problems, in MPMD all requests appear online and an algorithm can match any pair of requests, but such a decision may be delayed (e.g., to find a better match). The cost is the sum of the matching distances and the introduced delays. We present the first deterministic online algorithm for this problem. Its competitive ratio is O(m^{\log_2 5.5}) = O(m^{2.46}), where 2m is the number of requests. This is polynomial in the number of metric space points if all requests are given at different points. In particular, the bound does not depend on other parameters of the metric, such as its aspect ratio. Unlike previous (randomized) solutions for the MPMD problem, our algorithm does not need to know the metric space in advance.
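
    To make the objective concrete, the following is a minimal sketch (not the paper's algorithm) of how the MPMD cost of a fixed matching can be evaluated: each matched pair pays its distance in the metric plus the delay each of its two requests waited before being matched. The 1-D line metric, the names Request and mpmd_cost, and the toy instance are illustrative assumptions, not taken from the paper.

        from dataclasses import dataclass

        @dataclass
        class Request:
            time: float    # arrival time of the request
            point: float   # position in a 1-D line metric (simplifying assumption)

        def mpmd_cost(requests, matching, match_times):
            # Total MPMD cost: sum of matching distances plus sum of delays,
            # where a request's delay is (time it was matched) - (arrival time).
            cost = 0.0
            for (i, j), t in zip(matching, match_times):
                cost += abs(requests[i].point - requests[j].point)       # connection cost
                cost += (t - requests[i].time) + (t - requests[j].time)  # delay cost
            return cost

        # Toy instance with 2m = 4 requests, paired by proximity once both partners have arrived.
        reqs = [Request(0.0, 0.0), Request(1.0, 10.0), Request(2.0, 0.5), Request(3.0, 9.0)]
        print(mpmd_cost(reqs, matching=[(0, 2), (1, 3)], match_times=[2.0, 3.0]))  # -> 5.5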

    Deep Learning: A Philosophical Introduction

    Deep learning is currently the most prominent and widely successful method in artificial intelligence. Despite having played an active role in earlier artificial intelligence and neural network research, philosophers have been largely silent on this technology so far. This is remarkable, given that deep learning neural networks have blown past predicted upper limits on artificial intelligence performance—recognizing complex objects in natural photographs, and defeating world champions in strategy games as complex as Go and chess—yet there remains no universally accepted explanation as to why they work so well. This article provides an introduction to these networks, as well as an opinionated guidebook on the philosophical significance of their structure and achievements. It argues that deep learning neural networks differ importantly in their structure and mathematical properties from the shallower neural networks that were the subject of so much philosophical reflection in the 1980s and 1990s. The article then explores several different explanations for their success, and ends by proposing ten areas of research that would benefit from future engagement by philosophers of mind, epistemology, science, perception, law, and ethics.
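
    As a purely illustrative aside (not drawn from the article), the structural contrast between shallow and deep networks that the abstract alludes to amounts to composing many hidden layers instead of a single wide one. The layer sizes, the ReLU activation, and the names below are arbitrary assumptions made for this sketch.

        import numpy as np

        def forward(x, layers):
            # Apply a stack of affine layers, each followed by a ReLU nonlinearity.
            for W, b in layers:
                x = np.maximum(0.0, W @ x + b)
            return x

        rng = np.random.default_rng(0)
        x = rng.normal(size=8)

        # "Shallow" network: one wide hidden layer.
        shallow = [(rng.normal(size=(64, 8)), np.zeros(64)),
                   (rng.normal(size=(1, 64)), np.zeros(1))]

        # "Deep" network: several narrow hidden layers composed in sequence.
        deep = ([(rng.normal(size=(8, 8)), np.zeros(8)) for _ in range(6)]
                + [(rng.normal(size=(1, 8)), np.zeros(1))])

        print(forward(x, shallow), forward(x, deep))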

    Budget Semi-supervised Learning

    No full text

    Locally Linear Embedding for dimensionality reduction in QSAR

    No full text