
    On the Combinatorial Complexity of Approximating Polytopes

    Get PDF
    Approximating convex bodies succinctly by convex polytopes is a fundamental problem in discrete geometry. A convex body $K$ of diameter $\mathrm{diam}(K)$ is given in Euclidean $d$-dimensional space, where $d$ is a constant. Given an error parameter $\varepsilon > 0$, the objective is to determine a polytope of minimum combinatorial complexity whose Hausdorff distance from $K$ is at most $\varepsilon \cdot \mathrm{diam}(K)$. By combinatorial complexity we mean the total number of faces of all dimensions of the polytope. A well-known result by Dudley implies that $O(1/\varepsilon^{(d-1)/2})$ facets suffice, and a dual result by Bronshteyn and Ivanov similarly bounds the number of vertices, but neither result bounds the total combinatorial complexity. We show that there exists an approximating polytope whose total combinatorial complexity is $\tilde{O}(1/\varepsilon^{(d-1)/2})$, where $\tilde{O}$ conceals a polylogarithmic factor in $1/\varepsilon$. This is a significant improvement upon the best known bound, which is roughly $O(1/\varepsilon^{d-2})$. Our result is based on a novel combination of both old and new ideas. First, we employ Macbeath regions, a classical structure from the theory of convexity. The construction of our approximating polytope employs a new stratified placement of these regions. Second, in order to analyze the combinatorial complexity of the approximating polytope, we present a tight analysis of a width-based variant of BĂĄrĂĄny and Larman's economical cap covering. Finally, we use a deterministic adaptation of the witness-collector technique (developed recently by Devillers et al.) in the context of our stratified construction.

    Comment: In Proceedings of the 32nd International Symposium on Computational Geometry (SoCG 2016); accepted to the SoCG 2016 special issue of Discrete and Computational Geometry.
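    Side by side, the bounds above read as follows (a restatement of the abstract's claims, with $d$ fixed and $\varepsilon \to 0$):

        \begin{align*}
          \text{Dudley (facets):} &\quad O\bigl(1/\varepsilon^{(d-1)/2}\bigr) \\
          \text{Bronshteyn--Ivanov (vertices):} &\quad O\bigl(1/\varepsilon^{(d-1)/2}\bigr) \\
          \text{previous bound, faces of all dimensions:} &\quad O\bigl(1/\varepsilon^{d-2}\bigr) \\
          \text{this work, faces of all dimensions:} &\quad \tilde{O}\bigl(1/\varepsilon^{(d-1)/2}\bigr)
        \end{align*}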

    On Forgetting Relations in Relational Databases

    Get PDF
    Although not usually acknowledged as such, forgetting is a crucial aspect of human reasoning. It allows us to deal with large amounts of information, pushing irrelevant details out of our consciousness so that we can focus on the essential knowledge. Motivated by its beneficial effect on the human brain, this operation has been emulated in many formalisms in the field of Knowledge Representation and Reasoning, where several approaches to forgetting have been proposed. What these approaches have in common is that they support computer systems in dealing with inaccurate or excessive information without negatively affecting the remaining knowledge. More recently, the General Data Protection Regulation's 'right to be forgotten' has given additional impetus to the study of this operation. Surprisingly, forgetting has not yet been studied in relational databases, the most widespread technology for knowledge representation. This is a serious gap that needs to be addressed, considering the prominence of databases in our society and the relevance of the operation in numerous knowledge processing tasks. In this dissertation, we take the first steps towards addressing this need, proposing a theoretical investigation of forgetting relations in relational databases. We start by introducing an alternative formalisation of the relational model, which includes a novel notion of equivalence between databases. Afterwards, we look further into the problem of forgetting. We formally define the general concept of a relation forgetting operator and present concrete operators, each aligned with a distinct view of the operation and thus with its own unique features. Moreover, we illustrate the operators with examples inspired by realistic situations. Finally, we evaluate the operators: we formalise as properties the requirements that guided their definition and prove that they satisfy these desirable properties. Ultimately, with this work, we motivate the importance of forgetting in relational databases and lay the foundations for its study.

    Shadoks Approach to Convex Covering

    Full text link
    We describe the heuristics used by the Shadoks team in the CG:SHOP 2023 Challenge. The Challenge consists of 206 instances, each a polygon with holes. The goal is to cover each instance polygon with a small number of convex polygons. Our general strategy is as follows: we find a large collection of big (often maximal) convex polygons inside the instance polygon and then solve several set cover problems to find a small subset of the collection that covers the whole polygon.

    Comment: SoCG CG:SHOP 2023 Challenge.
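    To make the set-cover stage concrete, here is a minimal Python sketch of the classical greedy heuristic, assuming each candidate convex piece is abstracted as the set of sample points of the instance polygon that it covers (the Challenge entry may well use exact solvers instead; this is only an illustration):

        def greedy_set_cover(universe, candidates):
            """Pick candidates greedily until every element is covered.

            universe: iterable of elements to cover (e.g. sample points).
            candidates: list of frozensets, one per convex piece, giving
            the elements that piece covers. Classical ln(n)-approximation.
            """
            uncovered = set(universe)
            chosen = []
            while uncovered:
                # Take the candidate covering the most uncovered elements.
                best = max(candidates, key=lambda s: len(s & uncovered))
                if not best & uncovered:
                    raise ValueError("candidates do not cover the universe")
                chosen.append(best)
                uncovered -= best
            return chosen

        # Toy usage: cover {1..5} with three candidate sets.
        pieces = [frozenset({1, 2, 3}), frozenset({3, 4}), frozenset({4, 5})]
        print(greedy_set_cover(range(1, 6), pieces))  # picks {1,2,3}, then {4,5}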

    The Cost of Perfection for Matchings in Graphs

    Full text link
    Perfect matchings and maximum weight matchings are two fundamental combinatorial structures. We consider the ratio between the maximum weight of a perfect matching and the maximum weight of a general matching. Motivated by a computer graphics application on triangle meshes, where one seeks to convert a triangulation into a quadrangulation by merging pairs of adjacent triangles, we focus mainly on bridgeless cubic graphs. First, we characterize the graphs that attain the extreme ratios. Second, we present a lower bound for all bridgeless cubic graphs. Third, we present upper bounds for subclasses of bridgeless cubic graphs, most of which are shown to be tight. Additionally, we present tight bounds for the class of regular bipartite graphs.
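    For a fixed nonnegative weighting, the ratio in question is easy to compute with networkx; a sketch under the assumption that the graph admits a perfect matching (with maxcardinality=True the returned matching has maximum cardinality, hence is perfect in that case):

        import networkx as nx

        def perfection_ratio(G, weight="weight"):
            """Max weight of a perfect matching divided by max weight of
            an arbitrary matching, for one fixed nonnegative weighting.
            Assumes G admits a perfect matching."""
            free = nx.max_weight_matching(G, weight=weight)
            perfect = nx.max_weight_matching(G, maxcardinality=True, weight=weight)
            total = lambda M: sum(G[u][v][weight] for u, v in M)
            return total(perfect) / total(free)

        # Toy usage on a path a-b-c-d: the heavy middle edge is a maximum
        # weight matching by itself, but the perfect matching must avoid it.
        G = nx.Graph()
        G.add_weighted_edges_from([("a", "b", 1), ("b", "c", 5), ("c", "d", 1)])
        print(perfection_ratio(G))  # 2/5 = 0.4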

    Adapting for Survival: Islamic State’s Shifting Strategies

    Get PDF
    This article discusses the strategic shifts that the Islamic State (IS) has implemented in order to survive, especially with regard to its propaganda and military tactics. We argue that, for a long time now and in both domains, the IS and its predecessors have been flexible and resilient enough to adapt to new realities on the ground, shaping and reshaping their strategy and tactics in response to their enemies' capabilities and policies. In terms of propaganda, despite a decrease in its online presence, the IS has struggled to adapt some of its main narratives to the new reality brought about by the beginning of the international coalition attacks. However, evidence seems to suggest that the group will likely be able to maintain its online relevance for some time yet. Regarding its military tactics in Syria and Iraq, history and current evidence point to a return to its insurgent roots. This seems to be corroborated by the group's current increasing resort to terrorism and guerrilla tactics. Lastly, we argue that it is still premature either to claim the rebirth of the IS or to declare its demise.

    Simusoccer App: business plan

    Get PDF
    National regulations introduced in Portugal in 2015 impacted the online gambling market (betting real money), closing sports betting websites and, consequently, blocking players from betting online. This research investigates the potential of launching a mobile app (SimuSoccer) fully dedicated to recreational gambling on football results (not betting real money), which does not violate the 2015 law. The methodology combined qualitative and quantitative measures through structured questionnaires answered by 151 respondents. The research explores whether there is a market of consumers driven solely by the pleasure of playing, in a fan-loyalty relation with their favorite leagues and clubs, instead of betting real money. The key conclusions suggest a window of opportunity to launch SimuSoccer as a viable risk-free game app, following the freemium business model, while taking advantage of users' apparent preference for an intuitive interface, football exclusivity, and a fan-loyalty gaming approach.

    Efficient Algorithms for Battleship

    Get PDF
    We consider an algorithmic problem inspired by the Battleship game. In the variant of the problem that we investigate, there is a unique ship of shape $S \subset Z^2$ which has been translated in the lattice $Z^2$. We assume that a player has already hit the ship with a first shot and the goal is to sink the ship using as few shots as possible, that is, by minimizing the number of missed shots. While the player knows the shape $S$, which position of $S$ has been hit is not known. Given a shape $S$ of $n$ lattice points, the minimum number of misses that can be achieved in the worst case by any algorithm is called the Battleship complexity of the shape $S$ and denoted $c(S)$. We prove three bounds on $c(S)$, each considering a different class of shapes. First, we have $c(S) \leq n-1$ for arbitrary shapes and the bound is tight for parallelogram-free shapes. Second, we provide an algorithm that shows that $c(S) = O(\log n)$ if $S$ is an HV-convex polyomino. Third, we provide an algorithm that shows that $c(S) = O(\log \log n)$ if $S$ is a digital convex set. This last result is obtained through a novel discrete version of the Blaschke-Lebesgue inequality relating the area and the width of any convex body.

    Comment: Conference version at the 10th International Conference on Fun with Algorithms (FUN 2020).
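    As a concrete illustration of the elementary bound $c(S) \leq n-1$ (not the paper's more refined algorithms), here is a Python sketch of the natural candidate-elimination strategy: keep every translation of $S$ consistent with the observations, and shoot any untried cell used by a surviving candidate. Each miss eliminates at least one of the $n$ initial candidates while the true one always survives, so at most $n-1$ shots miss:

        def sink(shape, hidden_offset, first_hit):
            """Simulate the candidate-elimination strategy; returns #misses.

            shape: collection of (x, y) lattice cells defining S.
            hidden_offset: the unknown translation (used only to simulate).
            first_hit: a cell known to belong to the hidden ship.
            """
            shape = set(shape)
            place = lambda off: {(x + off[0], y + off[1]) for (x, y) in shape}
            ship = place(hidden_offset)
            # Any cell of S could be the one that was hit: n candidates.
            candidates = {(first_hit[0] - x, first_hit[1] - y) for (x, y) in shape}
            hits, missed = {first_hit}, set()
            while hits != ship:
                # Shoot an untried cell belonging to some surviving candidate.
                target = next(c for off in candidates for c in place(off)
                              if c not in hits and c not in missed)
                (hits if target in ship else missed).add(target)
                # Keep only placements consistent with all observations.
                candidates = {off for off in candidates
                              if hits <= place(off) and not (missed & place(off))}
            return len(missed)

        # Toy usage: an L-tromino (n = 3) hit at the origin; at most 2 misses.
        tromino = [(0, 0), (1, 0), (0, 1)]
        print(sink(tromino, hidden_offset=(0, 0), first_hit=(0, 0)))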

    An experimental study of the partitioning of trace elements between rutile and silicate melt as a function of oxygen fugacity

    Get PDF
    Subduction zone or arc magmas are known to display a characteristic depletion of High Field Strength Elements (HFSE) relative to other similarly incompatible elements, which can be attributed to the presence of the accessory mineral rutile (TiO2) in the residual slab. Here we show that the partitioning behavior of vanadium between rutile and silicate melt varies from incompatible (~0.1) to compatible (~18) as a function of oxygen fugacity. We also confirm that the HFSE are compatible in rutile, with D(Ta) > D(Nb) >> D(Hf) ≳ D(Zr), but that the level of compatibility is strongly dependent on melt composition, with partition coefficients increasing by about one order of magnitude with increasing melt polymerization (or decreasing basicity). Our partitioning results also indicate that residual rutile may fractionate U from Th, owing to the contrasting partitioning (over two orders of magnitude) between these two elements. We confirm that, in addition to the HFSE, Cr, Cu, Zn and W are compatible in rutile at all oxygen fugacity conditions.
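    For readers outside geochemistry, the partition coefficients D quoted here follow the standard Nernst definition (our gloss; not spelled out in the abstract), with D > 1 meaning compatible and D < 1 incompatible:

        D_i^{\mathrm{rutile/melt}} = \frac{c_i^{\mathrm{rutile}}}{c_i^{\mathrm{melt}}}

    where $c_i$ is the concentration of trace element $i$ in each phase.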

    On the ratio between maximum weight perfect matchings and maximum weight matchings in grids

    Get PDF
    Given a graph G that admits a perfect matching, we investigate the parameter η(G) (originally motivated by computer graphics applications), defined as follows. Among all nonnegative edge weight assignments, η(G) is the minimum ratio between (i) the maximum weight of a perfect matching and (ii) the maximum weight of a general matching. In this paper, we determine the exact value of η for all rectangular grids, all bipartite cylindrical grids, and all bipartite toroidal grids, introducing several new techniques along the way.
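    In symbols, the parameter just described reads (our transcription of the definition above):

        \eta(G) = \min_{w \,:\, E(G) \to \mathbb{R}_{\ge 0}}
          \frac{\max\{\, w(M) : M \text{ a perfect matching of } G \,\}}
               {\max\{\, w(M) : M \text{ a matching of } G \,\}}

    where $w(M)$ denotes the total weight of the edges of $M$.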

    Short Flip Sequences to Untangle Segments in the Plane

    Full text link
    A (multi)set of segments in the plane may form a TSP tour, a matching, a tree, or any multigraph. If two segments cross, then we can reduce the total length with the following flip operation: we remove a pair of crossing segments and insert a pair of non-crossing segments, while keeping the same vertex degrees. The goal of this paper is to devise efficient strategies to flip the segments in order to obtain crossing-free segments after a small number of flips. Linear and near-linear bounds on the number of flips were previously known only for segments with endpoints in convex position. We generalize these results, proving linear and near-linear bounds for cases with endpoints that are not in convex position. Our results are proved in a general setting that applies to multiple problems, using multigraphs and the distinction between removal and insertion choices when performing a flip.

    Comment: 19 pages, 10 figures.
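    To make the flip operation concrete, here is a minimal Python sketch for the matching case (our illustration, not the paper's algorithm). When segments ab and cd cross properly, their four endpoints are in convex position, so both degree-preserving reconnections are non-crossing and each is strictly shorter by the triangle inequality; we return the cheaper one:

        import math

        def orient(p, q, r):
            """Sign of the cross product (q-p) x (r-p)."""
            return (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])

        def properly_cross(a, b, c, d):
            """True if segments ab and cd cross at a single interior point."""
            return (orient(a, b, c) * orient(a, b, d) < 0 and
                    orient(c, d, a) * orient(c, d, b) < 0)

        def flip(a, b, c, d):
            """Replace the crossing pair {ab, cd} by the shorter non-crossing
            reconnection that preserves all four vertex degrees."""
            assert properly_cross(a, b, c, d)
            options = [((a, c), (b, d)), ((a, d), (b, c))]
            cost = lambda pair: sum(math.dist(p, q) for p, q in pair)
            return min(options, key=cost)

        # Toy usage: the two crossing diagonals of the unit square become
        # two opposite (non-crossing) sides.
        print(flip((0, 0), (1, 1), (0, 1), (1, 0)))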
    • 

    corecore