
    Near-Optimal Algorithms for Point-Line Covering Problems

    We study fundamental point-line covering problems in computational geometry, in which the input is a set S of points in the plane. The first is the Rich Lines problem, which asks for the set of all lines that each cover at least λ points of S, for a given integer parameter λ ≥ 2; this problem subsumes the 3-Points-on-Line problem and the Exact Fitting problem, the latter of which asks for a line containing the maximum number of points. The second is the NP-hard Line Cover problem, which asks for a set of k lines that cover the points of S, for a given parameter k ∈ ℕ. Both problems have been extensively studied. In particular, Rich Lines is a fundamental problem whose solution serves as a building block for several algorithms in computational geometry. For Rich Lines and Exact Fitting, we present a randomized Monte Carlo algorithm that achieves a lower running time than the algorithm of Guibas et al. [Computational Geometry 1996] for a wide range of the parameter λ. We derive lower-bound results showing that, for λ = Ω(√(n log n)), the upper bound on the running time of this randomized algorithm matches the lower bound we derive on the time complexity of Rich Lines in the algebraic computation trees model. For Line Cover, we present two kernelization algorithms: a randomized Monte Carlo algorithm and a deterministic algorithm. Both improve the running time of existing kernelization algorithms for Line Cover. We derive lower-bound results showing that the running time of our randomized algorithm comes close to the lower bound we derive on the time complexity of kernelization algorithms for Line Cover in the algebraic computation trees model.
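    To make the Rich Lines problem statement concrete, here is a minimal sketch of the naive O(n²) baseline, not the paper's near-optimal algorithm: every pair of points defines a line, each line is reduced to a canonical integer triple (a, b, c) with a·x + b·y = c, and lines covering at least λ points are kept. The function name `rich_lines` and the normalization scheme are illustrative choices, not from the paper.

    ```python
    from collections import defaultdict
    from math import gcd

    def rich_lines(points, lam):
        """Naive quadratic baseline for Rich Lines: return every line that
        covers at least `lam` of the input points, keyed by a normalized
        integer triple (a, b, c) representing a*x + b*y == c."""
        lines = defaultdict(set)
        pts = list(points)
        for i in range(len(pts)):
            x1, y1 = pts[i]
            for j in range(i + 1, len(pts)):
                x2, y2 = pts[j]
                a, b = y2 - y1, x1 - x2          # normal vector of the line
                c = a * x1 + b * y1
                g = gcd(gcd(a, b), c) or 1       # reduce to lowest terms
                a, b, c = a // g, b // g, c // g
                if a < 0 or (a == 0 and b < 0):  # fix the sign so each line
                    a, b, c = -a, -b, -c         # maps to exactly one key
                lines[(a, b, c)].update({(x1, y1), (x2, y2)})
        return {key: s for key, s in lines.items() if len(s) >= lam}
    ```

    Exact Fitting then reduces to taking the largest point set over all lines, e.g. `max(rich_lines(pts, 2).values(), key=len)`.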

    Effective governance through implementation of appropriate algorithms in share trading

    Thesis (MAcc)--Stellenbosch University, 2018.
    ENGLISH SUMMARY: Advancements in computer technology have enabled an evolution in share trading. This brought such an increase in available data that manual analysis can no longer provide accurate, timeous results. Many share traders have found a solution in the implementation of algorithms. To effectively govern algorithms and ensure that the control objectives of validity, accuracy and completeness are met, the life cycle of an algorithm must be considered: the input data, analysis and results must be governed. The choice of algorithm is fundamental to effectively governing its analysis and results, since an algorithm is not always appropriate for implementation. The algorithm must be appropriate for the available data, the requirements of the analysis, and the required algorithm result in order to meet the control objectives. To investigate the applicability of algorithms, this research provides an understanding of the evolution in the share trading industry, of algorithms, and of the enabling technologies of big data and machine learning. The study considers both qualitative and quantitative algorithms: statistical characteristics of predictive algorithms are identified, which indicate whether the algorithm is appropriate for implementation based on the nature of the data available, the required analysis and the results the algorithm can achieve. The research also investigates how the outcomes of non-predictive algorithms determine whether they will be useful and appropriate to the data scientist. Based on the investigation, an applicability model was designed to map the investigated statistical characteristics to the indicators found. This model provides guidance to data scientists and other users in matching their data and algorithm needs against what the available algorithms can provide, thereby determining which algorithm characteristics are most appropriate for implementation.
    AFRIKAANSE OPSOMMING: Advances in computer technology have made an evolution in share trading possible. With the increase in available data, it is no longer possible to perform an analysis by hand and obtain accurate results in time. Many share traders have found that the implementation of algorithms offers a solution. To govern algorithms effectively and ensure that the control objectives of validity, accuracy and completeness are met, the life cycle of an algorithm must be taken into account: the input data, analysis and results must be governed. A fundamental choice is which algorithm to implement in order to govern the analysis and its results, since algorithms are not always suitable for implementation. The algorithm must be chosen according to the available data, the requirements of the analysis, and the result required of the algorithm. To investigate the applicability of algorithms, this research provides an understanding of the evolution in the share trading industry, of algorithms, and of the technologies of big data and machine learning. This study considers both qualitative and quantitative algorithms: it identifies statistical characteristics of predictive algorithms that can be used to determine whether the algorithm is suitable for implementation. This is determined by the nature of the available data, the analysis the algorithm must perform, and the results the algorithm must achieve. This study also examines the objectives of algorithms that do not predict values, to determine whether they are useful and appropriate for the user. Based on the findings of the investigation, a model of applicability was designed to map the investigated statistical characteristics to the indicators found. This model provides guidance to users in comparing the available data and the needs for the algorithm with what the algorithm can provide, and thus in determining which algorithm characteristics are appropriate for implementation.

    Big data algorithms beyond machine learning

    The availability of big data sets in research, industry and society in general has opened up many possibilities of how to use this data. In many applications, however, it is not the data itself that is of interest; rather, we want to answer some question about it. These answers may sometimes be phrased as solutions to an optimization problem. We survey some algorithmic methods that optimize over large-scale data sets, beyond the realm of machine learning.
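    As a toy instance of the idea that a question about a large data set can be phrased as an optimization problem, and then solved without materializing the data, consider argmin_c Σᵢ (xᵢ − c)², whose optimum is the mean. This sketch (not from the survey; the name `streaming_mean` is illustrative) solves it in one pass over a stream with constant memory:

    ```python
    def streaming_mean(stream):
        """One-pass, constant-memory solution to argmin_c sum_i (x_i - c)^2.
        The optimum is the mean, maintained here with an incremental
        (Welford-style) update so the data never needs to fit in memory."""
        count, mean = 0, 0.0
        for x in stream:
            count += 1
            mean += (x - mean) / count  # shift the estimate toward x
        return mean
    ```

    The same pattern (keep a small summary, update it per element) underlies many of the large-scale methods such a survey covers, from streaming statistics to sampling-based approximations.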