
    Candidate selection and instance ordering for realtime algorithm configuration

    Many modern combinatorial solvers expose a variety of parameters through which a user can customise their behaviour. Algorithm configuration is the process of selecting good values for these parameters in order to improve performance. Time and again, algorithm configuration has been shown to significantly improve the performance of many algorithms for solving challenging computational problems. Automated systems for tuning parameters regularly out-perform human experts, sometimes by orders of magnitude. Online algorithm configurators, such as ReACTR, are able to tune a solver online without incurring costly offline training. As such, ReACTR's main focus is on runtime minimisation while solving combinatorial problems. To do this, ReACTR adopts a one-pass methodology in which each instance in a stream of instances to be solved is considered only as it arrives. Consequently, ReACTR's performance is sensitive to the order in which instances arrive, and it is still not understood which instance orderings positively or negatively affect that performance. This paper investigates the effect of instance ordering and grouping by empirically evaluating different instance orderings based on difficulty and feature values. Though the end user is generally unable to control the order in which instances arrive, it is important to understand which orderings impact ReACTR's performance and to what extent. This study also has practical benefit, as such orderings can occur organically: for example, as a business grows, the problems it encounters, such as routing or scheduling, often grow in size and difficulty. ReACTR's performance also depends strongly on the configuration selection procedure used. This component controls which configurations are selected from the internal configuration pool to run in parallel. This paper evaluates various ranking mechanisms and different ways of combining them to better understand how the candidate selection procedure affects realtime algorithm configuration. We show that certain selection procedures are superior to others and that the order in which instances arrive determines which selection procedure performs best. We find that both instance order and grouping can significantly affect the overall solving time of the online automatic algorithm configurator ReACTR. One of the more surprising discoveries is that grouping similar instances together can actually negatively impact the configurator's overall performance. In particular, we show that orderings based on nearly any instance feature values can lead to significant reductions in total runtime over random instance orderings. In addition, certain candidate selection procedures are better suited to certain orderings than others, and selecting the correct one can yield a marked improvement in solving times.
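    A minimal sketch of the one-pass racing loop described above, assuming hypothetical helpers select_k, run_config and update_rank; ReACTR's actual components (TrueSkill-based ranking, pool replenishment, capping) are considerably richer than this.

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def solve_stream(instances, pool, select_k, run_config, update_rank, k=4):
    """One-pass realtime configuration: each instance is seen exactly once,
    k configurations race in parallel, the first finisher's answer is
    returned, and the ranking is updated from the (censored) outcome."""
    results = []
    for inst in instances:                      # instances arrive as a stream
        racers = select_k(pool, k)              # candidate selection procedure
        with ThreadPoolExecutor(max_workers=k) as ex:
            futures = {ex.submit(run_config, cfg, inst): cfg for cfg in racers}
            done, pending = wait(futures, return_when=FIRST_COMPLETED)
            for f in pending:                   # losers are censored at the
                f.cancel()                      # winner's runtime (a real
        winner = futures[next(iter(done))]      # system would kill processes)
        update_rank(pool, winner, racers)       # e.g. aggregate-rank update
        results.append(next(iter(done)).result())
    return results
```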

    Real-time algorithm configuration

    This dissertation presents a number of contributions to the field of algorithm configuration. In particular, we present an extension to the algorithm configuration problem, real-time algorithm configuration, where configuration occurs online on a stream of instances, without the need for prior training, and problem solutions are returned in the shortest time possible. We propose a framework for solving the real-time algorithm configuration problem, ReACT. With ReACT we demonstrate that, by using the parallel computing architectures commonplace in many systems today together with a robust aggregate ranking system, configuration can occur without any impact on performance from the perspective of the user. This is achieved by means of a racing procedure. We show two concrete instantiations of the framework and, using empirical evaluations on a range of combinatorial problems from the literature, show them to be on a par with, or even exceed, the state of the art in offline algorithm configuration. We discuss, assess, and provide justification for each of the components used in our framework instantiations. Specifically, we show that the TrueSkill ranking system, commonly used to rank players' skill in multiplayer games, can be used to accurately estimate the quality of an algorithm's configuration using only censored results from races between algorithm configurations. We confirm that the order in which problem instances arrive influences the configuration performance and that the optimal selection of configurations to participate in races is dependent on the distribution of the incoming instance stream. We outline how to maintain a pool of quality configurations by removing underperforming configurations, and techniques to generate replacement configurations with minimal computational overhead. Finally, we show that the configuration space can be reduced using feature selection techniques from the machine learning literature, and that doing so can provide a boost in configuration performance.
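    To make the censored-ranking idea concrete, here is a small sketch using the public trueskill Python package: only the race winner finishes within the cap, and all capped configurations are treated as tied losers. The thesis's exact TrueSkill integration and parameters are not reproduced here, and the configuration names are placeholders.

```python
import trueskill

# Ties among censored losers are common, so allow a nonzero draw probability.
env = trueskill.TrueSkill(draw_probability=0.10)
ratings = {c: env.create_rating() for c in ["cfg_a", "cfg_b", "cfg_c", "cfg_d"]}

def update_after_race(winner, losers):
    """Update ratings from a censored race: winner ranked first, all
    timed-out configurations tied for last."""
    groups = [(ratings[winner],)] + [(ratings[c],) for c in losers]
    ranks = [0] + [1] * len(losers)
    for (r,), c in zip(env.rate(groups, ranks=ranks), [winner] + losers):
        ratings[c] = r

update_after_race("cfg_b", ["cfg_a", "cfg_c", "cfg_d"])
# Pick future racers by the conservative skill estimate mu - 3*sigma:
best = max(ratings, key=lambda c: ratings[c].mu - 3 * ratings[c].sigma)
print(best)
```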

    On the Configuration of More and Less Expressive Logic Programs

    The decoupling between the representation of a problem, i.e., its knowledge model, and the reasoning side is one of the main strong points of model-based Artificial Intelligence (AI). This allows, e.g., focusing on improving the reasoning side, with benefits for the whole solving process. Further, it is well known that many solvers are very sensitive to even syntactic changes in the input. In this paper, we focus on improving the reasoning side by taking advantage of such sensitivity. We consider two well-known model-based AI methodologies, SAT and ASP, define a number of syntactic features that may characterise their inputs, and use automated configuration tools to reformulate the input formula or program. Results of a wide experimental analysis involving SAT and ASP domains, taken from the respective competitions, show the different advantages that can be obtained by using input reformulation and configuration. Under consideration in Theory and Practice of Logic Programming (TPLP).
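    As an illustration of the kind of syntactic input features involved, the sketch below computes a few toy statistics from a DIMACS CNF file. The paper's actual feature set is not reproduced, and the parser assumes one clause per line.

```python
from statistics import mean

def cnf_features(path):
    """Toy syntactic features of a DIMACS CNF formula: variable/clause
    counts, mean clause length, and fractions of unit/binary/Horn clauses."""
    n_vars = n_clauses = horn = 0
    lengths = []
    with open(path) as f:
        for line in f:
            tok = line.split()
            if not tok or tok[0] in ("c", "%"):   # comments / trailer lines
                continue
            if tok[0] == "p":                     # header: p cnf <vars> <clauses>
                n_vars, n_clauses = int(tok[2]), int(tok[3])
                continue
            lits = [int(t) for t in tok if t != "0"]
            if not lits:
                continue
            lengths.append(len(lits))
            horn += sum(l > 0 for l in lits) <= 1  # at most one positive literal
    m = max(len(lengths), 1)
    return {
        "vars": n_vars,
        "clauses": n_clauses,
        "mean_len": mean(lengths) if lengths else 0.0,
        "unit_frac": sum(c == 1 for c in lengths) / m,
        "binary_frac": sum(c == 2 for c in lengths) / m,
        "horn_frac": horn / m,
    }
```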

    Towards a crowdsourced solution for the authoring bottleneck in interactive narratives

    Interactive Storytelling research has produced a wealth of technologies that can be employed to create personalised narrative experiences, in which the audience takes a participating rather than an observing role. But so far this technology has not led to the production of large-scale playable interactive story experiences that realise the ambitions of the field. One main reason for this state of affairs is the difficulty of authoring interactive stories, a task that requires describing a huge number of story building blocks in a machine-friendly fashion. This is not only technically and conceptually more challenging than traditional narrative authoring but also a scalability problem. This thesis examines the authoring bottleneck through a case study and a literature survey and advocates a solution based on crowdsourcing. Prior work has already shown that combining a large number of example stories collected from crowd workers with a system that merges these contributions into a single interactive story can be an effective way to reduce the authorial burden. As a refinement of such an approach, this thesis introduces the novel concept of Crowd Task Adaptation. It argues that, in order to maximise the usefulness of the collected stories, a system should dynamically and intelligently analyse the corpus of collected stories and, based on this analysis, modify the tasks handed out to crowd workers. Two authoring systems, ENIGMA and CROSCAT, which embody two radically different approaches to the Crowd Task Adaptation paradigm, have been implemented and are described in this thesis. While ENIGMA adapts tasks through a realtime dialog between crowd workers and the system, based on what has been learned from previously collected stories, CROSCAT modifies the backstory given to crowd workers in order to optimise the distribution of branching points in the tree structure that combines all collected stories. Two experimental studies of crowdsourced authoring are also presented. They lead to guidelines on how to employ crowdsourced authoring effectively, but more importantly the results of one of the studies demonstrate the effectiveness of the Crowd Task Adaptation approach.
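    A purely hypothetical sketch of the CROSCAT idea: pick the least developed branch of the story tree and hand its root-to-node path to the next crowd worker as a backstory, so that new contributions branch where the tree is thinnest. The data structures and the scoring heuristic here are assumptions, not the system's actual design.

```python
class Node:
    """A story event in the tree that merges all collected stories."""
    def __init__(self, event, parent=None):
        self.event, self.parent, self.children = event, parent, []

def leaves_under(node):
    # Number of complete stories passing through this node.
    if not node.children:
        return 1
    return sum(leaves_under(c) for c in node.children)

def next_backstory(root):
    """Return the root-to-node path of the internal node with the fewest
    stories below it, i.e. the branch most in need of new contributions."""
    best, best_leaves = root, leaves_under(root)
    stack = [root]
    while stack:
        node = stack.pop()
        stack.extend(node.children)
        if node.children and leaves_under(node) < best_leaves:
            best, best_leaves = node, leaves_under(node)
    path = []
    while best is not None:
        path.append(best.event)
        best = best.parent
    return list(reversed(path))   # events to present as the backstory prompt
```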

    26. Theorietag Automaten und Formale Sprachen 23. Jahrestagung Logik in der Informatik: Tagungsband

    The Theorietag is the annual meeting of the special interest group Automaten und Formale Sprachen (Automata and Formal Languages) of the Gesellschaft für Informatik and was first held in Magdeburg in 1991. Since 1996 the Theorietag has been accompanied by a one-day workshop with invited talks. The annual meeting of the special interest group Logik in der Informatik (Logic in Computer Science) of the Gesellschaft für Informatik was first held in Leipzig in 1993. The annual business meetings of both groups also take place during the two events. This year, the Theorietag of the group Automaten und Formale Sprachen is for the first time held jointly with the annual meeting of the group Logik in der Informatik. The joint event was organised by the Zuverlässige Systeme (Dependable Systems) working group of the Institut für Informatik at the Christian-Albrechts-Universität Kiel, from 4 to 7 October at the conference hotel Tannenfelde near Neumünster. A workshop open to all interested participants will take place during the meeting. In Tannenfelde, invited talks on their current work will be given by • Christoph Löding (Aachen) • Tomás Masopust (Dresden) • Henning Schnoor (Kiel) • Nicole Schweikardt (Berlin) • Georg Zetzsche (Paris). In addition, 26 talks will be given by participants, 17 at the Theorietag Automaten und Formale Sprachen and nine at the annual meeting Logik in der Informatik. The present volume contains short abstracts of all contributions. We thank the Gesellschaft für Informatik, the Christian-Albrechts-Universität zu Kiel and the conference hotel Tannenfelde for supporting this Theorietag. Special thanks go to the organisation team: Maike Bradler, Philipp Sieweck, Joel Day. Kiel, October 2016. Florin Manea, Dirk Nowotka and Thomas Wilk

    Parallel Markov Chain Monte Carlo

    The increasing availability of multi-core and multi-processor architectures provides new opportunities for improving the performance of many computer simulations. Markov Chain Monte Carlo (MCMC) simulations are widely used for approximate counting problems, Bayesian inference, and as a means of estimating very high-dimensional integrals. As such, MCMC has found a wide variety of applications in fields including computational biology and physics, financial econometrics, machine learning and image processing. This thesis presents a number of new methods for reducing the runtime of Markov Chain Monte Carlo simulations by using SMP machines and/or clusters. Two of the methods speculatively perform iterations in parallel, reducing the runtime of MCMC programs whilst producing statistically identical results to conventional sequential implementations. The other methods apply only to problem domains that can be represented as an image, and involve various means of dividing the image into subimages that can be processed with some degree of independence. Where possible, the thesis includes a theoretical analysis of the reduction in runtime that may be achieved using our techniques under perfect conditions, and in all cases the methods are tested and compared on a selection of multi-core and multi-processor architectures. A framework is provided to allow easy construction of MCMC applications that implement these parallelisation methods.
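    One way to speculatively perform iterations in parallel while keeping the results statistically identical is prefetching: enumerate every state reachable within a few accept/reject decisions, evaluate the target density at all of them in parallel, then replay the decisions sequentially. The sketch below shows this for a simple Metropolis sampler; it illustrates the general technique under a toy target, not necessarily the thesis's exact methods.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def log_target(x):
    # Example target: standard normal log-density, up to a constant.
    return -0.5 * x * x

def speculative_mh(x0, n_steps, depth=3, scale=1.0, seed=0):
    """Metropolis sampling with prefetching: all 2**(depth+1) - 1 tree nodes
    reachable within `depth` accept/reject decisions are evaluated in
    parallel, then the decisions are replayed sequentially, so the chain is
    statistically identical to the sequential sampler."""
    rng = np.random.default_rng(seed)
    x, chain = x0, []
    with ProcessPoolExecutor() as pool:
        while len(chain) < n_steps:
            # Heap-indexed tree: node 1 holds the current state; node k has
            # children 2k (proposal rejected) and 2k+1 (proposal accepted).
            states = np.empty(2 ** (depth + 1))
            states[1] = x
            for k in range(1, 2 ** depth):
                states[2 * k] = states[k]                             # reject
                states[2 * k + 1] = states[k] + scale * rng.normal()  # accept
            # The expensive part, done in parallel for all tree nodes.
            logp = [None] + list(pool.map(log_target, states[1:]))
            # Replay the accept/reject walk down the tree (cheap, sequential).
            k = 1
            for _ in range(depth):
                prop = 2 * k + 1
                k = prop if np.log(rng.uniform()) < logp[prop] - logp[k] else 2 * k
                chain.append(states[k])
            x = states[k]
    return np.array(chain[:n_steps])

if __name__ == "__main__":
    samples = speculative_mh(0.0, n_steps=2000, depth=3)
    print(samples.mean(), samples.std())   # ~0 and ~1 for the normal target
```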

    Target classification in multimodal video

    The presented thesis focuses on enhancing scene segmentation and target recognition methodologies via the mobilisation of contextual information. The algorithms developed to achieve this goal utilise multi-modal sensor information collected across varying scenarios, from controlled indoor sequences to challenging rural locations. The sensors are chiefly colour band and long wave infrared (LWIR), enabling persistent surveillance capabilities across all environments. In the drive to develop effectual algorithms towards the outlined goals, key obstacles are identified and examined: the recovery of background scene structure from foreground object 'clutter'; employing contextual foreground knowledge to circumvent training a classifier when labeled data is not readily available; creating a labeled LWIR dataset to train a convolutional neural network (CNN) based object classifier; and the viability of spatial context to address long-range target classification when big-data solutions are not enough. For an environment displaying frequent foreground clutter, such as a busy train station, we propose an algorithm exploiting foreground object presence to segment underlying scene structure that is not often visible. If such a location is outdoors and surveyed by an infra-red (IR) and visible band camera set-up, scene context and contextual knowledge transfer allow reasonable class predictions for thermal signatures within the scene to be determined. Furthermore, a labeled LWIR image corpus is created to train an infrared object classifier, using a CNN approach. The trained network demonstrates an effective classification accuracy of 95% over 6 object classes. However, performance is not sustainable for IR targets acquired at long range due to low signal quality, and classification accuracy drops. This is addressed by mobilising spatial context to adjust network class scores, restoring robust classification capability.
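    A hedged illustration of that last step, using an assumed region-conditioned class prior to reweight a network's scores in Bayes style; the thesis's actual spatial-context model is not reproduced here.

```python
import numpy as np

def context_adjusted_scores(cnn_logits, region_prior):
    """Combine raw per-class network scores with a spatial prior
    P(class | image region), returning a normalised posterior.
    cnn_logits: (n_classes,) raw scores for one detection.
    region_prior: (n_classes,) assumed prior for the detection's region."""
    log_post = cnn_logits + np.log(region_prior + 1e-9)
    log_post -= log_post.max()          # stabilise the softmax numerically
    p = np.exp(log_post)
    return p / p.sum()

# Example: a weak long-range target whose raw scores are ambiguous is
# disambiguated by the (assumed) prior for its region.
logits = np.array([1.1, 1.0, 0.2])      # person, vehicle, animal
road_prior = np.array([0.15, 0.80, 0.05])
print(context_adjusted_scores(logits, road_prior))
```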

    Modeling, Predicting and Capturing Human Mobility

    Realistic models of human mobility are critical for modern-day applications, specifically in recommendation systems, resource planning and process optimization domains. Given the rapid proliferation of mobile devices equipped with Internet connectivity and GPS functionality today, aggregating large volumes of individual geolocation data is feasible. The thesis focuses on methodologies to facilitate data-driven mobility modeling by drawing parallels between the inherent nature of mobility trajectories, statistical physics and information theory. On the applied side, the thesis contributions lie in leveraging the formulated mobility models to construct prediction workflows by adopting a privacy-by-design perspective. This enables end users to derive utility from location-based services while preserving their location privacy. Finally, the thesis presents several approaches to generating large-scale synthetic mobility datasets by applying machine learning approaches to facilitate experimental reproducibility.
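    One standard way to make the information-theoretic angle concrete (an illustration, not necessarily the thesis's own formulation) is the Lempel-Ziv entropy-rate estimator over a symbolised trajectory, popularised in Song et al.'s work on the limits of mobility predictability. The naive O(n^3) scan below is fine for short traces.

```python
import math

def lz_entropy_rate(seq):
    """Lempel-Ziv style entropy-rate estimate (bits/symbol) of a location
    sequence: S ~ n * log2(n) / sum_i L_i, where L_i is the length of the
    shortest substring starting at i that never appears before position i."""
    n = len(seq)
    lambdas = []
    for i in range(n):
        k = 1
        while i + k <= n and any(
            seq[j:j + k] == seq[i:i + k] for j in range(i)
        ):
            k += 1
        lambdas.append(k)
    return n * math.log2(n) / sum(lambdas)

# Symbolised trajectory: each label is a visited cell or point of interest.
trace = list("AABABAACABAABBAACA")
print(round(lz_entropy_rate(trace), 3), "bits/symbol")
```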

    Creating an Objective Methodology for Human-Robot Team Configuration Selection

    As technology has advanced and designers have looked to future applications, it has become increasingly evident that robotic technology can be used to supplement, augment, and improve human performance of tasks. Team members can be combined in various ways to better utilize their capabilities and skills, creating more efficient and diversified operational teams. A primary obstacle to integrating new robotic technology has been the inability to quantitatively compare overall team performance between very different team configurations without limiting the analysis to a few metrics. To date, mission designers have arbitrarily assigned importance to mission parameters, subjectively limiting the search space. While this has been effective for evaluating individual mission plans, the arbitrary evaluation criteria have made straightforward comparison between different research projects and ranking scales impossible. The question then becomes how to select an objective set of criteria for any given problem. It is this final question that this research sought to answer. A methodology was developed to facilitate performance comparison amongst heterogeneous human and robot teams. This methodology makes no assumptions about mission priorities or preferences. Instead, it provides an objective, generic, quantitative method to reduce the complexity of the mission designer's decision space. It employs a heuristic, greedy objective-reduction algorithm to reduce problem complexity and a multi-objective genetic algorithm to explore the design space. The human-robot team configuration selection problem was used as the motivating application for this research; the methodology, however, is applicable to a wider domain of research. It provides a structure that enables broader search of the design space, exploration of the differences between performance metrics, and comparison of optimization models that facilitate evaluation of the design options.
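    A hypothetical sketch of what a greedy objective-reduction pass can look like, in the spirit of delta-MOSS-style reduction (Brockhoff and Zitzler): repeatedly drop the objective whose removal perturbs the pairwise Pareto-dominance relation on sampled evaluations the least. This is an assumption-laden illustration, not the dissertation's algorithm.

```python
import numpy as np

def dominance(F):
    """D[i, j] = True iff point i weakly dominates point j (minimisation)."""
    return np.array([[np.all(fi <= fj) for fj in F] for fi in F])

def greedy_reduce(F, max_changes=0):
    """F: (n_points, n_objectives) sampled objective values. Greedily remove
    objectives while the dominance relation changes on at most
    `max_changes` pairs of points."""
    keep = list(range(F.shape[1]))
    full = dominance(F)
    while len(keep) > 1:
        best, best_err = None, None
        for k in keep:
            trial = [j for j in keep if j != k]
            err = int(np.sum(dominance(F[:, trial]) != full))
            if best_err is None or err < best_err:
                best, best_err = k, err
        if best_err > max_changes:
            break                      # any further removal distorts dominance
        keep.remove(best)
    return keep

# Objectives 0 and 2 are duplicates, so one of them is dropped.
F = np.array([[1.0, 2.0, 1.0], [2.0, 1.0, 2.0], [3.0, 3.0, 3.0]])
print(greedy_reduce(F))
```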