
    A caching game with infinitely divisible hidden material

    We consider a caching game in which a unit amount of infinitely divisible material is distributed among $n \geq 2$ locations. A Searcher chooses how to distribute her search effort $r$ among the locations so as to maximize the probability that she finds a given minimum amount $\bar{m} = 1 - m \leq r$ of the material. If the search effort $y_i$ invested by the Searcher in a given location $i$ is at least as great as the amount of material $x_i$ located there, she finds all of it; otherwise the amount she finds is only $y_i$. In other words, she finds $\min\{x_i, y_i\}$ in location $i$. We seek the randomized distribution of search effort that maximizes the probability of success for the Searcher in the worst case; hence we model the problem as a zero-sum win-lose game between the Searcher and a malevolent Hider who wishes to keep more than $m$ of the material. We show that in the case $r = \bar{m}$ the game has a geometric interpretation that for $n = 2$ corresponds to a problem posed by W. H. Ruckle in his monograph [Geometric Games and Their Applications, Pitman, Boston, 1983]. We give solutions for the geometric game when $n = 3$ for certain values of $m$, and bounds on the value for other values of $m$. In the more general case $r \geq \bar{m}$ we show that for $n = 2$ the game reduces to Ruckle's game.
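    To make the payoff concrete, here is a minimal Python sketch (my illustration, not code from the paper) of one pure-strategy play of the game: the Searcher finds $\min\{x_i, y_i\}$ at each location and wins if the total reaches $\bar{m} = 1 - m$.

```python
# Illustrative sketch (not from the paper): outcome of one play of the
# caching game for a given pair of pure strategies.
# x[i] = material hidden at location i (sums to 1),
# y[i] = search effort spent at location i (sums to r).

def searcher_wins(x, y, m):
    """Searcher succeeds iff she finds at least m_bar = 1 - m material."""
    found = sum(min(xi, yi) for xi, yi in zip(x, y))
    return found >= 1 - m

# Example with n = 3 locations and r = m_bar = 0.6 (so m = 0.4).
# Over-searching one location wastes effort the Searcher cannot recover:
print(searcher_wins(x=[0.5, 0.3, 0.2], y=[0.6, 0.0, 0.0], m=0.4))  # False
# Matching effort to the hidden amounts recovers the full r = 0.6:
print(searcher_wins(x=[0.5, 0.3, 0.2], y=[0.5, 0.1, 0.0], m=0.4))  # True
```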

    Resource Management in Distributed Camera Systems

    The aim of this work is to investigate different methods to solve the problem of allocating the correct amount of resources (network bandwidth and storage space) to video camera systems. Here we explore the intersection between two research areas: automatic control and game theory. Camera systems are a good example of the emergence of the Internet of Things (IoT) and its impact on our daily lives and the environment. We aim to improve today’s systems, shifting from over-provisioning resources to dynamically allocating them where they are needed most. We optimize the storage and bandwidth allocation of camera systems to limit the impact on the environment as well as provide the best visual quality attainable within the resource limitations. This thesis is written as a collection of papers. It begins by introducing the problem with today’s camera systems, and continues with background information about resource allocation, automatic control and game theory. The third chapter describes the models of the considered systems, their limitations and challenges. It then continues by providing more background on the automatic control and game theory techniques used in the proposed solutions. Finally, the proposed solutions are provided in five papers. Paper I proposes an approach to estimate the amount of data needed by surveillance cameras given camera and scenario parameters. This model is used for calculating the quasi Worst-Case Transmission Times of videos over a network. Papers II and III apply control concepts to camera network storage and bandwidth assignment. They provide simple, yet elegant solutions to the allocation of these resources in distributed camera systems. Paper IV combines pricing theory with control techniques to force the video quality of camera systems to converge to a common value based solely on the compression parameter of the provided videos. Paper V uses the VCG auction mechanism to solve the storage space allocation problem in competitive camera systems. It allows for a better system-wide visual quality than a simple split allocation given the limited system knowledge, trust and resource constraints.
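    As a rough illustration of the mechanism behind Paper V (my own toy sketch, not the paper's implementation), a VCG auction over a small number of discrete storage units picks the split that maximizes reported value and charges each camera the externality it imposes on the others; the reported values below are assumed numbers.

```python
# Toy VCG sketch (illustrative only; not the implementation from Paper V).
# S storage units are split among cameras; camera i reports a value
# v[i][s] for receiving s units.
from itertools import product

def splits(n, S):
    """All ways to give s_0..s_{n-1} >= 0 units with total at most S."""
    return [a for a in product(range(S + 1), repeat=n) if sum(a) <= S]

def vcg(v, S):
    n = len(v)
    # Allocation maximizing total reported value.
    best = max(splits(n, S), key=lambda a: sum(v[i][a[i]] for i in range(n)))
    payments = []
    for i in range(n):
        others = lambda a: sum(v[j][a[j]] for j in range(n) if j != i)
        # Best the others could do if camera i received nothing.
        without_i = max(others(a) for a in splits(n, S) if a[i] == 0)
        payments.append(without_i - others(best))  # externality of camera i
    return best, payments

# Two cameras, 3 units, diminishing value for extra storage (assumed numbers):
v = [[0, 5, 8, 9], [0, 4, 6, 7]]
print(vcg(v, 3))  # ((2, 1), [3, 1]): camera 0 gets 2 units and pays 3
```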

    Resource-aware Programming in a High-level Language - Improved performance with manageable effort on clustered MPSoCs

    Until 2001, Moore's and Dennard's laws meant a doubling of execution speed every 18 months through improved CPUs. Today, concurrency is the dominant means of acceleration, from supercomputers down to mobile devices. However, newer phenomena such as "dark silicon" increasingly stand in the way of further speedups from hardware alone. To gain further speedups, software too must become more aware of its hardware resources. Connected with this phenomenon is increasingly heterogeneous hardware: supercomputers integrate accelerators such as GPUs, and mobile SoCs (e.g., in smartphones) integrate ever more capabilities. Exploiting specialized hardware is a well-known way to reduce energy consumption, another important aspect that must be weighed against raw speed; supercomputers, for example, are also ranked by "performance per watt". At present, low-level systems programmers are used to reasoning about hardware, whereas the typical high-level programmer prefers to abstract from the platform as far as possible (e.g., the cloud). "High-level" does not mean that hardware is irrelevant, only that it can be abstracted: if you develop a Java application for Android, battery life may well matter. Eventually, however, high-level languages must also become resource-aware in order to improve speed or energy consumption. I have worked on these problems within the Transregio "Invasive Computing". In this dissertation I present a framework for making high-level applications resource-aware in order to improve performance, for example through higher efficiency or faster execution for the system as a whole. A core idea is that applications do not optimize themselves; instead, they hand all relevant information to the operating system, which has a global view and makes the decisions about resources. We call this process "invasion". The application's task is to adapt to these decisions, not to make them itself. The challenge is to define a language in which applications communicate resource constraints and performance information. Such a language must be expressive enough for complex information, extensible to new resource types, and convenient for the programmer. The central contributions of this dissertation are: a theoretical model of resource management that captures the essence of the resource-aware framework, justifies the correctness of the operating system's decisions with respect to an application's constraints, and proves my theses on efficiency and speedup in theory; a framework and compilation path for resource-aware programming in the high-level language X10, evaluated with applications from high-performance computing, where a speedup of 5x was measured; and a memory consistency model for the X10 programming language, a necessary step towards a formal semantics linking the theoretical model with the concrete implementation. In summary, I show that resource-aware programming in high-level languages is feasible with manageable effort on future many-core architectures and improves performance.
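    As a purely hypothetical sketch of the interaction described above (the names and API are my own invention, not the framework's), an application states its resource constraints, the operating system grants a claim based on its global view, and the application adapts its work to that claim:

```python
# Hypothetical model (my sketch, not the actual framework API) of the
# "invasion" pattern: the application states constraints, the OS decides,
# and the application adapts to the granted resources.

class Scheduler:
    """Stand-in for the OS resource manager with a global view."""
    def __init__(self, free_cores):
        self.free_cores = free_cores

    def grant(self, constraints):
        # Grant at most what is free; grant nothing below the stated minimum.
        n = min(constraints["max_cores"], self.free_cores)
        if n < constraints["min_cores"]:
            return []
        self.free_cores -= n
        return list(range(n))  # the claim: a set of core ids

def invade(os_scheduler, constraints):
    """Ask the OS for resources; the decision is the OS's, not the app's."""
    return os_scheduler.grant(constraints)

os_scheduler = Scheduler(free_cores=8)
claim = invade(os_scheduler, {"min_cores": 2, "max_cores": 6})
work_chunks = [f"chunk{i}" for i in range(len(claim))]  # adapt to the claim
print(len(claim), "cores granted; application splits work accordingly")
```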

    On Efficient Zero-Knowledge Arguments


    Creating Systems and Applying Large-Scale Methods to Improve Student Remediation in Online Tutoring Systems in Real-time and at Scale

    A common problem shared amongst online tutoring systems is the time-consuming nature of content creation. It has been estimated that an hour of online instruction can take 100-300 hours to create. Several systems have created tools to expedite content creation, such as the Cognitive Tutor Authoring Tools (CTAT) and the ASSISTments builder. Although these tools make content creation more efficient, they all still depend on the efforts of a content creator and/or past historical data. These tools do not take full advantage of the power of the crowd. These issues and challenges faced by online tutoring systems provide an ideal environment in which to implement a solution using crowdsourcing. I created the PeerASSIST system to address the challenges of tutoring content creation. PeerASSIST crowdsources the work students have done on problems inside the ASSISTments online tutoring system and redistributes that work as a form of tutoring to their peers who are in need of assistance. Multi-objective multi-armed bandit algorithms are used to distribute student work; they balance exploring which work is good against exploiting the best currently known work. These policies are customized to run in a real-world environment with multiple asynchronous reward functions and an infinite number of actions. Inspired by major companies such as Google, Facebook, and Bing, PeerASSIST is also designed as a platform for simultaneous online experimentation in real time and at scale. Currently over 600 teachers (grades K-12) are requiring students to show their work. Over 300,000 instances of student work have been collected from over 18,000 students across 28,000 problems. From the student work collected, 2,000 instances have been redistributed to over 550 students who needed help over the past few months. I conducted a randomized controlled experiment to evaluate the effectiveness of PeerASSIST on student performance. Other contributions include representing learning maps as Bayesian networks to model student performance, creating a machine-learning algorithm to derive student incorrect processes from their incorrect answers and the inputs of the problem, and applying Bayesian hypothesis testing to A/B experiments. We showed that learning maps can be simplified without practical loss of accuracy, and that time-series data is necessary to simplify learning maps if the static data is highly correlated. I also created several interventions to evaluate the effectiveness of the buggy messages generated from the machine-learned incorrect processes. The null results of these experiments demonstrate the difficulty of creating successful tutoring content and suggest that other methods of tutoring content creation (i.e. PeerASSIST) should be explored.
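    For illustration only, a minimal epsilon-greedy bandit in Python shows the explore/exploit trade-off described above; the actual PeerASSIST policies are multi-objective and handle asynchronous rewards and an effectively unbounded action set.

```python
# Illustrative epsilon-greedy bandit sketch (my simplification, not the
# PeerASSIST policy): choose which peer explanation to show for a problem.
import random

class PeerWorkBandit:
    def __init__(self, explanation_ids, epsilon=0.1):
        self.epsilon = epsilon
        self.pulls = {e: 0 for e in explanation_ids}
        self.reward_sum = {e: 0.0 for e in explanation_ids}

    def choose(self):
        # Explore with probability epsilon (and always on the first pull).
        if random.random() < self.epsilon or not any(self.pulls.values()):
            return random.choice(list(self.pulls))
        # Otherwise exploit the explanation with the best mean reward.
        return max(self.pulls,
                   key=lambda e: self.reward_sum[e] / max(self.pulls[e], 1))

    def update(self, explanation_id, reward):
        # reward: e.g. 1.0 if the student solved the next attempt, else 0.0
        self.pulls[explanation_id] += 1
        self.reward_sum[explanation_id] += reward

bandit = PeerWorkBandit(["expl_A", "expl_B", "expl_C"])
shown = bandit.choose()
bandit.update(shown, reward=1.0)
```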

    Self-Evaluation Applied Mathematics 2003-2008 University of Twente

    This report contains the self-study for the research assessment of the Department of Applied Mathematics (AM) of the Faculty of Electrical Engineering, Mathematics and Computer Science (EEMCS) at the University of Twente (UT). The report provides the information for the Research Assessment Committee for Applied Mathematics, dealing with mathematical sciences at the three universities of technology in the Netherlands. It describes the state of affairs pertaining to the period 1 January 2003 to 31 December 2008.

    Computationally Efficient Relational Reinforcement Learning

    Relational Reinforcement Learning (RRL) is a technique that enables Reinforcement Learning (RL) agents to generalize from their experience, allowing them to learn over large or potentially infinite state spaces, to learn context-sensitive behaviors, and to learn to solve variable goals and transfer knowledge between similar situations. Prior RRL architectures are not sufficiently computationally efficient to see use outside of small, niche roles within larger Artificial Intelligence (AI) architectures. I present a novel online, incremental RRL architecture and an implementation that is orders of magnitude faster than its predecessors. The first aspect of this architecture that I explore is a computationally efficient implementation of adaptive Hierarchical Tile Coding (aHTC), a kind of Adaptive Tile Coding (ATC) in which more general tiles covering larger portions of the state-action space are kept as tiles covering smaller portions are introduced, using k-dimensional tries (k-d tries) to implement the value function for non-relational Temporal Difference (TD) methods. In order to achieve comparable performance for RRL, I implement the Rete algorithm to replace the k-d tries, due to its efficient handling of both the variable binding problem and variable numbers of actions. Tying aHTCs and Rete together, I present a rule grammar that both maps aHTCs onto Rete and allows the architecture to automatically extract relational features in order to support adaptation of the value function over time. I experiment with several refinement criteria and with additional functionality by which my agents attempt to determine whether re-refinement using different features might allow them to better learn a near-optimal policy. I present optimal results using a value criterion for several variants of BlocksWorld. I provide transfer results for BlocksWorld and a scalable Taxicab domain. I additionally introduce a Higher Order Grammar (HOG) that grants online, incremental RRL agents additional flexibility to introduce additional variables and corresponding relations as needed in order to learn effective value functions. I evaluate agents that use the HOG on a version of BlocksWorld and on an Adventure task. In summary, I present a new online, incremental RRL architecture, a grammar to map aHTCs onto Rete, and an implementation that is orders of magnitude faster than its predecessors. (PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies; https://deepblue.lib.umich.edu/bitstream/2027.42/145859/1/bazald_1.pd)
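    As a toy illustration of the underlying idea (far simpler than the aHTC/k-d-trie machinery in the thesis), a coarse tile and a finer tile can jointly approximate a value function, with TD-style updates spread across both resolutions so the coarse tile generalizes to nearby states:

```python
# Minimal tile-coding sketch (my illustration of the idea, not the thesis
# implementation): two tilings over a 1-D state, coarse and fine.
from collections import defaultdict

class TileCoder:
    def __init__(self, widths=(1.0, 0.25)):       # general tile + finer tile
        self.widths = widths
        self.weights = defaultdict(float)          # (width, tile index) -> w

    def features(self, s):
        return [(w, int(s // w)) for w in self.widths]

    def value(self, s):
        return sum(self.weights[f] for f in self.features(s))

    def td_update(self, s, target, alpha=0.1):
        error = target - self.value(s)
        for f in self.features(s):                 # spread the correction
            self.weights[f] += alpha * error / len(self.widths)

coder = TileCoder()
coder.td_update(s=0.6, target=1.0)
# 0.9 shares only the coarse tile with 0.6, so it gets half the credit:
print(round(coder.value(0.6), 3), round(coder.value(0.9), 3))  # 0.1 0.05
```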

    Realtime ray tracing and interactive global illumination

    One of the most sought-after goals in computer graphics is to generate "realism in real time", i.e. the generation of realistic-looking images at realtime frame rates. Today, virtually all approaches towards realtime rendering use graphics hardware, which is based almost exclusively on triangle rasterization. Unfortunately, though this technology has seen tremendous progress over the last few years, for many applications it is currently reaching its limits in model complexity, supported features, and achievable realism. An alternative to triangle rasterization is the ray tracing algorithm, which is well known for its higher flexibility, its generally higher achievable realism, and its superior scalability in both model size and compute power. However, ray tracing is also computationally demanding and thus so far is used almost exclusively for high-quality offline rendering tasks. This dissertation focuses on the question why ray tracing is likely to soon play a larger role for interactive applications, and how this scenario can be reached. To this end, we discuss the RTRT/OpenRT realtime ray tracing system, a software-based ray tracing system that achieves interactive to realtime frame rates on today's commodity CPUs. In particular, we discuss the overall system design, the efficient implementation of the core ray tracing algorithms, techniques for handling dynamic scenes, an efficient parallelization framework, and an OpenGL-like low-level API. Taken together, these techniques form a complete realtime rendering engine that supports massively complex scenes, highly realistic and physically correct shading, and even physically based lighting simulation at interactive rates. In the last part of this thesis we then discuss the implications and potential of realtime ray tracing for global illumination, and how the availability of this new technology can be leveraged to finally achieve interactive global illumination: the physically correct simulation of light transport at interactive rates.
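    For readers unfamiliar with the core operation, a textbook ray-sphere intersection in Python (my illustration; the RTRT/OpenRT system itself is a heavily optimized engine) shows the kind of computation a ray tracer performs for every ray:

```python
# Textbook ray-sphere intersection sketch (illustrative; not RTRT/OpenRT code).
import math

def intersect_sphere(origin, direction, center, radius):
    """Return distance t to the nearest hit, or None. direction is unit-length,
    so the quadratic in t has leading coefficient a = 1."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                       # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0      # nearer of the two roots
    return t if t > 0.0 else None         # only hits in front of the origin

# One primary ray: camera at the origin, unit sphere 5 units ahead.
t = intersect_sphere(origin=(0, 0, 0), direction=(0, 0, 1),
                     center=(0, 0, 5), radius=1.0)
print(t)  # 4.0: the ray hits the sphere's front surface
```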

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume
    • 
