
    Sparse Positional Strategies for Safety Games

    We consider the problem of obtaining sparse positional strategies for safety games. Such games are a commonly used model in many formal methods, as they make the interaction of a system with its environment explicit. Often, a winning strategy for one of the players is used as a certificate or as an artefact for further processing in the application. Small certificates, i.e., strategies that can be written down very compactly, are typically preferred. For safety games, it suffices to consider positional strategies. These map each game position of a player onto the move to be taken whenever the play enters that position. To represent positional strategies compactly, a common goal is to minimize the number of positions for which the winning player's move needs to be defined such that the game is still won by the same player without ever visiting a position with an undefined next move. We call winning strategies that define the next move for only few of the player's positions sparse. Unfortunately, even roughly approximating the density of the sparsest strategy for a safety game has been shown to be NP-hard. Thus, to obtain sparse strategies in practice, one either has to apply heuristics or use an exhaustive search technique, such as ILP (integer linear programming) solving. In this paper, we perform a comparative study of currently available methods for obtaining sparse winning strategies for the safety player in safety games. We consider well-known techniques, such as ILP or SAT (satisfiability) solving, as well as a novel technique based on iterative linear programming. The results of this paper tell us whether current techniques are already scalable enough for practical use.
    Comment: In Proceedings SYNT 2012, arXiv:1207.055
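Since exactly sparsifying a strategy is NP-hard, a natural baseline is the standard greatest-fixpoint computation of the safety player's winning region followed by a greedy choice of moves. The sketch below (our own illustration; function and variable names are not from the paper) computes the winning region of a finite safety game and extracts one positional strategy, assuming every position has at least one successor:

```python
def safety_winning_region(pos0, pos1, edges, safe):
    # Greatest fixpoint: keep only those safe positions from which the
    # safety player (player 0) can force the play to stay in `win` forever.
    # pos0/pos1: positions owned by player 0 / player 1 (the environment);
    # edges: dict mapping each position to its list of successors.
    win = {p for p in pos0 | pos1 if p in safe}
    changed = True
    while changed:
        changed = False
        for p in list(win):
            succ = edges[p]
            # Player 0 needs SOME successor inside win; the environment
            # keeps p winning only if ALL its successors stay inside win.
            ok = any(q in win for q in succ) if p in pos0 else all(q in win for q in succ)
            if not ok:
                win.remove(p)
                changed = True
    return win

def positional_strategy(pos0, win, edges):
    # One (not necessarily sparse) positional strategy: at each winning
    # player-0 position, pick any successor that stays in the winning region.
    return {p: next(q for q in edges[p] if q in win)
            for p in pos0 if p in win}
```

A sparse strategy would additionally leave the move undefined at positions that no consistent play can reach, which is exactly the minimization the paper attacks with ILP, SAT, and iterative LP.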

    Optimal Control of a Single Queue with Retransmissions: Delay-Dropping Tradeoffs

    A single queue incorporating a retransmission protocol is investigated, assuming that the sequence of per-effort success probabilities in the Automatic Retransmission reQuest (ARQ) chain is a priori defined and no channel state information is available at the transmitter. A Markov Decision Problem with an average cost criterion is formulated, where the possible actions are to either continue the retransmission process of an erroneous packet at the next time slot or to drop the packet and move on to the next packet awaiting transmission. The cost per slot is a linear combination of the current queue length and a penalty term incurred when dropping is chosen. The investigation seeks policies that provide the best possible average packet delay-dropping trade-off for Quality of Service guarantees. An optimal deterministic stationary policy is shown to exist, and several of its structural properties are obtained. Based on these, a class of suboptimal (K, L)-policies is introduced. These suggest that it is almost optimal to use a K-truncated ARQ protocol as long as the queue length is below L, and otherwise to send all packets in one shot. The work concludes with an evaluation of the optimal delay-dropping tradeoff using dynamic programming and a comparison between the optimal and suboptimal policies.
    Comment: 29 pages, 8 figures, submitted to IEEE Transactions on Wireless Communications
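The continue/drop trade-off can be made concrete on a toy version of such a queue. The sketch below solves a simplified MDP by discounted value iteration, as a stand-in for the paper's average-cost criterion; the state space, transition model, and all parameter values are illustrative assumptions, not the paper's model:

```python
def value_iteration(N=20, p=0.3, a=0.4, D=5.0, gamma=0.95, tol=1e-6):
    """Toy queue-with-retransmissions MDP (illustrative, not the paper's).

    State: queue length 0..N (truncated).  Actions when non-empty:
    0 = keep retransmitting the head packet (success prob p),
    1 = drop it and pay penalty D.  A new packet arrives w.p. a each slot.
    Per-slot cost: queue length, plus D when dropping is chosen.
    """
    V = [0.0] * (N + 1)
    while True:
        def after(m):
            # Expected discounted next value given m packets before arrivals.
            return gamma * (a * V[min(m + 1, N)] + (1 - a) * V[m])
        newV, policy = [0.0] * (N + 1), [0] * (N + 1)
        for n in range(N + 1):
            if n == 0:
                newV[0] = after(0)          # nothing to send, just wait
                continue
            cont = n + p * after(n - 1) + (1 - p) * after(n)
            drop = n + D + after(n - 1)
            newV[n], policy[n] = min((cont, 0), (drop, 1))
        if max(abs(u - v) for u, v in zip(V, newV)) < tol:
            return newV, policy
        V = newV
```

Even on this crude model the dynamic-programming recursion exposes the structure the paper exploits: the value function grows with the queue length, so dropping, when it pays off at all, pays off at longer queues.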

    When centers can fail: a close second opportunity

    This paper presents the p-next center problem, which aims to locate p out of n centers so as to minimize the maximum cost of allocating customers to backup centers. In this problem it is assumed that centers can fail and customers only realize that their closest (reference) center has failed upon arrival. When this happens, they move to their backup center, i.e., to the center that is closest to the reference center. Hence, minimizing the maximum travel distance from a customer to its backup center can be seen as an alternative approach to handling humanitarian logistics, one that hedges customers against severe scenario deteriorations when a center fails. For this extension of the p-center problem we have developed several integer programming formulations, together with corresponding strengthenings based on valid inequalities and variable fixing. The suitability of these formulations for solving the p-next center problem with standard software is analyzed in a series of computational experiments, carried out on instances taken from the discrete location literature.
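The objective is easy to state operationally: each customer walks to its closest open center and, finding it failed, continues to the open center nearest that one. A brute-force reference implementation (our own sketch, usable only on tiny instances; the paper's integer programming formulations are the scalable tool) makes the cost function explicit:

```python
from itertools import combinations

def p_next_center(dist, p):
    """Enumerate all p-subsets of sites and evaluate the p-next center cost.

    dist[i][j]: distance between sites i and j; customers are assumed
    co-located with the candidate sites, as in classic p-center instances.
    Requires p >= 2 so every reference center has a backup.
    """
    n = len(dist)
    best_val, best_set = float("inf"), None
    for centers in combinations(range(n), p):
        worst = 0
        for c in range(n):
            ref = min(centers, key=lambda j: dist[c][j])        # closest open center
            backup = min((j for j in centers if j != ref),
                         key=lambda j: dist[ref][j])            # center closest to ref
            worst = max(worst, dist[c][ref] + dist[ref][backup])
        if worst < best_val:
            best_val, best_set = worst, centers
    return best_val, best_set
```

Note how the customer pays the detour d(customer, reference) + d(reference, backup); it is this two-leg trip, not the direct customer-to-backup distance, that makes the problem differ from the classic p-center problem.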

    Design of an Object Detection System for the KRSBI Robot Based on the Raspberry Pi 3 Mini PC

    KRSBI Wheeled is one of the competitions in the Indonesian Robot Contest: a football match in which a team of three fully autonomous robots plays against another team. Each robot uses a wheeled drive; to do its work, the robot relies on a camera sensor mounted on its front, and for movement the authors use three omni wheels so the robot can move in any direction, making it easier to approach the ball. For image processing and input/output handling, the authors use a Raspberry Pi 3 Single Board Computer (SBC), programmed in the Python programming language with the OpenCV image processing library. To lighten the load on the Raspberry Pi 3, it is assisted by an Arduino Mega 2560 microcontroller; the two devices are connected serially via the USB port. The Raspberry Pi processes the image data obtained from the webcam input. If a ball is detected, the coordinates of its position are encoded as characters and sent to the Arduino Mega 2560, which then drives the motors so the robot moves toward the ball. Test results show that the maximum distance at which the camera sensor can detect a ball is ±5 meters, with a maximum viewing angle of 120°.
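The abstract says the detected coordinates are "encoded as characters" and sent over USB serial, but does not give the wire format. The sketch below shows one hypothetical ASCII framing such a link could use (entirely our assumption, including the frame layout and frame size): `B<x>,<y>\n`, clamped to the camera frame, which an Arduino sketch could parse with `readStringUntil('\n')`:

```python
def encode_ball_position(x, y, width=640, height=480):
    # Hypothetical framing for the Raspberry Pi -> Arduino serial link.
    # Clamp to the camera frame so the receiver never sees out-of-range
    # values, then emit an ASCII line: "B<x>,<y>\n".
    x = max(0, min(int(x), width - 1))
    y = max(0, min(int(y), height - 1))
    return f"B{x},{y}\n".encode("ascii")

def decode_ball_position(frame):
    # Receiver-side parsing, mirrored in Python for testing; on the robot
    # this logic would live in the Arduino sketch.
    body = frame.decode("ascii").strip()
    if not body.startswith("B"):
        raise ValueError("bad frame")
    xs, ys = body[1:].split(",")
    return int(xs), int(ys)
```

A line-oriented ASCII protocol like this is easy to debug with a serial monitor, at the cost of a few extra bytes per frame compared to a packed binary encoding.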

    Superwylbur macro: D50TD80

    The purpose of this project was to write a Superwylbur macro, called D50TD80, for the Computing Information Center at Northern Illinois University. The project was completed using Superwylbur, an interactive computing environment that provides access to the University's mainframe computer. Superwylbur macro programming combines Superwylbur commands with a set of special instructions for branching and decision making. The D50TD80 macro was designed to assist users in moving data sets from the current 3350 disk packs to the newly installed, more efficient 3380 disk packs. The macro offers the user three options: 1) create a partitioned data set from sequential data sets and place it on a 3380 disk pack, 2) move an entire partitioned data set to a 3380 disk pack, and 3) move a sequential data set to a 3380 disk pack. Options 2 and 3 are executed by calling modified versions of existing macros; changes to these macros included additional error checking and more informative prompts. The first option begins by asking the user for the name and location of the partitioned data set they wish to create. The user is then prompted for the names of the sequential data sets to be saved as members of the partitioned data set, and the requested data sets are placed into the specified PDS. The macro was coded to be user friendly by anticipating a variety of user errors and making the prompts and error messages friendly yet informative. The method followed to complete the end product was basically a hands-on approach. Using The Superwylbur Macro Programming Manual as a primary reference, a simple macro was first written to assist a user in uploading and downloading files from Superwylbur. After some practice with the language, the D50TD80 macro was coded. The macro will be available for public use in the Spring of 1989.

    Initial Kernel Timing Using a Simple PIM Performance Model

    This presentation describes some initial results of paper-and-pencil studies of five application kernels applied to a processor-in-memory (PIM) system roughly similar to the Cascade Lightweight Processor (LWP). The application kernels are: linked list traversal, sum of the leaf nodes of a tree, bitonic sort, vector sum, and Gaussian elimination. The intent of this work is to guide and validate work on the Cascade project in the areas of compilers, simulators, and languages. We first discuss the generic PIM structure. Then, we explain the concepts needed to program a parallel PIM system (locality, threads, parcels). Next, we present a simple PIM performance model that is used in the remainder of the presentation. For each kernel, we then present a set of codes, including codes for a single PIM node and codes for multiple PIM nodes that either move data to threads or move threads to data. These codes are written at a fairly low level, between assembly and C, but much closer to C than to assembly. For each code, we present some hand-drafted timing forecasts based on the simple PIM performance model. Finally, we conclude by discussing what we have learned from this work, including which programming styles seem to work best, from the point of view of both expressiveness and performance.
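The "move data to threads" versus "move threads to data" comparison can be captured with a one-line cost model in the spirit of such hand-drafted forecasts. The sketch below is our own illustration, with invented parameter values, not the presentation's actual model or numbers:

```python
def traversal_time(n_nodes, work_cycles=10, parcel_latency=100,
                   move_thread=True):
    # Toy cost model for a linked-list traversal on a parcel-based PIM
    # system, assuming every pointer chase lands on a different PIM node.
    # - move_thread=True: the thread migrates to the data, costing one
    #   one-way parcel per hop;
    # - move_thread=False: the thread stays put and fetches each node,
    #   costing a request/response parcel pair per hop.
    hops = n_nodes - 1
    parcels = hops if move_thread else 2 * hops
    return n_nodes * work_cycles + parcels * parcel_latency
```

Under these (assumed) parameters, migrating the thread halves the communication term, which is the kind of qualitative conclusion hand timing forecasts on a simple model are meant to surface before committing to a simulator.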