
    The APOKASC Catalog: An Asteroseismic and Spectroscopic Joint Survey of Targets in the Kepler Fields

    We present the first APOKASC catalog of spectroscopic and asteroseismic properties of 1916 red giants observed in the Kepler fields. The spectroscopic parameters provided by the Apache Point Observatory Galactic Evolution Experiment project are complemented with asteroseismic surface gravities, masses, radii, and mean densities determined by members of the Kepler Asteroseismology Science Consortium. We assess both random and systematic sources of error and include a discussion of sample selection for giants in the Kepler fields. Total uncertainties in the main catalog properties are of order 80 K in Teff, 0.06 dex in [M/H], 0.014 dex in log g, and 12% and 5% in mass and radius, respectively; these reflect a combination of systematic and random errors. Asteroseismic surface gravities are substantially more precise and accurate than spectroscopic ones, and we find good agreement between their mean values and the calibrated spectroscopic surface gravities, although there are systematic underlying trends with Teff and log g. Our effective temperature scale is 0 to 200 K cooler than that expected from the Infrared Flux Method, depending on the adopted extinction map, which provides evidence for a lower average extinction than that inferred for the Kepler Input Catalog (KIC). We find a reasonable correspondence between the photometric KIC and spectroscopic APOKASC metallicity scales, with increased dispersion in KIC metallicities as the absolute metal abundance decreases, and offsets in Teff and log g consistent with those derived in the literature. We present mean fitting relations between APOKASC and KIC observables and discuss future prospects, strengths, and limitations of the catalog data. (49 pages. ApJS, in press. Full machine-readable ASCII files available as ancillary data.)
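
    The catalog's masses and radii derive from asteroseismic observables. As background (a sketch of ours, not the paper's pipeline, which involves several methods and corrections), the standard scaling relations combine the frequency of maximum power nu_max, the large frequency separation Delta_nu, and Teff against solar reference values; the solar constants below are commonly used values and an assumption here.

```python
# Standard asteroseismic scaling relations (background sketch only; the
# APOKASC analysis itself uses multiple pipelines and calibrations).
NU_MAX_SUN = 3090.0   # muHz, solar frequency of maximum power (assumed value)
DNU_SUN = 135.1       # muHz, solar large frequency separation (assumed value)
TEFF_SUN = 5777.0     # K

def seismic_radius(nu_max: float, dnu: float, teff: float) -> float:
    """Radius in solar units: R ~ (nu_max)(dnu)^-2 (Teff)^(1/2)."""
    return (nu_max / NU_MAX_SUN) * (dnu / DNU_SUN) ** -2 * (teff / TEFF_SUN) ** 0.5

def seismic_mass(nu_max: float, dnu: float, teff: float) -> float:
    """Mass in solar units: M ~ (nu_max)^3 (dnu)^-4 (Teff)^(3/2)."""
    return (nu_max / NU_MAX_SUN) ** 3 * (dnu / DNU_SUN) ** -4 * (teff / TEFF_SUN) ** 1.5

# Example: a typical red-giant-branch star.
print(seismic_radius(nu_max=30.0, dnu=3.9, teff=4800.0))  # ~ 10.6 R_sun
print(seismic_mass(nu_max=30.0, dnu=3.9, teff=4800.0))    # ~ 1.0 M_sun
```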

    Combatting Advanced Persistent Threat via Causality Inference and Program Analysis

    Cyber attackers are becoming increasingly sophisticated. In particular, the Advanced Persistent Threat (APT) is a class of attack that targets a specific organization and compromises systems over a long period of time without being detected. Over the years, we have seen notorious examples of APTs, including Stuxnet, which disrupted Iranian nuclear centrifuges, and data breaches affecting millions of users. Investigating an APT is challenging because it occurs over an extended period of time and the attack process is highly sophisticated and stealthy. Preventing APTs is also difficult due to ever-expanding attack vectors. In this dissertation, we present proposals for dealing with these challenges. First, for attack investigation, we present LDX, which conducts precise counterfactual causality inference to determine dependencies between system calls (e.g., between input and output system calls), allowing investigators to determine the origin of an attack (e.g., receiving a spam email), the propagation path of the attack, and its consequences. LDX is four times more accurate and two orders of magnitude faster than state-of-the-art taint analysis techniques. We then present a practical model-based causality inference system, MCI, which achieves precise and accurate causality inference without requiring any modification or instrumentation of end-user systems. Second, we present a general protection system against a wide spectrum of attack vectors and methods: A2C prevents a wide range of attacks by randomizing inputs such that any malicious payloads contained in them are corrupted. The protection provided by A2C is both general (effective against various attack vectors) and practical (7% runtime overhead).
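
    To make the input-randomization idea concrete, here is a minimal Python sketch of the general approach the abstract describes: inputs are mutated with a one-time random key on arrival, so any embedded payload is corrupted in memory, and are decoded only at legitimate, trusted use sites. The XOR scheme and function names are illustrative assumptions, not A2C's actual design.

```python
import os

def randomize_input(data: bytes) -> tuple[bytes, bytes]:
    """Mutate untrusted input with a one-time random key so that any
    embedded payload (e.g., shellcode) is corrupted in memory.
    (Illustrative sketch; not A2C's actual encoding.)"""
    key = os.urandom(len(data))
    mutated = bytes(b ^ k for b, k in zip(data, key))
    return mutated, key

def decode_at_trusted_use(mutated: bytes, key: bytes) -> bytes:
    """Recover the original bytes only at a legitimate use site
    (e.g., just before parsing), after the data has crossed the
    attack-prone buffer-handling code."""
    return bytes(b ^ k for b, k in zip(mutated, key))

if __name__ == "__main__":
    payload = b"\x90\x90\x31\xc0"        # hypothetical malicious bytes
    mutated, key = randomize_input(payload)
    assert mutated != payload            # payload corrupted in transit
    assert decode_at_trusted_use(mutated, key) == payload
```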

    Algorithmic analysis of parity games

    Parity games are discrete infinite games of two players with complete information. There are two main motivations to study parity games. Firstly, the problem of deciding the winner in a parity game is polynomially equivalent to modal µ-calculus model checking, and is therefore very important in the field of computer-aided verification. Secondly, there is the intriguing status of parity games from the point of view of complexity theory: solving parity games is one of the few natural problems in the class NP∩co-NP (even in UP∩co-UP), yet no polynomial-time algorithm is known, despite a substantial amount of effort to find one. In this thesis we add to the body of work on parity games. We start by presenting parity games and explaining the concepts behind them, giving a survey of known algorithms, and showing their relationship to other problems. In the second part of the thesis we want to answer the following question: Are there classes of graphs on which we can solve parity games in polynomial time?
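
    For readers unfamiliar with the objective, here is a compact statement of the standard winning condition (using the common max-priority convention; the thesis may use the min-parity variant instead):

```latex
% A parity game is played on a finite directed graph G = (V, E) with a
% priority function \Omega : V \to \{0, 1, \dots, d\}; the vertex set is
% partitioned between players 0 and 1, who move a token along edges forever.
\[
  \text{Player 0 wins the play } \pi = v_0 v_1 v_2 \cdots
  \quad\Longleftrightarrow\quad
  \max \{\, p \mid \Omega(v_i) = p \text{ for infinitely many } i \,\}
  \text{ is even.}
\]
```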

    Querying Data Exchange Settings Beyond Positive Queries

    Data exchange, the problem of transferring data from a source schema to a target schema, has been studied for several years. The semantics of answering positive queries over the target schema was defined in early work, but little attention has been paid to more general queries. The few existing proposals of semantics for more general queries either do not properly extend the standard semantics under positive queries, giving rise to counterintuitive answers, or make query answering undecidable even for the most important data exchange settings, e.g., those with weakly-acyclic dependencies. The goal of this paper is to provide a new semantics for data exchange that is able to deal with general queries. At the same time, we want our semantics to coincide with the classical one when focusing on positive queries, and to not trade off too much in terms of complexity of query answering. We show that query answering is undecidable in general under the new semantics, but it is coNP-complete when the dependencies are weakly-acyclic. Moreover, in the latter case, we show that exact answers under our semantics can be computed by means of logic programs with choice, thus exploiting existing efficient systems. For more efficient computation, we also show that our semantics allows for the construction of a representative target instance, similar in spirit to a universal solution, that can be exploited for computing approximate answers in polynomial time. Under consideration in Theory and Practice of Logic Programming (TPLP).
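
    As an illustration of the kind of setting involved (this example is ours, not the paper's), a data exchange setting is specified by tuple-generating dependencies (tgds); the following pair is weakly acyclic because the existentially invented value never feeds back into a cycle of positions:

```latex
% Source relation Emp(name, dept); target relations Works(name, dept, mgr)
% and Dept(dept). The first tgd invents a manager via an existential.
\begin{align*}
  \forall n\,\forall d\; \big( \mathrm{Emp}(n, d) &\rightarrow
      \exists m\; \mathrm{Works}(n, d, m) \big) \\
  \forall n\,\forall d\,\forall m\; \big( \mathrm{Works}(n, d, m) &\rightarrow
      \mathrm{Dept}(d) \big)
\end{align*}
```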

    Calculus for decision systems

    The conceptualization of the term "system" has become highly dependent on the application domain: what a physicist means by the term might differ from what a sociologist means by it. In 1956, Bertalanffy [1] defined a system as "a set of units with relationships among them". This and many other definitions of system share the idea of a system as a black box with parts or elements interacting with each other. This means that at some level of abstraction all systems are similar; what eventually differentiates one system from another is the set of underlying equations that describe how these parts interact within the system.

    In this dissertation we develop a framework that allows us to characterize systems at the interaction level, i.e., a framework that gives us the capability to capture how and when the elements of a system interact. This framework is a process algebra called the Calculus for Decision Systems (CDS). The calculus provides means to create mathematical expressions that capture how systems interact and react to different stimuli. It also provides the ability to formulate procedures to analyze these interactions and to derive further insights about the system.

    After defining the syntax and reduction rules of the CDS, we develop a notion of behavioral equivalence for decision systems. This equivalence, called bisimulation, allows us to compare decision systems from the behavioral standpoint. We apply our results to games in extensive form, some physical systems, and cyber-physical systems.

    Using the CDS for the study of games in extensive form, we are able to define the concept of subgame perfect equilibrium for a two-person game with perfect information. We then investigate the behavior of two games played in parallel by one of the players, explore different couplings between games, and compare, using bisimulation, the behavior of two games that result from two different couplings. The results show that, with some probability, the behavior of playing a game as first player or as second player can be irrelevant.

    Decision systems can comprise multiple decision makers. We show that in the case where two decision makers interact, we can use extensive games to represent the conflict resolution. For the case of more than two decision makers, we show how to characterize the interactions between elements within an organizational structure, which can be perceived as multiple players interacting in a game. In this context, we use the CDS as an information-sharing mechanism to transfer the inputs and outputs from one extensive game to another. We show the suitability of our calculus for the analysis of organizational structures and point out some potential research extensions in that area.

    The other general area we investigate using the CDS is cyber-physical systems (CPS), a class of systems characterized by a tight relationship between systems (or processes) in the areas of computing, communication, and physics. We use the CDS to describe the interaction between elements in some simple mechanical systems, as well as a particular case of the generalized railroad crossing (GRC) problem, a typical CPS case study, for which we show two approaches to a solution.

    This dissertation does not intend to develop new methods for solving game-theoretical problems or the equations of motion of a physical system; it aims to be a seminal work towards a general framework for studying systems and equivalence of systems from a formal standpoint, and to increase the applications of formal methods to real-world problems.
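
    Since the dissertation's behavioral equivalence is a bisimulation, the classical definition from process algebra is worth recalling (this is the standard strong form; the CDS version may differ in detail):

```latex
% R is a (strong) bisimulation on processes if, whenever (p, q) \in R:
\[
\begin{aligned}
  p \xrightarrow{\,a\,} p' &\;\Rightarrow\; \exists q'.\;
      q \xrightarrow{\,a\,} q' \text{ and } (p', q') \in R, \\
  q \xrightarrow{\,a\,} q' &\;\Rightarrow\; \exists p'.\;
      p \xrightarrow{\,a\,} p' \text{ and } (p', q') \in R.
\end{aligned}
\]
% p and q are bisimilar (p ~ q) iff some bisimulation relates them.
```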

    Pseudo-contractions as Gentle Repairs

    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable to preserve as much of the existing knowledge as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two families of frameworks have been developed over the past decades with numerous ideas in common but little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
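
    A toy description-logic example of the shared idea (ours, for illustration, not taken from the paper): when an ontology entails an unwanted consequence, a weaker replacement axiom can break the entailment without deleting the axiom outright.

```latex
% Ontology O = { A \sqsubseteq B,\; B \sqsubseteq C } entails the unwanted
% axiom A \sqsubseteq C. A classical repair deletes B \sqsubseteq C outright;
% a gentle repair (pseudo-contraction) weakens it instead:
\[
  B \sqsubseteq C
  \quad\rightsquigarrow\quad
  B \sqcap \lnot A \sqsubseteq C ,
\]
% which is entailed by the original axiom, so less information is lost,
% yet \{ A \sqsubseteq B,\; B \sqcap \lnot A \sqsubseteq C \}
% no longer entails A \sqsubseteq C.
```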

    ACP: Algebra of Communicating Processes: proceedings of the 2nd workshop, Eindhoven, The Netherlands, 1995


    Logic and Automata

    Mathematical logic and automata theory are two scientific disciplines with a fundamentally close relationship. The authors of Logic and Automata take the occasion of the sixtieth birthday of Wolfgang Thomas to present a tour d'horizon of automata theory and logic. The twenty papers in this volume cover many different facets of logic and automata theory, emphasizing the connections to other disciplines such as games, algorithms, and semigroup theory, as well as discussing current challenges in the field

    Control-Flow Security.

    Computer security is a topic of paramount importance in computing today. Though enormous effort has been expended to reduce the software attack surface, vulnerabilities remain. In contemporary attacks, subverting the control-flow of an application is often the cornerstone of a successful attempt to compromise a system. This subversion, known as a control-flow attack, remains an essential building block of many software exploits. This dissertation proposes a multi-pronged approach to securing software control-flow in order to harden the software attack surface. The primary domain of this dissertation is the elimination of the basic mechanism in software that enables control-flow attacks. I address the prevalence of such attacks by going to the heart of the problem: removing all of the operations that inject runtime data into program control. This novel approach, Control-Data Isolation, provides protection by subtracting the root of the problem: indirect control-flow. Previous works have attempted to address control-flow attacks by layering on additional complexity in an effort to shield software from attack. In this work, I take a subtractive approach, removing the primary cause of both contemporary and classic control-flow attacks. This approach advances the state of the art in control-flow security by ensuring the integrity of the programmer-intended control-flow graph of an application at runtime. Further, this dissertation provides methodologies to eliminate the barriers to adoption of control-data isolation while simultaneously moving ahead to reduce future attacks. The secondary domain of this dissertation is a technique that leverages the process by which software is engineered, tested, and executed to pinpoint the statements in software most likely to be exploited by an attacker, defined as the Dynamic Control Frontier. Rather than reacting to successful attacks by patching software, the approach in this dissertation moves ahead of the attacker and identifies the susceptible code regions before they are compromised. In total, this dissertation combines software and hardware design techniques to eliminate contemporary control-flow attacks, and it demonstrates the efficacy and viability of a subtractive approach to software security, eliminating the elements underlying security vulnerabilities.
    PhD dissertation, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133304/1/warthur_1.pd
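
    To illustrate the subtractive idea (a sketch of ours in Python; the dissertation's actual transformation targets native indirect jumps and calls): an indirect dispatch lets runtime data choose the target, while a control-data-isolated version enumerates every programmer-intended target as an explicit direct branch, so data never becomes a control destination.

```python
def start(): print("starting")
def stop():  print("stopping")

# Indirect dispatch: runtime data (the untrusted string) selects the code
# to run, so corrupted or attacker-chosen data steers control flow.
HANDLERS = {"start": start, "stop": stop}

def dispatch_indirect(cmd: str) -> None:
    HANDLERS[cmd]()  # data flows directly into program control

# Control-data-isolated style: every legal target appears as an explicit,
# direct branch; unexpected data cannot name a new destination.
def dispatch_direct(cmd: str) -> None:
    if cmd == "start":
        start()
    elif cmd == "stop":
        stop()
    else:
        raise ValueError(f"unknown command: {cmd!r}")
```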

    Dynamic block encryption with self-authenticating key exchange

    One of the greatest challenges facing cryptographers is the mechanism used for key exchange. When secret data is transmitted, there may be an attacker who tries to intercept and decrypt the message. Having done so, he or she might simply exploit the information obtained, or attempt to tamper with the message and thus mislead the recipient. Both cases are equally serious and may cause great harm. In cryptography, there are two commonly used methods of exchanging secret keys between parties. In the first method, symmetric cryptography, the key is sent in advance over some secure channel which only the intended recipient can read. The second method is public key exchange, where each party has a private and a public key; the public key is shared and the private key is kept locally. In both cases, keys are exchanged between two parties. In this thesis, we propose a method whereby the risk of exchanging keys is minimised. The key is embedded in the encrypted text using a process that we call 'chirp coding' and recovered by the recipient using a process based on correlation. The chirp coding parameters are exchanged between users by employing a USB flash memory retained by each user. If the keys are compromised they are still not usable, because an attacker can only have access to part of the key. Alternatively, the software can be configured to operate in a one-time parameter mode, in which the parameters are agreed upon in advance and there is no parameter exchange during file transmission, except, of course, for the key embedded in the ciphertext. The thesis also introduces a method of encryption which utilises dynamic blocks, where the block size differs for each block. Prime numbers are used to drive two random number generators: a Linear Congruential Generator (LCG), which takes in the seed and initialises the system, and a Blum Blum Shub (BBS) generator, which is used to generate the random streams that encrypt messages, images, or video clips, for example. In each case, the key created is text-dependent and therefore changes with each message sent. The scheme presented in this research is composed of five basic modules. The first is the key generation module, where the key generated is message-dependent. The second, the encryption module, performs the data encryption. The third, the key exchange module, embeds the key into the encrypted text. Once this is done, the message is transmitted, and the recipient uses the key extraction module to retrieve the key; finally, the decryption module is executed to decrypt the message and authenticate it. In addition, the message may be compressed before encryption and decompressed by the recipient after decryption using standard compression tools.
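
    For concreteness, here is a minimal sketch of the two textbook generators the thesis combines. All parameter values are illustrative assumptions; the thesis's actual parameters and seeding scheme are not given in the abstract.

```python
def lcg(seed: int, a: int = 1103515245, c: int = 12345, m: int = 2**31):
    """Linear Congruential Generator: x_{n+1} = (a*x_n + c) mod m.
    (The constants are the classic glibc-style values, used here
    only as an illustration.)"""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def bbs_bits(seed: int, p: int = 7, q: int = 19):
    """Blum Blum Shub: x_{n+1} = x_n^2 mod (p*q), emitting the least
    significant bit of each state. p and q must be primes congruent to
    3 mod 4; these tiny demo primes are far too small for real use."""
    m = p * q
    x = seed % m
    while True:
        x = (x * x) % m
        yield x & 1

# Example: initialise with the LCG, then draw a keystream byte from BBS.
lcg_stream = lcg(seed=42)
bbs = bbs_bits(seed=next(lcg_stream))
keystream_byte = sum(next(bbs) << i for i in range(8))
```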