
    Singularities and MZVs in Feynman Loop Integrals (Various aspects of multiple zeta values)

    "Various aspects of multiple zeta values". July 23~26, 2013. edited by Kentaro Ihara. The papers presented in this volume of RIMS Kôkyûroku Bessatsu are in final form and refereed.Feynman diagram is a theoretical tool for computing physical quantities in particle physics, and each diagram is given as a multiple integral. Often Feynman diagrams can be expressed by multiple zeta values (MZVs) and their generalizations. We discuss relations between singularities of diagrams and types of MZVs from a physicist's viewpoint. This provides a hint to a connection between topology of a diagram and its value in terms of MZVs. New technologies for computing complicated diagrams and unsolved problems are also discussed

    Verification of a fieldbus scheduling protocol using timed automata

    This paper deals with the formal verification of a fieldbus real-time scheduling mechanism, using the notion of timed automata and the UPPAAL model checker. A new approach is proposed here that treats the set of schedulers that regulate access to a fieldbus as a separate entity, called the scheduling layer. In addition, a network with a changing topology is considered, where nodes may be turned on or off. The behaviour of the scheduling layer in conjunction with the data link, the medium, and the network management layer is examined, and it is proved that it enjoys a number of desirable properties.
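    As a rough illustration of the kind of property being verified, the sketch below (a hypothetical Python abstraction, not the paper's UPPAAL timed-automata model) exhaustively explores a tiny scheduling-layer model in which nodes may be switched on or off, and checks that at most one station ever transmits on the medium. The node count, the token-passing policy, and the on/off behaviour are assumptions made for illustration only, and real timing is abstracted away.

```python
# Brute-force reachability check of a safety property over a toy scheduling-layer model.
N = 3  # number of stations on the fieldbus (illustrative)

def successors(state):
    """Yield all states reachable in one step from `state`."""
    power, transmitting, token = state
    # 1) topology change: any node may be powered on or off;
    #    a node that is switched off stops transmitting.
    for i in range(N):
        p = list(power); p[i] = not p[i]
        t = list(transmitting)
        if not p[i]:
            t[i] = False
        yield (tuple(p), tuple(t), token)
    # 2) the token holder may start or stop transmitting while powered.
    if power[token]:
        t = list(transmitting); t[token] = not t[token]
        yield (power, tuple(t), token)
    # 3) the scheduling layer passes the token to the next station;
    #    the old holder must have released the medium first.
    if not transmitting[token]:
        yield (power, transmitting, (token + 1) % N)

def safe(state):
    # Safety property: at most one station transmits on the medium at any time.
    _, transmitting, _ = state
    return sum(transmitting) <= 1

initial = ((True,) * N, (False,) * N, 0)
seen, frontier = set(), [initial]
while frontier:
    s = frontier.pop()
    if s in seen:
        continue
    seen.add(s)
    assert safe(s), f"safety violated in {s}"
    frontier.extend(successors(s))

print(f"explored {len(seen)} reachable states; the safety property holds")
```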

    LARCMACS: A TeX macro set for typesetting NASA reports

    This LARCMACS user's manual describes the February 1988 version of LARCMACS, the TeX macro set used by the Technical Editing Branch (TEB) at NASA Langley Research Center. These macros were developed by the authors to facilitate the typesetting of NASA formal reports. They are also useful, however, for informal NASA reports and other technical documents such as meeting papers. LARCMACS are distributed by TEB for the convenience of the Langley TeX user community. LARCMACS contain macros for obtaining the standard double-column format for NASA reports, for typesetting tables in the ruled format traditional in NASA reports, and for typesetting difficult mathematical expressions. Each macro is described, and numerous examples are included. Definitions of the LARCMACS macros are also included.

    Applying Hierarchical Contextual Parsing with Visual Density and Geometric Features to Typeset Formula Recognition

    We demonstrate that recognition of scanned typeset mathematical expression images can be done by extracting maximum spanning trees from line-of-sight graphs weighted using geometric and visual density features. The approach used is hierarchical contextual parsing (HCP): hierarchical in terms of starting with connected components and building to the symbol level using visual, spatial, and contextual features of connected components. Once connected components have been segmented into symbols, a new set of spatial, visual, and contextual features is extracted. One set of visual features is used for symbol classification, and another for parsing. The features are used in parsing to assign classifications and confidences to edges in a line-of-sight symbol graph. Layout trees describe expression structure in terms of spatial relations between symbols, such as horizontal, subscript, and superscript. From the weighted graph, Edmonds' algorithm is used to extract a maximum spanning tree. Segmentation and parsing are done without using symbol classification information, and symbol classification is done independently of expression structure recognition. The commonality between the recognition processes is the type of features they use: visual densities. These visual densities are used for shape, spatial, and contextual information. The contextual information is shown to help in segmentation, parsing, and symbol recognition. The hierarchical contextual parsing has been implemented in the Python and Graph-based Online/Offline Recognizer for Math (Pythagor^m) system and tested on the InftyMCCDB-2 dataset. We created InftyMCCDB-2 from InftyCDB-2 as an open source dataset for scanned typeset math expression recognition. In building InftyMCCDB-2, modified formula structure representations were used to better capture the spatial positioning of symbols in the expression structures. Namely, baseline punctuation and symbol accents were moved out of horizontal baselines, as their positions are not horizontally aligned with symbols on a writing line. With the transformed spatial layouts and HCP, 95.97% of expressions were parsed correctly when given symbols, and 93.95% were parsed correctly when requiring symbol segmentation from connected components. Overall, HCP reached a 90.83% expression recognition rate from connected components.
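    As a toy illustration of the tree-extraction step (not the Pythagor^m implementation), the sketch below builds a small weighted, directed line-of-sight symbol graph and uses networkx's implementation of Edmonds' algorithm to recover a layout tree; the symbols, relation labels, and confidence weights are made up for the example.

```python
# Hypothetical relation graph for the expression "x^2 + 1": each directed edge
# carries a confidence weight and a spatial-relation label.
import networkx as nx

G = nx.DiGraph()
G.add_edge("x", "2", weight=0.92, relation="superscript")
G.add_edge("x", "+", weight=0.88, relation="horizontal")
G.add_edge("2", "+", weight=0.15, relation="horizontal")
G.add_edge("+", "1", weight=0.95, relation="horizontal")
G.add_edge("x", "1", weight=0.10, relation="horizontal")

# Edmonds' algorithm: the maximum-weight spanning arborescence keeps exactly one
# incoming edge per non-root symbol, giving a tree-shaped layout.
layout_tree = nx.maximum_spanning_arborescence(G, attr="weight")

for parent, child in layout_tree.edges():
    print(f"{parent} --{G[parent][child]['relation']}--> {child}")
# Expected edges (order may vary):
#   x --superscript--> 2
#   x --horizontal--> +
#   + --horizontal--> 1
```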

    Investigating the Spatial and Temporal Scale Variability of Ebullitive Flux from a Subarctic Thaw Pond System

    Arctic regions are experiencing more rapid warming than other parts of the world, leading to destabilization of carbon (C) that has been sequestered in permafrost, especially in peatlands where the C content of the peat is very high. More frequent incidence of thaw in permafrost peatlands is leading to the development of small thaw ponds that are known to be sources of methane (CH4) to the atmosphere, yet there is a lack of long-term studies of CH4 emission from these formations. This is of concern because CH4 has thirty-two times the global warming potential of carbon dioxide over a one-hundred-year timescale (Holmes et al., 2013). At a site in northern Sweden, we have collected over 3000 measurements of CH4 ebullition, or bubbling, from eight small thaw ponds (<0.001 km2) differing in physical and hydrological characteristics over seven growing seasons (2012-2018). We found ebullitive emission to be highly variable over space and time, with an average emission rate of 21.9 mg CH4 m-2 d-1. Between 2012 and 2015, ebullitive emission was weakly correlated with environmental conditions like atmospheric pressure and temperature and was potentially more influenced by the physical characteristics of the ponds themselves. Based on their rates of daily ebullitive emission, the ponds fell into four statistically significant groups, which appeared to differ from each other based on physical characteristics shared by the ponds within each group. This grouping, referred to below as pond types, distinguishes ponds from one another based on vegetation presence, pond depth, and hydrologic connectivity to neighboring fen areas (or lack thereof). Type 1 ponds, with the lowest daily ebullitive emissions measured, are the shallowest; they are hydrologically isolated, have low instances of sedge vegetation (Carex spp. and Eriophorum spp.), and have Sphagnum spp. mosses present within them. Type 2 ponds, which emit more ebullitive CH4 than type 1, are deeper, have more sedge vegetation present, and are hydrologically isolated. Type 3 ponds are the highest emitting on a daily scale and are the deepest, with more sedge vegetation present than type 2, yet remain hydrologically isolated. Type 4 ponds are shallower than type 3, have no Sphagnum spp. present, are surrounded by sedge vegetation, and are connected to a neighboring fen area allowing water to flow. Based on our findings and the available literature, we estimate that small ponds (<0.001 km2) emit between 0.2 and 1.0 Tg of CH4 through ebullition over an estimated 149 ice-free days. Using acoustic techniques, we determined that on a sub-daily timescale CH4 emission rates varied significantly over space and time within a single pond, with diel variability in bubbling rate following that of air temperature, shortwave radiation, and wind speed. Using imagery of seven ponds collected remotely from an unmanned aerial system (UAS) platform over five sampling seasons (2014-2018), we found that pond edge and water area varied significantly between ponds as well as over time, with water area varying significantly between pond types. Annual ebullitive flux was highest in ponds with edge areas of 50-150 m2, with smaller and larger ponds emitting less; however, this relationship is likely more related to physical differences between the ponds than to differences in overall size.
This work supports the importance of long-term studies that take advantage of sampling techniques across a range of spatial and temporal scales in order to adequately capture the variability in CH4 ebullition from these highly dynamic formations. Not only are high-resolution measurements of CH4 ebullition important, but the tandem monitoring of pond size and other physical characteristics that distinguish ponds from one another is also important to better understand the observed CH4 emissions. With an increase in the number of long-term studies such as this, we will be better able to model CH4 emissions from thawing permafrost ecosystems in the future.
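    For a sense of how the 0.2-1.0 Tg figure scales up from the per-area measurements, the sketch below combines the reported mean flux and ice-free season length with a placeholder total pond area; the area value is an assumption for illustration, not a result from the study.

```python
# Back-of-the-envelope upscaling from areal flux to a seasonal total.
MEAN_FLUX_MG_M2_D = 21.9        # mean ebullitive flux (mg CH4 m^-2 d^-1), from the abstract
ICE_FREE_DAYS = 149             # estimated ice-free season length (d), from the abstract
TOTAL_POND_AREA_KM2 = 100_000   # hypothetical total area of small ponds (km^2) -- assumption

seasonal_flux_g_m2 = MEAN_FLUX_MG_M2_D * ICE_FREE_DAYS / 1_000   # mg -> g
area_m2 = TOTAL_POND_AREA_KM2 * 1e6                              # km^2 -> m^2
total_tg = seasonal_flux_g_m2 * area_m2 / 1e12                   # g -> Tg

print(f"seasonal flux: {seasonal_flux_g_m2:.2f} g CH4 m^-2")     # ~3.26 g m^-2
print(f"upscaled emission: {total_tg:.2f} Tg CH4 per season")    # ~0.33 Tg for the assumed area
```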

    Automating Cyber Analytics

    Model-based security metrics are a growing area of cyber security research concerned with measuring the risk exposure of an information system. These metrics are typically studied in isolation, with the formulation of the test itself being the primary finding in publications. As a result, there is a flood of metric specifications available in the literature but a corresponding dearth of analyses verifying results for a given metric calculation under different conditions or comparing the efficacy of one measurement technique against another. The motivation of this thesis is to create a systematic methodology for model-based security metric development, analysis, integration, and validation. In doing so, we hope to fill a critical gap in the way we view and improve a system's security. In order to understand the security posture of a system before it is rolled out and as it evolves, we present in this dissertation an end-to-end solution for the automated measurement of security metrics needed to identify risk early and accurately. To our knowledge, this is a novel capability in design-time security analysis which provides the foundation for ongoing research into predictive cyber security analytics. Modern development environments contain a wealth of information in infrastructure-as-code repositories, continuous build systems, and container descriptions that could inform security models, but risk evaluation based on these sources is ad hoc at best, and often simply left until deployment. Our goal in this work is to lay the groundwork for security measurement to be a practical part of the system design, development, and integration lifecycle. In this thesis we provide a framework for the systematic validation of the existing security metrics body of knowledge. In doing so we endeavour not only to survey the current state of the art, but to create a common platform for future research in the area to be conducted. We then demonstrate the utility of our framework through the evaluation of leading security metrics against a reference set of system models we have created. We investigate how to calibrate security metrics for different use cases and establish a new methodology for security metric benchmarking. We further explore the research avenues unlocked by automation through our concept of an API-driven S-MaaS (Security Metrics-as-a-Service) offering. We review our design considerations in packaging security metrics for programmatic access, and discuss how various client access patterns are anticipated in our implementation strategy. Using existing metric processing pipelines as a reference, we show how the simple, modular interfaces in S-MaaS support dynamic composition and orchestration. Next, we review aspects of our framework which can benefit from optimization and further automation through machine learning. First, we create a dataset of network models labeled with the corresponding security metrics. By training classifiers to predict security values based only on network inputs, we can avoid the computationally expensive attack graph generation steps. We use our findings from this simple experiment to motivate our current lines of research into supervised and unsupervised techniques such as network embeddings, interaction rule synthesis, and reinforcement learning environments. Finally, we examine the results of our case studies. We summarize our security analysis of a large-scale network migration, and list the friction points along the way which are remediated by this work.
We relate how our research for a large-scale performance benchmarking project has influenced our vision for the future of security metrics collection and analysis through DevOps automation. We then describe how we applied our framework to measure the incremental security impact of running a distributed stream processing system inside a hardware trusted execution environment.
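    A minimal sketch of the classifier experiment described above, using synthetic data: hypothetical network features stand in for attributes harvested from infrastructure-as-code repositories, and a random-forest model predicts a coarse risk label so that attack-graph generation can be skipped at prediction time. None of the feature names, values, or model choices below come from the dissertation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic "network models": [host count, edge count, exposed services,
# unpatched CVE count] -- placeholders for features extracted from
# infrastructure-as-code repositories or container descriptions.
X = rng.integers(low=1, high=100, size=(500, 4)).astype(float)

# Synthetic label: risk bucket (0 = low, 1 = medium, 2 = high) that would
# normally come from running an attack-graph-based metric over each model.
risk_score = 0.02 * X[:, 1] + 0.05 * X[:, 2] + 0.1 * X[:, 3]
y = np.digitize(risk_score, bins=[3.0, 6.0])

# Train a classifier to predict the risk bucket from the network features alone.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```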