
    Implementation of Faceted Values in Node.JS.

    Information flow analysis is the study of mechanisms by which developers may protect sensitive data within an ecosystem containing untrusted third-party code. Secure multi-execution is one such mechanism that reliably prevents undesirable information flows, but a programmer's use of secure multi-execution is itself challenging and prone to error. Faceted values have been shown to provide an alternative to secure multi-execution that is, in theory, functionally equivalent. The purpose of this work is to show that the theory holds in practice by implementing usable faceted values in JavaScript via source code transformation. The primary contribution of this project is a library that makes these transformations possible in any standard JavaScript runtime without requiring native support. We build a pipeline that takes JavaScript code with syntactic support for faceted values and, through source code transformation, produces platform-independent JavaScript code containing functional faceted values. Our findings include a method for optimizing the use of faceted values through static analysis of the program's information flow.
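
    A faceted value pairs a private view of a datum with a public one and reveals each only to the appropriate observer. As a rough illustration of the idea, here is a minimal Python sketch with a single label per facet; it is not the paper's JavaScript library or its transformation pipeline:

        class Faceted:
            """A value that shows different facets to different observers."""
            def __init__(self, label, private, public):
                self.label = label        # principal allowed the private view
                self.private = private    # facet for authorized observers
                self.public = public      # facet for everyone else

            def observe(self, principals):
                """Project this value for an observer holding `principals`."""
                return self.private if self.label in principals else self.public

        def fmap(f, v):
            """Apply f to a possibly-faceted value, keeping the result faceted."""
            if isinstance(v, Faceted):
                return Faceted(v.label, f(v.private), f(v.public))
            return f(v)

        secret = Faceted("alice", 42, 0)         # 42 for alice, 0 for everyone else
        doubled = fmap(lambda x: x * 2, secret)  # compute without branching on the secret
        print(doubled.observe({"alice"}))        # 84
        print(doubled.observe({"bob"}))          # 0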

    Automated searching for quantum subsystem codes

    Quantum error correction allows for faulty quantum systems to behave in an effectively error-free manner. One important class of techniques for quantum error correction is the class of quantum subsystem codes, which are relevant both to active quantum error correcting schemes and to the design of self-correcting quantum memories. Previous approaches for investigating these codes have focused on applying theoretical analysis to look for interesting codes and to investigate their properties. In this paper we present an alternative approach that uses computational analysis to accomplish the same goals. Specifically, we present an algorithm that computes the optimal quantum subsystem code that can be implemented given an arbitrary set of measurement operators that are tensor products of Pauli operators. We then demonstrate the utility of this algorithm by performing a systematic investigation of the quantum subsystem codes that exist in the setting where the interactions are limited to 2-body interactions between neighbors on lattices derived from the convex uniform tilings of the plane. Comment: 38 pages, 15 figures, 10 tables. The algorithm described in this paper is available as both a library and a command-line program (including full source code) that can be downloaded from http://github.com/gcross/CodeQuest/downloads. The source code used to apply the algorithm to scan the lattices is available upon request. Please feel free to contact the authors with questions.
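
    The computational core of such a search is checking commutation between Pauli operators, which is cheap in the binary symplectic representation. A minimal Python sketch of that standard check (an illustration of the formalism, not the paper's CodeQuest algorithm):

        import numpy as np

        def pauli_to_symplectic(pauli):
            """Encode an n-qubit Pauli string as a binary (x|z) vector pair."""
            x = np.array([p in "XY" for p in pauli], dtype=np.uint8)  # X component
            z = np.array([p in "ZY" for p in pauli], dtype=np.uint8)  # Z component
            return x, z

        def commutes(p, q):
            """Two Pauli operators commute iff their symplectic product is 0 mod 2."""
            px, pz = pauli_to_symplectic(p)
            qx, qz = pauli_to_symplectic(q)
            return (np.dot(px, qz) + np.dot(pz, qx)) % 2 == 0

        print(commutes("XXI", "ZZI"))  # True: the two anticommuting positions cancel
        print(commutes("XII", "ZII"))  # False: X and Z anticommute on the first qubit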

    Diversity analysis, code design, and tight error rate lower bound for binary joint network-channel coding

    Joint network-channel codes (JNCC) can improve the performance of communication in wireless networks by combining, at the physical layer, the channel codes and the network code into an overall error-correcting code. JNCC is increasingly proposed as an alternative to a standard layered construction, such as the OSI model. The main performance metrics for JNCCs are scalability to larger networks and error rate. The diversity order is one of the most important parameters determining the error rate. The literature on JNCC is growing, but a rigorous diversity analysis is lacking, mainly because of the many degrees of freedom in wireless networks, which make it very hard to prove general statements on the diversity order. In this article, we consider a network with slowly varying fading point-to-point links, where all sources also act as relays and additional non-source relays may be present. We propose a general structure for JNCCs to be applied in such a network. In the relay phase, each relay transmits a linear transform of a set of source codewords. Our main contributions are an upper and a lower bound on the diversity order, a scalable code design, and a new lower bound on the word error rate to assess the performance of the network code. The lower bound on the diversity order is only valid for JNCCs in which each relay transforms only two source codewords. We then validate this analysis with an example that compares the JNCC performance to that of a standard layered construction. Our numerical results suggest that as networks grow, it is difficult to perform significantly better than a standard layered construction, both on a fundamental level, expressed by the outage probability, and on a practical level, expressed by the word error rate.
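
    To make the relay phase concrete, here is a toy Python sketch of the simplest linear transform over GF(2), an XOR of two source codewords; it shows why the relay's transmission adds an independent recovery path (hence diversity), and is not the paper's actual code design:

        import numpy as np

        rng = np.random.default_rng(0)
        c1 = rng.integers(0, 2, size=8, dtype=np.uint8)  # codeword from source 1
        c2 = rng.integers(0, 2, size=8, dtype=np.uint8)  # codeword from source 2

        relay = c1 ^ c2   # relay broadcasts the GF(2) sum of both codewords

        # A destination that decoded c1 directly but lost c2 can restore it
        # from the relay packet -- the extra independent path raises diversity.
        recovered_c2 = relay ^ c1
        assert np.array_equal(recovered_c2, c2)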

    Efficient computation of capillary-gravity generalized solitary waves

    This paper is devoted to the computation of capillary-gravity solitary waves of the irrotational incompressible Euler equations with free surface. The numerical study extends previous work on several points: an alternative formulation of the Babenko-type equation for the wave profiles, a detailed description of the numerical resolution, and an analysis of the internal flow structure under a solitary wave. The numerical code used in this study is provided as open source for interested readers. Comment: 26 pages, 21 figures, 2 tables, 43 references. The author's other papers can be downloaded at http://www.denys-dutykh.com/. arXiv admin note: substantial text overlap with arXiv:1411.551
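
    The basic numerical ingredient of such Babenko-type solvers is Fourier pseudo-spectral differentiation. A generic Python illustration of that building block (not the paper's released code), showing the spectral accuracy that makes the approach efficient:

        import numpy as np

        N, L = 256, 2 * np.pi
        x = np.linspace(0, L, N, endpoint=False)
        k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)      # angular wavenumbers

        u = np.exp(np.sin(x))                           # smooth periodic test profile
        du = np.fft.ifft(1j * k * np.fft.fft(u)).real   # spectral derivative
        du_exact = np.cos(x) * np.exp(np.sin(x))
        print(np.max(np.abs(du - du_exact)))            # error near machine precision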

    PSpectRe: A Pseudo-Spectral Code for (P)reheating

    PSpectRe is a C++ program that uses Fourier-space pseudo-spectral methods to evolve interacting scalar fields in an expanding universe. PSpectRe is optimized for the analysis of parametric resonance in the post-inflationary universe, and provides an alternative to finite differencing codes such as Defrost and LatticeEasy. PSpectRe has both second-order (velocity-Verlet) and fourth-order (Runge-Kutta) time integrators. Given the same number of spatial points and/or momentum modes, PSpectRe is not significantly slower than finite differencing codes, despite the need for multiple Fourier transforms at each timestep, and exhibits excellent energy conservation. Further, by computing the post-resonance equation of state, we show that in some circumstances PSpectRe obtains reliable results while using substantially fewer points than a finite differencing code. PSpectRe is designed to be easily extended to other problems in early-universe cosmology, including the generation of gravitational waves during phase transitions and pre-inflationary bubble collisions. Specific applications of this code will be pursued in future work. Comment: 22 pages; source code for PSpectRe available at http://easther.physics.yale.edu. v2: typos fixed, minor improvements to wording; v3: updated as per referee comments
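
    As a toy illustration of the second-order integrator, here is a velocity-Verlet step in Python applied to a single harmonic mode with V(phi) = m^2 phi^2 / 2; the real code evolves full three-dimensional fields in an expanding background, but the energy-conservation behavior is already visible here:

        def accel(phi, m=1.0):
            return -m**2 * phi                 # -dV/dphi for a quadratic potential

        phi, pi = 1.0, 0.0                     # field value and field velocity
        dt = 0.01
        for _ in range(1000):
            pi_half = pi + 0.5 * dt * accel(phi)   # half kick
            phi += dt * pi_half                    # drift
            pi = pi_half + 0.5 * dt * accel(phi)   # half kick

        print(0.5 * pi**2 + 0.5 * phi**2)      # stays very close to 0.5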

    An empirical study on code comprehension: DCI compared to OO

    Comprehension of source code affects software development, especially maintenance, where reading code is the most time-consuming activity. A programming paradigm imposes a style of arranging the source code that is aligned with a way of thinking toward a computable solution. A programming paradigm, together with a programming language, thus represents an important factor in source code comprehension. Object-Oriented (OO) is the dominant paradigm today, although it has been criticized since its beginning, and recently an alternative has been proposed. In OO source code, system functions cannot escape outside the definition of classes, and their descriptions live inside multiple class declarations. This results in obfuscated code, a lost sense of the run-time behavior, and a lack of global knowledge, all of which weaken the understandability of the source code at the system level. A new paradigm is emerging to address these and other OO issues: the Data Context Interaction (DCI) paradigm. We conducted the first controlled experiment with human subjects to evaluate the effects of DCI on code comprehension compared to OO. We measured correctness, time consumption, and focus of attention during comprehension tasks. We also present a novel approach using metrics from Social Network Analysis to analyze what we call the Cognitive Network of Language Elements (CNLE), which is built by programmers while comprehending a system. We consider this approach useful for understanding source code properties uncovered by code-reading cognitive tasks. The results obtained are preliminary in nature but indicate that the DCI-trygve approach produces more comprehensible source code and promotes a stronger focus of attention on important files when programmers are reading code during program comprehension. Regarding reading time spent on files, we were not able to determine with statistical significance which approach allows programmers to consume less time.
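
    A sketch of the CNLE idea: link consecutively visited code elements in a graph built from a reading trace, then apply a Social Network Analysis metric. The trace and element names below are invented for illustration, and the paper's actual construction and metrics may differ:

        import networkx as nx

        # Hypothetical reading trace: the sequence of code elements a
        # programmer visits during a comprehension task.
        trace = ["Main", "Account", "transfer", "Account", "balance",
                 "transfer", "Ledger", "transfer"]

        g = nx.Graph()
        g.add_edges_from(zip(trace, trace[1:]))   # edge = consecutive visits

        centrality = nx.degree_centrality(g)      # one possible SNA metric
        for element, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
            print(f"{element:10s} {score:.2f}")   # highest = focus of attention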

    Searching, Selecting, and Synthesizing Source Code Components

    As programmers develop software, they instinctively sense that source code exists that could be reused if found --- many programming tasks are common to many software projects across different domains. Oftentimes, a programmer will attempt to create new software from this existing source code, such as third-party libraries or code from online repositories. Unfortunately, several major challenges make it difficult to locate the relevant source code and to reuse it. First, there is a fundamental mismatch between the high-level intent reflected in the descriptions of source code and the low-level implementation details. This mismatch is known as the concept assignment problem, and refers to the frequent case when the keywords from comments or identifiers in code do not match the features implemented in the code. Second, even if relevant source code is found, programmers must invest significant intellectual effort into understanding how to reuse the different functions, classes, or other components present in the source code. These components may be specific to a particular application and difficult to reuse. One key source of information that programmers use to understand source code is the set of relationships among the source code components. These relationships are typically structural data, such as function calls or class instantiations. This structural data has been repeatedly suggested as an alternative to textual analysis for search and reuse; however, as yet no comprehensive strategy exists for locating relevant and reusable source code. In my research program, I harness this structural data in a unified approach to creating and evolving software from existing components. For locating relevant source code, I present a search engine for finding applications based on the underlying Application Programming Interface (API) calls, and a technique for finding chains of relevant function invocations from repositories of millions of lines of code. Next, for reusing source code, I introduce a system to facilitate building software prototypes from existing packages, and an approach to detecting similar software applications.
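
    One simple way to picture API-call-based search (a simplified illustration, not the dissertation's actual engine): index each candidate function by the set of API calls it makes, then rank candidates by overlap with the APIs implied by the query. The corpus below is invented:

        corpus = {
            "download_file": {"urllib.request.urlopen", "shutil.copyfileobj"},
            "parse_config":  {"configparser.ConfigParser", "open"},
            "fetch_json":    {"urllib.request.urlopen", "json.loads"},
        }

        def search(query_apis):
            """Rank corpus functions by Jaccard overlap with the query APIs."""
            def jaccard(a, b):
                return len(a & b) / len(a | b) if a | b else 0.0
            return sorted(corpus, key=lambda f: jaccard(query_apis, corpus[f]),
                          reverse=True)

        print(search({"urllib.request.urlopen", "json.loads"}))
        # ['fetch_json', 'download_file', 'parse_config']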

    S4Net: Single Stage Salient-Instance Segmentation

    In this paper we consider an interesting problem: salient instance segmentation. Beyond producing bounding boxes, our network also outputs high-quality instance-level segments. Taking into account the category-independent property of each target, we design a single-stage salient instance segmentation framework with a novel segmentation branch. Our new branch regards not only the local context inside each detection window but also its surrounding context, enabling us to distinguish instances in the same scope even with obstruction. Our network is end-to-end trainable and runs at a fast speed (40 fps when processing an image with resolution 320x320). We evaluate our approach on a publicly available benchmark and show that it outperforms other alternative solutions. We also provide a thorough analysis of the design choices to help readers better understand the functions of each part of our network. The source code can be found at https://github.com/RuochenFan/S4Net
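
    The "surrounding context" idea can be pictured as enlarging each detection window before pooling features for the segmentation branch. A hypothetical Python sketch (the expansion ratio and clipping bounds are guesses for illustration, not values from the paper):

        import numpy as np

        def expand_boxes(boxes, ratio=1.5, img_w=320, img_h=320):
            """Grow [x1, y1, x2, y2] boxes about their centers, clipped to the image."""
            cx = (boxes[:, 0] + boxes[:, 2]) / 2
            cy = (boxes[:, 1] + boxes[:, 3]) / 2
            w = (boxes[:, 2] - boxes[:, 0]) * ratio
            h = (boxes[:, 3] - boxes[:, 1]) * ratio
            out = np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)
            out[:, 0::2] = out[:, 0::2].clip(0, img_w)   # clip x coordinates
            out[:, 1::2] = out[:, 1::2].clip(0, img_h)   # clip y coordinates
            return out

        boxes = np.array([[100.0, 100.0, 200.0, 180.0]])
        print(expand_boxes(boxes))   # [[ 75.  80. 225. 200.]]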

    Investigating Automatic Static Analysis Results to Identify Quality Problems: an Inductive Study

    Background: Automatic static analysis (ASA) tools examine source code to discover "issues", i.e. code patterns that are symptoms of bad programming practices and that can lead to defective behavior. Studies in the literature have shown that these tools find defects earlier than other verification activities, but they produce a substantial number of false positive warnings. For this reason, an alternative approach is to use the set of ASA issues to identify defect-prone files and components rather than focusing on the individual issues. Aim: We conducted an exploratory study to investigate whether ASA issues can be used as early indicators of faulty files and components and, for the first time, whether they point to a decay of specific software quality attributes, such as maintainability or functionality. Our aim is to understand the critical parameters and feasibility of such an approach to feed into future research on more specific quality and defect prediction models. Method: We analyzed an industrial C# web application using the Resharper ASA tool and explored whether significant correlations exist in such a data set. Results: We found promising results when predicting defect-prone files. A set of specific Resharper categories are better indicators of faulty files than common software metrics or the collection of issues across all categories, and these categories correlate with different software quality attributes. Conclusions: Our advice for future research is to perform the analysis at file rather than component level and to evaluate the generalizability of categories. We also recommend using larger datasets, as we learned that data sparseness can lead to challenges in the proposed analysis process.
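
    The study's core analysis can be pictured as correlating per-file counts of one ASA issue category with per-file defect counts. A minimal Python sketch with invented numbers:

        from scipy.stats import spearmanr

        issue_counts  = [12, 3, 7, 0, 25, 9, 1, 14]   # category-X issues per file
        defect_counts = [ 5, 1, 2, 0,  9, 4, 0,  6]   # post-release defects per file

        rho, p = spearmanr(issue_counts, defect_counts)
        print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
        # A strong positive rho with a small p-value would flag this category
        # as an early indicator of defect-prone files.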