
    EvoAlloy: An Evolutionary Approach For Analyzing Alloy Specifications

    Using mathematical notation and logical reasoning, formal methods precisely define a program's specification, from which valid instances of a system can be instantiated. With these techniques, we can perform a variety of analysis tasks to verify system dependability and rigorously prove the correctness of system properties. While well-designed automated verification tools exist, including lightweight ones, they still lack strong adoption in practice. The essence of the problem is that, when applied to large real-world applications, they do not scale because of the expense of a thorough verification process. In this thesis, I present a new approach that relaxes the completeness guarantee at little cost, since soundness is maintained. I have extended a widely applied lightweight analysis, Alloy, with a genetic algorithm. The new tool, EvoAlloy, works at the level of the finite relations generated by Kodkod and evolves chromosomes based on feedback that includes failed constraints. Through a feasibility study, I show that my approach can successfully find solutions to a set of specifications beyond the scope where the traditional Alloy Analyzer fails. Although EvoAlloy takes longer on small problems, the scalability provided by the genetic extension shows its potential to handle larger specifications. My future vision is a tool that maintains both soundness and completeness when specifications are small, and switches to EvoAlloy's genetic algorithm when that fails. Adviser: Hamid Bagher
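    To make the genetic loop concrete, here is a minimal, hypothetical Python sketch of the idea (names and operators are illustrative assumptions, not EvoAlloy's actual implementation): a candidate instance is a bit-vector over the tuples allowed by Kodkod's finite relation bounds, and the number of satisfied constraints serves as the fitness feedback.

        import random

        # Hypothetical sketch of the EvoAlloy idea, not its actual implementation.
        # A candidate instance is a bit-vector over the tuples allowed by Kodkod's
        # finite relation bounds; fitness counts satisfied constraints.

        def fitness(candidate, constraints):
            return sum(1 for c in constraints if c(candidate))

        def evolve(constraints, n_bits, pop_size=100, generations=1000, p_mut=0.01):
            pop = [[random.random() < 0.5 for _ in range(n_bits)] for _ in range(pop_size)]
            for _ in range(generations):
                scored = sorted(pop, key=lambda ind: fitness(ind, constraints), reverse=True)
                if fitness(scored[0], constraints) == len(constraints):
                    return scored[0]  # sound: a returned instance satisfies all constraints
                parents = scored[: pop_size // 2]  # truncation selection
                children = []
                while len(children) < pop_size:
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, n_bits)  # one-point crossover
                    child = a[:cut] + b[cut:]
                    child = [not g if random.random() < p_mut else g for g in child]  # mutation
                    children.append(child)
                pop = children
            return None  # incomplete: exhausting the budget does not prove unsatisfiability

        # Toy usage: find an 8-bit "relation" containing exactly three tuples.
        print(evolve([lambda ind: sum(ind) == 3], n_bits=8))

    The sketch mirrors the soundness/completeness trade-off described above: any instance returned satisfies every constraint, but a failed search proves nothing.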

    Zero-Mode Contribution in Nucleon-Delta Transition

    We investigate the transition form factors between the nucleon and the $\Delta(1232)$ by using a covariant quark-spectator-diquark field theory model in (3+1) dimensions. Performing a light-front calculation in parallel with the manifestly covariant calculation in the light-front helicity basis, we examine the light-front zero-mode contribution to the helicity components of the light-front good ("+") current matrix elements. Choosing the light-front gauge ($\epsilon^+_{h=\pm}=0$) with circular polarization in the Drell-Yan-West frame, we find that only the helicity components $({1\over 2},{1\over 2})$ and $({1\over 2},-{1\over 2})$ of the good current receive the zero-mode contribution. Taking the zero-mode into account, we find prescription independence in obtaining the light-front solution of the form factors from any three helicity matrix elements with smeared light-front wavefunctions. The angular condition, which guarantees the full covariance of different schemes, is recovered. Comment: 16 latex pages, 7 figures, to appear in PR
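    To make the role of the angular condition explicit, here is a schematic sketch in my own notation (not the paper's equations): the transition involves three physical form factors, while the good current has four independent helicity matrix elements, so one kinematic linear relation must hold among them, and any three components suffice to extract the form factors.

        % Schematic only; the coefficients c_{\lambda'\lambda} are kinematic
        % placeholders and are not taken from the paper.
        \[
          J^{+}_{\lambda'\lambda}(Q^2)
            = \langle \Delta(p',\lambda')\,|\,J^{+}(0)\,|\,N(p,\lambda)\rangle ,
          \qquad
          \lambda' = \tfrac{3}{2},\ \tfrac{1}{2},\ -\tfrac{1}{2},\ -\tfrac{3}{2},
          \quad
          \lambda = \pm\tfrac{1}{2} .
        \]
        \[
          \underbrace{(G_M^{*},\,G_E^{*},\,G_C^{*})}_{\text{three form factors}}
          \quad\Longrightarrow\quad
          \sum_{\lambda',\,\lambda} c_{\lambda'\lambda}(Q^2)\,
            J^{+}_{\lambda'\lambda}(Q^2) = 0
          \quad\text{(angular condition)} .
        \]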

    Synthesis Strategies about 2D Materials

    In recent years, more and more attention has been paid to two-dimensional (2D) materials because of their excellent electrical, optical, thermal, and mechanical properties. To characterize the layer-dependent changes in these properties and to provide pathways for integration into a multitude of applications, reliable syntheses of single- and few-layer 2D materials are essential. Many strategies have therefore been developed to synthesize high-quality, ultrathin nanosheets, such as micromechanical exfoliation, ultrasonic exfoliation, hydrothermal methods, topochemical transformation, and chemical vapor deposition, each with its own merits and demerits for preparing 2D nanomaterials. Herein, an overview of recent progress in synthetic techniques for 2D materials is presented, covering the experimental scheme, advantages and disadvantages, and applications of each strategy. Finally, potential trends and future directions for 2D-material synthesis technology are proposed

    The geography of city liveliness and consumption: evidence from location-based big data

    Understanding the complexity of the connection between city liveliness and the spatial configuration of consumptive amenities has been an important but understudied research field in fast-urbanising countries like China. This paper presents a first step towards filling this gap from a location-based big data perspective. City liveliness is measured by aggregated space-time human activity intensities derived from mobile phone positioning data. Consumptive amenities are identified from point-of-interest data from the Chinese Yelp-style website (dian ping). The results provide insights into the geographic contextual uncertainties of consumptive amenities in shaping the rise and fall of city liveliness
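    As an illustration of the measurement pipeline, here is a minimal, hypothetical Python sketch (field names, grid size, and aggregation scheme are my assumptions, not the paper's code): phone-positioning records are binned into space-time cells to proxy liveliness, and POI records into spatial cells to proxy amenity density.

        from collections import defaultdict

        # Hypothetical sketch of the measurement idea, not the paper's code.
        # Records are binned into coarse grid cells; field names are assumptions.

        def liveliness(pings, cell_km=1.0):
            """pings: iterable of (lon, lat, hour) phone-positioning records.
            Returns aggregated activity intensity per space-time cell."""
            deg = cell_km / 111.0  # rough degrees per km, adequate for a sketch
            counts = defaultdict(int)
            for lon, lat, hour in pings:
                counts[(round(lon / deg), round(lat / deg), hour)] += 1
            return counts

        def amenity_density(pois, cell_km=1.0):
            """pois: iterable of (lon, lat) points of interest.
            Returns POI count per spatial cell, comparable to the cells above."""
            deg = cell_km / 111.0
            counts = defaultdict(int)
            for lon, lat in pois:
                counts[(round(lon / deg), round(lat / deg))] += 1
            return counts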

    Approximating actual flows in physical infrastructure networks : the case of the Yangtze River Delta high-speed railway network

    Previous empirical research on urban networks has used data on infrastructure networks to guesstimate actual inter-city flows. However, with the exception of recent research on airline networks in the context of the world city literature, relatively limited attention has been paid to the degree to which the outline of these infrastructure networks reflects the actual flows they undergird. This study presents a method to improve the estimation of urban interaction in and through infrastructure networks, focusing on the example of passenger railways, arguably a key potential data source in research on urban networks in metropolitan regions. We first review common biases that arise when infrastructure networks are used to approximate actual inter-city flows, after which we present an alternative approach that draws on research on operational train scheduling. This research has shown that 'dwell time' at train stations reflects the length of the alighting and boarding process, and we use this insight to estimate actual interaction through the application of a bimodal network projection function. We apply our method to the high-speed railway (HSR) network within the Yangtze River Delta (YRD) region, discuss the differences between our modelled network and the original network, and evaluate its validity through a systematic comparison with a benchmark dataset of actual passenger flows
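    To illustrate how dwell times can drive a bimodal (train-station) projection, here is a minimal, hypothetical Python sketch (the weighting function is my assumption; the paper's projection function may differ): each train is an ordered list of (city, dwell) stops, and a city pair's interaction is weighted by the dwell at both ends.

        from collections import defaultdict
        from itertools import combinations

        # Hypothetical sketch: dwell time as a proxy for boarding/alighting volume.
        # The actual projection function used in the paper may differ.

        def project_flows(trains):
            """trains: list of timetabled runs, each an ordered list of
            (city, dwell_minutes) stops. Returns interaction weight per city pair."""
            flows = defaultdict(float)
            for stops in trains:
                total_dwell = sum(d for _, d in stops) or 1.0
                for (a, dwell_a), (b, dwell_b) in combinations(stops, 2):
                    # weight a pair by dwell at both ends, normalised per run
                    flows[tuple(sorted((a, b)))] += (dwell_a * dwell_b) / total_dwell
            return flows

        # Toy usage: one train calling at three YRD cities.
        print(project_flows([[("Shanghai", 8), ("Suzhou", 2), ("Nanjing", 6)]]))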

    Platinum: Reusing Constraint Solutions in Bounded Analysis of Relational Logic

    Alloy is a lightweight specification language based on relational logic, with an analysis engine that relies on SAT solvers to automate bounded verification of specifications. In spite of its strengths, the reliance of the Alloy Analyzer on computationally heavy solvers means that it can take a significant amount of time to verify software properties, even within limited bounds. This challenge is exacerbated by the ever-evolving nature of complex software systems. This paper presents PLATINUM, a technique for efficient analysis of evolving Alloy specifications that recognizes opportunities for constraint reduction and reuse of previously identified constraint solutions. The insight behind PLATINUM is that formula constraints recur often during the analysis of a single specification and across its revisions, and constraint solutions can be reused over sequences of analyses performed on evolving specifications. Our empirical results show that PLATINUM substantially reduces (by 66.4% on average) the analysis time required on specifications extracted from real-world software systems
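    A minimal sketch of the reuse idea, under the assumption that a specification can be sliced into independent constraint groups (the class, canonicalization, and solver hook are hypothetical, not PLATINUM's implementation):

        import hashlib

        # Illustrative sketch of solution reuse, not PLATINUM's implementation.
        # Assumes the specification has been sliced into independent constraint
        # groups, each represented as a set of canonical constraint strings.

        class SolutionStore:
            def __init__(self):
                self._cache = {}

            def _key(self, slice_constraints):
                canon = "\n".join(sorted(slice_constraints))  # naive canonical form
                return hashlib.sha256(canon.encode()).hexdigest()

            def solve(self, slice_constraints, solver):
                key = self._key(slice_constraints)
                if key in self._cache:
                    return self._cache[key]  # reuse across revisions and analyses
                solution = solver(slice_constraints)  # fall back to the SAT-based engine
                self._cache[key] = solution
                return solution

        # Toy usage: the second call with an identical slice skips the solver.
        store = SolutionStore()
        store.solve({"a in b", "some b"}, solver=lambda cs: "SAT")
        store.solve({"some b", "a in b"}, solver=lambda cs: "SAT")  # cache hit

    A revision that leaves a slice unchanged then hits the cache instead of re-invoking the solver, which is the source of the savings reported above.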

    Fractional Skipping: Towards Finer-Grained Dynamic CNN Inference

    While increasingly deep networks are in general still desired for achieving state-of-the-art performance, for many specific inputs a simpler network may already suffice. Existing works exploit this observation by learning to skip convolutional layers in an input-dependent manner. However, we argue that their binary decision scheme, i.e., either fully executing or completely bypassing a layer for a specific input, can be enhanced by introducing finer-grained, "softer" decisions. We therefore propose a Dynamic Fractional Skipping (DFS) framework. The core idea of DFS is to treat layer-wise quantization (to different bitwidths) as intermediate "soft" choices between fully utilizing and skipping a layer. For each input, DFS dynamically assigns a bitwidth to both the weights and the activations of each layer, where full execution and skipping can be viewed as the two "extremes" (full bitwidth and zero bitwidth). In this way, DFS can "fractionally" exploit a layer's expressive power during input-adaptive inference, enabling finer-grained accuracy-computation trade-offs. It presents a unified view that links input-adaptive layer skipping and input-adaptive hybrid quantization. Extensive experimental results demonstrate the superior trade-off between computational cost and model expressive power (accuracy) achieved by DFS. Further visualizations indicate a smooth and consistent transition in DFS behavior, especially in the learned choices between layer skipping and different quantizations as the total computational budget varies, validating our hypothesis that layer quantization can be viewed as an intermediate variant of layer skipping. Our source code and supplementary material are available at https://github.com/Torment123/DFS
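    As a concrete illustration of the zero-to-full bitwidth spectrum, here is a toy PyTorch sketch (shapes, the bitwidth set, and the hard gating are my assumptions; the paper trains soft, differentiable decisions rather than this argmax):

        import torch
        import torch.nn as nn

        # Toy sketch of fractional skipping; not the authors' code.

        BITWIDTHS = [0, 4, 8, 32]  # 0 = skip the layer, 32 = full execution

        def fake_quant(x, bits):
            """Symmetric fake quantization of a tensor to the given bitwidth."""
            if bits >= 32:
                return x
            scale = x.abs().max().clamp(min=1e-8) / (2 ** (bits - 1) - 1)
            return torch.round(x / scale) * scale

        class FractionalBlock(nn.Module):
            def __init__(self, channels):
                super().__init__()
                self.conv = nn.Conv2d(channels, channels, 3, padding=1)
                self.gate = nn.Linear(channels, len(BITWIDTHS))  # input-dependent

            def forward(self, x):
                logits = self.gate(x.mean(dim=(2, 3)))          # pooled features
                bits = BITWIDTHS[int(logits.argmax(dim=1)[0])]  # hard choice, for clarity
                if bits == 0:
                    return x                                    # zero bitwidth == skip
                w = fake_quant(self.conv.weight, bits)          # quantise weights
                out = nn.functional.conv2d(fake_quant(x, bits), w,
                                           self.conv.bias, padding=1)
                return x + out                                  # residual connection

    The residual connection is what makes skipping (bits == 0) a well-defined "extreme": the block then reduces to the identity, while intermediate bitwidths execute the layer at a fraction of its full expressive power.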