
    Security Attributes Based Digital Rights Management

    Most real-life systems delegate responsibilities to different authorities. We apply this model to a digital rights management system to achieve flexible security. In our model, a hierarchy of authorities issues certificates that are linked by cryptographic means. This linkage establishes a chain of control, identity-attribute-rights, and allows flexible rights control over content. Typical security objectives, such as identification, authentication, authorisation and access control, can be realised. Content keys are personalised to detect illegal superdistribution. We describe a working prototype, which we developed using standard techniques such as standard certificates, XML and Java. We present experimental results to evaluate the scalability of the system. A formal analysis demonstrates that our design is able to detect a form of illegal superdistribution.
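
    As a rough illustration of the identity-attribute-rights chain described above, the following Python sketch verifies a three-link certificate chain. It is a minimal sketch only: the Certificate fields, the HMAC stand-in for real public-key signatures, and the same-subject linkage rule are illustrative assumptions, not the paper's actual design.

        import hmac, hashlib
        from dataclasses import dataclass

        @dataclass
        class Certificate:
            issuer: str       # authority that signed this certificate
            subject: str      # entity the certificate is about
            kind: str         # "identity", "attribute" or "rights"
            payload: str      # e.g. attribute values or granted rights
            signature: bytes

        def sign(key: bytes, fields: str) -> bytes:
            # Stand-in for a real certificate signature; HMAC keeps the
            # sketch self-contained without a public-key library.
            return hmac.new(key, fields.encode(), hashlib.sha256).digest()

        def make_cert(key, issuer, subject, kind, payload):
            fields = f"{issuer}|{subject}|{kind}|{payload}"
            return Certificate(issuer, subject, kind, payload, sign(key, fields))

        def verify_chain(chain, authority_keys):
            # An identity -> attribute -> rights chain is accepted when each
            # certificate verifies under its issuer's key and all three
            # certificates are bound to the same subject.
            for cert, kind in zip(chain, ["identity", "attribute", "rights"]):
                fields = f"{cert.issuer}|{cert.subject}|{cert.kind}|{cert.payload}"
                expected = sign(authority_keys[cert.issuer], fields)
                if cert.kind != kind or not hmac.compare_digest(cert.signature, expected):
                    return False
            return len({c.subject for c in chain}) == 1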

    Implementing and evaluating candidate-based invariant generation

    The discovery of inductive invariants lies at the heart of static program verification. Presently, many automatic solutions to inductive invariant generation are inflexible, only applicable to certain classes of programs, or unpredictable. An automatic technique that circumvents these deficiencies to some extent is candidate-based invariant generation, whereby a large number of candidate invariants are guessed and then proven to be inductive or rejected using a sound program analyser. This paper describes our efforts to apply candidate-based invariant generation in GPUVerify, a static checker of programs that run on GPUs. We study a set of 383 GPU programs that contain loops, drawn from a number of open source suites and vendor SDKs. Among this set, 253 benchmarks require provision of loop invariants for verification to succeed. We describe the methodology we used to incrementally improve the invariant generation capabilities of GPUVerify to handle these benchmarks, speculating potential invariants using cheap static analysis and subsequently either refuting or proving them. We also describe a set of experiments that we used to examine the effectiveness of our rules for candidate generation, assessing rules based on their generality (the extent to which they generate candidate invariants), hit rate (the extent to which the generated candidates hold), effectiveness (the extent to which provable candidates actually help in allowing verification to succeed), and influence (the extent to which the success of one generation rule depends on candidates generated by another rule). We believe that our methodology for devising and evaluating candidate generation rules may serve as a useful framework for other researchers interested in candidate-based invariant generation. The candidates produced by GPUVerify help to verify 231 of these 253 programs. This increase in precision, however, has made GPUVerify slower: more candidates are generated, and hence more time is spent determining which of them are inductive invariants. To speed up this process, we have investigated four under-approximating program analyses that aim to reject false candidates quickly, and a framework whereby these analyses can run in sequence or in parallel. Across two platforms, running Windows and Linux, our results show that the best combination of these techniques running sequentially speeds up invariant generation across our benchmarks by 1.17× (Windows) and 1.01× (Linux), with per-benchmark best speedups of 93.58× (Windows) and 48.34× (Linux), and worst slowdowns of 10.24× (Windows) and 43.31× (Linux). We find that parallelising the strategies marginally improves overall invariant generation speedups to 1.27× (Windows) and 1.11× (Linux), maintains good best-case speedups of 91.18× (Windows) and 44.60× (Linux), and, importantly, dramatically reduces worst-case slowdowns to 3.15× (Windows) and 3.17× (Linux).
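
    The guess-and-check loop described above can be pictured as a Houdini-style greatest-fixpoint refinement: start from every guessed candidate and repeatedly discard any that the sound analyser refutes. Below is a minimal Python sketch, where holds_inductively is a hypothetical interface to such an analyser rather than GPUVerify's actual API.

        def refine_candidates(candidates, holds_inductively):
            # Keep discarding refuted candidates until a fixpoint is
            # reached; the survivors are mutually inductive.
            remaining = set(candidates)
            changed = True
            while changed:
                changed = False
                for cand in list(remaining):
                    # A candidate may assume all other surviving candidates.
                    if not holds_inductively(cand, remaining - {cand}):
                        remaining.discard(cand)   # refuted: drop and re-check
                        changed = True
            return remaining

    The under-approximating analyses mentioned above fit naturally into this loop as cheap pre-filters that discard false candidates before the expensive inductiveness checks.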

    The design and implementation of a verification technique for GPU Kernels

    We present a technique for the formal verification of GPU kernels, addressing two classes of correctness properties: data races and barrier divergence. Our approach is founded on a novel formal operational semantics for GPU kernels termed synchronous, delayed visibility (SDV) semantics, which captures the execution of a GPU kernel by multiple groups of threads. The SDV semantics provides operational definitions for barrier divergence and for both inter- and intra-group data races. We build on the semantics to develop a method for reducing the task of verifying a massively parallel GPU kernel to that of verifying a sequential program. This completely avoids the need to reason about thread interleavings, and allows existing techniques for sequential program verification to be leveraged. We describe an efficient encoding of data race detection and propose a method for automatically inferring the loop invariants that are required for verification. We have implemented these techniques as a practical verification tool, GPUVerify, that can be applied directly to OpenCL and CUDA source code. We evaluate GPUVerify with respect to a set of 162 kernels drawn from public and commercial sources. Our evaluation demonstrates that GPUVerify is capable of efficient, automatic verification of a large number of real-world kernels.
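
    To make the data race definitions concrete, here is a toy Python check for one pair of threads between two barriers. The (kind, location) log format is a hypothetical simplification; GPUVerify itself performs this check symbolically over two arbitrary threads in the reduced sequential program rather than by enumerating logged accesses.

        def check_thread_pair(accesses_t1, accesses_t2):
            # Each access is a (kind, location) tuple, kind being "read"
            # or "write". Two accesses to the same location conflict if
            # at least one of them is a write.
            for k1, loc1 in accesses_t1:
                for k2, loc2 in accesses_t2:
                    if loc1 == loc2 and "write" in (k1, k2):
                        return (k1, k2, loc1)   # evidence of a possible race
            return None   # no conflict for this pair in this barrier interval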

    Oil film interferometry in high Reynolds number turbulent boundary layers

    There is continuing debate regarding the validity of skin friction measurements that depend on the functional form of the mean velocity profile, for example, the Clauser chart method. This has brought about the need for independent and direct measures of wall shear stress, τ_w. Of the independent methods to measure τ_w, oil film interferometry is the most promising, and it has recently been used extensively at low and moderately high Reynolds numbers. The technique uses interferometry to measure the thinning rate of an oil film, which is linearly related to the level of shear stress acting on the film. In this paper we report on the use of this technique in a high Reynolds number boundary layer up to Re_θ = 50,000. Being an independent measure of τ_w, the oil film measurement can be used as a means to validate more conventional techniques, such as the Preston tube and Clauser chart, at these high Reynolds numbers. The oil-film measurement is validated by making comparative measurements of τ_w in a large-scale fully-developed channel flow facility where the skin friction is known from the pressure gradient along the channel.
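
    For concreteness: under the classical thin-film analysis, a film sheared at constant τ_w thins as h(x, t) = μx/(τ_w t), so τ_w follows from the interferometrically measured thickness. The Python sketch below assumes this constant-shear similarity solution and near-normal fringe viewing; practical reductions fit the full fringe record and are more careful.

        def film_thickness(fringe_order, wavelength, n_oil):
            # Thickness at the k-th interference fringe for near-normal
            # incidence: h = k * lambda / (2 * n).
            return fringe_order * wavelength / (2.0 * n_oil)

        def wall_shear_stress(mu, x, h, t):
            # Constant-shear similarity solution h(x, t) = mu*x/(tau_w*t),
            # neglecting gravity and pressure-gradient terms, inverted
            # for tau_w [Pa].
            #   mu: oil viscosity [Pa s]   x: distance from film edge [m]
            #   h:  film thickness [m]     t: elapsed time [s]
            return mu * x / (h * t)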

    Stochastic dynamics of model proteins on a directed graph

    A method for reconstructing the energy landscape of simple polypeptidic chains is described. We show that we can construct an equivalent representation of the energy landscape by a suitable directed graph. Its topological and dynamical features are shown to yield an effective estimate of the time scales associated with the folding and with the equilibration processes. This conclusion is drawn by comparing molecular dynamics simulations at constant temperature with the dynamics on the graph, defined by a temperature dependent Markov process. The main advantage of the graph representation is that its dynamics can be naturally renormalized by collecting nodes into "hubs", while redefining their connectivity. We show that both topological and dynamical properties are preserved by the renormalization procedure. Moreover, we obtain clear indications that the heteropolymers exhibit common topological properties, at variance with the homopolymer, whose peculiar graph structure stems from its spatial homogeneity. In order to obtain a clear distinction between a "fast folder" and a "slow folder" among the heteropolymers, one has to look at kinetic features of the directed graph. We find that the average time needed by the fast folder to reach its native configuration is two orders of magnitude smaller than its equilibration time, while for the slow folder these time scales are comparable. Accordingly, we conclude that the strategy described in this paper can also be successfully applied to more realistic models, by studying their renormalized dynamics on the directed graph rather than performing lengthy molecular dynamics simulations.
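
    A minimal sketch of the temperature-dependent dynamics on the directed graph, assuming a Metropolis-like acceptance rule (the paper defines its own transition rates, so treat the rule and the data layout here as assumptions):

        import math, random

        def first_passage_time(succ, energy, start, native, T, max_steps=10**6):
            # Random walk on the directed graph: succ maps each node to
            # its list of successors, energy maps nodes to energies.
            # Returns the step count at which `native` is first reached,
            # or None if the walk times out.
            node = start
            for step in range(max_steps):
                if node == native:
                    return step
                cand = random.choice(succ[node])        # follow a directed edge
                dE = energy[cand] - energy[node]
                if dE <= 0.0 or random.random() < math.exp(-dE / T):
                    node = cand                         # accept the move
            return None

    Averaging such first-passage times over many runs gives the folding-time estimate that the abstract compares against the equilibration time.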

    Combined Impact of Lifestyle Factors on Cancer Mortality in Men

    PURPOSE - The impact of lifestyle factors on cancer mortality in the U.S. population has not been thoroughly explored. We examined the combined effects of cardiorespiratory fitness, never smoking, and normal waist girth on total cancer mortality in men. METHODS - We followed a total of 24,731 men ages 20-82 years who participated in the Aerobics Center Longitudinal Study. A low-risk profile was defined as never smoking, moderate or high fitness, and normal waist girth, and participants were further categorized as having 0, 1, 2, or 3 combined low-risk factors. RESULTS - During an average of 14.5 years of follow-up, there were a total of 384 cancer deaths. After adjustment for age, examination year, and multiple risk factors, men who were physically fit, never smoked, and had a normal waist girth had a 62% lower risk of total cancer mortality (95% confidence interval [CI], 45%-73%) compared with men with zero low-risk factors. Men with all 3 low-risk factors had a 12-year (95% CI, 8.6-14.6) longer life expectancy compared with men with 0 low-risk factors. Approximately 37% (95% CI, 17%-52%) of total cancer deaths might have been avoided if the men had maintained all three low-risk factors. CONCLUSIONS - Being physically fit, never smoking, and maintaining a normal waist girth was associated with a lower risk of total cancer mortality in men.
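
    The "37% of deaths might have been avoided" figure is a population attributable fraction. For orientation only, the classical unadjusted Levin formula is sketched below in Python; the study's adjusted estimate is computed differently, so this is not the authors' method.

        def attributable_fraction(prevalence_exposed, relative_risk):
            # Levin's formula: the share of deaths that would not occur
            # if the exposed group (here, men missing one or more
            # low-risk factors) had the unexposed group's risk.
            excess = prevalence_exposed * (relative_risk - 1.0)
            return excess / (1.0 + excess)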

    Time series aggregation, disaggregation and long memory

    We study the aggregation/disaggregation problem of random parameter AR(1) processes and its relation to the long memory phenomenon. We give a characterization of a subclass of aggregated processes which can be obtained from simpler, "elementary" cases. For particular cases of the mixture densities, we investigate the structure (moving average representation) of the aggregated process.
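
    The aggregation mechanism is easy to demonstrate numerically. The Python sketch below aggregates many independent AR(1) series whose coefficients are drawn from a Beta density on (0, 1); densities with enough mass near 1 are the classical route (Granger, 1980) by which the aggregate acquires long memory. The Beta choice here is illustrative, not the paper's specification.

        import random

        def aggregate_ar1(n_series=500, n_obs=2000, a_alpha=2.0, a_beta=1.0):
            # Each series obeys X_t = a * X_{t-1} + eps_t with its own
            # random coefficient a ~ Beta(a_alpha, a_beta); the function
            # returns the cross-sectional average at each time step.
            agg = [0.0] * n_obs
            for _ in range(n_series):
                a = random.betavariate(a_alpha, a_beta)
                x = 0.0
                for t in range(n_obs):
                    x = a * x + random.gauss(0.0, 1.0)
                    agg[t] += x / n_series
            return agg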

    Impacts of hydrophilic nanofillers on separation performance of thin film nanocomposite reverse osmosis membrane

    Membrane technology is still considered a costly method of producing potable water. In view of this, an RO membrane with enhanced water permeability and no trade-off in salt rejection is desirable, as it could further reduce the cost of water desalination. In this study, thin film nanocomposite (TFN) membranes containing 0.05 or 0.10 w/v% hydrophilic nanofillers in the polyamide layer were synthesized via interfacial polymerization of piperazine and trimesoyl chloride monomers. The resultant TFN membranes were characterized and compared with a control thin film composite (TFC) membrane. Results from the filtration experiments showed that the TFN membranes exhibited higher water permeability, salt rejection and fouling resistance than the TFC membrane. An excessive amount of nanofillers incorporated in the membrane polyamide layer, however, negatively affected the cross-linking of the polymer matrix, thus deteriorating the membrane's salt rejection. The TFN membrane containing 0.05 w/v% of nanofillers showed better performance than the TFC membrane, recording a pure water flux of 11.2 L/m²·h and salt rejection of 95.4%, 97.3% and 97.5% against NaCl, Na2SO4 and MgSO4, respectively.
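
    For reference, the two figures of merit quoted above are conventionally defined as below (a Python sketch of the standard definitions; the authors' exact measurement protocol is not given in the abstract).

        def water_flux(permeate_volume_L, area_m2, hours):
            # Pure water flux in L/(m^2 h), the unit used above.
            return permeate_volume_L / (area_m2 * hours)

        def salt_rejection_percent(c_feed, c_permeate):
            # Observed salt rejection R = (1 - Cp/Cf) * 100%.
            return 100.0 * (1.0 - c_permeate / c_feed)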