    Analyzing the impact of storage shortage on data availability in decentralized online social networks

    Maintaining data availability is one of the biggest challenges in decentralized online social networks (DOSNs). Existing work often assumes that a user's friends can always contribute sufficient storage capacity to store all of the user's data. However, this assumption does not always hold in today's online social networks (OSNs), because users now commonly access OSNs from smart mobile devices, whose limited storage capacity may jeopardize data availability. It is therefore desirable to know the relation between the storage capacity contributed by OSN users and the level of data availability that the OSN can achieve. This paper addresses this issue: a model of data availability as a function of storage capacity is established, and a novel method is proposed to predict data availability on the fly. Extensive simulation experiments have been conducted to evaluate the effectiveness of the data availability model and the on-the-fly prediction.
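
    The abstract does not give the model's details, but the core question it poses, how much availability a given amount of friend-contributed storage buys, can be illustrated with a small Monte Carlo sketch. Everything below (the capacities, online probabilities, and greedy placement policy) is a hypothetical stand-in, not the paper's model.

```python
import random

def estimate_availability(friend_capacity, friend_online_prob,
                          data_items, replicas=2, trials=10_000):
    """Monte Carlo estimate of the fraction of data items that are
    retrievable when each item is replicated on up to `replicas` friends,
    subject to each friend's storage capacity (measured in items)."""
    n = len(friend_capacity)
    free = list(friend_capacity)
    placement = []  # placement[i] = list of friends holding item i
    for _ in range(data_items):
        # Greedy placement: prefer the friends with the most free space.
        candidates = sorted(range(n), key=lambda f: free[f], reverse=True)
        holders = [f for f in candidates[:replicas] if free[f] > 0]
        for f in holders:
            free[f] -= 1
        placement.append(holders)  # may be short, or empty, under shortage

    available = 0
    for _ in range(trials):
        online = [random.random() < p for p in friend_online_prob]
        # An item is available if at least one of its holders is online.
        available += sum(1 for holders in placement
                         if any(online[f] for f in holders))
    return available / (trials * data_items)

# Five friends with small (mobile-sized) capacities: total capacity (11)
# is below the 12 replica slots needed, so some items lose a replica.
print(estimate_availability([3, 3, 2, 2, 1], [0.6, 0.5, 0.4, 0.7, 0.3],
                            data_items=6))
```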

    Performance analysis and optimization for workflow authorization

    Many workflow management systems have been developed to enhance the performance of workflow executions. However, the authorization policies deployed in such systems may restrict task executions. Common authorization constraints include role constraints, Separation of Duty (SoD), Binding of Duty (BoD) and temporal constraints. This paper presents methods to check the feasibility of these constraints, and also determines the time durations during which the temporal constraints do not negatively impact performance. Further, this paper presents an authorization method that is optimal in the sense that it minimizes the workflow delay caused by the temporal constraints. The authorization analysis methods are also extended to stochastic workflows, in which task execution times are not known exactly but follow certain probability distributions. Simulation experiments have been conducted to verify the effectiveness of the proposed authorization methods. The experimental results show that, compared with an intuitive authorization method, the optimal authorization method reduces the delay caused by the authorization constraints and consequently reduces the workflows' response time.
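
    To make the notion of authorization-induced delay concrete, here is a minimal sketch of checking one temporal constraint: a task may only run while an authorized role is available, so its start may have to be postponed past its ready time. The interval representation and the greedy scan are illustrative assumptions, not the paper's algorithm.

```python
def earliest_start(ready_time, duration, windows):
    """Earliest start >= ready_time such that [start, start + duration]
    fits entirely inside one availability window of the authorized role.
    `windows` is a time-ordered list of (open, close) intervals.
    Returns (start, delay) or None if the constraint is infeasible."""
    for open_t, close_t in windows:
        start = max(ready_time, open_t)
        if start + duration <= close_t:
            return start, start - ready_time
    return None  # no window can hold the task: constraint infeasible

# A task ready at t=4 needing 3 time units; the role is available twice.
# The first window closes too soon, so authorization delays the task.
print(earliest_start(4, 3, [(0, 5), (8, 20)]))  # -> (8, 4): 4 units late
```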

    Performance optimization for managing massive numbers of small files in distributed file systems

    The processing of massive numbers of small files is a challenge in the design of distributed file systems. Currently, the combined-block-storage approach is prevalent, but it relies on traditional file systems such as ExtFS and may be inefficient when accessing small files randomly located on disk. This paper focuses on optimizing the performance of data servers in accessing massive numbers of small files. We present a Flat Lightweight File System (iFlatLFS) to manage small files, based on a simple metadata scheme and a flat storage architecture. iFlatLFS is designed to substitute for the traditional file system on data servers and can be deployed underneath distributed file systems that store massive numbers of small files, greatly simplifying the original data access procedure. The metadata proposed in this paper occupies only a fraction of the size of the metadata used by traditional file systems. We have implemented iFlatLFS on CentOS 5.5 and integrated it into an open-source Distributed File System (DFS) called Taobao FileSystem (TFS), which was developed by Alibaba, a top B2C service provider in China, and manages over 28.6 billion small photos. We have conducted extensive experiments to verify the performance of iFlatLFS. The results show that when the file size ranges from 1KB to 64KB, iFlatLFS is faster than Ext4 by 48% and 54% on average for random reads and writes in the DFS environment, respectively. Moreover, after iFlatLFS is integrated into TFS, iFlatLFS-based TFS is faster than the existing Ext4-based TFS by 45% and 49% on average for random read access and hybrid access (a mix of read and write accesses), respectively.
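
    The abstract only names the two ingredients, a simple metadata scheme and a flat storage architecture; the toy below shows why that combination cuts the access path to one seek and read per file. The class, file layout and API are hypothetical illustrations, not iFlatLFS's actual design.

```python
import os

class FlatStore:
    """Toy flat storage: all small files live in one large region file,
    and the metadata is a single in-memory map of id -> (offset, length).
    One seek + read serves any file; there is no per-file inode or
    directory lookup as in a traditional file system."""

    def __init__(self, path):
        self.f = open(path, "a+b")   # created on first use, append-only
        self.index = {}              # file_id -> (offset, length)

    def write(self, file_id, data: bytes):
        self.f.seek(0, os.SEEK_END)
        offset = self.f.tell()
        self.f.write(data)
        self.index[file_id] = (offset, len(data))

    def read(self, file_id) -> bytes:
        offset, length = self.index[file_id]
        self.f.seek(offset)
        return self.f.read(length)

store = FlatStore("photos.flat")
store.write("img001", b"\x89PNG...")   # a tiny stand-in for a photo
print(store.read("img001"))
```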

    WolfPath: accelerating iterative traversing-based graph processing algorithms on GPU

    There is significant interest nowadays in developing frameworks for parallelizing the processing of large graphs such as social networks, Web graphs, etc. Most parallel graph processing frameworks employ an iterative processing model. However, by benchmarking state-of-the-art GPU-based graph processing frameworks, we observed that the performance of iterative traversing-based graph algorithms (such as Breadth-First Search, Single-Source Shortest Path and so on) on GPUs is limited by the frequent data exchange between the host and the GPU. To tackle this problem, we develop a GPU-based graph framework called WolfPath to accelerate iterative traversing-based graph processing algorithms. In WolfPath, the iterative process is guided by the graph diameter, eliminating the frequent data exchange between host and GPU. To accomplish this goal, WolfPath proposes a data structure called the Layered Edge list to represent the graph, from which the graph diameter is known before graph processing starts. To enhance the applicability of WolfPath, a graph preprocessing algorithm is also developed in this work to convert any graph into the Layered Edge list format. We conducted extensive experiments to verify the effectiveness of WolfPath. The experimental results show that WolfPath achieves significant speedup over state-of-the-art GPU-based in-memory and out-of-memory graph processing frameworks.
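
    The abstract does not define the Layered Edge list, but its stated property, that the number of iterations is known before traversal starts, suggests grouping edges by the BFS level of their source vertex. The sketch below is a CPU-side illustration of that idea under this assumption, not WolfPath's GPU implementation.

```python
from collections import deque, defaultdict

def layered_edge_list(adj, source):
    """Group edges by the BFS level of their source vertex. Processing
    layer k touches exactly the frontier of iteration k, so the number
    of iterations equals the number of layers (bounded by the graph's
    depth from `source`) and is known before traversal begins."""
    level = {source: 0}
    q = deque([source])
    layers = defaultdict(list)   # level -> [(u, v), ...]
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            layers[level[u]].append((u, v))
            if v not in level:
                level[v] = level[u] + 1
                q.append(v)
    return [layers[k] for k in sorted(layers)]

# A small DAG: three layers, so a traversal needs exactly 3 iterations.
adj = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
for k, edges in enumerate(layered_edge_list(adj, 0)):
    print(f"layer {k}: {edges}")
```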

    Effects of long-term fertilization treatments on the weed seed bank in a wheat-soybean rotation system

    Controlling weed populations by manipulating their seed banks is an important weed management option. To assist such efforts, we investigated relationships between fertilization treatments and depth-related characteristics of the weed seed bank (density, species composition and diversity) under a wheat-soybean rotation after long-term (16-year) fertilization. The number of weed species present and the Shannon-Wiener index were significantly lower under the NPK, NP, NK and PK fertilization treatments than under the fertilization-free control treatment (CK), and the vertical distribution of dominant species differed among the treatments. Generally, species richness and the Shannon-Wiener index decreased and the Pielou index increased with increasing soil depth, but the relationship of the Simpson index with depth was complex and unclear. The results show that the effects of the considered fertilization treatments on weeds warrant careful attention, and that PK fertilization would be optimal for suppressing weeds in the wheat-soybean rotation system studied.
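
    For reference, the three diversity indices reported in the abstract can be computed from per-layer species counts as follows. The seed counts in the example are invented, and the Simpson index appears in several conventions (D, 1 - D, 1/D), so the paper's exact definition may differ.

```python
import math

def diversity_indices(counts):
    """Shannon-Wiener, Pielou and Simpson indices for a list of
    species abundances (e.g. weed seed counts in one soil layer)."""
    total = sum(counts)
    p = [c / total for c in counts if c > 0]
    shannon = -sum(pi * math.log(pi) for pi in p)                # H'
    pielou = shannon / math.log(len(p)) if len(p) > 1 else 0.0   # J = H'/ln S
    simpson = sum(pi * pi for pi in p)                           # D (dominance)
    return shannon, pielou, simpson

# Hypothetical seed counts for four weed species in one depth layer.
print(diversity_indices([120, 40, 25, 5]))
```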

    MTHFR C677T polymorphism and risk of congenital heart defects: evidence from 29 case-control and TDT studies.

    BACKGROUND: Methylenetetrahydrofolate reductase (MTHFR) is an important enzyme in human folate metabolism; it is encoded by the MTHFR gene. Several studies have assessed the association between the MTHFR C677T polymorphism and the risk of congenital heart defects (CHDs), but the results have been inconsistent. METHODS AND FINDINGS: Multiple electronic databases were searched to identify relevant studies published up to July 22, 2012. Data from case-control and TDT studies were integrated under an allelic model using the Catmap and Metafor software. Twenty-nine publications were included in this meta-analysis. The overall meta-analysis showed a significant association between the MTHFR C677T polymorphism and CHD risk in children, with heterogeneity (P heterogeneity = 0.000) and publication bias (P egger = 0.039), but the association became null after the trim-and-fill method was applied (OR = 1.12, 95% CI = 0.95-1.31). Nevertheless, positive results were obtained after stratification by ethnicity and sample size in all subgroups except the mixed population. For mothers, there was a significant association between the variant and CHDs without heterogeneity (P heterogeneity = 0.150, OR = 1.16, 95% CI = 1.05-1.29) or publication bias (P egger = 0.981). However, the results varied across subgroups in the stratified analyses by ethnicity and sample size. CONCLUSIONS: Both infant and maternal MTHFR C677T polymorphisms may contribute to the risk of CHDs.
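
    The pooled odds ratios above were produced with the Catmap and Metafor R packages; purely as an illustration of what an allelic-model pooled OR is, here is the textbook inverse-variance fixed-effect calculation. Given the reported heterogeneity, the paper's analysis likely also used a random-effects model, and the allele counts in the example are invented.

```python
import math

def pooled_or(studies):
    """Fixed-effect (inverse-variance) pooled odds ratio under an
    allelic model. Each study is a tuple of allele counts
    (case_T, case_C, control_T, control_C); assumes no zero cells
    (no continuity correction is applied)."""
    num = den = 0.0
    for a, b, c, d in studies:
        log_or = math.log((a * d) / (b * c))   # per-study log odds ratio
        var = 1 / a + 1 / b + 1 / c + 1 / d    # variance of the log OR
        w = 1 / var                            # inverse-variance weight
        num += w * log_or
        den += w
    mean = num / den
    se = 1 / math.sqrt(den)
    ci = (math.exp(mean - 1.96 * se), math.exp(mean + 1.96 * se))
    return math.exp(mean), ci

# Two hypothetical studies: (T, C) allele counts in cases and controls.
print(pooled_or([(120, 180, 100, 200), (60, 90, 50, 100)]))
```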

    Research on Linux Trusted Boot Method Based on Reverse Integrity Verification

    Trusted computing aims to build a trusted computing environment for information systems with the help of the secure hardware TPM, and has proved to be an effective way to counter network security threats. However, TPM chips are not yet widely deployed in most computing devices, which limits the applicability of trusted computing technology. To address the lack of trusted hardware in existing computing platforms, this paper introduces an alternative security hardware device, the USBKey, to simulate the basic functions of a TPM, and proposes a new reverse USBKey-based integrity verification model that implements reverse integrity verification of the operating system boot process, achieving a trusted boot of the operating system on end systems without TPMs. A Linux boot method based on reverse integrity verification is designed and implemented, with which the integrity of data and executable files in the operating system is verified and protected phase by phase during the trusted boot process. It implements a trusted boot of the operating system without a TPM and supports remote attestation of the platform. Our method greatly improves the flexibility of trusted computing technology and makes it possible to apply trusted computing in large-scale computing environments.
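
    The abstract does not detail the reverse verification model itself, so the sketch below shows only the generic phase-by-phase pattern it builds on: each boot component is hashed and compared against a reference digest that, in the paper's setting, would be sealed in the USBKey. The component names, paths and digests here are placeholders; a temporary file stands in for a kernel image so the demo is self-contained.

```python
import hashlib
import os
import tempfile

def file_digest(path):
    """SHA-256 of a boot component, streamed in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_boot(components, reference):
    """Check each boot phase against the reference digest that would be
    sealed in the USBKey; abort the boot chain on the first mismatch."""
    for name, path in components:
        if file_digest(path) != reference[name]:
            raise SystemExit(f"integrity check failed at phase: {name}")
        print(f"phase ok: {name}")

# Self-contained demo with a temporary stand-in for a kernel image.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"fake kernel image")
reference = {"kernel": file_digest(tmp.name)}  # digest the USBKey would hold
verify_boot([("kernel", tmp.name)], reference)
os.remove(tmp.name)
```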