
    Efficient Large-scale Trace Checking Using MapReduce

    The problem of checking a logged event trace against a temporal logic specification arises in many practical cases. Unfortunately, known algorithms for an expressive logic like MTL (Metric Temporal Logic) do not scale with respect to two crucial dimensions: the length of the trace and the size of the time interval for which logged events must be buffered to check satisfaction of the specification. The former issue can be addressed by distributed and parallel trace checking algorithms that can take advantage of modern cloud computing and programming frameworks like MapReduce. Still, the latter issue remains open with current state-of-the-art approaches. In this paper we address this memory scalability issue by proposing a new semantics for MTL, called lazy semantics. This semantics can evaluate temporal formulae and Boolean combinations of temporal-only formulae at any arbitrary time instant. We prove that lazy semantics is more expressive than standard point-based semantics and that it can be used as a basis for a correct parametric decomposition of any MTL formula into an equivalent one with smaller, bounded time intervals. We use lazy semantics to extend our previous distributed trace checking algorithm for MTL. We evaluate the proposed algorithm in terms of memory scalability and time/memory tradeoffs.
    Comment: 13 pages, 8 figures
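    The buffering problem the abstract describes can be seen in a minimal sketch of standard point-based MTL checking (illustrative only, not the paper's lazy semantics or its MapReduce algorithm): evaluating the bounded response formula G (p -> F_[0,k] q) over a logged trace forces a lookahead window of k+1 events per time instant, so the interval bound k directly dictates memory use.

```python
# Minimal point-based check of G (p -> F_[0,k] q) over a logged trace.
# Function and variable names are illustrative assumptions.

def check_globally_response(trace, p, q, k):
    """trace: list of sets of atomic propositions, one per time instant.
    Returns True iff every occurrence of p is followed by q within k instants."""
    for i in range(len(trace)):
        if p in trace[i]:
            # Satisfying F_[0,k] q requires buffering a window of k+1 events,
            # which is exactly the memory-scalability issue the paper targets.
            window = trace[i:i + k + 1]
            if not any(q in events for events in window):
                return False
    return True

trace = [{"p"}, set(), {"q"}, {"p"}, {"q"}]
print(check_globally_response(trace, "p", "q", 2))  # True: each p sees a q within 2 steps
```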

    Window-based Streaming Graph Partitioning Algorithm

    In recent years, the scale of graph datasets has increased to such a degree that a single machine is not capable of efficiently processing large graphs. Efficient graph partitioning is therefore necessary for large graph applications. Traditional graph partitioning generally loads the whole graph into memory before partitioning; this is not only time-consuming but also creates memory bottlenecks. These issues of memory limitation and enormous time complexity can be resolved using stream-based graph partitioning. A streaming graph partitioning algorithm reads each vertex once and immediately assigns it to a partition; this is also called a one-pass algorithm. This paper proposes an efficient window-based streaming graph partitioning algorithm called WStream. The WStream algorithm is an edge-cut partitioning algorithm, which distributes vertices among the partitions. Our results suggest that the WStream algorithm is able to partition large graph data efficiently while keeping the load balanced across partitions and communication to a minimum. Evaluation results with real workloads also demonstrate the effectiveness of the proposed algorithm, which achieves a significant reduction in load imbalance and edge-cut across datasets of different sizes.
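    A one-pass, greedy edge-cut partitioner in the spirit described above can be sketched as follows (the scoring heuristic and names are illustrative assumptions, not WStream's exact rule): each vertex is read once and assigned to the partition holding most of its already-placed neighbours, discounted by that partition's current load to preserve balance.

```python
# Hypothetical one-pass streaming vertex partitioner (illustrative sketch).

def stream_partition(vertex_stream, num_partitions, capacity):
    """vertex_stream yields (vertex, neighbours) pairs exactly once each."""
    assignment = {}                    # vertex -> partition id
    loads = [0] * num_partitions
    for vertex, neighbours in vertex_stream:
        def score(part):
            # Reward co-locating with already-placed neighbours (fewer cut edges),
            # penalise heavily loaded partitions (load balance).
            placed = sum(1 for n in neighbours if assignment.get(n) == part)
            return placed - loads[part] / capacity
        eligible = [p for p in range(num_partitions) if loads[p] < capacity]
        best = max(eligible, key=score)
        assignment[vertex] = best
        loads[best] += 1
    return assignment, loads

stream = [("a", []), ("b", ["a"]), ("c", ["a", "b"]), ("d", [])]
assignment, loads = stream_partition(stream, 2, capacity=2)
print(loads)  # [2, 2]: hard capacity keeps the partitions balanced
```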

    GraphSE²: An Encrypted Graph Database for Privacy-Preserving Social Search

    In this paper, we propose GraphSE², an encrypted graph database for online social network services to address massive data breaches. GraphSE² preserves the functionality of social search, a key enabler for quality social network services, where social search queries are conducted on a large-scale social graph and meanwhile perform set and computational operations on user-generated contents. To enable efficient privacy-preserving social search, GraphSE² provides an encrypted structural data model to facilitate parallel and encrypted graph data access. It is also designed to decompose complex social search queries into atomic operations and realise them via interchangeable protocols in a fast and scalable manner. We build GraphSE² with various queries supported in the Facebook graph search engine and implement a full-fledged prototype. Extensive evaluations on Azure Cloud demonstrate that GraphSE² is practical for querying a social graph with a million users.
    Comment: This is the full version of our AsiaCCS paper "GraphSE²: An Encrypted Graph Database for Privacy-Preserving Social Search". It includes the security proof of the proposed scheme. If you want to cite our work, please cite the conference version of it.

    Low latency via redundancy

    Low latency is critical for interactive networked applications. But while we know how to scale systems to increase capacity, reducing latency, especially the tail of the latency distribution, can be much more difficult. In this paper, we argue that the use of redundancy is an effective way to convert extra capacity into reduced latency. By initiating redundant operations across diverse resources and using the first result that completes, redundancy improves a system's latency even under exceptional conditions. We study the tradeoff with added system utilization, characterizing the situations in which replicating all tasks reduces mean latency. We then demonstrate empirically that replicating all operations can result in significant mean and tail latency reduction in real-world systems including DNS queries, database servers, and packet forwarding within networks.
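    The core mechanism, issuing the same request to several replicas and taking whichever answer arrives first, can be sketched with standard concurrency primitives (replica names and delays here are made up for illustration):

```python
# Redundant request sketch: first completed replica wins, stragglers are ignored.
import concurrent.futures
import time

def query_replica(name, delay):
    time.sleep(delay)  # simulate variable network/server latency
    return name

def redundant_request(replicas):
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(replicas)) as pool:
        futures = [pool.submit(query_replica, name, delay)
                   for name, delay in replicas]
        # Return the first result to complete; in a real system the remaining
        # requests would be cancelled to reclaim capacity.
        done, _ = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        return next(iter(done)).result()

fastest = redundant_request([("replica-a", 0.3), ("replica-b", 0.01)])
print(fastest)  # replica-b
```

    This is exactly the capacity-for-latency trade the abstract describes: the duplicated work raises utilization, but the response time is the minimum over the replicas' latencies rather than a single draw from the tail.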

    Mineralization behaviour of some new pHEMA-based copolymers with potential uses in tissue engineering

    This paper reports the mineralization ability of 2-hydroxyethyl methacrylate (HEMA) and 2-methacryloylamido glutamic acid (MAGA) based copolymers incubated in synthetic fluids. The MAGA monomer was obtained by organic synthesis, and p(HEMA-co-MAGA) copolymers with different compositions were then prepared by bulk radical polymerization using benzoyl peroxide as initiator and ethylene glycol dimethacrylate as cross-linking agent. The monomer and polymers were further characterized by FTIR-ATR spectroscopy to confirm their structure. Finally, the polymers' ability to initiate the formation and growth of HA crystals on their surface in synthetic fluids was demonstrated. SEM analysis showed the formation of apatite-like crystals (calcospherites), a finding also confirmed by EDX analysis.

    Efficient and Fast Gaussian Beam-Tracking Approach for Indoor-Propagation Modeling

    A Gaussian beam-tracking technique is proposed for physical indoor-propagation modeling. Its efficiency stems from the collective treatment of rays, which is realized by using Gaussian beams to propagate fields. The formulation of this method is outlined, the computation-time efficiency is discussed, and the simulation results are compared to those obtained using commercial ray-tracing software (XSiradif).

    Application and testing of the L* neural network with the self-consistent magnetic field model of RAM-SCB

    We expanded our previous work on L* neural networks that used empirical magnetic field models as the underlying models by applying and extending our technique to drift shells calculated from a physics-based magnetic field model. While empirical magnetic field models represent an average, statistical magnetospheric state, the RAM-SCB model, a first-principles magnetically self-consistent code, computes magnetic fields based on fundamental equations of plasma physics. Unlike the previous L* neural networks that include McIlwain L and mirror point magnetic field as part of the inputs, the new L* neural network only requires solar wind conditions and the Dst index, allowing for an easier preparation of input parameters. This new neural network is compared against those previously trained networks and validated by the tracing method in the International Radiation Belt Environment Modeling (IRBEM) library. The accuracy of all L* neural networks with different underlying magnetic field models is evaluated by applying the electron phase space density (PSD)-matching technique derived from Liouville's theorem to the Van Allen Probes observations. Results indicate that the uncertainty in the predicted L* is statistically (75%) below 0.7 with a median value mostly below 0.2 and the median absolute deviation around 0.15, regardless of the underlying magnetic field model. We found that such an uncertainty in the calculated L* value can shift the peak location of the electron phase space density (PSD) profile by 0.2 RE radially but with its shape nearly preserved.
    Key Points:
    - L* neural network based on RAM-SCB model is developed
    - L* calculation accuracy is estimated by PSD matching using RBSP data
    - L* uncertainty causes a radial shift in the electron phase space density profile
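    The input/output shape of such a network can be sketched as a small feed-forward model mapping solar wind parameters and the Dst index to L* (weights, layer sizes, and input choices here are illustrative assumptions, not the trained network from the paper):

```python
# Hypothetical forward pass of an L* neural network: solar wind + Dst in, L* out.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # inputs: [v_sw, n_sw, Bz, Dst]
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output: scalar L* estimate

def predict_lstar(x):
    h = np.tanh(x @ W1 + b1)   # single hidden layer with tanh activation
    return float(h @ W2 + b2)  # untrained weights, so the value is arbitrary

x = np.array([400.0, 5.0, -2.0, -30.0])  # example solar wind speed, density, Bz, Dst
lstar = predict_lstar(x)
```

    The point of the sketch is the reduced input set the abstract highlights: unlike earlier networks, no McIlwain L or mirror point field is needed, only quantities that are routinely available from solar wind monitors and geomagnetic indices.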

    The effects of dynamic ionospheric outflow on the ring current

    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/94583/1/jgra20739.pd