    PicoGrid: A Web-Based Distributed Computing Framework for Heterogeneous Networks Using Java

    We propose a framework for distributed computing applications in heterogeneous networks. The system is simple to deploy and can run on any operating system that supports the Java Virtual Machine. Using our system, idle computing power in an organization can be harvested for performing computing tasks. Agent computers can enter and leave the computation at any time, which makes the system flexible and easily scalable. The system also does not interfere with the normal use of client machines, so a satisfactory user experience is preserved. System tests show that performance is comparable to the theoretical case and that computation time is significantly reduced by utilizing multiple computers on the network.
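
    The pull-based agent model described above can be illustrated with a minimal Java sketch. The coordinator endpoints, the task encoding (a plain string), and the idle-detection check are assumptions made for illustration; the paper does not specify them.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Minimal sketch of a PicoGrid-style agent: while the machine is idle it pulls
    // a task from a coordinator, computes, and reports the result. The URLs and the
    // task format are hypothetical, not taken from the paper.
    public class IdleAgent {
        private static final String TASK_URL = "http://coordinator.example.org/task";     // assumed endpoint
        private static final String RESULT_URL = "http://coordinator.example.org/result"; // assumed endpoint

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            while (machineIsIdle()) {
                // Ask the coordinator for the next unit of work.
                HttpRequest get = HttpRequest.newBuilder(URI.create(TASK_URL)).GET().build();
                HttpResponse<String> task = client.send(get, HttpResponse.BodyHandlers.ofString());
                if (task.statusCode() == 204) break;       // no work left

                String result = compute(task.body());      // run the task locally

                // Report the result back; the agent may leave the computation afterwards.
                HttpRequest post = HttpRequest.newBuilder(URI.create(RESULT_URL))
                        .POST(HttpRequest.BodyPublishers.ofString(result)).build();
                client.send(post, HttpResponse.BodyHandlers.discarding());
            }
        }

        private static boolean machineIsIdle() {
            return true;   // placeholder: a real agent would check user activity or load here
        }

        private static String compute(String task) {
            return Integer.toString(task.hashCode());   // placeholder workload
        }
    }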

    Distributed Semantic Web data management in HBase and MySQL cluster

    Various computing and data resources on the Web are being enhanced with machine-interpretable semantic descriptions to facilitate better search, discovery and integration. This interconnected metadata constitutes the Semantic Web, whose volume can potentially grow to the scale of the Web. Efficient management of Semantic Web data, expressed using the W3C's Resource Description Framework (RDF), is crucial for supporting new data-intensive, semantics-enabled applications. In this work, we study and compare two approaches to distributed RDF data management based on emerging cloud computing technologies and traditional relational database clustering technologies. In particular, we design distributed RDF data storage and querying schemes for HBase and MySQL Cluster and conduct an empirical comparison of these approaches on a cluster of commodity machines using datasets and queries from the Third Provenance Challenge and the Lehigh University Benchmark. Our study reveals interesting patterns in query evaluation, shows that our algorithms are promising, and suggests that cloud computing has great potential for scalable Semantic Web data management.
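
    The paper designs its own storage and querying schemes for HBase; as a rough illustration of one common subject-keyed layout (not necessarily the scheme used in the paper), an RDF triple can be stored with the subject as the row key, the predicate as the column qualifier, and the object as the cell value. The table name "rdf_spo" and column family "p" below are assumptions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    // Stores one RDF triple in a subject-keyed HBase table and reads it back.
    // Table and column-family names are illustrative, not from the paper.
    public class TripleStoreSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("rdf_spo"))) {

                // (subject, predicate, object): row key = subject,
                // column qualifier = predicate, cell value = object.
                Put put = new Put(Bytes.toBytes("http://example.org/alice"));
                put.addColumn(Bytes.toBytes("p"),
                              Bytes.toBytes("http://xmlns.com/foaf/0.1/knows"),
                              Bytes.toBytes("http://example.org/bob"));
                table.put(put);

                // A subject-bound triple pattern then becomes a single row fetch.
                Result row = table.get(new Get(Bytes.toBytes("http://example.org/alice")));
                byte[] object = row.getValue(Bytes.toBytes("p"),
                                             Bytes.toBytes("http://xmlns.com/foaf/0.1/knows"));
                System.out.println(Bytes.toString(object));
            }
        }
    }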

    Grist: Grid-based Data Mining for Astronomy

    The Grist project is developing a grid-technology-based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high-redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the "hyperatlas" project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.
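
    The library-of-services-plus-workflow design can be sketched as a pipeline of typed stages. The stage names below (source extraction feeding a statistics step) follow the service areas listed in the abstract, but the interfaces are hypothetical and only illustrate the composition pattern, not Grist's actual service contracts.

    import java.util.List;
    import java.util.function.Function;

    // Hypothetical sketch of composing grid services as workflow stages.
    // Each "service" is modelled as a function, so a workflow is a composition of stages.
    public class WorkflowSketch {

        record Image(double[] pixels) {}
        record Source(double flux) {}

        static Function<Image, List<Source>> extractSources = img ->
                List.of(new Source(img.pixels()[0]));                        // placeholder extraction

        static Function<List<Source>, Double> meanFlux = sources ->
                sources.stream().mapToDouble(Source::flux).average().orElse(0.0);

        public static void main(String[] args) {
            Function<Image, Double> workflow = extractSources.andThen(meanFlux);
            System.out.println(workflow.apply(new Image(new double[] {1.0, 2.0, 3.0})));
        }
    }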

    Enabling Personalized Composition and Adaptive Provisioning of Web Services

    The proliferation of interconnected computing devices is fostering the emergence of environments where Web services made available to mobile users are a commodity. Unfortunately, inherent limitations of mobile devices still hinder seamless access to Web services and their use in supporting complex user activities. In this paper, we describe the design and implementation of a distributed, adaptive, and context-aware framework for personalized service composition and provisioning adapted to mobile users. Users specify their preferences by annotating existing process templates, leading to personalized service-based processes. To cater for the possibility of low-bandwidth communication channels and frequent disconnections, an execution model is proposed whereby the responsibility of orchestrating personalized processes is spread across the participating services and user agents. In addition, the execution model is adaptive in the sense that the runtime environment is able to detect exceptions and react to them according to a set of rules.
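
    A minimal sketch of the rule-driven adaptation mentioned above, assuming a very simple event model: the runtime detects an execution event and applies the first rule whose condition matches. The event fields and the reactions are illustrative, not taken from the paper.

    import java.util.List;
    import java.util.function.Consumer;
    import java.util.function.Predicate;

    // Sketch of exception-handling rules for an adaptive execution model.
    public class AdaptationRules {

        record Event(String service, String kind) {}              // e.g. kind = "TIMEOUT"
        record Rule(Predicate<Event> condition, Consumer<Event> reaction) {}

        static final List<Rule> RULES = List.of(
                new Rule(e -> e.kind().equals("TIMEOUT"),
                         e -> System.out.println("re-invoke " + e.service() + " on reconnect")),
                new Rule(e -> e.kind().equals("FAULT"),
                         e -> System.out.println("substitute an equivalent service for " + e.service())));

        static void handle(Event event) {
            RULES.stream()
                 .filter(r -> r.condition().test(event))
                 .findFirst()
                 .ifPresent(r -> r.reaction().accept(event));     // apply the first matching rule
        }

        public static void main(String[] args) {
            handle(new Event("WeatherService", "TIMEOUT"));
        }
    }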

    A Review of Real World Big Data Processing Structure: Problems and Solutions

    The type and volume of data in human society are growing at an astonishing pace, driven by emerging services such as cloud computing, the Internet of Things, and location-based services; the era of big data has arrived. As data has become a fundamental resource, how to manage and exploit big data effectively has attracted much attention. In particular, with the development of the Internet of Things, processing large amounts of real-time data has become a great challenge in research and applications. Cloud computing technology has recently attracted much attention for its high performance, but how to apply it to large-scale real-time data processing has not been well studied. This paper first examines the challenges of big data and condenses them into six problems. To improve the performance of real-time processing of large data, the paper proposes a real-time big data processing (RTDP) architecture based on cloud computing technology, together with its four layers and a hierarchical computing model. It also proposes a multi-level storage model and an LMA-based application deployment method to meet the real-time and heterogeneity requirements of an RTDP system. DSMS, CEP, batch-based MapReduce and other processing modes, together with FPGA, GPU, CPU, and ASIC technologies, are applied selectively to process data at the data-collection terminals; the data are then structured and uploaded to cloud servers, where MapReduce is combined with the powerful computing capabilities of the cloud architecture. The paper outlines a general structure and computation methods for future RTDP systems, serving as a general approach to RTDP system design.
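
    The cloud-side MapReduce stage mentioned above can be illustrated with a standard Hadoop job skeleton. The concrete job (counting uploaded records per event key, with one tab-separated record per line) is an assumption for illustration and not the paper's actual workload.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Skeleton of the cloud-side batch stage: records uploaded from the collection
    // terminals are aggregated per key with Hadoop MapReduce. The record format
    // ("key<TAB>value" lines) is assumed for illustration.
    public class BatchAggregation {

        public static class KeyMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
            @Override
            protected void map(LongWritable offset, Text line, Context ctx)
                    throws IOException, InterruptedException {
                String key = line.toString().split("\t", 2)[0];   // first field is the event key
                ctx.write(new Text(key), new LongWritable(1));
            }
        }

        public static class CountReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
            @Override
            protected void reduce(Text key, Iterable<LongWritable> ones, Context ctx)
                    throws IOException, InterruptedException {
                long total = 0;
                for (LongWritable one : ones) total += one.get();
                ctx.write(key, new LongWritable(total));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "rtdp-batch-aggregation");
            job.setJarByClass(BatchAggregation.class);
            job.setMapperClass(KeyMapper.class);
            job.setReducerClass(CountReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(LongWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }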

    Security for Grid Services

    Grid computing is concerned with the sharing and coordinated use of diverse resources in distributed "virtual organizations." The dynamic and multi-institutional nature of these environments introduces challenging security issues that demand new technical approaches. In particular, one must deal with diverse local mechanisms, support dynamic creation of services, and enable dynamic creation of trust domains. We describe how these issues are addressed in two generations of the Globus Toolkit. First, we review the Globus Toolkit version 2 (GT2) approach; then, we describe new approaches developed to support the Globus Toolkit version 3 (GT3) implementation of the Open Grid Services Architecture, an initiative that is recasting Grid concepts within a service-oriented framework based on Web services. GT3's security implementation uses Web services security mechanisms for credential exchange and other purposes, and introduces a tight least-privilege model that avoids the need for any privileged network service.
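
    One concrete piece of machinery behind "dealing with diverse local mechanisms" is mapping an authenticated grid identity (the certificate's distinguished name) onto a local account before local authorization applies. The sketch below is modelled on the familiar grid-mapfile convention; the class itself is illustrative and not part of the Globus Toolkit.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Optional;

    // Sketch of grid-mapfile-style identity mapping: a certificate subject (DN)
    // is translated to a local account before local authorization mechanisms run.
    // This class is illustrative and is not part of the Globus Toolkit.
    public class GridMap {
        private final Map<String, String> dnToLocalUser = new HashMap<>();

        // Each mapping line looks like: "/O=Grid/CN=Alice" alice
        public GridMap(Path mapFile) throws IOException {
            for (String line : Files.readAllLines(mapFile)) {
                line = line.trim();
                if (line.isEmpty() || line.startsWith("#")) continue;   // skip blanks and comments
                int close = line.lastIndexOf('"');
                if (!line.startsWith("\"") || close <= 0) continue;     // skip malformed lines
                String dn = line.substring(1, close);
                String localUser = line.substring(close + 1).trim();
                dnToLocalUser.put(dn, localUser);
            }
        }

        // Returns the local account for an authenticated grid identity, if any.
        public Optional<String> mapToLocalUser(String subjectDn) {
            return Optional.ofNullable(dnToLocalUser.get(subjectDn));
        }
    }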

    Java/CORBA based Real-Time Infrastructure to Integrate Event-Driven Simulations, Collaboration and Distributed Object/Componentware Computing

    We discuss the four major standard candidates for distributed object/componentware computing: Java, CORBA, COM and WOM, within our proposed coordination framework, which we call the Pragmatic Object Web (POW). We describe our integration approach based on the multi-protocol middleware server JWORB (Java Web Object Request Broker), which currently integrates HTTP and IIOP and which we are now further developing to also support COM and WOM core functionalities. We are also experimenting with visual dataflow authoring front-ends using the NPAC WebFlow system on top of the JWORB-based software bus. Finally, we illustrate our technologies in one major application domain, DoD Modeling and Simulation, where we use JWORB to implement the Real-Time Infrastructure (RTI) layer of the High Level Architecture (HLA). HLA was recently specified by DMSO as a general integration framework for DoD distributed simulations, and we claim that we can bring it to a broader community of distributed collaborative object/componentware computing via the interactive Web/CORBA/Java/COM interfaces of our Pragmatic Object Web.
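
    The multi-protocol idea behind JWORB, one server port accepting both HTTP and IIOP, can be sketched by inspecting the first bytes of a connection: CORBA GIOP messages begin with the four-byte magic "GIOP", while HTTP requests begin with an ASCII method name. The dispatch below is a simplified illustration under that assumption, not JWORB's actual implementation.

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.PushbackInputStream;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    // Sketch of multi-protocol dispatch on a single port, in the spirit of JWORB:
    // GIOP (IIOP) messages start with the magic bytes "GIOP"; HTTP requests start
    // with an ASCII method such as "GET" or "POST". The handlers are placeholders.
    public class MultiProtocolServer {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(8888)) {
                while (true) {
                    Socket client = server.accept();
                    PushbackInputStream in = new PushbackInputStream(client.getInputStream(), 4);
                    byte[] head = new byte[4];
                    int read = in.read(head);
                    if (read > 0) in.unread(head, 0, read);   // give the bytes back to the handler

                    String prefix = new String(head, 0, Math.max(read, 0), StandardCharsets.US_ASCII);
                    if (prefix.startsWith("GIOP")) {
                        handleIiop(in, client);               // route to the CORBA/IIOP side
                    } else {
                        handleHttp(in, client);               // route to the HTTP side
                    }
                }
            }
        }

        private static void handleIiop(InputStream in, Socket s) throws IOException { s.close(); }
        private static void handleHttp(InputStream in, Socket s) throws IOException { s.close(); }
    }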