657 research outputs found

    A Resource Centric Approach For Advancing Collaboration Through Hydrologic Data And Model Sharing

    Full text link
    HydroShare is an online, collaborative system being developed for open sharing of hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access hydrologic data and models, retrieve them to their desktop or perform analyses in a distributed computing environment that may include grid, cloud or high performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system and science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These will include different data types used in the hydrology community and models and workflows that require metadata on execution functionality. The HydroShare web interface and social media functions are being developed using the Drupal content management system. A geospatial visualization and analysis component enables searching, visualizing, and analyzing geographic datasets. The integrated Rule-Oriented Data System (iRODS) is being used to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. 
This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model, and outline the roadmap for future development.
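The Resource Data Model described above separates system metadata from science metadata, with elements common to all resources plus type-specific extensions. A minimal sketch of that idea follows; all field and class names are illustrative assumptions, not the actual HydroShare schema.

```python
from dataclasses import dataclass, field

@dataclass
class SystemMetadata:
    resource_id: str
    owner: str
    created: str  # ISO 8601 timestamp

@dataclass
class ScienceMetadata:
    title: str
    abstract: str
    keywords: list = field(default_factory=list)

@dataclass
class Resource:
    # elements common to all resources: system and science metadata kept separate
    system: SystemMetadata
    science: ScienceMetadata
    extra: dict = field(default_factory=dict)  # type-specific elements

@dataclass
class ModelResource(Resource):
    # a model resource adds metadata on execution functionality
    executable: str = ""
```

A model resource then carries both the shared metadata and its execution-specific element, e.g. `ModelResource(sys_md, sci_md, executable="run_model.sh")`.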

    DIMCloud: a distributed framework for district energy simulation and management

    Get PDF
    To optimize energy consumption, it is necessary to monitor real-time data and simulate all energy flows. In a city district context, energy consumption data usually come from many sources and are encoded in different formats. However, few models have been proposed to trace the energy behavior of city districts and handle the related data. In this article, we introduce DIMCloud, a model for heterogeneous data management and integration at the district level in a pervasive computing context. By means of an ontology, our model is able to register the relationships between the district's different data sources and to disclose the sources' locations using a publish-subscribe design pattern. Furthermore, data sources are published as Web Services, abstracting the underlying hardware from the user's point of view.
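The publish-subscribe pattern mentioned above can be sketched minimally as follows: district data sources publish readings under topics, and interested components subscribe to those topics. The topic names and callback shape are illustrative assumptions, not DIMCloud's actual API.

```python
from collections import defaultdict

class DistrictBroker:
    """Toy publish-subscribe broker for district energy data sources."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # register interest in a data source's topic
        self._subscribers[topic].append(callback)

    def publish(self, topic, reading):
        # deliver a new reading to every subscriber of the topic
        for callback in self._subscribers[topic]:
            callback(reading)

broker = DistrictBroker()
received = []
broker.subscribe("district/building-7/electricity", received.append)
broker.publish("district/building-7/electricity", {"kWh": 42.5})
```

Subscribers never see the underlying hardware, only the topic and the reading, which mirrors the abstraction the article describes.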

    Diagnosis of Errors in Stalled Inter-Organizational Workflow Processes

    Get PDF
    Fault-tolerant inter-organizational workflow processes help participant organizations efficiently complete their business activities and operations without extended delays. The stalling of inter-organizational workflow processes is a common hurdle that causes organizations immense losses and operational difficulties. The complexity of software requirements, the incapability of workflow systems to properly handle exceptions, and inadequate process modeling are the leading causes of errors in workflow processes. This dissertation is essentially about diagnosing errors in stalled inter-organizational workflow processes. Its goals and objectives were achieved by designing a fault-tolerant software architecture of the workflow system components/modules (i.e., workflow process designer, workflow engine, workflow monitoring, workflow administrative panel, service integration, workflow client) relevant to exception handling and troubleshooting. The complexity and improper implementation of software requirements were handled by building a framework of guiding principles and best practices for modeling and designing inter-organizational workflow processes. Theoretical and empirical/experimental research methodologies were used to find the root causes of errors in stalled workflow processes. Error detection and diagnosis are critical steps that can be further used to design a strategy to resolve the stalled processes. Diagnosis of errors in stalled workflow processes was in scope, but the resolution of stalled workflow processes was out of scope for this dissertation. The software architecture facilitated automatic and semi-automatic diagnosis of errors in stalled workflow processes from real-time and historical perspectives.
The empirical/experimental study was conducted by creating state-of-the-art inter-organizational workflow processes using an API-based workflow system, a low-code workflow automation platform, a supported high-level programming language, and a storage system. The empirical/experimental measurements and dissertation goals were explained by collecting, analyzing, and interpreting the workflow data. The methodology was evaluated on its ability to diagnose errors successfully (i.e., identify the root cause) in stalled processes caused by web service failures in inter-organizational workflow processes. Fourteen datasets were created to analyze, verify, and validate the hypotheses and the software architecture. Among the fourteen datasets, seven were created for end-to-end IOWF process scenarios, including IOWF web service consumption, and seven were for the IOWF web service alone. The results of the data analysis strongly supported and validated the software architecture and hypotheses. The guiding principles and best practices of workflow process modeling and design point to opportunities for preventing processes from stalling. The outcome of the dissertation, i.e., the diagnosis of errors in stalled inter-organizational workflow processes, can be utilized to resolve these stalled processes.
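The core diagnostic step described above — locating the point at which a process stalled, e.g. on a web service failure — can be sketched as a scan over a workflow event log for steps that started but never completed. The log format and step names here are illustrative assumptions, not the dissertation's actual data model.

```python
def diagnose_stall(events):
    """Return (step, error) for the earliest started-but-unfinished step."""
    started, finished, errors = {}, set(), {}
    for e in events:
        if e["event"] == "start":
            started[e["step"]] = e["ts"]
        elif e["event"] == "complete":
            finished.add(e["step"])
        elif e["event"] == "error":
            errors[e["step"]] = e["detail"]
    # walk steps in start order; the first one never completed is the stall point
    for step in sorted(started, key=started.get):
        if step not in finished:
            return step, errors.get(step, "no error recorded")
    return None, None

log = [
    {"step": "validate", "event": "start", "ts": 1},
    {"step": "validate", "event": "complete", "ts": 2},
    {"step": "invoke_partner_service", "event": "start", "ts": 3},
    {"step": "invoke_partner_service", "event": "error",
     "ts": 4, "detail": "HTTP 503 from partner web service"},
]
step, why = diagnose_stall(log)
```

Such a historical-log scan corresponds to the semi-automatic, after-the-fact perspective; a real-time variant would apply the same check to a live event stream.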

    Current Trends and New Challenges of Databases and Web Applications for Systems Driven Biological Research

    Get PDF
    The dynamic and rapidly evolving nature of systems-driven research imposes special requirements on the technology, approach, design and architecture of computational infrastructure, including databases and Web applications. Several solutions have been proposed to meet these expectations, and novel methods have been developed to address the persisting problems of data integration. It is important for researchers to understand the different technologies and approaches; having familiarized themselves with the pros and cons of the existing technologies, researchers can exploit their capabilities to the maximum potential for integrating data. In this review we discuss the architecture, design and key technologies underlying some of the prominent databases and Web applications, describe their roles in the integration of biological data, and investigate some of the emerging design concepts and computational technologies that are likely to play a key role in the future of systems-driven biomedical research.

    Degree of Scaffolding: Learning Objective Metadata: A Prototype Learning System Design for Integrating GIS into a Civil Engineering Curriculum

    Get PDF
    Digital media and networking offer great potential as tools for enhancing classroom learning environments, both local and distant. One concept and related technological tool that can facilitate the effective application and distribution of digital educational resources is the learning object, in combination with the SCORM (sharable content object reference model) compliance framework. Progressive scaffolding is a learning design approach for educational systems that provides flexible guidance to students. We are in the process of utilizing this approach within a SCORM framework in the form of a multi-level instructional design. The associated metadata required by SCORM will describe the degree of scaffolding. This paper discusses progressive scaffolding as it relates to SCORM-compliant learning objects, within the context of the design of an application for integrating Geographic Information Systems (GIS) into the civil engineering curriculum at the University of Missouri - Rolla.
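A hedged sketch of how a "degree of scaffolding" field in learning-object metadata could drive content selection, as the approach above suggests. The field names and level values are illustrative assumptions, not the actual SCORM metadata schema.

```python
# Each learning object carries metadata describing its degree of scaffolding.
learning_objects = [
    {"id": "gis-intro-a", "scaffolding": "high",   "content": "step-by-step tutorial"},
    {"id": "gis-intro-b", "scaffolding": "medium", "content": "guided exercise"},
    {"id": "gis-intro-c", "scaffolding": "low",    "content": "open-ended problem"},
]

def select_object(objects, level):
    """Pick the learning object whose metadata matches the desired scaffolding level."""
    return next(o for o in objects if o["scaffolding"] == level)
```

A multi-level instructional design could then serve the same topic at high, medium, or low scaffolding depending on the student's progress.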

    MSV3d: database of human MisSense variants mapped to 3D protein structure

    Get PDF
    The elucidation of the complex relationships linking genotypic and phenotypic variations to protein structure is a major challenge in the post-genomic era. We present MSV3d (Database of human MisSense Variants mapped to 3D protein structure), a new database that contains detailed annotation of missense variants of all human proteins (20 199 proteins). The multi-level characterization includes details of the physico-chemical changes induced by amino acid modification, as well as information related to the conservation of the mutated residue and its position relative to functional features in the available or predicted 3D model. Major releases of the database are automatically generated and updated regularly in line with the dbSNP (database of Single Nucleotide Polymorphism) and SwissVar releases, by exploiting the extensive Décrypthon computational grid resources. The database (http://decrypthon.igbmc.fr/msv3d) is easily accessible through a simple web interface coupled to a powerful query engine and a standard web service. The content is completely or partially downloadable in XML or flat file formats.

    A framework for SLA-centric service-based Utility Computing

    Get PDF
    Service-oriented Utility Computing paves the way towards the realization of service markets, which promise metered services through negotiable Service Level Agreements (SLA). A market does not necessarily imply a simple buyer-seller relationship; rather, it is the culmination point of a complex chain of stakeholders with a hierarchical integration of value along each link in the chain. In service value chains, services corresponding to different partners are aggregated in a producer-consumer manner, resulting in hierarchical structures of added value. SLAs are contracts between service providers and service consumers, which ensure the expected Quality of Service (QoS) to different stakeholders at various levels in this hierarchy. This thesis addresses the challenge of realizing an SLA-centric infrastructure to enable service markets for Utility Computing. Service Level Agreements play a pivotal role throughout the life cycle of service aggregation. The activities of service selection and service negotiation, followed by the hierarchical aggregation and validation of services in the service value chain, require SLAs as an enabling technology. This research aims at an SLA-centric framework in which the requirement-driven selection of services, flexible SLA negotiation, hierarchical SLA aggregation and validation, and related issues such as privacy, trust and security have been formalized, and in which prototypes of the service selection model and the validation model have been implemented. The formal model for user-driven service selection utilizes Branch and Bound and heuristic algorithms for its implementation. The formal model is then extended for SLA negotiation of configurable services of varying granularity in order to balance the interests of service consumers and service providers.
The possibility of service aggregation opens new business opportunities in the evolving landscape of the IT-based service economy. An SLA as a unit of business relationships helps establish innovative topologies for business networks. One example is the composition of computational services to construct services of bigger granularity, thus giving rise to business models based on service aggregation, Composite Service Provision and Reselling. This research introduces and formalizes the notions of SLA Choreography and hierarchical SLA aggregation in connection with the underlying service choreography, to realize SLA-centric service value chains and business networks. SLA Choreography and aggregation pose new challenges regarding description, management, maintenance, validation, trust, privacy and security. The aggregation and validation models for SLA Choreography introduce concepts such as: SLA Views to protect the privacy of stakeholders; a hybrid trust model to foster business among unknown partners; and a PKI security mechanism coupled with a rule-based validation system to enable distributed queries across heterogeneous boundaries. A distributed rule-based hierarchical SLA validation system is designed to demonstrate the practical significance of these notions.
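The Branch and Bound approach to requirement-driven service selection mentioned above can be illustrated as follows: pick one candidate provider per required service so that total cost is minimized while aggregate latency stays within the SLA bound, pruning branches that are infeasible or already dominated. The candidate data and bound values are assumptions for the example, not the thesis' formal model.

```python
def select_services(candidates, max_latency):
    """Branch and Bound over one provider choice per service slot."""
    best = {"cost": float("inf"), "choice": None}

    def branch(i, cost, latency, chosen):
        # bound: prune branches that violate the SLA or cannot beat the best cost
        if latency > max_latency or cost >= best["cost"]:
            return
        if i == len(candidates):
            best["cost"], best["choice"] = cost, list(chosen)
            return
        for name, c, lat in candidates[i]:
            branch(i + 1, cost + c, latency + lat, chosen + [name])

    branch(0, 0.0, 0.0, [])
    return best["choice"], best["cost"]

# one candidate list per required service: (provider, cost, latency)
candidates = [
    [("storage-A", 5.0, 20.0), ("storage-B", 3.0, 60.0)],
    [("compute-A", 8.0, 30.0), ("compute-B", 4.0, 90.0)],
]
choice, cost = select_services(candidates, max_latency=100.0)
```

Here the cheapest combination overall (storage-B + compute-B) violates the latency bound, so the search returns the cheapest feasible aggregation instead.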

    Group Membership Management Framework for Decentralized Collaborative Systems

    Get PDF
    Scientific and commercial endeavors could benefit from cross-organizational, decentralized collaboration, which becomes the key to innovation. This work addresses one of its challenges, namely efficient access control to assets for distributed data processing among autonomous data centers. We propose a group membership management framework dedicated to realizing access control in decentralized environments. Its novelty lies in a synergy of two concepts: a decentralized knowledge base and an incremental indexing scheme, both assuming a P2P architecture, where each peer retains autonomy and has full control over the choice of peers it cooperates with. The extent of exchanged information is reduced to the minimum required for user collaboration and assumes limited trust between peers. The indexing scheme is optimized for read-intensive scenarios by offering fast queries -- look-ups in precomputed indices. The index precomputation increases the complexity of update operations, but their performance is arguably sufficient for large organizations, as shown by the tests we conducted. We believe that our framework is a major contribution towards decentralized, cross-organizational collaboration.
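The read-optimized indexing trade-off described above can be sketched minimally: group memberships (including nested groups) are flattened into a precomputed index, so access-control look-ups are constant-time while updates pay the cost of reindexing. Group and user names are illustrative; this is a single-node toy, not the P2P framework itself.

```python
class MembershipIndex:
    def __init__(self):
        self._members = {}    # group -> set of direct users
        self._subgroups = {}  # group -> set of direct subgroups
        self._index = {}      # group -> precomputed transitive user set

    def add_member(self, group, user):
        self._members.setdefault(group, set()).add(user)
        self._rebuild()  # updates pay the indexing cost

    def add_subgroup(self, parent, child):
        self._subgroups.setdefault(parent, set()).add(child)
        self._rebuild()

    def _resolve(self, group, seen):
        if group in seen:
            return set()  # guard against membership cycles
        seen.add(group)
        users = set(self._members.get(group, set()))
        for child in self._subgroups.get(group, set()):
            users |= self._resolve(child, seen)
        return users

    def _rebuild(self):
        groups = set(self._members) | set(self._subgroups)
        self._index = {g: self._resolve(g, set()) for g in groups}

    def is_member(self, group, user):
        # fast path: a single look-up in the precomputed index
        return user in self._index.get(group, set())
```

In a read-intensive access-control scenario, `is_member` is called far more often than `add_member`, which is what makes the precomputation worthwhile.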

    Blending Science Knowledge and AI Gaming Techniques for Experiential Learning

    Get PDF
    This paper addresses the scientific, design and experiential learning issues in creating an extremely realistic 3D interactive simulation of a wild beluga whale pod for a major aquarium, situated next to a group of real beluga whales in an integrated marine mammal exhibit. The Virtual Beluga Interactive was conceived to better immerse and engage visitors in complicated educational concepts about the life of wild belugas than is typically possible via wall signage or a video display, thereby allowing them to interactively experience wild whale behavior and hopefully gain deeper insights into the life of beluga whales. The gaming simulation is specifically informed by research data from live belugas (e.g., voice recordings tied to mother/calf behavior) and by interviews with marine mammal scientists and education staff at the Vancouver Aquarium. The collaborative user interface allows visitors to engage in educational "what-if" scenarios of wild beluga emergent behavior using techniques from advanced gaming systems, such as physically based animation, real-time photo-realistic rendering, and artificial intelligence algorithms.
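Emergent group behavior of the kind mentioned above is classically produced by flocking-style rules (in the spirit of Reynolds' boids): each agent follows simple local rules, and pod-like motion emerges. The sketch below shows only a cohesion rule as a generic illustration; it is not the Virtual Beluga implementation.

```python
def cohesion_step(positions, weight=0.1):
    """Move each agent a small step toward the group's center of mass."""
    n = len(positions)
    cx = sum(p[0] for p in positions) / n
    cy = sum(p[1] for p in positions) / n
    return [(x + weight * (cx - x), y + weight * (cy - y))
            for x, y in positions]

# three "whales" scattered in the plane; one cohesion step pulls them together
pod = [(0.0, 0.0), (10.0, 0.0), (5.0, 10.0)]
pod = cohesion_step(pod)
```

A full flocking model would add separation and alignment rules, plus responses to visitor input for the "what-if" scenarios.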