
    Collaborative WorkBench for Researchers - Work Smarter, Not Harder

    It is important to define some commonly used terminology related to collaboration to facilitate clarity in later discussions. We define provisioning as infrastructure capabilities such as computation, storage, data, and tools provided by some agency or similarly trusted institution. Sharing is defined as the process of exchanging data, programs, and knowledge among individuals (often strangers) and groups. Collaboration is a specialized case of sharing: sharing with others (usually known colleagues) in pursuit of a common scientific goal or objective. Collaboration entails more dynamic and frequent interactions and can occur at different speeds. Synchronous collaboration occurs in real time (e.g., editing a shared document on the fly, chatting, or video conferencing) and typically requires a peer-to-peer connection. Asynchronous collaboration is episodic in nature, based on a push-pull model; examples include email exchanges, blogging, and repositories.

    The purpose of a workbench is to provide a customizable framework for different applications. Since the workbench is common to all the customized tools, it promotes building modular functionality that can be used and reused by multiple tools. The objective of our Collaborative Workbench (CWB) is thus to create such an open and extensible framework for the Earth Science community via a set of plug-ins. Our CWB is based on the Eclipse [2] Integrated Development Environment (IDE), which is designed as a small kernel containing a plug-in loader for hundreds of plug-ins. The kernel itself is an implementation of a known specification that provides an environment in which the plug-ins execute. This design enables modularity, where discrete chunks of functionality can be reused to build new applications.

    The minimal set of plug-ins necessary to create a client application is called the Eclipse Rich Client Platform (RCP) [3]. The Eclipse RCP also supports thousands of community-contributed plug-ins, making it a popular development platform for many diverse applications, including the Science Activity Planner developed at JPL for the Mars rovers [4] and the scientific experiment tool Gumtree [5]. By leveraging the Eclipse RCP to provide an open, extensible framework, a CWB supports customizations via plug-ins to build rich user applications specific to Earth Science. More importantly, CWB plug-ins can be used by existing science tools built on Eclipse, such as IDL or PyDev, to provide seamless collaboration functionality.
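The small-kernel-plus-plug-in-loader design described above can be sketched in miniature. This is a hypothetical Python illustration with invented names; Eclipse's actual kernel is a Java implementation of the OSGi specification with a far richer plug-in lifecycle.

```python
# Minimal sketch of a kernel whose only job is to register and run plug-ins.
# All names here are illustrative, not Eclipse APIs.

from typing import Callable, Dict, List

class Workbench:
    """A tiny 'kernel': it knows nothing about features, only about plug-ins."""

    def __init__(self) -> None:
        self._plugins: Dict[str, Callable[["Workbench"], None]] = {}

    def register(self, name: str, plugin: Callable[["Workbench"], None]) -> None:
        # Discrete chunks of functionality are contributed from outside the
        # kernel, so they can be reused by multiple customized tools.
        self._plugins[name] = plugin

    def start(self) -> List[str]:
        started = []
        for name, plugin in self._plugins.items():
            plugin(self)  # hand each plug-in the shared environment
            started.append(name)
        return started

# Two hypothetical collaboration plug-ins sharing one kernel:
def sharing_plugin(wb: Workbench) -> None:
    pass  # e.g., asynchronous (push-pull) exchange of data sets

def chat_plugin(wb: Workbench) -> None:
    pass  # e.g., synchronous, real-time collaboration

wb = Workbench()
wb.register("sharing", sharing_plugin)
wb.register("chat", chat_plugin)
```

Because the kernel only dispatches to registered plug-ins, new applications are assembled by choosing a different set of plug-ins rather than by modifying the kernel, which is the reuse property the CWB relies on.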

    A survey and classification of software-defined storage systems

    The exponential growth of digital information is imposing increasing scale and efficiency demands on modern storage infrastructures. As infrastructure complexity increases, so does the difficulty of ensuring quality of service, maintainability, and resource fairness, raising unprecedented performance, scalability, and programmability challenges. Software-Defined Storage (SDS) addresses these challenges by cleanly disentangling control and data flows, easing management, and improving the control functionality of conventional storage systems. Despite its momentum in the research community, many aspects of the paradigm are still unclear, undefined, and unexplored, leading to misunderstandings that hamper the research and development of novel SDS technologies. In this article, we present an in-depth study of SDS systems, providing a thorough description and categorization of each plane of functionality. Further, we propose a taxonomy and classification of existing SDS solutions according to different criteria. Finally, we provide key insights about the paradigm and discuss potential future research directions for the field.

    This work was financed by the Portuguese funding agency FCT (Fundação para a Ciência e a Tecnologia) through national funds, the PhD grant SFRH/BD/146059/2019, the project ThreatAdapt (FCT-FNR/0002/2018), and the LASIGE Research Unit (UIDB/00408/2020), and co-funded by FEDER, where applicable.
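The defining idea of SDS, the separation of control and data flows, can be sketched with a toy example. The classes and the per-tenant I/O quota policy below are hypothetical illustrations, not taken from any specific SDS system in the survey.

```python
# Sketch of the control/data-plane split that characterizes SDS.
# The control plane holds policy; the data plane moves bytes and
# consults the control plane instead of hard-coding policy.

class ControlPlane:
    """Holds policy, e.g. per-tenant I/O quotas for resource fairness."""

    def __init__(self) -> None:
        self._limits = {}

    def set_limit(self, tenant: str, iops: int) -> None:
        self._limits[tenant] = iops

    def limit_for(self, tenant: str) -> int:
        return self._limits.get(tenant, 100)  # assumed default quota

class DataPlane:
    """Serves I/O requests; enforces whatever the control plane decides."""

    def __init__(self, control: ControlPlane) -> None:
        self._control = control
        self._used = {}

    def submit_io(self, tenant: str) -> bool:
        used = self._used.get(tenant, 0)
        if used >= self._control.limit_for(tenant):
            return False  # throttled to preserve fairness
        self._used[tenant] = used + 1
        return True

control = ControlPlane()
control.set_limit("tenant-a", 2)
data = DataPlane(control)
results = [data.submit_io("tenant-a") for _ in range(3)]
# the third request exceeds the quota set by the control plane
```

Because policy lives only in the control plane, an administrator can change quotas or add new objectives without touching the I/O path, which is precisely the programmability benefit the paradigm promises.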

    Ontological support for the evolution of future services oriented architectures

    Services Oriented Architectures (SOA) have emerged as a useful framework for developing interoperable, large-scale systems, typically implemented using the Web Services (WS) standards. However, the maintenance and evolution of SOA systems present many challenges. SmartLife applications are intelligent user-centered systems and a special class of SOA systems that present even greater challenges for a software maintainer. Ontologies and ontological modeling can be used to support the evolution of SOA systems. This paper describes the development of a SOA evolution ontology and its use to develop an ontological model of a SOA system. The ontology is based on a standard SOA ontology. The ontological model can be used to provide semantic and visual support for software maintainers during routine maintenance tasks. We discuss a case study to illustrate this approach, as well as its strengths and limitations.

    Standards in Disruptive Innovation: Assessment Method and Application to Cloud Computing

    The dissertation proposes a conceptual information model and a method for assessing technology standards in the context of disruptive innovations. The conceptual information model provides the basis for structuring the relevant information. The method defines a process model that describes the instantiation of the information model for different domains and supports stakeholders in classifying and evaluating technology standards.

    A Taxonomy of Quality Metrics for Cloud Services

    A large number of metrics with which to assess the quality of cloud services have been proposed over the last years. However, this knowledge is still dispersed, and stakeholders have little or no guidance when choosing metrics that will be suitable for evaluating their cloud services. The objective of this paper is, therefore, to systematically identify, taxonomically classify, and compare existing quality of service (QoS) metrics in the cloud computing domain. We conducted a systematic literature review of 84 studies selected from a set of 4333 studies published from 2006 to November 2018. We specifically identified 470 metric operationalizations that were then classified using a taxonomy, which is also introduced in this paper. The data extracted from the metrics were subsequently analyzed using thematic analysis. The findings indicated that most metrics evaluate quality attributes related to performance efficiency (64%) and that there is a need for metrics that evaluate other characteristics, such as security and compatibility. The majority of the metrics are used during the Operation phase of the cloud services and are applied to the running service. Our results also revealed that metrics for cloud services are still in the early stages of maturity: only 10% of the metrics had been empirically validated. The proposed taxonomy can be used by practitioners as a guideline when specifying service level objectives or deciding which metric is best suited to the evaluation of their cloud services, and by researchers as a comprehensive quality framework in which to evaluate their approaches.

    This work was supported by the Spanish Ministry of Science, Innovation and Universities through the Adapt@Cloud Project under Grant TIN2017-84550-R. The work of Ximena Guerron was supported in part by the Universidad Central del Ecuador (UCE) and in part by the Banco Central del Ecuador.

    Guerron, X.; Abrahao Gonzales, S. M.; Insfran, E.; Fernández-Diego, M.; González-Ladrón-De-Guevara, F. (2020). A Taxonomy of Quality Metrics for Cloud Services. IEEE Access, 8, 131461-131498. https://doi.org/10.1109/ACCESS.2020.3009079
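To make "metric operationalization" concrete, here is a sketch of one widely used performance-efficiency metric, availability. The formula is the standard uptime ratio, not one attributed to any specific study in the review, and the figures are illustrative.

```python
# One common QoS metric operationalization: availability as the fraction
# of the observation period during which the cloud service was usable.

def availability(uptime_hours: float, downtime_hours: float) -> float:
    """Return availability in [0, 1] over the observation period."""
    total = uptime_hours + downtime_hours
    if total == 0:
        raise ValueError("empty observation period")
    return uptime_hours / total

# A service down for about 8.77 hours in a 8760-hour year meets the
# common "three nines" (99.9%) service level objective:
a = availability(uptime_hours=8751.23, downtime_hours=8.77)
```

A practitioner specifying a service level objective would compare such a measured value against a contracted threshold (e.g., 0.999), which is exactly the kind of guidance the proposed taxonomy aims to support.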

    Function-as-a-Service Performance Evaluation: A Multivocal Literature Review

    Function-as-a-Service (FaaS) is one form of the serverless cloud computing paradigm and is defined through FaaS platforms (e.g., AWS Lambda) executing event-triggered code snippets (i.e., functions). Many studies that empirically evaluate the performance of such FaaS platforms have started to appear, but we currently lack a comprehensive understanding of the overall domain. To address this gap, we conducted a multivocal literature review (MLR) covering 112 studies from academic (51) and grey (61) literature. We find that existing work mainly studies the AWS Lambda platform and focuses on micro-benchmarks using simple functions to measure CPU speed and FaaS platform overhead (i.e., container cold starts). Further, we discover a mismatch between academic and industrial sources on tested platform configurations, find that function triggers remain insufficiently studied, and identify HTTP API gateways and cloud storage as the most used external service integrations. Following existing guidelines on experimentation in cloud systems, we discover many flaws threatening the reproducibility of experiments presented in the surveyed studies. We conclude with a discussion of gaps in the literature and highlight methodological suggestions that may serve to improve future FaaS performance evaluation studies.
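The cold-start micro-benchmarks that dominate the surveyed studies follow a simple pattern: invoke a function once on a fresh container, invoke it again on the now-warm container, and attribute the latency difference to platform overhead. The sketch below simulates that harness locally with `time.sleep`; `invoke` is a hypothetical stand-in for calling a real FaaS platform.

```python
# Sketch of a cold-start micro-benchmark. The delays are simulated;
# a real study would replace `invoke` with calls to a FaaS platform.

import time

def invoke(warm_containers: set, function_id: str) -> float:
    """Simulated invocation; returns observed latency in seconds."""
    start = time.perf_counter()
    if function_id not in warm_containers:
        time.sleep(0.05)      # simulated container cold start (overhead)
        warm_containers.add(function_id)
    time.sleep(0.01)          # simulated function execution itself
    return time.perf_counter() - start

containers: set = set()
cold = invoke(containers, "fn-1")  # first call pays the cold-start penalty
warm = invoke(containers, "fn-1")  # second call reuses the warm container
overhead = cold - warm             # estimated platform overhead
```

Real studies repeat both measurements many times and report distributions rather than single samples, one of the reproducibility practices the review's methodological suggestions emphasize.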

    Business Capability Mining - Opportunities and Challenges

    Business capability models are widely used in enterprise architecture management to generate an abstract overview of an organization’s business activities to reach its business objectives. The creation and maintenance of these models are associated with a huge manual workload. Research provides insights into opportunities for the automated modeling of enterprise architecture models. However, most models address the application and technology layers and leave the business layer largely unexplored. In particular, no research has been conducted on the automated generation of business capability models. This research paper uses 19 semi-structured expert interviews to identify possible opportunities for the automated modeling of business capabilities and related challenges, and to jointly develop a business capability mining approach. This research benefits both practice and research by describing a situation-based business capability mining approach and identifying appropriate implementation scenarios.

    Ontology‐driven perspective of CFRaaS

    A Cloud Forensic Readiness as a Service (CFRaaS) model allows an environment to preemptively accumulate relevant potential digital evidence (PDE) that may be needed during a post-event response process. The benefit of applying a CFRaaS model in a cloud environment is that it is designed to prevent modification or tampering of the cloud architecture or infrastructure during the reactive process, which could otherwise have far-reaching implications. The authors of this article present the reactive process as a very costly exercise when the infrastructure must be reprogrammed every time the process is conducted, which may hamper successful investigation from the perspective of forensic experts and law enforcement agencies. The CFRaaS model, in its current state, has not been presented in a way that helps to classify or visualize the different types of potential evidence across all cloud deployment models, and this may limit expectations of what PDE is required or how it may be collected. To address this problem, the article presents CFRaaS from a holistic ontology-driven perspective, which allows forensic experts to apply CFRaaS based on the simplicity of its concepts and the relationships or semantics between different forms of potential evidence, as well as on how the security of the digital environment being investigated can be upheld. CFRaaS in this context follows a fundamental ontology-engineering approach based on the classical Resource Description Framework (RDF). The proposed ontology-driven approach to CFRaaS is, therefore, a knowledge base that uses layer dependencies and could be an essential toolkit for digital forensic examiners and other stakeholders in cloud security. The implementation of this approach could further provide a platform on which to develop other knowledge-base components for cloud forensics and security.
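RDF models knowledge as (subject, predicate, object) triples, and an ontology-driven CFRaaS knowledge base can be pictured as a set of such triples classifying PDE per cloud deployment model. The sketch below uses plain Python tuples; all class and predicate names (the `cfr:` prefix, `collectedIn`, the specific evidence types) are illustrative inventions, not taken from the article's ontology.

```python
# Toy RDF-style knowledge base: (subject, predicate, object) triples
# classifying hypothetical potential digital evidence (PDE) by the
# cloud deployment model in which it can be collected.

triples = {
    ("cfr:AccessLog",  "rdf:type",        "cfr:PotentialDigitalEvidence"),
    ("cfr:AccessLog",  "cfr:collectedIn", "cfr:SaaS"),
    ("cfr:VMSnapshot", "rdf:type",        "cfr:PotentialDigitalEvidence"),
    ("cfr:VMSnapshot", "cfr:collectedIn", "cfr:IaaS"),
}

def evidence_in(layer: str) -> set:
    """All PDE classes collectable in a given deployment model."""
    pde = {s for (s, p, o) in triples
           if p == "rdf:type" and o == "cfr:PotentialDigitalEvidence"}
    return {s for (s, p, o) in triples
            if p == "cfr:collectedIn" and o == layer and s in pde}
```

Queries over such layer-dependency triples are what would let a forensic examiner ask, for a given deployment model, which forms of PDE the readiness service should have been accumulating.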