
    Multi-Criteria Decision-Making Approach for Container-based Cloud Applications: The SWITCH and ENTICE Workbenches

    Many emerging smart applications rely on the Internet of Things (IoT) to provide solutions to time-critical problems. When building such applications, a software engineer must address multiple Non-Functional Requirements (NFRs), including fast response time, low communication latency, high throughput, high energy efficiency and low operational cost. Modern container-based software engineering approaches promise to improve the software lifecycle; however, they fall short of tools and mechanisms for NFR management and optimisation. Our work addresses this problem with a new decision-making approach based on Pareto multi-criteria optimisation. Using different instance configurations across various geo-locations, we demonstrate the suitability of our method, which narrows the search space to only the optimal instances for deploying the containerised microservice. This solution is included in two advanced software engineering environments: the SWITCH workbench, which includes an Interactive Development Environment (IDE), and the ENTICE Virtual Machine and container images portal. The developed approach is particularly useful when building, deploying and orchestrating IoT applications across multiple computing tiers, from Edge-Cloudlet to Fog-Cloud data centres.
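    The core of the Pareto-based narrowing described above is dominance filtering: an instance configuration is kept only if no other configuration is at least as good on every criterion and strictly better on one. The sketch below illustrates this idea in plain Python; the instance tuples and criteria (response time, hourly cost, energy draw) are hypothetical examples, not data from the SWITCH/ENTICE work.

    ```python
    def dominates(a, b):
        # a dominates b if a is no worse on every criterion and strictly
        # better on at least one; all criteria here are "lower is better".
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(instances):
        # Keep only non-dominated instance configurations.
        return [a for a in instances
                if not any(dominates(b, a) for b in instances if b is not a)]

    # Hypothetical configurations: (response time ms, cost $/h, energy W)
    instances = [
        (120, 0.10, 50),
        (100, 0.12, 55),
        (150, 0.08, 40),
        (130, 0.15, 60),   # dominated by (120, 0.10, 50), so filtered out
    ]
    front = pareto_front(instances)
    ```

    The deployment decision then only needs to consider the configurations in `front`, since every discarded option is worse on all criteria than some retained one.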

    Quality of Service Assurance for Internet of Things Time-Critical Cloud Applications: Experience with the Switch and Entice Projects


    Challenges Emerging from Future Cloud Application Scenarios

    The cloud computing paradigm encompasses several key differentiating elements and technologies, tackling a number of inefficiencies, limitations and problems identified in the distributed and virtualized computing domain. Nonetheless, as is the case for all emerging technologies, its adoption has introduced new challenges and complexities. In this paper we present key application areas and capabilities of future scenarios that are not tackled by current advancements, and we highlight specific requirements and goals for progress in the cloud computing domain. We discuss these requirements and goals across different focus areas of cloud computing, ranging from cloud service and application integration, development environments and abstractions, to interoperability and related aspects such as legislation. The future application areas and their requirements are also mapped to the aforementioned focus areas in order to highlight their dependencies and their potential for moving cloud technologies forward and contributing towards wider adoption.

    Time critical requirements and technical considerations for advanced support environments for data-intensive research

    Data-centric approaches play an increasing role in many scientific domains, but in turn rely increasingly heavily on advanced research support environments for coordinating research activities, providing access to research data, and choreographing complex experiments. Critical time constraints arise in several application scenarios, e.g. event detection for disaster early warning, runtime execution steering, and failure recovery. However, supporting the execution of such time-critical research applications remains a challenge in many current research support environments. In this paper, we analyse time-critical requirements in three key kinds of research support environment (Virtual Research Environments, Research Infrastructures, and e-Infrastructures) and review the current state of the art. We also discuss an approach to dynamic infrastructure planning that may help address some of these requirements. The work is based on requirements collection recently performed in three EU H2020 projects: SWITCH, ENVRIPLUS and VRE4EIC.

    Cloud technology options towards Free Flow of Data

    This whitepaper collects the technology solutions that the projects in the Data Protection, Security and Privacy Cluster propose to address the challenges raised by the working areas of the Free Flow of Data initiative. The document describes the technologies, methodologies, models and tools researched and developed by the clustered projects, mapped to the ten areas of work of the Free Flow of Data initiative. The aim is to facilitate identification of the state of the art in technology options for solving the data security and privacy challenges posed by the Free Flow of Data initiative in Europe. The document also provides references to the Cluster, the individual projects and the technologies they have produced.

    A deep learning classifier for sentence classification in biomedical and computer science abstracts

    The automatic classification of abstract sentences into their main elements (background, objectives, methods, results, conclusions) is a key tool to support scientific database querying, to summarize relevant literature and to assist in the writing of new abstracts. In this paper, we propose a novel deep learning approach based on a convolutional layer and a bidirectional gated recurrent unit to classify sentences of abstracts. First, the proposed neural network was tested on a publicly available repository containing 20 thousand abstracts from the biomedical domain. Competitive results were achieved, with weight-averaged Precision, Recall and F1-score values around 91%, and an area under the ROC curve (AUC) of 99%, higher than those of a state-of-the-art neural network. Then, a crowdsourcing approach using gamification was adopted to create a new comprehensive set of 4111 classified sentences from the computer science domain, focused on social media abstracts. The results of applying the same deep learning modeling technique, trained with 3287 (80%) of the available sentences, were below those obtained for the larger biomedical dataset, with weight-averaged Precision, Recall and F1-score values between 73% and 76%, and an AUC of 91%. Considering the dataset dimension a likely factor in this performance decrease, a data augmentation approach was further applied, using text mining to translate sentences of the computer science abstract corpus while retaining the same meaning. This approach resulted in slight improvements (around 2 percentage points) in the weight-averaged Recall and F1-score values. This work was supported by Fundação para a Ciência e Tecnologia (FCT) within the Project Scope: UID/CEC/00319/2019.
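    The abstract reports weight-averaged Precision, Recall and F1-score, i.e. per-class metrics averaged with each class's support (its share of the true labels) as the weight. The minimal sketch below computes these averages in plain Python; the label names and the tiny example predictions are illustrative only, not data from the paper.

    ```python
    def weighted_prf(y_true, y_pred, labels):
        # Weight-averaged precision, recall and F1: per-class scores
        # weighted by class support (fraction of true labels in that class).
        n = len(y_true)
        p_avg = r_avg = f_avg = 0.0
        for c in labels:
            tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
            fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
            fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
            support = tp + fn
            prec = tp / (tp + fp) if tp + fp else 0.0
            rec = tp / support if support else 0.0
            f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
            w = support / n
            p_avg += w * prec
            r_avg += w * rec
            f_avg += w * f1
        return p_avg, r_avg, f_avg

    # Hypothetical sentence labels for four abstract sentences
    labels = ["background", "methods", "results"]
    y_true = ["background", "methods", "methods", "results"]
    y_pred = ["background", "methods", "results", "results"]
    p, r, f = weighted_prf(y_true, y_pred, labels)
    ```

    Weighting by support means frequent sentence classes (e.g. methods or results sentences) influence the averaged score more than rare ones, which is why it is a common choice for imbalanced sentence-classification datasets like those described here.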