632 research outputs found

    FLOWViZ: framework for phylogenetic processing

    Final project to obtain the Master's Degree in Computer Science and Engineering. The increasing risk of epidemics and a fast-growing world population have contributed to great investment in phylogenetic analysis, in order to track numerous diseases and to conceive effective medication and treatments. Phylogenetic analysis requires large quantities of information to be analyzed and processed for knowledge extraction, using adequate techniques and, nowadays, specialized software and algorithms, to deliver results as efficiently and as fast as possible. These algorithms and techniques are already provided by a large set of freely available frameworks and tools, such as the phylogenetic inference framework PHYLOViZ[23]. Most of the techniques and algorithms used for phylogenetic inference tend to form work pipelines: procedures composed of steps with intrinsic data dependencies between them. Although it is possible to execute work pipelines manually, as has been done for decades, this is no longer feasible, since genomic datasets are very large and their manual analysis is time-consuming and error-prone. The manual transition between steps also requires human interaction so that each step receives the matching data, which can introduce human error. For this reason, software was built to reduce manual interaction and automate these procedures. This type of software is typically referred to as a workflow system: software that allows users to create workflows on top of a provided Domain-Specific Language (DSL)[13], in which procedures are translated into scripts by defining a group of steps together with their specific parameters and data dependencies. Many workflow system solutions are already available, differing in their DSL and workflow structuring, which leads to great software heterogeneity but also to low workflow shareability, since users work on different workflow systems. Thus, when users share workflows, time must be spent converting and adapting them to the specific workflow system that will execute the shared pipeline, making workflow sharing a difficult task. This led to the creation of the Common Workflow Language (CWL)[2], a standard that provides a way to execute workflows and work pipelines across different workflow systems. However, not every system supports this standard. This project aims to build a framework on top of an existing project, PHYLOViZ, which provides a set of state-of-the-art tools for phylogenetic inference. The developed framework links phylogenetic inference web frameworks with workflow systems, giving users the freedom to build their own workflows using the provided web framework's tools or their own remote tools, through a user-friendly web interface. The result is workflow automation, task scheduling, and faster, more effective phylogenetic analysis. The project was supported by funds, in the context of a Fundação para a Ciência e a Tecnologia (FCT) student grant with reference UIDB/50021/2020, by an INESC-ID project, NGPHYLO PTDC/CCI-BIO/29676/2017, and by a Polytechnic Institute of Lisbon project, IPL/2021/DIVA_ISEL.
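
    The abstract above frames phylogenetic analysis as work pipelines: tasks with parameters and data dependencies, executed in order. As a minimal, hypothetical sketch of that idea (not FLOWViZ's interface nor CWL's actual syntax), the Python fragment below models such a pipeline as a dependency graph and runs its steps in topological order; all task names are illustrative.

```python
# Hypothetical sketch of a work pipeline: tasks mapped to the tasks
# they depend on, executed in dependency (topological) order.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

pipeline = {
    "load_sequences": set(),
    "align": {"load_sequences"},
    "compute_distances": {"align"},
    "infer_tree": {"compute_distances"},
    "render_tree": {"infer_tree"},
}

def run_task(name: str) -> None:
    # Placeholder: a real workflow system would invoke the matching
    # tool (e.g., a phylogenetic inference algorithm) with its
    # parameters and pass its outputs downstream.
    print(f"running {name}")

# Every step runs only after all of its dependencies have completed.
for task in TopologicalSorter(pipeline).static_order():
    run_task(task)
```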

    DIANNE: a modular framework for designing, training and deploying deep neural networks on heterogeneous distributed infrastructure

    Deep learning has shown tremendous results on various machine learning tasks, but the nature of the problems being tackled and the size of state-of-the-art deep neural networks often require training and deploying models on distributed infrastructure. DIANNE is a modular framework designed for dynamic (re)distribution of deep learning models and procedures. Besides providing elementary network building blocks as well as various training and evaluation routines, DIANNE focuses on dynamic deployment on heterogeneous distributed infrastructure, abstraction of Internet of Things (IoT) sensors, integration with external systems, and graphical user interfaces to build and deploy networks, while retaining the performance of similar deep learning frameworks. In this paper the DIANNE framework is proposed as an all-in-one solution for deep learning, enabling data and model parallelism through a modular design, offloading to local compute power, and the ability to abstract between simulation and real environments. (C) 2018 Elsevier Inc. All rights reserved.
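
    DIANNE's own API is not reproduced here; as a generic, framework-agnostic illustration of the data parallelism mentioned in the abstract, the sketch below computes gradients on shards of a batch and averages them, which for equally sized shards is equivalent to one full-batch gradient step on a mean loss. All names and values are illustrative.

```python
# Generic data parallelism (not DIANNE's API): each "worker" computes
# the gradient of a mean squared error on its shard of the batch; the
# averaged shard gradients equal the full-batch gradient.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(4)                               # linear model parameters
X = rng.normal(size=(32, 4))                  # one training batch
y = X @ np.array([1.0, -2.0, 0.5, 3.0])       # synthetic targets

def shard_gradient(w, Xs, ys):
    # Gradient of mean((Xs @ w - ys)**2) with respect to w.
    return 2.0 * Xs.T @ (Xs @ w - ys) / len(ys)

n_workers = 4                                 # one shard per worker
shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
# In a distributed deployment each term runs on a different device;
# the mean below stands in for the all-reduce/averaging step.
g = np.mean([shard_gradient(w, Xs, ys) for Xs, ys in shards], axis=0)
w -= 0.1 * g                                  # one SGD update
```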

    Regional Climate Model Evaluation System powered by Apache Open Climate Workbench v1.3.0: an enabling tool for facilitating regional climate studies

    The Regional Climate Model Evaluation System (RCMES) is an enabling tool of the National Aeronautics and Space Administration to support the United States National Climate Assessment. As a comprehensive system for evaluating climate models on regional and continental scales using observational datasets from a variety of sources, RCMES is designed to yield information on the performance of climate models and guide their improvement. Here, we present a user-oriented document describing the latest version of RCMES, its development process, and future plans for improvements. The main objective of RCMES is to facilitate the climate model evaluation process at regional scales. RCMES provides a framework for performing systematic evaluations of climate simulations, such as those from the Coordinated Regional Climate Downscaling Experiment (CORDEX), using in situ observations, as well as satellite and reanalysis data products. The main components of RCMES are (1) a database of observations widely used for climate model evaluation, (2) various data loaders to import climate models and observations on local file systems and Earth System Grid Federation (ESGF) nodes, (3) a versatile processor to subset and regrid the loaded datasets, (4) performance metrics designed to assess and quantify model skill, (5) plotting routines to visualize the performance metrics, (6) a toolkit for statistically downscaling climate model simulations, and (7) two installation packages to maximize convenience of users without Python skills. The RCMES website is maintained up to date with a brief explanation of these components. Although there are other open-source software (OSS) toolkits that facilitate analysis and evaluation of climate models, there is a need for climate scientists to participate in the development and customization of OSS to study regional climate change. To establish infrastructure and to ensure software sustainability, development of RCMES is an open, publicly accessible process enabled by leveraging the Apache Software Foundation's OSS library, Apache Open Climate Workbench (OCW). The OCW software that powers RCMES includes a Python OSS library for common climate model evaluation tasks as well as a set of user-friendly interfaces for quickly configuring a model evaluation task. OCW also allows users to build their own climate data analysis tools, such as the statistical downscaling toolkit provided as a part of RCMES.
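
    As a hedged sketch of how the numbered components above fit together (loader, processor, metric, plotter), the fragment below follows the structure of OCW's published example scripts; module and function names, file paths, and the indexing of the results are assumptions to be checked against the OCW version in use.

```python
# Sketch of an OCW-style model evaluation; names follow OCW's example
# scripts and should be verified against the installed version.
import numpy as np
import ocw.data_source.local as local   # (2) data loaders
import ocw.dataset_processor as dsp     # (3) subset/regrid processor
import ocw.metrics as metrics           # (4) performance metrics
import ocw.evaluation as evaluation
import ocw.plotter as plotter           # (5) plotting routines

# Load a reference observation and a model simulation for the same
# variable (file paths are hypothetical).
ref = local.load_file("obs_tasmax.nc", "tasmax")
model = local.load_file("model_tasmax.nc", "tasmax")

# Put both datasets on a common 1-degree grid before comparison.
new_lats = np.arange(30.0, 55.0, 1.0)
new_lons = np.arange(-130.0, -100.0, 1.0)
ref = dsp.spatial_regrid(ref, new_lats, new_lons)
model = dsp.spatial_regrid(model, new_lats, new_lons)

# Quantify model skill with a single metric (bias) and map it.
bias = metrics.Bias()
run = evaluation.Evaluation(ref, [model], [bias])
run.run()
# run.results holds one field per (target, metric) pair; the exact
# indexing depends on the OCW version.
plotter.draw_contour_map(run.results[0][0], new_lats, new_lons, "bias_map")
```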

    Liquid stream processing on the web: a JavaScript framework

    The Web is rapidly becoming a mature platform to host distributed applications. Pervasive computing applications running on the Web are now common in the era of the Web of Things, which has made it increasingly simple to integrate sensors and microcontrollers in our everyday life. Such devices are of great interest to Makers with basic Web development skills. With them, Makers are able to build small smart stream processing applications with sensors and actuators without spending a fortune and without knowing much about the technologies they use. Thanks to ongoing Web technology trends enabling real-time peer-to-peer communication between Web-enabled devices, Web browsers, and server-side JavaScript runtimes, developers are able to implement pervasive Web applications using a single programming language. These can take advantage of direct and continuous communication channels, going beyond what was possible in the early stages of the Web, to push data in real time. Despite these recent advances, building stream processing applications on the Web of Things remains a challenging task. On the one hand, Web-enabled devices of different natures still have to communicate with different protocols. On the other hand, dealing with a dynamic, heterogeneous, and volatile environment like the Web requires developers to face issues like disconnections, unpredictable workload fluctuations, and device overload. To help developers deal with such issues, in this dissertation we present the Web Liquid Streams (WLS) framework, a novel streaming framework for JavaScript. Developers implement streaming operators written in JavaScript and may interactively and dynamically define a streaming topology. The framework takes care of deploying the user-defined operators on the available devices and connecting them using the appropriate data channel, removing the burden of dealing with different deployment environments from the developers. Changes in the semantics of the application and in its execution environment may be applied at runtime without stopping the stream flow. Just as a liquid adapts its shape to that of its container, the Web Liquid Streams framework makes streaming topologies flow across multiple heterogeneous devices, enabling dynamic operator migration without disrupting the data flow. By constantly monitoring the execution of the topology with a hierarchical controller infrastructure, WLS takes care of parallelising the operator execution across multiple devices in case of bottlenecks, and of recovering the execution of the streaming topology in case one or more devices disconnect, by restarting lost operators on other available devices.
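
    WLS itself exposes a JavaScript API, which is not reproduced here; the Python sketch below only illustrates the model the abstract describes: streaming operators connected by explicit data channels into a topology. In WLS the framework, not the developer, deploys each operator to a device and picks the appropriate channel between them; all names here are illustrative.

```python
# Conceptual operator/topology model: producer -> filter -> consumer,
# each operator a thread, connected by queues standing in for the
# data channels a streaming framework would provision.
import queue
import threading

def producer(out_q):
    for reading in range(10):      # stands in for a sensor stream
        out_q.put(reading)
    out_q.put(None)                # end-of-stream marker

def even_filter(in_q, out_q):
    while (item := in_q.get()) is not None:
        if item % 2 == 0:          # forward only some readings
            out_q.put(item)
    out_q.put(None)

def consumer(in_q):
    while (item := in_q.get()) is not None:
        print("got", item)

# Wire the operators into a linear topology and run it.
q1, q2 = queue.Queue(), queue.Queue()
threads = [threading.Thread(target=f, args=a) for f, a in
           [(producer, (q1,)), (even_filter, (q1, q2)), (consumer, (q2,))]]
for t in threads:
    t.start()
for t in threads:
    t.join()
```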

    Construction, Operation and Maintenance of Network System (Junior Level)

    This open access book follows the development rules of network technical talents, simultaneously placing its focus on the transfer of network knowledge, the accumulation of network skills, and the improvement of professionalism. Through the complete process from the elaboration of the theories of network technology to the analysis of application scenarios and then to the design and implementation of case projects, readers are enabled to accumulate project experience and eventually acquire knowledge and cultivate their ability so as to lay a solid foundation for adapting to their future positions. This book comprises six chapters, which include “General Operation Safety of Network System,” “Cabling Project,” “Hardware Installation of Network System,” “Basic Knowledge of Network System,” “Basic Operation of Network System,” and “Basic Operation and Maintenance of Network System.” This book can be used for teaching and training for the vocational skills certification of network system construction, operation, and maintenance in the pilot work of Huawei’s “1+X” Certification System, and it is also suitable as a textbook for application-oriented universities, vocational colleges, and technical colleges. In the meantime, it can also serve as a reference book for technicians engaged in network technology development, network management and maintenance, and network system integration. As the world’s leading ICT (information and communications technology) infrastructure and intelligent terminal provider, Huawei Technologies Co., Ltd. has covered many fields such as data communication, security, wireless, storage, cloud computing, intelligent computing, and artificial intelligence. Taking Huawei network equipment (routers, switches, wireless controllers, and wireless access points) as the platform, and based on network engineering projects, this book organizes all the contents according to the actual needs of the industry.

    Analysis of Module Federation Implementation in a Micro-Frontend Application

    This thesis explores the implementation of Module Federation in a micro-frontend application, a technique for dynamically loading components from various bundles at runtime. An overview of the micro-frontend architecture is presented, emphasizing its role in enhancing development efficiency and scalability. The thesis highlights the integration and potential challenges of Module Federation, focusing on ensuring optimal performance of the federated applications. The insights presented in this research provide a solid framework for developers and organizations aiming to leverage micro-frontends in conjunction with Module Federation to build efficient, scalable, and integrated web applications.

    Automated tools and techniques for distributed Grid Software: Development of the testbed infrastructure

    Grid technology is becoming more and more important as the new paradigm for sharing computational resources across different organizations in a secure way. The great power of this solution requires the definition of a generic stack of services and protocols, and this is the scope of the different Grid initiatives. As a result of international collaborations for its development, the Open Grid Forum created the Open Grid Services Architecture (OGSA), which aims to define the common set of services that will enable interoperability across the different implementations. This master's thesis has been developed in this context, as part of the two European-funded projects ETICS and OMII-Europe. The main objective is to contribute to the design and maintenance of large distributed development projects with automated tools that enable the implementation of Software Engineering techniques aimed at achieving an acceptable level of quality in the release process. Specifically, this thesis develops the testbed concept as a virtual production-like scenario in which to perform compliance tests. As a proof of concept, the OGSA Basic Execution Service has been chosen in order to implement and execute conformance tests within the ETICS automated testbed framework.