7 research outputs found

    Using constrained devices for custom code execution in the Internet of Things (Utilizando dispositivos limitados para execução de código personalizado na Internet das coisas)

    Advisor: Edson Borin. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação.
    Current prospects for the Internet of Things (IoT) indicate that, within the next few years, a global network of objects will connect tens of billions of devices through the Internet. Most of them will contain sensors that constantly generate data streams, and it is common for users to need to process these data before storing the final values. However, with the ever-growing scale of the IoT, this strategy must be re-evaluated, as transmitting an immense volume of information through the network will be too taxing. One way to meet these coming requirements is to bring the computation to where the information already is. With this premise emerged a paradigm called fog computing, which proposes to process data close to the devices at the edge of the network. Another common characteristic among the large number of IoT devices will be that their resources (e.g., power, memory, and processing) will be limited. The USA's National Institute of Standards and Technology calls attention to this type of device within the fog computing context, naming them mist nodes, as they are lightweight fog computing nodes. Nevertheless, it characterizes these nodes as more specialized, dedicated, and often sharing the same locality with the smart end-devices they service. In our work, we propose a different approach, where these constrained devices are integrated with sensors and capable of general-purpose code execution, and can therefore run custom code sent by users. The analysis of this proposal is divided into two parts. First, we developed a platform called LibMiletusCOISA (LMC), which allows users to execute their code on constrained devices, and compared it to the Apache Edgent framework. LMC outperformed Edgent when both were executed on the same device, and it made it possible to execute code on constrained devices that were too limited to run the other tool. Then, we created two models in which the user chooses a certain cost metric (e.g., number of instructions, execution time, or energy consumption) and employs it to decide where their code should be executed. One of them is a mathematical model that uses a linear equation to help the user break down their problem into a set of variables and then determine the costs of performing their computation using the cloud and the fog. The other is a visual model that allows the user to easily conclude which approach is the most profitable in a specific scenario. We employed these models to identify the situations where executing the code on the device that collected the data is the chosen approach, and also simulated future scenarios where changes in technology can impact the user's decision.
    Degree: Doctorate in Computer Science. Funding: CAPES; CNPq (grant 140653/2017-1).
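    A minimal sketch of such a linear cost comparison is shown below. It is not the thesis' actual equations: the variable names (instr_per_sample, filtered_fraction, cost_per_byte_sent, and so on) and the specific breakdown into a compute term and a transmission term are illustrative assumptions.

    # Hedged sketch of a linear cost comparison between processing a stream on the
    # constrained device (fog) and shipping the raw data to the cloud. The variable
    # names and the compute/transmission breakdown are illustrative assumptions.

    def fog_cost(n_samples, instr_per_sample, cost_per_instr,
                 filtered_fraction, bytes_per_result, cost_per_byte_sent):
        """Cost of filtering locally and transmitting only the surviving results."""
        compute = n_samples * instr_per_sample * cost_per_instr
        transmit = n_samples * filtered_fraction * bytes_per_result * cost_per_byte_sent
        return compute + transmit

    def cloud_cost(n_samples, bytes_per_sample, cost_per_byte_sent):
        """Cost of transmitting every raw sample so the cloud does the processing."""
        return n_samples * bytes_per_sample * cost_per_byte_sent

    # Example with energy (joules) as the cost metric and a filter keeping 5% of readings.
    fog = fog_cost(10_000, instr_per_sample=200, cost_per_instr=1e-8,
                   filtered_fraction=0.05, bytes_per_result=16, cost_per_byte_sent=1e-6)
    cloud = cloud_cost(10_000, bytes_per_sample=16, cost_per_byte_sent=1e-6)
    print("stay in the fog" if fog < cloud else "go to the cloud")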

    Fog vs. Cloud Computing: Should I Stay or Should I Go?

    In this article, we work toward the answer to the question "is it worth processing a data stream on the device that collected it or should we send it somewhere else?". As is often the case in computer science, the response is "it depends". To find out the cases where it is more profitable to stay in the device (which is part of the fog) or to go to a different one (for example, a device in the cloud), we propose two models that intend to help the user evaluate the cost of performing a certain computation in the fog or sending all the data to be handled by the cloud. In our generic mathematical model, the user can define a cost type (e.g., number of instructions, execution time, energy consumption) and plug in values to analyze test cases. As filters have a very important role in the future of the Internet of Things and can be implemented as lightweight programs capable of running on resource-constrained devices, this kind of procedure is the main focus of our study. Furthermore, our visual model guides the user in their decision by aiding the visualization of the proposed linear equations and their slope, which lets them determine whether fog or cloud computing is more profitable for their specific scenario. We validated our models by analyzing four benchmark instances (two applications using two different sets of parameters each) executed on five datasets, using execution time and energy consumption as the cost types for this investigation.
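    Because both options are modeled as linear equations, the decision boils down to comparing two lines and locating their intersection. The sketch below illustrates that reading of the visual model; the break_even function and the example coefficients are our own placeholders, not the article's benchmark measurements.

    # Hedged sketch: reading the visual model as two cost lines and finding where
    # they cross. The coefficients below are made-up placeholders, not the
    # article's measured benchmark values.

    def break_even(fixed_fog, slope_fog, fixed_cloud, slope_cloud):
        """Input size at which the two cost lines intersect.

        cost_fog(n)   = fixed_fog   + slope_fog   * n
        cost_cloud(n) = fixed_cloud + slope_cloud * n
        """
        if slope_fog == slope_cloud:
            return None  # parallel lines: one option dominates for every n
        return (fixed_cloud - fixed_fog) / (slope_fog - slope_cloud)

    # Illustrative scenario: the fog pays a one-time cost to deploy the filter but
    # a lower per-item cost, because only filtered results leave the device.
    n_star = break_even(fixed_fog=50.0, slope_fog=0.5, fixed_cloud=0.0, slope_cloud=0.8)
    print(f"the fog becomes the cheaper option after ~{n_star:.0f} items")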

    PY-PITS: a scalable Python runtime system for the computation of partially idempotent tasks

    The popularization of multi-core architectures and cloud services has allowed users access to high-performance computing infrastructures. However, programming for these systems might be cumbersome due to challenges involving system failures, load balancing, and task scheduling. Aiming at solving these problems, we previously introduced SPITS, a programming model and reference architecture for executing bag-of-tasks applications. In this work, we discuss how this programming model allowed us to design and implement PY-PITS, a simple and effective open-source runtime system that is scalable, tolerates faults, and allows dynamic provisioning of resources during the computation of tasks. We also discuss how PY-PITS can be used to improve the utilization of multi-user computational clusters equipped with job-submission queues, and propose a performance model to help users understand when the performance of PY-PITS scales with the number of workers.
    Published in: International Symposium on Computer Architecture and High Performance Computing Workshops (SBAC-PADW), October 26-28, 2016, Los Angeles, CA. Funding: CAPES, FAPESP, CNPq.
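    As a rough illustration of the SPITS-style split between a job manager that produces tasks, workers that execute them, and a committer that records each result exactly once, the following sketch uses only the Python standard library. It is a conceptual sketch under those assumptions, not PY-PITS' actual API; the names job_manager, worker, and committer are ours.

    # Conceptual sketch of the SPITS split between a job manager that generates
    # tasks, workers that execute them, and a committer that records each result
    # exactly once. Standard library only; this is not PY-PITS' real API.
    from multiprocessing import Pool

    def job_manager(n_tasks):
        """Generates a bag of independent tasks (here, just integers to square)."""
        for task_id in range(n_tasks):
            yield task_id

    def worker(task_id):
        """Executes one task; safe to re-run if a node fails mid-computation."""
        return task_id, task_id * task_id

    def committer(results):
        """Commits each task's result exactly once, even if the task ran twice."""
        committed = {}
        for task_id, value in results:
            committed.setdefault(task_id, value)
        return committed

    if __name__ == "__main__":
        with Pool(processes=4) as pool:
            results = pool.map(worker, job_manager(100))
        print(sum(committer(results).values()))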

    A unified model for accelerating unsupervised iterative re-ranking algorithms

    Despite the continuous advances in image retrieval technologies, performing effective and efficient content-based searches remains a challenging task. Unsupervised iterative re-ranking algorithms have emerged as a promising solution and have been widely used to improve the effectiveness of multimedia retrieval systems. Although substantially more efficient than related approaches based on diffusion processes, these re-ranking algorithms can still be computationally costly, demanding the specification and implementation of efficient big multimedia analysis approaches. Such demand, associated with the significant potential for parallelization and the highly effective results achieved by recently proposed re-ranking algorithms, creates the need to exploit efficiency vs. effectiveness trade-offs. In this article, we introduce a class of unsupervised iterative re-ranking algorithms and present a model that can be used to guide their implementation and optimization for parallel architectures. We also analyze the impact of the parallelization on the performance of four algorithms that belong to the proposed class: Contextual Spaces, RL-Sim, Contextual Re-ranking, and Cartesian Product of Ranking References. The experiments show speedups that reach up to 6.0×, 16.1×, 3.3×, and 7.1× for each algorithm, respectively. These results demonstrate that the proposed parallel programming model can be successfully applied to various algorithms and used to improve the performance of multimedia retrieval systems. The authors thank AMD, FAEPEX, CAPES (grant #88881.145912/2017-01), FAPESP (grants #2018/15597-6, #2017/25908-6, #2014/12236-1, #2015/24494-8, #2016/50250-1, #2017/20945-0, and #2019/19312-9), the FAPESP-Microsoft Virtual Institute (grants #2013/50155-0, #2013/50169-1, and #2014/50715-9), and CNPq (grants #307560/2016-3, #484254/2012-0, #308194/2017-9, and #140653/2017-1) for the financial support.
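    Algorithms in this class share a common skeleton: every iteration recomputes each item's ranked list from the current ranked lists in its neighborhood, and this per-item step is where the parallelism lies. The sketch below shows that generic skeleton with a deliberately simplified neighborhood-overlap update; it is not an implementation of Contextual Spaces, RL-Sim, Contextual Re-ranking, or CPRR, nor of the article's parallel programming model.

    # Hedged sketch of a generic unsupervised iterative re-ranking loop. The
    # neighborhood-overlap update is a simplified stand-in, not the update rule of
    # any of the four algorithms analyzed in the article.
    from concurrent.futures import ProcessPoolExecutor

    def rerank_item(args):
        """Re-scores one item by how much its top-k list overlaps the other lists."""
        item, ranked_lists, k = args
        neighbors = set(ranked_lists[item][:k])
        scores = {other: len(neighbors & set(ranked_lists[other][:k]))
                  for other in range(len(ranked_lists))}
        return sorted(scores, key=scores.get, reverse=True)

    def iterative_rerank(ranked_lists, k=5, iterations=3, workers=4):
        """Runs the embarrassingly parallel per-item step for a few iterations."""
        for _ in range(iterations):
            args = [(i, ranked_lists, k) for i in range(len(ranked_lists))]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                ranked_lists = list(pool.map(rerank_item, args))
        return ranked_lists

    if __name__ == "__main__":
        # Toy collection of 6 items, each starting from an arbitrary ranked list.
        initial = [[(i + j) % 6 for j in range(6)] for i in range(6)]
        print(iterative_rerank(initial, k=3, iterations=2, workers=2))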

    Evaluation of a quality improvement intervention to reduce anastomotic leak following right colectomy (EAGLE): pragmatic, batched stepped-wedge, cluster-randomized trial in 64 countries

    Background: Anastomotic leak affects 8 per cent of patients after right colectomy, with a 10-fold increased risk of postoperative death. The EAGLE study aimed to develop and test whether an international, standardized quality improvement intervention could reduce anastomotic leaks.
    Methods: The internationally intended protocol, iteratively co-developed by a multistage Delphi process, comprised an online educational module introducing risk stratification, an intraoperative checklist, and harmonized surgical techniques. Clusters (hospital teams) were randomized to one of three arms with varied sequences of intervention/data collection by a derived stepped-wedge batch design (at least 18 hospital teams per batch). Patients were blinded to the study allocation. Low- and middle-income country enrolment was encouraged. The primary outcome (assessed by intention to treat) was anastomotic leak rate, and subgroup analyses by module completion (at least 80 per cent of surgeons, high engagement; less than 50 per cent, low engagement) were preplanned.
    Results: A total of 355 hospital teams registered, with 332 from 64 countries (39.2 per cent low and middle income) included in the final analysis. The online modules were completed by half of the surgeons (2143 of 4411). The primary analysis included 3039 of the 3268 patients recruited (206 patients had no anastomosis and 23 were lost to follow-up), with anastomotic leaks arising before and after the intervention in 10.1 and 9.6 per cent respectively (adjusted OR 0.87, 95 per cent c.i. 0.59 to 1.30; P = 0.498). The proportion of surgeons completing the educational modules was an influence: the leak rate decreased from 12.2 per cent (61 of 500) before intervention to 5.1 per cent (24 of 473) after intervention in high-engagement centres (adjusted OR 0.36, 0.20 to 0.64; P < 0.001), but this was not observed in low-engagement hospitals (8.3 per cent (59 of 714) and 13.8 per cent (61 of 443) respectively; adjusted OR 2.09, 1.31 to 3.31).
    Conclusion: Completion of globally available digital training by engaged teams can alter anastomotic leak rates. Registration number: NCT04270721 (http://www.clinicaltrials.gov).