
    An eco-friendly hybrid urban computing network combining community-based wireless LAN access and wireless sensor networking

    Computer-enhanced smart environments, distributed environmental monitoring, wireless communication, energy conservation and sustainable technologies, ubiquitous access to Internet-located data and services, user mobility, and innovation as a tool for service differentiation are all significant contemporary research subjects and societal developments. This position paper presents the design of a hybrid municipal network infrastructure that, to a greater or lesser degree, incorporates aspects of each of these topics by integrating a community-based Wi-Fi access network with Wireless Sensor Network (WSN) functionality. The former component provides free wireless Internet connectivity by harvesting the Internet subscriptions of city inhabitants. To minimize session interruptions for mobile clients, this subsystem incorporates technology that achieves (near-)seamless handover between Wi-Fi access points. The WSN component, on the other hand, makes it feasible to sense physical properties and to realize the Internet of Things (IoT) paradigm. This in turn scaffolds the development of value-added end-user applications that are consumable through the community-powered access network. The WSN subsystem invests substantially in ecological considerations by means of a green distributed reasoning framework and sensor middleware that collaboratively aim to minimize the network's global energy consumption. Via the discussion of two illustrative applications that are currently being developed as part of a concrete smart city deployment, we offer a taste of the myriad innovative digital services, across an extensive spectrum of application domains, that the proposed platform unlocks.

    FLINT: A Platform for Federated Learning Integration

    Cross-device federated learning (FL) has been well studied from algorithmic, system scalability, and training speed perspectives. Nonetheless, moving from centralized training to cross-device FL for millions or billions of devices presents many risks, including performance loss, developer inertia, poor user experience, and unexpected application failures. In addition, the corresponding infrastructure, development costs, and return on investment are difficult to estimate. In this paper, we present a device-cloud collaborative FL platform that integrates with an existing machine learning platform, providing tools to measure real-world constraints, assess infrastructure capabilities, evaluate model training performance, and estimate system resource requirements to responsibly bring FL into production. We also present a decision workflow that leverages the FL-integrated platform to comprehensively evaluate the trade-offs of cross-device FL and share our empirical evaluations of business-critical machine learning applications that impact hundreds of millions of users.
    Comment: Preprint for MLSys 202
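
As background for the cross-device setting the paper targets, the core aggregation step of FL can be sketched as federated averaging (FedAvg). The function below is a generic, pure-Python illustration of that step, not part of FLINT's actual API:

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg: weighted average of per-device parameter vectors,
    weighted by each device's local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two devices; the one holding 3x the data pulls the global model toward it.
w = fed_avg([[0.0, 0.0], [1.0, 1.0]], [1, 3])
# → [0.75, 0.75]
```

The size weighting is what makes the global model track the data distribution rather than the device count, which is one reason measuring real-world device constraints matters before production rollout.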

    The Water Footprint of Data Centers

    The internet and associated Information and Communications Technologies (ICT) are diffusing at an astounding pace. As data centers (DCs) proliferate to accommodate this rising demand, their environmental impacts grow too. While the energy efficiency of DCs has been researched extensively, their water footprint (WF) has so far received little to no attention. This article conducts a preliminary WF accounting for cooling and energy consumption in DCs. The WF of DCs is estimated to be between 1,047 and 151,061 m³/TJ. Outbound DC data traffic generates a WF of 1–205 liters per gigabyte (at the higher end, roughly equal to the WF of 1 kg of tomatoes). It is found that energy consumption typically constitutes by far the greatest share of DC WF, but the level of uncertainty associated with the WF of the different energy sources used by DCs makes a comprehensive assessment of DCs’ water-use efficiency very challenging. A much better understanding of DC WF is urgently needed if a meaningful evaluation of this rapidly spreading service technology is to be gleaned and response measures are to be put into effect.
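
The unit conversion behind an estimate like "1–205 liters per gigabyte" is straightforward once an energy intensity for traffic is fixed. The sketch below uses assumed inputs (0.06 kWh/GB of traffic energy and a mid-range footprint of 10,000 m³/TJ), not figures taken from the article:

```python
KWH_TO_TJ = 3.6e-6   # 1 kWh = 3.6 MJ = 3.6e-6 TJ
M3_TO_L = 1000.0     # cubic meters to liters

def water_per_gb(energy_kwh_per_gb, wf_m3_per_tj):
    """Liters of water embodied in moving one gigabyte, given the energy
    intensity of traffic and the water footprint of the energy mix."""
    return energy_kwh_per_gb * KWH_TO_TJ * wf_m3_per_tj * M3_TO_L

# Assumed inputs: 0.06 kWh/GB and 10,000 m3/TJ gives about 2.16 L/GB,
# comfortably inside the 1-205 L/GB range reported above.
print(round(water_per_gb(0.06, 10_000), 2))  # → 2.16
```

The wide reported range follows directly from this product: both factors span orders of magnitude across DCs and energy sources.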

    EnergAt: Fine-Grained Energy Attribution for Multi-Tenancy

    In the post-Moore's Law era, relying solely on hardware advancements for automatic performance gains is no longer feasible without increased energy consumption, due to the end of Dennard scaling. Consequently, computing accounts for an increasing amount of global energy usage, contradicting the objective of sustainable computing. The lack of hardware support and the absence of a standardized, software-centric method for the precise tracing of energy provenance exacerbate the issue. Aiming to overcome this challenge, we argue that fine-grained software energy attribution is attainable, even with limited hardware support. To support our position, we present a thread-level, NUMA-aware energy attribution method for CPU and DRAM in multi-tenant environments. The evaluation of our prototype implementation, EnergAt, demonstrates the validity, effectiveness, and robustness of our theoretical model, even in the presence of the noisy-neighbor effect. We envisage a sustainable cloud environment and emphasize the importance of collective efforts to improve software energy efficiency.
    Comment: 8 pages, 4 figures; Published in HotCarbon 2023; Artifact available at https://github.com/HongyuHe/energa
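
A deliberately simplified version of the attribution idea splits a package-level energy sample across threads in proportion to the CPU time each consumed in the sampling interval. EnergAt's actual model is NUMA-aware and considerably more involved, so treat this only as a sketch of the proportionality principle:

```python
def attribute_energy(package_joules, thread_cpu_seconds):
    """Split one package-level energy sample (e.g. from a RAPL counter)
    across threads in proportion to the CPU time each consumed."""
    total = sum(thread_cpu_seconds.values())
    if total == 0:
        return {t: 0.0 for t in thread_cpu_seconds}
    return {t: package_joules * s / total
            for t, s in thread_cpu_seconds.items()}

# A 120 J sample; tenant A's thread ran 1.5 s of CPU time, tenant B's 0.5 s.
shares = attribute_energy(120.0, {"tenant_a": 1.5, "tenant_b": 0.5})
# → {"tenant_a": 90.0, "tenant_b": 30.0}
```

The hard part this glosses over, and the paper addresses, is that CPU time alone is a poor proxy when threads differ in memory locality and frequency, which is exactly where NUMA awareness comes in.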

    Comparative Analysis of Fullstack Development Technologies: Frontend, Backend and Database

    Accessing websites from a variety of devices has brought changes to the field of application development, and the choice of cross-platform, reusable frameworks is crucial in this era. This thesis evaluates front-end, back-end, and database technologies to address the status quo. Study A explores front-end development, focusing on Angular.js and React.js. Comparative web applications were created with these frameworks and evaluated locally, and important insights were obtained through benchmark tests, Lighthouse metrics, and architectural evaluations. React.js proves to be the performance leader, in spite of the possible influence of a virtual machine, opening the door for additional research. Study B delves into backend scripting by contrasting Node.js with PHP. The efficiency of sorting algorithms (binary, bubble, quick, and heap) is the main subject of this part; the performance measurement tool is Apache JMeter, and the most important indicator is latency. Study C sheds light on database systems by comparing the performance of NoSQL and SQL, with a particular emphasis on MongoDB for NoSQL. In a time of enormous data volumes, reliable technologies are necessary for data management. The five basic database operations that Apache JMeter examines are insert, select, update, delete, and aggregate, with elapsed time as the performance indicator. The results showed that the elapsed time of insert operations was significantly lower in NoSQL than in SQL, and the p-value for each operation was less than 0.05, indicating that the performance differences are statistically significant. The elapsed times of the update, delete, select, and aggregate operations were likewise lower in NoSQL than in SQL, suggesting that NoSQL outperforms SQL across these operations.
    These research studies are combined in this thesis to provide a comprehensive understanding of database management, backend programming, and development frameworks. The results provide developers and organisations with the information they need to make wise decisions in this constantly changing environment and to satisfy the expectations of a dynamic and diverse technology landscape. INDEX WORDS: Framework, JavaScript, Frontend, React.js, Angular.js, Node.js, PHP, Backend, Technology, Algorithms, Performance, Apache JMeter, T-test, SQL, NoSQL, Database management systems, Performance comparison, Data operations, Decision-making
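
The statistical comparison described for Study C, a two-sample test on elapsed times, can be sketched with Welch's t statistic; the timing values below are invented purely for illustration:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances), as used to
    compare elapsed times of the same operation on two database systems."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

# Hypothetical elapsed times (ms) for INSERT on the two systems.
nosql = [12, 11, 13, 12, 11]
sql = [20, 22, 19, 21, 23]
t = welch_t(nosql, sql)
# t ≈ -11.5: a large-magnitude negative statistic, i.e. the NoSQL mean
# is far below the SQL mean relative to the sampling noise.
```

In practice one would convert t (with Welch's degrees of freedom) to a p-value and compare against 0.05, which is the threshold the thesis applies.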

    Topics in Power Usage in Network Services

    The rapid advance of computing technology has created a world powered by millions of computers. These computers often sit idle, consuming energy unnecessarily in spite of all the efforts of hardware manufacturers. This thesis examines proposals for determining when to power down computers without negatively impacting the service they deliver, compares and contrasts the efficiency of virtualisation with containerisation, and investigates the energy efficiency of the popular cryptocurrency Bitcoin. We begin by examining the current corpus of literature and defining the key terms we need to proceed. We then propose a technique for improving the energy consumption of servers by moving them into a sleep state and employing a low-powered device to act as a proxy in their place. After this we investigate the energy efficiency of virtualisation and compare the energy efficiency of two of the most common means of achieving it. We then look at the cryptocurrency Bitcoin, considering the energy consumption of Bitcoin mining and whether, weighed against the value of Bitcoin, mining is profitable. Finally, we conclude by summarising the results and findings of this thesis. This work increases our understanding of some of the challenges of energy-efficient computation and proposes novel mechanisms to save energy.
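
The sleep-and-proxy technique depends on the low-powered proxy being able to wake the sleeping server on demand, which is commonly done with a Wake-on-LAN magic packet (6 bytes of 0xFF followed by the target MAC repeated 16 times). A minimal sketch of that wake-up path, with a hypothetical MAC address:

```python
import socket

def magic_packet(mac):
    """Build a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by
    the target MAC address repeated 16 times (102 bytes in total)."""
    return b"\xff" * 6 + bytes.fromhex(mac.replace(":", "")) * 16

def wake(mac, broadcast="255.255.255.255", port=9):
    # The proxy sends this when real work arrives for the sleeping server.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

The proxy itself would answer keep-alive traffic (ARP, TCP SYNs) on the server's behalf and only call `wake()` for requests that require the full machine.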

    Constructing Cost-Effective and Targetable ICS Honeypots Suited for Production Networks

    Honeypots are a technique that can mitigate the risk of cyber threats. Effective honeypots are authentic and targetable, and their design and implementation must accommodate risk tolerance and financial constraints. The proprietary, and often expensive, hardware and software used by Industrial Control System (ICS) devices creates the challenging problem of building a flexible, economical, and scalable honeypot. This research extends Honeyd into Honeyd+, making it possible to use the proxy feature to create multiple high-interaction honeypots backed by a single Programmable Logic Controller (PLC). Honeyd+ is tested with a network of 75 decoy PLCs, and interactions with the decoys are compared against a physical PLC to test for authenticity. The performance test evaluates the impact of multiple simultaneous connections to the PLC. The functional test succeeds in all cases, and the performance test demonstrates that the PLC itself is the limiting factor, with Honeyd+ introducing only a marginal overhead. Notable findings are that the Raspberry Pi is the preferred hosting platform and that more than five simultaneous connections were not optimal.

    Development and evaluation of a reference measurement model for assessing the resource and energy efficiency of software products and components—Green Software Measurement Model (GSMM)

    In the past decade, research on measuring and assessing the environmental impact of software has gained significant momentum in science and industry. However, due to the large number of research groups, measurement setups, procedure models, and tools, and the general novelty of the research area, a comprehensive research framework has yet to be created. The literature documents several approaches from researchers and practitioners who have developed individual methods and models, along with more general ideas such as the integration of software sustainability into the context of the UN Sustainable Development Goals, or science communication approaches that make the resource cost of software transparent to society. However, a reference measurement model for the energy and resource consumption of software is still missing. In this article, we jointly develop the Green Software Measurement Model (GSMM), in which we bring together the core ideas of the measurement models, setups, and methods of over 10 research groups in four countries that have done pioneering work in assessing the environmental impact of software. We briefly describe the different methods and models used by these research groups, derive the components of the GSMM from them, and then discuss and evaluate the resulting reference model. By categorizing the existing measurement models and procedures and by providing guidelines for assimilating and tailoring existing methods, we expect this work to aid new researchers and practitioners who want to conduct measurements for their individual use cases.
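
A measurement setup of the kind the GSMM categorizes can be approximated as sampling a power meter while the software under test runs, then integrating the samples. The sketch below assumes an external callable that returns instantaneous power draw; it illustrates the measurement principle only, not any group's actual tooling:

```python
import threading
import time

def measure_energy(workload, read_power_watts, interval=0.01):
    """Estimate a workload's energy (joules) by sampling a power meter
    in a background thread while the workload runs, then integrating:
    E ≈ mean(P) * duration. `read_power_watts` is any callable returning
    the current draw in watts (e.g. a smart-plug or OS counter wrapper)."""
    samples, done = [read_power_watts()], threading.Event()

    def sampler():
        while not done.is_set():
            samples.append(read_power_watts())
            time.sleep(interval)

    t = threading.Thread(target=sampler)
    start = time.monotonic()
    t.start()
    workload()              # the software under test
    duration = time.monotonic() - start
    done.set()
    t.join()
    return sum(samples) / len(samples) * duration
```

With a meter reporting a constant 50 W and a workload lasting 0.1 s, this yields roughly 5 J. Real setups must additionally subtract idle baseline power and repeat runs, which is exactly the kind of procedural detail a reference model standardizes.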