
    Opportunities and constraints in the use of simulation on low cost ARM-based computers

    The whole-building simulation community has habitually demanded greater computational resources, and developers and researchers have responded with a myriad of approaches to address this demand. However, much of the work of creating and evolving models takes only a fraction of the available computational power. This paper considers the deployment of simulation at the other extreme of computational cost, on low-cost ARM-based products such as the Raspberry Pi and the BeagleBone Black. It reports on the porting of the ESP-r whole-building simulation suite to ARM, describing the modifications required to the simulation software, the compilation tool-chain, and the computing environment to support compilation and deployment. It discusses the performance implications of model complexity, the adjustments to methodology required, and user reactions. It also describes how research based on tools such as MATLAB can be hosted on these alternative computing platforms.

    Comparing Google's Android and Apple's iOS Mobile Software Development Environments

    Mobile devices have become extremely popular during the past few years. They are used widely in business and everyday life, by the young and the elderly alike. As mobile devices and their operating systems have developed, manufacturers have also made it possible for everyday users to create their own applications using specific Software Development Kits (SDKs). As a result, applications are now created not only by third-party companies but also by anyone interested in the matter. The mobile business has reached a point where a few big companies are responsible for developing the operating systems used by most hardware manufacturers. Of these operating systems, two have grown their market share during the past few years: Apple's iOS and Google's Android. The purpose of this thesis was to compare the two, finding out how easy they are to take into use and to develop and publish applications with. The study was carried out as empirical research on both operating systems and their SDKs. Based on that knowledge, applications were created and published for both systems. The basic outline of the study was installing and working with both SDKs, developing and publishing applications using them, and estimating the costs of installing the development kits in an educational environment. The objectives of this study were achieved as planned: both SDKs were successfully installed, four applications were created altogether, an estimate of costs was made, and overall experience of both systems was gained.

    Enhanced Security Utilizing Side Channel Data Analysis

    The physical state of a system is affected by the activities and processes it carries out. There have been many instances where such physical changes were exploited by bad actors to gain insight into the operational state of a system, and even the data held on it. This method of side-channel exploitation is often effective because of the relative difficulty of obfuscating activity at the physical level. However, to take advantage of side-channel data streams, one must have detailed working knowledge of how a target behavior, activity, or process affects the system physically, which may not always be available to a would-be attacker. The owner of a system, by contrast, has unfettered access to it: they can introduce a target, measure the effect it has on the physical state of the system through side channels, and use that information to identify future instances of the same target. System owners using the physical state of their own system to identify targeted behaviors, activities, and processes benefit from faster detection with only a small amount of computational resources. In this research effort we show the viability of using physical sensor side-channel data to enhance existing security methods through the rapid detection inherent in this technique.
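    The owner-side workflow described in this abstract (profile a known target once, then screen later traces for the same physical footprint) can be sketched as follows. This is an illustrative toy, not the thesis' actual pipeline; the sensor values, thresholds, and function names are all hypothetical.

```python
# Hypothetical sketch: detect a known target activity from a physical
# side-channel trace by comparing it to a stored reference signature.
import statistics

def make_signature(trace):
    """Profile a target activity from a side-channel trace (e.g. power draw)."""
    return (statistics.mean(trace), statistics.stdev(trace))

def matches(trace, signature, tolerance=2.0):
    """Flag a trace whose mean lies within `tolerance` stdevs of the profile."""
    mean, stdev = signature
    return abs(statistics.mean(trace) - mean) <= tolerance * stdev

# The owner profiles the target once on their own system...
target_profile = make_signature([5.1, 5.3, 4.9, 5.2, 5.0])
# ...then screens later traces for the same physical footprint.
print(matches([5.0, 5.2, 5.1, 4.9, 5.3], target_profile))    # similar activity
print(matches([9.8, 10.1, 9.9, 10.0, 10.2], target_profile))  # different load
```

A real detector would use richer features than a mean, but the cheap comparison is what gives the fast, low-overhead detection the abstract claims.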

    Understanding Home Networks with Lightweight Privacy-Preserving Passive Measurement

    Homes account for a significant fraction of Internet traffic. However, meaningful and comprehensive information on the structure and use of home networks is still hard to obtain. The two main challenges in collecting such information are the lack of measurement infrastructure in the home network environment and individuals' concerns about information privacy. To tackle these challenges, this dissertation introduces the Home Network Flow Logger (HNFL) to bring lightweight, privacy-preserving passive measurement to home networks. The core of HNFL is a Linux kernel module that runs on resource-constrained commodity home routers to collect network traffic data from raw packets. Unlike prior passive measurement tools, HNFL is shown to work without harming either data accuracy or router performance. The dissertation also includes a months-long field study that non-intrusively collected passive measurement data from home network gateways, where network traffic is not yet mixed by NAT (Network Address Translation). The comprehensive data collected from over fifty households are analyzed to learn the characteristics of home networks, such as the number and distribution of connected devices, traffic distribution among internal devices, network availability, downlink/uplink bandwidth, data usage patterns, and application traffic distribution.
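    The privacy-preserving reduction described here — raw packets collapsed on the router into coarse per-device counters, with identifiers anonymised before anything is exported — can be sketched in a few lines. This is a minimal illustration in the spirit of HNFL, not its actual kernel-module logic; the record layout and field names are assumptions.

```python
# Sketch: reduce per-packet records to per-device flow totals, replacing
# raw MAC addresses with salted hashes so no raw identifier leaves the router.
import hashlib
from collections import defaultdict

SALT = b"per-household-random-salt"  # generated once per deployment

def anonymise(mac):
    """Replace a raw MAC address with a salted, truncated hash."""
    return hashlib.sha256(SALT + mac.encode()).hexdigest()[:12]

def aggregate(packets):
    """Collapse (mac, direction, nbytes) packet records into flow totals."""
    totals = defaultdict(int)
    for mac, direction, nbytes in packets:
        totals[(anonymise(mac), direction)] += nbytes
    return dict(totals)

packets = [
    ("aa:bb:cc:dd:ee:01", "down", 1500),
    ("aa:bb:cc:dd:ee:01", "down", 900),
    ("aa:bb:cc:dd:ee:02", "up", 400),
]
log = aggregate(packets)  # two entries; no raw MAC appears in the output
```

Aggregating before export is also what keeps the memory and CPU cost low enough for a commodity router.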

    Energy Efficient Geo-Localization for a Wearable Device

    During the last decade there has been a surge of smart devices on markets around the world. The latest trend is devices that can be worn: so-called wearable devices. As with other mobile devices, effective localization is of great interest for many applications of these devices. However, wearables are small and usually place high demands on energy efficiency, which makes traditional localization techniques infeasible for them. In this thesis we investigate and succeed in providing a localization solution for a wearable camera that is both accurate and energy efficient. Localization is done through a combination of Wi-Fi and GPS positioning, with a mean accuracy of 27 m. Furthermore, we use an activity recognition algorithm fed with accelerometer data to decide when a new position estimate should be obtained. Our evaluation shows that by applying this method, 83.2 % of position estimates can be avoided with an insignificant loss in accuracy.
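    The duty-cycling idea above — spend energy on a Wi-Fi/GPS fix only when a cheap accelerometer check says the wearer has actually moved — can be illustrated as follows. The movement detector, threshold, and sample data are invented for the sketch; the thesis' actual activity recognition algorithm is more sophisticated.

```python
# Sketch: gate costly position fixes behind a cheap accelerometer test.
def is_moving(accel_window, threshold=1.5):
    """Crude activity detector: peak-to-peak acceleration above a threshold."""
    return max(accel_window) - min(accel_window) > threshold

def positions_needed(accel_windows):
    """Count how many windows actually trigger a (costly) position fix."""
    return sum(1 for w in accel_windows if is_moving(w))

windows = [
    [9.8, 9.9, 9.8],   # still (gravity only)
    [8.0, 11.5, 9.1],  # walking
    [9.8, 9.8, 9.9],   # still
    [9.8, 9.7, 9.9],   # still
]
fixes = positions_needed(windows)
skipped = 1 - fixes / len(windows)  # fraction of fixes avoided
```

On this toy trace, three of four fixes are skipped; the abstract reports 83.2 % skipped on real data with insignificant accuracy loss.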

    Flexi-WVSNP-DASH: A Wireless Video Sensor Network Platform for the Internet of Things

    Video capture, storage, and distribution in wireless video sensor networks (WVSNs) critically depend on the resources of the nodes forming the sensor networks. In the era of big data, the Internet of Things (IoT), and distributed demand and solutions, there is a need for multi-dimensional data to be part of sensor network data that is easily accessible and consumable by humans as well as machines. Images and video are expected to become as ubiquitous as scalar data in traditional sensor networks. The inception of video streaming over the Internet heralded relentless research into effective ways of distributing video scalably and cost-effectively, with novel implementation attempts across several network layers. Due to the inherent complications of backward compatibility and the need for standardization across network layers, attention has refocused on addressing most of the video distribution at the application layer. As a result, a few video streaming solutions over the Hypertext Transfer Protocol (HTTP) have been proposed, most notably Apple's HTTP Live Streaming (HLS) and the Moving Picture Experts Group's Dynamic Adaptive Streaming over HTTP (MPEG-DASH). These frameworks do not address typical and future WVSN use cases. This work introduces a highly flexible Wireless Video Sensor Network Platform and compatible DASH (WVSNP-DASH). The platform's goal is to usher in video as a data element that can be integrated into traditional and non-Internet networks. A low-cost, scalable node is built from the ground up to be fully compatible with the Internet of Things machine-to-machine (M2M) concept and to be easily re-targeted to new applications in a short time. The Flexi-WVSNP design includes a multi-radio node, middleware for sensor operation and communication, a cross-platform client-facing data retriever/player framework, scalable security, and a cohesive but decoupled hardware and software design. (Doctoral Dissertation, Electrical Engineering, 201)

    Combining Edge Computing and Data Processing with Kubernetes

    The objective of this thesis is to explore and expand on cutting-edge concepts introduced in the coursework of this bachelor's degree. In particular, we go into the world of distributed and cloud computing, including the use of Internet of Things devices. We also demonstrate a use-case scenario implementing artificial intelligence concepts, including building and configuring a neural network. In more detail, we explore how a computer cluster is built, how an application is scaled across an array of connected computers, and how such applications are built as containers and deployed into the cluster. For our use case we develop an intelligent temperature control system that uses AI to predict the future temperature and uses that prediction to reduce network congestion. This temperature control system and its supporting applications run on the computer cluster, and their performance is evaluated. (Grado en Ingeniería Informática — Bachelor's Degree in Computer Engineering)
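    The congestion-reduction idea in this abstract can be sketched as prediction-based reporting: the node keeps a local predictor and transmits a reading only when the prediction misses. This is a hedged illustration only — a moving average stands in for the thesis' neural network, and the dead-band and readings are made up.

```python
# Sketch: transmit a temperature reading only when it deviates from the
# local prediction, so predictable readings generate no network traffic.
def transmissions(readings, window=3, deadband=0.5):
    sent, history = [], []
    for t in readings:
        predicted = sum(history[-window:]) / len(history[-window:]) if history else None
        if predicted is None or abs(t - predicted) > deadband:
            sent.append(t)  # prediction missed: report the real value
        history.append(t)
    return sent

sent = transmissions([20.0, 20.1, 20.0, 20.1, 23.5, 23.6])
# Only the first reading and the sudden rise are transmitted.
```

The better the predictor, the fewer transmissions, which is why a learned model is worth hosting on the cluster.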

    Optimizing task allocation for edge compute micro-clusters

    There are over 30 billion devices at the network edge, largely driven by the unprecedented growth of the Internet of Things (IoT) and 5G technologies. These devices are used in various applications, including but not limited to smart city systems, innovative agriculture management systems, and intelligent home systems. Deployment issues such as networking and privacy problems dictate that computing should occur close to the data source, at or near the network edge. Edge and fog computing are recent decentralised computing paradigms proposed to augment cloud services by extending computing and storage capabilities to the network's edge, enabling computational workloads to execute locally. The benefits help to reduce the strain on the networking backhaul, improve network latency, and enhance application responsiveness. Many edge and fog computing deployment solutions and infrastructures are being employed to deliver cloud resources and services at the edge of the network, for example cloudlets and mobile edge computing. This thesis focuses on micro-cluster platforms for edge computing: small, compact, decentralised groups of interconnected computing resources located close to the edge of a network. Such micro-clusters typically comprise a variety of heterogeneous but resource-constrained computing resources, such as small compute nodes like Single Board Computers (SBCs), storage devices, and networking equipment, deployed in local area networks such as smart home installations. The goal of edge computing micro-clusters is to bring computation and data storage closer to IoT devices and sensors, improving the performance and reliability of distributed systems. Resource management and workload allocation represent a substantial challenge for such resource-limited, heterogeneous micro-clusters because of the diversity in system architecture; task allocation and workload management are therefore complex problems on these platforms.
    This thesis investigates the feasibility of edge micro-cluster platforms for edge computation. Specifically, it examines the performance of micro-clusters in executing IoT applications, and it evaluates various optimisation techniques for task allocation and workload management, including simple heuristics, mathematical optimisation, and metaheuristic techniques, applied to task allocation problems in reconfigurable edge computing micro-clusters. Implementation and performance evaluations take place in a realistic edge environment, using a constructed micro-cluster system comprising a group of heterogeneous computing nodes and a set of edge-relevant benchmark applications. Overall, the research characterises and demonstrates a feasible use case for micro-cluster platforms in edge computing environments and provides insight into the performance of various task allocation optimisation techniques for such systems.
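    One of the "simple heuristics-based optimisations" this abstract alludes to can be sketched as greedy longest-task-first allocation onto heterogeneous nodes: assign each task to the node that would finish it earliest. This is an illustrative example, not the thesis' specific algorithm; the node speeds and task costs are invented.

```python
# Sketch: greedy longest-task-first allocation across heterogeneous nodes.
def allocate(tasks, speeds):
    """tasks: work units; speeds: work units/sec per node.
    Returns per-node finish times (sec) and the task assignment."""
    finish = [0.0] * len(speeds)
    assignment = [[] for _ in speeds]
    for task in sorted(tasks, reverse=True):  # place the biggest tasks first
        # pick the node whose finish time grows the least
        node = min(range(len(speeds)), key=lambda n: finish[n] + task / speeds[n])
        finish[node] += task / speeds[node]
        assignment[node].append(task)
    return finish, assignment

# A fast node (4 units/s) next to two slower SBC-class nodes (1 unit/s each).
finish, assignment = allocate([8, 4, 4, 2, 2], [4.0, 1.0, 1.0])
makespan = max(finish)  # completion time of the slowest node
```

Heuristics like this run in milliseconds on a resource-constrained controller node, which is what makes them attractive against exact mathematical optimisation in this setting.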

    Building a scalable global data processing pipeline for large astronomical photometric datasets

    Astronomical photometry is the science of measuring the flux of a celestial object. Since its introduction, the CCD has been the principal instrument for measuring flux to calculate the apparent magnitude of an object. Each CCD image taken must go through a process of cleaning and calibration prior to its use. As the number of research telescopes increases, the overall computing resources required for image processing also increase. Existing processing techniques are primarily sequential in nature, requiring increasingly powerful servers, faster disks, and faster networks to process data. Existing High Performance Computing solutions involving high-capacity data centres are complex in design and expensive to maintain, while providing resources primarily to high-profile science projects. This research describes three distributed pipeline architectures: a virtualised cloud-based IRAF; the Astronomical Compute Node (ACN), a private-cloud-based pipeline; and NIMBUS, a globally distributed system. The ACN pipeline processed data at a rate of 4 Terabytes per day, demonstrating data compression and upload to a central cloud storage service at a rate faster than data generation. The primary contribution of this research is NIMBUS, which is rapidly scalable, resilient to failure, and capable of processing CCD image data at a rate of hundreds of Terabytes per day. The pipeline is implemented using a decentralised web queue to control the compression of data, the uploading of data to distributed web servers, and the creation of web messages identifying the location of the data. Using distributed web queue messages, images are downloaded by computing resources distributed around the globe. Rigorous experimental evidence is presented verifying the horizontal scalability of the system, which has demonstrated a processing rate of 192 Terabytes per day with clear indications that higher processing rates are possible. (PhD Thesis, Dublin Institute of Technology)
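    The queue pattern described above — messages identify where compressed data lives rather than carrying the data, so workers anywhere can pull and process — reduces to the following single-process sketch. NIMBUS uses distributed web queues; the in-memory queue, filenames, and URL here are stand-ins, not the system's real interfaces.

```python
# Sketch: producers publish locator messages; workers pull and process them.
import queue

work = queue.Queue()

# Producer side: compress, upload, then publish a small locator message.
for i in range(3):
    work.put({"image": f"ccd_{i:04d}.fits.gz",
              "url": f"http://storage.example/ccd_{i:04d}.fits.gz"})

processed = []
while not work.empty():
    msg = work.get()                # a worker on any continent could do this
    processed.append(msg["image"])  # download + clean + calibrate would go here
    work.task_done()
```

Because the queue holds only tiny locator messages, adding workers scales processing horizontally without moving the queue itself — the property behind the 192 TB/day result.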