13 research outputs found

    Building Heat Demand Forecasting by Training a Common Machine Learning Model with Physics-Based Simulator

    Get PDF
    Accurate short-term forecasts of building energy consumption are necessary for profitable demand response. Short-term forecasting methods can be roughly classified into physics-based modelling and data-based modelling. Both approaches have their advantages and disadvantages, and it would therefore be ideal to combine them. This paper proposes a novel approach that combines the best parts of physics-based modelling and machine learning while avoiding many of their drawbacks. A key idea in the approach is to provide a variety of building parameters as input to an Artificial Neural Network (ANN) and to train the model with data from a large group of simulated buildings. The hypothesis is that this forces the ANN model to learn the physics underlying the simulation model, and thus enables the ANN model to be used in place of the simulator. The advantage of this type of model is that it combines the robustness and accuracy of a high-detail physics-based model with the inference speed, ease of deployment, and support for gradient-based optimization provided by the ANN model. To evaluate the approach, an ANN model was developed and trained with simulated data from 900–11,700 buildings, with an equal distribution of office buildings, apartment buildings, and detached houses. The performance of the ANN model was evaluated with a test set consisting of 60 buildings (20 buildings in each category). The normalized root mean square errors (NRMSE) were on average 0.050, 0.026, and 0.052 for apartment buildings, office buildings, and detached houses, respectively. The results show that the model was able to approximate the simulator with good accuracy even outside the training data distribution and to generalize to new buildings in new geographical locations without any building-specific heat demand data.
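    For readers who want to interpret the NRMSE figures above, the following is a minimal sketch of the metric, assuming normalization by the range of the observed values; the paper may normalize differently (e.g., by the mean), and the demand values in the example are invented for illustration.

        import numpy as np

        def nrmse(y_true, y_pred):
            # Root mean square error normalized by the range of the observations.
            # Assumption: range normalization; other conventions divide by the
            # mean or standard deviation of y_true instead.
            y_true = np.asarray(y_true, dtype=float)
            y_pred = np.asarray(y_pred, dtype=float)
            rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))
            return rmse / (y_true.max() - y_true.min())

        # Hypothetical hourly heat demand (kW) for one building and its forecast
        observed = [12.0, 15.5, 18.2, 22.0, 19.4, 14.1]
        predicted = [11.4, 16.0, 17.8, 21.1, 20.2, 14.9]
        print(f"NRMSE = {nrmse(observed, predicted):.3f}")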

    Open Infrastructure for Edge Computing

    Get PDF
    Edge computing, bringing computation closer to end-users and data producers, has firmly gained the status of an enabling technology for new kinds of emerging applications, such as Virtual/Augmented Reality and IoT. The motivation behind this rapidly developing computing paradigm is mainly two-fold. On the one hand, the goal is to minimize the latency that end-users experience, not only improving the quality of service but also enabling new kinds of applications that would not be possible with higher delays. On the other hand, edge computing aims to keep core networking bandwidth from being overwhelmed by myriads of IoT devices sending their data to the cloud: after IoT streams are analyzed and aggregated at edge servers, much less networking capacity is required to persist the remaining information in distant cloud datacenters. Despite this solid motivation and continuous interest from both academia and industry, edge computing is still in its nascency. To leave adolescence and take its place on a par with the cloud computing paradigm, finally forming a versatile edge-cloud environment, the newcomer needs to overcome a number of challenges. First of all, the computing infrastructure for deploying edge applications and services is currently very limited. There are initiatives supported by the telecommunication industry, such as Multi-access Edge Computing, and cloud providers plan to establish facilities near the edge of the network; however, we believe that even more effort will be required to make edge servers generally available. Second, to emerge and function efficiently, the edge computing ecosystem needs practices, standards, and governance mechanisms of its own kind. This specificity originates from the highly dispersed nature of the edge, which implies high heterogeneity of resources and diverse administrative control over the computing facilities. Finally, the third challenge is the dynamicity of the edge computing environment due to, e.g., varying demand and migrating clients. In this thesis, we outline the underlying principles of what we call the Open Infrastructure for Edge (OpenIE), identify its key features, and provide solutions for them. Intended to tackle the challenges mentioned above, OpenIE defines a set of common practices and loosely coupled technologies that create a unified environment out of highly heterogeneous and administratively partitioned edge computing resources. In particular, we design a protocol capable of discovering edge providers on a global scale. Further, we propose a framework of Intelligent Containers (ICONs), capable of autonomous decision making and of forming a service overlay in a large-scale edge-cloud setting. As edge providers need to be economically incentivized, we devise a truthful double auction mechanism where edge providers can meet application owners or administrators who need to deploy an edge service. Due to truthfulness, the best strategy for every participant in our auction is to bid their privately known valuation (or cost), making complex market behavior strategies obsolete. We analyze the potential of distributed ledgers to support OpenIE's decentralized agreement and transaction handling, and show how our auction can be implemented with the help of distributed ledgers. With the key building blocks of OpenIE mentioned above, we hope to make entry into edge service provisioning as easy as possible for anyone interested.
We hope that with the emergence of independent edge providers, edge computing will finally become pervasive.
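    The abstract does not spell the auction out; as an illustration of why truthfulness matters, below is a minimal sketch of the classic trade-reduction (McAfee-style) double auction, which is dominant-strategy truthful. It is an assumption-laden simplification (single-unit bids, no locality constraints) with invented identifiers, not the thesis's actual mechanism.

        def trade_reduction_auction(bids, asks):
            # bids: list of (buyer_id, bid) from parties wanting to deploy an edge service
            # asks: list of (provider_id, ask) from edge providers offering capacity
            # Returns (matches, buyer_price, seller_price).
            buyers = sorted(bids, key=lambda x: -x[1])   # highest bids first
            sellers = sorted(asks, key=lambda x: x[1])   # lowest asks first

            # k = number of efficient trades (pairwise bid >= ask)
            k = 0
            for (_, b), (_, s) in zip(buyers, sellers):
                if b >= s:
                    k += 1
                else:
                    break
            if k < 2:
                return [], None, None  # too few efficient trades to price truthfully

            # Drop the k-th (least profitable) trade; its bid and ask set the prices,
            # which makes reporting one's true valuation a dominant strategy.
            buyer_price = buyers[k - 1][1]
            seller_price = sellers[k - 1][1]
            matches = [(buyers[i][0], sellers[i][0]) for i in range(k - 1)]
            return matches, buyer_price, seller_price

        matches, pay, receive = trade_reduction_auction(
            bids=[("app-A", 9.0), ("app-B", 7.5), ("app-C", 4.0)],
            asks=[("edge-1", 3.0), ("edge-2", 6.0), ("edge-3", 8.0)],
        )
        print(matches, pay, receive)   # [('app-A', 'edge-1')] 7.5 6.0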

    Hierarchical Device Management Technique for a Service-Based IoT Platform

    Get PDF
    Master's thesis (M.S.) -- Seoul National University Graduate School: College of Engineering, Department of Computer Science and Engineering, 2022. 8. Soonhoi Ha. Recently, many kinds of IoT platforms are being studied with great interest in the IoT field. Most IoT platforms are device-based: the user browses edge devices and selects and uses the services they provide. However, this approach can harm the user experience when there are many IoT devices. Searching the numerous edge devices one by one for a desired service is time-consuming and makes it difficult to find the edge-device service best suited to the user. Recently, service-based IoT platforms that can overcome these shortcomings have been studied. In this paper, we propose a device management technique targeting the SoPIoT platform, one of these service-based IoT platforms. Through this technique, support for various communication protocols, support for third-party platforms, and device clustering were added to the SoPIoT platform. As a result, the platform passed the service load test stably and achieved savings of 71.2% in power consumption and 74.6% in deployment cost.
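    To make the Manager-Staff idea concrete, here is a minimal, hypothetical sketch of a manager device that registers constrained staff devices on their behalf and relays service calls to them; the class and method names are assumptions for illustration and do not reflect the actual SoPIoT API.

        from dataclasses import dataclass, field

        @dataclass
        class StaffThing:
            # A constrained device that cannot speak the platform protocol directly.
            device_id: str
            protocol: str            # e.g. "BLE", "Zigbee", "HTTP"
            services: list[str]      # service names the device exposes

        @dataclass
        class ManagerThing:
            # Registers staff devices on their behalf and relays service calls.
            manager_id: str
            staff: dict[str, StaffThing] = field(default_factory=dict)

            def register(self, device: StaffThing) -> None:
                # A real platform would publish a register message here
                # (e.g. over a broker) announcing the staff device's services.
                self.staff[device.device_id] = device
                print(f"[{self.manager_id}] registered {device.device_id} "
                      f"({device.protocol}) with services {device.services}")

            def call_service(self, device_id: str, service: str) -> str:
                device = self.staff[device_id]
                # The manager translates the platform request into the staff
                # device's native protocol here.
                return f"forwarded '{service}' to {device_id} via {device.protocol}"

        manager = ManagerThing("office-manager")
        manager.register(StaffThing("lamp-01", "Zigbee", ["turn_on", "turn_off"]))
        print(manager.call_service("lamp-01", "turn_on"))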

    Big data workflows: Locality-aware orchestration using software containers

    Get PDF
    The emergence of the Edge computing paradigm has shifted data processing from centralised infrastructures to heterogeneous and geographically distributed infrastructures. Therefore, data processing solutions must consider data locality to reduce the performance penalties from data transfers among remote data centres. Existing Big Data processing solutions provide limited support for handling data locality and are inefficient in processing small and frequent events specific to Edge environments. This article proposes a novel architecture and a proof-of-concept implementation for software container-centric Big Data workflow orchestration that puts data locality at the forefront. The proposed solution considers the available data locality information, leverages long-lived containers to execute workflow steps, and handles the interaction with different data sources through containers. We compare the proposed solution with Argo Workflows and demonstrate a significant performance improvement in the execution speed for processing the same data units. Finally, we carry out experiments with the proposed solution under different configurations and analyze individual aspects affecting the performance of the overall solution.
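    The core scheduling idea, placing a workflow step where most of its input data already resides, can be sketched with a simple greedy heuristic; the function and parameter names below are invented for illustration and are not the article's actual scheduler.

        def pick_site(step_inputs, data_location, available_sites):
            # step_inputs:     {dataset_name: size_in_MB} required by the workflow step
            # data_location:   {dataset_name: site_name} where each dataset currently resides
            # available_sites: sites with a free long-lived worker container
            local_mb = {site: 0 for site in available_sites}
            for dataset, size in step_inputs.items():
                site = data_location.get(dataset)
                if site in local_mb:
                    local_mb[site] += size
            # The site holding the most co-located input data minimizes transfer volume.
            return max(local_mb, key=local_mb.get)

        site = pick_site(
            step_inputs={"sensor-batch-42": 800, "model-weights": 50},
            data_location={"sensor-batch-42": "edge-site-A", "model-weights": "cloud"},
            available_sites={"edge-site-A", "cloud"},
        )
        print(site)   # edge-site-A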

    On driver behavior recognition for increased safety: A roadmap

    Get PDF
    Advanced Driver-Assistance Systems (ADASs) are used to increase safety in the automotive domain, yet current ADASs notably operate without taking drivers' states into account, e.g., whether the driver is emotionally apt to drive. In this paper, we first review the state of the art of emotional and cognitive analysis for ADAS: we consider psychological models, the sensors needed for capturing physiological signals, and the typical algorithms used for human emotion classification. Our investigation highlights a lack of advanced Driver Monitoring Systems (DMSs) for ADASs, which could increase driving quality and security for both drivers and passengers. We then provide our view on a novel perception architecture for driver monitoring, built around the concept of the Driver Complex State (DCS). The DCS relies on multiple non-obtrusive sensors and Artificial Intelligence (AI) to uncover the driver state, and uses it to implement innovative Human–Machine Interface (HMI) functionalities. This concept will be implemented and validated in the recently EU-funded NextPerception project, which is briefly introduced.

    Towards Autonomous Computer Networks in Support of Critical Systems

    Get PDF
    The abstract is provided in the attachment.