
    Peer-to-Peer Energy Trading in Smart Residential Environment with User Behavioral Modeling

    Electric power systems are transforming from a centralized, unidirectional market to a decentralized open market. With this shift, end-users have the possibility to actively participate in local energy exchanges, with or without the involvement of the main grid. Rapidly falling prices for Renewable Energy Technologies (RETs), their ease of installation and operation, and the ability of Electric Vehicle (EV) and Smart Grid (SG) technologies to make bidirectional energy flow possible have all contributed to this changing landscape on the distribution side of the traditional power grid. Trading energy among users in a decentralized fashion is referred to as Peer-to-Peer (P2P) Energy Trading, which has attracted significant attention from the research and industry communities in recent times. However, previous research has mostly focused on the engineering aspects of P2P energy trading systems, often neglecting the central role of users in such systems. P2P trading mechanisms require active participation from users to decide factors such as selling prices, storing versus trading energy, and the selection of energy sources, among others. The complexity of these tasks, paired with the limited cognitive and time capabilities of human users, can result in sub-optimal decisions or even abandonment of such systems if performance is not satisfactory. Therefore, it is of paramount importance for P2P energy trading systems to incorporate user behavioral modeling that captures users' individual trading behaviors, preferences, and perceived utility in a realistic and accurate manner. Often, such user behavioral models are not known a priori in real-world settings, and therefore need to be learned online as the P2P system is operating. In this thesis, we design novel algorithms for P2P energy trading. By exploiting a variety of statistical, algorithmic, machine learning, and behavioral economics tools, we propose solutions that jointly optimize system performance while learning and taking into account realistic models of user behavior. The results in this dissertation have been published in IEEE Transactions on Green Communications and Networking 2021, the Proceedings of the IEEE Global Communication Conference 2022, the Proceedings of the IEEE Conference on Pervasive Computing and Communications 2023, and ACM Transactions on Evolutionary Learning and Optimization 2023
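    The abstract names the toolbox (statistics, online learning, behavioral economics) but not the algorithms themselves. As a purely illustrative sketch of what learning a user behavior model online can look like, the following hypothetical epsilon-greedy loop picks a posted selling price and updates its estimate of buyer acceptance from feedback; all names and numbers are invented, not taken from the thesis.

```python
import random

# Hypothetical sketch: epsilon-greedy selection of a selling price,
# learning each price's acceptance behaviour online from user feedback.
PRICES = [0.08, 0.10, 0.12, 0.14]   # candidate prices (currency/kWh), illustrative
EPSILON = 0.1                        # exploration rate

counts = {p: 0 for p in PRICES}      # times each price was offered
revenue = {p: 0.0 for p in PRICES}   # cumulative revenue per price

def choose_price():
    """Explore a random price with prob. EPSILON, else exploit the best one."""
    if random.random() < EPSILON:
        return random.choice(PRICES)
    # Untried prices get infinite priority so each is sampled at least once.
    return max(PRICES, key=lambda p: revenue[p] / counts[p] if counts[p] else float("inf"))

def update(price, accepted):
    """Record the outcome of one trading round."""
    counts[price] += 1
    revenue[price] += price if accepted else 0.0

# One simulated trading round against an unknown buyer model:
p = choose_price()
update(p, accepted=random.random() < max(0.0, 1.5 - 10 * p))  # toy acceptance model
```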

    An Investigation into Dynamic Web Service Composition Using a Simulation Framework

    [Motivation] Web Services technology has emerged as a promising solution for creating distributed systems, with the potential to overcome the limitations of former distributed system technologies. Web services provide a platform-independent framework that enables companies to run their business services over the internet. Therefore, many techniques and tools are being developed to create business-to-business and business-to-customer applications. In particular, researchers are exploring ways to build new services from existing services by dynamically composing services from a range of resources. [Aim] This thesis aims to identify the technologies and strategies currently being explored for organising the dynamic composition of Web services, and to determine how extensively each of these has been demonstrated and assessed. In addition, the thesis studies the matchmaking and selection processes, which are essential to Web service composition. [Research Method] We undertook a mapping study of empirical papers published over the period 2000 to 2009. The aim of the mapping study was to identify the technologies and strategies currently being explored for organising the composition of Web services, and to determine how extensively each of these has been demonstrated and assessed. We then built a simulation framework to carry out experiments on composition strategies. The first experiment compared the results of a close replication of an existing study with the original results, in order to evaluate our replication. The simulation framework was then used to investigate the use of a QoS model for supporting the selection process, comparing it with a ranking technique in terms of performance. [Results] The mapping study found 1172 papers that matched our search terms, of which 94 were classified as providing practical demonstration of ideas related to dynamic composition. We analysed 68 of these in more detail. Only 29 provided a `formal' empirical evaluation. From these, we selected a `baseline' study to test our simulation model. Running the experiments on simulated datasets showed that, in the first experiment, the results of the close replication and of the original study were similar in terms of their profile. In the second experiment, the results demonstrated that the QoS model was better than the ranking mechanism at selecting the composite plan with the highest quality score. [Conclusions] No single approach to service composition seemed to meet all needs, but a number have been investigated more thoroughly than the rest. The similarity between the results of the close replication and the original study supports the validity of our simulation framework and shows that the results of the original study can be replicated. Using the simulation, it was demonstrated that the QoS model outperformed the ranking mechanism in terms of the overall quality of the selected plan. The overall objective of this research was to develop a generic life-cycle model for Web service composition from a mapping study of the literature; this was then used to run simulations to replicate studies on matchmaking and to compare selection methods
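    For context on what a QoS model for selection typically involves (the thesis's own model is not given in the abstract), a common construction is a weighted additive score over normalised quality attributes, with the composite plan of highest score selected. The sketch below uses invented attribute names and weights, not the thesis's model.

```python
# Illustrative weighted-sum QoS score for candidate composite plans.
# Attribute names, weights and the normalisation are assumptions.
WEIGHTS = {"response_time": 0.4, "cost": 0.3, "availability": 0.3}

def qos_score(plan):
    """Higher is better; 'negative' attributes are inverted before weighting."""
    score = 0.0
    score += WEIGHTS["response_time"] * (1.0 / (1.0 + plan["response_time"]))
    score += WEIGHTS["cost"] * (1.0 / (1.0 + plan["cost"]))
    score += WEIGHTS["availability"] * plan["availability"]  # already in [0, 1]
    return score

plans = [
    {"name": "A", "response_time": 120, "cost": 5.0, "availability": 0.99},
    {"name": "B", "response_time": 80,  "cost": 9.0, "availability": 0.95},
]
best = max(plans, key=qos_score)  # the plan with the highest overall quality score
```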

    Sustainability of systems interoperability in dynamic business networks

    Dissertation submitted for the degree of Doctor in Electrical and Computer Engineering. Collaborative networked environments emerged with the spread of the internet, contributing to overcoming past communication barriers and identifying interoperability as an essential property to support business development. When achieved seamlessly, efficiency is increased across the entire product life cycle. However, due to their different sources of knowledge, models, and semantics, enterprise organisations experience difficulties exchanging critical information, even when they operate in the same business environments. To solve this issue, most of them try to attain interoperability by establishing peer-to-peer mappings with different business partners, or use neutral data and product standards as the core for information sharing in optimised networks. In current industrial practice, the model mappings that regulate enterprise communications are defined only once, and most of them are hardcoded in the information systems. This solution has been effective and sufficient for static environments, where enterprise and product models remain valid for decades. However, more and more enterprise systems are becoming dynamic, adapting to meet new requirements; a trend that is causing new interoperability disturbances and reducing the efficiency of existing partnerships. Enterprise Interoperability (EI) is a well-established area of applied research that studies these problems and proposes novel approaches and solutions. This PhD work contributes to that research by considering enterprises as complex, adaptive systems, subject to factors that make interoperability difficult to sustain over time. It proposes analysing complexity as a neighbouring scientific domain, in which features of interoperability can be identified and evaluated as a benchmark for developing a new foundation of EI. This approach aims at drawing concepts from complexity science to analyse dynamic enterprise networks, and proposes a framework for sustaining systems interoperability that enables different organisations to evolve at their own pace, answering upcoming requirements while minimising the negative impact these changes can have on their business environment
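    The scalability argument behind neutral standards can be made concrete with a standard count: with $n$ partners, maintaining direct peer-to-peer mappings requires up to one translator per pair, whereas mapping every partner onto a single neutral standard requires only one translator per partner.

```latex
% Translators needed to keep n partners interoperable (requires amsmath):
\[
  \underbrace{\binom{n}{2} = \frac{n(n-1)}{2}}_{\text{pairwise peer-to-peer mappings}}
  \qquad\text{vs.}\qquad
  \underbrace{n}_{\text{mappings to a single neutral standard}}
\]
% Example: n = 20 partners gives 190 pairwise mappings, but only 20
% mappings when all partners translate to and from one neutral standard.
```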

    Building a Distributed Trust Model of RESTful Web Services for Mobile Devices

    As of 2011, there were about 5,981 million mobile devices in the world [1], and there were 113.9 million mobile web users in 2012 [2]. With the popularity of web services for mobile devices, concerns about their security have been raised. Furthermore, with increasing cooperation among organizations, web services now normally involve more than one organization, and how to trust incoming requests from other organizations is an open issue. This research focuses on building a trust model for the web services of mobile devices. It addresses the issues caused by mobile devices being stolen or lost, users abusing privileges, and cross-domain access control. The trust model is distributed across each web-server node. A trust value is calculated for every incoming request to decide whether the request should be served or not. The goals of the trust model are to be 1) flexible, 2) scalable, and 3) lightweight, and the implementation is designed and accomplished with these goals in mind. The experiments evaluate the overhead of the trust module and the maximum capacity of the system
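    The abstract specifies a per-request trust value that gates the serving decision, but not its formula. A minimal hypothetical sketch of such a check, with invented evidence factors, weights, and threshold, might look like:

```python
# Hypothetical sketch of a per-request trust check on a web-server node.
# Factor names, weights and the threshold are illustrative assumptions.
THRESHOLD = 0.6

def trust_value(request):
    """Combine weighted evidence about the device, user and origin domain."""
    factors = {
        "device_not_reported_lost": 0.4,   # mitigates stolen/lost devices
        "privilege_within_role":    0.3,   # mitigates privilege abuse
        "trusted_origin_domain":    0.3,   # cross-domain access control
    }
    return sum(w for name, w in factors.items() if request.get(name, False))

def serve(request):
    """Serve the request only if its trust value clears the threshold."""
    return trust_value(request) >= THRESHOLD

print(serve({"device_not_reported_lost": True, "trusted_origin_domain": True}))  # True (0.7)
```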

    Software Technologies - 8th International Joint Conference, ICSOFT 2013: Revised Selected Papers


    Systematic Transaction-Level Communication Modeling with SystemC

    An emerging approach to embedded system design is to assemble systems from a library of hardware and software component models (IP, intellectual property) using a system description language such as SystemC. SystemC allows describing the communication among IPs in terms of abstract operations (transactions). The promise is that with transaction-level modeling (TLM), future systems-on-chip with one billion transistors and more can be composed out of IPs as simply as playing with LEGO bricks. Reality, however, is far from this ideal: each IP vendor promotes its own proprietary interface standard, and the available design tools lack compatibility, so heterogeneous IPs cannot be integrated efficiently. This thesis presents a novel generic interconnect fabric for TLM which aims at enabling interoperation between models at different levels of abstraction (mixed-mode) and models with different interfaces (heterogeneous components), with as little overhead as possible. A generic, protocol-independent representation of transactions is developed, along with a formalism for abstraction levels. This approach is shown to support systematic simulation of state-of-the-art buses and networks-on-chip such as IBM CoreConnect and PCI Express across several levels of TLM abstraction. A layered simulation framework for SystemC, GreenBus, is developed to examine the proposed concepts. The thesis discusses new implementation techniques for communication modeling with SystemC which outperform existing approaches in terms of flexibility, simulation accuracy, and performance. Based on these techniques, advanced concepts for TLM-based hardware/software co-design and FPGA prototyping are examined. Several experiments and a video processor case study highlight the efficiency of the approach and show its applicability in a TLM design flow
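    GreenBus itself is a SystemC (C++) library, and its actual interfaces are not reproduced here. As a language-neutral illustration of the core idea of a generic, protocol-independent transaction representation, where protocol-specific attributes live outside the generic payload, consider this hypothetical sketch (the transport function is loosely modelled on common TLM conventions):

```python
from dataclasses import dataclass, field

# Illustration only: a protocol-independent transaction record in the spirit
# of TLM, where generic fields cover any bus and protocol-specific details
# (e.g. CoreConnect or PCI Express attributes) live in an extension map.
@dataclass
class Transaction:
    command: str                 # "READ" or "WRITE"
    address: int                 # target address in the system memory map
    data: bytes                  # payload carried by the transaction
    extensions: dict = field(default_factory=dict)  # protocol-specific attributes

def b_transport(txn: Transaction, memory: bytearray):
    """Blocking transport against a toy memory target."""
    end = txn.address + len(txn.data)
    if txn.command == "WRITE":
        memory[txn.address:end] = txn.data
    elif txn.command == "READ":
        txn.data = bytes(memory[txn.address:end])

mem = bytearray(256)
b_transport(Transaction("WRITE", 0x10, b"\x2a", {"pcie_tlp_type": "MemWr"}), mem)
```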

    Incentive-driven QoS in peer-to-peer overlays

    A well-known problem in peer-to-peer overlays is that no single entity has control over the software, hardware, and configuration of peers. Thus, each peer can selfishly adapt its behaviour to maximise its benefit from the overlay. This thesis is concerned with the modelling and design of incentive mechanisms for QoS overlays: resource allocation protocols that provide strategic peers with participation incentives while at the same time optimising the performance of the peer-to-peer distribution overlay. The contributions of this thesis are as follows. First, we present PledgeRoute, a novel contribution accounting system that can be used, along with a set of reciprocity policies, as an incentive mechanism to encourage peers to contribute resources even when users are not actively consuming overlay services. This mechanism uses a decentralised credit network, is resilient to sybil attacks, and allows peers to achieve time- and space-deferred contribution reciprocity. Then, we present a novel QoS-aware resource allocation model based on Vickrey auctions that uses PledgeRoute as a substrate. It acts as an incentive mechanism by providing efficient overlay construction, while at the same time allocating increasing service quality to those peers that contribute more to the network. The model is then applied to lag-sensitive chunk swarming, and some of its properties are explored for different peer delay distributions. When considering QoS overlays deployed over the best-effort Internet, the quality received by a client cannot be attributed completely either to its serving peer or to the intervening network between them. By drawing parallels between this situation and well-known hidden-action situations in microeconomics, we propose a novel scheme to ensure adherence to advertised QoS levels. We then apply it to delay-sensitive chunk distribution overlays and present the optimal contract payments required, along with a method for QoS contract enforcement through reciprocative strategies. We also present a probabilistic model for application-layer delay as a function of the prevailing network conditions. Finally, we address the incentives of managed overlays and the prediction of their behaviour. We propose two novel models of multihoming managed overlay incentives, in which overlays can freely allocate their traffic flows between different ISPs. One is obtained by optimising an overlay utility function with desired properties, while the other is designed for data-driven least-squares fitting of the cross elasticity of demand. This last model is then used to solve for ISP profit maximisation
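    As background for the allocation model: in a Vickrey (sealed-bid, second-price) auction the highest bidder wins but pays the second-highest bid, which makes truthful bidding a dominant strategy. A minimal sketch of this textbook mechanism (not the thesis's QoS-aware variant) follows:

```python
# Minimal second-price (Vickrey) auction: the highest bidder wins the
# resource but pays the second-highest bid. Textbook background only,
# not the QoS-aware allocation model developed in the thesis.
def vickrey_auction(bids):
    """bids: dict mapping peer id -> bid. Returns (winner, price paid)."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner, bids[runner_up]

winner, price = vickrey_auction({"peer_a": 10, "peer_b": 7, "peer_c": 4})
# peer_a wins but pays 7, so overstating its true valuation cannot help it.
```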

    Runtime MPI Correctness Checking with a Scalable Tools Infrastructure

    The increasing computational demand of simulations motivates the use of parallel computing systems. At the same time, this parallelism poses challenges to application developers. The Message Passing Interface (MPI) is a de-facto standard for distributed memory programming in high performance computing. However, its use also enables complex parallel programming errors such as races, communication errors, and deadlocks. Automatic tools can assist application developers in the detection and removal of such errors. This thesis considers tools that detect such errors during an application run, and advances them towards a combination of precise checks (neither false positives nor false negatives) and scalability. This includes novel hierarchical checks that provide scalability, as well as a formal basis for a distributed deadlock detection approach. At the same time, the development of parallel runtime tools is challenging and time consuming, especially if scalability and portability are key design goals. Current tool development projects often create similar tool components, while component reuse remains low. To provide a perspective towards more efficient tool development that simplifies scalable implementations, component reuse, and tool integration, this thesis proposes an abstraction for a parallel tools infrastructure along with a prototype implementation. This abstraction overcomes the use of multiple interfaces for different types of tool functionality, which limits flexible component reuse. Thus, this thesis advances runtime error detection tools and uses their redesign and their increased scalability requirements to apply and evaluate a novel tool infrastructure abstraction. The new abstraction ultimately allows developers to focus on their tool functionality rather than on developing or integrating common tool components. The use of such an abstraction across a wide range of parallel runtime tool development projects could greatly increase component reuse, decreasing tool development time and cost. An application study with up to 16,384 application processes demonstrates the applicability of both the proposed runtime correctness concepts and the proposed tools infrastructure
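    A common foundation for deadlock detection in MPI tools is a wait-for graph whose cycles indicate deadlock; real MPI semantics (wildcard receives, collectives) require richer structures, and the thesis's distributed, formally grounded approach is not reproduced here. A simplified cycle check illustrating only the core idea:

```python
# Simplified deadlock check: build a wait-for graph (process -> processes it
# blocks on) and search for a cycle via depth-first search. Assumes plain
# AND-semantics wait-for edges, which real MPI tools must generalise.
def has_deadlock(wait_for):
    """wait_for: dict mapping each process to the set of processes it waits on."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {p: WHITE for p in wait_for}

    def visit(p):
        colour[p] = GREY
        for q in wait_for.get(p, ()):
            if colour.get(q, WHITE) == GREY:      # back edge: cycle found
                return True
            if colour.get(q, WHITE) == WHITE and visit(q):
                return True
        colour[p] = BLACK
        return False

    return any(colour[p] == WHITE and visit(p) for p in wait_for)

# Ranks 0 and 1 block on receives from each other -> deadlock.
print(has_deadlock({0: {1}, 1: {0}, 2: set()}))  # True
```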

    Service Oriented Mobile Computing

    The spread of concepts such as Pervasive and Mobile Computing introduces two fundamental aspects into distributed systems: user mobility and interaction with the surrounding environment, both favoured by the growing adoption of mobile devices with wireless connectivity as consumer products. To extend the capabilities that Service Oriented Architectures (SOA) and the peer-to-peer paradigm have brought to distributed systems to devices with limited resources (in terms of computational capacity, memory, and battery), a lightweight middleware designed with these characteristics in mind is needed. This thesis presents NAM (Networked Autonomic Machine), a formalism that exhaustively describes such a system; it is a theoretical model for the definition of hardware and software entities able to share their resources in a completely altruistic way. In particular, the work concentrates on the definition and management of one specific type of resource, services, which can be offered and used by mobile devices through composition and migration mechanisms. NSAM (Networked Service-oriented Autonomic Machine) is a specialisation of NAM for sharing services in a peer-to-peer network, and is based on three fundamental concepts: overlay schemes, dynamic service composition, and peer self-configuration. The thesis also presents several applied activities, which build on two middleware platforms developed by the Distributed Systems Group (DSG) of the University of Parma: SP2A (Service Oriented Peer-to-peer Architecture), a framework for developing distributed applications through resource sharing in a peer-to-peer network, and Jxta-Soap, which enables the sharing of Web Services in a JXTA peer-to-peer network. The applications developed range from logistics to the creation of e-learning communities, Ambient Intelligence, and emergency management, and share as a common denominator the creation of heterogeneous networks and resource sharing that includes mobile devices. It is also shown how these applications can be optimised by introducing the NAM framework described here, to allow different types of resources to be shared efficiently and proactively
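    As a purely hypothetical illustration of the NAM idea of entities that altruistically share resources, and of service migration between nodes, consider the following sketch; the class and method names are invented and do not reflect the SP2A or Jxta-Soap APIs.

```python
# Hypothetical sketch of a NAM-style node that altruistically shares
# services with peers; names are invented, not actual SP2A/Jxta-Soap APIs.
class NetworkedAutonomicMachine:
    def __init__(self, node_id):
        self.node_id = node_id
        self.services = {}          # name -> callable shared with the overlay

    def share(self, name, fn):
        """Offer a service to any peer, unconditionally (altruistic sharing)."""
        self.services[name] = fn

    def invoke(self, name, *args):
        """Invoke a locally hosted shared service on behalf of a peer."""
        return self.services[name](*args)

    def migrate(self, name, target):
        """Move a service to another node, e.g. for load or battery reasons."""
        target.share(name, self.services.pop(name))

a, b = NetworkedAutonomicMachine("a"), NetworkedAutonomicMachine("b")
a.share("echo", lambda x: x)
a.migrate("echo", b)
print(b.invoke("echo", "hello"))  # hello
```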