
    Smart homes : a domestic demand response and demand side energy management system for future smart grids

    Abstract: Smart homes, the homes of the future, will be equipped with advanced technologies for user comfort and entertainment, and intelligent systems will be available to ensure this comfort and reliability. With these technological advancements comes further energy management. Domestic energy efficiency is a concern at present and will remain one in the future, so how do we optimize homes and guide users to conserve energy? Domestic users' energy usage represents a large share of total electricity demand, yet typical home energy systems employ only a rudimentary form of energy efficiency and management. In this paper we present a demand response and demand side management system model to address this situation. Demand response is achieved by the utility wirelessly switching smart power plugs throughout the home on and off, based on peak and off-peak periods, via communication through its smart grid. To help consumers shift their loads during these times, appliance power sources are required that can act autonomously on wired or wireless signals received from the utility via its smart grid. Users, in response, connect their appliances to these plugs and build their own hierarchy by prioritizing their appliance usage. The demand side management system, in contrast, allows users to manually configure dates and times for switching the smart power plugs on and off wirelessly through the user's smart user interface. The result is an energy-efficient future smart home that reduces the user's monthly expenditure and saves energy at the same time.
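    For illustration only (this is not code from the paper), the priority-driven plug switching described above might be sketched in Python as follows; the appliance names, priority numbers and the max_peak_plugs parameter are assumptions introduced for this example:

    from dataclasses import dataclass

    @dataclass
    class SmartPlug:
        appliance: str
        priority: int          # lower number = more essential to the user
        on: bool = True

    def apply_utility_signal(plugs, peak_period, max_peak_plugs):
        # During peak periods keep only the highest-priority plugs switched on;
        # off-peak, every plug is allowed to run.
        if not peak_period:
            for p in plugs:
                p.on = True
            return
        for rank, p in enumerate(sorted(plugs, key=lambda p: p.priority)):
            p.on = rank < max_peak_plugs

    plugs = [SmartPlug("fridge", 1), SmartPlug("water heater", 3), SmartPlug("dryer", 4)]
    apply_utility_signal(plugs, peak_period=True, max_peak_plugs=1)
    print([(p.appliance, p.on) for p in plugs])   # only the fridge stays on

    In a real deployment the peak/off-peak flag would arrive over the smart grid communication link rather than as a function argument.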

    Support for flexible and transparent distributed computing

    Modern distributed computing developed from the traditional supercomputing community, rooted firmly in the culture of batch management. The field has therefore been dominated by queuing-based resource managers and workflow-based job submission environments, where static resource demands needed to be determined and reserved prior to launching executions. This has made it difficult to support resource environments (e.g. Grid, Cloud) where both the available resources and the resource requirements of applications may be dynamic and unpredictable. This thesis introduces a flexible execution model in which compute capacity can be adapted to fit the needs of applications as they change during execution. Resource provision in this model is based on a fine-grained, self-service approach instead of the traditional one-time, system-level model. The thesis introduces a middleware-based Application Agent (AA) that provides a platform for applications to dynamically interact and negotiate resources with the underlying resource infrastructure. We also consider the issue of transparency, i.e., hiding the provision and management of the distributed environment; this is the key to attracting the public to use the technology. The AA not only replaces the user-controlled process of preparing and executing an application with a transparent software-controlled process, it also hides the complexity of selecting the right resources to ensure execution QoS. This service is provided by an On-line Feedback-based Automatic Resource Configuration (OAC) mechanism cooperating with the flexible execution model. The AA constantly monitors utility-based feedback from the application during execution and is thus able to learn its behaviour and resource characteristics. This allows it to automatically compose the most efficient execution environment on the fly and satisfy any execution requirements defined by users. Two policies are introduced to supervise information learning and resource tuning in the OAC. The Utility Classification policy classifies hosts according to their historical performance contributions to the application; using this classification, the AA chooses high-utility hosts and withdraws low-utility hosts to configure an optimum environment. The Desired Processing Power Estimation (DPPE) policy dynamically configures the execution environment according to the estimated total processing power needed to satisfy users' execution requirements. By introducing flexibility and transparency, a user is able to run a dynamic or normal distributed application anywhere with optimised execution performance, without managing distributed resources. Building on the standalone model, the thesis further introduces a federated resource negotiation framework as a step towards an autonomous, multi-user distributed computing world.
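    As a rough, hypothetical Python sketch (not the thesis implementation) of how the two OAC policies could interact, the following ranks hosts by historical utility and keeps adding high-utility hosts until a DPPE-style processing power target is covered; the host names, utility scores and power figures are invented for illustration:

    def select_hosts(host_utilities, host_power, desired_power):
        # host_utilities: host -> average utility feedback observed so far
        # host_power:     host -> nominal processing power of the host
        # Rank hosts by historical utility (Utility Classification idea), then
        # add high-utility hosts until the DPPE-style power target is covered.
        ranked = sorted(host_utilities, key=host_utilities.get, reverse=True)
        chosen, total = [], 0.0
        for host in ranked:
            if total >= desired_power:
                break
            chosen.append(host)
            total += host_power[host]
        return chosen

    utilities = {"hostA": 0.9, "hostB": 0.4, "hostC": 0.7}
    power     = {"hostA": 2.0, "hostB": 1.0, "hostC": 1.5}
    print(select_hosts(utilities, power, desired_power=3.0))   # ['hostA', 'hostC']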

    Policy-based power consumption management in smart energy community using single agent and multi agent Q learning algorithms

    Power consumption in the residential sector has increased due to a growing population, economic growth and the proliferation of electrical appliances, and is therefore a growing concern in the power industry. Managing power consumption in the residential sector without sacrificing user comfort has recently become one of the main research areas. The complexity of the power system keeps growing due to the penetration of alternative sources of electric energy such as solar, hydro, biomass, geothermal and wind to meet the growing demand for electricity. To overcome the challenges arising from this complexity, the power grid needs to be intelligent in all aspects. As the grid gets smarter, considerable efforts are being undertaken to make houses and businesses smarter in their consumption of electrical energy, in order to minimize and level electricity demand, which is known as Demand Side Management (DSM). This also requires that conventional approaches to modelling, control and energy management in all sectors be enhanced or replaced by intelligent information processing techniques. Our research work has been carried out in several stages. We proposed a policy-based framework that allows intelligent and flexible energy management of home appliances in a smart home, a complex and dynamic environment, in a way that saves energy automatically. We considered the challenges in formalizing the behaviour of the appliances using their states and managing the energy consumption using policies. Policies are rules created and edited by a house agent to deal with situations or power problems that are likely to occur. Each time a power problem arises, the house agent consults the policy and executes one or more rules to resolve the situation. Our policy-based smart home can manage energy efficiently and can contribute significantly to reducing peak energy demand (and may thereby reduce carbon emissions). The proposed policy-based framework achieves peak shaving so that power consumption adapts to the available power, while ensuring the comfort level of the inhabitants and taking device characteristics into account. Our simulation results in MATLAB indicate that the proposed policy-driven homes can effectively contribute to demand side power management by decreasing peak-hour appliance usage and can efficiently manage energy in a smart home in a user-friendly way. We proposed and developed peak demand management algorithms for a smart energy community using different coordination mechanisms for multiple house agents working in the same environment. These algorithms use centralized, decentralized, hybrid and Pareto models for resource allocation. We modelled user comfort for each appliance based on user preference, its power reduction capability and the important household activities associated with that appliance. Moreover, we compared these algorithms with respect to peak reduction capability, overall comfort of the community, simplicity of the algorithm and community involvement, and were finally able to identify the best-performing algorithm among them. Our simulation results show that the proposed coordination algorithms can effectively reduce peak demand while maintaining user comfort. With the help of our proposed algorithms, the electricity demand of a smart community can be managed intelligently and sustainably.
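    A minimal, hypothetical Python sketch of the kind of shedding policy described above (the thesis simulations were in MATLAB, and the appliance set, loads and priorities here are assumptions): when consumption exceeds the available power, the house agent drops flexible, low-priority appliances until the limit is met.

    appliances = {                       # name: (load_kW, priority, flexible?)
        "fridge":       (0.15, 1, False),
        "air_con":      (1.50, 2, True),
        "water_heater": (2.00, 3, True),
        "dryer":        (2.50, 4, True),
    }

    def enforce_policy(appliances, on_set, available_kw):
        # Shedding rule: while total load exceeds the available power, switch
        # off flexible appliances, lowest priority (largest number) first.
        def load(s):
            return sum(appliances[a][0] for a in s)
        for name in sorted(on_set, key=lambda a: -appliances[a][1]):
            if load(on_set) <= available_kw:
                break
            if appliances[name][2]:
                on_set = on_set - {name}
        return on_set

    print(enforce_policy(appliances, set(appliances), available_kw=2.0))
    # e.g. the fridge and air conditioner stay on; dryer and water heater are shed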
    This work does not aim at peak reduction alone; it aims to achieve it while keeping the inhabitants' discomfort to a minimum. It can learn the user's behaviour and establish the set of optimal rules dynamically. If the available power to a house is kept at a certain level, the house agent will learn to use this notional power to operate all the appliances according to the requirements and comfort level of the household. In this way consumers are constrained to use power below the set level, so that overall power consumption can be maintained at a certain rate or level; this makes sustainability possible and can reduce the depletion of natural resources used for electricity generation. Temporal interaction between local users' energy demand and renewable energy sources could also be handled more efficiently with a set of new policy rules to switch between the utility and the renewable source, but this is beyond the scope of this thesis. We applied Q-learning techniques to a home energy management agent: the agent learns to find the optimal sequence in which to turn off appliances, so that higher-priority appliances are not switched off during peak demand periods or while power consumption is being managed. The policy-based home energy management system determines the optimal policy at every instant dynamically by learning through interaction with the environment using Q-learning, a reinforcement learning approach. The Q-learning formulation of the home power consumption problem, consisting of the state space, actions and reward function, is presented. The implications of these simulation results are that the proposed Q-learning based power consumption management is very effective and causes users minimal discomfort while participating in peak demand management, or when power consumption management is essential because the available power is rationed. This work is extended to a group of 10 houses, and three multi-agent Q-learning algorithms are proposed and developed to improve individual and community comfort while keeping power consumption below the available power level, or the electricity price below the set price. The proposed algorithms are the weighted strategy sharing algorithm, the concurrent Q-learning algorithm and the cooperative distributive learning algorithm. These algorithms are coded and tested for managing the power consumption of a group of 10 houses, and the performance of all three with respect to power management and community comfort is studied and compared. The actual power consumption of a community, the modified power consumption curves obtained using the weighted strategy sharing, concurrent and distributive Q-learning algorithms, and the user comfort results are presented and analysed in this thesis.
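    As a hedged illustration of a tabular Q-learning loop of the kind described above (a Python sketch; the state, action and reward encoding is a simplification invented for this example, not the formulation used in the thesis):

    import random

    appliances = ["dryer", "water_heater", "air_con", "fridge"]   # action i = turn appliance i off
    importance = [1, 2, 3, 4]         # larger value = more important to the user
    load_kw    = [2.5, 2.0, 1.5, 0.15]
    ALPHA, GAMMA, EPS, LIMIT_KW = 0.1, 0.9, 0.2, 2.0
    Q = {}                            # (state, action) -> estimated value

    def step(state, action):
        # State is a tuple of on/off flags; the action turns one appliance off.
        # Reward penalises shedding important appliances and pays a bonus once
        # total load drops to the available power limit.
        nxt = tuple(on and i != action for i, on in enumerate(state))
        total = sum(l for l, on in zip(load_kw, nxt) if on)
        reward = -importance[action] + (5.0 if total <= LIMIT_KW else 0.0)
        return nxt, reward, total <= LIMIT_KW

    for _ in range(2000):                            # training episodes
        state, done = (True, True, True, True), False
        while not done:
            actions = [i for i, on in enumerate(state) if on]
            if random.random() < EPS:                # epsilon-greedy exploration
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda i: Q.get((state, i), 0.0))
            nxt, r, done = step(state, a)
            best_next = max((Q.get((nxt, i), 0.0) for i, on in enumerate(nxt) if on),
                            default=0.0)
            old = Q.get((state, a), 0.0)
            Q[(state, a)] = old + ALPHA * (r + GAMMA * best_next - old)
            state = nxt

    # Greedy first action after training: typically the dryer, the least
    # important appliance in this toy setup, is shed first.
    start = (True, True, True, True)
    best = max(range(4), key=lambda i: Q.get((start, i), 0.0))
    print("first appliance to switch off:", appliances[best])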