12 research outputs found

    Demand response for smart homes

    Get PDF
    Transmission operation issues, overload, and carbon emissions are, among others, the concerns of power system operators worldwide. In this context, faced with the need to reduce operating costs and to adapt to different requirements of quality, security, flexibility, and sustainability, smart grids are seen as a technological revolution in the power sector. This transformation will be necessary to achieve environmental objectives, support the adoption of electric and hybrid vehicles, enable distributed low-voltage generation, and integrate demand participation. Every stakeholder in the energy management process can benefit from the smart grid, which justifies its current importance. The focus of this thesis is on the end user. In addition to the end user, this work also considers the aggregator, an entity that aggregates a set of users so that the union of their individual participations becomes more representative for power system decisions. The function of the aggregator is to establish an alignment of interests between end users and the generation company so as to satisfy both parties. One of the main contributions of this thesis is a method that gives an aggregator the ability to coordinate the consumption of a set of users, keeping the desired comfort level for each of them and encouraging them, via monetary incentives, to change their consumption so that the aggregated load has the minimal cost for the generation company. In the first contribution (Chapter 4), this work focuses on developing a representative mathematical model for scheduling a user's appliances. The model integrates detailed and reliable models for specific appliances while keeping the complexity low enough that commercial solvers can solve the problem in seconds. Compared to the closest models in the literature, our model yields cost savings ranging from 8% to 389% over a 24-hour scheduling horizon. In the second contribution (Chapter 5), the focus is on an algorithmic framework that helps a specific end user decide on the payback of acquiring smart appliances or equipment (components). For a specific user, the framework analyses various combinations of smart components to determine which is the most profitable and when it should be installed. This framework can be used to encourage users towards the smart home concept while decreasing the risk of their investment. The third contribution (Chapter 6) aggregates several smart homes. An algorithmic framework based on demand response programs is proposed. It uses outputs from the two previous contributions to represent multiple users, and its goal is to maximize social welfare, considering both the cost reduction for a given generation company and the satisfaction of every user. Results show that, from the generation company's perspective, the aggregate load curve is flattened without negatively impacting users' comfort or costs. Finally, the experiments reported in each contribution validate, in theory, the efficiency of the proposed approaches.
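
    To make the Chapter 4 idea concrete, the following is a deliberately simplified sketch, not the thesis model: it schedules a few shiftable appliances under a time-of-use tariff so that each runs inside its comfort window at minimal cost. The prices, appliance names, and windows are invented for illustration; the actual formulation is a detailed mixed-integer model solved with commercial solvers.

```python
# Illustrative sketch (not the thesis model): schedule shiftable appliances under a
# time-of-use tariff so each runs within its comfort window at minimal cost.
# All prices, appliances, and windows below are hypothetical.

from dataclasses import dataclass

@dataclass
class Appliance:
    name: str
    power_kw: float   # average draw while running
    duration_h: int   # contiguous hours of operation
    window: tuple     # (earliest start hour, latest finish hour) = comfort constraint

def schedule(appliances, prices):
    """Pick, for each appliance, the cheapest feasible contiguous start slot."""
    plan = {}
    for a in appliances:
        lo, hi = a.window
        best_start, best_cost = None, float("inf")
        for start in range(lo, hi - a.duration_h + 1):
            cost = a.power_kw * sum(prices[start:start + a.duration_h])
            if cost < best_cost:
                best_start, best_cost = start, cost
        plan[a.name] = (best_start, best_cost)
    return plan

if __name__ == "__main__":
    prices = [0.08] * 7 + [0.15] * 10 + [0.25] * 4 + [0.08] * 3   # $/kWh over 24 h
    appliances = [
        Appliance("dishwasher", 1.2, 2, (18, 24)),
        Appliance("washing_machine", 0.9, 2, (8, 22)),
        Appliance("ev_charger", 3.3, 4, (0, 8)),
    ]
    for name, (start, cost) in schedule(appliances, prices).items():
        print(f"{name}: start at {start}:00, cost ${cost:.2f}")
```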

    Leveraging disaggregated accelerators and non-volatile memories to improve the efficiency of modern datacenters

    Get PDF
    Traditional data centers consist of computing nodes that have all their resources physically attached. When larger demands had to be met, the solution was either to add more nodes (scaling out) or to increase the capacity of existing ones (scaling up). Workload requirements are traditionally fulfilled by selecting compute platforms from pools that best satisfy their average or maximum resource requirements, depending on the price the user is willing to pay. The amount of processor, memory, storage, and network bandwidth of a selected platform needs to meet or exceed the platform requirements of the workload. Resources beyond those explicitly required by the workload are considered stranded resources (if not used) or bonus resources (if used). Meanwhile, workloads in all market segments have evolved significantly during the last decades. Today, workloads have a larger variety of requirements in terms of the characteristics of the computing platforms, including new technologies such as GPUs, FPGAs, NVMe, etc. These new technologies are more expensive and therefore scarcer. It is no longer feasible to increase the number of resources according to potential peak demands, as this significantly raises the total cost of ownership. Software-Defined Infrastructures (SDI), a new concept for data center architecture, is being developed to address these issues. The main SDI proposition is to disaggregate all resources over the fabric to enable the required flexibility. With SDI, instead of pools of computational nodes, the pools consist of individual units of resources (CPU, memory, FPGA, NVMe, GPU, etc.). When an application needs to be executed, SDI identifies its computational requirements and assembles all the resources required, creating a composite node. Resource disaggregation brings new challenges and opportunities that this thesis explores. This thesis demonstrates that resource disaggregation brings opportunities to increase the efficiency of modern data centers: disaggregation may increase workloads' performance when a single resource is shared, so fewer resources are needed to achieve similar results, and, conversely, disaggregation makes it possible to aggregate resources and thereby increase a workload's performance. However, to take maximum advantage of these characteristics and this flexibility, orchestrators must be aware of them. This thesis demonstrates how workload-aware techniques applied at the resource management level improve quality of service by leveraging resource disaggregation. Enabling resource disaggregation, this thesis demonstrates a reduction of up to 49% in missed deadlines compared to a traditional schema; this reduction can rise to 100% when workload awareness is enabled. Moreover, this thesis demonstrates that GPU partitioning and disaggregation further enhance data center flexibility, achieving the same results with half the resources: with a single physical GPU partitioned and disaggregated, the same results can be achieved as with two disaggregated but unpartitioned GPUs. Finally, this thesis demonstrates that resource fragmentation becomes key when only a limited set of heterogeneous resources, namely NVMe and GPU, is available. For a heterogeneous set of resources in which some resources are highly demanded but limited in quantity, that is, when demand for a resource is unexpectedly high, this thesis proposes a fragmentation-minimizing technique that reduces missed deadlines by up to 86% compared to a disaggregation-aware policy.
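
    As a rough illustration of the composite-node idea, the sketch below shows how an SDI-style orchestrator might assemble a node from disaggregated resource pools using a naive first-fit policy. The pool contents, workload requirements, and function names are invented; a real orchestrator would also account for fabric locality, resource sharing, and fragmentation, which are the subject of this thesis.

```python
# Minimal sketch (assumptions, not the thesis implementation): build a "composite node"
# by drawing individual units from disaggregated resource pools instead of picking a
# monolithic server. Pool contents and requirements below are hypothetical.

def compose_node(requirements, pools):
    """First-fit: claim the requested number of units of each resource type.

    requirements: e.g. {"cpu": 8, "gpu": 1, "nvme": 2}
    pools: {"cpu": [...], "gpu": [...], "nvme": [...]} lists of free unit ids
    Returns the composite node (claimed unit ids) or None if any pool runs short.
    """
    claimed = {}
    for rtype, count in requirements.items():
        free = pools.get(rtype, [])
        if len(free) < count:
            # Not enough disaggregated units of this type: release what was taken.
            for t, units in claimed.items():
                pools[t] = units + pools[t]
            return None
        claimed[rtype] = [free.pop(0) for _ in range(count)]
    return claimed

if __name__ == "__main__":
    pools = {
        "cpu":  [f"cpu-{i}" for i in range(16)],
        "gpu":  ["gpu-0", "gpu-1"],
        "nvme": ["nvme-0", "nvme-1", "nvme-2"],
    }
    node = compose_node({"cpu": 8, "gpu": 1, "nvme": 2}, pools)
    print("composite node:", node)
    print("remaining GPUs:", pools["gpu"])
```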

    Development of an Intelligent Monitoring and Control System for a Heterogeneous Numerical Propulsion System Simulation

    Get PDF
    The NASA Numerical Propulsion System Simulation (NPSS) project is exploring the use of computer simulation to facilitate the design of new jet engines. Several key issues raised in this research are being examined in an NPSS-related research project: zooming, monitoring and control, and support for heterogeneity. The design of a simulation executive that addresses each of these issues is described. In this work, the strategy of zooming, which allows codes that model at different levels of fidelity to be integrated within a single simulation, is applied to the fan component of a turbofan propulsion system. A prototype monitoring and control system has been designed for this simulation to support experimentation with expert system techniques for active control of the simulation. An interconnection system provides a transparent means of connecting the heterogeneous systems that comprise the prototype.
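
    As a hypothetical illustration of zooming, the sketch below shows a simulation executive that swaps a low-fidelity fan model for a higher-fidelity one while the rest of the simulation is untouched. All class names, interfaces, and numbers are invented; NPSS itself couples real engine cycle codes with higher-fidelity component analyses across heterogeneous systems.

```python
# Hypothetical sketch of "zooming": the executive selects the fidelity of one component
# model (the fan) per run, keeping the surrounding simulation unchanged.

class LowFidelityFan:
    def pressure_ratio(self, corrected_speed):
        # A simple map fit stands in for a table-based cycle model.
        return 1.2 + 0.4 * corrected_speed

class HighFidelityFan:
    def pressure_ratio(self, corrected_speed):
        # Placeholder for an expensive meanline/CFD-style computation.
        return 1.2 + 0.4 * corrected_speed - 0.05 * corrected_speed ** 2

class SimulationExecutive:
    """Chooses the component fidelity ("zoom level") per run and reports the result."""

    def __init__(self):
        self.fans = {"low": LowFidelityFan(), "high": HighFidelityFan()}

    def run(self, corrected_speed, zoom="low"):
        pr = self.fans[zoom].pressure_ratio(corrected_speed)
        # A monitoring/control hook could inspect intermediate values here and decide,
        # e.g. via expert-system rules, to re-run the component at higher fidelity.
        return {"zoom": zoom, "fan_pressure_ratio": pr}

if __name__ == "__main__":
    exe = SimulationExecutive()
    print(exe.run(0.9, zoom="low"))
    print(exe.run(0.9, zoom="high"))
```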

    The Sidney Review Wed, April 28, 1982

    Get PDF

    Information Security

    Get PDF
    The proceedings contain the papers presented at the 59th scientific conference of postgraduate, master's, and undergraduate students of BSUIR. The materials were approved by the organizing committee and are published as submitted by the authors. Intended for researchers, engineering and technical specialists, lecturers, and postgraduate, master's, and undergraduate students of higher education institutions.

    Sustainable Building and Indoor Air Quality

    Get PDF
    This Special Issue addresses a topic of great contemporary relevance. In developed countries, most of people's time is spent indoors; depending on the person, time spent at home ranges from 60% to 90% of the day, and 30% of that time is spent sleeping. Given these data, indoor residential environments have a direct influence on human health. In addition, in developing countries, significant levels of indoor pollution make housing unsafe, with a detrimental impact on the health of inhabitants. Housing is therefore a key health factor for people all over the world, and various parameters such as air quality, ventilation, hygrothermal comfort, lighting, the physical environment, and building efficiency, among others, can contribute to healthy architecture and to avoiding the conditions that can result from the poor application of these parameters.

    Modelling Incremental Self-Repair Processing in Dialogue.

    Get PDF
    Self-repairs, where speakers repeat themselves, reformulate, or restart what they are saying, are pervasive in human dialogue. These phenomena provide a window into real-time human language processing. For explanatory adequacy, a model of dialogue must include mechanisms that account for them. Artificial dialogue agents also need this capability for more natural interaction with human users. This thesis investigates the structure of self-repair and its function in the incremental construction of meaning in interaction. A corpus study shows that the range of self-repairs seen in dialogue cannot be accounted for by looking at surface form alone. More particularly, it analyses a string-alignment approach, shows how it is insufficient, and provides requirements for a suitable model of incremental context and an ontology of self-repair function. An information-theoretic model is developed to address these issues, along with a system that automatically detects self-repairs and edit terms on transcripts incrementally with minimal latency, achieving state-of-the-art results; it is also shown to have practical use in the psychiatric domain. The thesis goes on to present a dialogue model that interprets and generates repaired utterances incrementally. When processing repaired rather than fluent utterances, it achieves the same degree of incremental interpretation and incremental representation. Practical implementation methods are presented for an existing dialogue system. Finally, a more pragmatically oriented approach is presented to model self-repairs in a psycholinguistically plausible way. This is achieved by extending the dialogue model to include a probabilistic semantic framework that performs incremental inference in a reference resolution domain. The thesis concludes that at least as fine-grained a model of context as word-by-word is required for realistic models of self-repair, and that context must include linguistic action sequences and information update effects. The way dialogue participants process self-repairs to make inferences in real time, rather than filtering out their disfluency effects, has been modelled formally and in practical systems. This work was supported by an Engineering and Physical Sciences Research Council (EPSRC) Doctoral Training Account (DTA) scholarship from the School of Electronic Engineering and Computer Science at Queen Mary University of London.
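
    As a toy illustration of word-by-word incremental processing of self-repairs (not the information-theoretic model developed in the thesis), the sketch below flags a candidate repair onset when a word repeats the immediately preceding word or follows an explicit edit term. The edit-term list and the heuristic are invented for illustration only.

```python
# Toy sketch only: flag a candidate self-repair onset when the incoming word repeats
# the previous word or follows an explicit edit term. This illustrates the strongly
# incremental (word-by-word) processing style, nothing more.

EDIT_TERMS = {"uh", "um", "er", "sorry"}

def detect_repairs(words):
    """Yield (index, word, is_repair_onset) incrementally, one word at a time."""
    prev = None
    pending_edit = False
    for i, w in enumerate(words):
        lw = w.lower()
        onset = False
        if lw in EDIT_TERMS:
            pending_edit = True      # an edit term often signals that a repair follows
        elif pending_edit or lw == prev:
            onset = True             # repeat or post-edit-term word: candidate repair onset
            pending_edit = False
        prev = lw
        yield i, w, onset

if __name__ == "__main__":
    utterance = "I want to go to the uh to the red no the blue one".split()
    for i, w, onset in detect_repairs(utterance):
        print(f"{i:2d} {w:6s} {'<-- repair onset' if onset else ''}")
```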

    GPU Array Access Auto-Tuning

    Get PDF
    GPUs have been used for years in compute-intensive applications. Their massively parallel processing capabilities can speed up calculations significantly. However, to leverage this speedup it is necessary to rethink and develop new algorithms that allow parallel processing. These algorithms are only one piece of achieving high performance. Nearly as important as suitable algorithms is the actual implementation and the use of special hardware features such as intra-warp communication, shared memory, caches, and memory access patterns. Optimizing these factors is usually a time-consuming task that requires a deep understanding of the algorithms and the underlying hardware. Unlike CPUs, the internal structure of GPUs has changed significantly and will likely change even more over the years. Therefore it does not suffice to optimize the code once during development; it has to be optimized for each new GPU generation that is released. To efficiently (re-)optimize code for the underlying hardware, auto-tuning tools have been developed that perform these optimizations automatically, taking this burden from the programmer. In particular, NVIDIA -- the leading GPU manufacturer today -- has applied significant changes to the memory hierarchy over the last four hardware generations. This makes the memory hierarchy an attractive objective for an auto-tuner. In this thesis we introduce the MATOG auto-tuner, which automatically optimizes array access for NVIDIA CUDA applications. To achieve these optimizations, MATOG has to analyze the application to determine optimal parameter values. The analysis relies on empirical profiling combined with a prediction method and a data post-processing step, which allows nearly optimal parameter values to be found in a minimal amount of time. Further, MATOG is able to automatically detect varying application workloads and can apply different optimization parameter settings at runtime. To show MATOG's capabilities, we evaluated it on a variety of applications, ranging from simple algorithms to complex applications, on the last four hardware generations, with a total of 14 GPUs. MATOG is able to achieve equal or even better performance than hand-optimized code. Further, it provides performance portability across different GPU types (low-, mid-, high-end, and HPC) and generations. In some cases it is able to exceed the performance of hand-crafted code that has been specifically optimized for the tested GPU by dynamically changing data layouts throughout the execution.
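
    As a stand-in for the profile-and-select idea behind such an auto-tuner (not MATOG itself), the sketch below times two array-layout candidates, array-of-structs versus struct-of-arrays, for the same field access and picks the faster one. MATOG performs this for CUDA array accesses and adds a prediction step to prune the search space; here plain NumPy on the CPU only demonstrates the empirical selection loop, and all names and sizes are invented.

```python
# Illustrative stand-in (not MATOG): empirically time two array-layout candidates for
# the same access pattern and select the faster one.

import time
import numpy as np

N = 1_000_000

def run_aos():
    # Array of structs: interleaved fields, strided access to a single field.
    aos = np.zeros(N, dtype=[("x", np.float32), ("y", np.float32), ("z", np.float32)])
    t0 = time.perf_counter()
    _ = aos["x"].sum()          # touches every record to read one field
    return time.perf_counter() - t0

def run_soa():
    # Struct of arrays: each field contiguous, ideal for coalesced/vectorised reads.
    x = np.zeros(N, dtype=np.float32)
    t0 = time.perf_counter()
    _ = x.sum()
    return time.perf_counter() - t0

def autotune(candidates, repeats=5):
    """Return the layout whose median measured runtime is lowest, plus all timings."""
    timings = {name: sorted(fn() for _ in range(repeats))[repeats // 2]
               for name, fn in candidates.items()}
    return min(timings, key=timings.get), timings

if __name__ == "__main__":
    best, timings = autotune({"AoS": run_aos, "SoA": run_soa})
    print("timings:", {k: f"{v * 1e3:.2f} ms" for k, v in timings.items()})
    print("selected layout:", best)
```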

    Health allies in the prevention of obesity: the adoption of the Sugar Tax and Front-of-Package food labelling systems in Mexico

    Get PDF
    How experts and evidence influence policy change is a point of debate in the literature. It is argued that the theorisation of experts' roles and of knowledge utilisation has occurred in separate silos, failing to address experts' influence on policy change. Two policy process theories focused on policy networks, the Advocacy Coalition Framework (ACF) and Epistemic Communities (ECs), stress the importance of evidence and of experts' role in policy change. They suggest that experts are influential in times of uncertainty (ECs) and can create networks (coalitions) with actors with similar beliefs to influence the adoption of policies that mirror their preferences (ACF). Yet how networks are formed, how they influence policy change in contested policy areas, and whether networks are maintained post-policy adoption remain unclear. This research aims to study networks over time to address the identified theoretical gaps. This objective is addressed through an instrumental multiple-case study that analyses Mexico's Sugar Tax and Front-of-Package food labelling (FOPL) systems policy developments. It draws on 32 semi-structured stakeholder interviews and complementary documentary materials to explain the configuration and influence of networks of experts based on the ACF and ECs, and it explores networks over time using qualitative Social Network Analysis. My results suggest that health experts formed a network with a broader set of actors to influence the adoption of the Sugar Tax. However, in contrast to what theory indicates, the network's formation beyond shared values and beliefs was enabled by resources provided by an external donor. Networks were also identified in the FOPL systems case. Despite the high level of conflict in the policy area, the influence of experts and the uses of evidence varied between cases. Regarding networks over time, the study finds that, at the organisational level, networks remain active in the policy process after policy adoption.