
    Context-Aware Framework for Performance Tuning via Multi-action Evaluation

    Context-aware systems perform adaptive changes in several ways. One way is for system developers to anticipate all possible context changes and embed them into the application. However, this does not suit situations where the system encounters unknown contexts. In such cases, system inference and adaptive learning are used: the system executes one action, evaluates the outcome, and self-adapts and self-learns based on the result. Unfortunately, this iterative approach is time-consuming when a high number of actions needs to be evaluated. By contrast, our framework for context-aware systems finds the best action for an unknown context through concurrent multi-action evaluation and self-adaptation, which significantly reduces the evolution time in comparison to the iterative approach. In our implementation we show how the context-aware multi-action system can be applied to database performance tuning.
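
    A minimal sketch of the concurrent multi-action idea, assuming a hypothetical evaluate_action() that applies a candidate tuning action to a sandboxed replica and returns a performance score (simulated here); the action names are placeholders, not the paper's actual action set.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def evaluate_action(action):
    # Stand-in for applying `action` (e.g., an index or buffer-pool change)
    # to a sandboxed replica and benchmarking it; the score is simulated.
    return random.random()

def best_action(candidate_actions):
    # Evaluate every candidate concurrently instead of one per iteration,
    # then self-adapt to the highest-scoring action.
    with ThreadPoolExecutor() as pool:
        scores = pool.map(evaluate_action, candidate_actions)
    return max(zip(scores, candidate_actions))[1]

print(best_action(["add_index", "resize_buffer", "reorder_joins"]))
```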

    Proposal of a health care network based on big data analytics for PDs

    Health care networks for Parkinson's disease (PD) have already been proposed in the literature, but most of them are unable to analyse the vast volume of data generated by medical examinations and collected and organised in a pre-defined manner. In this work, the authors propose a novel health care network based on big data analytics for PD. The main goal of the proposed architecture is to support clinicians in the objective assessment of typical PD motor issues and alterations. The proposed health care network retrieves a vast volume of heterogeneous data from a data warehouse and trains an ensemble SVM to classify and rate the motor severity of a PD patient. Once trained, the network can analyse the data collected during a PD patient's motor examinations and generate a diagnostic report on the basis of the previously acquired knowledge. Such a report serves both to monitor the follow-up of the disease for each patient and to give clinicians robust advice about its severity.
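
    A minimal sketch of the ensemble-SVM rating step with scikit-learn (the estimator= keyword assumes scikit-learn 1.2 or later); the synthetic features below stand in for data retrieved from the data warehouse and are not the authors' dataset.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder motor-examination features and 3-level severity labels.
X, y = make_classification(n_samples=300, n_features=12, n_classes=3,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Ensemble of SVMs: each base SVC trains on a bootstrap sample and the
# ensemble votes on the motor-severity class.
model = BaggingClassifier(estimator=SVC(kernel="rbf"), n_estimators=10,
                          random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```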

    Development of a control strategy to compensate transient behaviour due to atmospheric disturbances in solar thermal energy generation systems using short-time prediction data

    Concentrated solar power (CSP) is a promising form of renewable energy that can harness the sun's energy and help replace fossil fuels in electricity generation. However, it faces challenges to increasing its deployment worldwide. Solar towers, one type of CSP technology, consist mainly of a solar field and a tower in which a receiver acts as a heat exchanger to feed a power block. The solar field is made up of thousands of heliostats, mirrors capable of tracking the sun and projecting concentrated sunlight onto the receiver. Solar towers with thermal storage operate continuously, but they are subject to disturbances caused by the interaction of sunlight with the atmosphere, and this behaviour can affect the integrity of the receiver. Complex optimisation methods are used to determine the position of each heliostat; however, these methods are subject to parameter uncertainty and, because of their computational cost, cannot compensate for real-time disturbances such as clouds. This thesis addresses the issue as a control problem by reducing the number of variables: instead of finding the elevation and azimuth angles for thousands of heliostats, two variables are used within groups of heliostats. A feedback control strategy is then implemented, taking advantage of this dimensional reduction. In addition, the methodology developed in this thesis uses information from a state-of-the-art short-term solar radiation prediction system within a novel adaptive control strategy for the solar field.
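
    A minimal sketch of the dimensional-reduction idea: one aiming variable per heliostat group, adjusted by a simple integral-style feedback correction toward a receiver-flux setpoint. The gain, setpoint, and flux values are illustrative placeholders, not the controller developed in the thesis.

```python
def control_step(groups, flux_setpoint, gain=0.05):
    # One feedback iteration: nudge each group's single aiming variable
    # toward the receiver-flux setpoint instead of re-optimizing the
    # elevation/azimuth of every individual heliostat.
    for g in groups:
        error = flux_setpoint - g["measured_flux"]
        g["aim_offset"] += gain * error
    return groups

groups = [{"measured_flux": 950.0, "aim_offset": 0.0},
          {"measured_flux": 1020.0, "aim_offset": 0.0}]
print(control_step(groups, flux_setpoint=1000.0))
```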

    Comparison of PostgreSQL & Oracle Database

    The goal of this thesis is to compare the practical side of two well-known database management systems, PostgreSQL and Oracle, with respect to both commercial considerations and technical features. Because both play an essential role in the field of database management systems, a complete comparison of the two attracts broad attention from diverse users. Alongside an analysis of the commercial market, the thesis dissects the detailed technical functionality of the two databases, giving readers a rich view of their usability from a broad vantage point and a forward-looking view of their future. No thesis on this theme exists yet in the Theseus electronic library, so this work fills a gap. Database management system concepts and issues run through the whole thesis and serve as analytical pointers for readers. The empirical study is based on collecting and comparing data gathered from Internet sources and reference guides, together with the outcomes of specific tests run on different platforms; the results are used to classify the differences between Oracle and PostgreSQL. Communication with administrators also plays a key role as a source of input. There are limitations, however: private statistics and the vendors' confidential core technology were inaccessible, and the analysis tooling was not conclusive enough to support deductions that meet rigorous scientific standards. Additional observations and tests are advised if the thesis results are to be used academically.
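
    A minimal sketch of one practical comparison of the kind described: timing an identical query against both systems from Python, using the psycopg2 and python-oracledb drivers. The connection settings and the query are placeholders, and both servers are assumed to be reachable; this is not the thesis's actual test harness.

```python
import time
import psycopg2     # pip install psycopg2-binary
import oracledb     # pip install oracledb

QUERY = "SELECT count(*) FROM orders"   # placeholder benchmark query

def time_query(conn):
    # Run the query once and return the elapsed wall-clock time.
    start = time.perf_counter()
    with conn.cursor() as cur:
        cur.execute(QUERY)
        cur.fetchall()
    return time.perf_counter() - start

pg = psycopg2.connect("dbname=test user=bench")            # placeholder DSN
ora = oracledb.connect(user="bench", password="secret",    # placeholder creds
                       dsn="localhost/XEPDB1")
print("PostgreSQL:", time_query(pg), "s")
print("Oracle:    ", time_query(ora), "s")
```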

    Evaluating Machine Learning Techniques for Smart Home Device Classification

    Smart devices in the Internet of Things (IoT) have transformed the management of personal and industrial spaces. Leveraging inexpensive computing, smart devices enable remote sensing and automated control over a diverse range of processes. Even as IoT devices provide numerous benefits, it is vital that their emerging security implications are studied. IoT device design typically focuses on cost efficiency and time to market, leading to limited built-in encryption, questionable supply chains, and poor data security. In a 2017 report, the United States Government Accountability Office recommended that the Department of Defense investigate the risks IoT devices pose to operations security, information leakage, and endangerment of senior leaders [1]. Recent research has shown that it is possible to model a subject's pattern-of-life through data leakage from Bluetooth Low Energy (BLE) and Wi-Fi smart home devices [2]. A key step in establishing pattern-of-life is the identification of the device types within the smart home. Device type is defined as the functional purpose of the IoT device, e.g., camera, lock, or plug. This research hypothesizes that machine learning algorithms can accurately classify smart home devices. To test this hypothesis, a Smart Home Environment (SHE) is built using a variety of commercially available BLE and Wi-Fi devices. SHE produces actual smart device traffic that is used to create a dataset for machine learning classification. Six device types are included in SHE: door sensors, locks, and temperature sensors using BLE, and smart bulbs, cameras, and smart plugs using Wi-Fi. In addition, a device classification pipeline (DCP) is designed to collect and preprocess the wireless traffic, extract features, and produce tuned models for testing. K-nearest neighbors (KNN), linear discriminant analysis (LDA), and random forest (RF) classifiers are built and tuned for experimental testing. During the experiment, the classifiers are tested on their ability to distinguish device types in a multiclass classification scheme. Classifier performance is evaluated using the Matthews correlation coefficient (MCC), mean recall, and mean precision. Using all available features, the classifier with the best overall performance is KNN, which identified BLE device types with an MCC of 0.55, a mean precision of 54%, and a mean recall of 64%, and Wi-Fi device types with an MCC of 0.71, a mean precision of 81%, and a mean recall of 81%. The experimental results support the hypothesis that machine learning can classify IoT device types to a high level of performance, but more work is needed to build a more robust classifier.
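
    A minimal sketch of the classification-and-evaluation step with scikit-learn, assuming a feature matrix extracted from captured BLE/Wi-Fi traffic; synthetic data stands in for the SHE dataset, and only the KNN model (the best performer above) is shown.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import matthews_corrcoef, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Placeholder traffic features for the six device-type classes.
X, y = make_classification(n_samples=600, n_features=20, n_classes=6,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
pred = knn.predict(X_te)

# The same three metrics used in the study: MCC, mean precision, mean recall.
print("MCC:           ", matthews_corrcoef(y_te, pred))
print("mean precision:", precision_score(y_te, pred, average="macro"))
print("mean recall:   ", recall_score(y_te, pred, average="macro"))
```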

    Data Driven Energy Efficiency Strategies for Commercial Buildings Using Occupancy Information

    Most building automation systems operate with settings based on design assumptions, with fixed operational schedules and fixed occupancy, when in fact both schedules and occupancy levels vary dynamically. In particular, the heating, ventilation, and air conditioning (HVAC) system provides a minimum ventilation airflow calculated for the maximum room capacity, when rooms are rarely fully occupied. Energy is wasted by over-supplying and conditioning air that is not required, which also leads to thermal discomfort. In higher educational institutions, where classroom occupancy goals range from 60% to 80% of maximum capacity, the potential savings are substantial. Existing occupancy and schedule information from academic registration can be integrated with facility data and the building automation system, allowing dynamic resetting of the controllers. This dissertation provides a methodology to reduce HVAC energy consumption by using occupancy information from the academic registrar. The methodology integrates three energy conservation strategies: shortening schedules, modifying thermostat settings, and reducing the minimum airflow. Analysis of the proposed solution includes an economic benefit estimation at the campus level, validated through an experimental study performed on a LEED Platinum building. The experiment achieved electricity savings of 39% and natural gas savings of 31% for classroom air-conditioning consumption. Extending these savings to the campus level yields 164 MWh of electricity savings per year, 48 MMBtu of natural gas savings per year, a 35.16 MTCO2 reduction in greenhouse gas emissions per year, and approximately $20k in economic savings per year.
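
    A minimal sketch of the minimum-airflow reset, using the standard ASHRAE 62.1 breathing-zone formula V_bz = Rp*Pz + Ra*Az. The rate values are illustrative (consult the 62.1 tables for a real zone), and the registrar-driven occupancy input is an assumed interface, not the dissertation's actual controller integration.

```python
RP = 10.0   # illustrative per-person outdoor air rate, cfm/person
RA = 0.12   # illustrative per-area outdoor air rate, cfm/ft^2

def min_airflow(scheduled_occupants, zone_area_ft2):
    # Breathing-zone outdoor airflow V_bz = Rp*Pz + Ra*Az, reset to the
    # registrar-scheduled occupancy instead of the design maximum.
    return RP * scheduled_occupants + RA * zone_area_ft2

# A 40-seat classroom scheduled for 20 students needs substantially less
# ventilation than at design occupancy:
print(min_airflow(scheduled_occupants=20, zone_area_ft2=900))   # 308.0 cfm
print(min_airflow(scheduled_occupants=40, zone_area_ft2=900))   # 508.0 cfm
```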

    SLA-Based Performance Tuning Techniques for Cloud Databases

    Today, cloud databases are widely used in many applications. The pay-per-use model of cloud databases enables on-demand access to reliable and configurable services (CPU, storage, networks, and software) that can be quickly provisioned and released with minimal management and cost for different categories of users (also called tenants). There is no need for users to set up the infrastructure or buy the software. Users without a related technical background can easily manage the cloud database through the console provided by the service provider, and they pay the provider only for the services they use, through a service level agreement (SLA) that specifies the performance requirements and the pricing associated with the leased services. However, due to the resource-sharing structure of the cloud, different tenants' workloads compete for computing resources. This affects tenants' performance, especially during workload peaks, so it is important for cloud database service providers to develop techniques that can tune the database in order to re-guarantee the SLA when a tenant's SLA is violated. In this dissertation, two algorithms are presented to improve cloud database performance in a multi-tenancy environment. The first is a memory buffer management algorithm called SLA-LRU and the second is a vertical database partitioning algorithm called AutoClustC. SLA-LRU takes the SLA, a buffer page's frequency, a buffer page's recency, and a buffer page's value into account when performing buffer page replacement. The value of a buffer page represents the removal cost of that page and can be computed using the corresponding tenant's SLA penalty function. Only buffer pages whose tenants have the least SLA penalty cost increment are considered by the SLA-LRU algorithm when a buffer page replacement takes place. AutoClustC estimates the tuning cost of resource provisioning and of database partitioning, then selects the most cost-saving tuning method. If database partitioning is selected, the algorithm uses data mining to identify database partitions that are frequently accessed together and re-partitions the database accordingly. The algorithm then distributes the resulting partitions to the standby physical machines (PMs) with the least overload score, computed from both the PMs' communication cost and their overload status. Comprehensive experiments were conducted to study the performance of SLA-LRU and AutoClustC using the TPC-H benchmark on both a public cloud (Amazon RDS) and a private cloud. The results show that SLA-LRU gives the best overall performance in terms of query response time and SLA penalty cost improvement ratio compared to existing memory buffer management algorithms, and that AutoClustC identifies the most cost-saving tuning method, between resource provisioning and database partitioning, with high accuracy, and performs database re-partitioning dynamically to provide better query response time than the current partitioning configuration.
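
    A minimal sketch of the SLA-LRU idea: eviction weighs a tenant's marginal SLA penalty as well as recency. The penalty function, the size of the LRU-tail candidate window, and the page metadata are illustrative assumptions, not the dissertation's exact formulation.

```python
from collections import OrderedDict

class SLALRUBuffer:
    def __init__(self, capacity, penalty_fn):
        self.capacity = capacity
        self.penalty_fn = penalty_fn   # tenant -> marginal SLA penalty cost
        self.pages = OrderedDict()     # page_id -> tenant, oldest first

    def access(self, page_id, tenant):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)   # refresh recency, LRU-style
        else:
            if len(self.pages) >= self.capacity:
                # Among the least-recently-used pages, evict the one whose
                # tenant would incur the smallest SLA penalty increment.
                tail = list(self.pages.items())[:4]   # illustrative window
                victim = min(tail, key=lambda kv: self.penalty_fn(kv[1]))[0]
                del self.pages[victim]
            self.pages[page_id] = tenant

buf = SLALRUBuffer(capacity=3, penalty_fn={"t1": 1.0, "t2": 5.0}.get)
for page_id, tenant in [(1, "t1"), (2, "t2"), (3, "t1"), (4, "t2")]:
    buf.access(page_id, tenant)
print(list(buf.pages))   # page 1 (cheap-penalty tenant) was evicted
```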

    Application of mainstream object relational database to real time database applications in industrial automation

    This thesis examines the proposition that, because of recent huge increases in processing power and in disk and memory capacities, commercial mainstream object-relational databases may now be a viable option to replace dedicated real-time databases in industrial automation. The benefits are lower product cost, greater availability of trained manpower for development and maintenance, and lower risk due to a larger installed base and a larger number of supported platforms. The issues considered in testing this proposition were performance, the ability to mimic critical real-time database features, replication of the real-time database's application development and administration tools, and finally the low-overhead, high-speed real-time data compression facility available in real-time databases. An efficient yet simple real-time compression algorithm was developed for use with relational databases and benchmarked. Extensive comparative benchmarking was done to test the proposition convincingly. The results overwhelmingly show that, for a majority of industrial real-time database applications, the performance offered by a commercial object-relational database on a current platform is more than adequate.
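
    A minimal sketch of the class of algorithm involved, using simple deadband filtering common in industrial historians (store a sample only when it deviates from the last stored value by more than a tolerance); the thesis's actual compression algorithm is not reproduced here.

```python
def deadband_compress(samples, tolerance):
    # samples: list of (timestamp, value); keep a point only when it moves
    # more than `tolerance` away from the last stored value.
    stored = [samples[0]]
    for t, v in samples[1:]:
        if abs(v - stored[-1][1]) > tolerance:
            stored.append((t, v))
    return stored

readings = [(0, 20.0), (1, 20.05), (2, 20.4), (3, 20.42), (4, 19.7)]
print(deadband_compress(readings, tolerance=0.25))
# [(0, 20.0), (2, 20.4), (4, 19.7)] -- 5 raw points stored as 3
```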

    Memory Power Consumption in Main-Memory Database Systems

    In main-memory database systems, memory can consume a substantial amount of power, comparable to that of the processors. However, existing memory power-saving mechanisms are much less effective than processor power management. Unless the system is almost idle, memory power consumption remains high. The reason for poor memory power proportionality is that the bulk of memory power consumption is attributable to background power, which is determined by memory power state residency. The memory workload in existing systems is evenly distributed over the memory modules and also in time, which precludes long idle intervals. As a result, deep low-power states, which could significantly reduce background power consumption, are rarely entered. In this work, we aim to reduce the memory power consumption of main-memory database systems. We start by investigating and explaining the patterns of memory power consumption under various workloads. We then propose two techniques, implemented at the database system level, that skew memory traffic, creating long periods of idleness in a subset of memory modules. This allows those modules to enter low-power states, reducing overall memory power consumption. We prototyped these techniques in DimmStore, an experimental database system. The first technique is rate-aware data placement, which places data on memory modules according to its access frequency. The background power in the unused or least-used modules is reduced, without affecting background power in the most-used modules. Rate-aware placement saves power and has little performance impact. Under a TPC-C workload, rate-aware placement resulted in memory power savings of up to 44%, with a maximum throughput reduction of 10%. The second technique is memory access gating, which targets background power in less frequently accessed memory modules by inserting periodic idle intervals. Memory gating reduces the power consumption of memory modules for which rate-aware placement alone does not create sufficient idleness. With gating, memory accesses to these modules become concentrated outside of the idle intervals, creating the opportunity to use low-power states. However, because it delays memory accesses, memory gating impacts performance. Higher memory power savings and lower performance impact occur in workloads with lower memory access rates. Thus, in the YCSB workload at a medium transaction rate, memory gating reduced memory power by 26% while adding 0.25 ms (30%) of transaction latency, compared to DimmStore without gating. In the more memory-intensive TPC-C workload at low to medium transaction rates, gating can save 5% of memory power, adding 1.5 ms (60%) of transaction latency, compared to DimmStore without gating.
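
    A minimal sketch of the rate-aware placement idea: sort records by observed access frequency and pack the hottest ones into the fewest modules, so the remaining modules stay idle long enough to enter low-power states. The module capacity and access counters are illustrative, not DimmStore's actual layout.

```python
def rate_aware_placement(access_counts, module_capacity):
    # access_counts: record_id -> observed access frequency.
    # Pack hottest records first so traffic is skewed toward module 0.
    hot_first = sorted(access_counts, key=access_counts.get, reverse=True)
    modules = [hot_first[i:i + module_capacity]
               for i in range(0, len(hot_first), module_capacity)]
    return modules   # later modules see little traffic and can idle/gate

counts = {"a": 900, "b": 850, "c": 40, "d": 5, "e": 2, "f": 1}
for i, contents in enumerate(rate_aware_placement(counts, module_capacity=2)):
    print(f"module {i}: {contents}")
```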