1,711 research outputs found

    Mitigating Anomalous Electricity Consumption in Smart Cities Using an AI-Based Stacked-Generalization Technique

    Get PDF
    Energy management and efficient asset utilization play an important role in the economic development of a country. The electricity produced at the power station faces two types of losses from the generation point to the end user: technical losses (TL) and non-technical losses (NTL). TLs occur due to the use of inefficient equipment, while NTLs occur due to anomalous consumption of electricity by customers, which happens in many ways, energy theft being one of them. Energy theft is committed mainly to cut down on electricity bills. These losses are a main obstacle to maintaining stability in the smart grid (SG) and cause revenue loss to the utility. The automatic metering infrastructure (AMI) system has reduced grid instability, but it has opened up new avenues for NTLs in the form of different cyber-physical theft attacks (CPTA). Machine learning (ML) techniques can be used to detect and minimize CPTA. However, they have certain limitations and cannot capture the energy consumption patterns (ECPs) of all users, which decreases their performance in detecting malicious users. In this paper, we propose a novel ML-based stacked generalization method for the cyber-physical theft issue in the smart grid. The original data obtained from the grid is preprocessed to improve model training and processing. This includes NaN imputation, normalization, outliers' capping, support vector machine-synthetic minority oversampling technique (SVM-SMOTE) balancing, and principal component analysis (PCA) based data reduction. The pre-processed dataset is provided to the ML base models, light gradient boosting (LGB), extra trees (ET), extreme gradient boosting (XGBoost), and random forest (RF), to accurately capture the overall ECP of all consumers. The predictions from these base models are fed to a meta-classifier, a multi-layer perceptron (MLP), which combines the learning capability of all the base models and gives an improved final prediction. The proposed structure is implemented and verified on the publicly available real-time large dataset of the State Grid Corporation of China (SGCC). The proposed model outperformed the individual base classifiers and existing research in terms of CPTA detection, with false positive rate (FPR), false negative rate (FNR), F1-score, and accuracy values of 0.72%, 2.05%, 97.6%, and 97.69%, respectively.
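    A minimal sketch of the kind of pipeline described above, assuming scikit-learn, imbalanced-learn, LightGBM, and XGBoost; `X` and `y` are placeholders for preprocessed consumption features and theft labels, and the hyperparameters are not the authors' settings:

```python
# Sketch of the described stacking pipeline (not the authors' exact setup):
# SVM-SMOTE balancing and PCA reduction, then LGB/ET/XGBoost/RF base learners
# combined by an MLP meta-classifier.
from imblearn.over_sampling import SVMSMOTE
from sklearn.decomposition import PCA
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier, StackingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier

def build_stack():
    base_models = [
        ("lgb", LGBMClassifier()),
        ("et", ExtraTreesClassifier()),
        ("xgb", XGBClassifier(eval_metric="logloss")),
        ("rf", RandomForestClassifier()),
    ]
    # Base-model predictions become the inputs of the MLP meta-classifier.
    return StackingClassifier(estimators=base_models,
                              final_estimator=MLPClassifier(max_iter=500),
                              cv=5)

def fit_pipeline(X, y):
    # X, y: preprocessed consumption features and theft labels (hypothetical names).
    X = MinMaxScaler().fit_transform(X)                  # normalization
    X, y = SVMSMOTE(random_state=0).fit_resample(X, y)   # class balancing
    X = PCA(n_components=0.95).fit_transform(X)          # PCA-based reduction
    return build_stack().fit(X, y)
```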

    Exploring Malaysian University Students’ Awareness of Green Computing

    Get PDF
    This study explored Malaysian university students' awareness of green computing in two aspects, i.e. vocabulary and issues, and sought to ascertain whether these two aspects were influenced by gender and field of study (ICT versus non-ICT). A total of 224 university students from ICT- and non-ICT-related fields participated in the survey. Students filled out a green computing questionnaire with 21 items measuring awareness of vocabulary and issues. Descriptive statistics, an independent-samples t-test and Principal Components Analysis (PCA) were used to analyze the data. Results show that a majority of students lacked awareness of terms, ideas and issues central to green computing, such as E-PEAT, Energy Star, green PC, the Malaysia Green Technology Policy, e-waste, and carbon-free computing. The PCA extracted two factors, named Environmental Protection and Nature of Computers, that could be used to explain students' lack of familiarity with green ICT. Field of study was shown to impact awareness in all the aspects measured, in favor of students educated in ICT-related fields, but the findings produced mixed gender effects. The results indicate the need for green computing education to be integrated into the higher education curriculum and for university-led green initiatives to be implemented on Malaysian university campuses to increase awareness of the subject matter.
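    A rough illustration of the reported analysis steps (descriptive statistics, an independent-samples t-test by field of study, and a two-component PCA), assuming a hypothetical pandas data frame `df` with 21 `item_*` columns and a `field` column; this is not the study's own code:

```python
# Illustrative sketch of the survey analysis on a hypothetical data frame `df`.
import pandas as pd
from scipy.stats import ttest_ind
from sklearn.decomposition import PCA

def analyze(df: pd.DataFrame):
    items = [c for c in df.columns if c.startswith("item_")]
    # Descriptive statistics per questionnaire item.
    print(df[items].describe())
    # Independent-samples t-test: ICT vs. non-ICT mean awareness scores.
    ict = df.loc[df["field"] == "ICT", items].mean(axis=1)
    non_ict = df.loc[df["field"] == "non-ICT", items].mean(axis=1)
    print(ttest_ind(ict, non_ict, equal_var=False))
    # PCA to extract underlying factors (the study retained two components).
    pca = PCA(n_components=2).fit(df[items])
    print(pca.explained_variance_ratio_)
```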

    Power Bounded Computing on Current & Emerging HPC Systems

    Get PDF
    Power has become a critical constraint for the evolution of large-scale High Performance Computing (HPC) systems and commercial data centers. This constraint spans almost every level of computing technology, from IC chips all the way up to data centers, for physical, technical, and economic reasons. To cope with this reality, it is necessary to understand how available or permissible power impacts the design and performance of emergent computer systems. For this reason, we propose power bounded computing and corresponding technologies to optimize performance on HPC systems with limited power budgets. We have multiple research objectives in this dissertation. They center on understanding the interaction between performance, power bounds, and a hierarchical power management strategy. First, we develop heuristics and application-aware power allocation methods to improve application performance on a single node. Second, we develop algorithms to coordinate power across nodes and components based on application characteristics and the power budget of a cluster. Third, we investigate performance interference induced by hardware and power contention, and propose contention-aware job scheduling to maximize system throughput under given power budgets for node-sharing systems. Fourth, we extend to GPU-accelerated systems and workloads and develop an online dynamic performance and power management approach to meet both performance requirements and power-efficiency goals. Power bounded computing improves performance scalability and power efficiency and decreases the operating costs of HPC systems and data centers. This dissertation opens up several new ways for research in power bounded computing to address the power challenges in HPC systems. The proposed power and resource management techniques provide new directions and guidelines for green exascale computing and other computing systems.
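    As a rough illustration of application-aware power allocation under a node-level budget, the sketch below uses a generic proportional heuristic; the component names, cap ranges, and sensitivity weights are invented for the example and are not the dissertation's algorithm:

```python
# Illustrative budget-partitioning heuristic: give each component its minimum cap,
# then distribute the remaining watts in proportion to performance sensitivity.
def allocate_power(budget_w, components):
    """components: dict name -> (min_cap_w, max_cap_w, sensitivity),
    where sensitivity estimates how strongly performance scales with power."""
    caps = {name: lo for name, (lo, hi, s) in components.items()}
    spare = budget_w - sum(caps.values())
    if spare < 0:
        raise ValueError("budget below the sum of minimum caps")
    total_s = sum(s for _, _, s in components.values()) or 1.0
    for name, (lo, hi, s) in components.items():
        # Never exceed a component's maximum cap.
        caps[name] = min(hi, lo + spare * s / total_s)
    return caps

# Example: split a 200 W node budget between CPU and GPU (made-up numbers).
print(allocate_power(200, {"cpu": (60, 120, 0.3), "gpu": (75, 250, 0.7)}))
```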

    Power-Aware Job Dispatching in High Performance Computing Systems

    Get PDF
    This work deals with the power-aware job dispatching problem in supercomputers; broadly speaking, dispatching consists of assigning finite-capacity resources to a set of activities, with a special concern for power- and energy-efficient solutions. We introduce novel optimization approaches to address its multiple aspects. The proposed techniques have a broad application range but are aimed at applications in the field of High Performance Computing (HPC) systems. Devising a power-aware HPC job dispatcher is a complex task, in which contrasting goals must be satisfied. Furthermore, the online nature of the problem requires that solutions be computed in real time within stringent limits. This aspect has historically discouraged the use of exact methods and favoured instead the adoption of heuristic techniques. The application of optimization approaches to the dispatching task is still a largely unexplored area of research and can drastically improve the performance of HPC systems. In this work we tackle the job dispatching problem on a real HPC machine, the Eurora supercomputer hosted at the Cineca research center in Bologna. We propose a Constraint Programming (CP) model that outperforms the dispatching software currently in use. An essential element for taking power-aware decisions during the job dispatching phase is the ability to estimate jobs' power consumption before their execution. To this end, we applied Machine Learning techniques to create a prediction model that was trained and tested on the Eurora supercomputer, showing high prediction accuracy. Finally, we develop a power-aware solution for the same target machine, devising different approaches to solve the dispatching problem while keeping the power consumption of the whole system under a given threshold. We propose a heuristic technique and a CP/heuristic hybrid method, both able to solve practical-size instances and to outperform current state-of-the-art techniques.
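    A minimal sketch of casting power-capped dispatching as a constraint model, written here with Google OR-Tools CP-SAT rather than the thesis's CP formulation; the job list, node capacities, and `cap_w` budget are invented for the example:

```python
# Power-aware dispatching sketch: place jobs on nodes (or defer them) so that
# per-node core capacities and a system-wide power cap are respected.
from ortools.sat.python import cp_model

def dispatch(jobs, nodes, cap_w):
    """jobs: list of (cores, power_w); nodes: list of core capacities.
    Returns, per job, the chosen node index or None if the job is deferred."""
    m = cp_model.CpModel()
    x = [[m.NewBoolVar(f"x_{j}_{n}") for n in range(len(nodes))]
         for j in range(len(jobs))]
    for j in range(len(jobs)):
        m.Add(sum(x[j]) <= 1)                      # run now on one node, or defer
    for n, cores in enumerate(nodes):              # per-node core capacity
        m.Add(sum(jobs[j][0] * x[j][n] for j in range(len(jobs))) <= cores)
    m.Add(sum(jobs[j][1] * x[j][n]                 # system-wide power budget
              for j in range(len(jobs)) for n in range(len(nodes))) <= cap_w)
    m.Maximize(sum(x[j][n] for j in range(len(jobs)) for n in range(len(nodes))))
    solver = cp_model.CpSolver()
    solver.Solve(m)
    return [next((n for n in range(len(nodes)) if solver.Value(x[j][n])), None)
            for j in range(len(jobs))]

print(dispatch(jobs=[(4, 120), (8, 200), (2, 90)], nodes=[16, 16], cap_w=320))
```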

    Improving data center efficiency through smart grid integration and intelligent analytics

    Full text link
    The ever-increasing growth of demand for IT computing, storage and large-scale cloud services leads to the proliferation of data centers that consist of (tens of) thousands of servers. As a result, data centers are now among the largest electricity consumers worldwide. Data center energy and resource efficiency has started to receive significant attention due to its economic, environmental, and performance impacts. In tandem, facing increasing challenges in stabilizing the power grids due to the growing need to integrate intermittent renewable energy, power market operators have started to offer a number of demand response (DR) opportunities for energy consumers (such as data centers) to receive credits by modulating their power consumption dynamically following specific requirements. This dissertation claims that data centers have strong capabilities to emerge as major enablers of substantial electricity integration from renewables. The participation of data centers in emerging DR programs, such as regulation service reserves (RSRs), enables the growth of data centers in a sustainable, environmentally neutral, or even beneficial way, while also significantly reducing data center electricity costs. In this dissertation, we first model data center participation in DR, and then propose runtime policies to dynamically modulate data center power in response to independent system operator (ISO) requests, leveraging advanced server power and workload management techniques. We also propose energy and reserve bidding strategies to minimize the data center energy cost. Our results demonstrate that a typical data center can achieve up to 44% monetary savings in its electricity cost with RSR provision, dramatically surpassing the savings achieved by traditional energy management strategies. In addition, we investigate the capabilities and benefits of various types of energy storage devices (ESDs) in DR. Finally, we demonstrate RSR provision in practice on a real server. In addition to its contributions to improving data center energy efficiency, this dissertation also proposes a novel method to address data center management efficiency. We propose an intelligent system analytics approach, "discovery by example", which leverages fingerprinting and machine learning methods to automatically discover software and system changes. Our approach eases runtime data center introspection and reduces the cost of system management.
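    As a hedged illustration of RSR-style power modulation, the sketch below tracks an ISO regulation signal around a committed baseline by adjusting per-server power caps; the sign convention, baseline, reserve size, and server power ranges are assumptions, not the dissertation's policy:

```python
# Illustrative RSR tracking: modulate data center power around a committed baseline
# P_avg by +/- R, following an ISO regulation signal in [-1, 1].
def target_power(p_avg_kw, reserve_kw, iso_signal):
    """Assumed sign convention: +1 asks for maximum power reduction, -1 for increase."""
    return p_avg_kw - reserve_kw * iso_signal

def apply_caps(target_kw, servers):
    """Split the data-center-level target evenly into per-server power caps,
    clamped to each server's feasible range (min_kw, max_kw)."""
    per_server = target_kw / len(servers)
    return [max(lo, min(hi, per_server)) for lo, hi in servers]

servers = [(0.15, 0.40)] * 1000            # 1000 servers, 150-400 W each (made up)
for signal in (0.0, 0.5, -1.0):            # sample ISO regulation signal values
    caps = apply_caps(target_power(300, 50, signal), servers)
    print(signal, round(sum(caps), 1), "kW")
```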

    Predicting model training time to optimize distributed machine learning applications

    Get PDF
    Despite major advances in recent years, the field of Machine Learning continues to face research and technical challenges. Mostly, these stem from big data and streaming data, which require models to be frequently updated or re-trained, at the expense of significant computational resources. One solution is the use of distributed learning algorithms, which can learn in a distributed manner from distributed datasets. In this paper, we describe CEDEs, a distributed learning system in which models are heterogeneous distributed Ensembles, i.e., complex models constituted by different base models trained with different and distributed subsets of data. Specifically, we address the issue of predicting the training time of a given model, given its characteristics and the characteristics of the data. Given that the creation of an Ensemble may imply the training of hundreds of base models, information about the predicted duration of each of these individual tasks is paramount for efficient management of the cluster's computational resources and for minimizing makespan, i.e., the time it takes to train the whole Ensemble. Results show that the proposed approach is able to predict the training time of Decision Trees with an average error of 0.103 s, and the training time of Neural Networks with an average error of 21.263 s. We also show how results depend significantly on the hyperparameters of the model and on the characteristics of the input data. This work has been supported by national funds through FCT – Fundação para a Ciência e Tecnologia through projects UIDB/04728/2020, EXPL/CCI-COM/0706/2021, and CPCA-IAC/AV/475278/2022.
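    A minimal sketch of the general idea of predicting training time from model and data characteristics with a meta-level regressor; the feature columns and duration values below are made up for illustration and do not reflect CEDEs' schema or results:

```python
# Meta-learning sketch: regress measured training time on model/data characteristics.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Each row: [n_rows, n_features, max_depth, n_estimators] of a past training task;
# target: measured training time in seconds (made-up numbers).
X = np.array([[10_000, 20, 8, 100], [50_000, 50, 12, 200], [5_000, 10, 4, 50],
              [100_000, 30, 10, 300], [20_000, 40, 6, 150], [80_000, 25, 14, 250]])
y = np.array([3.1, 25.4, 0.9, 61.0, 7.8, 48.2])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE (s):", mean_absolute_error(y_te, reg.predict(X_te)))
```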

    Inferring Geodesic Cerebrovascular Graphs: Image Processing, Topological Alignment and Biomarkers Extraction

    Get PDF
    A vectorial representation of the vascular network that embodies quantitative features - location, direction, scale, and bifurcations - has many potential neuro-vascular applications. Patient-specific models support computer-assisted surgical procedures in neurovascular interventions, while analyses of multiple subjects are essential for group-level studies on which clinical prediction and therapeutic inference ultimately depend. This first motivated the development of a variety of methods to segment the cerebrovascular system. Nonetheless, a number of limitations, including data-driven inhomogeneities, anatomical intra- and inter-subject variability, the lack of exhaustive ground-truth, the need for operator-dependent processing pipelines, and the highly non-linear vascular domain, still make the automatic inference of the cerebrovascular topology an open problem. In this thesis, brain vessels' topology is inferred by focusing on their connectedness. With a novel framework, the brain vasculature is recovered from 3D angiographies by solving a connectivity-optimised anisotropic level-set over a voxel-wise tensor field representing the orientation of the underlying vasculature. Assuming vessels join by minimal paths, a connectivity paradigm is formulated to automatically determine the vascular topology as an over-connected geodesic graph. Ultimately, deep-brain vascular structures are extracted with geodesic minimum spanning trees. The inferred topologies are then aligned with similar ones for labelling and propagating information over a non-linear vectorial domain, where the branching pattern of a set of vessels transcends a subject-specific quantized grid. Using a multi-source embedding of a vascular graph, the pairwise registration of topologies is performed with state-of-the-art graph-matching techniques employed in computer vision. Functional biomarkers are determined over the neurovascular graphs with two complementary approaches. Efficient approximations of blood flow and pressure drop account for autoregulation and compensation mechanisms in the whole network in the presence of perturbations, using lumped-parameter analog equivalents derived from clinical angiographies. Also, a localised NURBS-based parametrisation of bifurcations is introduced to model fluid-solid interactions by means of hemodynamic simulations using an isogeometric analysis framework, where both the geometry and the solution profile at the interface share the same homogeneous domain. Experimental results on synthetic and clinical angiographies validated the proposed formulations. Perspectives and future work are discussed for the group-wise alignment of cerebrovascular topologies over a population, towards defining cerebrovascular atlases, and for further topological optimisation strategies and risk prediction models for therapeutic inference. Most of the algorithms presented in this work are available as part of the open-source package VTrails.
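    As a small illustration of the minimum-spanning-tree step, the sketch below prunes an over-connected graph of vessel keypoints down to its cheapest acyclic backbone using a generic SciPy routine; the weight matrix is invented and this is not the VTrails implementation:

```python
# Extract a tree from an over-connected, geodesic-weighted vascular graph.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

# Hypothetical graph: nodes are vessel keypoints, weights are geodesic path
# lengths between them (symmetric; 0 = no candidate connection).
W = np.array([[0, 2.0, 4.5, 0],
              [2.0, 0, 1.2, 6.0],
              [4.5, 1.2, 0, 3.3],
              [0, 6.0, 3.3, 0]])
mst = minimum_spanning_tree(csr_matrix(W))   # keeps the cheapest acyclic backbone
rows, cols = mst.nonzero()
for i, j in zip(rows, cols):
    print(f"keep edge {i}-{j} (geodesic length {mst[i, j]:.1f})")
```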

    Configurable data center switch architectures

    Get PDF
    In this thesis, we explore alternative architectures for implementing configurable Data Center Switches along with the advantages that can be provided by such switches. Our first contribution centers on determining switch architectures that can be implemented on Field Programmable Gate Arrays (FPGAs) to provide configurable switching protocols. In the process, we identify a gap in the availability of frameworks to realistically evaluate the performance of switch architectures in data centers and contribute a simulation framework that relies on realistic data center traffic patterns. Our framework is then used to evaluate the performance of currently existing as well as newly proposed FPGA-amenable switch designs. Through collaborative work with Meng and Papaphilippou, we establish that only small- to medium-range switches can be implemented on today's FPGAs. Our second contribution is a novel switch architecture that integrates a custom in-network hardware accelerator with a generic switch to accelerate Deep Neural Network training applications in data centers. Our proposed accelerator architecture is prototyped on an FPGA, and a scalability study is conducted to demonstrate the trade-offs of an FPGA implementation when compared to an ASIC implementation. In addition to the hardware prototype, we contribute a lightweight load-balancing and congestion control protocol that leverages the unique communication patterns of ML data-parallel jobs to enable fair sharing of network resources across different jobs. Our large-scale simulations demonstrate the ability of our novel switch architecture and lightweight congestion control protocol to both accelerate the training time of machine learning jobs by up to 1.34x and benefit other latency-sensitive applications by reducing their 99th-percentile completion time by up to 4.5x. For our final contribution, we identify the main requirements of in-network applications and propose a Network-on-Chip (NoC)-based architecture for supporting a heterogeneous set of applications. Observing the lack of tools to support such research, we provide a tool that can be used to evaluate NoC-based switch architectures.
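    As a small, hedged illustration of what realistic data center traffic can look like as simulator input, the sketch below generates Poisson flow arrivals with heavy-tailed sizes between random server pairs; the distributions and parameters are generic assumptions, not the thesis's framework:

```python
# Generate synthetic data-center-style flows for a switch simulator.
import random

def generate_flows(n_flows, n_hosts, load_lambda=1000.0, seed=0):
    rng = random.Random(seed)
    t = 0.0
    flows = []
    for _ in range(n_flows):
        t += rng.expovariate(load_lambda)            # Poisson inter-arrival times
        size = int(rng.paretovariate(1.2) * 10_000)  # heavy-tailed flow size (bytes)
        src, dst = rng.sample(range(n_hosts), 2)     # distinct source/destination
        flows.append((t, src, dst, size))
    return flows

for flow in generate_flows(5, n_hosts=128):
    print(flow)
```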