278 research outputs found

    Economic and Policy Analysis for Solar PV Systems in Indiana

    In recent years, the energy market in the US and globally has been expanding renewable energy production. Along with other sources, solar electricity is growing in the US, and Indiana is among the states expanding solar PV capacity. However, the economics of solar PV systems in Indiana have not been analyzed, and electricity customers in Indiana are not well informed about them. We therefore conduct a benefit-cost analysis with several uncertain input variables to determine the economics of adopting solar PV systems in Indiana under policy instruments that could increase adoption. The specific objectives of this study are to analyze the cost distribution of solar PV systems compared with grid electricity in homes, and the probability that solar can cost less than current grid electricity under different combinations of policies. We first perform the analysis under current policy options and then under potential policy options for a variety of scenarios. With this information, customers can judge how beneficial adopting solar PV systems in their homes would be, and government can see how effective policies can be and how to manage policy options for encouraging solar PV systems. The results show that current policies are important in reducing the cost of solar PV systems; even so, under current policies there is only a 50-50 chance of solar being cheaper than grid electricity. If the potential policies are implemented, however, solar PV systems can be more economical than grid electricity. Thus, it is arguable that government should implement further policies to encourage adoption of solar PV systems in Indiana.
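The benefit-cost analysis described above hinges on drawing uncertain inputs and estimating the probability that solar beats grid electricity. A minimal Monte Carlo sketch, with hypothetical input distributions and names (none taken from the study):

```python
import random

def solar_cheaper_probability(n_trials=10_000, seed=42):
    """Estimate P(solar levelized cost < grid rate) by sampling
    uncertain inputs; all distributions here are illustrative."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_trials):
        install_cost = rng.uniform(2.0, 3.5)       # $ per installed watt
        subsidy = rng.uniform(0.0, 0.3)            # fraction offset by policy
        lifetime_kwh_per_w = rng.uniform(25, 35)   # lifetime kWh per watt
        grid_rate = rng.uniform(0.10, 0.14)        # $ per kWh
        lcoe = install_cost * (1 - subsidy) / lifetime_kwh_per_w
        if lcoe < grid_rate:
            wins += 1
    return wins / n_trials
```

Varying the subsidy range then shows how a single policy instrument shifts the probability of solar being the cheaper option.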

    Practical Systems For Strengthening And Weakening Binary Analysis Frameworks

    Binary analysis detects software vulnerabilities. Cutting-edge analysis techniques can quickly and automatically explore the internals of a program and report any discovered problems. Developers therefore commonly use various analysis techniques as part of their software development process. Unfortunately, this also means that such techniques, and the automatic nature of binary analysis methods, are appealing to adversaries looking for zero-day vulnerabilities. In this thesis, binary analysis is considered a double-edged sword, depending on the user's purpose. To deliver the benefits of binary analysis only to credible users such as developers or testers, this thesis aims to present practical systems for strengthening binary analysis for trusted parties while exclusively weakening its power against untrusted groups. To achieve these goals, the thesis develops this new domain of binary analysis in two directions: 1) a protection technique against fuzz testing, and 2) a new binary analysis system that expands the applicability of current binary analysis techniques. The mitigation approach helps developers protect released software from attackers who can apply fuzzing techniques. The new binary analysis frameworks, in turn, provide a set of solutions to the challenges that COTS binary fuzzing faces. (Ph.D. dissertation)

    On the Use of Bayesian Probabilistic Matrix Factorization for Predicting Student Performance in Online Learning Environments

    Thanks to advances in digital educational technology, online learning (or e-learning) environments such as Massive Open Online Courses (MOOCs) have been growing rapidly. In online educational systems, however, two inherent challenges arise in predicting student performance and providing personalized support: sparse data and the cold-start problem. To overcome these challenges, this article employs a pertinent machine learning algorithm, Bayesian Probabilistic Matrix Factorization (BPMF), which can enhance prediction by incorporating background (side) information about students and/or items. An experimental study with two prediction settings applied BPMF to the Statistics Online data. The results show that BPMF with side information predicted the performance of both existing and new students on items more accurately than the algorithm without any side information. When the data are sparse, a lower-dimensional BPMF solution is shown to benefit prediction accuracy. Lastly, the applicability of BPMF to online educational systems is discussed in the context of educational assessment.
    Kim, J.; Park, J. Y.; Van Den Noortgate, W. (2020). On the Use of Bayesian Probabilistic Matrix Factorization for Predicting Student Performance in Online Learning Environments. In: 6th International Conference on Higher Education Advances (HEAd'20). Editorial Universitat Politècnica de València, pp. 751-759. https://doi.org/10.4995/HEAd20.2020.11137
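For intuition, the core of (B)PMF is factoring a sparse student-by-item score matrix into low-rank factors. The toy below uses plain alternating least squares rather than the Bayesian posterior sampling and side information the paper relies on, so it is only a simplified stand-in:

```python
import numpy as np

def factorize(R, mask, k=2, lam=0.1, iters=50, seed=0):
    """Toy ALS matrix factorization: fit U (students x k) and
    V (items x k) to the observed entries of R (mask is boolean).
    BPMF instead samples U, V from posteriors and can fold in
    student/item side information."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = rng.normal(scale=0.1, size=(n, k))
    V = rng.normal(scale=0.1, size=(m, k))
    I = np.eye(k)
    for _ in range(iters):
        for i in range(n):  # update each student factor
            idx = mask[i]
            A = V[idx].T @ V[idx] + lam * I
            U[i] = np.linalg.solve(A, V[idx].T @ R[i, idx])
        for j in range(m):  # update each item factor
            idx = mask[:, j]
            A = U[idx].T @ U[idx] + lam * I
            V[j] = np.linalg.solve(A, U[idx].T @ R[idx, j])
    return U, V
```

Predictions for unobserved (student, item) pairs are then read off `U @ V.T`, which is what makes the approach usable under sparse data.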

    GraNNDis: Efficient Unified Distributed Training Framework for Deep GNNs on Large Clusters

    Graph neural networks (GNNs) are one of the most rapidly growing fields within deep learning. As the datasets and model sizes used for GNNs grow, it becomes nearly impossible to keep the whole network in GPU memory. Among numerous attempts to address this, distributed training is one popular approach. However, due to the nature of GNNs, existing distributed approaches suffer from poor scalability, mainly because of slow external server communication. In this paper, we propose GraNNDis, an efficient distributed GNN training framework for large graphs and deep layers. GraNNDis introduces three new techniques. First, shared preloading provides a training structure for a cluster of multi-GPU servers: essential vertex dependencies are preloaded server-wise to reduce low-bandwidth external server communication. Second, because shared preloading alone is limited by the neighbor-explosion problem, expansion-aware sampling reduces vertex dependencies that span server boundaries. Third, cooperative batching creates a unified framework for full-graph and mini-batch training, significantly reducing redundant memory usage in mini-batch training. Through this unification, GraNNDis enables a reasonable trade-off between full-graph and mini-batch training, especially when the entire graph does not fit into GPU memory. In experiments on a multi-server/multi-GPU cluster, GraNNDis provides superior speedup over state-of-the-art distributed GNN training frameworks.
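The goal of expansion-aware sampling, cutting vertex dependencies that cross server boundaries, can be illustrated with a deliberately crude filter (the actual GraNNDis policy is more selective than this):

```python
def intra_server_edges(edges, server_of):
    """Keep only edges whose endpoints live on the same server,
    dropping the cross-server vertex dependencies that would
    otherwise force slow external server communication.
    A crude sketch of the idea, not GraNNDis's sampling rule."""
    return [(u, v) for (u, v) in edges if server_of[u] == server_of[v]]
```

In the real system such pruning is applied selectively, since dropping every cross-server edge would discard too much neighborhood information.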

    Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System

    The recent huge advances in Large Language Models (LLMs) are mainly driven by the increase in the number of parameters. This has led to substantial memory capacity requirements, necessitating dozens of GPUs just to meet the capacity. One popular solution is storage-offloaded training, which uses host memory and storage as an extended memory hierarchy. However, this comes at the cost of a storage bandwidth bottleneck, because storage devices have orders-of-magnitude lower bandwidth than GPU device memory. Our work, Smart-Infinity, addresses this bottleneck of storage-offloaded LLM training using near-storage processing devices on a real system. The main component of Smart-Infinity is SmartUpdate, which performs parameter updates on custom near-storage accelerators. We find that moving parameter updates to the storage side removes most of the storage traffic. In addition, we propose an efficient data-transfer handler structure to address the system integration issues of Smart-Infinity; the handler overlaps data transfers with fixed memory consumption by reusing the device buffer. Lastly, we propose accelerator-assisted gradient compression/decompression to enhance the scalability of Smart-Infinity: when scaling to multiple near-storage processing devices, write traffic on the shared channel becomes the bottleneck, so we compress gradients on the GPU and decompress them on the accelerators, yielding further acceleration from reduced traffic. As a result, Smart-Infinity achieves a significant speedup over the baseline. Notably, Smart-Infinity is a ready-to-use approach fully integrated into PyTorch on a real system, and we will open-source it to facilitate its use. (Published at HPCA 2024; Best Paper Award Honorable Mention.)
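A common way to realize the compress-on-GPU / decompress-on-accelerator step is magnitude-based top-k sparsification; the sketch below is a generic illustration of that technique, not Smart-Infinity's actual scheme:

```python
import numpy as np

def topk_compress(grad, ratio=0.1):
    """Keep only the largest-magnitude entries of a gradient tensor,
    returning (indices, values, shape) -- far less data than the
    dense tensor when ratio is small."""
    flat = grad.ravel()
    k = max(1, int(flat.size * ratio))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # top-k by magnitude
    return idx, flat[idx], grad.shape

def topk_decompress(idx, vals, shape):
    """Rebuild a dense gradient (zeros elsewhere) on the receiving side."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = vals
    return flat.reshape(shape)
```

With a small ratio, the (indices, values) pair sent over the shared channel is a fraction of the dense gradient, which is where the traffic reduction comes from.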

    ProtoFL: Unsupervised Federated Learning via Prototypical Distillation

    Federated learning (FL) is a promising approach to enhancing data privacy preservation, particularly for authentication systems. However, limited communication rounds, scarce representation, and scalability pose significant challenges to its deployment, hindering its full potential. In this paper, we propose ProtoFL, prototypical-representation-distillation-based unsupervised federated learning, to enhance the representation power of a global model and reduce round communication costs. Additionally, we introduce a local one-class classifier based on normalizing flows to improve performance with limited data. Our study represents the first investigation of using FL to improve one-class classification performance. We conduct extensive experiments on five widely used benchmarks, namely MNIST, CIFAR-10, CIFAR-100, ImageNet-30, and Keystroke-Dynamics, to demonstrate the superior performance of our proposed framework over previous methods in the literature. (Accepted by ICCV 2023. Hansol Kim and Youngjun Kwak contributed equally to this work.)
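The basic building block behind prototypical representation methods is a per-class prototype, i.e. the mean embedding of a class. A minimal sketch of that building block (ProtoFL's actual distillation loop is considerably more involved):

```python
import numpy as np

def class_prototypes(embeddings, labels):
    """Compute one prototype per class as the mean of that class's
    embedding vectors; distances to prototypes can then drive
    classification or distillation."""
    protos = {}
    for c in set(labels):
        members = [e for e, l in zip(embeddings, labels) if l == c]
        protos[c] = np.mean(members, axis=0)
    return protos
```

A new sample can then be scored by its distance to each prototype, which is what makes prototypes compact enough to share cheaply across communication rounds.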

    Experimental Investigations On The Performance Improvement Of Oil-gas Separator In Electric Driven Scroll Compressor For Eco-friendly Vehicles

    Experimental research on the oil-gas separator was conducted in order to improve the performance of electrically driven scroll compressors used in eco-friendly HEVs/EVs. The compressor used in the tests was a "back pressure" type electrically driven scroll compressor using oil. To maintain adequate back pressure, an oil separator in the discharge chamber is required, and it is inevitable that refrigerant passing through the oil separator inside the discharge chamber experiences a pressure drop. This pressure drop increases input power, resulting in some decrease in the COP of the compressor. Various parameters of the oil separator related to the pressure drop were considered: the length of the vortex finder, the installation angle, and the inlet and outlet diameters. The installation angle and the outlet diameter had no significant effect on the pressure drop; however, a decrease in pressure drop with vortex-finder length and inlet diameter was confirmed. As the vortex-finder length decreased and the inlet diameter increased, the input power of the compressor decreased by about 4.12% and the COP increased by about 2.66% owing to the reduced pressure drop.
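As a rough sanity check on how a power reduction maps to a COP gain (COP = cooling capacity / input power), the sketch below uses a hypothetical operating point, not values from the study. At constant capacity, the reported 4.12% power saving alone would imply roughly a 4.3% COP gain; that the measured gain was 2.66% suggests the cooling capacity shifted slightly as well.

```python
def cop(capacity_w, power_w):
    """Coefficient of performance: cooling capacity over input power."""
    return capacity_w / power_w

# Hypothetical operating point (illustrative only).
base = cop(5000.0, 2000.0)
# Same capacity, input power reduced by the reported 4.12%:
after = cop(5000.0, 2000.0 * (1 - 0.0412))
gain = after / base - 1  # roughly 0.043 at constant capacity
```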

    User-centric resource allocation with two-dimensional reverse pricing in mobile communication services
