
    Large-Scale Detection of Non-Technical Losses in Imbalanced Data Sets

    Non-technical losses (NTL) such as electricity theft cause significant harm to our economies, as in some countries they may range up to 40% of the total electricity distributed. Detecting NTL requires costly on-site inspections, so accurate prediction of NTL for customers using machine learning is crucial. To date, related research has largely ignored that the two classes of regular and non-regular customers are highly imbalanced and that NTL proportions may change, and it has mostly considered small data sets, often precluding deployment of the results in production. In this paper, we present a comprehensive approach to assessing three NTL detection models for different NTL proportions in large real-world data sets of hundreds of thousands of customers: Boolean rules, fuzzy logic and support vector machine. This work has produced appreciable results that are about to be deployed in a leading industry solution. We believe the considerations and observations made in this contribution are necessary for future smart meter research to report effectiveness on imbalanced and large real-world data sets.
    Comment: Proceedings of the Seventh IEEE Conference on Innovative Smart Grid Technologies (ISGT 2016)
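    The class-imbalance handling this abstract emphasizes is often addressed by inverse-frequency class weighting. The sketch below is a generic illustration of that scheme, not the paper's method; the 95/5 class split is made up:

```python
# Inverse-frequency class weights, as used e.g. by "balanced" weighting in
# common ML libraries: weight_c = n_samples / (n_classes * count_c).
# Illustrative only; the 95/5 regular/NTL split is an assumed example.
from collections import Counter

def balanced_class_weights(labels):
    """Return per-class weights inversely proportional to class frequency."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}

# 95% regular customers (0), 5% non-regular/NTL (1):
# the minority class gets roughly 19x the weight of the majority class.
labels = [0] * 95 + [1] * 5
print(balanced_class_weights(labels))  # {0: ~0.526, 1: 10.0}
```

    With such weights, misclassifying a rare NTL customer costs the learner far more than misclassifying a regular one, which counteracts the imbalance the abstract describes.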

    Automatic assembly design project 1968/9 : report of economic planning committee

    Investigations into automatic assembly systems have been carried out. The conclusions identify the major features, with regard to machine output and financial aspects, to be considered by a company operating the machine to assemble the contact block. The machine system has been shown to be economically viable under suitable conditions, but the contact block is considered unsuitable for automatic assembly. Data for machine specification, reliability and maintenance have been provided.

    Applied Markovian Approach for Determining Optimal Process Means in Single Stage SME Production System

    The determination of the optimum process mean has become a focal research area for improving product quality. Depending on the value of the quality characteristic (the juice fill level in the bottle), an item can be reworked, accepted, or accepted with a penalty cost; the transformation of items into finished products is modelled with a Markov model. Assuming the quality characteristic is normally distributed, the probabilities of rework, acceptance and acceptance with penalty are obtained from the Markov model, and the optimum process mean that maximizes the expected profit per item is then determined. In this paper, we present an analysis of selecting the process mean in the filling process. By varying the rework and accept-with-penalty costs, the analysis shows the sensitivity of the Markov approach for determining the process mean.
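    As a rough illustration of the optimization this abstract describes, the sketch below grid-searches for a filling-process mean that maximizes expected profit per item under a normally distributed quality characteristic. All limits and costs are invented stand-ins, not the paper's parameters:

```python
# Toy process-mean optimization: items below L are accepted with a penalty,
# items above U are reworked, items in [L, U] are accepted as-is.
# Thresholds and costs below are assumed values for illustration only.
import math

def norm_cdf(x, mu, sigma):
    """Standard normal CDF evaluated at (x - mu) / sigma."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def expected_profit(mu, sigma=1.0, L=98.0, U=102.0,
                    profit=5.0, penalty=2.0, rework=1.5, fill_cost=0.02):
    p_low = norm_cdf(L, mu, sigma)        # under-filled: accepted with penalty
    p_high = 1 - norm_cdf(U, mu, sigma)   # over-filled: sent to rework
    p_ok = 1 - p_low - p_high             # accepted as-is
    return (p_ok * profit
            + p_low * (profit - penalty)
            + p_high * (profit - rework)
            - fill_cost * mu)             # material cost rises with the mean

# Crude grid search over candidate means in [97, 103)
best_mu = max((m / 100 for m in range(9700, 10300)), key=expected_profit)
print(round(best_mu, 2))
```

    Because the assumed penalty cost exceeds the rework cost, the optimal mean sits slightly above the nominal fill target; varying those two costs, as the abstract suggests, shifts the optimum accordingly.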

    Artificial Neural Network-based error compensation procedure for low-cost encoders

    An Artificial Neural Network-based error compensation method is proposed for improving the accuracy of resolver-based 16-bit encoders by compensating for their respective systematic error profiles. The error compensation procedure, for a particular encoder, involves obtaining its error profile by calibrating it on a precision rotary table, training the neural network using a part of this data, and then determining the corrected encoder angle by subtracting the ANN-predicted error from the measured value of the encoder angle. Since it is not guaranteed that all the resolvers will have identical error profiles, because of the inherent differences in their construction on a micro scale, the ANN has been trained on one error profile at a time and the corresponding weight file is then used only for compensating the systematic error of that particular encoder. The systematic nature of the error profile for each of the encoders has also been validated by repeated calibration of the encoders over a period of time, and the error profiles of a particular encoder recorded at different epochs show nearly reproducible behavior. The ANN-based error compensation procedure has been implemented for four encoders by training the ANN with their respective error profiles, and the results indicate that the accuracy of the encoders can be improved by nearly an order of magnitude, from quoted values of ~6 arc-min to ~0.65 arc-min, when their corresponding ANN-generated weight files are used for determining the corrected encoder angle.
    Comment: 16 pages, 4 figures. Accepted for publication in Measurement Science and Technology (MST)
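    The compensation step itself (corrected angle = measured angle minus model-predicted error) can be sketched as follows. A piecewise-linear interpolation of a made-up calibration profile stands in here for the trained ANN; the angles and error values are invented:

```python
# Illustrative encoder error compensation: subtract a model-predicted
# systematic error from the measured angle. Linear interpolation of an
# assumed calibration table stands in for the paper's trained ANN.
import bisect

# Calibration profile: (measured angle in degrees, systematic error in arc-min)
cal_angles = [0, 90, 180, 270, 360]
cal_errors = [0.0, 4.5, -3.0, 5.5, 0.0]

def predicted_error(angle_deg):
    """Piecewise-linear stand-in for the ANN error model (arc-min)."""
    i = bisect.bisect_right(cal_angles, angle_deg) - 1
    i = min(i, len(cal_angles) - 2)
    a0, a1 = cal_angles[i], cal_angles[i + 1]
    e0, e1 = cal_errors[i], cal_errors[i + 1]
    return e0 + (e1 - e0) * (angle_deg - a0) / (a1 - a0)

def corrected_angle(measured_deg):
    """Compensated angle: measured value minus predicted error."""
    return measured_deg - predicted_error(measured_deg) / 60.0  # arc-min -> deg

print(corrected_angle(90.0))  # 90 - 4.5/60 = 89.925
```

    In the paper's scheme each encoder gets its own trained weight file, which corresponds here to each encoder getting its own calibration table.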

    Spray automated balancing of rotors: Methods and materials

    The work described consists of two parts. In the first part, a survey is performed to assess the state of the art in rotor balancing technology as it applies to Army gas turbine engines and associated power transmission hardware. The second part evaluates thermal spray processes for balancing weight addition in an automated balancing procedure. The industry survey reveals that: (1) computerized balancing equipment is valuable to reduce errors, improve balance quality, and provide documentation; (2) slow-speed balancing is used exclusively, with no foreseeable need for production high-speed balancing; (3) automated procedures are desired; and (4) thermal spray balancing is viewed with cautious optimism, whereas laser balancing is viewed with concern for flight propulsion hardware. The FARE method (Fuel/Air Repetitive Explosion) was selected for experimental evaluation of bond strength and fatigue strength. Material combinations tested were tungsten carbide on stainless steel (17-4), Inconel 718 on Inconel 718, and Triballoy 800 on Inconel 718. Bond strengths were entirely adequate for use in balancing. Material combinations have been identified for use in hot and cold sections of an engine, with fatigue strengths equivalent to those for hand-ground materials.

    Index to NASA Tech Briefs, January - June 1966

    Index to NASA technological innovations for January-June 1966.

    Kernel Ellipsoidal Trimming

    Ellipsoid estimation is an issue of primary importance in many practical areas such as control, system identification, visual/audio tracking, experimental design, data mining, robust statistics and novelty/outlier detection. This paper presents a new method of kernel information matrix ellipsoid estimation (KIMEE) that finds an ellipsoid in a kernel-defined feature space based on a centered information matrix. Although the method is very general and can be applied to many of the aforementioned problems, the main focus in this paper is the problem of novelty or outlier detection associated with fault detection. A simple iterative algorithm based on Titterington's minimum volume ellipsoid method is proposed for practical implementation. The KIMEE method demonstrates very good performance on a set of real-life and simulated data sets compared with support vector machine methods.
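    A toy version of the iterative minimum-volume-ellipsoid idea the paper builds on (Titterington-style multiplicative weight updates) might look like the sketch below. It runs in plain 2-D rather than a kernel feature space, so it is not the KIMEE method itself; the data points are invented:

```python
# Titterington-style multiplicative updates for a minimum-volume (D-optimal)
# ellipsoid in 2-D: u_i <- u_i * (x_i^T M^-1 x_i) / d, with M = sum u_i x_i x_i^T.
# Boundary/outlying points end up carrying the largest weights.
# Generic illustration only, not the paper's kernelized KIMEE algorithm.

def mve_weights(points, iters=500):
    """Return per-point weights from the multiplicative MVE iteration."""
    d, n = 2, len(points)
    u = [1.0 / n] * n
    for _ in range(iters):
        # M = sum_i u_i x_i x_i^T (2x2; data assumed centered at the origin)
        a = sum(w * x * x for w, (x, y) in zip(u, points))
        b = sum(w * x * y for w, (x, y) in zip(u, points))
        c = sum(w * y * y for w, (x, y) in zip(u, points))
        det = a * c - b * b
        # g_i = x_i^T M^-1 x_i via the explicit 2x2 inverse
        g = [(c * x * x - 2 * b * x * y + a * y * y) / det for x, y in points]
        u = [w * gi / d for w, gi in zip(u, g)]
        s = sum(u)                 # renormalize (guards against rounding drift)
        u = [w / s for w in u]
    return u

pts = [(1, 0), (-1, 0), (0, 1), (0, -1), (3, 3)]  # last point is the outlier
w = mve_weights(pts)
print([round(wi, 3) for wi in w])
```

    The outlying point dominates the weights at convergence, which is what makes this family of methods useful for the novelty/outlier detection setting the abstract targets.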