206 research outputs found

    Review of Non-Technical Losses Identification Techniques

    Illegal consumption of electric power, termed non-technical loss by distribution companies, has been a dominant source of revenue loss worldwide for many years. Conventional methods of identifying these irregularities exist, such as physical inspection of meters at consumer premises, but they demand considerable manpower and time and still prove inadequate. In recent years, various methods and algorithms for detecting non-technical losses have been proposed in the research literature. This paper reviews these methods, highlights their important features, and identifies their limitations. Finally, a qualitative comparison of the various non-technical loss identification algorithms is presented based on their performance, cost, data handling, quality control, and execution time. It is concluded that the Optimum-Path Forest algorithm, a graph-based classifier with both supervised and unsupervised variants, yields the most accurate results for detecting non-technical losses.
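    The abstract names no concrete detection rule, so the following is only an illustrative sketch of the general idea behind consumption-based NTL screening (it is not the Optimum-Path Forest method the review recommends): flag customers whose recent usage falls far below their own historical average, a pattern consistent with meter tampering. The threshold and data layout are assumptions for the example.

```python
# Hypothetical sketch: flag customers whose recent monthly consumption
# drops far below their own historical average -- a crude stand-in for
# the classifier-based NTL detectors surveyed in the paper.

def flag_suspects(history, recent, drop_ratio=0.5):
    """history: {customer: [past monthly kWh]}, recent: {customer: kWh}.
    Flag customers whose recent usage fell below drop_ratio * their mean."""
    suspects = []
    for customer, past in history.items():
        mean = sum(past) / len(past)
        if recent[customer] < drop_ratio * mean:
            suspects.append(customer)
    return suspects

history = {"A": [300, 310, 290], "B": [500, 520, 480]}
recent = {"A": 295, "B": 120}   # B's usage collapsed -> possible tampering
print(flag_suspects(history, recent))  # ['B']
```

    Real detectors replace this single-feature rule with classifiers trained on many consumption features, which is what the surveyed papers compare.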

    Detecting, Modeling, and Predicting User Temporal Intention

    The content of social media has grown exponentially in recent years, and its role has evolved from narrating life events to actually shaping them. Unfortunately, content posted and shared in social networks is vulnerable and prone to loss or change, rendering the context associated with it (a tweet, post, status, or other item) meaningless. There is inherent value in maintaining the consistency of such social records, as in some cases they take over the task of being the first draft of history: collections of these social posts narrate the pulse of the street during historic events, protests, riots, elections, wars, disasters, and others, as shown in this work. The user sharing the resource has an implicit temporal intent: either the state of the resource at the time of sharing, or the current state of the resource at the time of the reader clicking. In this research, we propose a model to detect and predict the temporal intention of the author upon sharing content in the social network and of the reader upon resolving this content. To build this model, we first examine the three aspects of the problem: the resource, time, and the user. For the resource, we start by analyzing content on the live web and its persistence. We noticed that a portion of the resources shared in social media disappear, and further analysis unraveled a relationship between this disappearance and time: we lose around 11% of the resources after one year of sharing, and a steady 7% every following year. We then turn to the public archives, and our analysis reveals that not all posted resources are archived; even when they are, an average of 8% per year disappears from the archives, and in some cases the archived content is heavily damaged. These observations show that the archives are not well-enough populated to consistently and reliably reconstruct a missing resource as it existed at the time of sharing.
    To analyze the concept of time, we devised several experiments to estimate the creation dates of shared resources. We developed Carbon Date, a tool which successfully estimated the correct creation dates for 76% of the test sets. Beyond the resources' creation, we wanted to measure if and how they change with time. We conducted a longitudinal study on a data set of very recently published tweet-resource pairs, recording observations hourly. We found that after just one hour, ~4% of the resources had changed by ≥30%, while after a day the change rate slowed, with ~12% of the resources changed by ≥40%. For the third and final component of the problem, we conducted user behavioral analysis experiments and built a data set of 1,124 instances manually assigned by test subjects. Temporal intention proved to be a difficult concept for average users to understand. We developed our Temporal Intention Relevancy Model (TIRM) to transform the highly subjective temporal intention problem into the more easily understood ideas of relevancy between a tweet and the resource it links to, and of change in the resource through time. On our collected data set, TIRM produced a 90.27% success rate. Furthermore, we extended TIRM into a time-based model that predicts temporal intention change or steadiness at the time of posting with 77% accuracy. We built a service API around this model to provide predictions, along with several prototypes. Future tools could implement TIRM to assist users in pushing copies of shared resources into public web archives to ensure the integrity of the historical record. Additional tools could assist the mining of the existing social media corpus by dereferencing the intended version of each shared resource, based on the intention strength and the time between tweeting and mining.
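    The decay rates above (11% lost in the first year, 7% every following year) imply a survival curve; as a worked example, and assuming the 7% applies to the surviving resources rather than the original set (the abstract does not say which), the fraction still reachable after n years can be computed as:

```python
# Survival of shared resources under the reported decay rates:
# ~11% disappear in the first year, ~7% of the survivors each year after.
def surviving_fraction(years):
    alive = 1.0
    for y in range(years):
        alive *= (1 - 0.11) if y == 0 else (1 - 0.07)
    return alive

print(round(surviving_fraction(1), 3))  # 0.89
print(round(surviving_fraction(3), 3))  # 0.89 * 0.93 * 0.93 ~= 0.77
```

    So roughly a quarter of resources shared three years ago would already be gone from the live web, which is why the archive analysis matters.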

    Clustering And Control Of Streaming Synchrophasor Datasets

    With the large-scale deployment of phasor measurement units (PMUs) in the United States, a resonating topic has been how to extract useful “information” or “knowledge” from the voluminous data generated by supervisory control and data acquisition (SCADA) systems, PMUs, and advanced metering infrastructure (AMI). With sampling rates from 30 to as high as 120 samples per second, PMUs provide fine-grained monitoring of the power grid through time-synchronized measurements of voltage, current, frequency, and phase angle. Running a sensor continuously at 30 samples per second produces nearly 2,592,000 samples every day. This large volume of data needs to be processed efficiently to extract information for better decision making in a smart grid (SG) environment. My research presents a flexible software framework to process these streaming data sets for smart-grid applications. The proposed Integrated Software Suite (ISS) is capable of mining the data using various clustering algorithms for decision-making purposes. Decisions based on the proposed methods can help electric grid system operators reduce blackouts, instabilities, and oscillations in the smart grid. The research primarily focuses on integrating density-based clustering (DBSCAN) and variations of k-means clustering to capture specific types of anomalies or faults. A novel method, multi-tier k-means, was developed to cluster the PMU data; such a grouping scheme enables better decision making by system operators. Different fault conditions, such as voltage, current, phase angle, or frequency deviations, generation trips, and load trips, are investigated, and a comparative analysis of the three methods is presented. A collection of forecasting techniques has also been applied to PMU datasets. The datasets considered are from the PJM Corporation and describe the energy demand for 13 states and the District of Columbia (DC).
    The applicability and suitability of forecasting techniques for PMU data using random forest (RF), locally weighted scatterplot smoothing (LOWESS), and seasonal auto-regressive integrated moving average (SARIMA) models have been investigated. The approaches are tested against standardized error indices, namely mean absolute percentage error (MAPE), mean squared error (MSE), root mean squared error (RMSE), and normal percentage error (PCE), to compare their performance. It is observed that the proposed hybrid combination of RF and SARIMA can be used with good results in day-ahead forecasting of load dispatch.
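    The error indices used above to rank the forecasters are standard and easy to state precisely; a minimal sketch (with made-up demand values purely for illustration) of MAPE, MSE, and RMSE, plus the per-day sample count quoted in the abstract:

```python
import math

# Standard forecast error indices used to compare RF, LOWESS and SARIMA.
def mape(actual, pred):
    """Mean absolute percentage error, in %."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

def mse(actual, pred):
    """Mean squared error."""
    return sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    """Root mean squared error."""
    return math.sqrt(mse(actual, pred))

actual = [100.0, 200.0, 400.0]   # hypothetical demand (MW)
pred   = [110.0, 190.0, 420.0]   # hypothetical forecast
print(round(mape(actual, pred), 2))  # 6.67
print(round(rmse(actual, pred), 2))  # 14.14

# The PMU data volume quoted above: 30 samples/s for a full day.
print(30 * 60 * 60 * 24)  # 2592000
```

    Lower values of all three indices indicate a better forecaster; the paper's hybrid RF + SARIMA combination is the one scoring best on these measures.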

    Realistic visualisation of cultural heritage objects

    This research investigation used digital photography in a hemispherical dome, enabling a set of 64 photographic images of an object to be captured in perfect pixel register, with each image illuminated from a different direction. This representation turns out to be much richer than a single 2D image, because it contains information at each point about both the 3D shape of the surface (gradient and local curvature) and the directionality of reflectance (gloss and specularity). It thereby enables not only interactive visualisation through viewer software, giving the illusion of 3D, but also the reconstruction of an actual 3D surface and highly realistic rendering of a wide range of materials. The following seven outcomes of the research are claimed as novel and therefore as representing contributions to knowledge in the field: a method for determining the geometry of an illumination dome; an adaptive method for finding surface normals by bounded regression; generating 3D surfaces from photometric stereo; the relationship between surface normals and specular angles; modelling surface specularity by a modified Lorentzian function; determining the optimal wavelengths of colour laser scanners; and characterising colour devices by synthetic reflectance spectra.
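    The photometric-stereo step mentioned among the outcomes can be sketched in its simplest Lambertian form. Under the Lambertian model, each measured intensity is I_k = albedo · (L_k · n) for light direction L_k and surface normal n. The three axis-aligned lights below are a simplifying assumption for illustration; the dome solves a least-squares system over all 64 general directions per pixel.

```python
import math

# Minimal Lambertian photometric stereo with three axis-aligned lights
# L1=(1,0,0), L2=(0,1,0), L3=(0,0,1): the intensities are then exactly
# g = albedo * n, so the normal is the normalised intensity vector.
def normal_from_intensities(i1, i2, i3):
    albedo = math.sqrt(i1 * i1 + i2 * i2 + i3 * i3)
    return albedo, (i1 / albedo, i2 / albedo, i3 / albedo)

albedo, n = normal_from_intensities(0.3, 0.0, 0.4)
print(round(albedo, 2), tuple(round(c, 2) for c in n))  # 0.5 (0.6, 0.0, 0.8)
```

    With 64 lights the same per-pixel recovery becomes an overdetermined system, which is where the thesis's bounded-regression refinement for finding normals comes in.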

    Contributory studies to the development, validation and field use of a telemetry system to monitor ventilation and trophic activity in wild Brown Trout

    This work was performed as part of a major research project into the ecology of lake-dwelling Brown Trout, Salmo trutta L., using ultrasonic biotelemetry techniques. The supplementary research results, leading up to and following the execution of a programme of experiments involving the telemetry of feeding and ventilatory rhythms, are described: 1. The presence of red (slow) fibres in the adductor mandibulae muscle of Brown Trout was confirmed to be as previously described in the Rainbow Trout, Salmo gairdneri Richardson, and other salmonids. 2. By electromyographic (EMG) and pharmacological means, the red fibres in the a. mandibulae were shown to be active during ventilation, while the mosaic fibres comprising the bulk of the muscle were recruited during more dynamic events such as feeding and coughing. Observations were made on the innervation of the red fibres. 3. Comparative investigations made at sea on large deep-sea squaloid and galeoid sharks (which have a simple adductor muscle like the Trout) showed an identical functional differentiation to that obtained in the Trout. 4. The presence of a migratory 'pace setter potential' was found for the first time in fish. Its use as an indicator of feeding activity by telemetry was rejected on practical grounds. 5. An ultrasonic transmitter was developed to telemeter an analogue of the adductor mandibulae EMG from wild Brown Trout, using a novel electrode design. Four fish were so equipped, released into Airthrey Loch, University of Stirling, and tracked for up to 24 hours (following a 24-hour allowance for post-anaesthetic recovery). Feeding and ventilatory periodicity, linear and angular movement patterns, and photoperiod were intercorrelated. Angle of turn and subsequent step length were positively correlated, and feeding activity was marked by a preference for dextral turning.
    'Area restricted searching' and 'area avoided searching' were the probable causes of the movement patterns seen in this and previous investigations at Airthrey Loch. A depth preference and orientation of the fish to topography were demonstrated. Following analysis of the angle-of-turn and step-length data, it was concluded that the larger transmitter package and more severe surgery materially affected the fishes' behaviour relative to data previously obtained at Airthrey Loch using smaller transmitters. 6. Owing to difficulties experienced in 5 above, caused by an unsuspected effect on the a. mandibulae EMG detectable up to 24 hours post-anaesthesia, a frequency analysis was made of the a. mandibulae EMG of the Brown Trout and several other species. This disclosed that the EMG from red fibres has a frequency spectrum considerably lower than that of 'standard' mammalian muscle. The progressive failure of the EMG transmitter with time was due to a combination of the anaesthetic effect and the frequency spectrum relative to certain design features. In the light of these observations, subsequent designs of the EMG transmitter were able to take this into account.

    Text Similarity Between Concepts Extracted from Source Code and Documentation

    Context: Constant evolution in software systems often results in their documentation losing sync with the content of the source code. The traceability research field has long aimed to recover links between code and documentation when the two fall out of sync. Objective: The aim of this paper is to compare the concepts contained within the source code of a system with those extracted from its documentation, in order to detect how similar these two sets are. If vastly different, the gap between the two sets might indicate considerable ageing of the documentation and a need to update it. Methods: In this paper we reduce the source code of 50 software systems to sets of key terms, each containing the concepts of one of the sampled systems. At the same time, we reduce the documentation of each system to another set of key terms. We then use four different approaches for set comparison to detect how similar the sets are. Results: Using the well-known Jaccard index as the benchmark for the comparisons, we discovered that the cosine distance has excellent comparative power, depending on the pre-training of the machine learning model. In particular, the SpaCy and FastText embeddings offer similarity scores of up to 80% and 90%, respectively. Conclusion: For most of the sampled systems, the source code and the documentation tend to contain very similar concepts. Given the accuracy of one pre-trained model (e.g., FastText), it also becomes evident that a few systems show a measurable drift between the concepts contained in the documentation and in the source code.
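    The two measures named in the Results can be stated concretely. The sketch below computes the Jaccard index on key-term sets and, as a simplification, cosine similarity over binary term vectors; the paper's cosine comparisons actually operate on SpaCy/FastText embeddings, and the term sets here are invented for illustration.

```python
import math

# Set-comparison measures on key-term sets: Jaccard index, and cosine
# similarity over binary (term present / absent) vectors.
def jaccard(a, b):
    """|A ∩ B| / |A ∪ B|"""
    return len(a & b) / len(a | b)

def cosine_binary(a, b):
    """Cosine of binary indicator vectors: |A ∩ B| / sqrt(|A| * |B|)."""
    return len(a & b) / math.sqrt(len(a) * len(b))

code_terms = {"parser", "token", "ast", "scope"}       # hypothetical source-code terms
doc_terms  = {"parser", "token", "grammar", "scope"}   # hypothetical documentation terms
print(round(jaccard(code_terms, doc_terms), 2))        # 0.6
print(round(cosine_binary(code_terms, doc_terms), 2))  # 0.75
```

    Cosine over binary vectors is always at least as large as Jaccard on the same sets, which is one reason embedding-based cosine scores in the paper can run higher than the Jaccard benchmark.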

    Facial Modelling and animation trends in the new millennium : a survey

    M.Sc. (Computer Science). Facial modelling and animation is considered one of the most challenging areas of the animation world. Since Parke and Waters's (1996) comprehensive book, no major work encompassing the entire field of facial animation has been published. This thesis covers Parke and Waters's work, while also providing a survey of developments in the field since 1996. The thesis describes, analyses, and compares (where applicable) the existing techniques and practices used to produce facial animation. Where applicable, related techniques are grouped in the same chapter and described in chronological fashion, outlining their differences as well as their advantages and disadvantages. The thesis concludes with exploratory work towards a talking head for Northern Sotho. Facial animation and lip synchronisation of a fragment of Northern Sotho are performed using software tools primarily designed for English.

    Effects of errorless learning on the acquisition of velopharyngeal movement control

    Session 1pSC - Speech Communication: Cross-Linguistic Studies of Speech Sound Learning of the Languages of Hong Kong (Poster Session). The implicit motor learning literature suggests a benefit for learning if errors are minimized during practice. This study investigated whether the same principle holds for learning velopharyngeal movement control. Normal-speaking participants learned to produce hypernasal speech in either an errorless learning condition (in which the possibility for errors was limited) or an errorful learning condition (in which the possibility for errors was not limited). The nasality level of the participants' speech was measured by nasometer and reflected in nasalance scores (in %). Errorless learners practiced producing hypernasal speech with a threshold nasalance score of 10% at the beginning, which gradually increased to a threshold of 50% at the end. The same set of threshold targets was presented to errorful learners, but in reversed order. Errors were defined by the proportion of speech with a nasalance score below the threshold. The results showed that, relative to errorful learners (50.7% errors; mean nasalance 31.3%), errorless learners displayed fewer errors (17.7%) and a higher mean nasalance score (46.7%) during the acquisition phase. Furthermore, errorless learners outperformed errorful learners in both retention and novel transfer tests. Acknowledgment: Supported by The University of Hong Kong Strategic Research Theme for Sciences of Learning. © 2012 Acoustical Society of America.
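    The error measure defined in the abstract (the proportion of speech with a nasalance score below the current threshold) is simple enough to state as code; the score values below are invented for illustration, and the study's actual frame-level scoring may differ.

```python
# Error as defined in the study: the proportion of speech samples whose
# nasalance score falls below the current threshold target.
def error_proportion(nasalance_scores, threshold):
    below = sum(1 for s in nasalance_scores if s < threshold)
    return below / len(nasalance_scores)

scores = [12.0, 35.0, 48.0, 55.0]          # hypothetical nasalance scores, in %
print(error_proportion(scores, 10))        # 0.0  (low starting threshold: errorless)
print(error_proportion(scores, 50))        # 0.75 (high starting threshold: errorful)
```

    Starting at a 10% threshold makes almost every utterance count as correct, which is how the errorless condition limits errors early in practice.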