20 research outputs found

    PQ Disturbances Dataset

    No full text
    In order to prepare the PQ disturbances dataset for PQ disturbance classification projects using deep learning, a total of 12 disturbances are considered: sag, swell, interruption, flicker, harmonics, transients, swell with harmonics, sag with harmonics, interruption with harmonics, flicker with harmonics, swell with flicker, and sag with flicker. All of these signals are created in MATLAB using a variety of parameters. They are then decomposed into detail coefficients (D1–D8) and the approximate coefficients (A8) using the Daubechies mother wavelet at level 8. The complete PQ dataset consists of 750 samples, and each sample has 72 features. Each decomposed signal yields eight features: mean, standard deviation, RMS value, energy, entropy, skewness, kurtosis, and range. Because there are 9 decomposed signals, the total number of features is 9 × 8 = 72. To help machine learning engineers build an effective AI model, the dimensionality of the dataset is reduced by compressing the 72 input features into 64 using an autoencoder. These 64 features are extracted from the latent space of the autoencoder and further reduced to 21 based on statistical analysis.
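    The per-sub-band feature extraction described above can be sketched in Python. This is a minimal sketch: the wavelet decomposition itself is assumed to come from a tool such as MATLAB or PyWavelets, and the entropy used here (Shannon entropy of the normalized coefficient energies) is one common choice the original work may differ on.

```python
import numpy as np

def subband_features(c):
    """Eight statistical features of one wavelet sub-band (D1..D8 or A8)."""
    c = np.asarray(c, dtype=float)
    mean = c.mean()
    std = c.std()
    rms = np.sqrt(np.mean(c ** 2))
    energy = np.sum(c ** 2)
    # Shannon entropy of the normalized energy distribution (assumed definition)
    p = c ** 2 / energy if energy > 0 else np.full_like(c, 1 / len(c))
    p = p[p > 0]
    entropy = -np.sum(p * np.log(p))
    skewness = np.mean(((c - mean) / std) ** 3) if std > 0 else 0.0
    kurtosis = np.mean(((c - mean) / std) ** 4) if std > 0 else 0.0
    rng = c.max() - c.min()
    return [mean, std, rms, energy, entropy, skewness, kurtosis, rng]

# 9 decomposed sub-bands x 8 features = 72 features per sample
rng_gen = np.random.default_rng(0)
sample = [rng_gen.normal(size=64) for _ in range(9)]  # stand-in coefficients
features = np.concatenate([subband_features(c) for c in sample])
```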

    Indian Currency Dataset

    No full text
    A dataset of the new Indian currency denominations is not available online, so a new dataset was created using the 21 MP camera of a Moto X-Play mobile phone. Currency images captured in landscape mode are 5344×3006 pixels, while those captured in portrait mode are 3006×5344 pixels. A total of 4657 images were captured to create the dataset. All currency notes acceptable in the market are used: old and new 10 rupee notes, old 20 rupee notes, old and new 50 rupee notes, old and new 100 rupee notes, and new 200, 500, and 2000 rupee notes. To increase the dataset size, data augmentation is applied to the currency note images. The augmentations used are Zoom, Rotate90, Rotate270, Tilt, Distortion, and Flip. After augmentation, the dataset contains a total of 11657 images.
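    Some of the listed augmentations can be sketched with plain NumPy array operations (a sketch only; the zoom here is shown as a center crop without the interpolated resize a real pipeline would add, and Tilt and Distortion need an imaging library such as Pillow, so they are omitted).

```python
import numpy as np

def rotate90(img):  return np.rot90(img, k=1)
def rotate270(img): return np.rot90(img, k=3)
def flip(img):      return np.fliplr(img)

def zoom(img, factor=1.2):
    # Center-crop to 1/factor of each dimension; a full pipeline would
    # resize the crop back to the original size with interpolation.
    h, w = img.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    top, left = (h - ch) // 2, (w - cw) // 2
    return img[top:top + ch, left:left + cw]

img = np.zeros((3006, 5344, 3), dtype=np.uint8)  # one landscape frame
portrait = rotate90(img)                         # now 5344 x 3006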

    Active power load dataset - IEX data

    No full text
    Active power load dataset is developed by collecting the hourly load data from during the period between 01-01-2021 and 31-08-2023. During this period have total (365+365+243=973) days. Each day have 24 hours, and collected hourly load data, so total 23352 hourly load samples are collected from IEX. Reconstructed whole dataset in such a way that load at particular hour of the day is able to predict based on last two hours (L(T-1),L(T-2)) and last two days load data (L(T-24),L(T-48)), last two hours and last two days electricity price data (P(T-1), P(T-2), P(T-24), P(T-48)), season and day status (Day). Hence the developed load dataset has total 10 input features i.e. L(T-1),L(T-2), L(T-24),L(T-48), P(T-1), P(T-2), P(T-24), P(T-48), Season and Day. Whereas output variable or predicted variable is only one L(T). This whole dataset has total 23,304 sample and 11 features (10 input features and 1 output feature). Load values are in MW and price is in INR per MWh.THIS DATASET IS ARCHIVED AT DANS/EASY, BUT NOT ACCESSIBLE HERE. TO VIEW A LIST OF FILES AND ACCESS THE FILES IN THIS DATASET CLICK ON THE DOI-LINK ABOV
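    The reconstruction into lag features can be sketched as follows (a sketch on synthetic values; the real IEX series, season codes, and day-status codes are assumed). It also shows where the sample count comes from: the first 48 hours cannot supply L(T-48)/P(T-48), so 23,352 raw hours yield 23,304 supervised samples.

```python
import numpy as np

def build_lag_dataset(load, price, season, day):
    """Rebuild hourly series into supervised samples with lag features."""
    n = len(load)
    X, y = [], []
    for t in range(48, n):  # first 48 hours lack complete lags
        X.append([load[t-1], load[t-2], load[t-24], load[t-48],
                  price[t-1], price[t-2], price[t-24], price[t-48],
                  season[t], day[t]])
        y.append(load[t])
    return np.array(X), np.array(y)

hours = 973 * 24                       # 23,352 hourly records
rng = np.random.default_rng(0)
load = rng.random(hours) * 1000        # synthetic MW values
price = rng.random(hours) * 5000       # synthetic INR/MWh values
season = np.zeros(hours)
day = np.zeros(hours)
X, y = build_lag_dataset(load, price, season, day)
```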

    Electrical Energy Price dataset - IEX data

    No full text
    The electrical energy price dataset was developed by collecting hourly price data for the period 01-01-2021 to 31-08-2023, a total of 365 + 365 + 243 = 973 days. With 24 hourly readings per day, 23,352 hourly electrical energy price samples were collected from IEX. The whole dataset is reconstructed so that the electrical energy price at a particular hour of the day can be predicted from the load of the previous two hours (L(T-1), L(T-2)), the load at the same hour of the previous two days (L(T-24), L(T-48)), the corresponding electricity prices (P(T-1), P(T-2), P(T-24), P(T-48)), the season, and the day status (Day), the same as the active power load dataset. Hence the developed price dataset has 10 input features, i.e. L(T-1), L(T-2), L(T-24), L(T-48), P(T-1), P(T-2), P(T-24), P(T-48), Season, and Day, and a single output variable, the electrical energy price P(T). The whole dataset has 23,304 samples and 11 features (10 input features and 1 output feature). This dataset is archived at DANS/EASY; the file list and the files themselves are accessible via the DOI link.

    ZCP Dataset - Distorted Sinusoidal Signal

    No full text
    Zero-crossing point (ZCP) detection is necessary to establish consistent performance in various power system applications, and machine learning models can be used to detect zero-crossing points. A dataset is required to train and test such models, and these datasets can be helpful to researchers working on the zero-crossing point detection problem. All of the datasets are created from MATLAB simulations. A total of 28 datasets were developed for window sizes of 5, 10, 15, and 20 and noise levels of 10%, 20%, 30%, 40%, 50%, and 60%. Similarly, 28 datasets were developed for the same window sizes and THD levels of 10%, 20%, 30%, 40%, 50%, and 60%. In addition, 36 datasets were prepared for the same window sizes and combinations of noise (10%, 30%, 60%) and THD (20%, 40%, 60%). Each dataset consists of 4 input features, namely slope, intercept, correlation, and RMSE, and one output label with the value 0 or 1, where 0 represents the non-zero-crossing-point class and 1 the zero-crossing-point class. Dataset information such as the number of samples and the combinations (window size, noise, and THD) is available in the Data Details Excel sheet. These datasets will be useful for faculty, students, and researchers working on the ZCP problem. This dataset is archived at DANS/EASY; the file list and the files themselves are accessible via the DOI link.
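    The four input features are standard quantities of a straight-line fit to one window, so they can be sketched as below (a sketch on a synthetic noisy sinusoid; the exact fitting and labeling conventions of the original datasets are assumptions).

```python
import numpy as np

def window_features(t, x):
    """Four ZCP features of one window: slope, intercept, correlation, RMSE."""
    slope, intercept = np.polyfit(t, x, 1)   # least-squares line fit
    corr = np.corrcoef(t, x)[0, 1]           # Pearson correlation of t and x
    fit = slope * t + intercept
    rmse = np.sqrt(np.mean((x - fit) ** 2))
    return slope, intercept, corr, rmse

def crosses_zero(x):
    """Label 1 if the window contains a sign change, else 0."""
    return int(np.any(np.signbit(x[:-1]) != np.signbit(x[1:])))

fs, f = 1000, 50                             # assumed sampling rate and frequency
rng = np.random.default_rng(0)
t = np.arange(0, 0.04, 1 / fs)
x = np.sin(2 * np.pi * f * t) + 0.1 * rng.normal(size=len(t))  # 10% noise
w = 10                                       # one of the listed window sizes
labels = [crosses_zero(x[i:i + w]) for i in range(0, len(t) - w, w)]
```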

    Zero-crossing Point Detection Dataset - Distorted Sinusoidal Signals

    No full text
    Zero-crossing point detection is necessary to establish consistent performance in various power system applications, and machine learning models can be used to detect zero-crossing points. A dataset is required to train and test such models. Four datasets are developed for distorted sinusoidal signals. The first dataset consists of 4936 samples derived from sinusoidal signals with 10%, 20%, 30%, 40%, and 50% noise levels. The second dataset consists of 4436 samples derived from sinusoidal signals with 10%, 20%, 30%, 40%, and 50% THD levels. The third dataset consists of 3949 samples derived from sinusoidal signals with 50% THD and noise levels of 10%, 20%, 30%, and 40%. The fourth dataset consists of 3949 samples derived from sinusoidal signals with noise levels of 5%, 10%, 15%, and 20%. These datasets can be helpful to researchers working on the zero-crossing point detection problem using machine learning models. All of the datasets are created from MATLAB simulations. Each dataset consists of 4 input features, namely slope, intercept, correlation, and RMSE, and one output label with the value 0 or 1, where 0 represents the non-zero-crossing-point class and 1 the zero-crossing-point class.

    Electric power load dataset

    No full text
    The electric power load (active power load) data is created from the hourly voltage (V), current (I), and power factor (pf) information available at the 33/11 kV substation at Godishala, Huzurabad, Telangana state, India. Hourly voltage, current, and power factor readings were collected for the period 01.01.2021 to 31.12.2021. Based on these values (V, I, and pf), the 3-phase load on the substation at each hour is calculated. The data consists of the hourly load, the status of the day (weekday: 0, weekend: 1), the season (Winter: 1, Summer: 2, Rainy: 0), and hourly temperature and humidity information. While preparing the data, a total of 66 missing values were found, caused by shutdown of the substation for maintenance or power outages. A missing value is replaced with the average of the load at the same hour on the previous and next day (for Tuesday, Wednesday, Thursday, and Friday). If the missing load data belongs to a Monday, the average of the Saturday and Tuesday loads is used; similarly, if it belongs to a Saturday, the average of the Friday and Monday loads is used. If the missing load data belongs to a weekend, the average of the loads at the previous and next weekend is used. The dataset consists of 8760 hourly load values. Load values are in kilowatts, temperature in degrees Fahrenheit, and humidity in percent. The load distribution has a mean of 2130 kW, a standard deviation of 1302 kW, a minimum load of 412 kW, and a peak load of 6306 kW. The substation was shut down for a total of 66 hours in 2021. During 2021, the Godishala town feeder (F1) had 25 outage hours, the Bommakal feeder (F2) 71 outage hours, the Godishala rural feeder (F3) 97 outage hours, and the Raikal feeder (F4) 46 outage hours. This dataset is archived at DANS/EASY; the file list and the files themselves are accessible via the DOI link.
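    The day-dependent imputation rule can be sketched as a small Python function. Note the description gives both a Saturday-specific rule and a general weekend rule; this sketch applies the Saturday rule to Saturdays and the weekend rule to Sundays, and the `load` dict keyed by (date, hour) is a hypothetical helper structure, not the dataset's actual layout.

```python
from datetime import date, timedelta

def impute_load(day, hour, load):
    """Fill a missing hourly load (kW) using the day-dependent averaging rule.

    `load` is an assumed helper: a dict mapping (date, hour) -> kW.
    """
    wd = day.weekday()                      # Mon=0 ... Sun=6
    if wd == 0:                             # Monday: average Saturday and Tuesday
        prev, nxt = day - timedelta(2), day + timedelta(1)
    elif wd == 5:                           # Saturday: average Friday and Monday
        prev, nxt = day - timedelta(1), day + timedelta(2)
    elif wd == 6:                           # Sunday: previous and next weekend
        prev, nxt = day - timedelta(7), day + timedelta(7)
    else:                                   # Tue-Fri: previous and next day
        prev, nxt = day - timedelta(1), day + timedelta(1)
    return (load[(prev, hour)] + load[(nxt, hour)]) / 2
```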

    Low dimension active power load data using autoencoder

    No full text
    Dimensionality reduction (DR) is a key machine learning technique for mapping data from a higher-dimensional space to a lower-dimensional one in order to build predictive machine learning models with fewer parameters. The original active power load dataset is prepared by collecting data from the 33/11 kV substation near Godishala village in Telangana state, India. It consists of 12 features: L(T-1), L(T-2), L(T-3), L(T-4), L(T-24), L(T-48), L(T-72), L(T-96), Temperature, Humidity, Season, and Day. These 12 features are reconstructed into 10 features using an autoencoder, with a training loss of 0.0061 and a validation loss of 0.0062. This dataset is archived at DANS/EASY; the file list and the files themselves are accessible via the DOI link.
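    The 12-to-10 reduction can be illustrated with a tiny linear autoencoder trained by plain gradient descent on synthetic data (a sketch only; the original work presumably used a deep-learning framework, possibly with nonlinear layers, on the real substation data).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((500, 12))          # synthetic stand-in for the 12-feature data

# Linear autoencoder: 12 -> 10 (encoder) -> 12 (decoder), MSE objective.
W_enc = rng.normal(0, 0.1, (12, 10))
W_dec = rng.normal(0, 0.1, (10, 12))
lr = 0.05
loss0 = None
for step in range(500):
    Z = X @ W_enc                  # 10-D latent codes (the reduced dataset)
    X_hat = Z @ W_dec              # reconstruction of the 12 inputs
    err = X_hat - X
    loss = np.mean(err ** 2)
    if loss0 is None:
        loss0 = loss               # remember the initial reconstruction loss
    # Gradient-descent updates (gradients scaled by batch size)
    g_dec = Z.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

reduced = X @ W_enc                # the low-dimensional (10-feature) dataset
```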
