25 research outputs found

    Dark Web Analytics: A Comparative Study of Feature Selection and Prediction Algorithms

    The value and volume of information exchanged through dark-web pages are remarkable. Recently, many studies have shown the value of using machine-learning methods to extract security-related knowledge from those dark-web pages. In this scope, our research focuses on evaluating the best prediction models for analyzing traffic-level data coming from the dark web. Results and analysis showed that feature selection played an important role in identifying the best models; the right combination of features can increase a model's accuracy. For some feature set and classifier combinations, the Src Port and Dst Port both proved to be important features: when available, they were always selected over most other features, and when absent, many other features were selected to compensate for the information they provided. The Protocol feature was never selected, regardless of whether Src Port and Dst Port were available.
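A minimal sketch of the filter-style feature ranking idea the abstract describes, on synthetic traffic-level data. The feature names follow the abstract's wording, but the data, the label rule, and the simple correlation-based score are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Hypothetical traffic-level features; names follow the abstract's wording.
src_port = rng.integers(1024, 65535, n).astype(float)
dst_port = rng.choice([80.0, 443.0, 9001.0], n)          # 9001 ~ Tor ORPort
protocol = rng.choice([6.0, 17.0], n)                    # TCP/UDP, label-independent
duration = rng.exponential(1.0, n)

# Toy label: dark-web-like traffic correlates with the destination port here.
y = (dst_port == 9001.0).astype(float)

X = np.column_stack([src_port, dst_port, protocol, duration])
names = ["Src Port", "Dst Port", "Protocol", "Duration"]

# Filter-style feature ranking: absolute Pearson correlation with the label.
scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
ranked = [names[j] for j in np.argsort(scores)[::-1]]
print(ranked[0])   # "Dst Port" dominates in this synthetic setup
```

In this toy setup the port feature carries essentially all the label information, mirroring the abstract's observation that Src Port/Dst Port dominate when available.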

    A novel gradient based optimizer for solving unit commitment problem

    Secure and economic operation of the power system is one of the prime concerns for engineers of the 21st century. Unit Commitment (UC) is an optimization problem for scheduling the operation of generating units in each hour interval under different loads and various technical and environmental constraints. UC is one of the complex optimization tasks performed by power plant engineers for the regular planning and operation of a power system. Researchers have used a number of metaheuristics (MH) to solve this complex and demanding problem. This work tests the performance of the Gradient Based Optimizer (GBO) on the UC problem. GBO is evaluated on five case studies: power system networks with 4, 10, 20, 40, and 100 units. Simulation results establish the efficacy and robustness of GBO in solving the UC problem compared to other metaheuristics such as Differential Evolution, Enhanced Genetic Algorithm, Lagrangian Relaxation, Genetic Algorithm, Ionic Bond-direct Particle Swarm Optimization, Bacteria Foraging Algorithm, and Grey Wolf Algorithm. The GBO method achieves the lowest average run time among the competing methods, and the best cost function value for all systems used in this work is achieved by the GBO technique.
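To make the cost function concrete, here is a minimal sketch of how a metaheuristic like GBO typically scores a candidate UC solution: quadratic fuel costs for committed units plus a penalty for violating the load balance. The unit coefficients below are made-up illustrations, not the benchmark data used in the paper.

```python
import numpy as np

# Illustrative 4-unit data for quadratic fuel cost a + b*P + c*P^2;
# values are hypothetical, not the paper's benchmark coefficients.
a = np.array([150.0, 180.0, 100.0, 120.0])   # $/h
b = np.array([10.0, 11.0, 12.0, 13.0])       # $/MWh
c = np.array([0.002, 0.004, 0.006, 0.008])   # $/MW^2 h

def uc_cost(u, p, demand, penalty=1e4):
    """Cost of one hour: fuel cost of committed units plus a large
    penalty for violating the load balance (a common MH technique)."""
    fuel = np.sum(u * (a + b * p + c * p**2))
    imbalance = abs(np.sum(u * p) - demand)
    return fuel + penalty * imbalance

u = np.array([1, 1, 1, 0])          # units 1-3 committed, unit 4 off
p = np.array([300.0, 250.0, 150.0, 0.0])
print(uc_cost(u, p, demand=700.0))  # meets 700 MW exactly -> fuel cost only
```

A real UC formulation adds start-up costs, minimum up/down times, and spinning-reserve constraints on top of this; the optimizer searches over the binary schedule `u` and the dispatch `p` jointly.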

    A Single-Phase GaN Totem-Pole Bridgeless PFC with an H-Bridge Active Power Decoupling

    This research proposes a single-phase power factor correction (PFC) approach employing a GaN Totem-Pole topology with an H-Bridge Active Power Decoupling (APD) circuit. The proposed topology achieves high efficiency with unity power factor and high power density with minimum losses over a wide range of voltages. Moreover, the GaN Totem-Pole PFC with the H-Bridge APD shows a significant reduction in the total energy storage requirement compared with the GaN Totem-Pole PFC without it: the requirement drops from 143 J to around 3.76 J, and the large aluminum electrolytic DC-link capacitor (1,880 μF) located at the interface between the converter and the DC load is replaced by a much smaller polypropylene film DC-link capacitor (5 μF). The additional H-Bridge APD circuit generates a reactive power that matches and buffers the undesirable low-frequency power ripple caused by the double-line frequency inherent to single-phase conversion, which exists naturally on the AC side and is injected into the converter. The topology comprises three high-switching-frequency GaN legs (100 kHz) and one low (line) frequency leg (60 Hz). The H-Bridge APD circuit consists of two of the high-switching-frequency legs (100 kHz) with 4 GaN FETs, a decoupling capacitor, and an inductor. GaN FETs were used instead of MOSFETs because their higher switching frequency ensures lower switching losses, their higher efficiency leads to lower conduction losses, and they offer lower reverse-recovery losses and higher power density.
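The energy-storage comparison can be sanity-checked with the basic capacitor energy formula. The ~390 V DC-link voltage below is an assumption (the abstract does not state it); with that value the bulk electrolytic capacitor's stored energy reproduces the quoted 143 J figure.

```python
def cap_energy(c_farads, v_volts):
    """Peak energy stored in a capacitor: E = 1/2 * C * V^2."""
    return 0.5 * c_farads * v_volts**2

# Assumed ~390 V DC link (not stated in the abstract); with that value
# the 1,880 uF electrolytic capacitor stores roughly the quoted 143 J.
e_bulk = cap_energy(1880e-6, 390.0)
e_film = cap_energy(5e-6, 390.0)
print(round(e_bulk, 1), round(e_film, 3))  # ≈ 143.0 J vs ≈ 0.38 J
```

Note that the 3.76 J quoted for the APD version lives mainly in the decoupling capacitor, which is sized to buffer the double-line-frequency ripple power rather than to hold the DC-link voltage.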

    A mobile Deep Sparse Wavelet autoencoder for Arabic acoustic unit modeling and recognition

    In this manuscript, we introduce a novel methodology for modeling acoustic units within a mobile architecture, employing a synergistic combination of several motivating techniques: deep learning, sparse coding, and wavelet networks. The core concept involves constructing a Deep Sparse Wavelet Network (DSWN) through the integration of stacked wavelet autoencoders. The DSWN is designed to classify a specific class and discern it from the other classes within a dataset of acoustic units. Mel-frequency cepstral coefficients (MFCC) and perceptual linear predictive (PLP) features are utilized for encoding speech units. This approach is tailored to leverage the computational capabilities of mobile devices by establishing deep networks with minimal connections, thereby reducing computational overhead. The experimental findings demonstrate the efficacy of our system when applied to a segmented corpus of Arabic words. Notwithstanding promising results, we also discuss the limitations of our methodology. One limitation concerns the use of a specific dataset of Arabic words: the generalizability of the DSWN to other contexts requires further investigation. We will also evaluate the impact of speech variations, such as accents, on the performance of our model, for a more nuanced understanding.
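A small sketch of the sparsity mechanism underlying sparse autoencoders such as the ones stacked here: the standard KL-divergence penalty that pushes each hidden unit's mean activation toward a small target rate. This is the generic textbook penalty, offered as an assumed illustration rather than the paper's exact training objective.

```python
import numpy as np

def kl_sparsity_penalty(activations, rho=0.05):
    """KL-divergence sparsity penalty used to train sparse autoencoders:
    sum_j KL(rho || rho_hat_j), where rho_hat_j is the mean activation
    of hidden unit j over the batch (activations assumed in (0, 1))."""
    rho_hat = np.clip(activations.mean(axis=0), 1e-8, 1 - 1e-8)
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

rng = np.random.default_rng(1)
dense = rng.uniform(0.4, 0.6, size=(32, 16))     # units active ~half the time
sparse = rng.uniform(0.01, 0.09, size=(32, 16))  # units active ~5% of the time
print(kl_sparsity_penalty(dense) > kl_sparsity_penalty(sparse))  # True
```

Adding this term to the reconstruction loss is what produces the "minimal connections" behavior the abstract targets for mobile deployment: most hidden units stay near zero for any given input.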

    Developing a multivariate time series forecasting framework based on stacked autoencoders and multi-phase feature selection

    Time series forecasting across different domains has received massive attention as it eases intelligent decision-making activities. Recurrent neural networks and various deep learning algorithms have been applied to modeling and forecasting multivariate time series data. Due to intricate non-linear patterns and significant variations in the randomness of characteristics across various categories of real-world time series data, achieving effectiveness and robustness simultaneously poses a considerable challenge for specific deep-learning models. To fill this gap, we propose a novel prediction framework with a multi-phase feature selection technique, a long short-term memory-based autoencoder, and a temporal convolution-based autoencoder. The multi-phase feature selection retrieves the optimal feature subset and the optimal lag window length for different features. Moreover, a customized stacked autoencoder strategy is employed in the model: the first autoencoder resolves the random weight initialization problem, while the second models the temporal relations between non-linearly correlated features with convolution networks and recurrent neural networks. Finally, the model's ability to generalize, predict accurately, and perform effectively is validated through experiments on three distinct real-world datasets: Energy Appliances, Beijing PM2.5 Concentration, and Solar Radiation. The Energy Appliances dataset consists of 29 attributes, with a training set of 15,464 instances and a testing set of 4,239 instances. The Beijing PM2.5 Concentration dataset has 18 attributes, with 34,952 instances in the training set and 8,760 in the testing set. The Solar Radiation dataset comprises 11 attributes, with 22,857 instances in the training set and 9,797 in the testing set.
    The experiments evaluated the forecasting models using two distinct error measures: root mean square error and mean absolute error. To ensure robust evaluation, the errors were calculated on the identical scale of the data. The results demonstrate the superiority of the proposed model over existing models, with significant advantages in both metrics. For the PM2.5 air quality data, the proposed model's mean absolute error is 7.51 versus 12.45, an improvement of about 40%; similarly, the mean squared error improves from 23.75 to 11.62, about 51%. For the solar radiation dataset, the proposed model yields an improvement of about 34.7% in mean squared error and about 75% in mean absolute error. The recommended framework demonstrates outstanding generalization and outperforms existing models on datasets spanning multiple domains.
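The error measures and the reported relative improvements are straightforward to reproduce. This sketch defines the two metrics and checks the PM2.5 improvement percentages against the numbers quoted above; only the reported error values are taken from the abstract.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true, y_pred):
    """Root mean square error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def improvement(baseline_err, model_err):
    """Relative error reduction in percent, as reported in the abstract."""
    return 100.0 * (baseline_err - model_err) / baseline_err

# Reproducing the reported PM2.5 improvement figures:
print(round(improvement(12.45, 7.51), 1))   # MAE: ~39.7 -> "about 40%"
print(round(improvement(23.75, 11.62), 1))  # MSE: ~51.1 -> "about 51%"
```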

    Supersonic Inviscid Flow Around Bodies of Revolution: Empirical and Numerical Calculation

    Computed data on the wave drag coefficient, stagnation temperature, shock-layer thickness, and other gas-dynamic parameters were used to show acceptable agreement between numerical and empirical results for supersonic flow around a sharp circular cone, a circular cone with a spherical nose, and a truncated cone. From the analysis of the numerical data, which provides insight into the physical phenomena considered, it can be concluded that the empirical relations used can be recommended for the verification of newly developed computational fluid dynamics software, as well as for assessing the properties of the computational grids employed. This makes it possible to obtain more accurate results and to resolve such features of supersonic flow as shock waves and contact discontinuities.
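One of the parameters compared above, the stagnation temperature, has a standard closed-form reference for a calorically perfect gas; a sketch of that isentropic relation is shown here as the kind of analytical baseline a CFD verification would use (the abstract does not list its specific empirical formulas).

```python
def stagnation_temperature_ratio(mach, gamma=1.4):
    """Isentropic stagnation-to-static temperature ratio for a perfect gas:
    T0/T = 1 + (gamma - 1)/2 * M^2. This standard relation is a common
    analytical reference for verifying computed stagnation temperatures."""
    return 1.0 + 0.5 * (gamma - 1.0) * mach**2

print(stagnation_temperature_ratio(2.0))  # ≈ 1.8 for air (gamma = 1.4)
```

Comparing a solver's recovered T0/T against this relation along streamlines outside the shock layer is a quick grid-quality check of the kind the abstract recommends.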

    Requirement Change Prediction Model for Small Software Systems

    The software industry plays a vital role in driving technological advancements. Software projects are complex and consist of many components, so change is unavoidable in these projects. Changes in software requirements must be predicted early to preserve resources, since late changes can lead to project failures. This work focuses on small-scale software systems in which requirements change gradually. It provides a probabilistic prediction model that predicts the probability of changes in software requirement specifications. The first part of the work analyzes the changes in software requirements due to certain variables with the help of stakeholders, developers, and experts via a questionnaire. The proposed model then incorporates their knowledge into a Bayesian network as conditional probabilities of independent and dependent variables. The approach uses the variable elimination method to obtain the posterior probability of revisions to the software requirement document. The model was evaluated by sensitivity analysis and by comparison with existing methods. For a given dataset, the proposed model computed the probability of low-state revisions as 0.42 and of high-state revisions as 0.45. The results show that the proposed approach can accurately predict changes in the requirements document, outperforming existing models.
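To illustrate the inference step, here is a minimal two-variable Bayesian network with made-up conditional probability tables (the paper's actual variables and CPTs come from its questionnaire and are not reproduced here). With one parent, variable elimination reduces to a single marginalizing sum.

```python
import numpy as np

# Hypothetical network: Complexity C in {low, high} -> Revision R in {low, high}.
p_c = np.array([0.6, 0.4])                 # prior P(C)
p_r_given_c = np.array([[0.7, 0.3],        # P(R | C=low)
                        [0.2, 0.8]])       # P(R | C=high)

# Variable elimination: sum out C to get the marginal over revision states.
# P(R) = sum_c P(C=c) * P(R | C=c)
p_r = p_c @ p_r_given_c
print(p_r)

# Posterior over C after observing R=high, by Bayes' rule:
posterior_c = p_c * p_r_given_c[:, 1] / p_r[1]
print(posterior_c)
```

In the paper's larger network, the same sum-out operation is applied variable by variable in an efficient order; the result is the posterior probability of each revision state of the requirements document.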

    A Hybrid Deep Transfer Learning of CNN-Based LR-PCA for Breast Lesion Diagnosis via Medical Breast Mammograms

    One of the most promising research areas in the healthcare industry and the scientific community is AI-based applications for real medical challenges, such as building computer-aided diagnosis (CAD) systems for breast cancer. Transfer learning is one of the recently emerging AI-based techniques that allows rapid learning progress and improves medical imaging diagnosis performance. Although deep learning classification for breast cancer has been widely covered, certain obstacles remain in investigating the independence among the extracted high-level deep features. This work tackles two challenges that still exist when designing effective CAD systems for breast lesion classification from mammograms. The first challenge is to enrich the input information of the deep learning models by generating pseudo-colored images instead of using only the original grayscale inputs. To achieve this goal, two different image preprocessing techniques are used in parallel: contrast-limited adaptive histogram equalization (CLAHE) and pixel-wise intensity adjustment. The original image is preserved in the first channel, while the other two channels receive the processed images, respectively. The generated three-channel pseudo-colored images are fed directly into the input layer of the backbone CNNs to generate more powerful high-level deep features. The second challenge is to overcome the multicollinearity problem that occurs among the highly correlated deep features generated by the deep learning models. A new hybrid processing technique based on Logistic Regression (LR) and Principal Component Analysis (PCA), called LR-PCA, is presented. This process helps to select the significant principal components (PCs) for further use in classification. The proposed CAD system has been examined using two public benchmark datasets, INbreast and mini-MIAS.
    The proposed CAD system achieved the highest accuracies of 98.60% and 98.80% on the INbreast and mini-MIAS datasets, respectively. Such a CAD system appears useful and reliable for breast cancer diagnosis.
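The three-channel pseudo-coloring step can be sketched in a few lines. Global histogram equalization and gamma correction below are simple stand-ins for the paper's CLAHE and pixel-wise intensity adjustment; the channel layout (original image in channel 0, two enhanced versions in channels 1 and 2) follows the abstract.

```python
import numpy as np

def pseudo_color(gray):
    """Stack the original grayscale image with two enhanced versions to
    form a three-channel pseudo-colored image for a CNN's RGB input.
    The enhancements here are simplified stand-ins for CLAHE and
    pixel-wise intensity adjustment."""
    gray = gray.astype(np.float64) / 255.0
    # Stand-in 1: global histogram equalization via the empirical CDF.
    hist, bins = np.histogram(gray, bins=256, range=(0.0, 1.0))
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]
    equalized = np.interp(gray, bins[:-1], cdf)
    # Stand-in 2: pixel-wise gamma adjustment to brighten dark regions.
    adjusted = gray ** 0.5
    return np.stack([gray, equalized, adjusted], axis=-1)

img = np.arange(64, dtype=np.uint8).reshape(8, 8) * 4  # toy 8x8 "mammogram"
rgb = pseudo_color(img)
print(rgb.shape)  # (8, 8, 3)
```

The resulting array matches the shape a pretrained RGB backbone expects, which is what lets the transfer-learning step reuse ImageNet weights without modifying the input layer.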