39 research outputs found

    Intrusion Detection System Evaluation: A Comparative Study of Machine Learning Algorithms

    The need for cheaper and faster delivery in the electronics industry has increased as a result of advances in information technology. Rapid technological development not only makes life simpler but also raises several security concerns. As the Internet has grown over time, the number of attacks conducted online has increased. The intrusion detection system (IDS) is one of the supporting layers that can be used for information security. An IDS offers a clean environment for conducting business and steers clear of suspicious network activity. Securing the user's end of web transactions is the most difficult task in the construction of an e-commerce system. This study examines intrusion detection security techniques. Intrusion detection requires continuous monitoring to adapt to new technology, and this study therefore presents a comparative analysis of adaptive artificial-intelligence-based intrusion detection algorithms. It shows how reinforcement learning (RL) and regression-learning-based intrusion detection systems can be used to solve extremely difficult issues, such as choosing input features and taking limited resources into account.
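The abstract does not show any code; purely as background on the RL mechanism such systems build on, the sketch below runs a tabular Q-learning loop on an invented two-state traffic environment (the states, actions, and rewards are illustrative placeholders, not the paper's actual IDS setup):

```python
import random

# Toy Q-learning sketch: an agent learns to flag (action 1) or pass (action 0)
# traffic states. Environment and rewards are made up for illustration.
random.seed(0)
states = ["benign", "malicious"]
actions = [0, 1]  # 0 = pass, 1 = flag as intrusion
alpha, epsilon = 0.5, 0.1

Q = {(s, a): 0.0 for s in states for a in actions}

def reward(state, action):
    # +1 for flagging malicious or passing benign traffic, -1 otherwise
    return 1.0 if (state == "malicious") == (action == 1) else -1.0

for episode in range(500):
    s = random.choice(states)
    # Epsilon-greedy action selection
    if random.random() < epsilon:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda act: Q[(s, act)])
    r = reward(s, a)
    # One-step Q-update (episodes are single-step, so no bootstrap term)
    Q[(s, a)] += alpha * (r - Q[(s, a)])

# The learned greedy policy should flag malicious and pass benign traffic
policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}
print(policy)
```

A real IDS agent would face a far larger state space (flow features, resource budgets), which is exactly where the feature-selection problem the abstract mentions becomes hard.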

    Long-Term Hourly Scenario Generation for Correlated Wind and Solar Power combining Variational Autoencoders with Radial Basis Function Kernels

    Accurate generation of realistic future scenarios of renewable energy generation is crucial for long-term planning and operation of electrical systems, especially considering the increasing focus on sustainable energy and the growing penetration of renewable generation in energy matrices. These predictions enable power system operators and energy planners to effectively manage the variability and intermittency associated with renewable generation, allowing for better grid stability, improved energy management, and enhanced decision-making processes. In this paper, we propose an innovative method for generating long-term hourly scenarios for wind and solar power generation, taking into consideration the correlation between these two energy sources. To achieve this, we combine the capabilities of a Variational Autoencoder (VAE) with the additional benefits of incorporating the Radial Basis Function (RBF) kernel in our artificial neural network architecture. By combining them, we aim to obtain a latent space with improved regularization properties. To evaluate the effectiveness of our proposed method, we conduct experiments in a representative study scenario, utilizing real-world wind and solar power generation data from the Brazilian system. We compare the scenarios generated by our model with the observed data and with other sets of scenarios produced by a conventional VAE architecture. Our experimental results demonstrate that the proposed method can generate long-term hourly scenarios for wind and solar power generation that are highly correlated, accurately capturing the temporal and spatial characteristics of these energy sources. Taking advantage of the benefits of RBF in obtaining a well-regularized latent space, our approach offers improved accuracy and robustness in generating long-term hourly scenarios for renewable energy generation.
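The full VAE-plus-RBF architecture is beyond the scope of an abstract, but the RBF kernel itself is simple to state: K[i, j] = exp(-gamma * ||x_i - x_j||^2). The NumPy sketch below (with made-up points, not the authors' code or data) shows the pairwise kernel-matrix computation one would apply to latent vectors:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix: K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    # Clamp tiny negative values caused by floating-point cancellation
    return np.exp(-gamma * np.maximum(sq_dists, 0.0))

# Identical points have kernel value 1; distant points approach 0
Z = np.array([[0.0, 0.0], [0.0, 0.0], [5.0, 5.0]])
K = rbf_kernel(Z, Z, gamma=0.5)
print(np.round(K, 3))
```

Because the kernel value decays smoothly with distance, penalizing or structuring latent codes through it encourages nearby codes to behave similarly, which is the regularization effect the abstract refers to.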

    DEVELOPMENT OF R PACKAGE AND EXPERIMENTAL ANALYSIS ON PREDICTION OF THE CO2 COMPRESSIBILITY FACTOR USING GRADIENT DESCENT

    Nowadays, many variants of gradient descent (i.e., methods used in machine learning for regression) have been proposed. Moreover, these algorithms have been widely used to deal with real-world problems. However, few implementations of these algorithms exist as a software library. Therefore, we focused on building a package written in R that includes eleven algorithms based on gradient descent, including: Mini-Batch Gradient Descent (MBGD), Stochastic Gradient Descent (SGD), Stochastic Average Gradient Descent (SAGD), Momentum Gradient Descent (MGD), Accelerated Gradient Descent (AGD), Adagrad, Adadelta, RMSprop, and Adam. Additionally, an experimental analysis on prediction of the CO2 compressibility factor was also conducted. The results show that the accuracy and computational cost are reasonable, with an average root mean square error of 0.0085 and an average simulation time of 0.142 seconds.
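The package itself is written in R; purely as an illustration of the optimizer family it implements, the sketch below runs plain stochastic gradient descent on a synthetic 1-D linear regression problem (not the CO2 dataset) and reports the same RMSE metric the abstract quotes:

```python
import random

# Minimal SGD for 1-D linear regression: fit y = w*x + b by updating the
# parameters on one sample at a time. Data and hyperparameters are synthetic.
random.seed(1)
# Synthetic data: y = 2x + 1 plus small Gaussian noise
data = [(i / 50.0, 2.0 * (i / 50.0) + 1.0 + random.gauss(0, 0.01))
        for i in range(100)]

w, b, lr = 0.0, 0.0, 0.1
for epoch in range(300):
    random.shuffle(data)
    for x, y in data:
        err = (w * x + b) - y
        # Gradient of the squared error for a single sample
        w -= lr * err * x
        b -= lr * err

rmse = (sum((w * x + b - y) ** 2 for x, y in data) / len(data)) ** 0.5
print(round(w, 2), round(b, 2), round(rmse, 4))
```

The other variants the package bundles (momentum, Adagrad, Adam, etc.) differ only in how the per-sample gradient is accumulated and scaled before the parameter update.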

    Identification of Properties Important to Protein Aggregation Using Feature Selection

    Background: Protein aggregation is a significant problem in the biopharmaceutical industry (protein drug stability) and is associated medically with over 40 human diseases. Although a number of computational models have been developed for predicting aggregation propensity and identifying aggregation-prone regions in proteins, little systematic research has been done to determine the physicochemical properties relevant to aggregation and their relative importance to this important process. Such studies may not only help accurately predict peptide aggregation propensities and identify aggregation-prone regions in proteins, but also aid in discovering additional underlying mechanisms governing this process. Results: We use two feature selection algorithms to identify 16 features, out of a total of 560 physicochemical properties, presumably important to protein aggregation. Two predictors (ProA-SVM and ProA-RF) using the selected features are built for predicting peptide aggregation propensity and identifying aggregation-prone regions in proteins. Both methods compare favourably to other state-of-the-art algorithms in cross validation. The identified important properties are fairly consistent with previous studies and bring some new insights into protein and peptide aggregation. One interesting new finding is that aggregation-prone peptide sequences have similar properties to signal peptide and signal anchor sequences. Conclusions: Both predictors are implemented in a freely available web application (http://www.abl.ku.edu/ProA/). We suggest that the quaternary structure of protein aggregates, especially soluble oligomers, may allow the formation of new molecular recognition signals that guide aggregate targeting to specific cellular sites.
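The abstract does not specify the two selection algorithms' internals; as a generic illustration of filter-style feature selection (not the paper's method), the sketch below ranks synthetic "physicochemical" features by absolute correlation with a binary label and keeps the top k:

```python
import numpy as np

# Filter-style feature selection sketch on synthetic data: only features
# 0 and 3 are constructed to be informative about the label.
rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, size=n).astype(float)

X = rng.normal(size=(n, 6))
X[:, 0] += 2.0 * y   # informative feature
X[:, 3] -= 1.5 * y   # informative feature (negative association)

def top_k_features(X, y, k):
    """Rank features by |Pearson correlation| with y and return the top k indices."""
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return sorted(np.argsort(scores)[-k:].tolist())

selected = top_k_features(X, y, k=2)
print(selected)
```

Wrapper methods (like those typically paired with SVM or random forest predictors such as ProA-SVM and ProA-RF) instead score feature subsets by the downstream model's cross-validated performance, which is more expensive but captures feature interactions.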

    TOWARDS ARTIFICIAL NEURAL NETWORK MODEL TO DIAGNOSE THYROID PROBLEMS

    Medical diagnosis can be viewed as a pattern classification problem: based on a set of input features, the goal is to classify a patient as having a particular disorder or as not having it. Thyroid hormone problems are among the most prevalent problems nowadays. In this paper an artificial neural network approach is developed using a back-propagation algorithm in order to diagnose thyroid problems. It takes a number of factors as input and produces an output indicating whether a person has the problem or is healthy. The back-propagation algorithm is found to achieve high sensitivity and specificity.
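The abstract does not list the network's architecture or inputs; as a minimal sketch of the back-propagation mechanism itself, the code below trains a one-hidden-layer network on a toy binary task (the features, labels, and layer sizes are invented, not the paper's thyroid data):

```python
import numpy as np

# One-hidden-layer network trained with backpropagation on a toy task:
# label 1 ("has problem") if either of two binary factors is abnormal.
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [1]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros((1, 1))
lr = 0.5

for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (cross-entropy loss with a sigmoid output unit)
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int).ravel()
print(preds.tolist())
```

A real diagnostic model would use many more input factors (hormone levels, patient history) and would be evaluated on held-out patients for the sensitivity and specificity the abstract reports.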

    Composable probabilistic inference with BLAISE

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 185-190).
    If we are to understand human-level cognition, we must understand how the mind finds the patterns that underlie the incomplete, noisy, and ambiguous data from our senses and that allow us to generalize our experiences to new situations. A wide variety of commercial applications face similar issues: industries from health services to business intelligence to oil field exploration critically depend on their ability to find patterns in vast amounts of data and use those patterns to make accurate predictions. Probabilistic inference provides a unified, systematic framework for specifying and solving these problems. Recent work has demonstrated the great value of probabilistic models defined over complex, structured domains. However, our ability to imagine probabilistic models has far outstripped our ability to programmatically manipulate them and to effectively implement inference, limiting the complexity of the problems that we can solve in practice. This thesis presents BLAISE, a novel framework for composable probabilistic modeling and inference, designed to address these limitations. BLAISE has three components:
    * The BLAISE State-Density-Kernel (SDK) graphical modeling language, which generalizes factor graphs by (1) explicitly representing inference algorithms (and their locality) using a new type of graph node, (2) representing hierarchical composition and repeated substructures in the state space, the interest distribution, and the inference procedure, and (3) permitting the structure of the model to change during algorithm execution.
    * A suite of SDK graph transformations that may be used to extend a model (e.g. to construct a mixture model from a model of a mixture component) or to make inference more effective (e.g. by automatically constructing a parallel tempered version of an algorithm or by exploiting conjugacy in a model).
    * The BLAISE Virtual Machine, a runtime environment that can efficiently execute the stochastic automata represented by BLAISE SDK graphs.
    BLAISE encourages the construction of sophisticated models by composing simpler models, allowing the designer to implement and verify small portions of the model and inference method, and to reuse model components from one task to another. BLAISE decouples the implementation of the inference algorithm from the specification of the interest distribution, even in cases (such as Gibbs sampling) where the shape of the interest distribution guides the inference. This gives modelers the freedom to explore alternate models without slow, error-prone reimplementation. The compositional nature of BLAISE enables novel reinterpretations of advanced Monte Carlo inference techniques (such as parallel tempering) as simple transformations of BLAISE SDK graphs. In this thesis, I describe each of the components of the BLAISE modeling framework and validate the framework by highlighting a variety of sophisticated contemporary models developed by the BLAISE user community. I also present several surprising findings stemming from the BLAISE modeling framework, including that an Infinite Relational Model can be built using exactly the same inference methods as a simple mixture model, that constructing a parallel tempered inference algorithm should be a point-and-click/one-line-of-code operation, and that Markov chain Monte Carlo for probabilistic models with complicated long-distance dependencies, such as a stochastic version of Scheme, can be managed using standard BLAISE mechanisms.
    by Keith Allen Bonawitz. Ph.D.
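The abstract shows no BLAISE code; as background on the kind of transition kernel its State-Density-Kernel graphs compose, here is a standalone Metropolis-Hastings step in Python (a generic MCMC sketch targeting a standard normal, not the thesis' actual API):

```python
import math
import random

# Generic Metropolis-Hastings kernel: the reusable building block that
# frameworks like BLAISE compose into larger inference procedures.
random.seed(0)

def log_density(x):
    # Target: standard normal, up to an additive constant
    return -0.5 * x * x

def mh_kernel(x, step=1.0):
    """One MH step with a symmetric Gaussian random-walk proposal."""
    proposal = x + random.gauss(0.0, step)
    if math.log(random.random()) < log_density(proposal) - log_density(x):
        return proposal
    return x

# Run the chain; sample mean/variance should roughly match N(0, 1)
x, samples = 0.0, []
for i in range(20000):
    x = mh_kernel(x)
    if i >= 1000:  # discard burn-in
        samples.append(x)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))
```

Because the kernel is a pure function of the state and the target density, swapping densities or chaining kernels (e.g. for parallel tempering) requires no change to the kernel itself, which is the composability argument the thesis makes at the framework level.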

    Providing Information by Resource-Constrained Data Analysis

    The Collaborative Research Center SFB 876 (Providing Information by Resource-Constrained Data Analysis) brings together the research fields of data analysis (Data Mining, Knowledge Discovery in Data Bases, Machine Learning, Statistics) and embedded systems and enhances their methods such that information from distributed, dynamic masses of data becomes available anytime and anywhere. The research center approaches these problems with new algorithms respecting the resource constraints in the different scenarios. This Technical Report presents the work of the members of the integrated graduate school