8 research outputs found

    COMPARISON OF LINEAR REGRESSIONS AND NEURAL NETWORKS FOR FORECASTING ELECTRICITY CONSUMPTION

    Electricity plays a major role in everyday human life. Forecasting electricity consumption can guide government strategy for future energy use and development, but complex, non-linear consumption data are a challenge: traditional time series models such as linear regression cannot capture non-linear and complex relationships in the data, whereas neural networks can. This study aims to show that the electricity consumption datasets used here contain non-linear relationships and to compare the forecasting performance of linear regression and neural networks. Experiments were carried out with linear regression and neural network models on electricity consumption dataset A and dataset B, and the resulting RMSE values were compared. On dataset A, linear regression obtained an RMSE of 0.032 and the neural network an RMSE of 0.015; on dataset B, linear regression obtained an RMSE of 0.488 and the neural network an RMSE of 0.466. The neural networks therefore yield smaller RMSE values than the linear regressions, indicating that they can handle the non-linearity present in both datasets and achieve better forecasting performance than linear regression.
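
    As a rough illustration of the comparison described above, the following sketch fits a linear regression and a small neural network to lagged values of a synthetic consumption series and reports the test RMSE of each. The data, lag window and network size are assumptions; the paper's datasets A and B and its exact model configurations are not reproduced here.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error
    from sklearn.neural_network import MLPRegressor

    # Synthetic stand-in for an electricity consumption series
    # (datasets A and B from the paper are not public here): trend + daily cycle + noise.
    rng = np.random.default_rng(0)
    t = np.arange(2000)
    load = 0.5 + 0.3 * np.sin(2 * np.pi * t / 24) + 1e-4 * t + 0.05 * rng.standard_normal(t.size)

    # Frame forecasting as supervised learning: predict the next value from the last 24.
    lags = 24
    X = np.array([load[i - lags:i] for i in range(lags, load.size)])
    y = load[lags:]
    split = int(0.8 * len(y))                      # chronological train/test split
    X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

    models = {
        "linear regression": LinearRegression(),
        "neural network": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
        print(f"{name}: test RMSE = {rmse:.3f}")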

    Improved particle swarm optimization and gravitational search algorithm for parameter estimation in aspartate pathways

    One of the main issues in biological systems research is characterizing the dynamic behaviour of complex biological processes. Metabolic pathway models are usually used to describe these processes and involve many parameters, so it is important to have an accurate and complete set of parameter values that describes the characteristics of a given model. The parameter values are therefore estimated by fitting the model to experimental data. However, estimating these parameters is typically difficult and in some cases impossible; moreover, the experimental data are often incomplete and suffer from experimental noise. These shortcomings make it challenging to identify the best-fit parameters that represent the actual biological processes involved in biological systems. Previously, computational approaches, namely optimization algorithms, have been used to estimate the model parameters, but they often produce poor estimates for biological system models, resulting in large fitting errors between the model and the experimental data. This research proposes a parameter estimation algorithm that reduces this fitting error: an Improved Particle Swarm Optimization and Gravitational Search Algorithm (IPSOGSA) that obtains near-optimal kinetic parameter values from experimental data. The improvement is a local search step, which increases the chance of reaching the global solution. IPSOGSA outperforms the comparison algorithms in terms of root mean squared error (RMSE) and predictive residual error sum of squares (PRESS), achieving the smallest RMSE of 12.2125 and 0.0304 for the Ile and HSP metabolites, respectively. The predicted results are beneficial for estimating optimal kinetic parameters to improve the production of desired metabolites.
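
    The sketch below illustrates the general PSO-GSA idea behind an algorithm like IPSOGSA: particle velocities combine a GSA-style gravitational acceleration with a PSO-style pull toward the global best, and the objective is the RMSE between a kinetic model and noisy data. The one-reaction Michaelis-Menten model, the data and all algorithm settings are placeholders, and the paper's local-search improvement and the full aspartate pathway model are omitted.

    import numpy as np

    rng = np.random.default_rng(1)

    # Placeholder "experiment": a one-substrate Michaelis-Menten progress curve,
    # standing in for the much larger aspartate pathway model in the paper.
    def simulate(params, t):
        vmax, km = params
        s = np.empty_like(t)
        s[0] = 10.0                                   # initial substrate concentration
        for k in range(1, t.size):                    # simple Euler integration
            ds = -vmax * s[k - 1] / (km + s[k - 1])
            s[k] = max(s[k - 1] + ds * (t[k] - t[k - 1]), 0.0)
        return s

    t = np.linspace(0.0, 20.0, 50)
    data = simulate((1.2, 3.0), t) + 0.05 * rng.standard_normal(t.size)  # noisy observations

    def rmse(params):
        return np.sqrt(np.mean((simulate(params, t) - data) ** 2))

    # Hybrid PSO-GSA in the spirit of PSOGSA: gravitational acceleration from
    # better-ranked particles plus a pull toward the global best.
    def psogsa(obj, bounds, n=30, iters=200, g0=1.0, alpha=20.0, c1=0.5, c2=1.5, w=0.6):
        lo, hi = np.array(bounds).T
        dim = lo.size
        x = lo + (hi - lo) * rng.random((n, dim))
        v = np.zeros((n, dim))
        gbest, gbest_f = None, np.inf
        for it in range(iters):
            f = np.array([obj(p) for p in x])
            if f.min() < gbest_f:
                gbest_f, gbest = f.min(), x[f.argmin()].copy()
            # GSA masses: better (lower) fitness means larger mass
            m = (f.max() - f) / (f.max() - f.min() + 1e-12)
            m = m / (m.sum() + 1e-12)
            g = g0 * np.exp(-alpha * it / iters)      # decaying gravitational constant
            acc = np.zeros_like(x)
            for i in range(n):
                for j in range(n):
                    if i == j:
                        continue
                    r = np.linalg.norm(x[j] - x[i]) + 1e-12
                    acc[i] += rng.random() * g * m[j] * (x[j] - x[i]) / r
            # PSO-style velocity update mixing the gravitational term and the gbest pull
            v = w * v + c1 * rng.random((n, dim)) * acc + c2 * rng.random((n, dim)) * (gbest - x)
            x = np.clip(x + v, lo, hi)
        return gbest, gbest_f

    best, best_rmse = psogsa(rmse, bounds=[(0.1, 5.0), (0.1, 10.0)])
    print("estimated (vmax, km):", best, "RMSE:", round(best_rmse, 4))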

    Coefficient Adaptation Method for the Zwart Model

    The coefficient adaptation problem is often encountered in CFD simulations, as the accuracy of simulation results depends strongly on the empirical coefficients of the mathematical models. Cavitation simulation is a typical CFD application, and researchers have proposed methods to optimize the empirical coefficients of the cavitation model. However, these methods only yield constant values, which are not adaptive to all operating conditions. This paper focuses on the condensation and evaporation coefficients of the Zwart model and considers quasi-steady cavitating flows around a 2-D NACA66(MOD) hydrofoil. For the first time, we give a formal description of the coefficient adaptation problem and put forward a method to model the relationship between the best coefficient values and the operating conditions. We designed and implemented a coefficient adaptation platform built on OpenFOAM and validated the best coefficient values predicted by our method. The overall results show that the predicted coefficient values increase accuracy by 12% on average compared with the default values and the values tuned by Morgut, indicating that our method can effectively solve the coefficient adaptation problem for the Zwart model. We believe the proposed method can be extended to other mathematical models in practical use.
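
    A minimal sketch of the coefficient adaptation idea: given a hypothetical table of per-condition tuned Zwart coefficients, fit a model that maps operating conditions to the best evaporation and condensation coefficients and query it for a new condition. The operating-condition variables, the coefficient values and the linear model form are all assumptions and are not taken from the paper.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical tuning results: each row of `conditions` is an operating point
    # (cavitation number sigma, angle of attack in degrees); the matching row of
    # `best_coeffs` holds the evaporation and condensation coefficients (Cv, Cc)
    # that best matched experiment at that point.  All values are invented.
    conditions = np.array([
        [1.00, 4.0],
        [1.25, 4.0],
        [1.50, 6.0],
        [1.75, 6.0],
        [2.00, 8.0],
    ])
    best_coeffs = np.array([
        [55.0, 0.010],
        [50.0, 0.012],
        [48.0, 0.015],
        [44.0, 0.018],
        [40.0, 0.020],
    ])

    # One regression model per coefficient: operating condition -> best value.
    cv_model = LinearRegression().fit(conditions, best_coeffs[:, 0])
    cc_model = LinearRegression().fit(conditions, best_coeffs[:, 1])

    # Predict adapted coefficients for an unseen operating condition; a solver
    # set-up script could then write them into the case configuration.
    new_condition = np.array([[1.6, 6.0]])
    cv = cv_model.predict(new_condition)[0]
    cc = cc_model.predict(new_condition)[0]
    print(f"adapted Zwart coefficients: Cv = {cv:.2f}, Cc = {cc:.4f}")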

    APPLYING PARTICLE SWARM OPTIMIZATION TO ESTIMATE PSYCHOMETRIC MODELS WITH CATEGORICAL RESPONSES

    Current psychometrics tends to model response data hypothesized to arise from multiple attributes. As a result, estimation complexity has greatly increased, so that traditional approaches such as the expectation-maximization algorithm can fail to produce accurate results. To improve estimation quality, high-dimensional models are estimated via a global optimization approach, particle swarm optimization (PSO), an efficient stochastic method for handling this complexity. PSO has been widely used in machine learning but remains less known in the psychometrics community. Details on integrating the proposed approach into current psychometric model estimation practice are provided, and the algorithm tuning process and the accuracy of the proposed approach are demonstrated with simulations. As an illustration, the proposed approach is applied to log-linear cognitive diagnosis models and multidimensional item response theory models, two fairly popular yet challenging model families used in assessment and evaluation research to explain how participants respond to item-level stimuli. The aim of this dissertation is to bridge the gap between psychometric modeling and machine learning estimation techniques.
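
    The following sketch shows how a plain global-best PSO can maximize the marginal likelihood of an item response model. A simple unidimensional 2PL model stands in for the log-linear cognitive diagnosis and multidimensional IRT models treated in the dissertation; the simulated data, quadrature scheme and PSO settings are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate responses from a 2PL IRT model: 300 persons, 5 items.
    n_persons, n_items = 300, 5
    true_a = rng.uniform(0.8, 2.0, n_items)        # discriminations
    true_b = rng.uniform(-1.5, 1.5, n_items)       # difficulties
    theta = rng.standard_normal(n_persons)         # latent abilities
    p = 1.0 / (1.0 + np.exp(-true_a * (theta[:, None] - true_b)))
    y = (rng.random((n_persons, n_items)) < p).astype(float)

    # Marginal log-likelihood with Gauss-Hermite quadrature over the N(0, 1) ability prior.
    nodes, weights = np.polynomial.hermite.hermgauss(21)
    q_theta = np.sqrt(2.0) * nodes
    q_w = weights / np.sqrt(np.pi)

    def neg_marginal_loglik(params):
        a, b = params[:n_items], params[n_items:]
        pq = 1.0 / (1.0 + np.exp(-a * (q_theta[:, None] - b)))      # (Q, J)
        pq = np.clip(pq, 1e-12, 1 - 1e-12)
        ll = y @ np.log(pq).T + (1.0 - y) @ np.log(1.0 - pq).T      # (N, Q)
        return -np.sum(np.log(np.exp(ll) @ q_w + 1e-300))

    # Plain global-best PSO (the dissertation's tuning choices are not reproduced here).
    def pso(obj, lo, hi, n=40, iters=300, w=0.7, c1=1.5, c2=1.5):
        dim = lo.size
        x = lo + (hi - lo) * rng.random((n, dim))
        v = np.zeros((n, dim))
        pbest, pbest_f = x.copy(), np.array([obj(p) for p in x])
        g = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            v = w * v + c1 * rng.random((n, dim)) * (pbest - x) + c2 * rng.random((n, dim)) * (g - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([obj(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            g = pbest[pbest_f.argmin()].copy()
        return g

    lo = np.concatenate([np.full(n_items, 0.2), np.full(n_items, -3.0)])
    hi = np.concatenate([np.full(n_items, 3.0), np.full(n_items, 3.0)])
    est = pso(neg_marginal_loglik, lo, hi)
    print("estimated discriminations:", np.round(est[:n_items], 2))
    print("estimated difficulties:   ", np.round(est[n_items:], 2))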

    Load forecasting on the user‐side by means of computational intelligence algorithms

    Nowadays, it would be very difficult to deny the need to prioritize sustainable development through energy efficiency at all consumption levels. In this context, an energy management system (EMS) is a suitable option for continuously improving energy efficiency, particularly on the user side. An EMS is a set of technological tools that manages energy consumption information and allows its analysis. EMS, in combination with information technologies, has given rise to the intelligent EMS (iEMS), which, besides supporting the monitoring and reporting functions of an EMS, is able to model, forecast, control and diagnose energy consumption in a predictive way. The main objective of an iEMS is to continuously improve energy efficiency (on-line) as automatically as possible. The core of an iEMS is its load modeling and forecasting system (LMFS), which takes advantage of historical information on energy consumption and energy-related variables in order to model and forecast load profiles and, if available, generator profiles. These models and forecasts are the main information used by iEMS applications for control and diagnosis, which is why this thesis focuses on the study, analysis and development of LMFS on the user side. Applying an LMFS on the user side to support an iEMS requires specific characteristics that are not needed in other areas of load forecasting. First, user-side load profiles (LPs) behave more randomly than those found, for example, in power system distribution or generation, which makes modeling and forecasting more difficult. Second, on the user side (for example, an industrial user) there is a large number and variety of places that can be monitored, modeled and forecasted, which also vary in provenance and nature. Thus, an LMFS requires, on the one hand, a high degree of autonomy to generate the demanded models automatically or autonomously and, on the other hand, a high level of adaptability so that it can model and forecast different types of loads and different types of energy. The LMFS addressed here are therefore those that aim not only for accuracy but also for adaptability and autonomy. Seeking these objectives, this thesis proposes three novel LMFS schemes based on hybrid algorithms from computational intelligence, signal processing and statistical theory. The first aims to improve adaptability while keeping accuracy and autonomy in mind. Called the evolutionary training algorithm (ETA), it is based on an adaptive network-based fuzzy inference system (ANFIS) trained by a multi-objective genetic algorithm (MOGA) instead of its traditional training algorithm. As a result of this hybrid, the generalization capacity is improved (avoiding overfitting) and an easily adaptable training algorithm for new adaptive networks based on traditional ANFIS is obtained. The second scheme deals with LMFS autonomy, so that models of multiple loads can be built automatically. As in the previous proposal, an ANFIS and a MOGA are used, but in this case the MOGA is used to find a near-optimal configuration for the ANFIS instead of training it. The LMFS relies on this configuration to work properly, as well as to maintain accuracy and generalization capabilities. Real data from an industrial scenario were used to test the proposed scheme, and the multi-site modeling and self-configuration results were satisfactory.
    Furthermore, other algorithms were designed and successfully tested for processing raw data, namely outlier detection and gap padding. The last of the proposed approaches seeks to improve accuracy while keeping autonomy and adaptability. It takes advantage of dominant patterns (DPs) that have a lower time resolution than the target LP and are therefore easier to model and forecast. The Hilbert-Huang transform and Hilbert spectral analysis are used to detect and select the DPs; the selected DPs feed a proposed scheme of partial models (PMs) based on parallel ANFIS or artificial neural networks (ANNs), which extract the information and pass it to the main PM. As a result, LMFS accuracy improves and the noisiness of user-side LPs is mitigated. Additionally, to compensate for the added complexity, self-configured sub-LMFS versions are used for each PM. This point is fundamental since the better the configuration, the better the accuracy of the model, and consequently the better the information provided to the main partial model. Finally, to close this thesis, an outlook on iEMS trends and an outline of several hybrid algorithms pending study and testing are presented.
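
    A minimal sketch of the partial-model idea from the third scheme described above: a smooth dominant pattern is separated from a noisy synthetic load series, each component gets its own small forecaster, and the forecasts are summed. A 24-hour moving average stands in for the Hilbert-Huang-based dominant-pattern selection and plain MLPs stand in for the ANFIS/ANN partial models; all data and settings are invented.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Synthetic user-side load: a smooth daily/weekly pattern (the "dominant
    # pattern") plus strong high-frequency noise, hourly resolution over ~4 months.
    t = np.arange(24 * 120)
    dominant = 1.0 + 0.4 * np.sin(2 * np.pi * t / 24) + 0.1 * np.sin(2 * np.pi * t / (24 * 7))
    load = dominant + 0.25 * rng.standard_normal(t.size)

    # Stand-in for the HHT-based dominant-pattern selection: a 24-hour moving average.
    kernel = np.ones(24) / 24
    dp = np.convolve(load, kernel, mode="same")
    residual = load - dp

    def to_supervised(series, lags=24):
        X = np.array([series[i - lags:i] for i in range(lags, series.size)])
        return X, series[lags:]

    def fit_partial_model(series):
        X, y = to_supervised(series)
        split = int(0.8 * len(y))
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        model.fit(X[:split], y[:split])
        return model, X[split:], y[split:]

    # One partial model per component; the combined forecast is their sum.
    dp_model, dp_Xte, dp_yte = fit_partial_model(dp)
    res_model, res_Xte, res_yte = fit_partial_model(residual)
    combined_pred = dp_model.predict(dp_Xte) + res_model.predict(res_Xte)
    combined_true = dp_yte + res_yte
    rmse = np.sqrt(np.mean((combined_pred - combined_true) ** 2))
    print(f"combined partial-model RMSE on held-out hours: {rmse:.3f}")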

    The use of computational intelligence for security in named data networking

    Information-Centric Networking (ICN) has recently been considered a promising paradigm for the next-generation Internet, shifting from the sender-driven end-to-end communication paradigm to a receiver-driven content retrieval paradigm. In ICN, content (rather than hosts, as in the IP-based design) plays the central role in communications. This change from host-centric to content-centric has several significant advantages, such as reduced network load, low dissemination latency, and scalability. One of the main design requirements for ICN architectures, from the beginning of their design, has been strong security. Named Data Networking (NDN), also referred to as Content-Centric Networking (CCN) or Data-Centric Networking (DCN), is one of these architectures and is the focus of an ongoing research effort aiming to become the way the Internet will operate in the future. Existing research into the security of NDN is at an early stage and many designs are still incomplete; to make NDN a fully working system at Internet scale, there are still many missing pieces to fill in. In this dissertation, we study the four most important security issues in NDN, namely anomaly detection, DoS/DDoS attacks, congestion control, and cache pollution attacks, in order to defend against new forms of potentially unknown attacks, ensure privacy, achieve high availability, and block, or at least limit the effectiveness of, malicious network traffic belonging to attackers. To protect the NDN infrastructure, we need flexible, adaptable and robust defense systems that can make intelligent, real-time decisions and enable network entities to behave in an adaptive and intelligent manner. In this context, the characteristics of Computational Intelligence (CI) methods, such as adaptation, fault tolerance, high computational speed and resilience to noisy information, make them suitable for the problem of NDN security and highlight promising new research directions. Hence, we propose new hybrid CI-based methods to make NDN a more reliable and viable architecture for the future Internet.
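
    As one illustration of applying CI to NDN security, the sketch below trains a small neural network to separate normal per-prefix traffic statistics from Interest-flooding-like behaviour. The feature set, the synthetic data and the classifier are illustrative assumptions only and do not reproduce the methods developed in the dissertation.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Synthetic per-prefix statistics for an NDN router, one row per measurement
    # window: [Interest rate, ratio of unsatisfied Interests, mean PIT entry
    # lifetime (s), cache hit ratio].  Values are invented; a real deployment
    # would collect these from the forwarder.
    n = 2000
    normal = np.column_stack([
        rng.normal(200, 40, n),      # Interests/s
        rng.beta(2, 20, n),          # unsatisfied ratio (low)
        rng.normal(0.05, 0.02, n),   # PIT entry lifetime
        rng.beta(8, 4, n),           # cache hit ratio (high)
    ])
    attack = np.column_stack([       # Interest-flooding-like behaviour
        rng.normal(1500, 300, n),    # much higher Interest rate
        rng.beta(20, 2, n),          # mostly unsatisfied Interests
        rng.normal(0.9, 0.2, n),     # long-lived PIT entries
        rng.beta(2, 8, n),           # cache hit ratio drops
    ])
    X = np.vstack([normal, attack])
    y = np.concatenate([np.zeros(n), np.ones(n)])

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0))
    clf.fit(X_train, y_train)
    print(f"held-out detection accuracy: {clf.score(X_test, y_test):.3f}")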