90 research outputs found

    Forecasting Stock Exchange Data using Group Method of Data Handling Neural Network Approach

    The increasing uncertainty of the natural world has motivated computer scientists to seek out the best approaches to technological problems. Nature-inspired problem-solving approaches include meta-heuristic methods based on evolutionary computation and swarm intelligence. One problem with a significant impact on information is forecasting an exchange index, a serious concern as stocks grow and decline and many reports document losses of financial resources or profitability. When an exchange comprises an extensive set of diverse stocks, dedicated concepts and mechanisms for physical security, network security, encryption, and permissions are needed to safeguard it and to predict its future needs. This study aimed to show that group method of data handling (GMDH)-type neural networks can be used efficiently, and it applies them to the classification of numerical results. Such modeling serves to display the precision of GMDH-type neural networks. Following the US withdrawal from the Joint Comprehensive Plan of Action in April 2018, the behavior of the stock exchange data stream changed in ways that common algorithms could not predict correctly or fit satisfactorily in a network. This paper demonstrated that the group method of data handling is well suited to improving inductive self-organizing approaches for addressing realistic, severe problems such as the Iranian financial market crisis. A new trajectory would be used to verify the consistency of the obtained equations and hence the models' validity.
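
    To make the idea of a GMDH-type network concrete, the sketch below grows layers of pairwise quadratic (Ivakhnenko) polynomial neurons fitted by least squares and keeps adding layers only while the error on a held-out split improves. It is a minimal, hypothetical illustration on a synthetic lagged series, not the model or data used in the study.

    # Minimal GMDH-style self-organizing network sketch (illustrative only).
    # Each "neuron" is a quadratic polynomial of two inputs fitted by least squares;
    # layers are grown while the best validation error keeps improving.
    import numpy as np
    from itertools import combinations

    def _design(xi, xj):
        # Ivakhnenko polynomial terms: 1, xi, xj, xi*xj, xi^2, xj^2
        return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])

    def gmdh_fit(X_tr, y_tr, X_va, y_va, width=6, max_layers=4):
        layers, best_err = [], np.inf
        F_tr, F_va = X_tr, X_va
        for _ in range(max_layers):
            candidates = []
            for i, j in combinations(range(F_tr.shape[1]), 2):
                coef, *_ = np.linalg.lstsq(_design(F_tr[:, i], F_tr[:, j]), y_tr, rcond=None)
                err = np.mean((_design(F_va[:, i], F_va[:, j]) @ coef - y_va) ** 2)
                candidates.append((err, i, j, coef))
            candidates.sort(key=lambda c: c[0])
            selected = candidates[:width]
            if selected[0][0] >= best_err:          # stop when validation error no longer improves
                break
            best_err = selected[0][0]
            layers.append(selected)
            F_tr = np.column_stack([_design(F_tr[:, i], F_tr[:, j]) @ c for _, i, j, c in selected])
            F_va = np.column_stack([_design(F_va[:, i], F_va[:, j]) @ c for _, i, j, c in selected])
        return layers, best_err

    # Hypothetical usage on a synthetic random-walk series turned into lagged features.
    rng = np.random.default_rng(0)
    series = np.cumsum(rng.normal(size=400))
    X = np.column_stack([series[i:i - 5] for i in range(5)])   # 5 lags as inputs
    y = series[5:]                                             # next value as target
    split = 300
    _, val_mse = gmdh_fit(X[:split], y[:split], X[split:], y[split:])
    print("validation MSE:", val_mse)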

    Adaptive Heterogeneous Multi-Population Cultural Algorithm

    Optimization problems are a class of problems in which the goal is to make a system as effective as possible. The goal of this research area is to design algorithms that solve optimization problems effectively and efficiently. Being effective means that the algorithm should be able to find the optimal solution (or near-optimal solutions), while efficiency refers to the computational effort required to find it. In other words, an optimization algorithm should find the optimal solution in an acceptable time. The aim of this dissertation is therefore to develop a new algorithm that is both effective and efficient. Various kinds of algorithms have been proposed to deal with optimization problems. Evolutionary Algorithms (EAs) are a subset of population-based methods that have been applied successfully to optimization problems. This dissertation investigates evolutionary methods, and especially Cultural Algorithms (CAs). The investigation reveals that there is room for improving the existing EAs. Consequently, a number of EAs are proposed to deal with different optimization problems, and they offer better performance than the state-of-the-art methods. The main contribution of this dissertation is a new architecture for optimization algorithms called the Heterogeneous Multi-Population Cultural Algorithm (HMP-CA). The new architecture first uses a decomposition technique to divide the given problem into a number of sub-problems, and then assigns the sub-problems to different local CAs to be optimized separately in parallel. To evaluate the proposed architecture, it is applied to numerical optimization problems. The evaluation results reveal that HMP-CA is fully effective in that it finds the optimal solution in every single run. Furthermore, HMP-CA outperforms the state-of-the-art methods by offering more efficient performance. The proposed HMP-CA is further improved by incorporating an adaptive decomposition technique. The improved version, called Adaptive HMP-CA (A-HMP-CA), is evaluated on large-scale global optimization problems. The results of this evaluation show that A-HMP-CA significantly outperforms the state-of-the-art methods in terms of both effectiveness and efficiency.
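
    The decomposition idea behind HMP-CA can be illustrated with a deliberately simplified sketch: the decision variables are split into groups, and each group is refined by its own local search against a shared best-so-far context vector. Here the local optimizers are plain (1+1)-style mutation loops rather than full Cultural Algorithms with belief spaces, and all names and parameters are invented for illustration.

    # Illustrative decomposition-based optimizer in the spirit of HMP-CA (assumed,
    # heavily simplified): variables are split into sub-problems, each of which is
    # refined by its own simple mutation loop against a shared context vector.
    import numpy as np

    def sphere(x):                       # example objective: sum of squares
        return float(np.sum(x ** 2))

    def decompose(dim, groups):
        return np.array_split(np.arange(dim), groups)

    def hmp_sketch(f, dim=20, groups=4, iters=200, seed=1):
        rng = np.random.default_rng(seed)
        context = rng.uniform(-5, 5, dim)          # shared best-so-far solution
        best = f(context)
        subsets = decompose(dim, groups)
        step = np.full(groups, 1.0)
        for _ in range(iters):
            for g, idx in enumerate(subsets):      # each "local optimizer" improves its own variables
                trial = context.copy()
                trial[idx] += rng.normal(0.0, step[g], idx.size)
                val = f(trial)
                if val < best:                     # accept improvement into the context vector
                    context, best = trial, val
                    step[g] *= 1.1
                else:
                    step[g] *= 0.9
        return context, best

    x_best, f_best = hmp_sketch(sphere)
    print("best value found:", f_best)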

    Connectionist Theory Refinement: Genetically Searching the Space of Network Topologies

    An algorithm that learns from a set of examples should ideally be able to exploit the available resources of (a) abundant computing power and (b) domain-specific knowledge to improve its ability to generalize. Connectionist theory-refinement systems, which use background knowledge to select a neural network's topology and initial weights, have proven to be effective at exploiting domain-specific knowledge; however, most do not exploit available computing power. This weakness occurs because they lack the ability to refine the topology of the neural networks they produce, thereby limiting generalization, especially when given impoverished domain theories. We present the REGENT algorithm, which uses (a) domain-specific knowledge to help create an initial population of knowledge-based neural networks and (b) genetic operators of crossover and mutation (specifically designed for knowledge-based networks) to continually search for better network topologies. Experiments on three real-world domains indicate that our new algorithm is able to significantly increase generalization compared to a standard connectionist theory-refinement system, as well as our previous algorithm for growing knowledge-based networks.
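
    The sketch below shows only the generic ingredient of such a system: a genetic search over hidden-layer topologies, using scikit-learn's MLPClassifier as a stand-in learner on synthetic data. REGENT's defining components, the domain-theory-derived initial networks and the knowledge-aware crossover and mutation operators, are omitted, so this is a rough illustration rather than the published algorithm.

    # Generic genetic search over MLP hidden-layer topologies (illustrative sketch;
    # REGENT additionally seeds the population from a domain theory, omitted here).
    import random
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=300, n_features=10, random_state=0)

    def fitness(topology):
        net = MLPClassifier(hidden_layer_sizes=tuple(topology), max_iter=500, random_state=0)
        return cross_val_score(net, X, y, cv=3).mean()        # generalization estimate

    def mutate(t):
        t = list(t)
        if random.random() < 0.3 and len(t) < 3:
            t.append(random.randint(4, 16))                   # occasionally add a hidden layer
        else:
            i = random.randrange(len(t))
            t[i] = max(2, t[i] + random.choice([-4, -2, 2, 4]))   # resize one hidden layer
        return t

    def crossover(a, b):
        if min(len(a), len(b)) < 2:
            return list(a) if random.random() < 0.5 else list(b)
        cut = random.randrange(1, min(len(a), len(b)))
        return a[:cut] + b[cut:]

    random.seed(0)
    population = [[random.randint(4, 32)] for _ in range(6)]  # start with single-layer nets
    for gen in range(4):
        parents = sorted(population, key=fitness, reverse=True)[:3]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(3)]
        population = parents + children
    best = max(population, key=fitness)
    print("best topology:", best, "accuracy:", round(fitness(best), 3))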

    Soft computing for tool life prediction: a manufacturing application of neural-fuzzy systems

    Tooling technology is recognised as an element of vital importance within the manufacturing industry. Critical tooling decisions related to tool selection, tool life management, optimal determination of cutting conditions, and on-line machining process monitoring and control are based on the existence of reliable, detailed process models. Among the decisive factors of process planning and control activities, tool wear and tool life considerations hold a dominant role. Yet both off-line tool life prediction and real-time tool wear identification and prediction are still issues open to research. The main reason lies with the large number of factors influencing tool wear, some of them of a stochastic nature. The inherent variability of workpiece materials, cutting tools and machine characteristics further increases the uncertainty of the machining optimisation problem. In machining practice, tool life prediction is based on the availability of data provided by tool manufacturers, machining data handbooks or the shop floor. This thesis recognises the need for a data-driven, flexible and yet simple approach to predicting tool life. Model building from sample data depends on the availability of a sufficiently rich cutting data set. Flexibility requires a tool-life model with high adaptation capacity. Simplicity calls for a solution with low complexity that is easily interpretable by the user. A neural-fuzzy systems approach is adopted which meets these targets and predicts tool life for a wide range of turning operations. A literature review has been carried out, covering areas such as tool wear and tool life, neural networks, fuzzy set theory and neural-fuzzy systems integration. Various sources of tool life data have been examined. It is concluded that a combined use of simulated data from existing tool life models and real-life data is the best policy to follow. The neuro-fuzzy tool life model developed is constructed by employing neural network-like learning algorithms. The trained model stores the learned knowledge in the form of fuzzy IF-THEN rules in its structure, thus featuring the desired transparency. Low model complexity is ensured by employing an algorithm which constructs a rule base of reduced size from the available data. In addition, the flexibility of the developed model is demonstrated by the ease, speed and efficiency of its adaptation on the basis of new tool life data. The development of the neuro-fuzzy tool life model is based on the Fuzzy Logic Toolbox (v1.0) of MATLAB (v4.2c1), a dedicated tool which facilitates the design and evaluation of fuzzy logic systems. Extensive results are presented which demonstrate the neuro-fuzzy model's predictive performance. The model can be directly employed within a process planning system, facilitating the optimisation of turning operations. Recommendations are made for further enhancements in this direction.
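
    As a toy illustration of the kind of transparent fuzzy IF-THEN rule base such a model learns, the sketch below evaluates a hand-written zero-order Takagi-Sugeno-style rule base for tool life from cutting speed and feed, with Gaussian memberships and weighted-average defuzzification. All membership parameters, rules and numbers are invented and bear no relation to the thesis's MATLAB model.

    # Tiny zero-order TSK-style fuzzy evaluator for tool life (purely illustrative;
    # memberships, rules and numbers are invented, not the thesis's model).
    import numpy as np

    def gauss(x, c, s):
        return np.exp(-0.5 * ((x - c) / s) ** 2)

    # Linguistic terms for cutting speed (m/min) and feed (mm/rev): (centre, spread)
    speed_mf = {"low": (100, 40), "high": (250, 60)}
    feed_mf = {"low": (0.1, 0.08), "high": (0.4, 0.12)}

    # IF-THEN rules: (speed term, feed term) -> crisp tool-life consequent (min)
    rules = {
        ("low", "low"): 60.0,
        ("low", "high"): 35.0,
        ("high", "low"): 20.0,
        ("high", "high"): 8.0,
    }

    def predict_tool_life(speed, feed):
        num = den = 0.0
        for (s_term, f_term), life in rules.items():
            w = gauss(speed, *speed_mf[s_term]) * gauss(feed, *feed_mf[f_term])  # rule firing strength
            num += w * life
            den += w
        return num / den                                  # weighted-average defuzzification

    print(predict_tool_life(speed=150, feed=0.2))         # interpolates between the four rules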

    Flexible and Intelligent Learning Architectures for SOS (FILA-SoS)

    Multi-faceted systems of the future will entail complex logic and reasoning, with many levels of reasoning in intricate arrangement. The organization of these systems involves a web of connections and demonstrates self-driven adaptability. They are designed for autonomy and may exhibit emergent behavior that can be visualized. Our quest to handle the complexity of designing and operating these systems continues. The challenge in Complex Adaptive Systems design is to engineer an organized complexity that will allow a system to achieve its goals. This report attempts to push the boundaries of research in complexity by identifying challenges and opportunities. A complex adaptive system-of-systems (CASoS) approach is developed to handle the huge uncertainty in socio-technical systems.

    STATISTICAL MACHINE LEARNING BASED MODELING FRAMEWORK FOR DESIGN SPACE EXPLORATION AND RUN-TIME CROSS-STACK ENERGY OPTIMIZATION FOR MANY-CORE PROCESSORS

    The complexity of many-core processors continues to grow as a larger number of heterogeneous cores are integrated on a single chip. Such systems-on-chip contain computing structures ranging from complex out-of-order cores, simple in-order cores, digital signal processors (DSPs), graphics processing units (GPUs), application-specific processors, hardware accelerators, I/O subsystems and network-on-chip interconnects to large caches arranged in complex hierarchies. While the industry focus is on putting a higher number of cores on a single chip, the key challenge is to optimally architect these many-core processors such that performance, energy and area constraints are satisfied. The traditional approach to processor design through extensive cycle-accurate simulations is ill-suited to designing many-core processors because of the large microarchitecture design space that must be explored. Additionally, it is hard to optimize such complex processors and the applications that run on them statically at design time such that performance and energy constraints are met under dynamically changing operating conditions. This dissertation establishes a statistical machine learning based modeling framework that enables the efficient design and operation of many-core processors that meet performance, energy and area constraints. We apply the proposed framework to rapidly design the microarchitecture of a many-core processor for multimedia, computer graphics rendering, finance, and data mining applications derived from the PARSEC benchmark suite. We further demonstrate the application of the framework in the joint run-time adaptation of both the application and the microarchitecture such that energy availability constraints are met.
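
    A minimal sketch of the surrogate-modeling idea behind such design space exploration is shown below: a handful of "simulated" configurations train a regression model, which then predicts the metric cheaply over the full space. The design parameters, the stand-in simulator and the energy-per-performance metric are all invented for illustration.

    # Minimal surrogate-based design space exploration sketch (illustrative only:
    # the "simulator", parameters and metric are synthetic stand-ins).
    import numpy as np
    from itertools import product
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical design space: (core count, cache size in MB, frequency in GHz)
    space = np.array(list(product([2, 4, 8, 16], [1, 2, 4, 8], [1.0, 1.5, 2.0, 2.5])))

    def fake_simulator(cfg):
        cores, cache, freq = cfg
        perf = cores ** 0.7 * freq * (1 + 0.1 * np.log2(cache))
        power = 0.5 * cores * freq ** 2 + 0.2 * cache
        return power / perf            # lower "energy per unit performance" is better

    rng = np.random.default_rng(0)
    train_idx = rng.choice(len(space), size=12, replace=False)   # few expensive "simulations"
    X_train = space[train_idx]
    y_train = np.array([fake_simulator(c) for c in X_train])

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
    pred = model.predict(space)                                  # cheap predictions everywhere
    best = space[np.argmin(pred)]
    print("predicted best configuration (cores, cache MB, GHz):", best)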

    Control of the interaction of a gantry robot end effector with the environment by the adaptive behaviour of its joint drive actuators

    The thesis examines a way in which the performance of a robot's electric actuators can be precisely and accurately force controlled where there is a need to maintain a stable, specified contact force with an external environment. It describes the advantages of the proposed research, which eliminates the need for any external sensors and depends solely on the precise torque control of electric motors. The aim of the research is thus the development of a software-based control system, followed by a proposal for the possible inclusion of this control philosophy in the existing range of automated manufacturing techniques. The primary aim of the research is to introduce force-controlled behaviour in the electric actuators when the robot interacts with the environment, by measuring and controlling the contact forces between them. A software control system is developed and implemented on a gantry robot manipulator to follow two-dimensional contours without explicit geometrical knowledge of those contours. The torque signatures from the electric actuators are monitored and maintained within a desired force band. The secondary aim is the optimal design of the software controller structure. Experiments are performed and the mathematical model is validated against conventional Proportional Integral Derivative (PID) control. Fuzzy control is introduced into the software architecture to provide more sophisticated control. An investigation is carried out with a combination of PID and fuzzy logic, depending on the geometrical complexity of the external environment, to achieve the expected results.
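
    A minimal sketch of the force-control idea, assuming a one-dimensional spring-like contact model, is shown below: the end effector approaches at a constant feed until contact is detected from the (simulated) force signal, after which a discrete PID loop regulates the contact force towards the target band. The plant model, gains and numbers are invented for illustration and are not the thesis's controller.

    # Minimal discrete PID force-control loop against a simulated stiff surface
    # (illustrative sketch; plant model, gains and force band are invented).
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_err = 0.0

        def step(self, error):
            self.integral += error * self.dt
            deriv = (error - self.prev_err) / self.dt
            self.prev_err = error
            return self.kp * error + self.ki * self.integral + self.kd * deriv

    stiffness = 2000.0                 # N/m, simulated environment stiffness
    surface = 0.10                     # m, contact position (unknown to the controller)
    target_force, band = 5.0, 0.5      # desired contact force and tolerance (N)
    dt = 0.01
    pid = PID(kp=1e-4, ki=2e-3, kd=1e-6, dt=dt)

    x, log = 0.0, []
    for k in range(600):
        force = max(0.0, stiffness * (x - surface))       # zero force until contact
        if force == 0.0:
            x += 0.002                                    # approach phase: constant feed
        else:
            x += pid.step(target_force - force)           # contact phase: PID on force error
        log.append(force)

    print("final force: %.2f N, inside band: %s"
          % (log[-1], abs(log[-1] - target_force) <= band))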

    Parallel population-based algorithm portfolios: An empirical study

    Although many algorithms have been proposed, no single algorithm is better than the others on all types of problems. Therefore, the search characteristics of different algorithms that show complementary behavior can be combined through portfolio structures to improve performance on a wider set of problems. In this work, a portfolio of the Artificial Bee Colony, Differential Evolution and Particle Swarm Optimization algorithms was constructed, and the first parallel implementation of the population-based algorithm portfolio was carried out by means of a Message Passing Interface environment. The parallel implementation of an algorithm or a portfolio can follow different models such as master-slave, coarse-grained, or a hybrid of both, as used in this study. Hence, the efficiency and running time of various parallel implementations with different parameter values and combinations were investigated on benchmark problems. The performance of the parallel portfolio was compared to those of the single constituent algorithms. The results showed that the proposed models reduced the running time and that the portfolio delivered robust performance compared to each constituent algorithm. It is observed that the speedup gained over the sequential counterpart changed significantly depending on the structure of the portfolio. The portfolio was also applied to the training of neural networks used for time series prediction, and the results demonstrate that it is able to produce good prediction accuracy.
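
    The portfolio idea can be sketched very roughly as below: two differently configured minimal Differential Evolution solvers stand in for the paper's ABC/DE/PSO constituents, run in parallel in separate processes (Python multiprocessing standing in for the MPI environment), and the master keeps the best result. All parameters are illustrative.

    # Toy parallel algorithm portfolio (illustrative): two differently configured
    # minimal DE solvers stand in for the paper's ABC/DE/PSO constituents; Python
    # multiprocessing stands in for the MPI master-slave setup.
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def rastrigin(x):
        return 10 * x.size + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

    def tiny_de(F, CR, dim=10, pop=30, iters=400, seed=0):
        rng = np.random.default_rng(seed)
        P = rng.uniform(-5.12, 5.12, (pop, dim))
        fit = np.array([rastrigin(p) for p in P])
        for _ in range(iters):
            for i in range(pop):
                a, b, c = P[rng.choice(pop, 3, replace=False)]
                trial = np.where(rng.random(dim) < CR, a + F * (b - c), P[i])
                f = rastrigin(trial)
                if f < fit[i]:
                    P[i], fit[i] = trial, f
        return (F, CR), fit.min()

    if __name__ == "__main__":
        configs = [(0.5, 0.9), (0.8, 0.2)]                 # complementary search behaviours
        with ProcessPoolExecutor(max_workers=2) as pool:   # constituents run in parallel
            results = list(pool.map(tiny_de, *zip(*configs)))
        (F, CR), best = min(results, key=lambda r: r[1])   # master keeps the best result
        print(f"portfolio winner: DE(F={F}, CR={CR}) best value: {best:.4f}")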

    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Particle swarm optimization (PSO) is a heuristic global optimization method, originally proposed by Kennedy and Eberhart in 1995, and is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On one hand, we review advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, Tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmony search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we survey applications of PSO in the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, and chemistry and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.
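
    For readers new to the method, a minimal global-best PSO with the standard inertia-weight velocity update is sketched below on a toy objective; it is a bare-bones illustration, not any of the surveyed variants.

    # Minimal global-best PSO with inertia weight on the sphere function
    # (illustrative sketch only).
    import numpy as np

    def sphere(x):
        return float(np.sum(x ** 2))

    def pso(f, dim=10, swarm=30, iters=200, w=0.72, c1=1.49, c2=1.49, seed=0):
        rng = np.random.default_rng(seed)
        X = rng.uniform(-5, 5, (swarm, dim))       # particle positions
        V = np.zeros_like(X)                       # particle velocities
        pbest, pbest_val = X.copy(), np.array([f(x) for x in X])
        for _ in range(iters):
            gbest = pbest[pbest_val.argmin()]      # best position found by the swarm
            r1, r2 = rng.random((2, swarm, dim))
            V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
            X = X + V
            vals = np.array([f(x) for x in X])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = X[improved], vals[improved]
        return pbest[pbest_val.argmin()], pbest_val.min()

    best_x, best_val = pso(sphere)
    print("best value found:", best_val)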