
    Combined optimization algorithms applied to pattern classification

    Accurate classification by minimizing the error on test samples is the main goal in pattern classification. Combinatorial optimization is a well-known method for solving minimization problems; however, only a few examples of classifiers are described in the literature where combinatorial optimization is used in pattern classification. Recently, there has been a growing interest in combining classifiers and improving the consensus of results for greater accuracy. In the light of the "No Free Lunch Theorems", we analyse the combination of simulated annealing, a powerful combinatorial optimization method that produces high quality results, with the classical perceptron algorithm. This combination is called the LSA machine. Our analysis aims at finding paradigms for problem-dependent parameter settings that ensure high classification results. Our computational experiments on a large number of benchmark problems lead to results that either outperform or are at least competitive with results published in the literature. Apart from parameter settings, our analysis focuses on a difficult problem in computation theory, namely the network complexity problem. The depth vs. size problem of neural networks is one of the hardest problems in theoretical computing, with very little progress over the past decades. In order to investigate this problem, we introduce a new recursive learning method for training hidden layers in constant depth circuits. Our findings make contributions to a) the field of Machine Learning, as the proposed method is applicable in training feedforward neural networks, and to b) the field of circuit complexity, by proposing an upper bound for the number of hidden units sufficient to achieve a high classification rate. 
One of the major findings of our research is that the size of the network can be bounded by the input size of the problem, with an approximate upper bound of 8 + √(2ⁿ/n) threshold gates being sufficient for a small error rate, where n := log |S_L| and S_L is the training set.
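The combination described above can be sketched as a simulated-annealing search over the weights of a single perceptron (threshold gate). This is a minimal illustration rather than the authors' LSA machine: the move operator, cooling schedule, and all parameter values below are assumptions made for the sketch.

```python
import math
import random

def classify(w, x):
    """Threshold gate: sign of the weighted sum (bias folded into w[0], x[0] == 1)."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

def error(w, data):
    """Fraction of misclassified training samples."""
    return sum(1 for x, y in data if classify(w, x) != y) / len(data)

def lsa_train(data, dim, steps=5000, t0=1.0, cooling=0.999, seed=0):
    """Simulated-annealing search over perceptron weights: accept a worse
    weight vector with probability exp(-delta / T), geometric cooling."""
    rng = random.Random(seed)
    w = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    best, best_err, t = list(w), error(w, data), t0
    for _ in range(steps):
        cand = list(w)
        cand[rng.randrange(dim)] += rng.gauss(0.0, 0.1)   # local move: perturb one weight
        delta = error(cand, data) - error(w, data)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            w = cand
            if error(w, data) < best_err:
                best, best_err = list(w), error(w, data)
        t *= cooling                                       # cool the temperature
    return best, best_err
```

On a small linearly separable set the search usually drives the training error to zero; on harder data the annealing acceptance rule is what lets it escape the local minima that a plain perceptron update can get stuck near.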

    A Comparative Representation Approach to Modern Heuristic Search Methods in a Job Shop

    The job shop problem is among the class of NP-hard combinatorial problems. This research paper addresses the problem of static job shop scheduling under the job-based representation and the rule-based representation. Popular search techniques, namely the genetic algorithm and simulated annealing, are used to optimize objectives such as the minimization of makespan time and mean flow time. Various rules, such as SPT, LPT, MWKR, and LWKR, are used in the objective function to attain the results. The results of this paper lead to the conclusion that the genetic algorithm gives better results for makespan time under both the job-based and the rule-based representations, while simulated annealing gives better results for mean flow time under both representations.
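As a concrete illustration of the dispatching rules named above, the sketch below applies SPT/LPT ordering on a single machine and computes the two objectives considered in the paper, makespan and mean flow time. This single-machine simplification is an assumption for the sketch; the paper's job-shop setting schedules operations across multiple machines.

```python
def schedule_by_rule(processing_times, rule="SPT"):
    """Order jobs on one machine by a priority rule: SPT = shortest
    processing time first, LPT = longest first. Returns the job order,
    the makespan, and the mean flow time (all jobs released at time 0).
    Note: on a single machine the makespan is order-independent, while
    SPT is the order that minimizes mean flow time."""
    order = sorted(range(len(processing_times)),
                   key=lambda j: processing_times[j],
                   reverse=(rule == "LPT"))
    t, completion = 0, {}
    for j in order:
        t += processing_times[j]        # job j finishes when the machine frees up
        completion[j] = t
    makespan = t
    mean_flow = sum(completion.values()) / len(completion)
    return order, makespan, mean_flow
```

For processing times [3, 1, 2], SPT schedules jobs 1, 2, 0 with completions 1, 3, 6, so the mean flow time is 10/3, lower than LPT's 14/3 for the same makespan of 6.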

    Statistical Query Algorithms for Mean Vector Estimation and Stochastic Convex Optimization

    Stochastic convex optimization, in which the objective is the expectation of a random convex function, is an important and widely used method with numerous applications in machine learning, statistics, operations research, and other areas. We study the complexity of stochastic convex optimization given only statistical query (SQ) access to the objective function. We show that well-known and popular first-order iterative methods can be implemented using only statistical queries. For many cases of interest, we derive nearly matching upper and lower bounds on the estimation (sample) complexity, including linear optimization in the most general setting. We then present several consequences for machine learning, differential privacy, and proving concrete lower bounds on the power of convex optimization–based methods. The key ingredient of our work is SQ algorithms and lower bounds for estimating the mean vector of a distribution over vectors supported on a convex body in R^d. This natural problem has not been previously studied, and we show that our solutions can be used to obtain substantially improved SQ versions of the Perceptron and other online algorithms for learning halfspaces.
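A minimal sketch of the idea that first-order methods need only statistical queries: for the toy objective f(w) = ½·E‖w − x‖², the gradient is w − E[x], so each descent step reduces to a mean-vector estimate obtained with one tolerance-τ query per coordinate. The oracle below simulates SQ access; the objective, names, and parameters are illustrative assumptions, not the paper's constructions.

```python
import random

def sq_oracle(query, samples, tau, rng):
    """Simulated statistical-query oracle: returns E[query(x)] up to an
    additive tolerance tau (query must map each sample into [-1, 1])."""
    exact = sum(query(x) for x in samples) / len(samples)
    return exact + rng.uniform(-tau, tau)

def sq_mean(samples, dim, tau, rng):
    """Estimate the mean vector coordinate-wise: one SQ per coordinate."""
    return [sq_oracle(lambda x, i=i: x[i], samples, tau, rng)
            for i in range(dim)]

def sq_gradient_descent(samples, dim, steps=100, lr=0.5, tau=0.01, seed=0):
    """Gradient descent on f(w) = 0.5 * E||w - x||^2 using only SQ access:
    the gradient w - E[x] needs nothing beyond a mean-vector estimate."""
    rng = random.Random(seed)
    mu = sq_mean(samples, dim, tau, rng)     # the only data access, via SQs
    w = [0.0] * dim
    for _ in range(steps):
        w = [wi - lr * (wi - mi) for wi, mi in zip(w, mu)]
    return w
```

Because the data is touched only through the oracle, the same loop inherits the noise-tolerance and privacy properties of SQ algorithms, which is the lever the paper's differential-privacy consequences rest on.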

    Neural Networks for cost estimating in project management

    This thesis considers the application of neural networks to cost estimating in project management and whether they lead to more accurate estimates. It strikes two areas of research, namely neural networks and project management; an introductory chapter on both subjects is included. The statistical problem of parametric cost estimating is described and an explanation of the general principles is given. The Multi-Layer Perceptron with the Backpropagation learning algorithm is determined to be the most appropriate network, and a selection of available software programs is reviewed. A Multi-Layer Perceptron neural model is used to determine one of the most important cost estimating relationships of the PRICE model. A comparison of the outputs of the neural network and the PRICE model shows that the Backpropagation algorithm is able to find the underlying estimating relationships used by PRICE. To investigate whether other underlying functions can be found with artificial intelligence methods, other input parameters are selected and the costs generated by the PRICE model and by the neural network are compared with each other. Further experiments were undertaken in order to improve the performance of the neural network. The neural networks were applied to real data, and their output compared with the PRICE model. The processes of achieving better results are analogous to those used for the artificial data. A neural network was created which performs better than the PRICE model in terms of the accuracy of the estimates produced. The results are discussed, and collecting significant and accurate information and deciding which type of network to use are identified as the major problems in the application of artificial intelligence to cost estimation in project management. The limitations and restrictions of the implementation of neural networks are examined, and the scope and topics of further research are suggested.
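The kind of network the thesis settles on can be sketched as a one-hidden-layer perceptron trained by backpropagation to regress a cost onto its input parameters. The network size, learning rate, and the toy additive cost relationship in the test are assumptions for the sketch, not the PRICE relationships studied in the thesis.

```python
import math
import random

def mlp_train(data, hidden=4, epochs=2000, lr=0.1, seed=1):
    """Train a one-hidden-layer Multi-Layer Perceptron by backpropagation
    to fit cost = f(parameters): sigmoid hidden units, a linear output
    unit, and squared-error loss, updated per sample."""
    rng = random.Random(seed)
    n_in = len(data[0][0])
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
          for _ in range(hidden)]                      # hidden weights (+ bias)
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden + 1)]   # output weights (+ bias)
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for x, y in data:
            xb = x + [1.0]                             # append bias input
            h = [sig(sum(w * v for w, v in zip(row, xb))) for row in w1]
            hb = h + [1.0]
            out = sum(w * v for w, v in zip(w2, hb))   # linear output unit
            err = out - y
            # hidden gradients must use the pre-update output weights
            g = [err * w2[j] * h[j] * (1.0 - h[j]) for j in range(hidden)]
            for j in range(hidden + 1):
                w2[j] -= lr * err * hb[j]
            for j in range(hidden):
                for i in range(n_in + 1):
                    w1[j][i] -= lr * g[j] * xb[i]
    def predict(x):
        xb = x + [1.0]
        hb = [sig(sum(w * v for w, v in zip(row, xb))) for row in w1] + [1.0]
        return sum(w * v for w, v in zip(w2, hb))
    return predict
```

Trained on samples of a simple (here, additive) parameter-to-cost mapping, the network recovers the relationship to within a small error, which is the same mechanism the thesis uses to reproduce a PRICE estimating relationship from generated data.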

    Computational intelligence approaches for energy load forecasting in smart energy management grids: state of the art, future challenges, and research directions

    Energy management systems are designed to monitor, optimize, and control the smart grid energy market. Demand-side management, considered an essential part of the energy management system, can enable utility market operators to make better management decisions for energy trading between consumers and the operator. In this system, a priori knowledge about the energy load pattern can help reshape the load and cut the energy demand curve, thus allowing better management and distribution of the energy in smart grid energy systems. Designing a computationally intelligent load forecasting (ILF) system is often a primary goal of energy demand management. This study explores the state of the art of computationally intelligent (i.e., machine learning) methods that are applied to load forecasting, in terms of their classification and evaluation for sustainable operation of the overall energy management system. More than 50 research papers related to the subject identified in the existing literature are classified into two categories: single and hybrid computational intelligence (CI)-based load forecasting techniques. The advantages and disadvantages of each technique are also discussed to place them in the perspective of energy management research. The identified methods are further investigated through a qualitative analysis based on prediction accuracy, which confirms the dominance of hybrid forecasting methods, which often apply metaheuristic algorithms with different optimization techniques, over single-model approaches. Based on extensive surveys, the review paper predicts a continuous future expansion of the literature on different CI approaches and their optimization with both heuristic and metaheuristic methods for energy load forecasting, and their potential utilization in real-time smart energy management grids to address future challenges in energy demand management.

    The Electrophysiology of Resting State fMRI Networks

    Traditional research in neuroscience has studied the topography of specific brain functions largely by presenting stimuli or imposing tasks and measuring evoked brain activity. This paradigm has dominated neuroscience for 50 years. Recently, investigations of brain activity in the resting state, most frequently using functional magnetic resonance imaging (fMRI), have revealed spontaneous correlations within widely distributed brain regions known as resting state networks (RSNs). Variability in RSNs across individuals has been found to relate systematically to numerous diseases as well as to differences in cognitive performance within specific domains. However, the relationship between spontaneous fMRI activity and the underlying neurophysiology is not well understood. This thesis aims to combine invasive electrophysiology and resting state fMRI in human subjects to better understand the nature of spontaneous brain activity. First, we establish an approach to precisely coregister intracranial electrodes to fMRI data (Chapter 2). We then create a novel machine learning approach to define resting state networks in individual subjects (Chapter 3). This approach is validated with cortical stimulation in clinical electrocorticography (ECoG) patients (Chapter 4). Spontaneous ECoG data are then analyzed with respect to fMRI time-series and fMRI-defined RSNs in order to illustrate novel ECoG correlates of fMRI for both local field potentials and band-limited power (BLP) envelopes (Chapter 5). In Chapter 6, we show that the spectral specificity of these resting state ECoG correlates links classic brain rhythms with large-scale functional domains. Finally, in Chapter 7 we show that the frequencies and topographies of spontaneous ECoG correlations specifically recapitulate the spectral and spatial structure of task responses within individual subjects.

    Applying Neural Network Approach with Imperialist Competitive Algorithm for Software Reliability Prediction

    Software systems exist in many critical domains. Software reliability assessment has become a critical issue due to the varying levels of software complexity. Software reliability, as a sub-branch of software quality, has been exploited to evaluate to what extent the desired software is trustable. To overcome dependence on human effort and time limitations in software reliability prediction, researchers consider soft computing approaches such as Neural Networks and Fuzzy Logic. These techniques suffer from some limitations, including weak mathematical foundations, trapping in local minima, and convergence problems. This study develops a novel model for software reliability prediction through the combination of a Multi-Layer Perceptron Neural Network (MLP) and the Imperialist Competitive Algorithm (ICA). The proposed model solves some of the problems of existing methods, such as the convergence problem and the demand for large amounts of data, and can be used in complicated software systems. The results show that both the training and testing phases of this model outperform existing approaches in predicting the number of software failures.
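A simplified sketch of how an Imperialist Competitive Algorithm can stand in for gradient-based training: candidate weight vectors act as "countries", and the search minimizes an arbitrary cost function, which in the combined model would be the MLP's training error. Here a quadratic stand-in cost is used instead of a real network, and the empire setup, assimilation coefficient, and absence of a revolution step are simplifying assumptions.

```python
import random

def ica_minimize(cost, dim, n_countries=30, n_imperialists=3,
                 iterations=200, beta=2.0, seed=0):
    """Simplified Imperialist Competitive Algorithm: the best candidates
    ("imperialists") attract the rest ("colonies"); each colony moves a
    random fraction in [0, beta) toward its imperialist (assimilation)
    and swaps roles with it whenever it reaches a lower cost."""
    rng = random.Random(seed)
    countries = [[rng.uniform(-1.0, 1.0) for _ in range(dim)]
                 for _ in range(n_countries)]
    countries.sort(key=cost)
    imperialists = countries[:n_imperialists]
    colonies = countries[n_imperialists:]
    owner = [i % n_imperialists for i in range(len(colonies))]  # round-robin empires
    for _ in range(iterations):
        for c in range(len(colonies)):
            imp = imperialists[owner[c]]
            # assimilation: move the colony toward its imperialist
            colonies[c] = [x + rng.uniform(0.0, beta) * (ix - x)
                           for x, ix in zip(colonies[c], imp)]
            if cost(colonies[c]) < cost(imp):   # colony overthrows the imperialist
                imperialists[owner[c]], colonies[c] = colonies[c], imp
    return min(imperialists, key=cost)

# Stand-in for a network's training error: a quadratic with its optimum at 0.
sphere = lambda w: sum(x * x for x in w)
```

Because the algorithm only evaluates the cost, it needs no gradients, which is what lets this style of metaheuristic sidestep the local-minima and convergence issues of pure backpropagation that the abstract mentions.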