
    Handbook of Computational Intelligence in Manufacturing and Production Management

    Artificial intelligence (AI) is, simply put, a way of enabling a computer or machine to think intelligently in the manner of human beings. Since human intelligence is a complex abstraction, scientists have only recently begun to understand how people think, to make certain assumptions about that process, and to apply those assumptions in designing AI programs. AI is a vast, knowledge-based discipline that covers reasoning, machine learning, planning, intelligent search, and perception. Traditional AI could not meet the increasing demand for search, optimization, and machine learning in large biological and commercial database information systems, or in the management of factory automation for industries such as power, automobile, aerospace, and chemical plants. The drawbacks of classical AI became more pronounced after the successive failures of the decade-long Japanese project on fifth-generation computing machines. These limitations of traditional AI gave rise to new computational methods for various engineering and management problems. As a result, these computational techniques emerged as a new discipline called computational intelligence (CI).

    Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future

    Clark offers a powerful description of the brain as a prediction machine, a view that makes progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining the conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).
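
    As a rough illustration of the predictive-coding mechanics mentioned above, the minimal sketch below implements a single layer that generates a top-down prediction, computes the bottom-up prediction error, and updates its internal state to reduce that error. The weights, learning rate, and input are arbitrary stand-ins, not taken from any model discussed by Clark.

    ```python
    import numpy as np

    # Minimal single-layer predictive-coding sketch (illustrative only; the
    # generative weights W, learning rate, and input x are hypothetical).
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(10, 4))   # generative weights: causes -> prediction
    x = rng.normal(size=10)                    # sensory input
    r = np.zeros(4)                            # internal state (inferred causes)

    for _ in range(50):
        prediction = W @ r                     # top-down prediction of the input
        error = x - prediction                 # bottom-up prediction error
        r += 0.1 * (W.T @ error)               # update internal model to reduce error

    print("residual prediction error:", np.linalg.norm(x - W @ r))
    ```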

    Hybrid approach for metabolites production using differential evolution and minimization of metabolic adjustment

    Microbial strains can be optimized through metabolic engineering, which implements gene knockout techniques. These techniques manipulate potential genes to increase the yield of metabolites by restructuring metabolic networks. Several hybrid optimization algorithms have been proposed to optimize microbial strains. However, the existing algorithms are unable to obtain optimal strains because the nonessential genes that need to be removed are hard to identify, owing to the high complexity of the metabolic network. Therefore, the main goal of this study is to overcome this limitation by proposing a hybrid of Differential Evolution and Minimization of Metabolic Adjustment (DEMOMA). Differential Evolution (DE) is a population-based stochastic search algorithm with few tunable control parameters. Minimization of Metabolic Adjustment (MOMA) is a constraint-based algorithm that simulates cellular metabolism after a perturbation (gene knockout) is applied to the metabolic model. The strength of MOMA is its ability to simulate mutant strains more precisely than Flux Balance Analysis. The data set used for fumaric acid production is the S. cerevisiae metabolic network model, whereas the data set for lycopene production is the Y. lipolytica metabolic network model. Experimental results show that DEMOMA was able to improve the growth rate for fumaric acid production, while for lycopene production both the Biomass Product Coupled Yield (BPCY) and the production rate were optimized.
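
    For readers unfamiliar with the DE core of such hybrids, the sketch below shows the standard DE/rand/1 mutation, binomial crossover, and greedy selection loop over a continuous encoding of gene knockouts. The fitness function here is a hypothetical placeholder; in DEMOMA it would be the MOMA-simulated production rate or BPCY of the knockout strain, which requires a genome-scale metabolic model and is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_genes, pop_size, F, CR = 20, 30, 0.8, 0.9

    def fitness(knockouts):
        # Hypothetical placeholder objective: reward a small, specific knockout set.
        # In DEMOMA this would be replaced by a MOMA simulation of the mutant strain.
        target = np.zeros(n_genes); target[[2, 5, 11]] = 1
        return -np.abs(knockouts - target).sum()

    pop = rng.random((pop_size, n_genes))            # continuous encoding in [0, 1]
    for _ in range(100):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), 0, 1)  # DE/rand/1 mutation
            cross = rng.random(n_genes) < CR
            trial = np.where(cross, mutant, pop[i])  # binomial crossover
            # decode to binary knockout vectors (threshold at 0.5) before evaluating
            if fitness(trial > 0.5) >= fitness(pop[i] > 0.5):
                pop[i] = trial                       # greedy selection

    best = pop[np.argmax([fitness(ind > 0.5) for ind in pop])] > 0.5
    print("suggested knockouts:", np.flatnonzero(best))
    ```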

    Precis of neuroconstructivism: how the brain constructs cognition

    Neuroconstructivism: How the Brain Constructs Cognition proposes a unifying framework for the study of cognitive development that brings together (1) constructivism (which views development as the progressive elaboration of increasingly complex structures), (2) cognitive neuroscience (which aims to understand the neural mechanisms underlying behavior), and (3) computational modeling (which proposes formal and explicit specifications of information processing). The guiding principle of our approach is context dependence, within and (in contrast to Marr [1982]) between levels of organization. We propose that three mechanisms guide the emergence of representations: competition, cooperation, and chronotopy, which themselves allow for two central processes: proactivity and progressive specialization. We suggest that the main outcome of development is partial representations, distributed across distinct functional circuits. This framework is derived by examining development at the level of single neurons, brain systems, and whole organisms. We use the terms encellment, embrainment, and embodiment to describe the higher-level contextual influences that act at each of these levels of organization. To illustrate these mechanisms in operation, we provide case studies in early visual perception, infant habituation, phonological development, and object representations in infancy. Three further case studies concern interactions between levels of explanation: social development, atypical development, and, within the latter, developmental dyslexia. We conclude that cognitive development arises from a dynamic, contextual change in embodied neural structures leading to partial representations across multiple brain regions and timescales, in response to proactively specified physical and social environments.

    Integrated smoothed location model and data reduction approaches for multi variables classification

    The Smoothed Location Model is a classification rule that deals with mixtures of continuous and binary variables simultaneously. This rule discriminates between groups in a parametric form using the conditional distribution of the continuous variables given each pattern of the binary variables. To conduct a practical classification analysis, the objects must first be sorted into the cells of a multinomial table generated from the binary variables, and the parameters in each cell are then estimated from the sorted objects. However, in many situations the estimated parameters are poor if the number of binary variables is large relative to the sample size. A large number of binary variables creates many empty multinomial cells, leading to a severe sparsity problem and ultimately to exceedingly poor performance of the constructed rule; in the worst case, the rule cannot be constructed at all. To overcome these shortcomings, this study proposes new strategies to extract adequate variables that contribute to optimum performance of the rule. Two combinations of extraction techniques are introduced, namely 2PCA and PCA+MCA, with new cutpoints for the eigenvalue and total variance explained, to determine adequate extracted variables that lead to a minimum misclassification rate. The outcomes of these extraction techniques are used to construct smoothed location models, which produce two new classification approaches called 2PCALM and 2DLM. Numerical evidence from simulation studies demonstrates no significant difference in the computed misclassification rate between the extraction techniques for normal and non-normal data. Nevertheless, both proposed approaches are slightly affected by non-normal data and severely affected by highly overlapping groups. Investigations on several real data sets show that the two approaches are competitive with, and in some cases better than, other existing classification methods. The overall findings reveal that both proposed approaches can be considered improvements to the location model and alternatives to other classification methods, particularly in handling mixed variables with a large number of binary variables.
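
    To make the reduce-then-classify idea concrete, the sketch below applies PCA with a cutpoint on total variance explained and feeds the extracted components to a simple classifier. PCA stands in for the paper's 2PCA and PCA+MCA extractions (the MCA step for the binary block is omitted), a linear discriminant stands in for the smoothed location model itself, and the data are synthetic stand-ins.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Synthetic mixed data: continuous block plus binary block (illustrative only).
    rng = np.random.default_rng(2)
    n = 200
    X_cont = rng.normal(size=(n, 15))                      # continuous variables
    X_bin = (rng.random((n, 10)) < 0.5).astype(float)      # binary variables
    y = (X_cont[:, 0] + X_bin[:, 0]
         + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

    X = np.hstack([X_cont, X_bin])
    pca = PCA(n_components=0.80)          # keep components up to 80% total variance
    X_red = pca.fit_transform(X)          # extracted variables fed to the classifier

    clf = LinearDiscriminantAnalysis().fit(X_red[:150], y[:150])
    print("held-out misclassification rate:",
          1 - clf.score(X_red[150:], y[150:]))
    ```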

    On the clinical potential of ion computed tomography with different detector systems and ion species
