11,898 research outputs found

    Data mining and classification for traffic systems using genetic network programming

    System: New ; Report number: Kou 3271 ; Degree type: Doctor of Engineering ; Date conferred: 2011/3/15 ; Waseda University degree number: Shin 557

    Genetic network programming based rule accumulation for agent control

    System: New ; Report number: Kou 3768 ; Degree type: Doctor of Engineering ; Date conferred: 2013/1/28 ; Waseda University degree number: Shin 6141 ; Waseda University

    Challenges of Big Data Analysis

    Big Data brings new opportunities to modern society and challenges to data scientists. On one hand, Big Data holds great promise for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require new computational and statistical paradigms. This article gives an overview of the salient features of Big Data and how these features drive paradigm changes in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in the high-confidence set and point out that the exogeneity assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. They can lead to wrong statistical inferences and, consequently, wrong scientific conclusions.
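
    The spurious-correlation point lends itself to a small numerical illustration. The sketch below is our own illustration, not code from the article: it draws a response and a growing number of completely independent Gaussian features and reports the maximum absolute sample correlation, which grows with the dimensionality even though every true correlation is zero.

        # Illustrative sketch (not from the article): spurious correlation in high dimensions.
        # With the sample size n fixed and the number of features p growing, the maximum
        # absolute sample correlation between a response y and p independent noise
        # features increases, even though the true correlation is zero for every feature.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 50  # sample size

        for p in (10, 100, 1000, 10000):
            X = rng.standard_normal((n, p))   # irrelevant features
            y = rng.standard_normal(n)        # response independent of X
            # Pearson correlation of y with each column of X
            Xc = (X - X.mean(axis=0)) / X.std(axis=0)
            yc = (y - y.mean()) / y.std()
            corr = Xc.T @ yc / n
            print(f"p={p:6d}  max |corr| = {np.abs(corr).max():.3f}")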

    A new technology for manufacturing scheduling derived from space system operations

    A new technology for producing finite capacity schedules has been developed in response to complex requirements for operating space systems such as the Space Shuttle, the Space Station, and the Deep Space Network for telecommunications. This technology has proven its effectiveness in manufacturing environments where popular scheduling techniques associated with Materials Resources Planning (MRP II) and with factory simulation are not adequate for shop-floor work planning and control. The technology has three components. The first is a set of data structures that accommodate an extremely general description of a factory's resources, its manufacturing activities, and the constraints imposed by the environment. The second component is a language and a set of software utilities that enable a rapid synthesis of functional capabilities. The third component is an algorithmic architecture, called the Five Ruleset Model, which accommodates the unique needs of each factory. Using the new technology, systems can model activities that generate, consume, and/or obligate resources. This allows work-in-process (WIP) to be generated and used; it permits constraints to be imposed on intermediate as well as finished goods inventories. It is also possible to match both the current factory state and future conditions, such as promise dates, as closely as possible. Schedule revisions can be accommodated without impacting the entire production schedule. Applications have been successful in both discrete and process manufacturing environments. The availability of a high-quality finite capacity production planning capability enhances the data management capabilities of MRP II systems. These schedules can be integrated with shop-floor data collection systems and accounting systems. Using the new technology, semi-custom systems can be developed at costs that are comparable to products that do not have equivalent functional capabilities and/or extensibility.
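
    The first component, the general data structures for resources, activities, and constraints, can be pictured with a small sketch. The class and field names below are hypothetical illustrations of activities that generate, consume, and/or obligate resources; they are not the actual data structures of the system described above.

        # Hypothetical sketch of activities that generate, consume, and/or obligate
        # resources; not the actual data structures of the described system.
        from dataclasses import dataclass, field


        @dataclass
        class Resource:
            name: str
            capacity: float           # units available in a planning bucket
            obligated: float = 0.0    # units already reserved by scheduled activities


        @dataclass
        class Activity:
            name: str
            duration_hours: float
            consumes: dict = field(default_factory=dict)   # resource name -> units used up
            generates: dict = field(default_factory=dict)  # resource name -> units produced (e.g. WIP)
            obligates: dict = field(default_factory=dict)  # resource name -> units reserved, later released


        def can_schedule(activity: Activity, pool: dict) -> bool:
            """Finite-capacity check: consumed and obligated units must fit the pool."""
            demand: dict = {}
            for d in (activity.consumes, activity.obligates):
                for rname, units in d.items():
                    demand[rname] = demand.get(rname, 0.0) + units
            return all(pool[r].capacity - pool[r].obligated >= u for r, u in demand.items())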

    Automatic differentiation in machine learning: a survey

    Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in machine learning. Automatic differentiation (AD), also called algorithmic differentiation or simply "autodiff", is a family of techniques similar to but more general than backpropagation for efficiently and accurately evaluating derivatives of numeric functions expressed as computer programs. AD is a small but established field with applications in areas including computational fluid dynamics, atmospheric sciences, and engineering design optimization. Until very recently, the fields of machine learning and AD have largely been unaware of each other and, in some cases, have independently discovered each other's results. Despite its relevance, general-purpose AD has been missing from the machine learning toolbox, a situation slowly changing with its ongoing adoption under the names "dynamic computational graphs" and "differentiable programming". We survey the intersection of AD and machine learning, cover applications where AD has direct relevance, and address the main implementation techniques. By precisely defining the main differentiation techniques and their interrelationships, we aim to bring clarity to the usage of the terms "autodiff", "automatic differentiation", and "symbolic differentiation" as these are encountered more and more in machine learning settings.
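
    As a concrete illustration of the kind of technique the survey covers, the sketch below implements forward-mode automatic differentiation with dual numbers; it is a minimal example of the general idea under our own naming, not code from the survey.

        # Minimal forward-mode automatic differentiation with dual numbers.
        # Each Dual carries a value and a derivative; arithmetic propagates both,
        # so derivatives are exact to machine precision (no symbolic expression
        # swell, no finite-difference truncation error).
        import math


        class Dual:
            def __init__(self, value, deriv=0.0):
                self.value, self.deriv = value, deriv

            def __add__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.value + other.value, self.deriv + other.deriv)

            __radd__ = __add__

            def __mul__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.value * other.value,
                            self.deriv * other.value + self.value * other.deriv)

            __rmul__ = __mul__


        def sin(x: Dual) -> Dual:
            # chain rule: d/dx sin(u) = cos(u) * u'
            return Dual(math.sin(x.value), math.cos(x.value) * x.deriv)


        def f(x):
            return x * x + sin(x)        # f(x) = x^2 + sin(x)


        x = Dual(1.5, 1.0)               # seed derivative dx/dx = 1
        y = f(x)
        print(y.value, y.deriv)          # f(1.5) and f'(1.5) = 2*1.5 + cos(1.5)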

    A Theory Explains Deep Learning

    This is our journal for developing Deduction Theory and studying Deep Learning and Artificial Intelligence. Deduction Theory is a theory of deducing the world's relativity by information coupling and asymmetry. We focus on information processing and see intelligence as an information structure that is relatively close to object-oriented, probability-oriented, unsupervised-learning, relativity-based, and massively automated information processing. We see deep learning and machine learning as attempts to make all types of information processing relatively close to probabilistic information processing. We discuss how to understand Deep Learning and Artificial Intelligence, and why Deep Learning shows better performance than other methods, by means of metaphysical logic.

    Satisfiability Logic Analysis Via Radial Basis Function Neural Network with Artificial Bee Colony Algorithm

    Radial Basis Function Neural Network (RBFNN) is a variant of the artificial neural network (ANN) paradigm, utilized in a plethora of fields of study such as engineering, technology, and science. 2 Satisfiability (2SAT) programming has been coined as a prominent logical rule that defines the identity of the RBFNN. In this research, a swarm-based searching algorithm, namely the Artificial Bee Colony (ABC), is introduced to facilitate the training of the RBFNN. It is worth mentioning that ABC is a recent population-based metaheuristic algorithm inspired by the intelligent behaviour of honey bee hives. The optimization pattern in ABC was found fruitful for the RBFNN since ABC reduces the complexity of the RBFNN in optimizing important parameters. The effectiveness of ABC in the RBFNN has been examined in terms of various performance evaluations. The simulations show that ABC works efficiently in tandem with the Radial Basis Function Neural Network with 2SAT according to various evaluations, such as the Root Mean Square Error (RMSE), Sum of Squares Error (SSE), Mean Absolute Percentage Error (MAPE), and CPU time. Overall, the experimental results demonstrate the capability of ABC to enhance the learning phase of RBFNN-2SAT as compared to the Genetic Algorithm (GA), the Differential Evolution (DE) algorithm, and the Particle Swarm Optimization (PSO) algorithm.
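
    To make the reported evaluation measures concrete, the sketch below computes a Gaussian radial basis layer together with the RMSE, SSE, and MAPE metrics named in the abstract; the network shape, centres, and data are placeholders of our own choosing, not the RBFNN-2SAT configuration of the paper.

        # Illustrative sketch only: a Gaussian RBF hidden layer and the RMSE/SSE/MAPE
        # metrics mentioned in the abstract. Centres, widths, and data are placeholders,
        # not the RBFNN-2SAT configuration of the paper.
        import numpy as np

        rng = np.random.default_rng(1)


        def rbf_layer(X, centres, width):
            """Gaussian radial basis activations: exp(-||x - c||^2 / (2 * width^2))."""
            d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
            return np.exp(-d2 / (2.0 * width ** 2))


        X = rng.standard_normal((20, 3))          # 20 samples, 3 inputs
        centres = rng.standard_normal((5, 3))     # 5 hidden RBF units
        H = rbf_layer(X, centres, width=1.0)
        w = rng.standard_normal(5)                # output weights (trained in practice, e.g. by ABC)
        y_pred = H @ w
        y_true = rng.standard_normal(20) + 3.0    # placeholder targets, offset away from zero for MAPE

        sse = float(((y_true - y_pred) ** 2).sum())
        rmse = float(np.sqrt(sse / len(y_true)))
        mape = float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)
        print(f"SSE={sse:.3f}  RMSE={rmse:.3f}  MAPE={mape:.1f}%")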

    A study on the memory schemes for genetic network programming

    System: New ; Report number: Kou 3376 ; Degree type: Doctor of Engineering ; Date conferred: 2011/9/15 ; Waseda University degree number: Shin 569