    Encapsulating and representing the knowledge on the evolution of an engineering system

    This paper proposes a cross-disciplinary methodology for a fundamental question in product development: how can the innovation patterns that arise during the evolution of an engineering system (ES) be encapsulated, so that they can later be mined with data-analysis methods? Reverse engineering answers the question of which components a developed engineering system consists of, and how those components interact to make the working product. TRIZ answers the question of which problem-solving principles can be, or have been, employed in developing that system, in comparison to its earlier versions or to similar systems. While these two methodologies have been very popular, to the best of our knowledge there does not yet exist a methodology that reverse-engineers, encapsulates, and represents the information regarding the complete product development process in abstract terms. This paper suggests such a methodology, consisting of a mathematical formalism, graph visualization, and a database representation. The proposed approach is demonstrated by analyzing the design and development process for a prototype wrist-rehabilitation robot.
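
    As a rough illustration of what such an encapsulation could look like (the class names, TRIZ label, and robot revisions below are hypothetical, not taken from the paper), each design revision can become a node in a directed graph, each edge can record the problem-solving principle behind a change, and the whole structure can be flattened into rows for database storage and later mining:

    ```python
    # Illustrative sketch: design revisions as a directed graph whose edges
    # record the TRIZ principle behind each development step.
    from dataclasses import dataclass, field

    @dataclass
    class Revision:
        name: str
        components: set = field(default_factory=set)

    @dataclass
    class EvolutionGraph:
        nodes: dict = field(default_factory=dict)   # name -> Revision
        edges: list = field(default_factory=list)   # (src, dst, principle)

        def add_revision(self, rev: Revision):
            self.nodes[rev.name] = rev

        def add_step(self, src: str, dst: str, principle: str):
            self.edges.append((src, dst, principle))

        def as_rows(self):
            # Flatten to rows suitable for a relational table / data mining.
            return [{"from": s, "to": d, "principle": p} for s, d, p in self.edges]

    g = EvolutionGraph()
    g.add_revision(Revision("wrist_robot_v1", {"frame", "servo"}))
    g.add_revision(Revision("wrist_robot_v2", {"frame", "servo", "force_sensor"}))
    g.add_step("wrist_robot_v1", "wrist_robot_v2", "TRIZ: mechanics substitution")
    print(g.as_rows())
    ```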

    POWERPLAY: Training an Increasingly General Problem Solver by Continually Searching for the Simplest Still Unsolvable Problem

    Most of computer science focuses on automatically solving given computational problems. I focus on automatically inventing or discovering problems, in a way inspired by the playful behavior of animals and humans, to train an increasingly general problem solver from scratch in an unsupervised fashion. Consider the infinite set of all computable descriptions of tasks with possibly computable solutions. The novel algorithmic framework POWERPLAY (2011) continually searches the space of possible pairs of new tasks and modifications of the current problem solver, until it finds a more powerful problem solver that provably solves all previously learned tasks plus the new one, while the unmodified predecessor does not. Wow-effects are achieved by continually making previously learned skills more efficient, such that they require less time and space. New skills may (partially) re-use previously learned skills. POWERPLAY's search orders candidate pairs of tasks and solver modifications by their conditional computational (time and space) complexity, given the stored experience so far. The new task and its corresponding task-solving skill are those first found and validated. The computational cost of validating new tasks need not grow with task repertoire size. POWERPLAY's ongoing search for novelty keeps breaking the generalization abilities of its present solver. This is related to Gödel's sequence of increasingly powerful formal theories, based on adding formerly unprovable statements to the axioms without affecting previously provable theorems. The continually increasing repertoire of problem-solving procedures can be exploited by a parallel search for solutions to additional, externally posed tasks. POWERPLAY may be viewed as a greedy but practical implementation of basic principles of creativity. A first experimental analysis can be found in separate papers [53,54].
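
    A highly simplified sketch of this loop (all names here are placeholders and the integer task space is a toy; the actual framework searches program space and orders candidates by conditional algorithmic complexity):

    ```python
    # Toy sketch of the POWERPLAY loop: order candidate (task, solver
    # modification) pairs by a cost proxy and accept the first pair whose
    # modified solver handles the new task and still solves every
    # previously learned task, which the unmodified solver cannot do.
    def powerplay(solver, propose, solves, rounds):
        repertoire = []                              # tasks learned so far
        for _ in range(rounds):
            for task, new_solver, cost in sorted(propose(solver, repertoire),
                                                 key=lambda c: c[2]):
                keeps_old = all(solves(new_solver, t) for t in repertoire)
                novel = solves(new_solver, task) and not solves(solver, task)
                if keeps_old and novel:              # validated: adopt the pair
                    solver, repertoire = new_solver, repertoire + [task]
                    break
            else:
                break                                # no acceptable pair found
        return solver, repertoire

    # Toy instantiation: a "solver" is simply the set of integers it can solve.
    solves = lambda s, t: t in s
    propose = lambda s, rep: [(t, s | {t}, t) for t in range(10) if t not in s]
    final, learned = powerplay(set(), propose, solves, rounds=5)
    print(learned)                                   # [0, 1, 2, 3, 4]
    ```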

    Improved decision support for engine-in-the-loop experimental design optimization

    Experimental optimization with hardware in the loop is a common procedure in engineering and has been the subject of intense development, particularly when it is applied to relatively complex combinatorial systems that are not completely understood, or where accurate modelling is not possible owing to the dimensions of the search space. A common source of difficulty is the level of noise associated with experimental measurements, a combination of limited instrument precision and extraneous factors. When a series of experiments is conducted to search for a combination of input parameters that results in a minimum or maximum response, the underlying shape of the function being optimized can become very difficult to discern, or even lost, under the imposition of noise. A common methodology to support an experimental search for optimal or suboptimal values is to use one of the many gradient descent methods. However, even sophisticated and proven methodologies, such as simulated annealing, can be significantly challenged in the presence of noise, since approximating the gradient at any point becomes highly unreliable. Often, experiments that should be rejected are accepted as a result of random noise, and vice versa. This is also true for other sampling techniques, including tabu search and evolutionary algorithms. After the general introduction, this paper is divided into two main sections (sections 2 and 3), which are followed by the conclusion. Section 2 introduces a decision support methodology based upon response surfaces, which supplements experimental management based on a variable neighbourhood search and is shown to be highly effective in directing experiments in the presence of significant noise and complex combinatorial functions. The methodology is developed on a three-dimensional surface with multiple local minima, a large basin of attraction, and substantial measurement noise. In section 3, the methodology is applied to an automotive combinatorial search in the laboratory, on a real-time engine-in-the-loop application. In this application, it is desired to find the maximum power output of an experimental single-cylinder spark ignition engine operating under a quasi-constant-volume operating regime. Under this regime, the piston is slowed at top dead centre to achieve combustion in close to constant-volume conditions. As part of the further development of the engine to incorporate a linear generator to investigate free-piston operation, it is necessary to perform a series of experiments with combinatorial parameters. The objective is to identify the maximum power point in the fewest experiments in order to minimize costs. This test programme provides peak power data in order to achieve an optimal electrical machine design. The decision support methodology is combined with standard optimization and search methods, namely gradient descent and simulated annealing, in order to study the reductions possible in experimental iterations. It is shown that the decision support methodology significantly reduces the number of experiments necessary to find the maximum power solution and thus offers a potentially significant cost saving to hardware-in-the-loop experimentation.
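
    As a hedged, one-dimensional illustration of the response-surface idea (not the authors' exact method; the quadratic model, bounds, and noise level are assumptions made here), one can fit a smooth surface to the noisy measurements collected so far and let the fitted surface, rather than any individual noisy reading, choose the next experiment:

    ```python
    # Sketch: response-surface-guided search of a noisy 1-D experiment.
    import numpy as np

    rng = np.random.default_rng(0)
    true_response = lambda x: (x - 2.0) ** 2 + 1.0               # unknown to the optimizer
    measure = lambda x: true_response(x) + rng.normal(0.0, 0.5)  # noisy experiment

    xs = list(np.linspace(0.0, 5.0, 6))                          # initial design points
    ys = [measure(x) for x in xs]

    for _ in range(10):
        a, b, _ = np.polyfit(xs, ys, deg=2)                      # fit quadratic surface
        # Sample next at the surface's vertex if convex, else explore randomly.
        x_next = float(np.clip(-b / (2 * a), 0.0, 5.0)) if a > 0 else rng.uniform(0.0, 5.0)
        xs.append(float(x_next))
        ys.append(measure(float(x_next)))

    print(f"estimated optimum near x = {xs[-1]:.2f}")            # true optimum is x = 2
    ```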

    Metaheuristics for online drive train efficiency optimization in electric vehicles

    Utilization of electric vehicles provides a solution to several challenges in today's individual mobility. However, maximally efficient operation of electric vehicles is required in order to overcome their greatest weakness: the limited range. Even though the overall efficiency is already high, incorporating a DC/DC converter into the electric drivetrain improves the efficiency further. This inclusion enables dynamic optimization of the intermediate voltage level subject to the current driving demand (operating point) of the drivetrain. Moreover, the overall drivetrain efficiency depends on the electric parameters of the other drivetrain components. Solving this complex problem for different drivetrain parameter setups, subject to the current driving demand, requires considerable computing time with conventional solvers and cannot be delivered in real time. Therefore, basic metaheuristics are identified and applied in order to carry out the optimization during driving. To compare their performance on this task, we adjust and evaluate several basic metaheuristics (Monte Carlo search, evolutionary algorithms, simulated annealing, and particle swarm optimization). The results are statistically analyzed and based on a simulation model of an electric drivetrain developed for this purpose. By applying the best-performing metaheuristic, the efficiency of the drivetrain could be improved by up to 30% compared to an electric vehicle without the DC/DC converter. Computing times vary from about 30 minutes (for the exhaustive search algorithm) to about 0.2 seconds (particle swarm optimization) per operating point. It is shown that particle swarm optimization and the evolutionary algorithm are the best-performing methods on this optimization problem. All in all, the results support the idea that online efficiency optimization in electric vehicles is feasible with regard to both computing time and success probability.
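
    A minimal particle swarm sketch of the per-operating-point search described above (the loss function is a made-up stand-in, not the study's drivetrain model; the voltage bounds and swarm parameters are likewise assumptions):

    ```python
    # Sketch: particle swarm search for the intermediate DC-link voltage
    # that minimizes (stand-in) drivetrain losses at one operating point.
    import random

    def drivetrain_loss(v):                 # placeholder, not the real model
        return 0.002 * (v - 320.0) ** 2 + 5.0

    def pso(lo, hi, n=20, iters=50, w=0.7, c1=1.5, c2=1.5):
        xs = [random.uniform(lo, hi) for _ in range(n)]   # particle positions
        vs = [0.0] * n                                    # particle velocities
        pbest = xs[:]                                     # personal bests
        gbest = min(xs, key=drivetrain_loss)              # global best
        for _ in range(iters):
            for i in range(n):
                r1, r2 = random.random(), random.random()
                vs[i] = (w * vs[i] + c1 * r1 * (pbest[i] - xs[i])
                         + c2 * r2 * (gbest - xs[i]))
                xs[i] = min(max(xs[i] + vs[i], lo), hi)   # clamp to bounds
                if drivetrain_loss(xs[i]) < drivetrain_loss(pbest[i]):
                    pbest[i] = xs[i]
            gbest = min(pbest, key=drivetrain_loss)
        return gbest

    print(f"best intermediate voltage: {pso(200.0, 450.0):.1f} V")
    ```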

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, the optimization of feedforward neural networks (FNNs) has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, the learning environment, etc. Researchers have adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the application of FNNs to numerous real-world problems. However, owing to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain well-generalized FNNs for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged from FNN optimization practices, such as evolving neural networks (NNs), cooperatively coevolving NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it presents interesting research challenges for future research to cope with the present information-processing era.
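
    To make the metaheuristic alternative concrete, here is a toy evolutionary search over the weights of a one-hidden-layer FNN on XOR (the 2-2-1 architecture, population sizes, and mutation scale are illustrative choices, not drawn from the review); no gradients are used anywhere:

    ```python
    # Sketch: (mu + lambda)-style evolution of a 2-2-1 network's weights on XOR.
    import math, random

    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    Y = [0, 1, 1, 0]

    def forward(w, x):
        # w packs 9 weights: 2 hidden neurons x (2 inputs + bias), then output (2 + bias).
        h = [math.tanh(w[3*i] * x[0] + w[3*i + 1] * x[1] + w[3*i + 2]) for i in range(2)]
        return 1.0 / (1.0 + math.exp(-(w[6] * h[0] + w[7] * h[1] + w[8])))

    def loss(w):
        return sum((forward(w, x) - y) ** 2 for x, y in zip(X, Y))

    pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(30)]
    for _ in range(500):
        pop.sort(key=loss)
        parents = pop[:10]                                # selection
        pop = parents + [[g + random.gauss(0.0, 0.3) for g in random.choice(parents)]
                         for _ in range(20)]              # Gaussian mutation
    best = min(pop, key=loss)
    print([round(forward(best, x)) for x in X])           # ideally [0, 1, 1, 0]
    ```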