18 research outputs found

    The Automatic Neuroscientist: automated experimental design with real-time fMRI

    A standard approach in functional neuroimaging explores how a particular cognitive task activates a set of brain regions (a one task-to-many regions mapping). Importantly, though, the same neural system can be activated by inherently different tasks. To date, no approach is available that systematically explores whether and how distinct tasks probe the same neural system (a many tasks-to-one region mapping). In the work presented here, we propose an alternative framework, the Automatic Neuroscientist, which turns the typical fMRI approach on its head. We use real-time fMRI in combination with state-of-the-art optimisation techniques to automatically design the experiment that best evokes a desired target brain state. Here, we present two proof-of-principle studies involving visual and auditory stimuli. The data show this closed-loop approach to be powerful, substantially speeding up fMRI experiments and providing an accurate estimate of the underlying relationship between stimuli and neural responses across an extensive experimental parameter space. Finally, we detail four scenarios where our approach can be applied, suggesting how it provides a novel description of how cognition and the brain interrelate. Comment: 22 pages, 7 figures; work presented at OHBM 201
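
    One plausible reading of the closed loop described above is a Bayesian-optimisation-style loop over a stimulus parameter space: propose a stimulus, measure the evoked response in real time, and update a surrogate model that steers the next choice towards the target brain state. The sketch below illustrates this idea only; the function acquire_fmri_response, the 1-D stimulus parameter, and the upper-confidence-bound rule are assumptions for illustration, not the authors' actual pipeline.

        # Hypothetical sketch: closed-loop stimulus selection with a Gaussian
        # process surrogate; acquire_fmri_response stands in for the real-time
        # fMRI measurement and is simulated here.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        def acquire_fmri_response(stimulus: float) -> float:
            """Placeholder: present the stimulus and return a scalar summary
            of the evoked response (e.g. mean ROI activation); simulated."""
            return np.exp(-(stimulus - 0.7) ** 2 / 0.05) + 0.05 * np.random.randn()

        target = 1.0                                      # desired brain-state value
        candidates = np.linspace(0.0, 1.0, 101)[:, None]  # stimulus parameter grid
        gp = GaussianProcessRegressor(kernel=RBF(0.1) + WhiteKernel(0.01),
                                      normalize_y=True)

        X, y = [], []                                     # observed stimuli / objectives
        for trial in range(20):
            if len(X) < 3:                                # a few random trials to start
                x_next = float(np.random.uniform(0, 1))
            else:
                gp.fit(np.array(X)[:, None], np.array(y))
                mu, sd = gp.predict(candidates, return_std=True)
                ucb = mu + 1.0 * sd                       # upper-confidence-bound rule
                x_next = float(candidates[np.argmax(ucb), 0])
            response = acquire_fmri_response(x_next)
            X.append(x_next)
            y.append(-abs(response - target))             # closeness to the target state
        print("best stimulus so far:", X[int(np.argmax(y))])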

    Black Box Simulation Optimization: Generalized Response Surface Methodology

    The thesis consists of three papers in the area of Response Surface Methodology (RSM). The first paper deals with optimization problems with a single random objective. The contributions of that paper are a scale-independent search direction and possible solutions for the step size. The second paper considers optimization problems with a single random objective and multiple random constraints, as well as deterministic box constraints. That paper extends RSM to cope with constraints, using ideas from interior point methods. Furthermore, it provides a heuristic for quickly reaching a neighborhood of the optimum. The third paper deals with optimization problems with a single random objective and multiple random constraints. The contribution of that paper is an asymptotic stopping rule that tests the first-order necessary optimality conditions at a feasible point.
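
    For orientation, a single textbook RSM iteration for a noisy objective fits a first-order local model around the current point and steps along the estimated steepest-descent direction. The sketch below shows that classic step only; it does not implement the scale-independent direction or the constrained extensions contributed by the thesis, and noisy_objective is a stand-in for a simulation response.

        # Minimal sketch of one classic first-order RSM step on a noisy objective.
        import numpy as np

        def noisy_objective(x: np.ndarray) -> float:
            """Placeholder black-box simulation response (for illustration)."""
            return float((x[0] - 2) ** 2 + (x[1] + 1) ** 2 + 0.1 * np.random.randn())

        def rsm_step(x: np.ndarray, radius: float = 0.5, step: float = 0.2) -> np.ndarray:
            # 2^k factorial design around the current point
            design = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
            X = x + radius * design
            y = np.array([noisy_objective(p) for p in X])
            # least-squares fit of a first-order model y ~ b0 + b' (X - x)
            A = np.hstack([np.ones((len(X), 1)), X - x])
            beta = np.linalg.lstsq(A, y, rcond=None)[0]
            grad = beta[1:]                          # estimated local gradient
            return x - step * grad / (np.linalg.norm(grad) + 1e-12)

        x = np.zeros(2)
        for _ in range(30):
            x = rsm_step(x)
        print("estimated optimum:", x)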

    Pattern Search Ranking and Selection Algorithms for Mixed-Variable Optimization of Stochastic Systems

    A new class of algorithms is introduced and analyzed for bound and linearly constrained optimization problems with stochastic objective functions and a mixture of design variable types. The generalized pattern search (GPS) class of algorithms is extended to a new problem setting in which objective function evaluations require sampling from a model of a stochastic system. The approach combines GPS with ranking and selection (R&S) statistical procedures to select new iterates. The derivative-free algorithms require only black-box simulation responses and are applicable over domains with mixed variables (continuous, discrete numeric, and discrete categorical), including bound and linear constraints on the continuous variables. A convergence analysis for the general class of algorithms establishes almost sure convergence of an iteration subsequence to stationary points appropriately defined in the mixed-variable domain. Additionally, specific algorithm instances are implemented that provide computational enhancements to the basic algorithm. Implementation alternatives include the use of modern R&S procedures designed to provide efficient sampling strategies and the use of surrogate functions that augment the search by approximating the unknown objective function with nonparametric response surfaces. In a computational evaluation, six variants of the algorithm are tested along with four competing methods on 26 standardized test problems. The numerical results validate the use of advanced implementations as a means to improve algorithm performance.
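
    To make the flavour of a mixed-variable pattern-search iteration concrete, the sketch below polls neighbours of the current point (a mesh step on a continuous variable plus all other levels of a categorical variable) and picks the best candidate from replicated samples of a noisy objective. The best-mean comparison here is a crude stand-in for a proper R&S procedure with statistical guarantees, and the problem, categories, and replication count are illustrative assumptions.

        # Simplified mixed-variable pattern search with replication-based selection.
        import numpy as np

        CATEGORIES = ["typeA", "typeB", "typeC"]      # hypothetical categorical levels

        def noisy_objective(x: float, cat: str) -> float:
            offset = {"typeA": 0.0, "typeB": 0.5, "typeC": 1.5}[cat]
            return (x - 3.0) ** 2 + offset + 0.2 * np.random.randn()

        def sample_mean(x, cat, reps=10):
            return float(np.mean([noisy_objective(x, cat) for _ in range(reps)]))

        def poll(x, cat, mesh):
            """Poll candidates: +/- mesh on the continuous variable plus the
            discrete neighbours (all other categories) of the current point."""
            cands = [(x + mesh, cat), (x - mesh, cat)]
            cands += [(x, c) for c in CATEGORIES if c != cat]
            return cands

        x, cat, mesh = 0.0, "typeC", 1.0
        incumbent = sample_mean(x, cat)
        for _ in range(25):
            cands = poll(x, cat, mesh)
            means = [sample_mean(cx, ccat) for cx, ccat in cands]
            best = int(np.argmin(means))
            if means[best] < incumbent:               # success: move to the winner
                (x, cat), incumbent = cands[best], means[best]
            else:                                     # failure: refine the mesh
                mesh *= 0.5
        print("selected design:", x, cat)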

    Continuous optimization via simulation using Golden Region search

    Simulation Optimization (SO) is the use of mathematical optimization techniques in which the objective function (and/or constraints) can only be evaluated numerically through simulation. Many of the SO methods proposed in the literature are rooted in, or were originally developed for, deterministic optimization problems with an available objective function. We argue that, since evaluating the objective function in SO requires a simulation run that is computationally more costly than evaluating an available closed-form function, SO methods should be more conservative and careful in proposing new candidate solutions for objective function evaluation. Based on this principle, a new SO approach called Golden Region (GR) search is developed for continuous problems. GR divides the feasible region into a number of (sub)regions and, in each iteration, selects one region for further search based on the quality and distribution of simulated points in the feasible region and on the result of scanning the response surface with a metamodel. Experiments show the GR method to be efficient compared with three well-established approaches from the literature. We also prove convergence in probability to the global optimum for a large class of random search methods in general, and GR in particular.
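
    The region-selection idea can be illustrated loosely as follows: split a 1-D feasible interval into sub-regions and, each iteration, pick the region to sample next using both the points already simulated there and a cheap global metamodel prediction. The scoring rule, quadratic metamodel, and simulate function below are illustrative assumptions, not the published Golden Region criterion.

        # Loose sketch of region-based sampling guided by a cheap metamodel.
        import numpy as np

        def simulate(x: float) -> float:
            """Placeholder for an expensive stochastic simulation response."""
            return np.sin(3 * x) + 0.3 * x ** 2 + 0.1 * np.random.randn()

        lo, hi, n_regions = 0.0, 4.0, 8
        edges = np.linspace(lo, hi, n_regions + 1)
        X = list(np.random.uniform(lo, hi, 5))           # initial design points
        y = [simulate(x) for x in X]

        for _ in range(30):
            # cheap global metamodel: quadratic least-squares fit to all points
            coeffs = np.polyfit(X, y, deg=2)
            centres = 0.5 * (edges[:-1] + edges[1:])
            pred = np.polyval(coeffs, centres)
            counts = np.array([sum(edges[i] <= x < edges[i + 1] for x in X)
                               for i in range(n_regions)])
            # favour regions with promising predictions and few existing points
            score = pred + 0.5 * counts
            r = int(np.argmin(score))
            x_new = float(np.random.uniform(edges[r], edges[r + 1]))
            X.append(x_new)
            y.append(simulate(x_new))
        print("best simulated point:", X[int(np.argmin(y))])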

    Maritime Empty Container Repositioning with Inventory-based Control

    Ph.D. thesis (Doctor of Philosophy).

    Simulation Optimization for Manufacturing System Design

    A manufacturing system, characterized by its stochastic nature, is defined by both qualitative and quantitative variables. Often a performance measure of the system, such as throughput, work-in-process, or cycle time, needs to be optimized with respect to some decision variables. It is generally convenient to express a manufacturing system in the form of an analytical model to obtain solutions as quickly as possible. However, as the complexity of the system increases, it becomes more and more difficult to accommodate that complexity in an analytical model due to the uncertainty involved. In such situations, we resort to simulation modeling as an effective alternative.

    Equipment selection forms a separate class of problems in the domain of manufacturing systems. It is highly significant for capital-intensive industry, especially the semiconductor industry, where equipment accounts for a significant share of the total budget. For semiconductor wafer fabs that incorporate complex product flows of multiple product families, reducing cycle time through the choice of appropriate equipment could result in significant profits. This thesis focuses on the equipment selection problem, which selects tools for workstations that each offer a choice of different tool types. The objective is to minimize the average cycle time of a wafer lot in a semiconductor fab, subject to throughput and budget constraints. To solve the problem, we implement five simulation-based algorithms and an analytical algorithm. The simulation-based algorithms include the hill climbing algorithm, two gradient-based algorithms (biggest leap and safer leap), and two versions of the nested partitions algorithm. We compare the performance of the simulation-based algorithms against that of the analytical algorithm and discuss the advantages of prior knowledge of the problem structure for selecting a suitable algorithm.
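
    The hill-climbing variant named in the abstract can be illustrated as a first-improvement search over integer tool counts per workstation, where each candidate configuration is scored by a simulation returning average cycle time and must respect a budget constraint. The cost figures, budget, and simulate_cycle_time function below are hypothetical stand-ins for the thesis's discrete-event simulation model.

        # Minimal hill-climbing sketch for equipment selection under a budget.
        import numpy as np

        COST = np.array([5.0, 3.0, 4.0])        # hypothetical cost per tool per workstation
        BUDGET = 60.0

        def simulate_cycle_time(tools: np.ndarray) -> float:
            """Stand-in for a discrete-event simulation: cycle time falls as
            capacity is added, with simulation noise."""
            util = np.array([8.0, 6.0, 7.0]) / np.maximum(tools, 1)
            return float(np.sum(1.0 / np.maximum(1.0 - util / 10.0, 0.05))
                         + 0.05 * np.random.randn())

        def feasible(tools: np.ndarray) -> bool:
            return bool(np.all(tools >= 1)) and float(COST @ tools) <= BUDGET

        tools = np.array([2, 2, 2])
        best = simulate_cycle_time(tools)
        improved = True
        while improved:
            improved = False
            for i in range(len(tools)):
                for delta in (+1, -1):
                    cand = tools.copy()
                    cand[i] += delta
                    if not feasible(cand):
                        continue
                    ct = simulate_cycle_time(cand)
                    if ct < best:                # accept the first improving neighbour
                        tools, best, improved = cand, ct, True
        print("selected tool counts:", tools, "cycle time:", round(best, 3))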