
    SHADHO: Massively Scalable Hardware-Aware Distributed Hyperparameter Optimization

    Computer vision is experiencing an AI renaissance, in which machine learning models are expediting important breakthroughs in academic research and commercial applications. Effectively training these models, however, is not trivial due in part to hyperparameters: user-configured values that control a model's ability to learn from data. Existing hyperparameter optimization methods are highly parallel but make no effort to balance the search across heterogeneous hardware or to prioritize searching high-impact spaces. In this paper, we introduce a framework for massively Scalable Hardware-Aware Distributed Hyperparameter Optimization (SHADHO). Our framework calculates the relative complexity of each search space and monitors performance on the learning task over all trials. These metrics are then used as heuristics to assign hyperparameters to distributed workers based on their hardware. We first demonstrate that our framework achieves double the throughput of a standard distributed hyperparameter optimization framework by optimizing SVM for MNIST using 150 distributed workers. We then conduct model search with SHADHO over the course of one week using 74 GPUs across two compute clusters to optimize U-Net for a cell segmentation task, discovering 515 models that achieve a lower validation loss than standard U-Net. Comment: 10 pages, 6 figures.
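    The abstract only outlines the hardware-aware assignment heuristic; a rough illustration of the idea is sketched below. The function names, the complexity proxy, and the throughput numbers are illustrative assumptions for this sketch, not SHADHO's actual API.

```python
from itertools import cycle

def space_complexity(space):
    """Crude complexity proxy: product of candidate counts per hyperparameter
    (a continuous range is treated as a fixed budget of 10 samples)."""
    size = 1
    for values in space.values():
        size *= len(values) if isinstance(values, (list, tuple)) else 10
    return size

def assign_spaces(spaces, workers):
    """Greedily send the most complex search spaces to the fastest workers.

    spaces  -- {space name: {hyperparameter: candidate values or a range spec}}
    workers -- {worker name: measured throughput in trials per hour}
    """
    ranked_spaces = sorted(spaces, key=lambda s: space_complexity(spaces[s]), reverse=True)
    ranked_workers = cycle(sorted(workers, key=workers.get, reverse=True))
    return {space: worker for space, worker in zip(ranked_spaces, ranked_workers)}

if __name__ == "__main__":
    spaces = {
        "svm_rbf": {"C": [0.1, 1, 10, 100], "gamma": [1e-4, 1e-3, 1e-2]},
        "svm_linear": {"C": [0.1, 1, 10]},
    }
    workers = {"gpu_node": 120.0, "cpu_node": 35.0}
    print(assign_spaces(spaces, workers))  # the heavier rbf space goes to the faster worker
```

    Ranking spaces by a complexity proxy and workers by measured throughput captures the basic intent of routing heavier searches to faster hardware.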

    International conference on software engineering and knowledge engineering: Session chair

    The Thirtieth International Conference on Software Engineering and Knowledge Engineering (SEKE 2018) will be held at the Hotel Pullman, San Francisco Bay, USA, from July 1 to July 3, 2018. SEKE 2018 will also be dedicated to the memory of Professor Lotfi Zadeh, a great scholar, pioneer and leader in fuzzy sets theory and soft computing. The conference aims at bringing together experts in software engineering and knowledge engineering to discuss relevant results in either software engineering or knowledge engineering or both. Special emphasis will be put on the transfer of methods between both domains. The theme this year is soft computing in software engineering & knowledge engineering. Submissions of papers and demos are both welcome.

    Particle Swarm Optimization Performance: Comparison of Dynamic Economic Dispatch with Dantzig-Wolfe Decomposition

    The Economic Dispatch (ED) problem is, in practice, a nonlinear, non-convex problem that has gradually developed into a serious management task in the planning phase of the power system. The prime purpose of Dynamic Economic Dispatch (DED) is to minimize the total generation cost of the power system: DED engages the committed generating units at minimum cost to meet the load demand while fulfilling various constraints. Particle Swarm Optimization (PSO), a heuristic, population-based, advanced optimization technique, is applied to this challenging, high-dimensional problem to provide a superior solution to the DED optimization problem. The feasibility of the PSO method is demonstrated technically and economically for two different systems, and it is compared with the Dantzig-Wolfe technique regarding solution quality and simplicity of implementation. While the Dantzig-Wolfe method has its intrinsic drawbacks and positive features, the PSO algorithm provides the finest and most appropriate solution. Conventional techniques have been unsuccessful in providing consistent solutions to such problems due to their susceptibility to initial estimates and possible entrapment in local optima, which may complicate computations.
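    As a rough illustration of applying PSO to a single-period dispatch, the sketch below minimizes a quadratic fuel-cost function with a penalty for violating the power balance. The cost coefficients, unit limits, and demand are made-up toy values, not the paper's test systems.

```python
import numpy as np

# Illustrative quadratic fuel-cost coefficients (a + b*P + c*P^2) for three
# units and a single-period demand; toy numbers only.
A = np.array([100.0, 120.0, 90.0])
B = np.array([10.0, 9.5, 11.0])
C = np.array([0.010, 0.012, 0.008])
P_MIN, P_MAX = np.array([50.0, 50.0, 40.0]), np.array([300.0, 250.0, 200.0])
DEMAND = 500.0

def cost(P):
    """Total fuel cost plus a penalty for violating the power balance."""
    fuel = np.sum(A + B * P + C * P**2)
    balance_penalty = 1e4 * abs(np.sum(P) - DEMAND)
    return fuel + balance_penalty

def pso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(P_MIN)
    pos = rng.uniform(P_MIN, P_MAX, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Standard velocity update: inertia + cognitive + social terms.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, P_MIN, P_MAX)   # respect generator limits
        vals = np.array([cost(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, cost(gbest)

if __name__ == "__main__":
    dispatch, total_cost = pso()
    print("dispatch (MW):", np.round(dispatch, 1), "cost:", round(total_cost, 1))
```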

    Visualization and Correction of Automated Segmentation, Tracking and Lineaging from 5-D Stem Cell Image Sequences

    Results: We present an application that enables the quantitative analysis of multichannel 5-D (x, y, z, t, channel) and large montage confocal fluorescence microscopy images. The image sequences show stem cells together with blood vessels, enabling quantification of the dynamic behaviors of stem cells in relation to their vascular niche, with applications in developmental and cancer biology. Our application automatically segments, tracks, and lineages the image sequence data and then allows the user to view and edit the results of the automated algorithms in a stereoscopic 3-D window while simultaneously viewing the stem cell lineage tree in a 2-D window. Using the GPU to store and render the image sequence data enables a hybrid computational approach. An inference-based approach utilizing user-provided edits to automatically correct related mistakes executes interactively on the system CPU while the GPU handles 3-D visualization tasks. Conclusions: By exploiting commodity computer gaming hardware, we have developed an application that can be run in the laboratory to facilitate rapid iteration through biological experiments. There is a pressing need for visualization and analysis tools for 5-D live cell image data. We combine accurate unsupervised processes with an intuitive visualization of the results. Our validation interface allows each data set to be corrected to 100% accuracy, ensuring that downstream data analysis is accurate and verifiable. Our tool is the first to combine all of these aspects, leveraging the synergies obtained by utilizing validation information from stereo visualization to improve the low-level image processing tasks. Comment: BioVis 2014 conference.

    Baldwinian accounts of language evolution

    Since Hinton & Nowlan published their seminal paper (Hinton & Nowlan 1987), the neglected evolutionary process of the Baldwin effect has been widely acknowledged. Especially in the field of language evolution, the Baldwin effect (Baldwin 1896d, Simpson 1953) has been expected to salvage the long-lasting deadlocked situation of modern linguistics: i.e., it may shed light on the relationship between environment and innateness in the formation of language. However, as intense research on this evolutionary theory goes on, certain robust difficulties have become apparent. One example is genotype-phenotype correlation. By computer simulations, both Yamauchi (1999, 2001) and Mayley (1996) show that for the Baldwin effect to work legitimately, correlation between genotypes and phenotypes is the most essential underpinning. This is because this type of the Baldwin effect adopts as its core mechanism Waddington's (1975) "genetic assimilation". In this mechanism, phenocopies have to be genetically closer to the innately predisposed genotype. Unfortunately, this is an overly naive assumption for the theory of language evolution. As language is a highly complex cognitive ability, the possibility that this type of genotype-phenotype correlation exists in the domain of linguistic ability is vanishingly small. In this thesis, we develop a new type of mechanism, called "Baldwinian Niche Construction" (BNC), that has rich explanatory power and can potentially overcome this bewildering problem of the Baldwin effect. BNC is based on the theory of niche construction developed by Odling-Smee et al. (2003). The incorporation of that theory into the Baldwin effect was first suggested by Deacon (1997) and briefly introduced by Godfrey-Smith (2003), but its formulation remains incomplete. In the thesis, we first review the studies of the Baldwin effect in both biology and the study of language evolution. Then the theory of BNC is developed more rigorously. Linguistic communication has an intrinsic property that is fundamentally described by the theory of niche construction, which naturally leads us to the theoretical necessity of BNC in language evolution. By creating a new linguistic niche, learning discloses previously hidden genetic variance on which the Baldwin 'canalizing' effect can take place. It requires no genetic modification in a given gene pool, and the genes responsible for learning need not even occupy the same loci as the genes for innate linguistic knowledge. These and other aspects of BNC are presented with some results from computer simulations.

    A scalable evolvable hardware processing array

    Evolvable hardware (EH) is an interesting alternative to conventional digital circuit design, since autonomous generation of solutions for a given task permits self-adaptivity of the system to changing environments, and such systems present inherent fault tolerance when evolution is performed intrinsically. Systems based on FPGAs that use Dynamic and Partial Reconfiguration (DPR) for evolving the circuit are an example. Also thanks to DPR, these systems can be provided with scalability, a feature that allows a system to change the number of allocated resources at run-time in order to vary some characteristic, such as performance. The combination of both aspects leads to scalable evolvable hardware (SEH), in which size changes as an extra degree of freedom when trying to reach the optimal solution by means of evolution. The main contributions of this paper are an architecture for a scalable and evolvable hardware processing array system, some preliminary evolution strategies that take scalability into consideration, and experimental results showing the benefits of combining evolution and scalability. A digital image filtering application is used as the use case.
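    One way to picture scale-aware evolution is to make the array size itself part of the genome, so mutation can grow or shrink the processing array as well as reconfigure individual elements. The (1+λ) loop below is a toy software sketch with a placeholder fitness function; it is not the paper's FPGA implementation or its image-filtering use case.

```python
import copy
import random

def random_individual(size):
    """An individual = array size + one configuration word per processing element."""
    return {"size": size,
            "config": [[random.randint(0, 15) for _ in range(size)] for _ in range(size)]}

def fitness(ind):
    """Placeholder objective: prefer configurations near a target value while
    charging a small cost per allocated processing element."""
    flat = [v for row in ind["config"] for v in row]
    error = sum(abs(v - 7) for v in flat) / len(flat)
    return -(error + 0.05 * ind["size"] ** 2)

def mutate(ind, max_size=8):
    child = copy.deepcopy(ind)
    if random.random() < 0.2 and child["size"] > 2:
        # Structural mutation: shrink the array by one row and column.
        child["size"] -= 1
        child["config"] = [row[:child["size"]] for row in child["config"][:child["size"]]]
    elif random.random() < 0.2 and child["size"] < max_size:
        # Structural mutation: grow the array, padding with random elements.
        child["size"] += 1
        for row in child["config"]:
            row.append(random.randint(0, 15))
        child["config"].append([random.randint(0, 15) for _ in range(child["size"])])
    else:
        # Functional mutation: reconfigure a single processing element.
        r, c = random.randrange(child["size"]), random.randrange(child["size"])
        child["config"][r][c] = random.randint(0, 15)
    return child

def evolve(generations=200, offspring=4):
    parent = random_individual(size=4)
    for _ in range(generations):
        children = [mutate(parent) for _ in range(offspring)]
        parent = max(children + [parent], key=fitness)
    return parent

if __name__ == "__main__":
    best = evolve()
    print("best size:", best["size"], "fitness:", round(fitness(best), 3))
```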

    Adaptive Search Optimization: Dynamic Algorithm Selection and Caching for Enhanced Database Performance

    Efficient search operations in databases are paramount for timely retrieval of information in various applications. This research introduces a novel approach, combining dynamic algorithm selection and caching strategies, to optimize search performance. The proposed dynamic search algorithm intelligently switches between Binary and Interpolation Search based on dataset characteristics, significantly improving efficiency for non-uniformly distributed data. Additionally, a robust caching mechanism stores and retrieves previous search results, further enhancing computational efficiency. Theoretical analysis and extensive experiments demonstrate the effectiveness of the approach, showcasing its potential to revolutionize database performance in scenarios with diverse data distributions. This research contributes valuable insights and practical solutions to the realm of database optimization, offering a promising avenue for enhancing search operations in real-world applications.
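    A minimal sketch of the idea in Python is shown below: a crude uniformity test decides between binary and interpolation search, and an LRU cache memoizes previous lookups. The uniformity heuristic, cache size, and class names are illustrative assumptions rather than the paper's implementation.

```python
import bisect
from functools import lru_cache

def binary_search(data, target):
    """Standard binary search; O(log n) regardless of value distribution."""
    i = bisect.bisect_left(data, target)
    return i if i < len(data) and data[i] == target else -1

def interpolation_search(data, target):
    """Interpolation search; close to O(log log n) on uniformly distributed keys."""
    lo, hi = 0, len(data) - 1
    while lo <= hi and data[lo] <= target <= data[hi]:
        if data[hi] == data[lo]:
            break
        # Estimate the target's position from the value range.
        pos = lo + (target - data[lo]) * (hi - lo) // (data[hi] - data[lo])
        if data[pos] == target:
            return pos
        if data[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return lo if lo < len(data) and data[lo] == target else -1

def looks_uniform(data, tolerance=0.25):
    """Crude uniformity test: compare the median key gap to the mean key gap."""
    gaps = sorted(b - a for a, b in zip(data, data[1:]))
    mean_gap = (data[-1] - data[0]) / (len(data) - 1)
    median_gap = gaps[len(gaps) // 2]
    return mean_gap > 0 and abs(median_gap - mean_gap) / mean_gap <= tolerance

class AdaptiveSearch:
    def __init__(self, data):
        self.data = sorted(data)
        self.uniform = looks_uniform(self.data)
        self._find = lru_cache(maxsize=1024)(self._search)  # cache of previous results

    def _search(self, target):
        if self.uniform:
            return interpolation_search(self.data, target)
        return binary_search(self.data, target)

    def find(self, target):
        return self._find(target)

if __name__ == "__main__":
    idx = AdaptiveSearch(range(0, 1_000_000, 7))
    print(idx.find(700007), idx.find(3))  # second lookup misses: -1
```

    Repeated lookups hit the cache, while the one-time uniformity check routes uniformly spaced keys to interpolation search and everything else to binary search.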

    Improved sampling of the pareto-front in multiobjective genetic optimizations by steady-state evolution: a Pareto converging genetic algorithm

    Previous work on multiobjective genetic algorithms has focused on preventing genetic drift, and the issue of convergence has been given little attention. In this paper, we present a simple steady-state strategy, the Pareto Converging Genetic Algorithm (PCGA), which naturally samples the solution space and ensures population advancement towards the Pareto-front. PCGA eliminates the need for sharing/niching and thus minimizes heuristically chosen parameters and procedures. A systematic approach based on histograms of rank is introduced for assessing convergence to the Pareto-front, which, by definition, is unknown in most real search problems. We argue that, because a population always inherits a certain amount of genetic material, there is unlikely to be any significant gain beyond some point; this suggests a stopping criterion for terminating the computation. To further encourage diversity and competition, a non-migrating island model may optionally be used; this approach is particularly suited to many difficult (real-world) problems, which have a tendency to get stuck at (unknown) local minima. Results on three benchmark problems are presented and compared with those of earlier approaches. PCGA is found to produce diverse sampling of the Pareto-front without niching and with significantly less computational effort.
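    The histogram-of-rank criterion can be pictured roughly as follows: compute Pareto ranks of the population at two epochs, normalize the rank histograms, and stop when they no longer change appreciably. The sketch below is a loose analogue of that idea with assumed tolerances, not the paper's exact procedure.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def pareto_ranks(objs):
    """Rank 1 = non-dominated front; peel successive fronts off iteratively."""
    objs = np.asarray(objs, dtype=float)
    ranks = np.zeros(len(objs), dtype=int)
    remaining, rank = set(range(len(objs))), 1
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)}
        for i in front:
            ranks[i] = rank
        remaining -= front
        rank += 1
    return ranks

def rank_histogram(ranks, bins):
    hist = np.bincount(ranks, minlength=bins + 1)[1:bins + 1]
    return hist / hist.sum()

def converged(prev_ranks, curr_ranks, tol=0.02):
    """Stop when the normalized rank histogram barely changes between epochs."""
    bins = max(prev_ranks.max(), curr_ranks.max())
    return np.abs(rank_histogram(prev_ranks, bins) -
                  rank_histogram(curr_ranks, bins)).sum() < tol

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    old_pop = rng.random((40, 2))                       # two objectives, minimized
    new_pop = old_pop + rng.normal(0, 0.001, old_pop.shape)
    print(converged(pareto_ranks(old_pop), pareto_ranks(new_pop)))
```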

    Developing a Computational Framework for a Construction Scheduling Decision Support Web Based Expert System

    Decision-making is one of the basic cognitive processes of human behavior, by which a preferred option or course of action is chosen from among a set of alternatives based on certain criteria. Decision-making is the thought process of selecting a logical choice from the available options. When trying to make a good decision, all the positives and negatives of each option should be evaluated. This decision-making process is particularly challenging during the preparation of a construction schedule, where it is difficult for a human to analyze all possible outcomes of each and every situation, because construction of a project is performed in a real-time environment with real-time events that are subject to change at any time. The development of a construction schedule requires knowledge of the construction process that takes place to complete a project. Most of this knowledge is acquired through years of work and practical experience. Currently, working professionals and/or students develop construction schedules without the assistance of a decision support system (one that provides the work/practical experience captured in previous jobs or by other people). Therefore, a scheduling decision support expert system will help in decision-making by expediting and automating the situation analysis to discover the best possible solution. However, the algorithm/framework needed to develop such a decision support expert system does not yet exist. Thus, the focus of my research is to develop a computational framework for a web-based expert system that supports the decision-making process during the preparation of a construction schedule. My research to develop a new computational framework for construction scheduling follows an action research methodology. The main foundation components of my research are scheduling techniques (such as the job shop problem), path-finding techniques (such as the travelling salesman problem), and rule-based languages (such as JESS). My computational framework is developed by combining these theories. The main contribution of my dissertation to computational science is the new scheduling framework, which consists of a combination of scheduling algorithms and is tested with construction scenarios. This framework could be useful in other areas where automatic job and/or task scheduling is necessary.

    MINIMAL CUT SETS IDENTIFICATION OF NUCLEAR SYSTEMS BY EVOLUTIONARY ALGORITHMS

    Fault Trees (FTs) for the Probabilistic Safety Analysis (PSA) of real systems suffer from the combinatorial explosion of failure sets, so minimal cut set (mcs) identification is not a trivial technical issue. In this work, we transform the search for the event sets leading to system failure and the identification of the mcs into an optimization problem. We do so by hierarchically looking for the minimum combination of cut sets that can guarantee the best coverage of all the minterms that make the system fail. A multiple-population, parallel search policy based on a Differential Evolution (DE) algorithm is developed and shown to be efficient for mcs identification on a case study considering the Airlock System (AS) of a CANDU reactor.
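    As a toy illustration of casting cut-set identification as an optimization problem, the sketch below runs a single-population Differential Evolution over a five-event fault tree, thresholding continuous genotypes to binary event sets and penalizing sets that do not fail the system. The fault tree, encoding, and DE settings are invented for illustration and are much simpler than the paper's multiple-population approach.

```python
import numpy as np

# Toy fault tree: TOP = (A AND B) OR (C AND D AND E); its minimal cut sets
# are {A, B} and {C, D, E}. Everything here is illustrative only.
N_EVENTS = 5  # basic events A..E

def system_fails(x):
    a, b, c, d, e = x.astype(bool)
    return (a and b) or (c and d and e)

def fitness(x):
    """Prefer small event sets that still fail the system (lower is better)."""
    return x.sum() if system_fails(x) else N_EVENTS + 10  # penalty if system survives

def differential_evolution(pop_size=20, iters=100, F=0.8, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.random((pop_size, N_EVENTS))        # continuous genotype in [0, 1]
    decode = lambda v: (v > 0.5).astype(int)      # threshold to a binary event set
    for _ in range(iters):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), 0, 1)
            cross = rng.random(N_EVENTS) < CR
            trial = np.where(cross, mutant, pop[i])
            if fitness(decode(trial)) <= fitness(decode(pop[i])):
                pop[i] = trial
        # (a multiple-population version would exchange best individuals here)
    return min((decode(v) for v in pop), key=fitness)

if __name__ == "__main__":
    cut_set = differential_evolution()
    print("candidate minimal cut set:", [name for name, bit in zip("ABCDE", cut_set) if bit])
```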