1,481 research outputs found

    The Expected Fitness Cost of a Mutation Fixation under the One-dimensional Fisher Model

    This paper employs Fisher's model of adaptation to understand the expected fitness effect of fixing a mutation in a natural population. Fisher's model in one dimension admits a closed-form solution for this expected fitness effect. Different combinations of parameters, including the distribution of mutation lengths, the effective population size, and the initial state of the population, are examined to see how they affect the expected fitness effect of state transitions. The results show that the expected fitness change due to the fixation of a mutation is always positive, regardless of the distributional shape of mutation lengths, the effective population size, and the initial state of the population. The further the initial state of a population is from the optimal state, the more slowly the population returns to the optimal state. Effective population size (except when very small) has little effect on the expected fitness change due to mutation fixation. The always-positive expected fitness change suggests that small populations may not necessarily be doomed by a runaway process of fixation of deleterious mutations.
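
The paper's closed-form solution is not reproduced in the abstract, but the quantity in question can be sketched numerically: weight each candidate mutation's fitness change by its fixation probability (Kimura's classical formula is used here as a stand-in) and normalize. The Gaussian fitness function, normal mutation-length distribution, and all parameter values below are illustrative assumptions, not the paper's.

```python
import math
import random

def fitness(z):
    """Gaussian fitness with the optimum at z = 0 (an illustrative choice)."""
    return math.exp(-z * z / 2.0)

def fix_prob(s, n):
    """Kimura's fixation probability for a new mutant with selection
    coefficient s in a haploid population of effective size n."""
    if abs(s) < 1e-12:
        return 1.0 / n
    if -2.0 * n * s > 700.0:          # strongly deleterious: avoid overflow
        return 0.0
    return (1.0 - math.exp(-2.0 * s)) / (1.0 - math.exp(-2.0 * n * s))

def expected_fitness_change(z0, n, samples=100_000, seed=1):
    """Monte Carlo estimate of E[fitness change | the mutation fixes],
    with signed mutation displacements drawn from a normal distribution."""
    rng = random.Random(seed)
    w0 = fitness(z0)
    num = den = 0.0
    for _ in range(samples):
        dz = rng.gauss(0.0, 0.5)      # mutation "length"
        w1 = fitness(z0 + dz)
        p = fix_prob(w1 / w0 - 1.0, n)
        num += p * (w1 - w0)          # fitness change weighted by fixation prob
        den += p
    return num / den

dw = expected_fitness_change(z0=1.0, n=1000)
```

Even though most random displacements are deleterious, conditioning on fixation weights the beneficial ones heavily, which is the intuition behind the always-positive result.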

    Probability-one Homotopies in Computational Science

    Probability-one homotopy algorithms are a class of methods for solving nonlinear systems of equations that, under mild assumptions, are globally convergent for a wide range of problems in science and engineering. Convergence theory, robust numerical algorithms, and production-quality mathematical software exist for general nonlinear systems of equations, and for special cases such as Brouwer fixed point problems, polynomial systems, and nonlinear constrained optimization. Using a sample of challenging scientific problems as motivation, some pertinent homotopy theory and algorithms are presented. The problems considered are analog circuit simulation (for nonlinear systems), reconfigurable space trusses (for polynomial systems), and fuel-optimal orbital rendezvous (for nonlinear constrained optimization). The mathematical software packages HOMPACK90 and POLSYS_PLP are also briefly described.
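
As a rough illustration of the idea (not the HOMPACK90 algorithm itself, which follows the zero curve by arclength with predictor-corrector steps), a naive Newton-homotopy tracker for a toy 2x2 system might look like:

```python
import math

def F(x):
    """Toy system: the circle of radius 2 intersected with the line x0 = x1."""
    return [x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1]]

def JF(x):
    return [[2.0 * x[0], 2.0 * x[1]],
            [1.0, -1.0]]

def solve2(A, b):
    """2x2 linear solve by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (b[1] * A[0][0] - b[0] * A[1][0]) / det]

def newton_homotopy(F, JF, a, steps=40):
    """Track H(x, lam) = F(x) - (1 - lam) * F(a) from lam = 0, where x = a
    is a zero by construction, to lam = 1, where zeros of H are zeros of F.
    This naive fixed-step tracker cannot handle turning points on the zero
    curve; production codes track by arclength instead."""
    Fa = F(a)
    x = list(a)
    for k in range(1, steps + 1):
        lam = k / steps
        for _ in range(50):           # Newton correction at this lam
            fx = F(x)
            h = [fx[i] - (1.0 - lam) * Fa[i] for i in range(2)]
            if max(abs(v) for v in h) < 1e-12:
                break
            dx = solve2(JF(x), h)
            x = [x[i] - dx[i] for i in range(2)]
    return x

root = newton_homotopy(F, JF, a=[1.0, 0.0])   # converges to (sqrt(2), sqrt(2))
```

The point of the construction is that the starting point `a` is essentially arbitrary: for almost every choice the zero curve leads from the trivial start to a solution of `F`.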

    Performance Analysis of a Novel GPU Computation-to-core Mapping Scheme for Robust Facet Image Modeling

    Though the GPGPU concept is well known in image processing, much work remains to fully exploit GPUs as an alternative computation engine. This paper investigates computation-to-core mapping strategies to probe the efficiency and scalability of the robust facet image modeling algorithm on GPUs. Our fine-grained computation-to-core mapping scheme shows a significant performance gain over the standard pixel-wise mapping scheme. With in-depth performance comparisons across the two mapping schemes, we analyze the impact of the level of parallelism on GPU computation and suggest two principles for optimizing future image processing applications on the GPU platform.
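
The paper's facet model and mapping schemes are not spelled out in the abstract; as background, a minimal serial sketch of facet image modeling (a least-squares plane fit over a 3x3 window) shows the per-pixel work that the mapping schemes distribute across GPU cores. The plane basis and window size are illustrative assumptions:

```python
def plane_facet(image, r, c):
    """Least-squares fit of a plane a + b*dx + c*dy to the 3x3 window
    centred at (r, c).  Over a symmetric window the basis {1, dx, dy}
    is orthogonal, so the normal equations decouple into three sums."""
    s = sx = sy = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            v = image[r + dy][c + dx]
            s += v
            sx += dx * v
            sy += dy * v
    return s / 9.0, sx / 6.0, sy / 6.0   # a (local mean), b (d/dx), c (d/dy)

# On a synthetic linear ramp the fit recovers the plane exactly.
img = [[2 * c + 3 * r + 1 for c in range(5)] for r in range(5)]
a, b, c = plane_facet(img, 2, 2)
```

A pixel-wise mapping assigns one GPU thread to one call of this whole function; a finer-grained mapping splits the window arithmetic itself across threads, which is the kind of trade-off the paper measures.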

    Deep Attributes Driven Multi-Camera Person Re-identification

    The visual appearance of a person is easily affected by many factors such as pose variations, viewpoint changes, and camera parameter differences. This makes person Re-Identification (ReID) across multiple cameras a very challenging task. This work is motivated to learn mid-level human attributes that are robust to such visual appearance variations, and we propose a semi-supervised attribute learning framework that progressively boosts the accuracy of attributes using only a limited amount of labeled data. Specifically, the framework involves three training stages. A deep Convolutional Neural Network (dCNN) is first trained on an independent dataset labeled with attributes. It is then fine-tuned on another dataset labeled only with person IDs using our defined triplet loss. Finally, the updated dCNN predicts attribute labels for the target dataset, which is combined with the independent dataset for the final round of fine-tuning. The predicted attributes, namely deep attributes, exhibit superior generalization ability across different datasets. By directly using the deep attributes with a simple cosine distance, we obtain surprisingly good accuracy on four person ReID datasets. Experiments also show that a simple metric learning module further boosts our method, making it significantly outperform many recent works.
    Comment: Person Re-identification; 17 pages; 5 figures; in IEEE ECCV 201
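
The exact form of "our defined triplet loss" is not given in the abstract; the standard hinge-style triplet loss it presumably builds on looks like this (the margin value is an arbitrary choice):

```python
def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Hinge-style triplet loss: the anchor-positive distance should be
    at least `margin` smaller than the anchor-negative distance; the
    loss is zero once that constraint holds."""
    return max(0.0,
               euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

good = triplet_loss([0.0, 0.0], [0.0, 1.0], [0.0, 3.0])   # constraint satisfied
bad = triplet_loss([0.0, 0.0], [0.0, 3.0], [0.0, 1.0])    # violated by 2.5
```

Minimizing this over many (anchor, same-ID, different-ID) triplets pulls embeddings of the same person together without needing attribute labels, which is why the ID-only dataset can be used in the second training stage.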

    Optimization by nonhierarchical asynchronous decomposition

    Large-scale optimization problems are tractable only if they are somehow decomposed. Hierarchical decompositions are inappropriate for some types of problems and do not parallelize well. Sobieszczanski-Sobieski has proposed a nonhierarchical decomposition strategy for nonlinear constrained optimization that is naturally parallel. Despite some successes on engineering problems, the algorithm as originally proposed fails on simple two-dimensional quadratic programs. The algorithm is carefully analyzed for quadratic programs, and a number of modifications are suggested to improve its robustness.
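
As a sketch of the decomposition idea (not Sobieszczanski-Sobieski's algorithm itself), a Jacobi-style variable split on a quadratic program lets every subsystem update its own variable in parallel; like the algorithm analyzed in the paper, such schemes are not robust for all quadratic programs:

```python
def jacobi_qp(Q, b, iters=60):
    """Minimise 0.5*x'Qx - b'x by a nonhierarchical decomposition: each
    subsystem i re-minimises over its own variable x[i] with the others
    frozen at their previous values, so all subproblem solves could run
    in parallel.  This Jacobi-style sweep converges when Q is diagonally
    dominant but can diverge on other simple quadratic programs, which
    mirrors the robustness issue the paper analyses."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(Q[i][j] * x[j] for j in range(n) if j != i)) / Q[i][i]
             for i in range(n)]
    return x

x_opt = jacobi_qp([[2.0, 1.0], [1.0, 2.0]], [3.0, 3.0])   # exact minimiser is (1, 1)
```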

    Enrichment Procedures for Soft Clusters: A Statistical Test and its Applications

    Clusters, typically mined by modeling locality of attribute spaces, are often evaluated for their ability to demonstrate 'enrichment' of categorical features. A cluster enrichment procedure evaluates the membership of a cluster for significant representation in pre-defined categories of interest. While classical enrichment procedures assume a hard clustering definition, in this paper we introduce a new statistical test that computes enrichments for soft clusters. We demonstrate an application of this test in refining and evaluating soft clusters for classification of remotely sensed images.
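
The paper's analytic test is not reproduced in the abstract; as a stand-in, the same question can be posed as a permutation test on the soft-membership mass that falls inside a category:

```python
import random

def soft_enrichment_pvalue(memberships, in_category, n_perm=2000, seed=0):
    """Permutation test for enrichment of a soft cluster in a category:
    the statistic is the total soft-membership mass carried by items in
    the category, and the null distribution shuffles the category labels
    over items.  (The paper derives an analytic test; a permutation test
    serves here as an illustrative substitute.)"""
    observed = sum(m for m, c in zip(memberships, in_category) if c)
    rng = random.Random(seed)
    labels = list(in_category)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(labels)
        if sum(m for m, c in zip(memberships, labels) if c) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Items 0-9 belong strongly to the cluster and all carry the label: enriched.
p_enriched = soft_enrichment_pvalue([0.9] * 10 + [0.1] * 10,
                                    [True] * 10 + [False] * 10)
# Uniform memberships carry no signal, so the test reports no enrichment.
p_flat = soft_enrichment_pvalue([0.5] * 20, [True] * 10 + [False] * 10)
```

Note how the fractional memberships enter the statistic directly; a classical hard-cluster test (e.g. hypergeometric) would first have to threshold them, discarding information.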

    Adjusting process count on demand for petascale global optimization

    There are many challenges that need to be met before efficient and reliable computation at the petascale is possible. Many scientific and engineering codes running at the petascale are likely to be memory intensive, which makes thrashing a serious problem for many petascale applications. One way to overcome this challenge is to use a dynamic number of processes, so that the total amount of memory available for the computation can be increased on demand. This paper describes modifications made to the massively parallel global optimization code pVTdirect in order to allow for a dynamic number of processes. In particular, the modified version of the code monitors memory use and spawns new processes if the amount of available memory is determined to be insufficient. The primary design challenges are discussed, and performance results are presented and analyzed.
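
The monitor-and-spawn policy can be sketched as a pure decision function. Everything here is illustrative: the assumption that each new process contributes roughly the currently free amount of memory presumes homogeneous nodes, and pVTdirect's actual spawning criteria are not given in the abstract (the real code would hand the result to MPI dynamic process management):

```python
def plan_spawns(free_bytes, needed_bytes, current_procs, max_procs):
    """Decide how many extra worker processes to request so that a memory
    demand of `needed_bytes` can be met without thrashing.  Assumes, for
    illustration only, that each newly spawned process contributes about
    `free_bytes` of additional memory (homogeneous nodes)."""
    if needed_bytes <= free_bytes or current_procs >= max_procs:
        return 0                              # enough memory, or at the cap
    deficit = needed_bytes - free_bytes
    extra = -(-deficit // free_bytes)         # ceiling division
    return min(extra, max_procs - current_procs)
```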

    Modern Homotopy Methods in Optimization

    Probability-one homotopy methods are a class of algorithms for solving nonlinear systems of equations that are accurate, robust, and converge from an arbitrary starting point almost surely. These techniques have been successfully applied to solve Brouwer fixed point problems, polynomial systems of equations, and discretizations of nonlinear two-point boundary value problems based on shooting, finite differences, collocation, and finite elements. This paper summarizes the theory of globally convergent homotopy algorithms for unconstrained and constrained optimization, and gives some examples of actual applications of homotopy techniques to engineering optimization problems.
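
For unconstrained optimization the same machinery tracks a zero of the gradient rather than of a general system. A one-dimensional toy example (problem and step counts chosen purely for illustration):

```python
def homotopy_minimize(steps=40):
    """Minimise f(x) = x**4/4 + x**2/2 - 2x by tracking a zero of the
    gradient g(x) = x**3 + x - 2 with the Newton homotopy
    H(x, lam) = g(x) - (1 - lam) * g(a), starting from a = 0.  Since g
    is strictly increasing, the zero curve has no turning points and a
    naive fixed-step tracker suffices."""
    g = lambda x: x ** 3 + x - 2.0
    dg = lambda x: 3.0 * x ** 2 + 1.0
    a = 0.0
    ga = g(a)
    x = a
    for k in range(1, steps + 1):
        lam = k / steps
        for _ in range(50):            # Newton correction at this lam
            h = g(x) - (1.0 - lam) * ga
            if abs(h) < 1e-13:
                break
            x -= h / dg(x)
    return x

x_min = homotopy_minimize()   # the minimiser of f is x = 1
```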

    An SMP Soft Classification Algorithm for Remote Sensing

    This work introduces a symmetric multiprocessing (SMP) version of the continuous iterative guided spectral class rejection (CIGSCR) algorithm, a semiautomated classification algorithm for remote sensing (multispectral) images. The algorithm uses soft data clusters to produce a soft classification containing inherently more information than a comparable hard classification, at an increased computational cost. Previous work suggests that similar algorithms achieve good parallel scalability, motivating the parallel algorithm development work here. Experimental results of applying parallel CIGSCR to an image with approximately 10^8 pixels and six bands demonstrate superlinear speedup. A soft two-class classification is generated in just over four minutes using 32 processors.
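
CIGSCR's membership function is not given in the abstract; a generic Gaussian soft membership illustrates how soft clusters yield per-class probabilities rather than a single hard label. The centers, bandwidth, and class assignments below are made up for illustration:

```python
import math

def soft_memberships(pixel, centers, sigma=1.0):
    """Gaussian soft membership of a pixel in each cluster, normalised
    to sum to one (a stand-in for CIGSCR's membership function)."""
    w = [math.exp(-sum((p - q) ** 2 for p, q in zip(pixel, ctr))
                  / (2.0 * sigma ** 2)) for ctr in centers]
    total = sum(w)
    return [v / total for v in w]

def soft_classify(pixel, centers, cluster_class):
    """Class probability = summed membership of the clusters assigned to
    that class, so one pixel can carry fractional evidence for several
    classes instead of a single hard label."""
    probs = {}
    for mem, cls in zip(soft_memberships(pixel, centers), cluster_class):
        probs[cls] = probs.get(cls, 0.0) + mem
    return probs

probs = soft_classify((0.5, 0.0),
                      centers=[(0.0, 0.0), (4.0, 0.0), (8.0, 0.0)],
                      cluster_class=["water", "water", "land"])
```

The extra information in the soft output (full membership vectors per pixel) is also what drives the increased computational cost the abstract mentions, and why the per-pixel work parallelizes well.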