
    An investigation of messy genetic algorithms

    Genetic algorithms (GAs) are search procedures based on the mechanics of natural selection and natural genetics. They combine string codings, or artificial chromosomes, and populations with the selective and juxtapositional power of reproduction and recombination to form a surprisingly powerful search heuristic for many problems. Despite their empirical success, there has been a long-standing objection to the use of GAs on arbitrarily difficult problems. To address this objection, a new type of genetic algorithm, the messy genetic algorithm (mGA), was introduced, and results were obtained on a 30-bit, order-three deceptive problem. Messy genetic algorithms combine variable-length strings, a two-phase selection scheme, and messy genetic operators to overcome the fixed-coding limitation of standard simple GAs. This work presents the results of a study of mGAs on problems with nonuniform subfunction scale and size. The mGA approach is summarized, covering both its operation and the theory behind its use, and experiments on problems of varying scale, varying building-block size, and combined varying scale and size are presented.
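
    The abstract names the key ingredients of a messy GA: variable-length strings of (locus, allele) pairs, a competitive template for unspecified positions, and cut/splice operators in place of fixed-length crossover. The sketch below is an illustrative reading of those ingredients, not the authors' implementation; the operator probabilities and the 6-bit template are arbitrary assumptions.

```python
import random

# Illustrative messy-GA chromosome handling (assumptions, not the paper's code).
# A messy chromosome is a variable-length list of (locus, allele) pairs; loci may
# be missing or repeated. Unspecified loci are filled from a competitive template
# before evaluation.

def express(chromosome, template):
    """Overlay a messy chromosome on a template; the first occurrence of a locus wins."""
    bits = list(template)
    seen = set()
    for locus, allele in chromosome:
        if locus not in seen:
            bits[locus] = allele
            seen.add(locus)
    return bits

def cut(chromosome, p_cut=0.02):
    """Cut operator: split a string at a random point with length-proportional probability."""
    if len(chromosome) > 1 and random.random() < p_cut * len(chromosome):
        point = random.randint(1, len(chromosome) - 1)
        return chromosome[:point], chromosome[point:]
    return chromosome, []

def splice(a, b, p_splice=0.9):
    """Splice operator: concatenate two strings with fixed probability."""
    return a + b if random.random() < p_splice else a

# Example: a hypothetical 6-bit problem with an all-zeros template.
template = [0] * 6
chrom = [(3, 1), (0, 1), (3, 0)]   # locus 3 is specified twice; the first wins
print(express(chrom, template))    # -> [1, 0, 0, 1, 0, 0]
left, right = cut(chrom)
print(splice(left, right))
```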

    User community development for the space transportation system/Skylab

    The New User Function plan for identifying beneficial uses of space is described. Critical issues such as funding, manpower, and protection of user proprietary rights are discussed, along with common barriers that impede the development of a user community. Studies for developing methodologies to identify new users and uses of the space transportation system are included.

    Enabling Factor Analysis on Thousand-Subject Neuroimaging Datasets

    The scale of functional magnetic resonance imaging data is rapidly increasing as large multi-subject datasets become widely available and high-resolution scanners are adopted. The inherent low dimensionality of the information in this data has led neuroscientists to consider factor analysis methods to extract and analyze the underlying brain activity. In this work, we consider two recent multi-subject factor analysis methods: the Shared Response Model and Hierarchical Topographic Factor Analysis. We perform analytical, algorithmic, and code optimizations to enable multi-node parallel implementations to scale. Single-node improvements yield 99x and 1812x speedups on these two methods, respectively, and enable the processing of larger datasets. Our distributed implementations show strong scaling of 3.3x and 5.5x, respectively, with 20 nodes on real datasets. We also demonstrate weak scaling on a synthetic dataset of 1024 subjects, on up to 1024 nodes and 32,768 cores.
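
    One of the two methods named above, the Shared Response Model, maps each subject's data into a common low-dimensional space. The sketch below shows a plain alternating-update fit of a deterministic SRM variant (an orthogonal Procrustes solve per subject, then an averaged shared response); it is a simplified illustration, not the optimized or distributed implementation described in the abstract, and the array sizes are made up.

```python
import numpy as np

# Minimal sketch of a deterministic Shared Response Model fit via alternating
# updates. Illustrative only; not the paper's optimized implementation.

def fit_srm(data, k=10, n_iter=20, seed=0):
    """data: list of (voxels_i x timepoints) arrays, one per subject."""
    rng = np.random.default_rng(seed)
    # Initialize each subject's orthonormal map W_i from a random matrix.
    ws = []
    for x in data:
        a = rng.standard_normal((x.shape[0], k))
        u, _, vt = np.linalg.svd(a, full_matrices=False)
        ws.append(u @ vt)
    for _ in range(n_iter):
        # Shared response S: average of the back-projected subject data.
        s = sum(w.T @ x for w, x in zip(ws, data)) / len(data)
        # Subject maps: orthogonal Procrustes solution per subject.
        ws = []
        for x in data:
            u, _, vt = np.linalg.svd(x @ s.T, full_matrices=False)
            ws.append(u @ vt)
    return ws, s

# Toy usage: 3 subjects, 50 voxels each, 40 timepoints.
subjects = [np.random.randn(50, 40) for _ in range(3)]
ws, shared = fit_srm(subjects, k=5)
print(shared.shape)   # (5, 40)
```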

    Automation of orbit determination functions for National Aeronautics and Space Administration (NASA)-supported satellite missions

    The Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC) provides spacecraft trajectory determination for a wide variety of National Aeronautics and Space Administration (NASA)-supported satellite missions, using the Tracking Data Relay Satellite System (TDRSS) and the Ground Spaceflight Tracking and Data Network (GSTDN). To take advantage of computerized decision-making processes that can be used in spacecraft navigation, the Orbit Determination Automation System (ODAS) was designed, developed, and implemented as a prototype system to automate the orbit determination (OD) and orbit quality assurance (QA) functions performed by orbit operations. Based on a machine-resident generic schedule and predetermined mission-dependent QA criteria, ODAS autonomously activates an interface with the existing trajectory determination system, which uses a batch least-squares differential correction algorithm, to perform the basic OD functions. The computational parameters determined during the OD are processed to make computerized decisions regarding QA, and a controlled recovery process is activated when the criteria are not satisfied. The complete cycle is autonomous and continuous. ODAS was extensively tested under conditions resembling actual operations and found to be effective and reliable for extended autonomous OD. Details of the system structure and function are discussed, and test results are presented.
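
    The abstract refers to a batch least-squares differential correction algorithm as the core OD step. The sketch below illustrates that estimator in a toy setting: a Gauss-Newton loop solving the weighted normal equations for a 2-D position from range measurements to fixed stations. The measurement model, station geometry, and convergence threshold are illustrative assumptions, not the FDF's actual dynamics or tracking models.

```python
import numpy as np

# Toy batch least-squares differential correction (Gauss-Newton on residuals).
# Illustrative assumptions throughout; not the FDF trajectory determination system.

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])

def ranges(state):
    """Predicted range from the current state estimate to each tracking station."""
    return np.linalg.norm(stations - state, axis=1)

def jacobian(state, eps=1e-6):
    """Numerical partials of the measurements with respect to the state."""
    h0 = ranges(state)
    cols = []
    for j in range(state.size):
        dx = np.zeros_like(state)
        dx[j] = eps
        cols.append((ranges(state + dx) - h0) / eps)
    return np.column_stack(cols)

def differential_correction(obs, x0, weights, n_iter=10):
    x = x0.copy()
    for _ in range(n_iter):
        r = obs - ranges(x)                  # observed-minus-computed residuals
        h = jacobian(x)
        w = np.diag(weights)
        # Normal equations: (H^T W H) dx = H^T W r
        dx = np.linalg.solve(h.T @ w @ h, h.T @ w @ r)
        x = x + dx
        if np.linalg.norm(dx) < 1e-10:
            break
    return x, r

truth = np.array([3.0, 4.0])
obs = ranges(truth) + np.random.default_rng(1).normal(0, 0.01, 3)
est, resid = differential_correction(obs, np.array([1.0, 1.0]), np.ones(3))
print(est)   # close to [3, 4]; residual statistics could feed a QA check
```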

    Fourier sparsity, spectral norm, and the Log-rank conjecture

    We study Boolean functions with sparse Fourier coefficients or small spectral norm, and show their applications to the Log-rank Conjecture for XOR functions f(x \oplus y) --- a fairly large class of functions including well-studied ones such as Equality and Hamming Distance. The rank of the communication matrix M_f for such functions is exactly the Fourier sparsity of f. Let d be the F_2-degree of f and D^{CC}(f) stand for the deterministic communication complexity of f(x \oplus y). We show that: 1. D^{CC}(f) = O(2^{d^2/2} \log^{d-2} ||\hat f||_1); in particular, the Log-rank Conjecture holds for XOR functions with constant F_2-degree. 2. D^{CC}(f) = O(d ||\hat f||_1) = O(\sqrt{rank(M_f)} \log rank(M_f)). We obtain our results through a degree-reduction protocol based on a variant of polynomial rank, and actually conjecture that its communication cost is already \log^{O(1)} rank(M_f). The above bounds also hold for the parity decision tree complexity of f, a measure that is no less than the communication complexity (up to a factor of 2). Along the way we also show several structural results about Boolean functions with small F_2-degree or small spectral norm, which could be of independent interest. For functions f with constant F_2-degree: 1) f can be written as the summation of quasi-polynomially many indicator functions of subspaces with \pm signs, improving the previous doubly exponential upper bound by Green and Sanders; 2) being sparse in the Fourier domain is polynomially equivalent to having a small parity decision tree complexity; 3) f depends only on polylog(||\hat f||_1) linear functions of the input variables. For functions f with small spectral norm: 1) there is an affine subspace of co-dimension O(||\hat f||_1) on which f is constant; 2) there is a parity decision tree of depth O(||\hat f||_1 \log ||\hat f||_0). Comment: v2: Corollary 31 of v1 removed because of a bug in the proof (other results not affected).
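
    The fact the abstract relies on, that the real rank of the communication matrix M_f equals the Fourier sparsity of f for XOR functions, can be checked numerically on a small example. The sketch below does so for a 3-bit Equality-style function: it computes the Fourier coefficients by direct summation, builds M_f[x, y] = f(x XOR y), and compares the count of nonzero coefficients with the matrix rank. The choice of function and n = 3 are arbitrary illustrations.

```python
import numpy as np
from itertools import product

# Numerical check: rank(M_f) equals the Fourier sparsity of f for an XOR function.
n = 3
points = list(product([0, 1], repeat=n))

def f(z):
    """Equality-style example, encoded as +/-1: +1 iff z is all zeros (i.e. x == y)."""
    return 1 if all(b == 0 for b in z) else -1

def fourier_coeffs():
    """f_hat(S) = 2^{-n} * sum_z f(z) * (-1)^{<S, z>}, computed by direct summation."""
    coeffs = {}
    for s in points:
        total = sum(f(z) * (-1) ** sum(a * b for a, b in zip(s, z)) for z in points)
        coeffs[s] = total / 2 ** n
    return coeffs

sparsity = sum(1 for c in fourier_coeffs().values() if abs(c) > 1e-9)

# Communication matrix M_f[x, y] = f(x XOR y)
m = np.array([[f(tuple(a ^ b for a, b in zip(x, y))) for y in points] for x in points])
rank = np.linalg.matrix_rank(m)

print(sparsity, rank)   # both equal 8 for this function
```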