216 research outputs found

    Refined Genetic Algorithms for Polypeptide Structure Prediction

    Get PDF
    Accurate and reliable prediction of macromolecular structures has eluded researchers for nearly 40 years. Prediction via energy minimization assumes the native conformation has the globally minimal potential energy. An exhaustive search is impossible, since for molecules of normal size the conformational search space is astronomically large. Domain knowledge sources, such as the Brookhaven Protein Data Bank (PDB), can be mined for constraints to limit the search space. Genetic algorithms (GAs) are stochastic, population-based search algorithms of polynomial (P) time complexity that can produce semi-optimal solutions for problems of nondeterministic polynomial (NP) time complexity, such as polypeptide structure prediction (PSP). Three refined GAs are presented. First, a farming-model parallel hybrid GA (PHGA) preserves the effectiveness of the serial algorithm with substantial speedup; portability across distributed and MPP platforms is accomplished with the Message Passing Interface (MPI) communications standard. Second, a real-valued GA system, the Real-valued Genetic Algorithm Limited by constraints (REGAL), exploits domain knowledge; experiments with the pentapeptide Met-enkephalin have identified conformers with lower energies (CHARMM) than the accepted optimal conformer (Scheraga et al.), -31.98 vs. -28.96 kcal/mol, and analysis of exogenous parameters yields additional insight into performance. Third, a parallel version (Para-REGAL) is an island model modified to allow different active constraints in the distributed subpopulations, introducing the novel concepts of Probability of Migration and Probability of Complete Migration.
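    The abstract does not detail Para-REGAL's migration scheme; the sketch below is only a minimal illustration of an island-model GA epoch governed by the two migration probabilities it names, assuming a ring topology and fitness minimization (all names and the displacement policy are hypothetical).

```python
import random

def migration_epoch(islands, p_migration, p_complete_migration, n_migrants=2):
    """One migration epoch of an island-model GA (fitness minimization).

    islands: list of subpopulations, each a list of (fitness, individual).
    p_migration: chance that an island emigrates this epoch.
    p_complete_migration: chance that an emigrating island sends its whole
    subpopulation rather than only its best individuals.
    """
    for i, pop in enumerate(islands):
        if random.random() >= p_migration:
            continue  # island stays isolated this epoch
        target = islands[(i + 1) % len(islands)]  # assumed ring topology
        if random.random() < p_complete_migration:
            migrants = list(pop)  # complete migration: whole subpopulation
        else:
            # Partial migration: only the lowest-energy individuals move.
            migrants = sorted(pop, key=lambda fi: fi[0])[:n_migrants]
        # Migrants displace the worst individuals on the target island.
        target.sort(key=lambda fi: fi[0], reverse=True)
        target[:len(migrants)] = migrants
```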

    Parallel, iterative solution of sparse linear systems: Models and architectures

    Get PDF
    A model of a general class of asynchronous, iterative solution methods for linear systems is developed. In the model, the system is solved by creating several cooperating tasks that each compute a portion of the solution vector. A data transfer model predicting both the probability that data must be transferred between two tasks and the amount of data to be transferred is presented. This model is used to derive an execution time model for predicting parallel execution time and an optimal number of tasks given the dimension and sparsity of the coefficient matrix and the costs of computation, synchronization, and communication. The suitability of different parallel architectures for solving randomly sparse linear systems is discussed. In light of the complexity of task scheduling, one parallel architecture built around a broadcast bus is presented and analyzed.
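    The abstract describes the model only at a high level; as a toy stand-in for the cooperating-task idea, the following sketch partitions the solution vector into blocks and updates each block from the others' current values, i.e. a synchronous block-Jacobi iteration rather than the asynchronous variants being modeled (assumes NumPy; converges for, e.g., diagonally dominant systems).

```python
import numpy as np

def block_jacobi(A, b, n_tasks=4, iters=200):
    """Block-Jacobi iteration; each 'task' owns one slice of x.

    In the paper's parallel model each block would be updated by its own
    task using whatever values of the other blocks it has seen most
    recently; here the blocks are swept sequentially and synchronously.
    """
    n = len(b)
    x = np.zeros(n)
    blocks = np.array_split(np.arange(n), n_tasks)
    for _ in range(iters):
        x_new = x.copy()
        for idx in blocks:
            # Solve this block's equations holding the other blocks fixed:
            # A[idx, idx] x_new[idx] = b[idx] - A[idx, rest] x[rest]
            rest = np.setdiff1d(np.arange(n), idx)
            rhs = b[idx] - A[np.ix_(idx, rest)] @ x[rest]
            x_new[idx] = np.linalg.solve(A[np.ix_(idx, idx)], rhs)
        x = x_new
    return x
```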

    Validation, Calibration, and Evaluation of ITS Technologies on the Borman Corridor

    Full text link

    Parallel Architectures for Planetary Exploration Requirements (PAPER)

    Get PDF
    The Parallel Architectures for Planetary Exploration Requirements (PAPER) project is research oriented towards technology insertion issues for NASA's unmanned planetary probes. It was initiated to complement and augment long-term efforts for space exploration, with particular reference to the research needs of NASA Langley Research Center (NASA/LaRC) for planetary exploration missions of the mid and late 1990s. The requirements for space missions as given in the somewhat dated Advanced Information Processing Systems (AIPS) requirements document are contrasted with the new requirements from JPL/Caltech involving sensor data capture and scene analysis. It is shown that more stringent requirements have arisen as a result of technological advancements. Two possible architectures, the AIPS Proof of Concept (POC) configuration and the MAX fault-tolerant dataflow multiprocessor, were evaluated. The main observation was that the AIPS design is biased towards fault tolerance and may not be an ideal architecture for planetary and deep space probes due to high cost and complexity. The MAX concept appears to be a promising candidate, except that more detailed information is required; the feasibility of adding neural computation capability to this architecture also needs to be studied. Key impact issues for the architectural design of computing systems meant for planetary missions were also identified.

    Active Processor Scheduling Using Evolution Algorithms

    Get PDF
    The allocation of processes to processors has long been of interest to engineers. The processor allocation problem considered here assigns multiple applications onto a computing system. With this algorithm, researchers could more efficiently examine real-time sensor data like that used by United States Air Force digital signal processing efforts, or real-time aerosol hazard detection as examined by the Department of Homeland Security. Different choices for the design of a load balancing algorithm are examined in both the problem and algorithm domains. Evolutionary algorithms are used to find near-optimal solutions. These algorithms incorporate multiobjective, coevolutionary, and parallel principles to create an effective and efficient algorithm for real-world allocation problems. Three evolutionary algorithms (EAs) are developed. The primary algorithm generates a solution to the processor allocation problem. This allocation EA is capable of evaluating objectives in both an aggregate single-objective and a Pareto multiobjective manner. The other two EAs are designed for fine-tuning solutions returned by the allocation EA. One coevolutionary algorithm is used to optimize the parameters of the allocation algorithm. This meta-EA is parallelized using a coarse-grain approach to improve performance, and experiments are conducted that validate the improved effectiveness of the parallelized algorithm. A Pareto multiobjective approach is used to optimize both effectiveness and efficiency objectives. The other coevolutionary algorithm generates difficult allocation problems for testing the capabilities of the allocation EA. The effectiveness of both coevolutionary algorithms for optimizing the allocation EA is examined quantitatively using standard statistical methods, and the allocation EA's objective tradeoffs are analyzed and compared.
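    The abstract does not say how the Pareto evaluation is implemented; below is a minimal sketch of the standard dominance test and non-dominated filtering it implies, assuming minimization of objective vectors (e.g. makespan and load imbalance; all names are illustrative).

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Filter a population of (objectives, allocation) pairs down to the
    non-dominated set, i.e. the allocations representing the best
    available tradeoffs among the objectives."""
    return [(obj, alloc) for obj, alloc in solutions
            if not any(dominates(other, obj) for other, _ in solutions)]
```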

    A public transport bus assignment problem: parallel metaheuristics assessment

    Get PDF
    Combinatorial optimization problems occur in a wide variety of contexts and are generally NP-hard. At a corporate level, solving these problems is of great importance, since they contribute to the optimization of operational costs. In this thesis we propose to solve the Public Transport Bus Assignment problem considering a heterogeneous fleet and line exchanges, a variant of the Multi-Depot Vehicle Scheduling Problem in which additional constraints are enforced to model a real-life scenario. The number of constraints involved and the large number of variables make solving to optimality with complete search techniques impracticable. Therefore, we explore metaheuristics, which sacrifice optimality to produce solutions in feasible time. More concretely, we focus on the development of algorithms based on a sophisticated metaheuristic, Ant Colony Optimization (ACO), which is based on a stochastic learning mechanism. For complex problems with a considerable number of constraints, sophisticated metaheuristics may fail to produce quality solutions in a reasonable amount of time. Thus, we developed parallel shared-memory (SM) synchronous ACO algorithms; however, synchronism gives rise to the straggler problem. We therefore propose three SM asynchronous algorithms that break the original algorithm's semantics and differ in the degree of concurrency allowed while manipulating the learned information. Our results show that our sequential ACO algorithms produced better solutions than a Restarts metaheuristic, that the ACO algorithms were able to learn, and that better solutions were achieved by increasing the amount of cooperation (the number of search agents). Regarding parallel algorithms, our asynchronous ACO algorithms outperformed synchronous ones in terms of speedup and solution quality, achieving speedups of 17.6x. The cooperation scheme imposed by asynchronism also achieved a better learning rate than the original one.
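    The thesis's exact ACO formulation is not given in the abstract; the following is a generic single-colony iteration for an assignment-style problem, showing the stochastic learning (pheromone) mechanism ACO rests on. All names and parameters are illustrative, and costs are assumed positive.

```python
import random

def aco_step(pheromone, costs, n_ants=10, alpha=1.0, beta=2.0,
             evaporation=0.1, Q=1.0):
    """One generic ACO iteration (minimization).

    pheromone[i][j]: learned desirability of assigning task i to resource j.
    costs[i][j]: problem-specific (positive) cost of that assignment.
    """
    n_tasks, n_res = len(costs), len(costs[0])
    best = None
    for _ in range(n_ants):
        assignment, total = [], 0.0
        for i in range(n_tasks):
            # Choose j with probability ~ pheromone^alpha * (1/cost)^beta.
            weights = [pheromone[i][j] ** alpha * (1.0 / costs[i][j]) ** beta
                       for j in range(n_res)]
            j = random.choices(range(n_res), weights=weights)[0]
            assignment.append(j)
            total += costs[i][j]
        if best is None or total < best[1]:
            best = (assignment, total)
    # Evaporate old pheromone, then reinforce the best ant's choices.
    for i in range(n_tasks):
        for j in range(n_res):
            pheromone[i][j] *= (1.0 - evaporation)
    for i, j in enumerate(best[0]):
        pheromone[i][j] += Q / best[1]
    return best
```

    In a synchronous shared-memory parallelization, all ants would finish before the pheromone update; the asynchronous variants described above relax exactly that barrier.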

    High-performance evolutionary computation for scalable spatial optimization

    Get PDF
    Spatial optimization (SO) is an important and prolific field of interdisciplinary research. Spatial optimization methods seek an optimal allocation or arrangement of spatial units under spatial constraints such as distance, adjacency, contiguity, and partition. As spatial granularity becomes finer and problem formulations incorporate increasingly complex compositions of spatial information, the performance of spatial optimization solvers becomes more imperative. My research focuses on scalable spatial optimization methods within the evolutionary algorithm (EA) framework. The computational scalability challenge in EA is addressed by developing a parallel EA library that eliminates costly global synchronization in massively parallel computing environments and scales to 131,072 processors. Classic EA operators are based on linear recombination and experience serious problems in traversing decision spaces with non-linear spatial configurations. I propose a spatially explicit EA framework that couples graph representations of spatial constraints with intelligent guided-search heuristics such as path relinking and ejection chains to effectively explore the SO decision space. As a result, novel spatial recombination operators are developed that handle strong spatial constraints effectively and are generic enough to incorporate problem-specific spatial characteristics. This framework is employed to solve large political redistricting problems. Voting-district-level redistricting problems are solved and sampled to create billions of feasible districting plans that adhere to Supreme Court mandates, suitable for statistical analyses of redistricting phenomena such as gerrymandering.
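    The operators themselves are not spelled out in the abstract; as a small illustration of what a graph representation of a spatial constraint looks like, the sketch below checks the contiguity constraint that any feasible districting plan, and hence any spatial recombination operator, must preserve (the data layout is assumed, not taken from the thesis).

```python
from collections import deque

def is_contiguous(district_units, adjacency):
    """Check that a district's spatial units form a single connected
    component of the unit adjacency graph (breadth-first search).

    district_units: set of unit ids assigned to the district.
    adjacency: dict mapping each unit id to its neighboring unit ids.
    """
    if not district_units:
        return False
    start = next(iter(district_units))
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v in district_units and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen == district_units
```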

    College of Engineering research activities and annual report, July 1, 1980 - June 30, 1981

    Get PDF
    Annual report of research activities and publications of the College of Engineering, July 1, 1980 - June 30, 1981

    Hardware Accelerators for Animated Ray Tracing

    Get PDF
    Future graphics processors are likely to incorporate hardware accelerators for real-time ray tracing, in order to render increasingly complex lighting effects in interactive applications. However, ray tracing poses difficulties when drawing scenes with dynamic content, such as animated characters and objects. In dynamic scenes, the spatial data structures used to accelerate ray tracing are invalidated on each animation frame and need to be rapidly updated. Tree update is a complex subtask in its own right, and becomes highly expensive in complex scenes. Both ray tracing and tree update are highly memory-intensive tasks, and rendering systems are increasingly bandwidth-limited, so research on accelerator hardware has focused on architectural techniques to optimize away off-chip memory traffic. Dynamic scene support is further complicated by the recent introduction of compressed trees, which use low-precision numbers for storage and computation. Such compression reduces both the arithmetic and memory bandwidth cost of ray tracing, but adds to the complexity of tree update. This thesis proposes methods to cope with dynamic scenes in hardware-accelerated ray tracing, with a focus on reducing traffic to external memory. Firstly, a hardware architecture is designed for linear bounding volume hierarchy construction, an algorithm which is a basic building block in most state-of-the-art software tree builders. The algorithm is rearranged into a streaming form which reduces traffic to one-third of that generated by software implementations of the same algorithm. Secondly, an algorithm is proposed for compressing bounding volume hierarchies in a streaming manner as they are output from a hardware builder, instead of performing compression as a postprocessing pass. As a result, with the proposed method, compression reduces the overall cost of tree update rather than increasing it. The last main contribution of this thesis is an evaluation of shallow bounding volume hierarchies, common in software ray tracing, for use in hardware pipelines. These are found to be more energy-efficient than binary hierarchies. The results in this thesis both confirm that dynamic scene support may become a bottleneck in real-time ray tracing, and add to the state of the art on tree update in terms of energy efficiency, as well as the complexity of scenes that can be handled in real time on resource-constrained platforms.
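    Linear BVH construction is built on sorting primitives by Morton code; the widely used 30-bit 3D Morton encoding looks like the following. This is a reference sketch of the standard encoding, not the thesis's hardware design.

```python
def expand_bits(v):
    """Spread the bits of a 10-bit integer so they occupy every third bit,
    the standard step in building a 30-bit 3D Morton code."""
    v = (v * 0x00010001) & 0xFF0000FF
    v = (v * 0x00000101) & 0x0F00F00F
    v = (v * 0x00000011) & 0xC30C30C3
    v = (v * 0x00000005) & 0x49249249
    return v

def morton3d(x, y, z):
    """30-bit Morton code for a point with coordinates in [0, 1).

    Sorting primitive centroids by this code yields the spatially coherent
    ordering from which a linear BVH builder derives the tree hierarchy.
    """
    xi = min(max(int(x * 1024.0), 0), 1023)
    yi = min(max(int(y * 1024.0), 0), 1023)
    zi = min(max(int(z * 1024.0), 0), 1023)
    return (expand_bits(xi) << 2) | (expand_bits(yi) << 1) | expand_bits(zi)
```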