
    A data-flow modification of the MUSCLE algorithm for multiprocessors and a web interface for it

    Research on nucleotide and amino acid sequences is of current importance for molecular biology and bioengineering. An important aspect of the analysis of such sequences is multiple alignment. This article describes implementations of the MUSCLE and ClustalW programs on multiprocessors and a web interface to them. The modification of the MUSCLE algorithm realizes sequence alignment in a data-flow manner. It uses the PARUS system to build a data-flow graph and execute it on a multiprocessor. The data-flow algorithm has been tested on sequences of human Long Terminal Repeats class five (LTR5) and several other examples.
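    The data-flow decomposition of progressive alignment can be illustrated with a small sketch: sibling subtrees of the guide tree have no data dependencies, so their alignments can run concurrently, and each internal node fires once its children complete. A minimal Python illustration follows (the guide tree, task names, and thread-pool scheduling are illustrative assumptions, not the PARUS implementation):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical guide tree: leaves are sequences, internal nodes are
# pairwise merge (profile-profile alignment) tasks.
tree = ("merge", ("merge", "seq1", "seq2"), ("merge", "seq3", "seq4"))

def align(node, pool):
    """Walk the guide tree; sibling subtrees are independent data-flow
    tasks, so they are submitted to the pool concurrently."""
    if isinstance(node, str):           # leaf: a single sequence
        return [node]
    _, left, right = node
    lf = pool.submit(align, left, pool)
    rf = pool.submit(align, right, pool)
    # A real implementation would run profile-profile alignment here;
    # concatenating the groups is enough to show the dependency structure.
    return lf.result() + rf.result()

with ThreadPoolExecutor(max_workers=4) as pool:
    profile = align(tree, pool)
print(profile)                          # ['seq1', 'seq2', 'seq3', 'seq4']
```

    The point is that the task graph, not the loop order, drives execution: any two nodes without a path between them may run on different processors.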

    RIACS

    The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities that serves as a bridge between NASA and the academic community. Under a five-year cooperative agreement with NASA, research at RIACS is focused on areas that are strategically enabling to the Ames Research Center's role as NASA's Center of Excellence for Information Technology. The primary mission of RIACS is to carry out research and development in computer science. This work is devoted mainly to tasks that are strategically enabling with respect to NASA's bold mission in space exploration and aeronautics. There are three foci for this work: (1) Automated Reasoning, (2) Human-Centered Computing, and (3) High Performance Computing and Networking. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. Through its visiting scientist program, RIACS facilitates the participation of university-based researchers, including both faculty and students, in the research activities of NASA and RIACS. RIACS researchers work in close collaboration with NASA computer scientists on projects such as the Remote Agent Experiment on the Deep Space One mission and Super-Resolution Surface Modeling.

    Tools for interfacing, extracting, and analyzing neural signals using wide-field fluorescence imaging and optogenetics in awake behaving mice

    Imaging of multiple cells has rapidly multiplied both the rate of data acquisition and our knowledge of the complex dynamics within the mammalian brain. Data acquisition has been dramatically enhanced by affordable, sensitive image sensors that enable high-throughput detection of neural activity in intact animals. Genetically encoded calcium sensors deliver a substantial boost in signal strength, and in combination with equally critical advances in the size, speed, and sensitivity of the image sensors available in scientific cameras, they enable high-throughput detection of neural activity in behaving animals using traditional wide-field fluorescence microscopy. However, the tremendous increase in data flow presents challenges for processing, analysis, and storage of captured video, and prompts a reexamination of the traditional routines used to process data in neuroscience, demanding improvements in both hardware and software. This project demonstrates the ease with which a dependable and affordable wide-field fluorescence imaging system can be assembled and integrated with behavior control and monitoring systems such as those found in a typical neuroscience laboratory. An open-source MATLAB toolbox is employed to efficiently analyze and visualize large imaging data sets in a manner that is both interactive and fully automated. This software package provides a library of image pre-processing routines optimized for batch processing of continuous functional fluorescence video, and additionally automates a fast unsupervised ROI detection and signal extraction routine. Further, an extension of this toolbox that uses GPU programming to process streaming video is described, enabling on-line identification, segmentation, and extraction of neural activity signals, with specific algorithms that improve signal specificity and image quality at the single-cell level in a behaving animal.
    This project describes the strategic ingredients for transforming a large bulk flow of raw continuous video into proportionally informative images and knowledge.
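    The batch pre-processing step described above typically normalises each pixel's fluorescence to a baseline (ΔF/F) before ROI signals are extracted. A minimal NumPy sketch, with synthetic data and a hypothetical ROI mask standing in for the toolbox's routines:

```python
import numpy as np

# Hypothetical frame stack: (time, height, width) wide-field fluorescence
# video; a real recording would be loaded from disk in batches.
rng = np.random.default_rng(0)
video = rng.poisson(100, size=(200, 64, 64)).astype(np.float64)

# Baseline F0 per pixel (a low percentile is robust to activity transients).
f0 = np.percentile(video, 20, axis=0)

# dF/F: the standard normalisation for functional fluorescence traces.
dff = (video - f0) / f0

# Extract a mean trace from a hypothetical ROI mask.
roi = np.zeros((64, 64), dtype=bool)
roi[20:30, 20:30] = True
trace = dff[:, roi].mean(axis=1)        # one value per frame
print(trace.shape)                      # (200,)
```

    Batch processing amounts to applying the same normalisation to successive chunks of frames, which is what makes the pipeline amenable to streaming GPU implementations.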

    GPU Computing for Cognitive Robotics

    This thesis presents the first investigation of the impact of GPU computing on cognitive robotics, providing a series of novel experiments in the area of action and language acquisition in humanoid robots and computer vision. Cognitive robotics is concerned with endowing robots with high-level cognitive capabilities to enable the achievement of complex goals in complex environments. Reaching the ultimate goal of developing cognitive robots will require tremendous amounts of computational power, which was until recently provided mostly by standard CPU processors. CPU cores are optimised for serial code execution at the expense of parallel execution, which renders them relatively inefficient for high-performance computing applications. The ever-increasing market demand for high-performance, real-time 3D graphics has evolved the GPU into a highly parallel, multithreaded, many-core processor with extraordinary computational power and very high memory bandwidth. These vast computational resources of modern GPUs can now be exploited by most cognitive robotics models, as such models tend to be inherently parallel. Various interesting and insightful cognitive models have been developed and have addressed important scientific questions concerning action-language acquisition and computer vision. While they have provided us with important scientific insights, their complexity and application have not improved much in recent years. The experimental tasks, as well as the scale of these models, are often minimised to avoid excessive training times, which grow exponentially with the number of neurons and the amount of training data. This impedes further progress and the development of complex neurocontrollers that could take cognitive robotics research a step closer to the ultimate goal of creating intelligent machines.
    This thesis presents several cases where the application of GPU computing to cognitive robotics algorithms resulted in the development of large-scale neurocontrollers of previously unseen complexity, enabling the novel experiments described herein. European Commission Seventh Framework Programme

    Transport in complex systems : a lattice Boltzmann approach

    The aim of this work is to investigate the possibility of efficiently modelling transport processes in complex fluid-dynamics systems using the lattice Boltzmann method (LBM). The complexity of a system is treated from multiple angles, and the specific systems analysed cover a wide range of physical problems, including multiphase flows, hemodynamics, and turbulence. In all cases, particular attention is paid to numerical aspects: the accuracy of the models used, as well as the speed with which they yield a satisfactory solution. As part of this work, the Sailfish software package was developed, an open-source implementation of the lattice Boltzmann method for graphics processing units (GPUs). After an analysis of its performance, its validation, and a discussion of its design assumptions, the package was used to simulate three types of flows. The first were Bretherton/Taylor flows in two- and three-dimensional geometries, simulated with a free-energy model. Analysis of the results showed good agreement with data available in the literature, both experimental and obtained with other numerical methods. The second problem studied was blood flow in realistic geometries of the arteries supplying blood to the human brain. The simulation results were carefully compared with a finite-volume solution obtained with the OpenFOAM package, accelerated by a commercial library enabling computations on GPUs. Good agreement between the methods was obtained, and the lattice Boltzmann method was shown to run simulations up to about 20 times faster. The third problem analysed was turbulent flow in simple geometries. After validating all implemented relaxation models on the Kida vortex case, flows in an empty channel and in the presence of obstacles were studied.
    Both lattices fully resolving the flow down to the Kolmogorov scales and lower-resolution lattices were used for the simulations. In this context too, good agreement was shown between the lattice Boltzmann results and the results of other simulations as well as experimental studies. It was also shown that the LBM implementation in the Sailfish package provides greater computational stability than implementations described in the literature for the same flows and relaxation models.
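    The lattice Boltzmann method at the core of Sailfish alternates a purely local collision step with a nearest-neighbour streaming step, which is what makes it so amenable to GPUs. A minimal single-relaxation-time (BGK) D2Q9 sketch in NumPy (the relaxation time and domain size are arbitrary assumptions; this is an illustration, not the Sailfish code):

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities and their weights.
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.6                               # BGK relaxation time (assumed)

def equilibrium(rho, u):
    """Second-order Maxwell-Boltzmann equilibrium distribution."""
    cu = np.einsum('qd,dxy->qxy', c, u)
    usq = (u**2).sum(axis=0)
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f):
    """One LBM update: local BGK collision, then streaming."""
    rho = f.sum(axis=0)
    u = np.einsum('qd,qxy->dxy', c, f) / rho
    f = f + (equilibrium(rho, u) - f) / tau          # collision
    for q in range(9):                               # periodic streaming
        f[q] = np.roll(f[q], shift=c[q], axis=(0, 1))
    return f

nx = ny = 32
f = equilibrium(np.ones((nx, ny)), np.zeros((2, nx, ny)))
for _ in range(10):
    f = step(f)
print(np.isclose(f.sum(), nx * ny))     # mass is conserved -> True
```

    Because the collision is local and the streaming touches only fixed neighbours, each lattice site can be updated by an independent GPU thread, which is the source of the speedups reported above.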

    Research and Technology Highlights 1995

    The mission of the NASA Langley Research Center is to increase the knowledge and capability of the United States in a full range of aeronautics disciplines and in selected space disciplines. This mission is accomplished by performing innovative research relevant to national needs and Agency goals, transferring technology to users in a timely manner, and providing development support to other United States Government agencies, industry, other NASA Centers, the educational community, and the local community. This report contains highlights of the major accomplishments and applications made by Langley researchers and by our university and industry colleagues during the past year. The highlights illustrate both the broad range of research and technology (R&T) activities carried out by NASA Langley Research Center and the contributions of this work toward maintaining United States leadership in aeronautics and space research. An electronic version of the report is available at http://techreports.larc.nasa.gov/RandT95. This color version allows viewing, retrieving, and printing of the highlights, searching and browsing through the sections, and access to an on-line directory of Langley researchers.

    FPGA acceleration of sequence analysis tools in bioinformatics

    Thesis (Ph.D.)--Boston University. With advances in biotechnology and computing power, biological data are being produced at an exceptional rate. The purpose of this study is to analyze the application of FPGAs to accelerate high-impact production biosequence analysis tools. Compared with other alternatives, FPGAs offer huge compute power, lower power consumption, and reasonable flexibility. BLAST has become the de facto standard in bioinformatic approximate string matching, so its acceleration is of fundamental importance. It is a complex, highly optimized system, consisting of tens of thousands of lines of code and a large number of heuristics. Our idea is to emulate the main phases of its algorithm on the FPGA. Using our FPGA engine, we quickly reduce the size of the database to a small fraction, and then use the original code to process the query. Using a standard FPGA-based system, we achieved a 12x speedup over a highly optimized multithreaded reference code. Multiple Sequence Alignment (MSA)--the extension of pairwise sequence alignment to multiple sequences--is critical to solving many biological problems. Previous attempts to accelerate Clustal-W, the most commonly used MSA code, have directly mapped a portion of the code to the FPGA. We use a new approach: we apply prefiltering of the kind commonly used in BLAST to perform the initial all-pairs alignments. This results in a speedup of 80x to 190x over the CPU code (8 cores). The quality is comparable to the original according to a commonly used benchmark suite evaluated with respect to multiple distance metrics. The challenge in FPGA-based acceleration is finding a suitable application mapping. Unfortunately, many software heuristics do not map well to hardware, so other methods must be applied. One is restructuring: an entirely new algorithm is applied. Another is to analyze application utilization and develop accuracy/performance tradeoffs.
    Using our prefiltering approach and novel FPGA programming models, we have achieved significant speedup over the reference programs. We have applied approximation, seeding, and filtering to this end. The bulk of this study introduces the pros and cons of these acceleration models for biosequence analysis tools.
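    The seeding-and-filtering idea is simple to state in software: discard database sequences that share no exact k-mer seed with the query, and hand only the survivors to the full aligner. A toy Python sketch (the k value and sequences are illustrative; the thesis implements this stage in FPGA hardware):

```python
# BLAST-style k-mer prefilter: a software sketch of the seeding idea.
def kmers(seq, k=3):
    """Set of all length-k substrings of seq."""
    return {seq[i:i+k] for i in range(len(seq) - k + 1)}

def prefilter(query, database, k=3):
    """Keep only database sequences sharing at least one exact k-mer
    seed with the query; the survivors go to the full aligner."""
    seeds = kmers(query, k)
    return [s for s in database if kmers(s, k) & seeds]

db = ["ACGTACGT", "TTTTTTTT", "GGGACGTA", "CCCCCCCC"]
hits = prefilter("ACGTA", db)
print(hits)                             # ['ACGTACGT', 'GGGACGTA']
```

    Because most of the database fails the cheap seed test, the expensive alignment runs on only a small fraction of the input, which is exactly the effect the hardware prefilter exploits.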

    Cellular Automata

    Modelling and simulation are disciplines of major importance for science and engineering. There is no science without models, and simulation has become a very useful, sometimes indispensable, tool for the development of both science and engineering. The main attractive feature of cellular automata is that, despite a conceptual simplicity that allows both easy implementation in computer simulations and, in principle, detailed and complete mathematical analysis, they are able to exhibit a wide variety of amazingly complex behaviour. This feature has attracted the attention of researchers from a wide variety of fields within the exact sciences and engineering, but also from the social sciences, and sometimes beyond. The collective complex behaviour of numerous systems, which emerges from the interaction of a multitude of simple individuals, is being conveniently modelled and simulated with cellular automata for very different purposes. In this book, a number of innovative applications of cellular automata models in the fields of Quantum Computing, Materials Science, Cryptography and Coding, and Robotics and Image Processing are presented.
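    The conceptual simplicity described above is easy to make concrete: an elementary cellular automaton updates each cell from a three-cell neighbourhood via a fixed rule table, yet rules such as Rule 110 are known to be computationally universal. A short Python sketch:

```python
# Elementary cellular automaton: each cell's next state depends only on
# itself and its two neighbours, encoded as one bit of the rule number.
def step(cells, rule=110):
    n = len(cells)
    out = []
    for i in range(n):
        # 3-bit neighbourhood index, with periodic boundary conditions.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

row = [0] * 31
row[15] = 1                             # single seed cell
for _ in range(5):
    print(''.join('#' if c else '.' for c in row))
    row = step(row)
```

    Changing the single `rule` integer switches among all 256 elementary automata, from trivial fixed points to chaotic and universal behaviour, which is the simplicity-versus-complexity contrast the book builds on.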