
    A distributed environment for data storage and processing in support of bioinformatics analysis.

    In this work, we investigate the use of a distributed file system (DFS) for data storage, combined with a distributed resource manager (DRM) to control the execution of parallel tasks.
    X-meeting 2015
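The pattern above — input shards stored on a shared filesystem, one independent task per shard handed to the resource manager — can be sketched as follows. This is a minimal illustration, not the authors' system: the shard names are hypothetical and Python's `multiprocessing` pool stands in for the DRM scheduler.

```python
from multiprocessing import Pool

# Hypothetical shard files as they would appear on the shared DFS.
SHARDS = ["reads_part00.fq", "reads_part01.fq", "reads_part02.fq"]

def process_shard(path):
    # In the real setup each call would be one DRM job (one queue
    # submission per shard) reading its input from the DFS.
    return (path, f"analyzed:{path}")

def run_all(shards):
    # The pool plays the role of the DRM: it schedules one
    # independent task per shard and collects the results.
    with Pool(processes=2) as pool:
        return dict(pool.map(process_shard, shards))

if __name__ == "__main__":
    results = run_all(SHARDS)
    print(len(results))
```

Because every shard is processed independently, the same structure scales from a local pool to a cluster queue without changing the per-task code.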

    Virtualization with Xen: installing and configuring the environment.

    This document describes the installation of virtual server environments using the Xen hypervisor, with a Linux Ubuntu (Lucid release) system as Dom0. It covers the installation of Xen 3.3 and 4.0, as well as the installation of HVM (fully virtualized) and PV (paravirtualized) guests. Tests comparing the performance of systems running on physical machines and on virtual machines are also presented.
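Classic Xen guest definitions of the era covered above are plain files of Python-syntax assignments read by the `xm` toolstack. A minimal PV guest sketch follows; all paths, device names, and the config filename are hypothetical examples, not values from the document.

```python
# Hypothetical /etc/xen/guest1.cfg for a paravirtualized (PV) guest.
# xm evaluates this file as a sequence of Python assignments.
name    = "guest1"                        # domain name shown by `xm list`
kernel  = "/boot/vmlinuz-xen"             # PV kernel supplied by Dom0
ramdisk = "/boot/initrd-xen"              # matching initial ramdisk
memory  = 512                             # guest RAM in MiB
vcpus   = 1
vif     = ["bridge=xenbr0"]               # one NIC attached to the Dom0 bridge
disk    = ["phy:/dev/vg0/guest1,xvda,w"]  # LVM volume exposed as writable xvda
root    = "/dev/xvda ro"
```

An HVM guest would instead set `builder = "hvm"` and boot its own kernel from the virtual disk, since fully virtualized guests do not take a kernel from Dom0.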

    Computational investigations in eukaryote genome de novo assembly using short reads.

    New technologies in molecular biology have recently and enormously improved sequencing data production, making it possible to generate billions of short reads, totaling gigabases of data per experiment. Sequencing prices are dropping rapidly, and experiments that were impossible in the past because of cost are now being executed. Computational methodologies that successfully solved the genome assembly problem for data obtained by the shotgun strategy are now inefficient, and efforts are under way to develop new programs. At this point, one established condition for producing quality assemblies is to use paired-end reads to virtually increase read length, but many other points remain controversial. The works described in the literature basically use two strategies: one based on high coverage [1] and the other on incremental assembly, using the mate pairs with shorter inserts first [2]. Independently of the strategy used, the computational resources demanded are very high. Present computational solutions for de novo genome assembly involve the generation of a graph of some kind [3]; because those graphs use whole reads or k-mers as nodes, and the number of reads is very large, the memory capacity of the computational system becomes critical. Works in the literature corroborate this idea, showing that multiprocessor systems with at least 512 GB of main memory were used in de novo projects for eukaryotes [1,2,3].

    As an example and benchmark source, it is possible to use the Panda project, executed by a research consortium in China, which generated the de novo genome of the giant panda (Ailuropoda melanoleuca). The project initially produced 231 Gb of raw data, reduced to 176 Gb after removing low-quality and duplicated reads; only 134 Gb were used in the de novo assembly. Those bases were distributed over approximately 3 billion short reads. The assembly generated 200,604 contigs, and 5,701 multi-contig scaffolds were obtained using 124,336 of them. The N50 was 36,728 bp for contigs and 1.22 Mb for scaffolds.

    The present work investigated the computational demands of de novo assembly of eukaryote genomes by reproducing the results of the Panda project. The strategy used was incremental, as implemented in the SOAPdenovo software, which divides the assembly process into four steps: pregraph, to construct the k-mer graph; contig, to eliminate errors and output contigs; map, to map reads onto the contigs; and scaff, to scaffold the contigs. A NUMA (non-uniform memory access) system with 8 six-core processors with hyper-threading technology and 512 GB of RAM was used, and the consumption of resources such as memory and processor time was recorded for every step of the process. The incremental strategy seems practical and can produce effective results. Work is now in progress investigating a new methodology for grouping short reads using the concept of entropy; assemblies of better quality may be generated, because this methodology uses the more informative reads first.

    References
    [1] Gnerre et al. High-quality draft assemblies of mammalian genomes from massively parallel sequence data. Proceedings of the National Academy of Sciences USA, v. 108, n. 4, p. 1513-1518, 2010.
    [2] Li et al. The sequence and de novo assembly of the giant panda genome. Nature, v. 463, p. 311-317, 2010.
    [3] Schatz et al. Assembly of large genomes using second-generation sequencing. Genome Research, v. 20, p. 1165-1173, 2010.
    X-MEETING 2011
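The entropy-based grouping of reads mentioned in the abstract is not specified in detail. A minimal sketch of one plausible reading — rank reads by the Shannon entropy of their base composition, so the more informative reads are used first — could look like this; the scoring choice is an assumption for illustration, not the authors' method.

```python
from math import log2
from collections import Counter

def shannon_entropy(read):
    """Shannon entropy (bits per base) of a read's base composition.

    Low-complexity reads (e.g. a poly-A run) score near 0;
    reads using all four bases evenly score near 2.
    """
    counts = Counter(read)
    n = len(read)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def rank_by_information(reads):
    """Order reads so the most informative come first."""
    return sorted(reads, key=shannon_entropy, reverse=True)

reads = ["AAAAAAAA", "ACGTACGT", "AACCGGTT", "ATATATAT"]
print(rank_by_information(reads)[0])  # a balanced, high-entropy read comes first
```

Starting assembly from high-entropy reads is one way to seed the k-mer graph with the least ambiguous sequence, deferring repetitive, low-complexity reads to later stages.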

    A proposal for server virtualization using Xen.

    This document describes a virtualization proposal for servers that use redundant physical machines and are therefore quite resilient to failures. It describes the installation of virtual server environments using the Xen 4.1 hypervisor, with a Linux Ubuntu (Natty release) system as Dom0. Regarding DomU, emphasis is given to paravirtualized (PV) virtual machines, which offer higher performance and flexibility than fully virtualized (HVM) machines.

    Cohesion of tableland soils: how cohesive layers can harm citrus farming in Sergipe.


    Operant Discriminative Learning and Evidence of Subtelencephalic Plastic Changes After Long-Term Detelencephalation in Pigeons

    We analyzed operant discrimination in detelencephalated pigeons and the neuroanatomical substrates after long-term detelencephalation. In Experiment I, experimental pigeons with massive telencephalic ablation and control pigeons were conditioned to key-peck for food. Successive discrimination was trained under alternating red (variable-ratio reinforcement) and yellow (extinction) lights on one key of the chamber; these relations were interchanged during reversal discrimination, and sessions were run until steady-state rates were achieved. Experiment II analyzed the morphology of the nucleus rotundus and optic tectum in long-term detelencephalated and control birds, using Klüver-Barrera staining and an image-analyzer system. Detelencephalated birds required more training sessions for response shaping and steady-state behavior (p<0.001), showed higher red-key peck rates during discrimination (p<0.01), and had reversal discrimination indexes around 0.50. Morphometric analysis revealed a decreased number of neurons and increased vascularity, associated with increases in perimeter (p<0.001), in the nucleus rotundus. In the optic tectum, increases in perimeter (p<0.05) associated with disorganization of the layer arrangement were seen. The data indicate that telencephalic systems might have an essential function in reversal operant discrimination learning. The structural characteristics of subtelencephalic systems after long-term detelencephalation evidence plastic changes that might be related to functional mechanisms of learning and neural plasticity in pigeons.