2,472 research outputs found
Virtual Environment for Next Generation Sequencing Analysis
Next Generation Sequencing technology, on the one hand, allows more accurate analysis and, on the other hand, increases the amount of data to process. A new protocol for sequencing the messenger RNA in a cell, known as RNA-Seq, generates millions of short sequence fragments in a single run. These fragments, or reads, can be used to measure levels of gene expression and to identify novel splice variants of genes. The proposed solution is a distributed architecture consisting of a Grid Environment and a Virtual Grid Environment, designed to reduce processing time by making the system scalable and flexible.
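The abstract notes that reads can be used to measure levels of gene expression. A minimal sketch of that idea, assuming reads have already been aligned to genes (the gene names and read identifiers below are purely illustrative), is to count how many reads map to each gene:

```python
from collections import Counter

# Hypothetical input: each read has already been aligned to a gene
# by an upstream aligner; identifiers here are made up.
aligned_reads = [
    ("read_001", "GENE_A"),
    ("read_002", "GENE_A"),
    ("read_003", "GENE_B"),
    ("read_004", "GENE_A"),
]

# A raw expression level is simply the number of reads mapped to each gene.
counts = Counter(gene for _, gene in aligned_reads)
print(counts["GENE_A"])  # 3
print(counts["GENE_B"])  # 1
```

Real pipelines normalize these raw counts for gene length and sequencing depth, but the per-gene tally is the starting point.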
Enhancing Job Scheduling of an Atmospheric Intensive Data Application
Nowadays, e-Science applications involve a great deal of data in order to achieve more accurate analysis. One of these application domains is Radio Occultation, which manages satellite data. Grid Processing Management is a geographically distributed physical infrastructure, based on Grid Computing, implemented for the overall Radio Occultation processing analysis. After a brief description of the algorithms adopted to characterize atmospheric profiles, the paper presents an improvement of job scheduling intended to decrease processing time and optimize resource utilization. Grid computing capacity is extended by adding virtual machines to the existing physical Grid in order to satisfy temporary job requests. Scheduling also plays an important role in the infrastructure and is handled by a pair of schedulers developed to manage data automatically.
Reverse Engineering of TopHat: Splice Junction Mapper for Improving Computational Aspect
TopHat is a fast splice junction mapper for Next Generation Sequencing analysis, a technology for functional genomic research. Next Generation Sequencing technology allows more accurate analysis while increasing the amount of data to elaborate, which opens new challenges in the development of tools and computational infrastructures. We present a solution that covers both software and hardware aspects: the first, after a reverse engineering phase, improves the TopHat algorithm by making it parallelizable; the second is the implementation of a hybrid infrastructure combining grid and virtual grid computing. Moreover, the system provides a multi-sample environment and is able to process data automatically, fully transparently to the user.
Neural-powered unit disk graph embedding: qubits connectivity for some QUBO problems
Graph embedding is a recurrent problem in quantum computing; for instance, quantum annealers need to solve a minor graph embedding in order to map a given Quadratic Unconstrained Binary Optimization (QUBO) problem onto their internal connectivity pattern. This work presents a novel approach to constrained unit disk graph embedding, which is encountered when trying to solve combinatorial optimization problems in QUBO form using quantum hardware based on neutral Rydberg atoms. The qubits, physically represented by the atoms, are excited to the Rydberg state through laser pulses. Whenever pairs of qubits are closer together than the blockade radius, entanglement can be reached, preventing the entangled qubits from being simultaneously in the excited state. Hence, the blockade radius determines the adjacency pattern among qubits, corresponding to a unit disk configuration. Although it is straightforward to compute the adjacency pattern given the qubits' coordinates, identifying a feasible unit disk arrangement that matches the desired QUBO matrix is, on the other hand, a much harder task. In the context of quantum optimization, this issue translates into the physical placement of the qubits in the 2D/3D register so as to match the machine's Ising-like Hamiltonian with the QUBO formulation of the optimization problem. The proposed solution exploits the power of neural networks to transform an initial embedding configuration, which does not match the quantum hardware requirements or does not account for the unit disk property, into a feasible embedding properly representing the target optimization problem. Experimental results show that this new approach outperforms the Gurobi solver.
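The easy direction mentioned in the abstract, computing the adjacency pattern from the qubits' coordinates, can be sketched as follows. This is a minimal illustration, not the paper's method: the coordinates and blockade radius below are made-up values, and two qubits are taken to be adjacent exactly when their Euclidean distance is below the blockade radius.

```python
import math

def unit_disk_adjacency(coords, blockade_radius):
    """Adjacency matrix of the unit disk graph induced by qubit positions:
    qubits i and j interact iff dist(coords[i], coords[j]) < blockade_radius."""
    n = len(coords)
    adj = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(coords[i], coords[j]) < blockade_radius:
                adj[i][j] = adj[j][i] = True
    return adj

# Illustrative 2D register (positions and radius are assumptions):
coords = [(0.0, 0.0), (1.0, 0.0), (3.0, 0.0)]
adj = unit_disk_adjacency(coords, blockade_radius=1.5)
print(adj[0][1])  # True  (distance 1.0 < 1.5)
print(adj[0][2])  # False (distance 3.0 > 1.5)
```

The hard inverse problem the paper addresses is finding coordinates whose induced adjacency matrix matches a given QUBO connectivity pattern.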
Exascale Computing Deployment Challenges
As Exascale computing proliferates, we see an accelerating shift towards clusters with thousands of nodes and thousands of cores per node, often on the back of commodity graphics processing units. This paper argues that this drives a once-in-a-generation shift in computation, and that the fundamentals of computer science therefore need to be re-examined. Exploiting the full power of Exascale computation will require attention to the fundamentals of programme design and specification, programming language design, systems and software engineering, analytic, performance and cost models, and fundamental algorithmic design, and an increasing replacement of human bandwidth by computational analysis. As part of this, we argue that Exascale computing will require a significant degree of co-design and close attention to the economics underlying the challenges ahead.