Accelerated Cardiac Diffusion Tensor Imaging Using Joint Low-Rank and Sparsity Constraints
Objective: The purpose of this manuscript is to accelerate cardiac diffusion
tensor imaging (CDTI) by integrating low-rankness and compressed sensing.
Methods: Diffusion-weighted images exhibit both transform sparsity and
low-rankness. These properties can jointly be exploited to accelerate CDTI,
especially when a phase map is applied to correct for the phase inconsistency
across diffusion directions, thereby enhancing low-rankness. The proposed
method is evaluated both ex vivo and in vivo, and is compared to methods using
either a low-rank or sparsity constraint alone. Results: Compared to using a
low-rank or sparsity constraint alone, the proposed method preserves more
accurate helix angle features, the transmural continuum across the myocardium
wall, and mean diffusivity at higher acceleration, while yielding significantly
lower bias and higher intraclass correlation coefficient. Conclusion:
Low-rankness and compressed sensing together facilitate acceleration for both
ex vivo and in vivo CDTI, improving reconstruction accuracy compared to
employing either constraint alone. Significance: Compared to previous methods
for accelerating CDTI, the proposed method has the potential to reach higher
acceleration while preserving myofiber architecture features which may allow
more spatial coverage, higher spatial resolution and shorter temporal footprint
in the future.Comment: 11 pages, 16 figures, published on IEEE Transactions on Biomedical
Engineerin
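The joint constraint described in this abstract can be illustrated with a minimal proximal alternation: singular-value thresholding promotes low-rankness, soft-thresholding promotes sparsity, and a data-consistency step re-imposes the sampled entries. This is a toy sketch on a synthetic real-valued matrix, not the paper's actual CDTI reconstruction (which operates on undersampled k-space with a phase-correction map); all function names are hypothetical.

```python
import numpy as np

def svt(X, tau):
    """Singular-value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Entrywise soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def joint_lr_sparse(Y, mask, lam_lr=0.5, lam_sp=0.05, n_iter=100):
    """Recover a matrix from the entries of Y sampled where mask is True,
    alternating low-rank and sparsity proximal steps with data consistency."""
    X = np.where(mask, Y, 0.0)
    for _ in range(n_iter):
        X = svt(X, lam_lr)        # promote low-rankness
        X = soft(X, lam_sp)       # promote sparsity (identity transform)
        X = np.where(mask, Y, X)  # re-impose the sampled entries
    return X
```

On a matrix that is simultaneously low-rank and sparse, this alternation fills in missing entries noticeably better than zero-filling, which is the intuition behind combining the two constraints rather than using either alone.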
Flux cost functions and the choice of metabolic fluxes
Metabolic fluxes in cells are governed by physical, biochemical,
physiological, and economic principles. Cells may show "economical" behaviour,
trading metabolic performance against the costly side-effects of high enzyme or
metabolite concentrations. Some constraint-based flux prediction methods score
fluxes by heuristic flux costs as proxies of enzyme investments. However,
linear cost functions ignore enzyme kinetics and the tight coupling between
fluxes, metabolite levels and enzyme levels. To derive more realistic cost
functions, I define an apparent "enzymatic flux cost" as the minimal enzyme
cost at which the fluxes can be realised in a given kinetic model, and a
"kinetic flux cost", which includes metabolite cost. I discuss the mathematical
properties of such flux cost functions, their usage for flux prediction, and
their importance for cells' metabolic strategies. The enzymatic flux cost
scales linearly with the fluxes and is a concave function on the flux polytope.
The costs of two flows are usually not additive, due to an additional
"compromise cost". Between flux polytopes, where fluxes change their
directions, the enzymatic cost shows a jump. With strictly concave flux cost
functions, cells can reduce their enzymatic cost by running different fluxes in
different cell compartments or at different moments in time. The enzymatic
flux cost can be translated into an approximated cell growth rate, a convex
function on the flux polytope. Growth-maximising metabolic states can be
predicted by Flux Cost Minimisation (FCM), a variant of FBA based on general
flux cost functions. The solutions are flux distributions in corners of the
flux polytope, i.e. typically elementary flux modes. Enzymatic flux costs can
be linearly or nonlinearly approximated, providing model parameters for linear
FBA based on kinetic parameters and extracellular concentrations, and justified
by a kinetic model.
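When the flux cost is linearly approximated with fixed flux directions, Flux Cost Minimisation reduces to a linear program: minimise the cost subject to stationarity (N v = 0) and a fixed biomass flux, yielding a vertex of the flux polytope. A toy sketch under those assumptions (the network, costs, and function name are hypothetical, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical network: R1: -> A, R2: A -> B (cheap enzyme),
# R3: A -> B (expensive isoenzyme), R4: B -> biomass.
N = np.array([[1.0, -1.0, -1.0,  0.0],   # metabolite A balance
              [0.0,  1.0,  1.0, -1.0]])  # metabolite B balance
cost = np.array([1.0, 2.0, 5.0, 1.0])    # linearised per-flux costs

def flux_cost_minimisation(N, cost, biomass_idx, biomass_flux=1.0):
    """Linearised FCM: minimise total flux cost subject to stationarity
    N v = 0 and a fixed biomass flux, with irreversible reactions v >= 0."""
    n = N.shape[1]
    fix = np.zeros(n)
    fix[biomass_idx] = 1.0               # equality row pinning biomass flux
    A_eq = np.vstack([N, fix])
    b_eq = np.append(np.zeros(N.shape[0]), biomass_flux)
    return linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)

res = flux_cost_minimisation(N, cost, biomass_idx=3)
```

The optimum routes all flux through the cheap enzyme (v = [1, 1, 0, 1]), i.e. a single elementary flux mode, illustrating the abstract's claim that solutions sit in corners of the flux polytope.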
Adapting the interior point method for the solution of LPs on serial, coarse grain parallel and massively parallel computers
In this paper we describe a unified scheme for implementing an interior point method (IPM) over a range of computer architectures. In the inner iteration of the IPM a search direction is computed using Newton's method. Computationally this involves solving a sparse symmetric positive definite (SSPD) system of equations. The choice of direct and indirect methods for the solution of this system, and the design of data structures to take advantage of serial, coarse grain parallel and massively parallel computer architectures, are considered in detail. We put forward arguments as to why integration of the system within a sparse simplex solver is important and outline how the system is designed to achieve this integration.
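The SSPD system in the IPM inner iteration typically takes the normal-equations form (A D A^T) dy = rhs with a positive diagonal D, which a direct method solves by Cholesky factorisation. A minimal dense sketch of that step (the paper's contribution is precisely the sparse and parallel data structures, which this deliberately omits; the function name is hypothetical):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def newton_direction(A, x, s, rhs):
    """Solve the normal-equations system (A D A^T) dy = rhs with
    D = diag(x / s), the SSPD system arising in an IPM inner iteration,
    by a direct method (Cholesky factorisation)."""
    D = x / s                   # positive diagonal scaling from the iterate
    M = (A * D) @ A.T           # A @ diag(D) @ A.T without forming diag(D)
    c, low = cho_factor(M)      # Cholesky factorisation of the SSPD matrix
    return cho_solve((c, low), rhs)
```

With full row rank A and strictly positive x and s, M is symmetric positive definite, so the factorisation always succeeds; an indirect alternative would replace the last two lines with a preconditioned conjugate-gradient solve.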