Recursive formulation of the multiconfigurational time-dependent Hartree method for fermions, bosons and mixtures thereof in terms of one-body density operators
The multiconfigurational time-dependent Hartree (MCTDH) method [Chem. Phys.
Lett. {\bf 165}, 73 (1990); J. Chem. Phys. {\bf 97}, 3199 (1992)] is now
entering its third decade of providing numerically exact treatments of a broad
range of correlated, multi-dimensional, non-equilibrium quantum dynamical
systems. In recent years, taking particle statistics explicitly into account
within the MCTDH method for fermions (MCTDHF) and for bosons (MCTDHB) has
opened up further opportunities to treat larger systems of interacting
identical particles, primarily in laser-atom and cold-atom physics. With the
increasing experimental capability to simultaneously trap mixtures of two,
three, and possibly even more kinds of interacting composite identical
particles, we set the stage in the present work and specify the MCTDH method
for such cases. Explicitly, the MCTDH method for systems with three kinds of
identical particles interacting via all combinations of two- and three-body
forces is presented, and the resulting equations-of-motion are briefly
discussed. All four possible mixtures of fermions and bosons are presented in a
unified manner. Particular attention is paid to representing the coefficients'
part of the equations-of-motion in a compact recursive form in terms of
one-body density operators only. The recursion utilizes the recently proposed
Combinadic-based mapping for fermionic and bosonic operators in Fock space
[Phys. Rev. A {\bf 81}, 022124 (2010)], which was successfully applied and implemented
within MCTDHB. Our work sheds new light on the representation of the
coefficients' part in MCTDHF and MCTDHB without resorting to the matrix
elements of the many-body Hamiltonian with respect to the time-dependent
configurations. It suggests a recipe for efficient implementation of the
schemes derived here for mixtures that is suitable for parallelization.
Comment: 43 pages
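The combinadic (combinatorial number system) mapping referenced above assigns each configuration a unique integer address without storing a configuration list. A minimal sketch of the fermionic indexing idea in Python, illustrating only the combinadic principle rather than the actual MCTDHB implementation:

```python
from math import comb

def combinadic_index(occupied):
    """Map a fermionic configuration, given as the indices of its occupied
    orbitals, to its combinadic address: the k-subset
    {c_1 < c_2 < ... < c_k} maps to sum over i of C(c_i, i)."""
    return sum(comb(c, i + 1) for i, c in enumerate(sorted(occupied)))

# The six configurations of 2 fermions in 4 orbitals enumerate the
# addresses 0..5 in colexicographic order.
```

Bosonic occupation-number vectors can be reduced to the same scheme through the standard stars-and-bars bijection between occupation vectors and subsets.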
Comparing variant calling algorithms for target-exon sequencing in a large sample
Abstract
Background
Sequencing studies of exonic regions aim to identify rare variants contributing to complex traits. With high coverage and large sample size, these studies tend to apply simple variant calling algorithms. However, coverage is often heterogeneous; sites with insufficient coverage may benefit from sophisticated calling algorithms used in low-coverage sequencing studies. We evaluate the potential benefits of different calling strategies by performing a comparative analysis of variant calling methods on exonic data from 202 genes sequenced at 24x in 7,842 individuals. We call variants using individual-based, population-based and linkage disequilibrium (LD)-aware methods with stringent quality control. We measure genotype accuracy by the concordance with on-target GWAS genotypes and between 80 pairs of sequencing replicates. We validate selected singleton variants using capillary sequencing.
Results
Using these calling methods, we detected over 27,500 variants at the targeted exons; >57% were singletons. The singletons identified by individual-based analyses were of the highest quality. However, individual-based analyses generated more missing genotypes (4.72%) than population-based (0.47%) and LD-aware (0.17%) analyses. Moreover, individual-based genotypes were the least concordant with array-based genotypes and replicates. Population-based genotypes were less concordant than genotypes from LD-aware analyses with extended haplotypes. We reanalyzed the same dataset with a second set of callers and showed again that the individual-based caller identified more high-quality singletons than the population-based caller. We also replicated this result in a second dataset of 57 genes sequenced at 127.5x in 3,124 individuals.
Conclusions
We recommend population-based analyses for high-quality variant calls with few missing genotypes. With extended haplotypes, LD-aware methods generate the most accurate and complete genotypes. In addition, individual-based analyses should complement the above methods to obtain the most singleton variants.
http://deepblue.lib.umich.edu/bitstream/2027.42/110906/1/12859_2015_Article_489.pd
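The genotype concordance and missing-genotype rates compared above reduce to simple agreement and missingness fractions over sites. A hypothetical sketch (site names and genotype encoding are illustrative, not the study's pipeline):

```python
def concordance(calls_a, calls_b):
    """Agreement rate over sites genotyped in both call sets;
    None marks a missing genotype and is excluded."""
    shared = [s for s in calls_a
              if s in calls_b and calls_a[s] is not None and calls_b[s] is not None]
    if not shared:
        return 0.0
    return sum(calls_a[s] == calls_b[s] for s in shared) / len(shared)

def missing_rate(calls):
    """Fraction of sites with no genotype call."""
    return sum(g is None for g in calls.values()) / len(calls)
```

On this view, the trade-off in the abstract is between a caller's missing_rate and its concordance against array genotypes or replicates.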
Generation of annotated multimodal ground truth datasets for abdominal medical image registration
Sparsity of annotated data is a major limitation in medical image processing
tasks such as registration. Registered multimodal image data are essential for
the diagnosis of medical conditions and the success of interventional medical
procedures. To overcome the shortage of data, we present a method that allows
the generation of annotated multimodal 4D datasets. We use a CycleGAN network
architecture to generate multimodal synthetic data from the 4D extended
cardiac-torso (XCAT) phantom and real patient data. Organ masks are provided by
the XCAT phantom; therefore, the generated dataset can serve as ground truth for
image segmentation and registration. Realistic simulation of respiration and
heartbeat is possible within the XCAT framework. To underline the usability as
a registration ground truth, a proof of principle registration is performed.
Compared to real patient data, the synthetic data showed good agreement
regarding the image voxel intensity distribution and the noise characteristics.
The generated T1-weighted magnetic resonance imaging (MRI), computed tomography
(CT), and cone beam CT (CBCT) images are inherently co-registered. Thus, the
synthetic dataset allowed us to optimize registration parameters of a
multimodal non-rigid registration, utilizing liver organ masks for evaluation.
Our proposed framework provides not only annotated but also multimodal
synthetic data which can serve as a ground truth for various tasks in medical
imaging processing. We demonstrated the applicability of synthetic data for the
development of multimodal medical image registration algorithms.
Comment: 12 pages, 5 figures. This work has been published in the
International Journal of Computer Assisted Radiology and Surgery volume
Bose-Hubbard model with occupation dependent parameters
We study the ground-state properties of ultracold bosons in an optical
lattice in the regime of strong interactions. The system is described by a
non-standard Bose-Hubbard model with both occupation-dependent tunneling and
on-site interaction. We find that for sufficiently strong coupling the system
features a phase transition from a Mott insulator with one particle per site to
a superfluid of spatially extended particle pairs living on top of the Mott
background -- instead of the usual transition to a superfluid of single
particles/holes. Increasing the interaction further, a superfluid of particle
pairs localized on a single site (rather than being extended) on top of the
Mott background appears. This happens at the same interaction strength where
the Mott-insulator phase with 2 particles per site is destroyed completely by
particle-hole fluctuations for arbitrarily small tunneling. In another regime,
characterized by weak interaction, but high occupation numbers, we observe a
dynamical instability in the superfluid excitation spectrum. The new ground
state is a superfluid, forming a 2D slab, localized along one spatial direction
that is spontaneously chosen.
Comment: 16 pages, 4 figures
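The occupation dependence described above can be written schematically as follows (the functional forms of the coefficients here are illustrative assumptions, not the paper's derived expressions):

```latex
\hat{H} = -\sum_{\langle i,j\rangle} J(\hat{n}_i,\hat{n}_j)\,
          \hat{b}_i^{\dagger}\hat{b}_j
        + \frac{1}{2}\sum_i U(\hat{n}_i)\,\hat{n}_i(\hat{n}_i-1),
```

where $\hat{b}_i^{\dagger}$ creates a boson on site $i$, $\hat{n}_i=\hat{b}_i^{\dagger}\hat{b}_i$, and, unlike in the standard Bose-Hubbard model, the tunneling amplitude $J$ and the on-site interaction $U$ depend on the local occupation numbers.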
The granularity of weakly occupied bosonic fields beyond the local density approximation
We examine ground state correlations for repulsive, quasi one-dimensional
bosons in a harmonic trap. In particular, we focus on the few particle limit
N=2,3,4,..., where exact numerical solutions of the many particle Schroedinger
equation are available employing the Multi-Configuration Time-dependent Hartree
method. Our numerical results for the inhomogeneous system are modeled with the
analytical solution of the homogeneous problem using the Bethe ansatz and the
local density approximation. Tuning the interaction strength from the weakly
correlated Gross-Pitaevskii- to the strongly correlated Tonks-Girardeau regime
reveals finite particle number effects in the second order correlation function
beyond the local density approximation.
Comment: 20 pages, 9 figures, submitted to NJ
A fast algorithm for genome-wide haplotype pattern mining
Abstract
Background
Identifying the genetic components of common diseases has long been an important area of research. Recently, genotyping technology has reached the level where it is cost-effective to genotype single nucleotide polymorphism (SNP) markers covering the entire genome in thousands of individuals and to analyse such data for markers associated with a disease. The statistical power to detect association, however, is limited when markers are analysed one at a time. This can be alleviated by considering multiple markers simultaneously. The Haplotype Pattern Mining (HPM) method is a machine learning approach that does exactly this.
Results
We present a new, faster algorithm for the HPM method. The new approach uses patterns of haplotype diversity in the genome: locally in the genome, the number of observed haplotypes is much smaller than the total number of possible haplotypes. We show that the new approach speeds up the HPM method by a factor of 2 on a genome-wide dataset with 5,009 individuals typed at 491,208 markers using default parameters, and by more if the pattern length is increased.
Conclusion
The new algorithm speeds up the HPM method, and we show that it is feasible to apply HPM to whole-genome association mapping with thousands of individuals and hundreds of thousands of markers.
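The observation that drives the speed-up (far fewer distinct local haplotypes than the 2^w patterns possible in a window of w biallelic markers) can be checked directly. A toy sketch of the counting idea, not the HPM algorithm itself:

```python
def distinct_local_haplotypes(haplotypes, window):
    """For each window of adjacent markers, count the distinct haplotype
    patterns actually observed across individuals; this is typically far
    smaller than the 2**window patterns possible for biallelic markers."""
    n_markers = len(haplotypes[0])
    counts = []
    for start in range(n_markers - window + 1):
        seen = {h[start:start + window] for h in haplotypes}
        counts.append(len(seen))
    return counts
```

An algorithm that enumerates only observed patterns therefore does work proportional to the observed diversity, not to the exponential number of possible patterns.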
Frequency drift in MR spectroscopy at 3T
Purpose:
Heating of gradient coils and passive shim components is a common cause of instability in the B0 field, especially when gradient-intensive sequences are used. The aim of this study was to set a benchmark for the typical drift encountered during MR spectroscopy (MRS) and to assess the need for real-time field-frequency locking on MRI scanners by comparing field drift data from a large number of sites.
Method:
A standardized protocol was developed for 80 participating sites using 99 3T MR scanners from 3 major vendors. Phantom water signals were acquired before and after an EPI sequence. The protocol consisted of: minimal preparatory imaging; a short pre-fMRI PRESS acquisition; a ten-minute fMRI acquisition; and a long post-fMRI PRESS acquisition. Both pre- and post-fMRI PRESS were non-water-suppressed. Real-time frequency stabilization/adjustment was switched off when appropriate. Sixty scanners repeated the protocol for a second dataset. In addition, a three-hour post-fMRI MRS acquisition was performed at one site to observe the change in gradient temperature and drift rate. Spectral analysis was performed using MATLAB. Frequency drift in pre-fMRI PRESS data was compared with the first 5:20 minutes and the full 30:00 minutes of data after fMRI. Median (interquartile range) drifts were measured and shown in violin plots. Paired t-tests were performed to compare frequency drift pre- and post-fMRI. A simulated in vivo spectrum was generated using FID-A to visualize the effect of the observed frequency drifts. The simulated spectrum was convolved with the frequency trace for the most extreme cases. The impact of frequency drift on NAA and GABA was also simulated as a function of linear drift. Data from the repeated protocol were compared with the corresponding first dataset using Pearson's and intraclass correlation coefficients (ICC).
Results:
Of the data collected from 99 scanners, 4 were excluded for various reasons; data from 95 scanners were ultimately analyzed. For the first 5:20 min (64 transients), the median (interquartile range) drift was 0.44 (1.29) Hz before fMRI and 0.83 (1.29) Hz after. This increased to 3.15 (4.02) Hz for the full 30-min (360 transients) run. Average drift rates were 0.29 Hz/min before fMRI and 0.43 Hz/min after. Paired t-tests indicated that drift increased after fMRI, as expected (p < 0.05). Simulated spectra convolved with the frequency drift showed that the intensity of the NAA singlet was reduced by up to 26%, 44%, and 18% for GE, Philips, and Siemens scanners after fMRI, respectively. ICCs indicated good agreement between datasets acquired on separate days. The single-site long acquisition showed that the drift rate was reduced to 0.03 Hz/min approximately three hours after fMRI.
Discussion:
This study analyzed frequency drift data from 95 3T MRI scanners. Median levels of drift were relatively low (5-min average under 1 Hz), but the most extreme cases suffered from higher levels of drift. The extent of drift varied across scanners, and both linear and nonlinear drifts were observed.
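The peak-height losses reported above arise because transients acquired at drifting frequencies no longer add coherently in the average. A schematic simulation of this effect for a linear drift (the T2 decay, dwell time, and repetition time are assumed illustrative values, not the study's protocol):

```python
import numpy as np

def peak_reduction(drift_hz_per_min, n_transients=360, tr_s=5.0,
                   t2_s=0.1, dwell_s=1e-3, n_points=2048):
    """Average FIDs of a single resonance acquired under a linear
    frequency drift and return the spectral peak height relative to
    the drift-free average."""
    t = np.arange(n_points) * dwell_s
    fids = []
    for k in range(n_transients):
        f_off = drift_hz_per_min * (k * tr_s / 60.0)  # offset of transient k
        fids.append(np.exp(-t / t2_s) * np.exp(2j * np.pi * f_off * t))
    drifted = np.abs(np.fft.fft(np.mean(fids, axis=0))).max()
    ideal = np.abs(np.fft.fft(np.exp(-t / t2_s))).max()
    return drifted / ideal
```

With zero drift the ratio is exactly 1; increasing the drift rate spreads the averaged signal over a wider frequency band and lowers the peak, which is the mechanism behind the simulated NAA singlet losses.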
Spatio-Temporal Interpolation Is Accomplished by Binocular Form and Motion Mechanisms
Spatio-temporal interpolation describes the ability of the visual system to perceive shapes as whole figures (Gestalts), even if they are moving behind narrow apertures, so that only thin slices of them meet the eye at any given point in time. The interpolation process requires registration of the form slices, as well as perception of the shape's global motion, in order to reassemble the slices in the correct order. The commonly proposed mechanism is a spatio-temporal motion detector with a receptive field, for which spatial distance and temporal delays are interchangeable, and which has generally been regarded as monocular. Here we investigate separately the nature of the motion and the form detection involved in spatio-temporal interpolation, using dichoptic masking and interocular presentation tasks. The results clearly demonstrate that the associated mechanisms for both motion and form are binocular rather than monocular. Hence, we question the traditional view according to which spatio-temporal interpolation is achieved by monocular first-order motion-energy detectors in favour of models featuring binocular motion and form detection
Consensus-Based Technical Recommendations for Clinical Translation of Renal Phase Contrast MRI
BACKGROUND:
Phase-contrast (PC) MRI is a feasible and valid noninvasive technique to measure renal artery blood flow, showing potential to support diagnosis and monitoring of renal diseases. However, the variability in measured renal blood flow values across studies is large, most likely due to differences in PC-MRI acquisition and processing. Standardized acquisition and processing protocols are therefore needed to minimize this variability and maximize the potential of renal PC-MRI as a clinically useful tool.
PURPOSE:
To build technical recommendations for the acquisition, processing, and analysis of renal 2D PC-MRI data in human subjects to promote standardization of renal blood flow measurements and facilitate the comparability of results across scanners and in multicenter clinical studies.
STUDY TYPE:
Systematic consensus process using a modified Delphi method.
POPULATION:
Not applicable.
SEQUENCE FIELD/STRENGTH:
Renal fast gradient echo-based 2D PC-MRI.
ASSESSMENT:
An international panel of 27 experts from Europe, the USA, Australia, and Japan with 6 (interquartile range 4–10) years of experience in 2D PC-MRI formulated consensus statements on renal 2D PC-MRI in two rounds of surveys. Starting from a recently published systematic review article, literature-based and data-driven statements regarding patient preparation, hardware, acquisition protocol, analysis steps, and data reporting were formulated.
STATISTICAL TESTS:
Consensus was defined as ≥75% unanimity in response, and a clear preference was defined as 60–74% agreement among the experts.
RESULTS:
Among 60 statements, 57 (95%) achieved consensus after the second-round survey, while the remaining three showed a clear preference. Consensus statements resulted in specific recommendations for subject preparation, 2D renal PC-MRI data acquisition, processing, and reporting.
DATA CONCLUSION:
These recommendations may promote widespread adoption of renal PC-MRI, help foster the setup of multicenter studies aimed at defining reference values and building larger and more definitive evidence, and facilitate the clinical translation of PC-MRI.
LEVEL OF EVIDENCE:
1
TECHNICAL EFFICACY STAGE:
- …