3,994 research outputs found

    Benchmark of a modified Iterated Perturbation Theory approach on the 3d FCC lattice at strong coupling

    The Dynamical Mean-Field Theory (DMFT) approach to the Hubbard model requires a method to solve the problem of a quantum impurity in a bath of non-interacting electrons. Iterated Perturbation Theory (IPT) has proven its effectiveness as a solver in many cases of interest. Based on general principles and on comparisons with an essentially exact Continuous-Time Quantum Monte Carlo (CTQMC) solver, here we show that the standard implementation of IPT fails away from half-filling when the interaction strength is much larger than the bandwidth. We propose a slight modification to the IPT algorithm that replaces one of the equations by the requirement that the double occupancy calculated with IPT gives the correct value. We call this method IPT-DD. We recover the Fermi liquid ground state away from half-filling. The Fermi liquid parameters, density of states, chemical potential, energy and specific heat on the FCC lattice are calculated with both IPT-DD and CTQMC as benchmark examples. We also calculated the resistivity and the optical conductivity within IPT-DD. Particle-hole asymmetry persists even at coupling twice the bandwidth. Several algorithms that speed up the calculations are described in appendices. Comment: 17 pages, 15 figures, minor changes to improve clarity
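    The defining step of IPT-DD, replacing one IPT equation by the requirement that the double occupancy come out correct, amounts to a one-dimensional self-consistency condition. The sketch below illustrates that idea only: `double_occupancy` is a hypothetical monotone stand-in, not the actual IPT expression, and the tuned parameter `alpha` is likewise illustrative.

```python
def double_occupancy(alpha, U=8.0):
    # Hypothetical monotone model: double occupancy falls as the
    # effective interaction alpha*U grows (stand-in for the IPT result).
    return 0.25 / (1.0 + alpha * U)

def tune_alpha(d_target, lo=0.0, hi=10.0, tol=1e-10):
    """Bisect on alpha until double_occupancy(alpha) matches d_target."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if double_occupancy(mid) > d_target:
            lo = mid          # occupancy too high -> increase alpha
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Target double occupancy, e.g. taken from a CTQMC benchmark
alpha = tune_alpha(0.05)
```

    The same bisection pattern applies to any monotone one-parameter constraint of this kind.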

    Computational Techniques to Predict Orthopaedic Implant Alignment and Fit in Bone

    Among the broad palette of surgical techniques employed in current orthopaedic practice, joint replacement represents one of the most difficult and costliest surgical procedures. While numerous recent advances suggest that computer assistance can dramatically improve the precision and long-term outcomes of joint arthroplasty even in the hands of experienced surgeons, many joint replacement protocols continue to rely almost exclusively on an empirical basis that often entails a succession of trial-and-error maneuvers that can only be performed intraoperatively. Although the surgeon is generally unable to predict accurately and reliably a priori what the final malalignment will be, or even what implant size should be used for a given patient, the overarching goal of all arthroplastic procedures is to ensure an appropriate match between the native and prosthetic axes of the articulation. To address this relative lack of knowledge, the main objective of this thesis was to develop a comprehensive library of numerical techniques capable of: 1) accurately reconstructing the outer and inner geometry of the bone to be implanted; 2) determining the location of the native articular axis to be replicated by the implant; 3) assessing the insertability of a given implant within the endosteal canal of the bone to be implanted; and 4) proposing customized implant geometries that ensure minimal malalignment between native and prosthetic axes. The accuracy of the developed algorithms was validated through comparisons against conventional methods involving either contact-acquired data or navigated implantation approaches, while the various customized implant designs proposed were tested with an original numerical implantation method.
It is anticipated that the proposed computer-based approaches will eliminate, or at least diminish, the need for undesirable trial-and-error implantation procedures, in the sense that present error-prone intraoperative implant insertion decisions will be augmented, if not replaced, by optimal computer-based solutions offering reliable virtual “previews” of the future surgical procedure. While the entire thesis focuses on the elbow, the most challenging joint replacement surgery, many of the developed approaches are equally applicable to other upper- or lower-limb articulations.
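    The central quantity in axis matching, the malalignment angle between native and prosthetic axes, reduces to the angle between two 3D direction vectors. A minimal sketch (the function name and the example axes are hypothetical, not taken from the thesis):

```python
import math

def malalignment_deg(native_axis, prosthetic_axis):
    # Angle between two 3D direction vectors, in degrees.
    dot = sum(a * b for a, b in zip(native_axis, prosthetic_axis))
    na = math.sqrt(sum(a * a for a in native_axis))
    nb = math.sqrt(sum(b * b for b in prosthetic_axis))
    c = max(-1.0, min(1.0, dot / (na * nb)))  # clamp for acos safety
    return math.degrees(math.acos(c))

# A prosthetic axis tilted relative to the native flexion-extension axis
angle = malalignment_deg((0.0, 0.0, 1.0), (0.0, 1.0, 1.0))
```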

    Loads Kernel User Guide

    The Loads Kernel Software allows for the calculation of quasi-steady and dynamic maneuver loads, unsteady gust loads in the time and frequency domain, as well as dynamic landing loads based on a generic landing gear module. This report is a published version of the Loads Kernel User Guide, version 1.0, as of 7 October 2020.

    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for the understanding of brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate study of vascular development in the neonatal period, a set of image analysis algorithms was developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not effectively done by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed. This causes pixels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates. The performance of the method is investigated through a detailed landmark study. To facilitate study of cortical development, a cortical surface registration algorithm for aligning cortical surfaces has been developed. The method first inflates the extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
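    The responsibility-weighted updates underlying the EM segmentation can be sketched on toy 1D intensities with a plain two-class Gaussian mixture. Note this omits the dissertation's explicit partial-volume correction, and all means, variances and sample counts below are illustrative:

```python
import math, random

random.seed(0)
# Toy 1D intensities for two tissue classes (e.g. grey vs white matter,
# with neonatal contrast): values and counts are invented.
data = [random.gauss(2.0, 0.3) for _ in range(200)] + \
       [random.gauss(5.0, 0.3) for _ in range(200)]

mu = [1.0, 6.0]      # deliberately poor initial class means
sigma = [1.0, 1.0]
pi = [0.5, 0.5]      # mixing weights

def gauss(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

for _ in range(30):
    # E-step: class responsibilities for each voxel intensity
    resp = []
    for x in data:
        w = [pi[k] * gauss(x, mu[k], sigma[k]) for k in range(2)]
        z = sum(w)
        resp.append([wk / z for wk in w])
    # M-step: re-estimate means, variances and mixing weights
    for k in range(2):
        nk = sum(r[k] for r in resp)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        sigma[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                                 for r, x in zip(resp, data)) / nk)
        pi[k] = nk / len(data)
```

    With well-separated classes the estimated means converge to the true cluster centres; handling the mixed-tissue voxels between them is exactly where the partial-volume correction developed in the dissertation comes in.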

    Multi-scale active shape description in medical imaging

    Shape description in medical imaging has become an increasingly important research field in recent years. Fast and high-resolution image acquisition methods like Magnetic Resonance (MR) imaging produce very detailed cross-sectional images of the human body - shape description is then a post-processing operation which abstracts quantitative descriptions of anatomically relevant object shapes. This task is usually performed by clinicians and other experts by first segmenting the shapes of interest, and then making volumetric and other quantitative measurements. High demand on expert time and inter- and intra-observer variability impose a clinical need of automating this process. Furthermore, recent studies in clinical neurology on the correspondence between disease status and degree of shape deformations necessitate the use of more sophisticated, higher-level shape description techniques. In this work a new hierarchical tool for shape description has been developed, combining two recently developed and powerful techniques in image processing: differential invariants in scale-space, and active contour models. This tool enables quantitative and qualitative shape studies at multiple levels of image detail, exploring the extra image scale degree of freedom. Using scale-space continuity, the global object shape can be detected at a coarse level of image detail, and finer shape characteristics can be found at higher levels of detail or scales. New methods for active shape evolution and focusing have been developed for the extraction of shapes at a large set of scales using an active contour model whose energy function is regularized with respect to scale and geometric differential image invariants. The resulting set of shapes is formulated as a multiscale shape stack which is analysed and described for each scale level with a large set of shape descriptors to obtain and analyse shape changes across scales. 
    This shape stack leads naturally to several questions in regard to variable sampling and appropriate levels of detail to investigate an image. The relationship between active contour sampling precision and scale-space is addressed. After a thorough review of modern shape description, multi-scale image processing and active contour model techniques, the novel framework for multi-scale active shape description is presented and tested on synthetic and medical images. An interesting result is the recovery of the fractal dimension of a known fractal boundary using this framework. Medical applications addressed are grey-matter deformations occurring in patients with epilepsy, spinal cord atrophy in patients with Multiple Sclerosis, and cortical impairment in neonates. Extensions to non-linear scale-spaces, comparisons to binary curve and curvature evolution schemes, as well as other hierarchical shape descriptors are discussed.
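    The shape-stack idea, the same boundary represented at a set of increasing Gaussian scales, can be sketched in one dimension. Everything here (the kernel truncation at 3 sigma, the synthetic wiggly curve, the chosen scales) is illustrative, not the thesis's implementation:

```python
import math

def gaussian_kernel(sigma):
    radius = max(1, int(3 * sigma))          # truncate at ~3 sigma
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]                # normalize to unit sum

def smooth(signal, sigma):
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - r, 0), len(signal) - 1)  # clamp ends
            acc += w * signal[idx]
        out.append(acc)
    return out

# A wiggly "boundary": coarse shape plus fine-scale ripple.
curve = [math.sin(0.05 * i) + 0.2 * math.sin(0.9 * i) for i in range(200)]
stack = {s: smooth(curve, s) for s in (1.0, 2.0, 4.0)}  # toy shape stack

# Roughness: summed squared second differences at each scale
rough = {s: sum((v[i + 1] - 2 * v[i] + v[i - 1]) ** 2
                for i in range(1, len(v) - 1)) for s, v in stack.items()}
```

    The roughness measure confirms the scale-space behaviour the thesis exploits: fine-scale ripple is progressively removed as sigma grows while the coarse shape survives.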

    Simulating galaxies in the reionization era with FIRE-2: morphologies and sizes

    We study the morphologies and sizes of galaxies at z>5 using high-resolution cosmological zoom-in simulations from the Feedback In Realistic Environments project. The galaxies show a variety of morphologies, from compact to clumpy to irregular. The simulated galaxies have more extended morphologies and larger sizes when measured using rest-frame optical B-band light than rest-frame UV light; sizes measured from stellar mass surface density are even larger. The UV morphologies are usually dominated by several small, bright young stellar clumps that are not always associated with significant stellar mass. The B-band light traces stellar mass better than the UV, but it can also be biased by the bright clumps. At all redshifts, galaxy size correlates with stellar mass/luminosity with large scatter. The half-light radii range from 0.01 to 0.2 arcsec (0.05-1 kpc physical) at fixed magnitude. At z>5, the size of galaxies at fixed stellar mass/luminosity evolves as (1+z)^{-m}, with m~1-2. For galaxies less massive than M_star~10^8 M_sun, the ratio of the half-mass radius to the halo virial radius is ~10% and does not evolve significantly at z=5-10; this ratio is typically 1-5% for more massive galaxies. A galaxy's "observed" size decreases dramatically at shallower surface brightness limits. This effect may account for the extremely small sizes of z>5 galaxies measured in the Hubble Frontier Fields. We provide predictions for the cumulative light distribution as a function of surface brightness for typical galaxies at z=6. Comment: 11 pages, 11 figures, resubmitted to MNRAS after revision for referee's comments
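    The quoted evolution R ∝ (1+z)^{-m} at fixed stellar mass/luminosity implies a concrete shrink factor between two redshifts; a quick sketch of that arithmetic (the function name is illustrative):

```python
def size_ratio(z1, z2, m):
    """Ratio R(z2)/R(z1) at fixed mass, assuming R proportional to (1+z)**-m."""
    return ((1.0 + z2) / (1.0 + z1)) ** (-float(m))

shrink_m1 = size_ratio(5, 10, 1)  # m = 1
shrink_m2 = size_ratio(5, 10, 2)  # m = 2
```

    For m between 1 and 2, sizes at z=10 are roughly 30-55% of their z=5 values at fixed mass.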

    Cosmological Simulations with Self-Interacting Dark Matter I: Constant Density Cores and Substructure

    We use cosmological simulations to study the effects of self-interacting dark matter (SIDM) on the density profiles and substructure counts of dark matter halos from the scales of spiral galaxies to galaxy clusters, focusing explicitly on models with cross sections over dark matter particle mass \sigma/m = 1 and 0.1 cm^2/g. Our simulations rely on a new SIDM N-body algorithm that is derived self-consistently from the Boltzmann equation and that reproduces analytic expectations in controlled numerical experiments. We find that well-resolved SIDM halos have constant-density cores, with significantly lower central densities than their CDM counterparts. In contrast, the subhalo content of SIDM halos is only modestly reduced compared to CDM, with the suppression greatest for large hosts and small halo-centric distances. Moreover, the large-scale clustering and halo circular velocity functions in SIDM are effectively identical to CDM, meaning that all of the large-scale successes of CDM are equally well matched by SIDM. From our largest cross section runs we are able to extract scaling relations for core sizes and central densities over a range of halo sizes and find a strong correlation between the core radius of an SIDM halo and the NFW scale radius of its CDM counterpart. We construct a simple analytic model, based on CDM scaling relations, that captures all aspects of the scaling relations for SIDM halos. Our results show that halo core densities in \sigma/m = 1 cm^2/g models are too low to match observations of galaxy clusters, low surface brightness spirals (LSBs), and dwarf spheroidal galaxies. However, SIDM with \sigma/m ~ 0.1 cm^2/g appears capable of reproducing reported core sizes and central densities of dwarfs, LSBs, and galaxy clusters without the need for velocity dependence. (abridged) Comment: 26 pages, 16 figures, all figures include colors, submitted for publication in MNRAS
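    In SIDM N-body schemes of this kind, a particle's probability of scattering in a timestep is commonly taken as P = (sigma/m) * rho * v_rel * dt, valid when P << 1. The abstract does not spell this out, so the formula's use here and the halo-like numbers below are assumptions for illustration:

```python
def scatter_probability(sigma_m, rho, v_rel, dt):
    """Per-particle scattering probability per step, P = (sigma/m)*rho*v_rel*dt.

    All quantities in cgs: sigma_m in cm^2/g, rho in g/cm^3,
    v_rel in cm/s, dt in s. Valid only while P << 1.
    """
    return sigma_m * rho * v_rel * dt

# sigma/m = 1 cm^2/g, ambient density ~1e-24 g/cm^3,
# relative velocity ~100 km/s, timestep ~1 Myr (~3.156e13 s)
p = scatter_probability(1.0, 1e-24, 1.0e7, 3.156e13)
```

    At sigma/m = 1 cm^2/g and these ambient conditions P is of order 3e-4 per Myr, so scatterings accumulate over a Hubble time in dense halo centres, consistent with core formation there.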

    IRAS versus POTENT Density Fields on Large Scales: Biasing and Omega

    The galaxy density field as extracted from the IRAS 1.2 Jy redshift survey is compared to the mass density field as reconstructed by the POTENT method from the Mark III catalog of peculiar velocities. The reconstruction is done with Gaussian smoothing of radius 12 h^{-1}Mpc, and the comparison is carried out within volumes of effective radii 31-46 h^{-1}Mpc, containing approximately 10-26 independent samples. Random and systematic errors are estimated from multiple realizations of mock catalogs drawn from a simulation that mimics the observed density field in the local universe. The relationship between the two density fields is found to be consistent with gravitational instability theory in the mildly nonlinear regime and a linear biasing relation between galaxies and mass. We measure beta = Omega^{0.6}/b_I = 0.89 \pm 0.12 within a volume of effective radius 40 h^{-1}Mpc, where b_I is the IRAS galaxy biasing parameter at 12 h^{-1}Mpc. This result is only weakly dependent on the comparison volume, suggesting that cosmic scatter is no greater than \pm 0.1. These data are thus consistent with Omega=1 and b_I\approx 1. If b_I>0.75, as theoretical models of biasing indicate, then Omega>0.33 at 95% confidence. A comparison with other estimates of beta suggests scale-dependence in the biasing relation for IRAS galaxies. Comment: 35 pages including 10 figures, AAS LaTeX, Submitted to The Astrophysical Journal
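    The quoted bound follows directly from the definition beta = Omega^{0.6}/b_I; a short sketch of the arithmetic, assuming Gaussian errors and a one-sided 95% limit (the 1.645-sigma choice is our assumption, not stated in the abstract):

```python
beta, err, b_I = 0.89, 0.12, 0.75

# One-sided 95% lower limit on beta, assuming Gaussian errors
beta_lo = beta - 1.645 * err

# beta = Omega**0.6 / b_I  =>  Omega = (beta * b_I)**(1/0.6)
omega_lo = (beta_lo * b_I) ** (1.0 / 0.6)       # lower limit on Omega
omega_central = (beta * b_I) ** (1.0 / 0.6)     # central value
```

    The lower limit comes out near 0.34, consistent with the abstract's Omega > 0.33 at 95% confidence for b_I > 0.75.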

    Nuclear halo of a 177 MeV proton beam in water: theory, measurement and parameterization

    The dose distribution of a monoenergetic pencil beam in water consists of an electromagnetic "core", a "halo" from charged nuclear secondaries, and a much larger "aura" from neutral secondaries. These regions overlap, but each has distinct spatial characteristics. We have measured the core/halo using a 177 MeV test beam offset in a water tank. The beam monitor was a fluence calibrated plane parallel ionization chamber (IC) and the field chamber, a dose calibrated Exradin T1, so the dose measurements are absolute (MeV/g/p). We performed depth-dose scans at ten displacements from the beam axis ranging from 0 to 10 cm. The dose spans five orders of magnitude, and the transition from halo to aura is clearly visible. We have performed model-dependent (MD) and model-independent (MI) fits to the data. The MD fit separates the dose into core, elastic/inelastic nuclear, nonelastic nuclear and aura terms, and achieves a global rms measurement/fit ratio of 15%. The MI fit uses cubic splines and the same ratio is 9%. We review the literature, in particular the use of Pedroni's parametrization of the core/halo. Several papers improve on his Gaussian transverse distribution of the halo, but all retain his T(w), the radial integral of the depth-dose multiplying both the core and halo terms and motivating measurements with large "Bragg peak chambers" (BPCs). We argue that this use of T(w), which by its definition includes energy deposition by nuclear secondaries, is incorrect. T(w) should be replaced in the core term, and in at least part of the halo, by a purely electromagnetic mass stopping power. BPC measurements are unnecessary, and irrelevant to parameterizing the pencil beam. Comment: 55 pages, 4 tables, 29 figures
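    A common simplification in the literature reviewed here models the transverse profile as a narrow electromagnetic core plus a much broader nuclear halo. The two-Gaussian toy below (all amplitudes and widths invented for illustration) shows how the halo, though orders of magnitude weaker on axis, dominates the dose far off axis:

```python
import math

def pencil_beam_dose(r, a_core=1.0, s_core=0.5, a_halo=1e-3, s_halo=3.0):
    """Toy transverse profile at fixed depth: a narrow EM core plus a
    broad nuclear halo, each modelled as a Gaussian in radius r (cm).
    Amplitudes and widths are illustrative, not fitted values."""
    core = a_core * math.exp(-0.5 * (r / s_core) ** 2)
    halo = a_halo * math.exp(-0.5 * (r / s_halo) ** 2)
    return core + halo

on_axis = pencil_beam_dose(0.0)   # core dominates
far = pencil_beam_dose(5.0)       # core is negligible; halo remains
```

    At r = 5 cm the core term is ~exp(-50), utterly negligible, while the halo still contributes a few 1e-4 of the on-axis dose, which is why the halo matters over the five orders of magnitude spanned by the measurements.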