
    An immersed boundary method for computing heat and fluid flow in porous media

    A volume-penalizing immersed boundary (IB) method is presented that facilitates the computation of fluid flow in complex porous media. The computational domain is composed of a uniform Cartesian grid, and solid bodies are approximated on this grid using a series of grid cells (i.e., a "staircase" approximation). Solid bodies are distinguished from fluid regions using a binary phase-indicator function, which takes the value 1 in the solid parts of the domain and 0 in the fluid parts. The effect of solid bodies on the flow is modeled using a source term in the momentum equations. The source term is active only within solid parts of the domain and enforces the no-slip boundary condition. Fluid regions are governed by the incompressible Navier-Stokes equations. An extension of the IB method is proposed to tackle coupled fluid-solid heat transfer. The extended IB method is validated for Poiseuille flow, which allows for a direct comparison of the numerical results against a closed analytical solution. We subsequently apply the extended IB method to flow in a structured porous medium and focus on bulk properties such as the gradient of the average pressure and the Nusselt number. Reliable qualitative results were obtained with 16-32 grid points per singly-connected fluid region.
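    As a rough illustration of the volume-penalization idea described above, the sketch below applies such a source term on a uniform Cartesian grid. It is a minimal toy, not the authors' code, assuming a 2D collocated grid, an implicitly treated penalization parameter `eta`, and a binary mask `chi` playing the role of the phase-indicator function.

    ```python
    import numpy as np

    def penalization_step(u, v, chi, dt, eta=1e-6, u_solid=0.0, v_solid=0.0):
        """One implicitly treated penalization update: relaxes the velocity toward
        the solid velocity (here zero, i.e. no slip) wherever chi == 1."""
        f = dt * chi / eta                       # forcing strength, nonzero only in solid cells
        u_new = (u + f * u_solid) / (1.0 + f)
        v_new = (v + f * v_solid) / (1.0 + f)
        return u_new, v_new

    # Example: a circular obstacle approximated by a "staircase" of grid cells.
    nx, ny = 64, 64
    x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny), indexing="ij")
    chi = (((x - 0.5)**2 + (y - 0.5)**2) < 0.2**2).astype(float)  # 1 = solid, 0 = fluid
    u, v = np.ones((nx, ny)), np.zeros((nx, ny))
    u, v = penalization_step(u, v, chi, dt=1e-4)  # u is now ~0 inside the obstacle
    ```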

    Impurity state in Haldane gap for S=1 Heisenberg antiferromagnetic chain with bond doping

    Using a new impurity density matrix renormalization group scheme, we establish a reliable picture of how the low-lying energy levels of an $S=1$ Heisenberg antiferromagnetic chain change quantitatively upon bond doping. A new impurity state gradually emerges in the Haldane gap for $J' < J$, while for $J' > J$ it appears only if $J'/J > \gamma_c$, with $1/\gamma_c = 0.708$. The system is non-perturbative for $1 \leq J'/J \leq \gamma_c$. This explains the appearance of a new state in the Haldane gap in a recent experiment on Y$_{2-x}$Ca$_x$BaNiO$_5$ [J.F. DiTusa, et al., Phys. Rev. Lett. 73, 1857 (1994)]. Comment: 4 pages of uuencoded gzip'd postscript

    Gravitino Dark Matter Scenarios with Massive Metastable Charged Sparticles at the LHC

    We investigate the measurement of supersymmetric particle masses at the LHC in gravitino dark matter (GDM) scenarios where the next-to-lightest supersymmetric partner (NLSP) is the lighter scalar tau, or stau, and is stable on the scale of a detector. Such a massive metastable charged sparticle would have distinctive Time-of-Flight (ToF) and energy-loss (dE/dx) signatures. We summarise the documented accuracies expected to be achievable with the ATLAS detector in measurements of the stau mass and its momentum at the LHC. We then use a fast simulation of an LHC detector to demonstrate techniques for reconstructing the cascade decays of supersymmetric particles in GDM scenarios, using a parameterisation of the detector response to staus, taus and jets based on full simulation results. Supersymmetric pair-production events are selected with high redundancy and efficiency, and many valuable measurements can be made starting from stau tracks in the detector. We recalibrate the momenta of taus using transverse-momentum balance, and use kinematic cuts to select combinations of staus, taus, jets and leptons that exhibit peaks in invariant masses corresponding to various heavier sparticle species, with errors often comparable with the jet energy scale uncertainty. Comment: 23 pages, 10 figures, updated to version published in JHEP
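    The mass peaks mentioned above come from forming invariant masses of stau tracks with taus, jets and leptons. The toy function below shows only this basic kinematic ingredient; it is not the paper's analysis code, and the event values and the interpretation as a heavier-sparticle peak are illustrative assumptions.

    ```python
    import numpy as np

    def invariant_mass(*p4s):
        """Invariant mass of the sum of four-momenta given as (E, px, py, pz) in GeV."""
        e, px, py, pz = np.sum(p4s, axis=0)
        return float(np.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0)))

    # Hypothetical candidates from one event; the stau momentum would come from its
    # track plus ToF/dE/dx measurements, the tau from a recalibrated tau jet.
    stau = np.array([412.0, 150.0, -80.0, 360.0])
    tau = np.array([90.7, -40.0, 60.0, 55.0])
    print(f"m(stau, tau) = {invariant_mass(stau, tau):.1f} GeV")  # would fill a mass histogram
    ```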

    Constraining warm dark matter with cosmic shear power spectra

    We investigate potential constraints from cosmic shear on the dark matter particle mass, assuming all dark matter is made up of light thermal relic particles. Given the theoretical uncertainties involved in making cosmological predictions in such warm dark matter scenarios, we use analytical fits to linear warm dark matter power spectra and compare (i) the halo model using a mass function evaluated from these linear power spectra and (ii) an analytical fit to the non-linear evolution of the linear power spectra. We optimistically ignore the competing effect of baryons in this work. We find approach (ii) to be conservative compared to approach (i). We evaluate cosmological constraints using these methods, marginalising over four other cosmological parameters. Using the more conservative method, we find that a Euclid-like weak lensing survey, together with constraints from the Planck cosmic microwave background mission primary anisotropies, could achieve a lower limit on the particle mass of 2.5 keV. Comment: 26 pages, 9 figures, minor changes to match the version accepted for publication in JCAP
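    For context, a minimal sketch of the kind of analytical linear-theory fit referred to above is given below. It uses the widely quoted thermal-relic transfer-function form T(k) = [1 + (alpha*k)^(2*nu)]^(-5/nu) with P_WDM = T^2 P_CDM; the coefficient values are approximate and illustrative rather than taken from the paper.

    ```python
    import numpy as np

    def wdm_transfer(k, m_wdm_kev, omega_wdm=0.25, h=0.7, nu=1.12):
        """Suppression of the linear WDM power spectrum relative to CDM.
        k in h/Mpc; m_wdm_kev is the thermal-relic mass in keV. Coefficients are
        the commonly quoted approximate values, used here only for illustration."""
        alpha = 0.049 * m_wdm_kev**-1.11 * (omega_wdm / 0.25)**0.11 * (h / 0.7)**1.22  # Mpc/h
        return (1.0 + (alpha * k)**(2.0 * nu))**(-5.0 / nu)

    k = np.logspace(-2, 2, 200)                         # h/Mpc
    suppression = wdm_transfer(k, m_wdm_kev=2.5)**2     # ratio P_WDM(k) / P_CDM(k)
    ```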

    Spatial tools for diagnosing the degree of safety and liveability, and to regenerate urban areas in the Netherlands

    This contribution describes the Social Safe Urban Design (SSUD) tool, together with the socio-spatial and linguistic challenges that arise when applying space syntax to the regeneration of problem urban areas. The Space Syntax jargon is technical and needs to be translated into language that is understandable and acceptable to the stakeholders responsible for implementing improvement strategies that are acceptable to the users of a neighbourhood. Moreover, the degree of public-private interface between buildings and streets needs to be incorporated in the Space Syntax analyses. Spatial analyses and crime registrations for the eight pilot cases investigated show a correlation between crime and anti-social behaviour and the spatial layout of the built environment. At the same time, the challenge remains to devise locally and globally functioning spatial solutions that reduce opportunities for crime and anti-social behaviour in these neighbourhoods. Proposed solutions for three of these neighbourhoods are presented in this contribution.

    Life after charge noise: recent results with transmon qubits

    We review the main theoretical and experimental results for the transmon, a superconducting charge qubit derived from the Cooper pair box. The increased ratio of the Josephson energy to the charging energy results in an exponential suppression of the transmon's sensitivity to 1/f charge noise. This has been observed experimentally and yields homogeneous broadening, negligible pure dephasing, and long coherence times of up to 3 microseconds. Anharmonicity of the energy spectrum is required for qubit operation, and has been proven to be sufficient in transmon devices. Transmons have been implemented in a wide array of experiments, demonstrating consistent and reproducible results in very good agreement with theory. Comment: 6 pages, 4 figures. Review article, accepted for publication in Quantum Inf. Process.
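    A small numerical illustration of the scaling behind this suppression is sketched below. It uses the leading exponential dependence of the charge dispersion, exp(-sqrt(8*EJ/EC)), and the approximate relative anharmonicity -(8*EJ/EC)^(-1/2) quoted in the transmon literature; the ratios are purely illustrative and not taken from the paper.

    ```python
    import numpy as np

    def charge_dispersion_scale(ej_over_ec):
        """Leading exponential suppression factor of the charge dispersion,
        and hence of the sensitivity to 1/f charge noise."""
        return np.exp(-np.sqrt(8.0 * ej_over_ec))

    def relative_anharmonicity(ej_over_ec):
        """Approximate anharmonicity divided by the qubit frequency."""
        return -(8.0 * ej_over_ec) ** -0.5

    for ratio in (1, 10, 50, 100):   # from the Cooper-pair-box toward the transmon regime
        print(ratio, charge_dispersion_scale(ratio), relative_anharmonicity(ratio))
    ```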

    Theoretical upper bound on the mass of the LSP in the MNSSM

    We study the neutralino sector of the Minimal Non-minimal Supersymmetric Standard Model (MNSSM), in which the $\mu$ problem of the Minimal Supersymmetric Standard Model (MSSM) is solved without the accompanying problems related to the appearance of domain walls. In the MNSSM, as in the MSSM, the lightest neutralino can be the absolutely stable lightest supersymmetric particle (LSP), providing a good candidate for the cold dark matter component of the Universe. In contrast with the MSSM, the allowed range of the lightest neutralino mass in the MNSSM is limited. We establish the theoretical upper bound on the lightest neutralino mass in the framework of this model and obtain an approximate solution for this mass. Comment: 15 pages, 2 figures, references added

    Quantum computing implementations with neutral particles

    We review quantum information processing with cold neutral particles, that is, atoms or polar molecules. First, we analyze the degrees of freedom of these particles best suited for storing quantum information, and then we discuss both single- and two-qubit gate implementations. We focus our discussion mainly on collisional quantum gates, which are best suited for atom-chip-like devices, as well as on gate proposals conceived for optical lattices. Additionally, we analyze schemes both for cold atoms confined in optical cavities and hybrid approaches to entanglement generation, and we show how optimal control theory might be a powerful tool to speed up gate operations as well as to achieve the high fidelities required for fault-tolerant quantum computation. Comment: 19 pages, 12 figures; from the issue entitled "Special Issue on Neutral Particles"

    Muon Track Reconstruction and Data Selection Techniques in AMANDA

    The Antarctic Muon And Neutrino Detector Array (AMANDA) is a high-energy neutrino telescope operating at the geographic South Pole. It is a lattice of photo-multiplier tubes buried deep in the polar ice at depths between 1500 m and 2000 m. The primary goal of this detector is to discover astrophysical sources of high-energy neutrinos. A high-energy muon neutrino coming through the Earth from the Northern Hemisphere can be identified by the secondary muon moving upward through the detector. The muon tracks are reconstructed with a maximum likelihood method, which models the arrival times and amplitudes of the Cherenkov photons registered by the photo-multipliers. This paper describes the different reconstruction methods that have been successfully implemented within AMANDA. Strategies for optimizing the reconstruction performance and rejecting background are presented. For a typical analysis procedure, track directions are reconstructed with an accuracy of about 2 degrees. Comment: 40 pages, 16 Postscript figures, uses elsart.sty
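    A drastically simplified toy of such a likelihood fit is sketched below. It fits a track direction to hit times with a Gaussian residual model and ignores the Cherenkov photon propagation geometry entirely, whereas the actual reconstruction uses detailed arrival-time PDFs; all names and numbers are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    C_ICE = 0.3  # m/ns; muon taken to travel at ~c, photon travel time neglected

    def neg_log_likelihood(params, hit_pos, hit_t, sigma=15.0):
        """Gaussian -log L of hit-time residuals for a track through the origin."""
        theta, phi, t0 = params
        d = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])            # hypothesised track direction
        s = hit_pos @ d                          # muon path length to each hit's projection
        res = hit_t - (t0 + s / C_ICE)           # time residuals (ns)
        return np.sum(0.5 * (res / sigma) ** 2)

    # Toy "event": hits along a nearly vertical track, with position and timing smearing.
    rng = np.random.default_rng(1)
    true_d = np.array([0.1, 0.0, 1.0]); true_d /= np.linalg.norm(true_d)
    s_true = np.linspace(0.0, 400.0, 20)
    hit_pos = s_true[:, None] * true_d + rng.normal(0.0, 5.0, (20, 3))
    hit_t = s_true / C_ICE + rng.normal(0.0, 15.0, 20)

    fit = minimize(neg_log_likelihood, x0=[0.2, 0.0, 0.0], args=(hit_pos, hit_t))
    print("fitted zenith angle (rad):", fit.x[0])
    ```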

    Sensitivity of the IceCube Detector to Astrophysical Sources of High Energy Muon Neutrinos

    We present the results of a Monte Carlo study of the sensitivity of the planned IceCube detector to predicted fluxes of muon neutrinos at TeV to PeV energies. A complete simulation of the detector and data analysis is used to study the detector's capability to search for muon neutrinos from sources such as active galaxies and gamma-ray bursts. We study the effective area and the angular resolution of the detector as a function of muon energy and angle of incidence. We present detailed calculations of the sensitivity of the detector to both diffuse and point-like neutrino emission, including an assessment of the sensitivity to neutrinos detected in coincidence with gamma-ray burst observations. After three years of data taking, IceCube will be able to detect a point-source flux of E^2*dN/dE = 7*10^-9 cm^-2 s^-1 GeV at 5-sigma significance or, in the absence of a signal, place a 90% c.l. limit at a level of E^2*dN/dE = 2*10^-9 cm^-2 s^-1 GeV. A diffuse E^-2 flux would be detectable at a minimum strength of E^2*dN/dE = 1*10^-8 cm^-2 s^-1 sr^-1 GeV. A gamma-ray burst model following the formulation of Waxman and Bahcall would result in a 5-sigma effect after the observation of 200 bursts in coincidence with satellite observations of the gamma rays. Comment: 33 pages, 13 figures, 6 tables
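    To make the connection between a quoted flux level and an event count concrete, the sketch below folds a power-law flux with a placeholder effective area over an assumed livetime. The effective-area shape and all numbers are invented for illustration and are not the paper's simulation results.

    ```python
    import numpy as np

    def expected_events(phi0, gamma, aeff, e_gev, livetime_s):
        """N = T * integral A_eff(E) * dN/dE dE, with dN/dE = phi0 * E^-gamma
        (phi0 in GeV^(gamma-1) cm^-2 s^-1, E in GeV, A_eff in cm^2)."""
        integrand = aeff(e_gev) * phi0 * e_gev**(-gamma)
        # trapezoidal integration over the energy grid
        return livetime_s * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(e_gev))

    # Placeholder effective area rising with energy; the real A_eff comes from the
    # full detector simulation described in the paper.
    aeff_toy = lambda e: 1.0e4 * (e / 1.0e3) ** 0.8      # cm^2

    e = np.logspace(3, 7, 300)                           # 1 TeV .. 10 PeV, in GeV
    phi0 = 1.0e-8                                        # E^2*dN/dE normalisation, GeV cm^-2 s^-1
    n_sig = expected_events(phi0, 2.0, aeff_toy, e, livetime_s=3 * 3.15e7)
    print(f"toy expected signal events in 3 years: {n_sig:.0f}")
    ```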