
    The effect of spanwise heterogeneous surfaces on mixed convection in turbulent channels

    Turbulent mixed convection in channel flows with heterogeneous surfaces is studied using direct numerical simulations. The relative importance of buoyancy and shear effects, characterised by the bulk Richardson number $Ri_b$, is varied in order to cover the flow regimes of forced, mixed and natural convection, which are associated with different large-scale flow organisations. The heterogeneous surface consists of streamwise-aligned ridges, which are known to induce secondary motion in the case of forced convection. The large-scale streamwise rolls emerging under smooth-wall mixed convection conditions are significantly affected by the heterogeneous surfaces, and their appearance is considerably reduced for dense ridge spacings. It is found that the formation of these rolls requires larger buoyancy forces than over smooth walls due to the additional drag induced by the ridges. Therefore, the transition from forced-convection structures to rolls is delayed towards larger $Ri_b$ for spanwise heterogeneous surfaces. The influence of the heterogeneous surface on the flow organisation of mixed convection is particularly pronounced in the roll-to-cell transition range, where ridges favour the transition to convective cells at significantly lower $Ri_b$. In addition, the convective cells are observed to align perpendicular to the ridges with decreasing ridge spacing. We attribute this reorganisation to the fact that flow parallel to the ridges experiences less drag than flow across the ridges, which is energetically more beneficial. Furthermore, we find that the streamwise rolls exhibit very slow dynamics for $Ri_b = 1$ and $Ri_b = 3.2$ when the ridge spacing is of the order of the rolls’ width. For these cases, the up- and downdrafts of the rolls move slowly across the entire channel instead of being fixed in space, as observed for the smooth-wall cases.
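
    For reference, a conventional definition of the bulk Richardson number in channel-flow studies of this kind (a standard form; the exact velocity, length and temperature scales adopted in the paper may differ) is

    $$ Ri_b = \frac{g \beta \Delta T \, \delta}{U_b^2}, $$

    where $g$ is the gravitational acceleration, $\beta$ the thermal expansion coefficient, $\Delta T$ the wall temperature difference, $\delta$ the channel half-height and $U_b$ the bulk velocity. Forced convection corresponds to $Ri_b \ll 1$ and natural convection to $Ri_b \gg 1$.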

    Simulation of turbulent flow over roughness strips

    Heterogeneous roughness in the form of streamwise-aligned strips is known to generate large-scale secondary motions under turbulent flow conditions that can induce the intriguing feature of larger flow rates above rough than above smooth surface parts. The hydrodynamic definition of surface roughness includes a large scale separation between the roughness height and the boundary-layer thickness, which is directly related to the fact that the drag of a laminar flow is not altered by the presence of roughness. Existing simplified approaches for direct numerical simulation of roughness strips do not fulfil this requirement of an unmodified laminar base flow compared with a smooth-wall reference. It is shown that disturbances induced in a modified laminar base flow can trigger large-scale motions with resemblance to turbulent secondary flow. We propose a simple roughness model that allows us to capture the particular features of turbulent secondary flow without affecting the laminar base flow. The roughness model is based on the prescription of a spanwise slip length, a quantity that can be translated directly into the Hama roughness function for a homogeneous rough surface. The heterogeneous application of the slip-length boundary condition results in very good agreement with existing experimental data in terms of the secondary flow topology. In addition, the proposed modelling approach allows us to quantitatively evaluate the drag-increasing contribution of the secondary flow. Both the secondary flow itself and the related drag increase show only a very small dependence on the gradient of the transition between rough and smooth surface parts. Interestingly, the observed drag increase due to secondary flows above the modelled roughness is significantly smaller than that previously reported for roughness-resolving simulations. We hypothesise that this difference arises from the fact that roughness-resolving simulations cannot truly fulfil the requirement of large scale separation.
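
    As a sketch of the modelling idea (notation assumed here for illustration, not taken from the paper), a spanwise slip length $\lambda_z$ enters through a Navier slip condition on the spanwise velocity $w$ at the wall $y = 0$,

    $$ w\big|_{y=0} = \lambda_z \, \frac{\partial w}{\partial y}\bigg|_{y=0}, $$

    applied only over the strips designated as rough. For a homogeneous surface, such a boundary condition can be translated into an equivalent Hama roughness function $\Delta U^+$, i.e. the downward shift of the logarithmic mean-velocity profile relative to a smooth wall.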

    Interacting crumpled manifolds

    In this article we study the effect of a delta-interaction on a polymerized membrane of arbitrary internal dimension D. Depending on the dimensionality of the membrane and of the embedding space, different physical scenarios are observed. We emphasize the differences between polymers and membranes: for the latter, non-trivial contributions appear at the two-loop level. We also exploit a "massive scheme" inspired by calculations in fixed dimensions for scalar field theories. Although these calculations are in general amenable only to numerical evaluation, we find that in the limit D → 2 each diagram can be evaluated analytically. This property extends in fact to any order in perturbation theory, allowing for a summation of all orders. This is a novel and quite surprising result. Finally, an attempt to go beyond D = 2 is presented. Applications to the case of self-avoiding membranes are mentioned.
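
    For orientation, a standard energy functional in this class of problems (generic notation; the precise interaction studied in the article may differ) is

    $$ \mathcal{H}[r] = \frac{1}{2} \int d^D x \, \big(\nabla r(x)\big)^2 + b \int d^D x \int d^D y \; \tilde{\delta}^d\big(r(x) - r(y)\big), $$

    where $r(x)$ maps the D-dimensional membrane into d-dimensional embedding space and $b$ is the strength of the delta-interaction; for $D = 1$ this reduces to the Edwards model of a self-avoiding polymer.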

    Whole-Body Hyperthermia (WBH) in Psychiatry

    New effective therapies for managing and treating psychiatric disorders such as major depression are urgently needed. Mood-enhancing effects have repeatedly been observed after whole-body hyperthermia (WBH) treatment in other medical disciplines, and there is promising evidence that WBH may be used in psychiatry for patients suffering from depressive disorders. Most importantly, a recent study demonstrated a significant, rapid, and partially lasting reduction of depressive symptoms in patients with major depressive disorder following a single session of water-filtered infrared-A induced whole-body hyperthermia (wIRA-WBH). Underlying mechanisms of action may include immune modulation and serotonergic neurotransmission via warm-sensitive afferent thermosensory pathways to the midbrain. Current studies are focused on verifying these earlier findings and clarifying the mechanisms involved. Herein, we report on the establishment of WBH methodology in the psychiatry setting and provide our opinions on necessary future research.

    Detecting Sunyaev-Zel'dovich clusters with PLANCK: I. Construction of all-sky thermal and kinetic SZ-maps

    All-sky thermal and kinetic Sunyaev-Zel'dovich (SZ) maps are presented for assessing how well the PLANCK mission can find and characterise clusters of galaxies, especially in the presence of primary anisotropies of the cosmic microwave background (CMB) and various galactic and ecliptic foregrounds. The maps have been constructed from numerical simulations of structure formation in a standard LCDM cosmology and contain all clusters out to redshifts of z = 1.46 with masses exceeding 5e13 M_solar/h. By construction, the maps properly account for the evolution of cosmic structure, the halo-halo correlation function, the evolving mass function, halo substructure and adiabatic gas physics. The velocities in the kinetic map correspond to the actual density environment at the cluster positions. We characterise the SZ-cluster sample by measuring the distribution of angular sizes, the integrated thermal and kinetic Comptonisations, the source counts in the three relevant PLANCK channels, and give the angular power spectra of the SZ-sky. While our results are broadly consistent with simple estimates based on scaling relations and spherically symmetric cluster models, some significant differences are seen which may affect the number of clusters detectable by PLANCK.
    Comment: 14 pages, 16 figures, 3 tables, submitted to MNRAS, 05.Jul.200
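
    For reference, the thermal and kinetic Comptonisations referred to above are conventionally defined as line-of-sight integrals (standard definitions; the paper's conventions may differ in detail):

    $$ y = \frac{\sigma_T k_B}{m_e c^2} \int n_e T_e \, dl, \qquad w = \frac{\sigma_T}{c} \int n_e v_r \, dl, $$

    where $\sigma_T$ is the Thomson cross-section, $n_e$ and $T_e$ the electron number density and temperature, and $v_r$ the line-of-sight peculiar velocity.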

    The XMM Cluster Survey: Forecasting cosmological and cluster scaling-relation parameter constraints

    We forecast the constraints on the values of sigma_8, Omega_m, and cluster scaling-relation parameters which we expect to obtain from the XMM Cluster Survey (XCS). We assume a flat Lambda-CDM Universe and perform a Monte Carlo Markov Chain analysis of the evolution of the number density of galaxy clusters that takes into account a detailed simulated selection function. The currently observed number of clusters shows good agreement with these predictions. We determine the expected degradation of the constraints as a result of self-calibrating the luminosity-temperature relation (with scatter), including temperature measurement errors, and relying on photometric methods for the estimation of galaxy cluster redshifts. We examine the effects of systematic errors in the scaling-relation and measurement-error assumptions. Using only (T,z) self-calibration, we expect to measure Omega_m to +-0.03 (and Omega_Lambda to the same accuracy, assuming flatness) and sigma_8 to +-0.05, while also constraining the normalization and slope of the luminosity-temperature relation to +-6 and +-13 per cent (at 1sigma), respectively. Self-calibration fails to jointly constrain the scatter and redshift evolution of the luminosity-temperature relation significantly. Additional archival and/or follow-up data will improve on this. We do not expect measurement errors or imperfect knowledge of their distribution to degrade the constraints significantly. Scaling-relation systematics can easily lead to cosmological constraints 2sigma or more away from the fiducial model. Our treatment is the first exact treatment to this level of detail, and introduces a new `smoothed ML' estimate of expected constraints.
    Comment: 28 pages, 17 figures. Revised version, as accepted for publication in MNRAS. High-resolution figures available at http://xcs-home.org (under "Publications")
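
    For context, forecasts of this kind rest on the expected cluster counts as a function of redshift (a standard expression; the survey-specific selection enters through the term $\chi$, notation assumed here):

    $$ \frac{dN}{dz} = \Delta\Omega \, \frac{dV}{dz \, d\Omega} \int_0^{\infty} dM \, \frac{dn}{dM}(M, z) \, \chi(M, z), $$

    where $dn/dM$ is the halo mass function, $dV/(dz \, d\Omega)$ the comoving volume element per unit solid angle, $\Delta\Omega$ the survey area and $\chi(M, z)$ the probability that a cluster of mass $M$ at redshift $z$ is detected and selected.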

    Probing the accelerating Universe with radio weak lensing in the JVLA Sky Survey

    We outline the prospects for performing pioneering radio weak gravitational lensing analyses using observations from a potential forthcoming JVLA Sky Survey program. A large-scale survey with the JVLA can offer interesting and unique opportunities for performing weak lensing studies in the radio band, a field which has until now been the preserve of optical telescopes. In particular, the JVLA has the capacity for large, deep radio surveys with relatively high angular resolution, which are the key characteristics required for a successful weak lensing study. We highlight the potential advantages and unique aspects of performing weak lensing in the radio band. In particular, the inclusion of continuum polarisation information can greatly reduce noise in weak lensing reconstructions and can also remove the effects of intrinsic galaxy alignments, the key astrophysical systematic effect that limits weak lensing at all wavelengths. We identify a VLASS "deep fields" program (total area ~10-20 square degrees), to be conducted at L-band and at high resolution (A-array configuration), as the optimal survey strategy from the point of view of weak lensing science. Such a survey will build on the unique strengths of the JVLA and will remain unsurpassed in terms of its combination of resolution and sensitivity until the advent of the Square Kilometre Array. We identify the best fields on the JVLA-accessible sky from the point of view of overlap with existing deep optical and near-infrared data, which will provide crucial redshift information and facilitate a host of additional compelling multi-wavelength science.
    Comment: Submitted in response to NRAO's recent call for community white papers on the VLA Sky Survey (VLASS)
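
    As background on the polarisation argument (a schematic relation, not taken from the white paper): in the weak-lensing limit the observed ellipticity of a galaxy is approximately

    $$ \epsilon^{\mathrm{obs}} \simeq \epsilon^{\mathrm{int}} + \gamma, $$

    so any coherent intrinsic alignment, $\langle \epsilon^{\mathrm{int}} \rangle \neq 0$, biases the usual shear estimate $\hat{\gamma} = \langle \epsilon^{\mathrm{obs}} \rangle$. An independent tracer of the intrinsic orientation, such as the radio polarisation position angle, allows this term to be estimated and removed galaxy by galaxy.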

    Simulation vs. Reality: A Comparison of In Silico Distance Predictions with DEER and FRET Measurements

    Site-specific incorporation of molecular probes such as fluorescent and nitroxide spin labels into biomolecules, and subsequent analysis by Förster resonance energy transfer (FRET) and double electron-electron resonance (DEER), can elucidate the distances and distance changes between the probes. However, the probes have an intrinsic conformational flexibility due to the linker by which they are conjugated to the biomolecule. This property minimizes the influence of the label side chain on the structure of the target molecule, but complicates the direct correlation of the experimental inter-label distances with the macromolecular structure or changes thereof. Simulation methods that account for the conformational flexibility and orientation of the probe(s) can be helpful in overcoming this problem. We performed distance measurements using FRET and DEER and explored different simulation techniques to predict inter-label distances using the Rpo4/7 stalk module of the M. jannaschii RNA polymerase. This is a suitable model system because it is rigid and a high-resolution X-ray structure is available. The conformations of the fluorescent labels and nitroxide spin labels on Rpo4/7 were modeled using in vacuo molecular dynamics (MD) simulations and a stochastic Monte Carlo sampling approach. For the nitroxide probes we also performed MD simulations with explicit water and carried out a rotamer library analysis. Our results show that the Monte Carlo simulations are in better agreement with experiment than the MD simulations, and that the rotamer library approach yields plausible distance predictions. Because the latter is the least computationally demanding of the methods we explored, and is readily available to many researchers, it prevails as the method of choice for the interpretation of DEER distance distributions.
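
    For reference, the distance sensitivity of the FRET measurements follows the standard Förster relation

    $$ E = \frac{1}{1 + (r / R_0)^6}, $$

    where $E$ is the transfer efficiency, $r$ the inter-probe distance and $R_0$ the Förster radius of the dye pair; because of the linker flexibility discussed above, the measured $E$ reflects an average over the label conformations that the simulations attempt to sample.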

    Measurement of the cross-section and charge asymmetry of $W$ bosons produced in proton-proton collisions at $\sqrt{s} = 8$ TeV with the ATLAS detector

    This paper presents measurements of the $W^+ \rightarrow \mu^+\nu$ and $W^- \rightarrow \mu^-\nu$ cross-sections and the associated charge asymmetry as a function of the absolute pseudorapidity of the decay muon. The data were collected in proton-proton collisions at a centre-of-mass energy of 8 TeV with the ATLAS experiment at the LHC and correspond to a total integrated luminosity of 20.2 fb$^{-1}$. The precision of the cross-section measurements varies between 0.8% and 1.5% as a function of the pseudorapidity, excluding the 1.9% uncertainty on the integrated luminosity. The charge asymmetry is measured with an uncertainty between 0.002 and 0.003. The results are compared with predictions based on next-to-next-to-leading-order calculations with various parton distribution functions and have the sensitivity to discriminate between them.
    Comment: 38 pages in total, author list starting page 22, 5 figures, 4 tables, submitted to EPJC. All figures including auxiliary figures are available at https://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PAPERS/STDM-2017-13
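
    For reference, the muon charge asymmetry measured here is conventionally defined differentially in the absolute muon pseudorapidity $|\eta_\mu|$ as

    $$ A_\mu = \frac{d\sigma_{W^+}/d|\eta_\mu| - d\sigma_{W^-}/d|\eta_\mu|}{d\sigma_{W^+}/d|\eta_\mu| + d\sigma_{W^-}/d|\eta_\mu|}, $$

    which is why the quoted uncertainties of 0.002 to 0.003 are absolute rather than relative: the asymmetry is a dimensionless ratio of cross-sections.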

    The clinical utility of pain classification in non-specific arm pain

    Mechanisms-based pain classification has received considerable attention recently for its potential use in clinical decision making. A number of algorithms for pain classification have been proposed. Non-specific arm pain (NSAP) is a poorly defined condition, which could benefit from classification according to pain mechanisms to improve treatment selection. This study used three published classification algorithms (hereafter called NeuPSIG, Smart and Schafer) to investigate the frequency of different pain classifications in NSAP and the clinical utility of these systems in assessing NSAP. Forty people with NSAP underwent a clinical examination and quantitative sensory testing. Findings were used to classify participants according to the three classification algorithms. The frequency of pain classifications, including the number unclassified, was analysed using descriptive statistics. Inter-rater agreement was analysed using kappa coefficients. NSAP was primarily classified as ‘unlikely neuropathic pain’ using the NeuPSIG criteria, ‘peripheral neuropathic pain’ using the Smart classification and ‘peripheral nerve sensitisation’ using the Schafer algorithm. Two of the three algorithms allowed classification of all but one participant; up to 45% of participants (n = 18) were categorised as mixed by the Smart classification. Inter-rater agreement was good for the Schafer algorithm (κ = 0.78) and moderate for the Smart classification (κ = 0.40). A kappa value was unattainable for the NeuPSIG algorithm, but agreement was high. Pain classification was achievable with high inter-rater agreement for two of the three algorithms assessed. The Smart classification may be useful but requires further direction regarding the use of the clinical criteria included. The impact of adding a pain classification to clinical assessment on patient outcomes needs to be evaluated.
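
    For reference, the inter-rater agreement statistic quoted above is Cohen's kappa,

    $$ \kappa = \frac{p_o - p_e}{1 - p_e}, $$

    where $p_o$ is the observed proportion of agreement between raters and $p_e$ the agreement expected by chance; by common convention, values around 0.4 indicate moderate agreement and values above roughly 0.75 good agreement, consistent with the interpretation given above.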