
    Characterizing octagonal and rectangular fibers for MAROON-X

    We report on the scrambling performance and focal-ratio degradation (FRD) of various octagonal and rectangular fibers considered for MAROON-X. Our measurements demonstrate the detrimental effect of thin claddings on the FRD of octagonal and rectangular fibers and show that stress induced at the connectors can further increase the FRD. We find that fibers with a thick, round cladding show low FRD. We further demonstrate that the scrambling behavior of non-circular fibers is often complex, and we introduce a new metric to fully capture non-linear scrambling performance, leading to much lower scrambling gain values than are typically reported in the literature (<1,000 compared to 10,000 or more). We find that scrambling gain measurements for small-core, non-circular fibers are often speckle dominated if the fiber is not agitated. Comment: 10 pages, 8 figures, submitted to SPIE Advances in Optical and Mechanical Technologies for Telescopes and Instrumentation 2016 (9912-185).
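
    As an illustrative sketch only (not the authors' code, and not their new non-linear metric), the conventional scrambling-gain definition that the quoted values (<1,000 versus 10,000 or more) refer to can be computed from near-field images as follows; the function names and the image-width proxy for the output spot size are assumptions.

    import numpy as np

    def centroid(image):
        """Intensity-weighted centroid (x, y) of a 2-D image."""
        y, x = np.indices(image.shape)
        total = image.sum()
        return np.array([(x * image).sum() / total, (y * image).sum() / total])

    def scrambling_gain(d_input, D_fiber, near_field_ref, near_field_shifted):
        """SG = (d/D) / (f/F): input-spot displacement d relative to the fiber
        diameter D, divided by the output centroid shift f relative to the
        output spot size F (here crudely approximated by the image width)."""
        f = np.linalg.norm(centroid(near_field_shifted) - centroid(near_field_ref))
        F = near_field_ref.shape[1]
        return (d_input / D_fiber) / (f / F)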

    Behavior of confined granular beds under cyclic thermal loading

    We investigate the mechanical behavior of a confined granular packing of irregular polyhedral particles under repeated heating and cooling cycles by means of numerical simulations with the Non-Smooth Contact Dynamics method. Assuming a homogeneous temperature distribution and a constant temperature rate, we study the effect of the container shape and of the thermal expansion coefficients on the pressure buildup at the confining walls and on the density evolution. We observe that small changes in the opening angle of the confinement can lead to a drastic reduction of the peak pressure. Furthermore, we obtain the displacement fields over several thermal cycles and discover the formation of torus-shaped convection cells inside the granular material. The root mean square of the vorticity is then calculated from the displacement fields, and a quadratic dependence on the ratio of the thermal expansion coefficients is established.
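
    As a minimal sketch of this kind of post-processing (assuming displacement fields sampled on a regular grid; not the authors' code), the root mean square of the out-of-plane vorticity can be estimated as follows.

    import numpy as np

    def rms_vorticity(ux, uy, dx=1.0, dy=1.0):
        """ux, uy: 2-D arrays of displacement components per thermal cycle on a
        regular grid with spacings dx, dy.  Returns the RMS of the curl
        component w = d(uy)/dx - d(ux)/dy."""
        duy_dx = np.gradient(uy, dx, axis=1)
        dux_dy = np.gradient(ux, dy, axis=0)
        w = duy_dx - dux_dy
        return np.sqrt(np.mean(w**2))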

    Real photon structure at an e^+e^- linear collider

    Previous studies of the kinematic coverage for measuring the photon structure function F_2^gamma at a future 500 GeV e^+e^- linear collider are updated using current estimates of luminosities and important detector parameters. The perturbative expansion for the evolution of F_2^gamma is briefly recalled in view of a recent claim that all existing next-to-leading order analyses of the photon structure are incorrect. A simple illustration is given of the different sensitivities of hadronic and photonic structure functions to the strong coupling constant alpha_s. Comment: 6 pages LaTeX including 7 eps-figures, uses espcrc2.sty (included). Talk presented at PHOTON'99, Freiburg (Germany), May 1999. To appear in the proceedings [Nucl. Phys. B (Proc. Suppl.)].
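
    For orientation, the standard single-tag kinematics that underlie such coverage studies are (textbook relations, not taken from this talk):

        Q^2 \simeq 2\,E_b\,E_{\mathrm{tag}}\,(1-\cos\theta_{\mathrm{tag}}),
        \qquad
        x = \frac{Q^2}{Q^2 + W^2 + P^2},

    where E_b is the beam energy, E_tag and theta_tag are the energy and polar angle of the tagged electron, W is the invariant mass of the hadronic final state, and P^2 is the (small) virtuality of the quasi-real target photon.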

    Enhancing Simulation Composability and Interoperability Using Conceptual/Semantic/Ontological Models

    (First paragraph) Two emerging trends in Modeling and Simulation (M&S) are beginning to dovetail in a potentially highly productive manner, namely conceptual modeling and semantic modeling. Conceptual modeling has existed for several decades, but its importance has risen to the forefront in the last decade (Taylor and Robinson, 2006; Robinson, 2007). Also, during the last decade, progress on the Semantic Web has begun to influence M&S, with the development of general modeling ontologies (Miller et al, 2004), as well as ontologies for modeling particular domains (Durak, 2006). An ontology, which is a formal specification of a conceptualization (Gruber et al, 1993), can be used to rigorously define a domain of discourse in terms of classes/concepts, properties/relationships and instances/individuals. For the Semantic Web, ontologies are typically specified using the Web Ontology Language (OWL). Although conceptual modeling is broader than just semantics (it includes additional issues such as pragmatics (Tolk et al, 2008)), progress in the Semantic Web and ontologies is certainly beneficial to conceptual modeling. Benefits accrue in many ways, including the large knowledge bases being placed on the Web in the numerous fields in which simulation studies are conducted, and the powerful reasoning algorithms based on description logic that allow the consistency of large specifications to be checked.
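
    As a small, hypothetical illustration of the ingredients listed above (classes/concepts, properties/relationships, instances/individuals), the following sketch builds a toy OWL ontology with rdflib; everything in the EX namespace is made up and not drawn from the cited ontologies.

    from rdflib import Graph, Literal, Namespace, RDF, RDFS, XSD
    from rdflib.namespace import OWL

    EX = Namespace("http://example.org/simulation#")
    g = Graph()
    g.bind("ex", EX)

    # Class/concept
    g.add((EX.SimulationModel, RDF.type, OWL.Class))
    # Property/relationship
    g.add((EX.hasTimeStep, RDF.type, OWL.DatatypeProperty))
    g.add((EX.hasTimeStep, RDFS.domain, EX.SimulationModel))
    g.add((EX.hasTimeStep, RDFS.range, XSD.double))
    # Instance/individual
    g.add((EX.queueModel1, RDF.type, EX.SimulationModel))
    g.add((EX.queueModel1, EX.hasTimeStep, Literal(0.01, datatype=XSD.double)))

    print(g.serialize(format="turtle"))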

    Improving estimates of diving lung volume in air-breathing marine vertebrates

    The air volume in the respiratory system of marine tetrapods provides a store of O2 to fuel aerobic metabolism during dives; however, it can also be a liability, as the associated N2 can increase the risk of decompression sickness. In order to more fully understand the physiological limitations of different air-breathing marine vertebrates, it is therefore important to be able to accurately estimate the air volume in the respiratory system during diving. One method that has been used to do so is to calculate the air volume from glide phases - periods of movement during which no thrust is produced by the animal - which many species conduct during ascent, when gases are expanding owing to decreasing hydrostatic pressure. This method assumes that mass is conserved in the respiratory system, with volume changes driven only by pressure. In this Commentary, we use previously published data to argue that both the respiratory quotient and differences in tissue and blood gas solubility potentially alter the mass balance in the respiratory system throughout a dive. Therefore, near the end of a dive, the measured volume of gas at a given pressure may be 12-50% less than it was at the same pressure at the start of the dive; the actual difference will depend on the length of the dive, the cardiac output, the pulmonary shunt and the metabolic rate. Novel methods and an improved understanding of diving physiology will be required to verify the size of the effects described here and to more accurately estimate the volume of gas inhaled at the start of a dive.
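
    A minimal sketch of the mass-balance assumption being questioned here (not the authors' method): under Boyle's law with a conserved amount of gas, a volume estimated at depth maps directly back to the volume inhaled at the surface. The constants and example numbers below are nominal assumptions.

    RHO_SW = 1025.0    # kg m^-3, nominal seawater density
    G = 9.81           # m s^-2
    P_ATM = 101325.0   # Pa, surface pressure

    def pressure_at_depth(depth_m):
        """Absolute hydrostatic pressure (Pa) at a given depth."""
        return P_ATM + RHO_SW * G * depth_m

    def surface_volume(volume_at_depth_l, depth_m):
        """Volume (L) the same amount of gas would occupy at the surface,
        assuming conservation of mass and Boyle's law only."""
        return volume_at_depth_l * pressure_at_depth(depth_m) / P_ATM

    # Example: 2 L of gas inferred from a glide at 50 m corresponds to about
    # 11.9 L at the surface.  If 12-50% of the gas has actually left the
    # respiratory system during the dive (the effect argued above), this
    # back-extrapolation underestimates the diving lung volume accordingly.
    print(surface_volume(2.0, 50.0))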

    A standard curve based method for relative real time PCR data processing

    BACKGROUND: Currently, real time PCR is the most precise method by which to measure gene expression. The method generates a large amount of raw numerical data, and processing may notably influence the final results. The data processing is based either on standard curves or on PCR efficiency assessment. At the moment, the PCR efficiency approach is preferred in relative PCR, whilst the standard curve is often used for absolute PCR. However, there are no barriers to employing standard curves for relative PCR. This article provides an implementation of the standard curve method and discusses its advantages and limitations in relative real time PCR. RESULTS: We designed a procedure for data processing in relative real time PCR. The procedure completely avoids PCR efficiency assessment, minimizes operator involvement and provides a statistical assessment of intra-assay variation. The procedure includes the following steps. (I) Noise is filtered from raw fluorescence readings by smoothing, baseline subtraction and amplitude normalization. (II) The optimal threshold is selected automatically from regression parameters of the standard curve. (III) Crossing points (CPs) are derived directly from the coordinates of points where the threshold line crosses the fluorescence plots obtained after the noise filtering. (IV) The means and their variances are calculated for CPs in PCR replicates. (V) The final results are derived from the CPs' means, and the CPs' variances are traced to the results by the law of error propagation. A detailed description and analysis of this data processing is provided. The limitations associated with the use of parametric statistical methods and amplitude normalization are specifically analyzed and found acceptable for routine laboratory practice. Different options are discussed for aggregating data obtained from multiple reference genes. CONCLUSION: A standard curve based procedure for PCR data processing has been compiled and validated. It illustrates that the standard curve design remains a reliable and simple alternative to PCR-efficiency based calculations in relative real time PCR.
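
    A minimal sketch of the standard-curve idea behind steps (IV)-(V), not the authors' full pipeline (no smoothing, baseline subtraction or threshold selection here): fit CP against log10 of a dilution series, convert a sample's CP mean to a relative quantity, and propagate the CP variance through the log-linear relation. All numbers are hypothetical.

    import numpy as np

    # Dilution series: known relative concentrations and measured crossing points
    log10_conc = np.log10([1.0, 0.1, 0.01, 0.001])
    cp_standard = np.array([18.1, 21.5, 24.9, 28.2])

    # Standard curve: CP = slope * log10(C) + intercept
    slope, intercept = np.polyfit(log10_conc, cp_standard, 1)

    def relative_quantity(cp_mean, cp_var):
        """Relative concentration and its variance from a CP mean and variance,
        propagating the error through log10(C) = (CP - intercept) / slope."""
        log10_c = (cp_mean - intercept) / slope
        c = 10.0 ** log10_c
        # dC/dCP = C * ln(10) / slope  =>  var(C) = (dC/dCP)^2 * var(CP)
        c_var = (c * np.log(10.0) / slope) ** 2 * cp_var
        return c, c_var

    print(relative_quantity(cp_mean=23.0, cp_var=0.04))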

    Neutron-deuteron scattering cross sections with chiral NN interactions using wave-packet continuum discretization

    In this work we present a framework that allows one to solve the Faddeev equations for three-nucleon scattering using the wave-packet continuum-discretization method. We perform systematic benchmarks against results in the literature and study in detail the convergence of this method with respect to the number of wave packets. We compute several different elastic neutron-deuteron scattering cross-section observables for a variety of energies using chiral nucleon-nucleon interactions. For the optimized next-to-next-to-leading order interaction N2LOopt we find good agreement with data for nucleon scattering energies E_Lab ≤ 70 MeV and a slightly larger maximum of the neutron analyzing power Ay(n) at E_Lab = 10 and 21 MeV compared with other interactions. This work represents a first step towards a systematic inclusion of three-nucleon scattering observables in the construction of next-generation nuclear interactions.
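
    A generic sketch of the wave-packet discretization itself (not the authors' code): bin the relative-momentum axis into N packets and represent the free Hamiltonian by the bin-averaged kinetic energy; convergence is then studied by increasing N. The units and the equidistant bin layout are assumptions.

    import numpy as np

    HBARC = 197.327    # MeV fm
    MASS_N = 938.918   # MeV, average nucleon mass (NN reduced mass ~ MASS_N / 2)

    def momentum_bins(k_max, n_packets):
        """Equidistant bins [k_i, k_{i+1}] covering (0, k_max] (fm^-1); each bin
        defines one free wave packet.  Returns edges, midpoints, widths."""
        edges = np.linspace(0.0, k_max, n_packets + 1)
        return edges, 0.5 * (edges[:-1] + edges[1:]), np.diff(edges)

    def kinetic_energies(edges):
        """Bin-averaged NN relative kinetic energy (MeV) of each packet:
        <T>_i = (1 / d_i) * int_bin (hbar k)^2 / m_N  dk."""
        k_lo, k_hi = edges[:-1], edges[1:]
        avg_k2 = (k_hi**3 - k_lo**3) / (3.0 * (k_hi - k_lo))
        return HBARC**2 * avg_k2 / MASS_N

    edges, mids, widths = momentum_bins(k_max=4.0, n_packets=100)
    print(kinetic_energies(edges)[:3])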

    Measuring of Corneal Thickness of Contact Lens Wearers with Keratoconus and Keratoplasty by Means of Optical Coherence Tomography (OCT)

    To measure the corneal thickness and the depth of the precorneal tear film of contact lens wearers with keratoconus or keratoplasty, and to reconfirm the identification and classification of keratoconus, with optical coherence tomography (OCT). The cornea and precorneal tear film of 123 eyes with keratoconus, 39 eyes after keratoplasty and 8 eyes after LASIK were examined with an OCT (Zeiss Visante™) and a keratograph (Oculus). Visual acuity was determined. The mean age of all patients was 42.7 years (s = 9); 35% of the patients were female and 65% were male. The central corneal thickness of the 123 eyes with keratoconus was 467 ± 73 µm. The nasal and especially the inferior corneal periphery exhibited about 9% less thickness (426 ± 83 µm). The cornea with keratoconus is thinner in the 90° meridian than in the 180° meridian (p < 0.01). This could be a clinically relevant result for the reduction of astigmatism after keratoplastic surgery. The central corneal thickness of the 39 eyes with keratoplasty was 555 ± 65 µm; these eyes showed peripheral regions with even less thickness. The thickness of the precorneal tear film of 114 contact lens wearers with keratoconus was 89 ± 42 µm in the horizontal meridian and 113 ± 56 µm in the vertical meridian. All the comparative results for keratoconus, keratoplasty and the depth of the precorneal tear film were highly statistically significant (p < 0.001). Optical coherence tomography is particularly suitable for the examination of eyes with keratoconus and keratoplasty. It delivers new insight into the corneal thickness of eyes with keratoconus and keratoplasty.

    Posterior predictive distributions of neutron-deuteron cross sections

    We quantify the posterior predictive distributions (PPDs) of elastic neutron-deuteron (nd) scattering cross sections using nucleon-nucleon (NN) interactions from chiral effective field theory (χEFT) up to and including next-to-next-to-next-to-leading order (N3LO). These PPDs quantify the spread in nd predictions due to the variability of the low-energy constants (LECs) inferred from NN scattering data. We use the wave-packet continuum discretization method to solve the Alt-Grassberger-Sandhas form of the Faddeev equations for elastic scattering. We draw 100 samples from the PPDs of nd cross sections up to 67 MeV in scattering energy, i.e., in the energy region where the effects of three-nucleon forces are expected to be small. We find that the uncertainty about NN LECs inferred from NN scattering data, when assuming uncorrelated errors, does not translate into significant uncertainty in the low-energy nd continuum. Based on our estimates, the uncertainty of nd predictions is dominated by the χEFT truncation error, at least below N3LO. At this order, the 90% credible interval of the PPD and the truncation error are comparable, although both are very small on an absolute scale.
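
    A schematic sketch of how a posterior predictive distribution is assembled (not the authors' code): draw LEC samples from a posterior, evaluate the observable for each sample, and summarize the spread with a 90% credible interval. The Gaussian posterior, the placeholder observable and all numbers below are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_lecs(mean, cov, n_samples):
        """Draw LEC vectors from a (here Gaussian) posterior."""
        return rng.multivariate_normal(mean, cov, size=n_samples)

    def observable(lecs):
        """Placeholder for an expensive calculation, e.g. an elastic nd cross
        section obtained by solving the Faddeev equations for given LECs."""
        return 1.0 + 0.1 * lecs.sum()

    lec_mean = np.zeros(3)
    lec_cov = 0.01 * np.eye(3)          # uncorrelated errors, as assumed above
    samples = sample_lecs(lec_mean, lec_cov, n_samples=100)
    ppd = np.array([observable(s) for s in samples])

    lo, hi = np.percentile(ppd, [5.0, 95.0])   # 90% credible interval of the PPD
    print(lo, hi)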

    Modeling Austrian Consumer Responses to a Vignette Television Commercial Drama For a Vacation Resort Destination

    This project involved the formulation and test of a model of Austrian consumers' cognitive and affective responses to a vignette television commercial drama for a vacation resort area, the Halkidiki region of Greece. Results indicate that response to the commercial was both cognitive and affective, with sympathy and empathy mediating the influence of verisimilitude on attitudes toward the ad and brand. These results were consistent with what was expected from a sample of consumers from a low power distance and moderately individualistic culture such as Austria. The results suggest that to promote tourism services effectively, a commercial's production value and realism must be high enough to produce verisimilitude and, in turn, sufficient sympathy and empathy to influence attitudes.