
    Soft Tissue to Hard Tissue Advancement Ratios for Mandibular Elongation Using Distraction Osteogenesis in Children

    Distraction osteogenesis is extensively used for the elongation of hypoplastic mandibles in children, yet the soft tissue profile response to such lengthening is not well understood. The pre- and posttreatment lateral cephalometric radiographs of 27 pediatric patients who underwent bilateral mandibular elongation using distraction osteogenesis were analyzed retrospectively to correlate horizontal soft tissue advancement with horizontal underlying bone advancement at B point and pogonion. Horizontal advancement (in millimeters) of bone and overlying soft tissue at these points was collected from the radiographs of each patient, and linear regression analysis was performed to determine the relationship of hard to soft tissue horizontal advancement at these points. A mean bone-to-soft-tissue advancement ratio of 1:0.90 was observed at B point/labiomental sulcus and at pogonion/soft tissue pogonion (linear regression slopes [β1 values] of 0.94 and 0.92, respectively). These ratios were consistent throughout the sample population and are highly predictive of the expected soft tissue response. Magnitude of advancement, age, and sex of the patient had no effect on these ratios in our population. This study improves our understanding of the soft tissue response that accompanies bony elongation during distraction osteogenesis, allowing more effective planning of the orthodontic and surgical interventions that optimize patients' functional and esthetic outcomes.
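
    As an aside for readers reproducing such an analysis, the slope reported above is simply an ordinary least-squares fit of soft tissue advancement against bone advancement. A minimal sketch follows; the paired millimetre values and variable names are hypothetical illustrations, not the study's data.

        import numpy as np
        from scipy import stats

        # Hypothetical paired measurements (mm): horizontal advancement of bone at
        # B point and of the overlying soft tissue at the labiomental sulcus.
        bone_advance = np.array([6.0, 8.5, 10.0, 12.0, 15.5, 18.0, 20.0])
        soft_advance = np.array([5.4, 7.9, 9.1, 10.7, 14.2, 16.1, 18.3])

        # Ordinary least-squares fit: soft = beta0 + beta1 * bone.
        # The slope beta1 is the hard-to-soft advancement ratio (reported as ~0.9).
        fit = stats.linregress(bone_advance, soft_advance)
        print(f"slope (beta1) = {fit.slope:.2f}, r^2 = {fit.rvalue**2:.2f}")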

    Soma: live performance where congruent musical, visual, and proprioceptive stimuli fuse to form a combined aesthetic narrative

    Artists and scientists have long had an interest in the relationship between music and visual art. Today, many occupy themselves with correlated animation and music, called 'visual music'. Established tools and paradigms for performing live visual music, however, have several limitations: virtually no user interface exists with an expressivity comparable to live musical performance; mappings between music and visuals are typically reduced to the music's beat and amplitude being statically associated with the visuals, disallowing close audiovisual congruence, tension and release, and suspended expectation in narratives; collaborative performance, common in other live art, is mostly absent due to technical limitations; and preparing or improvising performances is complicated, often requiring software development. This thesis addresses these limitations through a transdisciplinary integration of findings from several research areas, detailing the resulting ideas and their implementation in a novel system. Musical instruments are used as the primary control data source, accurately encoding all musical gestures of each performer: the advanced embodied knowledge musicians have of their instruments allows increased expressivity, the full control data bandwidth allows high mapping complexity, and musicians' familiarity with collaborative performance may translate to visual music performance. The conduct of Mutable Mapping, gradually creating, destroying and altering mappings, may allow for a narrative in mapping during performance. The art form of Soma, in which correlated auditory, visual and proprioceptive stimuli form a combined narrative, builds on knowledge that performers and audiences are more engaged in performance requiring advanced motor knowledge, and when congruent percepts across modalities coincide. Preparing and improvising are simplified by re-adapting the Processing programming language for artists to behave as a plug-in API, encapsulating complexity in modules which may be dynamically layered during performance. Design research methodology is employed during development and evaluation, with the additional viewpoint of ethnography introduced during evaluation, engaging musicians, audience and visuals performers.
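
    To make the idea of Mutable Mapping concrete, the following is a minimal sketch of a mapping layer that translates incoming musical control data into visual parameters and can be created, altered, or destroyed mid-performance. The class and parameter names are hypothetical illustrations, not the thesis's actual plug-in API (which is built on Processing).

        from typing import Callable, Dict

        # A mapping: takes a musical control value (e.g. note velocity) and
        # returns a visual parameter (e.g. brightness). Hypothetical sketch.
        Mapping = Callable[[float], float]

        class MutableMapper:
            def __init__(self) -> None:
                self.layers: Dict[str, Mapping] = {}

            def add(self, name: str, fn: Mapping) -> None:
                """Create a mapping layer during performance."""
                self.layers[name] = fn

            def remove(self, name: str) -> None:
                """Destroy a mapping layer, altering the audiovisual relationship."""
                self.layers.pop(name, None)

            def apply(self, control_value: float) -> Dict[str, float]:
                """Run every active layer on the incoming musical gesture."""
                return {name: fn(control_value) for name, fn in self.layers.items()}

        mapper = MutableMapper()
        mapper.add("brightness", lambda v: v / 127.0)   # velocity -> brightness
        mapper.add("hue", lambda v: (v * 2.0) % 360.0)  # velocity -> hue
        print(mapper.apply(96.0))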

    The Plausibility of a String Quartet Performance in Virtual Reality

    We describe an experiment that explores the contribution of auditory and other features to the illusion of plausibility in a virtual environment that depicts the performance of a string quartet. 'Plausibility' refers to the component of presence that is the illusion that the perceived events in the virtual environment are really happening. The features studied were: Gaze (the musicians ignored the participant, or sometimes looked towards and followed the participant's movements), Sound Spatialization (Mono, Stereo, Spatial), Auralization (no sound reflections, reflections corresponding to a room larger than the one perceived, or reflections that exactly matched the virtual room), and Environment (no sound from outside of the room, or birdsong and wind corresponding to the outside scene). We adopted a methodology based on color matching theory, in which 20 participants first assessed their feeling of plausibility in the environment with each of the four features at its highest setting. Then, on five occasions, participants started from a low setting on all features and made transitions from one system configuration to another until they matched their original feeling of plausibility. From these transitions a Markov transition matrix was constructed, along with probabilities of a match conditional on feature configuration. The results show that Environment and Gaze were individually the most important factors influencing the level of plausibility. The highest-probability transitions were to improve Environment and Gaze, and then Auralization and Spatialization. We present this work both as a contribution to the methodology of assessing presence without questionnaires and as a demonstration of how various aspects of a musical performance can influence plausibility.
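
    The transition analysis itself is straightforward to sketch: each observed move between system configurations increments a count, and rows are normalised to obtain an empirical Markov transition matrix. The configuration indices and transition counts below are made up for illustration and are not the experiment's data.

        import numpy as np

        # Hypothetical configurations: indices into a list of feature settings
        # (Gaze, Spatialization, Auralization, Environment). Each observed
        # transition (from_config, to_config) increments a count.
        n_configs = 4
        transitions = [(0, 1), (1, 3), (0, 2), (2, 3), (1, 3), (0, 1)]

        counts = np.zeros((n_configs, n_configs))
        for src, dst in transitions:
            counts[src, dst] += 1

        # Row-normalise to get the empirical transition matrix; rows with no
        # observed transitions are left as zeros.
        row_sums = counts.sum(axis=1, keepdims=True)
        P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
        print(P)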

    Microscopic calculation of 6Li elastic and transition form factors

    Variational Monte Carlo wave functions, obtained from a realistic Hamiltonian consisting of the Argonne v18 two-nucleon and Urbana-IX three-nucleon interactions, are used to calculate the 6Li ground-state longitudinal and transverse form factors as well as transition form factors to the first four excited states. The charge and current operators include one- and two-body components, the leading terms of which are constructed consistently with the two-nucleon interaction. The calculated form factors and radiative widths are in good agreement with available experimental data.
    Comment: 9 pages, 2 figures, REVTeX, submitted to Physical Review Letters, with updated introduction and references
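
    Schematically, and using standard definitions rather than expressions quoted from the paper, the longitudinal and transverse form factors are matrix elements of the nuclear charge and current operators between the variational wave functions, with each operator split into one- and two-body parts:

        F_L(q) \propto \langle \Psi_f | \rho(\mathbf{q}) | \Psi_i \rangle, \qquad
        F_T(q) \propto \langle \Psi_f | \mathbf{j}(\mathbf{q}) | \Psi_i \rangle, \qquad
        \rho = \rho^{(1)} + \rho^{(2)}, \quad \mathbf{j} = \mathbf{j}^{(1)} + \mathbf{j}^{(2)}.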

    PHP22 EFFECTS OF DECENTRALIZED RESPONSIBILITY FOR COSTS OF OUTPATIENT PRESCRIPTION DRUGS ON THE PHARMACEUTICAL COST DEVELOPMENT IN SWEDEN


    Single-Brane Cosmological Solutions with a Stable Compact Extra Dimension

    We consider 5-dimensional cosmological solutions of a single brane. The correct cosmology on the brane, i.e., cosmology governed by the standard 4-dimensional Friedmann equation, and stable compactification of the extra dimension are guaranteed by the existence of a non-vanishing \hat{T}^5_5 which is proportional to the 4-dimensional trace of the energy-momentum tensor. We show that this component of the energy-momentum tensor arises from the backreaction of the dilaton coupling to the brane. The same positive features are exhibited by solutions found in the presence of non-vanishing cosmological constants both on the brane (\Lambda_{br}) and in the bulk (\Lambda_B). Moreover, the restoration of the Friedmann equation, with the correct sign, takes place for both signs of \Lambda_B so long as the sign of \Lambda_{br} is opposite to that of \Lambda_B, so as to cancel the energy densities of the two cosmological constants. We further extend our single-brane thin-wall solution to allow a brane with finite thickness.
    Comment: 25 pages, LaTeX file, no figures, comments added, references updated, final version to appear in Physical Review
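
    For reference, the standard 4-dimensional Friedmann equation that the brane cosmology is required to reproduce is, in its textbook form with curvature and cosmological-constant terms (this is not an equation quoted from the paper),

        H^2 = \left( \frac{\dot{a}}{a} \right)^2 = \frac{8 \pi G}{3} \rho + \frac{\Lambda}{3} - \frac{k}{a^2}.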

    Probing Kaluza-Klein Dark Matter with Neutrino Telescopes

    In models in which all of the Standard Model fields live in extra universal dimensions, the lightest Kaluza-Klein (KK) particle can be stable. Calculations of the one-loop radiative corrections to the masses of the KK modes suggest that the lightest KK particle (LKP) is most likely the first KK excitation of the hypercharge gauge boson. This LKP is a viable dark matter candidate with an ideal present-day relic abundance if its mass is moderately large, between 600 and 1200 GeV. Such weakly interacting dark matter particles are expected to become gravitationally trapped in large bodies, such as the Sun, and to annihilate into neutrinos or other particles that decay into neutrinos. We calculate the annihilation rate, the neutrino flux, and the resulting event rate in present and future neutrino telescopes. The relatively large mass implies that the neutrino energy spectrum is expected to be well above the energy threshold of AMANDA and IceCube. We find that the event rate in IceCube is between a few and a few tens of events per year.
    Comment: 13 pages, 3 figures, LaTeX; typos fixed, version to appear in PR
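
    The annihilation rate in the Sun is usually obtained from the standard interplay of capture and annihilation (a textbook treatment, not an expression specific to this paper): with capture rate C and annihilation coefficient C_A, the annihilation rate after time t is

        \Gamma_A = \frac{C}{2} \tanh^2\left( \frac{t}{\tau} \right), \qquad \tau = \frac{1}{\sqrt{C C_A}},

    so that \Gamma_A \to C/2 once capture and annihilation reach equilibrium (t \gg \tau).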

    HIGH RESOLUTION AT LOW ENERGIES WITH AN IRON DOUBLE-FOCUSING BETA-RAY SPECTROMETER


    Gamma Lines without a Continuum: Thermal Models for the Fermi-LAT 130 GeV Gamma Line

    Recent claims of a line in the Fermi-LAT photon spectrum at 130 GeV are suggestive of dark matter annihilation in the galactic center and other dark matter-dominated regions. If the Fermi feature is indeed due to dark matter annihilation, the best-fit line cross section, together with the lack of any corresponding excess in continuum photons, poses an interesting puzzle for models of thermal dark matter: the line cross section is too large to be generated radiatively from open Standard Model annihilation modes, and too small to provide efficient dark matter annihilation in the early universe. We discuss two mechanisms to solve this puzzle and illustrate each with a simple reference model in which dark matter annihilates dominantly into photonic final states. The first mechanism is resonant annihilation, which enhances the annihilation cross section during freeze-out and allows for a sufficiently large present-day annihilation cross section. The second is cascade annihilation, with a hierarchy between p-wave and s-wave processes. Both mechanisms require near-degeneracies in mass and predict states with masses closely related to the dark matter mass; resonant freeze-out in addition requires new charged particles at the TeV scale.
    Comment: 17 pages, 8 figures
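
    The resonant mechanism relies on the familiar Breit-Wigner enhancement of the annihilation cross section when the dark matter pair sits near an s-channel resonance of mass m_R and width \Gamma_R (schematic textbook form, not the paper's exact expression):

        \sigma v \propto \frac{1}{(s - m_R^2)^2 + m_R^2 \Gamma_R^2}, \qquad s \simeq 4 m_\chi^2 \left( 1 + \frac{v^2}{4} \right),

    so the proximity of 2 m_\chi to m_R controls how strongly the cross section is enhanced at a given relative velocity.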

    Statistical coverage for supersymmetric parameter estimation: a case study with direct detection of dark matter

    Models of weak-scale supersymmetry offer viable dark matter (DM) candidates. Their parameter spaces are, however, rather large and complex, such that pinning down the actual parameter values from experimental data can depend strongly on the employed statistical framework and scanning algorithm. In frequentist parameter estimation, a central requirement for properly constructed confidence intervals is that they cover the true parameter values, preferably at exactly the stated confidence level when experiments are repeated infinitely many times. Since most widely used scanning techniques are optimised for Bayesian statistics, one needs to assess their ability to provide correct confidence intervals in terms of statistical coverage. Here we investigate this for the Constrained Minimal Supersymmetric Standard Model (CMSSM) when constrained only by data from direct searches for dark matter. We construct confidence intervals from one-dimensional profile likelihoods and study the coverage by generating several pseudo-experiments for a few benchmark sets of pseudo-true parameters. We use nested sampling to scan the parameter space and evaluate the coverage for the benchmarks when either flat or logarithmic priors are imposed on the gaugino and scalar mass parameters. The sampling algorithm is used in the configuration usually adopted for exploration of the Bayesian posterior. We observe both under- and over-coverage, which in some cases vary quite dramatically when benchmarks or priors are modified. We show how most of the variation can be explained as the impact of explicit priors as well as sampling effects, where the latter are indirectly imposed by physicality conditions. For comparison, we also evaluate the coverage of Bayesian credible intervals, and observe significant under-coverage in those cases.
    Comment: 30 pages, 5 figures; v2 includes major updates in response to referee's comments; extra scans and tables added, discussion expanded, typos corrected; matches published version
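
    The coverage test itself is conceptually simple and can be sketched with a toy one-parameter Gaussian likelihood (purely illustrative; the paper's scans use nested sampling over the full CMSSM parameter space): generate pseudo-experiments at a fixed pseudo-true value, build an interval from each, and count how often the intervals contain the truth.

        import numpy as np

        rng = np.random.default_rng(0)
        true_mu, sigma, n_pseudo = 1.0, 0.5, 1000

        covered = 0
        for _ in range(n_pseudo):
            # One pseudo-experiment: a single Gaussian measurement of the parameter.
            x = rng.normal(true_mu, sigma)
            # 68% confidence interval from the (Gaussian) profile likelihood.
            lo, hi = x - sigma, x + sigma
            covered += (lo <= true_mu <= hi)

        # Exact coverage would give a fraction close to 0.68; significantly
        # smaller or larger fractions indicate under- or over-coverage.
        print(f"estimated coverage: {covered / n_pseudo:.3f}")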