
    A new model for mixing by double-diffusive convection (semi-convection): I. The conditions for layer formation

    The process referred to as "semi-convection" in astrophysics, and as "double-diffusive convection in the diffusive regime" in Earth and planetary sciences, occurs in stellar and planetary interiors in regions which are stable according to the Ledoux criterion but unstable according to the Schwarzschild criterion. In this series of papers, we analyze the results of an extensive suite of 3D numerical simulations of the process, and ultimately propose a new 1D prescription for heat and compositional transport in this regime which can be used in stellar or planetary structure and evolution models. In a preliminary study of the phenomenon, Rosenblum et al. (2011) showed that, after saturation of the primary instability, a system can evolve in one of two possible ways: the induced turbulence either remains homogeneous, with very weak transport properties, or transitions into a thermo-compositional staircase where the transport rate is much larger (albeit still smaller than in standard convection). In this paper, we show that this dichotomous behavior is a robust property of semi-convection across a wide region of parameter space. We propose a simple semi-analytical criterion to determine whether layer formation is expected or not, and at what rate it proceeds, as a function only of the background stratification and of the diffusion parameters (viscosity, thermal diffusivity and compositional diffusivity). The theoretical criterion matches the outcome of our numerical simulations well in the numerically accessible "planetary" parameter regime, and can easily be extrapolated to the stellar parameter regime. Subsequent papers will address more specifically the question of quantifying transport in the layered and non-layered cases. Comment: Submitted to Ap
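The semi-convective regime described in this abstract can be stated compactly: a region is Schwarzschild-unstable but Ledoux-stable when its temperature gradient exceeds the adiabatic gradient yet falls below the Ledoux threshold, which adds a composition-gradient term. A minimal sketch in standard stellar-structure notation (the function name and the default φ/δ = 1 are illustrative, not from the paper):

```python
def classify_stratification(grad, grad_ad, grad_mu, phi_over_delta=1.0):
    """Classify a region using the Schwarzschild and Ledoux criteria
    (standard textbook definitions, not spelled out in the abstract).

    grad    : actual temperature gradient, dlnT/dlnP
    grad_ad : adiabatic temperature gradient
    grad_mu : mean-molecular-weight gradient, dln(mu)/dlnP
    """
    ledoux_threshold = grad_ad + phi_over_delta * grad_mu
    if grad <= grad_ad:
        return "stable"           # stable by both criteria
    elif grad < ledoux_threshold:
        return "semi-convective"  # Schwarzschild-unstable, Ledoux-stable
    else:
        return "convective"       # unstable by both criteria
```

For example, with grad_ad = 0.4 and grad_mu = 0.1, any actual gradient between 0.4 and 0.5 falls in the semi-convective window the paper studies.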

    Dynamics of fingering convection I: Small-scale fluxes and large-scale instabilities

    Double-diffusive instabilities are often invoked to explain enhanced transport in stably-stratified fluids. The most-studied natural manifestation of this process, fingering convection, commonly occurs in the ocean's thermocline and typically increases diapycnal mixing by two orders of magnitude over molecular diffusion. Fingering convection is also often associated with structures on much larger scales, such as thermohaline intrusions, gravity waves and thermohaline staircases. In this paper, we present an exhaustive study of the phenomenon from small to large scales. We perform the first three-dimensional simulations of the process at realistic values of the heat and salt diffusivities and provide accurate estimates of the induced turbulent transport. Our results are consistent with oceanic field measurements of diapycnal mixing in fingering regions. We then develop a generalized mean-field theory to study the stability of fingering systems to large-scale perturbations, using our calculated turbulent fluxes to parameterize small-scale transport. The theory recovers the intrusive instability, the collective instability, and the gamma-instability as limiting cases. We find that the fastest-growing large-scale mode depends sensitively on the ratio of the background gradients of temperature and salinity (the density ratio). While only intrusive modes exist at high density ratios, the collective and gamma-instabilities dominate the system at the low density ratios where staircases are typically observed. We conclude by discussing our findings in the context of staircase formation theory. Comment: 23 pages, 9 figures, submitted to JF
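The density ratio invoked in this abstract controls which regime and which large-scale mode dominates. In the standard heat-salt notation, fingering convection requires 1 < R_ρ < 1/τ, where τ = κ_S/κ_T ≈ 0.01 is the diffusivity ratio for heat and salt; a one-line check under those textbook definitions (the function name is illustrative, not from the paper):

```python
def fingering_unstable(R_rho, tau=0.01):
    """Return True where the stratification is fingering-unstable.

    R_rho : density ratio, alpha*dT/dz divided by beta*dS/dz
    tau   : compositional-to-thermal diffusivity ratio kappa_S/kappa_T
            (~0.01 for heat and salt in seawater)
    """
    return 1.0 < R_rho < 1.0 / tau
```

Low density ratios just above 1, where the abstract reports that the collective and gamma-instabilities dominate, sit at the strongly unstable end of this window.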

    Dynamics of fingering convection II: The formation of thermohaline staircases

    Regions of the ocean's thermocline unstable to salt fingering are often observed to host thermohaline staircases, stacks of deep well-mixed convective layers separated by thin stably-stratified interfaces. Decades after their discovery, however, their origin remains controversial. In this paper we use 3D direct numerical simulations to shed light on the problem. We study the evolution of an analogous double-diffusive system, starting from an initial statistically homogeneous fingering state, and find that it spontaneously transforms into a layered state. By analysing our results in the light of the mean-field theory developed in Paper I, a clear picture of the sequence of events resulting in the staircase formation emerges. A collective instability of homogeneous fingering convection first excites a field of gravity waves, with a well-defined vertical wavelength. However, the waves saturate early through regular but localized breaking events, and are not directly responsible for the formation of the staircase. Meanwhile, slower-growing, horizontally invariant but vertically quasi-periodic gamma-modes are also excited and grow according to the gamma-instability mechanism. Our results suggest that the nonlinear interaction between these various mean-field modes of instability leads to the selection of one particular gamma-mode as the staircase progenitor. Upon reaching a critical amplitude, this progenitor overturns into a fully-formed staircase. We conclude by extending the results of our simulations to real oceanic parameter values, and find that the progenitor gamma-mode is expected to grow on a timescale of a few hours, and leads to the formation of a thermohaline staircase in about one day with an initial spacing of the order of one to two metres. Comment: 18 pages, 9 figures, associated mpeg file at http://earth.uni-muenster.de/~stellma/movie_small.mp4, submitted to JF
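In the gamma-instability mechanism this abstract invokes, layering grows where the total flux ratio γ decreases with the density ratio R_ρ (dγ/dR_ρ < 0). A minimal, hedged sketch of checking that criterion on tabulated flux-ratio values (the arrays and function name are illustrative; the paper's actual fluxes come from its simulations):

```python
import numpy as np

def gamma_unstable_intervals(R_rho, gamma):
    """Finite-difference check of the gamma-instability criterion:
    an interval is flagged unstable where gamma decreases with R_rho.

    R_rho : 1D array of density-ratio sample points (increasing)
    gamma : 1D array of total flux ratios at those points
    Returns a boolean array, one entry per interval between samples.
    """
    dgamma_dR = np.diff(gamma) / np.diff(R_rho)
    return dgamma_dR < 0.0
```

On fluxes where γ first falls and then rises with R_ρ, only the falling branch is flagged, matching the abstract's statement that staircases appear at low density ratios.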

    Telecommunications and data acquisition support for the Pioneer Venus Project: Pioneers 12 and 13, prelaunch through March 1984

    The support provided by the Telecommunications and Data Acquisition organization of the Jet Propulsion Laboratory (JPL) to the Pioneer Venus missions is described. The missions were the responsibility of the Ames Research Center (ARC). The Pioneer 13 mission and its spacecraft design presented one of the greatest challenges to the Deep Space Network (DSN) in the implementation and operation of new capabilities. The four probes that were to enter the atmosphere of Venus were turned on shortly before arrival at Venus, and the DSN had to acquire each of these probes in order to recover the telemetry being transmitted. Furthermore, a science experiment involving these probes descending through the atmosphere required a completely new data type to be generated at the ground stations, known as differential very long baseline interferometry. Discussions between ARC and JPL of the implementation requirements involved trade-offs in spacecraft design and led to a very successful return of science data. Specific implementation and operational techniques are discussed, not only for the prime mission, but also for the extended support to the Pioneer 12 spacecraft (in orbit around Venus) with its science instruments, including that for radar observations of the planet.

    Content analysis of instructor tools for building a learning community

    This work presents a content analysis of an online discussion forum accompanying a face-to-face introductory physics course. Content analysis is a quantitative method for analyzing text that uses a coding scheme to gain insight into student discussions. We explore the effects of "anchor" tasks, small weekly activities to help students engage with each other. The goal of this analysis was to examine how the distributions of codes are impacted by anchor versus non-anchor tasks, and by different types of anchors. The coding scheme was able to detect some differences between anchor and non-anchor threads, but observing further behaviors would require a more in-depth analysis of the text. This research is significant for physics education research (PER) because there is little PER using content analysis or studying online talk. It is a step towards identifying patterns in conversations between physics students and the tools that may help them have on-topic conversations essential for their learning. Identifying such tools can aid instructors in creating effective online learning environments, and this project introduces "anchor" tasks as instructor tools for building a learning community.
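The code-distribution comparison this abstract describes amounts to tallying how often each code appears in anchor versus non-anchor threads and normalizing the counts. A minimal sketch (the code labels here are illustrative, not the study's actual scheme):

```python
from collections import Counter

def code_distribution(codes):
    """Normalized distribution of content-analysis codes over a set of
    coded posts, e.g. codes = ["question", "question", "answer", "social"].
    Returns a dict mapping each code to its fraction of all posts."""
    counts = Counter(codes)
    total = sum(counts.values())
    return {code: n / total for code, n in counts.items()}

# Comparing anchor vs. non-anchor threads is then a matter of computing
# code_distribution() for each group of threads and contrasting the results.
```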

    Quantifying the linguistic persistence of high and low performers in an online student forum

    This work uses recurrence quantification analysis (RQA) to analyze the online forum discussion between students in an introductory physics course. Previous network and content analysis found differences in student conversations occurring between semesters of data from an introductory physics course; this led us to probe which concepts occur and persist within conversations. RQA is a dynamical systems technique to map the number and structure of repetitions in a time series. We treat the transcript of forum conversations as a time series and apply RQA techniques to it. We characterize the forum behaviors of high- and low-scoring students, such as their percentage of recurring topics and their persistence in discussing a topic over time. We quantify how high-scoring and low-scoring students use the online discussion forum and test whether different patterns exist for these groups. This work is the first adaptation of recurrence quantification methods from the field of psychology for physics education research. Using RQA, there was not a general, observable difference in how the two groups, high- and low-scoring students, used the forum; however, there were differences when focusing on a comparison of one high-scoring student and one low-scoring student. This technique has the potential for analyzing other PER data such as interviews or student discussions.
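At its core, RQA on a categorical series builds a recurrence matrix marking every pair of time points that share the same state; the recurrence rate is then the fraction of recurring off-diagonal pairs. A minimal sketch of that first step, treating forum posts as a topic-labeled series as the abstract describes (topic labels are illustrative):

```python
import numpy as np

def recurrence_rate(sequence):
    """Recurrence rate of a categorical time series: build the recurrence
    matrix R[i, j] = (sequence[i] == sequence[j]) and return the fraction
    of off-diagonal entries that recur."""
    seq = np.asarray(sequence)
    R = seq[:, None] == seq[None, :]  # full recurrence matrix
    n = len(seq)
    off_diagonal_hits = R.sum() - n   # exclude trivial self-matches
    return off_diagonal_hits / (n * n - n)
```

A sequence of posts that keeps returning to the same topic yields a high recurrence rate; a sequence that never revisits a topic yields zero, which is the kind of contrast the abstract draws between individual students.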

    Assigning channels via the meet-in-the-middle approach

    We study the complexity of the Channel Assignment problem. By applying the meet-in-the-middle approach we get an algorithm for the $\ell$-bounded Channel Assignment problem (when the edge weights are bounded by $\ell$) running in time $O^*((2\sqrt{\ell+1})^n)$. This is the first algorithm which breaks the $(O(\ell))^n$ barrier. We extend this algorithm to the counting variant, at the cost of a slightly higher polynomial factor. A major open problem asks whether Channel Assignment admits an $O(c^n)$-time algorithm, for a constant $c$ independent of $\ell$. We consider a similar question for Generalized T-Coloring, a CSP problem that generalizes Channel Assignment. We show that Generalized T-Coloring does not admit a $2^{2^{o(\sqrt{n})}} \mathrm{poly}(r)$-time algorithm, where $r$ is the size of the instance. Comment: SWAT 2014: 282-29
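The meet-in-the-middle approach this abstract relies on is easiest to see on a toy problem: split the ground set in half, enumerate all 2^(n/2) partial solutions for each half, and match them across the split, trading a 2^n scan for two 2^(n/2) enumerations plus a search. A sketch on subset sum, purely to illustrate the technique (this is not the paper's Channel Assignment algorithm):

```python
from itertools import combinations
from bisect import bisect_right

def subset_sum_mitm(weights, target):
    """Meet-in-the-middle subset sum: does some subset of `weights`
    sum exactly to `target`?"""
    half = len(weights) // 2
    left, right = weights[:half], weights[half:]

    def all_subset_sums(items):
        return [sum(c) for r in range(len(items) + 1)
                for c in combinations(items, r)]

    right_sums = sorted(all_subset_sums(right))
    for s in all_subset_sums(left):
        # binary-search the sorted right-half sums for the complement
        i = bisect_right(right_sums, target - s) - 1
        if i >= 0 and right_sums[i] == target - s:
            return True
    return False
```

The same split-enumerate-combine pattern underlies the paper's $O^*((2\sqrt{\ell+1})^n)$ bound, with the halves taken over assignments rather than subset sums.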

    Linear Atom Guides: Guiding Rydberg Atoms and Progress Toward an Atom Laser.

    In this thesis, I explore a variety of experiments within linear, two-wire, magnetic atom guides. Experiments include guiding of Rydberg atoms; transferring between states while keeping the atoms contained within the guide; and designing, constructing, and testing a new experimental apparatus. The ultimate goal of the atom guiding experiments is to develop a continuous atom laser. The guiding of Rydberg atoms is demonstrated. The evolution of the atoms is driven by the combined effects of dipole forces acting on the center-of-mass degree of freedom as well as internal-state transitions. Time-delayed microwave and state-selective field ionization are used to investigate the evolution of the internal-state distribution as well as the Rydberg atom motion while traversing the guide. The observed decay time of the guided-atom signal is about five times that of the initial state. A population transfer between Rydberg states contributes to this lengthened lifetime, and also broadens the observed field ionization spectrum. Transfer from one guided ground state to another is studied. In our setup, before the atoms enter the guide, they are pumped into the $|F=1, m_F=-1\rangle$ state. Using two repumpers, one tuned to the $F=1 \rightarrow F'=0$ transition and the other tuned to the $F=1 \rightarrow F'=2$ transition, the atoms are pumped between these guided states. Magnetic reflections within the guide are also studied. Design and construction of a new linear magnetic atom guide is detailed. This guide β has many improvements over the original guide α: a Zeeman slower, magnetic injection, a physical shutter, and surface adsorption evaporative cooling are some of the main changes. Testing of this new system is underway. It is hoped that the improvements to guide β will yield an atom density sufficient to reach degeneracy, thereby forming a continuous BEC at the end of the guide. The BEC, which will be continuously replenished by the atoms within the guide, will be outcoupled to form a continuous atom laser. PhD thesis, Physics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/99907/1/traxlerm_1.pd

    PGS4 ESOMEPRAZOLE AS MAINTENANCE THERAPY IN EROSIVE ESOPHAGITIS: A QUANTITATIVE ASSESSMENT OF EFFICACY USING AN EVIDENCE-BASED APPROACH


    Functional Projections of Predicates: Experimental Evidence from Coordinate Structure Processing

    This paper reports the results of six experiments involving an on-line self-paced reading task that examine the processing of coordinated small clause predicate phrases versus coordinated argument NPs. The results have particular significance for the analysis of small clause complement constructions, and support accounts wherein the small clause complement has an Agr projection associated with it. An adequate explanation of the processing of small clause coordination is shown to motivate a new parsing principle, Coordination Feature-matching, which accounts for the longer reading times observed for the coordination of predicates in small clause complements.