
    Military Aid to the Civil Authority in Mid-19th-Century New Brunswick

    During the mid-19th century, the role of the military in New Brunswick began to change. Although its primary function remained defence against invasion, the civil power called on it with increasing frequency; first the British regulars and later the militia assisted in capacities ranging from fighting fires to policing. Nevertheless, as New Brunswick changed from colony to province, the militia did not automatically replace the imperial garrison. Civil authorities were reluctant to call on it, and volunteers assumed this role only after the regulars departed in 1869. This article first examines the types of disorder that occurred between the 1830s and the 1870s. It next considers the 18 known instances during this period when the civil authorities called out British regulars and provincial (i.e., county-based) militias to aid them. It finally looks at factors that discouraged such use of the militia.

    “The Vast Experiment”: The New Brunswick Militia’s 1865 Camp of Instruction

    In July 1865, almost 1,000 New Brunswick militiamen assembled in Fredericton for a twenty-four-day Camp of Instruction. This was the first time peacetime militia training on this scale had been attempted in British North America. At the time, and since, the camp was praised as a notable achievement. New Brunswick’s lieutenant-governor and provincial commander-in-chief, Arthur Hamilton Gordon, wrote: “The entire success of this experiment is admitted with an unanimity which is remarkable, because rare in this Province.” Although the uniqueness of the camp has been the subject of some interest, it has never been subjected to in-depth analysis. This article moves beyond generalizations to examine the purpose of the camp and what actually transpired. The Camp of Instruction tells us a great deal about the nature and character of New Brunswick’s militia on the eve of Confederation, and its outcome helps to explain why the province moved towards political union as an answer to the inadequate state of its defences.

    Subsurface Inclusion Quantification Using Multi-Frequency Ultrasonic Surface Waves

    Rolling contact fatigue (RCF) is one of the major sources of subsurface-initiated spalling in bearing components. These spalls initiate in regions where microcracks form due to a localized increase in stress from inclusions near the surface. Ultrasonic bulk inspection is a powerful method for detecting defects in a large volume of steel. However, inclusions near the surface of a sample can elude detection because reflections from the inclusion can be masked by reflections from the sample surface. This limitation can be eliminated if ultrasonic surface wave methods are used for inspection. Surface waves have material displacements localized near the sample surface that decay with depth, giving an effective inspection depth on the order of the wavelength. Ultrasonic scattering from inclusions is also wavelength-dependent, and these two aspects can complicate the interpretation of ultrasonic experimental data. In this presentation, a model is described for the scattering of a surface wave by a subsurface spherical inclusion. The amplitude-versus-depth profile of a surface wave is combined with the solution for the scattering of a shear wave from a spherical scatterer in order to approximate the problem of interest. Trends of reflected amplitude with respect to inspection frequency, inclusion depth, and inclusion diameter are discussed first. Then a necessary calibration experiment is described that uses subsurface defects of known size created using femtosecond laser machining. A model of the calibration sample allows measurements on unknown samples to be interpreted quantitatively. The final analysis shows that the reflected amplitude from multiple-frequency measurements can be used to characterize the size and depth of subsurface inclusions.
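
    As a rough illustration of the trends just described (this sketch is not from the presentation itself), the reflected amplitude can be approximated by weighting a long-wavelength scattering term by the surface wave's decay with depth. The exponential depth decay, the Rayleigh-regime scaling, and all numerical values below are assumptions chosen only to show how frequency, inclusion depth, and inclusion diameter enter.

        # Hedged sketch: reflected-amplitude trend for a subsurface spherical
        # inclusion probed by a surface wave.  Assumes (1) the surface-wave
        # amplitude decays roughly exponentially over a depth of order one
        # wavelength and (2) long-wavelength (Rayleigh-regime) scattering, where
        # the scattered amplitude scales as volume / wavelength^2.
        import numpy as np

        def reflected_amplitude(freq_hz, depth_m, diameter_m, wave_speed=3000.0):
            """Relative reflected amplitude (arbitrary units); wave_speed is an
            assumed Rayleigh-wave speed for bearing steel."""
            wavelength = wave_speed / freq_hz
            k = 2.0 * np.pi / wavelength
            depth_weight = np.exp(-k * depth_m)        # surface-wave decay with depth
            volume = (4.0 / 3.0) * np.pi * (diameter_m / 2.0) ** 3
            scattering = volume / wavelength ** 2      # long-wavelength scaling
            return depth_weight * scattering

        # Example: a 50-um inclusion 100 um below the surface, 5-50 MHz inspection band
        for f in np.linspace(5e6, 50e6, 10):
            print(f"{f / 1e6:5.1f} MHz -> {reflected_amplitude(f, 100e-6, 50e-6):.3e}")

    In this simplified picture the scattering term grows with frequency while the depth weighting shrinks, and that competition is one reason measurements at multiple frequencies are needed to separate inclusion size from inclusion depth.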

    Systematic Renormalization in Hamiltonian Light-Front Field Theory

    We develop a systematic method for computing a renormalized light-front field theory Hamiltonian that can lead to bound states that rapidly converge in an expansion in free-particle Fock-space sectors. To accomplish this without dropping any Fock sectors from the theory, and to regulate the Hamiltonian, we suppress the matrix elements of the Hamiltonian between free-particle Fock-space states that differ in free mass by more than a cutoff. The cutoff violates a number of physical principles of the theory, and thus the Hamiltonian is not just the canonical Hamiltonian with masses and couplings redefined by renormalization. Instead, the Hamiltonian must be allowed to contain all operators that are consistent with the unviolated physical principles of the theory. We show that if we require the Hamiltonian to produce cutoff-independent physical quantities and we require it to respect the unviolated physical principles of the theory, then its matrix elements are uniquely determined in terms of the fundamental parameters of the theory. This method is designed to be applied to QCD, but for simplicity, we illustrate our method by computing and analyzing second- and third-order matrix elements of the Hamiltonian in massless phi-cubed theory in six dimensions.
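
    As a schematic illustration of the kind of regulator described above (the specific Gaussian form is an assumption for illustration, not a quotation of the paper's equations), the matrix elements between free-particle Fock-space states can be suppressed according to the difference of their free invariant masses,

        \langle F | H(\Lambda) | I \rangle = e^{-(M_F^2 - M_I^2)^2 / \Lambda^4} \, \langle F | \widetilde{H}(\Lambda) | I \rangle ,

    where \Lambda is the cutoff and M_F, M_I are the free masses of the final and initial states. Matrix elements between states whose free masses squared differ by much more than \Lambda^2 are then exponentially small, which is what allows the bound-state expansion in Fock-space sectors to converge rapidly.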

    Systematic Renormalization in Hamiltonian Light-Front Field Theory: The Massive Generalization

    Hamiltonian light-front field theory can be used to solve for hadron states in QCD. To this end, a method has been developed for systematic renormalization of Hamiltonian light-front field theories, with the hope of applying the method to QCD. That method assumed massless particles, so its immediate application to QCD is limited to gluon states or states where quark masses can be neglected. This paper builds on the previous work by including particle masses non-perturbatively, which is necessary for a full treatment of QCD. We show that several subtle new issues are encountered when including masses non-perturbatively. The method with masses is algebraically and conceptually more difficult; however, we focus on how the methods differ. We demonstrate the method using massive phi^3 theory in 5+1 dimensions, which has important similarities to QCD.
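
    One place where the masses enter non-perturbatively is the free invariant mass that the cutoff acts on. In standard light-front kinematics (this formula is standard, not specific to the paper), a Fock state of particles with masses m_i, longitudinal momentum fractions x_i, and relative transverse momenta \kappa_{\perp i} (with \sum_i x_i = 1 and \sum_i \kappa_{\perp i} = 0) has

        M_{\mathrm{free}}^2 = \sum_i \frac{\kappa_{\perp i}^2 + m_i^2}{x_i} ,

    so the masses appear divided by momentum fractions rather than as small additive shifts, and every sector's free mass, and hence the action of the cutoff, depends on them.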

    Arrangement of sympathetic fibers within the human common peroneal nerve: Implications for microneurography

    Recently, interest has grown in the firing patterns of individual or multiunit action potentials in human muscle sympathetic nerve recordings obtained using microneurography. Little is known, however, about the sympathetic fiber distribution in human lower-limb nerves, which affects these multiunit recordings. Therefore, the purpose of this study was to examine the sympathetic fiber distribution within the human common peroneal nerve using immunohistochemical techniques (tyrosine hydroxylase, avidin-biotin complex technique). Five-micrometer transverse and 10-μm longitudinal sections, fixed in formaldehyde, were obtained from the peroneal nerve that had been harvested from three human cadavers (83 ± 11 yr) within 24 h of death. Samples of rat adrenal gland and brain served as controls. Sympathetic fiber arrangement varied between the left and right nerves of the same donor, and between donors. However, in general, sympathetic fibers were dispersed throughout ∼25-38 fascicles of the peroneal nerve. The fibers were grouped in bundles of ∼2-44 axons or expressed individually throughout the fascicles, and the distribution was skewed toward smaller bundles, with median and interquartile range values of 5 and 1 axons/bundle, respectively. These findings confirm the bundled organization of sympathetic axons within the peroneal nerve and provide the anatomical basis for outcomes in microneurographic studies. Copyright © 2013 the American Physiological Society.

    Glueballs in a Hamiltonian Light-Front Approach to Pure-Glue QCD

    We calculate a renormalized Hamiltonian for pure-glue QCD and diagonalize it. The renormalization procedure is designed to produce a Hamiltonian that will yield physical states that rapidly converge in an expansion in free-particle Fock-space sectors. To make this possible, we use light-front field theory to isolate vacuum effects, and we place a smooth cutoff on the Hamiltonian to force its free-state matrix elements to quickly decrease as the difference of the free masses of the states increases. The cutoff violates a number of physical principles of light-front pure-glue QCD, including Lorentz covariance and gauge covariance. This means that the operators in the Hamiltonian are not required to respect these physical principles. However, by requiring the Hamiltonian to produce cutoff-independent physical quantities and by requiring it to respect the unviolated physical principles of pure-glue QCD, we are able to derive recursion relations that define the Hamiltonian to all orders in perturbation theory in terms of the running coupling. We approximate all physical states as two-gluon states, and use our recursion relations to calculate to second order the part of the Hamiltonian that is required to compute the spectrum. We diagonalize the Hamiltonian using basis-function expansions for the gluons' color, spin, and momentum degrees of freedom. We examine the sensitivity of our results to the cutoff and use them to analyze the nonperturbative scale dependence of the coupling. We investigate the effect of the dynamical rotational symmetry of light-front field theory on the rotational degeneracies of the spectrum and compare the spectrum to recent lattice results. Finally, we examine our wave functions and analyze the various sources of error in our calculation.
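
    For the two-gluon truncation used in this calculation, the free invariant mass takes a simple form (standard light-front kinematics, shown here only for orientation): for massless gluons carrying longitudinal momentum fractions x and 1 - x and relative transverse momentum k_\perp,

        M_{\mathrm{free}}^2 = \frac{k_\perp^2}{x(1 - x)} ,

    so the smooth cutoff on free-mass differences suppresses coupling to configurations with large transverse momenta or with x near the endpoints 0 and 1 of the two-gluon wave functions.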

    A Miniaturized Laser Heterodyne Radiometer for a Global Ground-Based Column Carbon Monitoring Network

    We present progress in the development of a passive, miniaturized Laser Heterodyne Radiometer (mini-LHR) that will measure key greenhouse gases (CO2, CH4, CO) in the atmospheric column, as well as their respective altitude profiles, and O2 for a measure of atmospheric pressure. Laser heterodyne radiometry is a spectroscopic method that borrows from radio receiver technology. In this technique, a weak incoming signal containing information of interest is mixed with a stronger signal (the local oscillator) at a nearby frequency. In this case, the weak signal is sunlight that has undergone absorption by a trace gas of interest, and the local oscillator is a distributed feedback (DFB) laser that is tuned to a wavelength near the absorption feature of the trace gas. Mixing the sunlight with the laser light in a fast photoreceiver results in a beat signal in the RF. The amplitude of the beat signal tracks the concentration of the trace gas in the atmospheric column. The mini-LHR operates in tandem with AERONET, a global network of more than 450 aerosol-sensing instruments. This partnership simplifies the instrument design and provides an established global network into which the mini-LHR can rapidly expand. This network offers coverage in key Arctic regions (not covered by OCO-2), where accelerated warming due to the release of CO2 and CH4 from thawing tundra and permafrost is a concern, as well as an uninterrupted data record that will both bridge gaps in data sets and offer validation for key flight missions such as OCO-2, OCO-3, and ASCENDS. Currently, the only global ground-based network that routinely measures multiple greenhouse gases in the atmospheric column is TCCON (Total Carbon Column Observing Network), with 18 operational sites worldwide and two in the US. The cost and size of TCCON installations limit the potential for expansion. We offer a low-cost ($30K/unit) solution to supplement these measurements, with the added benefit of an established aerosol optical depth measurement. Aerosols induce a radiative effect that is an important modulator of regional carbon cycles. Changes in the diffuse radiative flux fraction (DRF) due to aerosol loading have the potential to alter the terrestrial carbon exchange.
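
    The heterodyne principle described above can be illustrated with a short numerical sketch. The beat frequency, powers, optical depth, and sample rate below are illustrative assumptions, not instrument parameters; the point is only that the RF beat amplitude scales with the square root of the transmitted sunlight power and therefore tracks absorption by the trace gas.

        # Hedged sketch of the heterodyne detection described above: mixing weak,
        # absorbed sunlight with a strong local-oscillator (DFB laser) field on a
        # fast photoreceiver yields an RF beat whose amplitude tracks the
        # transmitted sunlight power.  All values are illustrative.
        import numpy as np

        fs = 2.0e9                          # sample rate of simulated photocurrent [Hz]
        t = np.arange(0.0, 2e-6, 1.0 / fs)
        f_beat = 150e6                      # assumed |f_sunlight - f_LO| beat [Hz]

        p_lo = 1.0                          # local-oscillator power (arbitrary units)
        p_sun_top = 1e-6                    # sunlight power before absorption
        optical_depth = 0.3                 # assumed column optical depth at this wavelength
        p_sun = p_sun_top * np.exp(-optical_depth)   # Beer-Lambert attenuation

        # Photocurrent ~ |E_sun + E_LO|^2; the cross term oscillates at the beat
        # frequency with amplitude proportional to 2*sqrt(P_sun * P_LO).
        i_beat = 2.0 * np.sqrt(p_sun * p_lo) * np.cos(2.0 * np.pi * f_beat * t)

        rf_amplitude = np.sqrt(2.0 * np.mean(i_beat ** 2))   # recover the beat amplitude
        print(f"RF beat amplitude ~ {rf_amplitude:.3e} (tracks column absorption)")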

    Greenhouse Gas Concentration Data Recovery Algorithm for a Low-Cost Laser Heterodyne Radiometer

    The goal of a coordinated effort between groups at GWU and NASA GSFC is the development of a low-cost, global, surface instrument network that continuously monitors three key carbon cycle gases in the atmospheric column: carbon dioxide (CO2), methane (CH4), and carbon monoxide (CO), as well as oxygen (O2) for atmospheric pressure profiles. The network will implement a low-cost, miniaturized laser heterodyne radiometer (mini-LHR) that has recently been developed at NASA Goddard Space Flight Center. This mini-LHR is designed to operate in tandem with the passive aerosol sensor currently used in AERONET (a well-established network of more than 450 ground aerosol monitoring instruments worldwide), and could be rapidly deployed into this established global network. Laser heterodyne radiometry is a well-established technique for detecting weak signals that was adapted from radio receiver technology. Here, a weak light signal that has undergone absorption by atmospheric components is mixed with light from a distributed feedback (DFB) telecommunications laser on a single-mode optical fiber. The RF component of the signal is detected on a fast photoreceiver. Scanning the laser through an absorption feature in the infrared results in a scanned heterodyne signal in the RF. Deconvolution of this signal through the retrieval algorithm allows for the extraction of altitude contributions to the column signal. The retrieval algorithm is based on a spectral simulation program, SpecSyn, developed at GWU for high-resolution infrared spectroscopy. Variations in pressure, temperature, composition, and refractive index through the atmosphere, which are all functions of latitude, longitude, time of day, altitude, etc., are modeled using algorithms developed in the MODTRAN program, developed in part by the US Air Force Research Laboratory. In these calculations the atmosphere is modeled as a series of spherically symmetric shells with boundaries specified at defined altitudes. Temperature, pressure, and species mixing ratios are defined at these boundaries. Between the boundaries, temperature is assumed to vary linearly with altitude while pressure (and thus gas density) varies exponentially. The observed spectrum at the LHR instrument will be the integration of the contributions along this light path. For any absorption measurement, the signal at a particular spectral frequency is a linear combination of spectral line contributions from several species. For each species that might absorb in a spectral region, we have pre-calculated its contribution as a function of temperature and pressure. The integrated path absorption spectrum can then be calculated using the initial sun angle (from location, date, and time) and assumptions about pressure and temperature profiles from an atmospheric model. The modeled spectrum is iterated to match the experimental observation using standard multilinear regression techniques. In addition to the layer concentrations, the numerical technique also provides uncertainty estimates for these quantities as well as dependencies on assumptions inherent in the atmospheric models.
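
    The regression step described above can be sketched in a few lines. The array names, shapes, random placeholder data, and the use of plain least squares below are illustrative assumptions; the actual retrieval builds its kernel from the pre-calculated, temperature- and pressure-dependent species contributions integrated along the slant path through the layered atmosphere.

        # Hedged sketch: fit layer concentrations as the linear combination of
        # precomputed spectral contributions that best matches a measured spectrum.
        # Names, shapes, and the plain least-squares solver are illustrative only.
        import numpy as np

        n_freq, n_layers, n_species = 400, 10, 3     # spectral points, shells, gases

        rng = np.random.default_rng(0)
        # kernel[:, j] = precomputed absorption spectrum of one (species, layer) pair
        # at that layer's temperature and pressure, integrated along the slant path.
        kernel = rng.random((n_freq, n_layers * n_species))

        true_conc = rng.random(n_layers * n_species)          # "unknown" concentrations
        measured = kernel @ true_conc + 1e-3 * rng.standard_normal(n_freq)  # + noise

        # Multilinear (ordinary least-squares) regression for the layer concentrations;
        # the uncertainty estimates mentioned above would come from the fit covariance
        # (not computed in this sketch).
        conc_hat, residuals, rank, _ = np.linalg.lstsq(kernel, measured, rcond=None)

        print("max retrieval error:", np.max(np.abs(conc_hat - true_conc)))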