2,357 research outputs found
Low cost automated precise time measurement system
The Aerospace Guidance and Metrology Center (AGMC) has responsibility for the dissemination of Precise Time and Time Interval (PTTI) to Air Force timing systems requiring microsecond-level time. To maintain traceability to the USNO Master Clock in Washington, D.C., and to log time and frequency data on individual precision clocks efficiently, a simple automatic means of acquiring precise time has been devised. The Automatic Time Interval Measurement System (ATIMS) consists of a minicomputer (8K memory), teletype terminal, electronic counter, Loran C receiver, time base generator and locally manufactured relay matrix panel. During the measurement process, the computer controls the relay matrix, which selects 13 atomic clocks for comparison against a reference clock, and the reference against Loran C. Through the system teletype, the operator can set the system clock (hours, minutes and seconds), examine and/or modify all clock data and constants, and set measurement intervals, all in a conversational manner. A logic flow diagram, system schematic, source listing and software components are included in the presentation.
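The measurement cycle described above (the computer stepping the relay matrix through each unit clock, then measuring the reference against Loran C) can be sketched in outline. The channel names, the counter-reading stub and the log format below are illustrative assumptions, not the original ATIMS software:

```python
import time

ATOMIC_CLOCKS = [f"clock_{i:02d}" for i in range(1, 14)]   # the 13 unit clocks

def measure_interval(channel):
    """Placeholder for reading the electronic counter on one relay-matrix
    channel; in the real system this would return a time offset in
    microseconds from the hardware counter."""
    return 0.0  # stub value -- hardware read goes here

def measurement_cycle(log):
    """One measurement pass: compare each atomic clock against the
    reference, then the reference against Loran C, appending
    (timestamp, channel, offset_us) records to the log."""
    for channel in ATOMIC_CLOCKS + ["loran_c"]:
        log.append((time.time(), channel, measure_interval(channel)))
```

One cycle produces fourteen records: thirteen clock-versus-reference comparisons plus the reference-versus-Loran-C comparison.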
Development of a daily gridded precipitation data set for the Middle East
We present an algorithm for constructing a rain-gauge-based analysis of daily precipitation for the Middle East. A key point of the algorithm is the construction of an accurate climatological distribution. One possible application of this product is to validate high-resolution climate models and/or to diagnose the impact of climate change on local hydrological resources. Many users are familiar with a monthly precipitation dataset (New et al., 1999) and a satellite-based daily precipitation dataset (Huffman et al., 2001), yet our data set, unlike theirs, clearly shows the effect of orography on daily precipitation and other extreme events, especially over the Fertile Crescent region. Currently the Middle East precipitation analysis product consists of a 25-year data set for 1979–2003 based on more than 1300 stations.
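Turning scattered daily gauge reports into a regular grid can be sketched as follows; the abstract does not specify the paper's analysis scheme, so the inverse-distance weighting below is an illustrative assumption, not the authors' method:

```python
import numpy as np

def idw_grid(gauge_lon, gauge_lat, gauge_mm, grid_lon, grid_lat, power=2.0):
    """Interpolate daily gauge totals (mm) onto a regular lon/lat grid by
    inverse-distance weighting -- an illustrative stand-in for a
    gauge-based daily precipitation analysis."""
    glon, glat = np.meshgrid(grid_lon, grid_lat)
    out = np.empty(glon.shape)
    for idx in np.ndindex(glon.shape):
        d2 = (gauge_lon - glon[idx]) ** 2 + (gauge_lat - glat[idx]) ** 2
        if np.any(d2 == 0):
            # grid point coincides with a gauge: take the gauge value
            out[idx] = gauge_mm[np.argmin(d2)]
        else:
            w = 1.0 / d2 ** (power / 2)
            out[idx] = np.sum(w * gauge_mm) / np.sum(w)
    return out
```

A real analysis of this kind would additionally anchor the daily fields to a high-resolution climatology so that orographic structure survives the interpolation.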
Prediction of rainfall intensity measurement errors using commercial microwave communication links
Commercial microwave radio links forming cellular communication networks are known to be a valuable instrument for measuring near-surface rainfall. However, operational communication links carry greater uncertainty than dedicated installations, since their geometry and frequencies are optimized for communication performance rather than for observing rainfall. Quantifying the uncertainties of measurements that are non-optimal in the first place is essential to ensure the usability of the data.

In this work we model instrumental impairments (signal variability due to antenna wetting, baseline attenuation uncertainty, and digital quantization) as well as environmental ones (variability of the drop size distribution along a link, which affects the accuracy of path-averaged rainfall measurement, and spatial variability of rainfall in the link's neighborhood, which affects the accuracy of rainfall estimation away from the link path). Expressions for the root mean squared error (RMSE) of estimates of path-averaged and point rainfall have been derived. To verify the RMSE expressions quantitatively, path-averaged measurements from 21 operational communication links at 12 different locations were compared with records of five nearby rain gauges over three rainstorm events.

The experiments show that the prediction accuracy is above 90% for temporal accumulations shorter than 30 min and decreases for longer accumulation intervals. Spatial variability in the vicinity of the link, baseline attenuation uncertainty and, possibly, suboptimality of the wet antenna attenuation model are the major sources of link-gauge discrepancies. In addition, the optimal coefficients of a conventional wet antenna attenuation model are shown to depend on spatial rainfall variability and, accordingly, on link length.
The expressions for the RMSE of the path-averaged rainfall estimates can be useful for integrating measurements from multiple heterogeneous links into data assimilation algorithms.
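The link-gauge verification step described above amounts to accumulating both rain-rate series over a chosen interval and computing their RMSE. A minimal sketch (the function names and the 15-min sampling step are hypothetical, and this is the empirical RMSE, not the paper's derived analytical expressions):

```python
import numpy as np

def rmse(link_mm_h, gauge_mm_h):
    """Empirical root mean squared error between link-derived and
    gauge-observed rain rates (mm/h), paired sample by sample."""
    link = np.asarray(link_mm_h, dtype=float)
    gauge = np.asarray(gauge_mm_h, dtype=float)
    return float(np.sqrt(np.mean((link - gauge) ** 2)))

def accumulate(series_mm_h, step_min, window_min):
    """Average instantaneous rates sampled every `step_min` minutes over
    accumulation windows of `window_min` minutes (e.g. 30 min)."""
    n = window_min // step_min
    s = np.asarray(series_mm_h, dtype=float)
    usable = (len(s) // n) * n           # drop a trailing partial window
    return s[:usable].reshape(-1, n).mean(axis=1)
```

Comparing RMSE at several accumulation windows is what reveals the accuracy dependence on interval length reported above.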
Ice nucleation from aqueous NaCl droplets with and without marine diatoms
Ice formation in the atmosphere by homogeneous and heterogeneous nucleation is one of the least understood processes in cloud microphysics and climate. Here we describe our investigation of the marine environment as a potential source of atmospheric ice nuclei (IN), experimentally observing homogeneous ice nucleation from aqueous NaCl droplets and comparing against heterogeneous ice nucleation from aqueous NaCl droplets containing intact and fragmented diatoms. Homogeneous and heterogeneous ice nucleation are studied as a function of temperature and water activity, a_w. Additional analyses examine the dependence of heterogeneous freezing temperatures, ice nucleation rates, ω_het, ice nucleation rate coefficients, J_het, and differential and cumulative ice nuclei spectra, k(T) and K(T), respectively, on diatom surface area and aqueous volume. Homogeneous freezing temperatures and corresponding nucleation rate coefficients agree with the water-activity-based homogeneous ice nucleation theory within experimental and predictive uncertainties. Our results confirm, as predicted by classical nucleation theory, that a stochastic interpretation can be used to describe the homogeneous ice nucleation process. Heterogeneous ice nucleation initiated by intact and fragmented diatoms can be adequately represented by a modified water-activity-based ice nucleation theory: a horizontal shift in water activity, Δa_w,het = 0.2303, of the ice melting curve describes median heterogeneous freezing temperatures. Individual freezing temperatures showed no dependence on available diatom surface area and aqueous volume.
Determined at median diatom freezing temperatures for a_w from 0.8 to 0.99, ω_het ≈ 0.11 (+0.06/−0.05) s⁻¹, J_het ≈ 1.0 (+1.16/−0.61) × 10⁴ cm⁻² s⁻¹, and K ≈ 6.2 (+3.5/−4.1) × 10⁴ cm⁻². The experimentally derived ice nucleation rates and nuclei spectra allow us to estimate ice particle production, which we then compare with ice crystal concentrations typically observed in cirrus and polar marine mixed-phase clouds. Differences between time-dependent and time-independent analyses in predicting ice particle production are discussed.
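The stochastic (time-dependent) interpretation invoked above implies, via classical nucleation theory, that a droplet with immersed nucleant surface area A observed for time t freezes with probability f(t) = 1 − exp(−J_het · A · t). A minimal sketch using the quoted order of magnitude of J_het; the surface area and observation time chosen here are illustrative, not values from the study:

```python
import math

def frozen_fraction(J_het_cm2_s, area_cm2, t_s):
    """Time-dependent frozen fraction from classical nucleation theory:
    f(t) = 1 - exp(-J_het * A * t), with J_het in cm^-2 s^-1,
    A in cm^2 and t in s."""
    return 1.0 - math.exp(-J_het_cm2_s * area_cm2 * t_s)

# Illustrative: J_het ~ 1.0e4 cm^-2 s^-1 (as quoted), an assumed
# diatom surface area of 1e-5 cm^2 per droplet, observed for 1 s.
f = frozen_fraction(1.0e4, 1e-5, 1.0)
```

A time-independent (singular) analysis would instead read the frozen fraction off the cumulative spectrum K(T), which is one way to frame the comparison the abstract mentions.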
Train-the-trainer: Methodology to learn the cognitive interview
Research has indicated that police may not receive enough training in interviewing cooperative witnesses, specifically in use of the cognitive interview (CI). Practically, for the CI to be effective in real-world investigations, police investigators must be trained by law enforcement trainers. We conducted a three-phase experiment to examine the feasibility of training experienced law enforcement trainers who would then train others to conduct the CI. We instructed Federal Bureau of Investigation and local law enforcement trainers about the CI (Phase I); law enforcement trainers from both agencies (n = 4, 100% male, mean age = 50 years) instructed university students (n = 25, 59% female, mean age = 21 years) to conduct either the CI or a standard law enforcement interview (Phase II); the student interviewers then interviewed other student witnesses (n = 50, 73% female, mean age = 22 years), who had watched a simulated crime (Phase III). Compared with standard training, interviews conducted by those trained by CI-trained instructors contained more information, at a higher accuracy rate, and with fewer suggestive questions.
The Peculiar Phase Structure of Random Graph Bisection
The mincut graph bisection problem involves partitioning the n vertices of a
graph into disjoint subsets, each containing exactly n/2 vertices, while
minimizing the number of "cut" edges with an endpoint in each subset. When
considered over sparse random graphs, the phase structure of the graph
bisection problem displays certain familiar properties, but also some
surprises. It is known that when the mean degree is below the critical value of
2 log 2, the cutsize is zero with high probability. We study how the minimum
cutsize increases with mean degree above this critical threshold, finding a new
analytical upper bound that improves considerably upon previous bounds.
Combined with recent results on expander graphs, our bound suggests the unusual
scenario that random graph bisection is replica symmetric up to and beyond the
critical threshold, with a replica symmetry breaking transition possibly taking
place above the threshold. An intriguing algorithmic consequence is that
although the problem is NP-hard, we can find near-optimal cutsizes (whose ratio
to the optimal value approaches 1 asymptotically) in polynomial time for
typical instances near the phase transition.
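The objective being minimized above, the cut size of an equal bisection, can be illustrated directly. The toy Erdős–Rényi generator and the hand-picked bisections below are assumptions for demonstration, not the paper's analysis:

```python
import random

def random_graph(n, mean_degree, seed=0):
    """Sparse Erdos-Renyi graph G(n, p) with p chosen so that the
    expected degree equals `mean_degree`; returns an edge list."""
    rng = random.Random(seed)
    p = mean_degree / (n - 1)
    return [(u, v) for u in range(n) for v in range(u + 1, n)
            if rng.random() < p]

def cutsize(edges, side):
    """Number of 'cut' edges with one endpoint in each subset.
    `side[v]` is 0 or 1, and a valid bisection places exactly n/2
    vertices on each side."""
    return sum(side[u] != side[v] for u, v in edges)
```

For instance, on the path 0–1–2–3, the bisection {0,1} | {2,3} cuts one edge, while the alternating bisection {0,2} | {1,3} cuts all three.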
Universality in solar flare and earthquake occurrence
Earthquakes and solar flares are phenomena involving huge and rapid releases
of energy characterized by complex temporal occurrence. By analysing available
experimental catalogs, we show that the stochastic processes underlying these
apparently different phenomena have universal properties. Namely both problems
exhibit the same distributions of sizes, inter-occurrence times and the same
temporal clustering: we find afterflare sequences with power law temporal
correlations as the Omori law for seismic sequences. The observed universality
suggests a common approach to the interpretation of both phenomena in terms of
the same driving physical mechanism
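The Omori law referenced above gives an aftershock (or, by the analogy drawn here, "afterflare") rate decaying as a power law of the time since the main event; a minimal sketch, with parameter values chosen purely for illustration:

```python
def omori_rate(t, K=100.0, c=0.1, p=1.0):
    """Modified Omori law: event rate r(t) = K / (t + c)**p at time t
    after the mainshock, with productivity K, offset c and exponent p
    (all illustrative values here)."""
    return K / (t + c) ** p
```

With p = 1 the rate falls by a factor of ten for every tenfold increase in t + c, which is the kind of power-law temporal clustering the abstract reports in both catalogs.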
Researching the use of force: The background to the international project
This article provides the background to an international project on use of force by the police that was carried out in eight countries. Force is often considered to be the defining characteristic of policing and much research has been conducted on the determinants, prevalence and control of the use of force, particularly in the United States. However, little work has looked at police officers' own views on the use of force, in particular the way in which they justify it. Using a hypothetical encounter developed for this project, researchers in each country conducted focus groups with police officers in which they were encouraged to talk about the use of force. The results show interesting similarities and differences across countries and demonstrate the value of using this kind of research focus and methodology.
- …