
    Hidden Dangers to Researcher Safety While Sampling Freshwater Benthic Macroinvertebrates

    Abstract This paper reviews hidden dangers that threaten the safety of freshwater (FW) researchers of benthic macroinvertebrates (BMIs). Six refereed journals containing 2,075 papers were reviewed for field research, yielding 505 FW BMI articles; however, danger was reported in only 18% of these papers. I discussed: 1) papers that did not warn of existing danger or consider researcher safety, 2) metric threshold values (e.g., chemical hazards) and non-metric dangers (e.g., caves and aquatic habitats), 3) the frequency of danger occurrence, and 4) baseline and extreme values. Examples of 28 danger factors that posed a threat to BMI researchers in water were compared by frequency of occurrence across the journal papers. FW dangers identified by metric thresholds present a safety limit not to be exceeded, whereas non-metric dangers have no such threshold, as further explained. Also discussed was a recent civil engineering hydraulics thesis that identified low-head dams as a deceptive and increasing source of drownings in 39 states. A safe maximum depth for wading to collect BMIs in shallow water is proposed based on researcher height and gender, compared to human height means in a large database. Practical safety recommendations were presented to help the FW researcher avoid and survive hidden dangers.

    Exponential Smoothing: A Prediction Error Decomposition Principle

    In the exponential smoothing approach to forecasting, restrictions are often imposed on the smoothing parameters which ensure that certain components are exponentially weighted averages. In this paper, a new general restriction is derived on the basis that the one-step-ahead prediction error can be decomposed into permanent and transient components. It is found that this general restriction reduces to the common restrictions used for simple, trend and seasonal exponential smoothing. As such, the prediction error argument provides the rationale for these restrictions.
    Keywords: time series analysis, prediction, exponential smoothing, ARIMA models, state space models.
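    The decomposition the abstract refers to can be illustrated with simple exponential smoothing, where the one-step-ahead prediction error drives the update of the level. This is a minimal sketch with illustrative data and a hypothetical function name, not the paper's derivation.

```python
# Simple exponential smoothing (SES): the forecast is the current level,
# and a fraction alpha of each one-step-ahead error e_t (the "permanent"
# component) is absorbed into the level; the rest is transient.

def ses_forecast(y, alpha):
    """Return one-step-ahead forecasts and prediction errors for SES."""
    level = y[0]                    # initialise the level with the first observation
    forecasts, errors = [], []
    for obs in y[1:]:
        forecasts.append(level)     # forecast for this step = current level
        e = obs - level             # one-step-ahead prediction error
        errors.append(e)
        level = level + alpha * e   # permanent adjustment: alpha * e
    return forecasts, errors

y = [10.0, 12.0, 11.0, 13.0, 12.5]  # illustrative series
f, e = ses_forecast(y, alpha=0.3)
# e.g. first forecast is 10.0 with error 2.0, second is 10.6 with error 0.4
```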

    A Pedant's Approach to Exponential Smoothing

    An approach to exponential smoothing that relies on a linear single source of error state space model is outlined. A maximum likelihood method for the estimation of associated smoothing parameters is developed. Commonly used restrictions on the smoothing parameters are rationalised. Issues surrounding model identification and selection are also considered. It is argued that the proposed revised version of exponential smoothing provides a better framework for forecasting than either the Box-Jenkins or the traditional multi-disturbance state space approaches.
    Keywords: time series analysis, prediction, exponential smoothing, ARIMA models, Kalman filter, state space models.
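    In a Gaussian single-source-of-error state space model, maximising the likelihood over a smoothing parameter is equivalent to minimising the sum of squared one-step-ahead prediction errors. The sketch below shows that idea for the simplest (level-only) case via a grid search; the data, grid, and function names are illustrative assumptions, not the paper's estimation procedure in detail.

```python
# Fit the SES smoothing parameter alpha by minimising the sum of squared
# one-step-ahead prediction errors (the Gaussian-likelihood criterion in
# a single-source-of-error state space model, for the level-only case).

def sse(y, alpha):
    """Sum of squared one-step-ahead prediction errors under SES."""
    level, total = y[0], 0.0
    for obs in y[1:]:
        e = obs - level
        total += e * e
        level += alpha * e
    return total

def fit_alpha(y, grid_points=101):
    """Grid search for the alpha in [0, 1] minimising the SSE."""
    grid = [i / (grid_points - 1) for i in range(grid_points)]
    return min(grid, key=lambda a: sse(y, a))

y = [10.0, 12.0, 11.0, 13.0, 12.5, 14.0, 13.2]  # illustrative series
alpha_hat = fit_alpha(y)
```

A coarse grid is enough for illustration; a practical implementation would use a numerical optimiser over the full parameter vector.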

    A comparison of integrated testlet and constructed-response question formats

    Constructed-response (CR) questions are a mainstay of introductory physics textbooks and exams. However, because of time, cost, and scoring reliability constraints associated with this format, CR questions are being increasingly replaced by multiple-choice (MC) questions in formal exams. The integrated testlet (IT) is a recently-developed question structure designed to provide a proxy of the pedagogical advantages of CR questions while procedurally functioning as a set of MC questions. ITs utilize an answer-until-correct response format that provides immediate confirmatory or corrective feedback, and they thus allow not only for the granting of partial credit in cases of initially incorrect reasoning, but also for the building of cumulative question structures. Here, we report on a study that directly compares the functionality of ITs and CR questions in introductory physics exams. To do this, CR questions were converted to concept-equivalent ITs, and both sets of questions were deployed in midterm and final exams. We find that both question types provide adequate discrimination between stronger and weaker students, with CR questions discriminating slightly better than the ITs. Meanwhile, an analysis of inter-rater scoring of the CR questions raises serious concerns about the reliability of the granting of partial credit when this traditional assessment technique is used in a realistic (but non-optimized) setting. Furthermore, we show evidence that partial credit is granted in a valid manner in the ITs. Thus, together with consideration of the vastly reduced costs of administering IT-based examinations compared to CR-based examinations, our findings indicate that ITs are viable replacements for CR questions in formal examinations where it is desirable to both assess concept integration and to reward partial knowledge, while efficiently scoring examinations.
    Comment: 14 pages, 3 figures, with appendix. Accepted for publication in PRST-PER (August 2014).
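    The answer-until-correct format mentioned in the abstract admits a simple mechanical partial-credit rule. The sketch below shows one common such scheme (linearly decreasing credit per failed attempt); the weights and the function name are hypothetical, as the paper's exact scoring rule is not reproduced here.

```python
# Answer-until-correct partial credit for a k-option MC item: full credit
# on the first attempt, linearly decreasing credit for each further
# attempt, and zero credit if every distractor was tried first.

def auc_score(attempts_used, k=4):
    """Credit in [0, 1] for answering correctly on attempt `attempts_used`."""
    if not 1 <= attempts_used <= k:
        raise ValueError("attempts_used must be between 1 and k")
    return (k - attempts_used) / (k - 1)

# With k=4: 1st attempt -> 1.0, 2nd -> 2/3, 3rd -> 1/3, 4th -> 0.0
scores = [auc_score(n) for n in range(1, 5)]
```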

    Fibre imaging bundles for full-field optical coherence tomography

    An imaging fibre bundle is incorporated into a full-field imaging OCT system, with the aim of eliminating the mechanical scanning currently required at the probe tip in endoscopic systems. Each fibre within the imaging bundle addresses a Fizeau interferometer formed between the bundle end and the sample, a configuration which ensures down-lead insensitivity of the probe fibres, preventing variations in sensitivity due to polarization changes in the many thousand constituent fibres. The technique allows acquisition of information across a planar region with single-shot measurement, in the form of a 2D image detected using a digital CCD camera. Depth scanning components are now confined within a processing interferometer external to the completely passive endoscope probe. The technique has been evaluated in our laboratory for test samples, and images acquired using the bundle-based system are presented. Data are displayed either as en-face scans, parallel to the sample surface, or as slices through the depth of the sample, with a spatial resolution of about 30 μm. The minimum detectable reflectivity at present is estimated to be about 10⁻³, which is satisfactory for many inorganic samples. Methods of improving the signal-to-noise ratio for imaging of lower reflectivity samples are discussed.

    Helical Fields and Filamentary Molecular Clouds II - Axisymmetric Stability and Fragmentation

    In Paper I (Fiege & Pudritz, 1999), we constructed models of filamentary molecular clouds that are truncated by a realistic external pressure and contain a rather general helical magnetic field. We address the stability of our models to gravitational fragmentation and axisymmetric MHD-driven instabilities. By calculating the dominant modes of axisymmetric instability, we determine the dominant length scales and growth rates for fragmentation. We find that the role of pressure truncation is to decrease the growth rate of gravitational instabilities by decreasing the self-gravitating mass per unit length. Purely poloidal and toroidal fields also help to stabilize filamentary clouds against fragmentation. The overall effect of helical fields is to stabilize gravity-driven modes, so that the growth rates are significantly reduced below what is expected for unmagnetized clouds. However, MHD "sausage" instabilities are triggered in models whose toroidal flux to mass ratio exceeds the poloidal flux to mass ratio by more than a factor of ~2. We find that observed filaments appear to lie in a physical regime where the growth rates of both gravitational fragmentation and axisymmetric MHD-driven modes are at a minimum.
    Comment: 16 pages with 18 eps figures. Submitted to MNRAS.

    Reading policies for joins: An asymptotic analysis

    Suppose that m_n observations are made from the distribution R and n − m_n from the distribution S. Associate with each pair, x from R and y from S, a nonnegative score φ(x, y). An optimal reading policy is one that yields a sequence m_n that maximizes E(M(n)), the expected sum of the (n − m_n)·m_n observed scores, uniformly in n. The alternating policy, which switches between the two sources, is the optimal nonadaptive policy. In contrast, the greedy policy, which chooses its source to maximize the expected gain on the next step, is shown to be the optimal policy. Asymptotics are provided for the case where the R and S distributions are discrete and φ(x, y) = 1 or 0 according as x = y or not (i.e., the observations match). Specifically, an invariance result is proved which guarantees that for a wide class of policies, including the alternating and the greedy, the variable M(n) obeys the same CLT and LIL. A more delicate analysis of the sequence E(M(n)) and the sample paths of M(n), for both alternating and greedy, reveals the slender sense in which the latter policy is asymptotically superior to the former, as well as a sense of equivalence of the two and robustness of the former.
    Comment: Published at http://dx.doi.org/10.1214/105051606000000646 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
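    The matching case (φ(x, y) = 1 iff x = y) makes both policies easy to simulate: M(n) is the running count of matching (R, S) pairs, and the greedy policy reads from whichever source has the larger expected number of new matches against the items already read from the other source. The sketch below is a Monte Carlo comparison with illustrative two-point distributions and a hypothetical function name, not the paper's asymptotic analysis.

```python
# Compare the alternating and greedy reading policies under the matching
# score phi(x, y) = 1 iff x == y. Each draw's immediate gain is the number
# of previously read items from the other source equal to the drawn value.

import random
from collections import Counter

def expected_gain(dist, other_counts):
    """Expected new matches from one draw of `dist` against the other pile."""
    return sum(p * other_counts[v] for v, p in dist.items())

def run_policy(policy, R, S, n, seed=0):
    """Simulate n reads under `policy` ('alternating' or 'greedy'); return M(n)."""
    rng = random.Random(seed)
    draw = lambda d: rng.choices(list(d), weights=list(d.values()))[0]
    r_counts, s_counts, score = Counter(), Counter(), 0
    for step in range(n):
        if policy == "alternating":
            take_R = (step % 2 == 0)      # switch sources every step
        else:                             # greedy: larger expected next-step gain
            take_R = expected_gain(R, s_counts) >= expected_gain(S, r_counts)
        if take_R:
            x = draw(R); score += s_counts[x]; r_counts[x] += 1
        else:
            y = draw(S); score += r_counts[y]; s_counts[y] += 1
    return score

R = {0: 0.5, 1: 0.5}   # illustrative discrete distributions
S = {0: 0.9, 1: 0.1}
g = run_policy("greedy", R, S, n=1000)
a = run_policy("alternating", R, S, n=1000)
```

Consistent with the abstract, the two policies' scores are of the same order; the greedy policy's advantage is only visible in a much finer analysis than a single simulated run.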